DAEs are AutoEncoder models trained on a denoising task: the model takes partially corrupted input data and learns to reconstruct the clean data.
Through the denoising task, the model learns the input distribution and produces latent representations that are robust to corruptions. The latent representations extracted from the model can be useful for a variety of downstream tasks. One can:
- Freeze the encoder layers and use the latent representations to train supervised ML models, rendering DAE as a vehicle for automatic feature engineering.
- Use the latent representations for unsupervised tasks like similarity query or clustering.
To train DAEs on tabular data, the most important piece is the noise generator. The most sensible and effective choice is swap noise, in which each value in the training data may be replaced, with some probability, by a random value drawn from the same column.
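A minimal sketch of swap noise in NumPy (illustrative only, not this package's implementation): each cell is replaced, with probability `p`, by the value from the same column of a randomly chosen row.

```python
import numpy as np

def swap_noise(X: np.ndarray, p: float = 0.15, rng=None) -> np.ndarray:
    """Replace each cell, with probability p, by a value drawn from
    the same column of a random row (a sketch of swap noise)."""
    rng = np.random.default_rng(rng)
    mask = rng.random(X.shape) < p                # cells to corrupt
    rows = rng.integers(0, X.shape[0], X.shape)   # donor row per cell
    cols = np.arange(X.shape[1])                  # column index stays fixed
    donated = X[rows, cols]                       # same-column donor values
    return np.where(mask, donated, X)
```

Because the donor value comes from the same column, the corrupted data stays on the marginal distribution of each feature, which is what makes swap noise well suited to tabular data.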
This package implements:
- Swap Noise generator.
- Dataframe parser which converts arbitrary pandas dataframe to numpy arrays.
- Network constructor with configurable body blocks.
- Training function.
- Sklearn-style `.fit`/`.transform` API.
- Sklearn-style model also supports `save` and `load`.
`tabdae` is built with PyTorch. Make sure to install the dependencies listed in `requirements.txt`, then install the package using pip:
```shell
pip install -r requirements.txt
pip install git+https://github.com/alexstedev/DenoisingAutoencoders.git
```
```python
import pandas as pd

from tabdae.models.model import DAE

df = pd.read_csv(<path-to-csv-file>)

dae = DAE(
    body_network='deepstack',
    body_network_cfg=dict(hidden_size=1024),
    swap_noise_probas=.15,
    device='cuda',
)
dae.fit(df, verbose=1, optimizer_params={'lr': 3e-4})

# extract latent representation with the model
latent = dae.transform(df)
```
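Once extracted, the latent matrix can drive the unsupervised tasks mentioned above, such as similarity query. A hedged sketch (the helper below is hypothetical, not part of the package; it only assumes `latent` is a 2-D NumPy array):

```python
import numpy as np

def most_similar(latent: np.ndarray, i: int, k: int = 5) -> np.ndarray:
    """Return indices of the k rows most cosine-similar to row i."""
    z = latent / np.linalg.norm(latent, axis=1, keepdims=True)
    sims = z @ z[i]          # cosine similarity of every row to row i
    sims[i] = -np.inf        # exclude the query row itself
    return np.argsort(sims)[::-1][:k]
```

The same matrix can equally be handed to a clustering algorithm or used as features for a supervised model with the encoder frozen.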
While I haven't been able to find an article introducing this method, it has featured in winning solutions of several Kaggle competitions, e.g. Porto Seguro's Safe Driver Prediction and Tabular Playground Series - Feb 2021.