In this work we study how to efficiently add new data to a semantic parsing model without retraining it from scratch. Experiments include finetuning on the new data, finetuning with subsampling from the old data, and regularization techniques that improve the final performance of the model and/or reduce the amount of old data needed.
To work with the repository, install the required packages and this package. Editable (-e) mode is preferred if you want to change the code.
pip install -r requirements.txt
pip install -e .
If you have issues installing the requirements, install pip3 or conda and run the same commands with it.
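For example, with pip3:
# install with pip3
pip3 install -r requirements.txt
pip3 install -e .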
scripts/download_data.sh downloads the TOP and SNIPS datasets and reformats SNIPS into the TOP format.
# download data
sh scripts/download_data.sh
Convert the input data to the format the model expects using the convert script. Add your data to sanju_data.tsv, following the format already in the file.
The preprocess script then splits the train set into pretrain and finetune parts, creates tokenizers, numericalizes the data, and saves it to the --output-dir folder.
# convert data
python convert_data.py
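For reference, TOP-format files are tab-separated, one example per line: the raw utterance, the tokenized utterance, and the serialized parse tree (sanju_data.tsv shows the exact columns convert_data.py expects). An illustrative tree from the TOP dataset:
[IN:GET_DIRECTIONS Driving directions to [SL:DESTINATION [IN:GET_EVENT the Eagles game ] ] ]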
# preprocess
DATA=data-bin/top_dataset
python cli/preprocess.py \
--data data/top-dataset-semantic-parsing \
--text-tokenizer bert-base-cased \
--split-amount 0.1 \
--output-dir $DATA
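Here --split-amount 0.1 holds out 10% of the training set as the finetune part, simulating new data that arrives after the initial model is trained. A minimal sketch of the idea, assuming a random split (the actual script may split differently):
import random

def split_train(examples, split_amount=0.1, seed=42):
    # Hold out a fraction of the original training set to act as
    # "new" data that arrives after pretraining.
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n_finetune = int(len(examples) * split_amount)
    return examples[n_finetune:], examples[:n_finetune]  # pretrain, finetune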
The train script trains the model on the pretrain part and saves the model and the trainer to the --output-dir folder.
Increase the batch size to reduce overfitting once there is enough data (recommended batch size: 128/192).
Also, training logs to Weights & Biases (wandb); if you are not logged in, you may get a wandb error with login steps. Run the command below in the console and paste in the API key from the wandb site.
wandb login
# train
DATA=data-bin/top_dataset
MODEL=output_dir/top_model
python cli/train_lightning.py \
--data-dir $DATA \
--encoder-model bert-base-cased \
--decoder-lr 0.2 \
--encoder-lr 0.02 \
--batch-size 4 \
--layers 4 \
--hidden 256 \
--dropout 0.2 \
--heads 4 \
--label-smoothing 0.1 \
--epochs 100 \
--warmup-steps 1500 \
--freeze-encoder 0 \
--unfreeze-encoder 500 \
--log-every 150 \
--early-stopping 10 \
--output-dir $MODEL
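The --freeze-encoder 0 and --unfreeze-encoder 500 flags freeze the BERT encoder at step 0 and unfreeze it at step 500, so the randomly initialized decoder can warm up first. A minimal sketch of the mechanism, assuming a standard PyTorch model with an encoder attribute (names are illustrative, not the repository's actual API):
def set_encoder_trainable(model, trainable):
    # Toggle gradient updates for the encoder; called at the steps
    # given by --freeze-encoder and --unfreeze-encoder.
    for param in model.encoder.parameters():
        param.requires_grad = trainable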
Use the script below to load the trained model, type input utterances, and see the model's outputs.
# test the model interactively
python cli/script.py --data-dir $DATA --model-dir $MODEL
Use the retrain script below once there is enough new data to further optimize the model. The retrain script loads the model and optimizer from the checkpoint and finetunes on the finetune part of the training set.
DATA=data-bin/top_dataset
MODEL=output_dir/top_model
python cli/retrain.py \
--data-dir $DATA \
--model-dir $MODEL \
--batch-size 128 \
--dropout 0.2 \
--epochs 40 \
--log-every 100 \
--old-data-amount 0.1 \
--move-norm 0.1
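Here --old-data-amount 0.1 mixes 10% of the old (pretrain) data back into finetuning, and --move-norm 0.1 weights a regularizer that keeps the finetuned weights close to the pretrained ones. A minimal sketch of such a penalty, assuming an L2 distance (the exact norm and weighting in retrain.py may differ):
import torch

def move_norm_penalty(model, pretrained_state, coef=0.1):
    # Penalize how far the current weights have moved from the
    # pretrained checkpoint; added to the task loss during retraining.
    penalty = 0.0
    for name, param in model.named_parameters():
        penalty = penalty + torch.sum((param - pretrained_state[name]) ** 2)
    return coef * penalty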
You can find more usage examples in the scripts directory.
This is not an officially supported Google product.