In this work, we introduce a simple yet effective framework called FS-ABSA, which involves domain-adaptive pre-training and text-infilling fine-tuning. Specifically,
- we approach the End-to-End ABSA task as a text-infilling problem (see the sketch below).
- we perform domain-adaptive pre-training with the text-infilling objective, narrowing two gaps, i.e., the domain gap and the objective gap, and consequently facilitating knowledge transfer.
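To make the formulation concrete, here is a minimal, illustrative sketch of how a review sentence can be cast as a text-infilling input/target pair for a T5-style model. The prompt wording, sentinel token, and output format below are assumptions for illustration, not necessarily the exact format used in this repo:

```python
# Illustrative only: casting End-to-End ABSA (aspect-sentiment pair
# extraction) as text infilling with a T5-style sentinel token.
sentence = "The battery life is great but the screen is dim."

# The model reads the review plus a prompt ending in a sentinel token,
# and learns to generate the structured answer that fills it in.
infill_input = f"{sentence} The aspect terms and their sentiments are: <extra_id_0>."
infill_target = "<extra_id_0> battery life, positive; screen, negative"

print(infill_input)
print(infill_target)
```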
To run the code, please install all the dependency packages by using the following command:

```bash
pip install -r requirements.txt
```
NOTE: All experiments are conducted on an NVIDIA RTX 3090 under Linux. Different package versions or GPUs may lead to different results.

NOTE: Every experiment script runs the experiment multiple times (three seeds).
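For reference, reproducibility across the three seeds typically comes down to fixing every random number generator before training. A minimal sketch assuming a PyTorch setup (the `set_seed` helper and the seed values below are illustrative, not necessarily the repo's code):

```python
import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Fix all relevant RNGs so a single run is reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU
    torch.cuda.manual_seed_all(seed)  # all GPUs

for seed in (0, 1, 2):  # e.g., three seeds per experiment
    set_seed(seed)
    # ... train and evaluate, then average the results ...
```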
## Full-Data Setting

### English Dataset: 14lap

```bash
$ bash script/run_aspe_14lap.sh
```

### English Dataset: 14res

```bash
$ bash script/run_aspe_14res.sh
```

### Dutch Dataset: 16res

```bash
$ bash script/run_aspe_dutch.sh
```

### French Dataset: 16res

```bash
$ bash script/run_aspe_french.sh
```
## Few-Shot Setting

### English Dataset: 14lap

```bash
$ bash script/run_aspe_fewshot_14lap.sh
```

### English Dataset: 14res

```bash
$ bash script/run_aspe_fewshot_14res.sh
```

### Dutch Dataset: 16res

```bash
$ bash script/run_aspe_fewshot_dutch.sh
```

### French Dataset: 16res

```bash
$ bash script/run_aspe_fewshot_french.sh
```
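If you want to run all few-shot experiments back to back, a small driver like the following works; it is illustrative and only wraps the scripts listed above:

```python
import subprocess

# Run every few-shot experiment script listed above, one after another.
for dataset in ("14lap", "14res", "dutch", "french"):
    subprocess.run(["bash", f"script/run_aspe_fewshot_{dataset}.sh"], check=True)
```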
## Main Results

- Results on 14-Lap and 14-Res under different training-data sizes
- Comparison with SOTA under the full-data setting
- Results in two low-resource languages under different training-data sizes
If you have any questions related to this work, you can open an issue with details or feel free to email Zengzhi ([email protected]) or Qiming ([email protected]).
Our code is based on ABSA-QUAD. We thank the authors for their work.