
autodistil

Environment

  • This code is modified from the Transformers v2.1.1 repository developed by Hugging Face.

  • Prepare environment

    pip install -r requirements.txt
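
    A minimal setup sketch; the virtual-environment name is an arbitrary choice, not something prescribed by this repository:

    # Create an isolated environment and install the dependencies
    python -m venv autodistil-env        # environment name is arbitrary
    source autodistil-env/bin/activate
    pip install -r requirements.txt      # installs the modified Transformers v2.1.1 stack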

Data

GLUE Data

  • Download the GLUE data by running this script and unpack it to the directory GLUE_DIR (see the sketch below).

  • TASK_NAME can be one of CoLA, SST-2, MRPC, STS-B, QQP, MNLI, QNLI, RTE.

    ${GLUE_DIR}/${TASK_NAME}
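
    For example, the data might be fetched and laid out as follows. This is a sketch assuming the commonly used download_glue_data.py helper script, which is not part of this repository:

    # Download all GLUE tasks into GLUE_DIR (script name and flags are those of the
    # widely used GLUE download helper; they are an assumption here)
    export GLUE_DIR=/path/to/glue_data
    python download_glue_data.py --data_dir ${GLUE_DIR} --tasks all

    # Each task then lives under ${GLUE_DIR}/${TASK_NAME}, e.g.
    export TASK_NAME=RTE
    ls ${GLUE_DIR}/${TASK_NAME}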

Wiki Data

  • The original Wiki data used in this project can be found here (it is listed as 'Raw text from English Wikipedia for general distillation').

  • The processed Wiki data can be generated with the method from TinyBERT, using the following script from this repository:

    python pregenerate_training_data.py --train_corpus ${CORPUS_RAW} \
                                        --bert_model ${BERT_BASE_DIR} \
                                        --reduce_memory --do_lower_case \
                                        --epochs_to_generate 3 \
                                        --output_dir ${CORPUS_JSON_DIR}

    ${BERT_BASE_DIR} contains the BERT-base teacher model, e.g., BERT-base-uncased.
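
    A minimal sketch of the variables the command above expects; all paths are placeholders to adapt to your local layout:

    # Placeholder paths; set these before running pregenerate_training_data.py above
    export CORPUS_RAW=/path/to/wiki/corpus.txt               # raw English Wikipedia text
    export BERT_BASE_DIR=/path/to/BERT-base-uncased          # BERT-base teacher checkpoint
    export CORPUS_JSON_DIR=/path/to/corpus_jsonfile_for_general_KD   # output directory for the processed data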

(i) Example on MNLI data (task-agnostic)

First, train the model on MNLI. Then finetune it on RTE. Finally, evaluate it on RTE.

Step 0. Go to the configuration folder

cd Code/run_yaml/

Step 1. Training on MNLI (task-agnostic)

  • $$AMLT_DATA_DIR/Data_GLUE/glue_data/{TASK_NAME}/ is the data folder.
  • $$AMLT_DATA_DIR/Local_models/pretrained_BERTs/BERT_base_uncased/ contains the teacher and student initialization.
  • Please create the model directory, download the pretrained BERT-base-uncased checkpoint, and place it there (see the setup sketch below).

Train_NoAssist_SeriesEpochs_NoHard_PreModel_RndSampl.yaml
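
    A minimal preparation sketch for this step, assuming $AMLT_DATA_DIR is set in your shell to the data root that the YAML files reference as $$AMLT_DATA_DIR, and ${GLUE_DIR} holds the GLUE data from above; the copy commands are illustrative, not part of the repository:

    # Place the MNLI data and the pretrained BERT-base-uncased checkpoint where the
    # training YAML expects them
    mkdir -p ${AMLT_DATA_DIR}/Data_GLUE/glue_data
    mkdir -p ${AMLT_DATA_DIR}/Local_models/pretrained_BERTs/BERT_base_uncased
    cp -r ${GLUE_DIR}/MNLI ${AMLT_DATA_DIR}/Data_GLUE/glue_data/
    cp -r /path/to/bert-base-uncased/* ${AMLT_DATA_DIR}/Local_models/pretrained_BERTs/BERT_base_uncased/

    The configuration file above is then submitted with your job launcher; the $$AMLT_DATA_DIR references in the YAML suggest an AMLT-style setup.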

Step 2. Finetuning on RTE

  • $$AMLT_DATA_DIR/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/ contains the models trained on MNLI.
  • Epochs_{Epochs_TrainMNLI} names each model trained on MNLI for a given number of epochs.
  • Please create the folder $$AMLT_DATA_DIR/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/ and put the output models of Step 1 there (see the setup sketch below).

Train_finalfinetuning_SpecificSubs_SeriesEpochs_NoAssist_NoHardLabel_PretrainedModel.yaml
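
    A minimal preparation sketch for this step; the Epochs_* pattern follows the naming described above, and the source path of the Step 1 outputs is a placeholder:

    # Copy the Step 1 output models into the folder the finetuning YAML reads from
    mkdir -p ${AMLT_DATA_DIR}/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel
    cp -r /path/to/step1_outputs/Epochs_* \
        ${AMLT_DATA_DIR}/Outputs/glue/MNLI/NoAssist/All_NoAug_NoHardLabel_PreModel/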

Step 3. Evaluation on RTE

  • $$AMLT_DATA_DIR/Outputs/glue/{TASK_NAME}/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs/ contains the models finetuned on RTE.
  • FinetuneEpochs_{Finetune_Epochs}EpochsMNLI{Epochs_TrainMNLI}Sub{Subs} names each model finetuned on RTE, indexed by the number of finetuning epochs, the number of MNLI training epochs, and the subnetwork.
  • Please create the folder $$AMLT_DATA_DIR/Outputs/glue/{TASK_NAME}/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs/ and put the output models of Step 2 there (see the setup sketch below).

Evaluate_SpecificSubs_NoAssist_NoHardLabel_PretrainedModel.yaml
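
    A minimal preparation sketch for this step, with {TASK_NAME} set to RTE for this example; the source path of the Step 2 outputs is a placeholder:

    # Copy the Step 2 finetuned models into the folder the evaluation YAML reads from
    mkdir -p ${AMLT_DATA_DIR}/Outputs/glue/RTE/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs
    cp -r /path/to/step2_outputs/FinetuneEpochs_* \
        ${AMLT_DATA_DIR}/Outputs/glue/RTE/NoAssist/All_FINETUNING_NoHardLabel_PreModel/SpecificSubs/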

(ii) Example on Wiki data

First, train the model on Wiki. Then finetune it on RTE. Finally, evaluate it on RTE.

Step 0. Go to the configuration folder

cd Code/run_yaml/

Step 1. Training on Wiki

  • $$AMLT_DATA_DIR/English_Wiki/corpus_jsonfile_for_general_KD/ contains the processed Wiki data (see the preparation sketch below).

Train_wiki_NoAssist_NoHard_PreModel_RndSampl.yaml
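
    A minimal preparation sketch, assuming ${CORPUS_JSON_DIR} holds the processed Wiki data generated earlier:

    # Put the processed Wiki corpus where the Wiki training YAML expects it
    mkdir -p ${AMLT_DATA_DIR}/English_Wiki/corpus_jsonfile_for_general_KD
    cp -r ${CORPUS_JSON_DIR}/* ${AMLT_DATA_DIR}/English_Wiki/corpus_jsonfile_for_general_KD/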

Step 2. Finetuning on RTE

  • Same as Step 2 of the MNLI example above, but finetuning the models trained on Wiki in Step 1.

Step 3. Evaluation on RTE

  • Same as Step 3 of the MNLI example above.

Contributors

dongkuanx27, lucianodelcorro, microsoftopensource, subhomj


autodistil's Issues

Three questions about autodistil

Hello! Thank you for your excellent work. I would like to ask three questions:
1. Why is it necessary to train the supernet with a task-agnostic method? Would it also be possible to train the supernet on MNLI data in a supervised way? After all, the paper states that performance on MNLI is related to performance on the other eight GLUE tasks.
2. The paper uses three supernets of increasing scale. If a single supernet covered all scales, would its accuracy necessarily be lower than that of the three supernets? The paper only says the purpose is to reduce interference between the models' weights, but there is no experimental comparison.
3. You borrow the idea of few-shot NAS, but the point of few-shot NAS ("Few-shot Neural Architecture Search") is to resolve the inconsistency between the subnet ranking estimated with a one-shot supernet and the performance of subnets trained from scratch, by expanding the supernet's weights to a larger scale. I don't feel this work has much to do with few-shot NAS.
I would appreciate it if you could answer my questions. Thank you very much!
