Lacuna Masakhane Parts of Speech Classification Challenge

Data

Only the competition data from the repo https://github.com/masakhane-io/masakhane-pos is used.

Model

The pretrained NLLB-200 (https://huggingface.co/facebook/nllb-200-distilled-600M) is an encoder-decoder multilingual translation model.

We use only its encoder for the POS tagging task and define an M2M100ForTokenClassification class, which consists of the encoder of the NLLB model with a token classification head on top. The class is defined in masakhane_pos/m2m_100_encoder/modeling_m2m_100.py; a simplified sketch of the idea is shown below.
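
A minimal sketch of this architecture, assuming Hugging Face transformers; the class name, dropout rate, and label count here are illustrative, not the repo's exact implementation:

```python
import torch.nn as nn
from transformers import AutoTokenizer, M2M100Model


class EncoderForTokenClassification(nn.Module):
    """Simplified stand-in for the repo's M2M100ForTokenClassification."""

    def __init__(self, model_name, num_labels):
        super().__init__()
        # Load the full seq2seq model, then keep only its encoder.
        self.encoder = M2M100Model.from_pretrained(model_name).get_encoder()
        self.dropout = nn.Dropout(0.1)
        # Token classification head on top of the encoder hidden states.
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        logits = self.classifier(self.dropout(hidden))
        loss = None
        if labels is not None:
            # CrossEntropyLoss ignores positions labeled -100 by default.
            loss = nn.CrossEntropyLoss()(logits.view(-1, logits.size(-1)),
                                         labels.view(-1))
        return {"loss": loss, "logits": logits}


tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = EncoderForTokenClassification("facebook/nllb-200-distilled-600M",
                                      num_labels=17)  # e.g. 17 UPOS tags
```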

Training

The LoRA approach is used for fine-tuning the model; a minimal sketch is shown below.
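
A minimal sketch of the LoRA setup, assuming the peft library and reusing the model from the sketch above; the hyperparameters and target module names are illustrative, not necessarily those used in train.py:

```python
from peft import LoraConfig, get_peft_model

# Illustrative LoRA configuration; the actual values live in train.py.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank updates
    lora_alpha=32,                        # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in the encoder
    modules_to_save=["classifier"],       # train the classification head fully
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```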

All data processing and utility functions are defined in masakhane_pos/utils.py.

Training script: train.py

Notebook with the full code to reproduce the solution: train.ipynb. Open and run it in Google Colab; full training takes about 30 minutes.

Finding the best solution

First, iteratively perform a greedy search for the best set of training languages based on the public score (a sketch of the loop follows the list):

  1. Train on one language, validate on all the others, and find the language that gives the best public score.
  2. Train on two languages: the best from the previous step (lug) plus each of the others in turn, and keep the best pair based on the public score.
  3. Train on three languages: the best pair from the previous step (lug, ibo) plus each of the others in turn, and keep the best triple based on the public score.
  4. And so on.
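
A sketch of this greedy loop; train_and_score is a hypothetical stand-in for a full training run plus a public-leaderboard submission:

```python
# Languages mentioned in this README; the full competition list is longer.
CANDIDATE_LANGS = ["lug", "ibo", "mos", "sna", "luo", "tsn"]


def train_and_score(langs):
    """Hypothetical helper: fine-tune on `langs`, submit predictions,
    and return the public leaderboard score. Stubbed out here."""
    return 0.0  # in practice: a training run + a Zindi submission


best_set, best_score = [], float("-inf")
improved = True
while improved:
    improved = False
    best_lang = None
    # Try adding each remaining language to the current best set.
    for lang in CANDIDATE_LANGS:
        if lang in best_set:
            continue
        score = train_and_score(best_set + [lang])
        if score > best_score:
            best_score, best_lang, improved = score, lang, True
    if improved:
        best_set.append(best_lang)  # greedily keep the best addition

print(best_set)  # with real scores this ended at ["lug", "ibo", "mos", "sna"]
```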

The set of languages used in the final submission is lug, ibo, mos, sna; a set of 5 languages gave a slightly worse result than the set of 4.

Second, apply pseudo-labeling (a sketch follows the list):

  1. Train the model on the best set of languages found (lug, ibo, mos, sna); this gave a public score of ~0.728.
  2. Predict labels for luo and tsn with the model from the previous step and add them to the training data; this increased the public score to ~0.742.
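
A sketch of the pseudo-labeling step, assuming the model and tokenizer from the sketches above and an id2label mapping from class indices to tag names; the repo's exact post-processing may differ:

```python
import torch


@torch.no_grad()
def pseudo_label(sentences, model, tokenizer, id2label):
    """Tag unlabeled sentences (lists of words) with the current model."""
    model.eval()
    labeled = []
    for words in sentences:
        enc = tokenizer(words, is_split_into_words=True,
                        truncation=True, return_tensors="pt")
        logits = model(input_ids=enc["input_ids"],
                       attention_mask=enc["attention_mask"])["logits"]
        preds = logits.argmax(dim=-1)[0].tolist()
        # Keep the prediction of the first subword of each word, the usual
        # token-classification convention.
        tags, seen = [], set()
        for pos, word_id in enumerate(enc.word_ids()):
            if word_id is not None and word_id not in seen:
                seen.add(word_id)
                tags.append(id2label[preds[pos]])
        labeled.append((words, tags))
    return labeled


# Tag the luo and tsn sentences, append them to the training data, and
# retrain; this is what lifted the public score from ~0.728 to ~0.742.
```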
