strategist922 / offensive-language-detection

This project was forked from batuhanguler/offensive-language-detection.


Classification of offensive tweets, part of the OffensEval 2019 competition.


offensive-language-detection's Introduction

Offensive_language_detection

This project was completed as part of the NLP module at Imperial College. The goal was to propose an original approach to this problem. We proposed a deep learning approach that uses transfer learning to address the data scarcity problem: both unsupervised transfer learning (using pre-trained embeddings) and sequential transfer learning (tasks learned sequentially). A paper has been released in which we discuss the details of our implementation.
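To make the unsupervised transfer-learning idea concrete, here is a minimal sketch of how pre-trained word vectors are typically injected into a model's embedding layer. The vectors and vocabulary below are toy stand-ins, not the embeddings actually used in this project (which would be loaded from a file such as GloVe or fastText):

```python
import numpy as np

# Toy stand-in for pre-trained word vectors; in practice these would be
# loaded from disk (e.g. GloVe/fastText). Purely illustrative values.
pretrained = {
    "you": np.array([0.1, 0.2, 0.3]),
    "idiot": np.array([0.9, 0.8, 0.7]),
}

vocab = ["<pad>", "<unk>", "you", "idiot", "hello"]
dim = 3

# Build the embedding matrix: words covered by the pre-trained vectors
# reuse them (the transfer step); words outside that coverage get a small
# random initialisation and are learned from scratch.
rng = np.random.default_rng(0)
embedding_matrix = np.zeros((len(vocab), dim))
for i, word in enumerate(vocab):
    if word in pretrained:
        embedding_matrix[i] = pretrained[word]
    else:
        embedding_matrix[i] = rng.normal(scale=0.1, size=dim)

# Looking up a tweet: unknown tokens map to the <unk> index (1).
token_ids = [vocab.index(w) if w in vocab else 1 for w in ["you", "idiot", "trash"]]
embedded = embedding_matrix[token_ids]  # one dim-sized vector per token
```

This matrix would then initialise the embedding layer of the network (frozen or fine-tuned, depending on the setup), so the model starts from representations learned on a much larger corpus than the labelled tweets.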

Creators: Batuhan Güler - Alexis Laignelet - Nicolo Frisiani

Description of the project

The project is based on the OffensEval 2019 competition, hosted on the CodaLab platform. The full description of the competition is accessible here.

Offensive language is pervasive in social media. Individuals frequently take advantage of the perceived anonymity of computer-mediated communication, using this to engage in behavior that many of them would not consider in real life. Online communities, social media platforms, and technology companies have been investing heavily in ways to cope with offensive language to prevent abusive behavior in social media.

In OffensEval we break down offensive content into three sub-tasks taking the type and target of offenses into account.

Sub-tasks

  • Sub-task A - Offensive language identification
  • Sub-task B - Automatic categorization of offense types
  • Sub-task C - Offense target identification
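The three sub-tasks nest: Sub-task B only applies to tweets that Sub-task A labels offensive, and Sub-task C only to those Sub-task B labels targeted. A minimal sketch of that cascade, using the OLID label sets (OFF/NOT, TIN/UNT, IND/GRP/OTH) but with trivial keyword stubs in place of the actual classifiers:

```python
# Hedged sketch of how the three sub-tasks nest; the classifiers passed in
# are stand-in rules, not the models from the paper.
def classify(tweet, clf_a, clf_b, clf_c):
    labels = {"A": clf_a(tweet)}            # Sub-task A: OFF vs NOT
    if labels["A"] == "OFF":
        labels["B"] = clf_b(tweet)          # Sub-task B: TIN (targeted) vs UNT
        if labels["B"] == "TIN":
            labels["C"] = clf_c(tweet)      # Sub-task C: IND / GRP / OTH
    return labels

# Trivial keyword stubs, purely for illustration.
clf_a = lambda t: "OFF" if "idiot" in t else "NOT"
clf_b = lambda t: "TIN" if "you" in t else "UNT"
clf_c = lambda t: "IND"

print(classify("you idiot", clf_a, clf_b, clf_c))
# {'A': 'OFF', 'B': 'TIN', 'C': 'IND'}
print(classify("nice day", clf_a, clf_b, clf_c))
# {'A': 'NOT'}
```

The cascade mirrors the annotation scheme of the dataset: a tweet labelled NOT in Sub-task A simply has no gold label for the later sub-tasks.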

Paper

A paper containing the details of our submission to OffensEval 2019 (SemEval 2019 - Task 6) has been released. The competition was based on the Offensive Language Identification Dataset. We first discuss the details of the classifier implemented, the type of input data used, and the pre-processing performed. We then move on to critically evaluating our performance. We achieved macro-average F1-scores of 0.76, 0.68, and 0.54 for Sub-tasks A, B, and C respectively, which we believe reflects the sophistication of the models implemented. Finally, we discuss the difficulties encountered and possible improvements for the future.
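The macro-average F1-score used above weights every class equally, which matters on this dataset because the offensive classes are minorities. A self-contained sketch of the metric (the labels below are illustrative, not the paper's actual outputs):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (macro-averaging), so each
    class counts equally regardless of how many examples it has."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative Sub-task A gold labels and predictions (OFF vs NOT).
y_true = ["OFF", "NOT", "NOT", "OFF", "NOT", "OFF"]
y_pred = ["OFF", "NOT", "OFF", "OFF", "NOT", "NOT"]
print(round(macro_f1(y_true, y_pred), 3))  # 0.667
```

In practice this is equivalent to `sklearn.metrics.f1_score(..., average="macro")`, which is what the official OffensEval evaluation used.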

offensive-language-detection's People

Contributors: batuhanguler

Watchers: paper2code-bot
