This repository shows how to build a sentiment classification model using a pretrained BERT model. BERT is a large transformer-based language model that can be fine-tuned for a variety of tasks. All steps, from data loading to model training and classification, are described here.
In summary, this repository performs the following operations:
- Importing the SMILE Twitter Emotion dataset for emotion classification
- Exploring the features of the dataset and performing the necessary preprocessing
- Loading the tokenizer and encoding the dataset
- Setting up the pretrained BERT model
- Configuring the model for training and defining accuracy metrics
- Training and evaluating the model
- Saving and loading the trained model