Natural Language Processing course - CA5
This is a collaborative project on Neural Machine Translation (NMT) using two well-known toolkits: FairSeq and OpenNMT.
The first step was preprocessing, which involved lowercasing all text and tokenizing it with Byte Pair Encoding (BPE) via the subword-nmt library.
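To make the BPE step concrete, here is a minimal pure-Python sketch of the merge-learning idea behind BPE. Note that the actual project used the subword-nmt library rather than this code, and the toy corpus and merge count below are made up for illustration:

```python
# Minimal sketch of BPE merge learning on a toy corpus (stdlib only).
# Illustrates the idea behind subword-nmt; the corpus and the number of
# merges here are invented for this example.
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dict."""
    # Represent each word as a tuple of symbols (single characters to start).
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        # Merge the most frequent pair into a single symbol.
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

corpus = {"low": 5, "lower": 2, "newest": 6, "widest": 3}
# Lowercasing step from the preprocessing pipeline:
corpus = {w.lower(): f for w, f in corpus.items()}
rules = learn_bpe(corpus, 3)
```

Each learned rule is a pair of symbols to be merged; applying the rules in order to new text reproduces the learned subword segmentation.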
For FairSeq, we had to specify several hyperparameters for the model, such as the loss function, optimizer, and batch size. Here is a sample output from training the model with the FairSeq tool:
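For context, a FairSeq run of this kind is typically launched along the following lines. This is a hedged sketch only: the language pair, data paths, architecture, and hyperparameter values shown are illustrative assumptions, not the settings used in this project.

```sh
# Hypothetical FairSeq pipeline; all paths and values are illustrative.
fairseq-preprocess \
    --source-lang src --target-lang tgt \
    --trainpref data/train.bpe --validpref data/valid.bpe \
    --destdir data-bin

fairseq-train data-bin \
    --arch transformer \
    --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt \
    --warmup-updates 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 \
    --save-dir checkpoints
```

The `--criterion`, `--optimizer`, and `--max-tokens` flags correspond to the loss function, optimizer, and (token-based) batch size mentioned above.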
The same procedure was applied with OpenNMT, and a sample output is shown below:
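In OpenNMT-py, the equivalent setup is usually expressed as a YAML configuration. The fragment below is a hedged sketch: every path and value in it is an illustrative assumption, not this project's actual configuration.

```yaml
# Hypothetical OpenNMT-py config (config.yaml); paths and values are
# illustrative, not the project's actual settings.
save_data: run/example
src_vocab: run/example.vocab.src
tgt_vocab: run/example.vocab.tgt
data:
    corpus_1:
        path_src: data/train.src.bpe
        path_tgt: data/train.tgt.bpe
    valid:
        path_src: data/valid.src.bpe
        path_tgt: data/valid.tgt.bpe
save_model: run/model
train_steps: 100000
batch_size: 64
optim: adam
learning_rate: 0.0005
```

With such a config, training proceeds by building the vocabulary with `onmt_build_vocab -config config.yaml` and then running `onmt_train -config config.yaml`.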