Paper | Authors | Domain | Link |
---|---|---|---|
Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation | Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, Jie Zhou | Training, Non-Autoregressive NMT | https://arxiv.org/abs/1911.09320 |
Neural Machine Translation with Byte-Level Subwords | Changhan Wang, Kyunghyun Cho, Jiatao Gu | Data Processing | https://arxiv.org/abs/1909.03341 |
Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation | Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu | Curriculum Learning, Non-Autoregressive NMT | https://arxiv.org/abs/1911.08717 |
Unsupervised Neural Dialect Translation with Commonality and Diversity Modeling | Yu Wan, Baosong Yang, Derek F. Wong, Lidia S. Chao, Haihua Du, Ben C.H. Ao | Dialect Translation | https://arxiv.org/abs/1912.05134 |
Transductive Ensemble Learning for Neural Machine Translation | Yiren Wang, Lijun Wu, Yingce Xia, Tao Qin, ChengXiang Zhai, Tie-Yan Liu | Neural Network Architecture | - |
IntroVNMT: An Introspective Model for Variational Neural Machine Translation | Xin Sheng, Linli Xu, Junliang Guo, Jingchang Liu, Ruoyu Zhao, Yinlong Xu | Neural Network Architecture | - |
Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior | Raphael Shu, Jason Lee, Hideki Nakayama, Kyunghyun Cho | Non-Autoregressive NMT | https://arxiv.org/abs/1908.07181 |
Improving Context-Aware Neural Machine Translation Using Self-Attentive Sentence Embedding | Hyeongu Yun, Yongkeun Hwang, Kyomin Jung | Context-Aware NMT | - |
Controlling Neural Machine Translation Formality with Synthetic Supervision | Xing Niu, Marine Carpuat | Training | https://arxiv.org/abs/1911.08706 |
A Meta Learning Method Leveraging Multiple Domain Data for Low Resource Machine Translation | Rumeng Li, Xun Wang, Hong Yu | Low-Resource | - |
Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation | Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman | Multilingual NMT | https://arxiv.org/abs/1909.00437 |
Explicit Sentence Compression for Neural Machine Translation | Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao | Neural Network Architecture | https://arxiv.org/abs/1912.11980 |
Neuron Interaction Based Representation Composition for Neural Machine Translation | Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu | Neural Network Architecture | https://arxiv.org/abs/1911.09877 |
Neural Machine Translation with Joint Representation | Yanyang Li, Qiang Wang, Tong Xiao, Tongran Liu, Jingbo Zhu | Neural Network Architecture | https://arxiv.org/abs/2002.06546 |
Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation | Baijun Ji, Zhirui Zhang, Xiangyu Duan, Min Zhang, Boxing Chen, Weihua Luo | Low-Resource | https://arxiv.org/abs/1912.01214 |
Reinforced Curriculum Learning on Pre-trained Neural Machine Translation Models | Mingjun Zhao, Haijiang Wu, Di Niu, Xiaoli Wang | Curriculum Learning | - |
Towards Making the Most of BERT in Neural Machine Translation | Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Weinan Zhang, Lei Li | Training | https://arxiv.org/abs/1908.05672 |
Modeling Fluency and Faithfulness for Diverse Neural Machine Translation | Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, Dong Yu | Decoding | https://arxiv.org/abs/1912.00178 |
Acquiring Knowledge from Pre-trained Model to Neural Machine Translation | Rongxiang Weng, Heng Yu, Shujian Huang, Shanbo Cheng, Weihua Luo | Training using pre-trained model | https://arxiv.org/abs/1912.01774 |
Visual Agreement Regularized Training for Multi-Modal Machine Translation | Pengcheng Yang, Boxing Chen, Pei Zhang, Xu Sun | Multi-modal machine translation | https://arxiv.org/abs/1912.12014 |
Balancing Quality and Human Involvement: An Effective Approach to Interactive Neural Machine Translation | Tianxiang Zhao, Lemao Liu, Guoping Huang, Zhaopeng Tu, Huayang Li, Yingling Liu, GuiQuan Liu, Shuming Shi | Neural Network Architecture | - |
GRET: Global Representation Enhanced Transformer | Rongxiang Weng, Haoran Wei, Shujian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jiajun Chen | Neural Network Architecture | https://arxiv.org/abs/2002.10101 |