Issue #26: Context and Copying in Neural MT

Author: Raj Patel, Machine Translation Scientist @ Iconic

When translating from one language to another, certain words and tokens need to be copied, rather than translated, into the target sentence. This includes proper nouns, names, numbers, and 'unknown' tokens. We want these to appear in the translation exactly as they were in the original text. Neural MT systems with...
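One common workaround for the copy-through problem sits outside the model entirely: mask copyable tokens with indexed placeholders before translation and restore them afterwards. Below is a minimal sketch of this idea for numbers only; the `__NUM*__` placeholder format and the regex are illustrative assumptions, not necessarily the approach discussed in the article:

```python
import re

def mask_copy_tokens(source: str):
    """Replace numbers with indexed placeholders so they pass through
    the MT system unchanged; returns the masked sentence and a mapping
    used to restore the original values afterwards."""
    mapping = {}

    def _mask(match):
        key = f"__NUM{len(mapping)}__"
        mapping[key] = match.group(0)
        return key

    masked = re.sub(r"\d+(?:[.,]\d+)*", _mask, source)
    return masked, mapping

def unmask(translation: str, mapping: dict) -> str:
    """Substitute the original values back into the translated output."""
    for key, value in mapping.items():
        translation = translation.replace(key, value)
    return translation

masked, mapping = mask_copy_tokens("Invoice 4711 totals 1,250.00 EUR")
# masked == "Invoice __NUM0__ totals __NUM1__ EUR"
# (pretend the MT system translated the sentence and kept the placeholders)
restored = unmask(masked, mapping)
```

In production the same trick is often applied to URLs, e-mail addresses, and tagged named entities, with the placeholder tokens included in the training data so the model learns to carry them through.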

Issue #25: Improving Neural MT with Cross-Language Model Pretraining

Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

One of the reasons for the success of Neural MT, and deep learning techniques in general, is its more effective and efficient use of large amounts of training data without excessive overhead in inference time or model size. This also opens the door to...

Issue #24: Exploring Language Models for Neural MT

Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Monolingual language models were a critical component of phrase-based Statistical Machine Translation systems. They are also used in unsupervised Neural MT systems ('unsupervised' meaning that no parallel data is available to supervise training; in other words, only monolingual data is used). However, they are not used in standard supervised Neural MT engines and training language...
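As a refresher on what such monolingual models do, here is a toy bigram language model with add-one smoothing, in the spirit of the LMs used in phrase-based SMT. It is a deliberately simplified illustration (real systems used higher-order n-grams with better smoothing), not the neural approach discussed in the article:

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Train a toy bigram LM with add-one smoothing on a list of
    sentences, and return a function scoring the log-probability
    of a new sentence."""
    unigrams, bigrams = Counter(), Counter()
    vocab = set()
    for sent in corpus:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])          # bigram contexts
        bigrams.update(zip(tokens, tokens[1:]))
    V = len(vocab)

    def log_prob(sentence):
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        # add-one smoothed conditional probabilities P(b | a)
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
            for a, b in zip(tokens, tokens[1:])
        )

    return log_prob

lm = train_bigram_lm(["the cat sat", "the dog sat"])
fluent = lm("the cat sat")
garbled = lm("sat the cat")
# the fluent word order receives the higher log-probability
```

In a phrase-based decoder, a score like this was combined log-linearly with translation-model scores, steering the system toward fluent target-language output.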

Issue #23: Unbiased Neural MT

Author: Raj Patel, Machine Translation Scientist @ Iconic

A recent topic of conversation and interest in the area of Neural MT - and Artificial Intelligence in general - is gender bias. Neural models are trained using large text corpora which inherently contain social biases and stereotypes, and as a consequence, translation models inherit these biases. In this article, we’ll try to understand how gender...

Issue #22: Mixture Models in Neural MT

Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

It goes without saying that Neural Machine Translation has become the state of the art in MT. However, one challenge we still face is developing a single general MT system that works well across a variety of different input types. As we know from long-standing research into domain adaptation, a system trained on patent data doesn't...

Issue #21: Revisiting Data Filtering for Neural MT

Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

The Neural MT Weekly is back for 2019 after a short break over the holidays! 2018 was a very exciting year for machine translation, as documented in the first 20 articles in this series. What was striking was the pace of development, even in the six months since we started publishing these articles. This was...

Issue #20: Dynamic Vocabulary in Neural MT

Author: Dr. Raj Patel, Machine Translation Scientist @ Iconic

As has been covered a number of times in this series, Neural MT requires good data for training, and acquiring such data for new languages can be costly and not always feasible. One approach in the Neural MT literature to improving translation quality for low-resource languages is transfer learning. A common practice is to reuse the model...

Issue #19: Adaptive Neural MT

Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Neural Machine Translation is known to be particularly poor at translating out-of-domain data. That is, an engine trained on generic data will be much worse at translating medical documents than an engine trained on medical data. It is much more sensitive to such differences than, say, Statistical MT. This problem is partially solved by domain...
