Issue #36: Average Attention Network for Neural Machine Translation

Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

In Issue #32, we covered the Transformer model for neural machine translation, which is the current state of the art in neural MT. In this post we explore a technique presented by Zhang et al. (2018) that modifies the Transformer model and speeds up the translation process by 4-7 times across a range of different engines...
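
The core of Zhang et al.'s proposal is to replace self-attention in the Transformer decoder with a simple cumulative average over the previously generated positions, which can be updated in constant time per decoding step. Below is a minimal, illustrative sketch of that averaging operation in PyTorch; the function name and tensor layout are our own, and the full model additionally applies a feed-forward network and a gating layer on top of this average.

```python
import torch

def cumulative_average(y: torch.Tensor) -> torch.Tensor:
    # y: decoder-side representations of shape (batch, length, d_model).
    # Position j is summarised as the uniform average of positions 1..j,
    # so no attention weights need to be computed over past positions.
    cumsum = torch.cumsum(y, dim=1)
    positions = torch.arange(1, y.size(1) + 1, device=y.device, dtype=y.dtype)
    return cumsum / positions.view(1, -1, 1)
```

At inference time this average can be maintained as a running sum, which is what gives the reported decoding speed-up.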

Issue #35: Text Repair Model for Neural Machine Translation

Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Neural machine translation engines produce systematic errors which are not always easy to detect and correct in an end-to-end framework with millions of hidden parameters. One potential way to resolve these issues is to do so after the fact, correcting the errors by post-processing the output with an automatic post-editing (APE) step. This week we take...
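
In its most common form, APE is a second-pass model that sees both the original source sentence and the raw MT output and produces a corrected translation. The sketch below only illustrates that pipeline shape; `translate` and `ape_model` are hypothetical stand-ins for the trained systems discussed in the post.

```python
from typing import Callable, List

def post_edit(sources: List[str],
              translate: Callable[[str], str],
              ape_model: Callable[[str, str], str]) -> List[str]:
    # First pass: the baseline NMT engine produces a draft translation.
    # Second pass: the APE model repairs systematic errors, conditioning
    # on both the source sentence and the draft.
    corrected = []
    for src in sources:
        draft = translate(src)
        corrected.append(ape_model(src, draft))
    return corrected
```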

Issue #29: Improving Robustness in Neural MT

Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

Despite the high level of performance in current Neural MT engines, there remains a significant issue with robustness when it comes to unexpected, noisy input. When the input is not clean, the quality of the output drops drastically. In this issue, we will take a look at the impact of various types of 'noise' on...
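
As a purely illustrative example of synthetic noise, the sketch below randomly swaps adjacent characters to simulate typos; the function and the probability value are our own and not taken from the post, but probing an engine with inputs noised in this way is one simple method for measuring how sharply quality degrades.

```python
import random

def add_character_noise(sentence: str, swap_prob: float = 0.1) -> str:
    # Randomly swap adjacent characters inside words to simulate typos.
    noisy_words = []
    for word in sentence.split():
        chars = list(word)
        for i in range(len(chars) - 1):
            if random.random() < swap_prob:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        noisy_words.append("".join(chars))
    return " ".join(noisy_words)

# Compare translation quality on clean vs. noised test sets to quantify robustness.
print(add_character_noise("the quick brown fox jumps over the lazy dog"))
```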

Issue #28: Hybrid Unsupervised Machine Translation

Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

In Issue #11 of this series, we first looked directly at the topic of unsupervised machine translation: training an engine without any parallel data. Since then, it has gone from a promising concept to one that can produce effective systems performing close to the level of fully supervised engines (trained with parallel data). The...
