NMT 134: A Targeted Attack on Black-Box Neural Machine Translation

Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic

Last week we looked at how neural machine translation (NMT) systems are naturally susceptible to gender bias. In today's blog post we look at the vulnerability of an NMT system to targeted attacks, which could result in unsolicited or harmful translations. Specifically, we report on work by Xu et al., 2021, which examines attacks on black-box NMT...


Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Neural machine translation engines produce systematic errors which are not always easy to detect and correct in an end-to-end framework with millions of hidden parameters. One potential way to resolve these issues is to do so after the fact: correcting the errors by post-processing the output with an automatic post-editing (APE) step. This week we take...


Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

Despite the high level of performance of current Neural MT engines, there remains a significant issue with robustness when it comes to unexpected, noisy input. When the input is not clean, the quality of the output drops drastically. In this issue, we will take a look at the impact of various types of 'noise' on...


Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

Training a neural machine translation engine is a time-consuming task. It typically takes days or even weeks, even when running on powerful GPUs. Reducing this time is a priority for any neural MT developer. In this post we explore recent work (Ott et al., 2018) in which, without compromising translation quality, they...
