Market Globalization Despite Protectionism
I was pleasantly surprised the first time I peeked at the translation industry's market data: according to the report “The Language Services Market: 2018” by Common Sense Advisory (CSA), the global market for outsourced language services and technology was worth US$46.52 billion in 2018. CSA predicts that the language services industry will continue to grow, with the market reaching US$56.18 billion by 2021. A separate report, the Nimdzi 100 annual report, puts the language services industry at USD 53.6 billion in 2019 and projects it to reach USD 70 billion by 2023.
It was surprising to see that the translation industry, which has existed for centuries, is enjoying double-digit growth. This growth has endured despite the protectionist policies some countries have adopted in recent years. Globalization is real: as organizations around the world continue to transact across borders and make their products and services available in more languages, the translation market keeps growing.
Neural Machine Translation
Recent advances in deep learning, applied as neural machine learning, have achieved state-of-the-art results in machine translation. The moment that confirmed neural AI technology as the future came in September 2016, when Google announced that Google Translate had fully switched to neural machine translation.
The key benefit of neural machine translation is that a single system can be trained directly on source and target text, and no longer requires the pipeline of specialized systems used in statistical machine translation.
Earlier versions of neural machine translation used multilayer perceptron models but were limited to fixed-length input sequences where the output had to be the same length. Since then, the architecture of these models has improved through the use of recurrent neural networks organized into an encoder-decoder architecture, allowing variable-length input and output sequences. Later, adding attention mechanisms improved the translation of long sequences of words by letting the model learn where to place attention on the input sequence as each word of the output sequence is decoded.
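The attention step described above can be sketched in a few lines. This is a toy, self-contained illustration of dot-product attention (one of several attention variants), not the architecture of any particular production engine: at each decoding step, the decoder's hidden state is scored against every encoder state, the scores are normalized into weights, and a weighted sum of encoder states becomes the context vector for that output word.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_step(encoder_states, decoder_state):
    """One decoding step of toy dot-product attention.

    encoder_states: (seq_len, hidden) -- one vector per source word
    decoder_state:  (hidden,)         -- current decoder hidden state
    Returns the attention weights over the source words and the
    context vector fed into the next decoder step.
    """
    scores = encoder_states @ decoder_state   # (seq_len,) similarity scores
    weights = softmax(scores)                 # normalized; sums to 1
    context = weights @ encoder_states        # (hidden,) weighted sum
    return weights, context

# Toy example: a 4-word source sentence with 3-dimensional hidden states.
rng = np.random.default_rng(0)
enc = rng.normal(size=(4, 3))
dec = rng.normal(size=(3,))
weights, context = attention_step(enc, dec)
```

Because the weights are recomputed for every output word, the model can "look back" at different parts of a long source sentence as it decodes, which is what lifted translation quality on longer inputs.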
State-of-the-art neural machine translation engines are now capable of instantly translating texts with 60-90% accuracy (more on this accuracy later). But the technology is not without its faults when put into practice for real-world translation. One glaring weakness is that it cannot translate texts consistently. Because of the encoder-decoder-with-attention architecture described above, only sequences of roughly sentence length can be used as inputs to the model. This is fine when translating a single sentence, but when long paragraphs or entire documents need to be translated, the model translates each sentence individually, without knowledge of the preceding sentences, so key terms may be rendered inconsistently from one translated sentence to the next.
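One common mitigation for this inconsistency is a glossary post-pass over the sentence-by-sentence output. The sketch below is purely illustrative (the sentences, the variant list, and the simple string substitution are all invented for the example; real systems use techniques such as constrained decoding): each sentence was "translated" in isolation, so the same source term came out two different ways, and the post-pass maps known variants back to one canonical rendering.

```python
import re

def enforce_glossary(sentences, variants):
    """Replace known variant renderings of a term with one canonical form.

    variants: dict mapping a canonical term to the alternative strings an
    MT engine might emit for it when translating sentences in isolation.
    """
    out = []
    for s in sentences:
        for canonical, alts in variants.items():
            for alt in alts:
                # Substitute every occurrence of the variant rendering.
                s = re.sub(re.escape(alt), canonical, s)
        out.append(s)
    return out

# Two sentences translated independently: the same source term was
# rendered two different ways across them.
translated = [
    "The agreement covers the shareholders' equity of the company.",
    "Any change in owners' equity must be disclosed.",
]
fixed = enforce_glossary(
    translated,
    {"shareholders' equity": ["owners' equity", "owner's equity"]},
)
```

A post-pass like this is crude, but it shows why document-level consistency is treated as a separate problem layered on top of the sentence-level translation model.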
It’s Not Humans vs. AI
While there is some truth to the idea that AI technology will replace humans in some jobs, AI is still far from being able to do so across a wide variety of tasks. Since founding TranslateFX, the human translators we work with have often asked whether the artificial intelligence software we are developing will put them out of their careers. The short answer is no. The long answer is that AI software will continue to improve and will make translators far more productive than in the past. The most likely scenario is that the software will assist us at particular tasks (such as those involving repetition or recall) while humans remain an essential part of the translation process. It is not a zero-sum game between humans and AI. We will work together, or more accurately, AI software will augment humans.
Post-Editing is the new Human Input
Neural machine translation will likely continue to improve over time through better neural network architectures, vetted quality data, and more computation. This change will require human translators to adapt to the benefits of the technology and focus on what humans are good at. Since neural machine translation can instantly produce accurate first drafts, one task humans will spend more time on is post-editing, that is, reviewing and correcting machine-translated texts. At this point in time, Lilt and TranslateFX are the only two companies I know of that are positioned for this coming change in the translation industry.
Still About Context
Professional translators know that the key to quality translation is context, and this will be truer than ever in the future of translation. Most of the well-known machine translation engines today (such as Google Translate or Bing Translator) are generic. They are trained to translate an assortment of texts, from recipes to restaurant menus to storyboards to chat log transcripts and much more. Like humans, machines cannot translate a text accurately without understanding its use, circumstances, and audience. Financial statements, for example, should obviously not be translated with the same tone, style, and terminology as a children's storybook.
The Future is Customization
Building on neural network AI, we will see more custom machine translation engines designed for specific use cases and contexts to improve upon the accuracy of generic machine translation. In my experience, custom-developed machine translation engines can improve accuracy by 20% or more over generic engines. Many of these custom engines will be developed per company or per industry; TranslateFX, for example, focuses on AI machine translation models specifically for financial and legal documents, and for even more specific cases such as equity research reports.
New and different AI models will also be developed to supplement neural machine translation. As discussed above, consistency issues are a side effect of neural machine translation, and issues of this kind are being resolved with additional machine learning or natural language processing algorithms. These augmenting algorithms will also be developed per context.
Lastly, computer-assisted translation tools will also need to evolve with neural machine translation. Today's computer-assisted translation tools are largely built around statistical machine translation technology, and core features such as translation memory will need to adapt to the neural technology now powering these tools. Based on my observations, all these changes will cause some confusion in the translation industry, an industry where human translators take much pride in their work.
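To make the translation-memory feature concrete, here is a minimal sketch of a fuzzy lookup: given a new source segment, find the most similar previously translated segment and reuse its stored translation if the match is close enough. The memory contents, threshold, and similarity measure (Python's standard-library `difflib` ratio) are all assumptions for illustration; commercial CAT tools use more elaborate edit-distance scoring.

```python
import difflib

def tm_lookup(source, memory, threshold=0.75):
    """Fuzzy translation-memory lookup.

    memory: dict of previously translated source -> target segments.
    Returns (stored_translation, similarity) for the closest past
    segment if its similarity clears the threshold, else None.
    """
    best_pair, best_score = None, 0.0
    for past_source, target in memory.items():
        score = difflib.SequenceMatcher(None, source, past_source).ratio()
        if score > best_score:
            best_pair, best_score = (past_source, target), score
    if best_pair is not None and best_score >= threshold:
        return best_pair[1], best_score
    return None

# One previously translated segment in the memory (invented example).
memory = {
    "The company reported net profit of $10 million.":
        "La société a déclaré un bénéfice net de 10 millions de dollars.",
}
# A near-duplicate new segment triggers a high-similarity match.
hit = tm_lookup("The company reported net profit of $12 million.", memory)
```

With statistical systems, such fuzzy matches were often better than raw machine output; as neural engines produce stronger first drafts, tools will need to decide when a memory match should still override the engine.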
About Erik Chan
Erik Chan is the technology co-founder of TranslateFX. A serial entrepreneur and technologist, Erik previously co-founded RocketClub, an equity-based crowdfunding platform for startups; MicroPay Technology, a gaming payments company; and the social and online game companies 28wins and Bottomless Pit Games. Before his entrepreneurial ventures, he spent time at Activision Blizzard and Midway Games, first as a systems engineer and then as a producer. Erik was awarded first prize in Intel's best multi-threaded game competition and is also a recipient of the James F. Lincoln Arc Welded Engineering Design Award.
Erik holds an MSc in Management from MIT, an MBA from Tsinghua University, and a BSc in Biomedical Engineering with Computer Science from Johns Hopkins University. He also spent time doing research at the MIT Media Lab in the Center for Bits and Atoms and Software Agents groups.
Content provided by Erik Chan. Views expressed in this article do not represent the opinion of Synced Review or its editors.