When it comes to AI use cases, language translation is among the most practical and widely adopted by companies and organizations. From Canva to Bloomberg, companies have pursued automatic multilingual translation to make content more accessible to employees, customers, communities, and the public. Since introducing the transformer architecture in 2017, which powers today’s large language models (LLMs), Google has continued to produce pioneering work, including many advances in AI translation. In this blog post, we’re pleased to announce a new generative model for Google Cloud’s Translation API and to share an overview of other recent advances that are helping businesses accelerate translation use cases with AI.
Translation API introduces a specialized generative AI model, fine-tuned for translations
The latest addition to Google Cloud’s Translation AI portfolio lets Translation API customers choose between our traditional machine translation model (a.k.a. NMT) and our new Translation LLM. Fine-tuned on millions of source and target translation segments, the Translation LLM is well suited to longer context, so consider it for translating paragraphs and articles. NMT may still be optimal for chat conversations and short text, low-latency experiences, or use cases in which consistency and terminology management are critical. You can try out our Translation LLM on Vertex AI Studio in translation mode.
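For illustration, here is a minimal sketch of how you might choose between the two models through Translation API Advanced (Cloud Translation v3) using the google-cloud-translate Python client. The project ID, location, and the model path for the Translation LLM are assumptions for the example; check the Translation API documentation for the exact identifiers available in your region.

```python
# Minimal sketch: selecting the traditional NMT model or the Translation LLM
# through Cloud Translation API Advanced (v3). Project, location, and the
# Translation LLM model path below are assumptions, not confirmed values.
from google.cloud import translate_v3 as translate

PROJECT_ID = "my-project"   # hypothetical project
LOCATION = "us-central1"    # assumed region for the Translation LLM
PARENT = f"projects/{PROJECT_ID}/locations/{LOCATION}"

client = translate.TranslationServiceClient()

def translate_text(text: str, target: str, use_llm: bool = False) -> str:
    """Translate `text` with either the NMT model or the Translation LLM."""
    model_id = "general/translation-llm" if use_llm else "general/nmt"  # assumed identifiers
    response = client.translate_text(
        request={
            "parent": PARENT,
            "contents": [text],
            "source_language_code": "en",
            "target_language_code": target,
            "mime_type": "text/plain",
            "model": f"{PARENT}/models/{model_id}",
        }
    )
    return response.translations[0].translated_text

# Short text: NMT; longer, paragraph-level content: Translation LLM.
print(translate_text("The quarterly report is attached.", "de"))
print(translate_text("The quarterly report is attached.", "de", use_llm=True))
```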
More flexible translation in real time with Generative AI
Launched to general availability in February 2024 on Translation API Advanced, Adaptive Translation is an integrated API method that works in concert with our specialized Translation LLM. When customers request an adaptive translation, they supply both the text to be translated and a small dataset of translated examples (as few as five or as many as 30,000). The API applies an algorithm to find the best examples for each translation request, then passes this refined context to the LLM at inference time. The result is a quick and easy way for customers to optimize translation output to better fit their style requirements and use cases in real time.
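As a rough sketch of what such a request looks like in code, the snippet below calls the Adaptive Translation method of Translation API Advanced against an Adaptive MT dataset that is assumed to already exist and to contain example segment pairs (created and imported beforehand via the Adaptive MT dataset APIs). The project, location, and dataset names are placeholders.

```python
# Minimal sketch: requesting an adaptive translation against an existing
# Adaptive MT dataset of example segments. Project, location, and dataset
# names are hypothetical placeholders.
from google.cloud import translate_v3 as translate

PROJECT_ID = "my-project"   # hypothetical project
LOCATION = "us-central1"    # hypothetical location
PARENT = f"projects/{PROJECT_ID}/locations/{LOCATION}"
DATASET = f"{PARENT}/adaptiveMtDatasets/my-style-examples"  # assumed, pre-populated dataset

client = translate.TranslationServiceClient()

response = client.adaptive_mt_translate(
    request={
        "parent": PARENT,
        "dataset": DATASET,
        "content": ["Please review the attached onboarding guide before Friday."],
    }
)

# The service selects the most relevant example segments from the dataset
# and conditions the Translation LLM on them at inference time.
for translation in response.translations:
    print(translation.translated_text)
```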
At Google Cloud Next ‘24 in April, AI-enabled translation platform Smartling co-presented on responsive translation using generative AI. Smartling shared benchmarks of Google Adaptive Translation spanning multiple verticals and nine languages, finding that Adaptive Translation outperformed Google Translate with quality gains of up to 23%.
“Adding Google Adaptive Translation to our portfolio of engines is a pivotal point in the Smartling translation and AI strategy due to its easily customizable, dynamic and dramatic improvement in quality,” noted Olga Beregovaya, VP of AI at Smartling. “Unlike other general purpose LLMs, Google Cloud Adaptive Translation is particularly well suited for translation tasks. It is an attractive solution due to best results in terms of performance-cost tradeoff and is especially well suited for clients with sparse data scenarios who are beginning their localization journey, entering new markets or are seeking to minimize content drift.”
Translation in AI Studio
Want to try out your content with a few different models in one go? By offering translation in AI Studio, we now make it fast and easy to test translations with not only our specialized Translation LLM, but also Gemini and Google’s traditional translation models.
Quality gains for German, Japanese, Hindi, and Chinese on traditional translation models: Throughout 2023, we quietly refreshed models for 30 language pairs, landing quality gains along the way. As of April 1, 2024, customers using the translation APIs automatically benefit from our latest model refreshes for German, Japanese, Hindi, and Chinese. These updates to Google’s pretrained general translation models (a.k.a. NMT) deliver strong quality gains, with significant MQM error reduction across all four languages in both directions. Many of these gains come from enabling multi-sentence context retention (a.k.a. a context window) to improve fluency and accuracy within a paragraph.
Which model should you choose?
Should you pay for specialized translation models to get high quality, hundreds of languages, and low latency? Or should you go with general-purpose large language models like Gemini to benefit from a long context window and low cost, even if it means lower throughput? Generative models are excellent tools for summarization, question answering, content generation, and content editing, but today their throughput is orders of magnitude lower than that of traditional translation models, so running large translation workflows on them can significantly slow time to delivery. On the other hand, traditional translation models mostly translate sentence by sentence and generally are not flexible enough for real-time customization of the output based on context.
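To make the contrast concrete, here is a minimal sketch of translating the same text with a general-purpose model (Gemini on Vertex AI) via prompting rather than a dedicated translation endpoint. The project, model name, and prompt wording are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: prompting a general-purpose Gemini model on Vertex AI to
# translate text. Project and model identifier are assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.5-pro")  # assumed model identifier
prompt = (
    "Translate the following English text to German. "
    "Return only the translation.\n\n"
    "The quarterly report is attached."
)
response = model.generate_content(prompt)
print(response.text)
```

Prompt-based translation like this gives you flexibility (tone, glossary hints, surrounding context in the prompt) at the cost of throughput, whereas the dedicated Translation API endpoints trade that flexibility for speed and scale.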
Ultimately, the right fit will depend on your particular needs and use case. The good news is that you can find it all on Vertex AI in Google Cloud: the Vertex AI platform provides model choice, global availability, and scale, so customers can choose the model that best fits their use case, language, domain, and workflow.