
Google introduces TranslateGemma, powerful open translation models built on Gemma 3

TranslateGemma is a new open translation model suite built on Gemma 3, supporting high-quality translation across 55 languages. The models deliver stronger accuracy while using fewer parameters, with smaller versions outperforming larger baselines.

Designed for mobile, local, and cloud use, TranslateGemma aims to make advanced AI translation more accessible worldwide.
Updated on: Jan 16, 2026 | 12:37 PM

New Delhi: Google has released TranslateGemma, a new suite of open translation models intended to push the frontiers of multilingual communication. Built on the Gemma 3 architecture, the models aim to deliver high-quality translation across 55 languages while remaining efficient enough to run on anything from a smartphone to a cloud GPU.

The release marks a significant milestone for open AI models, as TranslateGemma combines high accuracy with a smaller footprint. By distilling the performance of much larger systems into lighter models, the developers aim to make reliable translation available to researchers, developers, and everyday users across the globe.


Smaller models, bigger performance

One of the most striking findings from internal testing is efficiency. The 12B TranslateGemma model scores higher on the WMT24++ benchmark than the larger Gemma 3 27B baseline, despite having fewer than half the parameters. The 4B variant, small enough for mobile and edge devices, delivers results comparable to the older 12B baseline.

Across high- and low-resource tests in 55 languages, the models show consistently lower error rates than previous Gemma models, which translates into better results at no additional computing cost.
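For readers who want a feel for how such benchmark-style comparisons are made, here is a minimal sketch of scoring two systems' outputs against reference translations. The data is hypothetical, and chrF from the sacrebleu library stands in for the official WMT24++ metrics, which the article does not specify.

```python
# Minimal sketch: compare two models' translations with a corpus-level
# automatic metric. Sentences below are hypothetical toy data.
from sacrebleu.metrics import CHRF

references = [
    "The weather is nice today.",
    "She bought three apples at the market.",
]
outputs_smaller_model = [
    "The weather is nice today.",
    "She bought three apples in the market.",
]
outputs_larger_baseline = [
    "Today the weather is good.",
    "She purchased three apples at a market.",
]

chrf = CHRF()
# corpus_score takes the hypotheses and a list of reference streams.
score_small = chrf.corpus_score(outputs_smaller_model, [references])
score_large = chrf.corpus_score(outputs_larger_baseline, [references])

print(f"smaller model chrF:  {score_small.score:.1f}")
print(f"larger baseline chrF: {score_large.score:.1f}")
```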

Built using Gemini Intelligence

TranslateGemma gets its capabilities from a two-phase training programme that distils knowledge from Google's larger Gemini systems. The models were first optimised on large volumes of human and synthetic parallel translation data, then refined with reinforcement learning guided by state-of-the-art evaluation metrics to improve fluency and contextual accuracy.
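The article does not name the metrics or the reinforcement learning algorithm involved. Purely as an illustration of the second phase's core idea, the sketch below uses sentence-level chrF as a stand-in reward signal for sampled translations; an actual pipeline would likely rely on stronger learned metrics.

```python
# Illustrative only: an automatic quality metric used as a reward signal.
# chrF here is a placeholder for whatever metric the real system uses.
from sacrebleu.metrics import CHRF

chrf = CHRF()

def translation_reward(candidate: str, reference: str) -> float:
    """Score one sampled translation against a reference, scaled to 0-1."""
    return chrf.sentence_score(candidate, [reference]).score / 100.0

# Hypothetical sampled candidates for one source sentence.
reference = "Das Wetter ist heute schön."
candidates = [
    "Das Wetter ist heute schön.",
    "Heute ist das Wetter gut.",
    "Das Wetter war gestern schlecht.",
]

# In an RL loop, rewards like these would push the model toward
# higher-scoring translations.
for cand in candidates:
    print(f"{translation_reward(cand, reference):.2f}  {cand}")
```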

Broad language coverage

Beyond the 55 languages covered in benchmarking, TranslateGemma has been trained on approximately 500 additional language pairs. Although not all of them have been benchmarked yet, the extended coverage makes the models a solid foundation for further fine-tuning, particularly for underserved languages.

The models also inherit the multimodal capabilities of Gemma 3. Preliminary demonstrations indicate improved translation of text within images, even without special multimodal training. Depending on the model size, TranslateGemma can run on a phone, a laptop, or a single high-end GPU in the cloud.

TranslateGemma is now available to researchers and developers. The release is intended to lower language barriers and encourage community-driven innovation in translation tools worldwide.
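Developers who want to experiment could try loading a checkpoint through the Hugging Face transformers library, as with other open Gemma models. The model ID and prompt format below are hypothetical placeholders; the official release should be consulted for the real checkpoint names and recommended prompting.

```python
# Minimal sketch of trying an open Gemma-based translation model locally,
# assuming the checkpoints load through the standard transformers pipeline.
from transformers import pipeline

MODEL_ID = "google/translategemma-4b-it"  # hypothetical placeholder name

translator = pipeline("text-generation", model=MODEL_ID)

# Assumed chat-style instruction; the actual prompt format may differ.
messages = [
    {"role": "user",
     "content": "Translate the following sentence from English to Hindi: "
                "Good morning, how are you?"}
]

result = translator(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last turn is the reply.
print(result[0]["generated_text"][-1]["content"])
```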
