
Google presents Gemma, a family of open AI models

One could say, and one would not be far wrong, that it is a short step from Gemini to Gemma.

Not only from a linguistic point of view but also from a technological one. Just a few days after presenting Gemini 1.5, its new and powerful artificial intelligence model, Google has announced Gemma, a family of AI models that have a lot in common with Gemini, but with one distinguishing feature: they are open.

What do we know about Google Gemma?

Google announces Gemma

Google presented Gemma on Wednesday, February 21, with a post on its official blog; the news appeared simultaneously on the company's Italian blog.

The post describes it as “a new generation of open models from Google to assist developers and researchers in creating responsible artificial intelligence.”

Google Gemma has been available worldwide since February 21st. It is, as we said, a family of open, lightweight, cutting-edge AI models, designed with the same technology used for Gemini.

It is also similar in name, as the note explains: where Gemini is Latin for “twins”, Gemma means “precious stone”.


Gemma 2B and Gemma 7B, two open models (but not open source)

Google has released the Gemma 2B and Gemma 7B open models free of charge, with 2 and 7 billion parameters respectively.

The big difference from Gemini is precisely that Google Gemma is a family of open models. Open, though, not open source.

An open-source model would allow developers to access the code, modify it, and redistribute the modified version.

An open model, on the other hand, only allows partial intervention: refining the software to make it more precise and effective in a specific domain. Simplifying greatly, this means adjusting the model's weights, the numerical parameters learned during training.
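As a loose illustration of what “adjusting the weights” means, here is a toy gradient-descent step on a one-parameter model (a hypothetical sketch for intuition only, not Gemma's actual fine-tuning code):

```python
# Toy illustration of fine-tuning: nudging a model's weights so its
# predictions better fit new, task-specific examples.
# (Hypothetical minimal example; real fine-tuning updates billions of weights.)

def fine_tune_step(weight, examples, lr=0.1):
    """One gradient-descent step for a 1-parameter model y = weight * x,
    using mean squared error over the given (x, y) examples."""
    grad = 0.0
    for x, y in examples:
        pred = weight * x
        grad += 2 * (pred - y) * x  # derivative of (w*x - y)^2 w.r.t. w
    grad /= len(examples)
    return weight - lr * grad

# The "pre-trained" weight is 1.0; the new task's data follows y = 2x.
w = 1.0
data = [(1.0, 2.0), (2.0, 4.0)]
for _ in range(50):
    w = fine_tune_step(w, data)
print(round(w, 3))  # converges toward 2.0
```

Fine-tuning a large language model follows the same principle at vastly greater scale: small weight updates driven by examples from the target domain.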

Optimized and pre-trained versions

We said that Google Gemma models can be refined for specific tasks through further training.

The company explains that “this process is called model optimization, and while this technique improves a model’s ability to perform targeted tasks, it can also make the model worse at other tasks. For this reason, Gemma models are available in both instruction-optimized and pre-trained versions.”

In the optimized version, the models are trained on human language interactions and can respond to conversational input like a chatbot. The pre-trained versions “are not trained on specific tasks or instructions beyond Gemma’s main data training set. You shouldn’t deploy these models without doing some optimizations.”

Areas of use

In the introductory guide we read that Gemma 2B, the lighter model, is recommended for laptops and mobile devices, while Gemma 7B is intended for desktop computers and small servers.

“Gemma models are available in different sizes to allow you to build generative AI solutions based on the compute resources available, the functionality you need and where you want to run them. If you don’t know where to start, try the 2 billion parameter size for lower resource requirements and greater flexibility in model deployment.”
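One practical way to match a model size to the available compute is a rough memory estimate. The figures below are back-of-the-envelope arithmetic, not official requirements:

```python
# Back-of-the-envelope memory estimate for choosing a model size:
# a parameter stored in 16-bit precision takes 2 bytes.
# (Illustrative arithmetic only; real memory use also includes
# activations, the KV cache, and framework overhead.)

def weights_gib(n_params_billions, bytes_per_param=2):
    """Approximate size of the model weights alone, in GiB."""
    return n_params_billions * 1e9 * bytes_per_param / 2**30

print(f"Gemma 2B weights: ~{weights_gib(2):.1f} GiB")
print(f"Gemma 7B weights: ~{weights_gib(7):.1f} GiB")
```

Roughly 4 GiB for the 2B model versus about 13 GiB for the 7B model in half precision, which is why the smaller size suits laptops and mobile devices while the larger one targets desktops and small servers.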

The toolkit for responsible use

To support its open release, Google Gemma ships with a toolkit that “provides resources for applying best practices for the responsible use of open models.” In short, for safe and responsible use.

Also on the safety front, personal information was filtered out of the data used to train Gemma, and extensive human feedback was used to steer the models toward responsible, non-offensive responses.

“To understand and reduce the risk profile of Gemma models, we conducted extensive evaluations including manual red-teaming, automated adversarial testing, and model capability assessments for dangerous tasks.”

Who was Google Gemma designed for?

On the Google blog we read that Gemma “is designed for the open community of developers and researchers fueling AI innovation. You can start working with Gemma today using free access to Kaggle” and Hugging Face.

Gemma models are optimized for Nvidia GPUs and can be integrated into the Google Vertex AI platform, “with a range of optimization options and one-click deployment using built-in inference optimizations.”

Walker Ronnie is a tech writer who keeps you informed on the latest developments in the world of technology. With a keen interest in all things tech-related, Walker shares insights and updates on new gadgets, innovative advancements, and digital trends. Stay connected with Walker to stay ahead in the ever-evolving world of technology.