Here’s What to Know About Google’s New Gemma Model


  • Google has released Gemma, its first set of open AI models.
  • Gemma is lightweight and runs on laptops, making AI more accessible.
  • Google’s openness with Gemma could accelerate AI development through collaboration.

The AI competition between Google and OpenAI has intensified just two months into 2024. On February 15th, OpenAI announced Sora, widely regarded as the most capable AI video generator yet. On Wednesday, Google followed with a new product called Gemma.

Gemma is not a single product but a family of two large language models. In Google’s own words, “Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models.”

According to the announcement, Gemma is now available to developers worldwide. The models can be used to build chatbots and most other applications that LLMs support, with some unique advantages.

Here are some details to know about Gemma.

1. Gemma Open Models

Gemma is Google’s first major model built for the open community of developers and researchers. Google has mostly upheld a closed-access approach with its AI products, including Gemini, the company’s most advanced model, which powers products like Bard (now also called Gemini).

Google’s closed approach has drawn criticism from open-source proponents like Meta chief scientist Yann LeCun, who argued that DeepMind, Google’s AI development division, is “becoming less and less open. […] I think it’s going to slow down progress in the entire field. So I’m not too happy about this.”

As an “open model,” Gemma does not come with its source code or training data; rather, the model’s “weights,” or pre-trained parameters, will be made available, Forbes reported, citing Google spokesperson Jane Park.

2. Gemma Model Weights Come in Two Sizes

Gemma comes in two sizes, Gemma 2B and Gemma 7B, with roughly two billion and seven billion parameters respectively. Per the announcement, each size is released with pre-trained and instruction-tuned variants. Parameters are the learned weights within the model that allow it to capture nuances and relationships in the data.

The higher the number of parameters a model has, the better it can learn complex patterns and perform intricate tasks, like translating languages, writing different kinds of creative text formats, or understanding complex questions.
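Parameter count also drives hardware requirements, which is why the 2B/7B split matters for running Gemma locally. As a back-of-the-envelope sketch (the function name and nominal counts below are illustrative assumptions, not figures from Google's announcement):

```python
# Illustrative only: approximate memory needed just to hold model weights,
# ignoring activations, KV cache, and runtime overhead.
def weights_size_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in gigabytes.

    bytes_per_param=2 assumes 16-bit (half-precision) weights;
    use 4 for 32-bit floats or 1 for 8-bit quantized weights.
    """
    return num_params * bytes_per_param / 1e9

# Nominal parameter counts for the two Gemma sizes (assumed round numbers).
print(f"Gemma 2B: ~{weights_size_gb(2_000_000_000):.0f} GB at fp16")
print(f"Gemma 7B: ~{weights_size_gb(7_000_000_000):.0f} GB at fp16")
```

Under these assumptions, the 2B model's weights fit in roughly 4 GB at half precision, which is why a model this size can plausibly run on an ordinary laptop, while larger models typically need workstation or cloud hardware.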

3. Gemma is Laptop-Friendly

Gemma, while a powerful language model, is specifically designed to run efficiently in less demanding environments than many other large language models. Google said the model is cross-compatible and can run on multiple platforms, including laptops, desktops, IoT, mobile, and cloud, enabling broadly accessible AI capabilities.

“Pre-trained and instruction-tuned Gemma models can run on your laptop, workstation, or Google Cloud with easy deployment on Vertex AI and Google Kubernetes Engine (GKE),” the announcement reads. 

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Ibiam Wayas

Ibiam is an optimistic crypto journalist. Five years from now, he sees himself establishing a unique crypto media outlet that will bridge the gap between the crypto world and the general public. He loves to associate with like-minded individuals and collaborate with them on similar projects. He spends much of his time honing his writing and critical thinking skills.
