JAKARTA - Google DeepMind has announced the launch of Gemma 2, an open artificial intelligence (AI) model, for researchers and developers around the world.
The tech giant says Gemma 2, available in 9-billion-parameter (9B) and 27-billion-parameter (27B) sizes, delivers higher performance and more efficient inference than the first generation.
Built specifically for developers and researchers, Gemma 2 is not only more capable but also designed to integrate more easily into existing workflows:
Open and accessible: Like the original Gemma models, Gemma 2 is available under the commercially friendly Gemma license, giving developers and researchers the ability to share and commercialize their innovations.
Broad framework compatibility: Gemma 2 can easily be used with the tools and workflows of your choice thanks to its compatibility with major AI frameworks such as Hugging Face Transformers, as well as JAX, PyTorch, and TensorFlow via native Keras 3.0, vLLM, Gemma.cpp, Llama.cpp, and Ollama (a minimal usage sketch follows this list).
Effortless deployment: Starting next month, Google Cloud customers will be able to easily deploy and manage Gemma 2 on Vertex AI.
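As an illustration of the Hugging Face Transformers route mentioned above, the sketch below shows how a Gemma 2 checkpoint could be loaded and prompted. The model ID "google/gemma-2-9b-it", the prompt, and the generation settings are assumptions chosen for the example, not details from the announcement.

```python
# Minimal sketch: loading a Gemma 2 checkpoint with Hugging Face Transformers.
# The model ID and generation parameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed instruction-tuned 9B checkpoint

# Download the tokenizer and model weights, placing the model on available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tokenize a prompt, generate a short continuation, and print the decoded text.
inputs = tokenizer("Explain what an open-weights model is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can also be served through the other listed runtimes (for example vLLM or Ollama) without changing the model itself; only the loading and serving code differs.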
With the launch of Gemma 2, Google is again demonstrating its commitment to providing the resources developers and researchers need to build and deploy AI responsibly.
"While training Gemma 2, we are following our strict internal safety process, filtering pre-training data and conducting rigorous testing and evaluation of a comprehensive set of metrics to identify and mitigate potential biases and risks," Google explained.