Google Launches Gemini 2.0 Preview: More Advanced AI Model
JAKARTA - Almost a week after OpenAI made its o1 model available to the public, today it is Google's turn to announce a preview of its next-generation model, Gemini 2.0.
Gemini 2.0 arrives one year after Google introduced Gemini 1.0 in December 2023. Alphabet CEO Sundar Pichai said that their first model made great progress in understanding information across text, video, images, audio, and code, and in processing much more of it.
"We are proud to introduce Gemini 2.0, our most sophisticated model so far. With new advances in multimodality and the use of original tools, this will allow us to build new AI agents that bring us closer to our vision of a universal assistant," Pichai wrote in Google's official blog.
With this launch, Google is bringing 2.0 to developers and trusted testers today, and will soon roll its newest models into its own products, starting with Gemini and Search.
In addition, Google is also launching a new feature for Gemini Advanced subscribers called Deep Research. It uses advanced reasoning and long-context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf.
"If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful. I can't wait to see what this next era brings," he continued.
Meanwhile, Google DeepMind CEO Demis Hassabis and Google DeepMind CTO Koray Kavukcuoglu emphasized that Gemini 2.0 Flash does not just support multimodal inputs such as images, video, and audio.
Gemini 2.0 Flash also supports multimodal outputs, such as natively generated images mixed with text and steerable multilingual text-to-speech (TTS) audio.
"Even Flash can also call tools such as Google Search, code execution, and functions determined by third-party users in a native manner," he concluded.