Google releases artificial intelligence model Gemini, claiming to surpass GPT-4
Gemini can handle text, audio, and video. Image source: Google
Google announced a new artificial intelligence model called Gemini on the 6th, claiming that it outperformed both OpenAI’s GPT-4 and “expert-level” humans across a series of intelligence tests.
Gemini comes in three versions for different applications: Nano, Pro, and Ultra. Google declined to answer questions about the size of Pro and Ultra, the number of parameters they contain, or the size and sources of their training data.
The smallest version, Nano, is designed to run on smartphones and actually comes in two models: one for slower phones, with 1.8 billion parameters, and one for more powerful phones, with 3.25 billion parameters.
Google claims that the mid-range Pro version of Gemini outperformed several other models, while the more powerful Ultra exceeds the capabilities of all existing AI models. Ultra scores 90% on the industry-standard MMLU benchmark, whereas “expert” humans are expected to achieve 89.8%.
It is the first time an artificial intelligence has beaten humans on this test, and the highest score among existing models. The benchmark covers a range of difficult questions, including logical fallacies, everyday ethical dilemmas, medical problems, economics, and geography.
On the same test, GPT-4 scored 87%, Llama 2 scored 68%, and Anthropic’s Claude 2 scored 78.5%. Gemini beat all of these models on 8 out of 9 other common benchmarks.
Last year, AlphaCode, released by Google DeepMind, could beat 50% of human developers; the newly released Gemini claims to outperform 85% of human programmers.
Eric Collins of Google DeepMind said Gemini is “state-of-the-art in almost every area.”