Google DeepMind has finally unveiled the performance of Gemini 3.0, and there is no way to describe it other than overwhelming.
Above all, the non-linear leap from Gemini 2.5 Pro to 3.0 Pro has dashed the recent skepticism that the era of pre-training is over.
The numbers now make it clear that claims that foundation models had already hit their limits were premature.
The benchmarks released by Google DeepMind are strikingly clear.
This is not a simple improvement. It flatly overturns the conventional wisdom that performance would no longer increase even with larger models and more data, and it demonstrates once again how much potential remains to be unlocked in the pre-training stage.
In other words, scaling is still valid, there is vast room for algorithmic improvement, and pre-training is not over yet.
With the announcement of Gemini 3.0, we may be at the beginning of another explosive J-curve.
Source: https://blog.google/products/gemini/gemini-3/