AI hardware researchers have made another breakthrough.
A study published in the journal Nature Computational Science proposes a new way of computing that could let large language models (LLMs) run up to 100 times faster and 10,000 times more energy-efficiently than they do on today's hardware.
The core idea is simple. Today's GPUs must constantly shuttle data back and forth between the compute units and memory. It is like reading a book one sentence at a time, walking to the bookcase for each sentence and putting the book back afterward. The proposed method instead performs storage and computation in the same place. This one change cuts latency and drastically reduces power consumption.
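To make the contrast concrete, here is a minimal toy sketch in Python. It is an illustration only, not the paper's actual hardware or code: it computes attention once as a standard digital baseline, then again as an "in-memory" version in which the keys and values conceptually stay put in the analog array and Gaussian noise stands in for device non-idealities. The dimensions, noise level, and function names are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, seq_len = 64, 128
K = rng.standard_normal((seq_len, d_model))   # keys, stored in the analog array
V = rng.standard_normal((seq_len, d_model))   # values, stored likewise
q = rng.standard_normal(d_model)              # incoming query

def attention_digital(q, K, V):
    """Digital baseline: every element of K and V must be moved
    from memory to the compute unit before it can be used."""
    scores = K @ q / np.sqrt(d_model)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def attention_in_memory(q, K, V, noise_std=0.05):
    """In-memory sketch: the dot products happen where K and V are stored,
    so no data movement is modeled. Added Gaussian noise mimics analog
    non-idealities (device variation, read noise)."""
    scores = K @ q / np.sqrt(d_model)
    scores += noise_std * rng.standard_normal(scores.shape)   # noisy analog stage 1
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    out = weights @ V
    out += noise_std * rng.standard_normal(out.shape)         # noisy analog stage 2
    return out

exact = attention_digital(q, K, V)
approx = attention_in_memory(q, K, V)
print("relative error of noisy in-memory result:",
      np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The point of the sketch is the trade-off: the analog version gives a slightly noisy answer, but since the weights never leave the array, the expensive memory round trip disappears.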
More surprisingly, the researchers have already demonstrated GPT-2-level performance on this hardware without retraining the model from scratch.
If this technology becomes a reality, models like GPT-5 could soon run in the palm of your hand, offline, and on far less energy than today, instead of in a data center.
Technology is moving fast, almost too fast.
Source: https://www.nature.com/articles/s43588-025-00854-1