Not long ago, I wrote a brief introduction to LLM costs. In the end, I believe technology gets sorted into winners and losers, and into market segments, by economic logic.
"Look at GPT-4 and Claude 3 Opus. Such amazing models already exist, and you can just use them. What's the point of building a model anywhere else?" … Every time I hear that, it sounds like: "I can drive a Ferrari, a Lamborghini, a Bentley, a Mercedes-Benz S600. What's the point of making any other cars?" As if every other car were simply bad. (Maybe I react this way because I'm the one responsible for LLM service costs?)
GPT-4, Claude 3 Opus… They're great. But do you know how many GPUs it takes to serve them, and how expensive that service is? And do you know how much more you pay for Korean than for English in token costs? Are you saying this after using nothing but a $20 subscription? (That's practically a demo! Look at API market prices.) Do you really think every Korean model short of the top tier is useless?
The attached figure is not a rigorous comparison, but it's worth noting: depending on the model, the cost difference is roughly 300x. I believe the LLM market will ultimately be driven by economic logic.
You don't need a Korean tokenizer? You would call it discrimination if a Sonata, or even Shin Ramyun, cost 20% more in Korea than in the United States. Is it acceptable for a car of the same quality that sells for 50 million won in the United States to sell for 150 million won in Korea? Korean-specialized models are needed for ethical and social reasons, and above all for economic logic, quite apart from Sovereign AI.
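The cost gap comes from tokenizer efficiency: English-centric BPE vocabularies tend to split Korean into more, shorter tokens, and API billing is per token. A minimal sketch of that arithmetic, where the token counts and per-token price are illustrative assumptions, not measured values for any specific model:

```python
# Illustrative only: assumed price and token counts, not real model figures.
PRICE_PER_1K_TOKENS = 0.03  # assumed input price in USD

# Suppose the same sentence tokenizes to 20 tokens in English but
# 50 tokens in Korean under an English-centric BPE vocabulary.
english_tokens = 20
korean_tokens = 50

def cost(tokens: int, price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """API cost in USD for a request of the given token count."""
    return tokens / 1000 * price_per_1k

ratio = cost(korean_tokens) / cost(english_tokens)
print(f"English: ${cost(english_tokens):.4f}")
print(f"Korean:  ${cost(korean_tokens):.4f}")
print(f"Korean costs {ratio:.1f}x more for the same content")
```

Under these assumed numbers, identical content costs 2.5x more in Korean; a tokenizer trained with Korean in mind shrinks that multiplier directly, which is the economic case for Korean-specialized models.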
Professors and reporters say, very forcefully, "We still haven't built a model like GPT-4. We are falling behind. We need to reflect!" It makes me emotional. (Of course, Naver is constantly making enormous updates internally.)
Various LLMs will have no choice but to coexist, yet the crucial discussion of LLM cost keeps getting pushed to the back burner while other debates take center stage. That frustrates me a little today… which is why I'm talking about LLM costs again.
LLMs are very expensive. How many companies can buy hundreds of thousands of GPUs? Even that doesn't bring inference costs down. Of course, as new inference chips are released, the LLM market will change. I think it's time to keep paying close attention, especially from an economic point of view.