Not long ago, I wrote a brief introduction to LLM costs.

In the end, I believe technology is sorted into winners and losers, and into market segments, by economic logic.

"Look at GPT-4 and Claude 3 Opus. Such amazing models exist, and you can just use them. What's the point of building a model anywhere else?" ... Every time I hear that, it sounds to me like saying: I can ride in a Ferrari, a Lamborghini, a Bentley, or a Mercedes-Benz S600, so what's the point of making any other car? As if all the rest are simply bad. (Maybe I react this way because I'm the one in charge of LLM service costs?)

GPT-4 and Claude 3 Opus... They're great. But do you know how many GPUs it takes to serve them, and how expensive that service is? And do you know how much more you pay for Korean than for English in cost terms? Are you saying this after only trying the $20 subscription? (That's just a demo! Look at API market prices.) Do you really think every model short of the very top tier is useless?

The attached picture is not a rigorous comparison, but it is worth noting: depending on the model, the cost difference is roughly 300x. I believe LLMs, too, are ultimately driven by economic logic.
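The rough scale of that spread can be sketched with a per-token price table. The prices and model names below are illustrative assumptions chosen to reproduce a ~300x gap, not quotes from any actual price sheet:

```python
# Illustrative sketch: how a ~300x cost spread between models arises.
# Prices are hypothetical assumptions (USD per 1M tokens), not real quotes.
PRICES_PER_1M = {
    "frontier-model": {"input": 15.00, "output": 75.00},
    "small-model":    {"input": 0.05,  "output": 0.25},
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request under the assumed price table."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

big = query_cost("frontier-model", 2000, 500)    # 0.0675 USD
small = query_cost("small-model", 2000, 500)     # 0.000225 USD
print(f"frontier: ${big:.4f}, small: ${small:.6f}, ratio: {big / small:.0f}x")
```

At these assumed prices the ratio comes out to exactly 300x for the same request, which is why per-token pricing, not model quality alone, decides which model a given workload can afford.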

You don't need a Korean tokenizer? You would call it severe discrimination if a Sonata, or even Shin Ramyun, cost 20% more in Korea than in the United States. Is it acceptable for a car that sells for 50 million won in the United States to sell for 150 million won in Korea at the same quality? Korean-specialized models are needed for ethical and social reasons, and by economic logic as well, quite apart from Sovereign AI.
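The car-price analogy maps directly onto tokenization: under an English-centric BPE tokenizer, Korean text often splits into more tokens than equivalent English text, so the same content costs more per API call. The inflation ratio below is a hypothetical assumption for illustration, not a measured figure:

```python
# Sketch of the tokenizer penalty. token_inflation > 1 models a language
# that tokenizes less efficiently than English under the same tokenizer;
# the 2.0 ratio used below is a hypothetical assumption.
def monthly_api_cost(tokens_per_query: int, queries: int,
                     price_per_1m: float, token_inflation: float = 1.0) -> float:
    """Monthly API spend in USD under an assumed flat per-token price."""
    return tokens_per_query * token_inflation * queries * price_per_1m / 1_000_000

english = monthly_api_cost(1000, 100_000, 10.0)                        # baseline
korean = monthly_api_cost(1000, 100_000, 10.0, token_inflation=2.0)    # same content
print(f"English: ${english:.0f}/mo, Korean: ${korean:.0f}/mo")
```

A tokenizer trained with Korean as a first-class language shrinks `token_inflation` toward 1.0, which is the economic argument for Korean-specialized models independent of quality.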

Professors and reporters say, very strongly, "We are not yet making a model like GPT-4. We are falling behind. We need to reflect!" It makes me emotional. (Of course, Naver is constantly making enormous updates internally.)

The various LLMs have no choice but to coexist, yet the important discussion of LLM cost keeps getting pushed to the back burner while other debates go on, and that frustrates me a little. So today I'm talking about LLM costs again.
LLMs are very expensive. How many companies can buy hundreds of thousands of GPUs? Even that does not lower the cost of inference. With new inference chips being released, the LLM market will of course change. I think it's time to keep paying attention, especially from an economic point of view, and to examine many things carefully.

Published by tslaaftermarket