Not long ago, I wrote a brief introduction to LLM costs.

In the end, I believe it is economic logic that sorts a technology into winners and losers, and into its market segments.

Look at GPT-4 and Claude 3 Opus. "There are already such amazing models you can just use, so what's the point of building a model anywhere else?" Every time I hear that, it sounds to me like saying: since you can ride a Ferrari, a Lamborghini, a Bentley, or a Mercedes-Benz S600, what's the point of making any other car? As if every other car is simply bad. (Maybe I react this way because I'm the one responsible for LLM service costs?)

GPT-4, Claude 3 Opus… they are great. But do you know how many GPUs it takes to serve them, and how expensive that service is? Do you know how much more you pay for Korean than for English? Are you saying this after using nothing but the $20 subscription? (That's just a demo! Look at API market prices.) Do you really think every model outside the very top tier is useless?

The attached chart is not a rigorous comparison, but it is worth noting: depending on the model, cost can differ by roughly 300x. I believe LLMs, too, will ultimately be driven by economic logic.
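To make that spread concrete, here is a minimal sketch of how a ~300x cost gap plays out at scale. The model names and per-token prices below are hypothetical placeholders, not the actual figures from the chart:

```python
# Illustrative only: hypothetical per-token API prices showing how a
# ~300x cost spread between models compounds over real traffic volume.
prices_per_1m_tokens = {
    "frontier-model": 60.00,  # hypothetical flagship, USD per 1M tokens
    "small-model": 0.20,      # hypothetical lightweight model
}

tokens = 10_000_000  # say, one month of traffic

# Monthly cost per model at that volume
costs = {name: p * tokens / 1_000_000 for name, p in prices_per_1m_tokens.items()}
spread = costs["frontier-model"] / costs["small-model"]

print(costs)             # {'frontier-model': 600.0, 'small-model': 2.0}
print(f"{spread:.0f}x")  # 300x
```

At 10M tokens the absolute gap is only hundreds of dollars, but the same ratio applied to billions of tokens per month is the difference between a viable service and an unviable one.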

You think a Korean tokenizer isn't needed? You would call it severe discrimination if a Sonata, or even Shin Ramyun, cost 20% more in Korea than in the United States. Is it acceptable to sell a car that goes for 50 million won in the US for 150 million won in Korea? Korean-specialized models are needed for ethical and social reasons, and by economic logic as well, quite apart from Sovereign AI.
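The tokenizer point is quite concrete: English-centric tokenizers typically emit far more tokens per character for Korean than for English, so the same amount of text bills several times over. A minimal sketch, assuming illustrative fertility rates (tokens per character) and a hypothetical price, not measured values:

```python
# Illustrative sketch: how tokenizer "fertility" (tokens per character)
# turns into an API cost gap between Korean and English.
# Both the fertility values and the price below are assumptions.

fertility = {
    "english": 0.25,  # ~4 characters per token on an English-centric tokenizer
    "korean": 1.0,    # worst case: roughly one token per Hangul character
}

price_per_token = 30 / 1_000_000  # hypothetical: $30 per 1M tokens

def cost(text_chars: int, language: str) -> float:
    """API cost for a text of text_chars characters in the given language."""
    tokens = text_chars * fertility[language]
    return tokens * price_per_token

chars = 100_000  # same amount of text in both languages
en, ko = cost(chars, "english"), cost(chars, "korean")
print(f"English: ${en:.2f}, Korean: ${ko:.2f}, ratio: {ko / en:.0f}x")
```

Under these assumptions a Korean user pays 4x for the same content, which is exactly the "20% more for a Sonata" complaint, except worse. A tokenizer trained with Korean well represented shrinks the fertility gap and the bill with it.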

Professors and reporters keep saying, very forcefully, "We still can't build a model like GPT-4. We are falling behind. We must reflect on this!" Hearing it made me emotional. (Of course, Naver is constantly making enormous updates internally.)

Various LLMs will inevitably coexist, but the crucial discussion of LLM cost keeps getting pushed to the back burner while the debate happens elsewhere, and that frustrates me. So today I'm talking about LLM cost again.
LLMs are very expensive. How many companies can buy hundreds of thousands of GPUs? Even then, the cost of inference does not come down. New inference chips will, of course, change the LLM market when they arrive. Especially from an economic point of view, I think it's time to keep paying attention and examine many things carefully.

Published by tslaaftermarket