Not long ago, I wrote a brief introduction to LLM costs. In the end, I believe economic logic is what divides a technology into winners and losers, and into market segments.

Look at GPT-4 or Claude 3 Opus. "There are such amazing models, and you can just use them. What's the point of building a model anywhere else?" Every time I hear that, it sounds to me like saying, "I can ride a Ferrari, a Lamborghini, a Bentley, a Mercedes-Benz S600. What's the point of making any other car?" As if every other car were simply bad. (Maybe that's because my head is full of LLM serving costs?)

GPT-4, Claude 3 Opus… they're great. But do you know how many GPUs it takes to serve them, and how expensive that service is? And do you know how much more money you lose serving Korean compared to English? Are you saying this after only using the $20 subscription? (That's practically a demo! Look at the API market prices.) Do you really think every model except the top-tier ones is useless?

The attached chart is not a rigorous comparison, but it is worth noting: depending on the model, the cost difference is roughly 300x. I believe the LLM market will ultimately be driven by economic logic.
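To make that gap concrete, here is a minimal sketch of how per-token API prices compound at service scale. The prices and traffic numbers below are placeholders for illustration, not actual quotes from any provider; the point is only that a per-token price ratio carries straight through to the monthly bill.

```python
# Illustrative per-1M-input-token prices (hypothetical, not real quotes).
PRICE_PER_1M_INPUT_TOKENS = {
    "frontier-model": 30.00,  # hypothetical top-tier model, $/1M tokens
    "small-model": 0.10,      # hypothetical lightweight model, $/1M tokens
}

def monthly_cost(model: str, tokens_per_day: int, days: int = 30) -> float:
    """Rough monthly input-token bill for a given traffic level."""
    return PRICE_PER_1M_INPUT_TOKENS[model] * tokens_per_day * days / 1_000_000

# The same 50M-tokens/day service, priced on each model:
big = monthly_cost("frontier-model", tokens_per_day=50_000_000)
small = monthly_cost("small-model", tokens_per_day=50_000_000)
print(f"frontier: ${big:,.0f}/mo, small: ${small:,.0f}/mo, ratio: {big / small:.0f}x")
```

With these placeholder prices the ratio comes out to the same ~300x order of magnitude as the chart: a bill of tens of thousands of dollars a month versus a few hundred, for identical traffic.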

Think you don't need a Korean-optimized tokenizer? You would call it serious discrimination if a Sonata, or even Shin Ramyun, cost 20% more in Korea than in the United States. Is it acceptable to sell a car of the same quality for 50 million won in the US and 150 million won in Korea? Korean-specialized models are needed for ethical reasons, for social reasons, and by pure economic logic as well, quite apart from any Sovereign AI argument.

Professors and reporters say, very strongly, "We still can't make a model like GPT-4. We are falling behind. We need to reflect!" Hearing that makes me emotional. (Of course, Naver is constantly making enormous updates internally.)

Various LLMs have no choice but to coexist, yet the important discussion of LLM costs keeps getting pushed to the back burner while the debate goes on elsewhere, which frustrates me a little. So today I'm talking about LLM costs again.
LLMs are very expensive. Some companies are buying hundreds of thousands of GPUs, but even that doesn't bring inference costs down. With new inference chips being released, the LLM market will of course change. I think it's time to keep paying attention, especially from an economic point of view, and to watch many things carefully.
