Llama 3 LLM Model
Meta has unveiled its Llama 3 models in 8B, 70B, and 400B sizes. What sets them apart from OpenAI's models is that the weights are openly available — that is the difference. In "8B," the B is an abbreviation for billion parameters. LLM inference is usually run with 4-bit quantized weights, so:
8 billion parameters × 4 bits ≈ 4 GB of memory required
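The weights-only arithmetic can be sketched in a few lines. This is an estimate for the quantized weights alone; KV cache and activations add overhead on top:

```python
# Rough weight-memory estimate for a quantized LLM.
# bytes = parameters * bits_per_weight / 8
def weight_memory_gb(params_billion: float, bits: int = 4) -> float:
    """Memory for the weights alone, in GB, at the given bit width."""
    return params_billion * 1e9 * bits / 8 / 1e9

for size in (8, 70, 400):
    print(f"{size}B at 4-bit: ~{weight_memory_gb(size):.0f} GB")
# 8B -> ~4 GB, 70B -> ~35 GB, 400B -> ~200 GB
```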
Then there is the 400B model that many people are interested in. At 4-bit, its weights alone take roughly 200 GB, and with KV cache and runtime overhead you need more. I think 400B will run properly with 5-6 NVLink-connected H100s (80 GB each). The H200 offers 141 GB at 4.8 TB/s, so connecting 3-4 H200s should also run the 400B model.
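The GPU counts above can be reproduced with a rough rule of thumb. The 2× overhead factor here is my assumption, budgeting for KV cache, activations, and runtime buffers on top of the ~200 GB of 4-bit weights:

```python
import math

def gpus_needed(model_gb: float, gpu_gb: float, overhead: float = 2.0) -> int:
    """Estimate GPU count: model memory times an overhead factor,
    divided across GPUs of the given capacity (assumed heuristic)."""
    return math.ceil(model_gb * overhead / gpu_gb)

# 400B at 4-bit is ~200 GB of weights
print(gpus_needed(200, 80))   # H100, 80 GB  -> 5
print(gpus_needed(200, 141))  # H200, 141 GB -> 3
```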
In other words, if the 400B model turns out faster and more accurate than GPT-4, any mid-sized company could operate the service itself. Whether on the cloud or on-premise (its own servers), that is not a prohibitive burden.
A lot of people are reluctant to upload their code to OpenAI, which collects data, while using GPT-4. I upload some of mine for debugging purposes, but I don't feel comfortable handing my source code to others.
Fields such as defense, public institutions, government, public corporations, large and mid-sized companies, healthcare, patents, and R&D need a model that keeps data in-house when GPT-class capability is required.
How much data must be accumulating at OpenAI every day? Meta, it is said, opened up in the end — it did not open the training source code, but it opened the model and the environment to operate it yourself. Where else has anything been opened for on-premise use like this?
Someone else might come along and build a fully open-source project, the way Linux did. Linux would not have appeared without Microsoft Windows. The ability of the United States and Europe to take their share while opening up step by step is impressive.
It’s a complicated world.
Perhaps servers or edge chips with built-in NPUs for 4-bit LLM and CNN inference, providing 512 GB of GDDR7 and 1 TB/s of bandwidth on a 256-bit bus, will become popular. If Llama 3 400B can be ported to a single chip and a single module, there will be demand.
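The 1 TB/s figure on a 256-bit bus is plausible arithmetic: peak bandwidth is the per-pin data rate times the bus width. The 32 Gb/s per-pin rate below is an assumed early-GDDR7 figure, not a spec from this article:

```python
# Peak memory bandwidth: per-pin data rate (Gb/s) * bus width (bits) / 8
def peak_bandwidth_gbs(pin_rate_gbps: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a memory bus (simple upper bound)."""
    return pin_rate_gbps * bus_bits / 8

# Assuming 32 Gb/s per pin (early GDDR7 class) on a 256-bit bus:
print(peak_bandwidth_gbs(32, 256))  # -> 1024.0 GB/s, i.e. ~1 TB/s
```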
These days our company has been trying to run the latest YOLO models and an LLM on our NPU at the same time. Juggling several tasks, I have lost the motivation to reach out and do joint research. Who wouldn't want a VC to fund them? If I had kept the revenue from overseas exports in cash instead of doing national projects, I could have bought a few buildings. So why do I keep running R&D at a deficit?
Man does not live by bread alone, but by the word of God. LLM stands for large language model. The world was created through the Word; the Word is the Logos, translated as wisdom, and it is said that there is life in it — and that life is the light of men.
Perhaps the LLM is a principle that models this world of the Word and shows it to us. It is genuinely fun to study the Bible with GPT-4. If there is a chance, I will show it in another column.
People will not live by buildings, but with artificial intelligence and LLMs that give them the light of welfare and convenience.