-
Why Nvidia Soars 10%
My estimate is that Llama 3 8B's performance is similar to ChatGPT 3.5's, and the 70B and 400B variants are still to come. OpenAI and Microsoft each have 150,000 GPUs and are only barely at the level of GPT-4. If OpenAI earns about 2 trillion won from its paid service, not to mention…
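A back-of-the-envelope sketch of the scale this comparison is gesturing at. Only the 150,000-GPU count and the roughly 2 trillion won of paid-service revenue come from the text above; the per-GPU price and the KRW/USD rate are my illustrative assumptions.

```python
# Rough capex-vs-revenue sanity check for the figures quoted above.
# Assumed (not from the post): H100-class unit price, exchange rate.

GPUS_PER_LAB = 150_000           # per the post: OpenAI / Microsoft scale
PRICE_PER_GPU_USD = 30_000       # assumption: H100-class street price
KRW_PER_USD = 1_350              # assumption: rough exchange rate

capex_usd = GPUS_PER_LAB * PRICE_PER_GPU_USD
revenue_usd = 2_000_000_000_000 / KRW_PER_USD  # ~2 trillion KRW paid service

print(f"GPU capex per lab:    ${capex_usd / 1e9:.1f}B")   # ~$4.5B
print(f"Paid-service revenue: ${revenue_usd / 1e9:.1f}B")  # ~$1.5B
```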
-
What E2E Needs: 1. AI Computing 2. Data 3. Manpower 4. Platform (Vehicle, Chipset, etc.)
Currently, legacy automakers have completed none of these four prerequisites for starting E2E. Even if you bring in NVIDIA's autonomous-driving chipset and use it, there is still a long way to go before full-scale SDV. Even for…
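A minimal sketch of the four-prerequisite framing as a checklist; the field names mirror the list above, and the example values are hypothetical.

```python
# The post's claim: all four prerequisites are needed before E2E is viable.
from dataclasses import dataclass, fields

@dataclass
class E2EReadiness:
    ai_computing: bool  # 1. AI computing (training infrastructure)
    data: bool          # 2. Data (fleet-scale driving data)
    manpower: bool      # 3. Manpower (ML/autonomy engineers)
    platform: bool      # 4. Platform (vehicle, chipset, etc.)

def ready_for_e2e(r: E2EReadiness) -> bool:
    # Per the post, all four are required; any single gap blocks E2E.
    return all(getattr(r, f.name) for f in fields(r))

# Hypothetical legacy automaker: per the post, none of the four is complete,
# and buying NVIDIA's chipset would cover only part of "platform".
legacy = E2EReadiness(ai_computing=False, data=False, manpower=False, platform=False)
print(ready_for_e2e(legacy))  # False
```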
-
Under what conditions can the cost of the product be lowered?
Comparison of renewable energy and nuclear power plants… [Jessi Peltan] Many people compare small modular reactors (SMRs) to solar and batteries. Solar panels have been mass-produced over and over, and that is what made them affordable. But solar panels didn't become cheap by being broken into smaller pieces…
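The logic here is the learning-curve effect (Wright's law): cost falls by a fixed fraction with each doubling of cumulative production, so the answer to the title question is "when you get many doublings of identical units." A small sketch, with an assumed 20% learning rate and illustrative starting figures:

```python
# Wright's law: each doubling of cumulative production cuts unit cost by a
# fixed learning rate. Figures below are assumptions for illustration.
import math

def wrights_law_cost(initial_cost: float, initial_units: float,
                     cumulative_units: float, learning_rate: float = 0.20) -> float:
    """Unit cost after production grows from initial_units to cumulative_units."""
    doublings = math.log2(cumulative_units / initial_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Solar-style trajectory: ~20 doublings of identical units.
print(wrights_law_cost(100.0, 1.0, 1_000_000.0))  # ~1.2, i.e. ~99% cheaper

# An SMR fleet gets far fewer doublings, so cost falls far less.
print(wrights_law_cost(100.0, 1.0, 16.0))          # ~40.96 after 4 doublings
```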
-
FSD Engineer: Summary of Today's Space
1) For equipment, it has been the era of HBM so far. Until now, equipment stocks have been split to extremes depending on whether they are AI-related or not. Hanmi Semiconductor and ARM are doing well, but semiconductor-equipment stocks unrelated to HBM are still…
-
Unfortunately, it has a short context length, but there are techniques to overcome this problem, and follow-up models will support a much longer context, so it is worth looking forward to.
With the release of the Llama 3 model, I tried asking it questions in Korean, and wow, the answers are pretty plausible. Of course, only 5% of the training data is multilingual, so we can't expect performance on par with English, but the future looks bright. Unfortunately, it has a short context length, but there are some techniques to overcome…
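The post doesn't name which context-extension techniques it has in mind; one commonly cited option is RoPE position interpolation, where positions beyond the trained window are rescaled into it. A minimal sketch, assuming an 8K trained window and a 32K target (not Llama 3's actual implementation):

```python
# RoPE position interpolation sketch: rescale positions so a longer input
# maps back into the position range the model was trained on. The window
# sizes are assumptions for illustration.

def rope_angles(position: int, dim: int, base: float = 10_000.0,
                scale: float = 1.0) -> list[float]:
    """Rotary-embedding angles for one position; scale > 1 interpolates."""
    return [(position / scale) / base ** (2 * i / dim) for i in range(dim // 2)]

trained_ctx, target_ctx = 8_192, 32_768
scale = target_ctx / trained_ctx  # 4x interpolation

# Position 20,000 lies outside the trained window at scale 1, but once
# interpolated it maps to the in-range effective position 5,000.
print(rope_angles(20_000, dim=8, scale=scale)[0])  # == angle at position 5,000
print(rope_angles(5_000, dim=8)[0])                # same value
```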