I've been using ChatGPT steadily since its launch, and daily since I subscribed to the paid plan. By now, the boundaries of what an LLM can and cannot do are becoming clearer and clearer to me.
LLMs are very good at finding information. In particular, it would not be an exaggeration to say they can gather fragmented information, organize it, and present it at an expert level.

However, LLMs contribute little when it comes to "connecting" different pieces of information, the realm of "connecting the dots" that Steve Jobs emphasized.
What you usually expect from a "professional" in a field is not just hands-on experience solving problems, but above all the ability to solve today's problem by borrowing experience and knowledge from adjacent fields. An LLM has broad knowledge within a field, so at first glance it looks like an expert; even when you ask about knowledge from another domain, it answers with surprising sophistication and detail.

However, an LLM does not do well at combining the knowledge of those two domains.

Through conversation, you can coax it into connecting the information, but that coaxing is something the human must do, so it cannot be counted as a capability of the LLM itself.
An LLM cannot create knowledge, even though it may summarize knowledge brilliantly.
Improving the model does not seem to solve this problem. A different approach is needed, one that can combine similar knowledge across different fields. I don't know what that approach is, but I doubt it will emerge from the current model architecture.
So LLMs have clear limitations, and a limitation in language models suggests that similar limitations likely exist in generative models for other modalities such as images and voice. (I don't know much about image or voice models, but LLMs and image/voice generative AI models share the same basic structure, so their structural limitations should be similar.)
Surely I'm not the only one who has noticed this.
The first thing to think about is what humans can do in the age of AI. AI is still a tool, and people must use it to obtain broad, high-quality information far faster and more accurately than before. Then, by connecting that information, we must create value by breaking down the silos between specialized fields. Over the next 10 years, the goal is to be among the people who "use" AI, not the people who are "used by" it.
The second thing to think about is corporate AI investment. Companies that are fixated on LLMs or generative AI need to be careful. Concentrating everything on generative AI is risky: unless a company's core business is Gen AI, as with OpenAI, Gen AI should be just one part of the portfolio. Many companies are suddenly trying to remake themselves around AI, but the limitations of this generation of AI will become just as clear.
Lastly, as I've been saying since April: do you think HBM demand will keep growing? I don't think so.