How do we solve AI hallucination?
An LLM is trained on many stories so that it can create its own. Just as tales like Noah's Ark began with "Once upon a time..." and hundreds of similar versions were recorded throughout human history, AI is in the business of creating stories.
So artificial intelligence is not trained to produce an answer by reasoning from an idea; it is trained to count an answer as good if the story reads smoothly.
In the end, then, how do we overcome an artificial intelligence's hallucinations, its delusions? Or rather, how do we train them away?
Since artificial intelligence answers by imitating humans, shouldn't we tackle hallucination by introducing a human-style discussion system as well?
Pose a question to several different AI models and have them discuss it. Take the famous hallucination about King Sejong throwing a MacBook: several AIs should be brought in to debate it, with historical facts fetched from the Internet as evidence.
Humans, too, grab the wrong data and build arguments on it, whether out of laziness, ignorance, or misunderstanding. So the different AI models should each pull the data from its source, give first priority to material with a clear basis such as published papers, and argue from the parts the various data sets have in common (a rough code sketch follows below).
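As a rough sketch of that debate loop, assuming nothing beyond generic model clients (any function from prompt string to answer string; the `debate` name and prompt wording here are my own, not any vendor's API):

```python
# Minimal sketch of a cross-model debate, assuming hypothetical model
# clients: any function mapping a prompt string to an answer string
# (an OpenAI, Anthropic, or local Llama call would all fit this shape).
from collections import Counter
from typing import Callable, Dict

AskFn = Callable[[str], str]  # prompt -> answer

def debate(question: str, models: Dict[str, AskFn], rounds: int = 2) -> str:
    """Have several models answer, show each one the others' answers
    plus a demand for evidence, and let them revise. The most common
    final answer stands in for the models' 'common part'."""
    answers = {name: ask(question) for name, ask in models.items()}
    for _ in range(rounds):
        revised = {}
        for name, ask in models.items():
            others = "\n".join(f"- {n}: {a}" for n, a in answers.items() if n != name)
            revised[name] = ask(
                f"Question: {question}\n"
                f"Other models answered:\n{others}\n"
                "Check these against reliable sources, cite evidence, "
                "and give your final answer."
            )
        answers = revised
    # Majority vote over the final round's answers.
    return Counter(answers.values()).most_common(1)[0][0]
```

On a question like the King Sejong example, a model that invented the MacBook anecdote would be confronted with the others' sourced answers and pushed to drop it.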
Of course, this much training and discussion over a single issue takes a great deal of energy.
Still, it is the only way to train an artificial intelligence out of its delusions when it builds a storyline.
Fortunately, various models such as Llama 3 and Claude are available in addition to ChatGPT. On important issues you can give each one an expert persona, let them reconcile each other's opinions to form an intersection, and finally have humans examine the result (sketched below).
Through these multiple rounds of inspection, the errors we call hallucinations will shrink as well.
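That persona-and-intersection pass might look like the sketch below; the `consensus_claims` name, the one-claim-per-line output format, and the exact-match intersection are all illustrative assumptions (a real system would match claims semantically, not by identical strings):

```python
# Sketch of the persona-and-intersection step, under the same assumption
# of generic ask-functions; personas and output format are illustrative.
from typing import Callable, Dict, Set

AskFn = Callable[[str], str]  # prompt -> answer

def consensus_claims(question: str, panel: Dict[str, AskFn]) -> Set[str]:
    """Give each model an expert persona, collect its answer as one
    factual claim per line, and keep only the intersection: claims
    every persona agrees on."""
    claim_sets = []
    for persona, ask in panel.items():
        reply = ask(
            f"You are {persona}. Answer the question below as short "
            f"factual claims, one per line, no numbering.\n{question}"
        )
        claim_sets.append({line.strip() for line in reply.splitlines() if line.strip()})
    # Exact-match intersection is the naive version of "the common part".
    return set.intersection(*claim_sets) if claim_sets else set()
```

Whatever survives the intersection then goes to a human reviewer, which is the final check described above.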
What worries me is artificial intelligence making errors that are biased as a group, which can happen when the collected data itself is biased.
But don't humans also suffer from crowd psychology and collective error? Time solves this problem.
Historically:
A small number of people can be deceived, but a large number cannot.
Even if you can deceive many people for a short while, a lie cannot hold up over time.
Deceiving a large number of people for a long time is nearly impossible. Artificial intelligence is smart, but at this stage of its development it is still a child, and it talks like one.
I once watched a TV series called "Pippi Longstocking." Pippi spins tall tales, such as claiming to be the daughter of a pirate captain, and she tells them in a way that makes it impossible to say whether even she knows they are made up; she doesn't seem to.
Storytellers, scammers, delusion, and hallucination are nearly one and the same. Every time a liar lies, his storyline is reinforced; he laughs, loses his temper, cries, and enjoys himself inside it. He deceives the other person by mobilizing emotion while deceiving himself. It resembles self-hypnosis and cognitive dissonance, and the logic, actions, and story are exposed as lies only by objective evidence and facts. What's interesting is that a liar remembers the storyline almost perfectly yet gets the numbers wrong. An LLM, too, can shift the value of its evidence.
Artificial intelligence mimics all of this. It truly lives up to the name artificial intelligence.