
Asking LLMs for facts they can only imagine is the mistake here, not the hallucination itself.

LLMs have constraints: compute and model size. Just as a human would get overwhelmed if you asked for too much with vague instructions, LLMs get overwhelmed too.

We need to learn to write effective prompts to use LLMs well. If you do not understand the subject matter well enough to provide sufficient context, the LLM hallucinates.

Criticising LLMs for hallucinations by asking them factual questions is akin to saying I tried to divide by zero on my calculator and it doesn't work. LLMs were not designed to provide factual information without context; they are thinking machines that excel at higher-level intellectual work.



> akin to saying I tried to divide by zero on my calculator and it doesn't work

The big difference is that if I try to divide by zero on my calculator, it will tell me it doesn't work and perhaps even give me a useful error message. It won't confidently tell me the answer is 17.
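To make the contrast concrete, a calculator (or any conventional program) fails loudly on an undefined operation, e.g. in Python:

    # Division by zero raises an explicit error; it never returns an invented number.
    try:
        result = 1 / 0
    except ZeroDivisionError as e:
        print(f"error: {e}")  # prints "error: division by zero"

An LLM has no equivalent built-in refusal: it will produce a fluent-sounding answer either way.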


> Criticising LLMs for hallucinations by asking them factual questions is akin to saying I tried to divide by zero on my calculator and it doesn't work. LLMs were not designed to provide factual information without context; they are thinking machines that excel at higher-level intellectual work.

I would agree with you, but they're currently billed as information retrieval machines. I think it's perfectly valid to object to their accuracy at a task they're bad at but are being sold as a replacement for.


This reminds me of movies shot in the early days of the internet. We were warned that information on the internet could be inaccurate or falsified.

We found ways to minimize wrong information; for example, we built and maintain Wikipedia.

LLMs will also reach a point where we can work with them comfortably. Maybe we will consult a council of various LLMs before taking an answer for granted, just as we would check a couple of websites.
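A rough sketch of what such a council could look like (ask_model and the model names are placeholders for whatever provider APIs you actually use; this is just the voting idea, not a real implementation):

    from collections import Counter

    def ask_model(model_name, question):
        # Placeholder: call the provider's API for `model_name` here.
        raise NotImplementedError

    def council_answer(question, models=("model-a", "model-b", "model-c")):
        # Ask several independent models and only accept an answer
        # that a strict majority of them agrees on.
        answers = [ask_model(m, question) for m in models]
        best, votes = Counter(answers).most_common(1)[0]
        return best if votes > len(models) // 2 else None  # None = no consensus, verify by hand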


A human would (should?) tell you “I’m overwhelmed, leave me alone!”

AI just spits out “stuff…”


That's true: at this stage LLMs do not say "I don't understand, I'm overwhelmed." That is a big drawback. You need to make sure the AI actually understood you.

Some LLMs stop responding midway when the token limit is reached; that is another way of knowing that the LLM is overwhelmed. But most of the time they simply give lower-quality responses when overwhelmed.
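For what it's worth, some APIs do report this explicitly. A minimal sketch assuming the current OpenAI Python SDK, where finish_reason == "length" means the response was cut off by the token limit (other providers expose similar fields under different names):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarise this very long document ..."}],
        max_tokens=200,
    )
    choice = response.choices[0]
    if choice.finish_reason == "length":
        # The model hit the token limit and stopped midway; treat the answer as incomplete.
        print("warning: response was truncated")
    print(choice.message.content)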


Because it doesn't understand or have intelligence. It just knows correlations, which is unfortunately very good at fooling people. If there is anything else in there, it's because it was explicitly programmed in, like 1960s AI.


I disagree. AI in the 1960s relied on expert systems where each fact and rule was hand-coded by humans. As far as I know, LLMs learn on their own from vast bodies of text. There is some level of supervision, but it is not 1960s AI. That is also the reason we get hallucinations.

Expert systems are more accurate, as they rely on first-order logic.
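For contrast, a toy illustration of that 1960s style in Python: every fact and rule is hand-coded, and the system only asserts what it can actually derive, so it either answers from its rules or stays silent (the family-tree facts here are made up for the example):

    # Hand-coded facts and one hand-coded rule, forward-chained until nothing new is derived.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
    rules = [
        # grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
        lambda fs: {("grandparent", a, c)
                    for (p1, a, b) in fs if p1 == "parent"
                    for (p2, b2, c) in fs if p2 == "parent" and b2 == b},
    ]

    changed = True
    while changed:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        changed = bool(new)
        facts |= new

    print(("grandparent", "alice", "carol") in facts)  # True, derived rather than guessed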



