By your logic, any system which spews random strings is intelligent because sometimes its randomness coincidentally aligns with the input you give it.
where we go wrong is in taking words like reason, understand, and think and trying to apply them to LLMs, when at the heart of it it's just dot products and matrix multiplications. these things are a new, alien kind of intelligence, and we struggle with them because they're completely foreign. it's more than a random coincidence. your logic was that, because in your run of the same query it made a mistake on the last letter, a mistake an inattentive teenager or a drunk adult could easily make, we can't consider it intelligent.
and we're not talking about just any system here; we're talking about LLMs and their ability to generate random, coincidental text that does happen to align with the input given. when the output, coincidental and random as it may well be, aligns with the input in a way that resembles intelligence, we do have to ponder not just what intelligence actually is, but also what it means to be intelligent. octopuses are intelligent, but they can't solve your particular puzzle. (see the toy sketch below for what i mean by dot products and matrix multiplications.)
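to be concrete: here's a minimal toy sketch of a single attention head in numpy, just to show the kind of dot products and matrix multiplications involved. the dimensions and random weights are made up purely for illustration, not taken from any real model:

    import numpy as np

    # toy single-head attention: illustrative only, with made-up sizes
    d = 4                      # embedding size (hypothetical)
    x = np.random.randn(3, d)  # 3 tokens, each a d-dimensional vector

    # random projection matrices standing in for learned weights
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

    Q, K, V = x @ Wq, x @ Wk, x @ Wv     # matrix multiplications
    scores = Q @ K.T / np.sqrt(d)        # dot products between tokens
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    out = weights @ V                    # weighted mix of value vectors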
That’s not weird at all. LLMs often give different answers to the same query, which has been demonstrated several times in this thread.
> Does that make it intelligent, then?
No, it does not, because it isn’t consistent; it demonstrated that it doesn’t understand.
https://news.ycombinator.com/item?id=40368446