6. On “This is just a critique of current models—not AGI itself”
No.
This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.
If your system is:
a) Finite
b) Bounded by symbols
c) Built on recursive closure
…it breaks down where things get fuzzy:
where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.
That’s not a tuning issue; that IS the boundary.
(And we’re already seeing it.)
In The Illusion of Thinking (Shojaee et al., 2025, Apple),
the authors found that as task complexity rises:
- LLMs try less
- Answers get shorter, shallower
- Recursive tasks, like the Tower of Hanoi, just fall apart (see the sketch below)
- Past a complexity threshold, accuracy collapses outright
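For concreteness, here is the textbook recursive Tower of Hanoi solution (the standard algorithm, not code from the paper). Solving n disks takes 2^n - 1 moves, which is why disk count makes such a clean complexity dial: every added disk doubles the length of the move sequence a model must faithfully execute.

```python
def hanoi(n: int, src: str, aux: str, dst: str, moves: list) -> None:
    """Move n disks from src to dst via aux, appending each move."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # park the top n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest remaining disk
    hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 disks onto it

for n in range(1, 8):
    moves = []
    hanoi(n, "A", "B", "C", moves)
    assert len(moves) == 2**n - 1        # the sequence doubles with every disk
    print(n, len(moves))
```

Ten lines of recursion solve it exactly at any depth; a finite pattern-matcher has to reproduce an exponentially long, perfectly ordered sequence instead.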
That’s IOpenER in the wild: Information Opens. Entropy Rises.
The theory predicts the divergence, and the models are confirming it—one hallucination at a time.
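To see the slogan in miniature, here is a toy numerical reading (my simplification, not IOpenER’s actual formalism): treat an open situation as uniform uncertainty over n admissible framings. Shannon entropy is then log2(n) bits, and it rises without bound as the space opens, while a finite symbolic system’s description budget stays fixed.

```python
import math

# Toy illustration of "Information Opens. Entropy Rises."
# Assumption (mine, not the theory's): uncertainty is uniform
# over n admissible framings of the situation.
def entropy_uniform_bits(n: int) -> float:
    """Shannon entropy (bits) of a uniform distribution over n outcomes."""
    return math.log2(n)

for n in (2, 16, 1024, 2**20):
    print(f"{n:>8} framings -> {entropy_uniform_bits(n):5.1f} bits")
```

The point of the toy: entropy grows with the opening, and no fixed symbol budget tracks it.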