Hacker News | dernett's comments

This is crazy. It's clear that these models don't have human intelligence, but it's undeniable at this point that they have _some_ form of intelligence.

If LLMs weren't created by us but were something discovered in another species' behaviour, it would be 100% labelled intelligence.

Yes, same if the technology had been found embodied in machinery aboard a crashed UFO.

My take is that a huge part of human intelligence is pattern matching. We just didn’t understand how much multidimensional geometry influences our matches.

Yes, it could be that intelligence is essentially a sophisticated form of recursive, brute force pattern matching.

I'm beginning to think the Bitter Lesson applies to organic intelligence as well, because basic pattern matching can be implemented relatively simply using very basic mathematical operations like multiply and accumulate, and so it can scale with massive parallelization of relatively simple building blocks.
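
As a toy illustration of the "multiply and accumulate" point (a minimal Python sketch; the vectors and the cosine-similarity choice are made up for the example), the core matching operation really is just a pile of multiply-accumulates:

    import math

    def dot(a, b):
        # Multiply-and-accumulate: the basic building block.
        acc = 0.0
        for x, y in zip(a, b):
            acc += x * y
        return acc

    def cosine_similarity(a, b):
        # Normalize the accumulated products so scale doesn't matter.
        return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

    def best_match(query, patterns):
        # "Pattern matching" as picking the stored vector most similar to the query.
        return max(patterns, key=lambda p: cosine_similarity(query, p))

    patterns = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
    print(best_match([0.9, 0.8, 0.1], patterns))  # -> [0.7, 0.7, 0.0]

Each dot product is independent of the others, which is exactly why this scales so well across massively parallel hardware.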


Intelligence is almost certainly a fundamentally recursive process.

The ability to think about your own thinking over and over as deeply as needed is where all the magic happens. Counterfactual reasoning occurs every time you pop a mental stack frame. By augmenting our stack with external tools (paper, computers, etc.), we can extend this process as far as it needs to go.

LLMs start to look a lot more capable when you put them into recursive loops with feedback from the environment. A trillion tokens worth of "what if..." can be expended without touching a single token in the caller's context. This can happen at every level as many times as needed if we're using proper recursive machinery. The theoretical scaling around this is extremely favorable.


An anatomically good candidate, the thalamo-cortical loop: https://en.wikipedia.org/wiki/Cortico-basal_ganglia-thalamo-...

I don't think it's accurate to describe LLMs as pattern matching. Prediction is the mechanism they use to ingest and output information, and they end up with a (relatively) deep model of the world under the hood.

The "pattern matching" perspective is true if you zoom in close enough, just like "protein reactions in water" is true for brains. But if you zoom out you see both humans and LLMs interact with external environments which provide opportunity for novel exploration. The true source of originality is not inside but in the environment. Making it be all about the model inside is a mistake, what matters more than the model is the data loop and solution space being explored.

"Pattern matching" is not sufficiently specified here for us to say if LLMs do pattern matching or not. E.g. we can say that an LLM predicts the next token because that token (or rather, its embedding) is the best "match" to the previous tokens, which form a path ("pattern") in embedding space. In this sense LLMs are most definitely pattern matching. Under other formulations of the term, they may not be (e.g. when pattern matching refers to abstraction or abstracting to actual logical patterns, rather than strictly semantic patterns).

> I don't think it's accurate to describe LLMs as pattern matching

I’m talking about the inference step, which uses tensor arithmetic to find patterns in text. We don’t understand what those patterns are, but it’s clear they're doing some heavy lifting, since LLM inference expresses logic and reasoning under the guise of our reductive "next token prediction".


Yes, the world model building is achieved via pattern matching and happens during ingestion and training, but that is also part of the intelligence.

Which is even more true for humans.

Intelligence is hallucination that happens to produce useful results in the real world.

I don't think they will ever have human intelligence. It will always be an alien intelligence.

But I think the trend line unmistakably points to a future where it can be MORE intelligent than a human in exactly the colloquial way we define "more intelligent".

The fact that one of the greatest mathematicians alive has a page on this and is seriously benchmarking it shows how likely he believes this is to happen.


Well, Alpha Go and Stockfish can beat you at their games. Why shouldn't these models beat us at math proofs?

Chess and Go have very restrictive rules. It seems a lot more obvious to me why a computer can beat a human at them. They have a huge advantage just by being able to calculate very deep lines in a very short time. I actually find it impressive how long humans were able to beat computers at Go. Math proofs seem a lot more open ended to me.

AlphaGo and Stockfish were specifically designed and trained to win at those games.

And we can train models specifically on math proofs? I think the only difference is that math is bigger...

It's pattern matching. Which is actually what we measure in IQ tests, just saying.

There's some nuance. IQ tests measure pattern matching and, in an underlying way, other facets of intelligence - memory, for example. How well can an LLM 'remember' a thing? Sometimes Claude will perform compaction when its context window reaches 200k "tokens", and then it seems a little colder to me, but maybe that's just my imagination. I'm kind of a "power user".

I call it matching. Pattern matching has a different meaning.

What are you referring to? LLMs are neural networks at their core, and the simplest versions of neural networks are all about reproducing patterns observed during training.

You need to understand the difference between general matching and pattern matching. Maybe you should have read more of the older AI books. An LLM is a general fuzzy matcher. A pattern matcher is an exact matcher using an abstract language, the "pattern". A general matcher uses a distance function instead; no pattern needed.

I.e., you want to find a subimage in a big image, possibly rotated, scaled, tilted, distorted, with noise. You cannot do that with a pattern matcher, but you can do it with a matcher, such as a fuzzy matcher, an LLM.

You want to find a Go position on a Go board. An LLM is perfect for that, because you don't need to come up with a special language to describe Go positions (older chess programs did that); you just train the model on whether that position is good or bad, and this can be fully automated via existing literature and later by playing against itself. You train the matcher not via patterns but via a function (win or lose).


Depends on what you mean by intelligence, human intelligence and human

As someone who doesn't understand this shit, and given how it's always the experts who fiddle with the LLMs to get good outputs, it feels natural to attribute the intelligence to the operator (or the training set) rather than the LLM itself.

Yes, it is intelligent, but so what? It's not conscious, sentient, or sapient. It's a pattern-matching Chinese room.

Lean defines a != b as a = b -> False, so it seems that we have a function from proofs of a = b to proofs of False. I guess this being bijective means that there are no proofs of a = b, since there are no proofs of False, which is an equivalent way of looking at a != b.
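
A minimal Lean 4 sketch of that reading (using the core-library definitions of Ne and Not):

    -- `a ≠ b` is `Ne a b`, which unfolds to `¬(a = b)`, i.e. `a = b → False`,
    -- so a proof of `a ≠ b` really is a function from proofs of `a = b` to proofs of `False`.
    example {α : Type} {a b : α} (hne : a ≠ b) (heq : a = b) : False := hne heq

    -- And since `False` has no proofs, having such a function tells you `a = b` has no proofs either.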


This really sounds like it was generated by an LLM. "X isn't a Y. It's a Z", examples that don't make any sense (why would it think you're a farmer?), etc. Perhaps not the most surprising thing that an LLM advocates for itself...


Is it possible to create animations using something like Shadertoy's `iTime`?


I'm going to try formalizing this course in Lean--not sure how hard it is going to be. If anyone is interested in doing the same, please feel free to contribute!

https://github.com/dernett/Lean61200J


This sounds very interesting and relevant to the goals of the CSLib initiative that apparently just got started. I don't have a better public link to it right now than this LinkedIn post (perhaps there's a Zulip tag):

https://www.linkedin.com/posts/lean-fro_leanlang-cslib-forma...


What will that accomplish?


You can write proofs along with the course, and since they are machine checked you can have confidence that they are correct.

If you don't know, writing a proof in isolation can be difficult, since you may be writing one that isn't actually sound.
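
As a toy example of what "machine checked" buys you (a trivial Lean 4 proof; any statement from the course would work the same way):

    -- Lean only accepts this if the proof term actually establishes the statement.
    example (n : Nat) : n + 0 = n := rfl

    -- Change the statement to something false (say `n + 1 = n`) and Lean rejects every attempted proof.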


Learning math is more about the big ideas. Behind each proof is an insight. Formalizing in a computer is like spell-checking a document: it helps you catch small mistakes but doesn't change the content.

I just think this is a distraction unless your goal is to learn lean and not math.


Hard disagree.

Errors are found in human proofs all the time. And like everything else, going through the process of formalizing for a machine only increases the clarity and accuracy of what you're doing.


You are correct that mistakes are made all the time - but they tend to be "oh yeah let me fix that right now" mistakes. Or, "oh yeah that's not true in general, but it still works for this case". That's because the experts are thinking about the content of the material - and they are familiar with it enough to tell if an idea has merit or not. Formalism is just a mode of presentation.

Over-emphasis on formalism leads me to conclude you just don't understand the purpose of proofs. You are primarily interested in formal logic - not math.

I would invite you to read a few pages of famous papers - for example Perelman's paper on the Poincaré Conjecture.


I'm assuming he's talking about this specific small string optimization: https://www.youtube.com/watch?v=kPR8h4-qZdk&t=409s


Just watched; yes, that is the same optimization.


This is really helpful. Minor nit under Curry-Howard correspondence: "True propositions have exactly one term" should be "have at least one term".
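
A small Lean sketch of the "at least one" point (the particular proposition is an arbitrary example):

    -- Two differently-built proof terms for the same true proposition.
    -- (In Lean's Prop they are identified by proof irrelevance, but as constructions they differ.)
    example : True ∨ True := Or.inl True.intro
    example : True ∨ True := Or.inr True.intro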


I don't believe so. I worked out the permutations for n = 3 and, accounting for rotations, you only get 2: [0, 3, 4, 1, 2, 5] and [0, 5, 2, 1, 4, 3]. Of course, you get the expected answer of 12 if you multiply by 6.

