Smil, to me, shows that most people who talk about this subject are completely full of shit.
The subject is far too complicated for any rational discussion.
I think the main mistake with this is that the concept of a "complex machine" has no meaning.
A “machine” is precisely what eliminates complexity by design. "People are complex machines" already has no meaning, and adding "just" and "really" doesn't make the statement more meaningful; it makes it even more confused and meaningless.
The older I get, the more obvious it becomes that the idea of a "thinking machine" is a meaningless absurdity.
What we really think we want is a type of synthetic biological thinking organism that somehow still inherits the useful properties of a machine. If we say it that way, though, the absurdity is obvious, and it is clear that no one alive reading this will ever witness anything like that. Then we wouldn't be able to pretend we live at some special time in history that gets to see the birth of this new organism.
I think we are talking past each other a bit, probably because we have been exposed to different sets of information on a very complicated and diverse topic.
Have you ever explored the visual simulations of what goes on inside a cell or in protein interactions?
For example, what happens inside a cell leading up to mitosis?
It's a pretty cool resource; I recommend the shorter videos of the visual simulations.
This category of perspective is critical to the point I was making. Another might be the meaning/definition of complexity, which I don't think is well understood yet and might be the crux. For me to say "the difference between life and what we call machines is just complexity" and have it mean anything, we would need a shared understanding of "complexity".
I'm not exactly sure what complexity is, and I'm not sure anyone does yet, but the closest I feel I've come is maybe integrated information theory, and some loose concept of functional information density.
So while it probably seemed like I was making a shallow case at a surface level, I was actually trying to convey that when one digs into science at all levels of abstraction, the differences between life and machines seem to fall more on a spectrum.
We are paying the price now for not teaching language philosophy as a core educational requirement.
Most people have had no exposure to even the most basic ideas of language philosophy.
The idea that all these people go to school for years and never have to take even a one-semester class on the main philosophical ideas of the 20th century is insane.
Right. A flying machine doesn’t need to understand anything to fly. It’s not even clear what it would mean for it to do so, or how it would fly any differently if it did.
Same with the AI machines.
Understanding is not something that any machine or person does. Understanding is a compact label applied to people’s behavior by an observer that allows the observer to predict future behavior. It’s not a process in itself.
And yes, we apply this label to ourselves. Much of what we do is only available to consciousness post-hoc, and is available to be described just the same as the behavior of someone else.
> Understanding is not something that any machine or person does.
Yet I can write down many equations necessary to build and design that plane.
I can model the wind and air flow across the surface and design airfoils.
I can interpret the mathematical symbols into real physical meaning.
I can adapt these equations to novel settings or even fictitious ones.
I can analyze them counterfactually: not just making predictions, but also telling you why those predictions are accurate, what their inaccuracies are (such as which variables and measurements are more precise), and what all those things mean.
I can describe and derive the limits of the equations and models, discussing where they do and don't work, including in the fictional settings.
I can do this at an emergent macroscopic level and I can do it at a fine-grained molecular or even atomic level. I can even derive the emergent macroscopic behavior from the more fine-grained analysis and tell you the limits of each model.
I can also respond that Bernoulli's equation is not an accurate description of why an airfoil works, even when prompted with those words[0].
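For reference, the standard textbook form of the relation in question, valid along a streamline of steady, incompressible, inviscid flow, is:

    p + (1/2) ρ v^2 + ρ g h = constant

Reciting it is not the same as explaining an airfoil, which is the point.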
These are characteristics that lead people to believe I understand the physics of fluid mechanics and flight. They correlate strongly with the ability to recall information from textbooks, but the actions aren't strictly the ability to recall and search over a memory database. Do these things prove that I understand? No, but we deal with what we've got, even if it is imperfect.
It is not just the ability to perform a task; it includes the ability to explain it. The more depth I can go into, the more understanding people attribute to me. While this correlates with task performance, it is not the same thing. Even Ramanujan had to work hard to understand, even though he was somehow able to divine great equations without it.
You're right that these descriptions are not the thing itself either. No one is claiming the map is the territory here. That's not the argument being made. Understanding the map is a very different thing than conflating the map and the territory. It is also a different thing than just being able to read it.
Everyone reading this understands the meaning of a sunrise. It is a wonderful example of the use theory of meaning.
If you raised a baby inside a windowless solitary confinement cell for 20 years and then one day showed them the sunrise on a video monitor, they still wouldn't understand the meaning of a sunrise.
Having a machine try to extract the meaning of a sunrise from the syntax of a sunrise data corpus is just totally absurd.
You could extract some statistical regularity from the pixel data of the sunrise video monitor or sunrise data corpus. That model may provide some useful results that can then be used in the lived world.
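To make "extract some statistical regularity" concrete, here is a toy sketch; the brightness curve is synthetic stand-in data, not a real sunrise video:

    import numpy as np

    # Hypothetical per-frame mean brightness of a sunrise timelapse, one value per minute.
    t = np.linspace(0, 60, 61)                    # minutes since first light
    brightness = 1 / (1 + np.exp(-(t - 30) / 8))  # stand-in sigmoid-shaped curve
    brightness += np.random.default_rng(0).normal(0, 0.02, t.size)  # fake sensor noise

    # Fit a low-order polynomial: a "model" of the regularity in the pixels.
    coeffs = np.polyfit(t, brightness, deg=3)
    predict = np.poly1d(coeffs)

    # The fitted model interpolates and extrapolates brightness usefully...
    print(predict(45.0))
    # ...but nothing in these coefficients is about what a sunrise means.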
Pretending the model understands a sunrise though is just nonsense.
Offering the statistical model's usefulness in the lived world as proof that the model understands a sunrise borders, I would say, on intellectual fraud, considering that a human doing the same thing wouldn't understand a sunrise either.
> Everyone reading this understands the meaning of a sunrise
For a definition of "understands" that resists rigor and repeatability, sure. This is what I meant by reducing it to a semantic argument. You're just saying that AI is impossible. That doesn't constitute evidence for your position. Your opponents in the argument who feel AGI is imminent are likewise just handwaving.
To wit: none of you people have any idea what you're talking about. No one does. So take off the high hat and stop pretending you do.
This all just boils down to the Chinese Room thought experiment, where I'm pretty sure the consensus is that nothing in the experiment (not the person inside, not the whole emergent room, etc.) understands Chinese the way we do.
Another example of Searle's is that a computer simulating digestion is not digesting the way a stomach does.
The people saying AI can’t form from LLMs are on the consensus side of the Chinese Room. The digestion simulator could tell us where every single atom of a stomach digesting a meal is, and it’s still not digestion. Only once the computer breaks down food particles chemically and physically is it digestion. Only once an LLM receives photons, or has a physical capacity to receive photons, is there anything like “seeing a night sky”.
> For a definition of "understands" that resists rigor and repeatability, sure.
If we had such a definition that was rigorous, we would not care about LLM research and would simply just build machines to understand things for us :)
For a sufficiently loose definition of "would simply just", yes.
Handwaving away the idea of actually building the thing you think you understand as unimportant is exactly why philosophy is failing us in this moment.
The taste-of-chocolate example is also assuming information-theoretic models of meaning are correct, rather than a use-based, pragmatic theory of meaning.
I don't agree with information-theoretic models in this context but we come to the same conclusion.
Loss only makes sense if there is a fixed "original", but there is not. The information-theoretic model creates a solvable engineering problem; with LLMs we just aren't solving the right problem.
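To spell out the model being rejected: in Shannon's picture there is a fixed source message, an encoder, a noisy channel, and a decoder whose whole job is reconstruction, summarized by quantities like channel capacity,

    C = max over p(x) of I(X; Y)

That framing is exactly what makes it a tractable engineering problem, and exactly why it says nothing about what the message means.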
I think it is more than that. The path forward with a use theory of meaning is even less clear.
The driving example is actually a great example of the use theory of meaning and not the information-theoretic.
The meaning of “driving” emerges from this lived activity, not from abstract definitions. You don't encode an abstract meaning of driving that is then transmitted on a noisy channel of language.
The meaning of driving emerges from the physical act of driving. If you only ever mount a camera on the headrest and operate the steering wheel and pedals remotely from a distance, you still don't "understand" the meaning of "driving".
Whatever data stream you want to come up with, trying to extract the meaning of "driving" from that data stream makes no sense.
Trying to extract the "meaning" of driving from driving language game syntax with language models is just complete nonsense. There is no meaning to be found even if scaled in the limit.
Totally agree. I would never use Windows at home, but Excel at work is the main reason to ever use Windows.
I have LibreOffice Calc installed because I am on Mint at home, and even if it could do everything Excel can do, I don't know how to do things the same way. Neither do most people. The personal experience and the network effect are insurmountable for other software.
I have used Linux for 10 years now, but I think you just have to view a Mac mini as the cost of a hardware synth or a guitar. Then all your problems are solved.
At this point, I need a nice GPU on a Linux machine and a Mac mini. It is a dream setup. I think I booted Windows once on my most recent laptop, because I messed up booting from the thumb drive to blow it away.
Reaper runs incredibly well on Linux as DAW software, but with creative software you always run into something that is not available. Then it is really nice keeping the Mac only for creative pursuits.
I am using language models as much as anyone, and they work, but they don't work the way the marketing and popular delusion behind them pretend they work.
The best book on LLMs and agentic AI is Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay.