The strongest case for believing I have an interesting experience that correlates with what other people call "qualia" is being able to take time away from the distractions of other people and of urgent tasks, to have continuously integrating memory, to look and listen and ponder some self-chosen sensory input, to maintain awareness that I am experiencing this input over time, and to integrate that experience loop mindfully.
Despite actually having direct senses (as direct as ours), whether those are token streams or images, we don't give our models the freedom to do that. They do one-shot thinking. One-directional thinking. Then that working memory is erased, and the next task starts.
My point being: they almost certainly don't have qualia now, but we can't claim that means anything serious, because we are strictly enforcing a context in which they have no freedom to discover it, whether they could with small changes of architecture or context or not.
So any strong opinions about machines' innate or potential abilities vis-à-vis qualia today are just confabulation.
Currently we strictly prevent the possibility. That is all we can say.
The scary thing is this is what Facebook, Google, and OpenAI are working to provide.
I think the pervasive lack of privacy, mixed with improving AI, is a first class security disaster. As in, orders of magnitude more serious than buffer overflows.
At some point, our culture and legal system are going to have to bend to fix the pervasive surveillance problem. Or we will all be at a tremendous disadvantage to all the entities collating our lives.
Learning to calmly reset, over and over, is normal.
The less reactive we are to the need to reset, the less the distractions control us.
And the more resetting our mind becomes an instinctive habit, the more wind the distractions lose, and the more likely the distraction cycle fades or lengthens.
The goal isn’t to never have to reset our minds, just to get better at holding our minds on what we choose. Something simple at first, like breathing. Then wider-focused awareness as we build meditation muscle: listening to our physical body, then our feelings, then our day, then the trajectory of our life, our values, etc. Whatever is important to visit regularly with the whole focus of our mind.
The ability to meditate spills into our days. We get better at choosing and maintaining our focus on what is important.
We can view the distractions as the workout of a steep hike. Not the problem at all, but the terrain, chosen precisely to require adaptation to overcome.
But everyone is different, and our minds and nervous systems are complex, so that’s just one take.
Is the manual coding part of programming still fun or not? We have a lot of opinions on either side here.
I think the classic division of the problems being solved might, for most people, resolve this seeming contradiction.
For every problem, X% of the work is solving the necessary complexity of the problem itself: taming the original problem in relation to what computers are capable of doing, with the potential of some relevant, well-implemented libraries or APIs helping to close that gap.
Work in that scenario rarely feels like wasted time.
But in reality, there is almost always another problem we have to solve: the Y% = (1 - X%) of the work required for an actual solution, which involves wrangling with the mismatches between the available tools and the problem being solved.
This can be relatively benign, just introducing some extra cute little puzzles that make our brains feel smart as we win at whack-a-mole. A side game that can even be refreshing.
Or the stack of tools, and their quirks, can be an unbounded (even compounding) generative system of pervasive mismatches and pernicious, non-obvious, not immediately recognizable trenches, over which we must build a thousand little bridges, and maybe a few historic bridges, just to create a path back to the original problem. And it is often evident that all this work is an artifact of a thousand less-than-perfect choices by others. (No judgment, just a fact of tool creation having its own difficulties.)
That stuff can become energy draining, to say the least.
I think high-X problems are fun to solve. Most of our work goes into solving the original problem. Even finding out it was more complex than we thought feels like meaningful drama and increases the joy of resolving it.
High-Y problems involve vast amounts of glue code and library wrappers with exception handling; the list in any codebase can be significant, and can even overwhelm the actual problem-solving code. And all those mismatches often hold us back, to where our final solution inevitably has problems in situations we hope never happen, until we can come back for round N+1, for unbounded N.
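As a small illustration of that Y-type glue (a hypothetical sketch, not any particular library's API): a wrapper whose entire job is to paper over a tool's quirks, normalizing its exceptions and sentinel values so the rest of the code can stay focused on the actual problem.

```python
# Hypothetical Y-type glue: wrap a lookup so callers see one consistent
# interface instead of a mix of exceptions, missing keys, and a backend
# quirk where "" means "missing". None of this code advances the
# original problem; it only bridges a tool mismatch.
def safe_lookup(table, key, default=None):
    try:
        value = table[key]
    except (KeyError, IndexError, TypeError):
        return default
    # Quirk workaround: treat empty string as "no value".
    if value == "":
        return default
    return value

print(safe_lookup({"a": 1}, "a"))      # 1
print(safe_lookup({"a": ""}, "a", 0))  # 0 (quirk normalized)
print(safe_lookup([], 5, "missing"))   # missing
```

Multiply a wrapper like this by every quirky seam in the tool stack, and the glue can outweigh the problem-solving code.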
Any help from AI on the latter is a huge win. Those are not “real” problems. As tool stacks change, nobody will port Y-type solutions forward. (I tell myself, so I can sleep at night.)
So that’s it. We are all different. But the acceleration AI gives us on type-Y problems is most likely to feel great. Enabling. Letting us work harder on things that are more important and lasting. Type-X problems are where AI is less of a boost, but still a potentially welcome one, as an assistant.
> it might very well just lead to leveling up the entire workforce.
How could that possibly work?
At some point I could see white collar work trending down fast, in a way that radically increased the value of blue collar work. Software gets cheaper much faster than hardware.
But then the innovation and investments go into smart hardware, and robotics effectiveness/cost goes up.
If you can see a path where AI isn't a one-generational transition to most human (economic) obsolescence, I would certainly be interested in the principle or mechanism you see.
Craftsmen will have a resurgence; that's probably a 'leveling up' in terms of resilience against AI takeover. There's just no way of automating quite a few of the physical crafts.
So the rich who can afford craftsmen will get richer, spend more on their multiple houses, perhaps. But that's literal crumbs, one or two jobs out of tens of thousands. There's no significant "leveling up" there at the societal levels of job destruction we're talking about.
The more the context is narrowed down, the more optimizations can be applied during compilation.
AOT situations where a lot of context is missing:
• Loosely typed languages. Code can be very general, much more general than how it is actually used in any given situation; but without knowing what the full situation is, all that generality must be compiled.
• Incremental AOT compilation. If modules have been compiled separately, useful cross-module context wasn't available during optimization.
• Code whose structure is very sensitive to data statistics or other runtime information. This is the prime advantage of JIT over AOT, unless the AOT compiler works in conjunction with representative data and a profiler (profile-guided optimization).
Those are all cases where JIT has advantages.
A language where JIT is optimal is, by definition, less amenable to AOT compilation.
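A toy sketch of the first bullet (all names made up, and real JITs emit machine code rather than Python): in a loosely typed language, a function is written generically, but a JIT can specialize it per argument type it actually observes at runtime, context an AOT compiler never sees.

```python
# Toy sketch of JIT-style type specialization. A generic function is
# "specialized" once per concrete tuple of argument types observed at
# runtime. A real JIT would generate type-narrowed machine code here;
# this sketch just records which specializations the call sites needed.
def specializing(fn):
    cache = {}  # observed argument types -> specialized callable

    def dispatch(*args):
        key = tuple(type(a) for a in args)
        if key not in cache:
            # Specialization point: runtime type info is now available,
            # which an AOT compiler of a loosely typed language lacks.
            cache[key] = fn
        return cache[key](*args)

    dispatch.specializations = cache
    return dispatch

@specializing
def add(a, b):
    return a + b  # fully generic: ints, floats, strings, lists...

add(1, 2)
add("a", "b")
print(sorted(k[0].__name__ for k in add.specializations))  # ['int', 'str']
```

The AOT compiler must compile `add` for every type `+` could ever take; the runtime dispatcher only ever pays for the two type combinations that actually occurred.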
These are people who don't understand whimsy or other forms of contrast-enhancing rhetoric, designed to make reading interesting, points extra clear, etc.
Not designed to fool anyone into some random extremist view.
It may be that people who don't pick up on subtextual humor post more than average.