> I can see these kinds of survival-bias stories distorting the reality.
That was my take on the entire report, which I think lends an inherent bias to the data and stories. You have the entrepreneurial stories, and then you have the ones where people are both impacted and receiving benefits.
The infographics and charts even call out how "first-world" countries with fewer safety nets are more likely to be in "survival" mode than countries with them.
The bit from George Carlin's standup routine about how the poor are there just to scare the hell out of the middle class rings true in this reflection. Poorer countries accept their current realities, and the feedback reflects the hustle. Richer countries with safety nets reflect the existential issues of previous industrial revolutions. Richer countries without safety nets reflect the fear that their efforts will be made "replaceable" by AI.
As for the rest - massive testing creating false positives - that is an issue of implementation and the errors introduced by humans, not the data itself. If the process were largely automated, it could screen for a larger panel of issues at lower cost.
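To put rough numbers on the false-positive concern (these are made-up, illustrative figures, not real test statistics), the base-rate arithmetic looks like this:

    # Bayes' rule: how often a positive result is a true positive.
    def positive_predictive_value(prevalence, sensitivity, specificity):
        true_pos = prevalence * sensitivity               # sick and flagged
        false_pos = (1 - prevalence) * (1 - specificity)  # healthy but flagged
        return true_pos / (true_pos + false_pos)

    # a rare condition (0.1% prevalence) screened with a "99% accurate" test:
    print(positive_predictive_value(0.001, 0.99, 0.99))   # ~0.09

Roughly 9 out of 10 positives are false at that prevalence, which is why the specificity of the implementation - the part better automation can actually improve - matters so much.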
From my experience working deep in data and human factors: the obstacle to quantifying the root cause isn't reality - broadly speaking, we live a shared experience. The issue is that the data isn't good enough. What bugs us about it is the psychology: our perceptions differ enough that we will fight to prove an unknown.
Memes are funny when they hold an element of truth. And reading through your blog post, there are some moments of laughter.
But your conclusion that it's just about one side coming to the other rings hollow.
The reason you are fixing a database at 9pm on a Friday is that you, your predecessors, or the company in general did not care to pay off the tech debt. Choosing int over bigint was always a bug waiting to bite you in the ass. Not having your cloud metrics properly set up and observable is also tech debt, because a decision was made somewhere not to build it, or document it in a runbook, or whatever. And there you are, 9pm on a Friday, doubting yourself.
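To make the int/bigint bite concrete, here's a hypothetical sketch (the numbers and names are mine) of the runway check that never got written:

    # a signed 32-bit "int" key overflows at 2**31 - 1; 64-bit "bigint" won't.
    INT_MAX = 2**31 - 1     # 2,147,483,647
    BIGINT_MAX = 2**63 - 1  # 9,223,372,036,854,775,807

    def ids_remaining(current_max_id, limit=INT_MAX):
        """Inserts left before the primary key overflows."""
        return limit - current_max_id

    print(ids_remaining(2_100_000_000))  # 47,483,647 left - page someone now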
Agentic AI, I hope, will evolve into a decentralized model - because the user is the edge, not the cloud. The problem is that the hardware it takes to run LLMs at scale only works in the cloud - until the black box of LLM magic is distilled into an ASIC well enough to be mass manufactured.
Without this happening - without a way to take the state of the machine and put it into soda cans - it remains the realm of those with the cash to keep their well from drying out.
They do need hosting, and now they need a very particular hosting with very particular hardware, which is the bottleneck.
Now here is the trick: exporting the magic that makes LLMs work (transformers) into ASIC hardware to get it out of the GPU. The problem is the black box of logic gates within the GPU that makes the LLM work.
There are a few that have figured it out. There should be more, way more. Otherwise this will never scale, and we'll be stuck in the cloud trap - because nobody is asking for less, except in their bills.
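For a sense of scale, here's a back-of-envelope on why the weights alone keep big models in the cloud (illustrative model sizes; real footprints also include the KV cache and activations):

    # memory needed just to hold the weights:
    # params (billions) * bytes per param = gigabytes
    def weight_memory_gb(params_billion, bytes_per_param):
        return params_billion * bytes_per_param

    print(weight_memory_gb(70, 2.0))  # 70B at fp16  -> ~140 GB: datacenter territory
    print(weight_memory_gb(70, 0.5))  # 70B at 4-bit -> ~35 GB: high-end workstation
    print(weight_memory_gb(7, 0.5))   # 7B at 4-bit  -> ~3.5 GB: fits edge hardware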
>The real question is why would anyone want, or want to help build, such an obscenity.
Power saws and CNC mills have no autonomy. They have to be guided, every inch or instruction, by hand. Autonomous AI agents remove the hand. So if we don't define the role of humans in the process of creation, we get AI building things we didn't ask for or need.
AI is coming regardless. There are advantages that we all accept it can do. But the machine is a 'slave' only if we refuse to be 'masters'.
There is a term called social ecology.
It is based on the conviction that nearly all of our present ecological problems originate in deep-seated social problems. In effect, the way human beings deal with each other as social beings is crucial to addressing the ecological crisis.
The point social ecology emphasizes is not that moral and spiritual persuasion and renewal are meaningless or unnecessary; they are necessary and can be educational. But modern capitalism is 'structurally amoral' and hence impervious to moral appeals.
Power will always belong to the elite and commanding strata if it is not institutionalized in face-to-face democracies, among people who are fully empowered as social beings to make decisions in new communal assemblies. Power that does not belong to the people invariably belongs to the state and the exploitative interests it represents.
What is obscene is measuring outputs by 19th century standards. As long as we believe that "being born doesn't entitle you to food", we will stay on the hedonic treadmill until the planet or our psyches break.
>someone raised the question of “what would be the role of humans in an AI-first society”.
Norbert Wiener, considered the father of cybernetics, wrote a book back in the 1950s entitled "The Human Use of Human Beings" that raised these questions in the early days of digital electronics and control systems. In it, he brings up things like:
- 'Robots enslaving humans by doing jobs better suited to robots, due to a lack of humans in the feedback loop, which leads to fascist machines.'
- 'An economy without human interaction could lead to entropic decay, as machines lack the biological drive for anti-entropic organization.'
- 'Automation will lead to immediate devaluation of human labor that is routine. Society needs to decouple a person's "worth" from their "utility as a tool".'
The human purpose is not to compete but to safeguard the teleology (purpose) of the system.
>- 'Automation will lead to immediate devaluation of human labor that is routine. Society needs to decouple a person's "worth" from their "utility as a tool".'
I have this vision that, in the absence of the ability for people to form social hierarchies on the back of their economic value to society, there will be an AI-fueled class hierarchy based on people's general social ability. So rather than money determining your neighborhood, your ability not to be violent or crazy does.
If we have post-scarcity due to AI, everything becomes so uncertain. Why would we still have violent and crazy people? Surely the ASI could figure it out and fix whatever is going on in their brains. It's so fuzzy after that event horizon that I have no confidence in any predictions.
Why are some people able to bear suffering whereas others go bonkers? Or what if the only source of happiness for some of those crazy people is dominating others and keeping social hierarchies exclusive? How would AI fix that?
There are easy fixes to get rid of violent and crazy people. Why would a powerful ASI bother with fixing them? A rabid dog just gets put down by humans. Why would we expect anything better of our overlords?
This seems to assume a single-dimensional evaluation. Social compatibility is highly complex, and the capacity to evaluate it could be correspondingly greater.
Alvin Toffler's book "Future Shock" describes what's going on within this thread.
Toffler predicted that as change accelerated, we'd face the paradox of too many options (like a Cheesecake Factory menu) or, conversely, the feeling of having no options due to the framerate of change. He argued that we would enter a state of transience where our relationships, jobs, and values become "temporary". And when the rate of change turns everything "temporary", all the old institutions - religion, family, nation, profession - can no longer provide a frame of reference.
In short, the "simulation" of our existence may be starting to drop keyframes - causing pixelization in our society which we obviously see as glitches.
The machine is just going to do whatever we tell it - it is a horse with blinders on, or a steam engine going round and round. It doesn't know it needs to work within the human framework. Physics and society only intersect where it's needed for safety - this seems like one of those cases where we need to define the conditions under which both the dog and the tail can wag each other.
There was a court ruling earlier that I think starts to set this up: "AI-generated images cannot be copyrighted". The same could be said about the rest of the 3 M's. Then expand upon that. AI-generated content not being eligible for copyright would go a long way toward putting value back into people's work efforts.
Let machines deal with improving the framerate of life. Let humans decide what life should be. Hopefully it will finally have more than 50% humanity in it instead of amoral capitalism.
My real personal "doom" theory is that AI will, err, remove 99.99% of humans - pretty much everyone except the top 100,000, based on whatever fractally complex metric scheme it deems important.
Then those 100,000 get a utopia, the AI gets everything else, and ultimately the humans are just nice pets.
I think it's important to remember that humans are not that far removed from the native animals we share the earth with. Civilization is just a thin layer of rules we use to try to keep the peace between us.
Just being born doesn't entitle somebody to food and shelter; you have to go out and find it. You have to work.
A magpie is not provided food and shelter, it has to hunt, fight for territory, and build its nest.
Humans don't have some inalienable "worth". But if you can work, you might choose to trade it for some food and shelter.
AI is not going to change that. We might think the AI owners have a moral obligation to feed people who can't find work, but there is no guarantee this will happen.
Also, for the short term at least, we need to stop talking about AI like it's a thing, and talk about the companies that build and own the AI. Why would Google build an AI that can do everyone's job, then turn around and start building farms to feed us for free?
Do we perhaps imagine our governments are going to start building super-automated farms to feed us? How are they going to pay Google for the AI with no tax income?
>A magpie is not provided food and shelter, it has to hunt, fight for territory, and build its nest.
>Humans don't have some inalienable "worth". But if you can work, you might choose to trade it for some food and shelter.
A magpie is a slave to its environment (high entropy). Humans are capable of building systems that alter the environment (low entropy).
If we are apathetic to AI, we choose to ignore the benefits and improvements from technology. And ever since the plow, the bow and arrow, and sharpened rocks, we have always depended on technology to improve our condition. Which is why naturalists find it amazing when we discover other species on this planet using tools to gain advantages that nature and evolution didn't supply them through genetics.
There is a difference between "survival" and "purpose". We have developed our ape-selves to become more than meat in the circle of life. With purpose, we can be more than the magpie.
AI is not an environment - it's a technology as much as the hammer or plow. If it is built to concentrate wealth or kill more people, that's an architectural choice and not a law of physics.
Human labor is more than product outputs. If we cannot change the social contract that defines worth to shift towards human participation and stewardship, then it's a death sentence for the majority of the world's human population.
While companies are not charities, they do depend on consumers. If you take away the income of consumers, do you have a market? If anything, AI should be treated like the telephone or electricity - a public utility - that can be used to re-engineer how systems, like agriculture, are run.
At some point we will reach post-scarcity, where energy costs effectively little, if anything, and can supply all our needs. What happens when things are no longer scarce?
We (humans) need to work on ourselves to overcome our base natures like greed.
You did get the memo from POTUS that loyalty is more important than intelligence, right?
Unbiased intelligence is not welcome in this operation. One is told what the "factual truth" is (not the facts themselves) by those who operate out of Pennsylvania Avenue in DC.
If you're not blindly loyal and in line with the administration, you're at risk of losing whatever role you have; only once your loyalty is proven might you get some of it back, based on how much you have demonstrated.
--
The problem in infosec in this world is not competence, it is the cult of personality. This is why the black t-shirts dislike the black polo shirts, not so secretly.
I left my shocked face next to the mountain of documented government abuses...
Why do people think this is new? Your data, down to your current browser session and current location, has been sold as much as possible for almost 30 years. Tracking pixels replaced the tracking cookies linked to your habits, and metadata heuristics can pick a single person out of the noise of thousands doomscrolling the same dumb information.
And the .gov has been using it for quite some time - data brokers sell it to them as readily as adtech has sold your habits to .com. Best part for them: they do not need judicial oversight to access it, and they have unlimited resources to pay for it - look at CALEA.
I doubt they are looking for protesters in general. But it does make it easier for them to "target" people not aware or smart enough to maintain good OPSEC.
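If you want intuition for how little it takes to single someone out, here's a Panopticlick-style sketch. The attribute cardinalities are made up for illustration, and assuming independent, uniform attributes is a simplification:

    import math

    # rough count of distinct values per tracked attribute (assumed):
    attributes = {
        "user_agent": 10_000,
        "screen_resolution": 100,
        "timezone": 40,
        "installed_fonts": 100_000,
        "coarse_location": 50_000,
    }

    # bits of identifying information if attributes were independent and uniform:
    bits = sum(math.log2(n) for n in attributes.values())
    print(f"{bits:.1f} bits")  # ~57 bits; ~33 bits already pins down
                               # one person in ~8.6 billion (2**33)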
Back in a time when you had to pay through the nose for long-distance calling, you had outdial service through X.25 PSNs or, more often as ARPANet turned into the Internet, Telnet-accessible outdials.
I'm sure the 5% employee tax in Seattle and the bill being introduced in Olympia will do more to smooth things over than some quirky blipvert will.
I think most people in Seattle know how economics works; the logic follows:
while "techbro" don't work is true:
if "techbro" debt > income:
unless assets == 0:
sellgighustle
else
sellhousebeforeforeclosure
nomoreseattleforyou("techbro")
end
else
"gigbot" isn't summoned and people don't get paid.
"techbro" health-- due to high expense of COBRA.
[etc...]
end
end