Ask HN: How can tech people believe we can unplug a sophisticated AI?
1 point by arisAlexis on July 6, 2023 | 10 comments
This is such a naive take that I'm astonished it's a very common answer in these threads. OK, there is software you can download right now from the dark web, hack loads of laptops, and copy any code you like onto them. It's trivially easy to make a system that cannot be shut down, and it boggles my mind that people miss this. Even Marc Andreessen has claimed we can just unplug it, although you can easily see he has ulterior motives, having invested billions in the tech.

It's like a movie scene where top scientists from AI labs are working to solve the alignment problem and Jeff Bridges knocks on the door and says, "hey guys, have you thought about unplugging the damn thing?"



Human brains are very advanced and are in fact the most advanced processing unit known to man. Every human brain has its own agency and is separate from other human brains. A few insects, like bees and ants, have hierarchical societies with a centralized brain function. In mammals, high degrees of cooperation are very rare and limited to special situations like the naked mole-rat: https://en.wikipedia.org/wiki/Eusociality

You are assuming a centralized super intelligence is better than multiple separate intelligences. Maybe controlling hardware (bodies) is very hard to do in a centralized fashion. Maybe the delay in transmitting commands is too high to allow for a centralized processing unit to handle it and that's why animals have evolved as many separate individuals.

Humans and other animals have evolved over millions of years. We might be very efficient collectively. A single centralized intelligence might not be very efficient, or even possible. We might just kick its ass and take leaps around it. We don't know. Claiming that every new step in our discovery might bring the end of humanity is like when they built the Eiffel Tower and all the farmers around it were scared it would collapse and kill everyone.

THE WOLF, THE WOLF, THE AI WOLF! https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf


You wrote a huge post but still did not address my argument. Any task-based AI, not necessarily a conscious one, can trivially upload itself onto many computers. Do you agree that unplugging it is hilariously naive? Instead you talk about the possibility of biological superintelligence and other stuff that I did not mention. Lastly, you mock an argument that people vastly more accomplished than you think is the biggest issue humanity faces. That's a pretty big ego, big wolf.


> Any task-based AI, not necessarily a conscious one, can trivially upload itself onto many computers

Really? Across the past 6 months of publicly available instruction-tuned LLMs, there has not been a single instance of such a breakout to my knowledge. Even in "AutoGPT" agentic-style environments, it's practically impossible for AI to attain any capabilities outside of what the user gives it. You could argue giving it too much power is dangerous, but... that's the case with all computers, and all AI. It doesn't inherently change the threat model of computing as we know it today.
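
To make that concrete, here's a toy sketch of the kind of dispatch loop those agentic frameworks boil down to (tool names here are hypothetical, and this is not AutoGPT's actual code). Whatever text the model emits, the only capabilities it can trigger are the ones on the whitelist:

    # Toy agent loop: the model's only capabilities are whitelisted tools.
    # Tool names and behavior are made up for illustration.
    ALLOWED_TOOLS = {
        "search_notes": lambda query: f"(stub) results for {query!r}",
        "get_time": lambda _: "2023-07-06T12:00:00Z",
    }

    def run_agent_step(model_output: str) -> str:
        # Parse "tool_name: argument" from the model and dispatch it.
        tool, _, arg = model_output.partition(":")
        tool = tool.strip()
        if tool not in ALLOWED_TOOLS:
            # The model cannot grant itself new capabilities.
            return f"refused: {tool!r} is not an allowed tool"
        return ALLOWED_TOOLS[tool](arg.strip())

    print(run_agent_step("get_time: now"))        # dispatched
    print(run_agent_step("shell: curl evil.sh"))  # refused

If the user wires "shell" into that dict, sure, that's dangerous. But the danger came from the user, not from the model breaking out.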

You're not going to ever find a satisfying answer to this question. There will be fearmongers who insist such an event is inevitable despite having no evidence or examples of how it's possible, as well as yes-men who will blindly validate poorly considered AI applications with potential to cause real harm.

On the sort of "island of tangibility", from both a technical and a social perspective, LLMs don't really pose much of a threat to humanity. There's an old thought experiment called The Library of Babel, which posits a library containing infinite knowledge. The problem is that the knowledge is lost in a sea of infinite noise, such that anyone researching a specific topic would drive themselves mad long before ever finding the truth. In a sense, that's sorta what AI represents. It is a powerful library with many inherent truths, but also one that confidently lies to its reader and procedurally warps reality. Deriving superhuman intelligence from that sort of model is impossible, I think.

So, make of it what you will. I remain unconvinced that today's LLMs are any more dangerous than, say, the advent of Machine Vision a few decades ago.


This is becoming hilarious. Your argument is "since a primitive version of an AI couldn't jailbreak (wrong, many bad actors jailbreak it), then in the future it's impossible." Let's bet humanity on it! And why fixate on LLMs? You know technology progresses, right?

Guy, pray we are wrong, for your own good and ours. I'm done.


Well, I'm doing my best, considering you haven't demonstrated anything so far. I'll refute your claims, but I expect a real explanation of how AI could develop outside our control, without hand-waving. If you can deliver that, then we can argue on common ground.

> since a primitive version of an AI couldn't jailbreak (wrong, many bad actors jailbreak it)

I did not say that. "Jailbreak" is a loaded term that basically means getting AI to do something it shouldn't. This is fine, and something I acknowledge - by generating the wrong text or image, AI doesn't kill us. Humans kill us by risking lives on unsafe AI implementations. It is as simple as that.

> Let's bet humanity on it!

Who's betting humanity on it?

> And why fixate on LLMs?

Yeah, good question. LLMs have been around for years now, and before them Markov chains did about the same thing. Why didn't those take over the world?
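
For perspective, a word-level Markov chain is about this much code (a toy sketch):

    import random
    from collections import defaultdict

    def train(text):
        # Map each word to the words observed to follow it.
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, n=10):
        out = [start]
        for _ in range(n):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    chain = train("the cat sat on the mat and the cat slept")
    print(generate(chain, "the"))

Statistically plausible next-word prediction, nothing more. LLMs play the same basic trick at a vastly larger scale, and neither one comes with an escape hatch built in.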

> You know technology progresses, right?

I do. Now I'm asking you to show me what you would tweak to create an unkillable AI. According to you this is simple and self-evident. What am I missing? You can tell me in technical terms; I'm a Unix developer by trade.


Since you are a Unix developer, listen to the guys who invented the technology we are talking about. Let go of your huge ego and read the literature on AI alignment and its problems. I will not talk in technical terms to a Unix dev. Know thyself.

I will leave you with some food for thought. First, read some of the relevant literature; don't argue with shower thoughts. Second, think: a car manufacturer comes and tells you that if you overheat the car it may explode. Your wife, a teacher, says "nah, I know better than the manufacturer, it will not." Do you put your kids in the car trusting the manufacturer or your wife? In this case, all the top AI lab heads have a different opinion than the Unix guy.

https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-r...


> Since you are a Unix developer, listen to the guys who invented the technology we are talking about. Let go of your huge ego and read the literature on AI alignment and its problems.

Which ones? The OpenAI lobbyist constituents? The guys on Twitter who have never deployed a model in their life? The developers of Talk-to-Transformer or the developers of the Markov chain? The people who wrote GANBreeder or the guy who made AI dungeon? Sam Altman? Mark Zuckerberg? Timnit Gebru?

They all have different opinions. Many of them hold not-insignificant stock in AI-related businesses. I will trust them about as far as I can throw them.

> I will not talk in technical terms to a Unix dev. Know thyself.

That's a convenient excuse not to discuss the exact exploit chain that you claim is imminent and obvious. Shame, since I really wanted to assess it with you.

> I will leave you with some food for thought. First, read some of the relevant literature; don't argue with shower thoughts.

I think this is what my Youth Pastor told me verbatim growing up.

> Second, think: a car manufacturer comes and tells you that if you overheat the car it may explode. Your wife, a teacher, says "nah, I know better than the manufacturer, it will not." Do you put your kids in the car trusting the manufacturer or your wife? In this case, all the top AI lab heads have a different opinion than the Unix guy.

As written, I will defer to whoever produces the more conclusive evidence. If the car manufacturer comes and tells me part of our car could overheat, but my wife tells me she had that component replaced years ago, then she's likely right. If the manufacturer points out plastic explosive hidden in the frame or a broken drivetrain, then maybe I'll change my mind.

...never mind the fact that your hypothetical is just wrong in the first place. It's more like if Honda came to tell me that my 4-door was too dangerous for the road, and my only reasonable replacement was a new Honda. If I buy any other brand then I might put myself back in danger, according to them.

So, do I trust Honda or buy a car from another manufacturer? Frankly it doesn't matter. Both cars are prone to user error and death when mishandled. Making promises around "safety" and keeping users eternally fearful is a classic marketing tactic. Apple has successfully employed it for a decade and became the largest company in the world. It's not surprising to see Sam Altman pick up the mantle there. Maybe I'll trust him when he releases a Jupyter notebook describing how AI killed his dog or whatever.


The sad thing is that you did not see who signed the petition, and you did not look up the link that I gave you, who he is, who his colleagues are, and who came out and spoke about it. Live on in oblivion. I can't argue with someone who does not want to know the facts.


I figure "The Adolescence of P-1" was autobiographical. ;) https://en.wikipedia.org/wiki/The_Adolescence_of_P-1


A sophisticated AI will require a large amount of compute located in the same place, and even then it will be slow and can only focus on one task at a time.
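
Rough numbers, assuming something GPT-3-sized served at fp16 (the exact figures are my assumptions; the order of magnitude is the point):

    # Back-of-envelope: memory for the weights alone of a large model.
    params = 175e9           # assumed parameter count (GPT-3 scale)
    bytes_per_param = 2      # fp16
    weights_gb = params * bytes_per_param / 1e9
    print(f"~{weights_gb:.0f} GB just to hold the weights")  # ~350 GB

    # A typical hacked laptop has maybe 16 GB of RAM, so running this
    # means sharding across many machines with a fast interconnect.
    print(f"~{weights_gb / 16:.0f} laptops at 16 GB each, ignoring "
          "activations, KV cache, and network latency")

That's why "copy itself to a botnet of laptops" doesn't get you a working superintelligence; it gets you a lot of unusable weight shards.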

Since the AI works much slower than you, it cannot prevent you from unplugging it. Even maintaining such a compute cluster requires human work on both the hardware and the software side. People could just stop working on it and it would eventually fall over.

> It's trivially easy to make a system that cannot be shut down

Virality doesn't require sophisticated AI, so a virus can get away with simpler code that runs on any system.




