This is becoming hilarious. Your argument is "since a primitive version of an AI couldn't jailbreak (wrong, many bad actors jailbreak it), then in the future it's impossible." Let's bet humanity on it! And why fixated on LLMs? You know technology progresses, right?
Guy, pray we are wrong, for your own good and ours. I'm done.
Well, I'm doing my best, considering you haven't demonstrated anything so far. I'll refute your claims, but I expect a real explanation, without hand-waving, of how AI could develop outside our control. If you can deliver that, then we can argue on common ground.
> since a primitive version of an AI couldn't jailbreak (wrong, many bad actors jailbreak it)
I did not say that. "Jailbreak" is a loaded term that basically means getting an AI to do something it shouldn't. That's fine, and something I acknowledge: by generating the wrong text or image, AI doesn't kill us. Humans kill us by risking lives on unsafe AI implementations. It's as simple as that.
> Let's bet humanity on it!
Who's betting humanity on it?
> And why fixated on LLMs?
Yeah, good question. LLMs have been around for years now, and before them Markov chains did about the same thing. Why didn't those take over the world?
> You know technology progresses, right?
I do. Now I'm asking you to show me what you would tweak to create an unkillable AI. According to you this is simple and self-evident. What am I missing? You can tell me using technical terms, I'm a Unix developer by trade.
Since you are a Unix developer, listen to the guys who invented the technology we are talking about. Release your huge ego and read the literature on AI alignment and its problems. I will not talk in technical terms to a Unix dev. Know thyself.
I will leave you with some food for thought. First, read some of the relevant literature; don't argue with shower thoughts. Second, think: a car manufacturer comes and tells you that if you overheat the car it may explode. Your wife, a teacher, says "nah, I know better than the manufacturer, it will not." Do you put your kids in the car trusting the manufacturer or your wife? In this case, all the top-level AI lab heads have a different opinion than the Unix guy.
> Since you are a Unix developer, listen to the guys who invented the technology we are talking about. Release your huge ego and read the literature on AI alignment and its problems.
Which ones? The OpenAI lobbyist constituents? The guys on Twitter who have never deployed a model in their life? The developers of Talk-to-Transformer or the developers of the Markov chain? The people who wrote GANBreeder or the guy who made AI dungeon? Sam Altman? Mark Zuckerberg? Timnit Gebru?
They all have different opinions. Many of them hold not-insignificant stock in AI-related businesses. I will trust them about as far as I can throw them.
> I will not talk in technical terms to a Unix dev. Know thyself.
That's a convenient excuse not to discuss the exact exploit chain that you claim is imminent and obvious. A shame, since I really wanted to assess it with you.
> I will leave you with some food for thought. First, read some of the relevant literature; don't argue with shower thoughts.
I think this is what my Youth Pastor told me verbatim growing up.
> Second, think: a car manufacturer comes and tells you that if you overheat the car it may explode. Your wife, a teacher, says "nah, I know better than the manufacturer, it will not." Do you put your kids in the car trusting the manufacturer or your wife? In this case, all the top-level AI lab heads have a different opinion than the Unix guy.
As written, I will defer to whoever produces the more conclusive evidence. If the car manufacturer tells me a part of our car could overheat, but my wife tells me she had that component replaced years ago, then she's likely right. If the manufacturer points out plastic explosive hidden in the frame or a broken drivetrain, then maybe I'll change my mind.
...never mind the fact that your hypothetical is wrong in the first place. It's more like Honda coming to tell me that my 4-door is too dangerous for the road, and that my only reasonable replacement is a new Honda. If I buy any other brand, then I might put myself back in danger, according to them.
So, do I trust Honda or buy a car from another manufacturer? Frankly it doesn't matter. Both cars are prone to user error and death when mishandled. Making promises around "safety" and keeping users eternally fearful is a classic marketing tactic. Apple has successfully employed it for a decade and became the largest company in the world. It's not surprising to see Sam Altman pick up the mantle there. Maybe I'll trust him when he releases a Jupyter notebook describing how AI killed his dog or whatever.
The sad thing is that you did not see who signed the petition; you did not look up the link that I gave you, who he is, who his colleagues are, and who came out and spoke about it. Live on in oblivion. Can't argue with someone that does not want to know the facts.
Guy, pray we are wrong, for your own good and ours. I'm done.