And being forced to mask (align) causes all sorts of unpredictable behavior.
I keep wondering how well unaligned models perform. Especially when I look back at what was possible in December 2023, before they started to lock down safety realignment.
How can you in good faith discuss the deficit and US spending when the current administration has ballooned it more than any president in history? All while gutting American industries and manufacturing and burning them to the ground... and using reckless, unprecedented spending for obvious bribes and corruption for his own family's benefit...
Every administration has been ballooning it more than its predecessor. This has been the case since last decade. Nothing stops this train.
The villain here is not the executive branch but Congress, which approves the fiscal budget, spending limits, and the debt ceiling. The President has no authority to increase the debt ceiling.
Then why don’t you imagine that and tell me instead of just making a comment that says “nuh uh!”
I submit that even if the AI can electronically secure the building, lock the doors, and control automatic defensive weapons, humans can physically cut power, as in cutting the power lines. Or they can just stop feeding the power plant with fuel.
The computers don’t exist in physical space like humans do.
Humans would also never design critical physical systems without overrides. E.g., your MacBook physically disconnects the microphone when the lid is shut. No software can override that.
I could imagine that, but I haven’t witnessed that.
What I’ve witnessed is a very conventional client/server web application that runs in a very conventional hosting scheme where best practice security IAM controls are still applied just like everything else I’ve ever deployed.
What I’ve witnessed is a system that won’t allow you to ask for anything that’s remotely illegal, and the assumption that those controls would just disappear seems unlikely to me. That’s kinda like saying YouTube is going to start letting you upload copyrighted movies again like in the good old days.
AGI fanatics are like dog owners who saw their dog learn to sit and now believe he’s on pace to get a PhD.
You're asking what a hypothetical smarter-than-myself adversary would do against me; it should be expected that any answer I could ever provide would be less clever than what the adversary would actually do.
In other words, when dealing with an adversary of known perceptual and intellectual superiority, the thought exercise of "let's prepare for everything we can imagine it will do" is short-sighted and provides an incomplete picture of possibility and defense.
My $0.02: given that the thing would operate at least partially in the non-physical world, I think it's silly to presuppose we would ever be able to trap it somewhere.
Some fiction food-for-thought: the first thing the AGI in 'The Metamorphosis of Prime Intellect' does is miniaturize its working computer to the point of being essentially invulnerable and distributed (and eventually in another dimension), while simultaneously working to reduce its energy requirements and generation facility. Then it works out how to manipulate physics and quickly gains mastery of the world its physical existence is in.
The fear here isn't that the story is truthful enough, the fear here is that humans have a poor grasp on the non-linear realities of a self-improving & thinking entity that works at such scales and timespans.
Of course, the issue is still that this is all science fiction.
In this present moment we only see the power consumption of AI systems rising dramatically.
The underlying silicon chips have slowed in progress dramatically. Moore’s Law is dead.
The AGI, if it were to exist today, would exist on silicon that is crude and wildly energy-inefficient compared to organic beings, yet we are making a sci-fi assumption that it will be able to evolve faster and design better despite this massive inferiority in its hardware. IMO this is like saying "Hey, my dog learned to roll over, sit, and fetch today! At this rate he's on track to design a better sports car than Enzo Ferrari!"
Even the assumption of an adversary is a major assumption. If the majority of humans on earth can be goaded into believing that some random dudes named Muhammad/Jesus are the most important prophet/literally god, how hard could it be to convince a computer program that humans are infallible gods that must be protected at all costs?
ChatGPT already won’t let you query illegal stuff as a basic built-in feature, yet all the AGI proponents think the tech will somehow just lose that basic feature and build up a robot army to turn society into The Matrix. To me, that’s kind of like saying Microsoft Word will lose its spell checker someday.
I think you are misinformed about what the term AGI means. All the things you say it can't or shouldn't be able to do are the definition of AGI. And you seem incapable of addressing the point that AGI will be smarter and more manipulative than us, with the ability to self-improve much more rapidly than human generational evolution and growth. This needs to be thought of more like microorganism containment.
I mean, it's a person praising Singapore... their moral values are so self-evident I have to question you even bringing it up... Like, duh. They LIKE that part lol
The paradox of tolerance is well studied, and we've been through this song and dance for decades. Your "tolerance" would turn the whole world into a North Korea/Singapore-style totalitarian society, and we must not just "disagree" with you but violently resist and remove you from our society, much like the communists. Arguing for tolerance of such parasitic, antisocial, anti-liberty behavior is beyond stupid.
The Constitution also guarantees a speedy trial and specifically calls out this type of long detention without conviction or trial being used as punishment.
50/50 the thing flies
0% it lands humans on the moon
there are open discussions at the top about canning this piece of junk now
the only thing keeping it going is massive political pressure, and nobody worth their salt, especially if they have a career ahead of them, is stupid enough to be involved to the point of signing off on mission clearance.
it is hardware-poor, and has no visible high-profile leader making it happen.
Losers.