Damn, who knew there would be an arms race to gobble up those domains and be the sole judge to decide if we reached arbitrary milestones? I should have thought of that sooner :P
That’s a coward’s take, and even if you are taking the middle-ground route there are sufficiently legal ways. You just won’t find much enthusiasm about it among people here because the demographic of this platform is living comfortably.
What's strange is I just saw a TikTok video in my Waymo earlier.
"a perfect explanation of quantum tunneling"
It was a baseball game. A pitcher had thrown a pitch, and there was some optical illusion of the ball appearing to go through the batter's swing. It looked like the ball went through the bat. Apparently this is quantum tunneling. The atoms aligned perfectly and the ball passed through.
How does a model “trigger” self-harm? Surely it doesn’t catalyze dissatisfaction with the human condition to the point of causing it. There’s no reliable data that can drive meaningful improvement there, so it is merely an appeasement op.
Same thing with “psychosis”, which is a manufactured moral panic.
If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.
By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.
The article you linked talks about the voice personality prompt for "unhinged mode", which is an entertainment mode. It has nothing to do with the code writing model.
The fact that that represents something the folks at xAI think would be entertaining can certainly be a basis for thinking twice about trusting their judgement in other matters, though, right?
I got a lot of entertainment out of it. Don't knock it till you've tried it; it's just a prompt.
The great thing about xAI is that it's just one company, and there are other AI companies whose AIs match your values, even though the actual differences between Grok, ChatGPT, and Claude are minimal.
An AI will be anything that the prompt says it is. The mere existence of a prompt doesn't condemn the company.
> An AI will be anything that the prompt says it is
Within the boundaries of pre-training, yes. It is definitely possible, in training and in fine-tuning, to make an LLM resistant to engaging in the role-playing requested in the prompt.