Hacker News | Nuzzerino's comments

Damn, who knew there would be an arms race to gobble up those domains and be the sole judge to decide if we reached arbitrary milestones? I should have thought of that sooner :P

That’s a coward’s take, and even if you take the middle-ground route, there are perfectly legal ways to do it. You just won’t find much enthusiasm for it among people here, because the demographic of this platform lives comfortably.

It’s okay to look at things as art. Not everything needs to be explained to have value.


What's strange is I just saw a TikTok video in my Waymo earlier.

"a perfect explanation of quantum tunneling"

It was a baseball game. A pitcher had thrown a pitch, and there was some optical illusion of the ball appearing to go through the batter's swing. It looked like the ball went through the bat. Apparently this is quantum tunneling. The atoms aligned perfectly and the ball passed through.


How does a model “trigger” self-harm? Surely it doesn’t catalyze the dissatisfaction with the human condition, leading to it. There’s no reliable data that can drive meaningful improvement there, and so it is merely an appeasement op.

Same thing with “psychosis”, which is a manufactured moral panic.

If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.

It’s time to stop parroting slurs like that.


‘How does a model “trigger” self-harm?’

By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.


At least they didn’t claim to invent AGI this time from prompts only… lol


There are other languages like Linear A that could use attention as well!


Ever tried to get a remote job lately?


How can you do this in the spirit of what the author is talking about if you have some kind of chronic pain?


That's not necessarily a bad thing.


The article you linked talks about the voice personality prompt for "unhinged mode", which is an entertainment mode. It has nothing to do with the code writing model.


The fact that that represents something the folks at xAI think would be entertaining can certainly be a basis for thinking twice about trusting their judgement in other matters, though, right?


I got a lot of entertainment out of it. Don't knock it till you've tried it; it's just a prompt.

The great thing about xAI is that it is just one company, and there are other AI companies whose models may better match your values, though in practice the differences between Grok, ChatGPT, and Claude are minimal.

An AI will be anything that the prompt says it is. The mere existence of a prompt doesn't condemn the company.


> An AI will be anything that the prompt says it is

Within the boundaries of pre-training, yes. It is definitely possible, in training and in fine-tuning, to make an LLM resistant to engaging in the role-playing requested in the prompt.


If they represent it as entertainment… making fun of what you see as the other side's most extreme views is a common genre.


It's the next iteration of the PewDiePie pipeline. The jokes end in genocide. Not a joking matter.


That’s fair… for every 99 for whom it cements their ridicule, there might be one who takes it seriously, and maybe that is dangerous…


It's a comment about the company/brand behind the models, not the individual models themselves.

