
But they are treated as holy scripture ...

How do you know? If you have access you are not unbiased, otherwise you cannot know by definition.

AI companies routinely claim that something is too dangerous to release (I think GPT-2 was the first case) for marketing reasons. There are at least 10 documented high-profile cases.

They keep it secret because they now sell to the MIC, with bullshit stories about China and North Korea, as well as to companies who are themselves invested in the AI hype.


I prefer a more cautious approach than the Musk style where stuff gets fixed after the fact.

And with GPT-2 the worry was mass emails that were much better written, more detailed, and more personal, plus social media campaigns etc.

How many bots are deployed today on X, influencing democracy around the globe?

It's fair to say it had an impact, and LLMs still do.


GPT-2 was obviously too dangerous to release at the time! It's OK-ish now, when the knowledge that AI can produce arbitrary text is widely shared. It would have been a disaster for scammers and phishers to get GPT-2 at a time when almost everyone still assumed that large volumes of detailed text proved there's a real human being on the other end of the conversation.

And, as we all know, humans can't be scammers. They need the robots to lie.

> How do you know? If you have access you are not unbiased, otherwise you cannot know by definition.

The platonic ideal of how to dismiss any argument by anyone about anything.

