That seems a bit fatalistic: "we have lost so much because curl discontinued bug bounties." That's unfortunate, but it's very minor in the grand scheme of things.
Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
Indeed, intelligent researchers have used AI to find legitimate security issues (I recall a story last month on HN about a valid bug in curl being found and intelligently disclosed with AI!).
Many tools can be used irresponsibly. Knives can be used to kill someone, or to cook dinner. Cars can take you to work, or take someone's life. AI can be used to generate garbage, or for legitimate security research. Don't blame the tool, blame the user of it.
Blaming only the people is also incorrect. It's easy to see that once the cost of submission dropped low enough relative to the possible reward, bounties would become unviable.
AI just made the cost of entry very low by pushing that cost onto the people offering the bounty.
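To make that concrete, here's a back-of-the-envelope sketch in Python; the bounty size, per-report cost, and acceptance rate are all made-up illustrative numbers, not figures from any real program:

    # Submitter's side of the math; every number here is an assumption
    # for illustration, not data about any real bounty program.
    bounty = 5_000.0       # assumed payout for an accepted report (USD)
    cost_per_report = 1.0  # assumed marginal cost of one AI-generated report
    p_accept = 0.001       # assumed odds a slop report gets paid anyway

    expected_gain = p_accept * bounty       # = 5.0 USD per submission
    print(expected_gain > cost_per_report)  # True: spamming is +EV

As long as that inequality holds, someone will run the loop; the triager's review cost doesn't appear in the submitter's math at all.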
There will always be a percentage of people desperate or unscrupulous enough to do that basic math. You can blame them, but it's like blaming water for being wet.
> In places where guns are difficult to come by, you'll find knife crime in its place.
By how much and how consequential exactly, and how would we know?
There were 14,650 gun deaths in the US in 2025, apparently. There were 205 homicides by knife in the UK in 2024-2025 [0][1]. Check their populations. US gun deaths per capita seem to exceed UK knife deaths by roughly 15x.
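A quick sanity check of that ratio, assuming round populations of roughly 340M for the US and 68M for the UK:

    # Per-capita comparison; the death counts are the ones cited above,
    # the populations are rounded assumptions.
    us_gun_deaths, us_pop = 14_650, 340_000_000
    uk_knife_homicides, uk_pop = 205, 68_000_000

    us_rate = us_gun_deaths / us_pop       # ~43 per million
    uk_rate = uk_knife_homicides / uk_pop  # ~3 per million
    print(round(us_rate / uk_rate))        # 14, close to "roughly 15x"

Same counts as cited, just divided by population.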
> Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.
I think there's a general feeling that AI is most readily useful for bad purposes. Some of the most obvious applications of an LLM are spam, scams, or advertising. There are plenty of legitimate uses, but they lag compared to these because most non-bad actors actually care about what the LLM output says and so there are still humans in the loop slowing things down. Spammers have no such requirements and can unleash mountains of slop on us thanks to AI.
The other problem with AI and LLMs is that the leading-edge stuff everyone uses is radically centralized. Something like a knife is owned by the person using it. LLMs are generally owned by one of a few massive corps, and the best you can do is sort of rent them. I would argue this structural aspect of AI is inherently bad regardless of what you use it for, because it centralizes control of a very powerful tool. Imagine a knife whose manufacturer could make it go dull or sharp on command depending on what you were trying to cut.
The presentation here was really interesting. It felt like reading a magazine story on something back in the day. Wasn't a huge fan of just how much I had to scroll sometimes, but still cool overall.
It's really disheartening to imagine how the victims feel after this. Being so vulnerable to someone you trust, only to learn it was a ruse all along to scam you, is probably one of the most awful feelings I can imagine, on top of the missing money.
I don't find open spaces noisier than cubicles but I am able to easily block out distracting sounds.
I am interrupted, but when I am, it's generally somebody giving me a useful quick update, or an informal greeting from an office buddy who notices I'm making welcoming eye contact.
I don't think I ever felt a lack of privacy in the office, or expected it in any way? I wonder what kind of privacy I would need that the restroom doesn't cover; I'm sure there are some instances, since it's been called out.
Sounds like, oddly enough, eighteenth century London when coffee houses provided venues for business transactions. People (ok men of the right class) toddled around visiting various offices and patronising coffee houses. Everyone knew the players. [2][3]
I think this might be a good development. Meet to drink a beverage and achieve 'common understanding' in the Royal Navy sense, then disperse to various private locations to actually carry out the tasks. It would suit a '15 minute' city layout very well.
I actually think cubicles’ faux privacy might encourage more noise. When I was in cubicles years ago, there were people who would take calls on speakerphone. I’ve never experienced that in an open office space, but it’s hard to know if that’s just because I’ve had more conscientious colleagues in open spaces.
This is needlessly cynical. Cloudflare is hardly "big tech", even if it is a "big" "tech" company. They have no record of killing or abusing open source projects.
Yes, because users don't like it when things don't work, don't think about that time they blindly dismissed a permission notification, and don't understand how to easily undo it.
So we make it easier for the user to actually do the thing they intend to do. Seems good to me.
Had nothing to do with spam. The argument by archive.today that they needed EDNS Client Subnet info made no sense; they aren't anycasting with edge servers in every ISP PoP.
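For context, EDNS Client Subnet (ECS, RFC 7871) lets a resolver pass along a truncated version of the client's network so the authoritative server can steer the client to a nearby edge; it only pays off if you actually have edges to steer to. A minimal sketch with dnspython, where the prefix, resolver, and timeout are placeholder assumptions:

    # Sketch of a DNS query carrying an ECS option via dnspython.
    # The /24 prefix and the 8.8.8.8 resolver are illustrative only.
    import dns.edns
    import dns.message
    import dns.query

    ecs = dns.edns.ECSOption(address="203.0.113.0", srclen=24)
    query = dns.message.make_query("archive.today", "A",
                                   use_edns=0, options=[ecs])
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    print(response)

All the server learns is that /24, and the whole point is proximity-based steering, which is exactly what a setup without distributed edge servers can't use.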
E.g., currently most media snapshots contain wartime propaganda that is forbidden at least somewhere.
RT content is verboten in Germany, DW content is verboten in Russia, not to mention another dozen hot spots.
"Other websites" are completely inaccessible in certain regions. The Archive has stuff from all of them, so there’s basically no place on Earth where it could work without tricks like the EDNS one.
Actually, I'm not entirely sure on how archive.org achieves its resiliency.
It's a rather interesting question for archive.org, if one were to interview them, that is.
Unlike archive.today, they don't appear to have any issues with e.g. child pornography content, despite certainly hosting a hundred times more material.
They have some strong magic which makes the cheap tricks needless.
I believe they're probably trying to get the blog suspended (automatically?), hence the cache busting; suddenly chewing through higher-than-normal resources might do the trick even if it doesn't actually take the blog offline.
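For anyone unfamiliar with the term, "cache busting" here just means making each request unique so no cache layer can absorb it and every hit lands on the origin; a hypothetical sketch (the URL is made up):

    # A fresh query parameter per request guarantees a cache miss at
    # every layer, so the origin does the work. URL is hypothetical.
    import uuid

    def cache_busted(url: str) -> str:
        return f"{url}?cb={uuid.uuid4()}"

    print(cache_busted("https://example-blog.test/post"))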
Well technically a nuclear bomb would also degrade the jacket of fiber cabling pretty badly, but we don't really concern ourselves with that since it means you're dead and the house is gone anyways.
> Their mistakes are going to be highly publicized, but no one is publicizing the infinite number of dumbass things human drivers do every day to compare it to.
Idea: "Waymo or Human," a site like Scrandle where we watch dashcam clips of insane driving or good driving in a challenging situation and guess if it's a human or self-driving car.
Show me a single source supporting this convoluted claim.
> or at all
That's called anthropomorphizing, as noted in my gp, and it is a different phenomenon from empathy.
> Anthropomorphism (from the Greek words "ánthrōpos" (ἄνθρωπος), meaning "human," and "morphē" (μορφή), meaning "form" or "shape") is the attribution of human form, character, or attributes to non-human entities [0]
"There is a medical condition known as delusional companion syndrome where people can have these feelings of empathy to a much more extreme extent and can be convinced that the objects do have these emotions — but it is much less common than the average anthropomorphizing, Shepard said." [1]
Empathy is an emotional response people have to someone or something.
It is an internal (to the person experiencing it) phenomenon. Feeling empathy does not require the object of the empathy to be intelligent, have emotions, or even thoughts. And it does not require the person experiencing it to believe that the object has the attributes, it does not require anthropomorphizing.
People feel empathy towards all sorts of people, things, and groups.