_alternator_'s comments

Wonder how a court would treat it if users just replied to the email updating the terms of service with amended terms of their own, claiming the company had accepted them by not doing anything. (E.g., add stringent PII protections, no-tracking requirements…)

My guess is that you would probably get kicked off the service if anyone actually reads your TOS, so make sure your updated TOS adds onerous cancellation charges payable to the user.


In the US at least, the courts would probably side with the big corporation, since doing so seems to be the legal precedent.

I could imagine an AI sidekick that does all this work for you, and always has the last word because it'll never give up.

A place like Meta or Microsoft would tell you to pound sand, but an aligned army of collective-bargaining agents might succeed in removing a specific term from a smaller service.


Devil's advocate take: I think the quality of Show HN projects is in fact getting higher, at least the ones that land on the front page. The issue is that projects that used to take weeks, months, or even years of work can now be done in a weekend or so. It's been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren't that much effort _with AI assistance_.

So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.


I disagree, in that the last few I can think of have involved things like services that do not really explain what they do properly and then ask for full permissions to your GitHub account, or claim to be far more than they are (i.e. "I made this thing" but it's just a shim for someone else's stuff).

But the issue is not only Show HN; even generic posts are increasingly from new accounts, and some of them are reaching the front page too.

One example: https://news.ycombinator.com/item?id=46884481


This made the frontpage two days ago: https://news.ycombinator.com/item?id=47275291

Read the comments and you'll see it took time and effort, from people who know at least a little about what they're saying, to point out that it's AI slop that doesn't live up to the claims written in its own docs.


I'd pose a different perspective: Show HN in non-hype cycles tends to have a higher self-imposed bar before posting. With the democratization, there are many posts where the time from first commit to Show HN is on the order of hours, 25 minutes being the shortest I have personally seen. I would contend that community standards have not changed meaningfully, but because the underlying mix has changed, the front page changes too.

That being said, there is a sub-trend of low-quality submissions, above the usual baseline, that are obviously trying to plant a money tree. This is largely driven by the "look ma, no hands" AI tools like OpenClaw, overlapping (Venn-style) with the crypto crowd looking to make easy money with near-zero effort.

With that being said, I have definitely seen some real bangers that have large AI contributions. So I am generally in favor of minimally changing how HN works today. One small change would be adding a note to the Guidelines and FAQ, giving the agents something to read before posting (so that they know automated submissions are not allowed [1]).

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Well it's not just that... picture a community group talking among themselves and then some rando shows up, yells "I built this thing that you all might like", hangs out for an hour and then is never heard from again.

I think that's great in moderation as it stimulates ideas and discussions, shows us what folks are working on, etc... but this can't become Product Hunt. The reasons for posting here should be vastly different than posting on Product Hunt.


Except the quality hasn't been getting higher. Most of those projects wouldn't be considered HN-worthy if a human being had made them; they only get the praise they do because they were generated by an LLM, and as such they aren't projects so much as demonstrations of the latest model's capabilities.

Also, the purpose of Show HN, and HN in general, is to spark intellectual curiosity and create interesting conversation, and nothing about LLM-generated code does that, because the person who prompted the AI to make it doesn't understand it and can't discuss it in any depth.


> It’s been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren’t that much effort _with AI assistance_.

This also appears to cause a serious shift in the kind of projects that are submitted (i.e.: towards things that are much more accelerated by AI assistance).


I was thinking about this the other day. If someone made TempleOS today, people wouldn't be as impressed, because they'd just assume they used AI.

They'd assume this even if they hadn't used AI, and even if AI didn't have the ability to pull it off.


That dev made many videos about its creation and their motivations, though, and given their personality I think people would be understanding.

Yeah, live streaming it would be a good option, I thought of that too.

Not sure I understand your 2nd argument though?


> Not sure I understand your 2nd argument though?

Sorry, I meant that in the context of the original dev, their earnest fixation/obsession with their creation came across in their personality, which I think made people sympathetic.


Another thing that hasn't been mentioned here: there is a relationship between volume and pitch. In short, you strike a string hard and it goes a bit sharp. The issue is that the standard tuning math linearizes the string physics, but a hard-struck string is effectively a little tighter than the idealized version.

Humans are also not perfect at fretting with the exact same pressure every time, or without inducing some bend in the strings. This is really noticeable with the G string, which always sounds a bit out of tune while playing, because standard tuning drops a half step at that point in the pattern (G to B is a major third rather than a fourth) as a trade-off to make it easier to form chords.

James Taylor compensates by tuning everything down a few cents, between -12 at the low E and -3 at the high E, with a little break in the pattern with -4 cents at the G to deal with its weirdness. Good electronic tuners have "sweetened" presets which do something similar.
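For anyone curious what offsets like that work out to numerically, here's a minimal sketch (plain Python; the standard-tuning frequencies are the usual A440 values, and the A, D, and B offsets are hypothetical fill-ins since only the low E, G, and high E values are given above):

    # Standard open-string frequencies in Hz (A440 reference).
    standard = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
                "G3": 196.00, "B3": 246.94, "E4": 329.63}

    # Per-string offsets in cents; E2, G3, and E4 from the comment above,
    # the rest are assumed purely for illustration.
    offsets = {"E2": -12, "A2": -10, "D3": -8, "G3": -4, "B3": -6, "E4": -3}

    # 100 cents = 1 semitone, 1200 cents = 1 octave,
    # so f_detuned = f_standard * 2 ** (cents / 1200).
    for name, f in standard.items():
        detuned = f * 2 ** (offsets[name] / 1200)
        print(f"{name}: {f:7.2f} Hz -> {detuned:7.2f} Hz ({offsets[name]:+d} cents)")

At -12 cents the low E drops from 82.41 Hz to roughly 81.84 Hz, which is why the effect is subtle on single notes but audible in chords.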


Peterson guitar tuners can do custom tunings, and have the James Taylor tuning built in as a preset. (On Peterson tuners, it's called the 'acoustic' preset, but is actually the JT tuning.)

I think a tragic mistake like this was foreseeable (in a vague sense), but I highly doubt that anyone intentionally bombed an elementary school full of children.

The NYT had some good reporting on this, and you can see how the mistake was made. The site was part of the IRGC base until 2016; then it was fenced off and turned into an elementary school. The "shooter" (in this case, the USA) had a duty to check that the target was still a valid military target. That verification, if it was done at all, was clearly the problem.

I'm sure there is someone directly responsible for this mistake who is going to have a hard time living with themselves. But like I said, starting a war leads to inevitable tragedy, and I doubt the people who are indirectly responsible will ever recognize their culpability in this.


It really doesn't matter whether it was a mistake or how the mistake was made. If it were your kid's elementary school that got blown up, would you say "Oh, well, it wasn't intentional. The bad guys just had outdated intelligence. These things happen."

> It really doesn't matter whether it was a mistake

It does matter if people go around saying that “they want death, so they are bombing shit indiscriminately.”


I'm not sure how "they want war, so they are bombing negligently" is any different. Or morally better.

It's not, but that's not what the USA wants. They want Iran to stop destabilising the ME, and to eliminate the threat to the USA consisting of the Iranian nuke program, the ballistic missile program, and the religious zeal to use them.

What on earth makes you assert the USA just 'wants war'? If this war goes on for too long, Trump is cooked. He'll lose the election and might even be unpopular enough to cop the prosecution he deserves.


> What on earth makes you assert the USA just 'wants war'?

The "Department of War" they created before promptly starting an absolute textbook War of Aggression is evidence beyond a reasonable doubt that the war was premeditated.


> They want Iran to stop destabilising the ME, and to eliminate the threat to the USA consisting of the Iranian nuke program, the ballistic missile program, and the religious zeal to use them.

US and Israeli leaders have said recently that the war will continue until Iran is not a threat, for mostly the exact reasons you just listed. I'm not sure how you bomb the religious zeal out of someone.

I was listening to a fairly right-wing British pundit today, who is very publicly funded by British and European NATO interests, and he said that the US saying it will continue to bomb Iran until it is not a threat reminded him a lot of one of Russia's stated Ukraine war goals: "we will de-Nazify Ukraine".

Very open-ended, nebulous goals, with the benefit of being easily stretched around whatever agenda the invading party has at the moment.


> stop destabilising the ME

The USA is the destabilizing force. In the case of Iran specifically, what happens today is in many ways a consequence of the 1953 coup.


> “they want death, so they are bombing shit indiscriminately.”

It's still the most probable explanation


Disagree, negligence seems more likely

https://www.pbs.org/newshour/world/hegseth-insists-the-iran-...

"No stupid rules of engagement, no nation building quagmire, no democracy building exercise, no politically correct wars. We fight to win, and we don't waste time or lives," Hegseth said.

Words of your Secretary of War, not mine.

This is not a woke war. This is a war where you bomb schools and kill children.


The ridiculous renaming to "Department of War" supports this attitude, as well. They're declaring to everyone our intent to be belligerents. That the US military is meant to be aggressors and instigators, rather than defenders. All signs point to an administration bent on aggression and destruction.

I mean, to be fair, the US always has been the instigators, but it's now official, something this administration is proud of.


And? It's quite a leap to take from that statement that they intentionally bombed a school. In fact, if they were trying to bomb schools, then it's quite the coincidence that they missed all the rest, and just happened to hit the one that used to be a military base.

They made it as clear as possible they don't give a shit about collateral damage. You have some very immense reading difficulties.

Surgical_fire wrote:

"Hitting a school was not a mistake, it was the point."

And

"This is not a woke war. This is a war where you bomb schools and kill children."

I also never said they especially cared about collateral damage, try not to project your opinions onto other people's comments.


And schools, hospitals, aid workers, etc have of course been "khamas" and "irgc" to these two invaders so that's hardly surprising.

Aka they don't care about innocent deaths, they want to cause deaths.

You're intentionally missing the point. Every time a bomb drops we're rolling the dice. Hits on civilian targets are inevitable, just like bugs are inevitable. The only solution is not to go to war at all. Don't blame the person who dropped the bomb, blame the people who ordered the bombs to be dropped.

No, I firmly believe that decades of dehumanization of Iranians in particular and Muslims in general makes this sort of "tragic mistake" desirable.

I don't think whoever was responsible for this gives many fucks about the lives of Iranians.

If a foreign power bombed anything in the US and children died people would just consider them monsters, without further considerations. No one would be pondering about faulty intel.

I refuse to launder the vileness of the aggressors here.


> but I highly doubt that anyone intentionally bombed an elementary school full of children

Hegseth said to your face "No stupid rules of engagement", "This is not a politically correct war"

These are the people who have been purposely and loudly defending Israel bombing innocent people. They genuinely believe, as they say to your face, that it is important and necessary to be brutal and extreme to win war.

Intentionally disregarding rules of engagement and the protection of innocent life IS intentionally bombing that school. Civilian casualties are a reality of war and the best you can do is work your ass off to reduce them, so openly advocating for NOT doing that is intentionally killing people.


Trust me, I’m not trying to defend the leadership of the DoW. But I do believe that there is a difference between reckless indifference and actually intentionally bombing a girls school.

Both sound like war crimes to me, but the latter sounds implausible given the known facts. Let’s not redefine words like ‘intentional’ just because we are appalled. Giving something awful an “awfuller” name is not going to help.


Largely agree, with a bit of clarification. Junior devs can indeed prompt better than some of the old timers, but the blast radius of their inexperienced decisions is much higher. High competence senior devs who embrace the new tools are gonna crush it relative to juniors.

It's like having an early/broken chess engine.

An amateur with a chess engine that blunders 10% of the time will hardly play much better than if they didn't use it. They might even play worse. Over the course of a game, those small probabilities stack up to make a blunder a certainty, and the amateur will not be able to distinguish it from a good move.
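To put a rough number on that (a toy calculation, not from the comment above; it assumes a flat 10% blunder rate per suggestion and a 40-move game, both picked purely for illustration):

    # Chance of at least one blunder over a game if each engine suggestion
    # independently has a 10% chance of being a blunder.
    p_blunder = 0.10
    moves = 40  # assumed typical game length

    p_at_least_one = 1 - (1 - p_blunder) ** moves
    print(f"P(>=1 blunder in {moves} moves) = {p_at_least_one:.3f}")  # ~0.985

So over a full game the amateur is almost guaranteed to be handed at least one blunder they can't spot.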

However, an experienced player with the same broken engine will easily beat even a grandmaster since they will be able to recognise the blunder and ignore it.

I often find myself asking LLMs "but if you do X won't it be broken because Y?". If you can't see the blunders and use LLMs as slot machines then you're going to spend more money in order to iterate slower.


> Junior devs can indeed prompt better than some of the old timers

I guess? I don't really see why that would be the case. Being a senior is also about understanding the requirements better and knowing how/what to test. I mean we're talking about prompting text into a textarea, something I think even an "old timer" can do pretty well.


I've seen a few people I would consider senior engineers, good ones, who seem to have somewhat fallen for the marketing if you look at the prompts they're using. Closer to a magical "make it so" than "build the code to meet this spec, that I wrote with the context of my existing technical skills".

I'm not sure why junior engineers would be any better at that, though, unless it's just that they're approaching it with less bias and reaping beginner's luck.


Man 404 Media is really crushing it lately. Thanks to the team!

Another bit that I'm surprised has gotten completely glossed over: there is a deep relationship between _entropy_ and mass which puts bounds on the amount of information you can place in a given volume.

TLDR: a given region of space can’t have more entropy than a black hole of the same volume. Rearranging terms, you find that N bits of information (for large N) has an equivalent black hole size, which in turn has a mass…
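As a back-of-the-envelope sketch (my own plug-in of the standard Bekenstein-Hawking entropy, not something from the article; the 10^30-bit figure is picked purely for illustration): a Schwarzschild black hole of mass M has entropy S/k = 4*pi*G*M^2 / (hbar*c), so N bits (one bit = ln 2 nats) corresponds to a mass M = sqrt(N * ln 2 * hbar * c / (4*pi*G)).

    import math

    # Physical constants, SI units
    hbar = 1.054571817e-34   # J*s
    c    = 2.99792458e8      # m/s
    G    = 6.67430e-11       # m^3 kg^-1 s^-2

    def black_hole_mass_for_bits(n_bits: float) -> float:
        """Mass (kg) of a Schwarzschild black hole whose Bekenstein-Hawking
        entropy equals n_bits: S/k = 4*pi*G*M^2/(hbar*c), one bit = ln(2) nats."""
        return math.sqrt(n_bits * math.log(2) * hbar * c / (4 * math.pi * G))

    print(black_hole_mass_for_bits(1e30))  # ~5e6 kg for 10^30 bits

So the chain "bits -> equivalent black hole -> mass" can be made concrete this way, and even an astronomically large store of bits maps to a very real mass.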


Anyone have a link to the full text of the letter?

I found a copy on this website: https://www.teamblind.com/post/darios-email-to-anthropic-att...

I don't know how reliable that source is. In any case, here's the text from that link, for posterity:

"I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everything sees it for what it is. Although there is a lot we don’t know about the contract they signed with DoW (and that maybe they don’t even know as well — it could be highly unclear), we do know the following:

Sam’s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that how their contract works is that the model is made available without any legal restrictions ("all lawful uses") but that there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications.

"Safety layer" could also mean something that partners such as Palantir tried to offer us during these negotiations,which is that they on their end offered us some kind of classifier or machine learning system, or software layer, that claims to allow some applications and not others. There is also some suggestion of OpenAI employees ("FDE’s") looking over the usage of the model to prevent bad applications.

Our general sense is that these kinds of approaches, while they don’t have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater. The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn’t "know" if there’s a human in the loop in the broad situation it is in (for autonomous weapons), and doesn’t know the provenance of the data it is analyzing (so doesn’t know if this is US domestic data vs foreign, doesn’t know if it’s enterprise data given by customers with consent or data bought in sketchier ways, etc).

The kind of "safety layer" stuff that Palantir offered us (and presumably offered OpenAI) is even worse: our sense was that it was almost entirely safety theater, and that Palantir assumed that our problem was "you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that’s the service we provide".

Finally, the idea of having Anthropic/OpenAI employees monitor the deployments is something that came up in discussion within Anthropic a few months ago when we were expanding our classified AUP of our own accord. We were very clear that this is possible only in a small fraction of cases, that we will do it as much as we can, but that it’s not a safeguard people should rely on and isn’t easy to do in the classified world. We do, by the way, try to do this as much as possible, there’s no difference between our approach and OpenAI’s approach here.

So overall what I’m saying here is that the approaches OAI is taking mostly do not work: the main reason OAI accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses. They don’t have zero efficacy, and we’re doing many of them as well, but they are nowhere near sufficient for purpose. It is simultaneously the case that the DoW did not treat OpenAI and us the same here.

We actually attempted to include some of the same safeguards as OAI in our contract, in addition to the AUP which we considered the more important thing, and DoW rejected them with us. We have evidence of this in the email chain of the contract negotiations (I’m writing this with a lot to do, but I might get someone to follow up with the actual language). Thus, it is false that "OpenAIs terms were offered to us and we rejected them", at the same time that it is also false that OpenAI’s terms meaningfully protect them against domestic mass surveillance and fully autonomous weapons.

Finally, there is some suggestion in Sam/OpenAI’s language that the red lines we are talking about, fully autonomous weapons and domestic mass surveillance, are already illegal and so an AUP about these is unnecessary. This mirrors and seems coordinated with DoW’s messaging. It is however completely false. As we explained in our statement yesterday, the DoW does have domestic surveillance authorities, that are not of great concern in a pre-AI world but take on a different meaning in a post-AI world.

For example, it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more.

Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about "analysis of bulk acquired data", which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious. On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect. It is currently Pentagon policy (set during the Biden admin) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint.

A lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them.

I think these facts suggest a pattern of behavior that I’ve seen often from Sam Altman, and that I want to make sure people are equipped to recognize:

He started out this morning by saying he shares Anthropic’s redlines, in order to appear to support us, get some of the credit, and not be attacked when they take over the contract. He also presented himself as someone who wants to "set the same contract for everyone in the industry" — e.g. he’s presenting himself as a peacemaker and dealmaker.

Behind the scenes, he’s working with the DoW to sign a contract with them, to replace us the instant we are designated a supply chain risk. But he has to do this in a way that doesn’t make it seem like he gave up on the red lines and sold out when we wouldn’t. He is able to superficially appear to do this, because (1.) he can sign up for all the safety theater that Anthropic rejected, and that the DoW and partners are willing to collude in presenting as compelling to his employees, and (2.) the DoW is also willing to accept some terms from him that they were not willing to accept from us. Both of these things make it possible for OAI to get a deal when we could not.

The real reasons DoW and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot), we haven’t given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we’ve told the truth about a number of AI policy issues (like job displacement), and we’ve actually held our red lines with integrity rather than colluding with them to produce "safety theater" for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc, assumed was the problem we were trying to solve).

Sam is now (with the help of DoW) trying to spin this as we were unreasonable, we didn’t engage in a good way, we were less flexible, etc. I want people to recognize this as the gaslighting it is.

Vague justifications like "person X was hard to work with" are often used to hide real reasons that look really bad, like the reasons I gave above about political donations, political loyalty, and safety theater. It’s important that everyone understand this and push back on this narrative at least in private, when talking to OpenAI employees.

Thus, Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing.

I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!). It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.

Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."


"person X was hard to work with"

Life is this simple: in any argument, when someone attacks the person instead of the topic, that's when you discover that they know their position is indefensible.


This is exactly what it says: the only restrictions are the restrictions that are already in law. This seems like the weasel language Dario was talking about.

Laws that can be changed on a whim by "executive orders", or laws that apparently can be ignored completely, like international law.

Like by an administration who is constantly ignoring and violating both domestic and international law?

Like by an administration that likes to act extrajudicially and ignore habeas corpus?

I wonder where we'd find such a government. Probably shouldn't give them the power to "do anything legal or 'consistent with operational requirements'". That's the power to do anything they want.


No, executive orders can't change law, and international law, unless ratified by Congress, is neither democratically legitimized nor applicable law in the US to begin with.

You mean like the tariffs congress didn't approve?

Dictators rarely gain power legitimately, and always keep it with violence.


There's a stark difference between de jure and de facto here. Executive orders with brazen, tyrannical effects are often reined in late or never.

We just started a war with Iran without congressional approval or briefing, so I'm not sure if law has meaning anymore.

War Powers Resolution. Obviously, there's a law, which multiple presidents have used. Congress can change this law, but there is a law that does give the POTUS this authority.

Nope, the War Powers Resolution gives the president broad authority to respond to an active attack on the United States (which makes sense). But it does not allow the President to unilaterally start an aggressive war against some random country without Congressional approval.

Not that we live in a country where laws or the Constitution matter much right now. It's theoretically possible that some people might someday be prosecuted for breaking laws or violating people's Constitutional rights. But even there, I would expect that many of the lawbreakers will simply be pardoned.


What about the argument that Congress has always gone along with this in the past?

I mean it isn't quite that stark, but the last president that actually asked congress for and got a declaration of war was Roosevelt. The last president that asked for and got permission for the use of military force was George Bush (junior) after 9/11 (obv. he meant against the Taliban).

Which means all US conflicts are "based on" George Bush's approval for use of military force, about one per presidential term: military intervention in Libya, the campaign against ISIS, the campaign against Syria and Iraqi militias/continuation against ISIS, and now Iran. Iran is a different scale I guess, but ...


LOL. You really believe that?

They do note that their contract language specifically references the laws as they exist today.

Presumably, if the laws become less restrictive, that does not impact OpenAI's contract with them (nothing would change), but if the laws become more restrictive (e.g., certain loopholes in processing Americans' data get closed), then OpenAI and the DoD should presumably^ not break the new laws.

^ we all get to decide how much work this presumably is doing


> They do note that their contract language specifically references the laws as they exist today.

Where?

> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

Sounds like it's worded to specifically apply to whatever law is currently applicable, no?


Not that this means the big AI corps should relax their values (it truly doesn't), but I would be extremely surprised if the DoD/DoW doesn't have anyone capable of fine-tuning an open-weights model for this purpose.

And, I mean, if they don't, gpt 5.3 is going to be pretty good help

Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.


Contrary to benchmarks, open weight models are way behind the frontier.

My point is that you don't want a big model for the kind of analysis being discussed here

Even if they were paying frontier prices they would be choosing 5 mini or nano with no thinking

At that point, a fine-tuned open-source model is going to be on the Pareto frontier.


The language allows for the DoD to use the model for anything that they deem legal. Read it carefully.

It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.

As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.

Sam Altman is either a fool, or he thinks the rest of us are.


No, that is incorrect.

This is an objective standard as a matter of contract interpretation. If it was the government’s right to determine the lawfulness of a usage, it would say so. Perhaps it does elsewhere in the agreement, but that’s not the case here.


Ok, honest question: Can you point to language in the contract that definitively limits the use of OAI tools that’s beyond what current laws or regulations require?

Sorry, I think we may be talking past each other. The language you quoted is an objective standard. If, for example, a court ruled that the government had violated the Constitution using the tool, that language would be breached. I don’t think anything I’ve seen (though we haven’t seen the whole agreement!) allows the government to use the product in violation of the law. Anthropic wanted to go further by further limiting the uses in specific cases.

Ok I think we are largely in agreement, though perhaps missing the main point: Anthropic wanted restrictions above and beyond “all legal uses”. This was widely reported in the last few days.

OpenAI is passing off their deal as providing additional safeguards beyond “all legal uses” but the language they’ve released doesn’t seem to support that narrative. I’m incensed, and am attempting to point out the hypocrisy in the hopes that OAI gets some blowback for this cynical stunt.


Ok, but I thought my analysis was pretty clear on that point:

> OpenAI acceded to demands that the US Government can do whatever it wants that is legal.


Both. He is a fool who thinks he knows better than anyone else.
