mkozlows's comments

I think it's definitely true that you should never count on a company to do principled things forever. But that doesn't mean that nothing is real or good.

Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.

In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.


Google's support for the open web is a great example, because it was obviously a good thing, but it was also obviously built into their business model that they'd take that position. That made them a much more trustworthy company in those days, because abandoning that position would have required not just losing money for a while but changing their internal structure.

So there are two possibilities here:

1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.

2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.

Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.


> OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand.

Just to be clear, you believe that the correct, principled stand is that it's OK to use their models for killing people and civilian surveillance?

Both OAI and Anthropic have the same moral leg to stand on here; OAI is just not being hypocritical about it.


If you believe that any country should have a military and intelligence apparatus, the job of that apparatus is to kill people and surveil foreigners. I do think the US government should have a military and intelligence apparatus. Therefore, any company that works with it, from suppliers of clothing and food to suppliers of compute and AI, is supporting an organization with that mission.

The US military _does not_ need to build autonomous weapon systems and _should not_ surveil US citizens broadly.


If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.

Depends on the company, but I think it's fair to say that not every company has a roadmap to infinite growth.

... but you have a staff that can come up with ideas, and now you can say yes to more of them.

"Infinite growth" framing is asking a lot, but for most of my career, I've seen teams, departments or companies solicit ideas of what to do next quarter/year/whatever, and really aggressively winnow it down -- in large part b/c there weren't enough people to do it (and we could only afford so many people).

And we were _bad_ at prioritizing; we'd often have a list of multiple things declared P0, a longer list of things called P1, and a stack of stuff that didn't make the cut, to maybe revisit in the future.

But if the same number of people can build and ship and iterate faster, then why not do more?


You can say yes to more of them, _but they still need to be worth it._ If you have an infinite well of ideas to grow your business, awesome. But there are a lot of companies that just don't have those ideas, where their growth is limited by some other factor related to the market they're in.

My experience is that "asking staff for ideas" does not lead to successful products. Sometimes, sure, but in general it does not.

I've never seen a roadmap planning process that didn't involve some component of asking departments and teams what needs to be done.

To the extent you have successful products, it's because you have product managers and engineers and data scientists and depending on the product, integration/forward deployed staff. These should be the people with a view to how the product needs to meet the needs of future customers, the challenges faced by existing customers, and the technical components needed to get there. I'm not saying you encourage them to just spitball ideas from ignorance, I'm saying you solicit their expertise on the limits and needs of your products, systems, tools, processes, messaging etc.


This depends on your goals. If your goal is to drive efficiency into your processes, drive down tech debt, or fix pain points for customers of your existing products, sure. Most people at your company will have thoughts, and lots of them will have good ideas.

If your goal is to pivot the company into new verticals, or to develop an entirely new product, then "asking staff for ideas" isn't a likely way to succeed.


I didn't explain why. Here is why.

Most of the staff doesn't have the visibility into the business to understand what may or may not make money. You can have a great idea, even one that could be a successful product, but it could still be a bad fit for the business.


The personal computer, laptops, web browsers, cell phones, smartphones, AJAX/DHTML, digital cameras, SSDs, WiFi, LCD displays, LED lightbulbs. At some point, all of these things were "overhyped" and "didn't live up to the promise." And then they did.

I feel like the Ergodox was traditionally a lot of people's first wacky split keyboard -- but for those that stuck around, it was rarely their last. These days, there are so many better options that it's hard to recommend Ergodox despite its historic importance.

Can you recommend one that is clearly better than the Ergodox in every way?

Lots of options, but the one I use and would recommend is the Iris: https://keeb.io/products/iris-se-kit

Advantages over the Ergodox:

1. No pointless layer of "inner" keys that you never use

2. The thumb keys are closer to the main keyboard, so more of them are in a natural reach rather than being a big stretch (this is the biggest one in usage)

3. Uses all 1u keys, so greater keycap compatibility (any ortho kit will work)

4. If you're comparing to the Ergodox EZ, the construction is better, with a metal case instead of plastic

5. Takes up less desk space

And it's still QMK, still hotswappable, still has the columnar layout. I don't think the Ergodox offers anything over it.


> No pointless layer of "inner" keys that you never use

I use all the keys on my ergodox-ez, so this keyboard doesn't have enough keys for me to switch to.


The Ergodox's extra keys are not a disadvantage; they are a trade-off.

That's true for the "Path 1" keyboards this article talks about. The other ones definitely take some time.

Conversely, I'm totally sold on it. The shoulder-hunching thing is so real.

The thing I thought was ergonomic BS was the benefits of columnar/ortho layouts; everyone talked about how your fingers just moved vertically and it was so much better for them, and I rolled my eyes. But dang if it hasn't proved to be meaningfully true for me, too -- when I have to type on a legacy keyboard, I can clearly feel the pain in my fingers. (The disclaimer here is that my fingers are totally screwed up; if you don't feel pain normally, this probably matters less.)


The wild thing is, that "plateau" link is from September 2025, aka two months before Opus 4.5.

Yeah, it's not a plateau.


I feel like the bigger issue is that Cruise evidently had an unsafe company culture (like Uber): It wasn't just that they had an incident, it's that they lied about the incident and tried to cover it up.

This has been a pretty consistent pattern -- Cruise was always less transparent about its safety data than Waymo, and its claims tended to be opaque and non-measurable, whereas Waymo was partnering with insurance companies to get hard data.

Waymo is going to have incidents, too, but I think they have made the (correct) decision that being open and transparent about safety stuff is the way they move past those; Cruise made a decision in the opposite direction, and it killed them.

