Any time I see a company supporting regulation it would also have to comply with, all I can think is that the proposed rules cover something the company is already doing, or aren't a burden for it, but would raise the barrier to entry for new competitors.
Don't you think it's a little circular that you always default to assuming that their support is about regulatory capture?
Like, what if they had that opinion before they built the company? If you saw evidence of that (as is the case with Anthropic), would that convince you to reconsider your judgement? Surely you'd grant that some people support regulatory frameworks some of the time... and unless they banned themselves from every related industry, those might be frameworks they one day become subject to?
All I'm doing is challenging a vague accusation. You can claim regulatory capture for any proposal. All policy has tradeoffs; speculating vaguely about negative consequences does not help me weigh that balance.
In a profit driven world, regulatory capture is the default assumption. Genuine corporate philanthropy is the exception that deserves special attention.
Suspecting a company of acting in its own profit-enhancing interest is borderline tautological.
> Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.
> Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
> Provide clear whistleblower protections that cover violations of these requirements as well as specific and substantial dangers to public health/safety from catastrophic risk.
So, just a bunch of useless bureaucracy acting as a moat against competition. The current generation of models is nowhere close to being capable of producing any sort of catastrophic outcome.
Because those models already exist and can be run on consumer-available hardware with no real issues. All this does is create barriers for Anthropic's competitors.
> Develop and publish safety frameworks, which describe how they manage, assess, and mitigate catastrophic risks—risks that could foreseeably and materially contribute to a mass casualty incident or substantial monetary damages.
Develop technology to monitor user interactions. They're already doing this anyway [0].
> Report critical safety incidents to the state within 15 days, and even confidentially disclose summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models.
Share user spy logs with the state. Again, already doing this anyway [0].
I guess the attitude is, if we're going to spy on our users, everyone needs to spy on their users? Then the lack of privacy isn't a disadvantage but just the status quo.
I don't think 'critical safety incidents' or 'summaries of any assessments of the potential for catastrophic risk from the use of internally-deployed models' are user logs? Unless I'm misunderstanding.
Anthropic saying they want "stronger" requirements is easy when you're helping write them. The tell is that they're endorsing a bill that just happens to match what they're already doing: classic regulatory capture, where industry turns its business model into law and calls it "safety."
Anthropic is by far the most moralizing and nanny-like AI company, complete with hypocrisy (Pentagon deals) and regulatory capture/ladder-pulling (this here).
Don't worry about it; they're not well managed (you can see it in their ops, their products, etc.), so they won't stick around. They're going to get ground to dust by Google and OpenAI at the high end and by the Chinese models at the low end. They'll end up in Amazon's pocket: Jeff's catch-up play in the AI war after sitting out the bidding wars.
That’s just politics: basically they’re saying “let us do our thing, otherwise China will win this race”.
And it’s also market segmentation: they need to separate themselves from the others, and want to be the de-facto standard when people are looking for “safe” AI.
I remember when Anthropic first started and waxed poetic about intentions. This, the recent case, and the DoD (sorry, Department of War) partnerships seem to show just how much of that was pure bullshit.
Curious how all the employees who professed similar sentiments, EA advocacy, etc., justify their work now. A paycheck is a paycheck, sure, but when you're already that well-off, the rest of the world will see you for what you really are *shrug*.
FWIW, an executive order can't override statute and rename a federal department on its own. The official name is still the Department of Defense; "Department of War" is merely an authorized secondary designation.
Officially changing the name requires an act of Congress.
Could you clarify what you mean? I understand why the DoD partnership is ethically dubious, but I don’t understand why SB 53 is bad. It seems like the opposite of a military partnership.
Are you saying the only ethically valid path is for all companies to oppose all regulation? Supporting any regulation at all can only be from bad motives, and therefore should be avoided?
>Supporting any regulation at all can only be from bad motives, and therefore should be avoided?
It's just a vibe-check heuristic -- if the regulated party throws a tantrum about how switching to USB-C charging or opening up the app store will put them out of business (spoiler -- it never does), the regulation is probably a good one; if the regulated party cheers it on, it may be there to stifle competition.
The opposite is true with certain countries -- whenever you hear one loudly insisting that "sanctions don't hurt at all and only make me stronger," you know it hurts.
How on earth did you come to the conclusion that anyone here is talking about all regulation?
This is a very specific form of regulation, and one that very clearly benefits only incumbents with (vast sums of) previous investment. Anthropic is effectively advocating "regulation for thee, but not for me."
Of course they won't; them being hypocrites is exactly my point. I just hope the world can call a spade a spade and roll its eyes at the future statements of safety/inclusion they love to profess.
They had a relationship with the NSA long before they partnered with the Department of War; according to Dean Ball, former Trump White House AI policy advisor, in a recent interview with Nathan Labenz, they were the first of all the frontier model companies to do so.
Doesn't matter. Anthropic's position is untenable, and unlike OpenAI, which is planning to pivot to consumer gear (i.e. Apple 2.0), Anthropic doesn't have another play, so when Google has fully mobilized, they're done.
Catastrophic AI risk is such a larp. The systems are not sentient. The risk will always be around the human driving the LLM, not the LLM itself. We already have laws governing human behavior, company behavior. If an entity violates a law using an LLM, it has nothing to do with the LLM.
OP isn't talking about systems at large, but specifically about LLMs and the pervasive idea that they will become AGI and go rogue. Pretty clear context given the thread and their comment.