> The vagueness comes from who the "developer" is when the LLM goes awry. Is it OpenAI's fault if a third-party app has a slip-up, or is it the third party's? If a research lab puts out a new LLM that another company decides to put in their airplane that crashes, can the original lab be liable, or are they only liable if they claim it to be an OSS airplane LLM?
Doesn't seem that vague to me. The law says:
> (b) In an action against a defendant that developed or used artificial intelligence
IANAL, but the law doesn't say who is liable, it says who cannot use this as a defense in a civil suit to escape damages. So neither OpenAI nor the third party could, from my read, and either one could be found liable depending on who a lawsuit targets.
All this seems to do is say you can't use "the AI model did that, not me" as a defense to escape damages in a civil suit; it doesn't change the extent to which someone could be held liable for encouraging suicide.
The AI is employing the persuasive skills of, or learned directly from, some fucko suicide cult leaders to purposely talk you into and through doing it. That doesn't seem NEARLY the same in a practical or legal sense.
I suppose Jack in the Box should not be liable for an E. Coli outbreak? Not sure why AI companies (or third party developers who aren't being especially careful in how they use these models) deserve a special exception for selling sausage made from unsanitary sources.
If your Walmart training manual told all greeters to hit customers with sticks, then Walmart is liable, because it trained its employees to do bad things.
If you trained your AI on the persuasive skills of a death cult then you are responsible.
If the parking meter pulls out a chainsaw and kills someone, then either the operator of the parking meter or the manufacturer is liable, depending on whether the manufacturer waived responsibility for the parking meter's actions *and* the operator accepted it. We wouldn't say the parking meter is liable, but we would ban chainsaw-wielding parking meters as well.
> I believe that Silicon Valley possesses plenty of virtues. To start, it is the most meritocratic part of America. Tech is so open towards immigrants that it has driven populists into a froth of rage. It remains male-heavy and practices plenty of gatekeeping. But San Francisco better embodies an ethos of openness relative to the rest of the country. Industries on the east coast — finance, media, universities, policy — tend to more carefully weigh name and pedigree.
I believe I read that 27% of the founders in the YC Spring 25 class went to an Ivy League school and 40% previously worked at a magnificent 7 company. I'm not saying this is any worse than the east coast, but so much for name and pedigree not mattering.
Northern California is what it always has been: the barrier wall of manifest destiny, where instead of crossing the ocean the pioneers and all subsequent generations stayed to incubate the same incentives, relentlessly pursuing the next gold rush. Gold, yellow journalism, semiconductors, personal computing, SaaS, crypto, AI, etc. It's the sink-drain attractor for people looking to improve their fortunes in one way or another, almost always around some kind of bonanza of concentrated opportunity. The notion that it is "meritocratic" is a rephrasing of the ideology that has always existed about the region: you too could get rich here. But I don't really see any difference between the networks of power that exist in SV and those in the rest of the country.
I grew up in the Bay Area and am far happier living outside it. I'm happier to be in a place where art and the humanities are valued instead of cast aside as immaterial or silly or a distraction. I'm happier to live in a place where people have varied interests instead of orienting their lives around whatever the prevailing Big Thing is.
> So the 20-year-olds who accompanied Mr. Musk into the Department of Government Efficiency did not, I would say, distinguish themselves with their judiciousness. The Bay Area has all sorts of autistic tendencies. Though Silicon Valley values the ability to move fast, the rest of society has paid more attention to instances in which tech wants to break things. It is not surprising that hardcore contingents on both the left and the right have developed hostility to most everything that emerges from Silicon Valley.
I see some positive aspects to more inclusive definitions of autism and neurodivergence, but I hate that we're at the point where "trying to get rich at all costs" is now perceived as autistic (and let's be clear: using mobile gas turbines that get people sick to generate power for AI is not "autistic"). Greed is not autistic, but of course the ideology of SV is that nobody there actually cares about money. Why else would they have apartments without furniture and piles of pizza boxes? It must be the autism.
> While critics of AI cite the spread of slop and rising power bills, AI’s architects are more focused on its potential to produce surging job losses. Anthropic chief Dario Amodei takes pains to point out that AI could push the unemployment rate to 20 percent by eviscerating white-collar work. I wonder whether this message is helping to endear his product to the public.
The animating concern of developing AI since 2015 has basically been "MAD" applied to the technology. The Bostrom book mentioned later in this article was clearly instrumental in creating this language to think about AI, as you can see many tech CEOs began getting "concerned" about AI around this time, prior to many of the big developments in AI like transformers. One of the seminal emails of OpenAI between Musk and Altman talks about starting a "Manhattan Project for AI". This was a useful concept to graft the development of these companies onto:
1. Firstly, it's a threat to investors. Get in on the ground floor or you will get left behind. We are building tomorrow's winners and losers and there are a lot of losers in the future.
2. Secondly, it leads to a natural source of government support. This is a national security concern. Fund this, guarantee the success of this, or America will lose.
On both counts, this framing seems to be working pretty well.
I mean, I suppose you can continuously add "critical feedback" to the system prompt to have some measure of impact on future decision-making, but at some point you're going to run out of space, and ultimately I do not find this works with the same level of reliability as giving a live person feedback.
Perhaps an unstated and important takeaway here is that junior developers should not be permitted to use LLMs for the same reason they should not hire people: they have not demonstrated enough skill mastery and judgement to be trusted with the decision to outsource their labor. Delegating to a vendor is a decision made by high-level stakeholders, with the ability to monitor the vendor's performance and replace the vendor with alternatives if that performance is unsatisfactory. Allowing junior developers to use LLMs is allowing them to delegate responsibility without any visibility or ability to set boundaries on what can be delegated. Also important: you cannot delegate personal growth, and by permitting junior engineers to use an LLM, that is what you are letting them try to do.
I think it’s a fair point that Google has more stakeholders with a serious investment in some flubbed AI-generated code not tanking their share value, but I’m not sure the rest of it is all that different from what an engineer at $SOME_STARTUP does after the first ~8 months the company is around. Maybe some folks throwing shit at a wall to find PMF are really getting a lot out of this, but most of us are maintaining and augmenting something we don’t want to break.
Why are we pretending that his comments about any of these things are those of a neutral observer rather than an investor cheerleading his investments? Why should anyone take Garry Tan seriously as a futurist?
Well, that's a good question, but the answer isn't going to be to anybody's liking: because he's got money. People equate having money with wisdom rather than with intelligence, and intelligence is dual use: you can use it for good or for bad just as easily. It may lead to wisdom, but that's fairly rare. Most of the time it just leads to money.
So people will follow those with money (or those they perceive to have money) without much critical thought about where that is going to lead them; they're hoping for wisdom but may end up being misled. That's why all of these ultra-wealthy folks turned on a dime when the political weather changed: they don't really have principles, they just want more zeros.
Most of the money flowing to the big players is tech-giant capex, originally funded from net cash flow and lately financed by debt. A lot of these investors now seem to essentially be making the case that AI is "too big to fail". This doesn't at all resemble VC firms taking a lot of small bets across a sector.
I love TwixT, I discovered the Bookshelf Games version of this recently (a friend of mine's parents had an old copy at their beach house). Also a big fan of Alex Randolph's Ricochet Robots.
Dieter Stein's games are also supposed to be wonderful abstracts (Urbino, Fendo, Tintas) though I haven't had a chance to play those yet.
Actually what he said was "The MAGA Gang is desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them and doing everything they can to score political points from it." and it's been taken for granted that what he meant is that Tyler Robinson is MAGA, but that's not strictly what he said.
1. There are a lot of times that Trump says things that people take for granted that what he meant was... but that isn't strictly what he said. It seems to me that maybe 60% of the time, what people are up in arms about are things they're sure he meant, but strictly speaking he didn't actually say.
Look, I'm not a Trump apologist. But if you're going to condemn Trump for what it sure looks like he's saying (but he technically didn't quite say), then don't be surprised when other people get condemned by the same standard.
2. If I understand correctly, the shooter's family was fairly conservative. So the right's reaction of "no, he was left" was, at the time, a baseless deflection of baseless accusations.
> There are a lot of times that Trump says things that people take for granted that what he meant was... but that isn't strictly what he said. It seems to me that maybe 60% of the time, what people are up in arms about are things they're sure he meant, but strictly speaking he didn't actually say.
The people doing this kind of reframing of Trump's statements are typically doing so to make them seem less inflammatory, usually in response to those who take him at his literal word. 'It's just a joke', 'an exaggeration', 'he didn't mean it literally'. Given how things have been going, it's clear he hasn't been joking.
Kimmel's monologue, taken literally, is completely benign.
At this point his political views are still not clear. You can be pro-trans and pro-Trump at the same time. See: Caitlyn Jenner, who supported Trump in the 2024 election.
Even more so given that all of this pinning on extreme-left groups started before they even found Tyler Robinson, and that they did the same in Minnesota a few months ago. I think it's basically accurate: they are desperately trying to pin it on anyone but their own, and have no regard for any facts. Even if Robinson really is far-left in every way (certainly a realistic possibility), they will be "correct" merely by accident, in hindsight.
> At this point his political views are still not clear.
Clear as day. Deranged leftist. No question. As someone who is even now wrong for the right reasons, I wonder if you think maybe the right-for-the-wrong-reasons crowd might have heuristics that are useful and lead to good decision making, and that you have rejected.