Has this ever happened? Not revoking certificates, which they've certainly done for malware or e.g. iOS "signing services", but because a developer used non-Apple hardware.
I don’t know the answer to that, but a quick search turns up plenty of people complaining that their developer certificate was revoked, which demonstrates Apple's willingness to do so when it believes a developer has violated the terms of service. I doubt Apple would go out of its way to include language in the agreement binding developers to its own sanctioned platform if it didn't intend to enforce it.
I agree, but I think a better wager (and what GP probably meant) would be that all of these developers had their certificates revoked because Apple thought they were distributing malware. That's what the system is for.
> For my tastes telling me "no" instead of hallucinating an answer is a real breakthrough.
It's all anecdata (I'm convinced anecdata is the least bad way to evaluate these models; benchmarks don't work), but this is the behavior I've come to expect from earlier Claude models as well, especially after several back-and-forth passes where you rejected the initial answers. I don't think it's new.
I can concur that previous models would say "No, that isn't possible" or "No, that doesn't exist". There was one time where I asked it to update a Go module from version X.XX to version X.YY and it refused to do so because version X.YY "didn't exist". This was back with 3.7, if I recall correctly, and to be clear, that version was released before its knowledge cutoff.
I wish I remembered the exact versions involved. I mostly just recall how pissed I was that it was fighting me on changing a single line in my go.mod.
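For context, a dependency bump like this really is a one-line edit in go.mod. The module paths below are hypothetical, and X.XX/X.YY stand in for the real (unremembered) versions:

```go
// go.mod (sketch, hypothetical module paths)
module example.com/myapp

go 1.21

require (
	// before: example.com/some/dependency vX.XX
	example.com/some/dependency vX.YY // the single-line change in question
)
```

Equivalently, `go get example.com/some/dependency@vX.YY` makes the same edit and updates go.sum.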
Alas, 4.5 often hallucinates academic papers or fabricates quotes. I think it's better at recognizing that coding answers have deterministic output, and at being firm there.
I'm paying $100 per month even though I don't write code professionally. It is purely personal use. I've used the subscription to have Claude create a bunch of custom apps that I use in my daily life.
This did require some amount of effort on my part, to test and iterate and so on, but much less than if I needed to write all the code myself. And, because these programs are for personal use, I don't need to review all the code, I don't have security concerns and so on.
$100 every month for a service that writes me custom applications... I don't know, maybe I'm being stupid with my money, but at the moment it feels well worth the price.
> I’m saying this to point out that we’re really already in the enshittification phase before the rapid growth phase has even ended. You’re paying $200 and acting like that’s a cheap SaaS product for an individual.
Traditional SaaS products don't write code for me. They also cost much less to run.
I'm having a lot of trouble seeing this as enshittification. I'm not saying it won't happen some day, but I don't think we're there. $200 per month is a lot, but it depends on what you're getting. In this case, I'm getting a service that writes code for me on demand.
We can see, especially in the case of Claude AI Max, that while it sounds like you're getting better value than on the cheaper plans, the company is now encouraging less efficient use of the tool (having multiple agents talk to each other, rather than improving the models so that one agent does the work correctly).
> Traditional SaaS products literally “write code” for you (they implement business logic). See: Zapier, Excel.
Eh, I'd call those a sort of programming language. The user is still writing code, albeit in a "friendlier" manner. You can't just ask for what you want in English.
> The enshittification is that the costs are going up faster than inflation and companies like OpenAI are talking about adding advertisements.
In 1980, IT would have cost $0 at most companies. It's okay for costs to go up if you're getting a service you were not getting before.
In 1980, the costs associated with what we today call IT were not $0; they were just spread across administrative and clerical duties performed by a lot of humans.
Okay, but I think the analogy still works with that framing. These AI products can do tasks that would previously have been performed by a larger number of humans.
It's even easier than that. A lot of older ignition locks could be defeated by a screwdriver, so you just smash the window, jimmy the ignition lock with the screwdriver, and off you go! There was a specific Jeep model that was stolen a lot because the rear lock could be popped out easily with pliers and a matching key made from it; you'd return later with the key to steal the car.
I might be misinformed but I've been told that for a while now (maybe 20 years or so), new cars have been built to be exceptionally difficult to hot-wire.
A South African friend told me that some brand of four-wheel drive could be hot-wired, but it involved getting behind one of the front headlamp bulbs - doable, but a damaging process if you're in a rush.
You'd have to be stupid and desperate to steal from a garage.
The people who work there aren't office workers; you've got blue-collar workers who spend all day working together and hanging out, with heavy equipment right in the back. And they're going to be well acquainted with the local tow truck drivers and the local police - so unless you're somewhere like Detroit, you'd better be on your way across state lines the moment you're out of there. And you're not conning a typical corporate drone who sees 100 faces a day; they'll be able to give a good description.
And then what? You're either stuck filing off VINs and faking a bunch of paperwork, or you have to sell it to a chop shop. The only way it'd plausibly have a decent enough payoff is if you're scouting for unique vehicles with some value (say, a mint condition 3000GT), but that's an even worse proposition for social engineering - people working in a garage are car guys, when someone brings in a cool vehicle everyone's talking about it and the guy who brought it in. Good luck with that :)
Dealership? Even worse proposition, they're actual targets so they know how to track down missing vehicles.
If you really want to steal a car via social engineering, hit a car rental place, give them fake documentation, then drive to a different state to unload it - you still have to fake all the paperwork, and strip anything that identifies it as a rental, and you won't be able to sell to anyone reputable so it'll be a slow process, and you'll need to disguise your appearance differently both times so descriptions don't match later. IOW - if you're doing it right so it has a chance in hell of working, that office job starts to sound a whole lot less tedious.
Stolen cars are often sold for low amounts of money - like $50 - and then used to commit crimes that aren't traceable through their plates. It hasn't really been possible to steal and resell a car in the United States for many years, barring a few carefully watched loopholes (Vermont's out-of-state registration route is one example that was recently closed).
When Kia and Hyundai were recently selling models without real keys or ignition interlocks, that was the main thing folks did when they stole them.
In Canada there's been a big problem with stolen cars lately. Mostly trucks, and other high value vehicles though. Selling them locally isn't feasible, but there's a criminal organization that's gotten very good at getting them on container ships and out to countries that don't care if the vehicles are stolen. So even with tracking, there's nothing people can do. Stopping it at the port is the obvious fix, but somehow that's not what is being done. Probably bribery to look the other way.
Same thing in Australia - some gang was busted recently for stealing mid-range four wheel drives, packing them in shipping containers with partially dismantled cars (I guess so that a cursory inspection would just show "car parts" rather than a single nice looking car) and then shipping them around the world (I guess an overseas buyer isn't checking if a car with this VIN has been stolen on the other side of the world).
Yeah, the only way to do it would be a cash transaction where you'd have to forge a legitimate looking title/registration and pass it off to a naive buyer. So it's still technically possible, but not in any kind of remotely scalable way.
I reckon it is infinitely riskier to be caught attempting to break into a car than it is to just walk into a service garage and pretend you own the Vdub in the parking lot. There is still a bit of deniability in the second option, but good luck explaining to the police why you're carrying a set of tools specifically for picking vehicle locks (because you can't just use regular picks and tension wrenches) to break into a vehicle you don't own.
Well Switch games aren't really designed for small screens either.
I haven't bought a Steam Deck because I think it's (1) too big (see my sibling comment on console size—to be fair I own a Switch 1 and 2, but the Steam Deck is even bulkier) and (2) too fiddly, this is the thing I've never liked about PC gaming.
> Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.
But they are paying, aren't they?
Advertisements don't generate money from thin air. Advertisements cause people to spend money they otherwise would not have spent. That's why companies buy ads in the first place.
And if you're showing ads to poor people, you're probably causing them to spend money they don't have.
It's a bit more complicated. On average, someone is paying, but averages can be misleading. As we see with free-to-play games, whales can subsidize a lot of usage by other people who don't pay a thing.
It seems like the same is true of advertising. Yes, some people are spending money but it doesn't necessarily follow that they're people who can't afford it.