I personally reported that around the time Mac OS X 10.9 (the first non-cat release) came out and immediately saw it marked as a duplicate. So at least 13 years and counting.
Imagine being a person like me who has always expressed himself like that. Em dashes and all.
LLMs didn’t randomly invent their own unique style; they learned it from books. This is just how people write once they get slightly more literate than today’s texting-era kids.
And these suspicions are in vain even if they happen to be right this one time. LLMs are champions of copying styles; there is no problem asking one to slap Gen Z slang all over a post and finish it with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise and on-point self-expression and focus on the usefulness of the content?
My comment was not really meant as criticism (of AI) but more as agreement: I am also confident that the post is AI-generated (while the parent comment does not seem so confident).
But to add a personal criticism: I don't like this style of writing. If you prompt your AI to write in a better style that's easier on the eyes (and it works), then please, go ahead.
The most jarring point they mentioned, sudden one-off boldfaced sentences in the middle of paragraphs, is not something I had ever seen before LLMs. It's possible that this is a habit humans have picked up from them and started adding in the middle of other text that similarly evokes all the other LLM tropes, but it doesn't seem particularly likely.
Your point about being able to prompt LLMs to sound different is valid, but I'd argue that it somewhat misses the point (although largely because the point isn't being made precisely). If an LLM-generated blog post were actually crafted with care and intent, it would certainly be possible to make it less obvious, but what people are most likely criticizing is content produced in what I'll call the "default ChatGPT" style, which overuses the stylistic elements that keep getting brought up. The extreme density of certain patterns is a signal that the content might have been generated and published without much attention to detail. There was already a huge amount of content out there even before generating it with LLMs became mainstream, so people will necessarily use heuristics to figure out whether something is worth their time. The heuristic "heavy use of default ChatGPT style" is useful if it correlates with the more fundamental issues that the top-level comment of this thread points out, and it's clear that a sizable contingent of people have experienced that this is the case.
> although largely because the point isn't being made precisely
I agree. I wasn't really trying to make a point. But yes, what I am implying is that posts you can immediately recognize as AI are low-effort posts, which are not worth my time.
Agree; to me this "research" is like proving grocery stores are vulnerable to theft by sending students to shoplift. If the review process guaranteed that vulnerabilities can't pass, wouldn't that mean the current kernel should be pristinely devoid of them?
I feel like many people in the comments aren't aware that Karpathy is an ML scientist for whom programming is a complementary skill, not a profession. The only reason he came up with "vibe coding" is that the limited complexity of his hobby projects made it seem believable. Maybe take his opinions about the fate of programming with a grain of salt.
It's interesting that some months ago, when his nanochat project came out, the HN anti-AI crowd celebrated him for saying "I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful, possibly the repo is too far off the data distribution"
But now that it is working for him, he's suddenly not an expert...
What you’re calling the “crowd” was not the same people. Every time someone makes a claim like yours, I go and check and don’t see the same usernames in the conversation. “Different people have different opinions and different ways to express them” isn’t really an insight; it tells us nothing nor does it make anyone worthy of criticism.
You can’t, in an honest argument, lump different strangers into a group you invented to accuse them of duplicity or hypocrisy.
Having created a hundred nano-sized projects does not add up to having developed and maintained one large code base.
Coding agents are eating up programming from the lowest end, starting with pressing buttons on the keyboard to type the code in: completion was literally their first application. I don't think they will go all the way to the top, though; the essential part of the profession will remain until true AGI.
Metaphorically, think how integrated circuits didn't replace electrical engineering, just changed which production tools and components engineers deal with, and how.
Obviously we are all adapting to the changes, but if he or anyone else is panicking about being left behind, that can only be because they've never been in too deep.
Calling him a liar seems fairly unnecessary? For one thing, people's minds can change, or they can be talking in different contexts. Or, as in this case, new technology could have been deployed that changed the game.
Maybe that's true, but I will say that one of the reasons I recommend his Python ML videos to people is not just the ML content but also that his Python is good and idiomatic. So I would not agree; I think his programming is a well-practiced skill.
FWIW, though, I think the future he predicts will make this skill very difficult to acquire, as people grow reliant on gen AI for programming rudiments.
As far as "programming skill" goes, writing "good and idiomatic" Python is pretty bottom of the barrel. I don't think the GP is all that off, most people who are famous for some programming-adjacent skill (or even programming) aren't good at programming.
>As far as "programming skill" goes, writing "good and idiomatic" Python is pretty bottom of the barrel.
Complete bullshit. Beginning programmers writing good and idiomatic Python isn't "bottom of the barrel", or did you think I was recommending his videos to 20 year seasoned pros to improve their coding?
Some people on this site need to check their arrogance and humble themselves a bit before opening their mouths.
A great opportunity to bring up that a robot that operates 100% locally, within Bluetooth range, has never needed a cloud account, has never had to become unavailable whenever AWS goes down, and certainly doesn't get reduced to a manual dud when its company ceases to exist. I wonder what whoever produced such "Systems Design" would have to say to customers now.
Neato's D-Series Botvac just works (e.g., BVD8-SD/HP). No Bluetooth. No cloud. No Wi-Fi. Zero network connectivity required. Had mine about 10 years. Replaced the battery once, probably due for another one. Still cleans well.
I don't understand the appeal of having local appliances bound to the fate of network services.
I have a Neato D650 which I assume meets that classification and is covered by the service withdrawal, it is now pretty degraded -- no notifications, no mapping, no keep-out zones.
No notifications means if it gets stuck it stays there.
No mapping means if it doesn't fully clean the space (eg, a door is closed) then I have no way of knowing without baby-sitting it.
No keep-out zones means every clean involves carefully preparing the space to hang up trailing wires out of the way -- previously I just had some keep-outs near the wires and that worked perfectly.
Without all these features I have stopped using it; it is quicker to just use a stick vacuum.
I am really surprised how well the AI excuse works; most journalists just take a CEO's words as is and make no effort to assess whether they're even credible.
Obviously the layoffs correlate with the AI age, but it's most definitely not AI replacing jobs, not yet. Even in 2025, stories about a job fully taken over by AI have to be hunted for, and they're almost always about non-SWE jobs. And in 2023, when the first layoffs started, the models sucked and none of today's tools and agents existed yet! But if you look at, for example, headcount growth in India/Mexico, the numbers can only be described as "booming".
I don't know exactly what is going on, but it's pretty obvious the companies are moving work offshore or simply doing less of it, and for some reason need to lie about why.
The best and cheapest option is open air, where voices fly into the sky and never return; it would take something like a thousand people before that stops being enough.
Second best is large open windows, missing walls (a porch or balcony), or spreading across multiple rooms.
Beyond that, I don't think there is a solution without some sort of room soundproofing, which is usually a no-go for rented spaces and private houses. The closest one can get is to maximize soft surfaces (rugs, and curtains especially along walls).
Speaking of which, I wish bars, restaurants and other venues were required to place echo reducers on the ceiling; such a simple and cheap measure would dramatically improve the ability to talk there when they're full.
> Speaking of which, I wish bars, restaurants and other venues were required to place echo reducers on the ceiling; such a simple and cheap measure would dramatically improve the ability to talk there when they're full.
It's possible they aren't aware, but I have to wonder if it's sometimes intentional. As someone who doesn't drink, I find most bars close to if not entirely intolerable as places to hang out in, not because I mind being around other people drinking, but because they're always so loud. I've always assumed that drinking is what makes this tolerable to people, so now that you bring this up, the idea that this could be a way to sell more alcohol occurs to me. Probably a silly conspiracy theory, but who knows!
Also probably to do with cost. Acoustic panels are pretty expensive, and I can imagine selling this to a manager/owner won't be easy since it's not clearly tied to return on investment.
Sometimes I think the way CRDT research formulates the problem itself obstructs the evolution of local-first.
The obsession with Google Docs-style collaborative real-time text editing, a pretty marginal use case, diverts progress from where local-first apps really need it:
- offline-enabled, rather than realtime/collaborated
- branching/undoing/rebasing, rather than combining edits/messages
- help me create conflict-aware user workflows, rather than pursue conflict-free/auto-converging magic
- embeddable database that syncs, not algorithms or data types
CRDT research gives us `/usr/bin/merge` when local-first apps actually need `/usr/bin/git`. I don't care much how good the merge algorithm is; I need the thing that calls it.
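To make the wish concrete, here is a toy sketch (entirely hypothetical, not any real library's API) of what "conflict-aware" rather than "conflict-free" could look like: a three-way merge that auto-applies non-overlapping edits but hands genuinely conflicting ones back to the application, git-style, instead of silently picking a winner:

```python
from dataclasses import dataclass, field

@dataclass
class Conflict:
    """A key both replicas changed since the shared base; the app decides."""
    key: str
    ours: object
    theirs: object

@dataclass
class Replica:
    base: dict = field(default_factory=dict)   # last common snapshot
    state: dict = field(default_factory=dict)  # local working copy

    def set(self, key, value):
        self.state[key] = value

    def merge(self, other: "Replica") -> list[Conflict]:
        """Three-way merge against the shared base. Non-conflicting edits
        are applied; overlapping edits are returned, not auto-resolved."""
        conflicts = []
        for key in set(self.state) | set(other.state):
            ours = self.state.get(key)
            theirs = other.state.get(key)
            base = self.base.get(key)
            if ours == theirs:
                continue               # both agree, nothing to do
            if ours == base:
                self.state[key] = theirs   # only they changed it
            elif theirs == base:
                pass                       # only we changed it
            else:
                conflicts.append(Conflict(key, ours, theirs))
        return conflicts

    def resolve(self, conflict: Conflict, value):
        """User-driven resolution, the workflow the comment asks for."""
        self.state[conflict.key] = value
```

A CRDT would be forced to converge both replicas automatically; here the surface area for a user-facing review/undo workflow is the return value of `merge()`.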
Locals have always been allowed to quit the new job on day 1, and it has never been a crisis for employers.
A company confident that it is offering a worthy salary and career path has no more reason to worry that a foreign worker will quit during the first week than that a local worker would do the same.
The only difference a fee would make under such conditions is that locals become cheaper to hire, which is the point.
Part of the proposal is that the employer pays the government a large fee to sponsor the visa. They're not doing that for local workers; it's an entirely incomparable situation.