Hacker News | ispeaknumbers's comments

This reads like an ad for your project.


It reads like it's AI-edited, which is deliciously ironic.

(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)


My creative writing teacher in college drilled the em dash into me. I can't really write without them now.


I think the presence of em dashes is a very poor metric for determining if something is AI generated. I'm not sure why it's so popular.


For me it is that they are wrongly used in this piece. Em dashes as appositives have the feel of interruption—like this—and are to be used very sparingly. They're a big bump in the narrative's flow, and are to be used only when you want a big bump. Otherwise appositives should be set off with commas, when the appositive is critical to the narrative, or parentheses (for when it isn't). Clause changes are similar—the em dash is the biggest interruption. Colons have a sense of finality: you were building up to this: and now it is here. Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time. Like this. And so full stops should be your default clause splice when you're revising.

Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.

(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)

But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.


> Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time.

IIRC (it's been a while) there are two cases where a semicolon is acceptable. One is when connecting two closely related independent clauses (i.e. they could stand as two complete sentences on their own, or be joined by a conjunction). The other is when separating items in a list, when the items themselves contain commas.


OMG, beautifully described! (not sarcastic!)


Ironically, years ago I fell into the habit of using too many non-interrupting em dashes because people thought semicolons were pretentious.

But introductory rhetorical questions? As sentence fragments? There I draw the line.


Also, for sheer delightful perversity, I ran the above comment through Copilot/ChatGPT and asked it to revise, and this is what I got. Note the text structuring and how it has changed! (And how my punctuation games are gone, but we expected that.)

>>>

For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.

Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.

(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)

But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.


I think it's because it is genuinely difficult to type an em dash on a keyboard (except, I hear, on Macs). So the writer either 1) memorized the em dash alt code, 2) set up a keyboard shortcut for it, or 3) is using the character map to insert it every time, all of which are a stretch for a random online post.


You just type a hyphen twice in many programs... Or on mobile you hold the hyphen for a moment and choose the em dash. I don't use it, but it's very easy to type.


Related article posted here https://news.ycombinator.com/item?id=46133941 explains it: "Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop."


So the funny thing is em dashes have always been a great trick to help your writing flow better. I guess GPT-4o figured this out in RLHF, and now it's everywhere.


I've been using em-dashes for at least two decades now. At least I have in Word, where it's been autocorrecting regular dashes to em-dashes since at least Word 2007.


Ironic? The author is working on an AI project.


The irony is that AI writing style is pretty off-putting, and the story itself was about people being put off by the author's AI project.


You mean Wanderfugl???


An iconic name


I'm not sure if you can claim these were "less prevalent than anecdotal online reports". From their article:

> Approximately 30% of Claude Code users had at least one message routed to the wrong server type, resulting in degraded responses.

> However, some users were affected more severely, as our routing is "sticky". This meant that once a request was served by the incorrect server, subsequent follow-ups were likely to be served by the same incorrect server.

30% of Claude Code users getting a degraded response is a huge bug.


I don't know about you, but my feed is filled with people claiming that they are surely quantizing the model, that Anthropic is purposefully degrading things to save money, etc. 70% of users were not impacted. 30% had at least one message degraded. One message is basically nothing.

I would have appreciated if they had released the full distribution of impact though.


> 30% had at least one message degraded. One message is basically nothing.

They don't give an upper bound though. 30% had at least one message degraded. Some proportion of that 30% (maybe most of them?) had some larger proportion of their messages (maybe most of them?) degraded. That matters, and presumably the reason we're not given those numbers is that they're bad.


The routing bug was sticky, so "one message is basically nothing" is not what was happening: if you were affected once, your follow-up requests were likely to be degraded too.


> Anthropic is purposefully degrading things to save money

Regardless of whether it’s to save money, it’s purposefully inaccurate:

“When Claude generates text, it calculates probabilities for each possible next word, then randomly chooses a sample from this probability distribution.”

I think the reason for this is that if you were to always choose the most probable next word, you might consistently end up with the wrong answer and/or get stuck in a loop.

They could sandbag their quality or rate limit, and I know they will rate limit because I’ve seen it. But, this is a race. It’s not like Microsoft being able to take in the money for years because people will keep buying Windows. AI companies can try to offer cheap service to government and college students, but brand loyalty is less important than selecting the smarter AI to help you.


> I think the reason for this is that if you were to always choose the highest probable next word, you may actually always end up with the wrong answer and/or get stuck in a loop.

No, it's just the definition of sampling at non-zero temperature. You can set T=0 to always get the most likely token. Temperature trades off consistency for variety. You can set T to zero in the API; I assume the defaults for Claude Code and their chat are nonzero.
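A minimal sketch of what temperature scaling means here (illustrative only; the logit values are made up, and this is not Anthropic's actual sampler). Logits are divided by T before the softmax, so as T approaches 0 the distribution collapses onto the argmax token, which is what "T=0" means operationally in most APIs:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/T, then normalize; lower T sharpens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits, for illustration.
logits = [2.0, 1.0, 0.5]

p_high = softmax_with_temperature(logits, temperature=1.0)   # varied sampling
p_low = softmax_with_temperature(logits, temperature=0.01)   # nearly greedy
```

With T=1.0 the probabilities stay spread across all three tokens; with T=0.01 nearly all the mass lands on the first (highest-logit) token.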


>or get stuck in a loop

You are absolutely right! Greedy decoding does exactly that for longer sequences: https://huggingface.co/docs/transformers/generation_strategi...

Interestingly DeepSeek recommends a temperature of 0 for math/coding, effectively greedy.
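A toy illustration of why greedy decoding can loop (a hypothetical bigram table standing in for a real model, purely for demonstration): if the most likely successor of each token eventually points back to an earlier token, always picking the argmax produces a repeating cycle.

```python
# Toy "model": for each token, its single most likely next token.
# This transition table is made up for illustration.
greedy_next = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def greedy_decode(start, steps):
    # Always follow the argmax successor: deterministic, hence cycle-prone.
    out = [start]
    for _ in range(steps):
        out.append(greedy_next[out[-1]])
    return out

tokens = greedy_decode("the", 8)
# The argmax chain cycles: the -> cat -> sat -> on -> the -> ...
```

Sampling with nonzero temperature breaks out of such cycles, since a less likely successor is occasionally chosen.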


That 30% is of ALL users, not only users who made a request; important to note the weasel wording there.

How many users forget they have a sub? How many get a sub through work and don't use it often?

I'd bet a large number tbh based on other subscription services.


(I work at Anthropic) It's 30% of all CC users that made a request during that period. We've updated the post to be clearer.


Thanks for the correction and updating the post.

I typically read corporate posts as cynically as possible, since it's so common to word things in any way to make the company look better.

Glad to see an outlier!


That's a pretty cynical read. My personal impression is that Anthropic has a high level of integrity as an organization. Believe what you want, I'm inclined to give them the benefit of the doubt here and move on.


> My personal impression is that Anthropic has a high level of integrity as an organization.

Unless you consider service responsiveness a factor of integrity. Still waiting on a service message reply from the third week of May. I'm sure it's right around the corner, though.

