
It’s not bad for summarizing or translating.

I like to categorize AI outputs by the information size of the input (prompt + context) vs the information size of the output.

Summaries: output < input. It’s pretty good at this for most low-to-medium stakes tasks.

Translation: output ≈ input, but in a different format or language. It’s decent at this, but requires more checking.

Generative expansion: output > input. This is where the danger is. Like asking for a cheeseburger and it infers a sesame seed bun because that matches its model of a cheeseburger. Generally that’s fine. Unless you’re deathly allergic to sesame seeds. Then it’s a big problem. So you have to be careful in these cases. And, at best, anything inferred beyond the input is average by definition. Hence AI slop.
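
A rough sketch of that categorization in Python, using token counts as a crude proxy for information size (the ratio cutoffs are arbitrary, just for illustration):

    def categorize_task(input_tokens: int, output_tokens: int) -> str:
        # Compare output size to input size to guess which regime you're in.
        ratio = output_tokens / input_tokens
        if ratio < 0.5:
            return "summary: output < input, usually safe"
        if ratio <= 1.5:
            return "translation: output ~ input, needs checking"
        return "generative expansion: output > input, inferred details are averages"

    print(categorize_task(4000, 300))   # summary
    print(categorize_task(800, 2500))   # generative expansion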


> Claude is simply too good at coding in well-represented languages like Python and Typescript to not pay hundreds of dollars a month for (if not thousands, subsidized by employers).

I think the cost is more like thousands to cover inference. And, no, I don’t think it’s been proven out that an engineer is so much more productive as to justify a cost of thousands of dollars a month. The models are great for greenfield projects. But a lot of engineering is iterating on and maintaining an existing code base, one the engineer is already fluent in. So the comparison is writing code specific enough to implement a new feature vs writing a prompt specific enough that the AI can write that code. The difference between those two tasks is the time savings.

Say that difference is like 10%. You save 10% of your time by using AI, meaning, on a 40-hour week, you have 4 more hours than you did before. Are you going to spend 4 more hours writing code? No. Some will be spent in meetings. Some will be spent reading Hacker News. Maybe you’ll get two hours a week of additional coding time. So you’re really only increasing your output by 5%.

So the employer gets 5% more from you if you have AI. If your salary is $10k per month, they wouldn’t pay more than $500 per month. And you’re probably costing Anthropic >$1k in inference costs per _week_. The economics just don’t make sense.
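
Here’s the same back-of-envelope math as a script (every number is a guess from the scenario above, including the inference cost):

    hours_per_week = 40
    time_saved_fraction = 0.10        # AI saves ~10% of your time
    recaptured_fraction = 0.5         # only ~half of that goes back into output

    hours_saved = hours_per_week * time_saved_fraction          # 4 hours/week
    output_gain = time_saved_fraction * recaptured_fraction     # 5%

    salary_per_month = 10_000
    employer_value = salary_per_month * output_gain              # $500/month

    inference_cost_per_week = 1_000   # guess at provider-side cost
    inference_cost_per_month = inference_cost_per_week * 52 / 12

    print(f"Hours saved per week: {hours_saved:.0f}")
    print(f"Employer value of AI: ${employer_value:,.0f}/month")
    print(f"Provider inference cost: ${inference_cost_per_month:,.0f}/month")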

You can sub out the numbers here and play around with the scenario. I think the cost of inference needs to fall drastically, and I don’t think that happens soon. What might happen 10 years from now is developers are given a laptop with a built-in GPU for local inference that does much better AI code auto-complete. That’s something an employer can pay $3k-$5k for _once_ as a hardware investment. But the future of AI coding won’t be agents. It won’t be prompt-engineering. The models aren’t going to get much better. It will be simple and standard and useful but unimpressive. It’s going to feel boring. When it’s working, when it’s mature, when it becomes economical, it always feels boring. And that’s a good thing.




From customer support (CS) folks I’ve talked to, the experience isn’t better than getting a human on the other end immediately. It’s better than not being able to reach a human (e.g. outside support hours). Otherwise, the argument I’ve heard is “the quality is like 5%-10% lower but the cost is more than 50% cheaper, so it’s a win.”

Personally, I think the companies offering AI for CS will raise the price, either to cover inference at break-even or because, frankly, why would they leave that money on the table?


It’s cute they think managers are evaluated on the quality of their employees’ performance reviews.

I don’t disagree with the content. A well-thought-out review will help employees perform better. AI can’t create a good performance review from whole cloth. You have to at least put some thought into the content and bullet out what you want the review to say. An AI could clean up the language, but not create good content.

But most managers’ managers would consider that wasted effort. The unspoken fact is that performance reviews are pretty arbitrary and end up reflecting whatever is needed at the time. If budgets are tight, performance suddenly becomes less impressive and no one deserves a promotion. If hiring is difficult and an employee expresses dissatisfaction with their pay then suddenly they’re a star performer. The efficient manager tells the AI “make this a good review that makes a case for promotion” or “make this a mediocre review with room for improvement” and lets the AI write something vague and unhelpful that agrees with the result they want. An _effective_ manager will put time into their reviews and growing talent. Managers aren’t incentivized to be effective leaders though.

Source: my experience; your mileage may vary.


I spent a while in management, and what I learned about the review process was pretty eye-opening. Basically everyone was graded on the curve, and you couldn’t afford to have too many “exceeds expectations” ratings. That would mean they probably met expectations above their role, and they could make a strong case for promotion. If you can’t give out lots of promotions, then the top performers get pissed. So, just don’t let them all know how great they’re doing.

Performance reviews are a dumb system, and you shouldn’t be learning anything new from them anyway. Just give helpful feedback throughout the year, and give promotions to people strategically as you’re able.


> Performance reviews are a dumb system

The purpose of them is to avoid discrimination lawsuits.


When everybody exceeds expectations then the expectations are too low.


I'm talking more about cases when there are (for instance) initially 4 "exceeds expectations" seniors out of a department of 50, and management really just wanted a max of 2. So they curve things more harshly until only 2 get "exceeds expectations."


> …so it is hard to explain why their kids would be better off knowing something they don't need.

Math is full of extremely useful concepts that aren’t otherwise obvious. To me, it’s less about “I’m going to need to use this equation” and more about “This is a pattern I will encounter throughout the world.”


Yep. I work in finance. There are a ton of things like this that are just convention. Like treasury bonds being priced in 1/32nd increments. It probably made sense at the beginning, doesn’t now, but the whole market is built around the convention so we’re stuck with it.

Regulation is the other reason. APR is required to be rate-per-period * periods-per-year while also accounting for fees. But APY is rate-per-period compounded over a year. These have more to do with the grifts and bubbles that gave birth to the regulations. Again, it made sense at the time and now we’re kind of stuck with it. Not the best standard, but better than no standard.
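
To make the difference concrete, here’s a minimal sketch with a 1%-per-month rate (fee handling for the APR is omitted, and that’s where most of the regulatory detail actually lives):

    rate_per_period = 0.01      # 1% per month
    periods_per_year = 12

    apr = rate_per_period * periods_per_year               # simple annualization
    apy = (1 + rate_per_period) ** periods_per_year - 1    # compounded annualization

    print(f"APR: {apr:.2%}")    # 12.00%
    print(f"APY: {apy:.2%}")    # 12.68%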


I'm not in any way an expert, so I googled what some of the research says. Here's an interesting meta-analysis (https://link.springer.com/article/10.3758/s13423-023-02303-4). Memory and creativity are a lot more complex than I realized. There are different types of each, and it seems like they interact in complex ways. Here are the findings from the abstract:

> We found a small but significant (r = .19) correlation between memory and creative cognition. Among semantic, episodic, working, and short-term memory, all correlations were significant, but semantic memory – particularly verbal fluency, the ability to strategically retrieve information from long-term memory – was found to drive this relationship. Further, working memory capacity was found to be more strongly related to convergent than divergent creative thinking. We also found that within visual creativity, the relationship with visual memory was greater than that of verbal memory, but within verbal creativity, the relationship with verbal memory was greater than that of visual memory. Finally, the memory-creativity correlation was larger for children compared to young adults despite no impact of age on the overall effect size. These results yield three key conclusions: (1) semantic memory supports both verbal and nonverbal creative thinking, (2) working memory supports convergent creative thinking, and (3) the cognitive control of memory is central to performance on creative thinking tasks.

So some memory seems to be correlated with convergent creativity, which according to wikipedia (https://en.wikipedia.org/wiki/Convergent_thinking) is "the ability to give the 'correct' answer to questions that do not require novel ideas, for instance on standardized multiple-choice tests for intelligence." It sounds like there's less correlation with divergent creativity, which (again from wikipedia (https://en.wikipedia.org/wiki/Divergent_thinking)) is "a thought process used to generate creative ideas by exploring many possible solutions."

But my real takeaway is that people here seem to have strong (emotional?) opinions on "memorization vs creativity: which is better", but few people seem to have bothered reading the page-1 Google results on the topic. So I like to think that bothering to do some cursory research beats both. :)

