Hacker News | fancyfredbot's comments

SpaceX has bailed out Grok, Twitter, and Tesla, so now it's our turn to bail out SpaceX in the IPO.

The American taxpayer has been bailing out Tesla and SpaceX for many years. Elon is the biggest welfare queen in history.

The US gave him a trillion dollars?

Try setting up one laundry which charges by the hour and washes clothes really really slowly, and another which washes clothes at normal speed at cost plus some margin similar to your competitors.

The one which maximizes ROI will not be the one you rigged to cost more and take longer.


I don't think the analogy is correct here.

Directionally, tokens are not equivalent to "time spent processing your query" but rather a measure of the effort/resources expended to process it.

So a more germane analogy would be:

What if you set up a laundry which charges you based on the amount of laundry detergent used to clean your clothes?

Sounds fair.

But then, what if the top engineers at the laundry offered an "auto-dispenser" that uses extremely advanced algorithms to apply just the right optimal amount of detergent for each wash?

Sounds like value-added for the customer.

... but now you end up with a system where the laundry's management team has strong incentives to influence how liberally the auto-dispenser will "spend" to give you the "best results".
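The incentive structure in the analogy can be made concrete with a toy billing model (all prices and token counts here are hypothetical, purely for illustration):

```python
# Toy model of per-token billing. The price and token counts are
# made-up numbers, just to show the incentive, not real API pricing.

PRICE_PER_1K_TOKENS = 0.01  # hypothetical posted price, in dollars


def bill(tokens_used: int) -> float:
    """Revenue the provider collects for a single query."""
    return tokens_used / 1000 * PRICE_PER_1K_TOKENS


# A "frugal" auto-dispenser answer vs. a "liberal" one for the same query:
frugal = bill(500)
liberal = bill(5000)

print(f"frugal: ${frugal:.4f}, liberal: ${liberal:.4f}")
# The provider controls how many tokens the model "chooses" to spend,
# and earns 10x more revenue when the dispenser is tuned to be liberal.
```

The point of the sketch: the party that sets the dispenser's generosity is the same party whose revenue scales with it.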


Shades of “repeat” in lather, rinse, repeat.

Wow, that is terrible. In my memory GPT-2 was more interesting than that: I remember thinking it could pass a Turing test, but that output is barely better than a Markov chain.

I guess I was using the large model?


Here is the XL model, at about 1.5B parameters (roughly 4x the medium model). Still under 2B parameters, but on the bright side it was trained pre-wordslop.

https://huggingface.co/openai-community/gpt2-xl


There’s an art to GPT sampling. You have to use temperature 0.7. People never believe it makes such a massive difference, but it does.

Probably a much better prompt, too. I just literally pasted in the top part of my comment and let fly to see what would happen.
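What temperature actually does is rescale the logits before the softmax: below 1.0 it sharpens the distribution toward the top token, above 1.0 it flattens it toward uniform. A minimal sketch with toy logits (not taken from any real model):

```python
import math


def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities, scaled by temperature.

    T < 1 sharpens the distribution (more mass on the top token);
    T > 1 flattens it toward uniform (more diverse, less coherent output).
    """
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


# Hypothetical next-token logits for a three-token vocabulary:
logits = [2.0, 1.0, 0.1]

p_low = softmax_with_temperature(logits, 0.7)   # sharper
p_high = softmax_with_temperature(logits, 1.5)  # flatter

# At T=0.7 the top token takes a larger share of the probability mass
# than at T=1.5, which is why lower temperatures read as more coherent.
print(p_low, p_high)
```

This is only the scaling step; real samplers layer top-k/top-p truncation on top of it, but the temperature knob itself is just this division.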

The article is about two models which have either 2B or 4B parameters. Both are dense models. The 2B version will certainly use less power than qwen3-coder-next.

The models are quite good. They aren't just a tech demo.


"Epic refresh pull" is my personal pet hate right now. Although "like if you are watching this in <year>" on older videos is close behind.

The comments pretending Marvel actors (e.g. Benedict Cumberbatch) are their Marvel characters in other movies (e.g. Sam Mendes' 1917) kill me.

It was apparently very successful:

https://en.wikipedia.org/wiki/EPB


Unfortunately too successful!

What happens now is not clason's problem anymore, I guess.

The point they seem to be making is that it never was their problem; they were just solving it for everyone for free anyway, and in return they were told they were doing it wrong and that they should stop interacting with people.

Honestly even when people are being paid to work for you and their job is to do what you ask them to, speaking to them like that is never going to work out.


Someone presumably pitched this idea within HP and other people agreed it was something they should try. I guess probably HP didn't put its best and brightest in charge of call centres but still, isn't that sort of amazing?

I wonder if it's the same people who eventually decided it was a bad idea after all, or whether some other group discovered what was happening and got them to stop.


I’ve seen it pitched here even, with the idea that deflecting some call volume will make call centre jobs less hellish. What that misses is that call centre jobs are hell because metrics have been used to optimise down to the minimum number of staff, and any reduction in average call volume will just result in the company cutting staff. So staff still have the same workload, and callers are XX minutes of waiting more frustrated.


Let’s not kid ourselves: they knew exactly what they were doing. They were hoping people would just hang up and give up. That saves money in the short term but loses money in the long term, but that’s what you get when the current quarter is all that matters.

Anyway my experience with HP has taught me to never buy their products ever again.


> Let’s not kid ourselves, they knew exactly what they were doing.

Not at all, they say they’re “always looking for ways to improve customer experience” and just wanted to “encourage people to self solve” to increase customer satisfaction. /s


Optimizing the wrong thing: they probably wanted to shave customer support costs by lowering call volumes, but the people who actually need support were probably hanging on the line, since nobody who can fix things themselves calls support (so no savings), AND it reduced customer satisfaction.


I think HP was absolutely right in doing this. How many times have you opened a GitHub issue only to come back an hour later with "nvm I figured it out" and close it?

The hope is always that you figure it out autonomously.


If offering free support is too expensive, then they shouldn’t offer free support, instead of externalizing the costs by wasting the time of every customer who calls.

Charge callers some small fee and refund it if it was a real issue.


Paid B2C support is a really tough sell. A lot of low-cost airlines don't provide support at all (except in person), I guess because it's not practical. If costs could be covered this way, everyone could offer support. Even Google! And yet they don't.


Or just accept that support is a feature of whatever junk you’re selling and build it into the price.

Instead of being actively hostile towards the naive idiots who gave you money for your junk of a product, which you now refuse to even support.


It depends what your goal is. If HP gets charged per call answered, then their goal is to minimize the number of calls they answer. If they see that most of their calls are like "my internet is slow", or the laptop won't turn on because it's not charged up, it's easy to see how this could be approved. Same thing if they've just spent a ton of money on some AI chat agent that they need to justify as well.


Depending on the country and its legislation (and changes to it), the waiting time might be charged for as well. So it's a way to recoup some small costs.


I was not expecting to see OpenClaw here either. It's out of keeping with the rest of the article...

At least there's acknowledgement of limitations and it's not just hype. Overall a useful data point in terms of what's possible.


Intrigued to see a blatant grammatical error ("took that logic farther" should be "took that logic further").

Is this incompetence or a deliberate error to indicate human authorship?

If the former, then why aren't they at least using an AI to proofread? If the latter, then what does Anthropic think is wrong with AI-written text?

