Hacker News | sleight42's comments

This is why I keep asking myself if I should continue playing Marathon or just exclusively play ARC Raiders. The latter can be far more relaxing yet still challenging. The former encourages that hyper-competitiveness that often stresses me out.

One word: Retroencabulator.

That is done in jest, though (technobabble is also used in sci-fi of the softer variety, of course; Star Trek is infamous for it).

Corporate speak is unfortunately used seriously by some people, not just in Dilbert and similar.


I can't see walled-garden platforms, or any website that monetizes via ads, offering WebMCP. Agents using their site stand in for humans who aren't there to see the ads.


Ironic. This makes me want to quit ChatGPT in favor of Claude because fuck this administration.


As someone raised as a Jew in America, this is beautifully said. Thank you.


Yet the Claw is powered by an LLM provider whose underlying model may not align with your priorities? Do I understand that correctly?


That's right. And don't forget that the chips it runs on are manufactured by companies I might not agree with. Nor the mining companies that got the metal. Nor the energy company that powers it.

The wonderful thing about markets that work is that you can swap things out without being under their boot.

I worry about an LLM duopoly. But as long as open-weight models are nipping at their heels, it is the consumer that stands to benefit.

The train we're on means a lot of tech companies will feel a creative destruction sort of pain. They might want to stop it but are forced by the market to participate.

Remember that Google sat on their AI tech before being forced to productize it by OpenAI.

In a working market, companies are forced to give consumers what they want.


> And don't forget that the chips it runs on are manufactured by companies I might not agree with. Nor the mining companies that got the metal. Nor the energy company that powers it.

You see that this is a non sequitur right? No matter who makes the chips or mines the metal or supplies the power, the behavior of the thing won't be affected. That isn't the case when we're talking about who's training the LLM that's running your shit.


It's a good thing that there are so many LLM choices out there, then.

Maybe the fundamental disagreement is whether LLMs will be a commodity product or not.

I think they will be since there hasn't been an indicator that secret sauce lasts more than a few months. The open weight models are, at most, a year behind.

We're in a different environment. The last tech rules of e.g. network effect cannot be directly applied.


What do you think a GPU is? A chip manufacturer absolutely has the ability to add their own bias in firmware and drivers.


Care to explain how chip makers can influence the inference outcome of LLMs?


>Remember that Google sat on their AI tech before being forced to productize it by OpenAI.

Google knew this tech wasn't ready for prime time; they already had plenty of revenue and didn't need to release shoddy products. But they were forced to roll out "AI", even with "hallucinations" and the resulting liabilities, to keep up with the new hotness. The tech is still so shoddy, I can't believe people use it for anything beyond a curiosity.


> The wonderful thing about markets that work is that you can swap things out without being under their boot.

This is an illusion. You're literally describing Žižek's "desert of the real": billionaires own the illusion, and you're telling me I get to pick from a selection of choices carefully curated and presented to me.


> In a working market, companies are forced to give consumers what they want.

I want personal nuclear weapons, so the market hasn't been working for me. Time to roll back those pesky laws, regulations, and ethical boundaries. Prosecute executives who won't give me what I want.


C'mon, man. "Consumers" in aggregate, not "every consumer". But you knew that.


Many consumers want things that are arguably harmful for everyone involved. Users asking Grok to generate a large amount of CSAM from kid pics on Twitter is but one example.


Just to clarify; you're comparing nuclear weapons to lines of code that generate text/images?


He is comparing the acquisition and utility of those two.


I don't understand why folks are buying Mac Minis specifically for this? Why not repurpose an old existing computer? Run Linux? What am I missing?


Hype and confusion.

OpenClaw is hyped for running local/private LLMs and controlling your data, but these people don't realize the difference between

(1) running local open source LLMs

(2) and API calls to cloud LLMs.

The vast majority will do (2). To your point, a Raspberry Pi is sufficient.

For the former, you still need a lot of RAM (32GB+ for larger models), so most Minis are underpowered despite their unified memory and higher efficiency.
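To make that RAM claim concrete, here's a back-of-the-envelope sketch (the 20% headroom for KV cache and runtime buffers is my own rough assumption, not a vendor figure):

```python
def model_ram_gb(params_b: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough RAM needed to hold a model's weights in memory.

    params_b: parameter count in billions; overhead adds ~20% headroom
    for KV cache and runtime buffers (a crude rule of thumb, not exact).
    """
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit fits in ~4GB; a 70B model at 4-bit wants ~42GB,
# which is why base-spec Minis fall short for the larger models.
print(round(model_ram_gb(7, 4), 1))
print(round(model_ram_gb(70, 4), 1))
```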


Yup. I've been building my own "Claw" in Go using cloud LLMs, and it's running very happily on a $6/mo VPS with 1 vCPU and 1GB of RAM.


If you're running local models, Apple Silicon's shared memory architecture makes them much better at it than other similarly-specced platforms.

If you want your "skills" to include sending iMessage (quite important in the USA), then you need a Mac of some kind.

If you don't care about iMessage and you're just doing API calls for the inference, then it's good old Mass Abundance. Nice excuse to get that cool little Mini you've been wanting.


While others will point to hardware or local LLMs or such IMO the biggest reason...

Because it's the easiest way to give "claw" iMessage access and that's the primary communication channel for a lot of the claw users I've seen.


Mac Minis are particularly suited to running AI models because they can have a pretty good quantity of RAM (64GB) assigned to the GPU at a reasonable price compared to Nvidia offerings. Mac Minis have unified memory, which means it can be split between CPU and GPU in a configurable way. I think Apple didn't price Mac Minis with AI workloads in mind, so they end up being good value.


Sure, but the GPUs are fairly anemic, right? I get that they have more GPU-addressable memory from the shared pool.

I have a 10900K with 64GB RAM and a 3090 with 24GB VRAM lying around gathering dust. 24GB isn't as much as a Mac, but my cores run a whole lot faster. I may be able to run a 34B 4-bit quantized model on that. Granted, the mofo will eat a lot of power.


Where do you get the AI acceleration? Apple Silicon chips offer decent AI perf for the price, afaiu.


There's no need for (local) AI acceleration if you are leveraging a remote LLM (Claude, ChatGPT, etc). The vast, vast majority of users are most likely just making API calls to a remote service. No need for specialized or beefy hardware.
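For anyone unsure what "just making API calls" means in practice, here's a minimal sketch of the request body, assuming an OpenAI-style chat-completions endpoint (the model name is a placeholder; swap in whatever provider you actually use):

```python
def build_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Body for an OpenAI-style POST /v1/chat/completions request.

    The heavy lifting happens server-side, which is why a 1-vCPU VPS
    or a Raspberry Pi is plenty on the client end.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
```

You'd POST this as JSON with an `Authorization: Bearer <key>` header; the local machine only formats requests and parses responses.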


Aren't there other Android releases that run on older hardware and no Google?


Cook is, ultimately, the enshittifier of things Apple. The buck stops with him. But he does things to keep the stock price up so...


I've also debated Android because there are cheaper phones that, ostensibly, don't suck—and can run non-Google-ized not-spying-on-you Android.


Exactly my thought as well. I'm considering trying GrapheneOS now that I have one myself.

