Hacker News | duggan's comments

I thought I was going to disagree with this; on the surface I think the icons are something of an improvement, but the rest of the post is persuasive.

This is a bad sign for design at Apple. It suggests a fundamental lack of attention to detail that would have been harder to imagine a few years ago.

What's driving it?


    > Burger menu
    > User agreement

    "User disagrees with the content of this site."
I recommend playing with the top-right buttons, it made me chuckle audibly.

Download your personal data; there are a lot of fun messages in there.

Leadership doesn’t understand and/or care anymore.

I spent so much time tuning the WAP site for the forum I worked for back in 2008.

I had some sort of Nokia running on whatever 2kbps networking was going then, and would shave absolutely anything I could to make the forums load slightly faster.


Search “centre a div” in Google

Wade through ads

Skim a treatise on the history of centering content

Skim past the “this question is off topic / duplicate” noise on Stack Overflow

Find some code on the page

Try to map how that code will work in the context of your other layout

Realize it’s plain CSS and you’re looking for Tailwind

Keep searching

Try some stuff until it works

Or…

Ask LLM. Wait 20-30 seconds. Move on to the next thing.
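For what it's worth, the answer all those steps are chasing fits in a few lines. A plain-CSS sketch (the selector name is a placeholder):

```css
/* Center a child both horizontally and vertically with flexbox. */
.parent {
  display: flex;
  justify-content: center; /* horizontal */
  align-items: center;     /* vertical */
}
```

The Tailwind equivalent is just `flex justify-center items-center` on the parent.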


The middle step is asking an LLM how it's done and making the change yourself. You skip the web junk and learn how it's done for next time.

Yep, that’s not a bad approach, either.

I did that a lot initially, it’s really only with the advent of Claude Code integrated with VS Code that I’m learning more like I would learn from a code review.

It also depends on the project. Work code gets a lot more scrutiny than side projects, for example.


> Search “centre a div” in Google

Aaand done. Very first result was a blog post showing all the different ways to do it, old and new, without any preamble.


Or, given that OP is presumably a developer who just doesn't focus fully on front-end code, they could skip straight to checking MDN for "center div" and get a How To article (https://developer.mozilla.org/en-US/docs/Web/CSS/How_to/Layo...) as the first result, without relying on spicy autocomplete.

Given how often people acknowledge that AI slop needs to be verified, it seems like a shitty way to achieve something like this vs just checking it yourself against well-known, good reference material.


LLMs work very well for a variety of software tasks — we have lots of experience around the industry now.

If you haven’t been convinced by pure argument in 2026 then you probably won’t be. But the great thing is you don’t have to take anyone’s word for it.

This isn’t crypto, where everyone using it has a stake in its success. You can just try it, or not.


That's a lot of words to say "trust me bruh" which is kind of poetic given that's the entire model (no pun intended) that LLMs work on.

Hardly. Just pointing out that water is wet, from my perspective.

But there is an interesting looking-glass effect at play, where the truth seems obvious and opposite on either side.


Wait till the VC tap gets shut off.

You: Hey ChatGPT, help me center a div.

ChatGPT: Certainly, I'd be glad to help! But first you must drink a verification can to proceed.

Or:

ChatGPT: I'm sorry, you appear to be asking a development-related question, which your current plan does not support. Would you like me to enable "Dev Mode" for an additional $200/month? Drink a verification can to accept charges.


Seriously, they have got their HOOKS into these Vibe Coders and AI Artists who will pony up $1000/month for their fix.

A little hypothesis: a lot of .Net and Java stuff is mainlined from a giant mega corp straight to developers through a curated certification, MVP, blogging, and conference circuit apparatus designed to create unquestioned corporate friendly, highly profitable, dogma. You say ‘website’ and from the letter ‘b’ they’re having a Pavlovian response (“Azure hosted SharePoint, data lake, MSSQL, user directory, analytics, PowerBI, and…”).

Microsoft’s dedication to infusing OpenAI tech into everything seems like a play to cut even those tepid brains out of the loop and capture the vehicles of planning and production. Training your workforce to be dependent on third-party thinking, planning, and advice is an interesting strategy.


Calling it now: AI withdrawal will become a documented disorder.

We already had that happen. When GPT 5 was released, it was much less sycophantic. All the sad people with AI girl/boyfriends threw a giant fit because OpenAI "murdered" the "soul" of their "partner". That's why 4o is still available as a legacy model.

I can absolutely see that happening. It's already kind of happened to me a couple of times when I found myself offline and was still trying to work on my local app. Like any addiction, I expect it to cost me some money in the future

Alternatively, just use a local model with zero restrictions.

The next best thing is to use the leading open source/open weights models for free or for pennies on OpenRouter [1] or Huggingface [2].

An article about the best open weight models, including Qwen and Kimi K2 [3].

[1]: https://openrouter.ai/models

[2]: https://huggingface.co

[3]: https://simonwillison.net/2025/Jul/30/


This is currently negative expected value over the lifetime of any hardware you can buy today at a reasonable price (basically a monster Mac, or several) until Apple folds and raises prices due to the RAM shortage.

This requires hardware in the tens of thousands of dollars (if we want the tokens spit out at a reasonable pace).

Maybe in 3-5 years this will work on consumer hardware at speed, but not in the immediate term.


$2000 will get you 30~50 tokens/s at perfectly usable quantization levels (Q4-Q5) with any of the top 5 open-weights MoE models. That's not half bad, and it will only get better!

That's if you're running lightweight models like DeepSeek 32B; anything more and it'll drop. Also, costs for RAM and AI-adjacent hardware have risen a lot in the last month. It's definitely not $2k for a rig that does 50 tokens a second.

Could you explain how? I can't seem to figure it out.

DeepSeek-V3.2-Exp has 37B active parameters, GLM-4.7 and Kimi K2 have 32B active parameters.

Let's say we're dealing with Q4_K_S quantization for roughly half the size; we still need to move 16 GB 30 times per second, which requires a memory bandwidth of 480 GB/s, or maybe half that if speculative decoding works really well.

Anything GPU-based won't work for that speed, because PCIe 5 provides only 64 GB/s and $2000 can not afford enough VRAM (~256GB) for a full model.

That leaves CPU-based systems with high memory bandwidth. DDR5 would work (somewhere around 300 GB/s with 8x 4800MHz modules), but that would cost about twice as much for just the RAM alone, disregarding the rest of the system.

Can you get enough memory bandwidth out of DDR4 somehow?
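That back-of-the-envelope can be written out directly. A sketch using the numbers from this comment (the active parameter count and the ~4-bit quantization ratio are the assumptions stated above):

```python
# Token-rate estimate for a memory-bandwidth-bound MoE model:
# every generated token has to stream all active weights from memory.

active_params = 32e9    # active parameters per token (Kimi K2 / GLM class)
bytes_per_param = 0.5   # ~4-bit quantization (Q4_K_S), roughly half a byte

weights_moved = active_params * bytes_per_param  # bytes read per token

target_tok_s = 30
required_bw = weights_moved * target_tok_s       # bytes per second

print(f"weights per token: {weights_moved / 1e9:.0f} GB")
print(f"required bandwidth @ {target_tok_s} tok/s: {required_bw / 1e9:.0f} GB/s")
```

This reproduces the 16 GB per token and 480 GB/s figures; speculative decoding would cut the effective number, since accepted draft tokens share one pass over the weights.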


That doesn't sound realistic to me. What is your breakdown on the hardware and the "top 5 best models" for this calculation?

I mean sure, that could happen. Either it's worth $200/month to you, or you get back to writing code by hand.

Just you wait until the powers that be take cars away from us! What absolute FOOLS you all are to shape your lives around something that could be taken away from us at any time! How are you going to get to work when gas stations magically disappear off the face of the planet? I ride a horse to work, and y'all are idiots for developing a dependency on cars. Next thing you're gonna tell me is we're going to go to war for oil to protect your way of life.

Come on!


This is a poor analogy. Cars (mostly) don't require a subscription.

Can't believe this car bubble has lasted so long. It's gonna pop any decade now!

The reliance on SaaS LLMs is more akin to comparing owning a horse vs using a car on a monthly subscription plan.

I mean, they're taking away parts of cars at the moment. You gotta pay monthly to unlock features your car already has.

Just like the comment you replied to this is an argument against subscription model "thing" as a service business models, not against cars.

It was a real facepalm moment when I realised we were busting the cache on every request by including date time near the top of the main prompt.

Even just moving it to the bottom helped move a lot of our usage into cache.

Probably went from something like 30-50% cached tokens to 50-70%.
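A minimal sketch of that fix, assuming an OpenAI-style prefix cache (caching matches the longest identical prompt prefix, so any per-request bytes near the top invalidate everything after them). All names here are hypothetical:

```python
from datetime import datetime, timezone

SYSTEM_PROMPT = "You are a helpful assistant for ..."  # large, static
TOOL_DOCS = "...tool descriptions, also static..."

def build_prompt(user_message: str) -> str:
    # Bad: putting the datetime near the top busts the prefix cache on
    # every request. Good: keep everything that changes per-request at
    # the *end*, so the static prefix stays byte-identical and cacheable.
    return "\n\n".join([
        SYSTEM_PROMPT,
        TOOL_DOCS,
        user_message,
        f"Current time: {datetime.now(timezone.utc).isoformat()}",
    ])
```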


Interesting, thanks for the references, I'm not very familiar with fs.com, though I'm sure it popped up in one or two Reddit posts I skimmed.

It turns out I accidentally purchased one fibre assembly and one direct-attach copper cable! I was not paying close enough attention.

From what I understand, the difference between these will be negligible at the range I'm using (1m), but is that accurate?


Database in a single file, litestream backs it up offsite. Simple!
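A minimal `litestream.yml` sketch for that setup (the database path and bucket are placeholders):

```yaml
# Continuously replicate the single SQLite file offsite.
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - url: s3://my-backup-bucket/myapp
```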


Also, for anyone who only has an old global API key lying around instead of the more recent tokens, you can set:

  -H "X-Auth-Email: $EMAIL_ADDRESS" -H "X-Auth-Key: $API_KEY"
instead of the Bearer token header.

Edit: and in case you're like me and thought it would be clever to block all non-Cloudflare traffic hitting your origin... remember to disable that.
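For illustration, a small Python sketch of the two auth styles (env var names follow the snippet above; `cf_headers` is a hypothetical helper, not part of any SDK):

```python
import os

API_BASE = "https://api.cloudflare.com/client/v4"

def cf_headers() -> dict:
    """Build Cloudflare API auth headers, preferring a modern scoped token."""
    token = os.environ.get("CF_API_TOKEN")
    if token:
        return {"Authorization": f"Bearer {token}"}
    # Legacy global API key: the account email must accompany it.
    return {
        "X-Auth-Email": os.environ["EMAIL_ADDRESS"],
        "X-Auth-Key": os.environ["API_KEY"],
    }
```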


I had a similar set of tapes, and ended up collecting a chain of connectors – firewire cable, firewire to thunderbolt2 adapter, thunderbolt2 to usb-c.

Instead of cobbling together an impressive array of tools though, I just got a trial of Final Cut Pro and pulled out everything with that. You can get what I think is a three month trial? Anyway, it was plenty for this one time effort of digitizing old Hi8 tapes.

I think I did end up using Handbrake to take the raws down to a reasonable size to give to family members, but the raw footage and project files I stuck on a couple of 1TB Sandisk drives to keep in physically separate backup locations.


No need for FCP, as iMovie can still capture DV streams from Firewire. I'd expect they both use the same implementation.

However, Apple has removed Firewire support entirely from macOS Tahoe so none of these solutions will work on Mac going forward.


I think it was needed for retaining the original aspect ratio, and capturing in ProRes, but maybe there's a workaround there that I didn't find.


I was scrolling through that list and did a double take at... Thomas Middleditch? The actor from Silicon Valley?


Oh, no, his startup-running skills left much to be desired.


I suppose hijinks will inevitably ensue!


WTF


Prototypes being launched as products is so common it’s an industry cliche.

Having those prototypes be AI generated is just a new twist.

