Inexplicably, Samsung removed the ability to hide the taskbar with One UI 8 last year.

They rebuilt DeX on Google's desktop codebase. So obviously a lot of features were lost. Hopefully what we'll gain is wider app support.

Apple doesn't have huge sales volume for Macs, but that's because of macOS and their astronomical pricing, not because of the hardware. MacBooks are easily the best laptops you can buy for most purposes, and they have been since the M1 came out. That has never been true of Apple computers before.

It's because of the hardware. For mobile, Apple is competitive; for desktop applications, they don't even show up on most benchmarks next to AMD/Nvidia hardware.

For example, you have to scroll beneath last-gen laptop GPUs before you can find any Apple hardware on the OpenCL charts: https://browser.geekbench.com/opencl-benchmarks


That's also because of software. Apple deprecated OpenCL in macOS eight years ago. In productivity software with solid Metal implementations, like Blender, the M4 Max is on par with the top of Nvidia's (mobile) 5xxx line, except with much more VRAM.

No software fix exists; Apple's GPUs are architecturally limited to raster efficiency (and now, matmul ops). It's frankly bewildering that a raster-optimized SoC struggles to decisively outperform a tensor-optimized CUDA system in 2026.

I get the feeling you had a specific use case that didn't work well with Apple GPUs? I'd be curious what it was. The architecture does have some unusual limitations.

By a software problem, though, I meant the reliance on OpenCL benchmarks. No one in 2026 should be using OpenCL on macOS at all, and those benchmarks aren't representative of the hardware.


I do wonder if it's possible to be a brilliant marketer, and reach the levels Jobs did, without being an asshole. The core of the profession is learning how to manipulate and use people better than anyone else.

I believe that's what Isaacson tries to write about in the Jobs and Musk biographies, indirectly. He seems to think that being an asshole has nothing to do with being brilliant.

Personally, I think it has more to do with having an emotional hole. People who create primarily for the sake of the work itself, be they musicians, visual artists, or coders, are different from those who want to rule the world. The latter may genuinely enjoy the craft, but it's often subordinate to the deeper need for validation (see: emotional hole). It's this need that makes people assholes, imo.


Implementation differences do matter. I haven't found Copilot to have as many issues as people say it does, but they are there. Their Gemini implementation is unusable, for example, and it's not because of the underlying models. They work fine in other harnesses.

Learning human languages is not a similar process to learning programming languages at all. I've never been sure why so many people think it is.

I provided it as a counterexample to the learning-how-to-bike myth.

Learning how to bike requires only a handful of skills, most of them located in the motor control centers of your brain (mostly the cerebellum), which is known to retain skills much better than any other part of your brain. Your programming skills comprise thousands of separate skills, mostly located in your cerebral cortex (largely the frontal and temporal lobes), and learning a foreign language is basically that but more (like 10x more).

So while a foreign language is not the perfect analogy (nothing is), I think it is a reasonable analogy as a counterexample to the bicycle myth.


Maybe something that keeps programming skills fresh is that after you learn to think like a programmer, you do that with problems away from the keyboard. Decomposition, logic... in the years I wasn't programming, I was still solving problems like a programmer. Getting back behind the keyboard just engaged the thought processes I was already keeping warm with practice.

You are right about the content, but it's still worth publishing the study. Right now, there's an immense amount of money behind selling AI services to schools, which is founded on the exact opposite narrative.

No, it isn't.

The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.

"While these LLM-to-Brain participants demonstrated substantial improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of Brain-only group, and failed to develop the consolidation networks present in Session 3 of Brain-only group."

The study also found that the LLM group was largely copy-pasting LLM output wholesale.

The original poster is right: the LLM group didn't write any essays, and later proved not to know much about the essays. Not exactly groundbreaking. Still worth showing empirically, though.


I would like to see more agent harnesses adopt rules that are actually rules. Right now, most of the "rules" are really guidelines: the agent is free to ignore them and the output will still go through. I'd like to be able to set simple word filters and regexes that can deterministically block an output completely and kick the agent back into thinking to correct it. This wouldn't have to be terribly advanced to fix a lot of slop. Disallow "genuine," disallow "it's not x, it's y," maybe get a community blacklist going a la adblockers.

Seems like a postprocess step on the initial output would fix that kind of thing - maybe a small 'thinking' step that transforms the initial output to match style.

Yeah, that's how it would be implemented after a filter failure, but it's important that the filter itself be separate from the agent, so it can be deterministic. Some problems, like "genuine," are so baked into the models that they will persist even when instructed not to, so a dumb filter, a la a pre-commit hook, is the only way to stop them consistently.
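
For illustration only, here's a minimal sketch of what that kind of deterministic filter could look like, assuming a harness that exposes a single callable agent step the filter can wrap. The blocklist entries, function names, and retry loop are all hypothetical, not any particular tool's API.

    import re

    # Illustrative blocklist; a real one could be community-maintained, like adblock lists.
    BANNED_PATTERNS = [
        re.compile(r"\bgenuine\b", re.IGNORECASE),
        re.compile(r"it'?s not \w+[,;] it'?s \w+", re.IGNORECASE),  # rough "it's not x, it's y"
    ]

    def check_output(text):
        """Return the banned patterns the text matches (empty list = pass)."""
        return [p.pattern for p in BANNED_PATTERNS if p.search(text)]

    def generate_with_filter(agent_step, prompt, max_retries=3):
        """Run the agent, deterministically reject filtered output, and kick it back to retry."""
        feedback = ""
        for _ in range(max_retries):
            text = agent_step(prompt + feedback)  # hypothetical agent call
            violations = check_output(text)
            if not violations:
                return text
            feedback = f"\nYour last answer used banned phrasing ({violations}); rewrite without it."
        raise RuntimeError("output still violates filter after retries")

The key property is that check_output is plain string matching with no model in the loop, so the block is deterministic no matter how the agent was prompted.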

I wonder why they chose per minute? That method of rate limiting would seem to defeat their entire value proposition.


In general, with per-minute rate limiting you limit load spikes, and load spikes are what you pay for: they force you to ramp up your capacity, and usually you are then slow to ramp down to avoid paying the ramp-up cost too many times. A VM might boot relatively fast, but loading a large model into GPU memory takes time.
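
As a rough sketch of why the per-minute window bounds spikes: a fixed-window limiter only ever admits a set number of requests into any one-minute bucket, so the provider never has to provision for a burst larger than that cap. The class and names below are hypothetical, just to show the mechanism.

    import time
    from collections import defaultdict

    class PerMinuteLimiter:
        """Fixed-window limiter: at most `limit` requests per client per minute."""

        def __init__(self, limit):
            self.limit = limit
            self.counts = defaultdict(int)  # (client, minute bucket) -> requests seen

        def allow(self, client):
            bucket = int(time.time() // 60)  # which minute we're in
            key = (client, bucket)
            if self.counts[key] >= self.limit:
                return False  # over the cap for this minute; caller must back off
            self.counts[key] += 1
            return True

A per-second cap would smooth traffic even more, but a per-minute window is already enough to keep worst-case concurrency, and therefore the GPU capacity held warm, predictable.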


Unsloth doesn't seem to do this for every new model, but they did publish a report on their quant methods and the performance loss it causes.

https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs

It isn't much until you get down to very small quants.

