
The only time I had this other than changing to SSD was when I got my first multi-core system, a Q6600 (confusingly labeled a Core 2 Quad). Had a great time with that machine.


"Core" was/is like "PowerPC" or "Ryzen", just a name. Intel Core i9, for instance, as opposed to Intel Pentium D, both x86_x64, different chip features.


Gemini and its tooling are absolute shit. The LLM itself is barely usable and needs so much supervision you might as well do the work yourself. Couple that with an awful CLI and VS Code interface and you'll find it's just a complete waste of time.

Compared to the Anthropic offering it's night and day. Claude gets on with the job and makes me way more productive.


It's probably a mix of what you're working on and how you're using the tool. If you can't get it done for free or cheaply, it makes sense to pay. I first design the architecture in my mind, then use Grok 4 fast (free) for single-shot generation of main files. This forces me to think first, and read the generated code to double-check. Then, the CLI is mostly for editing, clerical work, testing, etc. That said, I do try to avoid coding altogether if the CLI + MCP servers + MD files can solve the problem.


> Gemini and its tooling are absolute shit.

Which model were you using? In my experience Gemini 2.5 Pro is just as good as Claude Sonnet 4 and 4.5. It's literally what I use as a fallback to wrap something up if I hit the 5-hour limit on Claude and want to push past some incomplete work.

I'm just going to throw this out there: I get good results from a truly trash model like gpt-oss-20b (quantized at 4 bits). The reason I can use this model at all is that I know my shit and have spent time learning how much instruction each model I use needs.

Would be curious what you're actually having issues with if you're willing to share.


I share the same opinion on the Gemini CLI. For anything other than the simplest tasks it is just not usable: it gets stuck in loops, ignores instructions, fails to edit files. Plus the CLI itself has plenty of bugs that you occasionally hit. I wish I could use it rather than pay an extra subscription for Claude Code, but it is just in a different league (at least as recently as a couple of weeks ago).


Which model are you using though? When I run out of Gemini 2.5 Pro and it falls back to the Flash model, the Flash model is absolute trash for sure. I have to prompt it like I do local models. Gemini 2.5 Pro has shown me good results though. Nothing like "ignores instructions" has really occurred for me with the Pro model.


I get that even with 2.5 Pro.


That's weird. For most TypeScript problems I prompt 2.5 Pro and Claude Sonnet 4.5 in much the same way and they end up performing about the same. I get different results with Terraform, though; I think Gemini 2.5 Pro does better on some Google Cloud stuff, but only on the specifics.

It's just strange to me that my experience seems to be the polar opposite of yours.


I don't know. The last problem I tried was a complex one -- migration of some scientific code from CPU to GPU. Gemini was useless there, but Claude proposed realistic solutions and was able to explore and test those.


I think you must be using it quite differently to me.

I can one-shot new webapps in Claude and Codex and can't in Gemini Pro.


The type of stuff I tend to do is much more complex than a simple website. I really can't rely on AI as heavily for stuff that I really enjoy tinkering with. There's just not enough data for them to train on to truly solve distributed system problems.


... wonders if the DNS config store is in fact DynamoDB ...


DNS is managed by Route 53, which has no dependency on DynamoDB for its data plane.



I feel like even Amazon/AWS wouldn't be that dim; they surely have professionals who know how to build somewhat resilient distributed systems when DNS is involved :)


I doubt a circular dependency is the cause here (probably something even more basic). That being said, I could absolutely see how a circular dependency could accidentally creep in, especially as systems evolve over time.

Systems often start with minimal dependencies; then you add a dependency on X for a limited use case as a convenience. Since it's already being used, it gets added to other use cases over time, until you eventually find out it's a critical dependency.
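To make that concrete, here's a toy sketch of catching that kind of accidental cycle in a dependency graph; the service names and edges are completely made up and have nothing to do with AWS internals:

    # Toy sketch: detect an accidental cycle in a service dependency graph.
    # Service names/edges are invented for illustration, not AWS internals.
    deps = {
        "dns": ["config_store"],        # DNS reads records from a config store
        "config_store": ["auth"],       # the store calls an auth service
        "auth": ["dns"],                # ...and auth resolves hosts via DNS again
    }

    def find_cycle(graph):
        visiting, done = set(), set()
        def walk(node, path):
            if node in visiting:
                return path[path.index(node):] + [node]
            if node in done:
                return None
            visiting.add(node)
            for dep in graph.get(node, ()):
                cycle = walk(dep, path + [node])
                if cycle:
                    return cycle
            visiting.discard(node)
            done.add(node)
            return None
        for start in graph:
            cycle = walk(start, [])
            if cycle:
                return cycle
        return None

    print(find_cycle(deps))  # ['dns', 'config_store', 'auth', 'dns']

Run against every deploy, something that cheap would at least flag the "convenience" dependency before it quietly becomes critical.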


There are sphinxes and dinosaurs in Crystal Palace Park (SE London).


I bought this case a couple of years ago after this article was linked here.

I love it. It's beautifully engineered. Top quality. It sits at the corner of my desk, proudly silent.

I'm likely about to upgrade the PC within, but the case will remain a strong feature of my desk.


Do you use it as a gaming PC (or for other high GPU load activities)? And if so, what's your take on noise under load?

Edit: I guess this is a senseless question if the case really only uses passive cooling. I was assuming there would still be fans somewhere.

I despise my current PC's fan noise and I'm always on the lookout for a quieter solution.


It's a dev workstation for me.

Currently inside is an i7-9600, which I limit to 3.6 GHz, and a cheap 1050 Ti.

The CPU is technically over the TDP limit of the case, but with the frequency limit in place it never exceeds about 70 °C, and due to my workloads I'm rarely maxing the CPU anyway.

There is zero noise under any load. There are no moving parts inside the case at all: no spinning HDD, no PSU fan, no CPU fan, no GPU fan.
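A cap like that is nothing exotic, either. On Linux, for example, it boils down to writing the cpufreq sysfs files (a rough sketch only; it assumes root access and that the standard scaling_max_freq interface is exposed):

    # Rough sketch: cap all cores at 3.6 GHz via the Linux cpufreq sysfs interface.
    # Assumes the standard scaling_max_freq files exist and the script runs as root.
    import glob

    MAX_KHZ = 3_600_000  # cpufreq expresses frequencies in kHz

    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(MAX_KHZ))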


> I guess this is a senseless question if the case really only uses passive cooling.

Are there senseless questions?

It can be used for gaming if your demands are met by an Nvidia 1650.

MonsterLabo built passive cases that could cool hotter components; they seem to be defunct now, sadly.


Did you have no success upgrading your fans (Noctua etc)? Still too loud? How about water cooling?


It's an HP OEM (because I moved countries during the pandemic and getting parts where I settled was ridiculously more expensive).

The CPU has an AIO cooler (and the radiator fans are loud). The GPU has very loud fans too, but is not on an AIO.

It's four years old at this point and I might just build something else rather than try to retrofit this one to sanity (which I doubt is possible without dumping the GPU anyway).


I bought my current gaming desktop off a friend when I was looking for an upgrade, as he didn't need it anymore. It had an AIO cooler. The pump made so much noise, and it seemed like I had to fiddle with fan profiles forever to get sane cooling. I swapped it for a $30 CoolerMaster Hyper 212 and a Noctua case fan. It cools well enough for the CPU to stay above stock speeds pretty much all the time and is much quieter than the AIO cooler was. I'm not suggesting this CPU cooler is the best one out there, just pointing out that it's not like one needs to spend $100+ on a cooler to get pretty good performance.

The GPU still gets kind of loud during intense graphics gaming sessions but when I'm not gaming the GPU fans often aren't even spinning.


Honestly at this point it's not so much about money as it is about whether or not this particular case/setup/components combo is salvageable with minimal effort.

The CPU fan is rarely an issue (it mostly just goes bananas when IntelliJ gets its business on with gradle on a new project XD).

The GPU is the main culprit and I'm not sure there's any solution there that doesn't involve just replacing it.


Depending on the fans it may be possible to re-oil the bearings.


Interesting idea. I feel like the fan noise from my GPU is just air moving, but maybe not.


Just last week I moved from using a Noctua NH-U12S to cool my 5950X to an ARCTIC Liquid Freezer III Pro 360 AIO liquid cooler (my first time using liquid cooling), and while I expected the difference to be big, I didn't realize how big.

Now my CPU usually idles at ~35 °C, which is just 5 degrees above the ambient temperature (because of summer...), and hardly ever goes above 70 °C even under load, while staying super quiet. I realize now I should have done the upgrade years ago.

Now if I could only get water cooling for the GPU I'm using. Unfortunately there are no water blocks available for it (yet), but I can't wait to change that too; it should have a huge impact as well.


A 3050 6GB would work too.


He also did not do (or at least did not mention) anything about the reflected/diffused and ambient light behind and around the TV screen, which would negatively impact contrast.


Aw, give him a break. Neither do OLED TV reviews, but I don't see everyone piling on them with "well, technically the contrast isn't infinite because the screen scatters ambient light".

Have you seen those Samsung QD-OLEDs? In a normally lit room they look more washed out than a VA panel, yet somehow still have "infinite" contrast...


Drivers and control panels for interfaces are also an issue on platforms other than macOS and Windows. I don't see hardware vendors changing that any time soon.


Don't control panels and surfaces send MIDI messages that could be processed in a somewhat standard way? Or do they (predominantly) run proprietary protocols over raw USB data pipes?
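For the devices that do speak plain MIDI, reading a fader or knob really is about this simple (a sketch assuming a class-compliant device and the Python mido library; the port name is made up):

    # Sketch: read control-surface knobs/faders as ordinary MIDI CC messages.
    # Assumes a class-compliant device and the mido library; port name is invented.
    import mido

    with mido.open_input("Generic Control Surface") as port:
        for msg in port:
            if msg.type == "control_change":
                print(f"ch {msg.channel} CC{msg.control} -> {msg.value}")

It's the proprietary-over-raw-USB case that forces a vendor driver and control panel.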


Isn't that asking for terrible UI latency?


IPC latency is typically measured in nanoseconds.


Even with an Electron UI?


I understand the frustrations with the current DAW offerings. Many of them have a very long history, having picked up where the analogue tape machine left off.

Some have developed much further though to support a more digital-first approach.

But it's true that the barrier to entry can still be very high. Trying to explain any of these packages to a musician who is not also a computer power user is extremely challenging; believe me, I've tried.

If we could arrive at a point where a DAW is intuitive to a musician and not technically overwhelming, that would be very interesting.

What would be more interesting though would be if that same project could be viewed in an "engineer mode" which exposes the technical view for someone else to work on at a different level.


Exactly! It's too high a barrier to entry. And it doesn't matter how low that barrier is: if people won't use the software because of it, it's too high. It's crazy how much pushback I get on that idea.

As far as the "engineer mode" goes, that's what I think galls me most: You can't really write audio software without all of the technical stuff so you're going to NEED that stuff anyway. AND, as someone matures in their musical ability, they often need to do more specific fine-tuning which would require those features. And that means that you could basically funnel non-audio-engineers into understanding at least the parts they need to make their own music when the time came. There's no better way to learn than to solve a "problem", even if that "problem" is just "how do I tighten up the high end on this so it makes this cool sound I want?"

In short: a DAW made for musicians isn't just accessible to non-audio-engineers, it's also a gateway drug to semi-audio-engineers and their explorations. I'm just all for that!


Dare I say it but perhaps some sort of natural language input would be interesting here.

If the software were primarily driven by a command-list back-end, had a bunch of semi-preset solutions to common problems, and could also be "spoken to", would that feel more comfortable for our musician user?
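I'm picturing a thin layer that maps phrases onto the same command back-end the UI already uses; all the command names below are hypothetical, just to illustrate the shape of it:

    # Hypothetical sketch: spoken phrases routed to a DAW's command back-end.
    # Command names and the fallback are invented for illustration only.
    COMMANDS = {
        "start recording": "transport.record",
        "stop": "transport.stop",
        "undo that": "edit.undo",
        "add a reverb": "track.insert_effect:reverb",
    }

    def dispatch(phrase: str) -> str:
        phrase = phrase.lower().strip()
        for trigger, command in COMMANDS.items():
            if trigger in phrase:
                return command
        return "help.suggest"  # don't fail silently; coach the user instead

    print(dispatch("ok, start recording please"))  # -> transport.record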


I could definitely see it! I would think that voice commands would be more for the musician side of it, such as "start", "stop", "cut", "redo", "alternate", stuff like that. You don't really need tensors for that. But yeah, once they have a question like "how do I...?", you can layer in some of the latest DeepSeek-style chain-of-thought stuff and probably get some actually usable results with it.

Still though, all of that is a layer AFTER that initial barrier to entry.


> how do I...?

Even this is still a problem, because it's unlikely they even know what question to ask. Or if a sensible question is asked, it may be an XY problem, where what is really intended is not what is asked.

Having thought about this for the last few minutes, it does seem inevitable that the software would have to start coaching the musician in the ways of engineers and "music software" people, so that the inputs become more accurate and aligned with the outcomes the software is capable of providing.

I think everyone would crave becoming more productive in the environment over time and not have to suffer the initial baby steps forever.

It's very difficult to imagine a DAW environment that exposes deeper functionality without ending up looking a lot like the existing packages.

Edit: and one final thought - it's a hard environment to build, because the work being done is a creative process with no correct answers, one which needs to support a multitude of different approaches to creativity. That's pretty much the opposite of software generally being a machine with a fixed number of functions.


Absolutely agree with you here.

There's a huge divide between people who might play with this at home as a toy and those who would be able to work with professional musicians with it.

The latter group will have some very strict requirements around performance, latency and workflow.

Edit: and reliability


Just going to add this re: reliability. Today's CPUs are so powerful you don't need DSP systems anymore to do things like low-latency tracking. It is still up to the user to manage latency by carefully selecting plugins, etc. With DSP-based systems, the latency is generally fixed and extremely stable. I still use a very old PTHD system because it works great for recording audio :)


My 286 with a Voyetra Sequencer, which is still in use, is much more reliable than my modern PC in terms of tracking and timing. You need a real-time system, perfect task separation, and _not_ an unreliable USB interface. It has absolutely nothing to do with CPU power. Also, my Atari with Cubase 3.1 is much, much better with MIDI timing than any modern PC setup. Think about it. :-)


Just to be clear, I'm talking about digital audio, not MIDI. I ran Cakewalk on my 386/486 as well; it worked great, including SMPTE sync over to an analog tape machine.


Latency is not related to CPU power, until the DSP load starts creeping up. For just low-latency tracking, a 486 is perfectly capable.
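The per-buffer figure is set almost entirely by buffer size and sample rate, not by how fast the CPU is:

    # Per-buffer latency is just buffer_size / sample_rate; CPU speed doesn't appear.
    for frames in (64, 128, 256):
        print(f"{frames} frames @ 48 kHz = {frames / 48_000 * 1000:.2f} ms")
    # 64 -> 1.33 ms, 128 -> 2.67 ms, 256 -> 5.33 ms

The CPU only starts to matter once the processing can't finish inside that window.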


As a practical matter, the CPU has to deal with I/O as well; I don't believe any 486 system could handle this.

DSP-based systems struggled a lot with I/O in the late '90s until faster SATA drives became ubiquitous. Lots of them used SCSI or exotic hardware cards to deal with large track counts.


The first version of Ardour was written on a 25 MHz 486 and could record 24 tracks of 24-bit, 48 kHz audio without breaking a sweat.

It did have a SCSI drive, but in 1999 I did not consider that "exotic", having been using them on various Unix workstations for more than a decade before that.
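The back-of-the-envelope data rate makes that plausible (my arithmetic, not anything Ardour-specific):

    # 24 tracks of 24-bit, 48 kHz audio is a modest sustained data rate.
    tracks, rate_hz, bytes_per_sample = 24, 48_000, 3
    mb_per_s = tracks * rate_hz * bytes_per_sample / 1e6
    print(f"{mb_per_s:.1f} MB/s")  # ~3.5 MB/s, easy work for a late-90s SCSI disk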


I used to use a special FireWire expansion card to get low latency, though I think this has been fixed in USB 3.0. The problem is when you have a multitrack mixer that sends you many outputs, all of them appearing to the computer as different soundcards. Of course, one guitar + VST will only give you a little lag, and you'll just push it back a little and it will do. Or will it?

