
> All these power hungry beasts coming out of Intel and NVIDIA feel quite out of sync with the zeitgeist in a world that's worried about the power bill -

These CPUs aren't consuming 250W all the time. Those are peak numbers.

Both Intel and AMD are delivering huge efficiency gains, too. Rumors suggest the new i7-13700T Raptor Lake part can hit a 35W TDP and still outperform a Ryzen 7 5800X: https://www.tomshardware.com/news/intel-13700t-raptor-lake-a...

Performance scales sublinearly with power: the last few percent of clock speed cost a disproportionate amount of wattage. These high-TDP parts are halo parts meant for enthusiast builds, where it doesn't matter that the machine draws a lot of power for an hour or two of gaming.

It's also trivially easy to turn down the maximum power limit in the BIOS if that's what someone wants. The power consumption isn't a fixed feature of the CPU. It's a performance/power tradeoff that can be adjusted at use time.
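
As a rough illustration that the draw is both observable and adjustable in software: on Linux you can watch the package's actual power draw through the intel-rapl sysfs counters, and the same powercap directory also exposes writable constraint_*_power_limit_uw files for the long/short-term limits that the BIOS setting ultimately maps to. This is just a sketch of mine, not anything from the linked article; the sysfs path and permissions vary by kernel and platform, and reading energy_uj may need root.

  # Sketch: average CPU package power over one second via Intel RAPL (Linux).
  # Assumes the intel_rapl driver is loaded; the counter wraps eventually (ignored here).
  import time

  RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0, cumulative microjoules

  def read_energy_uj():
      with open(RAPL_ENERGY) as f:
          return int(f.read())

  e0, t0 = read_energy_uj(), time.time()
  time.sleep(1.0)
  e1, t1 = read_energy_uj(), time.time()

  # delta energy / delta time = average watts; an idle desktop sits far below its peak TDP
  print(f"package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")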



Just adding to what you said, a 24-core CPU won't get anywhere near peak power usage during gaming. Most games only use a handful of cores. The only way you'll approach it is with parallelizable productivity work like video encoding or compiling code.


My nephew, B, got his 8+16-core i9 to peak at 250W during Path of Exile, with all 24 cores in use. He's running it at 5.2GHz on air cooling. We're not at all sure how it uses the E- (efficiency) cores when it already has 8 P-cores with hyperthreading, but it all showed up in the new dark mode Task Manager.


PoE is one of the few games that actually makes use of lots and lots of cores/threads.


Any idea what for? I feel like PoE doesn't involve that much compute other than what would be offloaded to the GPU. Maps are static, and I would have assumed that mobs are primarily computed server-side based on some sort of loosely synchronized state.

I guess I could imagine a few threads for managing different 'panes', a thread for chat, a thread for audio maybe? It's hard to think of 24 independent units of work.

I'm not a game dev, just used to play PoE and curious.


The trick used in AAA engines is to treat each frame as an aggregation of core-independent jobs that can be queued up, and then to buffer several frames ahead. So you aren't working on just "frame A": you're also finishing "frame B" and "frame C" and issuing the finished frames at a desired pace, which lets you effectively spend more time on single-threaded tasks.

The trade-off is that some number of frames of latency is now baked in by default, but if it means your game went from 30 Hz to 60 Hz with a frame of delay, it's about as responsive as it was before while feeling smoother.
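
Not real engine code, obviously, but a toy sketch of the "independent jobs per frame, several frames in flight" idea using a plain thread pool; the job names (simulate, build_draw_list) are made up for illustration:

  # Toy sketch of frame pipelining: each frame is a bundle of independent jobs,
  # and up to FRAMES_IN_FLIGHT frames are being worked on at once.
  from concurrent.futures import ThreadPoolExecutor
  from collections import deque
  import time

  FRAMES_IN_FLIGHT = 3        # the latency cost: up to 3 frames buffered ahead
  TARGET_FRAME_TIME = 1 / 60  # pace at which finished frames are presented

  def simulate(frame_id):         # placeholder job: gameplay/physics for one frame
      return f"sim {frame_id}"

  def build_draw_list(frame_id):  # placeholder job: prepare rendering work
      return f"draw {frame_id}"

  def frame_job(frame_id):
      # Core-independent jobs for one frame; a real engine queues many more of these.
      return (simulate(frame_id), build_draw_list(frame_id))

  with ThreadPoolExecutor() as pool:
      in_flight = deque()
      for frame_id in range(120):
          in_flight.append(pool.submit(frame_job, frame_id))
          if len(in_flight) >= FRAMES_IN_FLIGHT:
              result = in_flight.popleft().result()  # wait for the oldest frame
              time.sleep(TARGET_FRAME_TIME)          # present at the desired pace
              print("present", result)
      while in_flight:                               # drain the buffered frames
          print("present", in_flight.popleft().result())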


Sure, that explains the parallelization, but not why it takes 250 watts' worth of compute to run the game. What's it computing?


The next frame.


if it's anything like gta5 it's going to be calling strlen a billionty times


Can you provide some more info about this?



Could it be the GPU driver/framework? I thought DX12 and Vulkan were meant to be CPU-optimised and able to use heaps of cores.


I guess, but like... how? Like I said, I can't really think of 24 things to do, lol. I'm reminded of Dolphin, the GC/Wii emulator: people would ask for more cores to be used and the devs would basically be like "for what???". They started moving stuff like audio out, and eventually they made some breakthroughs where they could split more things out.

Maybe with these frameworks threads are less dedicated and instead are more cooperative, idk. Really not my area!


I'm reminded of this PoE build that can crash the server with too many spell effects: https://m.youtube.com/watch?v=MWyV0kIp5n4


Or, simply put, there's too much going on. I remember they had to rewrite some parts of the engine ASAP right after the release of Blight, due to FPS dropping to essentially zero in the end-endgame versions of the encounter, as well as server crashes.


Sort of a funny story: the concept of this build (spell loop) is currently meta; sadly, the servers have improved to the point that they don't crash anymore.


Maybe all it does is produce crazy high, pointless FPS.


I've seen the NVIDIA driver eat up all the CPU on multiple cores without really doing anything substantial to the framerate.

This was back in the Windows XP days when I was working on OpenGL and DirectX. It would do this while rendering just a couple of triangles. One core I could understand, but not all of them. I'm pretty sure the driver had some spinlocks in there.

I also found out that the NVIDIA driver assumed user buffers passed to OpenGL (VBOs) would be 16-byte aligned and used aligned SIMD operations on them directly, even though there's no mention of alignment in the OpenGL spec.

It just so happened that Microsoft's C++ runtime would do 16-byte-aligned allocations, while the language I was using only guaranteed 4-byte alignment.
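
For anyone curious what that kind of mismatch looks like, here's a rough sketch (mine, using numpy; not the original code) of checking and then forcing 16-byte alignment before handing a buffer to a driver that silently assumes it:

  # Sketch: check whether a vertex buffer happens to be 16-byte aligned,
  # then force alignment by over-allocating raw bytes and offsetting into them.
  import numpy as np

  vertices = np.zeros(1024, dtype=np.float32)  # candidate VBO upload data
  print("16-byte aligned:", vertices.ctypes.data % 16 == 0)  # allocator-dependent, not guaranteed by any spec

  raw = np.zeros(1024 * 4 + 16, dtype=np.uint8)  # over-allocate by 16 bytes
  offset = (-raw.ctypes.data) % 16               # bytes to the next 16-byte boundary
  aligned = raw[offset:offset + 1024 * 4].view(np.float32)
  assert aligned.ctypes.data % 16 == 0           # now safe for aligned SIMD loads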

All is fair in love and performance wars I suppose...


What’s a new dark mode task manager?


The latest Windows 11 preview finally has Task Manager read the system default theme, allowing "dark mode": the UI is rendered with a dark background and light foreground.


So, like in Win 95 when you used a "dark theme". What an achievement. Wait, you can also set the background colour. /s


What a time to be alive!!


I think you'll find that modern games use many more cores than they used to, since mainstream consoles have all been octa-core for the last two generations, and things like Vulkan are better at allowing multi-threaded graphics code.


Many more cores, yes, but 100% CPU usage should still be rare. If your game saturates a 24C/32T processor, it will run poorly on a "mere" 8-core CPU, and most of your target audience won't be able to play it. You're right though: these aren't your grandma's single-threaded games anymore.


I don't really share this perspective.

CPUs and GPUs keep getting hungrier, and that is just not where we should be heading. I wish the performance increases didn't keep coming along with consumption increases each generation.


You can power-limit a 7950X to 105W and it will still be 37% faster than a 5950X.


I hardly care, I don't want that heat in my room anyway.


> Both Intel and AMD are delivering huge efficiency gains, too. Rumors suggest the new i7-13700T Raptor Lake part can hit a 35W TDP and still outperform a Ryzen 7 5800X: https://www.tomshardware.com/news/intel-13700t-raptor-lake-a...

Don't let the TDP of T-models fool you. Power consumption to reach boost clocks can peak at up to 100W for T-models of the previous generation, and the 13700T probably needs to run close to that to outperform a 5800X.


> for an hour or two of gaming

U gotta pump those numbers up, those are rookie numbers.


> These CPUs aren't consuming 250W all the time. Those are peak numbers.

But they require a heat-sink and cooling setup designed for that peak, and that is insane. Try keeping a microwave oven under 100°C :-)


Your toaster uses more than 250W, and microwave ovens are far above that at 1-2kW.


It's still pretty terrible from an efficiency standpoint. I can undervolt my 3070 Ti by ~100mV, dropping performance by ~6% but dropping temperature by about 10°C (roughly 13%), and dropping fan noise from PS3-like to inaudible for anything under 90% load.



