All these power hungry beasts coming out of Intel and NVIDIA feel quite out of sync with the zeitgeist in a world that's worried about the power bill - especially when the M1/M2 is there to provide contrast. I'm getting Pentium 4 vs Core architecture vibes.
> All these power hungry beasts coming out of Intel and NVIDIA feel quite out of sync with the zeitgeist in a world that's worried about the power bill -
These CPUs aren't consuming 250W all the time. Those are peak numbers.
Speed scales nonlinearly with power. These high TDP parts are halo parts meant for enthusiast builds where it doesn't matter that the machine draws a lot of power for an hour or two of gaming.
It's also trivially easy to turn down the maximum power limit in the BIOS if that's what someone wants. The power consumption isn't a fixed feature of the CPU. It's a performance/power tradeoff that can be adjusted at use time.
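For anyone who'd rather not reboot into the BIOS: on Linux the same knob is usually exposed at runtime through the intel_rapl powercap interface. A minimal sketch (CPU package limit only; writing needs root, and the sysfs path can vary by platform):

    /* Sketch: read (and optionally set) the package power limit (PL1) via the
     * Linux intel_rapl powercap interface. Not the BIOS route, but the same
     * performance/power trade-off, adjustable at use time. Needs root to write. */
    #include <stdio.h>
    #include <stdlib.h>

    #define RAPL_PL1 "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

    int main(int argc, char **argv)
    {
        long uw = 0;
        FILE *f = fopen(RAPL_PL1, "r");
        if (!f) { perror("open " RAPL_PL1); return 1; }
        if (fscanf(f, "%ld", &uw) != 1) { fclose(f); return 1; }
        fclose(f);
        printf("current PL1: %ld W\n", uw / 1000000);

        if (argc > 1) {                      /* e.g. ./powercap 125 -> cap at 125 W */
            long new_uw = atol(argv[1]) * 1000000L;
            f = fopen(RAPL_PL1, "w");
            if (!f) { perror("writing the limit needs root"); return 1; }
            fprintf(f, "%ld", new_uw);
            fclose(f);
            printf("PL1 set to %ld W\n", new_uw / 1000000);
        }
        return 0;
    }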
Just adding to what you said, a 24-core CPU won't get anywhere near peak power usage during gaming. Most games only use a handful of cores. The only way you'll approach it is with parallelizable productivity work like video encoding or compiling code.
My nephew, B, got his 8+16-core i9 to peak at 250W during Path of Exile, using all 24 cores. He's running at 5.2GHz on air cooling. We're not sure at all how it uses the E-(efficiency) cores when it also has 8 P-cores with Hyper-Threading, but it all did show up in the new dark-mode Task Manager.
Any idea what for? I feel like PoE doesn't involve that much compute other than what would be offloaded to the GPU. Maps are static, and I would have assumed that mobs are primarily computed server-side based on some sort of loosely synchronized state.
I guess I could imagine a few threads for managing different 'panes', a thread for chat, a thread for audio maybe? It's hard to think of 24 independent units of work.
I'm not a game dev, just used to play PoE and curious.
The trick used in AAA is to see each frame as an aggregation of core-independent jobs that can be queued up, and then to buffer several frames ahead. So you aren't working on just "frame A", but also finishing "frame B" and "frame C", and issuing the finished frames according to a desired pace, which allows you to effectively spend more time on single-threaded tasks.
The trade-off is that some number of frames of latency is now baked in by default, but if it means your game went from 30 Hz to 60 Hz with a frame of delay, it's about as responsive as it was before, but feels smoother.
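A toy sketch of the per-frame job decomposition (invented job names, and it skips the several-frames-in-flight buffering described above, which is where the extra latency comes from):

    /* Toy illustration of the "frame = bag of independent jobs" idea, not real
     * engine code; the job names are made up. A real engine also keeps several
     * frames in flight at once. Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_JOBS 6

    static const char *job_names[NUM_JOBS] = {
        "animation", "physics", "visibility_cull", "particles", "audio_mix", "ui"
    };

    struct job { int frame; const char *name; };

    static void *run_job(void *arg)
    {
        struct job *j = arg;
        usleep(2000);                     /* stand-in for the actual per-job work */
        printf("frame %d: %s done\n", j->frame, j->name);
        return NULL;
    }

    int main(void)
    {
        for (int frame = 0; frame < 3; frame++) {
            pthread_t tid[NUM_JOBS];
            struct job jobs[NUM_JOBS];
            /* every job in a frame is independent, so it can run on any core */
            for (int i = 0; i < NUM_JOBS; i++) {
                jobs[i] = (struct job){ frame, job_names[i] };
                pthread_create(&tid[i], NULL, run_job, &jobs[i]);
            }
            for (int i = 0; i < NUM_JOBS; i++)
                pthread_join(tid[i], NULL);
            printf("present frame %d\n", frame);
        }
        return 0;
    }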
I guess, but like... how? Like I said, I can't really think of 24 things to do, lol. I'm reminded of Dolphin, the GC/Wii emulator: people would ask for more cores to be used and the devs would basically be like "for what???". They started moving stuff like audio out, and eventually they made some breakthroughs where they could split more things out.
Maybe with these frameworks threads are less dedicated and instead are more cooperative, idk. Really not my area!
Or simply put, there's too much going on. I remember they had to rewrite some parts of the engine in a hurry right after the release of Blight, due to FPS dropping to effectively zero in the end-endgame versions of the encounter, as well as server crashes.
Sort of funny story: the concept of this build (spell loop) is currently meta; sadly the servers have improved to the point that they don't crash anymore.
I've seen the NVIDIA driver eat up all the CPU on multiple cores without really doing anything substantial to the framerate.
This was back in the Windows XP days when I was working on OpenGL and DirectX. It would do this while rendering like a couple of triangles. One core I could understand, but not all. I'm pretty sure the driver had some spinlocks in there.
I also managed to find out the NVIDIA driver assumed user buffers passed to OpenGL (VBOs) would be 16-byte aligned, using aligned SIMD operations on them directly, even though there's no mention of alignment in the OpenGL spec.
It just so happened that Microsoft's C++ runtime would do 16-byte aligned allocations, while the language I was using only did 4-byte.
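To make the failure mode concrete, a small sketch of why that combination bites: an aligned SSE load from a pointer that is only 4-byte aligned can fault, while the unaligned load is always safe (the offset trick below just simulates an allocator that only guarantees 4-byte alignment):

    /* Why the 16-byte assumption matters: _mm_load_ps requires 16-byte
     * alignment, _mm_loadu_ps does not. */
    #include <stdio.h>
    #include <stdint.h>
    #include <xmmintrin.h>

    int main(void)
    {
        float *base  = _mm_malloc(8 * sizeof(float) + 4, 16);
        float *verts = (float *)((char *)base + 4);    /* only 4-byte aligned */

        printf("address mod 16: %lu\n", (unsigned long)((uintptr_t)verts % 16));

        __m128 ok = _mm_loadu_ps(verts);    /* unaligned load: always safe */
        (void)ok;
        /* __m128 boom = _mm_load_ps(verts); */  /* aligned load: may fault here */

        _mm_free(base);
        return 0;
    }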
All is fair in love and performance wars I suppose...
I think you'll find that modern games use many more cores than they used to since mainstream consoles have all moved to being octa-core for the last two generations and you have things like Vulkan better allowing multi-threaded graphics code.
Many more cores yes, but 100% CPU usage should still be rare. If your game uses 100% of a 24C/32T processor, it will run poorly on a "mere" 8-core CPU, and most of your target audience won't be able to play it. You're right though, these aren't your grandma's single-threaded games anymore.
CPUs and GPUs keep getting hungrier, and that is just not where we should be heading. I wish the perf increase didn't keep coming along with a consumption increase each gen.
Don't let the TDP of T-models fool you. Power consumption to reach boost clocks can peak at up to 100W for T-models of the previous generation, and the 13700T probably needs to run close to that to outperform a 5800X.
It's still pretty terrible from an optimal-performance viewpoint. I can undervolt my 3070 Ti by ~100mV, dropping performance by ~6% but dropping temperature by about 10°C, or 13%, and dropping fan speed from PS3-like levels to inaudible for anything <90% load.
If you consider the mainstream products of Intel and Nvidia, they have far more moderate power consumption. These products with massive power draw are ultra-enthusiast products. They are outliers. You could build a great PC now with an RTX 3060 and a mainstream CPU that would be fine with a ~500 watt PSU.
As technologists, we should support manufacturers pushing the limit in power and performance. It helps drive overall efficiency and move technology forward.
Power consumption has at least doubled even on mainstream parts and keeps increasing gen over gen.
1070 vs 3070 is +52% average (145W vs 220W) and +66% sustained (154W vs 250W).
2070 vs 3070 is +10% average (195W) or +24% sustained (203W).
Even the 3060 you defend draws as much power as older flagships, and 500W isn't enough even for mainstream gaming.
And it keeps getting worse on both the GPU and CPU side.
We aren't technologists but consumers, and the reality is that x86 and GPUs are near-duopolies, so the three companies involved have little reason to do a better job, and it's clear that Apple SoCs and more and more of the cloud moving to ARM have not been enough of a wake-up call.
In fact, the 3070 is significantly more power efficient than the 1070. Yes, power is up ~50%, but performance is up ~100%, so performance per watt has continued to improve even as power consumption has also increased.
The reality of power consumption is that it's the main lever to pull right now to deliver generational gains. It's the same lever Apple pulled for the M2 even.
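To put numbers on that (a back-of-the-envelope sketch; the ~2x performance figure is the rough claim above, not a benchmark result):

    /* Perf-per-watt arithmetic behind the claim above. */
    #include <stdio.h>

    int main(void)
    {
        double power_ratio = 220.0 / 145.0;   /* 3070 vs 1070 average draw, from upthread */
        double perf_ratio  = 2.0;             /* "performance is up 100%" (rough claim) */
        printf("power: %.2fx  performance: %.2fx  perf/W: %.2fx\n",
               power_ratio, perf_ratio, perf_ratio / power_ratio);
        return 0;
    }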
> more and more of the cloud moving to ARM have not been enough of a wake-up call.
You mean the ARM enterprise SoCs that use just as much power as x86 does to deliver on average worse performance?
I believe AMD Ryzen 6000 mobile CPUs can hold their own against the Apple M1. They have comparable performance and can be set by the manufacturer at a TDP comparable to the M1's (and still perform well). Except for heavily M1-optimized apps, GPU performance should be pretty comparable too; Ryzen integrated graphics perform better in gaming.
Power consumption is an issue, but worrying about the CPU of a gaming machine is like worrying about straws in the context of pollution.
The electricity needs of an average household in the western world are going to increase a lot in the coming decades, with the transition to more electric heating, cooking, cars, etc. Gaming machine power usage is minuscule compared to those.
The flaws in your calculations are apparent when we realize that modern CPUs clock down when they are not busy. You would assume this would be common knowledge on a site called Hacker News.
I would say that people who regularly invest in top-end hardware don't care as much about power bills. Otherwise, power-efficient chips are the norm (laptops, phones, etc.).
This is slowly changing IMO - I'm seeing concern over energy use even on forums discussing high-end hardware builds as the cost of energy mounts in Europe. Previously no one really ever mentioned this other than to laugh at poor thermals.
If some of Nvidia's next-generation 4xxx-series GPUs are close to 1000W draw as many rumors suggest, a high-spec Intel/Nvidia system is probably going to have running costs similar to an electric space heater when playing demanding games. The existing 3090 Ti is already a 500W part, a budget that not so long ago was enough to power a whole system in addition to the GPU.
Yeah, that's not true, at least here in the UK it isn't. My normal build will use 500W when gaming, so every couple of hours that's £0.40. Every 10 hours is £4. That's just a few days of gaming for me, not including all the other computer use; it definitely adds up over a month, especially since my bills used to be £100/month and now they're £300 a month.
Using your own numbers, if you're spending £200/month more on gaming, then that's 500 hours/month, or approximately every waking hour. Are you really gaming that much? And even if you are, that's a lot cheaper than practically any other hobby you could spend that much time on.
I never said I spend £200 on gaming? I just said that my bills have increased from £100 a month to £300, but that's due to rising energy costs in the UK, not my gaming habits. It's more that, in addition to my bill literally tripling, the costs of gaming aren't insignificant for me. It doesn't matter that it's still cheap for this type of entertainment - it adds up. Every pound spent this way is a pound not spent on something else.
> It's more that, in addition to my bill literally tripling, the costs of gaming aren't insignificant for me. It doesn't matter that it's still cheap for this type of entertainment - it adds up. Every pound spent this way is a pound not spent on something else.
Sounds like a route to being penny-wise and pound-foolish. If you cut cheap entertainment you may well end up spending more (because in practice it's very hard to just sit in a room doing nothing), and if your gaming costs are a lot smaller than your energy bills then the cost should be relatively insignificant, almost by definition.
Well, the discussion started with "why would enthusiasts care about their energy consumption" - so my point isn't that I'm going to cut out entertainment altogether, but for my next GPU I will definitely look at the energy consumption, and there is zero chance I'm buying a 500W monster, even if I can afford the energy for it. It's just stupid and wasteful. I might go with an xx60 series instead, just because the TDP will be more reasonable. Or alternatively I might play the same games on my Xbox Series S, which provides me with the same entertainment yet uses 1/6th of the energy of my PC when gaming.
I don't use AC; I think it's terrible to waste energy like that. I can understand an office or hospital, but at home?
I'm from southern Italy; it's hot, you sweat, but you don't need to burn gas or coal or build infrastructure so people can waste it cooling their rooms. It's so entitled, and no wonder we're in a full climate crisis.
People can't give up on anything, really. It gets worse every day, and people's remedy is to make it even worse. Nonsense.
So yes, it makes things much worse to have a hot PC in the room.
In the USA, for a significant part of the year, chances are you have to add the electricity cost of running your air conditioning to get rid of that heat.
If you're living in a colder state, you may get to subtract the savings from lower heating costs.
The last 2-3 generations of CPUs and GPUs have seen the most innovation, efficiency, and performance gains in a long time.
If you don't want to drive a 5-litre truck, don't drive one, but 500W is not an average load, and if you have high electricity costs that's a you problem, not an everyone-else problem.
The largest parts of my electrical bill are distribution and overhead charges that don't change whether I use power or not. The marginal utility of the power vs the cost is quite reasonable.
This is nonsense. The last 2-3 generations of GPUs have seen nothing of the sort in terms of efficiency; the same is also true of many Intel desktop CPUs. The latest Alder Lake desktop parts from Intel have been universally criticized for power draw too.
Each of the last 4 generations of mid- to high-end cards from Nvidia has required more electricity than the last. By Nvidia's own admission, their future chips will get larger and hotter, to some extent due to the slowdown in Moore's law and future process nodes being harder to reach. The die size of the parts has also grown, which does not help.
It's not a small trajectory either: 4 years ago the most power-hungry consumer part from Nvidia was the 1080 Ti, which would easily draw 250W under load. Today that number is 500W for the current 3090 Ti, and rumored to be 800-1000W for the 4xxx parts launching at the end of this year. A GPU will often sit at peak draw for hours at a time in games as it tries to push as many FPS as possible.
A current gaming machine with recent components can easily exceed 500W of constant load during gameplay, and this figure will rise again with the 4xxx parts.
The ONLY real exception to this is Apple devices, and even then it's not clear we can compare an M1/M2 GPU to the fastest parts Nvidia offers.
My power is some of the cheapest in the country and we pay ~13 cents/kWh. It's a little misleading though, since my bill breaks out generation and distribution costs into separate line items. They are both billed per kWh, though, and add up to 13 cents.
Yes. We have a fixed basic connection charge of $20. So it's really close to your $0.13/kWh when that is taken into account.
I wasn't trying to be misleading though because the point is the basic charge does not increase with usage. So for each additional kWh we add to that it's only $0.089/kWh.
It's worse than that. Gamers Nexus had a video a few months ago about power transients becoming a bigger problem. Power spikes can double the amount of power needed. It doesn't really impact average power usage, but it can cause a PSU's over-current protection (OCP) to shut down the machine. https://www.youtube.com/watch?v=wnRyyCsuHFQ
> If some of Nvidia's next-generation 4xxx-series GPUs are close to 1000W draw as many rumors suggest
Those rumors are about millisecond-long transient spikes, not an average of anything. So basically the rumor is a 500W peak load, just like how the current 350-400W GPUs have transient spikes upwards of 800W. It's not a problem in terms of power consumption (although obviously the increase from 400W to 500W would be); rather, it's an issue with over-current protection in power supplies tripping. It's a "need bigger capacitors" type of problem.
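Rough headroom math for why a PSU can trip even though the average looks fine (every number below is illustrative, not a measurement):

    /* Illustrative transient-vs-OCP arithmetic; all figures are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double gpu_board_power = 400.0;   /* sustained GPU draw, W */
        double spike_factor    = 2.0;     /* millisecond transients at ~2x sustained */
        double rest_of_system  = 250.0;   /* CPU, board, drives, fans, W */

        double average    = gpu_board_power + rest_of_system;
        double worst_case = gpu_board_power * spike_factor + rest_of_system;

        printf("average draw  : ~%.0f W\n", average);
        printf("transient peak: ~%.0f W  <- what the PSU's OCP has to ride through\n",
               worst_case);
        return 0;
    }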
Yeah, exactly this. I have a 3080 with a 5900X, would consider myself an enthusiast, and after the recent price hike to my tariff here in the UK, electricity usage is definitely something that's on my mind. Like, it hasn't stopped me gaming yet, but I'm very acutely aware that I'm using £1 worth of electricity every few hours of play - it adds up.
I mean, thank you for the thoughtful advice about my finances, but it doesn't help in the slightest. Life is getting a LOT more expensive lately, with everything going up in price - I'm seeing my grocery bills double, energy bills triple, and I'm spending lots more money on petrol, on eating out, on taking my family out for trips, and yes - on gaming too. Is that £1 every few hours making me destitute? No, absolutely not, and I'm extremely privileged to be able to afford it. But at the same time, every £1 spent on this isn't a pound saved, or spent on my kid, or on literally anything else more productive.
So yes, I can "easily" afford it, but it doesn't mean that the energy consumption of my gaming rig hasn't affected how I think about it. Any future hardware upgrades will also be impacted by this - there is no way I'm buying a GPU with 500W TDP, even if again, I can afford the energy bills.
Whether he does or not is none of your business, and it doesn't change the fact that those are high prices and a source of environmental issues.
This power draw is getting out of hand on desktops, consoles and x86 laptops, and is largely a symptom of a lack of competition and a lack of technological advances.
The pilot light on my furnace went out years ago. I only noticed because, when I opened the door to the room with my computer, a light but noticeable blast of heat hit me. It took a second, but I turned around and checked my furnace, etc., instead of going into the room. It really was a revelation about how much heat these things produce.
I wonder what % of the overall power bill a PC actually consumes. My gut would say it doesn't compare, really, to the water heater or air conditioner, but it would be good to see numbers.
I would love some PSU metering ability, to see actual data about how much juice my PC is pulling down. Other than getting a Kill A Watt meter, how could one go about this?
I have what is probably a close spiritual cousin of one of those, and while it even touts a power factor display, it also loves to show ridiculously high values during idle consumption for anything involving some sort of power electronics (not just for my computer, but for example for my washing machine, too – it shows sensible values while the heating element runs, or when the motor actually turns, but in-between it shows nonsensically high values).
It is a few years old, though, so maybe by now quality standards have improved even for those kinds of cheapo-meters…
Calculate it. You'll have per-kWh costs for electricity, which depend on where you are, and you can ballpark power based on the fraction of the time it's running and the components in it.
My standard dev machine is ~1kW flat out, ~500W most of the time, probably 100W idle. It runs for about eight hours a day. Say 500W is the average; that suggests 4kWh a day. That's about $2 a day in the UK.
(those power numbers are relatively high - it's an elderly Threadripper with two GPUs)
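For anyone who wants to plug in their own numbers, a trivial version of that back-of-the-envelope calculation (the draw, hours and tariff below are example values, not mine):

    /* Back-of-the-envelope running-cost calculator. */
    #include <stdio.h>

    int main(void)
    {
        double avg_watts     = 500.0;   /* assumed average system draw */
        double hours_per_day = 8.0;
        double price_per_kwh = 0.50;    /* your tariff, in your currency */

        double kwh_per_day = avg_watts / 1000.0 * hours_per_day;
        double cost_day    = kwh_per_day * price_per_kwh;

        printf("%.1f kWh/day -> %.2f/day, %.0f/month\n",
               kwh_per_day, cost_day, cost_day * 30);
        return 0;
    }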
I think that 500W average is the tricky bit. When web browsing, for example, my laptop (Linux + Intel) seems to spend 99% of the time in the C1 halt state, according to i7z.
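A quick way to sanity-check the CPU-package side of an average like that on Linux is the RAPL energy counter (assumes the intel_rapl driver, may need root to read, and it only sees the CPU package, not the GPU or the rest of the box, so a wall meter still wins for whole-system numbers):

    /* Sketch: estimate CPU package power over one second from the RAPL
     * energy counter exposed by the Linux powercap framework. */
    #include <stdio.h>
    #include <unistd.h>

    #define RAPL_ENERGY "/sys/class/powercap/intel-rapl:0/energy_uj"

    static long long read_energy_uj(void)
    {
        long long uj = -1;
        FILE *f = fopen(RAPL_ENERGY, "r");
        if (f) { if (fscanf(f, "%lld", &uj) != 1) uj = -1; fclose(f); }
        return uj;
    }

    int main(void)
    {
        long long a = read_energy_uj();
        sleep(1);
        long long b = read_energy_uj();
        if (a < 0 || b < 0 || b < a) {     /* missing driver, or the counter wrapped */
            fprintf(stderr, "RAPL not readable here\n");
            return 1;
        }
        printf("CPU package: ~%.1f W\n", (b - a) / 1e6);
        return 0;
    }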
I can say that when my son left for college last fall, our electric bill dropped about $30 a month compared to the months he was here (after adjusting for seasonal heating/cooling costs). He has an i9-12xxx gaming rig with two monitors, a Prusa 3D printer that gets a lot of use, and a few other gadgets and such.
In Intel's case, they need to push these insane TDPs in order to even dream of performance parity with AMD and Apple. All those years spinning their wheels on 14nm+++++++++++ are biting them in the ass.