The PCI-Express bus is actually rather slow. Only ~63 GB/s, even with PCIe 5 x16!
PCIe is simply not a bottleneck for gaming. All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets would cap out at only ~30 fps.
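To spell that arithmetic out (a toy worst case where every byte of those assets had to cross the bus every single frame):

    # Rough ceiling on frame rate if all assets crossed PCIe every frame.
    pcie5_x16_gbps = 63.0      # ~63 GB/s usable on PCIe 5.0 x16
    assets_gb = 2.0            # hypothetical per-frame asset traffic
    max_fps = pcie5_x16_gbps / assets_gb
    print(f"~{max_fps:.0f} fps ceiling")   # ~32 fps, i.e. roughly the ~30 fps above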
Which is funny to think about historically. I remember when AGP first came out, and it was advertised as making it so GPUs wouldn't need tons of memory, only enough for the frame buffers, and that they would stream texture data across AGP. Well, the bandwidth never kept up with the demands. And now, even if the port itself were fast enough, the system RAM wouldn't be. DDR5-6400 running in dual-channel mode is only ~102 GB/s. On the flip side, the RTX 5050, a current-gen budget card, has over 3x that at 320 GB/s, and at the top end, the RTX 5090 is at 1.8 TB/s.
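For reference, the DDR5 figure falls straight out of transfer rate × bus width × channels (peak theoretical numbers, ignoring real-world efficiency; the GPU figures are the ones quoted above):

    # Peak bandwidth: transfer rate x channel width x channel count.
    ddr5_6400_dual = 6400e6 * 8 * 2 / 1e9     # two 64-bit channels -> ~102.4 GB/s
    rtx_5050_vram  = 320.0                    # GB/s, figure quoted above
    rtx_5090_vram  = 1792.0                   # GB/s, ~1.8 TB/s as quoted above
    print(f"DDR5-6400 dual channel: ~{ddr5_6400_dual:.0f} GB/s")
    print(f"RTX 5050 advantage: ~{rtx_5050_vram / ddr5_6400_dual:.1f}x")
    print(f"RTX 5090 advantage: ~{rtx_5090_vram / ddr5_6400_dual:.1f}x")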
> All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets would cap out at only ~30 fps.
Ah, not really these days: textures are loaded in and out on demand at multiple mipmap levels, and the same goes for model geometry and LODs. Texture and mesh data is frequently being streamed in and out during gameplay.
Not arguing with your points around bus speeds, and I suspect you knew the above and were simplifying anyway.
You are correct that games generally are not PCIe limited. But you are incorrect that games just upload everything once and are done with it. Most modern engines are most certainly streaming assets in and out all the time.
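As a toy illustration of the kind of residency decision a streamer makes (not any particular engine's code; the distance thresholds are invented):

    # Toy mip-level selection: keep only the detail a texture needs right now,
    # based on how far away (and therefore how small on screen) the object is.
    def desired_mip(distance_m, full_res_distance_m=5.0, mip_count=12):
        """Farther objects get coarser mips; mip 0 is full resolution."""
        mip = 0
        d = full_res_distance_m
        while distance_m > d and mip < mip_count - 1:
            mip += 1
            d *= 2.0   # each mip covers double the distance (made-up heuristic)
        return mip

    # Each frame the streamer compares desired vs. resident mips and queues
    # uploads/evictions against a VRAM budget, so PCIe traffic never stops --
    # it's just a trickle compared to re-uploading everything.
    for dist in (2, 20, 200):
        print(dist, "m -> mip", desired_mip(dist))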
Faster M.2 drives are great, but you know what would be even greater? More M.2 drives.
I wish it were possible to put several M.2 drives in a system and RAID them all up, like you can with SATA drives on any above-average motherboard. Even a single lane of PCIe 5.0 would be more than enough for each of those drives, because each drive wouldn't need to work as hard. Less overheating, more redundancy, and cheaper than a small number of super-fast, high-capacity drives. Alas, most mobos only seem to hand out lanes in multiples of 4.
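Rough numbers for that idea, assuming ~3.9 GB/s of usable bandwidth per PCIe 5.0 lane and a simple striped array of hypothetical x1 drives:

    # Aggregate sequential throughput of many narrow drives vs. one wide one.
    gen5_lane_gbps = 3.9          # ~3.9 GB/s usable per PCIe 5.0 lane
    drives = 8                    # hypothetical x1 drives in RAID 0
    print(f"8x Gen5 x1 striped: ~{drives * gen5_lane_gbps:.0f} GB/s")
    print(f"1x Gen5 x4 drive:   ~{4 * gen5_lane_gbps:.0f} GB/s")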
Maybe one day we'll have so many PCIe lanes that we can hand them out like candy to a dozen storage devices and have some left to power a decent GPU. Still, it feels wasteful.
> Alas, most mobos only seem to hand out lanes in multiples of 4.
AFAIK, the cpu lanes can't be broken up beyond x4; it's a limitation of the pci-e root complex. The Promontory 21 chipset that is mainstream for AM5 adds two more x4 links plus four lanes that can each be SATA or pci-e x1. I don't think you can bifurcate those x4s, but you might be able to aggregate two or four of the x1s. And you can daisy-chain a second Prom21 chipset to net one more x4 and another four x1s.
Of course, it's pretty typical for a motherboard to use some of those lanes for onboard networking and whatnot. Nobody sells a bare-minimum board with an x16 slot, two cpu-based x4 slots, two chipset x4 slots, and four chipset x1 slots and no onboard peripherals, only the USB from the cpu and chipset. Or if they do, it's not sold in US stores anyway.
If pci-e switches weren't so expensive, you might see boards with more slots behind a switch (which the chipsets kind of are, but...)
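Just tallying the lane budget described above (per-chipset numbers taken at face value from my description, not from a Promontory 21 datasheet):

    # Downstream lanes available to slots/M.2, as described above.
    cpu_slots      = 16 + 4 + 4          # x16 GPU slot + two cpu-based x4 slots
    one_chipset    = 2 * 4 + 4 * 1       # two x4 plus four SATA-or-x1 lanes
    second_chipset = 1 * 4 + 4 * 1       # daisy-chained: one more x4 + four x1
    print("CPU-attached:", cpu_slots)                      # 24
    print("Single Prom21:", one_chipset)                   # 12
    print("Dual Prom21:", one_chipset + second_chipset)    # 20
    # Caveat: everything behind the chipset(s) shares a single x4 uplink to the cpu.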
The M.2 form factor isn't that conducive to having lots of them, since they're on the board and need large connectors and physical standoffs. They're also a pain in the ass to install because they lie flat, close to the board, so you're likely to have to remove a bunch of shit to get to them. This is why I've never cared about and mostly hated every "tool-less" M.2 latching mechanism cooked up by the motherboard manufacturers: I already have a screwdriver because I needed to remove my GPU and my ethernet card and the stupid motherboard "armor" to even get at the damn slots.
SATA was a cabling nightmare, sure, but cables let you relocate bulk somewhere else in the case, so you can bunch all the connectors up on the board.
Frankly, given that most advertised M.2 speeds are not sustained or even hit most of the time, I could deal with some slower speeds due to cable length if it meant I could mount my SSDs anywhere but underneath my triple slot GPU.
Including ones that have controllers, if your motherboard doesn't have enough lanes or it doesn't support bifurcation. I have a Rocket 7608A, which gives you 8 M.2 slots in a PCIe 5.0 x16 card: https://www.highpoint-tech.com/nvme-raid-aic/gen5/rocket-760...
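If each of those eight slots runs at x4 behind the card's switch (my assumption; I haven't checked the spec sheet), the card is oversubscribed against its x16 uplink, which rarely matters in practice since all eight drives seldom burst at once:

    # Oversubscription of a switch-based 8-slot M.2 card behind a Gen5 x16 uplink.
    gen5_lane_gbps = 3.9                    # ~3.9 GB/s usable per PCIe 5.0 lane
    uplink = 16 * gen5_lane_gbps            # ~63 GB/s to the host
    downstream = 8 * 4 * gen5_lane_gbps     # ~125 GB/s if all 8 assumed-x4 drives burst
    print(f"uplink ~{uplink:.0f} GB/s, downstream ~{downstream:.0f} GB/s, "
          f"oversubscription ~{downstream / uplink:.0f}:1")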
Variable refresh rate screens aren’t just for making the phone feel snappier but are also needed for the battery to last longer.
If your production volume isn’t high enough to justify having a custom screen cut, you are stuck with what is available on the market.
And even if 5” screens are available now in the form of NOS or upcycled refurbs, that may not be the case 2 or 3, let alone 5+, years down the line.
So you have to go with not only what is available today, but what is still likely to be available throughout the expected usable lifetime of your product.
Nuclear program, ballistic missile program, drones, establishing and supporting multiple proxies in the region.
For a fraction of what they spent on that, they could’ve had desalination plants on the Caspian Sea and a waterway capable of providing water to their capital.
That dongle has its own Bluetooth stack and is exposing a standard audio device via USB.
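For what it's worth, from the host's side it just enumerates like any other audio output. An illustrative way to see that (assumes the python-sounddevice package is installed):

    # List audio outputs; a dongle like this shows up as a plain USB audio device.
    import sounddevice as sd

    for idx, dev in enumerate(sd.query_devices()):
        if dev["max_output_channels"] > 0:
            print(idx, dev["name"])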
Indeed, that currently seems to be the only way, but then the stack needs config input somehow, which in the case of this one requires proprietary Win/Mac software.
You are also overestimating how much room there is on the interposer.
As someone with a 9950X3D and a direct-die cooling setup, I can tell you there is no room.