dogma1138's comments | Hacker News

It’s really not that simple; unpopulated memory slots wreak havoc with signal integrity, and 4-slot boards already suffer from this.

You are also overestimating how much room there is on the interposer.

As someone with a 9950X3D and a direct-die cooling setup, I can tell you there is no room.


They already do the latter with X3D.

You won’t be able to add RAM to the die itself; there’s really no room on the interposer.


Not easily, and you will need a new motherboard anyhow, because the two slots you can have per channel are wired in tandem.

Those are objectively different skills tho.

In the same way that using hand tools is a different skill than using power tools.

Or doing bookkeeping for a small business with pencil and paper is a different skill than using spreadsheets or dedicated bookkeeping software.


There is fuck all difference between x8 and x16 for gaming. Heck, with PCIe 5 even dropping to x4 is barely noticeable outside of benchmarks.

100% this

The PCI-Express bus is actually rather slow. Only ~63 GB/s, even with PCIe 5 x16!

PCIe is simply not a bottleneck for gaming. All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets would cap out at only ~30 fps.

Which is funny to think about historically. I remember when AGP first came out, and it was advertised as making it so GPUs wouldn't need tons of memory, only enough for the frame buffers, and that they would stream texture data across AGP. Well, the demands for bandwidth couldn't keep up. And now, even if the port itself was fast enough, the system RAM wouldn't be. DDR5-6400 running in dual-channel mode is only ~102 GB/s. On the flip side the RTX 5050, a current-gen budget card, has over 3x that at 320 GB/s, and on the top end, the RTX 5090 is 1.8 TB/s.
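
For anyone who wants to sanity-check those figures, here is the rough back-of-the-envelope arithmetic (theoretical peaks only, ignoring protocol overhead; the GPU memory numbers are just the quoted spec-sheet values):

    # PCIe 5.0: 32 GT/s per lane with 128b/130b encoding
    pcie5_lane_gbs = 32 * (128 / 130) / 8   # ~3.94 GB/s per lane, per direction
    pcie5_x16 = pcie5_lane_gbs * 16         # ~63 GB/s

    # If a 2 GB scene had to cross the bus every single frame:
    fps_cap = pcie5_x16 / 2                 # ~31.5 fps, i.e. the "~30 fps" above

    # Dual-channel DDR5-6400: 6400 MT/s * 8 bytes per transfer * 2 channels
    ddr5_dual_gbs = 6400 * 8 * 2 / 1000     # ~102 GB/s

    print(f"PCIe 5.0 x16: ~{pcie5_x16:.0f} GB/s")
    print(f"fps cap streaming 2 GB per frame: ~{fps_cap:.1f}")
    print(f"DDR5-6400 dual channel: ~{ddr5_dual_gbs:.1f} GB/s")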


> All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets would cap out at only ~30 fps.

Ah, not really these days: textures are loaded in and out on demand, at multiple mipmap levels, and the same goes for model geometry and LODs. Texture and mesh data are frequently streamed in and out during gameplay.

Not arguing with your points around bus speeds, and I suspect you knew the above and were simplifying anyway.


You are correct that games generally are not PCIe limited. But you are incorrect that games just upload everything once and are done. Most modern engines are most certainly streaming assets in and out all the time.

The main problem seems to be that they're kinda badly utilized (IMHO) on many motherboards. Most seem to go with two x16 slots, so you get x8 lanes in both.

There are some exceptions, but I haven't seen one with, for example, four x16 slots that each run PCIe 5.0 x4 via bifurcation.


You can buy add-in cards that do lane bifurcation

E.g. https://www.ebay.co.uk/itm/126656188922

Most motherboards don’t go beyond 2x8 with 2x16 physical slots because there is little actual use for it and it costs quite a bit of money.


The biggest difference with PCIe 5.0 for me has been the additional bandwidth for my M.2 drive.

Faster M.2 drives are great, but you know what would be even greater? More M.2 drives.

I wish it was possible to put several M.2 drives in a system and RAID them all up, like you can with SATA drives on any above-average motherboard. Even a single lane of PCIe 5.0 would be more than enough for each of those drives, because each drive won't need to work as hard. Less overheating, more redundancy, and cheaper than getting a small number of super fast high capacity drives. Alas, most mobos only seem to hand out lanes in multiples of 4.

Maybe one day we'll have so many PCIe lanes that we can hand them out like candy to a dozen storage devices and have some left to power a decent GPU. Still, it feels wasteful.
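
Rough math on the "one PCIe 5.0 lane per drive" idea (theoretical peaks again; the four-drive stripe is just a hypothetical example):

    pcie5_lane_gbs = 32 * (128 / 130) / 8     # ~3.94 GB/s per PCIe 5.0 lane
    drives = 4                                # hypothetical four-drive setup
    per_drive_gbs = pcie5_lane_gbs            # each drive capped at ~4 GB/s on x1
    raid0_total_gbs = drives * per_drive_gbs  # roughly 16 GB/s across the stripe

    print(f"per drive on x1: ~{per_drive_gbs:.1f} GB/s")
    print(f"{drives}-drive RAID 0: ~{raid0_total_gbs:.1f} GB/s")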


> Alas, most mobos only seem to hand out lanes in multiples of 4.

AFAIK, the cpu lanes can't be broken up beyond x4; it's a limitation of the pci-e root complex. The Promontory 21 chipset that is mainstream for AM5 adds two more x4 links plus four lanes that can each be either sata or pci-e x1. I don't think you can bifurcate those x4s, but you might be able to aggregate two or four of the x1s. And you can daisy-chain a second Prom21 chipset to net one more x4 and another four x1s.

Of course, it's pretty typical for a motherboard to use some of those lanes for onboard networking and whatnot. Nobody sells a bare-minimum board with an x16 slot, two cpu-based x4 slots, two chipset x4 slots, and four chipset x1 slots and no onboard peripherals, only the USB from the cpu and chipset. Or if they do, it's not sold in US stores anyway.

If pci-e switches weren't so expensive, you might see boards with more slots behind a switch (which the chipsets kind of are, but...)


The M.2 form factor isn't that conducive to having lots of them, since they're on the board and need large connectors and physical standoffs. They're also a pain in the ass to install because they lie flat, close to the board, so you're likely to have to remove a bunch of shit to get to them. This is why I've never cared about and mostly hated every "tool-less" M.2 latching mechanism cooked up by the motherboard manufacturers: I already have a screwdriver because I needed to remove my GPU and my ethernet card and the stupid motherboard "armor" to even get at the damn slots.

SATA was a cabling nightmare, sure, but cables let you relocate bulk somewhere else in the case, so you can bunch all the connectors up on the board.

Frankly, given that most advertised M.2 speeds are not sustained or even hit most of the time, I could deal with some slower speeds due to cable length if it meant I could mount my SSDs anywhere but underneath my triple slot GPU.


> I could deal with some slower speeds due to cable length

Look at server mainboards and you'll see plenty of PCIe 5.0 connectors for cables to attach PCIe SSDs, looking much like SATA ones.


Agree that M.2 is fiddly. PCIe cards with M.2 sockets are nice for desktops and servers; then one can just unplug the card to work on the drives.

There are add-in cards with PCIe switch chips that will let you put a large number of drives into a single PCIe slot.

Including ones that have controllers, if your motherboard doesn't have enough lanes or it doesn't support bifurcation. I have a Rocket 7608A, which gives you 8 M.2 slots in a PCIe 5.0 x16 card: https://www.highpoint-tech.com/nvme-raid-aic/gen5/rocket-760...

Your comment is basically the "tl;dr" of this Techpowerup article (which is great and people should read it if they are unconvinced or curious): https://www.techpowerup.com/review/nvidia-geforce-rtx-5090-p...

Variable refresh rate screens aren’t just for making the phone feel snappier; they’re also needed to make the battery last longer.

If your production volume isn’t high enough to justify having a custom screen cut, you are stuck with what is available on the market.

And even if 5” screens are available now in the form of NOS or upcycled refurbs, that may not be the case 2 or 3, not to mention 5+, years down the line.

So you have to go not only with what is available today but with what is still likely to be available throughout the expected usable lifetime of your product.


There are: they want a more secure capital from which they’ll be able to continue operating if a civil war breaks out.


Nuclear program, ballistic missile program, drones, establishing and supporting multiple proxies in the region.

For a fraction of what they spent on that, they could’ve had desalination plants on the Caspian Sea and a waterway capable of supplying water to their capital.


I’ll wager there is a bit more Israeli tech in those phones than some adware.


Or a heck of a lot of non-phone tech as well.


https://www.sennheiser-hearing.com/en-UK/p/btd-700

Works on SteamOS out of the box and with all the features as far as I can tell.


That dongle has its own Bluetooth stack and exposes a standard audio device via USB. Indeed, that currently seems to be the only way, but the stack needs config input somehow, which in the case of this one requires proprietary Win/Mac software.

