[HN Gopher] PCIe 7.0 Draft 0.5 Spec: 512 GB/s over PCIe x16 On T...
       ___________________________________________________________________
        
       PCIe 7.0 Draft 0.5 Spec: 512 GB/s over PCIe x16 On Track For 2025
        
       Author : ksec
       Score  : 76 points
       Date   : 2024-04-04 16:17 UTC (6 hours ago)
        
 (HTM) web link (www.anandtech.com)
 (TXT) w3m dump (www.anandtech.com)
        
       | teaearlgraycold wrote:
       | How do they keep pulling this off?
        
         | nullindividual wrote:
         | I have the same question! They double the performance over the
         | same physical interface generation after generation.
         | 
         | Why haven't we seen the need for a PCI-X or VLB-style PCIe
         | interface expansion?
        
           | Night_Thastus wrote:
           | They explain on the website:
           | 
           | >To achieve its impressive data transfer rates, PCIe 7.0
           | doubles the bus frequency at the physical layer compared to
           | PCIe 5.0 and 6.0. Otherwise, the standard retains pulse
           | amplitude modulation with four level signaling (PAM4), 1b/1b
           | FLIT mode encoding, and the forward error correction (FEC)
           | technologies that are already used for PCIe 6.0. Otherwise,
            | PCI-SIG says that the PCIe 7.0 specification also focuses on
           | enhanced channel parameters and reach as well as improved
           | power efficiency.
           | 
           | So it sounds like they doubled the frequency and kept the
           | encoding the same. PCIe 6 can get up to 256 GB/s, and 2x 256
           | = 512.
           | 
           | In any case, it'll be a long time before the standard is
           | finished, and far longer before any real hardware is around
           | that actually uses PCIe 7.
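            | 
            | As a rough sanity check of those figures, a minimal
            | sketch (raw numbers only; the headline 256/512 GB/s
            | figures count both directions of the full-duplex link):
            | 
            |   # Raw numbers before FLIT/FEC overhead; PCIe
            |   # 6.0/7.0 carry 1 bit per transfer per lane, so
            |   # GT/s maps directly to Gb/s per lane.
            |   def x16_gb_per_s(gt_per_s, lanes=16):
            |       return gt_per_s * lanes / 8  # bits -> bytes
            | 
            |   for gen, rate in [("6.0", 64), ("7.0", 128)]:
            |       one_way = x16_gb_per_s(rate)
            |       print(f"PCIe {gen} x16: {one_way:.0f} GB/s one way,"
            |             f" {one_way * 2:.0f} GB/s both directions")
            |   # -> 6.0: 128/256 GB/s; 7.0: 256/512 GB/s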
        
             | Retr0id wrote:
             | This answers the question in the literal sense, but I'm
             | nonetheless surprised (as a relative outsider to the world
             | of high-speed signalling).
             | 
             | "just double the frequency" isn't something we're used to
             | seeing elsewhere these days (e.g. CPU clock speeds for the
             | last couple of decades). What are the fundamental
             | technological advances that allow them to do so? Or in
              | other words, what stopped them from achieving this in the
             | previous generation?
        
               | Night_Thastus wrote:
               | PCIe doesn't have to do anything as complex as a general-
               | purpose CPU. Increasing the frequency is a lot easier
               | when you don't need to worry about things like heat,
               | pipelining, caching, branch prediction, multithreading,
               | etc. It's just encoding data and sending it back and
               | forth. We've gotten very, very good at that.
               | 
                | It's not that it was impossible before now - it's
                | just that it wasn't in demand. With the
               | proliferation of SSDs transferring data over PCIe, it's
               | become much more important - so the extra cost of better
               | signaling hardware is worth it.
               | 
               | Not to dismiss it completely, it's still a hard problem.
               | But it's far easier than doubling the frequency of a CPU.
        
               | p1esk wrote:
               | Why would pipelining, caching, branch prediction make
               | increasing the frequency difficult? Why would heat be
               | less of a problem for a pcie controller than for a cpu?
        
               | kanetw wrote:
               | The short, stupid answer is: transistor count and size.
               | 
               | Explaining it in detail requires more background in
               | electronics, but that's ultimately what it boils down to.
               | 
                | High-end analog front ends can reach the three-digit
                | GHz range (non-silicon processes admittedly, but
                | still).
        
               | p1esk wrote:
               | _Explaining it in detail_
               | 
               | Please do
        
               | loeg wrote:
               | The rest, sure, but PCIe does still have to worry about
               | heat (and energy consumption).
        
               | dvas wrote:
                | I think a quick 2-minute read on the changes in each
                | generation (a gen1 -> gen4 example from 2016) will
                | make it a bit clearer [0].
               | 
               | Things like packet encoding etc. Then a quick look at the
               | signalling change of NRZ vs PAM4 in later generations.
               | 
                | Gen1 -> Gen5 used NRZ; PAM4 is used in PCIe 6.0.
               | 
               | [0] Understanding Bandwidth: Back to Basics, Richard
               | Solomon, 2016: https://www.synopsys.com/blogs/chip-
               | design/pcie-gen1-speed-b...
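                | 
                | If it helps, a minimal sketch of the per-lane
                | evolution (raw line rates and line coding only;
                | FLIT/FEC and packet overhead ignored):
                | 
                |   gens = [
                |       ("1.0",   2.5, 8/10,    "NRZ, 8b/10b"),
                |       ("2.0",   5.0, 8/10,    "NRZ, 8b/10b"),
                |       ("3.0",   8.0, 128/130, "NRZ, 128b/130b"),
                |       ("4.0",  16.0, 128/130, "NRZ, 128b/130b"),
                |       ("5.0",  32.0, 128/130, "NRZ, 128b/130b"),
                |       ("6.0",  64.0, 1.0,     "PAM4, 1b/1b"),
                |       ("7.0", 128.0, 1.0,     "PAM4, 1b/1b"),
                |   ]
                |   for name, gt, eff, code in gens:
                |       lane = gt * eff / 8  # GB/s/lane, one way
                |       print(f"PCIe {name}: {gt:5.1f} GT/s, "
                |             f"{lane:5.2f} GB/s/lane ({code})")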
        
               | EgoIncarnate wrote:
               | Too much energy is used (and heat generated) at high
               | frequencies. For something like PCIe, you don't need to
               | double the frequency for the whole circuit to double the
                | frequency of the link. You can double the front and
                | back ends and then double the parallelization of the
                | rest of the circuit. Most of the circuit can still
                | run at a more
               | energy efficient frequency. Potentially it was possible
               | earlier, but the circuit size made it cost prohibitive.
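                | 
                | A toy sketch of that split, assuming a made-up
                | 1 GHz internal clock: doubling the line rate only
                | doubles how wide the parallel bus behind the
                | SerDes must be, not how fast the core logic runs.
                | 
                |   # Only the SerDes runs at line rate; internal
                |   # logic runs at a modest clock on a wider bus.
                |   # (The 1 GHz figure is just an illustration.)
                |   def width_bits(rate_gbps, core_ghz=1.0):
                |       return rate_gbps / core_ghz
                | 
                |   for gen, rate in [("5.0", 32), ("6.0", 64),
                |                     ("7.0", 128)]:
                |       print(f"PCIe {gen}: {rate} Gb/s/lane -> "
                |             f"{width_bits(rate):.0f}-bit bus")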
        
               | georgeburdell wrote:
               | PAM4 allows the frequencies involved to stay close to the
               | same, at the expense of higher required precision in
               | other parts of the transmit/receive chains, so they
               | didn't "just double the frequency"
        
               | Kirby64 wrote:
               | But they literally did. FTA:
               | 
               | >> To achieve its impressive data transfer rates, PCIe
               | 7.0 doubles the bus frequency at the physical layer
               | compared to PCIe 5.0 and 6.0. Otherwise, the standard
               | retains pulse amplitude modulation with four level
               | signaling (PAM4), 1b/1b FLIT mode encoding, and the
               | forward error correction (FEC) technologies that are
               | already used for PCIe 6.0.
               | 
               | Nothing else changed, they didn't move to a different
               | encoding scheme. PCIe 6.0 already uses PAM4. Unless they
               | moved to a higher density PAM scheme (which they didn't),
               | the only way to increase bandwidth is to increase speed.
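                | 
                | To put numbers on that: PAM4 carries 2 bits per
                | symbol, so 6.0 doubled the bit rate while keeping
                | 5.0's symbol rate, and 7.0 now doubles the symbol
                | rate itself (rough sketch, Nyquist ~ half the
                | baud rate):
                | 
                |   links = [("5.0", 32, 1),   # NRZ
                |            ("6.0", 64, 2),   # PAM4
                |            ("7.0", 128, 2)]  # PAM4
                |   for gen, gbps, bits_per_symbol in links:
                |       baud = gbps / bits_per_symbol
                |       print(f"PCIe {gen}: {gbps} Gb/s/lane = "
                |             f"{baud:.0f} Gbaud (~{baud/2:.0f} GHz)")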
        
               | touisteur wrote:
               | I'm thinking huge progress recently both on ADC tech and
               | increase in compute power near the rx/tx source, are
               | getting widespread adoption, be it on Ethernet (latest
               | raw lane speed being industrialises is 224Gbps, useful
               | 200Gbps, and you bunch lanes together in e.g. quad- or
               | octo-sfp/quad-sfp-double-density - not sure osfp-224 is
               | already available...) and get to 800G or 1.6Tbps ifffff
               | you have the switch or controller for this (to my
               | knowledge no Ethernet controller yet - NVIDIA connectx 7
               | stops at qsfp-112) but it's mostly because of PCIe ?
               | 
               | The future NVIDIA B100 might be PCIe 6.0 but hopefully
               | will support 7.0 and maybe NVIDIA (or someone) gets a NIC
               | working at those speeds by then...
        
               | magicalhippo wrote:
               | Some EE's I know speculate that PCIe 7.0 will require
               | cables, such as the next generation of these[1].
               | 
                | That is, they reckon long traces on a motherboard just
               | won't cut it for the strict tolerances needed to make
               | PCIe 7.0 work.
               | 
               | [1]: https://www.amphenol-cs.com/connect/news/amphenol-
               | released-p...
        
           | Tuna-Fish wrote:
           | > the same physical interface
           | 
           | It's not _really_ the same physical interface. The connector
           | is the same, but quality requirements for the traces have
           | gotten much more strict over time.
           | 
           | > Why haven't we seen the need for a PCI-X or VLB-style PCIe
           | interface expansion?
           | 
           | x32 PCIe does exist, it's just rarely used.
        
         | jeffbee wrote:
         | My impression is they use the standards process as a kind of
         | objective-setting function to ensure the industry continues to
         | move forward. They seem to figure out what will be just
         | possible in a few years with foreseeable commercialization of
         | known innovations, and write it down. It seems to have worked
         | for > 20 years.
        
       | vardump wrote:
       | Good. IMHO, PCIe bandwidth is problematic on consumer devices.
       | 
       | Can't have multiple high bandwidth devices.
        
         | Night_Thastus wrote:
         | Where is it problematic? On PCIe 4, you can run a high end
         | graphics card and multiple SSDs over PCIe no problem. Bandwidth
         | is not an issue there.
         | 
         | If you're doing some server stuff, then maybe. But for normal
         | consumers it's not an issue.
        
           | michaelt wrote:
           | Modern consumer motherboards are in a bit of a weird place at
           | the moment.
           | 
           | Even if you buy the fanciest consumer motherboard out there,
           | you'll find it only comes with one PCIe slot that actually
           | runs at 16x speed, one that's maybe 4x speed (but 16x size),
           | and that's about it.
           | 
           | You want a motherboard with two 16x slots, to run a dual-GPU
           | setup at full speed? Buy a threadripper.
           | 
           | From a market segmentation perspective this makes sense from
           | the processor manufacturers' point of view - get the people
           | who want lots of IO onto your high-margin server/workstation
           | product range.
           | 
           | Back in the 1990s you needed to plug in a graphics card, a
           | sound card, an ethernet card, maybe a separate wifi card, and
           | so on, each of which would use exactly one slot - but these
           | days other than the GPU, it's all integrated on the
           | motherboard already, and the GPU might block 2 or 3 slots
           | anyway. So modern consumer motherboards still sell OK despite
           | having fewer slots than ever before!
        
             | Alupis wrote:
             | > Even if you buy the fanciest consumer motherboard out
             | there, you'll find it only comes with one PCIe slot that
             | actually runs at 16x speed, one that's maybe 4x speed (but
             | 16x size), and that's about it.
             | 
             | Maybe in the Intel World, this is true. Intel has always
              | played games at consumers' expense...
             | 
             | Here's the cheapest AMD AM5 socket ATX mobo sold by NewEgg,
             | complete with two (2) 16x PCIe gen 4 slots[1].
             | 
             | Here's an AM5 mATX board with two (2) 16x PCIe gen 4
             | slots[2].
             | 
             | Come join team Red - the water is warm.
             | 
             | [1] https://www.newegg.com/msi-
             | pro-b650-s-wifi/p/N82E16813144642
             | 
             | [2] https://www.newegg.com/msi-pro-b650m-a-
             | wifi/p/13-144-559
        
               | nolist_policy wrote:
               | Eh, on both boards the 2nd PCIe x16 slot only has 4 PCIe
                | lanes - physically it's an x16 slot and you can put x16
                | devices in it, but they will only be able to talk at x4
               | speeds.
        
               | elabajaba wrote:
               | Even on high end AM5 motherboards, they'll only run at
               | x8/x8 (or maybe x16/x4) if you populate both slots.
        
               | loeg wrote:
               | PCIe 5.0 x8 has the same throughput as PCIe 4.0 x16,
               | FWIW. So the x8/x8 configuration is very reasonable.
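                | 
                | Quick check of that equivalence (raw link math
                | with 128b/130b coding, no packet overhead):
                | 
                |   def gb_s(gt_per_s, lanes):
                |       return gt_per_s * lanes * (128 / 130) / 8
                | 
                |   print(f"PCIe 4.0 x16: {gb_s(16, 16):.1f} GB/s")
                |   print(f"PCIe 5.0 x8:  {gb_s(32, 8):.1f} GB/s")
                |   # both print ~31.5 GB/s per direction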
        
               | 0x457 wrote:
                | My board has two PCI-E 5.0 slots that work as x8 to the
                | CPU if both are used, or x16 if only one is used. As
                | well as two PCI-E 5.0 x4 m.2 slots.
               | 
                | To my understanding, on the consumer market there isn't
                | much that can utilize that much bandwidth. Current
                | consumer GPUs tap out at 4.0 speeds, but use 5.0
                | (probably due to "more number more better" marketing).
                | To benefit from 5.0 for NVMe you need very specific
                | access patterns.
               | 
                | It's only sad on Intel's side because they want strong
               | market segmentation and got away with it because AMD
               | wasn't a competitor for years.
        
             | smileybarry wrote:
             | Some of the fancier PCIe 5.0 motherboards out there today
             | actually _do_ have two PCIe 16x slots -- one 5.0 and one
              | 4.0. So you _can_ run dual-GPU, though I don't see a
             | reason to do this anymore except some niche GPU
             | virtualization stuff. (multi-GPU proprietary tech is dead,
             | DX12 mGPU never really took off, etc.)
        
             | ants_a wrote:
              | That's not the motherboard's fault; the PCIe controller is
             | on the CPU and needs to have corresponding I/O pinout on
             | the socket. And even from the CPU manufacturer side, it's
             | not artificial market segmentation. The I/O die alone on an
             | EPYC/ThreadRipper Pro is more than double the die area of
             | the CPU, GPU and I/O chips in a Ryzen 7 7800, and it's on a
             | single chip which makes yields way worse. The socket and
             | package need 1700 connections in one case and 6000 on the
             | other. The high I/O CPU is way more expensive to
             | manufacture, and on the lower end the prices start at ~1k
             | for 512 GB/s of full duplex PCIe bandwidth and 12 memory
             | channels.
        
         | baobun wrote:
         | You want more buses, not more bandwidth per bus.
        
         | technofiend wrote:
          | As much as anything, that's also due to Intel limiting lanes
          | on consumer CPUs to keep them from competing with their Xeon
          | line. But yeah, make each lane fast enough and it's less of a
          | concern.
        
           | throwaway48476 wrote:
           | It would be if Broadcom didn't break the PLX PCIe switch
           | market.
        
             | magicalhippo wrote:
              | Yeah, I mean for the AM5 platform, for example, it would
              | have been awesome if some motherboards took x4 PCIe 5.0
              | lanes and turned them into x16 PCIe 3.0 lanes or
              | something like that. That would benefit me way more than
              | the current AM5 motherboards out there.
             | 
             | Of course, as you note, the reason we don't see this is
             | likely because the PCIe switch chips cost more than the
             | rest of the motherboard.
        
         | accrual wrote:
         | It was kinda like that with regular PCI before PCIe as well. A
         | single gigabit NIC could saturate the 133MB/s bus, so two or
         | more wouldn't let you build a router with two full speed
         | gigabit NICs for example.
        
       | anonymousDan wrote:
       | So will it close the gap with NVLink?
        
         | p1esk wrote:
         | What makes you think nvlink will stay the same?
        
           | anonymousDan wrote:
           | I don't know either way, hence my question! Is there some
           | fundamental design issue that means NVLink (or ultra Ethernet
           | in the future) are likely to maintain their bandwidth
           | advantage? Or is PCIe likely to close the gap?
        
             | transpute wrote:
             | Recent thread:
             | https://news.ycombinator.com/item?id=39729509
        
         | touisteur wrote:
          | The recent announcement at GTC was that Blackwell would do 2x
          | on NVLink: 900GB/s => 1800GB/s. And iirc NVLink can do multi-
          | node (GPUs in different servers) too now... so... no?
        
           | z4y5f3 wrote:
            | NVLink advertises combined bandwidth in both directions, so
           | the 1800 GBps NVLink on Blackwell is actually 900 GBps for
           | everyone else. PCIe can also do multi-node direct transfer
           | via PCIe switches and has been already widely adopted. NVLink
            | still has the power and chip area advantage even if the
           | bandwidth is similar.
        
             | touisteur wrote:
             | I was just saying they've announced x2 already, around the
             | same pace as PCIe. And it's G-Byte-ps on nvlink, while
             | G-bit-ps on PCIe, right? I'm probably missing something...
             | 
             | Anyway, good read here https://community.fs.com/article/an-
             | overview-of-nvidia-nvlin...
        
         | z4y5f3 wrote:
         | Depends. NVLink advertises bidirectional bandwidth, whereas
         | PCIe and standard networking calculate bandwidth in a single
         | direction. So a 1800 GBps NVLink is actually 900 GBps in PCIe
          | and standard networking terms.
         | 
         | Therefore, a 512 GBps PCIe would sit between the current H100
         | NVLink (450 GBps) and next generation B200 NVLink (900 GBps).
         | With that being said, NVLink still has lower power draw and
         | smaller chip area, so it would still have a competitive
         | advantage even if the bandwidth is similar.
        
           | anonymousDan wrote:
           | Wut, I did not know that regarding NVLink marketing gimmicks!
           | Thanks for the info.
        
         | aseipp wrote:
         | NVLink doesn't have as strict latency requirements as PCIe
         | does. Stricter latency requirements mean you need more die area
         | per GB/s of bandwidth. In practice this means that if you think
         | of it from an mm^2 perspective, you can get much higher
         | bandwidth -- higher GB-per-second-per-mm^2 -- with NVLink than
          | you can with PCIe. For many compute/bandwidth heavy workloads
          | where that latency can be masked, this makes NVLink a
          | superior choice, because die space is limited.
        
       | Veserv wrote:
       | That is significantly more bandwidth than a DDR5 memory bus. Is
       | there something about PCIe that makes it infeasible to use as a
       | memory bus? Otherwise that just seems like a ton of free memory
       | bandwidth just being left on the table.
        
         | O5vYtytb wrote:
         | Isn't that what CXL is?
        
           | crest wrote:
            | CXL supports cache-coherent memories, but even the numbers
            | claimed in marketing materials are ~5x worse than DDR5
            | access times measured on desktop CPUs in microbenchmarks.
            | It's still a lot better than having software handle the
            | coherency protocol, and could be really useful to allow
            | either very large ccNUMA systems or memory capacity and
            | bandwidth hungry accelerators to access the host DRAM, as
            | long as they can tolerate the latency - e.g. something
            | along the lines of several SX-Aurora cards hooked into a
            | big host with a few TiB of DRAM spread over >=16 DDR5 or
            | later channels.
        
         | redleader55 wrote:
          | PCIe is a pretty complicated protocol based on messages. Using
          | this for RAM would mean the CPU and everything else would be
          | required to speak this protocol. Using PCIe for RAM would also
          | hurt latency, which is undesirable.
         | 
         | Speed is not always the only thing.
        
           | Quarrel wrote:
           | > Speed is not always the only thing.
           | 
           | or perhaps more accurately, bandwidth is not the only goal.
        
             | alexgartrell wrote:
             | Or perhaps more accurately, throughput is not the only
             | goal.
        
               | cogman10 wrote:
               | Throughput, latency, cost. Pick 2.
        
         | tyler569 wrote:
         | Without being a domain expert, my intuition would be that PCIe
         | is optimized for throughput over latency and there's probably a
         | throughput compromise when you want low-latency access.
        
           | smallmancontrov wrote:
           | Yeah but DDR is starting to move to more complex modulations,
           | CXL is bringing down PCIe latency, and PCIe is starting from
           | a position of elevated competence when compared to other
           | standards. For example, you might expect that PCIe obtains
           | parallelism by sending different packets down different lanes
           | but in fact it shreds packets across lanes specifically
           | because of latency. When PCIe eats another standard, the
           | average quality of the standards ecosystem generally goes up.
           | 
           | That said, memory latency is so important that even small
           | sacrifices should be heavily scrutinized.
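            | 
            | A toy illustration of that striping idea - real PCIe
            | framing adds ordered sets, skips and so on, so this is
            | only the round-robin concept:
            | 
            |   # Consecutive bytes of one packet go out on
            |   # consecutive lanes, so a single packet uses the
            |   # whole link at once rather than a single lane.
            |   def stripe(packet: bytes, lanes: int):
            |       return [packet[i::lanes] for i in range(lanes)]
            | 
            |   pkt = bytes(range(12))
            |   for lane, data in enumerate(stripe(pkt, 4)):
            |       print(f"lane {lane}: {list(data)}")
            |   # lane 0: [0, 4, 8]   lane 1: [1, 5, 9]
            |   # lane 2: [2, 6, 10]  lane 3: [3, 7, 11]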
        
         | Retr0id wrote:
         | I'm speculating here but I think the answer has more to do with
         | latency than bandwidth.
        
         | practicemaths wrote:
         | What's the latency comparison between the memory bus and PCIe?
        
           | malfist wrote:
           | I dug into this for a hn comment months ago, but I think it's
           | 2-3 orders of magnitude difference in latency. RAM is
           | measured in nanoseconds, PCIe is measured in microseconds
        
             | pclmulqdq wrote:
             | The difference is smaller than that if you optimize it. The
             | memory bus is about 50 ns in latency, and PCIe you can get
             | down to sub 500 ns.
        
               | malfist wrote:
               | I don't think you've checked those numbers. SSD access is
               | in the order of 10-20 microseconds (10,000 - 20,000 ns)
               | and memory bus access is ~10-15 nanoseconds.
               | 
               | Here's the comment I made a couple months ago when I
               | looked up the numbers:
               | 
               | I keep hearing that, but that's simply not true. SSDs are
               | fast, but they're several orders of magnitude slower than
               | RAM, which is orders of magnitude slower than CPU Cache.
               | 
                | Samsung 990 Pro 2TB has a latency of 40 microseconds
               | 
                | DDR4-2133 with a CAS 15 has a latency of 14 nanoseconds.
               | 
               | DDR4 latency is 0.035% of one of the fastest SSDs, or to
               | put it another way, DDR4 is 2,857x faster than an SSD.
               | 
                | L1 cache is typically accessible in 4 clock cycles; in
                | a 4.8 GHz CPU like the i7-10700, L1 cache latency is
                | sub-1ns.
        
               | pclmulqdq wrote:
               | I have absolutely checked those numbers, and I have
               | written PCIe hardware cores and drivers before, as well
               | as microbenchmarking CPUs pretty extensively.
               | 
               | I think you're mixing up a few things: CAS latency and
               | total access latency of DRAM are not the same, and SSDs
               | and generic PCIe devices are not the same. Most of SSD
               | latency is in the SSD's firmware and accesses to the
               | backing flash memory, not in the PCIe protocol itself -
               | hence why the Intel Optane SSDs were super fast. Many
               | NICs will advertise sub-microsecond round-trip time for
               | example, and those are PCIe devices.
               | 
               | Most of DRAM access latency (and a decent chunk of access
               | latency to low-latency PCIe devices) comes from the CPU's
               | cache coherency network, queueing in the DRAM
               | controllers, and opening of new rows. If you're thinking
               | only of CAS latency, you are actually missing the vast
               | majority of the latency involved in DRAM operations -
               | it's the best-case scenario - you will only get the CAS
               | latency if you are hitting an open row on an idle bus
               | with a bank that is ready to accept an access.
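                | 
                | Putting the rough numbers from this subthread
                | side by side (best-case ballpark figures only):
                | 
                |   ns = {
                |       "L1 cache hit":               1,
                |       "DRAM access (full)":        50,
                |       "tuned PCIe round trip":    500,
                |       "NVMe SSD read":         40_000,
                |   }
                |   dram = ns["DRAM access (full)"]
                |   for what, t in ns.items():
                |       print(f"{what:23} {t:>7,} ns"
                |             f"  ({t / dram:6.2f}x DRAM)")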
        
               | malfist wrote:
               | I will defer to your experience, seems you have more on
               | depth knowledge on this than I do.
        
         | avar wrote:
          | Wouldn't PCI as a "memory bus" just be an SSD connected to the
          | PCI bus, set up as swap space, which you can already do?
        
           | nolist_policy wrote:
           | No, NVMe SSDs are not memory mapped. While they max out at
           | 14GB/s - giving you 56GB/s per PCIe 5.0 x16 slot - emulating
           | memory/swapping is expensive.
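            | 
            | Presumably the 56GB/s assumes an x16 slot bifurcated
            | into four x4 links with one drive each; a quick sketch
            | of that arithmetic:
            | 
            |   # Four x4 drives behind a bifurcated x16 slot,
            |   # each topping out around 14 GB/s in practice.
            |   raw_x4 = 32 * 4 * (128 / 130) / 8  # PCIe 5.0 x4
            |   drive = 14                         # GB/s, real world
            |   print(f"PCIe 5.0 x4 raw ceiling: {raw_x4:.2f} GB/s")
            |   print(f"four drives in an x16 slot: {4*drive} GB/s")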
        
         | p1esk wrote:
         | Latency
        
         | qwertox wrote:
         | It would be nicer to see the ability to extend GPUs with off-
         | the-shelf RAM, think SO-DIMM clipped to the back of the GPU.
          | Then being able to choose to buy 2 x 64 GB DIMMs or just an
          | 8GB one, maybe also have some GBs of good RAM soldered onto
          | the GPU
         | like it is now. One can dream.
        
           | temp0826 wrote:
           | Why does this make me picture a bunch of GPU company execs in
           | big leather chairs cackling around a conference table with a
           | dead goat in the middle?
        
           | briffle wrote:
           | But then the vendors would not be able to differentiate (ie,
           | markup) their products.
           | 
           | They currently charge thousands more for adding $75 of
           | memory.
        
           | crote wrote:
           | Wasn't this a thing in the past - like the 1990s? I know the
           | Matrox G200 supported it, but there might've been others too.
        
         | beauzero wrote:
          | I wonder if this is to help support a newish arch. like the
          | one Microsoft and Intel are proposing (Taipei conf.), with
          | each laptop/PC having a dedicated NPU, CPU and GPU to bring
          | some of the Copilot button back to the PC? Everything I see
          | today I inadvertently read as "oh, this will help me as a
          | hobbyist". It generally doesn't. Probably just technology
          | doing what it does and moving forward.
        
         | anonymousDan wrote:
         | I've seen some very recent research proposing just that:
         | https://arxiv.org/pdf/2305.05033.pdf . Of course still far from
         | production, but interesting that in some cases it actually also
         | improves latency.
        
         | crest wrote:
         | Even the fastest (as in lowest latency) CXL memories claim a
         | ~200ns latency (~4x worse than DDR5) and it wouldn't be free.
        
         | nxobject wrote:
         | Or, we could make it a general purpose unified system bus...
         | call it the Unibus, or something like that.
        
       | loudmax wrote:
        | I wonder how this might affect GPU access to system RAM. Right
        | now, people wanting to run large language models on GPUs are
        | constrained by the amount of VRAM the cards have. GPU access to
        | system RAM is enough of a bottleneck that you might as well run
        | your application on the regular CPU. But if GPU access to system
        | RAM is fast enough, then this opens up more possibilities for
        | GPU acceleration of large models.
        
         | alexgartrell wrote:
         | Latency and cache coherency are the other things that make this
         | hard. Cache coherency can theoretically be resolved by CXL, so
         | maybe we'll get there that way.
        
           | Tuna-Fish wrote:
           | AI models do not need coherent memory, the access pattern is
           | regular enough that you can make do with explicit barriers.
           | 
           | The bigger problem is that by the time PCIe 7.0 will be
           | actually available, 242GB/s per direction will probably not
           | be sufficient for anything interesting.
        
           | foobiekr wrote:
           | The announced Grace-Hopper chip appears to use a CXL or Arm-
           | specific-CXL-alike
        
         | dist-epoch wrote:
         | Currently the memory controller is inside the CPU, so the GPU
         | would have to go through the CPU to get to the memory.
         | 
          | You would still be limited by the CPU's memory bandwidth, and
          | you would be sharing it with the CPU.
        
           | AnotherGoodName wrote:
           | They just moved that silicon to the CPU side is all. The same
           | issues existed in the northbridge days particularly in multi-
           | CPU setups where you'd have to go over hyper transport if the
           | memory access was on the other controller.
           | 
           | Not saying these issues don't exist. It's just that they
           | really haven't changed much here except moved logic from the
           | motherboard to the CPU itself.
        
         | faeriechangling wrote:
         | As PCIe gets faster so does memory so it continues being a
         | bottleneck.
        
         | KeplerBoy wrote:
          | Regular desktop DDR5 dual-channel memory bandwidth is only
         | around 60 GB/s. I doubt it will keep up with PCIe speeds.
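          | 
          | For comparison, the theoretical peaks (sustained numbers are
          | a good bit lower, as noted; DDR5-5600 dual channel assumed):
          | 
          |   # Theoretical peaks; a DDR5 channel is 8 bytes wide.
          |   ddr5_2ch = 5600 * 8 * 2 / 1000        # MT/s -> GB/s
          |   pcie5_x16 = 32 * 16 / 8 * 128 / 130   # one direction
          |   pcie7_x16 = 128 * 16 / 8              # one direction
          |   print(f"DDR5-5600 dual channel: {ddr5_2ch:.1f} GB/s")
          |   print(f"PCIe 5.0 x16, one way:  {pcie5_x16:.1f} GB/s")
          |   print(f"PCIe 7.0 x16, one way:  {pcie7_x16:.0f} GB/s")
          |   # ~89.6, ~63.0 and 256 GB/s respectively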
        
       | westurner wrote:
       | PCI Express > PCIe 7.0:
       | https://en.wikipedia.org/wiki/PCI_Express#PCI_Express_7.0
        
       | ShakataGaNai wrote:
        | This is one of those things where the default reaction is to
        | say "Cool! This will be super handy", but then you realize
        | your existing computer is still running PCIE4 and PCIE5
        | devices are still rare. I didn't realize that PCIE6 was
        | already done and "out".
        
         | zamadatix wrote:
          | Consumer devices don't really need the bandwidth doubling yet,
          | outside of power users treating it as a way to have less lane
          | contention rather than increased bandwidth per device. Most
         | of the attraction is for large systems hoping to remotely
         | access/share memory in a cluster. On the consumer side the only
         | thing really close to mattering is "higher speed sequential SSD
         | reads/writes on an x4 m.2 NVMe slot" and even then the speeds
         | are already a bit silly for any practical single drive purpose.
        
           | crote wrote:
           | I am hoping for some PCIe x16 Gen N to 2x PCIe x16 Gen (N-1)
           | switches - or even 4x PCIe x8 Gen (N-1).
           | 
           | Modern consumer devices have an x16 connection for a GPU, an
           | x4 connection for an NVMe, and an x4 connection to the
           | chipset for everything else. If the motherboard has more than
           | one "x16 slot", the other slots usually only have an x1
           | connection!
           | 
           | Want to add a cheap 10G NIC? Sorry, that uses a Gen 3 x4
           | connection. x16-to-4xNVMe? Not going to happen. HBA adapter?
           | Forget it.
           | 
           | Meanwhile, the GPU is _barely_ using its Gen 5 x16
            | connection. Why can't I make use of all this spare
           | bandwidth? Give the GPU a Gen 5 x8 instead, and split the
           | remaining 8 lanes into 1x Gen 3 x16 + 2x Gen 3 x8 or
           | something.
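            | 
            | Back-of-the-envelope check that the suggested split adds
            | up (approximate raw one-way rates, ignoring switch
            | overhead):
            | 
            |   # Approx. raw one-way GB/s per lane by generation.
            |   RATE = {3: 1.0, 4: 2.0, 5: 4.0}
            | 
            |   def gb_s(gen, lanes):
            |       return RATE[gen] * lanes
            | 
            |   uplink = gb_s(5, 8)                  # Gen 5 x8
            |   down = gb_s(3, 16) + 2 * gb_s(3, 8)  # x16 + 2x x8
            |   print(f"uplink {uplink:.0f} GB/s, "
            |         f"downstream {down:.0f} GB/s")
            |   # uplink 32 GB/s, downstream 32 GB/s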
        
         | whartung wrote:
         | It's still "WOW" and tickles my "remember when" neurons.
        
       | _zoltan_ wrote:
       | is the expectation that Zen 5 will be PCIe 6?
        
       | amluto wrote:
       | Based on my (admittedly limited) understanding, I'm not sure I'm
       | impressed.
       | 
       | Nvidia/Mellanox will sell you an NDR Infiniband device, right
       | now, that does 100Gbps per lane. PCIe7 is 128Gbps per lane and is
       | probably quite a ways out.
       | 
       | AIUI (based on reading some good comments a while ago), there's a
       | tradeoff here. PCIe/CXL requires a very low BER, whereas
       | Infiniband tolerates a BER that is several orders of magnitude
       | higher. This lets PCIe achieve much lower latency, but I'm not
       | entirely sure I'm convinced that this low latency is useful at
       | these extremely high throughputs. I'm not personally aware of an
       | application that needs it. (I know of some applications that
       | would love extremely low CXL latency, but they don't need this at
       | anywhere near 100Gbps per lane, nor do I even expect real world
       | CXL uses to come particularly close to the nominal latency of the
        | link any time soon.)
       | 
       | Maybe PCIe should consider making the throughput-vs-latency
       | tunable, either per link or per packet? And maybe that would
       | allow simpler board designs and lower power for applications that
       | can tolerate higher latency.
        
       ___________________________________________________________________
       (page generated 2024-04-04 23:01 UTC)