[HN Gopher] PCIe 8.0 announced by the PCI-SIG will double throug...
       ___________________________________________________________________
        
       PCIe 8.0 announced by the PCI-SIG will double throughput again
        
       Author : rbanffy
       Score  : 85 points
       Date   : 2025-08-09 22:41 UTC (4 days ago)
        
 (HTM) web link (www.servethehome.com)
 (TXT) w3m dump (www.servethehome.com)
        
       | SlightlyLeftPad wrote:
        | Any EEs who can comment on at what point we just flip the
        | architecture over, so the GPU PCB is the motherboard and the
        | CPU/memory lives in a PCIe slot? It seems like that would also
        | have some power delivery advantages.
        
         | vincheezel wrote:
         | Good to see I'm not the only person that's been thinking about
         | this. Wedging gargantuan GPUs onto boards and into cases,
         | sometimes needing support struts even, and pumping hundreds of
         | watts through a power cable makes little sense to me. The CPU,
         | RAM, these should be modules or cards on the GPU. Imagine that!
          | CPU cards might be back...
        
           | ksec wrote:
            | It's not like CPUs aren't getting higher wattage as well.
            | Both AMD and Intel have roadmaps for 800W CPUs.
            | 
            | At 50-100W for IO, that leaves only about 11W per core on a
            | 64-core CPU ((800W - 100W) / 64 = ~11W).
        
             | linotype wrote:
             | 800 watt CPU with a 600 watt GPU, I mean at a certain point
             | people are going to need different wiring for outlets
             | right?
        
               | jchw wrote:
               | At least with U.S. wiring we have 15 amps at 120 volts.
                | For continuous power draw you'd want to stay at 80% of the
                | rated load, so let's say you have 1440 watts of AC power
                | you can safely draw continuously. Power supplies built on
               | MOSFETs seem to peak at around 90% efficiency, but you
               | could consider something like the Corsair AX1600i using
               | gallium nitride transistors, which supposedly can handle
               | up to 1600 watts at 94% efficiency.
               | 
               | Apparently we still have room, as long as you don't run
               | anything else on the same circuit. :)
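                | 
                | A quick sanity check of those numbers in Python (a
                | minimal sketch; the 15A/120V circuit, the 80% rule, and
                | the 90-94% efficiencies are the figures quoted above):
                | 
                |   breaker_amps = 15
                |   mains_volts = 120
                |   continuous_factor = 0.80  # 80% rule for continuous loads
                | 
                |   ac_budget = breaker_amps * mains_volts * continuous_factor
                |   print(f"Continuous AC budget: {ac_budget:.0f} W")  # 1440 W
                | 
                |   for name, eff in [("~90% MOSFET PSU", 0.90),
                |                     ("94% GaN PSU (AX1600i)", 0.94)]:
                |       print(f"{name}: {ac_budget * eff:.0f} W usable DC")
                | 
                |   # An 800 W CPU plus a 600 W GPU is 1400 W of DC load,
                |   # which at 94% needs ~1489 W at the wall -- right at
                |   # (slightly over) that budget.
                |   print(f"1400 W DC -> {1400 / 0.94:.0f} W AC")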
        
               | cosmic_cheese wrote:
                | Where things get hairy is old houses with wiring that's
                | somewhere between shaky and a house fire waiting to
                | happen, and such houses are numerous.
        
               | kube-system wrote:
               | Yeah, but it ain't nothing that microwaves, space
               | heaters, and hair dryers haven't already given a run for
               | their money.
        
               | jchw wrote:
               | Hair dryers and microwaves only run for a few minutes, so
               | even if you do have too much resistance this probably
               | won't immediately reveal a problem. A space heater might,
               | but most space heaters I've come across actually seem to
               | draw not much over 1,000 watts.
               | 
                | And even then, even if you _do_ run something 24/7 at
               | max wattage, it's definitely not guaranteed to start a
               | fire even if the wiring is bad. Like, as long as it's not
               | egregiously bad, I'd expect that there's enough margin to
               | cover up less severe issues in most cases. I'm guessing
               | the most danger would come when it's particularly hot
               | outside (especially since then you'll probably have a lot
               | of heat exchangers running.)
        
               | jchw wrote:
               | As an old house owner, I can attest to that for sure. In
               | fairness though, I suspect most of the atrocities occur
               | in wall and work boxes, as long as your house is new
               | enough to at least have NM sheathed wiring instead of
               | ancient weird stuff like knob and tube. That's still bad
               | but it's a solvable problem.
               | 
               | I've definitely seen my share of scary things. I have a
               | lighting circuit that is incomprehensibly wired and seems
               | to kill LED bulbs randomly during a power outage; I have
                | zero clue what is going on with that one. Also, oftentimes
                | when opening up wall boxes I will see backstabs that
               | were not properly inserted or wire nuts that are just
               | covering hand-twisted wires and not actually threaded at
               | all (and not even the right size in some cases...)
               | Needless to say, I should really get an electrician in
               | here, but at least with a thermal camera you can look for
               | signs of serious problems.
        
               | atonse wrote:
               | You can always have an electrician install a larger
               | breaker for a particular circuit. I did that with my
               | "server" area in my study, which was overkill cuz I
               | barely pull 100w on it. But it cost nearly zero extra
               | since he was doing a bunch of other things around the
               | house anyway.
        
               | davrosthedalek wrote:
               | Larger breaker and thicker wires!
        
               | atonse wrote:
               | I thought you only needed thicker wires for higher amps?
               | Should go without saying, but I am not a certified
               | electrician :-)
               | 
               | I only have a PhD from YouTube (Electroboom)
        
               | jchw wrote:
               | The voltage is always going to be the same because the
               | voltage is determined by the transformers leading to your
               | service panel. The breakers break when you hit a certain
               | amperage for a certain amount of time, so by installing a
               | bigger breaker, you allow more amperage.
               | 
               | If you actually had an electrician do it, I doubt they
               | would've installed a breaker if they thought the wiring
                | wasn't sufficient. Truth is that 14 AWG wire can
                | physically carry a 20A load on a short run, though code
                | calls for 12 AWG on a 20A circuit. The reason is
                | resistance: the thinner gauge wire has more resistance,
                | so it dissipates more heat (and drops more voltage) along
                | the length of the run, and that heat can start a fire if
                | the wire gets sufficiently hot. I'm not sure how much
               | risk you would put yourself in if you were out-of-spec a
               | bit, but I wouldn't chance it personally.
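                | 
                | For a feel of the numbers, a minimal sketch (the
                | ohms-per-1000-ft values are standard figures for copper
                | wire; the 20A load and 50 ft run are assumed examples):
                | 
                |   load_amps = 20
                |   run_ft = 50  # one-way; current flows out and back
                |   for gauge, ohm_per_kft in [("14 AWG", 2.525),
                |                              ("12 AWG", 1.588)]:
                |       r = ohm_per_kft * (2 * run_ft) / 1000  # round trip
                |       print(f"{gauge}: {load_amps * r:.2f} V drop, "
                |             f"{load_amps**2 * r:.0f} W heating the wire")
                | 
                | That works out to roughly 5 V and 100 W of heat spread
                | along the run for 14 AWG versus about 3.2 V and 64 W for
                | 12 AWG.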
        
               | bangaladore wrote:
               | Could you not just run a 240 volt outlet on existing
                | wiring built for 110V? Just send L1 and L2 on the
                | existing hot/neutral?
        
               | bri3d wrote:
               | You can, 240V on normal 12/2 Romex is fine. The neutral
               | needs to be "re-labeled" with tape at all junctions to
               | signify that it's hot, and then this practice is
               | (generally) even code compliant.
               | 
               | However! This strategy only works if the outlet was the
               | only one on the circuit, and _that_ isn't particularly
               | common.
        
               | viraptor wrote:
               | > You can always have an electrician install ...
               | 
               | If you own the house, sure. Many people don't.
        
               | chronogram wrote:
               | That's still not much for wiring in most countries. A
                | small IKEA consumer oven is only 230V x 16A = 3680W. Those
                | GPUs and CPUs only consume that much at max usage anyway.
                | And those CPUs are uninteresting for consumers; you only
                | need a few watts for a single good core, like a Mac Mini
                | has.
        
               | dv_dt wrote:
               | So Europe ends up with an incidental/accidental advantage
               | in the AI race?
        
               | buckle8017 wrote:
                | In residential power delivery? Yes.
                | 
                | In power cost? No.
                | 
                | In literally any other way? Also no.
        
               | kube-system wrote:
               | Consumers with desktop computers are not winning any AI
               | race anywhere.
        
               | atonse wrote:
                | All American households get mains power at 240V (I'm
                | missing some nuance here about poles and phases, so the
                | electrical people can correct my terminology).
                | 
                | It's often used for things like ACs, clothes dryers,
                | stoves, and EV chargers.
                | 
                | So it's pretty simple for a certified electrician to just
                | make a 240V outlet if needed. It's just not the default
                | that comes out of a wall.
        
               | kube-system wrote:
                | To get technical -- US homes get two 120V legs that are
                | 180 degrees out of phase with each other, referenced to
                | the neutral. Using either leg and the neutral gives you
                | 120V. Using the two out-of-phase legs together gives you
                | a difference of 240V.
               | 
               | https://appliantology.org/uploads/monthly_2016_06/large.5
               | 758...
        
               | ender341341 wrote:
               | Even more technical, we don't have two phases, we have
               | 1-phase that's split in half. I hate it cause it makes it
               | confusing.
               | 
                | Two-phase power is not the same as split-phase (there
                | are basically only a few weird older two-phase
                | installations still in use).
        
               | kube-system wrote:
               | Yeah that's right. The grid is three phases (as it is
               | basically everywhere in the world), and the transformer
               | at the pole splits one of those in half. Although, what
               | are technically half-phases are usually just called
               | "phases" when they're inside of a home.
        
               | voxadam wrote:
                | Relevant video from _Technology Connections_:
               | 
               | "The US electrical system is not 120V"
               | https://youtu.be/jMmUoZh3Hq4
        
               | atonse wrote:
               | That's such a great video, like most of his stuff.
        
               | dv_dt wrote:
                | Well yes, it's possible, but often $500-1000 to run a new
                | 240V outlet, and that's to a garage for an EV charger. If
                | you want an outlet in the house, I don't know how much
                | wall people want to tear up, plus the extra time and cost.
        
               | atonse wrote:
                | Sure yeah, I was just clarifying that if the issue is
                | 240V, etc., US houses have the feed coming in.
                | Infrastructure-wise it's not an issue at all.
        
               | ender341341 wrote:
                | > So it's pretty simple for a certified electrician to
                | just make a 240V outlet if needed. It's just not the
                | default that comes out of a wall.
               | 
               | It'd be all new wire run (120 is split at the panel, we
               | aren't running 240v all over the house) and currently
               | electricians are at a premium so it'd likely end up
               | costing a thousand+ to run that if you're using an
               | electrician, more if there's not clear access from an
               | attic/basement/crawlspace.
               | 
                | Though I think it's unlikely we'll see an actual need for
                | it at home; I imagine an 800W CPU is going to be for
                | server-class parts and rare-ish to see in home
                | environments.
        
               | vel0city wrote:
               | I don't think many people would want some 2kW+ system
               | sitting on their desk at home anyways. That's quite a
               | space heater to sit next to.
        
               | bonzini wrote:
               | Also the noise from the fans.
        
               | com2kid wrote:
               | > and currently electricians are at a premium so it'd
               | likely end up costing a thousand+
               | 
                | I got a quote for over two thousand to run a 240V line
                | literally 9 feet from my electrical panel across my
                | garage to put an EV charger in.
               | 
               | Opening up an actual wall and running it to another room?
               | I can only imagine the insane quotes that'd get.
        
               | the8472 wrote:
               | If we're counting all the phases then european homes get
               | 400V 3-phase, not 240V split-phase. Not that typical
                | residential connections matter to high-end servers.
        
               | bonzini wrote:
                | It depends on the country; in many places independent
                | houses get a single 230V phase only.
        
               | carlhjerpe wrote:
               | In the Nordics we're on 10A for standard wall outlets so
               | we're stuck on 2300W without rewiring (or verifying
                | wiring) to 2.5 mm^2.
               | 
               | We rarely use 16A but it exists. All buildings are
               | connected to three phases so we can get the real juice
               | when needed (apartments are often single phase).
               | 
               | I'm confident personal computers won't reach 2300W
               | anytime soon though
        
               | bonzini wrote:
               | In Italy we also have 10A and 16A (single phase). In
               | practice almost all wires running in the walls are 2.5
               | mm^2, so that you can use them for either one 16A plug or
               | two adjacent 10A plugs.
        
               | rbanffy wrote:
               | > And those CPUs are uninteresting for consumers, you
               | only need a few Watts for a single good core, like a Mac
               | Mini has.
               | 
               | Speak for yourself. I'd love to have that much computer
               | at my disposal. Not sure what I'd do with it. Probably
               | open Slack and Teams at the same time.
        
               | orra wrote:
               | Laughs in 230V (sorry).
        
               | tracker1 wrote:
               | There already are different outlets for these higher
                | power draw beasts in data centers. The amount of energy
                | used in a 4U "AI" box is what an entire rack used to
               | draw. Data centers themselves are having to rework/rewire
               | areas in order to support these higher power systems.
        
               | t0mas88 wrote:
                | A simple countertop kettle is 2000W, so a 1500W PC
                | sounds like no big deal.
        
               | kube-system wrote:
               | Kettles in the US are usually 1500W, as the smallest
               | branch circuits in US homes support 15A at 120V and the
               | general rule for continuous loads is to be 80% of the
               | maximum.
        
               | linotype wrote:
               | True but kettles rarely run for very long.
        
               | kube-system wrote:
               | But computers do, which was why I included that context.
                | You don't really want to build a consumer PC >1500W in
                | the US or you'd need to start changing the plug to
                | patterns that require larger branch circuits.
        
               | CyberDildonics wrote:
               | Kettles and microwaves are usually 1100 watts and lower,
               | but space heaters and car chargers can be 1500 watts and
               | run for long periods of time.
        
               | triknomeister wrote:
                | And cooling. Look here:
                | https://www.fz-juelich.de/en/news/archive/press-release/2025...
                | 
                | Especially a special PDU:
                | https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...
                | 
                | And cooling:
                | https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...
        
           | avgeek23 wrote:
            | And the memory should be an onboard module on the CPU card.
            | Intel/AMD should replicate what Apple did with unified
            | memory on the same ring bus. Lower latency, higher
            | throughput.
            | 
            | Would push performance further. Although companies like
            | Intel would bleed the consumer dry: a certain i5-whatever
            | CPU with 16 gigs of onboard memory could be insanely priced
            | compared to what you'd pay for add-on memory.
        
             | 0x457 wrote:
              | That would pretty much push both Intel and AMD to start
              | market segmentation by CPU core + memory combination. I
              | absolutely do not want that.
        
           | sitkack wrote:
           | https://en.wikipedia.org/wiki/Compute_Express_Link
        
           | derefr wrote:
           | But all of the most-ridiculous hyperscale deployments, where
           | bandwidth + latency most matter, have multiple GPUs per CPU,
           | with the CPU responsible for splitting/packing/scheduling
           | models and inference workloads across its own direct-attached
           | GPUs, providing the network the abstraction of a single GPU
           | with more (NUMA) VRAM than is possible for any single
           | physical GPU to have.
           | 
           | How do you do that, if each GPU expects to be its own
           | backplane? One CPU daughterboard per GPU, and then the CPU
           | daughterboards get SLIed together into one big CPU using
           | NVLink? :P
        
             | wmf wrote:
             | GPU as motherboard really only makes sense for gaming PCs.
             | Even there SXM might be easier.
        
           | mensetmanusman wrote:
           | It's always going to be a back and forth on how you attach
           | stuff.
           | 
           | Maybe the GPU becomes the motherboard and the CPU plugs into
           | it.
        
         | verall wrote:
          | If you look at any of the Nvidia DGX boards it's already
          | pretty close.
          | 
          | PCIe is a standard/commodity so that multiple vendors can
          | compete and customers can save money. But at 8.0 speeds I'm not
          | sure how many vendors will really be supplying; there are
          | already only a few doing SerDes this fast...
        
         | MurkyLabs wrote:
          | Yes I agree, let's bring back the SECC-style CPUs from the
          | Pentium era. I've still got my Pentium II (with MMX technology).
        
         | Razengan wrote:
         | Isn't that what has kinda sorta basically happened with Apple
         | Silicon?
        
           | MBCook wrote:
           | GPU + CPU on the same die, RAM on the same package.
           | 
           | A total computer all-in-one. Just no interface to the world
           | without the motherboard.
        
           | trenchpilgrim wrote:
           | And AMD Strix Halo.
        
         | Dylan16807 wrote:
         | And limit yourself to only one GPU?
         | 
         | Also CPUs are able to make use of more space for memory, both
         | horizontally and vertically.
         | 
         | I don't really see the power delivery advantages, either way
         | you're running a bunch of EPS12V or similar cables around.
        
         | burnt-resistor wrote:
         | Figure out how much RAM, L1-3|4 cache, integer, vector,
         | graphics, and AI horsepower is needed for a use-case ahead-of-
         | time and cram them all into one huge socket with intensive
         | power rails and cooling. The internal RAM bus doesn't have to
         | be DDRn/X either. An integrated northbridge would deliver PCIe,
         | etc.
        
         | kvemkon wrote:
         | > at what point do we just flip the architecture over so the
         | GPU pcb is the motherboard and the cpu/memory
         | 
          | Actually the Raspberry Pi (appeared 2012) was based on an SoC
          | with a big, powerful GPU and a small, weak supporting CPU. The
          | board booted the GPU first.
        
         | LeoPanthera wrote:
          | Bring back the S-100 bus and put literally everything on a
          | card. Your motherboard is just a dumb bus backplane.
        
           | MBCook wrote:
           | We were moving that way, sorta, with Slot 1 and Slot A.
           | 
           | Then that became unnecessary when L2 cache went on-die.
        
         | leoapagano wrote:
          | One possible advantage of this approach that no one here has
          | mentioned yet is that it would let us put RAM on the CPU
          | package (to take advantage of the greater memory bandwidth)
          | while still keeping RAM upgradable, since the whole CPU card
          | could be swapped.
        
         | pshirshov wrote:
         | Can I just have a backplane? Pretty please?
        
           | colejohnson66 wrote:
           | Sockets (and especially backplanes) are absolutely atrocious
           | for signal integrity.
        
             | pshirshov wrote:
             | I guess if it's possible to have 30cm PCIe 5 riser cables,
             | it should be possible to have a backplane with traces of
             | similar length.
        
           | vFunct wrote:
           | VMEBus for the win! (now VPX...)
        
           | theandrewbailey wrote:
           | I've wondered why there hasn't been a desktop with a CPU+RAM
           | card that slots into a PCIe x32 slot (if such a thing could
           | exist), or maybe dual x16 slots, and the motherboard could be
           | a dumb backplane that only connected the other slots and
           | distributed power, and probably be much smaller.
        
         | dylan604 wrote:
          | Wouldn't that mean a complete mobo replacement to upgrade the
          | GPU? GPU upgrades seem much more rapid and substantial compared
          | to CPU/RAM. Each upgrade would now mean taking out the CPU/RAM
          | and other cards vs. just replacing the GPU.
        
           | p1esk wrote:
           | GPUs completely dominate the cost of a server, so a GPU
           | upgrade typically means new servers.
        
             | BobbyTables2 wrote:
              | Agree -- a newer GPU will likely need faster PCIe speeds
              | too.
             | 
             | Kinda like RAM - almost useless in terms of "upgrade" if
             | one waits a few years. (Seems like DDR4 didn't last long!)
        
       | zkms wrote:
       | My reaction to PCIe gen 8 is essentially "Huh? No, retro data
       | buses are like ISA, PCI, and AGP, right? PCIe Gen 3 and SATA are
       | still pretty new...".
       | 
       | I wonder what modulation order / RF bandwidth they'll be using on
       | the PHY for Gen8. I think Gen7 used 32GHz, which is ridiculously
       | high.
        
         | eqvinox wrote:
          | I'd highly advise against using GHz here (without further
          | context, at least): a 32 Gbaud / 32 Gsym/s NRZ signal toggling
          | at full rate is only a 16 GHz square wave.
         | 
         | baud seems out of fashion, sym/s is pretty clear & unambiguous.
         | 
         | (And if you're talking channel bandwidth, that needs
         | clarification)
        
           | kvemkon wrote:
           | > > I think Gen7 used 32GHz, which is ridiculously high.
           | 
           | > 16GHz square wave
           | 
            | Is that for PCIe 5.0? PCIe 6.0 should operate at the same
            | frequency, doubling the bandwidth by using PAM4. If PCIe
           | 7.0 doubled the bandwidth and is still PAM4, what is the
           | underlying frequency?
        
             | eqvinox wrote:
             | PCIe 7 = 128 GT/s = 64 Gbaud x PAM-4 = 32 "GHz" (if you
             | alternate extremes on each symbol)
             | 
             | for gen6, halve all numbers
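              | 
              | The same relationship, tabulated (a sketch; the gen 8
              | line assumes it stays on PAM4, which is planned but not
              | final):
              | 
              |   gens = {  # gen: (GT/s per lane, bits per symbol)
              |       5: (32, 1),   # NRZ
              |       6: (64, 2),   # PAM4
              |       7: (128, 2),  # PAM4
              |       8: (256, 2),  # PAM4 assumed
              |   }
              |   for gen, (gt, bits) in gens.items():
              |       baud = gt / bits  # symbol rate, Gbaud
              |       print(f"gen {gen}: {gt} GT/s = {baud:.0f} Gbaud = "
              |             f"{baud / 2:.0f} 'GHz' fundamental")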
        
               | Dylan16807 wrote:
               | Is it me or are they using the term GigaTransfers wrong?
               | They're counting a single PAM4 pulse as two "transfers".
        
               | eqvinox wrote:
               | They kinda are and kinda aren't, they're just using their
               | own definition...
               | 
               | (I'm accepting it because "Transfers"/"T" as unit is
               | quite rare outside of PCIe)
        
               | zamalek wrote:
               | GT/s is also gaining ground for system RAM in order to
               | clear up the ambiguity that DDR causes for end-consumers.
        
               | Dylan16807 wrote:
               | And it's a good way to remove the ambiguity of things
               | like DDR, but ugh "transfers" is not the best word here.
               | 
               | Looking at some documents from Micron I don't see them
               | using GT/s anywhere. And in particular if I go look at
               | their GDDR6X resources because those chips use PAM4, it's
               | all about gigabits per second [per pin]. So for example
               | 6GHz data clock, 12Gbaud, 24Gb/s/pin.
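                | 
                | Those three numbers are one chain of doublings (a sketch
                | of the arithmetic):
                | 
                |   data_clock_ghz = 6
                |   baud = data_clock_ghz * 2  # DDR: one symbol per edge
                |   bits_per_symbol = 2        # PAM4
                |   print(f"{baud} Gbaud x {bits_per_symbol} bits/symbol = "
                |         f"{baud * bits_per_symbol} Gb/s/pin")  # 24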
        
           | guerrilla wrote:
           | > baud seems out of fashion, sym/s is pretty clear &
           | unambiguous.
           | 
           | Huh? Baud is sym/s.
        
             | eqvinox wrote:
             | Yes, that was the implication, but I've been getting the
             | impression that using baud is kinda unpopular compared to
             | using sym/s.
        
               | throwway120385 wrote:
               | A lot of people think that baud rate represents bits per
               | second, which it only does in systems where the symbol
               | set is binary. People got it from RS232.
        
               | rbanffy wrote:
               | IIRC, modems never went much beyond 2400 baud. Everything
               | past that was clever modulation packing more bits onto a
               | single symbol.
        
         | Dylan16807 wrote:
         | > PCIe Gen 3 and SATA are still pretty new...
         | 
         | That's an interesting thought to look at. PCIe 3 was a while
         | ago, but SATA was nearly a decade before _that_.
         | 
         | > I wonder what modulation order / RF bandwidth they'll be
         | using on the PHY for Gen8. I think Gen7 used 32GHz, which is
         | ridiculously high.
         | 
         | Wikipedia says it's planned to be PAM4 just like 6 and 7.
         | 
         | Gen 5 and 6 were 32 gigabaud. If 8 is PAM4 it'll be 128
         | gigabaud...
        
         | weinzierl wrote:
         | Don't forget VESA Local Bus.
        
       | bhouston wrote:
        | I love that the PCIe standard is 3 generations ahead of what has
        | actually shipped. Gen5 is the live version, but the team behind
       | it is so well organized that they have a roadmap of 3 additional
       | versions now. Love it.
        
         | ThatMedicIsASpy wrote:
          | Gen6 is in use; look at Nvidia ConnectX-8.
        
           | drewg123 wrote:
           | What hosts support Gen6? AFAIK, Gen5 is the most recent
           | standard that's actually deployed. Eg, what can you plug a
           | CX8 into that will link up at Gen6?
        
             | triknomeister wrote:
             | Custom Nvidia network cards I guess.
        
             | my123 wrote:
             | Blackwell DC (B200/B300)
        
         | tails4e wrote:
          | It takes a long time to get from standard to silicon, so I bet
          | there are design teams working on PCIe 7 right now, which won't
          | see products for 2 or more years.
        
         | Seattle3503 wrote:
         | Is there an advantage of getting so far ahead of
         | implementations? It seems like it would be more difficult to
         | incorporate lessons.
        
           | kvemkon wrote:
            | When AMD introduces a new desktop CPU series, IIRC they
            | claim the next generation's design is (almost) finished
            | (including layout?) and they are starting on the
            | next-next-gen design. I'm also asking the same question. But
            | more than half a year before a CPU becomes available to the
            | public it is already being tested by partners (mainboard
            | manufacturers etc.).
        
         | Phelinofist wrote:
         | So we can skip 6 and 7 and go directly to 8, right?
        
       | ThatMedicIsASpy wrote:
        | I'll take it if my consumer motherboard chipset can give me 48
        | PCIe 7 lanes, given that future desktops would likely still only
        | come with 24 gen 8 lanes.
        
       | richwater wrote:
       | Meanwhile paying a premium for a Gen5 motherboard may net you
       | somewhere in the realm of 4% improvements in gaming if you're
       | lucky.
       | 
        | Obviously PCIe is not just about gaming but...
        
         | simoncion wrote:
         | From what I've seen, the faster PCI-E bus is important when you
         | need to shuffle things in and out of VRAM. In a video game, the
         | faster bus reduces the duration of stutters caused by pushing
         | more data into the graphics card.
         | 
         | If you're using a new video card with only 8GB of onboard RAM
         | and are turning on all the heavily-advertised bells and
         | whistles on new games, you're going to be running out of VRAM
         | very, very frequently. The faster bus isn't really important
         | for higher frame rate, it makes the worst-case situations less
         | bad.
         | 
         | I get the impression that many reviewers aren't equipped to do
         | the sort of review that asks questions like "What's the
         | intensity and frequency of the stuttering in the game?" because
            | that's a bit harder than just looking at average, peak, and
            | 90th-percentile frame rates. The question "How often do
            | textures load at reduced resolution, or not at all?"
            | probably _requires_ a human
         | in the loop to look at the rendered output to notice those
         | sorts of errors... which is time consuming, attention-demanding
         | work.
        
           | Dylan16807 wrote:
           | There's a good amount of reviewers showing 1% lows and 0.1%
           | lows, which should capture stuttering pretty well.
           | 
           | I don't know how many games are even capable of using lower
           | resolutions to avoid stutter. I'd be interested in an
           | analysis.
        
           | rbanffy wrote:
           | I'm sure Windows performance counters can track the volume of
           | data going between CPU memory and VRAM over the PCIe bus.
        
         | jeffbee wrote:
         | By an overwhelming margin, most computers are not in gamers'
         | basements.
        
         | checker659 wrote:
         | No matter the leaps in bandwidth, the latency remains the same.
        | Also, with the PCIe switches used in AI servers, the latency
        | (and jitter) is even more pronounced.
        
       | LeoPanthera wrote:
       | I thought we were only just up to 5? Did we skip 6 and 7?
        
         | pkaye wrote:
          | Some of the newer ones are maybe more for data centers.
        
       | robotnikman wrote:
       | I know very little about electronics design, so I always find it
       | amazing that they keep managing to double PCIe throughput over
        | and over. It's also probably the longest-lived expansion bus at
        | the moment.
        
         | wmf wrote:
         | It's less surprising if you realize that PCIe is behind
         | Ethernet (per lane).
        
         | rbanffy wrote:
         | I'm sure you can get some VMEbus boards.
        
       | pshirshov wrote:
       | Yeah, such a shame I've just upgraded to a 7.0 motherboard for my
       | socket AM7 CPU.
       | 
       | Being less sarcastic, I would ask if 6.0 mobos are on the
       | horizon.
        
         | wmf wrote:
         | I guess Venice, Diamond Rapids, and Vera will have 6.0.
        
       ___________________________________________________________________
       (page generated 2025-08-13 23:00 UTC)