[HN Gopher] The death of the PCIe expansion card
___________________________________________________________________
The death of the PCIe expansion card
Author : Kerrick
Score : 154 points
Date : 2022-09-06 14:26 UTC (8 hours ago)
(HTM) web link (kerricklong.com)
(TXT) w3m dump (kerricklong.com)
| structural wrote:
| Your article mentions M.2 to PCIe adapters, and having just
| worked with a few of these in the lab, some extra information for
| you:
|
| 1. The ADT-Link parts that you see on AliExpress are the most
| common ones; most other parts you see will be rebadged or resold
| versions of those. If you're looking to play around with one of
| these, http://www.adtlink.cn/en/product/R42.html is resold on
| Mouser as
| https://www.mouser.com/ProductDetail/DFRobot/FIT0782?qs=ljCe...
|
| 2. There's some mechanical compatibility issues with these
| adapters due to the screws connecting the cable to the M.2 card:
| M.2 connectors on the motherboard side come in various heights
| and not all have sufficient room for the bottom of the screw to
| not hit the motherboard and prevent the card from being fully
| inserted.
|
| That said, the adapters work great at PCIe Gen 3 speeds. I
| probably wouldn't expect them to work above that in the general
| case.
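|
| If you want to verify what link one of these adapters actually
| negotiated, the Linux kernel exposes it in sysfs. A minimal
| Python sketch (read-only, no root needed):
|
|     from pathlib import Path
|
|     # Print negotiated vs. maximum PCIe link speed/width for
|     # every device, from the per-device sysfs attributes.
|     for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
|         try:
|             cur = (dev / "current_link_speed").read_text().strip()
|             mx = (dev / "max_link_speed").read_text().strip()
|             cw = (dev / "current_link_width").read_text().strip()
|             mw = (dev / "max_link_width").read_text().strip()
|         except OSError:
|             continue  # host bridges etc. lack link attributes
|         print(f"{dev.name}: x{cw} (max x{mw}) at {cur} (max {mx})")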
| Havoc wrote:
| On the flip side, this does open up some up-cycling opportunities
| on the prior gen being retired.
|
| Take an old gaming PC, throw out the GFX card sitting in the x16
| Gen4 slot and replace it with a bifurcation card for NVMe drives.
| Should make for a decent Proxmox machine with ZFS storage.
| formerly_proven wrote:
| Unfortunately most mainstream boards have one PEG / PCIe x16
| slot, one M.2 slot and maybe 1-3 PCIe x1 slots (usually x1, even
| if they are x4/x16 mechanically). Those ancillary PCIe slots are
| generally multiplexed through the chipset and 1-2 generations
| behind the CPU's PCIe generation, so they aren't even good enough
| for networking or another SSD (not much faster than SATA, so
| what's the point).
|
| Even USB-C remains a niche feature on the latest socket 1700
| (Intel) and AM4 (AMD, presumably extending to AM5), despite being
| standard on laptops for years. Part of this is because graphics
| are often not on the motherboard, so the motherboard has to
| include a DP-in socket to support graphics out over USB-C.
| MisterTea wrote:
| This is on the heels of "what happened to the discrete sound
| card?"
|
| Let me ask you this: What is left to plug in? Most removable
| gadgets are USB because USB is fast enough. The devices which
| actually need the bandwidth, latency or memory mapping are
| already on the motherboard. And the people who need more are a
| minority.
|
| The only real use case for PCIe for the average PC user is
| connecting a GPU or NVMe. Moving forwards I see the converged
| hodgepodge of USB4 and beyond killing the x1-x4 PCIe slot, with
| servers being the last holdout for high slot counts.
| cbozeman wrote:
| We're being nickel and dimed and upsold on high-end
| workstations for something that another poster in this thread
| already established was cheap... right up until Broadcom bought
| the company that made the cheap alternative chip.
|
| It's so fucking American, isn't it? If you can't beat someone
| with your technical superiority, buy them out.
| JoeAltmaier wrote:
| Happens all the time! I was in a startup bought out to kill
| our product.
| GekkePrutser wrote:
| This article title is BS. PCIe is alive and well. We just need it
| for fewer things, because a lot of good stuff is already on board
| and it now appears in more form factors, namely Thunderbolt and
| M.2.
|
| We will still need it for graphics, fast SSDs, Fibre Channel, 10G
| Ethernet, etc.
| structural wrote:
| The title is accurate, though. The use of PCIe is exploding
| everywhere, but we're putting it in every other interface and
| the "PCIe expansion card" connector and form factor is seeing
| relatively less use overall.
| GekkePrutser wrote:
| It's not 'dead' at all, that's what I was referring to. It's
| not going anywhere either for the remaining use cases.
| MarkusWandel wrote:
| I'm running the kind of eclectic setup that expansion slots were
| meant for. A board that has three full length PCI-E slots in it,
| with three video cards.
|
| Running a multiseat machine (i.e. truly independent login
| stations) pretty much requires a video chip per seat.
|
| It's been an interesting ride. Graphics card fans weren't really
| meant for 24x7 operation. After a recent power failure, gummed-up
| lubricant caused one not to start again, and the card suffered
| heat death. Also, multi-seat login was abandoned in GDM, i.e.
| broken and, to my knowledge, never fixed, so I have to use
| LightDM with no alternatives. Also there were stability issues
| with nVidia cards, but three Radeon cards work fine.
|
| Possibly with the latest hardware, a GPU-per-seat setup could be
| done with Thunderbolt? Anyway meanwhile we soldier on on the
| cheap, with a circa 2012 vintage ultra-high-end gamer machine
| still providing adequate compute power for all the screens in the
| house.
| zamadatix wrote:
| https://github.com/Arc-Compute/LibVF.IO/tree/master/ plus
| https://github.com/gnif/LookingGlass works pretty well. If you
| use an Intel GPU, particularly one of their new Arc dedicated
| GPUs, it supports the functionality on the consumer grade
| hardware without any trickery and you just need Looking Glass
| to map the outputs.
|
| If you really want multiple GPUs, though, you can also use a
| normal 1-in-4-out type PCIe switch and save a lot of cost on
| Thunderbolt components in between. Low-bandwidth ones are
| particularly dirt cheap due to crypto mining volume. Keep an
| eye out for ACS support, though, or you may have to use an ACS
| override patch.
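|
| A quick way to check that isolation (a sketch, Linux with the
| IOMMU enabled; without ACS everything behind the switch usually
| lands in a single group):
|
|     from pathlib import Path
|
|     # List IOMMU groups; devices sharing a group cannot be
|     # passed through to different VMs independently.
|     root = Path("/sys/kernel/iommu_groups")
|     for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
|         devs = sorted(d.name for d in (group / "devices").iterdir())
|         print(f"group {group.name}: {' '.join(devs)}")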
| ridgered4 wrote:
| > If you use an Intel GPU, particularly one of their new Arc
| dedicated GPUs, it supports the functionality on the consumer
| grade hardware without any trickery and you just need Looking
| Glass to map the outputs.
|
| Does this work yet? Last time I looked, my understanding was
| that SR-IOV is supposedly supported on Arc but the software
| doesn't exist yet, and might not for some time.
| zamadatix wrote:
| I don't think it's due for upstream until next year. You
| can pull it and mess with it now though if you're willing
| to buy the GPU from eBay or China. I haven't seen any
| US/Europe retail postings yet. For most it's fair to say
| it's not actually available yet though.
|
| I'm sad they got rid of GVT-g with the change though. SR-
| IOV is definitely a nice add but it has downsides on the
| resource sharing. Undoubtedly GVT-g was just considered too
| unsafe and too niche to keep though.
| justinlloyd wrote:
| It is unfortunate that the consumer grade GPUs cannot be shared
| in Proxmox/ESXi/UNRAID like you can with the "pro" level cards.
| One of the four major benefits of going with RTX A5000 cards
| over 3090s was that I can share one or more GPUs amongst
| several virtual machines, i.e. Shared Passthrough and
| Multi-vGPU.
| NotYourLawyer wrote:
| The rise of laptops seems like the big one to me. Why would I
| make a device in a PCIe format and alienate a huge chunk of my
| market, instead of using USB-whatever?
| CosmicShadow wrote:
| I just built a new PC after like 8 years and had to buy a cheaper
| mobo in order to get access to more PCIe slots. I was baffled by
| why there were so few on them and was wondering if I could even
| fit 2 video cards on the one I originally wanted. I almost hit
| buy before I went: wait, no, something looks wrong, where the
| hell are all the slots! Definitely a bit of a worrying trend. I
| expect mobile devices to get shittier and shittier as they remove
| expandable memory and headphone jacks, but not my precious PCs,
| with so many components to choose from and customize with.
| Luckily I didn't really need to put any of my old stuff into my
| new rig, so it does indeed look empty. I got that Fractal case
| that seems like a dream for customizability and it feels like a
| waste that I'm not utilizing it.
| kmeisthax wrote:
| One related issue is the death of integrated PCIe switches on
| motherboards. High-end boards used to have PCIe hubs on them so
| you could plug in a lot of cards and still get all the
| bandwidth out of a 16-lane CPU.
|
| The reason why they don't do this is because PLX got bought out
| by Broadcom and they jacked up the price of the chips. It turns
| out their most lucrative market was storage, because datacenters
| want to connect a _lot_ of high-speed NVMe to one machine and
| they will pay buckets for a chip that will let them all share
| bandwidth.
|
| So now consumers that want a lot of devices need a CPU with a lot
| of lanes, which meant buying into a more expensive high-end
| desktop platform with more than the standard of 16 or 20 lanes on
| it. Except these are also becoming scarce; both Intel and AMD's
| high-end desktop platforms haven't been updated in years. The way
| high-end used to work was that they were just last-gen server
| chips with overclocking and more consumer-oriented boards, so
| they got the extra lanes and memory capacity, but you still got
| the gobs of USB and so on that you'd see on a consumer board.
|
| So with no lane expanders on motherboards anymore, and no high-
| end platforms with higher lane count CPUs in them, the only other
| option for buying something with lots of slots in it is to buy
| server gear. Not only is that more expensive, but you also lose
| out on all the consumer-grade creature comforts you expect[0] and
| have to deal with the added noise of small fans in a rackmount
| case.
|
| [0] VGA is considered standard video output on most server
| motherboards
| arminiusreturns wrote:
| Can confirm; much of my high-compute architecture design work
| centered around heavy analysis and comparison of existing and
| promised backplanes. Especially when you throw images into the
| mix and need GPUs, but you need the whole stack to be HC/HT or
| you hit bottlenecks later.
| hinkley wrote:
| And other creature comforts like decibels, I suspect.
| PragmaticPulp wrote:
| > One related issue is the death of integrated PCIe switches on
| motherboards. High-end boards used to have PCIe hubs on them so
| you could plug in a lot of cards and still get all the
| bandwidth out of a 16-lane CPU.
|
| PCIe switches weren't common or necessary for most consumer
| applications. Workstation and server CPUs have plenty of lanes
| on their own. PCIe switches occupied a relatively narrow use
| case in the middle where people wanted to use consumer CPUs but
| attach a lot of add-in cards and share bandwidth across them.
|
| Even that use case has been eroded a bit with PCIe bifurcation
| options, where an x16 slot can be split into x8/x8 or even
| x4/x4/x4/x4 depending on the motherboard. An x4 PCIe 4.0 link
| is as fast as an old x16 PCIe 2.0 link, so that's a significant
| amount of bandwidth still. PCIe 5.0 takes this even further,
| where an x16 PCIe 5.0 slot has over 60GB/sec of bandwidth,
| which is a lot even when divided into x4 or x2 links.
|
| AMD especially made platforms with a lot of PCIe lanes
| relatively affordable. An AMD workstation motherboard and CPU
| with a lot of PCIe lanes might not actually be that much more
| expensive than a hypothetical consumer CPU with a PCIe switch.
| The other issue with a PCIe switch is that you're still
| limited by upstream bandwidth, so if you need x16 bandwidth
| running to multiple cards at the same time, a PCIe switch
| doesn't actually help you.
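|
| The arithmetic behind those comparisons is easy to check; a
| quick sketch (one-direction throughput, counting only the
| line-encoding overhead):
|
|     # Approximate usable PCIe bandwidth in GB/s, one direction.
|     # 8b/10b encoding for Gen1/2, 128b/130b from Gen3 onward.
|     GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}
|     ENC = {1: 0.8, 2: 0.8, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}
|
|     def bw(gen: int, lanes: int) -> float:
|         return GT_S[gen] * ENC[gen] * lanes / 8  # bits -> bytes
|
|     print(f"x4  Gen4: {bw(4, 4):5.1f} GB/s")   # ~7.9
|     print(f"x16 Gen2: {bw(2, 16):5.1f} GB/s")  # ~8.0
|     print(f"x16 Gen5: {bw(5, 16):5.1f} GB/s")  # ~63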
| bilegeek wrote:
| It's still a problem if you want to run previous gen
| hardware. For instance, if I have a Gen4 motherboard and pass
| through a second GPU to a VM - so the second GPU at Gen3 x8
| plus the host GPU at Gen4 x8 - I'm only using a quarter
| instead of half the throughput the hardware actually
| supports, because Gen3 is half the speed of Gen4 per lane.
|
| Or even think about Gen5, doubling Gen4 speeds: With a PCIe
| switch, I could get essentially 3/4 performance out of a Gen5
| host GPU, and full performance out of a Gen3 second, instead
| of 1/2 and 1/4 respectively.
| Dalewyn wrote:
| Do GPUs even saturate PCIe3 x16 or x8 connections, let
| alone PCIe4 or 5, that we need to be worried about
| bottlenecks?
|
| The only thing I'm aware of that will actually saturate a
| PCIe connection is NVMe storage.
| thequux wrote:
| I'm running a server motherboard (Supermicro H12DSi-N6), and
| while the processing power is _wonderful_, the tradeoffs were
| a bit surprising:
|
| * I can't disable the BMC video card, which means that when I'm
| running Windows, there's an 800x600 monitor that will never go
| away and windows occasionally decide to open on it. This is
| solvable by opening the BMC on the machine itself and using the
| IKVM webapp to drag it onto my main screen; this takes a _very_
| careful flick of the mouse.
| * There's no built-in audio; in order to listen to Spotify, I
| use a USB audio card.
| * The board only has 6 USB ports, only 2 of which are USB-3.
| Thus, I get to use a lot of hubs.
| * It takes a _long_ time to boot past the firmware; generally
| 3-4 minutes to warm boot and 6-8 cold.
| * While it can be used as a single-socket board (which I do,
| because I don't have the funds for a second CPU just yet), I
| lose a bunch of PCIe slots doing so; half the PCIe lanes for
| the first CPU are direct-wired to the second socket for the
| inter-CPU link.
| justinlloyd wrote:
| Bridge jumper JPVGA1 directly on the motherboard and it will
| disable the onboard VGA. Switch BMC priority from onboard
| (default) to offboard, I don't think this is necessary if you
| bridge JPVGA1.
|
| Get yourself a nice Highpoint USB 3.0 card with four
| controllers, one per port. 20Gbps of bandwidth in aggregate.
|
| I believe there is a jumper on the motherboard that will
| bridge the missing PCIe slots in single CPU configuration.
| zeusk wrote:
| If that's the only _other_ monitor, you can set your actual
| display as primary then select primary display only in Win+P
| menu
| ev1 wrote:
| If the 800x600 is on the right and a window opens in it, you
| can alt-tab to it or click it on the taskbar, then just hit
| win+leftarrow until it appears on the correct monitor. Same
| for win+rightarrow if other way around. Should not need to
| IKVM
| Eduard wrote:
| On all Windows operating systems I have used, this
| workaround (winkey+left/right) unfortunately does not work
| for moving modal pop-ups.
|
| For example, when starting KeePass by opening a .kdbx
| password safe file, KeePass will first open the master
| password input modal pop-up.
|
| If that happens on a turned-off screen (IMHO it opens on
| the screen on which the KeePass main window was placed
| before the process exited on the previous run), you'll be
| lost.
|
| Very frustrating, e.g. for multi-screen home theater
| setups.
|
| Anyone please share your solutions to this problem.
| saratogacx wrote:
| Win + Shift + arrow works better with modals because it
| doesn't try and resize/pin the window but just blips it
| to the next screen over preserving size and location. I
| use it a lot when I move from my laptop to a multi-mon
| setup to sort windows to different screens quickly.
| vic20forever wrote:
| Moving a window: Alt-Space, M, arrow keys
| stinkyball wrote:
| If you Alt-Space, M, and press one of the arrow keys, the
| window will 'stick' to your mouse cursor so you can just
| wiggle your mouse after the key combo to bring it to you.
| Lownin wrote:
| In case helpful, select the app on the task bar and use
| Shift+Win+Left/Right arrow key to move it between monitors
| without needing to do the IKVM dance
| PaulHoule wrote:
| The PC industry has been scroogling users on I/O since the late
| 2000's; the main entity to blame is Intel, with its vainglorious
| plan to take over the complete BoM for PCs. Now AMD is at the
| center of the phenomenon.
| ridgered4 wrote:
| There may have been a stealthy power move going on back then to
| kill off the GPU by denying them anywhere to plug in. When
| Intel successfully killed off 3rd party chipset makers nvidia
| was clearly quite worried and successfully sued Intel with the
| result they were required to keep PCIe expansion available on
| their chipsets with enough bandwidth for GPUs. (This agreement
| expired some years ago, I think it was only for a decade)
|
| This wasn't an unreasonable worry for nvidia to have. They'd
| just lost preferred chipset status on AMD platforms when AMD
| purchased ATi. Intel had released some lower end platforms with
| only 1x pcie connections (nvidia had responded with nvidia Ion
| which could work over such a tiny link) and AMD was talking big
| about their APUs.
|
| It was a reasonable fear back then that Intel and AMD, with a
| tightened grip on their platforms and their own integrated
| solutions competing in the same space, might choose to just cut
| off nvidia's air supply by flooding the market with chipsets
| that didn't have the connectivity Nvidias cards needed.
| LegitShady wrote:
| I think on desktops most users have enough I/O, and on laptops
| the Ultrabook-type trend is not particular to AMD or Intel.
| CharlesW wrote:
| > _The PC industry has been scroogling users on I/O since the
| late 2000's..._
|
| What does this mean? (I'm familiar with the Microsoft ad
| campaign, but that doesn't make sense to me in this context.)
| tenebrisalietum wrote:
| In the 80's, PCs and software ate the world, and one of the
| reasons (apart from DOS/Windows) was that anyone could put
| something on a PC expansion card which more or less directly
| spoke to the CPU and extended the hardware.
|
| So the PC acquired a very diverse set of cards and was able
| to do heavy processing in a very diverse set of roles - from
| medical equipment, cash registers, MIDI/music composition,
| graphics, publishing, sound, networking, etc. No business was
| untouched by the PC in the 80's--the ability of anyone to
| make hardware for the platform was a significant part of that
| being as cheap as possible.
|
| PC hardware seems to be slowly moving towards being like a
| cell phone - everything onboard/builtin, ports picked by the
| manufacturer (you will get 1 USB-C port and like it), and if
| you want to add something, your primary option is a USB-type
| port of some sort - with its own world of firmware,
| controllers, etc.
|
| Look at a smartphone and a modern ODM-designed cheapo laptop
| motherboard - they're very similar.
|
| If motherboards stop being expandable it might mean the PC
| hardware ecosystem isn't going to be able to innovate except
| at the behest of high-capital firms like Intel, AMD, etc.
| That might be OK if we are truly at the zenith of what's
| possible with a PC, and really want to give the keys to the
| kingdom to those firms.
|
| And who knows - I know somewhere in there USB4/Thunderbolt on
| some level is able to move PCIe traffic, so maybe it won't be
| so bad.
| PaulHoule wrote:
| USB3, USB-C is no answer.
|
| Back when the USB1 spec came out it said you could plug 127
| devices into a port through a hierarchy of hubs.
|
| With three different laptops and a plethora of USB 3 hubs I
| found the system would support only a limited number of
| devices in the tree. If you plugged in too many devices
| (somewhere between 3 and 5), plugging in a new device
| would cause an existing device to drop out. It's just
| annoying if it is a keyboard or mouse that gets dropped out,
| but if it is a storage device, data could get corrupted.
|
| I looked at the USB 3 spec and couldn't find any guarantee
| that there was any number of devices you could plug into
| hubs and have it work.
| magicalhippo wrote:
| > I looked at the USB 3 spec and couldn't find any
| guarantee that there was any number of devices you could
| plug into hubs and have it work.
|
| It's a controller and/or BIOS limitation[1][2]. The
| complication is that while USB devices can be addressed
| using 7 bits = 127 devices total, each device usually
| creates more than one endpoint, and each endpoint
| consumes controller and system memory resources. The BIOS
| allocates the memory, and the amount is apparently
| hardcoded (guess a setting would be too difficult). If
| you have many USB 3 devices with a lot of endpoints, that
| memory runs out quickly.
|
| In addition, each endpoint reserves some bandwidth, so
| the uplink needs to be able to provide that bandwidth.
|
| [1]: https://community.intel.com/t5/Embedded-Intel-Core-
| Processor...
|
| [2]: https://borncity.com/win/2019/09/06/windows-10-not-
| enough-us...
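|
| To see how quickly endpoints pile up on a real system, a
| sketch with pyusb (assumes pyusb plus a libusb backend is
| installed):
|
|     import usb.core
|
|     # Count endpoints per attached device; each endpoint
|     # consumes xHCI controller resources from the
|     # BIOS-allocated pool.
|     total = 0
|     for dev in usb.core.find(find_all=True):
|         try:
|             n = sum(i.bNumEndpoints for cfg in dev for i in cfg)
|         except usb.core.USBError:
|             continue  # descriptors unreadable without permission
|         total += n
|         print(f"{dev.idVendor:04x}:{dev.idProduct:04x}: {n} endpoints")
|     print(f"total endpoints: {total}")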
| formerly_proven wrote:
| Quite honestly USB 3 has always been flaky for me; I
| avoid using USB 3 hubs because they don't work reliably
| anyway, and in many cases the front USB 3 ports cause
| errors because... I don't even know. To say nothing of
| using cables longer than 50 cm. Back in the late 2000s,
| USB 2.0 was similarly troublesome, as USB 3 remains today.
| Slow USB 1.1 devices always seem to work, though.
| cbozeman wrote:
| > That might be OK if we are truly at the zenith of what's
| possible with a PC, and really want to give the keys to the
| kingdom to those firms.
|
| No, we don't, and they've already shown we can't.
|
| Intel kept us at 4-core shit-tier processors for a decade,
| and the moment AMD managed to get a leg up on Intel, AMD
| killed their HEDT platform because it competed with their
| server platform _in PCI Express lanes_ and turned their
| back on all the gamers that brought their sorry ass back
| from the brink of death.
|
| Both of them have been shown to be greedy, opportunistic
| shitbags of companies that cannot be trusted.
| PaulHoule wrote:
| Unfortunately it's an obscure topic because people have been
| so dependent on laptops and phones that few people know about
| the regression of desktop PCs.
|
| Here's a simple example.
|
| I found an old i3 PC from Intel in my house that was left by
| one of my son's friends. The CPU is artificially limited to
| 16 PCIe lanes.
|
| You might think you could plug a discrete GPU in, since it
| only uses 16 lanes, but no: some of those lanes are consumed
| by storage, USB ports, super I/O, etc.
|
| So this computer is e-waste now because it can't be upgraded
| to keep up. This kind of delinquency is only possible because
| Intel has pushed barely-functional integrated "GPU"s.
|
| Back in the 1980s and 1990s you had to know some rules about
| how interrupts were assigned to slots to build a working PC.
| Since the early 2000's, PC builders have faced a number of
| barely documented rules about how PCIe lanes are assigned,
| which boil down to "i3 and i5 are e-waste, and buying a budget
| or mid-range motherboard is a waste of money because upgrading
| your machine will be a matter of 'you can't get there from
| here'".
| yourusername wrote:
| Even with 8 lanes available a GPU should work fine. If it's
| a very high-end GPU it won't run at full speed, but this is
| an old i3 so that's probably not an issue. Some low-end GPUs
| only use 4 lanes to begin with. Did it actually fail to work
| when you added a GPU?
| justsomehnguy wrote:
| I think the other poster is... generalizing his one-time
| experience to everything.
|
| I personally did the "x16-to-x1 mod", i.e. removed the plastic
| from the connector so an x16 video card could be installed
| in an x1 slot (and free up the x16 slot for a network or
| RAID card, don't remember atm). The video card worked fine.
| It's part of the PCI-E standard that a card can use _any_
| number of lanes _available_.
|
| https://en.wikipedia.org/wiki/File:PCIe_J1900_SoC_ITX_Mai
| nbo...
| PaulHoule wrote:
| Yes
| cortesoft wrote:
| I think the question was more around the use of the word
| "scroogle".
| CharlesW wrote:
| Yes! From the interesting answers I think "scroogling"
| was used as a synonym for "screwing" or "gaslighting".
| I've never seen it used outside of the Microsoft ad
| campaign, so it's interesting to see it in the wild.
| jamiek88 wrote:
| I think it was a typo for scrooging, which is a common
| enough word for being cheap (USA) / tight (UK).
| mikeInAlaska wrote:
| My PC I just built, a Ryzen 5950X with an X570S chipset, has the
| fewest PCIe slots I've ever had on a PC (three). Plus... if you
| use the second slot, it cuts your graphics card down to x8 PCIe
| lanes, so no way!!! And if you use the third slot, it disables
| one of the onboard NVMe drive slots. With this I had no choice.
| So, in the end I had room for the graphics card and a single 10
| gigabit Ethernet adapter.
| thro388 wrote:
| The first batch of AM5 mobos are high added-value; makers want
| to make the most money selling buzzwords. Normal consumer-grade
| mobos (with many PCIe slots) will probably follow later.
| voldacar wrote:
| I remember back in the original Nvidia GTX Titan era (2013-ish?)
| it wasn't uncommon for TOTL gaming rigs to have 2, 3, even 4
| GPUs. At some point that just ceased to be a thing.
| silksowed wrote:
| "The Death of the _Consumer_ PCIe Expansion Card ". just a
| friendly reminder that in the data center world pcie isn't going
| anywhere soon
| Taniwha wrote:
| serdes .... it's serdes all the way down ....
|
| There's a growing trend in SoCs of just having serdes blocks -
| because these days PCIe, high speed ether, USB/thunderthing, sata
| etc etc are all just variants on the same high-speed technology -
| different protocols are just different MACs on top of a common
| serdes block
| marktangotango wrote:
| Interesting, can anyone point out some SoCs with multiple
| serdes? As an interested non-embedded developer, I know
| high-end FPGAs do, but I'm not aware of any SoCs.
|
| Also, how hard is it to lay out a PCB for these? Is length
| matching sufficient?
| markbnj wrote:
| I haven't used more than two PCIe slots for a long time now. My
| current ASUS mobo has... 4, I think, and I use two: one for the
| EVGA GPU and another for a USB 3.0 card I had lying about.
| dsr_ wrote:
| It's a demand issue.
|
| Business users get: laptops, mostly. You need a desktop? It's
| either because you aren't trusted/don't need to take a laptop
| around or because you need a workstation. The low end doesn't
| need PCIe slots for anything: integrated GPUs are good enough to
| do all generic business work, integrated sound, network, blah
| blah blah.
|
| So we've identified workstations as the business case for more
| PCIe slots. The AMD workstation CPUs (ThreadRipper and
| EPYC-P) have lots and lots of PCIe lanes. More than enough.
| The Intel workstation CPUs... don't: G12 has 16x PCIe 5 and
| 4x PCIe 4. So you buy AMD this year.
|
| Home users: either you're a casual gamer or a hardcore gamer.
| Either way, you need a maximum of 2 PCIe slots for GPUs, and
| you're done.
|
| What's left? People on HN who run server and workstation tasks on
| repurposed desktop hardware, some of whom buy used server gear
| and live with the noise, some of whom buy AMD workstation gear
| kryptiskt wrote:
| I'd like a PCIe card with room for more NVMe drives on it. I'd
| like to replace my SATA-connected SSDs for performance, but I
| can only stick two M.2 cards on my motherboard.
| Tijdreiziger wrote:
| There are multiple products in that space, such as
| https://www.asus.com/Motherboards-
| Components/Motherboards/Ac...
| jandrese wrote:
| Something like this?
|
| https://www.amazon.com/Adapter-advanced-solution-
| Controller-...
| blibble wrote:
| you buy a cheap adapter to plug an m.2 nvme drive into a pci
| card
|
| and vice-versa (why? m.2 slots have ACS support for pci
| passthrough)
|
| they're essentially just a wire as it's the same bus
| justinlloyd wrote:
| Highpoint and Squid are your go-to options in that space.
| There are others, e.g. dumb cards, if you just want to
| bifurcate an existing PCIe slot to handle four M.2 drives.
| But if you want 16 or 64 NVMe drives, the two manufacturers I
| mentioned have offerings. I have a Highpoint U.2 card, which
| gives me the option to handle 8x U.2 or M.2 drives (with
| adapters) in a single PCIe slot.
| Gigachad wrote:
| The vast majority of users are happy with a few TB of fast
| storage and the rest on regular hard drives.
|
| The users who aren't tend to be building servers.
| jeffbee wrote:
| > Intel workstation CPUs ... don't. G12 has 16xPCIe 5 and
| 4xPCIe 4.
|
| Intel's current _workstation_ CPU line, launched a year ago,
| has 64 PCIe 4.0 lanes.
| https://www.pugetsystems.com/recommended/Recommended-Systems...
| jamiek88 wrote:
| I didn't realize Xeon had improved that much, but his point
| still stands, as Threadripper has 128 lanes. AMD is better at
| the moment.
| justinlloyd wrote:
| I believe the red team processors don't treat all PCIe
| lanes equally, much like the earlier blue team processors.
| So those 128 lanes are not 128 lanes. More like 128 lanes
| with caveats. Though my information may be out of date, as
| it has been a year or so since I last looked at AMD options.
| hobo_mark wrote:
| The only caveat I know of is that on a dual socket system
| you won't get twice as many lanes. 48 (or 64, depends on
| the configuration) of each are used for the interconnect
| to the other socket. Then of course the performance of
| any slot-to-slot traffic may depend on the different
| paths taken inside the interconnect, but that only
| matters for very high-bandwidth or low-latency
| applications.
| justinlloyd wrote:
| It is true, the CPU-to-CPU interconnect fabric on both
| blue team and red team play into that, and it varies
| between generations and motherboards and chip sets. It
| gets very confusing quite quickly. Some server boards
| will let you dedicate PCIe slots to a specific CPU,
| others won't let you not dedicate certain slots to a CPU.
| Melatonic wrote:
| The higher end Intel CPUs always have tons of PCIe lanes.
| z3t4 wrote:
| I use multi-seat on Linux. You need one graphics card per seat.
| So our whole family shares one computer, but everyone has their
| own workstation with monitor, keyboard, mouse and headset. Oh,
| you also need separate sound cards.
| dsr_ wrote:
| I understand how to do this... but I have to ask, why?
| toast0 wrote:
| Not the OP, but I've thought about doing something like
| this, because you can justify a much fancier computer if
| it's shared. I have no reason for 16-cores, but 16-cores
| across three people is reasonable, maybe not even enough,
| so better get something bigger.
|
| Then at least 64 GB of ram, and a sweet disk array, etc.
|
| I _did_ run a dual head Windows environment for a while,
| but it did have issues from time to time.
| blibble wrote:
| > Ohh you also need separate sound cards.
|
| hdmi/displayport audio?
| pjmlp wrote:
| Not even business: since 2011, on the projects I worked on,
| the workstations, if needed, were always some beefy cloud VM.
| JAA1337 wrote:
| Demand and capitalism. Bigger, better, and newer.
| Q6T46nT668w6i3m wrote:
| There're plenty of uses that you may not see: I have a capture
| and sound card in addition to two GPUs.
| highwaylights wrote:
| I'd be confident in saying that you're an outlier in that
| though.
| Silhouette wrote:
| That may be true but in an industry that produces millions
| of new computers every year there's quite a bit of room for
| outliers. After all someone has been buying all these
| specialised expansion cards for all these years.
|
| Graphics, sound and networking may have been the biggest
| reasons and may now be adequately catered for in everyday
| PCs without the extra hardware. However the high end of all
| of those markets still needs more than what you get on any
| basic motherboard and processor and then there are all the
| speciality cards as well catering for who knows how many
| niche markets.
| bentcorner wrote:
| I have a PCIe sound card, but I have to say, these days I'm
| actually using a tiny Apple USB dongle for my audio, and it
| sounds cleaner than the internal sound card does. The dongle
| was also a fraction of the price of the sound card, and way
| more convenient.
| dsr_ wrote:
| Which of those need to be at PCIe3 or 4 bandwidths rather
| than USB3 bandwidths?
|
| Which of them need to be at PCIe latencies rather than USB
| latencies?
| pclmulqdq wrote:
| Capture cards need the bandwidth. Whether they need the
| latency is arguable, but they need a lot more latency
| determinism than USB tends to offer out of the box.
| belthesar wrote:
| Introduced latency on the capture side makes latency
| tuning your entire production pretty difficult. For non-
| real time usage, sure, latency in the 100-200ms range is
| more than acceptable (assuming it's deterministic, as you
| pointed out), but in the real-time world? Keeping things
| within a frame is pretty much required, and with the
| popularity of software-driven studio workflows across
| both amateur streaming and professional production, it's
| been real hard to get reliable performance out of USB
| hardware that didn't add frustrating amounts of latency due
| to pre-ingest compression or seemingly random amounts of
| delay due to protocol or CPU time starvation.
| Dylan16807 wrote:
| USB latency should be under a millisecond as far as I
| know.
| pclmulqdq wrote:
| This is both only true for small transfers (not
| bulk/asynchronous transfers) and for the ~99th
| percentile. Large transfers have some buffer management
| and handshaking, so they tend to have highly variable
| latency that has a very fat tail.
|
| The latency degradation for large transfers is so
| noticeable that most audio DACs (not just ones for
| gullible audiophiles, also for the pro market) use custom
| drivers and USB protocols. For 1/1000000th the data rate
| that a capture card would need.
| justinlloyd wrote:
| Yep, there's a reason I have Blackmagic quad 4K capture
| cards in my workstation. Syncing multiple video streams
| with USB capture cards would be nigh impossible even if
| you put them on separate USB controllers. Ingest over USB
| is fine (though slow) but pretty much every USB capture
| card does its own internal compression, as you point out,
| and then involves the CPU to decompress it and get it
| into VRAM or DRAM.
| dsr_ wrote:
| Realtime video production is definitely an outlier. You
| probably want a workstation-class system anyway, with a
| full TB of RAM so you know that's never an issue.
| pier25 wrote:
| > _You need a desktop? It's either because you aren't trusted_
|
| What do you mean by "you aren't trusted"?
| dsr_ wrote:
| An awful lot of people work in a corporate desktop
| environment where the machines are bolted down. They don't go
| home with you. You don't have admin privileges, you can't
| install software, and there's probably corporate spyware
| installed.
|
| Customer service. Tech support. Inbound sales. Outbound
| sales.
|
| The company doesn't trust you. If you take a machine home,
| you're probably going to lose it or break it or sell it, so
| that's a hard no.
| layer8 wrote:
| It's also a security risk. They may generally trust you,
| but may not trust you 100% in the little detail of never
| ever getting malware on your laptop, which then would
| invade their internal network. I'm actually astonished that
| so many companies allow you to have a laptop that you take
| home and that also connects to the company network.
| Gigachad wrote:
| Being "in the network" doesn't matter anymore because
| everything has moved from unsecured intranet services to
| Internet exposed stuff authenticated with SAML. One user
| having malware is as much of a threat as someone else at
| the cafe having malware. No longer an issue.
| pdntspa wrote:
| You're forgetting independent creators and artists
| pier25 wrote:
| I think that falls into the workstation category?
| pdntspa wrote:
| Not really, all you need is a beefy PC or laptop. These
| guys aren't usually running Xeon or Threadripper
| Finnucane wrote:
| To be fair, the chip and system makers have mostly forgotten
| them too.
| auxym wrote:
| Not all motherboards have built in wifi, which leaves the
| option of a USB external adapter or a PCIe card.
| pbhjpbhj wrote:
| PCIe x1 for networking is quite common, PCIe-mounted SSDs
| aren't unheard of, and PCIe USB expansions seem pretty common
| too (others already mentioned audio/capture cards). Not much
| else I can think of unless you're getting really out there.
| wink wrote:
| I find PCIe USB expansions and SSDs very uncommon, but I
| guess my sample size is mostly nerds who also game; I'm not
| even sure when I last opened up a work-issued desktop PC...
| Kerrick wrote:
| I use a 7x USB 3.0 expansion card for my gaming system,
| because I hate hot-plugging input devices into my front
| panel I/O. My gaming uses a mix of peripherals from a
| Keyboard and Mouse (separate from the ones I use for
| productivity work on the same PC) to an Xbox controller to
| a HOTAS & Pedal kit, plus charging cords for my Valve Index
| controllers that I leave plugged in and routed. And of
| course high-end gaming headsets usually use USB these days,
| plus a webcam. I use a lot of ports. :)
| Dylan16807 wrote:
| That's a reasonable method but I think most people would
| use a hub to achieve those goals.
| usrusr wrote:
| Recently I had some USB enumeration loop (turned out to
| be the Index umbilical connector) that made me disconnect
| everything one by one. What a marathon. The USB hub
| dedicated to wireless dongles alone has a wireless
| keyboard/touchpad adaptor, the one for the Steam
| Controller and an ANT+ stick. Elsewhere the homebrew
| stick/rudder is linked up, the Nrf52 devkit, keyboard,
| mouse, another touchpad, the hub where I connect Android
| devices for development and occasionally a Garmin for
| some cIQ stuff. A modified webcam I used to use for
| infrared headtracking and that cheap portable usb audio
| with built in phantom power for a condenser mic headset
| (the firewire audio interface seems to be acting up). The
| old laser printer is permanently connected whereas the
| scanner is only plugged in on demand. Currently
| disconnected are the throttle quadrant, the trim box
| (with yet another touchpad, still ps/2) and the midi
| controller. Somewhere there's an arduino configured as an
| nrf52 programmer. Yes, I need a notebook computer free of
| most of that stuff to get actual work done.
| hnuser123456 wrote:
| And if you get 3x full body vive trackers for your index,
| those each have a USB receiver and you need 3 more ports
| and they can't be consolidated for some reason. And if
| you want to charge those from your computer, that's 3
| more ports.
|
| I have an anker 5-port USB power hub (no data, not
| connected to computer) next to my machine for charging
| phone, headphones, earbuds, etc, and another 10-port hub
| for charging the 2 index controllers, 3 full body
| trackers, and 3 track straps with builtin batteries for
| extended play (for days when you feel like spending more
| than 7 hours in VR anyways...)
| numpad0 wrote:
| "M.2 NVMe" SSDs are PCIe add-in cards in a laptop
| formfactor.
| li2uR3ce wrote:
| So few buy used hardware. I got a used Dell laptop and the WiFi
| card was shit. Dell doesn't always make the right call but
| because WiFi card was a replaceable PCIe card, I was able to
| get more life out of the machine. On the one hand it was good
| for me. On the other hand it was bad for Dell because I didn't
| replace the whole machine. Dell is learning, however. Their new
| machines have everything soldered in place with no pesky
| upgrade/repair options, not even a stray NVMe or RAM slot. It's
| amazing they left a USB port.
|
| I've used PCIe a lot for storage and networking applications in
| laptops, servers, and desktop form factors. It has allowed for
| much cheapskating--which is why it's got to go. Repair and
| expansion options are bad for business.
| formerly_proven wrote:
| > Their new machines have every thing soldered in place with
| no pesky upgrade/repair options, not even a stray NVME or RAM
| slot. It's amazing they left a USB port.
|
| I'm gonna shill a bit again and say the LG Gram series belies
| anyone saying this has to be done for weight / slimness
| reasons, as even the 1.0 kg 14-inch model has two M.2 2280
| slots.
| dmw_ng wrote:
| Curious what replacement card you went with. I know the
| WiFi/BT are combined; my Bluetooth headphones were cutting out
| last night for no reason at all, and the thought came around
| again to flip a replacement in.
|
| Nice to meet another used hardware buyer. I think the total
| cost of my laptops over the past 10 years equates to roughly
| the price of a single new high spec Macbook, and that's with
| repeat replacements due to damage / spills / etc.
| li2uR3ce wrote:
| I went with an Intel 7260HMW BN for $20. Dual band with
| Bluetooth 4.0.
|
| USB 3 can trash your WiFi/BT spectrum, as many device
| manufacturers provide inadequate shielding. I got some foil
| tape and carefully lined the inside of an external drive
| enclosure and fixed some intermittent WiFi issues. The FCC
| should scrutinize USB devices more, I think. Probably
| whack-a-mole though.
| kevin_thibedeau wrote:
| If the FCC cared, windowed gaming PCs wouldn't have ever
| been a thing.
| AmVess wrote:
| Intel Wifi 6 is as solid as they come.
| formerly_proven wrote:
| The PCIe versions of those cards are widely known to
| experience frequent microcode hangs (which can also hang
| your entire machine) depending on "some circumstances"
| (allegedly linked to 5 GHz 802.11n or something).
| brnt wrote:
| Instead of building gaming rigs, I keep my knowledge and
| skills current by reviving/reusing/repurposing older
| hardware. Some new RAM, a fast SSD, and the system is good for
| another 5 years or more. It keeps an old hobby alive and
| saves me quite a bit of money in the process.
| rwmj wrote:
| This has been going on since the second IBM PC (probably). On the
| original PC even RAM expansion was on an ISA card, and there were
| cards for serial, parallel, floppy disk, CGA, MDA and probably
| more. Those functions were gradually moved first to the
| motherboard and then into the processor until it became the SoC
| we have today.
| johnklos wrote:
| ...and instead of USB 5.0, we have USB 4, version 2.0, probably
| so they can snicker when we realize it's USB 420...
| Kerrick wrote:
| Ha! I even snarkily made fun of the USB naming structure in the
| article, but I didn't catch that the Promoter Group embedded a
| 420 joke.
| dec0dedab0de wrote:
| and here I am wanting a modern computer with regular PCI slots to
| use old hardware
| Kerrick wrote:
| You can at least get PCIe to PCI adapters.
| https://www.startech.com/en-us/cards-adapters/pex1pci1
| LegitShady wrote:
| I have a large graphics card that takes up 3ish slots of PCIe on
| its own even if it's only taking the 16 lanes it gets.
|
| There's room at the bottom for a wifi card with an external
| antenna that is connected by wires to the rp-sma connectors on
| the card.
|
| That's it. That's all the room I have on a full-size tower. I
| used to have a FireWire card for my audio interface but replaced
| it with a USB-C model to not worry about FireWire support in
| current year.
| Kerrick wrote:
| That's another thing that bothers me about motherboard
| manufacturers' choices around PCIe -- one that I didn't cover
| in the article. Why on earth are they choosing (even on EATX
| motherboards) to put the extra PCIe slots within 1-2 spaces of
| the primary GPU slot, when they are so incredibly likely to be
| covered up? Instead, that's a great place to put the M.2 slots.
| Move the few remaining PCIe slots down, so they can actually be
| used.
| ridgered4 wrote:
| m.2 drives can get hot; installing them under a hot video card
| cooler is a judgement call, especially if there is more space
| elsewhere. The truth is most people probably don't use their
| 1x slots.
|
| I found some ultra low profile pcie extensions (basically
| mining risers that use a USB3 cable) that allow me to
| relocate the cards elsewhere but still install them in slots
| that are under big GPU fans.
| Kerrick wrote:
| Those extensions sound interesting, can you share a link or
| model number?
| ridgered4 wrote:
| As mentioned, there is a wide variety of these available
| of different types. They were made popular and available
| enough by the mining boom that there are different styles
| (and different qualities).
|
| This was the one I used:
| https://www.amazon.com/dp/B07N38Y799
| justinlloyd wrote:
| There are M.2 extension cables; I have one that lets me
| put the M.2 into the drive bay away from the
| motherboard. There are also M.2 bifurcation cables. And you
| can also put the M.2 on PCIe cards, with or without a PLX
| switch.
| justinlloyd wrote:
| Unfortunately you cannot actually "move the slots down" due
| to how motherboards and CPUs work and still keep everything
| compliant and correctly timed. You can put in a PCIe
| extender, stiff (daughterboard) or flexible (ribbon or
| bracket) which works, but you are playing the game of "it
| might work, it might not" and juggling three variables of
| motherboard timings/quality, PCIe card timings/quality and
| extender quality. You don't know if it'll work until you try
| it. There's multiple reasons why those slots are where they
| are.
|
| I have two ASUS dual CPU workstation motherboards and run
| some of the devices with PCIe ribbon extensions. I cannot run
| my Highpoint U.2 RAID card on an extension but I can run a
| Highpoint USB 3 card just fine. I can run a 2070 GPU just
| fine, but the RTX A5000 GPUs are flaky. The dual Blackmagic
| quad 4k capture cards want to be in specific motherboard
| slots and work okay on one particular brand (Thermaltake) of
| extension cable.
|
| The problem is nuanced.
| Semaphor wrote:
| The one I use, MSI X-570-A PRO [0], has space for a 2-slot
| GPU (are they 3-slot nowadays? I don't know, I don't do AI or
| shooter games), and then three x1 and one x16 slot, which
| seems pretty okay.
|
| [0]: https://www.msi.com/Motherboard/X570-A-PRO/
| Kerrick wrote:
| > are they 3 slot nowadays?
|
| In fact, some even clock in at 4.3 slots.
| https://www.asus.com/Motherboards-Components/Graphics-
| Cards/...
| sgtnoodle wrote:
| That card looks like it takes up 2 slots externally, but
| then has really tall fans on it.
| sgtnoodle wrote:
| The 2 or 3 top end Nvidia and AMD GPUs have been 3-slot for
| at least a couple years. I bought a Dell 6800xt on eBay
| that was notable for only taking up 2 slots.
| Dagonfly wrote:
| Most consumers only have 1-2 NVMe drives and a GPU. So I
| presume OEMs don't want to have an M.2 slot buried under the
| GPU.
|
| On AM4 ATX boards, slot 0 is often the primary M.2 slot, slot
| 1 the GPU, and slot 2 is either empty, or the CMOS battery,
| or, like you suggested, a secondary M.2.
|
| So you're already down to a maximum of 5 PCIe slots.
| [deleted]
| dotnet00 wrote:
| They're doing it because it allows them to sell 'workstation'
| motherboards that do have decent slot spacing at ridiculous
| prices.
|
| I had to resort to a mining rig style setup with 16 lane
| extenders to be able to properly utilize all the slots after
| running into the same issue.
| PCMCIAnostalgia wrote:
| mxfh wrote:
| _I was bewildered at this because I couldn't imagine anybody who
| would find 5 NVME SSDs and two PCIe cards more useful than the
| inverse_
|
| Never tried running a 4x2TB striped NVMe PCIe 4.0 scratch disk?
| This stuff just lets you forget about what used to be a
| bottleneck. No need for RAM drives.
| causi wrote:
| I do miss expandability and features. Nothing quite matches the
| look of nerd envy I got when I popped a folding bluetooth mouse
| out of my laptop's ExpressCard slot.
| mxuribe wrote:
| > ...a folding bluetooth mouse out of my laptop's ExpressCard
| slot...
|
| Say what!?! How have I never heard of this until today!?!
| @causi Would you mind sharing a link to whatever cool mouse you
| had? Thanks!
| causi wrote:
| Sure. The original HP RJ316AA PCMCIA version was probably
| more comfortable, but I had the MoGo X54 ExpressCard model
| which was still pretty good. Both of them charged via the
| card slot. I wish laptops still came with an ExpressCard slot
| just for that mouse; I hated giving it up. Plenty of pictures
| and data on them if you Google the names.
| ZekeSulastin wrote:
| Here's the PCMCIA version Causi mentioned; I had one of these
| back in the day: https://the-
| gadgeteer.com/2006/07/27/newton_peripherals_mogo...
| rr888 wrote:
| With the lack of expansion cards, combined with M.2 drives,
| modern PCs look completely different from just 10 years ago.
| Just a big
| motherboard with a video card and a massive cooler. Those big
| cases look mostly empty. All the wiring is just for RGB now. :)
| Kerrick wrote:
| Even spinning-disk hard drive arrays (another good way to fill
| out a large case) don't get much love since SSDs have gotten
| cheaper and people rely more on cloud storage. My next rig
| will, at least, continue to have an ever-growing number of 18TB
| hard drives as a huge volume managed by Windows Storage Spaces.
| rr888 wrote:
| Right, my new PC has an m2 ssd and a NAS in the closet.
| adastra22 wrote:
| Rosewill still makes 24-bay 4U cases, and there is Backblaze.
| dylan604 wrote:
| Oh the days of buying PCIe expansion chassis to get the cards
| required to fit into your system. When 3 slots were not enough,
| boom, now you have 7 in just twice the square footage and twice
| the power needs.
| cbozeman wrote:
| This is interesting, but honestly a little pointless, and misses
| the real issue.
|
| I'll address it line-by-line.
|
| > Lack of USB ports
|
| My motherboard has 8 USB ports of varying capability on the rear
| I/O block. It has two internal headers to add _another_ 8 USB
| ports. That's 16 ports. It's not even a very expensive or
| feature-rich motherboard... it's an MSI X570 Gaming Edge Wifi...
| $209.99 on release date at MicroCenter in 2019.
|
| > Thunderbolt ports so you can attach monitors, USB devices, and
| even externally-mounted PCIe cards with a single daisy-chainable
| cable
|
| Who exactly is doing this, when most video cards have 2 or 3
| DisplayPort connectors on them...? External PCIe cards? For what
| purpose?
|
| > WiFi cards so you can get higher network speeds and lower
| latency without a cable
|
| Present on most motherboards, even budget options around $125.
| Nowadays, you can even switch the card out by unscrewing the
| shield, unscrewing the wire tension screws, and popping in a new
| M.2 WiFi card. Why suck up a PCIe slot when a small M.2 WiFi card
| slot fits?
|
| > Network cards so you can have additional and/or faster Ethernet
| ports
|
| Already built into every modern motherboard. Many have two.
| Higher-end boards have 10gig NICs. Some have two. Why would you
| need more than two NICs in your machine?* (See below)
|
| > TV tuners so you can receive, watch, and time-shift local over-
| the-air content
|
| This may be the _single_ place where the author has a point,
| since the only way to view OTA content via streaming is to
| pay for Hulu, DirecTV Stream, Fubo, or YouTubeTV, and then
| input your zip code.
|
| > Video capture cards so you can stream or record video feeds
| from game consoles or non-USB cameras
|
| There are plenty of excellent USB devices to accomplish this, you
| don't need a PCI Express-based solution.
|
| > SATA or SAS adapters so you can attach additional hard drives
| and SATA SSDs
|
| Almost every motherboard, certainly every ATX motherboard of the
| past 10 years or so I've seen, has at least 6 SATA ports as
| standard. Smaller ITX boards may only have 2 or 4, but if that's
| your design, you've already committed yourself to a lack of PCI
| Express slots.
|
| > M.2 adapters so you can attach additional NVME SSDs
|
| Most mid-range motherboards have at least two of these. High-end
| motherboards have up to 4. This still doesn't solve your primary
| problem that I'm going to address here in a moment.
|
| > Sound cards so you can run a home theater surround sound system
| from your PC without needing a receiver (RIP Windows Media
| Center)
|
| Realtek cornered the market here with onboard audio, and it is
| actually surprisingly good for their higher-end options. Yes, you
| could get an older HT|Omega card, or a SoundBlaster AE-5/7/9, but
| why?
|
| > Legacy adapters for older devices that use serial, parallel, or
| PCI (non-Express) to connect to the computer
|
| I'm not even sure what you'd attach that's that old... and more
| importantly, why...
|
| **** THE ACTUAL PROBLEM ****
|
| Every point the author of this blog raised is an issue, but
| most, if not nearly all of them, have been solved by motherboards
| integrating more and more features. This is the reason
| I don't get salty about buying a new $400 motherboard from ASUS,
| or MSI, or Gigabyte or whoever. I get a high-end sound card
| (usually a Realtek ALC4080, which supports 7.1 channel surround
| sound + optical S/PDIF), a 2.5 to 10 gig NIC, WiFi 6 or 6E, 8-12
| USB ports including Type-A & Type-C, multiple M.2 slots (some
| motherboards support up to 5 now).
|
| Turn the clock back 20 years or so. Hell, go back to 1997, when
| the first WiFi router was sold.
|
| You'd have to buy your motherboard. You'd have to buy your NIC.
| You'd have to buy your WiFi card. You'd have to buy your sound
| card. That's four components in one. But wait... most
| motherboards had TWO... at best FOUR... USB ports. So you need to
| buy at least one or two USB hubs to get up to the 8 to 12 ports
| modern motherboards have _on their rear I/O connectors alone_.
|
| No.
|
| The problem is _not_ "lack of PCI Express slots".
|
| The problem is lack of PCI Express lanes. How do you expect to
| drive all this awesome shit you wanna throw into your tower case?
| The MSI Z690 MEG UNIFY has two PCI Express 5.0 x16 slots and one
| PCI Express 5.0 x4 slot. And guess what... your single x4 slot is
| off-limits, because you're gonna buy a Samsung 990 Pro NVMe SSD
| (which is gonna use those 4 lanes), and will certainly saturate
| it with its 1,600,000 IOPS and 8-9000 MB/s transfer rate.
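|
| To make the lane budget concrete, a back-of-the-envelope
| sketch (the numbers are illustrative, not from any specific
| board):
|
|     # A mainstream desktop CPU exposes ~20 usable lanes, so
|     # this wish list can't all hang off the CPU at full width.
|     CPU_LANES = 20  # e.g. 16 for the GPU slot + 4 for one M.2
|
|     wanted = {"GPU": 16, "NVMe #1": 4, "NVMe #2": 4,
|               "10GbE NIC": 4, "capture card": 4}
|
|     used = sum(wanted.values())
|     print(f"requested {used} lanes, CPU provides {CPU_LANES}")
|     if used > CPU_LANES:
|         print("over budget: the rest shares the chipset uplink")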
|
| SLI and CrossFire are still supported technologies, even though,
| frankly, no one could _afford_ to use them in the past two
| years... but that's changing. The cratering of cryptocurrencies
| and the uncontrolled dive that GPU prices are currently
| experiencing mean dual video card solutions may be back on the
| table for gamers. Hell,
| I've seen used RTX 3080s sell for $440 on eBay by the time of
| auction expiry. Sub-$1000 for dual RTX 3080 still outperforms a
| single RTX 3090 Ti. But even those are dropping like a shit from
| heaven. I saw one for sale on eBay for $1059. $2120 for 100+ FPS
| at 4K, with ultra settings, and ray-tracing activated? Why not?
| Lots of people were buying single RTX 3090s for $2400 just a year
| ago.
|
| No, the problem is lack of PCI Express lanes available.
|
| We thought AMD had ridden to our rescue with Threadripper. 64 to
| 128 PCI Express lanes "ought to be enough for just about
| anybody". Enthusiasts bought Threadripper to easily run dual GPU
| setups and dual or quad NVMe setups.
|
| You didn't even have to "go nuts". You could pick up a
| Threadripper 2950X for its 16 cores for as little as $799 on
| release. It gave you 64 lanes. The 3960X supported 88 lanes...
| Hell, even with dual GPUs and quad NVMe drives, you still had
| room for another 40 lanes of equipment.
|
| No. The problem is not the lack of slots. The problem is the lack
| of lanes.
|
| We thought Lisa Su had ridden to our rescue against Intel and their
| stingy horseshit antics, only to find out she's not only just as
| bad, she's arguably worse... because after being the also-ran,
| far-in-the-distance, second-tier shitpick of a brand, AMD turned
| their backs on the gamers and enthusiasts that brought the
| company back from the brink. Threadripper is now priced so far
| out of the reach of the enthusiast as to be a non-starter.
|
| But that is your real problem. The lack of PCI Express _lanes_.
| Not slots.
| gsich wrote:
| I can see a use case for all of them. Just not all in the same
| case. Connecting a capture card and a legacy port? Maybe, but
| USB might suffice too.
| Kerrick wrote:
| Did you read the whole article? I covered the slots vs. lanes
| issue in the third paragraph, and debunked my own list of
| expansion card types in the second half of the article.
| justinlloyd wrote:
| >Almost every motherboard, certainly every ATX motherboard of
| the past 10 years or so I've seen, has standard at least 6 SATA
| ports.
|
| Except that the SATA ports on any motherboard are not equal.
| And you didn't even address the SAS point, or high-speed U.2
| drives for broadcast or data capture or...
|
| > There are plenty of excellent USB devices to accomplish this,
| you don't need a PCI Express-based solution.
|
| Evidently you don't work in broadcast or computer vision or
| machine vision inspection or...
|
| > Realtek cornered the market here with onboard audio, and
|
| Evidently you don't work in broadcast, or use a DAW, or desire
| higher grade audio, or special audio processors or low-latency
| MIDI...
|
| >> Legacy adapters for older devices that use serial, parallel,
| or PCI (non-Express) to connect to the computer
|
| > I'm not even sure what you'd attach that's that old... and
| more importantly, why...
|
| Evidently you don't work in robotics or machine vision
| inspection or industrial control or industrial logging or
| medical devices or...
|
| Before we go any further, USB is very consumer grade tech. We
| know this because it can be unplugged or work itself loose
| under vibration. There are locking connector options, but they
| are for the most part completely inadequate. USB is also prone
| to incredible cross-talk in noisy environments.
|
| I've touched all of these areas in my career, and they all
| require those connections you are so quick to dismiss out of
| hand. This is an incredibly parochial and naive mindset on
| display here.
| cbozeman wrote:
| Pity I can't downvote you. You deserve it.
|
| You didn't read the whole post, especially the most critical
| part.
|
| _Lack of PCI Express LANES is the problem, not slots_.
|
| All this great shit you're talking about is wonderful... as
| long as you have the PCI Express _lanes_ to support all those
| add-in cards.
| justinlloyd wrote:
| I did read your entire post. And the OP post.
|
| I have Intel Xeon Skylake processors, 48 lanes of PCIe 3.0,
| all of equal priority so they aren't tiered like in some of
| the earlier blue team processors or the red team
| processors. Across two CPUs that gives me 96 lanes. The
| slots are switched so either of the CPUs can use an
| individual PCIe card, or I can assign a PCIe card to a
| specific CPU. Alternatively if there are not enough PCIe
| slots I can run the PCIe cards through a separate switch.
| There are a lot of non-consumer grade motherboards out
| there that offer these features. It is rare that any
| deployment will require all 48 lanes on a CPU to be maxed
| out simultaneously, though I've been involved with use
| cases where that has happened (8x 4K uncompressed video
| stream captures directly to storage); what happens then is
| you run into DRAM bandwidth issues and other problems.
|
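| For scale, a rough estimate of what 8x uncompressed 4K capture
| implies (a Python sketch; frame rate and bit depth are assumed
| for illustration, since I didn't give them above):
|
|     # Rough bandwidth estimate for uncompressed 4K capture.
|     width, height = 3840, 2160
|     bits_per_pixel = 20   # assumed 10-bit 4:2:2
|     fps, streams = 60, 8  # assumed frame rate, stream count
|
|     per_stream = width * height * bits_per_pixel * fps / 8 / 1e9
|     total = per_stream * streams
|     print(f"{per_stream:.2f} GB/s/stream, {total:.2f} GB/s total")
|     # ~1.24 GB/s each, ~10 GB/s aggregate: enough to stress
|     # DRAM bandwidth, not just PCIe lanes.
|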
| In my workstation I have dual RTX A5000 GPUs, dual
| Blackmagic quad 4K capture cards, dual Highpoint USB 3.0
| expanders with four separate USB controllers per board, a
| Highpoint M.2 RAID controller with on-board PLX, along with
| the onboard M.2, six channel U.2 through a switch, onboard
| USB & SATA.
|
| What we should be asking for is better switching, not more
| lanes. Again, it is a rare use case where we can max out
| the bandwidth of all PCIe lanes of a CPU. We should also be
| asking for better switching for the direct DMA between
| cards, which is a sorely neglected area across all
| architectures/motherboards.
|
| P.S. Forgot to mention the NIC PCIe card.
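|
| Incidentally, you can check how each device's link actually
| negotiated from Linux sysfs (a minimal sketch; these are
| standard sysfs attributes, no extra tooling needed):
|
|     # Print the negotiated PCIe link width and speed for each
|     # device, read from standard Linux sysfs attributes.
|     from pathlib import Path
|
|     for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
|         try:
|             width = (dev / "current_link_width").read_text().strip()
|             speed = (dev / "current_link_speed").read_text().strip()
|         except OSError:
|             continue  # no PCIe link attributes on this device
|         print(dev.name, "x" + width, speed)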
| Underphil wrote:
| "I've touched all of these areas in my career, and they all
| require those connections you are so quick to dismiss out-of-
| hand."
|
| I hear you, and I'm usually first to remind people that their
| own use case is anecdotal and irrelevant.
|
| However, are you really suggesting that the use cases you
| mentioned make up a large enough percentage of the whole to
| warrant manufacturers catering to them?
| justinlloyd wrote:
| Yeah, we're talking multiple multi-billion dollar
| industries. Industries that, if the main players won't
| service them, will be served by niche players who do. You
| won't find a USB version of an SDI capture card worth a
| damn (they exist, they're just universally not good) or
| used in a professional broadcast environment, because USB
| connectors de facto don't lock and are temperamental. You
| will struggle to find a USB multi-HDMI capture card.
| Forget about putting
| multiple USB capture devices on the single shared USB
| connection integrated into your motherboard. A motherboard
| with 12+ ports usually has three distinct USB controllers,
| only one of which is worth a damn. There are PCIe machine
| vision capture cards that have onboard GPUs and dedicated
| co-processors so that the machine vision algorithms can run
| directly on the PCIe card and never involve the CPU nor
| have to move the captured video across the bus to main
| (usually far slower) DRAM. USB has incredibly high latency,
| and more importantly, non-deterministic latency, which is
| why USB MIDI on the desktop is fine for casual use, and
| lousy in an event setting or a professional recording
| studio.
| stinos wrote:
| _USB has incredibly high latency, and more importantly,
| non-deterministic latency_
|
| Seeing that RME's USB 2 interfaces manage to stream 50 or
| more 24-bit audio channels at 48kHz with buffer sizes
| small enough to get latencies in the millisecond range, I
| always wonder: are other manufacturers just doing it
| wrong? I know that doesn't completely cover the non-
| determinism argument, but 'incredibly high' seems to be
| covered pretty well.
| justinlloyd wrote:
| The RME-Audio devices do indeed have low latency, at the
| limits of the USB 2.0 spec, 125 microseconds I believe.
| They crank up that USB poll rate. They're also using the
| Arasan chipset IIRC, the same one found in some of the
| other prosumer and pro equipment, e.g. the Solid State
| Logic hardware. I am hazy on the details; it has
| been a few years since I was inside any of those devices,
| people from RME and SSL please feel free to correct me as
| to your chipsets. Some are using dedicated FPGAs to
| handle the data capture and processing before handing off
| to USB. RME's devices are definitely doing a bunch of on-
| board processing before giving it to the USB bus, and
| making sure the packets going out are as small as can be.
| Most non-integrated USB controllers are using VIA or
| Renesas. USB 2.0 has lower latency but less consistency
| (shared bus), whereas USB 3.0 has higher latency but is
| more consistent (point-to-point protocol). Obviously you
| don't want to go sharing your USB 2.0 port on your PC
| with an RME and a bunch of USB 3.0 devices, e.g. an
| external drive, because then you just end up with the
| worst of both specs, terrible latency and terrible
| consistency.
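|
| The arithmetic behind those millisecond figures is simple
| (a sketch; the buffer size is an assumed driver setting,
| the 125 microseconds is the USB 2.0 high-speed microframe):
|
|     # Latency contributions for USB 2.0 audio, in ms.
|     sample_rate = 48_000   # Hz
|     buffer_samples = 64    # assumed ASIO-style buffer setting
|     microframe_ms = 0.125  # USB 2.0 high-speed poll interval
|
|     buffer_ms = buffer_samples / sample_rate * 1000
|     print(f"buffer {buffer_ms:.2f} ms + poll {microframe_ms} ms")
|     # 64 samples @ 48 kHz ~= 1.33 ms per direction; the USB
|     # service interval adds only ~0.125 ms on top of that.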
| nijave wrote:
| > Who exactly is doing this, when most video cards have 2 or 3
| DisplayPort connectors on them...? External PCIe cards? For
| what purpose?
|
| This is used extensively in laptop docks. If my desktop
| motherboard supported it, I'd hook all my peripherals up this
| way. Instead, I have a KVM with the desktop on one side and a
| laptop dock (for work and personal laptops) on the 2nd KVM
| input. A single Thunderbolt cable goes to the laptop from a
| CalDigit dock.
|
| With Thunderbolt, I could theoretically even share my GPU
| between machines if I had an enclosure (and the desktop had
| the right ports).
| tenebrisalietum wrote:
| > Why would you need more than two NICs in your machine?* (See
| below)
|
| I've always wanted to build an 8-port or 16-port software
| Linux-based switch. Just for fun.
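|
| The software half is nearly trivial on Linux: enslave each NIC
| port to a kernel bridge (a sketch; the interface names are
| assumptions, and it needs root plus the iproute2 tools):
|
|     # Turn multi-port NICs into a dumb L2 switch via a bridge.
|     import subprocess
|
|     ports = ["enp3s0f0", "enp3s0f1",
|              "enp3s0f2", "enp3s0f3"]  # assumed interface names
|
|     def ip(*args):
|         subprocess.run(["ip", *args], check=True)
|
|     ip("link", "add", "name", "br0", "type", "bridge")
|     for port in ports:
|         ip("link", "set", port, "master", "br0")
|         ip("link", "set", port, "up")
|     ip("link", "set", "br0", "up")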
| guardiangod wrote:
| Yup, I ran out of PCIe slots in my last computer:
|
| 1x Bluetooth+WiFi adapter (to offload the USB bus)
|
| 1x Highpoint USB 3 controller (the only USB3 controller that is
| reliable for work)
|
| 1x Quad network ports adapter
|
| 1x Video card
|
| Things I wanted to install but couldn't:
|
| 1x M.2 nvme adapter card
|
| 1x Video card
| philjohn wrote:
| The argument there is that you should probably have looked at a
| workstation CPU and board, e.g. Threadripper.
| Kerrick wrote:
| Threadripper has worse single-core performance than Ryzen,
| lacks 3D V-Cache, only came in a Pro option last generation
| (which was vendor-locked to Lenovo for months), and has been
| removed from next generation's roadmap.
|
| As I mentioned in the "Slots vs. Lanes" section in the
| article, most home users don't actually need more lanes --
| just more slots.
| magicalhippo wrote:
| > most home users don't actually need more lanes -- just
| more slots
|
| Indeed. I really don't understand why they can't have x16
| slots instead of those useless x1 slots.
|
| My current motherboard has two PCIe x1 slots, running
| either 3.0 or 4.0. Plenty of bandwidth for lots of stuff.
| But they're useless because nearly every expansion card has an x4
| or larger connector. Previous motherboards have been full
| of useless x1 connectors as well.
|
| I know there are exceptions, but x1 is certainly the norm.
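|
| The bandwidth claim is easy to put numbers on (a sketch; the
| per-lane figures are the usual approximate effective rates
| after encoding overhead):
|
|     # Approximate usable GB/s per PCIe lane, by generation
|     # (8b/10b encoding for Gen1-2, 128b/130b for Gen3+).
|     per_lane = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}
|
|     for gen in (3, 4):
|         print(f"Gen{gen}: x1 = {per_lane[gen]:.2f} GB/s, "
|               f"x4 = {per_lane[gen] * 4:.2f} GB/s")
|     # Even a Gen4 x1 link moves ~2 GB/s -- plenty for 10 GbE or
|     # a capture card, if only the card physically fit the slot.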
| toast0 wrote:
| Open-back x1 connectors do exist; then you can put
| whatever you like in them. It would be nice if motherboard
| makers used those.
| duffyjp wrote:
| Consider yourself lucky. During the GPU apocalypse I managed to
| win a Newegg Shuffle for a GPU that came bundled with this
| motherboard:
| https://www.newegg.com/gigabyte-b550m-ds3h/p/N82E16813145210
|
| Fast-forward a bit and I used it to build my kid a new machine,
| except my old RTX 2060 is a "2.5 slot" card, which means it
| blocks the other slots and is literally the one and only PCIe
| card I can install. I had to get him a USB wifi adapter...
| mxuribe wrote:
| > ...During the GPU apocalypse...
|
| @duffyjp : Maybe I'm misreading your comment here... but are
| we out of the GPU apocalypse yet? Genuinely curious, because
| I've been holding out on getting a small desktop PC for homelab
| use. (Yes, yes, I know for server homelab stuff I don't
| really need a GPU, but GPU pricing, I think, tends to portend
| overall computing costs nowadays.)
| easrng wrote:
| Expect GPU prices to go down in a little over a week from
| now
| mmastrac wrote:
| Do you have enough lanes overall? I wonder if a splitter might
| work for your needs.
| Nextgrid wrote:
| USB has a non-trivial CPU overhead compared to PCIe.
| nr2x wrote:
| For audio you often get a ton of electrical noise on a PCIe card
| that isn't present with an external device. Even the few PCIe
| audio interfaces still around use breakout boxes.
___________________________________________________________________
(page generated 2022-09-06 23:00 UTC)