Post Atuz5D1c8XzVcv9CXA by T_X@chaos.social
(DIR) Post #Atti2lgAwb9hrDPTrE by futurebird@sauropods.win
2025-05-08T22:10:54Z
0 likes, 0 repeats
When I see server farms they often feature network cables, so many network cables. But... if you were building a massive computing center would you need all of those cables if you had the most high-tech equipment? That is ... are the cable bundles something we'll outgrow? I guess I need to look at some tours of the most massive data centers?
(DIR) Post #Atti9djmrzANFmNNdA by nazokiyoubinbou@urusai.social
2025-05-08T22:12:07Z
0 likes, 0 repeats
@futurebird I think due to how they work, there is only so much that can really be done to reduce the cables. Of course, if you were satisfied with reduced bandwidth you could use fewer cables (and, after all, we're into multi-gigabit speeds now), but with more possible speed just comes more demand...
(DIR) Post #AttiAn3lhAKuSJGq8G by Lyle@cville.online
2025-05-08T22:12:09Z
0 likes, 0 repeats
@futurebird This is something NVIDIA has been pushing hard on, to eliminate some of that wasted copper
(DIR) Post #AttiBcOQq6AVfAKYvA by ajn142@infosec.exchange
2025-05-08T22:12:20Z
0 likes, 0 repeats
@futurebird not a data center expert, just a tech guy, but no, I don’t think so
(DIR) Post #AttiRduKyYiGzftZ0y by isotope239@mastodon.online
2025-05-08T22:15:21Z
0 likes, 0 repeats
@futurebird There's actually a thing called "cable porn", where engineers try to produce the neatest, tidiest network cabling. Here's a reddit with some examples: https://www.reddit.com/r/cableporn/ For anyone who isn't a network engineer, it's hard to explain just how satisfying it can be to finish a cabling job that's super tidy and efficiently wired.
(DIR) Post #AttigB8KEyGE5yMxPM by futurebird@sauropods.win
2025-05-08T22:18:01Z
0 likes, 1 repeats
To write SF you gotta just be full of hubris. Yeah yeah, I can totally learn enough about networking to describe a data center of the future. But, it turns out, I only have a hazy notion of why contemporary ones are filled with all those cable bundles. It's clear to me those need to go if you want a self-repairing data center that can last for 20k years or more. Even if you seal the place up ... the sagging leads to problems over time. It needs to be one solid-state machine.
(DIR) Post #AttirZ2NhozzPz1BA0 by undead@masto.hackers.town
2025-05-08T22:20:01Z
0 likes, 0 repeats
@futurebird Yes, all the cables. Cable bundles may be reduced over time, but that is usually a function of adding aggregation equipment to reduce the bundle thickness. Also, changing out the kind of cabling used to something more narrow (like fiber). I've been networking in data centers for... 25ish years. Mostly, reducing things now just means optimizing your interfaces, and replacing equipment to increase bandwidth (cutting down on cables). 1/n
(DIR) Post #AttivNPEtPVgVDWlYO by nazokiyoubinbou@urusai.social
2025-05-08T22:20:42Z
0 likes, 0 repeats
@futurebird Do you mean all the servers themselves should be one machine, or just that the switches should be even bigger? Because at this point they already fill shelves with as much as they can squeeze in there. There are "cloud" setups that do allocate resources dynamically. These can utilize a single piece of hardware across many virtual servers, which is a very viable thing to some extent. But of course the number of actual available cores, the amount of RAM, etc. that you can squeeze in there still has practical limits. And anything with more demand is going to reduce total available allocation.
(DIR) Post #AttixHweAULY90470a by dougfort@mastodon.social
2025-05-08T22:21:01Z
0 likes, 0 repeats
@futurebird It needs genetically engineered ants to lay pheromone trails for cables.
(DIR) Post #AttjGQW6WRPSEPmAvw by dan@discuss.systems
2025-05-08T22:24:31Z
0 likes, 0 repeats
@futurebird Probably we'll need even more! AI workloads in particular need even more network bandwidth plumbed into relatively small spaces compared to other data center workloads.
(DIR) Post #AttjOcTrvNSxeadYye by dan@discuss.systems
2025-05-08T22:26:01Z
0 likes, 0 repeats
@futurebird happy to answer any questions - my research these days is largely on datacenter networking and related topics!
(DIR) Post #AttjZ9ImkbbMLJTQrQ by woody@pleroma.pch.net
2025-05-08T22:26:18.436652Z
0 likes, 1 repeats
@futurebird The cable bundles are going into the top-of-rack switches, and from the top-of-rack switches into the end-of-row switches. And they look like something, so people take photos of them. But each individual server rarely has more than three (LoM and lagged data) cables going into it. So they don't look as interesting, and people don't take as many photos of them. So, observation bias.
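A quick back-of-the-envelope sketch of that point, in Python with assumed rack and row sizes (illustrative numbers, not figures from the thread): each server only needs a few cables, but they converge into ever-thicker bundles at the switches.

# Illustrative only: all counts below are assumptions, not real data center specs.
SERVERS_PER_RACK = 40      # assumed 1U "pizza box" servers per rack
CABLES_PER_SERVER = 3      # per the post above: LoM plus lagged data links
RACKS_PER_ROW = 20         # assumed racks feeding one end-of-row switch
UPLINKS_PER_TOR = 4        # assumed uplinks from each top-of-rack switch

# Inside one rack: short patch cables from servers up to the top-of-rack switch.
cables_in_rack = SERVERS_PER_RACK * CABLES_PER_SERVER

# Leaving the rack: only the top-of-rack uplinks head toward the end-of-row switch.
cables_leaving_rack = UPLINKS_PER_TOR

# At the end-of-row switch, every rack's uplinks converge into one large bundle.
bundle_at_end_of_row = RACKS_PER_ROW * UPLINKS_PER_TOR

print(f"cables inside each rack:       {cables_in_rack}")
print(f"cables leaving each rack:      {cables_leaving_rack}")
print(f"bundle arriving at end-of-row: {bundle_at_end_of_row}")

With these made-up numbers the photogenic bundles sit at the rack tops and the row aggregation point, not at the individual servers, which matches the observation-bias point above.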
(DIR) Post #AttjZJWGlKKU21rLGa by woody@pleroma.pch.net
2025-05-08T22:27:38.722821Z
0 likes, 1 repeats
@futurebird But, no, not going to outgrow cable bundles. That three is a magic number. What you need for out-of-band management and redundant data. And those all have to be aggregated together somewhere. And us rack-and-stackers are gonna bundle. It's in our blood.
(DIR) Post #AttjyEprJZ0kkBuIng by dan@discuss.systems
2025-05-08T22:32:27Z
0 likes, 1 repeats
@futurebird There are a bunch of research efforts on self-maintaining data centers. Some of my colleagues in the UK are working on robots to automate maintenance tasks, currently focused on manipulating or replacing network cables and optical transceivers, which is one of the main things people have to go into the data center to repair. https://www.microsoft.com/en-us/research/project/craft/ There was also this project about deploying an underwater data center which obviously required it to be maintenance-free: https://natick.research.microsoft.com/
(DIR) Post #AttjzE2jkIYcZnOwqG by michael_w_busch@mastodon.online
2025-05-08T22:32:32Z
0 likes, 0 repeats
@futurebird Cables make it easy to move things around and change configurations. At one point, the Arecibo Observatory considered swapping out several racks of cabled backend receivers and data processing with a single hardwired rack, to enable more remote observing. It would have worked for most, but not all, of the things the cabled racks did. And if it had broken, swapping in replacements would have been a big pain.
(DIR) Post #AttkTi84S1pSZeQtYO by paulc@mstdn.social
2025-05-08T22:38:08Z
0 likes, 0 repeats
@futurebird One time I had a server room with beautiful cable setups. Everything was clean and neat looking. But tracking the matching ends of cables was a nightmare and it eventually became a mess. The solution was to move and have fewer servers. My patch panel looks good, but I rarely have to change anything these days.
(DIR) Post #AttksnECd9ghcyGFV2 by lopta@mastodon.social
2025-05-08T22:42:40Z
0 likes, 0 repeats
@futurebird Have you looked at @oxidecomputer at all?
(DIR) Post #AttlAYIWsaLMXDkkbI by leon_p_smith@ioc.exchange
2025-05-08T22:45:53Z
0 likes, 1 repeats
@futurebird Yes, most large datacenters have, I'm sure, many metric shittons of ethernet cables in them. Usually a lot of work goes into cable management, otherwise things would be totally out of control. There are companies that have even designed large compute clusters in clever ways to minimize cable length, which minimizes latency and saves money and effort in wiring. There's often a fair bit of fiber optic cable too, but there are tradeoffs in terms of cost. Also, it takes an incredibly beefy server to be able to make much use of the very high bandwidths that fiber optics provide, so sometimes application servers are copper to switches, which aggregate multiple servers together into one very high bandwidth fiber optic backbone link.
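A toy calculation (Python, with assumed port counts and link speeds, not numbers from this post) of that copper-to-fiber aggregation and the oversubscription trade-off it implies:

# Illustrative only: port counts and speeds below are assumptions.
SERVER_PORTS = 48          # assumed copper ports on one access switch
COPPER_GBPS = 10           # assumed per-server copper link speed, in Gb/s
FIBER_UPLINK_GBPS = 100    # assumed fiber backbone uplink speed, in Gb/s

# Worst case: every attached server transmits at full line rate at once.
total_server_bandwidth = SERVER_PORTS * COPPER_GBPS

# How heavily the shared fiber uplink is oversubscribed relative to that worst case.
oversubscription = total_server_bandwidth / FIBER_UPLINK_GBPS

print(f"aggregate server-facing bandwidth: {total_server_bandwidth} Gb/s")
print(f"fiber uplink bandwidth:            {FIBER_UPLINK_GBPS} Gb/s")
print(f"oversubscription ratio:            {oversubscription:.1f}:1")

The design bet is that servers rarely saturate their links simultaneously, so one fast fiber uplink can carry the realistic aggregate load at far lower cabling cost.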
(DIR) Post #AttlqscRiRPTirFhwm by lopta@mastodon.social
2025-05-08T22:53:33Z
0 likes, 0 repeats
@futurebird We're already outgrowing some of the racks and bundles of cables thanks to virtualisation and faster network links.
(DIR) Post #AttmAu3e4nXzyaWS8W by futurebird@sauropods.win
2025-05-08T22:57:11Z
0 likes, 0 repeats
This video pretty much shows what I expected, although hearing that there are millions of machines on those racks was kind of vertigo-inducing. This is what "a data center" looks like today. And people keep all those cables tidy, and replace the servers when they break. https://www.youtube.com/watch?v=80aK2_iwMOs
(DIR) Post #AttmiYqLSRy21hLnea by dan@discuss.systems
2025-05-08T23:03:11Z
0 likes, 1 repeats
@futurebird We recently brought in these same cable-tidiers to manage my (much, much smaller) research lab, and they are much, much better at running cables neatly than I am. It is very obvious which racks they wired and which ones I did.(This is probably why they look nervous whenever I walk into the server room.)
(DIR) Post #AttoMehLD1wLipozRY by Rycaut@mastodon.social
2025-05-08T23:21:41Z
0 likes, 0 repeats
@futurebird I remember touring a client’s data center over 25 years ago. Smaller in scale than these (though massive by the standards of the 1990s), but the scary part was when I saw our mainframe. The client was one of the largest banks in the world. That mainframe cleared the bank’s currency trading. We merged with another bank; over $1.5T (yes, Trillion), or about 20% of the then global currency markets, cleared over that machine in a few days. Needless to say I didn’t touch anything
(DIR) Post #AttsNga65oK7fXBDlI by jaymcor@mastodon.acm.org
2025-05-09T00:06:40Z
0 likes, 0 repeats
@futurebird Inside the room, not so different than many decades ago. Outside, as they pan out and show the sheer multitudes of buildings... Jeez, now that's a lot of servers... "Each one of these generators creates enough electricity to supply 3000 houses." (oof, my stomach... the global warming implications in this industry are hard not to think about).
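For a rough sense of scale, here is that quoted figure converted with an assumed average household draw (a ballpark guess, not a number from the video):

# Illustrative only: the per-house figure is an assumed ballpark average.
HOUSES_PER_GENERATOR = 3000   # figure quoted in the video
AVG_HOUSEHOLD_KW = 1.2        # assumed average continuous draw per house, in kW

generator_mw = HOUSES_PER_GENERATOR * AVG_HOUSEHOLD_KW / 1000
print(f"one backup generator is on the order of {generator_mw:.1f} MW")

So a campus with dozens of such generators is provisioned for tens of megawatts, which is why the energy and warming implications loom so large.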
(DIR) Post #Attx2a9rHKKdRAfBfk by notsoloud@expressional.social
2025-05-09T00:58:54Z
0 likes, 0 repeats
@futurebird For SciFi, look into cooling. That's a hard physical limit, no matter how the computing is done. It's becoming the limiting factor already.
(DIR) Post #Atu69NXLV8bluLxtMO by lopta@mastodon.social
2025-05-08T22:57:48Z
0 likes, 0 repeats
@spacehobo @futurebird @oxidecomputer What I've seen of their work is encouraging but I haven't got to test it myself. Looks like they put a lot of work into power, networking and firmware in an attempt to weed out a lot of the cruft that you get with racks of traditional servers.
(DIR) Post #Atu69O2tbpqTUCj5XM by JamesWidman@mastodon.social
2025-05-08T23:41:58Z
0 likes, 1 repeats
@lopta @spacehobo @futurebird @oxidecomputer to me the most impressive part is where they completely replaced the BIOS with their own firmware, written from scratch in Rust, that is co-designed with the kernel. But also, close to OP's point, they designed the rack so that the owner never does any cable management within the rack. E.g. when you need to add another motherboard, you just slide it into an empty slot, and the back of it auto-connects to DC power & network cables.
(DIR) Post #Atu69TAWYtuzNJKkca by JamesWidman@mastodon.social
2025-05-08T23:43:41Z
0 likes, 0 repeats
@lopta @spacehobo @futurebird @oxidecomputer And i think i read that e.g. AWS & google designed similar hardware for their rack farms...? And i think that's what oxide is talking about when they mention "cloud architecture": i.e. hardware that isn't like a standard PC where each motherboard comes with its own power supply and a general-purpose BIOS (which comes with like 3 decades worth of support for peripherals that no one uses anymore, sets up alarming ways of interrupting the kernel, etc).
(DIR) Post #AtuDDR9ggngFy3mVEm by bucknam@mastodon.social
2025-05-09T04:00:07Z
0 likes, 0 repeats
@futurebird Maybe it’s a machine made out of some sort of crystalline structure that is self-healing?
(DIR) Post #AtuIURZYtGpjxJoRIu by rf@mas.to
2025-05-09T04:59:13Z
0 likes, 0 repeats
@futurebird Datacenter outages today often have to do w/ power, plus power means cooling, and you can't trust fans forever. If I want something to last super long, power-sippy becomes a top design priority. Just vibes, but fiber networking feels like something we might still have in 50 or 75 years, like we've stumbled on the 'right' approach to transmission. Sending silly bandwidth over silly distances through cheap optics smaller than your finger.
(DIR) Post #AtuV0RugMH2JjFieqO by lufthans@mastodon.social
2025-05-09T07:19:28Z
0 likes, 0 repeats
@futurebird ants that use excess heat and water coolant under the data center could learn to string spider silk to keep the network flowing, so you just need a way to use spider silk as the physical layer. Could spiders spin fiber optic cable?
(DIR) Post #Atuz5D1c8XzVcv9CXA by T_X@chaos.social
2025-05-09T12:56:28Z
0 likes, 0 repeats
@futurebird good question. The size of a rack is pretty standardized, as are the network connectors. But not where they are. I guess that could be standardized, too, so that in the end you'd just have to slot in such a pizza-box-sized server and some SFP(-like) connectors in the back would mate in the same go. The downside would be that you'd have to upgrade the whole rack if you wanted to use newer, faster network interfaces? Right now one is really flexible with SFP connectors and cables.
(DIR) Post #AtvXNW0lgJzIALeVRg by cshlan@dawdling.net
2025-05-09T19:20:24Z
0 likes, 0 repeats
@futurebird What if the support structure for the servers was also the communication channel between them and whatever they need to connect to? Being fictional it can always be hand waved a bit.
(DIR) Post #AtvdtodmYMyQw2gbI0 by log@mastodon.sdf.org
2025-05-09T20:33:38Z
0 likes, 0 repeats
@futurebird We definitely outgrow copper cable. Singlemode optical fiber bundles replace them at larger scales. On-board traces and in-chip conductor channels replace them at smaller scales. At even smaller scales, integrating compute circuits with nonvolatile memory storage eliminates some cache blocks and memory buses. Layered chips might get rid of some board traces. Cables are already better than they used to be.
(DIR) Post #Atvf8AS6M4USahd7KK by barrygoldman1@sauropods.win
2025-05-09T20:47:39Z
0 likes, 0 repeats
@futurebird not one solid machine to survive that long. delocalized, the cables are alive and constantly growing, repairing, re-attaching, especially as new data storage units grow...
(DIR) Post #AtwCnV9Jbcaj8r9yqm by bhawthorne@infosec.exchange
2025-05-10T03:04:53Z
0 likes, 0 repeats
@futurebird I am trying to remember which SF author extrapolated from fiber optic interconnects to directional line-of-sight laser communication between nodes in a closely linked swarm of computing devices. Where nodes kept track of each other’s relative physical location in space so they knew which direction to aim one of their comm lasers. Also it’s possible this is just something I dreamed of reading, in which case feel free to run with it. If you put the nodes in close orbits, you can optimize inter-node communications by moving nodes around in real time, so that when a set of nodes need to exchange data frequently, they can move closer to one another and reduce communications lag. Swarm behavior is all active research being done right now. As always, of course, if you want this to last a long time, you either need a source of reaction mass for those positioning thrusters, or you have to bite the bullet and go for a magical reactionless propulsion system. Of course, if you already have that in-world, this is a great application for it!