[HN Gopher] Petabit-class transmission over > 1000 km using stan...
       ___________________________________________________________________
        
       Petabit-class transmission over > 1000 km using standard 19-core
       optical fiber
        
       Author : the_arun
       Score  : 69 points
       Date   : 2025-07-13 00:06 UTC (2 days ago)
        
 (HTM) web link (www.nict.go.jp)
 (TXT) w3m dump (www.nict.go.jp)
        
       | eqvinox wrote:
       | Contrary to the "highlights" section (which seems to be the only
       | place calling it a "standard" 19-core optical fiber), this is not
        | in fact a 'standard' fiber; rather, "standard" seems to refer
        | to the standard (125um) cladding diameter ("Sumitomo Electric
        | was responsible for the design and manufacture of a coupled
        | 19-core optical fiber with a standard cladding diameter (see
        | Figure 1)"). Looks like the "diameter" simply got lost in the
        | highlights section.
       | 
       | (Nonetheless impressive, and multi-core fiber seems to be
       | maturing as technology.)
        
       | bcrl wrote:
       | Interesting work, but 19 cores is very much not standard.
       | Multiples of 12 cores are the gold standard in the
       | telecommunications industry. Ribbon fibre is typically 12,
       | sometimes 24 fibres per ribbon, and high count cables these days
       | are 864 cores or more using a more flexible ribbon structure that
       | improves density while still using standard tooling.
        
         | eqvinox wrote:
         | You're confusing multi-core in a single cladding with multiple
          | strands of cladding. This is 19 cores in a single 125um
          | cladding (which is quite impressive manufacturing from
          | Sumitomo).
        
           | bcrl wrote:
           | I wasn't confusing anything. To interoperate with industry
           | standard fibre optic cables it should have a multiple of 12
           | or 24 cores, not the complete oddball number of 19. Yes it's
           | cool that it's that small, but that is not the limiting
           | factor in the deployment of long haul fibre optic
           | telecommunications networks.
           | 
           | Sumitomo sells a lot of fusion splicers at very high margins.
           | It is in their best interest to introduce new types of fibre
            | that require customers to buy new and more expensive fusion
           | splicers. Any fibre built in this way will need rotational
           | alignment that the existing fusion splicers used in telecom
           | do not do (they only align the cores horizontally, vertically
           | and by the gap between the ends). _Maybe_ they can build
           | ribbon fibres that have the required alignment provided by
           | the structure of the ribbon, but I think that is unlikely.
           | 
           | Given that it does not interoperate with any existing cables
           | or splicers, the only place this kind of cable is likely to
           | see deployment in the near term is in undersea cables where
           | the cost of the glass is completely insignificant compared to
           | everything that goes around it and the increased capacity is
           | useful. Terrestrial telecom networks just aren't under the
           | kind of pressure needed to justify the incompatibility with
           | existing fibre optic cables. Data centers are another
           | possibility when they can figure out how to produce the
           | optics at a reasonable cost.
        
       | ksec wrote:
        | The actual figure is 1,808 km. For reference, the US is about
        | 2,800 miles (4,500 km) wide from east to west and 1,650 miles
        | (2,660 km) from north to south.
        
         | exabrial wrote:
          | For us Americans, that's about 295,680 toilet paper rolls or
         | 2,956 KDC (kilo donkey kicks).
        
           | chasd00 wrote:
           | Or about 3 MAG (mega Ariana Grandes).
           | https://x.com/GatorsDaily/status/1504570772873904130
        
       | aDfbrtVt wrote:
       | As others have mentioned, this is mostly a proof of concept for a
       | high core count weakly-coupled fibre from Sumitomo. I also want
       | to highlight the use of a 19 channels MIMO receiver structure
       | which is completely impractical. The linked article also fails to
       | mention a figure for MIMO gain.
        
         | eqvinox wrote:
         | Worse, it's _offline_ MIMO processing! ;D
         | 
         | I would guesstimate that if you try to run it live, the
         | receiver [or rather its DSPs] would consume >100W of power,
         | maybe even >1000W. (These things evolve & improve though.)
         | 
         | (Also, a kilowatt for the receiver is entirely acceptable for a
         | submarine cable.)
        
           | aDfbrtVt wrote:
           | To get a ballpark power usage, we can look at comparable (for
           | some definition thereof) commercial offerings. Take a public
           | datasheet from Arista[1], they quote 16W typical for a
           | 400Gbps module for 120km of reach. You would need 2500 modems
           | at 16W (38kW) jointly decoding (i.e. very close together) to
           | process this data rate. GPU compute has really pushed the
           | boundaries on thermal management, but this would be far more
           | thermally dense.
           | 
           | [1] https://www.arista.com/assets/data/pdf/Datasheets/400ZR_D
           | CI_...
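            | 
            | (Back-of-envelope, a rough sketch of that ballpark; the
            | per-modem figures are from the datasheet above, the ~1 Pb/s
            | aggregate is my assumption:)
            | 
            |     # rough ballpark only, not figures from the paper
            |     total_rate_gbps = 1_000_000          # ~1 Pb/s aggregate
            |     per_modem_gbps = 400                 # 400ZR-class module
            |     per_modem_watts = 16                 # typical, per datasheet
            |     modems = total_rate_gbps // per_modem_gbps     # 2500
            |     power_kw = modems * per_modem_watts / 1000     # ~40 kW
            |     print(modems, power_kw)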
        
             | eqvinox wrote:
             | I think the scaling parameters are a bit different here
              | since the primary concern is the DSP power for processing
              | _and MIMO-correlating_ 19 signals simultaneously. But the
              | 16W figure for a 120km 400Gbps module includes a high-
              | powered[1] transmitter amplifier & laser, as well as
              | receive amplifiers on top of the DSP. My estimate is
              | based on O(n^2) scaling for 19x19 MIMO (=361) and then
              | assuming 2-3W of DSP power per unit factor.
             | 
             | [but now that I think about it... I think my estimate is
             | indeed too low; I was assuming commonplace transceivers for
             | the unit factor, i.e. <=1Tb; but a petabit on 19 cores is
             | still 53Tb per core...]
             | 
              | [1] note the setup in this paper has separate amplifiers in
             | 86.1km steps, so the transmitter doesn't need to be
             | particularly high powered.
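              | 
              | (A minimal sketch of that guesstimate; the 2-3W per unit
              | factor is purely an assumption, as stated above:)
              | 
              |     n = 19
              |     unit_factors = n ** 2          # 19x19 MIMO: 361 terms
              |     lo_w, hi_w = 2, 3              # assumed W per unit factor
              |     watts = (unit_factors * lo_w, unit_factors * hi_w)
              |     print(watts)                   # (722, 1083), ~0.7-1.1 kW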
        
             | cycomanic wrote:
             | It's important to note that wavelength channels are not
             | coupled, so modems with different wavelengths don't need to
             | be terribly close together (in fact one could theoretically
             | do wavelength switching so they could be 100s of km apart).
             | So the scaling we need to consider is the scaling of the
              | MIMO, which in current modems is 2x2. The difficulty is
              | not necessarily just power consumption (the power envelope
              | of long-haul modems is also higher than the DCI modem you
              | link, up to 70W IIRC), but also resourcing on the ASIC: the
              | MIMO part (which needs to be highly parallel) will take up
              | significant floorspace, and you need to balance the delays.
             | 
             | The 38kW is not a very high number btw, the switches at the
             | end points of submarine links are quite a bit more power
             | hungry already.
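              | 
              | (A toy comparison of that per-wavelength equalizer
              | resourcing; the tap count is an illustrative assumption,
              | not a figure from the paper:)
              | 
              |     taps = 64                     # assumed taps per FIR filter
              |     macs_2x2 = 2 ** 2 * taps      # today's 2x2: 4 filters
              |     macs_19x19 = 19 ** 2 * taps   # 19x19: 361 filters
              |     print(macs_19x19 / macs_2x2)  # ~90x more per lambda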
        
               | aDfbrtVt wrote:
                | Depending on the phase-matching criteria of the lambdas
                | on a given core, I would mostly agree that the various
                | wavelengths are not significantly coupled. I also agree
                | there is a different power budget for LH modems vs. DCI,
                | but power on LH modems is not something that often gets
                | publicly disclosed. I am not too concerned with the
                | overall power,
               | more the power density (and component density) that 19
               | channel MIMO would require.
               | 
               | The main point I was trying to make is the impracticality
               | of MIMO SDM. The topic has been discussed to death (see
               | the endless papers from Nokia) and has yet to be deployed
               | because the spatial gain is never worth the real world
               | implementation issues.
        
             | quickthrowman wrote:
             | 38kW ~= 50 HP ~= 45A at 480V three-phase, which is a
             | relatively light load handled by 3#6 AWG conductors and a
             | #10 equipment ground.
             | 
             | I mean, it's a shitload more power than a simple media
             | converter that takes in fiber and outputs to a RJ-45 but
             | not all that much compared to other commercial electrical
             | loads. This Eaton/Tripplite unit draws ~40W at 120V -
             | https://tripplite.eaton.com/gigabit-multimode-fiber-to-
             | ether...
             | 
             | A smallish commercial heat pump/CRAC unit (~12kW) can
              | handle the cooling requirements (assuming a COP of 3).
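              | 
              | (Roughly, the arithmetic, assuming unity power factor:)
              | 
              |     import math
              |     p_w = 38_000                       # dissipated load, W
              |     hp = p_w / 746                     # ~51 HP
              |     amps = p_w / (math.sqrt(3) * 480)  # ~45.7 A, 480V 3-ph
              |     cool_kw = p_w / 1000 / 3           # ~12.7 kW at COP 3
              |     print(round(hp), round(amps, 1), round(cool_kw, 1))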
        
       | throw0101c wrote:
        | NANOG has had a regular presentation by Richard Steenbergen
       | called "Everything You Always Wanted to Know About Optical
       | Networking - But Were Afraid to Ask"; last year's:
       | 
       | * https://www.youtube.com/watch?v=Y-MfLsnqluM
        
       | exabrial wrote:
       | Alright, I have a dumb question...
       | 
       | How come with a LAG group on ethernet, I can get "more total
       | bandwidth", but any single TCP flow is limited to the max speed
        | of one of the LAG components (gigabit, let's say), but then these
       | guys are somehow combining multiple fibers into an overall faster
       | stream? What gives? Even round robin mode on LAG groups doesn't
       | do that.
       | 
       | What are they doing differently and why can't we do that?
        
         | bradfitz wrote:
          | Because your switch is mapping a 4-tuple to a certain link and
         | these people aren't, is my guess.
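          | 
          | (A toy illustration of that mapping; real switches use simpler
          | hardware hashes, but the idea is the same:)
          | 
          |     import hashlib
          | 
          |     def lag_member(src_ip, dst_ip, sport, dport, n_links):
          |         # every packet of a flow hashes the same, so it always
          |         # lands on the same member link (one link's max speed)
          |         key = f"{src_ip}|{dst_ip}|{sport}|{dport}".encode()
          |         digest = hashlib.sha256(key).digest()
          |         return int.from_bytes(digest[:4], "big") % n_links
          | 
          |     print(lag_member("10.0.0.1", "10.0.0.2", 51515, 443, 4))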
        
         | eqvinox wrote:
         | > What are they doing differently and why can't we do that?
         | 
         | You're (incorrectly) assuming they're doing Ethernet/IP in that
         | test setup. They aren't (this is implied by the results section
         | discussing various FEC, which is below even Ethernet framing),
         | so it's just a petabit of raw layer 1 bandwidth.
        
           | cycomanic wrote:
           | It's also important to note that many optical links don't use
           | ethernet as a protocol either (SDH/SONET are the common
           | ones), although this is changing more and more.
        
             | wmf wrote:
              | Looks like SDH/SONET topped out at 40 Gbps, which means
              | it died 10 years ago.
        
               | meepmorp wrote:
               | SONET is widely used in the US.
        
               | eqvinox wrote:
               | Used, maybe, but [citation needed].
               | 
                | Built, no, definitely not voluntarily[1]; Ethernet is the
               | only non-legacy thing surviving for new installations for
               | anything more than short range (few kilometer) runs.
               | InfiniBand, CPRI and SDI are dying too and getting
               | replaced with various over-Ethernet things, even for low-
               | layer line aggregation there's FlexE these days.
               | 
                | [1] some installations are the exception that confirms
                | the rule; but for a telco, sinking more money into keeping
                | an old SONET installation alive is totally the choice of
               | last resort. You'll have problems getting new hardware
               | too.
               | 
               | Disclaimer: I don't know what military installations do.
        
         | wmf wrote:
         | I assume this is just a PHY-level test and no real switches or
         | traffic was involved.
        
         | toast0 wrote:
         | You don't really want to, but if you configure _all_ of the LAG
         | participants on the path to do round-robin or similar balancing
          | rather than hashing based on addresses, you can push a single
          | flow past an individual link's capacity. You'll also be pretty
          | likely to get out-of-order data, and TCP receivers will
         | exercise their reassembly buffer, which will kill performance
         | and you'll rapidly wish you hadn't done all that configuration
         | work. If you do need more than one link's worth of throughput,
         | you'll almost always do better by running multiple flows, but
          | you may still need to configure your network so it hashes in
          | a way that gives you diverse paths between two hosts; defaults
          | might not give you diversity even on different flows.
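          | 
          | (Toy model of why round-robin reorders: two members with
          | slightly different latencies, packets sprayed alternately;
          | the numbers are made up:)
          | 
          |     delays = [10, 13]    # per-member latency (time units)
          |     # packet s goes out at time s, members used alternately
          |     arrive = sorted((s + delays[s % 2], s) for s in range(8))
          |     print([s for _, s in arrive])  # [0, 2, 1, 4, 3, 6, 5, 7]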
        
           | exabrial wrote:
            | The data arriving out of order is the key bit.
            | 
            | How do these guys get the data in order and we don't?
        
             | aaronax wrote:
              | Consider that a QSFP28 module uses four 25 Gbps lanes to
              | support sending a single 100 Gbps flow. So electronics do
             | exist that can easily do what you are asking. I think it is
             | just the economics of doing it for the various ports on a
             | switch, lack of a standard, etc.
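              | 
              | (A toy picture of lane striping -- not the actual 802.3
              | PCS, just the idea: fixed round-robin dealing makes the
              | order implicit, unlike per-packet spraying on a LAG:)
              | 
              |     blocks = [f"blk{i}" for i in range(12)]
              |     lanes = [blocks[l::4] for l in range(4)]  # 4 lanes
              |     rebuilt = [lanes[i % 4][i // 4] for i in range(12)]
              |     assert rebuilt == blocks      # order recovered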
        
       | Keyframe wrote:
        | While fascinating, I'm still waiting for that transformative
        | move away from electrical. Whichever optical route you're
        | taking, at the beginning and the end of it there has to be an
        | electrical conversion, which hinders speed, consumes power and
        | produces (sometimes tons of) heat. Wen optical switching?
        
         | wmf wrote:
         | There's been a ton of research on optical computing and it just
         | isn't impressive.
        
           | Keyframe wrote:
           | yet
        
       ___________________________________________________________________
       (page generated 2025-07-15 23:00 UTC)