[HN Gopher] Record-breaking chip can transmit 1.8 petabits per s...
___________________________________________________________________
Record-breaking chip can transmit 1.8 petabits per second
Author : typeofhuman
Score : 211 points
Date : 2022-10-24 11:36 UTC (11 hours ago)
(HTM) web link (newatlas.com)
(TXT) w3m dump (newatlas.com)
| petercooper wrote:
| _For reference, the global internet bandwidth has been estimated
| at just shy of 1 Pbit/s_
|
| The _entire_ Internet is using the same as what 1 million
| residential 1-gigabit connections could max out? I don't know
| why, but that sounds far below what I would have expected.
| smolder wrote:
| I wonder how that estimate was made. Maybe they are counting it
| as one transmission when something non-unique is broadcast to
| many endpoints? Or does every fetch of an asset from a CDN
| count?
|
| Either way, the bulk of the web is structured to put data as
| close to where it's needed as possible, to keep things quick
| and uncongested. So, it doesn't surprise me that internet
| backbones are much thinner than the aggregate of last mile
| connections.
| jedberg wrote:
| How do they generate data at that rate to transmit? I assume it's
| synthetic data and probably duplicated a lot? But how do they
| generate it and receive it to count it?
| henrikeh wrote:
| The two other comments gave very good general answers, but I
| happen to have worked on this specific project, so I can give
| some very specific details (as far as my memory goes.)
|
| Lab testing of this scale of transmission involves a bit of
| "educated simplification". We had some hundreds of wavelength
| channels, 37 fiber cores and two polarizations to fill with
| data. That is not realistic to actually do within our budget,
| so instead we split the system into components where there is no
| interference. For example, if there is different data on all
| neighboring cores compared to the core-under-test, then we dare
| to assume that the interference is random, without considering
| neighbors' neighbors, etc.
|
| This reduces our perspective to a single channel under test
| with known data and then at least one other channel which is
| just there as "noise" for the other channels. The goal is to
| make the channel-under-test have a realistic "background noise"
| from neighboring interference. This secondary signal is
| sometimes a time-delayed version, sometimes a completely
| independent (but real) data signal.
|
| This left us with a single signal of 32 GBd (giga symbols / s).
| This is doable on high-performance signal generators and
| samplers.
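|
| (Rough back-of-the-envelope, for the curious, of how one 32 GBd
| test channel scales to the headline figure. The channel count and
| the implied net bits/symbol below are illustrative assumptions,
| not the exact numbers from the experiment.)
|
|   # Back-of-the-envelope: scale one 32 GBd channel to the headline rate.
|   symbol_rate = 32e9          # 32 GBd per wavelength channel
|   wavelength_channels = 220   # assumed: "some hundreds" of comb lines
|   fiber_cores = 37            # 37-core fiber
|   target_rate = 1.84e15       # 1.84 Pbit/s headline figure
|
|   channels_total = wavelength_channels * fiber_cores
|   per_channel = target_rate / channels_total
|   net_bits_per_symbol = per_channel / symbol_rate
|
|   print(f"channels (wavelengths x cores): {channels_total}")
|   print(f"net rate per channel: {per_channel / 1e9:.0f} Gbit/s")
|   print(f"implied net bits/symbol (both pols): {net_bits_per_symbol:.1f}")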
| jedberg wrote:
| Ah ok so you just extrapolate the capacity of the pipe based
| on that, you don't actually generate petabytes of data. That
| makes a lot of sense, thanks!
| henrikeh wrote:
| I should clarify that we did measure every channel
| (polarization, wavelength and fiber core) individually. It
| would not be fair if we just measured one and multiplied ;)
|
| (And yes, that took forever. A shout out to A. A. Jorgensen
| and D. Kong for their endurance in that.)
| javajosh wrote:
| That's a good question! I assume that their test run is very
| short, like maybe a nanosecond. A petabit is 10^15 bits, which
| means they only needed to generate 10^6 bits (a megabit) for
| such a run. But even then, I'd be curious to know how you feed
| a laser 10^6 bits of configuration data in 10^-9 seconds!
| Definitely a paper I'd like to read.
| cycomanic wrote:
| So the way you do these experiments is that at the transmitter
| you use an arbitrary waveform generator with ~4 DAC channels
| which let you modulate a single wavelength channel in IQ and
| two polarizations (4 dimensions). These devices have
| typically a memory of around 500k samples and rates of up to
| 120 GS/s (the newest one actually has 256 GS/s; Google "Keysight
| AWG" if you are interested). So you generate a sequence of
| ~120k symbols (depending on symbol rate/oversampling) with 12
| bits per symbol (assuming 64-QAM on two polarizations). That
| sequence repeats over
| and over. You then use the multiplexing/emulation techniques
| described in other posts to emulate the other channels. This
| is essentially due to limitations of the measurement
| equipment. You can't just convert a random incoming bitstream
| into analogue symbols (with FEC coding) in realtime.
|
| In a deployed system this would be done by specific ASICs
| that take millions to develop and are comparatively
| inflexible. Thus if you want to test/research methods you use
| the above mentioned equipment which gives much more
| flexibility.
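|
| (A minimal sketch of preparing that kind of repeating test
| sequence offline before loading it into the AWG memory. All
| numbers, and the sample-and-hold upsampling, are illustrative
| assumptions; a real setup would pulse-shape the symbols first.)
|
|   import numpy as np
|
|   mem_samples = 500_000     # assumed AWG pattern memory per channel
|   sample_rate = 120e9       # 120 GS/s DAC
|   symbol_rate = 30e9        # chosen so oversampling is an integer
|   oversampling = int(sample_rate // symbol_rate)      # 4 samples/symbol
|   n_symbols = mem_samples // oversampling             # 125k symbols
|
|   # Random 64-QAM symbols for one polarization (6 bits/symbol).
|   levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7])
|   rng = np.random.default_rng(0)
|   symbols = rng.choice(levels, n_symbols) + 1j * rng.choice(levels, n_symbols)
|
|   # Naive sample-and-hold upsampling to fill the DAC memory; the
|   # sequence then repeats over and over, as described above.
|   waveform = np.repeat(symbols, oversampling)[:mem_samples]
|   print(waveform.size, f"{6 * symbol_rate / 1e9:.0f} Gbit/s raw per pol")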
| jpmattia wrote:
| > _How do they generate data at that rate to transmit?_
|
| In the lab, the most common scenario is to have a pseudo-random
| bit sequence (PRBS), and usually the sequence is 2^31-1 bits
| long. This makes both the generation (on the transmit side) and
| error-rate detection (on the receive side) reasonably
| straightforward, although it can be tricky to read out every
| one of the receive channels to check the bit-error rate (BER).
|
| Here's typical PRBS BER equipment: https://www.anritsu.com/en-
| us/test-measurement/products/mp19...
|
| Spoiler alert: The test equipment isn't cheap.
|
| Edit: Probably should mention- PRBS from a linear-feedback
| shift register is used, because in a PRBS of length 2^N-1 you
| are guaranteed to see every N-bit pattern, except for N
| zeroes in a row. This exercises the wideband system, so if there
| are spurious resonances in the wide pass band, errors will
| result.
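|
| (A toy illustration of that property with a short PRBS7 instead
| of PRBS31, small enough to check exhaustively; the seed and tap
| arrangement are just one common maximal-length choice.)
|
|   # 7-bit maximal-length LFSR: period 2^7 - 1 = 127. Every 7-bit
|   # window appears exactly once per period except all-zeros.
|   def prbs7():
|       state = 0x7F                                  # any non-zero seed
|       while True:
|           new_bit = ((state >> 6) ^ (state >> 5)) & 1
|           state = ((state << 1) | new_bit) & 0x7F
|           yield state & 1
|
|   gen = prbs7()
|   bits = [next(gen) for _ in range(127 + 6)]        # one period + wrap
|   windows = {tuple(bits[i:i + 7]) for i in range(127)}
|   assert len(windows) == 127 and (0,) * 7 not in windows
|   print("period 127, distinct 7-bit windows:", len(windows))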
| showerst wrote:
| As someone who's become a bit of a test equipment nerd, that
| is _very_ neat.
| cycomanic wrote:
| Actually, we tend not to use PRBSs anymore for these sorts of
| experiments; instead you use a randomly generated symbol/bit
| sequence which fits into the memory of the DAC. Similarly, you
| don't use a BERT anymore but instead use a realtime
| oscilloscope (even more expensive than the BERTs) and do
| offline digital signal processing (in real systems this is
| done by very expensive ASICs). PRBSs and BERTs are still used
| in so-called datacom experiments, where latency is often an
| issue and only very lightweight FEC is used, so one wants to
| measure down to error rates of 10^-9, unlike coherent systems.
| bell-cot wrote:
| The supporting circuitry & equipment - to get 1.84 petabits per
| second (Pbit/s) to & from the transmit/receive chips they
| demonstrated - will be a bit $$$extra...
| traviskeens wrote:
| great news in theory, but in practice, problems remain; chiefly,
| that google analytics & hubspot still reduce this to 0.9MB/s
| [deleted]
| dmitrygr wrote:
| Traffic is amount per second. "traffic per second" is amount per
| second^2.
|
| What does it mean for a chip to "transfer an amount per
| second^2"?
| Sohcahtoa82 wrote:
| I think it's pretty obvious what was meant by the title and
| you're disguising pedantry as confusion.
|
| It's poorly worded, sure, I'll give you that. But anyone should
| be able to understand that what they meant was "The internet on
| average transfers a certain amount of data per second, and this
| chip is capable of transferring at that rate."
| Dylan16807 wrote:
| > I think it's pretty obvious what was meant by the title and
| you're disguising pedantry as confusion.
|
| I think it's pretty obvious it was a challenge, not a display
| of fake confusion.
| drfuchs wrote:
| Presumably per minute, per hour, and per day, too? (The point
| being that the headline makes no sense as written.)
| dvirsky wrote:
| Yeah, it's like saying "this spaceship can go 10% of the speed
| of light per second"
| typeofhuman wrote:
| Sure it does. It says per-second.
|
| What is confusing about it?
| ouid wrote:
| What are the units of internet traffic?
| robertlagrant wrote:
| Data volume transmitted per time increment.
| stjohnswarts wrote:
| 10000 libraries of congress per second.
| cynwoody wrote:
| The Library of Congress claims+ to host 21 petabytes of
| digital content. That would take++ a little over a minute
| and a half to send over the link described in the
| article, assuming, of course, that the content has been
| put in a ready-to-send form.
|
| +https://www.loc.gov/programs/digital-collections-
| management/....
|
| ++https://www.google.com/search?q=21+petabytes+%2F+1.84+p
| etabi...
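|
| (The arithmetic behind that estimate, for the curious:)
|
|   loc_bytes = 21e15          # 21 petabytes of digital content
|   link_rate = 1.84e15        # 1.84 Pbit/s
|   seconds = loc_bytes * 8 / link_rate
|   print(f"{seconds:.0f} s (~{seconds / 60:.1f} minutes)")   # ~91 s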
| mirekrusin wrote:
| They use Pbit/s in the article.
| postalrat wrote:
| It can transmit one internet of traffic per second. So the
| unit is an internet.
|
| They should have used a more common unit like encyclopedia
| britannicas.
| thomastjeffery wrote:
| But how long of an internet?
|
| That's the tricky bit: "internet traffic" is already a
| measure of units over time.
| stevuu wrote:
| How many tricky bits are there in 1.84 petabits?
| etrautmann wrote:
| The word traffic already implies a rate
| Sebb767 wrote:
| Per-second doesn't make sense in this context. Either it can
| transmit all of the internet traffic, so it has sufficient
| bandwidth to theoretically mirror the whole internet traffic,
| or it can't. A time unit doesn't make sense here.
|
| The alternative interpretation would be that it can transmit
| the whole amount of data ever sent through the internet in
| its existence per second, but this seems rather unlikely.
| stjohnswarts wrote:
| Well, barring the title, they define it in the article and
| say it can match the current raw speed of all internet
| traffic. It doesn't matter what units you use as long as
| it's based on bits/second. Pretty straightforward. Can it
| keep it up? Probably not currently. Can it handle similar
| amounts of switching? Also probably not.
| williamscales wrote:
| > Either it can transmit all of the internet traffic, so it
| has sufficient bandwidth to theoretically mirror the whole
| internet traffic, or it can't. A time unit doesn't make
| sense here.
|
| It's not a time unit. It's a rate. The rate is twice the
| rate of traffic on the internet. Therefore it can transmit
| all the traffic on the internet.
|
| Ideas like traffic only make sense in the context of per-
| unit-time, because they're fundamentally about a flow.
| Dylan16807 wrote:
| Yes, it's a rate. The aspect of time is already baked in.
| Adding an _additional_ unit of time is either redundant
| or means you 're talking about acceleration.
| JCharante wrote:
| I can run twice the speed of Usain Bolt.. per second
| chrisseaton wrote:
| > What is confusing about it?
|
| If it can transmit it per-second, then it can also transmit
| it per-hour, so it's redundant and doesn't add anything,
| which means it's confusing as to why it's there.
| happytoexplain wrote:
| "Car X is capable of the same velocity as car Y, per hour."
| Camisa wrote:
| Why would you ever say that "car X is capable of the same
| velocity as car Y if you measure car X's velocity by km/h
| and car Y's velocity by mph"?
| badwolf wrote:
| The second sentence in the article:
|
| Engineers have transmitted data at a blistering rate of 1.84
| petabits per second (Pbit/s), almost twice the global internet
| traffic per second.
| SnowHill9902 wrote:
| Traffic is already measured in bit/s so "traffic per second"
| would be something like data acceleration. Of course this is
| wrong but journalists have no idea.
| rad88 wrote:
| The most popular topic is so often the post title.
| SnowHill9902 wrote:
| Why be wrong if you can be right.
| metadat wrote:
| Also discussed 2 days ago:
|
| _Chip can transmit all of the internet 's traffic every second_
|
| https://news.ycombinator.com/item?id=33296750
|
| (56 points, 17 comments)
| tmikaeld wrote:
| Are these speeds just "tested" maximums, or can they be utilized
| in practice?
| rassibassi wrote:
| Not practical yet; the novelty is the frequency comb, which
| allows 200+ wavelength channels with only a single
| laser, where before one required 200 lasers.
|
| In an experiment like this, only the initial light source is
| modulated and therefore all channels carry the same data. The
| equipment for the transmitter and receiver chain is so
| expensive that university labs can barely afford one of each.
| cycomanic wrote:
| Almost correct. You typically need 2-4 transmitters to
| emulate the system. So you modulate one or two channels under
| test and modulate the rest of the band with a single
| modulator and use some decorrelation tricks to be realistic.
| Then you scan your channels under test through the whole
| band. This is typically a lower bound of performance, i.e.
| a real system would likely perform better. As you said, using
| individual transmitters is economically unfeasible even for
| the best equipped industry labs.
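|
| (A toy numerical illustration of the decorrelation trick: a
| sufficiently delayed copy of the modulated signal looks
| statistically independent to the channel under test. The delay
| and sequence length here are arbitrary.)
|
|   import numpy as np
|
|   rng = np.random.default_rng(1)
|   pam = np.array([-3, -1, 1, 3])
|   symbols = rng.choice(pam, 4096) + 1j * rng.choice(pam, 4096)
|
|   delay = 997                          # symbols; >> channel memory
|   neighbor = np.roll(symbols, delay)   # delayed copy as "neighbor"
|
|   # Near-zero correlation at zero lag: the neighbor behaves like
|   # independent interference from the test channel's perspective.
|   rho = abs(np.vdot(symbols, neighbor)) / np.vdot(symbols, symbols).real
|   print(f"normalized cross-correlation: {rho:.3f}")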
| Dylan16807 wrote:
| Does that mean "We experimentally demonstrate transmission
| of 1.84 Pbit s^-1" in the paper abstract is a lie?
| henrikeh wrote:
| I worked on this project and cycomanic summarizes the
| practice well. I've written more on it here:
| https://news.ycombinator.com/item?id=33321506
| yieldcrv wrote:
| Anybody here following photonic or optical processors closely?
| le-mark wrote:
| It says they transmit over a 37-core fiber, so 1.84 / 37 is about
| 50 terabits per second? Is it common for optical PHYs to
| encode/decode at this rate?
| Sporktacular wrote:
| Unless I misunderstood, this is the number that matters. The 1+
| Pb/s is like giving a headline grabbing statistic that highway
| can carry 10 million passengers per hour and then adding below
| that it's a 100 lane highway. The advancement seems that the
| de/multiplexing is done on a single module at each end.
| rassibassi wrote:
| They also multiplex across 200+ channels in wavelength
| (wavelength division multiplexing).
|
| Not sure what the baud rate of a single channel was in their
| experiment, but probably between 32-80 GBd, which is common for
| the lab equipment at universities. The industry is knocking on
| 100-400G, where for the actual decoding and signal processing
| massive parallelism is applied to reduce the rate even more.
| tmaly wrote:
| I want my 8k streaming video.
| noobermin wrote:
| Great now webdevs will get even more lazy and ship an entire
| docker image in every html tag.
| booleandilemma wrote:
| It gives the <img> tag a whole new meaning.
| tpmx wrote:
| Please don't give them ideas...
| datavirtue wrote:
| It's an old idea called: object oriented programming.
| TOMDM wrote:
| "The user needs to be able to edit some audio in the browser"
|
| Next thing you know, you have linux compiled to WASM running a
| docker container built to host ffmpeg for you.
| thesandlord wrote:
| https://github.com/ffmpegwasm/ffmpeg.wasm
| henrikeh wrote:
| Late to the thread, but I took part in this research (7th author
| in the list). I worked on the signal processing, information
| coding, etc., and am happy to answer any questions :-)
| randcraw wrote:
| Does this work imply that the same tech could create ultra-
| high-speed switches that could match this bandwidth, thereby
| routing and propagating, and not just flow between two points?
|
| BTW, congrats on your success.
| lancewiggs wrote:
| The short answer is yes. (1)
|
| Optical saves a heck of a lot of power, and is obviously much
| faster than copper, so that's the way it's all going.
|
| The longer answer is that it requires reliable and appropriately
| sized/priced transceivers to get the data back to electrical to
| match the speed of the optical; those are going to be a while
| coming, and this tech is still in the lab.
|
| At the top end subsea cables have very high cost and
| traditionally bulky transceivers, and it's all about data
| volume, not switching.
|
| At the other end of the scale, inside the data centre, where
| most switching needs to occur, there is a move towards
| optical interconnections and co-packaged switches. (1 and 2)
|
| 1: https://www.intel.com/content/www/us/en/newsroom/news/inte
| l-... 2: https://www.intel.in/content/www/in/en/architecture-
| and-tech...
| henrikeh wrote:
| Thanks :-)
|
| It has been a while since I was into optical signal
| processing, but I will ask my colleague, who is much more
| well-versed.
| 3minus1 wrote:
| For us n00bs, how do you see this being applied? And in what
| time frame?
| henrikeh wrote:
| I can't answer for the chip aspect (which is the truly novel
| part of this research), but many of the signal processing and
| coding techniques are being deployed in new optical
| transmission systems. Constellation shaping and rate adaptive
| coding were two techniques we used in this paper to ensure
| that individual channels were as ideally utilized as
| possible.
| rglover wrote:
| What do you think the time lag is for this actually being
| deployed in a non-research context (either small scale or full-
| blown rollout)?
| henrikeh wrote:
| Wrote another reply here:
| https://news.ycombinator.com/item?id=33321669
|
| I'd say that there is at least a 10 year delay between the
| lab and commercial deployment. Even then we are talking about
| deployment in large fiber systems and not to the home.
|
| However, not all ideas in the lab ever make it into
| deployment.
| electroagenda wrote:
| Congrats!
|
| What modulation, bitrate and spectral efficiency did you use
| per WDM channel?
|
| Was that rate achieved in real-time or with massive post
| processing?
| henrikeh wrote:
| We used constellation shaping and a rate-adaptive code to
| tailor the bitrate of each channel. It varied between roughly
| 64-QAM and 256-QAM depending on the SNR in the channel.
|
| Post-processing times were not too bad. It ran on a standard
| desktop computer and gave an estimate of the data rate in
| about a minute (can't remember exactly). Of course, compared
| to actual transmission that is terribly slow, but that was
| only due to the implementation and the needs of this experiment.
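|
| (A toy sketch of the rate-adaptation idea: pick the densest
| constellation a channel's SNR can support, with some margin. The
| 6 dB gap and the candidate set are illustrative assumptions; the
| actual experiment used constellation shaping and rate-adaptive
| coding rather than simply snapping to fixed QAM orders.)
|
|   import math
|
|   def choose_qam(snr_db, gap_db=6.0):
|       """Largest QAM whose bits/symbol fit under the gapped Shannon bound."""
|       max_bits = math.log2(1 + 10 ** ((snr_db - gap_db) / 10))
|       for bits in (8, 7, 6, 5, 4, 2):      # 256-QAM down to QPSK
|           if bits <= max_bits:
|               return 2 ** bits
|       return None                          # channel too noisy for this set
|
|   for snr in (14, 18, 22, 26, 30):
|       print(f"SNR {snr} dB -> {choose_qam(snr)}-QAM")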
| contingencies wrote:
| Devil's advocate here. How do you feel about the social
| significance of this type of work? Do you think "enough
| bandwidth" is a thing? If only the cost drops further, will it
| affect society? If we can already stream anything in the
| collective consciousness within seconds, what is the purpose of
| more? Is it likely to enable unnecessary levels of video
| surveillance by state actors?
| henrikeh wrote:
| I must confess that I have never been concerned along those
| lines.
|
| I have thought a lot more about the environmental impact of
| transmission technology. It is a massively energy-consuming
| industry, and the expectation is to keep providing more
| capacity, while gains in efficiency do not add up to an actual
| reduction in energy use.
|
| For what it is worth, I work on Alzheimer's research today:
| https://optoceutics.com
| contingencies wrote:
| I appreciate your honesty. You are not alone in working
| without considering social impact; it's rife in tech, and I
| have been guilty of it previously too.
|
| Alzheimer's seems a challenge! Here in China they
| apparently approximate it for research purposes by dosing
| primates with MDMA... should be easy to find volunteers!
| ccbccccbbcccbb wrote:
| So, will the life of an average dweller of the Earth become
| happier because of this?
| superkuh wrote:
| Radio astronomy always needs more bandwidth. International arrays
| like LOFAR or the SKA pathfinders generate a comparable amount
| of information per second to the entire internet. They could
| definitely
| benefit from small scale production of extremely high bandwidth
| optical networking components.
| hedora wrote:
| This is cool, but note that it's only enough to feed the floating
| point units on about 1000 consumer grade GPUs.
|
| I know cloud is all the rage and stuff, but the thing that really
| surprised me from the article is at how (relatively) slow the
| internet backbone is.
| hnuser123456 wrote:
| I'm guessing you're talking VRAM bandwidth, which is just over
| 1 TB/s on a 4090, while the "internet backbone" is apparently
| ~1 Pb/s, lowercase B, so actually only 128 4090s have the
| memory bandwidth to match the internet backbone. Of course,
| they would fill up in 0.2 ms, at only 24GB each running in
| parallel.
| hedora wrote:
| Those are over $2K. I meant "normal" consumer grade stuff in
| the $200-$400 range, as opposed to "enthusiast" stuff.
|
| Either way, it's no more than a few racks of server-grade
| GPUs, which is probably where applications would actually
| want 1PBit/sec of VRAM bandwidth.
| jl6 wrote:
| What does "entire internet's traffic" really mean? There isn't
| one single measurement point through which all traffic flows, so
| what set of connections are they measuring? Maybe traffic between
| BGP peers?
| npongratz wrote:
| And 21.5 years ago, we were (or at least, I was) celebrating mere
| multi-terabit photonic switching:
|
| https://hardware.slashdot.org/story/01/04/23/1233235/multite...
| Clent wrote:
| Is it possible to calculate the maximum upper bound on the amount
| of data possible here?
| Dylan16807 wrote:
| It depends on what you mean by "possible" and what future
| improvements you're considering, because otherwise the answer
| is just 1.84 Pbit/s.
|
| But very generally, you have around 200 THz of range for these
| infrared lasers. So on a single core, I'd expect the max to be
| within an order of magnitude of 200Tbps. They're using 37
| cores, so they're getting 50Tbps per core right now.
|
| Order of magnitude because it's not super hard to approach a
| bit per Hz of bandwidth from the bottom side, though difficult
| at very high frequencies, while it gets exponentially hard to
| exceed it. And here's a couple relevant charts for how fiber is
| extra self-limiting: https://i.stack.imgur.com/bwTy2.png
| http://opticalcloudinfra.com/wp-content/uploads/2017/07/Nonl...
| rassibassi wrote:
| The upper bound is still the Shannon limit. The experiment does
| a lot of multiplexing: spatial multi-core fiber, spectral multi
| channel multiplexing across wavelength, dual polarization.
|
| Each of the multiplexed channels is individually limited by
| the Shannon limit, and at higher power the fiber's Kerr
| effect creates interference, which results in a sweet spot for
| the optimal optical launch power.
|
| The novelty here is that the spectral channels are all
| generated from a single laser source rather than a laser per
| channel.
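|
| (For a sense of scale, the per-channel Shannon bound with some
| assumed numbers; bandwidth and SNR below are illustrative only.)
|
|   import math
|
|   bandwidth_hz = 32e9      # one WDM channel, ~32 GHz
|   snr_db = 22              # assumed received SNR
|   polarizations = 2        # dual-polarization transmission
|
|   capacity = polarizations * bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))
|   print(f"Shannon bound per channel: {capacity / 1e9:.0f} Gbit/s")
|   # Multiply by the number of comb lines and fiber cores to see why
|   # the aggregate can reach petabit-per-second territory.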
| igravious wrote:
| ^ Superb answer ^ | |
|
| Shannon Limit in Information Theory
|
| [1] https://en.wikipedia.org/wiki/Noisy-
| channel_coding_theorem
|
| [2] https://news.mit.edu/2010/explained-shannon-0115
| Vt71fcAqt7 wrote:
| >We also present a theoretical analysis that indicates that a
| single, chip-scale light source should be able to support 100
| Pbit s^-1 in massively parallel space-and-wavelength multiplexed
| data transmission systems.
| Zenst wrote:
| I'm eventually foreseeing a whole new form of cache. A coil of
| optical fiber with the cache data constantly inflight around that
| loop. With denser optical data transmissions the amount of data
| per meter of coil starts increasing.
|
| At this speed, we are already talking 2% of the entire Internet
| traffic in flight in a single fiber along the shortest route
| between the UK and USA. That's just a single fiber. As
| transceivers of this capability get cheaper and cheaper, all
| those unused dark fibers start to offer up alternative uses as
| in-flight caches. Think of how much memory would be needed to
| store that amount of data and how much that costs; even with
| the cost of fiber, the idea would start to make sense.
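|
| (A rough estimate of how much such an in-flight "cache" would
| hold; the span length and group index are assumptions.)
|
|   line_rate = 1.84e15      # bit/s
|   span_km = 5500           # assumed transatlantic-ish span
|   group_index = 1.47       # light in silica travels at ~c/1.47
|   c = 299_792_458          # m/s
|
|   flight_time = span_km * 1e3 * group_index / c       # ~27 ms one way
|   bits_in_flight = line_rate * flight_time
|   print(f"{flight_time * 1e3:.0f} ms in flight, "
|         f"{bits_in_flight / 8 / 1e12:.1f} TB stored in the glass")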
| achr2 wrote:
| Everything old is new again - delay line memory at the speed of
| light.
| WASDx wrote:
| Reminds me of https://github.com/yarrick/pingfs
| colechristensen wrote:
| 1/1.4 * the speed of light :) Moves a bit slower in glass
| fiber.
| adgjlsfhk1 wrote:
| if you want to be pedantic, it is the speed of light. Just
| not the speed of light in vacuum :)
| fellerts wrote:
| You can technically do this today. Just target a remote server
| and run pingfs. Store your data in the transatlantic fibres!
| https://github.com/yarrick/pingfs
| [deleted]
| hinkley wrote:
| You wouldn't want fiber though. It's designed with low latency
| in mind, whereas for a delay line you want high latency (but
| not too high).
| porbelm wrote:
| Fiber Token Ring?
| p1mrx wrote:
| Even at 1.84 Pbps, you can only store about a gigabyte per km,
| so this doesn't seem very practical.
|
| https://www.wolframalpha.com/input?i=1.84Pbps+*+1km+%2F+c
| Cerium wrote:
| Speed of light in fiber is not c, but about 2/3 c.
| p1mrx wrote:
| So you're saying it's... _about a gigabyte per km_?
| oefnak wrote:
| Reminds me of the harderdrive based on the ping protocol:
| https://youtu.be/JcJSW7Rprio
| richardwhiuk wrote:
| Reminds me of https://www.youtube.com/watch?v=d8BcCLLX4N4
| jpmattia wrote:
| I have not dug deeply into the technical content, but the
| headline as written is pretty far off the mark.
|
| I believe the press release is here:
| https://www.dtu.dk/english/news/all-news/new-data-transmissi...
|
| The innovation: Normally, data over a fiber is multiplexed using
| many wavelengths of light (wave-division multiplexing, or WDM for
| short). These wavelengths are generated from an array of lasers,
| forming a frequency comb.
|
| The result here creates a frequency comb from a single laser, and
| uses that for the transmission. It saves all the power associated
| with the many lasers traditionally used for WDM. All the "chips"
| that do the modulation, transmission, reception, and de-
| modulation are still there, but you've cut out all but one laser
| from the system. It's a nice result.
|
| That was my quick take, please correct if you have more info.
| Vt71fcAqt7 wrote:
| The key point is the petabit per second rate they achieved:
|
| >Using only a single light source, scientists have set a world
| record by transmitting 1.8 petabits per second.
|
| In 2021 the world record was about 300 Tbit/s[0]. Why is the
| headline misleading? For reference, the headline is currently
| "Record-breaking chip can transmit entire internet's traffic per
| second." This seems to be correct:
|
| >According to a study from global telecommunications market
| research and consulting firm TeleGeography, global internet
| bandwidth has risen by 28% over the course of 2022, with a
| four-year compound annual growth rate (CAGR) of 29%, and is now
| standing at 997Tbps (terabits per second).[1]
|
| >Normally, data over a fiber is multiplexed using many
| wavelengths of light (wave-division multiplexing, or WDM for
| short). These wavelengths are generated from an array of
| lasers, forming a frequency comb.
|
| I think that is a relatively new technique. For example see
| https://www.nature.com/articles/s41467-019-14010-7 :
|
| >Optical frequency combs were originally conceived for
| establishing comparisons between atomic clocks and as a tool
| to synthesize optical frequencies, but they are also
| becoming an attractive light source for coherent fiber-optical
| communications, where they can replace the hundreds of lasers
| used to carry digital data
|
| So "normally" might give the wrong impression. As far as I
| know, no commercial service is using it. One reason is the
| cost, which this article addresses by proposing a chip-based
| approach which makes it cheaper and easier.
|
| [0]https://www.nict.go.jp/en/press/2021/07/12-1.html
|
| [1]https://www.computerweekly.com/news/252524883/New-
| networking...
|
| Edit: I should point out that the "previous" record was with a
| 4-core optical fiber, whereas this one uses a 37-core one. They
| are really two different things: one about the cable and the
| other about the transmitter. So this one doesn't "beat" the
| other.
| formerly_proven wrote:
| Maybe I'm missing a nuance here but WDM with one laser per
| wavelength is bread and butter tech used everywhere. The base
| case (n=2) even forms the basis of PON networks.
| Vt71fcAqt7 wrote:
| Frequency combs are derived from a single light source.
|
| >Current fibre optic communication systems owe their high-
| capacity abilities to the wavelength-division multiplexing
| (WDM) technique, which combines data channels running on
| different wavelengths, and most often requires many
| individual lasers. Optical frequency combs, with equally
| spaced coherent comb lines derived from a single source,
| have recently emerged as a potential substitute for
| parallel lasers in WDM systems[0](2021)
|
| So "These wavelengths are generated from an array of
| lasers, forming a frequency comb" is using "frequency comb"
| to mean something else in that sentence.
|
| [0]https://www.degruyter.com/document/doi/10.1515/nanoph-20
| 20-0...
| jpmattia wrote:
| > _So "These wavelengths are generated from an array of
| lasers, forming a frequency comb" is using "frequency
| comb" to mean something else in that sentence._
|
| Yes, "frequency grid" would have been better terminology.
| Common spacing for WDM is 50 GHz between adjacent
| frequencies (it's ITU spec'd iirc), and those rely on
| feedback systems to maintain the spacing precision.
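|
| (A small sketch of that kind of fixed grid; the 193.1 THz anchor
| and 50 GHz spacing follow the usual ITU-T DWDM convention, and
| the number of channels shown is arbitrary.)
|
|   # A slice of a fixed 50 GHz DWDM grid around the 193.1 THz anchor.
|   anchor_thz, spacing_ghz = 193.1, 50
|   for n in range(-4, 5):
|       f_thz = anchor_thz + n * spacing_ghz / 1000
|       print(f"{f_thz:.2f} THz (~{299_792_458 / (f_thz * 1e12) * 1e9:.3f} nm)")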
| jpmattia wrote:
| > _Why is the headline misleading? for reference, the
| headline is currently "Record-breaking chip can transmit
| entire internet's traffic per second."_
|
| The "chip" is a CW laser, so it transmits no data.
|
| It's a little hard to tell from the article + PR, but I think
| the result is a laser with a stabilized frequency-comb output
| suitable for WDM that has been implemented on a single die
| (which is still a nice result.)
|
| Perhaps I missed that they implemented an entire transmitter
| chain on the "chip", but I believe the chip innovation is the
| continuous photon source, not the data transmission.
| henrikeh wrote:
| The chip which produced the laser is indeed "just" CW with
| data modulated on separately. And the novelty indeed lies in
| the width of the comb source and the SNRs of the obtained
| channels.
|
| (Worked on this project.)
| jpmattia wrote:
| Congrats to you and team on these results.
|
| > _And novelty indeed lies in the width of the comb
| source and the SNRs of the obtained channels._
|
| Can you expand on this? I'd be curious how it compares to
| a traditional (multi-laser) WDM system, probably others
| would be too.
| henrikeh wrote:
| Thanks! I've reached out to my colleague who worked on
| the chip side of this project.
___________________________________________________________________
(page generated 2022-10-24 23:00 UTC)