[HN Gopher] Understanding Google's Quantum Error Correction Brea...
       ___________________________________________________________________
        
       Understanding Google's Quantum Error Correction Breakthrough
        
       Author : GavCo
       Score  : 170 points
       Date   : 2024-11-22 17:53 UTC (1 day ago)
        
 (HTM) web link (www.quantum-machines.co)
 (TXT) w3m dump (www.quantum-machines.co)
        
       | terminalbraid wrote:
       | Note the paper they are referring to was published August 27,
       | 2024
       | 
       | https://arxiv.org/pdf/2408.13687
        
       | dangerlibrary wrote:
       | I'm someone not really aware of the consequences of each quantum
       | of progress in quantum computing. But, I know that I'm exposed to
       | QC risks in that at some point I'll need to change every security
       | key I've ever generated and every crypto algorithm every piece of
       | software uses.
       | 
       | How much closer does this work bring us to the Quantum Crypto
       | Apocalypse? How much time do I have left before I need to start
       | budgeting it into my quarterly engineering plan?
        
         | griomnib wrote:
          | The primary threat model is that data collected _today_
          | via mass surveillance, which is _currently_ unbreakable,
          | will _become_ breakable.
         | 
         | There are already new "quantum-proof" security mechanisms being
         | developed for that reason.
        
           | sroussey wrote:
            | Yes, and people are recording encrypted communications
            | now for this reason.
        
           | bawolff wrote:
            | Perhaps, but you've got to ask yourself how valuable
            | your data will be 20-30 years in the future. For some
            | people, maybe that is a big deal. For most people it
            | is a very low-risk threat. Most private data has a
            | shelf life after which it is no longer valuable.
        
         | bdamm wrote:
          | I'm not sure anyone really knows this, although there is
          | no shortage of wild speculation.
         | 
         | If you have keys that need to be robust for 20 years you should
         | probably be looking into trying out some of the newly NIST
         | approved standard algorithms.
        
         | er4hn wrote:
         | You'll need to focus on asym and DH stuff. If your symmetric
         | keys are 256 bits you should be fine there.
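          | 
          | (Rough intuition: Grover's search only gives a quadratic
          | speedup, so an n-bit symmetric key keeps roughly n/2
          | bits of security against a quantum attacker. A toy
          | sketch of just that arithmetic, nothing more:)
          | 
          |     # Assumption: Grover's algorithm finds an n-bit key
          |     # in ~2^(n/2) operations instead of ~2^n.
          |     def effective_bits(key_bits, quantum):
          |         return key_bits // 2 if quantum else key_bits
          | 
          |     for n in (128, 256):
          |         print(f"AES-{n}: {effective_bits(n, False)} bits"
          |               f" classical, ~{effective_bits(n, True)}"
          |               f" bits against a quantum attacker")
          |     # AES-256 keeps ~128-bit security, hence "fine there"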
         | 
          | The hope is that most of this should just be: update to
          | the latest version of openssl / openssh / golang-crypto /
          | what have you, and make sure your handshake settings use
          | the latest crypto algorithms. This is all still kind of
          | far off because there is very little consensus around
          | how to change protocols, for various human reasons.
         | 
          | At some point you'll need to generate new asym keys as
          | well, which is where I think things will get interesting.
          | HW-based solutions just don't exist today and will
          | probably take a long time due to the inevitable cycle:
          | companies want to meet US fedgov standards due to
          | regulations / selling to fedgov; fedgov is taking its
          | sweet time standardizing protocols and seems interested
          | in adding more certified algorithms as well; actually
          | getting something approved for FIPS 140 (the relevant
          | standard) takes over a year at this point just to get
          | your paperwork processed; and everyone wants to move
          | faster. Software can move quicker in terms of
          | development, but you have the normal tradeoffs there,
          | with keys being easier to exfiltrate and the same issue
          | with formal certification.
        
           | dylan604 wrote:
           | Maybe my tinfoil hat is a bit too tight, but every time
           | fedgov wants a new algo certified I question how strong it is
           | and if they've already figured out a weakness. Once bitten
           | twice shy or something????
        
             | jiggawatts wrote:
             | The NSA has definitely weakened or back-doored crypto. It's
             | not a conspiracy or even a secret! It was a matter of
             | (public) law in the 90s, such as "export grade" crypto.
             | 
             | Most recently Dual_EC_DRBG was forced on American vendors
             | by the NSA, but the backdoor private key was replaced by
             | Chinese hackers in some Juniper devices and used by them to
             | spy on westerners.
             | 
              | Look up phrases like "nobody but us" (NOBUS), which
              | is the aspirational goal of these approaches, but it
              | often fails, leaving everyone, including Americans
              | and their allies, exposed.
        
               | dylan604 wrote:
               | You should look up the phrase "once bitten twice shy" as
               | I think you missed the gist of my comment. We've already
               | been bitten at least once by incidents as you've
               | described. From then on, it will always be in the back of
               | my mind that friendly little suggestions on crypto algos
               | from fedgov will always be received with suspicion.
                | Accepting that, most people who are unaware will
                | assume someone is wearing a tinfoil hat.
        
         | bawolff wrote:
         | > But, I know that I'm exposed to QC risks in that at some
         | point I'll need to change every security key I've ever
         | generated and every crypto algorithm every piece of software
         | uses.
         | 
          | Probably not. Unless a real sudden unexpected
          | breakthrough happens, best practice will be to use
          | quantum-resistant algorithms long before this becomes a
          | relevant issue.
         | 
          | And practically speaking, it's only public-key crypto
          | that is an issue; your symmetric keys are fine
          | (oversimplifying slightly, but practically speaking this
          | is true).
        
       | computerdork wrote:
        | Does anyone on HN have an understanding of how close this
        | achievement brings us to useful quantum computers?
        
         | kittikitti wrote:
         | This is another hype piece from Google's research and
         | development arm. This is a theoretical application to increase
         | the number of logical qubits in a system by decreasing the
          | error caused by quantum circuits. They just didn't do the
          | last part yet, so the application is yet to be seen.
         | 
         | https://arxiv.org/abs/2408.13687
         | 
         | "Our results present device performance that, if scaled, could
         | realize the operational requirements of large scale fault-
         | tolerant quantum algorithms."
         | 
         | Google forgot to test if it scales I guess?
        
           | wholinator2 wrote:
            | Lol yeah, the whole problem with quantum computation is
            | the scaling; that's literally the entire problem. It's
            | trivial to make a qubit, harder to make 5, impossible
            | to make 1000. "If it scales" is just wishy-washy
            | language to cover "in the ideal scenario where
            | everything works perfectly and nothing goes wrong, it
            | will work perfectly".
        
           | wasabi991011 wrote:
           | It's the opposite of a theoretical application, and it's not
           | a hype piece. It's more like an experimental confirmation of
           | a theoretical result mixed with an engineering progress
           | report.
           | 
           | They show that a certain milestone was achieved (error rate
           | below the threshold), show experimentally that this milestone
           | implies what theorists predicted, talk about how this
           | milestone was achieved, and characterize the sources of error
           | that could hinder further scaling.
           | 
           | They certainly tested how it scales up to the scale that they
           | can build. A major part of the paper is how it scales.
           | 
           | >> "Our results present device performance that, if scaled,
           | could realize the operational requirements of large scale
           | fault-tolerant quantum algorithms."
           | 
           | > Google forgot to test if it scales I guess?
           | 
           | Remember that quantum computers are still being built. The
           | paper is the equivalent of
           | 
           | > We tested the scaling by comparing how our algorithm runs
           | on a chromebook, a server rack, and google's largest
           | supercomputing cluster and found it scales well.
           | 
           | The sentence you tried to interpret was, continuing this
           | analogy, the equivalent of
           | 
           | >Google's largest supercomputing cluster is not large enough
           | for us, we are currently building an even bigger
           | supercomputing cluster, and when we finish, our algorithm
           | should (to the best of our knowledge) continue along this
           | good scaling law.
        
           | Strilanc wrote:
           | The experiment is literally all about scaling. It tests
           | scaling from distance 3 to 5 to 7. It shows the logical qubit
           | lifetime doubles each time the distance is increased. The
           | sentence you quoted is describing an expectation that this
           | doubling will continue to larger distances, when larger chips
           | are built.
           | 
            | This is the first quantum error correction experiment
            | showing actual improvement as size is increased
            | (without any cheating such as postselection or only
            | running for a single step). It was always believed in
            | theory that bigger codes should have more protection,
            | but there have been various skeptics over the years
            | saying you'd never actually see these improvements in
            | practice, due to the engineering difficulty or due to
            | quantum mechanics breaking down or something.
           | 
           | Make no mistake; much remains to be done. But this experiment
           | is a clear indication of progress. It demonstrates that error
           | correction actually works. It says that quantum computers
           | should be able to solve qubit quality with qubit quantity.
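            | 
            | As a toy illustration of what that doubling buys (the
            | starting error rate below is an assumed placeholder,
            | not a number from the paper):
            | 
            |     # Each distance step (d -> d+2) doubles the
            |     # logical qubit lifetime, i.e. halves the logical
            |     # error rate, per the d = 3 -> 5 -> 7 trend.
            |     eps = 3e-3  # assumed error rate per cycle at d=3
            |     for d in range(3, 16, 2):
            |         print(f"distance {d:2d}: ~{eps:.1e} per cycle")
            |         eps /= 2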
           | 
           | disclaimer: worked on this experiment
        
         | layer8 wrote:
         | The fact that there is a forward-looking subsection about "the
         | _vision_ for fault tolerance" (emphasis mine) almost entirely
         | composed of empty words and concluding in "we are just starting
         | this exciting journey, so stay tuned for what's to come!" tells
         | you "not close at all".
        
       | xscott wrote:
       | While I'm still eager to see where Quantum Computing leads, I've
       | got a new threshold for "breakthrough": Until a quantum computer
       | can factor products of primes larger than a few bits, I'll
       | consider it a work in progress at best.
        
         | kridsdale1 wrote:
         | There will be a thousand breakthroughs before that point.
        
           | xscott wrote:
            | That just means that the word "breakthrough" has lost
            | its meaning. I would suggest the word "advancement",
            | but I know this is a losing battle.
        
             | Suppafly wrote:
              | >That just means that the word "breakthrough" has
              | lost its meaning.
             | 
             | This. Small, incremental and predictable advances aren't
             | breakthroughs.
        
         | UberFly wrote:
         | I guess like most of these kinds of projects, it'll be smaller,
         | less flashy breakthroughs or milestones along the way.
        
           | Terr_ wrote:
           | People dramatically underestimate how important incremental
           | unsung progress is, perhaps because it just doesn't make for
           | a nice memorable story compared to Suddenly Great Person Has
           | Amazing Idea Nobody Had Before.
        
         | dekhn wrote:
          | quantum computers can (should be able to; do not
          | currently) solve many useful problems without ever being
          | able to factor products of primes.
        
           | Eji1700 wrote:
           | Yeah I think that's the issue that makes it hard to assess
           | quantum computing.
           | 
            | My very layman understanding is that there are certain
            | things it will be several orders of magnitude better
            | at, but at "simple" things a normal machine handles
            | easily, quantum will be just as bad if not massively
            | worse.
            | 
            | It really should be treated as a different tool for
            | now. Maybe some day in the very far future, if it
            | becomes easier to make quantum computers, an
            | abstraction layer will arrive in some manner that
            | means the end user thinks it's just like a normal
            | computer. But from the viewpoint of "looking at a
            | series of 1/0's" versus "looking at a series of
            | superimposed particles", it's extremely different in
            | function.
        
           | xscott wrote:
           | What are some good examples?
           | 
            | The one a few years ago where Google declared "quantum
            | supremacy" sounded a lot like simulating a noisy
            | circuit by implementing a noisy circuit. And that seems
            | a lot like _simulating_ the falling particles and their
            | collisions in an hourglass by using a physical
            | hourglass.
        
             | dekhn wrote:
             | The only one I can think of is simulating physical systems,
             | especially quantum ones.
             | 
             | Google's supremacy claim didn't impress me; besides being a
             | computationally uninteresting problem, it really just
             | motivated the supercomputer people to improve their
             | algorithms.
             | 
              | To really establish this field as a viable going
              | concern, somebody probably needs to do "something"
              | with quantum that is experimentally verifiable but
              | not computable classically, _and_ is a useful
              | computation.
        
               | SAI_Peregrinus wrote:
                | That is equivalent to proving BQP ≠ P. We currently
                | don't know that any problem even exists that can be
                | solved efficiently (in polynomial time) by quantum
                | computers but not by classical computers.
        
             | EvgeniyZh wrote:
             | I wrote a long-ish comment about what you can expect of QC
             | just yesterday
             | 
             | https://news.ycombinator.com/item?id=42212878
        
               | xscott wrote:
               | Thank you for the link. I appreciate the write-up. This
               | sentence though:
               | 
                | > breaking some cryptography schemes is not exactly
                | the most exciting thing IMHO
               | 
                | You're probably right that we'll migrate to
                | QC-resistant algorithms before this happens, but if
                | factoring were solved _today_, I think it would be
                | very exciting :-)
        
               | EvgeniyZh wrote:
                | I think it would be very __impactful__, but it is
                | not really useful for humanity; rather the
                | opposite.
        
               | xscott wrote:
                | Who knows. _"It's difficult to make predictions,
                | especially about the future"_, but it might be a
                | good thing to accelerate switching to new crypto
                | algorithms sooner, leaving fewer secrets to be dug
                | up later.
        
         | mrandish wrote:
         | > While I'm still eager to see where Quantum Computing leads
         | 
         | Agreed. Although I'm no expert in this domain, I've been
          | watching it a long time as a hopeful fan. Recently I've
          | been increasing my (currently small) estimated
          | probability that quantum computing may not ever (or at
          | least not in my lifetime) become a commercially viable
          | replacement for SOTA classical computing to solve
          | valuable real-world problems.
         | 
         | I wish I knew enough to have a detailed argument but I don't.
         | It's more of a concern triggered by reading media reports that
         | seem to just assume "sure it's hard, but there's no doubt we'll
         | get there eventually."
         | 
          | While I agree quantum algorithms can solve valuable
          | real-world problems _in theory_, it's pretty clear there
          | are still a lot of unknown unknowns in getting all the
          | way to "commercially viable replacement solving valuable
          | real-world problems." It
         | seems at least possible we may still discover some fundamental
         | limit(s) preventing us from engineering a solution that's
         | reliable enough and cost-effective enough to reach commercial
         | viability at scale. I'd actually be interested in hearing
         | counter-arguments that we now know enough to be reasonably
         | confident it's mostly just "really hard engineering" left to
         | solve.
        
         | ashleyn wrote:
         | My first question any time I see another quantum computing
         | breakthrough: is my cryptography still safe? Answer seems like
         | yes for now.
        
           | xscott wrote:
           | I have a pseudo-theory that the universe will never allow
           | quantum physics to provide an answer to a problem where you
           | didn't already know the result from some deterministic means.
           | This will be some bizarre consequence of information theory
           | colliding with the measurement problem.
           | 
           | :-)
        
             | rocqua wrote:
              | You can use quantum computers to ask about the
              | behavior of a random quantum circuit. Google actually
              | did this a while ago. And the result was better than
              | a classical computer could simulate.
        
           | catqubit wrote:
           | Depends on your personal use.
           | 
           | In general to the 'Is crypto still safe' question, the answer
           | is typically no - not because we have a quantum computer
           | waiting in the wings ready to break RSA right now, but
           | because of a) the longevity of the data we might need to
           | secure and b) the transition time to migrate to new crypto
           | schemes
           | 
           | While the NIST post quantum crypto standards have been
           | announced, there is still a long way to go for them to be
           | reliably implemented across enterprises.
           | 
           | Shor's algorithm isn't really going to be a real time
           | decryption algorithm, it's more of a 'harvest now, decrypt
           | later' approach.
        
         | Strilanc wrote:
         | If qubit count increased by 2x per year, largest-number-
         | factored would show no progress for ~8 years. Then the largest
         | number factored would double in size each year, with RSA2048
         | broken after a total of ~15 years. The initial lull is because
         | the cost of error correction is so front loaded.
         | 
         | Depending on your interests, the initial insensitivity of
         | largest-number-factored as a metric is either great (it reduces
         | distractions) or terrible (it fails to accurately report
         | progress). For example, if the actual improvement rate were 10x
         | per year instead of 2x per year, it'd be 3 years until you
         | realized RSA2048 was going to break after 2 more years instead
         | of 12 more years.
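          | 
          | A toy model of that front-loading (every constant here
          | is an assumption for illustration, not a forecast):
          | 
          |     # Assumptions: factoring an n-bit RSA modulus needs
          |     # ~3n logical qubits, each costing ~1,000 physical
          |     # qubits of error-correction overhead; counts start
          |     # at 100 physical qubits and double yearly.
          |     qubits = 100
          |     for year in range(17):
          |         largest = qubits // (3 * 1000)
          |         print(f"year {year:2d}: {qubits:>12,} qubits ->"
          |               f" ~{largest:,}-bit numbers")
          |         qubits *= 2
          |     # Near zero for ~8 years, then doubling yearly and
          |     # passing 2048 bits around year 16.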
        
           | xscott wrote:
           | What's the rough bit count of the largest numbers anyone's
           | quantum computer can factor today? Breaking RSA2048 would be
            | a huge _breakthrough_, but I'm wondering if they can
            | even factor `221 = 13*17` yet (RSA8).
           | 
           | And as I've mentioned elsewhere, the other QC problems I've
           | seen sure seem like simulating a noisy circuit with a noisy
           | circuit. But I know I don't know enough to say that with
           | confidence.
        
             | Strilanc wrote:
             | Like I said above, the size of number that can be factored
             | will sit still for years while error correction spins up.
             | It'll be a good metric for progress later; it's a terrible
             | metric for progress now. Too coarse.
        
               | xscott wrote:
               | Heh, that seems evasive. Good metric or not, it makes me
               | think they aren't at the point where they can factor `15
               | = 3*5`.
               | 
               | I'm not trying to disparage quantum computing. I think
               | the topic is fascinating. At one point I even considered
               | going back to school for a physics degree so I would have
               | the background to understand it.
        
               | Strilanc wrote:
               | I'm not trying to be evasive. I'm directly saying quantum
               | computers won't factor interesting numbers for years.
               | That's more typically described as biting the bullet.
               | 
               | There are several experiments that claim to factor 15
               | with a quantum computer (e.g. [1][2]). But beware these
               | experiments cheat to various degrees (e.g. instead of
               | performing period finding against multiplication mod 15
               | they do some simpler process known to have the same
               | period). Even without cheating, 15 is a huge outlier in
               | the simplicity of the modular arithmetic. For example, I
               | think 15 is the only odd semiprime where you can
               | implement modular multiplication by a constant using
               | nothing but bit flips and bit swaps. Being so close to a
               | power of 2 also doesn't hurt.
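                | 
                | A quick demonstration of that special structure
                | (plain Python, just checking the arithmetic):
                | 
                |     # On 4 bits, 16 = 1 (mod 15): multiplying by
                |     # 2 mod 15 is a cyclic left rotation, and
                |     # multiplying by 14 = -1 (mod 15) is a bitwise
                |     # complement (for x != 0).
                |     def rotl4(x):
                |         return ((x << 1) | (x >> 3)) & 0b1111
                | 
                |     for x in range(1, 15):
                |         assert rotl4(x) == (2 * x) % 15
                |         assert (x ^ 0b1111) == (14 * x) % 15
                | 
                | So the "multiplication" circuits need no real
                | arithmetic at all, only rewiring and bit flips.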
               | 
               | Beware there's a constant annoying trickle of claims of
               | factoring numbers larger than 15 with quantum computers,
               | but using completely irrelevant methods where there's no
               | reason to expect the costs to scale subexponentially. For
               | example, Zapata (the quantum startup that recently went
               | bankrupt) had one of those [3].
               | 
               | [1]: https://www.nature.com/articles/414883a
               | 
               | [2]: https://arxiv.org/abs/1202.5707
               | 
               | [3]: https://scottaaronson.blog/?p=4447
        
               | xscott wrote:
               | Thank you for the reply and links. Good stuff.
        
       | vlovich123 wrote:
       | Is this an actually good explanation? The introduction
       | immediately made me pause:
       | 
       | > In classical computers, error-resistant memory is achieved by
       | duplicating bits to detect and correct errors. A method called
       | majority voting is often used, where multiple copies of a bit are
       | compared, and the majority value is taken as the correct bit
       | 
        | No: in classical computers, memory is corrected using
        | error-correcting codes, not by duplicating bits and
        | majority voting. Duplicating bits would be a very wasteful
        | strategy when you can add significantly fewer bits and
        | achieve the same result, which is what you get with error
        | correction techniques like ECC. Maybe they got it confused
        | with logic circuits, where there isn't any more efficient
        | strategy?
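        | 
        | For contrast, a minimal Hamming(7,4)-style sketch (purely
        | illustrative; real ECC DIMMs use wider SECDED variants of
        | the same idea): three parity bits protect four data bits
        | and locate any single flipped bit, versus eight extra bits
        | for triplication plus majority voting.
        | 
        |     def encode(d1, d2, d3, d4):
        |         # parity bits sit at positions 1, 2, 4 (1-indexed)
        |         p1 = d1 ^ d2 ^ d4
        |         p2 = d1 ^ d3 ^ d4
        |         p3 = d2 ^ d3 ^ d4
        |         return [p1, p2, d1, p3, d2, d3, d4]
        | 
        |     def decode(c):
        |         # re-checking parities yields the error position
        |         s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
        |              + (c[1] ^ c[2] ^ c[5] ^ c[6]) * 2
        |              + (c[3] ^ c[4] ^ c[5] ^ c[6]) * 4)
        |         if s:
        |             c[s - 1] ^= 1  # fix the single flipped bit
        |         return (c[2], c[4], c[5], c[6])
        | 
        |     # any single bit flip in the 7-bit word is corrected:
        |     for pos in range(7):
        |         word = encode(1, 0, 1, 1)
        |         word[pos] ^= 1
        |         assert decode(word) == (1, 0, 1, 1)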
        
         | outworlder wrote:
         | That threw me off as well. Majority voting works for industries
         | like aviation, but that's still about checking results of
         | computations, not all memory addresses.
        
         | UniverseHacker wrote:
          | ECC is not easy to explain, and it sounds like a
          | tautology rather than an explanation ("error correction
          | is done with error correction") unless you give a full
          | technical explanation of exactly what ECC is doing.
        
           | marcellus23 wrote:
           | Regardless of whether the parent's sentence is a tautology,
           | the explanation in the article is categorically wrong.
        
             | vlovich123 wrote:
              | Yeah, I couldn't quite remember if ECC is just
              | Hamming codes or uses something more modern like
              | fountain codes, although those are technically FEC.
              | So rather than state something incorrect, I went
              | with the tautology.
        
             | bawolff wrote:
              | Categorically might be a bit much. Duplicating bits
              | with majority voting is an error correction code;
              | it's just not a very efficient one.
              | 
              | Like, it's wrong, but it's not totally
              | out-of-this-world wrong. Or more specifically, it's
              | in the correct category.
        
               | vlovich123 wrote:
               | It's categorically wrong to say that that's how memory is
               | error corrected in classical computers because it is not
               | and never has been how it was done. Even for systems like
               | S3 that replicate, there's no error correction happening
               | in the replicas and the replicas are eventually converted
               | to erasure codes.
        
               | bawolff wrote:
                | I'm being a bit pedantic here, but it is not
                | categorically wrong. Categorically wrong doesn't
                | just mean "very wrong"; it is a specific type of
                | being wrong, a type that this isn't.
               | 
               | Repetition codes are a type of error correction code. It
               | is thus in the category of error correction codes. Even
               | if it is not the right error correction codes, it is in
               | the correct category, so it is not a categorical error.
        
               | cycomanic wrote:
                | Well, it's about as categorically wrong as saying
                | quantum computers use similar error correction
                | algorithms as classical computers. Categorically,
                | both are error correction algorithms.
        
               | Dylan16807 wrote:
                | I interpret that sentence as talking about real
                | computers, which does put it outside the category.
        
             | cortesoft wrote:
             | Eh, I don't think it is categorically wrong... ECCs are
             | based on the idea of sacrificing some capacity by adding
             | redundant bits that can be used to correct for some number
              | of errors. The simplest ECC would be just duplicating
              | the data, and it isn't categorically different from
              | the real ECCs used.
        
               | vlovich123 wrote:
               | Then you're replicating and not error correcting. I've
               | not seen any replication systems that use the replicas to
                | detect errors. Even RAID 1, which is a pure
                | mirroring solution, only fetches one of the copies
                | when reading & will ignore corruption on one of the
                | disks unless you initiate a manual verification.
                | There are technical reasons for that, related to
                | read amplification as well as what it does to your
                | storage cost.
        
               | cortesoft wrote:
                | I guess that is true; pure replication would not
                | allow you to correct errors, only detect them.
               | 
                | However, I think explaining the concept as
                | duplicating some data isn't horribly wrong for
                | non-technical people. It is close enough to allow
                | the person to understand the concept.
        
               | vlovich123 wrote:
               | To be clear. A hypothetical replication system with 3
               | copies could be used to correct errors using majority
               | voting.
               | 
               | However, there's no replication system I've ever seen
               | (memory, local storage, or distributed storage) that
               | detects or corrects for errors using replication because
               | of the read amplification problem.
        
               | bawolff wrote:
               | https://en.wikipedia.org/wiki/Triple_modular_redundancy
        
               | vlovich123 wrote:
                | The ECC memory page has the same nonsensical
                | statement:
               | > Error-correcting memory controllers traditionally use
               | Hamming codes, although some use triple modular
               | redundancy (TMR). The latter is preferred because its
               | hardware is faster than that of Hamming error correction
               | scheme.[16] Space satellite systems often use
               | TMR,[17][18][19] although satellite RAM usually uses
               | Hamming error correction.[20]
               | 
               | So it makes it seem like TMR is used for memory only to
               | then back off and say it's not used for it. ECC RAM does
               | not use TMR and I suggest that the Wikipedia page is
               | wrong and confused about this. The cited links on both
               | pages are either dead or are completely unrelated,
                | discussing TMR within the context of FPGAs being
                | sent into space. And yes, TMR is a fault tolerance
                | strategy for logic gates and compute more
                | generally. It is not a strategy that has been
                | employed for storage, full stop, and evidence to
                | the contrary is going to require something stronger
                | than confusing wording on Wikipedia.
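                | 
                | For reference, the vote TMR performs (on computed
                | values, not stored words) is just a bitwise
                | two-out-of-three majority; a minimal sketch with
                | made-up values:
                | 
                |     def majority(a, b, c):
                |         # per-bit 2-of-3 vote outvotes one bad copy
                |         return (a & b) | (a & c) | (b & c)
                | 
                |     good = 0b10110010
                |     bad = good ^ 0b01000000  # one corrupted copy
                |     assert majority(good, bad, good) == good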
        
         | refulgentis wrote:
         | I think it's fundamentally misleading, even on the central
         | quantum stuff:
         | 
         | I missed what you saw, that's certainly a massive oof. It's not
         | even wrong, in the Pauli sense, i.e. it's not just a simplistic
         | rendering of ECC.
         | 
         | It also strongly tripped my internal GPT detector.
         | 
          | Also, it goes on and on about realtime decoding; the
          | foundation of the article is that Google's breakthrough
          | _is_ real time, and the Google article was quite clear
          | that it isn't real time.*
         | 
         | I'm a bit confused, because it seems completely wrong, yet they
         | published it, and there's enough phrasing that definitely
          | _doesn't_ trip my GPT detector. My instinct is someone who
         | doesn't have years of background knowledge / formal comp sci &
         | physics education made a valiant effort.
         | 
          | I'm reminded that my thoroughly /r/WSB-ified MD friend
          | brings up "quantum computing is gonna be big, what stonks
          | should I buy" every 6 months, and a couple days ago he
          | sent me a screenshot of my AI app that had a few
          | conversations with him hunting for opportunities.
         | 
         | * "While AlphaQubit is great at accurately identifying errors,
         | it's still too slow to correct errors in a superconducting
         | processor in real time"
        
           | vlovich123 wrote:
            | Yeah, I didn't want to just accuse the article of
            | being AI generated since quantum isn't my specialty,
            | but this kind of error instantly tripped my "it
            | doesn't sound like this person knows what they're
            | talking about" alarm, which likely indicates a bad LLM
            | helped summarize the quantum paper for the author.
        
           | bramathon wrote:
           | This is not about AlphaQubit. It's about a different paper,
           | https://arxiv.org/abs/2408.13687 and they do demonstrate
           | real-time decoding.
           | 
            | > we show that we can maintain below-threshold
            | operation on the 72-qubit processor even when decoding
            | in real time, meeting the strict timing requirements
            | imposed by the processor's fast 1.1 μs cycle duration
        
             | refulgentis wrote:
             | Oh my, I really jumped to a conclusion. And what fantastic
             | news to hear. Thank you!
        
         | abtinf wrote:
         | This seems like the kind of error an LLM would make.
         | 
         | It is essentially impossible for a human to confuse error
         | correction and "majority voting"/consensus.
        
           | GuB-42 wrote:
            | I don't believe it is the result of an LLM; more likely
            | an oversimplification, or maybe a minor fuckup on the
            | part of the author, as simple majority voting is often
            | used in redundant systems, just not for memories, as
            | there are better ways.
           | 
            | As for an LLM result, this is what ChatGPT says when
            | asked "How does memory error correction differ from
            | quantum error correction?", among other things.
           | 
           | > Relies on redundancy by encoding extra bits into the data
           | using techniques like parity bits, Hamming codes, or Reed-
           | Solomon codes.
           | 
           | And when asked for a simplified answer
           | 
           | > Classical memory error correction fixes mistakes in regular
           | computer data (0s and 1s) by adding extra bits to check for
           | and fix any errors, like a safety net catching flipped bits.
           | Quantum error correction, on the other hand, protects
           | delicate quantum bits (qubits), which can hold more complex
           | information (like being 0 and 1 at the same time), from
           | errors caused by noise or interference. Because qubits are
           | fragile and can't be directly measured without breaking their
           | state, quantum error correction uses clever techniques
           | involving multiple qubits and special rules of quantum
           | physics to detect and fix errors without ruining the quantum
           | information.
           | 
           | Absolutely no mention of majority voting here.
           | 
            | EDIT: GPT-4o mini does mention majority voting as an
            | example of a memory error correction scheme, but not as
            | _the_ way to do it. The explanation is overall more
            | clumsy, but generally correct; I don't know enough
            | about quantum error correction to fact-check.
        
           | mmooss wrote:
            | People have always made bad assumptions or had
            | misunderstandings. Maybe the author just doesn't
            | understand ECC and always assumed it was
            | consensus-based. I do things like that (I try not to
            | write about them without verifying); I'm confident
            | that so do you and everyone reading this.
        
             | Suppafly wrote:
             | >Maybe the author just doesn't understand ECC and always
             | assumed it was consensus-based.
             | 
              | That's likely, or it was LLM output and the author
              | didn't know enough to know it was wrong. We've seen
              | that in a lot of tech articles lately, where authors
              | assume that something that is true-ish in one area is
              | also true in another, and it's obvious they just
              | don't understand the other area they are writing
              | about.
        
               | fnordpiglet wrote:
                | Frankly, every state-of-the-art LLM would not make
                | this error. Perhaps GPT-3.5 would have, but the
                | space of errors they tend to make now is in areas
                | of ambiguity or things that require deductive
                | reasoning, math, etc. In areas that are well
                | described in the literature, they tend not to make
                | mistakes.
        
         | weinzierl wrote:
         | Maybe they were thinking of control systems where duplicating
         | memory, lockstep cores and majority voting are used. You don't
         | even have to go to space to encounter such a system, you likely
         | have one in your car.
        
         | bramathon wrote:
         | The explanation of Google's error correction experiment is
         | basic but fine. People should keep in mind that Quantum
         | Machines sells control electronics for quantum computers which
         | is why they focus on the control and timing aspects of the
         | experiment. I think a more general introduction to quantum
         | error correction would be more relevant to the Hackernews
         | audience.
        
         | ziofill wrote:
          | Physicist here. Classical error correction may not always
          | be a straight-up repetition code, but the concept of
          | redundancy of information still applies (like parity
          | checks).
         | 
         | In a nutshell, in quantum error correction you cannot use
         | redundancy because of the no-cloning theorem, so instead you
         | embed the qubit subspace in a larger space (using more qubits)
         | such that when correctable errors happen the embedded subspace
         | moves to a different "location" in the larger space. When this
         | happens it can be detected and the subspace can be brought back
         | without affecting the states within the subspace, so the
         | quantum information is preserved.
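          | 
          | A toy numpy simulation of the smallest example, the
          | 3-qubit bit-flip code (illustrative only; nothing like
          | Google's surface code):
          | 
          |     import numpy as np
          | 
          |     I = np.eye(2)
          |     X = np.array([[0, 1], [1, 0]])
          |     Z = np.array([[1, 0], [0, -1]])
          | 
          |     def kron3(a, b, c):
          |         return np.kron(np.kron(a, b), c)
          | 
          |     # encode a|0> + b|1>  ->  a|000> + b|111>
          |     a, b = 0.6, 0.8
          |     state = np.zeros(8)
          |     state[0b000], state[0b111] = a, b
          | 
          |     # a bit-flip error hits the middle qubit
          |     state = kron3(I, X, I) @ state
          | 
          |     # measuring stabilizers Z0Z1 and Z1Z2 reveals where
          |     # the error is, but nothing about a and b
          |     for name, S in [("Z0Z1", kron3(Z, Z, I)),
          |                     ("Z1Z2", kron3(I, Z, Z))]:
          |         print(name, "=", round(state @ S @ state))
          |     # -> Z0Z1 = -1, Z1Z2 = -1: middle qubit flipped
          | 
          |     state = kron3(I, X, I) @ state  # undo the error
          |     print(state[0b000], state[0b111])  # 0.6 0.8 intact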
        
           | immibis wrote:
           | This happens to be the same way that classical error
           | correction works, but quantum.
        
           | adastra22 wrote:
           | You are correct in the details, but not the distinction. This
           | is exactly how classical error correction works as well.
        
           | jessriedel wrote:
           | Just an example to expand on what others are saying: in the
           | N^2-qubit Shor code, the X information is recorded
           | redundantly in N disjoint sets of N qubits each, and the Z
           | information is recorded redundantly in a different
           | partitioning of N disjoint sets of N qubits each. You could
           | literally have N observers each make separate measurements on
           | disjoint regions of space and all access the X information
           | about the qubit. And likewise for Z. In that sense it's a
           | repetition code.
        
             | adastra22 wrote:
             | That's also correct but not what the sibling comments are
             | saying ;)
             | 
             | There are quantum error correction methods which more
             | resemble error correction codes rather than replication,
             | and that resemblance is fundamental: they ARE classical
             | error correction codes transposed into quantum operations.
        
         | abdullahkhalids wrote:
         | While you are correct, here is a fun side fact.
         | 
         | The electric signals inside a (classical) processor or digital
         | logic chip are made up of many electrons. Electrons are not
         | fully well behaved and there are often deviations from ideal
         | behavior. Whether a signal gets interpreted as 0 or 1 depends
         | on which way the majority of the electrons are going. The lower
         | the power you operate at, the fewer electrons there are per
         | signal, and the more errors you will see.
         | 
          | So in a way, there is a repetition code in a classical
          | computer (or other similar devices, such as an optical
          | fiber). Just in the hardware substrate, not in software.
        
         | Karliss wrote:
          | By a somewhat generous interpretation, classical computer
          | memory depends on implicit duplication/majority voting in
          | the form of an increased cell size for each bit, instead
          | of discrete duplication. In the same way, repetition of a
          | signal sent over a wire can mean using a lower baudrate
          | and holding the signal level for a longer time. A bit
          | isn't stored in a single atom or electron. A cell storing
          | a single bit can be considered a group of smaller cells
          | connected in parallel storing a duplicate value. And the
          | majority vote happens automatically, in analog form, as
          | you read the total sum of the charge within the memory
          | cell.
         | 
          | Depending on how abstractly you talk about computers
          | (which can be the case when contrasting quantum computing
          | with classical computing), memory can refer not just to
          | RAM but to anything holding state, and classical computer
          | can refer to any computing device, including simple logic
          | circuits, not just your desktop computer. Fundamentally,
          | desktop computers are one giant logic circuit.
         | 
         | Also RAID-1 is a thing.
         | 
         | At higher level backups are a thing.
         | 
          | So I would say there are enough examples of practically
          | used duplication for the purpose of error resistance in
          | classical computers.
        
           | mathgenius wrote:
            | Yes, and it's worth pointing out these examples
            | because they don't work as quantum memories. Two more:
            | magnetic memory is based on magnets which are magnetic
            | because they are built from many tiny (atomic)
            | magnets, all (mostly) in agreement. Optical storage is
            | similar, much like parent's example of a signal being
            | slowly sent over a wire.
           | 
           | So the next question is why doesn't this work for quantum
           | information? And this is a really great question which gets
           | at the heart of quantum versus classical. Classical
           | information is just so fantastically easy to duplicate that
           | normally we don't even notice this, it's just too obvious a
           | fact... until we get to quantum.
        
         | graycat wrote:
         | Error correction? Took a graduate course that used
         | 
          | W. Wesley Peterson and E. J. Weldon, Jr., _Error-
          | Correcting Codes, Second Edition_, The MIT Press,
          | Cambridge, MA, 1972.
         | 
         | Sooo, the subject is not nearly new.
         | 
         | There was a lot of algebra with finite field theory.
        
         | wslh wrote:
         | > > In classical computers, error-resistant memory is achieved
         | by duplicating bits to detect and correct errors. A method
         | called majority voting is often used, where multiple copies of
         | a bit are compared, and the majority value is taken as the
         | correct bit
         | 
          | The author clearly doesn't know about the topic, nor did
          | they study the basics in an undergraduate course.
        
         | EvgeniyZh wrote:
         | It's just a standard example of a code that works classically
         | but not quantumly to demonstrate the differences between the
         | two. More or less any introductory talk on quantum error
         | correction would mention it.
        
       | bawolff wrote:
       | Doesn't feel like a breakthrough. A positive engineering step
       | forward, sure, but not a breakthrough.
       | 
       | And wtf does AI have to do with this?
        
         | wasabi991011 wrote:
         | It's not a major part of the paper, but Google tested a neural
         | network decoder (which had the highest accuracy), and some of
         | their other decoders used priors that were found using
         | reinforcement learning (again for greater accuracy).
        
       | cwillu wrote:
       | Wow, they managed to make a website that scales everything
        | _except_ the main text when adjusting the browser's zoom
        | setting.
        
         | rezonant wrote:
         | There should be a law for this. Who in their right mind wants
         | this?
        
         | essentia0 wrote:
          | They set the root font size relative to the total width
          | of the screen (1.04vw), with the rest of the styling
          | using rem units.
          | 
          | I've never seen anyone do that before... It may well be
          | the only way to circumvent browser zoom.
        
           | rendaw wrote:
           | Why don't browsers reduce the screen width when you zoom in,
           | as they adjust every other unit (cm, px)?
        
             | zamadatix wrote:
              | They effectively do. All CSS absolute units are
              | effectively defined as ratios of each other, and
              | zoom*DPI*physicalPixels sets the ratio of how many
              | physical pixels each absolute unit will end up
              | turning into. Increase the zoom and the screen seems
              | to have shrunk to some smaller 'cm', and so on.
             | 
              | For things like 'vh' and 'vw' it just doesn't matter
              | "how many cm" the screen is, as 20% of the viewing
              | space always comes out to 20% of the viewing space,
              | regardless of how many 'cm' that is said to be
              | equivalent to.
        
           | imglorp wrote:
           | Why is it so desirable to circumvent browser zoom? I hate it.
        
         | hiisukun wrote:
          | It's interesting how this (and other css?) means the
          | website is readable on a phone in portrait, but the text
          | is tiny in landscape!
        
         | BiteCode_dev wrote:
         | It's a quantum zoom: it's zoomed in and not zoomed in at the
         | same time.
        
       ___________________________________________________________________
       (page generated 2024-11-23 23:01 UTC)