[HN Gopher] Understanding Google's Quantum Error Correction Breakthrough
       ___________________________________________________________________
        
       Understanding Google's Quantum Error Correction Breakthrough
        
       Author : GavCo
       Score  : 71 points
       Date   : 2024-11-22 17:53 UTC (5 hours ago)
        
 (HTM) web link (www.quantum-machines.co)
 (TXT) w3m dump (www.quantum-machines.co)
        
       | terminalbraid wrote:
       | Note the paper they are referring to was published August 27,
       | 2024
       | 
       | https://arxiv.org/pdf/2408.13687
        
       | dangerlibrary wrote:
       | I'm someone not really aware of the consequences of each quantum
       | of progress in quantum computing. But, I know that I'm exposed to
       | QC risks in that at some point I'll need to change every security
       | key I've ever generated and every crypto algorithm every piece of
       | software uses.
       | 
       | How much closer does this work bring us to the Quantum Crypto
       | Apocalypse? How much time do I have left before I need to start
       | budgeting it into my quarterly engineering plan?
        
         | griomnib wrote:
          | The primary threat model is that data collected _today_
          | via mass surveillance, which is _currently_ unbreakable,
          | will _become_ breakable.
         | 
         | There are already new "quantum-proof" security mechanisms being
         | developed for that reason.
        
           | sroussey wrote:
            | Yes, and people are recording encrypted communications
            | now for this reason.
        
           | bawolff wrote:
            | Perhaps, but you've got to ask yourself how valuable
            | your data will be 20-30 years in the future. For some
            | people that may be a big deal. For most people it is a
            | very low-risk threat: most private data has a shelf life
            | after which it is no longer valuable.
        
         | bdamm wrote:
         | I'm not sure anyone really knows this although there is no
         | shortage of wild speculation.
         | 
          | If you have keys that need to remain robust for 20 years,
          | you should probably be looking into trying out some of the
          | newly approved NIST post-quantum algorithms.
        
         | er4hn wrote:
          | You'll need to focus on asymmetric and DH stuff. If your
          | symmetric keys are 256 bits, you should be fine there.
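          | 
          | A back-of-the-envelope sketch of why, assuming the best
          | known quantum attack on symmetric ciphers stays a
          | Grover-style quadratic speedup (Shor, by contrast, breaks
          | RSA/ECC outright):
          | 
          |   # Grover's search halves the effective key length, so
          |   # AES-256 still leaves ~2^128 work for an attacker.
          |   def grover_effective_bits(key_bits: int) -> int:
          |       return key_bits // 2
          | 
          |   for k in (128, 256):
          |       print(f"AES-{k}: ~2^{grover_effective_bits(k)} quantum ops")
          |   # AES-128: ~2^64 (uncomfortable); AES-256: ~2^128 (fine)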
         | 
          | The hope is that most of this should just be: update to
          | the latest version of openssl / openssh / golang-crypto /
          | what have you, and make sure your handshake settings use
          | the latest crypto algorithms. This is all still somewhat
          | up in the air because there is very little consensus
          | around how to change protocols, for various human reasons.
         | 
          | At some point you'll need to generate new asymmetric keys
          | as well, which is where I think things will get
          | interesting. HW-based solutions just don't exist today and
          | will probably take a long time due to the inevitable
          | cycle: companies want to meet US fedgov standards due to
          | regulations or selling to the fedgov; the fedgov is taking
          | its sweet time standardizing protocols and seems
          | interested in adding more certified algorithms as well;
          | actually getting something approved for FIPS 140 (the
          | relevant standard) takes over a year at this point just to
          | get your paperwork processed; and everyone wants to move
          | faster. Software can move quicker in terms of development,
          | but you have the normal tradeoffs there, with keys being
          | easier to exfiltrate and the same issue with formal
          | certification.
        
           | dylan604 wrote:
           | Maybe my tinfoil hat is a bit too tight, but every time
           | fedgov wants a new algo certified I question how strong it is
           | and if they've already figured out a weakness. Once bitten
           | twice shy or something????
        
             | jiggawatts wrote:
             | The NSA has definitely weakened or back-doored crypto. It's
             | not a conspiracy or even a secret! It was a matter of
             | (public) law in the 90s, such as "export grade" crypto.
             | 
             | Most recently Dual_EC_DRBG was forced on American vendors
             | by the NSA, but the backdoor private key was replaced by
             | Chinese hackers in some Juniper devices and used by them to
             | spy on westerners.
             | 
              | Look up phrases like "nobody but us" (NOBUS), which is
              | the aspirational goal of these approaches; it often
              | fails, leaving everyone, including Americans and their
              | allies, exposed.
        
               | dylan604 wrote:
               | You should look up the phrase "once bitten twice shy" as
               | I think you missed the gist of my comment. We've already
               | been bitten at least once by incidents as you've
                | described. From now on, friendly little suggestions
                | on crypto algos from the fedgov will always be
                | received with suspicion. I accept that most people
                | who are unaware of that history will assume I'm
                | wearing a tinfoil hat.
        
         | bawolff wrote:
         | > But, I know that I'm exposed to QC risks in that at some
         | point I'll need to change every security key I've ever
         | generated and every crypto algorithm every piece of software
         | uses.
         | 
          | Probably not. Unless a really sudden, unexpected
          | breakthrough happens, best practice will be to use
          | quantum-resistant algorithms long before this becomes a
          | relevant issue.
          | 
          | And practically speaking, it's only public-key crypto that
          | is an issue; your symmetric keys are fine (oversimplifying
          | slightly, but practically speaking this is true).
        
       | computerdork wrote:
        | Does anyone on HN have an understanding of how close this
        | achievement brings us to useful quantum computers?
        
         | kittikitti wrote:
         | This is another hype piece from Google's research and
          | development arm. It is a theoretical approach to
          | increasing the number of logical qubits in a system by
          | decreasing the error caused by quantum circuits. They just
          | didn't do the last part yet, so the application remains to
          | be seen.
         | 
         | https://arxiv.org/abs/2408.13687
         | 
         | "Our results present device performance that, if scaled, could
         | realize the operational requirements of large scale fault-
         | tolerant quantum algorithms."
         | 
         | Google forgot to test if it scales I guess?
        
           | wholinator2 wrote:
            | Lol yeah, the whole problem with quantum computation is
            | the scaling; that's literally the entire problem. It's
            | trivial to make a qubit, harder to make 5, impossible to
            | make 1000. "If it scales" is just wishy-washy language
            | for "in the ideal scenario where everything works
            | perfectly and nothing goes wrong, it will work
            | perfectly".
        
         | layer8 wrote:
         | The fact that there is a forward-looking subsection about "the
         | _vision_ for fault tolerance" (emphasis mine) almost entirely
         | composed of empty words and concluding in "we are just starting
         | this exciting journey, so stay tuned for what's to come!" tells
         | you "not close at all".
        
       | xscott wrote:
       | While I'm still eager to see where Quantum Computing leads, I've
       | got a new threshold for "breakthrough": Until a quantum computer
       | can factor products of primes larger than a few bits, I'll
       | consider it a work in progress at best.
        
         | kridsdale1 wrote:
         | There will be a thousand breakthroughs before that point.
        
           | xscott wrote:
            | That just means that the word "breakthrough" has lost
            | its meaning. I would suggest the word "advancement", but
            | I know this is a losing battle.
        
             | Suppafly wrote:
              | >That just means that the word "breakthrough" has lost
              | its meaning.
             | 
             | This. Small, incremental and predictable advances aren't
             | breakthroughs.
        
         | UberFly wrote:
         | I guess like most of these kinds of projects, it'll be smaller,
         | less flashy breakthroughs or milestones along the way.
        
         | dekhn wrote:
          | Quantum computers can (should be able to; do not
          | currently) solve many useful problems without ever being
          | able to factor products of primes.
        
           | Eji1700 wrote:
            | Yeah, I think that's the issue that makes it hard to
            | assess quantum computing.
            | 
            | My very layman understanding is that there are certain
            | things it will be several orders of magnitude better at,
            | but at things that are "simple" for a normal machine,
            | quantum will be just as bad if not massively worse.
            | 
            | It really should be treated as a different tool for now.
            | Maybe some day in the very far future, if quantum
            | computers become easier to make, an abstraction layer
            | will arrive that lets the end user think it's just like
            | a normal computer. But at the level of "looking at a
            | series of 1s/0s" versus "looking at a series of
            | superposed particles", it's extremely different in
            | function.
        
           | xscott wrote:
           | What are some good examples?
           | 
            | The one from a few years ago where Google declared
            | "quantum supremacy" sounded a lot like simulating a
            | noisy circuit by implementing a noisy circuit. And that
            | seems a lot like _simulating_ the falling particles and
            | their collisions in an hourglass by using a physical
            | hourglass.
        
       | vlovich123 wrote:
       | Is this an actually good explanation? The introduction
       | immediately made me pause:
       | 
       | > In classical computers, error-resistant memory is achieved by
       | duplicating bits to detect and correct errors. A method called
       | majority voting is often used, where multiple copies of a bit are
       | compared, and the majority value is taken as the correct bit
       | 
        | No. In classical computers, memory is protected using
        | error-correcting codes, not by duplicating bits and taking a
        | majority vote. Duplicating bits would be a very wasteful
        | strategy when you can add significantly fewer bits and
        | achieve the same result, which is what error correction
        | techniques like ECC give you. Maybe they confused it with
        | logic circuits, where there isn't any more efficient
        | strategy?
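        | 
        | For contrast, a minimal sketch of a Hamming(7,4) code (not
        | the actual SECDED scheme ECC DRAM uses over 64-bit words,
        | but the same idea): 3 parity bits correct any single flipped
        | bit among 7, where triplicating the 4 data bits would cost 8
        | extra bits.
        | 
        |   def hamming74_encode(d):        # d = [d1, d2, d3, d4]
        |       p1 = d[0] ^ d[1] ^ d[3]     # covers positions 1,3,5,7
        |       p2 = d[0] ^ d[2] ^ d[3]     # covers positions 2,3,6,7
        |       p3 = d[1] ^ d[2] ^ d[3]     # covers positions 4,5,6,7
        |       return [p1, p2, d[0], p3, d[1], d[2], d[3]]
        | 
        |   def hamming74_decode(c):
        |       s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        |       s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        |       s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        |       pos = s1 + 2 * s2 + 4 * s3  # 0 = clean, else error pos
        |       if pos:
        |           c[pos - 1] ^= 1         # flip the corrupted bit
        |       return [c[2], c[4], c[5], c[6]]
        | 
        |   word = hamming74_encode([1, 0, 1, 1])
        |   word[4] ^= 1                    # corrupt one bit
        |   assert hamming74_decode(word) == [1, 0, 1, 1]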
        
         | outworlder wrote:
         | That threw me off as well. Majority voting works for industries
         | like aviation, but that's still about checking results of
         | computations, not all memory addresses.
        
         | UniverseHacker wrote:
          | ECC is not easy to explain; "error correction is done
          | with error correction" sounds like a tautology rather than
          | an explanation, unless you give a full technical account
          | of exactly what ECC is doing.
        
           | marcellus23 wrote:
           | Regardless of whether the parent's sentence is a tautology,
           | the explanation in the article is categorically wrong.
        
             | vlovich123 wrote:
              | Yeah, I couldn't quite remember whether ECC is just
              | Hamming codes or uses something more modern like
              | fountain codes, although those are technically FEC. So
              | rather than state something incorrectly, I went with
              | the tautology.
        
             | bawolff wrote:
              | Categorically might be a bit much. Duplicating bits
              | with majority voting is an error correction code, it's
              | just not a very efficient one.
              | 
              | Like, it's wrong, but it's not totally
              | out-of-this-world wrong. Or more specifically, it's in
              | the correct category.
        
               | vlovich123 wrote:
                | It's categorically wrong to say that that's how
                | memory is error-corrected in classical computers,
                | because it is not, and never has been, how it is
                | done. Even for systems like S3 that replicate,
                | there's no error correction happening in the
                | replicas, and the replicas are eventually converted
                | to erasure codes.
        
             | cortesoft wrote:
              | Eh, I don't think it is categorically wrong... ECCs
              | are based on the idea of sacrificing some capacity by
              | adding redundant bits that can be used to correct some
              | number of errors. The simplest ECC would be just
              | duplicating the data, and that isn't categorically
              | different from the ECCs actually used.
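              | 
              | A toy sketch of that idea: a 3x repetition code really
              | is an error-correcting code, just one paying 200%
              | overhead to fix a single flipped copy.
              | 
              |   def encode(bit):
              |       return [bit, bit, bit]  # triplicate the bit
              | 
              |   def decode(copies):
              |       # Majority vote across the three copies
              |       return 1 if sum(copies) >= 2 else 0
              | 
              |   word = encode(1)
              |   word[0] ^= 1                # one copy corrupted
              |   assert decode(word) == 1    # majority still wins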
        
               | vlovich123 wrote:
                | Then you're replicating, not error correcting. I've
                | not seen any replication systems that use the
                | replicas to detect errors. Even RAID 1, which is a
                | pure mirroring solution, only fetches one of the
                | copies when reading and will ignore corruption on
                | one of the disks unless you initiate a manual
                | verification. There are technical reasons for that,
                | related to read amplification as well as what it
                | does to your storage cost.
        
               | cortesoft wrote:
               | I guess that is true, pure replication would not allow
               | you to correct errors, only detect them.
               | 
                | However, I think explaining the concept as
                | duplicating some data isn't horribly wrong for
                | non-technical people. It is close enough to let the
                | person understand the concept.
        
         | refulgentis wrote:
         | I think it's fundamentally misleading, even on the central
         | quantum stuff:
         | 
         | I missed what you saw, that's certainly a massive oof. It's not
         | even wrong, in the Pauli sense, i.e. it's not just a simplistic
         | rendering of ECC.
         | 
         | It also strongly tripped my internal GPT detector.
         | 
          | Also, it goes on and on about realtime decoding; the
          | foundation of the article is that Google's breakthrough
          | _is_ real time, yet the Google article was quite clear
          | that it isn't real time.*
          | 
          | I'm a bit confused, because it seems completely wrong, yet
          | they published it, and there's enough phrasing that
          | definitely _doesn't_ trip my GPT detector. My instinct is
          | that someone who doesn't have years of background
          | knowledge / formal comp sci & physics education made a
          | valiant effort.
          | 
          | I'm reminded that my thoroughly /r/WSB-ified MD friend
          | brings up "quantum computing is gonna be big, what stonks
          | should I buy" every 6 months, and a couple days ago he
          | sent me a screenshot of my AI app, where he'd had a few
          | conversations hunting for opportunities.
         | 
         | * "While AlphaQubit is great at accurately identifying errors,
         | it's still too slow to correct errors in a superconducting
         | processor in real time"
        
           | vlovich123 wrote:
            | Yeah, I didn't want to just accuse the article of being
            | AI-generated, since quantum isn't my specialty, but this
            | kind of error instantly tripped my "it doesn't sound
            | like this person knows what they're talking about"
            | alarm, which likely indicates a bad LLM helped summarize
            | the quantum paper for the author.
        
           | bramathon wrote:
           | This is not about AlphaQubit. It's about a different paper,
           | https://arxiv.org/abs/2408.13687 and they do demonstrate
           | real-time decoding.
           | 
            | > we show that we can maintain below-threshold operation
            | on the 72-qubit processor even when decoding in real
            | time, meeting the strict timing requirements imposed by
            | the processor's fast 1.1 μs cycle duration
        
         | abtinf wrote:
         | This seems like the kind of error an LLM would make.
         | 
         | It is essentially impossible for a human to confuse error
         | correction and "majority voting"/consensus.
        
           | GuB-42 wrote:
            | I don't believe it is the result of an LLM; more likely
            | an oversimplification, or maybe a minor fuckup on the
            | part of the author, as simple majority voting is often
            | used in redundant systems, just not for memories, where
            | there are better ways.
           | 
            | As for an LLM result, this is what ChatGPT says, among
            | other things, when asked "How does memory error
            | correction differ from quantum error correction?"
           | 
           | > Relies on redundancy by encoding extra bits into the data
           | using techniques like parity bits, Hamming codes, or Reed-
           | Solomon codes.
           | 
           | And when asked for a simplified answer
           | 
           | > Classical memory error correction fixes mistakes in regular
           | computer data (0s and 1s) by adding extra bits to check for
           | and fix any errors, like a safety net catching flipped bits.
           | Quantum error correction, on the other hand, protects
           | delicate quantum bits (qubits), which can hold more complex
           | information (like being 0 and 1 at the same time), from
           | errors caused by noise or interference. Because qubits are
           | fragile and can't be directly measured without breaking their
           | state, quantum error correction uses clever techniques
           | involving multiple qubits and special rules of quantum
           | physics to detect and fix errors without ruining the quantum
           | information.
           | 
           | Absolutely no mention of majority voting here.
           | 
            | EDIT: GPT-4o mini does mention majority voting as an
            | example of a memory error correction scheme, but not as
            | _the_ way to do it. The explanation is overall more
            | clumsy, but generally correct. I don't know enough about
            | quantum error correction to fact-check it.
        
           | mmooss wrote:
            | People have always made bad assumptions or had
            | misunderstandings. Maybe the author just doesn't
            | understand ECC and always assumed it was
            | consensus-based. I do things like that (though I try not
            | to write about them without verifying), and I'm
            | confident you do too, as does everyone reading this.
        
             | Suppafly wrote:
             | >Maybe the author just doesn't understand ECC and always
             | assumed it was consensus-based.
             | 
              | That's likely, or it was LLM output and the author
              | didn't know enough to know it was wrong. We've seen
              | that in a lot of tech articles lately, where authors
              | assume that something that is true-ish in one area is
              | also true in another, and it's obvious they just don't
              | understand the other area they are writing about.
        
         | weinzierl wrote:
          | Maybe they were thinking of control systems, where
          | duplicated memory, lockstep cores, and majority voting are
          | used. You don't even have to go to space to encounter such
          | a system; you likely have one in your car.
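          | 
          | A rough sketch of that style of triple modular redundancy,
          | voting on computation results rather than memory bits (the
          | threefold in-process run is illustrative, not how any
          | particular automotive ECU actually does it):
          | 
          |   from collections import Counter
          | 
          |   def tmr(fn, *args):
          |       # Run the same computation on three redundant "cores"
          |       results = [fn(*args) for _ in range(3)]
          |       value, votes = Counter(results).most_common(1)[0]
          |       if votes < 2:
          |           raise RuntimeError("no majority: uncorrectable")
          |       return value                # majority-voted output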
        
         | bramathon wrote:
         | The explanation of Google's error correction experiment is
         | basic but fine. People should keep in mind that Quantum
         | Machines sells control electronics for quantum computers which
         | is why they focus on the control and timing aspects of the
          | experiment. I think a more general introduction to quantum
          | error correction would be more relevant to the Hacker News
          | audience.
        
       | bawolff wrote:
       | Doesn't feel like a breakthrough. A positive engineering step
       | forward, sure, but not a breakthrough.
       | 
       | And wtf does AI have to do with this?
        
       ___________________________________________________________________
       (page generated 2024-11-22 23:00 UTC)