https://scottaaronson.blog/?p=6871

Google's Sycamore chip: no wormholes, no superfast classical simulation either

Update (Dec. 6): I'm having a blast at the Workshop on Spacetime and Quantum Information at the Institute for Advanced Study in Princeton. I'm learning a huge amount from the talks and discussions here--and also simply enjoying being back in Princeton, to see old friends and visit old haunts like the Bent Spoon. Tomorrow I'll speak about my recent work with Jason Pollack on polynomial-time AdS bulk reconstruction. But there's one thing, relevant to this post, that I can't let pass without comment.

Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement that a wormhole had been created on a microchip (!!)--in a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg speculated that, a century from now, people might look back on the wormhole achievement as today we look back on Eddington's 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

To be clear, I don't blame Professor Nirenberg at all. If I were a medieval historian, everything he said about the experiment's historic significance might strike me as perfectly valid inferences from what I'd read in the press. I don't blame the It from Qubit community--most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don't even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop about how he and others managed to compress a simulation of a variant of the SYK model into a mere 9 qubits--a talk that eschewed all claims of historic significance and of literal wormhole creation.

But it's now clear to me that, between (1) the It from Qubit community that likes to explore speculative ideas like holographic wormholes, and (2) the lay news readers who are now under the impression that Google just did one of the greatest physics experiments of all time, something went terribly wrong--something that risks damaging trust in the scientific process itself. And I think it's worth reflecting on what we can do to prevent it from happening again.

---------------------------------------------------------------------

This is going to be one of the many Shtetl-Optimized posts that I didn't feel like writing, but was given no choice but to write.
News, social media, and my inbox have been abuzz with two claims about Google's Sycamore quantum processor, the one that now has 72 superconducting qubits. The first claim is that Sycamore created a wormhole (!)--a historic feat possible only with a quantum computer. See for example the New York Times and Quanta and Ars Technica and Nature (and of course, the actual paper), as well as Peter Woit's blog and Chad Orzel's blog. The second claim is that Sycamore's pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one--least of all me!--doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week's Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means. On their face, the two claims about Sycamore might seem to be in tension. After all, if Sycamore can't do anything beyond what a classical computer can do, then how exactly did it bend the topology of spacetime? I submit that neither claim is true. On the one hand, Sycamore did not "create a wormhole." On the other hand, it remains pretty hard to simulate with a classical computer, as far as anyone knows. To summarize, then, our knowledge of what Sycamore can and can't do remains much the same as last week or last month! --------------------------------------------------------------------- Let's start with the wormhole thing. I can't really improve over how I put it in Dennis Overbye's NYT piece: "The most important thing I'd want New York Times readers to understand is this," Scott Aaronson, a quantum computing expert at the University of Texas in Austin, wrote in an email. "If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper." More broadly, Overbye's NYT piece explains with admirable clarity what this experiment did and didn't do--leaving only the question "wait ... if that's all that's going on here, then why is it being written up in the NYT??" This is a rare case where, in my opinion, the NYT did a much better job than Quanta, which unequivocally accepted and amplified the "QC creates a wormhole" framing. Alright, but what's the actual basis for the "QC creates a wormhole" claim, for those who don't want to leave this blog to read about it? Well, the authors used 9 of Sycamore's 72 qubits to do a crude simulation of something called the SYK (Sachdev-Ye-Kitaev) model. SYK has become popular as a toy model for quantum gravity. In particular, it has a holographic dual description, which can indeed involve a spacetime with one or more wormholes. So, they ran a quantum circuit that crudely modelled the SYK dual of a scenario with information sent through a wormhole. They then confirmed that the circuit did what it was supposed to do--i.e., what they'd already classically calculated that it would do. So, the objection is obvious: if someone simulates a black hole on their classical computer, they don't say they thereby "created a black hole." 
Or if they do, journalists don't uncritically repeat the claim. Why should the standards be different just because we're talking about a quantum computer rather than a classical one?

Did we at least learn anything new about SYK wormholes from the simulation? Alas, not really, because 9 qubits take a mere 2^9=512 complex numbers to specify their wavefunction, and are therefore trivial to simulate on a laptop. There's some argument in the paper that, if the simulation were scaled up to (say) 100 qubits, then maybe we would learn something new about SYK. Even then, however, we'd mostly learn about certain corrections that arise because the simulation was being done with "only" n=100 qubits, rather than in the n→∞ limit where SYK is rigorously understood. But while those corrections, arising when n is "neither too large nor too small," would surely be interesting to specialists, they'd have no obvious bearing on the prospects for creating real physical wormholes in our universe.
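To make "trivial to simulate on a laptop" concrete, here's a minimal NumPy sketch (my own toy code, nothing from the wormhole paper): it stores all 2^9 = 512 amplitudes of a 9-qubit state explicitly and pushes them through a few layers of random two-qubit gates. The helper apply_two_qubit_gate is just an illustrative reshape-and-multiply, and the whole thing finishes in milliseconds on an ordinary laptop.

```python
# Toy illustration: a 9-qubit state is just 2^9 = 512 complex amplitudes,
# and applying gates to it is cheap on any laptop.
import numpy as np

n = 9
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0  # start in |00...0>

def apply_two_qubit_gate(psi, gate, q1, q2, n):
    """Apply a 4x4 unitary 'gate' to qubits q1 and q2 of an n-qubit statevector."""
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, [q1, q2], [0, 1])
    shape = psi.shape
    psi = (gate @ psi.reshape(4, -1)).reshape(shape)
    psi = np.moveaxis(psi, [0, 1], [q1, q2])
    return psi.reshape(-1)

rng = np.random.default_rng(0)
for layer in range(20):                   # 20 layers of random two-qubit gates
    for q in range(layer % 2, n - 1, 2):  # alternate even/odd neighboring pairs
        m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        u, _ = np.linalg.qr(m)            # a Haar-ish random 4x4 unitary
        psi = apply_two_qubit_gate(psi, u, q, q + 1, n)

print(np.abs(psi[:4]) ** 2)  # a few outcome probabilities; runs in milliseconds
```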
And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves--including some of my close friends and collaborators--persist in talking about the simulated SYK wormhole as "actually being" a wormhole. What are they thinking?

Daniel Harlow explained the thinking to me as follows (he stresses that he's explaining it, not necessarily endorsing it). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle. Granted, they couldn't get a message back out from the wormhole, at least not without "going the long way," which could happen only at the speed of light--so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as "more real" or "more wormholey" than a classical simulation.

Of course, this scenario depends on strong assumptions not merely about quantum gravity, but also about the metaphysics of consciousness! And I'd still prefer to call it a simulated wormhole for simulated people.

For completeness, here's Harlow's passage from the NYT article:

Daniel Harlow, a physicist at M.I.T. who was not involved in the experiment, noted that the experiment was based on a model of quantum gravity that was so simple, and unrealistic, that it could just as well have been studied using a pencil and paper. "So I'd say that this doesn't teach us anything about quantum gravity that we didn't already know," Dr. Harlow wrote in an email. "On the other hand, I think it is exciting as a technical achievement, because if we can't even do this (and until now we couldn't), then simulating more interesting quantum gravity theories would CERTAINLY be off the table." Developing computers big enough to do so might take 10 or 15 years, he added.

---------------------------------------------------------------------

Alright, let's move on to the claim that quantum supremacy has been refuted. What Aharonov et al. actually show in their new work, building on earlier work by Gao and Duan, is that Random Circuit Sampling, with a constant rate of noise per gate and no error-correction, can't provide a scalable approach to quantum supremacy. Or more precisely: as the number of qubits n goes to infinity, and assuming you're in the "anti-concentration regime" (which in practice probably means: the depth of your quantum circuit is at least ~log(n)), there's a classical algorithm to approximately sample the quantum circuit's output distribution in poly(n) time (albeit, not yet a practical algorithm).

Here's what's crucial to understand: this is 100% consistent with what those of us working on quantum supremacy had assumed since at least 2016! We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear. That's why, from the very beginning, we targeted the "practical" regime of 50-100 qubits: a regime where

1. you can still see explicitly that you're exploiting a 2^50- or 2^100-dimensional Hilbert space for computational advantage, thereby confirming one of the main predictions of quantum computing theory, but

2. you also have a signal that (as it turned out) is large enough to see with heroic effort.

To their credit, Aharonov et al. explain all this perfectly clearly in their abstract and introduction. I'm just worried that others aren't reading their paper as carefully as they should be!

So then, what's the new advance in the Aharonov et al. paper? Well, there had been some hope that circuit depth ~log(n) might be a sweet spot, where an exponential quantum speedup might both exist and survive constant noise, even in the asymptotic limit of n→∞ qubits. Nothing in Google's or USTC's actual Random Circuit Sampling experiments depended on that hope, but it would've been nice if it were true. What Aharonov et al. have now done is to kill that hope, using powerful techniques involving summing over Feynman paths in the Pauli basis.

Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it's still standing, but more precariously than I'd like--underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google's 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google's Linear Cross-Entropy Benchmark (LXEB)--but their algorithm can currently achieve only about 10% of the excess in LXEB that Google's experiment found.
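As an aside, for readers wondering what the Linear Cross-Entropy Benchmark actually measures: roughly speaking, you take your observed samples, look up each one's probability under the ideal (noiseless) circuit, and compute F = 2^n·(mean ideal probability) − 1, so that uniform random guessing scores about 0 while perfect sampling from a deep random circuit scores about 1. Here's a toy sketch of the estimator (mine, purely illustrative; the "ideal distribution" below is a Porter-Thomas-like stand-in, not the output of any real circuit):

```python
# Toy illustration of the Linear Cross-Entropy Benchmark (LXEB).
# The "ideal" distribution here is a random Porter-Thomas-like stand-in.
import numpy as np

rng = np.random.default_rng(1)
n = 12                           # pretend circuit on 12 qubits
N = 2**n
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()         # Porter-Thomas-like "ideal" distribution

def lxeb(samples, p_ideal, N):
    # F = 2^n * <p_ideal(x)> - 1, averaged over the observed bitstrings
    return N * p_ideal[samples].mean() - 1

k = 200_000
ideal_samples = rng.choice(N, size=k, p=p_ideal)   # a "perfect" sampler
uniform_samples = rng.integers(N, size=k)          # pure noise

print(lxeb(ideal_samples, p_ideal, N))    # close to 1
print(lxeb(uniform_samples, p_ideal, N))  # close to 0
```

A noisy experiment lands somewhere in between, and the score can (roughly) be read as an estimate of overall circuit fidelity; the spoofing algorithms mentioned above aim to reach a comparable score classically, without doing the honest quantum computation.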
So, though it's been under sustained attack from multiple directions these past few years, I'd say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice. The wormhole is still open. Wait ... no ... that's not what I meant to write...

---------------------------------------------------------------------

Note: With this post, as with future science posts, all off-topic comments will be ruthlessly left in moderation. Yes, even if the comments "create their own reality" full of anger and disappointment that I talked about what I talked about, instead of what the commenter wanted me to talk about. Even if merely refuting the comments would require me to give in and talk about their preferred topics after all. Please stop. This is a wormholes-'n-supremacy post.

This entry was posted on Friday, December 2nd, 2022 at 5:04 pm and is filed under Announcements, Complexity, Metaphysical Spouting, Quantum.

123 Responses to "Google's Sycamore chip: no wormholes, no superfast classical simulation either"

1. Rand Says:
Comment #1 December 2nd, 2022 at 5:27 pm

> Daniel Harlow's argument to me was that, if you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then he believes they could have the subjective experience of jumping into a wormhole and meeting each other in the middle--something that wouldn't have been possible with a merely classical simulation.

Hey wait stop! Alice and Bob's subjective experience would reflect one another's? It sounds like we're breaking no-signaling here - though perhaps we can't get the information out of the computer? Still, that would be pretty exciting if true. (I still have to read the paper, Quanta article, and Natalie Wolchover's tweet thread defending it.)

2. SR Says:
Comment #2 December 2nd, 2022 at 5:39 pm

I'm confused as to how Harlow's argument would work. Quantum simulations must run on real-world physics, so in particular cannot entail faster-than-light information transfer. Simulated wormholes would however allow effective FTL communication by shortening geodesic distances between far-away points. Unless I'm missing something, it seems like the only resolutions are (1) that you cannot simulate practically useful wormholes using QC or (2) that building a QC out of ordinary qubits lets you rewire the very geometry of space. It seems like (1) would be more plausible. Sorry if I'm missing something...I don't know anything about quantum gravity so would appreciate corrections from anyone who knows more.

3. Scott Says:
Comment #3 December 2nd, 2022 at 5:55 pm

Rand #1 (and SR #2): Yeah, that's exactly the point. You can't get the information out of the wormhole, or not without "going around the long way," which can only happen at the speed of light. Thus, the assertion that Alice and Bob would meet in the middle has a "metaphysical" character: it's suggested by GR, but it could only ever be experimentally confirmed by Alice and Bob themselves, not by those of us on the outside.

4. SR Says:
Comment #4 December 2nd, 2022 at 6:28 pm

Scott #3: Thanks for the explanation!

5. Alex Says:
Comment #5 December 2nd, 2022 at 6:52 pm

Scott, you write "And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves--including some of my close friends and collaborators--persist in talking about the simulated SYK wormhole as "actually being" a wormhole. What are they thinking?"

It doesn't give me any pleasure to be the "I told you so" guy here, but... I told you so, Scott.
More specifically: https://scottaaronson.blog/?p=6457#comment-1939489 https://scottaaronson.blog/?p=6599#comment-1942272 Now, you have the entire field of QC being entangled in the worldwide press with actual creation of wormholes and other nonsense. Good luck in trying to clear up that mess in the lay public's mind. Even more avoidable damage to the reputation and credibility of QC due to hype. My question is, what were *you* thinking, Scott? You always seemed like the sensible guy to me. Evidently, the "QC adults" you mentioned in your first response to my concerns were not in the room when all of this was being cooked up... or maybe nobody really understands "the idea that you can construct spacetime out of entanglement in a holographic dual description" and that's why all the confusion, unlike what you suggested to me in your other snarky response ("Once you understand it, you can't un-understand it!"). Anyway, I know no fundamental idea about your friends is going to change by whatever I could say. But I can tell you they will keep doing it, over and over again. 6. mls Says: Comment #6 December 2nd, 2022 at 6:54 pm Just wanted to say thank you for the last three postings. All have been extremely interesting. 7. Corbin Says: Comment #7 December 2nd, 2022 at 7:05 pm I think that, metaphysically, we have a Wigner's-friend situation. After all, how do we ask Alice and Bob about their experience? We have to somehow measure their reported status from the quantum computers to which they were uploaded. Such measurements entangle Alice/Bob with the rest of the laboratory. 8. Scott Says: Comment #8 December 2nd, 2022 at 7:10 pm Alex #5: This sort of thing-- (1) eliding the distinction between simulations and reality, (2) making a huge deal over a small quantum computer calculating something even though we all knew perfectly well what the answer would be, and (3) it all getting blown up even further by the press --has been happening over and over for at least 15 years, so I didn't exactly need you to warn me that it would keep happening! Yes, I expect such things to continue, unless and until the incentives change. And as it does, I'll continue to tell the truth as best I can! Equally obviously, though, none of this constitutes an argument against the scientific merit of AdS/CFT itself, just like the tsunami of QC hype isn't an argument against Shor's algorithm. Munging everything together, and failing to separate out claims, would just be perpetuating the very practices one objects to when hypemeisters do it. 9. Will Says: Comment #9 December 2nd, 2022 at 7:26 pm Hi Scott, I think I must be missing something in your argument. If "A foofs B" has a dual description "C blebs D", and we establish that A does indeed foof B, would you agree that it is equally true to say that C blebs D? If so, wouldn't it be correct to say that this experiment has created a wormhole? It's not a wormhole in our regular universe's spacetime, but perhaps it's a wormhole in some... where (? not exactly clear on this). And from this, perhaps it follows why an equally-precise simulation on the classical computer wouldn't create a wormhole in the same way? (This part seems dubious to me-I want to say that A foofing B is different from a simulation of A foofing B-after all, no matter how well you simulate a hurricane, nobody gets wet. But I'm wondering if this instinct is in conflict with my early claim that "A foofs B" is equally true as "C blebs D". Hmm.. 
now that I think about it, maybe this is actually what you meant by "bring a wormhole into actual physical existence every time you sketch one with pen and paper.") 10. Alex Says: Comment #10 December 2nd, 2022 at 8:15 pm Scott #8, Well, you are the one that sounded surprised about your friends in the first place, that's why I quoted your paragraph. Otherwise, I wasn't going to make any comment. As for AdS/CFT, I wasn't even commenting about its scientific merits, but about the cloud of noise that surrounds it and ends, due to the very nature of that noise, producing these confusions. And, noise that is deliberately perpetuated by the practitioners themselves, as noted, again, in what you wrote. I saw that in QG before and more recently, from these very same people, in QC. That was all of my "warning". Of course, in light of that, these recent hype developments in QC are hardly a surprise. I was just hoping that, knowing the tactics, maybe there was a chance to stop it. Anyway, whatever. 11. LK2 Says: Comment #11 December 3rd, 2022 at 1:57 am I do not comment about the wormhole BS. As a physicist I find the whole story just hilarious ad a bad signal for how research is conducted today (with the aid of "improper" strategies..). I have colleagues who simulated on the IBM QC other physics systems (even nuclei) using 5-7 bits: all these calculations were possible inverting a small matrix on a piece of paper. On the QC they spent days and days for getting something usable. We are still so far away for easily using these systems effectively, but it is exciting and promising. I'd like to think that for now it is like playing with ENIAC or something like that. As for quantum advantage: my English is probably not good enough to get the message so I ask: Were you saying that: 1) quantum supremacy with nisq hardware was ALREADY expected to break down after a certain N? 2) How do you overcome the limit? Error correction or higher-fidelity? Or both !? Or what ? 3) Have the Kalai's arguments any relevance in this limit? Thank you very much for the whole great post! 12. Mateus Araujo Says: Comment #12 December 3rd, 2022 at 2:51 am I call bullshit on Harlow's assertion that "it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle". The dual description doesn't matter, because it still boils down to an experiment with two distant quantum computers sharing an entangled state. Alice's actions change precisely nothing at Bob's side, and vice-versa. It doesn't matter if these actions are uploading oneself into the quantum computer, you still can't get any information faster than light. It doesn't matter if this information is somehow accessible only to the uploaded self, there can't be any information there to start with. 13. Joseph Shipman Says: Comment #13 December 3rd, 2022 at 3:19 am https://pubs.acs.org/doi/pdf/10.1021/acsenergylett.2c01969 This is being hyped as a great QC advance. The paper looks careful and thorough. Is it? 14. arbitrario Says: Comment #14 December 3rd, 2022 at 6:02 am Hi, long time reader, first time commenter. Thanks for your post! (and your blog in general! Despite being a physics student, your blog piqued my interest toward computational complexity) I still don't get how Harlow's argument is supposed to work. 
Even granting the consciousness uploading (which I guess is a discussion for another time), Alice and Bob would "experience" the simulation of SYK in (simulated) "real" spacetime, while if I understood correctly the wormhole is in the emergent holographic spacetime. All their experiences should be perfectly describable just in terms of QM. Supposing that (rather than a simulation) we have an effective physical sistem in the real world which follows SYK hamiltonian and Alice and Bob have two pieces of this system. There still wouldn't be an "actual" wormhole between them, it would just be an equivalent mathematical description. Maybe I am missing something! 15. gentzen Says: Comment #15 December 3rd, 2022 at 6:24 am Daniel Harlow explained the thinking to me as follows (...). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, ... If this experiment would have consisted of a quantum simulation on "two entangled quantum computers," then the hype would have been justified, even if only 9 qubits had been used from each quantum computer (i.e. 18 qubits in total). Of course, I am not interested in an overhyped "entangled tardigrade"-like entanglement between quantum computers here, but in honest quantum entanglement, i.e. one which could be used for quantum teleportation. What I have in mind is an experiment with a source of entangled particles (probably spin-entangled photons), two quantum computers which are able to perform quantum computations using such "quantum inputs", and some mechanism to "post-select" those computations on the two quantum computers which were actually entangled (by some sort of coincidence measurements). 16. Adam Treat Says: Comment #16 December 3rd, 2022 at 7:12 am Alex #5, It seems a bit much to me to excoriate Scott on his own blog for overhyped QC/QG results when he is debunking the hype in self-same post. Moreover, Scott was *the* *first* scientist quoted in mainstream press having done so. Sure, you can get after him for being friends with Lenny Susskind, but we don't have to answer for our friends especially when he is explicitly calling them out on his own blog. From my recollection Scott has always been of the - "I don't know how I can help, but I'd be happy to try..." - when it comes to the whole It from Qubit business. And I haven't seen him even once hyping the business or over even so much as getting over excited by it. Alex, in short maybe you should direct your (perhaps well-motivated anger) at the people actually making the hyped up beyond the pale claims rather than those who are debunking them just because the latter happens to be nice and friendly to them. 17. Adam Treat Says: Comment #17 December 3rd, 2022 at 7:28 am Scott, "Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as "more real" than a classical simulation." Even granting the outlandish assumptions I would still grant those grounds as entirely specious. The whole point of the wormhole hype is generating the insipid claim in the public's mind of wormholes in our *actual* 3+1 physical world. You don't push your work in the NYT and Quanta magazine if you're trying to sell your work to your quantum gravity peers. You push it to conjure in the general public's mind ideas coming from sci-fi movies like Interstellar, etc. What's missing in this work - even when granting outrageous assumptions - is any connection whatsoever to that 3+1 physical world. 
There is *nothing* about this work that suggests the so-called SYK "wormholes" have anything to do with the wormholes from general relativity in our 3+1 universe. Re-enacting the math of the SYK "wormholes" on a QC does nothing to change that. John Baez said on Woit's blog that this is like if: "a kid scrawls a picture of an inside-out building and the headline blares BREAKING NEWS: CHILD BUILDS TAJ MAHAL!" But I think the offense is considerably worse than that. To me it is as if a baby scrawls a crayon picture of random lines and dots and the parent raves to the media - "my kid built the Taj Mahal!" - all the while neglecting to mention: * it is a drawing * it is inside out * it is in a universe where the laws of perspective/drawing may be entirely unrelated to our own 18. Adam Treat Says: Comment #18 December 3rd, 2022 at 7:43 am How embarrassing for the authors when future research of actual gravity in our universe discovers that it does not contain and cannot contain wormholes of the ER variety. OR maybe they forgot that wormholes haven't been discovered in observation and are just still a conjectured (highly contested?) solution of GR? 19. Anon Says: Comment #19 December 3rd, 2022 at 8:02 am I guess what confuses my very naive self about situations like this is that everyone I meet in academia (though not QC companies) working on QC agrees that hype like this wormhole business is bad for the field and swears they themselves would never do it. Yet somehow we get one of these papers every so often from very well established academic groups. Either I'm very bad at reading people, or I'm living under a rock and don't interact with people who want hype - which do you think it is Scott? 20. 4gravitons Says: Comment #20 December 3rd, 2022 at 9:10 am Regarding the statement that there is nothing the experiment reveals that you couldn't learn from a classical simulation, what about this section in the Quanta article: "Surprisingly, despite the skeletal simplicity of their wormhole, the researchers detected a second signature of wormhole dynamics, a delicate pattern in the way information spread and un-spread among the qubits known as "size-winding." They hadn't trained their neural network to preserve this signal as it sparsified the SYK model, so the fact that size-winding shows up anyway is an experimental discovery about holography. "We didn't demand anything about this size-winding property, but we found that it just popped out," Jafferis said. This "confirmed the robustness" of the holographic duality, he said. "Make one [property] appear, then you get all the rest, which is a kind of evidence that this gravitational picture is the correct one."" This isn't saying that there is a property which *could not* be detected in the classical simulation, but it at least seems to be a property that *was not* detected in the classical simulation, right? (Or did they see it first there? I haven't read the actual scientific paper yet.) And this is kind of nontrivial as a "thing learned about quantum gravity", and not just for technical reasons. It's nontrivial because AdS/CFT is already a conjecture. It's a conjecture in the well-tested versions, but an even bigger conjecture is that it is in some sense "generic": that there are lots of "CFT-ish" systems with "AdS-ish" duals. In this case, the system they were testing wasn't N=4 SYM, or even SYK, but a weird truncation of SYK. 
The idea that "weird truncation of SYK" has a gravity dual is something the "maximal AdS/CFT" folks would expect, but that could not have been reliably predicted. And it sounds like, from those paragraphs, that the people who constructed this system were trying to get one particular gravity-ish trait out of it, but didn't expect this other one. That very much sounds like evidence that "maximal AdS/CFT" covers this case, in a way that people didn't expect it to, and thus like a nontrivial thing learned about quantum gravity. 21. Scott Says: Comment #21 December 3rd, 2022 at 9:12 am Will #9: Hmm.. now that I think about it, maybe this is actually what you meant by "bring a wormhole into actual physical existence every time you sketch one with pen and paper." Yup! 22. Scott Says: Comment #22 December 3rd, 2022 at 10:07 am LK2 #11: As for quantum advantage: my English is probably not good enough to get the message so I ask: Were you saying that: 1) quantum supremacy with nisq hardware was ALREADY expected to break down after a certain N? Yes. 2) How do you overcome the limit? Error correction or higher-fidelity? Or both !? Or what ? Yes. Higher fidelity, which in turn lets you do error-correction, which in turn lets you simulate arbitrarily high fidelities. We've understood this since 1996. 3) Have the Kalai's arguments any relevance in this limit? Gil Kalai believes that (what I would call) conspiratorially-correlated noise will come in and violate the assumptions of the fault-tolerance theorem, and thereby prevent quantum error-correction from working even in principle. So far, I see zero evidence that he's right. Certainly, Google's and USTC's Random Circuit Sampling experiments saw no sign at all of the sort of correlated noise that Gil predicts: in those experiments, the total circuit fidelity simply decayed like the gate fidelity, raised to the power of the number of gates. If that continues to hold, then quantum error-correction will """merely""" be a matter of more and better engineering. 23. Scott Says: Comment #23 December 3rd, 2022 at 10:23 am Joseph Shipman #13: I just looked at that paper. On its face, it seems to commit the Original Sin of Sloppy QC Research: namely, comparing only to brute-force classical enumeration (!), and never once even asking the question, let alone answering it, of whether the QC is giving them any speedup compared to a good classical algorithm. (Unless I missed it!) There's a tremendous amount of technical detail about the new material they discovered, and that part might indeed be interesting. But none of it bears even slightly on the question you and I care about: namely, did quantum annealing provide any advantage whatsoever in discovering this material, compared to what they could've done with (e.g.) classical simulated annealing alone? 24. Scott Says: Comment #24 December 3rd, 2022 at 10:29 am arbitrario #14: I should really let Harlow, or better yet one of the "true believers" in this, answer your question! To me, though, it's really a question of how seriously you take the ER=EPR conjecture. Do you accept that two entangled black holes will be connected by a wormhole, and that Alice and Bob could physically "meet in the middle" of that wormhole (even though they couldn't tell anyone else)? 
23. Scott Says:
Comment #23 December 3rd, 2022 at 10:23 am

Joseph Shipman #13: I just looked at that paper. On its face, it seems to commit the Original Sin of Sloppy QC Research: namely, comparing only to brute-force classical enumeration (!), and never once even asking the question, let alone answering it, of whether the QC is giving them any speedup compared to a good classical algorithm. (Unless I missed it!)

There's a tremendous amount of technical detail about the new material they discovered, and that part might indeed be interesting. But none of it bears even slightly on the question you and I care about: namely, did quantum annealing provide any advantage whatsoever in discovering this material, compared to what they could've done with (e.g.) classical simulated annealing alone?

24. Scott Says:
Comment #24 December 3rd, 2022 at 10:29 am

arbitrario #14: I should really let Harlow, or better yet one of the "true believers" in this, answer your question! To me, though, it's really a question of how seriously you take the ER=EPR conjecture. Do you accept that two entangled black holes will be connected by a wormhole, and that Alice and Bob could physically "meet in the middle" of that wormhole (even though they couldn't tell anyone else)? If so, then I suppose I see how you get from there to the conjecture that two entangled simulated black holes should be connected by a simulated wormhole, and that a simulated Alice and Bob could meet in the middle of it, again without being able to tell anyone (even though now it's metaphysically harder to pin down what's even being claimed!).

25. Still Figuring It Out Says:
Comment #25 December 3rd, 2022 at 10:55 am

Please correct me if I got anything wrong, but it sounds like we could do the Alice and Bob wormhole experiment in the near future. We just need to replace uploaded-brain Alice and Bob with much simpler probes (perhaps short text strings or simple computer programs?), and then extract their signal from the wormhole by "going the long way" with the speed of light, which shouldn't take that long if both quantum computers are here on Earth. Then again, entangling the quantum computers seems like a large technical bottleneck, and once we solve that, how is this experiment any different (apart from scale) from entangled photon experiments in quantum teleportation?

26. Scott Says:
Comment #26 December 3rd, 2022 at 10:56 am

Anon #19: I think what's going on is that there's a continuum of QC-hype-friendliness, from people who dismiss even the most serious QC research there is, to people who literally expect Google's lab to fall into a wormhole or something. At each point on the continuum, people get annoyed by the atrocious hype on one side, and also annoyed by the narrow-minded sticklers on their other side. As for whether the point where I sit is a reasonable one ... well, that's for you to decide!

27. Mark S. Says:
Comment #27 December 3rd, 2022 at 11:47 am

Scott - regarding the wormhole-in-a-lab, what would have been your sentiment about the old crummy BB84 device from the late-80's developed by Bennett, Brassard, Smolin and friends at Yorktown Heights that could send "secret" messages a whopping distance of 32.5 cm? The turning of the Pockels cells famously made different sounds depending on the basis of the photons that Eve could have listened to instead of having to measure the qubits directly. In 1989 Deutsch stated that the experimentalists "have created the first information processing device with capabilities that exceed those of the Universal Turing Machine." We know so much more now and would definitely not describe the experiment as extending beyond Turing, but at least Deutsch posited that something *different* was happening in the lab at Yorktown Heights in the late 80's than what happened at the beach in Puerto Rico where Bennett and Brassard first met to talk about what would become BB84 in the early 80's.

28. Remarkable: "Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage," by Xun Gao, Marcin Kalinowski, Chi-Ning Chou, Mikhail D. Lukin, Boaz Barak, and Soonwon Choi | Combinatorics and more Says:
Comment #28 December 3rd, 2022 at 12:11 pm

[...] There is a nice blog post by Scott Aaronson on several Sycamore matters. On quantum supremacy Scott expresses an optimistic [...]

29. Johnny D Says:
Comment #29 December 3rd, 2022 at 12:37 pm

Everything that is run on 9 or 25 qubits of Google's qpu can be run on a computer. That doesn't diminish the need to demonstrate it on the qpu. QC has only just begun to explore complex quantum states. It is awesome that Google verifies that algorithms that illuminate very 'quantum' behaviors work on real quantum systems. What we learned was qm works for these types of complex states.
I watched the Quanta utube video. They spoke of the algorithm being of a transversable wormhole. If I understand transversable wormholes, they have no horizons but they are not a shortcut, but a long way. It is great that a team first used ai to construct a qcircuit to illustrate this effect in a minimal way and that the circuit worked. I am very excited for the day when Google will have hardware below threshold thm's conditions and we do learn awesome stuff about qm they we don't know!!! 30. OhMyGoodness Says: Comment #30 December 3rd, 2022 at 1:06 pm "Yes, I expect such things to continue, unless and until the incentives change. " How would you change the incentives? John Q Public has a great thirst for doom forecasts and the latest zany results from the crazy world of quantum mechanics and substituting simulation for the subject of simulation is a grand way to provide that. There was reliance on the personal integrity of scientists even to the gallows but as the volume of boiler housed data, irreproducible results, over-the-moon hype,etc ever increases, it might be reasonable to re-examine that premise. As I have stated before you display personal integrity in the best tradition of science but many of your colleagues raise more than a reasonable doubt. 31. Gil Kalai Says: Comment #31 December 3rd, 2022 at 1:08 pm "Gao et al. have given a polynomial-time classical algorithm for spoofing Google's Linear Cross-Entropy Benchmark (LXEB)--but their algorithm can currently achieve only about 10% of the excess in LXEB that Google's experiment found." Gao et al.'s main observation is that when you apply depolarization noise on several gates, there is still some correlation between the noisy samples and the ideal distribution (hence large LXEB), and they apply this observation to split the circuit into two distinct parts (which roughly allows computations separately on these parts). I would expect that you can get 100% (or more) of the Google's LXEB as follows: apply the noise only to some of the gates in the "boundary", get substantially larger correlation, and make sure that the resulting noisy circuit still has quick (perhaps a bit slower) classical algorithm. (This is related to the "patch" and "elided" circuits in Google's 2019 paper. In the patch circuits all boundary edges are deleted and in the elided circuits only some of them are.) 32. LK2 Says: Comment #32 December 3rd, 2022 at 2:28 pm Scott #22: thank you very much! It seems time for me to look seriously into the fault-tolerance theorem. 33. Mitchell Porter Says: Comment #33 December 3rd, 2022 at 2:30 pm In the original ER=EPR paper (arXiv:1306.0533), Maldacena and Susskind write: "It is very tempting to think that *any* EPR correlated system is connected by some sort of ER bridge, although in general the bridge may be a highly quantum object that is yet to be independently defined. Indeed, we speculate that even the simple singlet state of two spins is connected by a (very quantum) bridge of this type." To me, this is saying there is a continuum between the concept of wormhole and the concept of entanglement. All wormholes, quantum mechanically, are built from entanglement, and all entanglement corresponds to a kind of wormhole. (And presumably, e.g. all quantum teleportation lies on a continuum with traversable wormholes, etc.) I find this an attractive and plausible hypothesis, and well-worked out in the case of entangled black holes, but still extremely vague at the other end of the continuum. 
What is the geometric dual of the two-spin singlet state? Does it depend on microphysical details - i.e. the dual is different if a Bell state arises in standard model physics, versus if it arises in some other possible world of string theory? Or is there some geometric property that is universally present in duals of Bell states? As far as I know, there are still no precise answers to questions like these. 34. Scott Says: Comment #34 December 3rd, 2022 at 5:21 pm 4gravitons #20: To the extent they learned something new, it seems clear from the description that they learned it from the process of designing the compressed 9-qubit circuit, and not at all from the actual running of the circuit. By the time they'd put the latter on the actual QC, they'd already learned whatever they could. 35. Shion Arita Says: Comment #35 December 3rd, 2022 at 5:56 pm @Scott #24 But if Alice and Bob are uploaded into physically distant parts of a quantum computer, if they are able to meet, even 'encrypted' inside the wormhole, even if they can only get out the long way, there must be some kind of long-range causal connection, since Alice and Bob started out far apart, which would necessitate there existing something like a wormhole in actual physical reality to enable that. In fact, something like this (with simple particles playing the role of Alice and Bob, across more feasible distances), might actually be a good way to experimentally test ER=EPR. 36. bystander Says: Comment #36 December 3rd, 2022 at 6:44 pm Kudos for calling the hype what it is. Regarding your friends that do the damage, recall the advice: "Lord, protect me from my friends; I can take care of my enemies." 37. Alex Says: Comment #37 December 3rd, 2022 at 7:31 pm Adam Treat #16 That's all very fair. I don't consider Scott to be responsible by any means of this stunt. And I salute his strong rebuttals. What happened was that I just read that paragraph I quoted and that made me recall those previous brief conversations in this blog, and then I just felt a bit frustrated at his expressed frustration with his friends. He can befriend whoever he wants, of course, and be involved in whatever research lines he likes. Personally, my choices would obviously be very different, since I don't respect those people. And, indeed, I can assure you that my anger goes to them, not to Scott. But, going back to the problem of hype, I think more needs to be done. This type of hype is new, QG inside QC. Both fields already had a difficult time with hype of their own, but this new combination is particularly worrying and dangerous, the levels of surrealism are jaw-dropping even by the standards of hype in these fields. 38. Christopher Says: Comment #38 December 3rd, 2022 at 8:38 pm > This is going to be one of the many Shtetl-Optimized posts that I didn't feel like writing, but was given no choice but to write. Then why didn't you just have ChatGPT write it? XD 39. Scott Says: Comment #39 December 3rd, 2022 at 8:56 pm bystander #36, Alex #37, etc.: Like, imagine you had a box that regularly spit out solid gold bars and futuristic microprocessors, but also sometimes belched out dense smoke that you needed to spend time cleaning up. It would still be a pretty great box, right? You'd still be lucky to have it, no? Lenny Susskind has contributed far more to how we think about quantum gravity -- to how even his critics think about quantum gravity -- than all of us in this comment thread combined. If his friends sometimes have to ... 
tone things down slightly when he gets a little too carried away with an idea, that's a price worth paying many times over. 40. Scott Says: Comment #40 December 3rd, 2022 at 9:01 pm Shion Arita #35: In fact, something like this (with simple particles playing the role of Alice and Bob, across more feasible distances), might actually be a good way to experimentally test ER=EPR. But if (as we said) you can't get a message out from the simulated wormhole, if all the experiments that an external observer can do just yield the standard result predicted by conventional QM, then what exactly would the experimental test be? 41. Scott Says: Comment #41 December 3rd, 2022 at 9:05 pm Gil Kalai #31: Yeah, I also think it's plausible (though not certain) that the LXEB score achieved by Gao et al.'s spoofing algorithm can be greatly improved. Keep in mind, though, that the experimental target that algorithm is trying to hit will hopefully also be moving! 42. Scott Says: Comment #42 December 3rd, 2022 at 9:35 pm OhMyGoodness #30: How would you change the incentives? I'm not sure, but I'd guess the strong, nearly-unanimous pushback that the "literal wormhole" claim has gotten in the science blogosphere has already changed the incentives, at least marginally! 43. Alex Fischer Says: Comment #43 December 3rd, 2022 at 10:29 pm Do you think that the noisy random circuit sampling result (or ideas from it) can be used to get a polynomial time algorithm for noisy BosonSampling? Based off my first reading, it seems not immediately applicable to BosonSampling, but it seems like the ideas could be transferred to that. 44. Scott Says: Comment #44 December 3rd, 2022 at 11:56 pm Still Figuring It Out #25: Please correct me if I got anything wrong, but it sounds like we could do the Alice and Bob wormhole experiment in the near future. We just need to replace uploaded-brain Alice and Bob with much simpler probes (perhaps short text strings or simple computer programs?), and then extract their signal from the wormhole by "going the long way" with the speed of light, which shouldn't take that long if both quantum computers are here on Earth. Then again, entangling the quantum computers seems like a large technical bottleneck, and once we solve that, how is this experiment any different (apart from scale) from entangled photon experiments in quantum teleportation? Indeed, see my comment #40. Even after your proposed experiment was done, a skeptic could say that all we did was to confirm QM yet again, by running one more quantum circuit on some qubits and then measuring the output. Nothing would force the skeptic to adopt the wormhole language: that language exists, but the whole point of duality is that it doesn't lead to any predictions different from the usual quantum-mechanical ones. See the difficulty? 45. Scott Says: Comment #45 December 4th, 2022 at 12:02 am Alex Fischer #43: Excellent question! While the many differences between RCS and BosonSampling make it hard to do a direct comparison, some might argue that the "analogous" classical simulation results for noisy BosonSampling were already known. See for example this 2018 paper by Renema, Shchesnovich, and Garcia-Patron, which deals with a constant fraction of lost photons, with some scaling depending on the fraction. In any case, whatever loopholes remain in the rigorous results, I've personally never held out much hope that noisy BosonSampling would lead to quantum supremacy in the asymptotic limit--just like I haven't for RCS and for similar reasons. 
At least in the era before error-correction, the goal, with both BosonSampling and RCS, has always been what Aharonov et al. are now calling "practical quantum supremacy." 46. Scott Says: Comment #46 December 4th, 2022 at 12:25 am Mark S. #27: Scott - regarding the wormhole-in-a-lab, what would have been your sentiment about the old crummy BB84 device from the late-80's developed by Bennett, Brassard, Smolin and friends at Yorktown Heights that could send "secret" messages a whopping distance of 32.5 cm? ... In 1989 Deutsch stated that the experimentalists "have created the first information processing device with capabilities that exceed those of the Universal Turing Machine." While I was only 8 years old in 1989, I believe I would've pushed back against Deutsch's claim ... the evidence being that I'd still do so today! In particular, while you and I know perfectly well that this isn't what Deutsch meant, his statement (which I confirmed here by googling) is practically begging for some journalist to misconstrue it as "IBM's QKD device violates the Church-Turing Thesis; it does something noncomputable; it can't even be simulated by a Turing machine." And in this business, I believe we ought to hold ourselves to a higher standard than (1) knowing the truth, and (2) saying things that in our and other experts' minds convey the truth. We should also (3) anticipate the exciting false things that non-experts will likely take us to mean, and rule them out! Having said that, Deutsch's statement was on much firmer ground than the wormhole claim in at least one way. Namely, the IBM device's ability to perform QKD--an information-processing protocol impossible in a classical universe--was ultimately a matter of observation. There was no analogue there to the metaphysical question of whether a computational simulation of a wormhole "actually brings a wormhole into physical existence." 47. manorba Says: Comment #47 December 4th, 2022 at 5:12 am so next step is E.Musk asking for public funding to build a net of teleporting stations is that it? 48. Adam Treat Says: Comment #48 December 4th, 2022 at 8:23 am Scott, Could you explain a little more about the hope for the sweet spot with "circuit depth ~log(n)" ... Now that you know this hope is dashed what does it mean for future experiments? Does this alter the trajectory of what Google will attempt next or the ability to scale up to higher qubits? Basically, I'm wondering what effect this has on the ongoing fight to prove quantum supremacy beyond all doubters... 49. James Cross Says: Comment #49 December 4th, 2022 at 8:26 am "if someone simulates a black hole on their classical computer, they don't say they thereby "created a black hole." Or if they do, journalists don't uncritically repeat the claim". "I will die on the following hill: that once you understand the universality of computation, and how a biological neural network is just one particular way to organize a computation, the obvious, default, conservative thesis is then that any physical system anywhere in the universe that did the same computations as the brain would share whatever interesting properties the brain has: intelligent behavior (almost by definition), but presumably also sentience". So the rules for sentience and black holes/worm holes are different? 50. Gil Kalai Says: Comment #50 December 4th, 2022 at 8:27 am (LK2 #11, Scott #22) Hi everybody, 1) The crucial ingredient of my argument for why quantum fault-tolerance is impossible deals with the rate of noise. 
My argument asserts that it will not be possible to reduce the rate of noise to the level that allows the creation of good quality quantum error-correcting codes. This is based on analyzing the (primitive; classical) computational complexity class that describes NISQ computations (with fixed error-rate). The failure, in principle, for quantum computation is also related to the strong noise-sensitivity of NISQ computations in subconstant error rates. 2) Error correlation is not part of my current argument and I don't believe in "conspiratorially-correlated noise that will come in and violate the assumptions of the fault-tolerance theorem, and thereby prevent quantum error-correction from working even in principle." (Again, my argument is for why efforts to reduce the rate of noise will hit a wall, even in principle.) I did study various aspects of correlated errors (mainly, before 2012) and error-correlation is still part of the picture: namely, without quantum fault-tolerance the type of errors you assume for gated entangled qubits will be manifested also for entangled qubits where the entanglement was obtained indirectly through the computation. (As far as I can see, this specific prediction is not directly related to any finding of the Google 2019 experiment.) 3) Actually, some of my earlier results showed that exotic forms of error correlation will allow quantum fault-tolerance (a 2012 paper with Kuperberg) and earlier, in 2006, I showed that if the error rate is small enough, no matter what diabolic correlations exist, log-depth quantum computation (and, in particular, Shor's algorithm) still prevails. 4) Let me note that under the assumption of fixed-rate readout errors my simple (Fourier-based) 2014 algorithm with Guy Kindler (for boson sampling) applies to random circuit sampling and shows that approximate sampling can be achieved by low degree polynomials and hence random circuit sampling represents a very low-level computational subclass of P. I don't know if this conclusion applies to the model from Aharonov et al.'s recent paper and this is an interesting question. (Aharonov et al. 's noise model is based on an arbitrarily small constant amount of depolarizing noise applied to each qubit at each step.) Aharonov et al. 's algorithm also seems related to Fourier expansion. 5) Scott highlights the importance of the remarkable agreement in the Google 2019 experiment between the total circuit fidelity and the gate fidelity, raised to the power of the number of gates, and refers to it as "the single most important result we learned scientifically from these experiments". This is related to another aspect of my own current interest regarding a careful examination of the Google 2019 experiment, and it goes back, for example, to an interesting videotaped discussion regarding the evaluation of the Google claims that both Scott and I participated in Dec. 2019. As some of you may remember, I view the excessive agreement between the LXEB fidelity and the product of the individual fidelities as a reason for concern regarding the reliability of the 2019 experiment. 51. Shion Arita Says: Comment #51 December 4th, 2022 at 8:35 am @scott#40 Thought about it for a bit, and came up with this: -Design a system that's simple enough that it's practically possible (though it probably will be difficult) to brute force 'decrypt' what happened inside the wormhole part of the simulation. This should work because it's not actually 'impossible' to decrypt the wormhole-just exponentially difficult. 
So maybe there's some kind of toy system that is complicated enough to do what we want but simple enough to be decryptable. -Also have the computation generate some kind of hash that's sensitive to everything that happens in it. The reason for this will become apparent later. Or rather each side of the wormhole will create its own hash, but this is a bit of a non-central detail -Simulate this on a classical computer. -Brute force decrypt the computation and verify that Alice and Bob had interaction inside the 'wormhole'. Note that this does not necessitate a long-range causal connection in this case, since it's just using large amounts of classical resources to locally simulate what the effect of a long-range causal connection would be. I guess you could also run it on a quantum computer without separated parts. -Now run the computation on the entangled separated quantum computers, and see if each side's hash matches the classical case. If it does, the same computation occurred. But since the parts were separated by a great distance, this means that there was some kind of long-range influence. This test should work because the hash allows us to see whether the encrypted data is the same or different, without decrypting it. You'd still have to bring both hashes together the long way to verify, but if they match it does in fact mean that the computation on the Alice side influenced the computation on Bob's side. 52. OhMyGoodness Says: Comment #52 December 4th, 2022 at 8:36 am manorba#47 His boring machines will be upgraded to tunnel through space time. 53. It's a simulation, stupid - The nth Root Says: Comment #53 December 4th, 2022 at 8:53 am [...] The second claim is that Sycamore's pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one--least of all me!--doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week's Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means. ... (Shtetl-Optimized / Scott Aaronson) [...] 54. Johnny D Says: Comment #54 December 4th, 2022 at 9:02 am A giant fault tolerant qc will give control of unitary developement and collapse. In this setting we know qm. What will we learn about qm? Nothing, we know it already. It may simulate molecules and condensed matter well enough to give advances in those fields. It may factor large numbers or do other calcs better than computers. It won't answer qg questions for all the same reasons humans can't. No data!!! It will be able to simulate various holographic models, but it can never give evidence of the truth of the holography. Of course it is also possible that nature is such that fault tolerance cannot be achieved. Systems with many dof collapse (or if you prefer get stuck in a branch). Is there any known system of large dof that doesn't collapse? In qg, unitary evolution is often assumed for large dof systems. I can never believe this. It leads to awesome math but physical reality, I don't think so. It may be important for qm to have consistant unitary evolution even if it cannot be achieved. 
An observer collecting Hawking radiation would certainly collapse the wave function. I don't believe that the other branches which are unreachable to the observer can affect the GR metric that the observer lives in. I think ER=EPR can be true only in an unrealistic situation where unitary evolution is preserved, i.e. no observers. A fault tolerant qc could evolve a holographic version of this. That may be interpreted as an argument against believing in fault tolerance??? 55. Mark S. Says: Comment #55 December 4th, 2022 at 9:21 am Scott #46 - thanks! And thanks for considering my hypothetical. I can definitely get behind your third bullet. I think that's a fair burden to place on the media and the scientists who communicate with them. But, perhaps in relation to the first two, the author of the Quanta article has pointed to a talk led by Harlow on hardware vs. software at the 2017 IFQ Conference. Harlow's contention, as I could interpret it, was that it's fair to be "hardware ambivalent" about certain topics in physics, and that it's wrong to state that certain things having certain properties are only *real* if a lump of material realizes the properties, in contrast to software running on a (quantum) computer to characterize that material and those properties. At least immediately during the conversation at the conference, you and others appeared receptive to such a position (positions might have changed or been more refined afterwards, of course). As an example I've heard that the toric code is essentially "the same" as a topological quantum computer. I think this question was initially posited by someone else, but - if a transmon or ion-trap quantum computer were to successfully implement a toric code, could you defend a NYT or Quanta headline such as "Physicists use quantum computers to create new phase of matter"? 56. Scott Says: Comment #56 December 4th, 2022 at 10:38 am Adam Treat #48: Could you explain a little more about the hope for the sweet spot with "circuit depth ~log(n)" ... Now that you know this hope is dashed what does it mean for future experiments? OK good question. What's special about depth log(n) is that (1) the quantum state is sufficiently "scrambled" to get "anti-concentration" (a statistical property used in the analysis of these experiments), but (2) even if every gate is subject to constant noise, the output distribution still has O(1/exp(log(n))) = O(1/poly(n)) variation distance from the uniform distribution, raising the prospect that a signal could be detected with only polynomially many samples. That's why it briefly looked like a "sweet spot," potentially attractive for future experiments -- though as far as I know, no experimental groups had actually made any concrete plans based on this idea (though of course it's hard to say, since once you're doing an experiment your depth is some actual number, not "O(log (n))" or whatever). Anyway, the implication is that now we can tell the experimentalists never mind and not to waste their time on the large n, "logarithmic" depth RCS regime!
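If it helps, here's the arithmetic spelled out with toy numbers of my own -- the decay constant and the 1/signal² sample count below are illustrative assumptions for the sketch, not anything taken from the Aharonov et al. paper:

```python
# Toy model of the scaling described above (illustrative numbers only):
# assume the detectable "signal" decays like exp(-c * depth), and that
# distinguishing it from the uniform distribution needs ~ 1/signal^2 samples.
import math

c = 1.0  # hypothetical decay constant per unit of circuit depth
for n in (10**2, 10**4, 10**6):
    depth = math.log(n)            # the depth ~ log(n) regime
    signal = math.exp(-c * depth)  # = 1/n when c = 1: only polynomially small
    samples = 1.0 / signal**2      # = n^2: polynomially many samples would suffice
    print(f"n = {n:>8}: signal ~ {signal:.1e}, samples needed ~ {samples:.1e}")
```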
57. ilyas Says: Comment #57 December 4th, 2022 at 10:52 am Scott. Yet again a careful, measured and factual clarification. This time much needed, and this time with a tinge of sadness that this could even be reported. I don't know if the researchers or the people involved on the QC were given a chance to comment, but your point about how this further creates needless confusion is spot on. I won't comment on the word hype (or as Peter elegantly describes it, "publicity stunt") since no comment is required. Thank you yet again. 58. Scott Says: Comment #58 December 4th, 2022 at 11:07 am James Cross #49: What an amusing juxtaposition! Here's what I'd say: We agree, presumably, that a simulated hurricane doesn't make anyone wet ... at least, anyone in our world. By contrast, simulated multiplication just is multiplication. Is simulated consciousness consciousness? That's one of the most profound questions ever asked. Like I said in the passage you quoted, it seems to me that the burden is firmly on anyone who says that it isn't, to articulate the physical property of brains that makes them conscious and that isn't shared by a mere computer simulation. At the least, though, we'd like a simulated consciousness to be impressive: to pass the Turing Test and so on. No one is moved by claims that a 2-line program to reverse an input string manifests consciousness. Now we're faced by a new question of this kind: is a simulated wormhole a wormhole? Or, maybe it's only a wormhole for simulated people in a simulated spacetime? However you answer that imponderable, my point was that a crude 9-qubit simulation of an SYK wormhole isn't obviously much more impressive, conceptually or technologically, than a "simulated wormhole" that consists of a wormhole that you'd sketch with pen and paper. Or not today, anyway, when running ~9-qubit, ~100-gate circuits and confirming that they behave as expected has become routine. Hope that clarifies! 59. Scott Says: Comment #59 December 4th, 2022 at 11:18 am Shion Arita #51: While I didn't follow every detail of your proposal, the question, once again, is how you'd answer a skeptic who says that you're just doing a bunch of computations that give an answer that you yourself already knew in advance, computations that couldn't possibly have given a different answer, rather than doing an experiment that could tell you anything about the possibility or impossibility of wormholes in our "base-level" physical spacetime. After all, if the claim just boils down to "dude ... if we created an entire simulated universe with wormholes, then that universe would have wormholes in it!", then you didn't need PhD physicists to help establish the claim, did you? 60. James Cross Says: Comment #60 December 4th, 2022 at 11:46 am #56 Scott Thanks for indulging me! I think that sentience, like a black hole (and presumably a wormhole), is a natural phenomenon that can be described and simulated with computations, but the abstract model simulations in neither case are the same as the phenomenon. However, there are simulations that are different from abstract models - concrete simulations that are realized in physical materials, for example wind tunnels and wave pools, which are scale models that may have characteristics making them nearly identical to the actual phenomenon they model. So the question about a quantum computer wormhole would be whether it is merely an abstract model or a concrete one. From what I can gather, it is a purely abstract model in this research. 61. Michael Janou Glaeser Says: Comment #61 December 4th, 2022 at 12:02 pm Scott #39 "Lenny Susskind has contributed far more to how we think about quantum gravity -- to how even his critics think about quantum gravity -- than all of us in this comment thread combined." It is interesting that you wrote "to how we think about quantum gravity" instead of "to quantum gravity".
Susskind's contributions to really understanding quantum gravity have been null, and his contributions to how (some) people think about quantum gravity have been very negative to the field: he has made people lose a lot of time with each idea he has proposed, none of which ended up quite working (and some of which, particularly the latest ones like QM=GR, are sheer nonsense). His modus operandi of overhyping everything he does is not a bug but a feature: without it, his influence would have probably been much smaller. So a better analogy would be with a box that produces shining plastic bars, but misleads some people into thinking that they're solid golden bars, and also produces futuristic-looking microprocessors which however don't work, and on top of that produces dense smoke as in your example. That's Lenny Susskind as far as QG is concerned. 62. Scott Says: Comment #62 December 4th, 2022 at 12:39 pm Mark S. #55: I confess I don't remember what I said at Daniel's talk in 2017--I haven't watched the video, and my memory is blurred by the birth of my son literally 2 days later. The way I'd put it now, though, is that given any claim to have created some new physical entity in the lab--a nonabelian anyon, a Bose-Einstein condensate, a wormhole, whatever--there are several crucial questions we should ask: (1) What, if anything, did we learn from studying this entity that we weren't already certain about, e.g. from theory or calculations on ordinary laptops? (2) What, if anything, could we learn from studying it in the future? (3) What, if anything, can the new entity be used for (including in future physics experiments)? (4) How difficult a technological feat was it to produce the new entity? Where does it fall on the continuum between "PR stunt, anyone could've done it" and "genuine, huge advance in experimental technique"? (5) Whatever the new entity is called ("wormhole," "anyon," etc.), to what extent is it the thing people originally meant by that term, and to what extent is it a mockup or simulation of the thing in a different physical substrate? (6) In talking about their experiment, how clear and forthright have the researchers been in answering questions (1)-(5)--and in proactively warning the public away from exciting but false answers? My personal feeling is that, judged by the above criteria in their entirety, not leaning exclusively on any one of them, the wormhole work gets maybe a D. Not an F, but a D. So, that's why I've been negative about it, if not quite as negative as some others on the blogosphere! 63. Scott Says: Comment #63 December 4th, 2022 at 2:01 pm Michael Janou Glaeser #61: Who, if anyone, in your judgment, has made genuine contributions to understanding quantum gravity? 64. maline Says: Comment #64 December 4th, 2022 at 3:41 pm I also don't see how this wormhole trick evades no-signaling. Alice writes down a message, encodes the paper into her simulation, and drops it into the wormhole. Bob encodes his brain into his own simulation, and experiences jumping into the wormhole, finding and reading Alice's message, and then getting crushed into the singularity. How do we escape the conclusion that "the simulation of Bob receiving the message" is a property - and yes, an observable - of Bob's quantum computer? 65. Shion Arita Says: Comment #65 December 4th, 2022 at 3:55 pm @scott#59 Disclaimer: I'm kind of a skeptic too, to be honest, and maybe I am misunderstanding Harlow's proposed experiment.
My understanding is that somehow the entangled nature of the quantum computers will allow Simulated Alice to end up meeting Simulated bob in the Simulated wormhole. But since the computers are far apart, in order for Alice's and Bob's data to meet, there would have to be something like a real wormhole connecting them, since what we're calling 'Alice' and 'Bob' are actually physical states of the computer. The point is that, if that kind of long-range wormhole-like influence isn't possible in the real world, the computation will have different results when it's run on the distant entangled quantum computers. Something will go wrong, and Alice and Bob won't actually be able to meet. Thus a different hash. The point is, if we simulate a universe with wormholes in it, yeah, it'll have wormholes in it. But if we try to implement the simulation in such a way that it would rely on something like wormholes actually happening in the real world, if it's not possible in the real world, it can't work. So the computation in that case can't have the same outcome. 66. Scott Says: Comment #66 December 4th, 2022 at 4:19 pm maline #64 and Shion Arita #65: At some point we're going to go around in circles, but--from an external observer's perspective, both a "real" wormhole (formed, say, from two entangled black holes) and a "simulated" wormhole (formed, say, from two entangled quantum computers) are just some quantum systems that obey both unitarity and No superluminal signaling. No story about "Alice and Bob meeting in the middle" can ever do anything at all to change the predictions that that external observer would make by applying standard QM. All statements about "meeting in the middle" have empirical significance, at most, for the infalling observers themselves, just like in the simpler case of jumping into a single black hole. Alas, I don't see how any amount of cleverness can get around this. 67. SR Says: Comment #67 December 4th, 2022 at 4:42 pm This discussion has made me realize something amusing. We are speculating about creating virtual worlds in which simulated people may have the experience of living in a geometry with wormholes, even though our physics (presumably) does not directly allow this. What if our world, analogously, was created by people who lived in a purely classical world who thought it would be amusing to create a simulation which appears from the inside to be quantum, but in the "outside reality" computes everything slowly in a classical manner, and "steps time" only when those arduous computations are complete? Our subjective time may correspond to aeons in the outside world and we would not know it. 68. Matthew Says: Comment #68 December 4th, 2022 at 5:59 pm Many people in the High Energy Theory (ie string theory, ie formal theory, etc.) community would object to the statement that SYK even has a meaningful relation to quantum gravity. Let me explain. The classic example of AdS/CFT relates a quantum system called super-Yang-Mills (like the strong force) in four dimensions to quantum gravity (ie string theory) in 5d with negative curvature (AdS). The quantum system is labeled by two parameters: the number N of colors of the strong force, and the coupling of the strong force. When N is large, extensive evidence shows that SYM is identical to gravity in 5d (with supersymmetry, etc.), ie all corrections to gravity are suppressed by 1/N. This means that if you could simulate SYM at large N, then in principle you could simulate a wormhole in 5d. 
Note that this duality holds no matter the kinematic regime (e.g. temperature) in which you study SYM, as long as the label N is large. Of course, even theoretically computing SYM at large N (including what would be needed to describe a wormhole) is super hard, and simulating it is still a distant dream. SYK is a 1d quantum system labeled by a somewhat analogous parameter N. But even at large N, it is NOT dual to gravity in 2d AdS (or near-AdS) in the standard holographic sense. Instead, when people say it's dual to 2d gravity they mean there is a part of the theory that is described by 2d gravity, and this part is dominant in a certain kinematic regime (e.g. low temperature), but this part of the system CANNOT be distinguished from the rest of the system by ANY label of the system. This is why SYK does not have a quantum gravity dual in the standard sense. What I am saying is quite standard, e.g. if you look at the standard review of SYK by Rosenhaus (arXiv:1807.03334), you see in Figure 5 that in the "AdS dual" there is a big question mark, unlike SYM and other more understood cases. You might say that I'm splitting hairs, and that SYK is just dual to quantum gravity in some new, perhaps more general sense. But this general sense is so general that pretty much any conformal field theory would be dual to quantum gravity, so the statement becomes almost vacuous. Take the simplest field theory: the critical 3d Ising model. This model can be simulated classically, e.g. by an evaporating cup of water at a certain pressure and temperature, no need for a fancy quantum computer. Since the critical Ising model is conformal and has a stress tensor, it formally has an AdS description in 4d with a graviton dual to the stress tensor. There may also be kinematic regimes of the Ising model where the stress tensor dominates, which would be dual to the graviton dominating in the bulk. But there is no label like N for the Ising model which can be dialed to guarantee this, so it's a bit silly to call the Ising model quantum gravity. You could indeed rewrite the 3d Ising model in 4d AdS variables (which has been rigorously done), but this won't necessarily teach you anything about quantum gravity. I think it's a pity some people exaggerated this result, because it would be super cool if a wormhole could one day be simulated by a system with an actual quantum gravity dual. This wouldn't be a wormhole in our spacetime, but it would still be a big accomplishment, and could well teach us new things about quantum gravity. Exaggerating SYK and its connection to quantum gravity has done our community a disservice. 69. Andrei Says: Comment #69 December 5th, 2022 at 12:37 am Scott, " If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle." There is something that bothers me about this ER = EPR thing in both its original form (wormholes) and in this simulation. As far as I understand, the black holes are entangled as long as they are built out of entangled particles. One particle goes in BH A and the other in BH B. However, Alice and Bob are not entangled, so isn't the entanglement terminated when they jump in those BHs?
Likewise, would the entanglement between those computers survive when Alice and Bob (which have some uncorrelated physical states) are uploaded? 70. LK2 Says: Comment #70 December 5th, 2022 at 2:06 am Gil Kalai (#50): thank you very much for the post. I'd be glad to at least understand point #1: could you please point me to your publication which addresses this? Thank you very much. 71. Scott Says: Comment #71 December 5th, 2022 at 2:51 am Andrei #69: No, in the scenario being discussed, I believe the number of qubits in Alice and Bob is small compared to the number of pre-entangled qubits in the two black holes. In the bulk dual description, Alice and Bob enter the two mouths of the wormhole without appreciably changing its geometry. 72. gentzen Says: Comment #72 December 5th, 2022 at 6:12 am Andrei #69: No, in the scenario being discussed, I believe the number of qubits in Alice and Bob is small compared to the number of pre-entangled qubits in the two black holes. In the bulk dual description, Alice and Bob enter the two mouths of the wormhole without appreciably changing its geometry. I guess this means that even if I could pull-off the feature (described in gentzen #15) to have two quantum computers which can both read quantum input (let's assume spin entangled photons for definiteness), and the additional feature to detect (or "post-select") when they each got one photon from an entangled pair, I would still not be able to simulate to wormhole with "two entangled quantum computers". I would also need some quantum memory in both computers to collect enough entangled qubits to be able to simulate the relatively large "number of pre-entangled qubits". I start to feel just how disappointing the actually performed quantum simulation is, even compared to the thinking described by Daniel Harlow. 73. Gil Kalai Says: Comment #73 December 5th, 2022 at 6:24 am (LK2#11,#70, Scott) Scott's new notion of "practical quantum supremacy" gives me another opportunity to explain a basic ingredient of my argument. To repeat Scott's new notion: "practical quantum supremacy" refers to a quantum noisy device that for a fixed rate of noise represents classical computation, but practically can manifest "computational supremacy" in the intermediate scale. Here "computational supremacy" is the ability to use the device to perform certain computations that are impossible or very hard for digital computers. A basic ingredient of my argument is: " 'Practical quantum supremacy' cannot be achieved" As a matter of fact, this proposed principle applies in general and not only for quantum devices. " Any form of 'practical computational supremacy' cannot be achieved." In other words, if you have a system or a device that represents computation in P, then you will not be able to tune some parameters of that system or device to achieve superior computations in the small or intermediate scale. NISQ computers with constant error rates (whether it is constant readout errors or the Aharonov et al.'s errors) are, from the point of view of computational complexity, simply classical computing devices. (This is very simple for constant readout errors and it is the new Aharonov et al. 's result for their noise model.) Therefore, our principle implies that they cannot be used (in principle) to demonstrate practical computational supremacy. This conclusion has far-reaching consequences: The Google experiment had 2-gate fidelity of roughly 99.5%. 
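(To get a feel for what such numbers mean, here is a back-of-the-envelope sketch of my own, using the simple model mentioned above where the total circuit fidelity is the 2-qubit gate fidelity raised to the power of the number of 2-qubit gates; the gate counts are hypothetical, not Google's exact figures.)

```python
# Back-of-the-envelope sketch with hypothetical gate counts (not Google's exact
# figures): model the total circuit fidelity as
# (2-qubit gate fidelity) ** (number of 2-qubit gates).
for gate_fidelity in (0.995, 0.999, 0.9999):
    for n_gates in (500, 5_000, 50_000):
        total = gate_fidelity ** n_gates
        print(f"gate fidelity {gate_fidelity}: {n_gates:>6} gates -> total ~ {total:.1e}")
```

In this toy model, 99.5% gates lose essentially all signal once circuits grow beyond a few thousand 2-qubit gates, while 99.99% gates would retain a usable signal for circuits fifty times larger.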
Pushing it to 99.9% or 99.99% may lead to convincing "practical supremacy" demonstrations. The principle we stated above implies that engineers will fail to achieve 2-qubit gates of 99.99% quality as a matter of principle. This sounds counterintuitive, as improving the 2-gate quality is widely regarded as a "purely engineering" task. The principle I propose, namely the principle of "no practical computational supremacy," tells you, based on computational complexity considerations, that what was considered an engineering task is actually out of reach as a matter of principle. This principle lies at the interface of physics and the theory of computing. So let me come back to LK2's (#11) questions. The answer to part 3 is "yes". The assertion that "practical quantum supremacy" is out of reach leads to some lower limits for the quality of the components of NISQ computers. These lower limits will also not allow good quality quantum error-correction. LK2 and Scott, I hope that this explanation helps to clarify my argument. Let me know. LK2 (#70): I hope this comment is of some help. In addition, there are various places to read about my theory. Recently (Nov. 5) I wrote a blog post on my blog with the text of a lecture given in Budapest, and the same post has links to several of my papers on related matters, as well as links to two videotaped (Zoom) lectures in Pakistan and in Indonesia. 74. Adam Treat Says: Comment #74 December 5th, 2022 at 7:19 am Gil #73, 'Scott's new notion of "practical quantum supremacy" gives me another opportunity to explain a basic ingredient of my argument. To repeat Scott's new notion: "practical quantum supremacy"' Since the football World Cup is going on, I must take this opportunity to raise a *red card* for this bit of passive-aggressive rhetorical sleight-of-hand. Scott's entire point is that this "new notion" is nothing of the sort and so-called "practical quantum supremacy" has always been what the current slate of QC supremacy experiments were going for. So it isn't new. Also, it wasn't Scott who coined the term, but rather Aharonov et al. Foul called, but play may now proceed... 75. Scott Says: Comment #75 December 5th, 2022 at 8:00 am Gil Kalai #73: To clarify one point, "practical quantum supremacy" is not a new notion! It's the same notion that I and others have been talking about since Arkhipov and I introduced BosonSampling more than a decade ago. I'm just being a good sport and using Aharonov et al.'s new term for it. 76. Scott Says: Comment #76 December 5th, 2022 at 8:02 am Adam Treat #74: You beat me to it! 77. Gali Weinstein Says: Comment #77 December 5th, 2022 at 8:33 am Susskind tries to establish the equivalence between the ER = EPR traversable wormhole protocol and Sycamore's quantum hardware. Only if they are equivalent can we learn about the first from experiments performed on the second. I am also not convinced. In their paper, "Traversable wormhole dynamics on a quantum processor", Jafferis et al. write: "As described by the size-winding mechanism, information placed in the wormhole is scrambled throughout the left subsystem, then the weak coupling between the two sides of the wormhole causes the information to unscramble and refocus in the right subsystem. Owing to the [quantum] chaotic nature of such scrambling [in presence of chaos]-unscrambling dynamics, the many-body time evolution must be implemented with high fidelity to transmit information through the wormhole". And in their paper, "Quantum Gravity in the Lab", Brown et al.
write that the scrambling, i.e. the exponential growth of the OTOC, is a manifestation of quantum chaos. Chaotic systems such as random circuits exhibit high-temperature teleportation and no size winding. Teleportation occurs at times larger than the scrambling time due to random dynamics. Only a single qubit can be teleported with high fidelity in the high-temperature limit, but with the right encoding of information, many qubits can be sent at low temperatures and intermediate times in a holographic system hosting a traversable wormhole. Jafferis et al. add: "This analysis shows that teleportation under the learned Hamiltonian is caused by the size winding mechanism, not by generic chaotic dynamics, direct swapping or other non-gravitational dynamics (Supplementary Information)". What I want to ask is about the chaos. It seems to me that on the one hand, Jafferis et al. speak of information that is placed in the wormhole which is scrambled in the presence of chaos. On the other hand, there are random quantum circuits and chaotic systems. According to Jafferis et al., they are equivalent. 78. fred Says: Comment #78 December 5th, 2022 at 9:09 am Scott "In more detail, Pan, Chen, and Zhang have shown how to simulate Google's 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!)." I'm not sure that falling back to arguments about energy efficiency is very insightful. It's true that classical server farms (cloud systems) use way more electricity than what they typically try to simulate, but such systems are truly programmable/universal, i.e. they can be used to compute a zillion practical things. Like, AlphaFold simulations obviously use an immense amount of energy compared to the actual energy involved in the folding of a single protein, but that's not the point. 79. Gil Kalai Says: Comment #79 December 5th, 2022 at 9:29 am Adam #74, Scott #75,#76 Guys, you missed my point. I tried to define and use the terms in a precise way: "practical computational supremacy" refers to a situation in which you have a system or a device that represents asymptotically a computation in P, but by proper engineering of this device you achieve superior computations in the small or intermediate scale. I refer by "practical quantum supremacy" to "practical computational supremacy" via a quantum device. Both these terms refer to devices which asymptotically admit an efficient classical simulation. Adam wrote: "'practical quantum supremacy' has always been what the current slate of QC supremacy experiments were going for." As far as I can tell, the insight that NISQ sampling represents classical computation (to which Aharonov et al. now added a nice new result) was observed not by Aaronson and Arkhipov but later by Guy and me (for boson sampling) and subsequently by various other groups of researchers in various situations. (Certainly, the fact that for RCS, asymptotically, classical algorithms perform better when n is large is a new surprising discovery by Gao et al.) There was a different reason why people advocated for quantum supremacy experiments in the intermediate scale, and this is because it was the only regime where verifying the outcomes was at all possible. In any case, the purpose of my comment was to explain my argument using these new/old terms in the way I defined them in the comment.
If you find the terms "practical quantum supremacy" and "practical computational supremacy" as I used them in my comment confusing, you can replace them with the terms "practical quantum supremacy, not supported by asymptotic analysis" and "practical computational supremacy, not supported by asymptotic analysis." In this case my proposed principle simply reads: Practical computational supremacy not supported by asymptotic analysis cannot be reached. 80. Stephen Jordan Says: Comment #80 December 5th, 2022 at 9:37 am It seems to me the premise that "things meet in the middle of the wormhole after jumping in" may have some complexity-theoretic meat to it. (Hopefully I am not posting the same comment twice. I think though that there was a snag in email verification the first time around.) Suppose Alice and Bob are separated by d light-seconds. Alice has a bit string x, and Bob has a bit string y, each of length n. Let \(f:\{0,1\}^n \to \{0,1\}\) be some function that requires \(T = 2^{\Omega(n)}\) steps to compute. The objective is for Bob to obtain \( f(x \oplus y) \) as soon as possible. For simplicity, suppose all computation, classical or quantum, occurs at one elementary operation per second. Classically, the best strategy is for Alice to transmit x to Bob, which takes d seconds, and then for Bob to compute \( f(x \oplus y) \), which takes T seconds. So, the soonest Bob can obtain the answer is after time d + T. Now, suppose Alice and Bob share entanglement. Does this help? We know from quantum communication complexity that sharing entanglement can help with some tasks. But given that f is fully general other than being hard to compute, it seems clear that the answer must be no. Now, suppose that Alice and Bob's entangled state represents a wormhole. Then, Alice and Bob can each send (i.e. upload) delegates into the wormhole who know x and y, respectively. (The delegates needn't be "minds". They could just be simple computer programs.) These delegates then "meet in the middle," share their bit strings, and proceed to compute \( f(x \oplus y) \). During this time Alice sends the necessary classical information to Bob so that he can extract these delegates from the wormhole and learn their answer. In this case, the computation of f by the delegates inside the wormhole and the transmission of classical information from Alice to Bob are happening in parallel, at least naively. So, the time needed for Bob to obtain \( f(x \oplus y) \) seems as though it has been reduced from d + T to \( \max\{d, T\} \). What gives? Physically, Alice and Bob share an entangled state \( |\psi\rangle \). If I understand correctly, the dynamics inside the wormhole where the delegates have met is ultimately implemented by unitaries that are local to the two sides: \( |\psi(t+1)\rangle = (U_A \otimes U_B)\, |\psi(t)\rangle \). And while this is going on, some classical information is in transit from Alice to Bob. So it must be that either: entanglement provides a generic speedup whereby the computation and the classical communication can effectively be done in parallel. (It seems that this must not be right. Maybe if we think about it hard enough we could point to a theorem guaranteeing that this could not be right.)
Or, the unscrambling step by which the delegates are extracted from the wormhole after the necessary classical information has been transmitted from Alice to Bob requires a number of computational steps that grows at least linearly with how much subjective time has passed for the delegates when they were inside the wormhole. (Maybe this lower bound on the complexity of unscrambling is a standard assumption.) 81. fred Says: Comment #81 December 5th, 2022 at 9:45 am The good news is that the authors will retract their confusing wormhole claim by using the same QC to reverse time in order to truly undo this mess... https://phys.org/news/2019-03-physicists-reverse-quantum.html 82. LK2 Says: Comment #82 December 5th, 2022 at 9:59 am Gil Kalai #73: Thank you very much for the additional explanations. Now I have to go through your papers to understand how these two statements: "'Practical quantum supremacy' cannot be achieved" and "Any form of 'practical computational supremacy' cannot be achieved" are proven. Thank you again very very much! 83. Adam Treat Says: Comment #83 December 5th, 2022 at 10:29 am LK2 #82, Gil himself acknowledges that it is a conjecture and that he has no proof. So don't look too long for the proof. At most you will get to understand the motivations behind the conjecture, which I am also trying to understand. At this point, I don't hold out too much hope though that I'll be able to understand it because I think even Scott has admitted to reading Gil's 1999 paper and subsequent work and still does not understand Gil's argument for how Fourier-Walsh analysis of noise in NISQ somehow leads to the conjecture. 84. Scott Says: Comment #84 December 5th, 2022 at 10:30 am Stephen Jordan #80: Extremely interesting, thanks! But if there's any computational speedup to be had by "doing the computation inside the wormhole," then shouldn't that also show up as something much more elementary: a speedup for an entangled Alice and Bob compared to unentangled ones? In general, don't we always maintain the invariant that, from the outside observer's perspective, anything whatsoever that's stated in terms of a non-traversable wormhole can be restated without mentioning the wormhole at all? Indeed, isn't that in some sense the whole point of ER=EPR? 85. Scott Says: Comment #85 December 5th, 2022 at 10:52 am Gil Kalai #79: No, I don't think I missed the point. I just think you're wrong, both about the past of quantum supremacy and about its future! Regarding the future, I'm happy to wait for future experiments to settle much of what's at issue between us, as I trust you are as well. Regarding the past, here's what Alex and I wrote in the 2011 BosonSampling paper: The fundamental worry is that, as we increase the number of photons n, the probability of a successful run of the experiment might decrease like c^-n. In practice, experimentalists usually deal with such behavior by postselecting on the successful runs. In our context, that could mean (for example) that we only count the runs in which n detectors register a photon simultaneously, even if such runs are exponentially unlikely. We expect that any realistic implementation of our experiment would involve at least some postselection. However, if the eventual goal is to scale to large values of n, then any need to postselect on an event with probability c^-n presents an obvious barrier.
Indeed, from an asymptotic perspective, this sort of postselection defeats the entire purpose of using a quantum computer rather than a classical computer. For this reason, while even a heavily-postselected Hong-Ou-Mandel dip with (say) n = 3, 4, or 5 photons would be interesting, our real hope is that it will ultimately be possible to scale our experiment to interestingly large values of n, while maintaining a total error that is closer to 0 than to 1. However, supposing this turns out to be possible, one can still ask: how close to 0 does the error need to be? Unfortunately, just like with the question of how many photons are needed, it is difficult to give a direct answer, because of the reliance of our results on asymptotics. What Theorem 3 shows is that, if one can scale the BosonSampling experiment to n photons and error d in total variation distance, using an amount of "experimental effort" that scales polynomially with both n and 1/d, then modulo our complexity conjectures, the Extended Church-Turing Thesis is false. The trouble is that no finite experiment can ever prove (or disprove) the claim that scaling to n photons and error d takes poly(n, 1/d) experimental effort. One can, however, build a circumstantial case for this claim--by increasing n, decreasing d, and making it clear that, with reasonable effort, one could have increased n and decreased d still further. Granted, we had no theorem to the effect that, if the error is held constant, then BosonSampling does not yield scalable quantum supremacy. Such theorems would come later, from the work of you and Guy and later Renema, Garcia-Patron, and others. Our focus on total variation distance, rather than (say) fraction of lost photons, also seems rather comically overoptimistic with hindsight. Nevertheless, the very fact that we stressed the need to continually decrease the error if you care about the scaling limit, shows that we weren't expecting scalable quantum supremacy from BosonSampling with a constant error rate. That, in turn, makes clear that when we discussed the prospect of experiments with particular numbers of photons like 50, we were already focused on what Aharonov et al are now calling "practical quantum supremacy." 86. Stephen Jordan Says: Comment #86 December 5th, 2022 at 11:21 am Scott 84: One can pose a question about what Alice and Bob can achieve if they share a certain entangled state and use local unitaries and classical computation, which makes no reference to wormholes. Namely, if a computation requires T timesteps and there is a time delay of d timesteps for classical data to travel from Alice to Bob, then what is the minimum time interval from when Alice and Bob receive their respective bit strings x and y to when Bob can output \( f(x \oplus y) \)? I conjecture that the answer is still d + T even if Alice and Bob are allowed to possess any (x,y independent) shared entangled state from the beginning. We also have the claim that "people who jump into wormholes can meet in the middle, share information, process it, and then be extracted back out of the wormhole and asked about their experiences, provided certain classical information is transmitted from one side to the other to enable the extraction". 
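Just to make the bookkeeping explicit, here is a toy sketch of the two accountings (the numbers and the one-operation-per-second cost model are made up for illustration):

```python
# Toy bookkeeping for the thought experiment above (all numbers hypothetical):
# d = light-travel delay from Alice to Bob, T = steps needed to compute f(x XOR y).
d = 1_000          # seconds of one-way classical communication delay
T = 1_000_000      # seconds of computation, at one elementary step per second

classical_time = d + T           # Alice transmits x, then Bob computes f(x XOR y)
naive_wormhole_time = max(d, T)  # delegates compute inside the wormhole while the
                                 # classical data needed for extraction is in transit
print(f"classical: {classical_time} s, naive wormhole accounting: {naive_wormhole_time} s")
```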
It seems to me the only way that the former conjecture and the latter claim are consistent is that unscrambling these people from the wormhole to ask them questions later requires not only classical communication from one end of the wormhole to the other, but also an amount of computation for the unscrambling which grows linearly with how much time they spent in the wormhole. This is a constraint from computational arguments that is stronger than the constraint coming purely from no-signaling. (The latter constraint being automatically satisfied by the necessity of the classical communication.) 87. shion arita Says: Comment #87 December 5th, 2022 at 11:23 am @scott#66 I think I essentially agree with you actually, Scott. I ultimately think that there's one part that's very confusing to me: The problem is, that, as far as I understand it, the claim is that Alice, who's being simulated at real-life location A, goes into the simulated wormhole, and Bob, simulated at real-life location B goes in, and they 'meet' in the simulation. Therefore, in this case, bits of information that depend on events at real life location A become correlated with bits that depend on events at real life location B, without any classical information exchange. How is this not FTL signaling? Even though us, outside the system, can't understand or access the signal, I don't see why that means that the signaling did not occur? 88. The Wormhole Publicity Stunt | Not Even Wrong Says: Comment #88 December 5th, 2022 at 12:22 pm [...] Something I should have linked to before is Scott Aaronson's blog posting about this, and the comments there. One that I think is of interest explains that SYK at large N is not [...] 89. Gil Kalai Says: Comment #89 December 5th, 2022 at 12:29 pm Thanks for the useful historical comments, Scott. I used a precise definition of "practical quantum supremacy" to present my proposed principle for why such "practical quantum supremacy" is out of reach. Again the heuristic principle I propose is "Computational supremacy not supported by asymptotic analysis cannot be reached" This seems an interesting statement in the interface between the theory of computing and physics and it can be interesting to discuss it. Based on what we now know (including Aharonov et al.), this principle, if true, implies that NISQ devices (and in particular RCS and boson sampling) cannot reach quantum supremacy. (We can study this proposed principle for smaller computational complexity classes, e.g. to look at examples where computationally inferior devices or algorithms perform better in the small/intermediate scale.) Scott: I just think you're wrong, both about the past of quantum supremacy and about its future! Scott, I am mainly interested in explaining and discussing my argument and other aspects of noise sensitivity in the context of NISQ systems, and examining current experimental claims. On those matters we disagree (and also on those we agree) I am very curious to know if I am right or you are. I am also quite interested in the history of the story, and I try to give accurate, fair, generous, and detailed credits. (I will look carefully at what you wrote.) But this is of little relevance to my comment. Adam Treat (#83) "At this point, I don't hold out too much hope though that I'll be able to understand it because I think even Scott has admitted to reading Gil's 1999 paper and subsequent work and still does not understand Gil's argument for how Fourier-Walsh analysis of noise in NISQ somehow leads to the conjecture." 
I will be happy to explain it privately (and other aspects of the noise stability/noise sensitivity theory) both to you and to Scott. Drop me an email, guys. 90. James Cross Says: Comment #90 December 5th, 2022 at 12:34 pm #81 Fred QCs actually are Universal Proving machines that can allow us to prove any possible sup[er]position about reality and the universe. 91. Scott Says: Comment #91 December 5th, 2022 at 1:51 pm Gil #89, I used the term "practical quantum supremacy" in my original post, before you'd even shown up in the comment section, because it was the term Aharonov et al used for what I and others had been talking about for more than a decade. To whatever extent your definition differs from Aharonov et al's, why is that relevant here? 92. LK2 Says: Comment #92 December 5th, 2022 at 1:55 pm Adam Treat #83: I will keep in mind your comment as I go through Gil's papers. Thanks! 93. Ted Says: Comment #93 December 5th, 2022 at 2:30 pm Scott, a technical question inspired by your long blog post on AdS/CFT and brain uploading back in July: Can the duality between the "regular QM" picture and the wormhole picture be implemented efficiently? I.e. if one is given only the quantum logic circuit implementing this simulation on a large number of qubits, can one efficiently compute the details of the corresponding process of traveling through the wormhole in the complementary description? And vice versa, if one is only given the details of the wormhole description? If the duality can't be computed efficiently, then it seems to me that that would even further weaken the (already very weak!) claim to have "created/simulated" a wormhole. Because if we don't require that the simulation can be "read out" efficiently, then we can equivalently say (via waterfall-type arguments) that (a larger version of) the quantum circuit would have simulated every physical process. 94. fred Says: Comment #94 December 5th, 2022 at 3:03 pm James Cross #90 "QCs actually are Universal Proving machines" I never said they weren't, but obviously such QCs don't exist right now and probably not for a few decades (if ever). So I'm talking about the commercial classical computers we do have vs the type of quantum *gadgets* we only have at our disposal right now ... a few dozens noisy qubits just aren't a QC, even if their behavior can't be simulated easily. 95. Simulan en un ordenador cuantico el llamado protocolo de "teletransporte en un agujero de gusano atravesable" - La Ciencia de la Mula Francis Says: Comment #95 December 5th, 2022 at 3:44 pm [...] Aaronson, <> Shtetl-Optimized, 02 Dec 2022; Mateus Araujo, <> More Quantum, 01 Dec 2022; Douglas Natelson, [...] 96. Actualites quantiques de novembre 2022 Says: Comment #96 December 5th, 2022 at 4:19 pm [...] publie dans Nature, un article de mise au point publie dans ArsTechnica, un post de Scott Aaronson, un thread interessant dans Twitter que tous les physiciens n'ont pas encore quitte pour [...] 97. John Says: Comment #97 December 5th, 2022 at 6:32 pm In the preprint it says the runtime of the algorithm is 2^(O(log (1/e)) = poly(1/e) and it is not practical due to a large exponent. So is it still possible that there is a superpolynomial advantage for NISQ devices conducting RCS experiments? 98. Scott Says: Comment #98 December 5th, 2022 at 7:00 pm John #97: In the "anti-concentration regime" that they study, it could at best be a polynomial advantage, albeit conceivably a large one, maybe even with the exponent depending on the noise rate. 
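For intuition, with toy numbers of my own and a purely hypothetical runtime form (NOT the actual bound from the Aharonov et al. paper), even a "polynomial" runtime can be astronomical once the exponent depends on the noise rate:

```python
# Purely illustrative: suppose, hypothetically, the classical runtime scaled like
# n ** (1/eps) for noise rate eps. That's "polynomial in n" for any fixed eps,
# yet hopeless in practice for small eps.
import math

n = 100
for eps in (0.1, 0.05, 0.01):
    exponent = 1.0 / eps
    digits = exponent * math.log10(n)   # order of magnitude of n ** (1/eps)
    print(f"eps = {eps}: n^(1/eps) = n^{exponent:.0f} ~ 10^{digits:.0f} steps")
```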
The other remaining hope is to go outside the anti-concentration regime--for example, by using constant-depth random noisy quantum circuits with no spatial locality. No one knows yet whether that yields scalable quantum supremacy or not. 99. The truth about wormholes and quantum computers - The magazine that never fails to amaze Says: Comment #99 December 6th, 2022 at 1:07 am [...] on the quantum computer, but missed the boat entirely on this front, as many others were quick to correctly point [...] 100. Mateus Araujo Says: Comment #100 December 6th, 2022 at 2:19 am I've emailed Daniel Harlow asking him to substantiate his claim (or unendorsed explanation?) that the simulated Alice and Bob could meet in the middle of the simulated wormhole. His answer was that I need to learn more AdS/CFT to understand it. I think this firmly puts this claim in the camp of complete bullshit. 101. The truth about wormholes and quantum computers - California Press News Says: Comment #101 December 6th, 2022 at 3:17 am [...] on the quantum computer, but missed the boat entirely on this front, as many others were quick to correctly point [...] 102. Scott Says: Comment #102 December 6th, 2022 at 9:09 am Mateus #100: My guess is that if you studied AdS/CFT more, you would indeed understand better why people came to believe that a simulated Alice and Bob could meet in the middle of a simulated wormhole ... and also there'd be a metaphysical leap that you'd still consider to be bullshit. But at least you'd understand the statement modulo that leap (I wish physicists would be more explicit about such leaps, incidentally!). 103. Adam Treat Says: Comment #103 December 6th, 2022 at 9:38 am Scott #102, I'll guess the opposite. Simply studying the papers of AdS/CFT will not be dispositive for Mateus or others to understand why others believe simulated Alices/Bobs may meet in the middle. I don't think there is *any* technical reason or feature of AdS/CFT that sheds any light on such beliefs. Rather, I think it is only through embedding oneself in the culture of those believers that one may come to understand the belief. Further, to truly understand it I think one has to have a suspension of skepticism brought on by repeated inculcation with that culture, and have weaknesses in human psychological biases and peer pressure that are sufficient to do away with said skepticism. Even though you've had such exposure to the culture and know the AdS/CFT conjecture well enough, I'd submit this is the reason you don't share the belief nor really understand what motivates it from a technical perspective. The culture/psychological pressure is not enough to overcome your healthy penchant for not buying in. 104. Carey Underwood Says: Comment #104 December 6th, 2022 at 9:40 am Professor Matt Strassler has been weighing in on this as well (as can be expected given that he's my other favourite physics blogger); his post might be illuminating on some of the wormhole questions. https://profmattstrassler.com/2022/12/06/how-do-you-make-a-baby-cartoon-wormhole-in-a-lab/ 105. Scott Says: Comment #105 December 6th, 2022 at 10:29 am Adam Treat #103: How much have you studied this, in order to feel confident of what you say? Do you deny, for example, that classical GR admits solutions with non-traversable wormholes in which Alice and Bob, entering from opposite ends, could nevertheless meet in the middle? Or that there are reasons to believe that, in quantum gravity, such solutions would be relevant when you have two entangled black holes? 106.
Mateus Araujo Says: Comment #106 December 6th, 2022 at 10:37 am Scott #102: My point is that this is unscholarly behaviour. It reminds me of Joy Christian. When confronted with the mistakes in his "disproof" of Bell's theorem, he would dismiss them by saying people needed to study geometric algebra. 107. Adam Treat Says: Comment #107 December 6th, 2022 at 12:07 pm Scott #105, I guess I've studied it enough to feel confident in my assertions, but that's a bit of a tautology isn't it? I can acknowledge it is possible my confidence is misplaced, but luckily I'm enough of an open mind to be happy when corrected. "Do you deny, for example, that classical GR admits solutions with non-traversable wormholes in which Alice and Bob, entering from opposite ends, could nevertheless meet in the middle?" I deny that those solutions have anything meaningful to say about what it might be *like* to meet in the middle. More forcefully, I deny that those solutions characterize in any way *that it is like anything at all* for intelligent agents to "meet in the middle." "Or that there are reasons to believe that, in quantum gravity, such solutions would be relevant when you have two entangled black holes?" Reasons to believe is too open ended of a statement to disagree with it in any kind of specific way. However, given any such *reason* I'd likely contend it is *also* perfectly reasonable to contend that the actual theory of quantum gravity might make those reasons into total and complete non-sequitur. I also contend that it is likely the case that what we don't know about real QG far outpaces what we do know. 108. Adam Treat Says: Comment #108 December 6th, 2022 at 12:11 pm In a happy coincidence I see that Matt Strassler has something to say that might be relevant to the situation: "Extremely Important Caveat [similar to one as in the last post]: Notice that the gravity of the simulated cartoon wormhole has absolutely nothing to do with the gravity that holds you and me to the floor. First of all, it's gravity in one spatial dimension, not three! Second, just as in yesterday's post, the string theory (from which we ostensibly obtained the JT gravity) is equivalent to a theory of quarks/gluons/etc (from which we might imagine obtaining the SYK model) with no gravity at all. There is no connection between the string theory's gravity (i.e. between that which makes the wormhole, real or cartoonish) and our own real-world gravity. Worst of all, this is an artificial simulation, not the natural simulation of the previous post; our ordinary gravity does interact with quarks and gluons, but it does not interact with the artificially simulated SYK particles. So the wormholes in question, no matter whether you simulate them with classical or quantum computers, are not ones that actually pull on you or me; these are not wormholes into which a pencil or a cat or a researcher can actually fall. In other words, no safety review is needed for this research program; nothing is being made that could suck up the planet, or Los Angeles, or even a graduate student." https://profmattstrassler.com/2022/12/06/ how-do-you-make-a-baby-cartoon-wormhole-in-a-lab/#more-13380 Emphasis mine. I haven't read his whole post - his caution just stood out because he highlighted it in red - so I don't know if it is relevant to Harlow's account, but it is striking enough I thought it might be useful to call out. 109. Adam Treat Says: Comment #109 December 6th, 2022 at 12:34 pm Err, one part I emphasized is the wrong thing. 
Anyway, the whole blog post is interesting and would recommend the read for anyone still interested in the whole "wormhole" thing. 110. Scott Says: Comment #110 December 6th, 2022 at 1:55 pm Adam Treat #107: Well yes, that a simulated holographic wormhole can't bend the spacetime of "real" observers like ourselves is the same (obvious) point that I made, both in this post and in the pages of the New York Times! But it doesn't answer the question I thought we were talking about, namely whether it would ever make sense to speak of simulated observers, living in the bulk dual theory, whose spacetime would be bent in the manner predicted by that wormhole solution to GR (which in particular would let those simulated observers "meet in the middle" of the simulated wormhole). I would never object to anyone speculating about such fun things! The one part that I do object to, is people passing over the metaphysical enormity of what needs to be presupposed in such a discussion, as if it didn't even require comment. 111. Adam Treat Says: Comment #111 December 6th, 2022 at 3:16 pm Scott #110, Yes, that's why I said I emphasized the wrong point in the blog comment... I challenge the idea that studying holographic gravity duals will help to clarify because the whole concept here of "simulated observers" and "meeting in the middle" is too nebulous. At this point I'm not really sure whether we disagree. 112. Nick Says: Comment #112 December 6th, 2022 at 8:06 pm I really appreciate you calling out so strongly and unambiguously that there is a big problem with what happened here. Even if your anti-hype work doesn't often have noticeable short-term effect on the information ecosystem around QIS (though sometimes it does!), it makes a big difference. If it weren't for you and a few other experts who regularly make honest (and if necessary brutally deflationary assessments) of developments in QIS, very many MORE people would be completely taken in by hype events. Especially the people like myself who are interested in quantum computing and know something about it, cannot evaluate claims in theoretical physics, and think it would be super awesome if any of these claims about very profound physics experiments being done in a quantum computer were true. If you and others didn't do this, future people working in QIS (many of whom read your blog I'm sure) would be way worse off. 113. Publicity Stunt Fallout | Not Even Wrong Says: Comment #113 December 6th, 2022 at 8:51 pm [...] Latest news this evening from Scott Aaronson at the IAS in Princeton: [...] 114. Andreas Karch Says: Comment #114 December 6th, 2022 at 10:15 pm I am afraid we will indeed remember that wormhole story a long time from now -- the moment a promising research field lost its credibility. Some big shots got to step up to the plate and call this out for the nonsense this is. Otherwise we'll soon all be seen as crazy. I can see why they are reluctant. The paper has some nice results. But this is not what this is about. The staggering exaggeration of what has been accomplished has the potential to hurt the entire enterprise. 115. manorba Says: Comment #115 December 7th, 2022 at 5:25 am Andreas Karch #114 Says: a promising research field lost its credibility. i wouldn't be so drastic, but yes communicating science to the general public is getting at an all time low. I again advocate for a change in budgeting science resources: allocate some to enlighten people, otherwise i fear it's gonna bite us in the ass in the future. 
actually it's already happening. Incidentally, one of the pivotal moments to me was the announcement of FTL neutrinos here in italy some years ago. 116. Nancy Lebovitz Says: Comment #116 December 7th, 2022 at 7:23 am David Nirenberg wrote the very impressive _Anti-Judaism: The Western Tradition_, an analysis of anti-semites from their own words. It just goes to show how it's easy to be fooled outside your specialty. I kept seeing the headlines for a week or two about creating a wormhole, and it's only more recently that I was seeing clearer headlines about creating a "wormhole". 117. George Ellis Says: Comment #117 December 7th, 2022 at 7:57 am Way back, "all entanglement corresponds to a kind of wormhole". False! Entanglement was established by using the Schroedinger equation in Minkowski spacetime. No wormhole anywhere in sight. Anton Zeilinger's Nobel prize winning entanglement experiment has nothing to do with any wormhole anywhere. It's propaganda, folks. Don't fall for it. ER = EPR is false. EPR happily takes place without ER. 118. Scott Says: Comment #118 December 7th, 2022 at 8:15 am George Ellis #117: I think the correct (and extremely interesting) nuggets in "ER=EPR" are: (1) There are forms of entanglement, for example between two black holes, that admit useful dual descriptions involving non-traversable wormholes. (2) Conversely, a non-traversable wormhole between two black holes admits an equivalent description in terms of entanglement between the black holes. With the fact that the wormhole can't transmit information being directly related to the fact that entanglement can't, in a manner that's now understood. Where I agree with you is this: I think that, for the vast majority of entangled states one cares about in physics, a dual description in terms of wormholes simply isn't useful, even in those cases where it meaningfully exists (which is far from all of them). 119. Evan Says: Comment #119 December 7th, 2022 at 1:13 pm Scott, "We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear. That's why, from the very beginning, we targeted the "practical" regime of 50-100 qubits" Is there any literature or people out there narrowing down this window of experiment sizes in a more precise way? I have heard people involved in the Google rcs, when approached with the latest classical simulations of their experiment, deflect criticism on the grounds that the advantage claim would clearly have held if they had used "just one or two more qubits". While maybe true for n=54, it seems like moving this goalpost much further is problematic. Is there an "n" and "t" such that a classical simulation of Google's RCS experiment with number of qubits "n" in time less than "t" (wall clock or compute hours, or pick your own a parameter) would definitely convince you that the experiment could never be used to (and so never actually did) demonstrate computational advantage? 120. Quantum Circuits in the New York Times | Godel's Lost Letter and P=NP Says: Comment #120 December 7th, 2022 at 1:16 pm [...] Scott Aaronson: "If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper." (See also Scott's post here.) [...] 121. 
I Says: Comment #121 December 7th, 2022 at 4:27 pm Are you bearish on topological data calculations being a potential path to quantum supremacy? That's an approach that you don't seem to mention as often as other paths to QS. 122. Davidson Says: Comment #122 December 7th, 2022 at 4:27 pm Reflecting on Scott's comments added on 12/06: In principle, both the media and scientists should communicate the truth without exaggeration. But in reality, both journalists and scientists are evaluated by how many people read their work, which is a fair measurement if everyone is truthful. From a game theory perspective, if everyone is truthful, then one can gain a tremendous advantage by being even slightly untruthful. To me, what we are seeing is a "clickbait", something not too far from reality but will attract a lot more eyeballs. In the age of the internet, clickbaits seem impossible to avoid, so if this indeed falls into the discussion of clickbaits, it might just be part of a much more serious problem. It's nice having people like Scott to demystify it, but Scott's power is limited. 123. Scott Says: Comment #123 December 7th, 2022 at 4:56 pm I #121: Yes, "bearish" is a good word. There have been some recent dequantizations, and if a quantum speedup does exist for topological data analysis, it will depend on finding examples of practical importance where (eg) the Betti numbers were enormous, and where it sufficed to estimate them up to an exponentially large additive error.
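To spell out with toy numbers of my own (purely illustrative) why that additive-error caveat bites:

```python
# Toy illustration (hypothetical numbers): an estimate of a Betti number b that
# carries an additive error on the order of 2**n / poly(n) is only informative
# when b itself is comparably enormous.
n = 30
additive_error = 2**n / n**2                 # hypothetical error scale of the estimate
for betti in (10**3, 10**6, 2**n):           # small, moderate, and "enormous" values
    informative = betti > additive_error
    print(f"b = {betti:>13,}: error ~ {additive_error:,.0f} -> informative: {informative}")
```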