[HN Gopher] Google's quantum computer instantly makes calculatio...
___________________________________________________________________
Google's quantum computer instantly makes calculations that take
rivals 47 years
Author : andrewstuart
Score : 181 points
Date : 2023-07-03 02:04 UTC (20 hours ago)
(HTM) web link (www.smh.com.au)
(TXT) w3m dump (www.smh.com.au)
| marricks wrote:
| > He said: "This is a very nice demonstration of quantum
| advantage. While a great achievement academically, the algorithm
| used does not really have real world practical applications,
| though.
|
| > "We really must get to utility quantum computing - an era where
| quantum computers with many thousand qubits actually begin to
| deliver value to society in a way that classical computers never
| will be able to."
|
| This seems to be the most level headed take from the paper and an
| explanation why (if claims are to be believed) we haven't seen
| anything useful come from quantum computing yet that couldn't be
| done by classical computers.
|
| Maybe it still is all hype and the reason we haven't seen a
| useful calculation is because the only problems it can solve
| better than others happen to be especially contrived. I guess
| time will tell.
| tedivm wrote:
| I don't think it's that the concepts are contrived (although
| the test cases certainly are), it's just that our quantum
| computers are really early in their development and can't do
| the more complicated things yet. It's like if we had
| calculators that took an hour to do each arithmetic operation:
| the fact that people wouldn't use them doesn't mean arithmetic is
| contrived, just that it isn't as powerful as better
| alternatives.
|
| As the quote mentions, we'll need "many thousand qubits" to
| reach the point where society itself is getting new benefits
| out of things. These contrived cases help figure out how to
| scale the whole system.
| upofadown wrote:
| It's more like if we had calculators that could not do any
| arithmetic operation at all...
|
| Someone someday might figure out how to make the calculator
| actually calculate, but not yet.
| AbrahamParangi wrote:
| There's a fundamental question as to whether or not it's
| exponentially difficult to add additional qubits. If each
| marginal qubit is 5% more difficult to add than the previous
| one, the task would be essentially impossible - and even if it
| wasn't impossible it would turn out to basically be cheating
| (in that you would be doing exponential work via either the
| quantum computing route or the conventional route).
| alpineidyll3 wrote:
| Note that there are good physical reasons why the cost of
| QC may grow exponentially with qubits. Refrigeration is
| exponentially inefficient as T -> 0, and the gap of a quantum
| system, which sets the temperature you must cool to, shrinks
| as you couple new degrees of freedom. This dynamic has been
| the basic reason for the sub-exponential progress in the
| area (despite exponential expenditure).
| s1dev wrote:
| This is not quite true. You only need to keep the qubits
| at a fixed temperature as you scale the system, so the
| resources required to add additional qubits grow only
| polynomially with the system size. Once you have many
| qubits with a sufficiently low (but constant) error rate,
| you can do quantum error correction which also only has
| polynomial overhead.
| alpineidyll3 wrote:
| No, you are missing the fact that if the qubits are
| coupled, the system's fundamental gap has shrunk, demanding
| a lower temperature for the same error rate.
|
| You should think more about why your rosy scheme hasn't
| worked yet; if you can't explain that empirically, maybe
| you don't quite understand.
| abdullahkhalids wrote:
| Not for photonic quantum computing. Only detectors
| require cooling, and it is possible to build adequately
| sized quantum computers with a constant number of detectors
| using loop-based architectures.
|
| Even more realistic architectures are very, very cost-
| effective in the number of components:
| https://quantumfrontiers.com/2023/06/21/what-is-the-
| logical-...
| pclmulqdq wrote:
| Yeah, it always struck me as odd that the main quantum
| computing research labs went for the "VLSI" approach
| straight away rather than building room-sized computers
| with qubits that are known to be more stable and don't
| require such aggressive refrigeration.
|
| Between the costs of refrigeration/fabrication and the
| increased speed of decoherence, it's not hard to imagine
| that the approach of using superconducting qubits may be a
| dead end, despite quantum ECC.
| abdullahkhalids wrote:
| There are very valid reasons for this. Historically, this
| is what happened.
|
| * People did very basic qubit experiments with NMR in the
| late 90s. Very noisy experiments.
|
| * Around 2000 they realized photonic quantum computers
| would be "easiest" because light experiences very little
| noise. The problem was that it's insanely difficult to do
| non-linear interactions between photons (which is
| necessary to do any sort of non-trivial classical or
| quantum computing with photons). In 2001, somebody came
| up with a clever way of doing non-linear interactions by
| using fast detectors.
|
| * Huge efforts started to try and build photonic quantum
| computers. Unfortunately, around 2004-05, people started
| to do estimates and it turned out that the number of
| sources and detectors needed with the clever way was
| humongous. Far more than we could hope to achieve, and
| there didn't seem to be any way of reducing it. People
| abandoned photonic quantum computing and started doing
| ion traps and superconducting.
|
| * Interestingly around the same time in 2005, there
| emerged an alternate method of building photonic quantum
| computers, based on "cluster states". However, the method
| also had the same humongous resource problem, but it had
| an advantage: the framework could be modified and played
| with to improve it. Over the next two decades, very
| slowly people figured out improvement after improvement
| to this architecture to bring down the resource costs.
|
| * At this point, this cluster-state photonic architecture
| has improved quite a bit and is starting to become very
| competitive with ion traps and superconducting qubits.
| PsiQuantum (whose article I shared above) is the leader
| in this right now. And they might win the race.
| alpineidyll3 wrote:
| Yes, but all existing photonic platforms use post-
| selection which is even more clearly exponentially lossy.
| Although this could be solved with a deterministic single-
| photon source if one can be found. Photonic quantum
| computing is an especially funny post-transistor paradigm
| because classical photonic computing is also quite
| attractive.
| abdullahkhalids wrote:
| If you read the linked article, it explains how to avoid
| exponential loss on post-selection.
|
| > Imagine you can toss coins, and you need to generate 20
| coins showing Heads. If you repeatedly toss all 20 coins
| simultaneously until they all come up heads you'd
| typically have to do so millions of times before you
| succeed. This is even more true if each coin also has a
| 20% chance of rolling off the table (akin to photon
| loss). But if you can toss 20 coins, set aside (switch
| out!) the ones that came up heads and re-toss the others,
| then after only a small number of steps you will have 20
| coins all showing heads. This large gap is fundamentally
| why the first whammy is not relevant: To generate a large
| photonic entangled state we begin by probabilistically
| attempting to generate a bunch of small ones. We then
| select out the success (multiplexing) and combine
| successes to (again, probabilistically) generate a
| slightly larger entangled state. We repeat a few steps of
| this. This possibility has been appreciated for more than
| twenty years, but hasn't been done at scale yet because
| nobody has had a good enough optical switch until now.
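|
| A quick classical sketch of that coin analogy (my own toy, not
| from the linked post; the coin count and strategy follow the
| quote, the 20% loss rate is ignored for simplicity):
|
|     import random
|
|     def all_at_once(n=20):
|         """Re-toss all n coins until one toss is all heads."""
|         tosses = 0
|         while True:
|             tosses += 1
|             if all(random.random() < 0.5 for _ in range(n)):
|                 return tosses
|
|     def set_aside(n=20):
|         """Keep coins showing heads, re-toss only the rest."""
|         remaining, rounds = n, 0
|         while remaining > 0:
|             rounds += 1
|             remaining = sum(random.random() >= 0.5
|                             for _ in range(remaining))
|         return rounds
|
|     print(set_aside())   # typically ~5-6 rounds
|     # all_at_once() needs ~2^20 (about a million) attempts
|
| The gap between the two is exactly the "multiplexing" point: the
| switch-out strategy needs logarithmically many rounds instead of
| exponentially many attempts.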
| alpineidyll3 wrote:
| Haha, a self-promotional blog post without any
| experimental data from a lab self-interested in the
| technology is somehow not persuasive to me.
|
| There is no way to add entangled particles to an
| entangled photon state without new light matter
| interaction which may (and almost certainly) bring
| decoherence.
|
| I.e., to claim a good enough switch is possible is a claim
| demanding a ton of evidence, and there is none right now.
|
| Ofc I would be thrilled to see it work! But explaining a
| scheme in back of the envelope fashion and measuring it
| are two vastly different things.
| laxd wrote:
| > ... quantum computers are really early in their development
| and can't do the more complicated things yet. It's like if we
| had calculators that took an hour to do each arithmetic
| operation ...
|
| Reminds me of the story George Stibitz tells after developing
| one of the first devices that could be called a computer.
| Management at Bell Labs was not impressed that he had built a
| "20000 dollar calculator".
|
| https://youtu.be/qundvme1Tik?t=259
| fnordpiglet wrote:
| It's a milestone on a roadmap. Science like this doesn't happen
| in eureka moments - it's a methodical slog through our
| ignorance where we latch in wins bit by bit over decades.
| Quantum computing, AI, fusion, curing cancer, ending aging,
| they're pinnacle achievements at the extended play of
| Civilization.
| mistermann wrote:
| A potential hidden bonus: if quantum computing theories get
| enough attention, maybe people will start to consider using the
| quantum computing abilities of the human mind with conscious
| intentionality (since by the time we get computers to be able
| to do it, it may be too late)!
| wholinator2 wrote:
| What are these capabilities of which you speak? Or have i
| missed the sarcasm
| acheron wrote:
| I'm guessing he's referring to the Roger Penrose stuff --
| https://en.m.wikipedia.org/wiki/The_Emperor%27s_New_Mind ,
| etc.
| kelseyfrog wrote:
| In many other worlds gp got tremendous karma for their
| post. That's how quantum mental abilities work.
| ithkuil wrote:
| Do you mean: use a quantum random number generator and
| emit random words? Chances are that in some world the
| comment is insightful.
| mistermann wrote:
| Well, generate reality for one. It's not a trivial detail,
| but it tends to be dismissed or taken for granted (or
| downvoted lol).
|
| Someone may venture into this territory some day, and
| perhaps that someone will find some travelling companions
| to make the journey more exciting and productive....time
| will tell!
| weard_beard wrote:
| The trick is you can only flip as yet unobserved bits. If
| we know the asteroid is coming we can't delete it.
| mistermann wrote:
| Some bits can even be flipped in broad daylight while
| everyone's watching. One of the most popular ways to flip
| bits _is by telling stories_ in which the bits have been
| flipped.
| flangola7 wrote:
| Generate reality? I am not crystal clear on what you're
| talking about. Quantum computers are still just
| computers, not magical replicators from Star Trek.
| mistermann wrote:
| I'm talking about the mind's ability to generate what
| "is", and "is not". It is a curso no doubt, but it is
| also sometimes a blessing. But usually it runs unnoticed
| (or even: _denied_ ) in the background, duffing things
| up.
|
| And when it gets noticed, sometimes really weird things
| happen.
| wouldbecouldbe wrote:
| Quantum computing feels like nuclear fusion. Been hearing
| about how it's going to change the world since I was a kid.
| hedora wrote:
| Well, in fairness, I could say the same about solar, improved
| batteries, etc.
| marricks wrote:
| > solar, improved batteries
|
| Neither of those sounded as sexy (i.e. hyped) to me as
| nuclear fusion and quantum computers.
| imchillyb wrote:
| > solar, improved batteries
|
| >> Neither of those sounded as sexy (i.e. hyped) to me as
| nuclear fusion and quantum computers.
|
| Breaking News: The Gates Foundation has actually deployed
| a product -- Solary!
|
| Meet Solary, The Gates Foundation replacement for
| salaries and a new way to power your entire home. This
| $1,000 (price pending review) device can provide a multi-
| family, multi-story, dwelling with power for 100 years.
| The software called Solary GoFY (possibly a subscription
| service) serves as a full and complete monitoring
| solution for the premises!
|
| --
|
| See, solar can be sexy!
| skywhopper wrote:
| Solar has become incredibly inexpensive in the last decade,
| and batteries likewise are many times better than they were
| 20 years ago. Is there further to go? Yes, but progress has
| been steady and real applications are all around you for
| both. Our phones are marvels of battery technology that was
| impossible 15 years ago.
|
| Fusion and quantum computing, on the other hand, have not
| yet solved one problem or made net progress. They have
| great promise but the practicalities may ultimately
| outweigh the benefits. Meanwhile, solar-charged batteries
| powering high performance electric cars are more common
| every day.
| imchillyb wrote:
| Quantum computing has been just as sexy a topic as stable
| fusion.
|
| Those were/are goalposts. We've never actually /seen/ the
| goal though, and the posts are a collective hallucination. It
| IS a pretty nice hallucination though.
|
| Since we as a species don't quite understand quantum
| interactions, it could be quite some time before we can build
| ordered and predictable systems out of those interactions.
|
| It could take days/weeks/months/years/decades/millennia...
| who knows? The fun is in watching and playing.
| [deleted]
| oldgradstudent wrote:
| I'm sure it's faster than simulation, but in what sense is this a
| computer and in what sense is that a computation?
| [deleted]
| sporkl wrote:
| I don't remember the details, but the reason the 2019 paper was
| challenged was because Google did not take into account quantum
| simulation optimization techniques when estimating how long the
| computation would run on classical computers. I wouldn't be
| surprised if there's a similar situation here, where unconsidered
| optimizations prevent this from being quantum advantage.
| klyrs wrote:
| As I recall, they did not take the error rate into account. The
| result was dashed by somebody looking at their error rate and
| saying "we can compute that distribution to within 5%* with a
| classical computer." One would hope that they learned from
| this, but historically, supremacy claims have been rather
| short-lived.
|
| * number pulled out of a hat
| oldgradstudent wrote:
| The main challenge for the Google quantum supremacy result was
| that calling it a computer or what it does a computation never
| made any sense.
| mikewarot wrote:
| The cited paper[1] references "random circuit sampling" which is
| defined in [2], which then gets so heavy into abstract math that
| I give up.
|
| Can someone explain this in terms an EE or programmer can
| understand?
|
| [1] https://arxiv.org/abs/2304.11119
|
| [2] https://arxiv.org/abs/2007.07872
| B1FF_PSUVM wrote:
| > references "random circuit sampling"
|
| It's either Monte Carlo or bullshit.
| canadianfella wrote:
| [dead]
| OkayPhysicist wrote:
| [2] is actually a phenomenally easy read for a physics paper,
| but it does assume an undergraduate physics degree's level of
| background.
|
| First, it's important to know that a quantum state (written as
| |letter>) is not directly observable. We can think of it as
| some vector, to which we can only apply an operation to get
| out a scalar observable.
|
| A quantum circuit is a series of operations on a quantum state
| to create a different quantum state, which can be modeled as a
| matrix U, which is unitary (meaning U*U = I, where U* denotes
| the conjugate transpose).
|
| The goal of "sampling a random quantum circuit" is trying to
| find the probability of observing some given observable if you
| keep applying random quantum circuits to a given input state.
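|
| A toy numerical sketch of that setup (my own illustration, not
| from the paper): draw a Haar-random unitary U for 3 qubits,
| check U*U = I, and compute the output distribution you would be
| sampling from after applying it to |000>.
|
|     import numpy as np
|
|     n = 3                    # qubits
|     dim = 2 ** n
|     rng = np.random.default_rng(0)
|
|     # Haar-random unitary via QR of a complex Gaussian matrix
|     z = (rng.normal(size=(dim, dim))
|          + 1j * rng.normal(size=(dim, dim)))
|     q, r = np.linalg.qr(z)
|     u = q * (np.diag(r) / np.abs(np.diag(r)))
|
|     assert np.allclose(u.conj().T @ u, np.eye(dim))  # unitary
|
|     state = np.zeros(dim, dtype=complex)
|     state[0] = 1.0                        # |000>
|     probs = np.abs(u @ state) ** 2        # Born rule
|     print(probs, probs.sum())             # distribution, sums to 1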
| pclmulqdq wrote:
| > Can someone explain this in terms an EE or programmer can
| understand?
|
| Unfortunately not. The physicists who are behind quantum
| computing don't think the same way, and go straight for the
| abstract math to solve any problem. I am pretty sure they don't
| understand that most of our progress on computing up to this
| point is _because_ you don't need to go into Galois fields or
| discrete math to describe what a computer or algorithm does.
| I'm also pretty sure that they don't know of another way to
| think about it.
| OkayPhysicist wrote:
| The second paper shared by the GP is actually just about as
| dumbed down as you can feasibly get. The problem is that A)
| it uses some basic jargon, because it's fair to assume the
| only people interested in the result are people who
| understand the jargon and B) it's describing a completely
| impractical problem, designed to maximize the difference in
| performance between a quantum and classical computer rather
| than do anything particularly useful.
| swframe2 wrote:
| (I don't have a background (or interest) in quantum computing)
| I've tried some of the "chat with pdf" websites for other
| topics. It seems like they could help you understand the paper.
| If you give it a try, could you report back here on how it
| worked for you?
| pyentropy wrote:
| Are you familiar with logic circuits (those made of gates like
| AND, OR, XOR, NAND)? Just like they are the building blocks of
| classical computers, the building blocks of quantum computers
| are quantum circuits.
|
| Quantum circuits are made of quantum logic gates like Hadamard,
| CNOT, Z, CZ, etc. Instead of bits as inputs and outputs,
| quantum logic gates have qubits. Unlike boolean logic where
| bits are 0 and 1, a qubit is a 2D vector [a b] where a and b
| are complex numbers, corresponding to a superposition of the
| zero and one bases: a * |0> + b * |1>. You can visualise a
| qubit as a point on a sphere, the so called Bloch sphere [1]
|
| There are multiple ways to implement a qubit, but you need to
| start with some quantum phenomenon. An example is the
| polarisation of a photon, so horizontal could be |0> and
| vertical polarisation could be |1> and the qubit is represented
| as complex vector of these two. If you've studied linear
| algebra you know manipulating a vector often involves linear
| transformations. Any linear transformation can be represented
| as a matrix - so applying gates is just doing matrix
| multiplication. Unary gates are 2x2 matrices and binary gates
| are 4x4 matrices - for photons they would be implemented with
| mirrors and optical waveplates. Measuring the polarisation at
| the end is the output. The output is not deterministic but it
| always follows the same distribution, so you could design a
| circuit that has |001> X% of the time, |010> Y%, |111> Z% of
| the time, etc. such that X + Y + Z + .. = 100%.
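|
| A minimal sketch of "gates are just matrices" (my own example,
| not the commenter's): a Hadamard on the first qubit followed by
| a CNOT turns |00> into a Bell state, and sampling gives 00 or 11
| about half the time each.
|
|     import numpy as np
|
|     H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 2x2 unary gate
|     I = np.eye(2)
|     CNOT = np.array([[1, 0, 0, 0],                # 4x4 binary gate
|                      [0, 1, 0, 0],
|                      [0, 0, 0, 1],
|                      [0, 0, 1, 0]])
|
|     state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
|     state = CNOT @ (np.kron(H, I) @ state)         # run circuit
|
|     probs = np.abs(state) ** 2                     # output dist.
|     samples = np.random.choice(["00", "01", "10", "11"],
|                                size=1000, p=probs)
|     print(dict(zip(*np.unique(samples, return_counts=True))))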
|
| I'm not too familiar with the details of random circuit
| sampling, but the idea is that you start with a big circuit
| that wasn't intentionally designed and therefore has no known
| properties we can exploit - instead it's a random mess of
| transformations to the qubits. A classical computer cannot run
| big quantum circuits - N gates with the 49 Google qubits
| requires like 2^49 * N^3 classical gates, so it won't be able
| to calculate the output distribution. However, what we can do
| is run the quantum circuit many times (do measurements on the
| quantum computer) and collect many samples. Given enough
| samples, a classical computer can verify whether there's
| consistency between them and whether an actual transformation
| produced them (and therefore quantum computation happened) or
| it's just pure noise / garbage, using cross-entropy benchmarks
| [2].
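|
| A back-of-the-envelope sketch of that verification step (my own
| toy, not the benchmark Google actually ran): the linear
| cross-entropy score F = 2^n * mean(p_ideal(x_i)) - 1 lands near
| 1 for samples drawn from the ideal output distribution and near
| 0 for uniform garbage.
|
|     import numpy as np
|
|     n = 10
|     dim = 2 ** n
|     rng = np.random.default_rng(1)
|
|     # Stand-in for an ideal random-circuit output distribution
|     # (Porter-Thomas-like: exponentially distributed weights)
|     p_ideal = rng.exponential(size=dim)
|     p_ideal /= p_ideal.sum()
|
|     def xeb(samples):
|         return dim * p_ideal[samples].mean() - 1
|
|     good = rng.choice(dim, size=50_000, p=p_ideal)  # "real"
|     noise = rng.integers(dim, size=50_000)          # garbage
|     print(xeb(good), xeb(noise))                    # ~1.0 vs ~0.0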
|
| Note that the purpose of the "random" in the random circuit is
| to introduce hardness and prevent cheating (assume that the
| classical computer is the "opponent" of the quantum computer);
| the circuits don't calculate anything useful / of human value.
|
| What's interesting is that once people with supercomputers saw
| the benchmark formula and analysed the constant factors, they
| found a loophole which let them run a classical algorithm which
| generates measurements/samples that satisfy the benchmark with
| 40K classical CPUs for a week, or even a single A100 within 140
| days. Some of their success was due to the sheer power
| available and some is due to algorithmic cleverness (see:
| tensor networks). In my opinion, they are only disproving the
| Sycamore supremacy in a fussy way.
|
| [1] - https://en.wikipedia.org/wiki/Bloch_sphere
|
| [2] - https://en.wikipedia.org/wiki/Cross-entropy_benchmarking
| knorker wrote:
| Like reddit's ELI5 (explain it like I'm five), but ELIHAM
| (explain it like I (only) have a masters)?
|
| Yeah, if someone could please do that.
| bob1029 wrote:
| > The theoretical basis for these experiments depends on
| sampling the output distributions of random quantum circuits;
| unfortunately, understanding how this theoretical basis can be
| used to define quantum supremacy is an extremely difficult
| task. Anyone attempting to understand how this sampling task
| relates to quantum supremacy must study concepts from random
| matrix theory, mathematical analysis, quantum chaos,
| computational complexity, and probability theory. Resources
| connecting these concepts in the context of quantum supremacy
| are scattered and often difficult to find.
|
| At the end of the day, people writing these papers are also
| human. The very same sorts of humans who pad resumes with shiny
| technology and self-serving complexity.
|
| This might be a case of a rabbit you do not want to chase.
| Gravyness wrote:
| > This might be a case of a rabbit you do not want to chase.
|
| I'm sure the researchers will all collectively realize this,
| if it were to be the case, and disregard their 12 year
| academic journey with their great salaries in favor of
| research into more important topics like world hunger,
| renewable energy, etc.
|
| I'm so sure that the percentage of sureness is an imaginary,
| quantum-entangled value between -7 and 13 billion percent.
| junon wrote:
| "academic journey" and "great salaries" generally don't go
| together.
| DLA wrote:
| Can anyone recommend a few videos, papers, or books that "start
| at the beginning" for how quantum computing works--the physics
| parts like entanglement. I'm a computer scientist, so I'm good
| with the algorithm/programming aspects. Should I start with
| quantum annealing? Ideally I guess I'd like some pointers to good
| approachable resources on the ways qubits are made, maintained,
| written to and read from--start me off at the quantum version of
| the transistor. Thanks HN.
| deepburner wrote:
| Quantum Computing/Information researcher here. This article is
| largely garbage; the original (~2-month-old) paper is
| surprisingly readable [0] and I suggest you check it out.
| Here's my $0.02: The efforts of the Google team are commendable
| in that they're trying to squeeze as much out of their noisy
| systems as possible until error correction is here (they need
| to, to justify their existence after all) and they are aware of
| the shortcomings of the paradigm. But personally I don't see
| anything useful coming out until error correction and the number
| of qubits are improved by many orders of magnitude to have
| fault-tolerant QC. The
| jury is still out on their utility once realized; Shor's
| algorithm is an obvious one followed by quantum simulation
| algorithms that might benefit chemistry and fundamental physics,
| but it's not as big of a silver bullet as we thought before for
| simulation. Maybe someone can tell me what fast prime
| factorization is good for besides breaking encryption.
|
| Also IBM released a paper[1] recently making similar claims
| ("quantum advantage without complete error correction") which got
| obliterated by 2 back to back papers [2,3] that did what they did
| for much cheaper on a classical computer.
|
| [0]: https://arxiv.org/pdf/2304.11119.pdf
|
| [1]: https://www.nature.com/articles/s41586-023-06096-3
|
| [2]: https://arxiv.org/pdf/2306.14887.pdf
|
| [3]: https://arxiv.org/pdf/2306.16372.pdf
| imchillyb wrote:
| > I don't see anything useful coming out until error correction
| and number of qubits are improved many orders of magnitude to
| have fault tolerant QC.
|
| Error correction and control are vital for any system to be
| reliable.
|
| Thankfully the ability to run simultaneously and vet against
| multiple outputs should overcome this fault for a time.
| dekhn wrote:
| the best thing about watching the entire quantum computing
| debacle is every time somebody says their QC does something
| faster than a classical computer, somebody tunes the classical
| computer to beat the QC. Combined with the fact that chip
| infrastructure is already paid for and there is a large market
| to enable economy of scale, coupled with the lack of true
| problems where exponentially better problem solving is critical
| to business or military success, means we're going to see a
| series of QC claims that really just prop up classical
| computers for some time.
|
| (I used to work in supercomputing and chemistry and await the
| day we have useful QCs doing simulation better than what we can
| do on supercomputers)
| shawabawa3 wrote:
| > Maybe someone can tell me what fast prime factorization is
| good for besides breaking encryption.
|
| Maybe someone can tell me what alchemy is good for besides
| turning lead into gold?
| Retric wrote:
| Prime factorization only breaks a subset of crypto
| algorithms, it's not that useful or game changing alone.
| NooneAtAll3 wrote:
| considering that the world is transitioning to post-quantum
| crypto... everyone moved from gold to digital currencies,
| pretty much
| avsteele wrote:
| Grover's algorithm seems quite useful, though I'd guess it
| requires an enormous number of qubits (large unordered
| database) to be useful.
| zerodensity wrote:
| Question regarding error correction and logical qubits. From my
| very limited understanding, logical qubits could mitigate some of
| the error but use multiple physical qubits for every logical
| qubit. Has anyone created a quantum computer that has logical
| qubits? Say a small one with 16 physical and 4 logical qubits,
| or similar.
| klyrs wrote:
| > Maybe someone can tell me what fast prime factorization is
| good for besides breaking encryption.
|
| That untapped market of trillionaire hobby number theorists?
| squarefoot wrote:
| I know nothing about quantum computing, but when I read "does X
| instantly when rivals need 47 years", I immediately think about
| it being abused for brute forcing passwords and encryption keys.
| Would it be possible?
| kouru225 wrote:
| Government institutions have already been instructed to start
| using quantum computer safe encryption algorithms.
| Internationally, governments have been storing tons of
| encrypted data that they can't currently decrypt because they
| expect to be able to decrypt it soon using quantum computers.
|
| There are already quantum computer safe encryption algorithms
| that have been released to the public. I don't know exactly how
| they work, but one of them was about reverse engineering a path
| through multi-dimensional space.
|
| Expect SHA256 to disappear soon and be replaced.
| return_to_monke wrote:
| sha256 is a hash algo. you probably meant something else?
| kouru225 wrote:
| Yea. I think maybe this is what happens when I read
| something I don't fully understand but want to share the
| info. I think they're switching their encryption algorithms
| and also they were looking into how quantum computing
| affects SHA256. Would have to find the article I was
| reading.
| EGreg wrote:
| Why are you being downvoted? We need to have way more
| discussion about this. The answer is -- no one knows. In fact
| even classical computer algos might soon break modern crypto,
| especially Elliptic Curve crypto.
| Arch485 wrote:
| Not yet!
| snowstormsun wrote:
| Does someone know for reference how big of a number that computer
| can factor? Or some other complex problem that is not highly
| theoretical...
| adgjlsfhk1 wrote:
| Roughly 15 (all current attempts to factor bigger numbers cheat
| in various ways). The basic problem is that you need at least a
| thousand qubits to do anything remotely useful
| (especially if you need any error correction).
| mabbo wrote:
| I feel like quantum computing is one of those technologies that
| is going to be a lot of hype and excitement by researchers, but
| no real applications and no real impact to the general public.
|
| ... Until the day that it does.
|
| And then, a lot of very interesting things will happen very
| quickly, and the world will change massively.
| pastelsky wrote:
| If I understand correctly, the limited amount of "expressiveness"
| a quantum computer has restricts its ability to solve useful
| problems.
|
| Couldn't you simply express useful problems as a function of
| problems that the quantum computer can already solve? Doing so
| might be extremely inefficient, but given that there's so much
| performance leeway, it still might end up being similar to the
| supercomputers of today?
| bradley13 wrote:
| Seems like more of what we've seen: a quantum system is really
| good at...simulating a quantum system. Water is wet, whoopie.
| hgl wrote:
| I wonder if Google is the Xerox of our era, developing promising
| technologies only to be made practical and commercialized by
| other entities? The transformer is one example. Not sure if the
| recent advancement in quantum computing is another.
| taeric wrote:
| "This is a very nice demonstration of quantum advantage. While a
| great achievement academically, the algorithm used does not
| really have real world practical applications, though." Not
| having real world applications is not necessarily damning, of
| course. Curious to know what implications this has for general
| algorithms. Reading this, it almost makes it sound like there
| will be some algorithms that quantum is better at and that they
| won't necessarily be the same algorithms that "classical"
| computing uses.
| jfengel wrote:
| Quantum computing is not a competitor for general computing. It
| solves certain categories of problems much faster, but not most
| of them.
|
| In fact, nobody yet really knows what those problems are. There
| are a few real-world problems that quantum computers could
| theoretically help with, like factoring large numbers, but
| nobody is really all that close to implementing them. There are
| problems that can be implemented, like this one, but they are
| of no use whatsoever. They just happen to be feasible to
| implement.
|
| The implementation does suggest that they're building up a
| toolbox of physical parts that could some day (years, less than
| decades) be used on real-world problems. The class of such
| problems appears very small so far, but there is reason to
| think that we could find other problems that matter. (For
| example, in the realm of AI, which is very compute-intensive --
| though thus far nobody seems to have any specific
| implementation.)
|
| Think of a quantum computer as a magic box on the side that
| solves a few problems. If you had it, you might completely
| reconsider what kinds of problems you want to solve. Solving it
| via the magic box might require a complete reconsideration of
| how you frame the question -- like discovering a wormhole that
| lets you get from Atlanta to Seattle instantly, so how would
| you redesign a trip from New York to LA?
| OkayPhysicist wrote:
| > It almost makes it sound like there will be some algorithms
| that quantum is better at and that they won't necessarily be
| the same algorithms that "classical" computing uses.
|
| This is exactly correct. While you can simulate classical
| computations on a quantum computer, it doesn't make a lot of
| sense to. However, there exists a class of problems for which
| there exist solutions that have lower time complexities than
| anything we can run on a classical computer, if only we had a
| quantum computer of sufficient size.
|
| Examples of such algorithms are the famous two, Shor's and
| Grover's algorithms, of which Shor's is more interesting
| because it factors integers in polynomial time whereas the best
| known classical algorithms run in super-polynomial time (this is the
| one that's scary for cryptography). Grover's algorithm is
| basically function inversion (find the x given a function f and
| some desired value y such that f(x)=y) in O(sqrt(N)) over the
| domain of the function, which on a classical computer requires
| O(N) operations.
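|
| A tiny sketch of why period-finding breaks factoring (my own toy
| with N=15 and a=7, not part of the parent's text): once you know
| the period r of a^x mod N - the step a quantum computer does in
| polynomial time via Shor's algorithm - the factors fall out of a
| couple of gcds. Here the period is found by brute force, which
| is the part that is slow classically.
|
|     from math import gcd
|
|     N, a = 15, 7
|
|     # find the period r of a^x mod N (the "quantum" step)
|     r = 1
|     while pow(a, r, N) != 1:
|         r += 1
|
|     assert r % 2 == 0 and pow(a, r // 2, N) != N - 1  # lucky a
|     p = gcd(pow(a, r // 2) - 1, N)
|     q = gcd(pow(a, r // 2) + 1, N)
|     print(r, p, q)   # 4 3 5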
|
| Those were discovered in the 90's, so they're pretty widely
| understood by now. There's a few newer ones, such as the
| quantum algorithm for linear systems of equations, which solves
| for a scalar result of an operation applied to the unknown
| vector x such that Ax=b in O(log(N)k^2) time versus the
| classical O(Nk), where N is the number of matrix elements and k
| (the condition number) is a measure of how sensitive to error
| the problem is.
|
| Basically, there's a handful of useful algorithms, and another
| dozen or so theoretically interesting ones, that can run faster
| on a quantum computer. For everything else, there's zero
| advantage to using one.
| dwheeler wrote:
| The real question to me is how long it will take to get more
| (useful) qubits.
| knorker wrote:
| My understanding is that quantum computers only have two real
| use cases, as of today:
|
| 1. Breaking crypto.
|
| 2. Simulating _other_ quantum systems.
|
| For (1) it's basically all downsides. For (2) unless you're a
| particle physicist you'll never need quantum computers.
|
| But that's now. Maybe there will be a killer app for it some
| day, changing everything. Or indeed, we could get it
| indirectly. Maybe simulating quantum systems we could invent
| new battery technologies, which in turn changes our lives.
| esrauch wrote:
| If it is able to break crypto then surely that means it can
| do other "interesting" mathematical calculations that are
| currently extremely slow/hard though?
| olliej wrote:
| Nope.
|
| QC allows a very specific attack that breaks the main
| asymmetric algorithms today, ECC and RSA - there is no
| meaningful attack on anything else (Grover's algorithm is
| technically a sqrt improvement on symmetric algorithms but
| that isn't close to breaking AES256).
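|
| For a sense of scale on that "sqrt improvement" (my own
| arithmetic, not the parent's): Grover turns a 2^256 brute-force
| key search into roughly 2^128 quantum oracle calls, which is
| still hopelessly out of reach.
|
|     classical_trials = 2 ** 256   # AES-256 keyspace
|     grover_trials = 2 ** 128      # sqrt(2^256) oracle calls
|     print(grover_trials)          # ~3.4e38, still infeasible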
|
| There's nothing general purpose in the attack on RSA and
| ECC: you can use the quantum Fourier transform to find the
| period of the key, and that period tells you the key. But
| this is very specific - you have two algorithms that have
| inherently periodic behavior, where the period is meant to
| be secret, and so all you need from the QC is the period.
|
| It's hard to see how to extend that to other problems.
|
| I'm more curious about (and kind of wish more emphasis
| would be made on) programmable analog computers, which is
| what it seems QC should be capable of. A lot of the
| difficulty with QC is that people seem really fixated on
| discrete problems (like factoring \o/) where there's a
| single correct answer, and so huge amounts of effort and
| research are going into error correction, etc. There are
| lots of problems (simulations as in this circuit), where
| the classical solution simply requires running millions of
| times with random variance to converge on a sufficiently
| accurate result where a QC would theoretically be great.
| The general use problem in that case becomes "how do I
| convert this classical problem into a QC compatible
| algorithm".
| arcticbull wrote:
| So far, to my knowledge, not faster than a classical
| system. We've been searching for decades for a problem with
| a better quantum solution than a classical solution, and
| it's been a tough slog. These are, at least thus far and
| from my limited perspective, very special-purpose
| accelerators. Shor's algorithm is the one thing that comes
| to mind tbh.
|
| "Breaking crypto" in this case isn't anything other than
| finding prime factors of an integer.
| hannob wrote:
| That's not all that clear.
|
| The problems QCs could solve regarding crypto are from
| number theory. That's basically a branch of math that
| doesn't really describe any real-world stuff. Cryptography
| is as far as I know the only practical application of that
| stuff.
|
| I couldn't think of any implication of "being able to
| factor large numbers quickly" or "calculating discrete
| logarithms quickly" that does not relate to cryptography.
| But I'd be curious if others think these would have any
| implications beyond "we'll have to get new algorithms into
| our cryptography".
| knorker wrote:
| Possibly. This is way beyond my knowledge.
|
| But I'm not aware of many other "search this finite part of
| the number line for this property", where "this finite
| part" is still too big for classical computers.
|
| It almost sounds like quantum computers are tailor made for
| the types of problems we've been building cryptosystems on.
|
| But maybe this is not all quantum computers will be able to
| do. I couldn't even explain exactly how they apply to
| simulations of quantum systems, but have only taken more
| knowledgeable people at their word.
| taeric wrote:
| "Search this finite part of a number line" is probably
| fairly accurate in describing many Mixed Integer
| Programming (MIP) techniques. Such that, if you do get a
| breakthrough at that, you may see larger advances in
| optimization.
| dotnet00 wrote:
| Exactly, ML is a potentially very valuable task for more
| mature quantum computers due to the ability to perform
| much more powerful searches than what we have to accept
| on classical computers.
| staunton wrote:
| (2) can be very relevant to material science and chemistry.
| Those things have huge ranges of important practical
| applications.
|
| For example, people currently struggle to do accurate
| calculations of the band structures of materials and it
| doesn't look like there will be much progress there using
| only classical computers (using more computing power or
| better approximation tricks). Big enough quantum computers
| could do this. Band structure calculations are very
| interesting for pretty much all semiconductor development.
| knorker wrote:
| Indeed. That's what I meant by my example of indirect
| benefit.
|
| Simulating physical systems better, faster, and more
| completely, can have many practical applications.
|
| But unless you're already today trying to simulate physical
| systems like that, then QC probably won't help you one bit.
| jfindley wrote:
| I think there are some other areas where qc is likely to
| deliver better results than classical computers beyond these
| two. Modelling turbulent flows is one such example. They
| certainly aren't common, but I think this overstates the case
| somewhat.
| pyrale wrote:
| Basically, quantum computers can solve problems of the BQP
| class in a time that is a polynomial function of input size.
| These same problems, on a traditional computer, can take
| exponential time.
|
| It is hard to tell with certainty what can be done with
| quantum computers, but having powerful quantum computers will
| certainly open new fields of research, like trying to figure
| out if quantum algorithms for NP-class problems have better
| complexity than classical algorithms. Simply finding
| subexponential algorithms for problems that would have
| required exponential time on a traditional computer, may
| prove valuable.
|
| Basically, outside of the immediately available problems,
| this opens up an entirely new complexity class from which to
| attempt to crack other classes of complexity. Trying to break
| NP-problems with Turing machines, if not successful in an
| academic sense, has proved immensely valuable and profitable.
| By getting another architecture, we get a shot at similar
| breakthroughs.
| knorker wrote:
| This is so extremely well put that I want to reply just to
| highlight it.
|
| What you said was the optimist view, that there's a world
| of possibilities. It's just that at the moment we just have
| the two I mentioned.
|
| Maybe there's much more, and if we build it (the hardware),
| they (the algorithms) will come.
|
| But for now really just the two. AFAIK.
| gigel82 wrote:
| So it can generate lots of random numbers very fast... cool. /s
| sixQuarks wrote:
| Imagine trying to explain this article to a caveman, or even
| someone from 200 years ago.
| moonshadow565 wrote:
| How exactly does this make their search engine better?
| fsh wrote:
| Google's and IBM's previous "quantum supremacy" demonstrations
| were quickly crushed by improved classical simulations. Let's see
| if it survives this time.
| lbourdages wrote:
| The main issue I had with their demonstrations was that the
| problems they chose were essentially "let's show that a quantum
| computer is better at being a quantum computer than a classical
| computer". Obviously they are, just like I'm better at being a
| human than ChatGPT is.
|
| I'm not saying they suck, but to proclaim that "quantum
| computers are superior" you need an actual use-case, IMO.
| [deleted]
| refulgentis wrote:
| It's absolutely non-obvious, I'm dead serious. As far as I've
| understood keeping up over the years, there's a credible
| argument from a respected academic in the QC community that
| quantum computing is impossible.
| dekhn wrote:
| it's not impossible (from a theoretical, and possibly
| physical perspective) but it's likely that there are no
| problems which would motivate the necessary investment to
| demonstrate a QC doing something useful which we couldn't
| also do in reasonable time on a classical supercomputer.
|
| Kind of like fusors: sure, you can do nuclear fusion in
| your garage but you're putting in far more energy than you
| are generating.
| lbourdages wrote:
| Do you have a link handy? You've piqued my curiosity!
| refulgentis wrote:
| Here's an exhaustive treatment:
| https://www.scottaaronson.com/democritus/lec14.html
|
| I would have picked it up in bits and pieces from blogs
| over the years; here's an attempt to render it usefully,
| which is surely nitpickable:
|
| TL;DR: there's two types of caring about Google's claims
| of quantum supremacy:
|
| 1. HN tends to assume these articles are about a
| breakthrough in _product usefulness_ and then tsk-tsks
| about lack of impact. Cf. the top comment currently gravely
| noting, after reading the paper, that it's too noisy and a
| long way off.
|
| 2. The papers are about demonstrating _there is quantum
| computing at all_; the interest for the people in the know
| is about settling that question, and the raw strongman feat
| isn't particularly interesting.
| nier wrote:
| From the article linked by parent:
|
| <<To summarize, I think that arguing with skeptics is not
| only amusing but extremely useful. It could be that
| quantum computing is impossible for some fundamental
| reason. So far, though, I haven't seen an argument that's
| engaged me in a really nontrivial way.>>
|
| <<Very little of what we do in theoretical computer
| science is directly connected to a practical application.
| That's just not what we're trying to do. Of course, what
| we do has applications, but indirectly. We're trying to
| understand computation. If you take that as our goal,
| then it seems clear that starting from the best physical
| theories we have is a valuable activity. If you want to
| ask a different question, such as what we can do in the
| next five to ten years, then, that's fine. Just make it
| clear that's what you're doing.>>
| refulgentis wrote:
| This shouldn't leave anyone with the impression they are
| cranks. Scott's claim in this lecture is that he hasn't yet
| seen a skeptic's argument that engages him in a "really
| nontrivial" way.
|
| To wit, the article names Leonid Levin and Oded Goldreich
| as the primary skeptics.
|
| Leonid has 10K citations, 2012 Knuth Prize, 37 h-index.
|
| Oded has 63K citations, 2017 Knuth Prize, 95 h-index.
|
| What Scott Aaronson considers "really nontrivial" is far
| beyond our comprehension, and he's making a claim about
| synthesizing a counterargument, not settling the entire
| argument.
| dataflow wrote:
| Would you have any links handy for those?
| superposeur wrote:
| Do you know if there are any arguments in the literature to the
| effect that actually useful quantum supremacy may never be
| possible in principle (not merely in practice)?
|
| What I mean is: decoherence with environment must be held at bay
| long enough for the final answer to appear in the qubits with
| high probability. Based on examples from thermodynamics (such as
| impossibility of extracting useful work from a heat reservoir
| without a second, lower-temperature reservoir into which to dump
| heat), I could imagine a scenario where decoherence would always
| foil the computation at some point, perhaps at some intermediate
| time or in the final measurement. In particular examples, this
| would show up as irksome experimental limitations, but in fact
| these limitations would all stem from a deeper prohibition. And
| to the extent that decoherence could be managed (such as in the
| cited experiment), the resulting computation would not be
| "useful".
| dataflow wrote:
| Dumb question:
|
| Say I have a wooden stick and I break it in half in less than a
| second. Assume a computer would need several minutes to simulate
| everything that would've happened in the stick. I clearly got the
| output faster than a computer (and with more precision), so does
| this imply I'm doing anything particularly fascinating?
|
| I assume the same scenario is possible to concoct for a quantum
| computer. I assume it wouldn't be particularly interesting
| either. So what are the criteria for distinguishing those
| scenarios from the "interesting" cases? And how do we know which
| case this one is like?
| mistermann wrote:
| I wouldn't think so in the case of a stick breaking, but if you
| consider a more realistic scenario like running a risk analysis
| across a hyperdimensional problem space, there is certainly
| something extremely interesting going on there.
| D4ckard wrote:
| You are right, basically anything can be viewed as a
| computation. Ray Kurzweil discusses this extensively in his
| book The Singularity is Near, building upon previous work by
| Edward Fredkin.
|
| > To appreciate the feasibility of computing with no energy and
| no heat, consider the computation that takes place in an
| ordinary rock. Although it may appear that nothing much is
| going on inside a rock, the approximately 10^25 (ten trillion
| trillion) atoms in a kilogram of matter are actually extremely
| active. Despite the apparent solidity of the object, the atoms
| are all in motion, sharing electrons back and forth, changing
| particle spins, and generating rapidly moving electromagnetic
| fields. All of this activity represents computation, even if
| not very meaningfully organized. (From The Singularity is Near
| Chapter 3, The Limits of Computation)
| bawolff wrote:
| If you're in the stick-making business and need to know how
| sticks break, then yes.
| aleksiy123 wrote:
| If you need to measure a billion sticks breaking the simulation
| might start to come out ahead in terms of cost.
|
| I think this makes more sense if you use a bridge instead of a
| stick.
|
| Building a full sized bridge to test your idea is prohibitively
| expensive but a simulation is cheaper to build even if the
| actual simulation is slower to compute.
|
| In real life, simulations, small prototypes, and full-size
| prototypes are all useful with various trade-offs of cost and
| benefits at various stages.
| [deleted]
| knorker wrote:
| If you can control your experiment perfectly, then you don't
| need the simulation.
|
| Now break the wooden stick in half while on the moon. In space.
| At absolute zero. While under acceleration. While spinning.
| While bombarding it with 10^100 neutrinos/second. And do it
| with 100'000 randomized iterations of the wooden fibers.
| Mizoguchi wrote:
| I think a more fair comparison would be that you know how to
| break the wooden stick so that you end up with one piece
| measuring exactly 152 mm and you can repeat the feat every
| single time with other wooden sticks of different dimensions
| crafted from different trees under different environmental
| conditions.
| tedivm wrote:
| I don't think this is a stupid question at all; it actually
| touches on some pretty interesting topics. The human brain is
| absolutely fascinating, in particular because it is extremely
| energy efficient and fast at the tasks it does. The more we
| learn and understand how the brain is working the more we'll be
| able to improve a ton of computer science topics, from AI to
| signals processing and a whole bunch more in between.
| jabbany wrote:
| One would be the direction of entropy. Breaking the stick is
| not "particularly fascinating" because you're going in the
| direction of increasing entropy. However, _putting it back
| together_ is. In the simulation it takes no more effort to go
| one way or the other, while you probably cannot put the stick
| back together no matter how hard you tried.
|
| A quantum question that is "interesting" would also be similar
| to finding order out of disorder (e.g. factoring).
| tedunangst wrote:
| If I provide two model sticks to a physics simulation, it can
| work out how to put them back together with no more effort
| than it took to model the break?
|
| Like I can model shooting a cannon ball out of a cannon and
| it will tell me where it lands. Or I can model a cannon ball
| sitting on the ground and it will tell me where the cannon
| was?
| jabbany wrote:
| A hypothetical perfect simulation should be able to do it
| both ways indeed!
|
| However, the current consensus take of quantum uncertainty
| means _if_ such a simulator exists, it cannot be of our
| universe. Or, more likely, such a perfect simulator does
| not exist.
|
| Of course all of this is about a hypothetical of a
| hypothetical at this point...
|
| (This is the same problem as entropy (our current
| understanding of it). We know it is increasing one-way
| w.r.t. time, and we can imagine what it means to "reverse
| entropy" and that there's nothing really theoretically
| preventing that, but we can't build an actual machine to do
| that.)
| glompers wrote:
| Nice comment
| dataflow wrote:
| I really love this comment -- it "feels" like the right
| criterion -- but I guess I'm wondering what criteria (if any)
| researchers _actually use_ right now. Do they have any
| criteria for this?
| marcosdumay wrote:
| > so does this imply I'm doing anything particularly
| fascinating
|
| If you mean "useful", then I'd like to point our that breaking
| rods is a common task in many labs all over the world exactly
| because it can calculate some things better than any non-rod
| computer we have.
|
| If you literally meant "fascinating", then you'd have to ask
| yourself. For a lot of people, it is.
| rcme wrote:
| Did you really get the answer faster than a computer? Or put
| another way, what question did you answer? How much force was
| applied to the stick before it broke? How fast was the stick
| moving 0.04s after breakage? How much sound was produced? There
| are thousands of questions you don't have answers to when you
| simply break a stick.
| Aperocky wrote:
| I think that was his exact argument against quantum
| computing.
| ImPostingOnHN wrote:
| seems more like an argument _for_ quantum computing
| fnordpiglet wrote:
| The key is things are interesting if they're interesting to
| someone. The fact you can break a stick and observe in detail
| in real time at a minute level better than a computer is very
| interesting to many people. Botanists, biologists, material
| scientists, architects, industrial engineers, etc. It's just
| not interesting to you.
|
| Likewise even contrived quantum experiments like this aren't
| interesting to many people. They're not practical. But they're
| profoundly interesting to people interested in quantum
| computing. They're milestones on a roadmap. There are and will
| be things quantum computers won't do better than classical
| computers, and there are things in reality neither will
| simulate, and those things are interesting to someone. In fact
| I think this is a crucial part of basic science: EVERYTHING is
| interesting. Even the lack of something interesting is
| certainly interesting to a psychologist :-)
| wuubuu wrote:
| Not a dumb question!
|
| There is a great video by the mathematician Richard Borcherds
| on this exact objection to current examples of quantum
| supremacy.
|
| https://www.youtube.com/watch?v=sFhhQRxWTIM
| dataflow wrote:
| I love this video, thanks for sharing it. I'm watching it
| right now and I think I'm going to refer to his "teapot
| supremacy" from now on.
| AlexCoventry wrote:
| I declare AlexCoventry supremacy. No one can compute
| AlexCoventry better than I can. :-)
| pieter_mj wrote:
| Wait until you see a GPT-5 simulation of yourself.
| tsimionescu wrote:
| There is also an interesting response to that video from a
| leading quantum computer scientist, Scott Aaronson:
|
| https://scottaaronson.blog/?p=5460
|
| He actually goes into significant details about your exact
| question and proposes some resolutions. It's quite long and
| I don't think I can do it justice in a TLDR so leaving the
| link to speak for itself.
| dataflow wrote:
| Thanks for the link! I'll take a look.
|
| Edit: I just read it and I don't feel like he really
| answered the question. He had a rebuttal about being able
| to freely specify parameters, but then his Facebook
| friend addressed his rebuttal, to which he replies "this
| is indeed more interesting"... which, well, it certainly
| is, and also doesn't answer the question!
|
| (Well, I guess he also mentions we should expect speedup
| greater than Avogadro's number, which seems fair, but
| clearly that's not the bar anyone is claiming to meet, so
| it doesn't get us anywhere.)
| carabiner wrote:
| It's like saying you can breathe, yet a computer might take a
| long time to simulate every chemical process taking place in
| your body in those 2 seconds of respiration.
| zdragnar wrote:
| The difference between you breaking a stick and the computer
| modeling it is that you've measured nothing. You don't know,
| with any precision, the amount of force you used, the rate the
| stick broke at, how much mass remains in the two pieces and how
| much was lost to splintering, etc.
|
| In other words, assuming the computer model has sufficiently
| accurate data as an input, it can produce significantly more
| refined output than a human can through trial.
|
| In fields where human trial is exceptionally inefficient-
| molecular physics, chemical synthesis design, structural
| material design, etc- a sufficiently fast computer will allow
| for faster design iteration and perhaps even development of new
| processes and materials.
|
| More concretely, if you wanted to design a new stick that
| breaks under exactly the conditions that you intend it to, then
| you can skip a lot of trial and error if your computer can
| accurately model the materials you are designing and testing.
|
| Think construction, civil engineering, energetic chemistry,
| biological processes (medicine design) etc.
| dataflow wrote:
| > You don't know, with any precision, the amount of force you
| used, the rate the stick broke at, how much mass remains in
| the two pieces and how much was lost to splintering, etc.
|
| But I can measure those with a ruler and a scale. Both before
| and after the breakage. Takes a few seconds, and I'd need to
| do that before punching those numbers into the simulator
| anyway. And I can be precise with where I apply the force,
| etc. So I don't see how this gets to the point of the
| question.
| JumpCrisscross wrote:
| > _I can literally measure those with a ruler and a scale_
|
| You can't measure a number of internal stress-strain
| conditions during the moment of failure. You can't repeat
| the experiment with the same stick. The best way to get a
| fast intuition for why the simulation is superior is to
| take an entry-level CAD course with a focus on material
| design.
| dataflow wrote:
| > The best way to get a fast intuition for why the
| simulation is superior is to take an entry-level CAD
| course with a focus on material design.
|
| I'm sorry, but you're completely missing the point of my
| question. The question was not "what can simulations do
| that experiments can't". My background in simulation is
| not zero, and I've never had that question. The question
| was something else entirely.
|
| > You can't measure a number of internal stress-strain
| conditions during the moment of failure
|
| But what if that's _not_ what I'm interested in
| simulating or measuring? What if all I care about is
| something easily measurable, like the length of each
| remaining piece? Isn't that precisely my point here?
| There are _so_ many things quantum computers can't
| compute either - yet we seem to be judging them by what
| they're _really_ good at. Just like with the wooden
| stick. So how do I tell if they're the same sort of
| scenario or not? What's the distinguishing criterion?
| kelnos wrote:
| > _What if all I care about is something easily
| measurable, like the length of each remaining piece?_
|
| If you're not just talking about the two large pieces,
| but also the lengths of the small splintered pieces, I
| suspect the computer is going to be a lot faster at
| figuring that out than you will be manually measuring
| them.
|
| Your responses in this thread seem to be a bit
| disingenuous: first you ask why a computer simulation
| could be better than doing it manually, but then when
| people tell you the things the computer can do
| faster/better than you, you say "but what if I don't care
| about that?" Well, duh, if you don't care about anything
| but the most trivial things, the computer probably isn't
| going to be of any benefit to you.
| dataflow wrote:
| > first you ask why a computer simulation could be better
| than doing it manually
|
| No, I never asked this at all. Some people changed the
| question to this for reasons I still don't understand.
| The actual question I asked was: "what are the criteria
| for distinguishing those scenarios from the interesting
| cases? And how do we know which case this one is like?"
|
| The whole point here is to figure out if the problems
| they're declaring computational superiority on are
| analogous to the wooden stick problem here. Not to ask
| "when are simulations better than doing it manually".
| carabiner wrote:
| Are you just asking if these are unrealistic, contrived
| performance benchmarks? Those have always existed and
| they're fine:
| https://news.ycombinator.com/item?id=20231084
|
| Some are more useful than others. There are no strict
| criteria, just as there is no perfect way to fully
| characterize CPU performance.
| dataflow wrote:
| "Unrealistic" or "contrived" don't really get to the
| heart of the matter, at least not as I understand them.
| The stick example can be extremely useful and realistic -
| it doesn't really make any difference to the question.
|
| Perhaps another way to phrase it (though I'm not 100%
| sure this is equivalent) is: how do they know whether
| whatever they're accomplishing is quantum computation, as
| opposed to something else (like analog computation) that
| happens to be concerned with quantum physics.
| carabiner wrote:
| You're going to measure every point like this?
| https://www.researchgate.net/profile/Andrzej-
| Baier/publicati...
|
| Likewise, you can crash a car in a lab and do a simulation
| of one. Both tell you "fascinating" things and are still
| done by engineers.
| dataflow wrote:
| This has nothing to do with the question though.
|
| > You're going to measure every point like this?
|
| Why would I need to? Is "tells you a bunch of answers to
| questions you never asked along the way" really the
| distinguishing factor for what constitutes computational
| supremacy? If all I wanted was just the lengths of the
| broken pieces, my simulation _has_ to tell me the stress
| and strain at every point in order to be considered
| superior to a numerical simulator? Merely telling me the
| answer to the question I asked doesn't count? So if my
| prime factorization tool factored 4096-bit integers
| instantly, that's not enough? It has to factor a bunch of
| unrelated numbers and maybe solve TSP in the middle
| before we can consider it superior to classical
| computation?
| pooloo wrote:
| You really just answered this yourself:
|
| > Assume a computer would need several minutes to simulate
| everything that would've happened in the stick. I clearly
| got the output faster than a computer (and with more
| precision), so does this imply I'm doing anything
| particularly fascinating?
|
| You assert that you got the output faster than a computer,
| and with more precision. However, you do not have any
| measured data, only a qualitative observation; as stated by
| zdragnar:
|
| > The difference between you breaking a stick and the
| computer modeling it is that you've measured nothing. You
| don't know, with any precision, the amount of force you
| used, the rate the stick broke at, how much mass remains in
| the two pieces and how much was lost to splintering, etc.
|
| Then you further state that you can measure those with a
| ruler and a scale; however, that inherently takes time and
| carries significant uncertainty in your measurements and
| calculations, whereas a computer will provide all of those
| numbers.
|
| The other thing to consider is the method of simulation,
| such as finite-element analysis (FEA), and the resolution
| you need. You can get segmented data all the way down to a
| specific volume of that stick; good luck doing the hand
| calculations on that.
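|
| (To give a feel for what that FEA-style resolution buys, here
| is a minimal sketch using a toy 1D axial bar instead of a real
| stick; the material numbers are made up for illustration. A
| single solve hands back the displacement at every node, which
| is exactly the "all of those numbers" point above:)
|
|       import numpy as np
|
|       # Toy 1D finite-element model of a bar in tension.
|       E, A, L = 10e9, 1e-4, 0.5  # modulus (Pa), area (m^2), length (m)
|       P = 200.0                  # axial load at the free end (N)
|       n_elem = 10
|       n_node = n_elem + 1
|       le = L / n_elem            # element length
|
|       # Assemble the global stiffness matrix from 2-node elements.
|       k = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
|       K = np.zeros((n_node, n_node))
|       for e in range(n_elem):
|           K[e:e+2, e:e+2] += k
|
|       # Node 0 fixed; load P applied at the last node.
|       F = np.zeros(n_node)
|       F[-1] = P
|       u = np.zeros(n_node)
|       u[1:] = np.linalg.solve(K[1:, 1:], F[1:])  # reduced system
|
|       print("tip displacement (FEA):  ", u[-1])
|       print("tip displacement (exact):", P * L / (E * A))  # PL/EA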
| fnordpiglet wrote:
| I don't think you guys are tackling this right. You can
| observe the stick breaking to absurd precision, even
| using an electron tunneling microscope, and produce
| immense amounts of data about the stick's state at every
| point in the breaking and afterwards, faster than a
| computer can simulate breaking the stick. The key is that
| the stick's breaking, and its observation at any level of
| detail, happen in parallel: all aspects of the system act
| coherently in real time, with perfect effects and
| transmission of information between all quantum parts of
| the system at the speed of light. The computer, on the
| other hand, is a binary-encoded Turing machine forced to
| rely on relatively serial operations (compared to the
| parallelism of reality) and on algorithmic mathematical
| models that produce encodings only approximately similar
| to what reality produces naturally. The issue is the
| layers of abstraction, not the measurement and detail of
| the system. All the measurable information about the
| stick and its breaking is available with no impact on the
| time it takes to break the stick.
|
| Here's perhaps a clearer example. The universe influences
| itself at a distance, with all bodies acting on all other
| bodies at once. This is a classical n-body problem where
| n -> inf. We can quite easily measure these effects
| without perturbing the system, yet a computer simulation
| of it would not progress a single iteration in the entire
| span of the universe.
| ithkuil wrote:
| But the comparison is not fair.
|
| Yes the electron microscope can image a lot of details
| "in parallel" but not all details from all angles, all
| internal microfractures. You can't easily measure all
| temperature gradients in all cubic nanometers of the
| material etc etc.
|
| The simulation is slow because it works at that level and
| thus as a side effect will also give you that output.
|
| Obviously, if you don't need all that information, you may
| find another route to arrive at the results you want.
| fnordpiglet wrote:
| You _could_ though, it's entirely fair. All information
| about the system is available for measurement down to the
| quantum level across all dimensions, because it has
| perfect fidelity to reality. Regardless of whatever
| specific tool you choose, in theory I could build tools
| that measure the stick in near-infinite detail. I could
| not, however, build a computer that could calculate that
| detail of the stick with perfect fidelity to reality.
| It's a gross approximation no matter what you do. I would
| however point out that my nbody example is entirely
| analogous and is perhaps more tractable as to why it's
| impossible to compare measuring reality to calculating a
| simulation. Simulations have the advantage that you can
| reproduce and tweak it repeatedly without having to
| configure the universe in a specific way every time. You
| can hold variables constant, you can change the laws of
| nature, you can simplify complexities to narrow in on a
| specific behavior. But simulations are never a
| replacement for measurement of reality, and the reason is
| the opposite of what you're holding out: reality can be
| measured to an arbitrary level of complexity, literally in
| real time, even for systems so complex that we can't
| plausibly simulate even approximations of them.
| BasedAnon wrote:
| Now if we had a way of measuring it, it would be interesting
| to ask what sort of model of computation we could derive from
| the breaking of sticks
| shredprez wrote:
| I cannot wait to tell you about the cutting-edge technology
| my team is developing -- it's called the twigchain and it's
| going to change everything.
| [deleted]
| somenameforme wrote:
| Oddly enough, I think this can be answered by headlines alone.
| Imagine you could do anything remotely interesting with a
| relative speedup of 47 years to 'instant', where we can say an
| instant is 1 millisecond. That'd mean in one day of
| calculation, you could do things that'd take a current era
| supercomputer more than 4 billion years to achieve. And so on
| upward from there. Yet we're focusing on this odd and
| _relatively_ boring metric of 47 years?
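|
| (As a back-of-the-envelope sketch of that arithmetic -- taking
| "instant" to be 1 ms purely as an assumption -- the implied
| speedup factor and the "one day ~ 4 billion years" figure fall
| out directly:)
|
|       # Rough arithmetic behind the "47 years vs. instant" framing.
|       SECONDS_PER_YEAR = 365.25 * 24 * 3600
|
|       classical_seconds = 47 * SECONDS_PER_YEAR  # rival's runtime
|       quantum_seconds = 1e-3                     # call "instant" 1 ms
|
|       speedup = classical_seconds / quantum_seconds
|       print(f"speedup factor: {speedup:.2e}")    # ~1.5e12
|
|       # One day of quantum computation at that ratio, expressed
|       # in classical-supercomputer years:
|       day_equiv_years = (24 * 3600) * speedup / SECONDS_PER_YEAR
|       print(f"one day ~ {day_equiv_years:.2e} classical years")  # ~4.1e9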
|
| That suggests that the implications intentionally provoked by
| the phrasing "Google's quantum computer instantly makes
| calculations that take rivals 47 years" are obviously false,
| even if the statement itself is true in some extremely specific
| context. So in other words, when genuine quantum progress has
| been achieved - I don't think we'll need to debate it, as the
| results would speak for themselves.
| Strilanc wrote:
| The key detail you're missing is that the problem must be
| _written down_. All details must be specified, so that you can
| unambiguously tell how well it's being solved.
|
| Yes, you can write down a problem inspired by a specific stick.
| And yes, the written down problem will of course inevitably
| differ in the details from the real stick. Normally you'd think
| of these small differences as small errors in the description.
| But as soon as you try to argue the stick is computing the
| written-down problem, they become errors _in the stick_ because
| they affect its ability to implement the problem you wrote
| down.
| robot_no_421 wrote:
| You're not modeling or predicting anything though. That's like
| saying "what if I built a bridge that failed on the first day?
| A computer would need several days to calculate all the forces
| that led to the failure, but my bridge failed just fine without
| any computer help".
|
| Well... yes. But try building a bridge that doesn't break.
|
| Or to keep with your scenario, try to predict exactly where and
| how your stick will break before you break it. You can't.
| ori_b wrote:
| Does Google's implementation of quantum computing help with
| this sort of scenario, or is it a really fancy way of
| breaking the bridge?
| robot_no_421 wrote:
| Yes, and no. Don't ask me rhetorical questions if you don't
| want a rhetorical answer.
| __MatrixMan__ wrote:
| Well you don't have to sacrifice any sticks or bridges to
| see what happens, so that's a plus.
|
| One could imagine simulating experiments that are
| implausible to actually perform.
| jcims wrote:
| I wonder about that frequently. The universe 'executes' physics
| on, as far as I can tell, a realtime basis (at least within the
| local reference frame). What is that called, as compared to
| computing a model of the same?
| wanderingstan wrote:
| Of course for you it feels like realtime, because the physics
| of "you" are being executed on the same "system".
|
| It's funny to think that if we are living in a simulation,
| that the machine we're running on might have horrible uptime,
| but we'd never know because our "time" only works when the
| machine is running!
| beebeepka wrote:
| Just like a tick rate in a game
| chaboud wrote:
| Why do you think time dilation needs to occur in our
| simulation? Be it from instance-shard transition due to cell
| crossing (special relativity) or inter-node lag due to
| spatial density (general relativity), system design
| constraints dictated the very rules of our "physical"
| existence.
|
| (Poe's Law notice: yes, I'm joking)
| riwsky wrote:
| Your wooden stick is dedicated hardware for computing the
| behavior of wooden sticks. Fascinating? Pretty subjective--but
| one of the best concrete proxies we have for "fascination" is
| "economic value" (cf OpenAI's AGI definition). Is it going to
| make (or save) you a lot of money to quickly find out how that
| stick broke? Probably not.
| gnramires wrote:
| I think you can make the case that, for example, going out,
| taking a stick from a forest, breaking it and recording the
| sound will be much more economical than trying to simulate a
| realistic stick-breaking sound -- in particular in developer
| time spent trying to model stick and wood breaking sounds in
| a convincing way (suppose we need, say, an art asset for a
| game).
|
| It's something complexity theory isn't well prepared to
| tackle, at least on first look. Complexity theory doesn't
| take into account the cost of devising an algorithm itself.
| And complexity theory is usually only interested in
| asymptotic behavior -- different in both respects from what
| we have here (n=1, plus the cost of devising the algorithm).
| when there are significant gains to be had over just doing it
| in real life, i.e. when you are only interested in a simple
| aspect that can be summarized by a model, or when for some
| reason the real life event uses a lot of energy or costs
| significantly. An extremely accurate simulation of wood and
| wood breaking, and sound, I think would be quite
| computationally costly -- the wood is already very complex
| (not as complex as the base atomic reality -- you probably
| don't need to model every single atom -- but still very
| complex). Sticks are cheap, and breaking a stick is easy.
| Breaking the stick doesn't seem to take much energy, and
| realistically stick breaking itself (although perhaps not
| stick collecting) comes out ahead energetically!
|
| It gets quite complicated quickly, and it really depends on
| what you want to model, but I think it's interesting to see
| when and if simulations are advantageous (for example, you
| could take the cost of raising a tree into account, or not!
| is the stick "free" because it would grow and decay anyway?).
| I think in the long term many things can be simplified into
| what we're interested, so models increasingly win. But
| realistically (in our limited lives and limited universes :))
| I think there will always be a place for physical
| experimentation, taking pictures of sticks and recording the
| sound as they break. (and hopefully our relationship with
| trees and sticks itself still lasts for a long time...)
|
| I say that as someone who really enjoys modelling nature! :)
| I think modelling and trying to recreate natural process
| (sometimes with an artistic touch as well) is a great way to
| understand and appreciate the wonders of the universe.
|
| Note: Also, even with ideal reversible 0-cost computation,
| there's likely to be some kind of polynomial _time_ overhead
| to 1:1 simulation of reality. This can be enough to justify
| experimentation instead of simulation, in some cases, when
| your model itself isn't a significant simplification over
| reality.
| riwsky wrote:
| Complexity theory doesn't handle it, but that's why I
| didn't use complexity theory. Economics handles it fine: no
| one's ever paid you or anyone else to do a wood-breaking
| simulation at a sufficient physical level of detail to
| capture the sound accurately, so the comparative savings on
| it aren't economically valuable.
| chaboud wrote:
| Sound effect libraries have long included the sounds of
| wood breaking/cracking, generally recorded (rather than
| simulated). A couple of jobs ago, we sold sound effect
| libraries that cost 2-5x the cost of the software
| applications that they were used with (hundreds to
| thousands of dollars).
|
| I think you'd be hard pressed to find economists (beyond
| freshman college students who recently discovered
| Nietzsche second hand at a party) arguing for a purely
| direct monetary expression of value, but even they would
| have to admit that recordings of broken sticks in snow
| covered forests have non-zero value.
| gnramires wrote:
| Indeed, not a disagreement, just a contribution to the
| commentary :)
| ablatt89 wrote:
| Any problems where the configuration space is large, and you
| want to find some optimal configurations to the problem, would
| in theory benefit since you can directly map the configurations
| into the entangled qubits. Entangled qubits give you the
| ability to physically represent large configuration spaces.
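|
| (A minimal state-vector sketch of that scaling, assuming only
| the textbook picture -- n qubits span a 2^n-dimensional
| configuration space, so an equal superposition touches every
| configuration at once; nothing here is specific to Google's
| hardware:)
|
|       import numpy as np
|
|       # Single-qubit Hadamard gate.
|       H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
|
|       def uniform_superposition(n_qubits):
|           """Apply H to |0> on each qubit and tensor them together."""
|           state = np.array([1.0])                  # empty register
|           for _ in range(n_qubits):
|               state = np.kron(state, H @ np.array([1.0, 0.0]))
|           return state                             # length 2**n_qubits
|
|       for n in (4, 8, 16):
|           psi = uniform_superposition(n)
|           # Every one of the 2**n configurations carries amplitude
|           # 1/sqrt(2**n); classical storage doubles per added qubit.
|           print(n, "qubits ->", psi.size, "configurations")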
|
| The difficulty is ensuring entanglement between qubits, scaling
| up the qubit count, reducing noise between the qubits and the
| other physical parts of the quantum computer, error correction,
| generating the circuit to represent the optimization problem,
| formalizing a proof that the total time of quantum computation
| (computation + preparation) is less than the time to simulate
| it on high-performance computers, and so on.
|
| There are several YouTube videos where some company has mapped
| their problem onto a quantum circuit and claims it provided
| solutions to optimization problems that they couldn't have
| found classically, but I dunno; I guess it would really require
| A/B testing between classically computing it on an HPC and
| running it on a quantum computer.
| selimnairb wrote:
| Regardless of how good our classical or quantum computers get,
| simulating "complex" natural systems is ultimately limited by
| our ability to observe and measure the physical properties of
| the "components" (I put quotes here to acknowledge the
| difficulty/impossibility of delineating system components) of
| that system. I'm not sure quantum computers will help with
| that.
| sfpotter wrote:
| You're missing the point of the question.
| selimnairb wrote:
| I'm saying that for some problems, neither classical nor
| quantum computers may arrive at an acceptable answer.
| CrampusDestrus wrote:
| When you break a stick you just break a stick. When you
| simulate breaking a stick you know everything about the broken
| stick.
|
| What if you broke the stick to study its material properties?
| Then you would need to spend months carefully taking samples
| and measuring all the broken spots.
|
| With a perfect simulation? You're done the moment it ends. All
| the data is there available with a copy paste
| hutzlibu wrote:
| "With a perfect simulation? You're done the moment it ends.
| All the data is there available with a copy paste"
|
| As far as I understand, to perfectly simulate reality, you
| would need a second reality.
| wholinator2 wrote:
| Well of course the simulation is a simulation. They are
| speaking of the ideal case. A real simulation is an
| application of a model, the physics of the molecules and
| structures interacting. They are saying that in the ideal
| case, once the simulation is complete you have complete
| knowledge of the entire system at every moment in time. Of
| course there's a resolution to all this. The time steps are
| large compared to the Planck time, the wave functions are
| approximations; everything is an approximation, and that's
| what makes it a simulation.
|
| But the principle is that within your model, once the
| simulation is complete you know _everything_. You have
| measured every aspect of reality possible from your
| simplified version. If your model was arbitrarily close to
| reality you really would know everything
| troupe wrote:
| The fact that the calculation would take 47 years on a classical
| computer is only a breakthrough if the calculation is something
| we actually want to do. Otherwise it just means that we have
| created an unnecessary problem designed to be solved quickly by a
| quantum computer.
|
| That isn't to say that this quantum computer isn't impressive and
| a step forward. Just that the comparison isn't really meaningful
| unless the problem being calculated transcends computer
| architecture--meaning it is actually interesting to solve
| regardless of the architecture being used.
| jl6 wrote:
| The ability to entangle particles feels like a really amazing new
| capability. (New in the last 100 years anyway). Like it's a brand
| new kind of substance that has never been made before. Everything
| in all of history has been built out of boring old atoms, not
| these fancy new entangled things.
|
| Even if quantum computers turn out not to be able to solve
| interesting problems, I wonder if computers are the only thing we
| can make out of it.
| DonHopkins wrote:
| Electric space heaters? That's crypto's most practical
| application.
| oldgradstudent wrote:
| Crypto makes extremely expensive space heaters.
|
| On the positive side, they are just as efficient as regular
| space heaters in producing heat from electricity.
| rvz wrote:
| Nope.
|
| That is now overtaken by AI (especially LLMs) [0], which does
| not have any efficient methods of training, fine-tuning and
| inference.
|
| Crypto, on the other hand, already has alternative
| consensus algorithms available today which can cut its
| emissions down significantly [1].
|
| [0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-
| training...
|
| [1] https://consensys.net/blog/press-release/ethereum-
| blockchain...
___________________________________________________________________
(page generated 2023-07-03 23:01 UTC)