[HN Gopher] For algorithms, a little memory outweighs a lot of time
___________________________________________________________________
For algorithms, a little memory outweighs a lot of time
Author : makira
Score : 108 points
Date : 2025-05-21 19:34 UTC (3 hours ago)
(HTM) web link (www.quantamagazine.org)
(TXT) w3m dump (www.quantamagazine.org)
| cperciva wrote:
| Minus the fuzz: A multitape Turing machine running in time _t_
| can be simulated using _O(sqrt(t log t))_ space (and typically
| more than _t_ time).
|
| https://arxiv.org/abs/2502.17779
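|
| Back-of-the-envelope, to see the scale of the bound (my own
| sketch, using log base 2 and ignoring constant factors):
|
|   import math
|
|   # compare a time budget t against the space bound sqrt(t log t)
|   for t in [10**6, 10**9, 10**12]:
|       space = math.sqrt(t * math.log2(t))
|       print(f"t = {t:.0e}  ->  sqrt(t log t) ~ {space:.1e}")
|
|   # t = 1e+06  ->  sqrt(t log t) ~ 4.5e+03
|   # t = 1e+09  ->  sqrt(t log t) ~ 1.7e+05
|   # t = 1e+12  ->  sqrt(t log t) ~ 6.3e+06
|
| So a trillion-step computation can be simulated in only a few
| million cells of space, per the new bound.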
| LPisGood wrote:
| I think it is very intuitive that more space beats the pants off
| of more time.
|
| In time O(n) you can touch at most O(n) cells on a tape, but a
| tape of length n already has 2^n possible configurations (for an
| alphabet with 2 symbols), so you can do far more with n space
| than with n time.
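|
| A toy way to see the counting gap (illustrative only):
|
|   from itertools import product
|
|   n = 4
|   # n time steps can only visit n cells, but those n binary
|   # cells already admit 2^n distinct configurations
|   configs = list(product("01", repeat=n))
|   print(len(configs))  # 16 = 2^4, versus only 4 steps of "time"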
| thatguysaguy wrote:
| Intuitive yes, but since P != PSPACE is still unproven it's
| clearly hard to demonstrate.
| dragontamer wrote:
| The article is about a new proof wherein P == PSPACE.
|
| Something we all intuitively expected but someone finally
| figured out an obscure way to prove it.
|
| --------
|
| This is a really roundabout article that takes a meandering
| path to a total bombshell in the field of complexity theory.
| Sorry for spoiling, but you'd expect an article about P ==
| PSPACE to get to the point faster....
| LPisGood wrote:
| This article is not about a proof that P = PSPACE. That
| would be way bigger news since it also directly implies P =
| NP.
| LPisGood wrote:
| I think that since many people find it intuitive that P != NP,
| and PSPACE sits at the very top of the polynomial hierarchy,
| P != PSPACE is intuitive even though it's unproven.
| porphyra wrote:
| There's not even a proof that P != EXPTIME haha
|
| EDIT: I am a dumbass and misremembered.
| doc_manhat wrote:
| I think there is, right? It's been a long time, but I seem to
| remember it following from the time hierarchy theorem.
| qbane wrote:
| But you also spend time on updating cells, so it is not _that_
| intuitive.
| LPisGood wrote:
| I'm not sure what you mean here. If you're in the realm of
| "more space" then you're not counting the time it takes.
|
| More precisely, I think it is intuitive that the class of
| problems that can be solved in any time given O(n) space is
| far larger than the class of problems that can be solved in
| any space given O(n) time.
| Almondsetat wrote:
| If your program runs in O(n) time, it cannot use more than
| O(n) memory (an upper bound on memory usage).
|
| If your program uses O(n) memory, it must run in at least
| O(n) time (a lower bound on time).
| delusional wrote:
| This is obviously demonstrably true. A Turing machine running
| in O(n) time must halt. One running in O(n) space is free not
| to.
| thaumasiotes wrote:
| Almondsetat's proof seems more obvious. Given O(n) time,
| you can only use O(n) space, so you're comparing "O(n)
| space, any amount of time" with "O(n) space, O(n) time",
| and it turns out you get more resources the first way.
| hn_acc1 wrote:
| My intuition: the value of a cell can represent the result of
| multiple (many) time units used to compute something. If you
| cannot store enough intermediate results, you may end up
| needing to recalculate the same / similar results over and over
| - at least in some algorithms. So one cell can represent the
| results of hundreds of time units, and being able to store /
| load that one value to re-use it later can then replace those
| same hundreds of time units. In effect, space can be used for
| "time compression" (like a compressed file) when the time is
| used to compute similar values multiple times.
|
| If intermediate results are entirely uncorrelated, with no
| overlap in the work at all, that would not hold - space will
| not help you. Edit: This kind of problem is very rare. Think of
| a cache with 0 percent hit rate - almost never happens.
|
| And you can't really do it the other way around (at least not
| in current computing terms / concepts): you cannot use a single
| unit of time as a stand-in / proxy for hundreds of cells, since
| we don't quite have infinitely broad SIMD architectures.
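|
| The standard toy example of that trade (my sketch, not from
| the article) is memoized vs. naive Fibonacci:
|
|   from functools import lru_cache
|
|   def fib_naive(n):
|       # recomputes overlapping subproblems: exponential time
|       if n < 2:
|           return n
|       return fib_naive(n - 1) + fib_naive(n - 2)
|
|   @lru_cache(maxsize=None)  # one stored cell per subproblem...
|   def fib_memo(n):
|       # ...buys linear time: each value is computed only once
|       if n < 2:
|           return n
|       return fib_memo(n - 1) + fib_memo(n - 2)
|
|   print(fib_memo(90))  # instant; fib_naive(90) would take ages
|
| Each cached cell stands in for the huge subtree of recursive
| calls that would otherwise recompute it.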
| slt2021 wrote:
| I think this theorem applies well to modern LLMs: a large
| language model with pre-computed weights can run very complex
| computations that approximate human knowledge, which would
| otherwise have been impossible or would have required many
| orders of magnitude more compute.
| frollogaston wrote:
| Also, the O(1) random memory access assumption makes it easy to
| take memory for granted. Really it's something like O(n^(1/3))
| when you're scaling the computer to the size of the problem,
| and you can see this in practice in datacenters.
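|
| A toy illustration (my numbers; just assuming access cost
| scales with the cube root of the number of cells in a 3D
| layout):
|
|   # worst-case wire distance to one of n cells laid out in
|   # 3D space grows like n^(1/3)
|   for n in [10**3, 10**6, 10**9]:
|       cost = n ** (1 / 3)
|       print(f"n = {n:.0e}  ->  relative access cost ~ {cost:.0f}")
|
|   # n = 1e+03  ->  relative access cost ~ 10
|   # n = 1e+06  ->  relative access cost ~ 100
|   # n = 1e+09  ->  relative access cost ~ 1000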
|
| I forget the name of the O(1) access model. Not UMA, something
| else.
| whatever1 wrote:
| Lookup tables with precalculated things for the win!
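|
| (A classic small-scale version, sketched: spend 256 cells of
| space once, then answer every popcount query with table loads.)
|
|   # precompute the popcount of every byte once...
|   POPCOUNT = [bin(b).count("1") for b in range(256)]
|
|   def popcount32(x):
|       # ...then each 32-bit query is four table lookups
|       # instead of a loop over bits
|       return (POPCOUNT[x & 0xFF] + POPCOUNT[(x >> 8) & 0xFF]
|               + POPCOUNT[(x >> 16) & 0xFF]
|               + POPCOUNT[(x >> 24) & 0xFF])
|
|   print(popcount32(0b1011_0010_0001))  # 5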
|
| In fact I don't think we would need processors anymore if we were
| centrally storing all of the operations ever done in our
| processors.
|
| Now fast retrieval is another problem for another thread.
| chowells wrote:
| Oh, that's not a problem. Just cache the retrieval lookups too.
___________________________________________________________________
(page generated 2025-05-21 23:00 UTC)