[HN Gopher] Ask HN: What'd be possible with 1000x faster CPUs?
___________________________________________________________________
Ask HN: What'd be possible with 1000x faster CPUs?
Imagine an unlikely scientific breakthrough made general-purpose
CPUs many orders of magnitude faster, and that these became widely
available, probably alongside petabyte-scale RAM modules and an
appropriately fast memory bus. Besides making bloatware possible on
a previously unimaginable scale, what other interesting, maybe
revolutionary applications, impossible or at least impractical
today, would crop up?
Author : xept
Score : 28 points
Date : 2022-09-21 20:25 UTC (2 hours ago)
| ilaksh wrote:
| The thing is, computing has been getting steadily faster, just
| not at quite the pace it was before and in a different way.
|
| With GPUs we have proven that parallelism can be just as good or
| even better than speed increases in enhancing computation. And
| speed increases have kept trickling in as well.
|
| I don't think it's realistic to say that more speed advances are
| unlikely. We have already been through many different paradigm
| shifts in computing, from mechanical to nanoscale. There are new
| paradigms coming up such as memristors and optical computing.
|
| It seems like 1000x will make Stable Diffusion-style video
| generation feasible.
|
| We will be able to use larger, currently slow AI models in
| realtime for things like streaming compression or games.
|
| Real global illumination in graphics could become standard.
|
| Much more realistic virtual reality. For example, imagine a
| realistic forest stream that your avatar is wading through, with
| realtime accurate simulation of the water, and complex models for
| animal cognition of the birds and squirrels around you.
|
| I think with this type of speed increase we will see fairly
| general purpose AI, since it will allow average programmers to
| easily and inexpensively experiment with combining many, many
| different AI models together to handle broader sets of tasks and
| eventually find better paradigms.
|
| It could also allow for an emphasis on iteration in AI, which
| could move the focus away from parallel-specific types of
| computation and back to more programmer-friendly imperative
| styles, for example if combined with many smaller neural networks
| to enable program synthesis, testing, and refinement in real time.
|
| Here's a weird one: imagine something like emojis in VR, but 3D,
| animated, and customized on the fly for the context of what you
| are discussing, generated automatically by an AI you have given
| permission to do so.
|
| Or, hook the AI directly into your neocortex. Hook it into
| several people's neocortices and then train an animated AI 3d
| scene generation system to respond to their collective thoughts
| and visualizations. You could make serialized communication
| almost obsolete.
| thfuran wrote:
| >With GPUs we have proven that parallelism can be just as good
| or even better than speed increases in enhancing computation.
|
| Not really, no. It's just that certain classes of problems
| parallelize very readily, and it's relatively easy to figure out
| how to run something 1000 ways in parallel compared to figuring
| out how to achieve a 1000x single-thread speedup.
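|
| A toy sketch of that asymmetry (Python, with an illustrative
| workload, not anything referenced in the thread): independent
| items split trivially across worker processes, while a serial
| dependency chain gains nothing from extra workers.
|
|     from concurrent.futures import ProcessPoolExecutor
|
|     def heavy(x: int) -> int:
|         # Stand-in for an expensive, independent computation.
|         total = 0
|         for i in range(200_000):
|             total += (x * i) % 7
|         return total
|
|     def parallel_total(items):
|         # Easy case: items are independent, so N workers give
|         # roughly N-fold throughput.
|         with ProcessPoolExecutor() as pool:
|             return sum(pool.map(heavy, items))
|
|     def serial_chain(x: int, steps: int) -> int:
|         # Hard case: each step consumes the previous result, so
|         # extra workers cannot help; only a faster single thread
|         # would.
|         for _ in range(steps):
|             x = heavy(x) % 1000
|         return x
|
|     if __name__ == "__main__":
|         print(parallel_total(range(32)))
|         print(serial_chain(1, 32))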
|
| >Much more realistic virtual reality. For example, imagine a
| realistic forest stream that your avatar is wading through,
| with realtime accurate simulation of the water, and complex
| models for animal cognition of the birds and squirrels around
| you.
|
| I'm not sure 1000x would do much more than scratch the surface
| of that, especially if you're already tying a lot of it up with
| higher fidelity rendering.
| throwaway81523 wrote:
| Realistically, AI network training at the level being done by
| corporations with big server farms, becomes accessible to solo
| devs and hobbyists (let's count GPU's as general purpose). So if
| you want your own network for Stable Diffusion or Leela Chess,
| you can do on your own PC. I think that is the most interesting
| obvious consequence.
|
| Also, large scale data hoarding becomes far more affordable (I
| assume the petabyte ram modules also mean exabyte disk drives).
| So you can be your own Internet Archive, which is great.
| Alternatively, you can be your own NSA or Google/Facebook in
| terms of tracking everyone, which is less great.
| MisterSandman wrote:
| Much more complicated redstone CPUs in Minecraft.
| Tepix wrote:
| Assuming that a CPU at today's speeds would require vastly less
| power, we would have very powerful, very efficient mobile devices
| such as smartwatches.
|
| We'd probably use AI a lot more, on-device, for every single
| camera.
| quadcore wrote:
| Good code.
| rozap wrote:
| Atlassian products would be twice as fast.
| cutler wrote:
| A Ruby on Rails renaissance.
| Jaydenaus wrote:
| First thing that comes to mind is using your mobile device as
| your main workstation would become a lot more realistic.
| robertlagrant wrote:
| Be able to run Emacs as fast as I can run Vim?
| bob1029 wrote:
| Single shard MMO with no instancing requirements?
| mixmastamyk wrote:
| Real-time ray tracing was the goal in the old days. Are we there
| yet at adequate quality?
| wtallis wrote:
| No, we're not there yet. Ray tracing in games is still merely
| augmenting traditional rasterization, and requires heavy post-
| processing to denoise because we cannot yet run with enough
| rays per pixel to get a stable, accurate render.
| ttoinou wrote:
| Infinite arbitrary precision real time Mandelbrot zoom generation
| :-)
| ilaksh wrote:
| Can't you already do this with a good shader program? Well, a
| Google search finds one that claims 'almost infinite'.
| stevejobs69 wrote:
| >'almost infinite'
|
| I mean one of the fundamental attributes of infinity is that
| you can never be 'almost there'.
| operator-name wrote:
| Only if you roll your own arbitrary-precision type on the GPU,
| which is much harder given the constraints.
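|
| For reference, a minimal CPU-side sketch of arbitrary-precision
| Mandelbrot iteration using Python's decimal module (the working
| precision and sample point are illustrative; a shader version
| would need a hand-rolled big-number type, as noted above):
|
|     from decimal import Decimal, getcontext
|
|     getcontext().prec = 100  # working digits; deep zooms need more
|
|     def escape_iterations(cr, ci, max_iter=1000):
|         # Iterate z -> z*z + c until |z| exceeds 2 (escape) or
|         # the iteration budget runs out.
|         zr = Decimal(0)
|         zi = Decimal(0)
|         for n in range(max_iter):
|             zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
|             if zr * zr + zi * zi > 4:
|                 return n
|         return max_iter
|
|     # Example: a point near the boundary of the set.
|     print(escape_iterations(Decimal("-0.743643887037151"),
|                             Decimal("0.131825904205330")))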
| frontierkodiak wrote:
| Incredible biodiversity monitoring: everywhere, all the time.
| exq wrote:
| Instead of Electron, we'd be bundling an entire OS with our chat
| apps.
| ilaksh wrote:
| Electron basically IS an entire OS, since Chromium has APIs for
| doing just about anything, including accessing the filesystem,
| USB devices, and 500 other things.
| tiernano wrote:
| Java might run at a decent speed... Might, but probably won't
| (jk, sorry, I couldn't help myself...) [edit: Grammarly decided
| to remove some text when fixing spelling...]
| captaincrunch wrote:
| Likely we would see 8192-bit keys for SSH.
| domenicrosati wrote:
| Simulation? Like fluid dynamics. I heard that was CPU-intensive.
| yoyopa wrote:
| It would be nice for the architecture field. We deal with lots
| of crappy unoptimized software that's 20-30 years old. So if you
| like nice buildings and better energy performance (which requires
| simulations), give us faster CPUs.
|
| Imagine you're working on an airport: thousands of sheets, all of
| them PDFs, and hundreds or thousands of people flipping through
| them and waiting 2-3+ seconds for the screen to refresh. CPUs
| baby, we need CPUs.
| mwint wrote:
| Is there any way I can contact you? I have an aspirational
| semi-related project.
| kramerger wrote:
| Windows update in the background would take 3 hours instead of
| 4.
|
| The average Node.js manifest file would contain 12,000x more
| dependencies.
|
| Also, we would see a ton more AI being done on the local CPU,
| anything from genuine OS improvements to super realistic cat
| filters on Teams/Zoom.
|
| And finally, I think people would need to figure out storage and
| network bottlenecks, because there is only so much you can do
| with compute before you end up stalling while waiting for more
| data.
| naikrovek wrote:
| We have always been memory-bound, in one way or another, even
| today.
|
| The performance difference between an application that accesses
| RAM in random patterns and one that accesses it sequentially is
| far larger than you expect if you haven't actually measured it:
| an order of magnitude or more in favor of sequential access.
| Having your data already in the L1 cache before you need it is
| worth the effort it takes to make that happen.
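|
| A rough sketch of the gap being described (Python/NumPy; the
| array size is illustrative and exact ratios depend on the
| machine): summing in order streams through memory, while
| gathering the same elements in a shuffled order defeats the
| prefetcher and caches (and also pays for a temporary copy).
|
|     import time
|     import numpy as np
|
|     n = 50_000_000
|     data = np.ones(n, dtype=np.float64)
|     shuffled = np.random.permutation(n)
|
|     t0 = time.perf_counter()
|     in_order = data.sum()              # sequential pass
|     t1 = time.perf_counter()
|     gathered = data[shuffled].sum()    # random-order gather
|     t2 = time.perf_counter()
|
|     print(f"sequential: {t1 - t0:.3f}s  random: {t2 - t1:.3f}s")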
| anigbrowl wrote:
| Whole brain simulation, AGI.
| cguess wrote:
| Still not even close to a brain though.
| lm28469 wrote:
| A truck with 1,000,000,000 hp still won't beat a Ferrari on a
| race track; nothing guarantees that faster hardware would solve
| any of our AI problems.
| Salgat wrote:
| Training time is a massive constraint on advancement of the
| science, so at the very least the field would progress much
| faster and be much more accessible to researchers.
| h2odragon wrote:
| 1 million Science per Minute Factorio bases.
___________________________________________________________________
(page generated 2022-09-21 23:01 UTC)