[HN Gopher] The Future of Deep Learning Is Photonic
       ___________________________________________________________________
        
       The Future of Deep Learning Is Photonic
        
       Author : Anon84
       Score  : 92 points
       Date   : 2021-07-05 13:56 UTC (9 hours ago)
        
 (HTM) web link (spectrum.ieee.org)
 (TXT) w3m dump (spectrum.ieee.org)
        
       | RicoElectrico wrote:
       | This future always seems to be just around the corner for the
       | past decade or so. Feels like vaporware to me.
        
         | throwaways885 wrote:
         | Deep learning is here, today. Not really vapourware.
        
         | dharmaturtle wrote:
         | AlexNet came out in 2012, less than 10 years ago.
         | 
         | Going from AlexNet to, say, self-driving cars in _under_ ten
         | years sounds insane to me. This is only the beginning.
        
           | melling wrote:
           | Google has been working on the self-driving car for 12 years.
           | 
           | https://en.m.wikipedia.org/wiki/Waymo
           | 
           | They had cars on the road in 2009:
           | 
           | "San Francisco was one of the first cities where we tested
           | our self-driving cars, dating back to 2009 when we traveled
           | everything from Lombard Street to the Golden Gate Bridge. Now
           | that we have the world's first fleet of fully self-driving
           | cars running in Arizona, the hilly and foggy streets of San
           | Francisco will give our cars even more practice in different
           | terrains and environments."
        
           | TaylorAlexander wrote:
           | Good point, I had not connected the two, but that's pretty
           | significant!
        
       | skybrian wrote:
       | On the other hand, will there be dramatic improvements on the
       | software side using improved algorithms? It seems like deep
       | learning research is currently more focused on making things
       | possible than making them efficient.
        
         | varelse wrote:
         | You might want to watch Nvidia's presentation on the recent
         | MLPerf competition. There's a tremendous amount of engineering
         | going into making this stuff more efficient on current
         | hardware.
         | 
         | https://developer.nvidia.com/blog/mlperf-v1-0-training-bench...
        
           | mirker wrote:
           | Most of the effort is in fine tuning the existing methods,
           | which is important, but also leads to reinforcing the status
           | quo in terms of algorithms. In other words, transistor
           | scaling gets a sizable fraction of optimization, and the
           | viable algorithms are ones most similar to dense matrix
           | multiplication. If you unfix the algorithms (e.g., analog or
           | very sparse models), you'll fall outside these optimization
           | efforts and your model will eat dust.
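           | 
           | Concretely (a toy comparison of my own, not from the talk): a
           | dense matmul maps straight onto the heavily tuned BLAS/tensor-
           | core path, while an unstructured-sparse product with a similar
           | nominal FLOP count usually sustains a far smaller fraction of
           | peak on current hardware:
           | 
           |   import time
           |   import numpy as np
           |   from scipy import sparse
           | 
           |   n = 2048
           |   rng = np.random.default_rng(0)
           |   a = rng.random((n, n), dtype=np.float32)
           |   b = rng.random((n, n), dtype=np.float32)
           |   a_sparse = sparse.random(n, n, density=0.05, format="csr",
           |                            dtype=np.float32, random_state=0)
           | 
           |   t0 = time.perf_counter(); a @ b;        t1 = time.perf_counter()
           |   t2 = time.perf_counter(); a_sparse @ b; t3 = time.perf_counter()
           | 
           |   # effective throughput per useful FLOP
           |   print("dense  GFLOP/s:", 2 * n**3 / (t1 - t0) / 1e9)
           |   print("sparse GFLOP/s:", 2 * 0.05 * n**3 / (t3 - t2) / 1e9)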
        
             | varelse wrote:
             | Which is why you need a killer demo. New ideas are great,
             | pure research rocks. But new ways to solve real world
             | problems derived from that research is the killer product
             | IMO.
             | 
             | See an unnamed AI ASIC company proudly announcing they have
             | achieved better perf/$ without achieving better perf, for an
             | example of how not to do this.
        
       | varelse wrote:
       | IMO these sorts of methods become interesting the day they are
       | demonstrated working in a conventional AI framework on a
       | commercially important graph like Resnet or BERT. Everything else
       | follows from a killer demo like that.
       | 
       | The benefit is obvious if it works, but it also entails embracing
       | non-deterministic results by design because it's analog. It's
       | also a major shock to the tech stack. It took a half decade for
       | GPUs to be noticed. This is a much more traumatic transition, no?
        
       | ArnoVW wrote:
       | Bumped into this company some years ago that uses light to
       | perform 'random projection', which can be used to approximate
       | matrix multiplication, an operation used a _lot_ in the AI space
       | (dimension reduction etc.).
       | 
       | How? They do the projection quite literally: with DLP projectors
       | that project onto a CCD through a 'dirty' medium, for randomness.
       | At current DLP resolutions and refresh rates, with every pixel
       | being one bit, I'll let you imagine how much data you can push
       | through using only 30 W. The whole thing is sold as an appliance,
       | a sort of co-processor that can do just one operation (but a
       | really complex one).
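       | 
       | For intuition, a minimal NumPy sketch of a random projection (my
       | own toy example, nothing to do with their actual hardware or
       | API): pairwise distances survive a big drop in dimension roughly
       | intact, which is why it works as a cheap pre-processing step.
       | 
       |   import numpy as np
       | 
       |   rng = np.random.default_rng(0)
       |   x = rng.standard_normal((1000, 10_000))   # 1000 samples, 10k dims
       |   R = rng.standard_normal((10_000, 256)) / np.sqrt(256)
       |   y = x @ R                                  # projected to 256 dims
       | 
       |   # Johnson-Lindenstrauss: distances are approximately preserved
       |   print(np.linalg.norm(x[0] - x[1]))
       |   print(np.linalg.norm(y[0] - y[1]))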
       | 
       | They released their first commercial product a couple of months
       | ago, hope it works out for them as the idea sounds pretty
       | amazing.
       | 
       | edit: found their website https://lighton.ai/
        
         | mattbit wrote:
         | Yeah, LightOn has been doing this for years already, it's kind
         | of strange that there was no mention of them in the article. I
         | know them because their offices are close to the research
         | center where I work (in Paris, France). If I'm not mistaken,
         | they were planning to offer their optical processor as a cloud
         | service too.
        
           | orange3xchicken wrote:
           | fyi the founder Igor Carron runs a pretty nice academic blog
           | on compressive sensing (a bit less academic in recent months)
           | 
           | https://nuit-blanche.blogspot.com/
           | 
           | He also hosts the advanced matrix factorization jungle
           | website, which is a nice browse if you're interested in
           | matrix-factorization techniques.
           | 
           | https://sites.google.com/site/igorcarron2/matrixfactorizatio.
           | ..
        
       | rejectedandsad wrote:
       | Wasn't aware that Luminous was an SNN application. That's...far
       | less interesting, given we can't even get electronic SNNs to do
       | anything useful.
        
       | [deleted]
        
       | adamnemecek wrote:
       | I have said this many times and was considered a fool.
       | Electricity is so bad compared with light.
        
         | analog31 wrote:
         | Part of my day job is optics design. There are some tradeoffs.
         | One of them is that the structures of ICs are approaching a
         | couple orders of magnitude smaller than the wavelengths of
         | visible light. To illustrate this point, we have to use
         | something other than visible light to make chips. Getting light
         | through smaller structures becomes quite lossy. A way to make
         | up for the size penalty might be to exploit increases in speed,
         | but the structures for making high speed optics (such as
         | femtosecond lasers) remain bulky.
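         | 
         | Rough numbers behind that size gap (ballpark figures of my own,
         | not from the article):
         | 
         |   visible_light_nm = 550   # green light, middle of the band
         |   feature_size_nm = 7      # order of a modern logic process node
         |   print(visible_light_nm / feature_size_nm)
         |   # ~80x, i.e. roughly two orders of magnitude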
         | 
         | I'm certainly not trying to nay-say it, and continued research
         | is worthwhile. The history has been that when a new thing is
         | announced that will beat silicon, by the time that thing comes
         | to fruition, silicon has caught up through just grinding
         | incremental improvement.
         | 
         | Optics can solve some interconnection issues, since light isn't
         | confined to 2-dimensional (or "2-1/2"-dimensional) structures.
        
           | api wrote:
           | If you could hit terahertz clock speeds, something with only
           | 100k transistors and feature sizes of hundreds of nanometers
           | or larger could kill modern CPUs... provided you could keep it
           | fed with data from fast enough I/O, of course.
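           | 
           | Very hand-wavy arithmetic on that (all numbers below are my
           | own guesses):
           | 
           |   photonic_clock_hz = 1e12  # hypothetical 1 THz optical logic
           |   cpu_clock_hz = 5e9        # high-end conventional CPU
           |   cpu_ipc = 8               # generous instructions per cycle
           |   print(photonic_clock_hz / (cpu_clock_hz * cpu_ipc))
           |   # ~25x per core, before core counts or memory bandwidth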
        
           | the__alchemist wrote:
           | Why is visible-light wavelength a limitation? Energy
           | requirements?
        
             | analog31 wrote:
             | Just technology. Visible light can use materials that are
             | relatively straightforward: glass, pure silica, and some
             | other things can be made transparent to visible light.
             | Silicon is friendly as a detector material. Getting
             | materials, detectors, and light sources to work at shorter
             | and shorter wavelengths is a technological hurdle. Not
             | insurmountable, but hard.
        
               | aborsy wrote:
               | Loss is a main issue. It's at a minimum only at particular
               | wavelengths.
        
         | marcosdumay wrote:
         | Analog computers seem to be well suited to running neural
         | networks (that has been true since the beginning; this is not
         | the first time conventional computers have been beaten), but
         | they are a really bad choice for almost everything else.
         | 
         | In all likelihood, we will find many more uses for optical
         | processors, but I really don't think they will ever be your
         | main CPU.
        
           | orbifold wrote:
           | Analog computers can't easily be multiplexed, and the
           | integration density of memory + other compute is nowhere near
           | as high as current SRAM/DRAM. This might change with
           | memristive crossbars, but that still doesn't solve the
           | structural part of the problem, since most deep learning
           | workloads nowadays are structurally very far from a feed-
           | forward perceptron, and dynamic execution etc. is absent from
           | analog approaches.
        
             | visarga wrote:
             | Modern architectures are becoming simpler, usually just
             | stacks of transformers, and they are good for text, vision,
             | audio, or almost any task without special modifications.
             | That means that if someone could implement an optical
             | transformer that is 100x more efficient or faster, everyone
             | would want to have it.
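             | 
             | For a sense of why that mapping is plausible, here is a toy
             | single-head self-attention in NumPy (my own sketch, not any
             | particular model); nearly every FLOP is a dense matmul,
             | which is exactly the operation an optical co-processor
             | would accelerate:
             | 
             |   import numpy as np
             | 
             |   def attention(x, Wq, Wk, Wv):
             |       # three projection matmuls, then two more matmuls
             |       q, k, v = x @ Wq, x @ Wk, x @ Wv
             |       s = q @ k.T / np.sqrt(k.shape[-1])
             |       w = np.exp(s - s.max(axis=-1, keepdims=True))
             |       w /= w.sum(axis=-1, keepdims=True)   # softmax rows
             |       return w @ v
             | 
             |   rng = np.random.default_rng(0)
             |   d = 64
             |   x = rng.standard_normal((16, d))   # 16 tokens, 64 dims
             |   Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
             |   print(attention(x, Wq, Wk, Wv).shape)   # (16, 64)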
        
       | shoto_io wrote:
       | Ah... I love the new buzzword people will be throwing around soon
       | enough. It sounds great, I have to admit.
       | 
       | "Photonic AI". Wow.
        
         | Guest42 wrote:
         | Pretty soon Microsoft will introduce us to the P# language and
         | have intro courses for those that don't want to deal with all
         | that math stuff, with free credits for Azure.
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-07-05 23:00 UTC)