[HN Gopher] Training of Physical Neural Networks
___________________________________________________________________
Training of Physical Neural Networks
Author : Anon84
Score : 68 points
Date : 2024-07-10 13:13 UTC (9 hours ago)
(HTM) web link (arxiv.org)
(TXT) w3m dump (arxiv.org)
| UncleOxidant wrote:
| So it sounds like these PNNs are essentially analog
| implementations of neural nets? Seems like an odd choice of
| naming to call them 'physical'.
| tomxor wrote:
| ANN is taken.
| TheLoafOfBread wrote:
| I mean, LoRa was taken too before LoRA became a thing
| tomxor wrote:
| I don't mean taken globally; the two LoRAs are at least in
| different domains. Artificial Neural Networks and Physical
| Neural Networks are both machine learning, so discussion
| referring to both is highly probable, and the former is far
| more established, so calling these Analog Neural Networks
| would never have lasted long.
| pessimizer wrote:
| Makes sense as opposed to "abstract." With the constant
| encoding and decoding that has to be done when things are going
| in and out of processors and storage (or sensors), digital
| processes are always in some sense simulations.
| ksd482 wrote:
| _PNNs resemble neural networks, however at least part of the
| system is analog rather than digital, meaning that part or all
| the input/output data is encoded continuously in a physical
| parameter, and the weights can also be physical, with the
| ultimate goal of surpassing digital hardware in performance or
| efficiency._
|
| I am trying to understand what form a node takes in PNNs.
| Is it a transistor? Or is it more complex than that? Or is
| it a combination of a few things, such as an analog signal
| and some other sensors, which work together to form a
| single node like the one we are all familiar with?
|
| Can anyone please help me understand what exactly is "physical"
| about PNNs?
| sigmoid10 wrote:
| It's just the general idea of implementing the computational
| part of neurons directly in hardware instead of software,
| for example by calculating sums or products using voltages
| in circuits, i.e. analog computing. The actual
| implementation is up to the designer, who in turn will try
| to mimic a certain architecture.
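|
| A toy sketch of that idea (my own illustration with made-up
| numbers, not anything from the paper): one "neuron" as an
| analog multiply-accumulate, where inputs are voltages,
| weights are conductances, and the summation happens for free
| on a shared wire:
|
|   import numpy as np
|
|   rng = np.random.default_rng(0)
|
|   x = rng.uniform(0.0, 1.0, size=8)  # input voltages (made-up units)
|   w = rng.uniform(0.0, 1.0, size=8)  # programmable conductances
|
|   # Ideal digital result: a plain dot product.
|   ideal = x @ w
|
|   # Physical result: each branch sources a current i = v * g and
|   # Kirchhoff's current law sums them at the output node, but real
|   # parts add mismatch and noise (~1% here, arbitrarily chosen).
|   mismatch = 1.0 + 0.01 * rng.standard_normal(8)
|   analog = (x * w * mismatch).sum() + 0.01 * rng.standard_normal()
|
|   print(ideal, analog)  # close, but never bit-exact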
| Shawnecy wrote:
| My knowledge in this area is incredibly limited, but I figured
| the paper would mention NanoWire Networks (NWNs) as an emerging
| physical neural network[0].
|
| Last year, researchers from the University of Sydney and UCLA
| used NWNs to demonstrate online learning of handwritten digits
| with an accuracy of 93%.
|
| [0] = https://www.nature.com/articles/s41467-023-42470-5
| tomxor wrote:
| Last time I read about this the main practical difficulty was
| model transferability.
|
| The very thing that makes it so powerful and efficient is
| also the thing that makes it uncopiable: sensitivity to tiny
| physical differences between devices inevitably gets encoded
| into the model during training.
|
| It seems intuitive that this is an unavoidable, fundamental
| problem. Maybe that scares away big tech, but I quite like
| the idea of having invaluable, non-transferable,
| irreplaceable little devices: not so easily deprecated by
| technological advances, flying in the face of consumerism,
| getting better with age, making people want to hold onto
| things.
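|
| A quick toy simulation of why that happens (my own made-up
| model and numbers, not from the paper): train a linear layer
| "in the loop" through a device with fixed per-unit gain
| errors, then move the learned weights to a second device:
|
|   import numpy as np
|
|   rng = np.random.default_rng(1)
|
|   # A "device" is a linear layer whose weights get scaled by
|   # fixed, device-specific gain errors (5% here, arbitrary).
|   def make_device(seed, n=16):
|       g = 1.0 + 0.05 * np.random.default_rng(seed).standard_normal(n)
|       return lambda X, w: X @ (w * g)
|
|   dev_a, dev_b = make_device(101), make_device(202)
|
|   w_true = rng.standard_normal(16)       # target weights
|   X = rng.standard_normal((256, 16))     # training inputs
|   y = X @ w_true                         # training labels
|
|   # Hardware-in-the-loop training: measure the error through
|   # device A, update with the idealized squared-error gradient.
|   w = np.zeros(16)
|   for _ in range(500):
|       err = dev_a(X, w) - y
|       w -= 0.05 * X.T @ err / len(X)
|
|   mse = lambda dev: float(np.mean((dev(X, w) - y) ** 2))
|   print(mse(dev_a))  # near zero: w silently absorbed A's gains
|   print(mse(dev_b))  # much worse: the same w misses on device B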
| bongodongobob wrote:
| Reminds me of the evolutionary FPGA experiment where the
| evolved circuit depended on parasitic analog effects of that
| specific chip. The same configuration wouldn't work on a
| different FPGA.
| cyberax wrote:
| Here's the paper about it:
| https://www.researchgate.net/publication/2737441_An_Evolved_...
|
| And a more approachable article:
| https://www.damninteresting.com/on-the-origin-of-circuits/
| rusticpenn wrote:
| What they did was overfitting. We later found other ways of
| getting around the issue.
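|
| One standard mitigation (my own sketch, not necessarily the
| fix meant here) is to randomize the simulated device
| parameters at every training step, so the solution can't
| latch onto any one chip's quirks:
|
|   import numpy as np
|
|   rng = np.random.default_rng(2)
|   w_true = rng.standard_normal(16)
|   X = rng.standard_normal((256, 16))
|   y = X @ w_true
|
|   w = np.zeros(16)
|   for _ in range(500):
|       # Fresh random 5% gain errors each step: train against a
|       # whole population of plausible devices, not one instance.
|       g = 1.0 + 0.05 * rng.standard_normal(16)
|       err = X @ (w * g) - y
|       w -= 0.05 * X.T @ err / len(X)
|
|   # w lands near w_true (up to a small noise floor), so it
|   # should transfer across devices drawn from that population.
|   print(float(np.mean((X @ w - y) ** 2)))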
| trextrex wrote:
| Well, the brain is a physical neural network, and evolution
| seems to have figured out how to generate a (somewhat) copiable
| model. I bet we could learn a trick or two from biology here.
| tomxor wrote:
| Some parts are copiable, but not the more abstract things
| like the human intellect, for lack of a better word.
|
| We are not even born with what you might consider basic
| mental faculties. For example, it might seem absurd, but we
| have to learn to see... We are born with the "hardware" for
| it (a visual cortex, an eye, all defined by our genes), but
| it's actually trained from birth; there is even a feedback
| loop that causes the retina to physically develop properly.
| immibis wrote:
| They raised some cats from birth in an environment with
| only vertically-oriented edges, none horizontal. Those cats
| could not see horizontally-oriented things.
| https://computervisionblog.wordpress.com/2013/06/01/cats-and...
|
| Likewise, kittens that had one eye patched during the same
| period remained blind in that eye forever.
| tomxor wrote:
| Wow, that's a horrific way of proving that theory.
| BriggyDwiggs42 wrote:
| Geez poor kitties, but that is interesting.
| alexpotato wrote:
| Another example:
|
| Children who were "raised in the wild" or locked in a room
| by themselves have been shown to be incapable of learning
| full human language.
|
| The working theory is that our brains can only learn certain
| skills at certain stages of brain development.
| hansworst wrote:
| The way the brain does it is by giving users a largely
| untrained model that they themselves have to train over the
| next 20 years for it to be of any use.
| alexpotato wrote:
| > Last time I read about this the main practical difficulty was
| model transferability.
|
| There is a great write-up of this in this old blog post:
| https://www.damninteresting.com/on-the-origin-of-circuits/
| robertsdionne wrote:
| This is "Mortal Computation" coined in Hinton's The Forward-
| Forward Algorithm: Some Preliminary Investigations
| https://arxiv.org/abs/2212.13345.
| craigmart wrote:
| Schools?
___________________________________________________________________
(page generated 2024-07-10 23:00 UTC)