Newsgroups: comp.ai.neural-nets
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!batcomputer!cornell!uw-beaver!ubc-cs!alberta!arms
From: arms@cs.UAlberta.CA (Bill Armstrong)
Subject: Re: Fault-Tolerance of NN
Message-ID: <arms.673553421@spedden>
Keywords: fault-tolerance, fault-recovery, fault-detection
Sender: news@cs.UAlberta.CA (News Administrator)
Organization: University of Alberta, Edmonton, Canada
References: <lx6gjll@rpi.edu>
Date: Mon, 6 May 1991 18:10:21 GMT

omlinc@cs.rpi.edu (Christian Omlin) writes:

>Hi !

>A few papers have appeared recently dealing with retraining (using
>backpropagation) as a strategy by which feedforward NN's can recover
>from faults such as neuron stuck-at faults. A few questions come to
>my mind:
> 
> 1. Often, retraining a network is claimed to be easier (i.e. faster)
>    than training the original, flawless network with small random initial
>    weights. My experiments show that a network is not guaranteed to
>    relearn the intended I/O mapping, i.e. a network may get trapped
>    in a local minimum. Is relearning inherently easier than learning
>    assuming there are enough units in the hidden layer ?

I think one could answer the question in each instance by trying to
determine whether the damaged network even has the capacity for
repair.  Suppose the stuck-at signal has to be eliminated (since it
is now useless); the question then is: can some other input
replace it?  If there are several inputs feeding into the same
elements as it does, then maybe some of them have almost zero weight,
and could be recruited to perform the correction.  To do that, they
would have to already be connected at least indirectly to the network
inputs that the damaged part needs.  That would be a fairly easy
correction to make.

Now, if there were only two inputs to the node the damaged one feeds
into, then the task which was previously done by two inputs must be
done by one.  That one may not even have connections to the right
inputs.  Hence the correction would likely have to come at a higher
level in the tree.  But then, the original stuck-at signal will have
passed through a sigmoid in a weighted combination with other inputs,
so the correction is no longer a "linear" matter, where one just
subtracts the erroneous signal and adds a correct one.  In fact, if
the stuck-at value shifts the value going into a sigmoid far away from
the centre, ALL of the inputs to that sigmoid will have a much smaller
dynamic range, and hence themselves would have to be "corrected"!  So in
this case, the correction might require a complete reorganization of
the computation.
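To see how badly saturation squashes the dynamic range, here is a toy
sketch in modern Python (purely illustrative; the operating-point
values are made up):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dsigmoid(x):
    # Slope of the sigmoid: how strongly small input changes get through.
    s = sigmoid(x)
    return s * (1.0 - s)

# Near the centre, changes pass through at the maximum slope.
print(dsigmoid(0.0))   # 0.25 -- maximum slope

# A large stuck-at contribution shifts the operating point into the
# flat tail, so EVERY other input's effect is squashed too.
print(dsigmoid(6.0))   # roughly 0.0025 -- about 100x less dynamic range
```

So a single stuck-at fault that saturates one node effectively
"corrects" all of its healthy inputs down toward zero influence.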

>2. Suppose we can retrain a network, we are not guaranteed that the
>   network exhibits the same characteristics (e.g. generalization)
>   which may have been one of the criteria during the design of the NN.
>   Wouldn't it be more reasonable to detect structural damages of the
>   NN before it is used in an application and repair the damage ? 
>   (This would require some method for detecting such faults.)

The effect on generalization could be approached by looking at the product of
the weights and derivatives of sigmoids through which the stuck-at
signal passes.  The greater this product, the greater is the
distortion of the network output caused by the stuck-at fault.  Hence,
I think you are right about trying to detect structural damage before
use.
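That product is easy to compute for a given path; a rough sketch (the
two-layer path, weights, and operating points below are hypothetical,
just to show the shape of the calculation):

```python
import math

def dsigmoid_at(pre_activation):
    # Slope of the sigmoid at the node's operating point.
    s = 1.0 / (1.0 + math.exp(-pre_activation))
    return s * (1.0 - s)

def path_sensitivity(weights, pre_activations):
    # Product of (weight * sigmoid slope) at each node the stuck-at
    # signal passes through on its way to the output.  The bigger this
    # product, the bigger the distortion the fault causes.
    sens = 1.0
    for w, a in zip(weights, pre_activations):
        sens *= w * dsigmoid_at(a)
    return sens

# Large weights, near-centre operating points: the fault matters a lot.
print(path_sensitivity([4.0, 3.0], [0.1, -0.2]))

# Same weights, but saturated nodes downstream: the fault barely shows.
print(path_sensitivity([4.0, 3.0], [6.0, 5.0]))
```

Ranking the units by this sensitivity would suggest where structural
damage is most worth detecting before use.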

>3. Giving a NN a retraining capability, certainly requires
>   additional hardware and information about the training set. How
>   big is the additional cost of hardware of a NN with retraining
>   capability as opposed to a non-retrainable NN ?

It would appear from the above that having a lot of redundant
connections could make retraining easier.  Unfortunately, this might
not lead to as good generalization in the first place!  If the weights
of the connections are not zero, then this would also slow down the
network in its undamaged state.  So the cost is not only in hardware,
but also in the speed of execution.

>4. It seems fault-tolerance is not an inherent property of NN, rather
>   they have to be designed with fault-tolerance in mind. There seem
>   to be two possibilities for improving the fault-tolerant behavior:
>   changes in the architecture and changes in the training procedure.
>   Which of the two is more effective ? 

It seems to me that fault-tolerance is about the same as insensitivity
to input data which contains a small number of values that are way out
of line.  In the case of NN, a value which is way out of line can still
have an effect because of large weights along paths to the output.

I can contrast that with the adaptive logical networks which I work
on: a stuck-at logical signal x either gets through an AND gate (or an
OR gate) or it doesn't.  For example, if a different input y to that
AND gate is 0, then the signal x will have absolutely no effect
on the output of the AND, which is still 0.
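The masking is easy to verify exhaustively; a tiny sketch (again
illustrative Python, with x assumed stuck at 1):

```python
# A stuck-at-1 fault on input x of a 2-input AND gate.  Whenever the
# other input y is 0, the gate output is 0 regardless of x, so the
# fault is completely masked on that input pattern.
def and_gate(x, y):
    return x and y

STUCK_X = 1  # input x is stuck at logical 1

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    healthy = and_gate(x, y)
    faulty = and_gate(STUCK_X, y)
    print(x, y, healthy, faulty, healthy == faulty)
```

Only the pattern x=0, y=1 exposes the fault; on the other three
patterns the damaged and undamaged gates agree exactly, with no
partial, weighted leakage of the bad signal.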

If one could limit the size of the weights in a
backpropagation-trained network, one might get some fault tolerance;
and if one used, instead of sigmoids, functions that are constant
outside the mid-range, that would localize the damage.
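A sketch of such a flat-tailed squashing function (my own illustrative
definition, not anything standard from the literature):

```python
def hard_sigmoid(x):
    # Piecewise-linear squashing function that is EXACTLY constant
    # outside the mid-range [-1, 1], unlike the ordinary sigmoid,
    # whose tails are flat only asymptotically.
    if x <= -1.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 0.5 * (x + 1.0)

# With an ordinary sigmoid, a stuck-at contribution always nudges the
# output a little.  Here, once the node is in a flat region, further
# perturbations vanish entirely, so the damage cannot propagate.
print(hard_sigmoid(2.0))        # already saturated high
print(hard_sigmoid(2.0 + 0.5))  # stuck-at perturbation: same output
```

Combined with a cap on weight magnitudes, this would bound how far a
single bad signal can push any node, which is the localization effect
I have in mind.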

>Any comments are appreciated.

A lot of the above is conjecture, not based on experience.  But I
think it tends to help explain your experience.


>Christian

--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071
