Newsgroups: comp.ai.neural-nets
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!omlinc
From: omlinc@cs.rpi.edu (Christian Omlin)
Subject: fault-tolerance of feedforward networks
Message-ID: <j+wg+7.@rpi.edu>
Keywords: feedforward networks, sensitivity, weight perturbation
Sender: omlinc@cs.rpi.edu
Nntp-Posting-Host: cs.rpi.edu
Organization: Rensselaer Computer Science, Troy NY
Distribution: usa
Date: 26 Apr 91 13:56:29 GMT
Lines: 31

Hi !

I am running simulations with backprop networks used as classifiers.
I am interested in the sensitivity of the network to perturbations
in the weights. My experiments indicate that performance degrades
more rapidly when the weights from the input to the hidden layer are
perturbed than when the weights from the hidden to the output layer
are perturbed. This suggests that, in my experiments, the shape
of the decision regions is largely determined by the first hidden
layer. Are there any references (simulations, analyses, etc.)
confirming this behavior?
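
For anyone who wants to try this, here is a minimal sketch of the kind
of experiment described above: add Gaussian noise to one weight layer
at a time and count how often the classification flips. This is purely
illustrative (the network is small, random, and untrained; the function
names, sizes, and noise level are my own choices, not the poster's
actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(X, W1, b1, W2, b2):
    # one-hidden-layer feedforward net; class = argmax of output units
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return out.argmax(axis=1)

def flip_rate(X, params, layer, sigma, trials=20):
    # fraction of inputs whose predicted class changes when the chosen
    # weight layer is perturbed with N(0, sigma^2) noise, averaged over
    # several independent perturbations
    W1, b1, W2, b2 = params
    clean = predict(X, W1, b1, W2, b2)
    flips = []
    for _ in range(trials):
        if layer == "input-hidden":
            noisy = predict(X, W1 + rng.normal(0, sigma, W1.shape), b1, W2, b2)
        else:
            noisy = predict(X, W1, b1, W2 + rng.normal(0, sigma, W2.shape), b2)
        flips.append((noisy != clean).mean())
    return float(np.mean(flips))

# toy data and a random network, just to exercise the code
X = rng.normal(size=(200, 4))
params = (rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(8, 3)), np.zeros(3))
for layer in ("input-hidden", "hidden-output"):
    print(layer, flip_rate(X, params, layer, sigma=0.5))
```

With a trained network one would of course measure classification error
against the labels rather than disagreement with the unperturbed net,
and sweep sigma to get a degradation curve per layer.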

Thanks.

Christian

----------------------------------------------------------------------------
Christian W. Omlin			

office:                                 home:
Computer Science Department             Foxberry Farm
Amos Eaton 119                          Box 332, Route #3
Rensselaer Polytechnic Institute        Averill Park, NY 12018
Troy, NY 12180 USA                      (518) 766-5790
(518) 276-2930                        

e-mail: omlinc@turing.cs.rpi.edu
----------------------------------------------------------------------------


