Newsgroups: comp.ai.neural-nets
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!cheng
From: cheng@ral.rpi.edu (Wei-Ying Cheng)
Subject: question
Message-ID: <HQ-=JK@rpi.edu>
Sender: cheng@ral.rpi.edu 
Nntp-Posting-Host: mars.ral.rpi.edu
Organization: Rensselaer Polytechnic Institute, Troy NY
Distribution: comp.ai.neural-nets
Date: 20 Mar 91 03:55:31 GMT
Lines: 15

I have a question which may be interesting:

Suppose we have a sample set S which contains a huge number of
samples, so that it is impossible to sum over all the samples in S.
Instead, we choose a sample s=(x,y) from S at random according
to a prior distribution, and we want the NN to learn this sample.
If the output of the NN on input x is y', then we have a norm
d = |y - y'|. Clearly d is a random variable. The problem is how
to develop a learning rule such that d is minimized in a
probabilistic sense, e.g. E(d) is minimized, or P(min(d))
converges to 1, etc. Are there any references on this problem?
Since it is related to the generalization problem of NNs, I think
it is very interesting. I would greatly appreciate any help.
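[For concreteness, one standard way to attack "minimize E(d) using one
random sample at a time, without summing over S" is stochastic
approximation, i.e. online stochastic (sub)gradient descent. A minimal
sketch on a toy one-weight model y' = w*x follows; every specific here
(the linear model, the names w, lr, true_w, the uniform input
distribution) is my own assumption for illustration, not part of the
question above.]

```python
import random

# Toy illustration of stochastic (sub)gradient descent: minimize
# E[d] with d = |y - y'| by drawing one sample s = (x, y) at a time
# from the input distribution, never summing over the whole set S.
# The "network" is a single weight w with output y' = w * x.

random.seed(0)
true_w = 2.0   # unknown target relation y = true_w * x (assumed)
w = 0.0        # network weight, to be learned
lr = 0.05      # learning rate

for t in range(5000):
    x = random.uniform(-1.0, 1.0)  # draw x according to the prior
    y = true_w * x                 # the sample's desired output
    y_pred = w * x                 # network output y'
    # d = |y - y_pred| is the random norm; its subgradient w.r.t. w
    # is -sign(y - y_pred) * x, so step in the opposite direction.
    if y != y_pred:
        w += lr * (1.0 if y > y_pred else -1.0) * x

print(w)  # w should wander into a small neighborhood of true_w
```

With a fixed learning rate w only converges to within a band of width
about lr around true_w; letting lr decay over time (as in
Robbins-Monro stochastic approximation) is what gives convergence of
E(d) in the probabilistic sense asked about.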

