Newsgroups: comp.ai.neural-nets
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!news.cs.indiana.edu!noose.ecn.purdue.edu!lips.ecn.purdue.edu!kavuri
From: kavuri@lips.ecn.purdue.edu (Surya N Kavuri )
Subject: Re: Radial basis functions
Message-ID: <1991May15.212920.5972@noose.ecn.purdue.edu>
Sender: root@noose.ecn.purdue.edu (ECN System Management)
Organization: Purdue University Engineering Computer Network
Date: Wed, 15 May 1991 21:29:20 GMT

References: <1991May15.000449.7396@noose.ecn.purdue.edu> <1991May15.194433.10566@nntp-server.caltech.edu>

In article <1991May15.194433.10566@nntp-server.caltech.edu>, tylerh@nntp-server.caltech.edu (Tyler R. Holcomb) writes:
> kavuri@lips.ecn.purdue.edu (Surya N Kavuri ) writes:
> 
> 
> > Radial basis functions are nonmonotonic and are not 
> > used for nets using backprop.  How can then one determine
> > the centers and radii of these functions ?
> > 
> >					SURYA KAVURI
> 
> Remember - backprop just means "gradient descent", which
> is exactly how Poggio goes about training his RBF networks.

  Thanks for your reply.  I would like to ask some questions.

  Do they use RMS error minimization instead of K-means? 
  I can imagine serious local-minima problems with gradient 
 descent algorithms for RBF nets.  How does this compare with the K-means approach?
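 To be concrete about what I mean by gradient descent here, a minimal
 sketch of training an RBF net on squared (RMS) error, assuming Gaussian
 basis functions; the names and learning rate are mine, not Poggio's
 actual formulation, and only the output weights are adapted (the common
 fixed-centers simplification):

```python
# Hypothetical sketch: a 1-D Gaussian RBF net trained by gradient
# descent on squared error.  Centers/widths are fixed here; they could
# be descended on too, which is where local minima would bite.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))      # training inputs
y = np.sin(3 * X[:, 0])                   # target function

K = 5
centers = rng.uniform(-1, 1, size=(K, 1)) # basis-function centers
widths = np.full(K, 0.5)                  # radii (scale factors)
w = np.zeros(K)                           # output weights

def phi(X, centers, widths):
    # (N, K) matrix of Gaussian activations
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2))

lr = 0.05
for _ in range(500):
    H = phi(X, centers, widths)
    err = H @ w - y                       # residual on all samples
    grad_w = H.T @ err / len(X)           # gradient of 0.5 * MSE
    w -= lr * grad_w

mse = float(np.mean((phi(X, centers, widths) @ w - y) ** 2))
```

 The weight update alone is a linear least-squares problem, so it is
 well behaved; it is the center/width updates that make the error
 surface nonconvex.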


> However, M&D's method, while appearing very simplistic, is
> orders of magnitude faster and every bit as effective as "backprop".
> 
> Without knowing more about your intended application, I would recommend
> the K-means approach in M&D.


 ** How does the K-means approach fare in the case of sparse training data?
 As I see it, RBF with K-means is a better scheme for classification 
 than KNN (K nearest neighbors), as it does not need to store the 
 class samples explicitly.  An RBF net only needs to store a few centers
 and scale factors.  This advantage seems to blur when the number
 of RBFs needed is high.  The number of RBFs can be large when the 
 classes are close together and elongated. 
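 To make the storage point concrete, a minimal sketch of the kind of
 classifier I have in mind (Gaussian basis, hypothetical per-class
 models): the only stored data are a few centers and scales per class,
 never the raw training samples a KNN would keep.

```python
# Illustrative sketch, not a specific published method: classify by
# summed Gaussian responses of each class's stored centers.
import numpy as np

def rbf_score(x, centers, widths):
    # summed Gaussian response of one class's centers to point x
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * widths ** 2)).sum()

def classify(x, class_models):
    # class_models: {label: (centers, widths)} -- all that is stored
    return max(class_models, key=lambda c: rbf_score(x, *class_models[c]))

class_models = {
    "a": (np.zeros((2, 2)), np.ones(2)),       # 2 centers near origin
    "b": (np.full((2, 2), 5.0), np.ones(2)),   # 2 centers near (5, 5)
}
label = classify(np.array([0.1, 0.1]), class_models)
```

 With K centers per class in d dimensions the storage is O(K*(d+1)) per
 class, versus O(N*d) for a KNN over N samples; the advantage
 disappears when K approaches N.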

 ** How does the K-means approach work when the classes are close?
  As I understand it, you randomly pick some centers for the clusters 
 and assign all the points in a class to these clusters depending
 on their distance (Euclidean) from the centers.  When classes are 
 close, is a cluster allowed to include points from more than one class?
 Or do we limit the growth of the spheres when a point of a different
 class is encountered?  If we choose to stop the growth of a sphere
 when a point of a different class is encountered, the spheres grown for
 a particular class may not be able to cover all the sample points for 
 that class.  This may require picking some more starting points 
 and making new spheres until all the patterns are well enclosed.  

 Basically, how does the K-means approach work in the case of very close classes?
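 For reference, the plain (class-blind) K-means procedure I have in mind
 is roughly the following sketch of Lloyd's algorithm; note that the
 assignment step never consults class labels, which is exactly why a
 cluster can absorb points of several classes when the classes are close:

```python
# Minimal sketch of Lloyd's K-means.  Clusters are defined purely by
# Euclidean distance to the nearest center; class labels play no role.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # update step: each center moves to the mean of its points
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels
```

 Running this separately on each class's samples (rather than on the
 pooled data) would be one way to keep clusters class-pure, at the cost
 of more centers overall.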

> -- 
>                 ------------------------------------------------------------
> Tyler Holcomb   *   "Remember, one treats others with courtesy and respect *
> tylerh@juliet   *   not because they are gentlemen or gentlewomen, but     *
>   caltech.edu   *   because you are."       -Garth Henrichs                *

 SURYA N. KAVURI
 (FIAT LUX)
 
