Newsgroups: comp.ai.neural-nets
Path: utzoo!utgpu!jarvis.csri.toronto.edu!csri.toronto.edu!songw
From: songw@csri.toronto.edu (Wenyi Song)
Subject: Re: Refs/opinions wanted -- Neural nets & approximate reasoning
Message-ID: <8811200202.AA15157@russell.csri.toronto.edu>
Summary: We can explain the results in other frameworks
Keywords: symbolic processing vs. dynamics of neural nets
Lines: 30
Organization: University of Toronto, CSRI
References: <88Nov18.011810est.6198@neat.ai.toronto.edu>
Date: Sat, 19 Nov 88 21:02:03 EST

In article <88Nov18.011810est.6198@neat.ai.toronto.edu> bradb@ai.toronto.edu (Brad Brown) writes:
>...
>   On the other hand, practical applications of NNs are held
>   back by
>... 
>   (3)  Absence  of   an  ability   to  easily  explain  why  a
>        particular result  was achieved.   Because knowledge is
>        distributed throughout  the network  and  there  is  no
>        concept of  the network  as a whole proceeding stepwise
>        toward a solution, explaining results is difficult.

It may remain difficult, if not impossible, to explain the results of NNs
in terms of traditional symbolic processing. However, this is not a drawback
if you do not attempt to unify them into a grand theory of AI :-)

An alternative is to explain the phenomenology in terms of the dynamics
of neural networks. It seems to me that this is the correct way to go:
we gain much better global predictability of information processing in
neural networks by trading off controllability of the local discrete steps.
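To make the trade-off concrete, here is a minimal sketch (in modern Python with NumPy; all names are illustrative, not from the article) of a Hopfield-style net. Individual asynchronous unit flips are hard to narrate step by step, yet the global behaviour is predictable: an energy function provably never increases, so the net descends to a stored attractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hopfield net storing one pattern via a Hebbian outer-product rule.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
n = pattern.size
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)           # no self-connections

def energy(s):
    # Lyapunov function E = -1/2 s^T W s; it cannot increase
    # under asynchronous sign updates, which is what makes the
    # *global* dynamics explainable even when single steps are not.
    return -0.5 * s @ W @ s

# Start from a corrupted copy of the stored pattern.
state = pattern.copy()
state[[0, 3]] *= -1

energies = [energy(state)]
for _ in range(3):                 # a few asynchronous sweeps
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1
        energies.append(energy(state))

print(energies[0], energies[-1])   # energy has descended
print(np.array_equal(state, pattern))
```

The explanation of the final result is not a stepwise symbolic derivation but a statement about the dynamics: the state rolled downhill in energy into the basin of the stored pattern.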

The Journal of Complexity devoted a special issue to neural computation
this year.

