Newsgroups: comp.ai
Path: utzoo!utgpu!jarvis.csri.toronto.edu!csri.toronto.edu!tjhorton
From: tjhorton@csri.toronto.edu (Tim Horton)
Subject: Re: Biological relevance and AI (was Re: Who else isn't a science?)
Message-ID: <8806222042.AA17079@dixon.csri.toronto.edu>
Organization: University of Toronto, CSRI
References: <3c671fbe.44e6@apollo.uucp> <10510@agate.BERKELEY.EDU> <13100@shemp.CS.UCLA.EDU> <1988Jun14.135709.307@mntgfx.mentor.com>
Date:	Wed, 22 Jun 88 15:22:17 EDT

msellers@mntgfx.mentor.com writes:
>>> ...  AI grabs onto the neural net paradigm,
>>> say, and then never bothers to check if what is done with neural
>>> nets has anything to do with actual brains.
>
>Where are you getting your information regarding AI & the neural net paradigm?
>...  We do have a considerable amount of knowledge about the human brain,
>and (for the time being more to the point) about invertebrate nervous
>systems and the actions of individual neurons.

Where are you getting your information regarding the human brain?
Most of the brain is unknown; it really is an unscratched problem.
And on the contrary, knowledge of the operation of individual neurons
seems, if anything, even less to the point.

Look at it this way.  What if we were without any appropriate theory
(of serial computation, or even of electronic calculation), and a Motorola
68030 landed from another planet and inspired awe among us?

We might be able to watch voltage levels on pins, or take chips apart
and guess at the makings of structures like transistors and resistors.
We might figure out some of the 68000's I/O dependencies, and a
little bit about what happens when we poke a probe here or there.
One could imagine, then, claiming to "have a considerable amount of
knowledge" about 68000s.  It would certainly be a curious statement.
We surely mightn't have a clue about the underlying principles of CPU
design; very likely we wouldn't understand some of the most basic
fundamentals.  If, for example, we were without integer arithmetic,
much of what a 68000 did wouldn't make a speck of sense...

(Realize that even that little bit of math took us many thousands of
years to develop.  Surely computer science is young enough to admit
the possibility of simple and elegant but as-yet-undiscovered principles,
along as-yet-unimagined lines)...

Meanwhile, somebody might build "equivalents" to transistors -- call them
"light switches" -- and proceed to do research on things *we* would call
"electric lighting".  Though it may be all the rage at the time, and
garner support from granting agencies that overestimate the 68000-ish
properties, the research wouldn't exactly have a damn thing to do with
essential properties of the MC68030.

This is the sort of thing I think this person was getting at.


As for neural nets; take backpropagation for instance. It's almost
completely implausible for any biological system we know of, but it may
be one of the best general-purpose algorithms available.  Not only does
it require something no biological neural system is known to provide
(an exact error signal carried backward through the very connections
used on the forward pass), but it's ridiculously slow.  (People don't
always like to tell you how *long* it takes them to get their hyped-up
results!)  Further,
the basic structure of the problems this technology works on doesn't seem
to have much at all to do with the structure of most problems that natural
intelligence addresses.
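For concreteness, the core of the algorithm fits in a few lines of
(modern, purely illustrative) Python; the network shape, learning rate,
and the toy OR task are arbitrary choices of mine, not anything from
the literature.  The line to notice is the hidden-unit delta: an exact
error signal pushed backward through the same weights used on the
forward pass, which is the step nobody has found in real neurons.  And
even this trivial task takes thousands of passes over the data:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-input, 2-hidden, 1-output network; the third weight in each
# row is a bias term.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def train_step(x, target, lr=0.5):
    h, o = forward(x)
    # Output delta: derivative of the squared error through the sigmoid.
    d_o = (o - target) * o * (1 - o)
    # Hidden deltas: the output error propagated *backward* through the
    # forward weights w_o -- the biologically awkward step.
    d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
    # Gradient-descent weight updates.
    for i in range(2):
        w_o[i] -= lr * d_o * h[i]
        w_h[i][0] -= lr * d_h[i] * x[0]
        w_h[i][1] -= lr * d_h[i] * x[1]
        w_h[i][2] -= lr * d_h[i]
    w_o[2] -= lr * d_o

# Toy task: logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def epoch_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = epoch_loss()
for _ in range(2000):          # thousands of passes, even for OR
    for x, t in data:
        train_step(x, t)
after = epoch_loss()
```

After training, the squared error has dropped and the network classifies
the four cases correctly; nothing in the update rule corresponds to any
known synaptic mechanism.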

