Newsgroups: comp.ai.philosophy
Path: utzoo!utgpu!watserv1!ssingh
From: ssingh@watserv1.waterloo.edu (Sneaky Sanj ;-)
Subject: Perceptron limitations...
Message-ID: <1991Apr2.092041.9391@watserv1.waterloo.edu>
Organization: University of Waterloo
Date: Tue, 2 Apr 1991 09:20:41 GMT
Lines: 29

I am writing up a paper for philosophy and I would like to push the
non-reductive materialism model of the mind.

The basic idea is that an increase in quantity gives rise to a 
spontaneous and sudden change in quality.

I was wondering whether it is correct to cite Minsky & Papert's _Perceptrons_
in support of such a model of mind: in order for a human mind to
process a symbolic language like English, the brain must be of
sufficient complexity, and the failure of lower primates to
demonstrate this capacity is the result of their less complex brains.

This is analogous to a single-layer net being unable to learn the
XOR rule, while a multi-layer one is able to implement it
successfully.
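For what it's worth, the XOR point can be made concrete in a few lines of
code. The sketch below (my own illustration, not anything from the book) runs
the perceptron learning rule on the XOR truth table, where it never reaches
zero errors because the classes are not linearly separable, and then shows a
hand-wired two-layer net that computes XOR by combining OR and AND units:

```python
# XOR truth table: ((x1, x2), target)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def step(z):
    """Threshold unit: fires iff its net input is positive."""
    return 1 if z > 0 else 0

# Single-layer perceptron: y = step(w1*x1 + w2*x2 + b).
# The perceptron learning rule cycles forever on XOR, since no line
# in the (x1, x2) plane separates the 1s from the 0s.
w1 = w2 = b = 0.0
for _ in range(100):
    errors = 0
    for (x1, x2), target in data:
        err = target - step(w1 * x1 + w2 * x2 + b)
        if err:
            errors += 1
            w1 += err * x1
            w2 += err * x2
            b += err
    if errors == 0:
        break
print("single-layer errors after training:", errors)  # never 0

# Two-layer net: hidden units compute OR and AND; the output unit
# fires when OR is on but AND is off -- which is exactly XOR.
def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # OR gate
    h_and = step(x1 + x2 - 1.5)      # AND gate
    return step(h_or - h_and - 0.5)  # OR AND NOT(AND)

print([xor_net(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 0]
```

The jump from "cannot represent XOR at all" to "represents it with one extra
layer" is, as far as I can tell, the kind of quantity-to-quality transition
you are after.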

It may be that I am way off base, but regrettably I lack the
skill to follow the rigour of the book, so if anyone can
help me out, I would be very grateful.

Thanks in advance.

Ice. "We're all clones..."-Alice Cooper.

-- 
"No one had the guts... until now!"  
$anjay $ingh     Fire & "Ice"     ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
ROBOTRON Hi-Score: 20 Million Points | A new level of (in)human throughput...
!blade_runner!terminator!terminator_II_judgement_day!watmath!watserv1!ssingh!
