[HN Gopher] The Society of Mind (1986) [pdf]
___________________________________________________________________
The Society of Mind (1986) [pdf]
Author : eigenvalue
Score : 111 points
Date : 2022-12-22 23:01 UTC (23 hours ago)
(HTM) web link (www.acad.bg)
(TXT) w3m dump (www.acad.bg)
| lisper wrote:
| I was a grad student in AI at the time this book came out so I
| can tell you a little bit about the historical context from my
| personal perspective with benefit of hindsight. The field at the
| time was dominated by two schools of thought, called the "neats"
| and the "scruffies". At the risk of significant
| oversimplification, the neats thought that the right way to do AI
| was using formal logic while the scruffies took an empirical
| approach: noodle around with code and see what works. Both
| approaches led to interesting results. The neat legacy is modern-
| day theorem provers while the scruffy legacy is chatbots, self-
| driving cars, and neural nets.
|
| SoM didn't fit neatly (no pun intended) into either camp. It
| wasn't empirical and it wasn't formal. It was just a collection
| of random loosely-associated ideas and nothing ever came of it.
| It was too informal to lead to interesting theoretical results,
| and it was too vague to be implemented and so no one could test
| it experimentally. And both of those things are still true today.
| I think it's fair to say that if the author had been anyone but
| Minsky no one would have paid any attention to it at all.
| at_a_remove wrote:
| This made its way into pop culture via the X-Files, in an
| episode about A.I.: "Scruffy minds like me like puzzles. We
| enjoy walking down unpredictable avenues of thought, turning
| new corners but as a general rule, scruffy minds don't commit
| murder."
| eternalban wrote:
| > It was just a collection of random loosely-associated ideas
| and nothing ever came of it.
|
| I remember buying this in '89 and being completely underwhelmed
| by it. There is nothing there imo. I stopped paying attention
| to the name Minsky after this introduction to the 'great man'.
| DonHopkins wrote:
| https://web.media.mit.edu/~minsky/papers/SymbolicVs.Connecti...
| Logical vs. Analogical or Symbolic vs. Connectionist or Neat
| vs. Scruffy, by Marvin Minsky
|
| INTRODUCTION BY PATRICK WINSTON
|
| Engineering and scientific education conditions us to expect
| everything, including intelligence, to have a simple, compact
| explanation. Accordingly, when people new to AI ask "What's AI
| all about," they seem to expect an answer that defines AI in
| terms of a few basic mathematical laws.
|
| Today, some researchers who seek a simple, compact explanation
| hope that systems modeled on neural nets or some other
| connectionist idea will quickly overtake more traditional
| systems based on symbol manipulation. Others believe that
| symbol manipulation, with a history that goes back millennia,
| remains the only viable approach.
|
| Minsky subscribes to neither of these extremist views. Instead,
| he argues that Artificial Intelligence must employ many
| approaches. Artificial Intelligence is not like circuit theory
| and electromagnetism. AI has nothing so wonderfully unifying
| like Kirchhoff's laws are to circuit theory or Maxwell's
| equations are to electromagnetism. Instead of looking for a
| "Right Way," Minsky believes that the time has come to build
| systems out of diverse components, some connectionist and some
| symbolic, each with its own diverse justification.
|
| Minsky, whose seminal contributions in Artificial Intelligence
| are established worldwide, is one of the 1990 recipients of the
| prestigious Japan Prize---a prize recognizing original and
| outstanding achievements in science and technology.
|
| https://en.wikipedia.org/wiki/Neats_and_scruffies
|
| Neat and scruffy are two contrasting approaches to artificial
| intelligence (AI) research. The distinction was made in the 70s
| and was a subject of discussion until the middle 80s. In the
| 1990s and 21st century AI research adopted "neat" approaches
| almost exclusively and these have proven to be the most
| successful.[1][2]
|
| "Neats" use algorithms based on formal paradigms such as logic,
| mathematical optimization or neural networks. Neat researchers
| and analysts have expressed the hope that a single formal
| paradigm can be extended and improved to achieve general
| intelligence and superintelligence.
|
| "Scruffies" use any number of different algorithms and methods
| to achieve intelligent behavior. Scruffy programs may require
| large amounts of hand coding or knowledge engineering.
| Scruffies have argued that general intelligence can only be
| implemented by solving a large number of essentially unrelated
| problems, and that there is no magic bullet that will allow
| programs to develop general intelligence autonomously.
|
| The neat approach is similar to physics, in that it uses simple
| mathematical models as its foundation. The scruffy approach is
| more like biology, where much of the work involves studying and
| categorizing diverse phenomena.[a]
|
| https://www.amazon.com/Made-Up-Minds-Constructivist-Artifici...
|
| Made-Up Minds: A Constructivist Approach to Artificial
| Intelligence (Artificial Intelligence Series) Paperback -
| January 1, 2003
|
| Made-Up Minds addresses fundamental questions of learning and
| concept invention by means of an innovative computer program
| that is based on the cognitive-developmental theory of
| psychologist Jean Piaget. Drescher uses Piaget's theory as a
| source of inspiration for the design of an artificial cognitive
| system called the schema mechanism, and then uses the system to
| elaborate and test Piaget's theory. The approach is original
| enough that readers need not have extensive knowledge of
| artificial intelligence, and a chapter summarizing Piaget
| assists readers who lack a background in developmental
| psychology. The schema mechanism learns from its experiences,
| expressing discoveries in its existing representational
| vocabulary, and extending that vocabulary with new concepts. A
| novel empirical learning technique, marginal attribution, can
| find results of an action that are obscure because each occurs
| rarely in general, although reliably under certain conditions.
| Drescher shows that several early milestones in the Piagetian
| infant's invention of the concept of persistent object can be
| replicated by the schema mechanism.
|
| https://dl.acm.org/doi/10.1145/130700.1063243
|
| Book review: Made-Up Minds: A Constructivist Approach to
| Artificial Intelligence By Gary Drescher (MIT Press, 1991)
| varjag wrote:
| Well, it did spur research into multi-agent systems (a popular
| study area in the 1990s) and interaction protocols. So there was a
| degree of influence.
|
| But taken on its own it is indeed more a book of musings. I put
| GEB, ANKOS and the like into the same genre.
| lisper wrote:
| I'm with you on ANKOS, but GEB is an accessible and fun (if a
| bit wordy) introduction to formal systems and Godel's
| theorem, so I wouldn't put it in the same category. GEB also
| was not marketed as anything revolutionary (except in its
| pedagogy). ANKOS and SoM were.
| neilv wrote:
| Another thing to be aware of with SoM is that Minsky was
| reading in many fields, and trying to sketch out theories
| informed by that.
|
| One time, before the DL explosion, during a lull in AI, I sent
| a colleague a Minsky pop-sci quote from the earlier AI years,
| before our time, asserting that, soon, a self-teaching machine
| will be able to increase in power exponentially. I was making a
| joke about how that was more than a little over-optimistic. My
| colleague responded something like, "What you fail to see is
| that modern-day Marvin _is_ that machine."
|
| By the time I was bumping into AI at Brown and MIT, the
| students (including Minsky's protege, Push Singh, who started
| tackling commonsense reasoning) described SoM in various ways,
| including:
|
| * Minsky sketching out spaces for investigation, where each
| page was at least one PhD thesis someone could tackle. I see
| some comments here about the book seeming light and hand-wavy,
| but I suppose it's possible there's more thinking behind what
| is there than is obvious, and that it wasn't intended to be the
| definitive answer, but progress on a framework, and very
| accessible.
|
| * Suggestion (maybe half-serious) that the different theories
| of human mind or AI/robotics reflect how the particular
| brilliant person behind the theory thinks. I recall the person
| said it as "I can totally believe that Marvin is a society of
| mind, ___ thinks by ___ ..."
|
| I don't know anyone who held it out as a bible, but at the time
| it seemed that probably everyone in AI would do well to be aware
| of the history of thinking in the field, and of the current
| thinking of the people who founded it and who have spent many
| decades at the center of the action.
| echelon wrote:
| This corroborates my experience.
|
| Reading _Society of Mind_ in undergrad is one of the things
| that led me to doubt AI progress and to stray away from the
| field [1]. It was handwavy, conceptual, and far removed from
| the research and progress at the time. If you held it up to
| Norvig's undergraduate-level _Artificial Intelligence: A
| Modern Approach_, you could sense Minsky's book was as
| wishfully hypothetical as Kaku's pop-sci books on string
| theory.
|
| [1] Recent progress has led me right back. There's no more
| exciting place to be right now than AI.
| briga wrote:
| Even if it didn't lead to empirical results I think most of
| the value of the book today is in the questions Minsky asked.
| How is intelligence organized in a distributed system like a
| neural net? ChatGPT may be able to do amazing things, but the
| mechanisms it uses are still very opaque. So even if the
| theory may not be "useful", it is still worth pursuing IMO
|
| It's also pretty well written and written by someone who
| clearly spent a lot of mental energy on the problem
| detourdog wrote:
| It inspired me as an undergrad Industrial Design student around
| 1989. That and The Media Lab: Inventing the Future at M.I.T. by
| Stewart Brand were the two most influential technology books for
| me at that time.
|
| Coincidentally enough it turns out my cousin was in the thick
| of it while the pre-media lab was still part of the
| architecture school. She would tell me stories of what she was
| up to in college... when I read the book I had to loop back
| and ask her about it.
| codetrotter wrote:
| > Marvin Minsky's "Society of Mind" is a theoretical framework
| for understanding the nature of intelligence and how it arises
| from the interaction of simpler processes. The concept suggests
| that the mind is not a single entity, but rather a society of
| many different agents or processes that work together to produce
| intelligent behavior.
|
| > The concept of a "Society of Mind" is still relevant today and
| has influenced a number of fields, including artificial
| intelligence, cognitive psychology, and philosophy. It has also
| influenced the development of artificial intelligence systems
| that are designed to mimic the way the human mind works, using
| techniques such as artificial neural networks and machine
| learning.
|
| > Overall, the concept of a "Society of Mind" continues to be an
| important and influential idea in the study of intelligence and
| the nature of the mind.
|
| Or so at least, says ChatGPT when I asked it about this just now.
| Simplicitas wrote:
| How long before "ChatGPT" is just "Chat", as in "I just asked
| Chat and it said ..."?
| jaredsohn wrote:
| Won't they just give it a name? E.g., Siri, Alexa, Jeeves.
|
| Just noticed that GPT sounds a little like Jeeves.
| lhuser123 wrote:
| I hope never. But we really like to make things harder by
| giving the same name to many things.
| naillo wrote:
| Minsky's "Society of Mind" is still relevant today, IMO. It's a
| provocative idea that explains the complexity of the human mind
| as a society of simpler processes working together. In AI, it's
| inspired researchers to try and build systems with lots of
| interconnected, simple processes that work together like the
| human mind. And in cognitive psychology, it's a key concept
| that's helped researchers understand the mind as a complex
| network of simpler processes.
| timkam wrote:
| There is still plenty of research going on on agents, symbolic
| AI, and other approaches that sometimes and somewhat reflect (or
| have informed) ideas from Society of Mind. Some of the ideas are
| relevant from an application perspective (for sure, we have
| complex socio-technical systems where different 'agents'
| interact), others make it into learning, for example into RL,
| which was hyped some years ago. Other ideas feel old-fashioned and
| stuck in the past; this is, in my opinion, not necessarily
| because the ideas are generally bad, but often because some of
| the sub-communities move very slowly and struggle to embrace a
| pragmatic approach to modern applied research.
|
| Generally, I think it's good to maintain 'old' knowledge, and the
| only way to do so in a sustainable manner is to maintain a
| diversity of research directions, where plenty of researchers are
| committed to keeping the lights on by slowly advancing directions
| that are not on top of the hype cycle at the moment.
| sdwr wrote:
| This post is fucking catnip for me. I still believe society of
| mind is due for a huge resurgence, due to exactly what you're
| saying. Bottom-up composable skills will be a huge step forward,
| and free up GPT etc to be creative while getting the low-level
| stuff 100% correct (instead of screwing up arithmetic half the
| time)
|
| The inverse side of the coin is emotions, meaningful
| relationships, and wisdom, which I think work in a similar way
| but more diffusely. There can be a horny submodule that analyzes
| incoming content for sexual context, one for anger, fear,
| gratitude, etc. The same way an image processor convolves over
| pixels looking for edges and features, an emotional processor
| will operate over data looking for changes in relationships.
|
| Feelings act like filters on incoming data, and are composed out
| of base physiological reactions. Like anger involves adrenaline,
| which increases pain tolerance and dampens conscious thought in
| favor of subconscious and instant reactions.
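|
| A toy sketch of that filter idea (all module names and keyword
| cues here are made up for illustration):
|
|   import re
|
|   # Each "submodule" scans incoming content for its own signal
|   # and reports an intensity; together they act as filters.
|   MODULES = {
|       "anger":     {"insult", "unfair", "attack"},
|       "fear":      {"threat", "danger", "loss"},
|       "gratitude": {"thanks", "gift", "help"},
|   }
|
|   def emotional_response(text):
|       words = re.findall(r"\w+", text.lower())
|       return {name: sum(w in cues for w in words) / max(len(words), 1)
|               for name, cues in MODULES.items()}
|
|   print(emotional_response("thanks for the help after that attack"))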
| FredPret wrote:
| This makes so much intuitive sense to me. I love Society of
| Mind, and I wonder if it bears a relationship to how the human
| mind works.
| tern wrote:
| Absolutely, see Internal Family Systems. With a little
| practice, you can empirically determine that this is how your
| mind and emotions work.
|
| https://ifs-institute.com/
| bryan0 wrote:
| I'm not an expert in this field, but I think things have actually
| gone in the other direction. Look at IBM's Watson (which won on
| Jeopardy!): it was a system consisting of diverse agents that
| would all evaluate the question in their own way and report
| back a result and a confidence score.
|
| Now look at GPT, it is transformers all the way down and it is
| doing much more diverse things than Watson could ever do.
|
| So I think the key is not in the diversity of agents, but in the
| diversity of data representations. GPT is limited by text as a
| representation of language, but what if you could train on data
| even more fundamental and expressive?
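|
| A toy sketch of that Watson-style pattern (the agents here are
| obviously fake stand-ins): each one evaluates the question in its
| own way and reports an answer plus a confidence, and the most
| confident answer wins.
|
|   def keyword_agent(question):
|       # Crude heuristic agent: pattern-matches on the question.
|       return ("Paris", 0.6) if "France" in question else ("?", 0.1)
|
|   def lookup_agent(question):
|       # Structured-knowledge agent: exact lookup in a tiny table.
|       table = {"What is the capital of France?": ("Paris", 0.9)}
|       return table.get(question, ("?", 0.05))
|
|   def answer(question, agents):
|       candidates = [agent(question) for agent in agents]
|       return max(candidates, key=lambda pair: pair[1])
|
|   print(answer("What is the capital of France?",
|                [keyword_agent, lookup_agent]))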
| tern wrote:
| In a clinical setting, the idea is alive and well in the form of
| Internal Family Systems and other "parts work."
|
| I wouldn't be surprised if microservices come from the same root
| of inspiration as well, via object oriented programming (message
| passing), etc.
|
| I think the very idea of intelligence arising from communicating
| parts originated in that time period and has influenced many
| fields, though there could be earlier references.
| FredPret wrote:
| The society of agents idea helped me understand politics more.
|
| Of course he applies the idea to a single mind where each agent
| is a neuron / set of neurons, and in politics each agent is a
| mind.
| NelsonMinar wrote:
| I haven't heard anyone talk about this book in a long time. I
| read it while a grad student at the MIT Media Lab in the 90s
| (albeit not his course). I struggled to understand its relevance
| even then. I think for many people that book was an introduction
| to ideas about multi-agent and distributed systems. I'd already
| had that and didn't feel the book added much to the discussion
| other than introducing the idea.
|
| Academia has fashions. I feel like agent based systems will have
| their day again, perhaps with each agent backed by a deep
| learning system. It'll be easier to reason about than "convolve
| all these deep learning systems in a giant impenetrable
| network". That may be good or it may be bad.
|
| Minsky is of course infamous for having killed neural networks
| for a whole generation of researchers. He lived long enough to
| see their resurgence; I wonder if he commented on that?
| mtraven wrote:
| Logical Versus Analogical or Symbolic Versus Connectionist or
| Neat Versus Scruffy:
| https://ojs.aaai.org//index.php/aimagazine/article/view/894
| (from 1991, and a response to the revival of connectionism that
| happened in the late 80s).
|
| I often wonder what Minsky would think about the current
| generation of AI. My guess is that he'd be critical, because
| while their accomplishments are pretty impressive on the
| surface, they do very little to explain the mechanics of how
| humans perform complex problem solving, or really any kind of
| psychological model at all, and that is what he was really
| interested in. This has been a methodological problem with
| neural net approaches for many generations now.
|
| Minsky was as much a psychologist as a mathematician/engineer -
| Society of Mind owed a lot to Freud. That style of thinking
| seems to have dropped by the wayside, maybe for good reasons,
| but it's also kind of a shame. I'm not sure what insights you
| get into the human mind from building LLMs, powerful though
| they may be.
|
| For more of Minsky's thoughts on human intelligence, here's a
| recent book that collected some of his writings on education:
| https://direct.mit.edu/books/book/4519/Inventive-MindsMarvin...
| (disclaimer: I wrote the introduction).
| jerb wrote:
| > I often wonder what Minsky would think about the current
| generation of AI.
|
| I suspect he'd react similarly to Chomsky, who in a recent
| interview (MLST) was highly critical of LLMs as "not even a
| theory" (of what, I'm not sure... language acquisition?
| language production? maybe both).
|
| Minsky was more broadly critical of NNs because it wasn't
| clear how difficult the problems they solved actually were.
| Until we had a better measure of that, saying "I got a NN to
| do X" is kind of meaningless. He elaborates in this excellent
| interview from 1990, beginning at 45:00:
| https://youtu.be/DrmnH0xkzQ8?t=2700
| ghaff wrote:
| >My guess is that he'd be critical, because while their
| accomplishments are pretty impressive on the surface, they do
| very little to explain the mechanics of how humans perform
| complex problem solving, or really any kind of psychological
| model at all, and that is what he was really interested in.
|
| The success of machine learning/neural nets--in no small part
| because of the amount of computation resources we can throw
| at them--has really hogged the attention compared to
| fields like cognitive science, neurophysiology, and so forth.
| Work is certainly going on in other areas but I'm still
| struck that some of the questions that were being asked when
| I took brain science as an undergrad many decades ago (e.g.
| how do we recognize a face as a face?) are still being asked
| today.
|
| Given that ML is the thing that's getting the flashy results,
| it's not surprising it's the thing in the limelight--even if
| there's a suspicion that it maybe (probably?) only gets you
| so far without better understanding how learning happens in
| people (and other animals) and other aspects of thinking and
| intelligence.
| EliasY wrote:
| I believe "the society of mind" contains a bunch of really good
| but unorganized ideas for building intelligent models, but was
| written in such a way that it remained virtually impossible to
| implement them in a working program. Minsky's last book, called
| "The Emotion Machine", tries to reorganize these ideas into one
| giant architecture composed of at least five interconnected
| macrolevels of cognitive processes built from specialized agents.
| Having said that, "The Society of Mind" is one of the most
| difficult books I've read.
| empiko wrote:
| You can argue that there might be some similarities or analogies
| to be made. But that's it. The book is actually very irrelevant
| and it had literally no impact on how these new systems were
| created and conceptualized.
| btown wrote:
| In a way, any GAN
| (https://en.wikipedia.org/wiki/Generative_adversarial_network)
| has aspects of a Society of Mind: two different networks
| communicating with each other, with the discriminator attempting
| to find flaws in the generator's ongoing output.
|
| And
| https://scholar.google.com/scholar?hl=en&as_sdt=0%2C31&as_vi...
| shows many attempts to generalize this to multiple adversarial
| agents specializing in different types of critique.
|
| One of the challenges, I think, is that while some of these
| agents could interact with the world, it's far more rapid
| for training if they just use their own (imperfect) models of the
| relevant subset of the world to give answers instantaneously.
| Bridging this to increasingly dynamic physical environments and
| arbitrary tasks is a fascinating topic for research.
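|
| For anyone who wants the shape of that two-agent loop in code,
| here's a minimal PyTorch sketch on toy 1-D data (all layer sizes
| and hyperparameters arbitrary):
|
|   import torch
|   import torch.nn as nn
|
|   # Generator proposes samples; discriminator critiques them.
|   G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
|   D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
|   opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
|   opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
|   bce = nn.BCEWithLogitsLoss()
|
|   for step in range(1000):
|       real = torch.randn(32, 1) * 2 + 3  # "real" data: mean 3, std 2
|       fake = G(torch.randn(32, 8))       # generator's proposals
|
|       # Discriminator learns to tell real from fake (the critic).
|       loss_d = bce(D(real), torch.ones(32, 1)) + \
|                bce(D(fake.detach()), torch.zeros(32, 1))
|       opt_d.zero_grad(); loss_d.backward(); opt_d.step()
|
|       # Generator learns to fool the discriminator (the proposer).
|       loss_g = bce(D(fake), torch.ones(32, 1))
|       opt_g.zero_grad(); loss_g.backward(); opt_g.step()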
| scottlocklin wrote:
| [dead]
| Barrin92 wrote:
| I don't think the work has become irrelevant at all. ML models
| are fine but they're really just big function approximators. In
| the context of Minsky's book, they're the sort of processes or
| agents which, when put together and interacting, could maybe
| constitute a generally intelligent system. Which is how they
| actually tend to be used in the real world already, as parts of
| more complex systems that interact or communicate.
| bhouston wrote:
| I think it is an interesting idea and it is sort of akin to
| Freud's ego, superego, subconscious, etc. It is a
| conceptualization, an abstraction, probably a little arbitrary,
| that does not map well to physical constructs.
|
| To view it from the perspective of deep learning neural networks,
| one would view the society of mind as a proposed super structure
| on top of the various deep learning neural networks. There is
| this already, like the reinforcement learning structure for
| ChatGPT, or the multi-focus attentional systems used for code
| generation.
|
| As we build out a full AGI that can interact with the world, it
| is likely we will have specialized systems which mimic a society
| of mind, but given that Minsky's ideas are pretty rough and sort
| of vague, I am not sure his writings provide the best guidance,
| but probably can inspire a bit of work.
| edgyquant wrote:
| Think of humans as artificial neurons. Language is how we
| backpropagate, etc.
| 1letterunixname wrote:
| The technological singularity is close for purely automate-able
| processes.
|
| General AI, either in a form mimicking humans or some similar
| "being", is amorphous and still a long way off from what people
| see portrayed in fiction.
|
| One also has to ponder the functional definitions of self-
| awareness, intelligence(s), and consciousness as not magical but
| as emergent properties of the "individual" inferred by others
| through behaviors, especially communication. It is
| anthropocentric and arrogant to assume other agents are lesser or
| incapable simply by lacking a common language or mutual
| behavioral understanding. Learning and optimization for improving
| one's power and resources (fitness function, gradient vectors,
| bank account balance, etc.), especially through play and speed
| of adaptation through feedback, would be strong signals of this.
| dang wrote:
| Related:
|
| _The Society of Mind (2011)_ -
| https://news.ycombinator.com/item?id=30586391 - March 2022 (37
| comments)
|
| _The Society of Mind_ -
| https://news.ycombinator.com/item?id=12050936 - July 2016 (2
| comments)
|
| _Marvin Minsky 's Society of Mind Lectures_ -
| https://news.ycombinator.com/item?id=10971310 - Jan 2016 (6
| comments)
|
| _The Society of Mind (1988)_ -
| https://news.ycombinator.com/item?id=8877144 - Jan 2015 (6
| comments)
|
| _The Society of Mind Video Lectures_ -
| https://news.ycombinator.com/item?id=8668750 - Nov 2014 (10
| comments)
|
| _Marvin Minsky 's "The Society of Mind" now CC licensed_ -
| https://news.ycombinator.com/item?id=6846505 - Dec 2013 (2
| comments)
|
| _MIT OCW:The Society of Mind (Graduate Course by Minsky)_ -
| https://news.ycombinator.com/item?id=856714 - Oct 2009 (2
| comments)
| abudabi123 wrote:
| You can follow a lecture series at MIT presented by Marvin Minsky
| on AI. I think it was recorded before Nvidia fitted GPUs in a
| shoe box and changed the game and the price.
| cs702 wrote:
| Funnily enough, I'm currently trying to make my way through a
| preprint showing that models of dense associative memory with
| bipartite structure, _including Transformers_ (!!!), are a
| special case of a more general routing algorithm that implements
| a "block of agents" in a differentiable model of Minsky's
| Society of Mind: https://arxiv.org/abs/2211.11754. Maybe
| "symbolic" and "connectionist" AI are two sides of the same coin?
|
| EDIT: I feel compelled to mention that the efficient
| implementation of that more general routing algorithm can handle
| input sequences with more than 1M token embeddings on a single
| GPU, which quite frankly seems like it should be impossible but
| somehow it works:
| https://github.com/glassroom/heinsen_routing#routing-very-lo....
| p1esk wrote:
| How does his routing algorithm compare to attention? I saw this
| question in the repo faq, but no satisfactory answer is given.
| cs702 wrote:
| I _think_ the next-to-last FAQ ("Is it true that
| EfficientVectorRouting implements a model of associative
| memory?") answers that. Did you see it?
| p1esk wrote:
| Oh I see, thanks! Interesting. It sounds like this is some
| kind of _dynamic_ attention, as opposed to static attention
| in transformers, where queries and keys don't change during
| the calculation of their similarity. His routing algorithm
| computes the similarity iteratively.
|
| Is this your assessment as well?
| cs702 wrote:
| I'm still trying to make my way through the preprint :-)
|
| EDIT: According to https://ml-jku.github.io/hopfield-
| layers/#update , attention is the update rule for an
| (iterative) "dense associate memory," even though in
| practice it seems that one update works really, _really_
| well for attention if you train it with SGD.
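|
| Meanwhile, here's a rough numpy sketch of that update rule, with
| the stored patterns serving as both keys and values and beta an
| inverse temperature. Applying it once is ordinary attention;
| iterating it is associative-memory retrieval.
|
|   import numpy as np
|
|   def softmax(x):
|       e = np.exp(x - x.max(axis=-1, keepdims=True))
|       return e / e.sum(axis=-1, keepdims=True)
|
|   beta = 8.0                     # higher beta -> sharper recall
|   X = np.random.randn(16, 64)    # 16 stored patterns, dim 64
|   q = np.random.randn(1, 64)     # query / initial state
|
|   # One step of the update rule is exactly attention over X:
|   q1 = softmax(beta * q @ X.T) @ X
|
|   # The iterative ("dynamic") version applies the same rule
|   # repeatedly until the state settles near a stored pattern:
|   for _ in range(10):
|       q = softmax(beta * q @ X.T) @ X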
| [deleted]
| dpflan wrote:
| Thank you. If you gain any insights while reading, please
| share!
| eigenvalue wrote:
| Wow, this sounds exactly like what I was talking about. Thanks
| for the reference.
| eigenvalue wrote:
| Minsky wrote this book in 1986, towards the end of his very long
| career thinking about how to build intelligent machines. For a
| basic overview, see:
|
| https://en.wikipedia.org/wiki/Society_of_Mind
|
| You can find a complete pdf of the book here:
|
| http://www.acad.bg/ebook/ml/Society%20of%20Mind.pdf
|
| My question to the HN community is, has all this work become
| irrelevant given recent progress in machine learning,
| particularly with Transformer based models such as GPT-3 or
| "mixed modality" models such as Gato?
|
| It seems to me that some of these ideas could make a comeback in
| the context of a group of interacting models/agents that can pass
| each other messages. You could have a kind of "top level" master
| model that responds to a request from a human (e.g., "I just
| spilled soda on my desk, please help me") and then figures out a
| reasonable course of action. Then the master model issues
| requests to various "specialist models" that are trained on
| particular kinds of tasks, such as an image based model for
| exploring an area to look for a sponge, or a feedback control
| model that is trained to grasp the sponge, etc. Or in a more
| relevant scenario to how this tech is being widely used today, a
| GitHub Copilot type agent might have an embedded REPL and then
| could recruit an "expert debugging" agent which is particularly
| good at figuring out what caused an error and how to modify the
| code to avoid the error and fix the bug.
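|
| As a very rough sketch of the dispatch pattern I have in mind
| (everything here is hypothetical; each lambda stands in for a
| separately trained model, and a real master model would plan by
| itself rather than follow a canned script):
|
|   # Hypothetical specialist models keyed by what they do best.
|   specialists = {
|       "vision":  lambda task: "sponge located on shelf",
|       "control": lambda task: "sponge grasped",
|   }
|
|   def route(subtask):
|       # Stub router: keyword dispatch standing in for a learned
|       # policy that picks the right expert for each subtask.
|       return "vision" if "look" in subtask else "control"
|
|   def master(request):
|       plan = ["look for a sponge", "grasp the sponge"]  # canned
|       return [f"{route(t)}: {specialists[route(t)](t)}"
|               for t in plan]
|
|   print(master("I just spilled soda on my desk, please help me"))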
|
| I suppose the alternative is that we skip this altogether and
| just train a single enormous Transformer model that does all of
| this stuff internally, so that it's all hidden from the user, and
| everything is learned at the same time during end-to-end
| training.
| endlessvoid94 wrote:
| I just might have to dust off the copy on my shelf and give it a
| re-read.
| [deleted]
| mindcrime wrote:
| I've read this book a couple of times, with my most recent re-
| read being within the last year or two. So I guess that means
| that I, for one at least, find something of value in SofM even
| now.
|
| So the question then might be "what do you find valuable in it?"
|
| That would take a lot of words to answer fully, but let me start
| by saying that I agree with a lot of the other comments on
| this post. The theory, inasmuch as you can call it that, isn't
| super concrete, isn't necessarily something you can implement
| directly as such, does mostly lack any kind of experimental
| evidence, and is kind of hand wavy. Sooooo... what value does it
| have?
|
| Well for me it's mostly something I look at as inspirational from
| a very high-level, abstract point of view. It strikes me as more
| of a framework that could support very many theories, rather than
| a specific realizable theory. But I believe that there's
| something fundamentally correct (or at least _useful_ ) about the
| idea of a collection of semi-autonomous agents collaborating in
| a style akin to SofM. And on top of that, I think there are at
| least a handful of specific notions contained in the book that
| might be realizable and might prove useful. If you want a
| specific example, I'd say that I think something like K-lines may
| prove useful.
|
| Of course I have no experimental evidence, or much of anything
| else beyond intuition, to support my beliefs in this. And I'm
| just one random guy who's pretty much a nobody in the AI field. I
| just sit quietly at my computer and work, not really trying to
| attract a lot of attention. And in the process of doing so, I do
| occasionally consult _Society of Mind_. YMMV.
|
| And just to be clear in case anybody wants to misinterpret what
| I'm saying. It's not my "bible", and I'm not a Minsky acolyte,
| and I don't consider SofM to be the "be all end all" any more
| than I consider _A New Kind of Science_, _Godel, Escher, Bach_,
| _Hands-On Machine Learning with Scikit-Learn, Keras &
| Tensorflow_, _Computational Approaches to Analogical Reasoning:
| Current Trends_, or _Parallel Distributed Processing, Vol. 1:
| Foundations_ to be the "be all, end all". I'm all about applying
| Bruce Lee's mantra:
|
| _" Use only that which works, and take it from any place you can
| find it."_
| squokko wrote:
| Society of Mind is just a bunch of unfalsifiable speculations...
| more New Age mysticism than science or engineering. Not sure how
| it would have any impact
| dev_0 wrote:
| [dead]
| LesZedCB wrote:
| it reminds me a little bit of the Thousand Brains theory from
| Numenta. We'll see what they turn out in the future. I think
| philosophically they're a closer match to Minsky.
| dr_dshiv wrote:
| One of the first AI proposals was from Oliver Selfridge. He
| called it Pandemonium because it was a set of demons (i.e.,
| processes), and the loudest demon was successful.
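|
| The selection rule is easy to caricature in a few lines (the
| demons and features here are invented): each demon shouts a
| confidence for the input, and the loudest demon wins.
|
|   demons = {
|       "loop_demon":   lambda img: img.count("o"),
|       "stroke_demon": lambda img: img.count("|"),
|       "cross_demon":  lambda img: img.count("x"),
|   }
|
|   def recognize(img):
|       # The loudest demon (highest shout) names the input.
|       return max(demons, key=lambda name: demons[name](img))
|
|   print(recognize("o|o"))  # loop_demon shouts loudest here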
|
| In response, Paul Smolensky made the "Harmonium"--which was the
| first restricted Boltzmann machine. There, whichever process
| produced the most harmony among the elements was successful. It's
| still a really great paper.
|
| Harmony maximization was the same as free energy minimization.
| When Smolensky and Hinton collaborated (they were both postdocs
| under David Rumelhart and Don Norman at UCSD), they called it
| "goodness of fit." Still used today!
| eigenvalue wrote:
| Interesting! I started reading through this pdf after reading
| your comment here and it has a lot of cool ideas:
|
| https://stanford.edu/~jlmcc/papers/PDP/Volume%201/Chap6_PDP8...
| neilv wrote:
| Also look at his later book, "The Emotion Machine".
|
| When I took Minsky's "Society of Mind" class, he was working on
| the later book, and many lectures were him talking about what he
| had been working on earlier that day.
| XMPPwocky wrote:
| https://socraticmodels.github.io/ seems somewhat related, using
| an LLM as the top-level model.
| schizo89 wrote:
| Minsky is a hand-waver, to say the least.
| lolc wrote:
| I've been wondering about this too. The book gave me a way to
| think about consciousness, but I do wonder whether we'll ever see
| machines that use concepts at the described level. Because humans
| don't seem to be built that way, and the models we've built so
| far don't either.
| mooneater wrote:
| Sounds similar to what Google's SayCan is doing. https://say-
| can.github.io/
|
| They taught it separate skills. When a situation arises, the
| skills (which you could almost consider sub-agents) compete to
| decide who is most likely to be relevant here. Then that skill
| takes over for a bit.
|
| They also have a version called "Inner Monologue" in which the
| different parts "talk to each other" in the sense of
| collaboratively creating a single inner monologue, allowing for
| reactiveness/closed loop behaviour.
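|
| In code, the competition loop might look roughly like this (not
| the actual SayCan implementation; the skills and scoring are
| made up):
|
|   from dataclasses import dataclass
|   from typing import Callable
|
|   @dataclass
|   class Skill:
|       name: str
|       relevance: Callable[[str], float]  # "am I useful now?"
|       execute: Callable[[str], str]
|
|   def step(state, skills):
|       # Skills compete; the winner takes over for this step.
|       best = max(skills, key=lambda s: s.relevance(state))
|       return best.execute(state)
|
|   skills = [
|       Skill("find_sponge", lambda s: 0.8 if "spill" in s else 0.1,
|             lambda s: s + " -> sponge located"),
|       Skill("wipe_desk", lambda s: 0.9 if "located" in s else 0.1,
|             lambda s: s + " -> desk wiped"),
|   ]
|   state = "soda spill on desk"
|   for _ in range(2):
|       state = step(state, skills)
|   print(state)  # ... -> sponge located -> desk wiped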
|
| I interviewed 2 authors of SayCan/Inner Monologue here:
| https://podcasts.apple.com/us/podcast/karol-hausman-and-fei-...
___________________________________________________________________
(page generated 2022-12-23 23:00 UTC)