[HN Gopher] Revisiting Minsky's Society of Mind in 2025
___________________________________________________________________
Revisiting Minsky's Society of Mind in 2025
Author : suthakamal
Score : 64 points
Date : 2025-06-18 15:40 UTC (7 hours ago)
(HTM) web link (suthakamal.substack.com)
(TXT) w3m dump (suthakamal.substack.com)
| suthakamal wrote:
| As a teen in the '90s, I dismissed Marvin Minsky's 1986 classic,
| The Society of Mind, as outdated. But decades later, as
| monolithic large language models reach their limits, Minsky's
| vision--intelligence emerging from modular "agents"--seems
| strikingly prescient. Today's Mixture-of-Experts models, multi-
| agent architectures, and internal oversight mechanisms are
| effectively operationalizing his insights, reshaping how we think
| about building robust, scalable, and aligned AI systems.
| detourdog wrote:
| I was very inspired by the book in 1988-89 as a second-year
| industrial design student. I think this was a thread on HN
| about 2 years ago.
| generalizations wrote:
| Finally someone mentions this. Maybe I've been in the wrong
| circles, but I've been wishing I had the time to implement a
| society-of-mind-inspired system ever since llamacpp got started,
| and I never saw anyone else reference it until now.
| sva_ wrote:
| Honestly, I never really saw the point of it. It seems like
| introducing a whole bunch of inductive biases, which Richard
| Sutton's 'The Bitter Lesson' warned against.
| fishnchips wrote:
| Having studied sociology and psychology in my previous life, I
| am now surprised at how relevant some of those almost-forgotten
| ideas have become to my current life as a dev!
| colechristensen wrote:
| MIT OpenCourseWare course including video lectures taught by
| Minsky himself:
|
| https://ocw.mit.edu/courses/6-868j-the-society-of-mind-fall-...
| suthakamal wrote:
| amazing find. thank you for sharing this!
| fossuser wrote:
| > Eventually, I dismissed Minsky's theory as an interesting relic
| of AI history, far removed from the sleek deep learning models
| and monolithic AI systems rising to prominence.
|
| That was my read of it when I checked it out a few years ago:
| it seemed obsessed with explicit rules-based Lisp expert
| systems and "good old-fashioned AI" ideas that never made much
| sense, were nothing like how our minds work, and were obvious
| dead ends that did little of anything actually useful (imo).
| All that stuff made the AI field a running joke for decades.
|
| This feels a little like falsely attributing new ideas that work
| to old work that was pretty different? Is there something
| specific from Minsky that would change my mind about this?
|
| I recall reading that there were early papers suggesting neural
| network ideas closer to the modern approach (iirc), but the
| hardware just didn't exist at the time for them to be tried.
| That stuff was pretty different from the mainstream ideas of
| the era, though, and distinct from Minsky's work (I thought).
| spiderxxxx wrote:
| I think you may be mistaking Society of Mind for a different
| book. It's not about Lisp or "good old-fashioned AI" but about
| how the human mind _may_ work - something that we could
| possibly simulate. It's a set of observations about how we
| perform thought. The ideas in the book are not tied to a
| specific technology; they are about how a complex system such
| as the human brain works.
| suthakamal wrote:
| I don't think we're talking about the same book. Society of
| Mind is definitely not an in-the-weeds book that digs into
| things like Lisp, etc. in any detail. Rather than trying to
| change your mind, I'd encourage you to re-read Minsky's book
| if you found my essay compelling, and to ignore it if not.
| adastra22 wrote:
| You are surrounded by GOFAI programs that work well every
| moment of your life, from air traffic control planning to
| heuristics-based compiler optimization. GOFAI has this problem
| where as soon as it solves a problem and gets it working, the
| result stops being "real AI" in the minds of the population
| writ large.
| fossuser wrote:
| Because it isn't AI, it never was, and it had no path to
| becoming it. The new stuff is, and the difference is obvious.
| mcphage wrote:
| Philosophy has the same problem, as a field. Many fields of
| study have grown out of philosophy, but as soon as something
| is identified, people say "well that's not Philosophy, that's
| $X" ... and then people act like philosophy is useless and
| hasn't accomplished anything.
| empiko wrote:
| I completely agree with you, and I am surprised by the praise
| in this thread. The entire research program that this book
| represents has been dead for decades.
| photonthug wrote:
| It seems like you might be confusing "research programs" with
| things like "branding" and surface-level terminology. And
| probably missing the fact that society-of-mind is about
| architecture more than implementation, so it's pretty
| agnostic about implementation details.
|
| Here, enjoy this thing, clearly building on SoM ideas and
| edited earlier this week:
| https://github.com/camel-ai/camel/blob/master/camel/societie...
| suthakamal wrote:
| I pretty clearly articulate the opposite. What's your
| evidence to support your claim?
| drannex wrote:
| Good timing, I just started rereading my copy last week to get my
| vibe back.
|
| Not only is it great for tech nerds such as ourselves on the
| technical side, but it's also a great philosophy for thinking
| about and living life. Such a phenomenal read: easy, simple, a
| wonderful format. I wish more tech-focused books were written
| in this style.
| mblackstone wrote:
| In 2004 I previewed Minsky's chapters-in-progress for "The
| Emotion Machine", and exchanged some comments with him (which
| was a thrill for me). Here is an excerpt from that exchange:
|
| Me: I am one of your readers who falls into the gap between
| research and implementation: I do neither. However, I am enough
| of a reader of research, and have done enough implementation
| and software project management, that when I read of ideas such
| as yours, I evaluate them for implementability. From this point
| of view, "The Society of Mind" was somewhat frustrating: while
| I could well believe in the plausibility of the ideas, and saw
| their value in organizing further thought, it was hard to see
| how they could be implemented. The ideas in "The Emotion
| Machine" feel more implementable.
|
| Minsky: Indeed it was. So, in fact, the new book is the result of
| 15 years of trying to fix this, by replacing the 'bottom-up'
| approach of SoM by the 'top-down' ideas of the Emotion machine.
| suthakamal wrote:
| agree. A lot has changed in the last 20 years, which makes SoM
| much more applicable. I would've agreed in 2004 (and say as
| much in the essay).
___________________________________________________________________
(page generated 2025-06-18 23:00 UTC)