[HN Gopher] Doug Lenat has died
___________________________________________________________________
Doug Lenat has died
Author : snewman
Score : 339 points
Date : 2023-09-01 17:44 UTC (5 hours ago)
(HTM) web link (garymarcus.substack.com)
(TXT) w3m dump (garymarcus.substack.com)
| brundolf wrote:
| Doug was at times blunt, but he was fundamentally a kind and
| generous person, and he had a dedication to his vision and to the
| people who worked alongside him that has to be admired. He will
| be missed.
|
| I worked at Cycorp (not directly with Doug very often, but it
| wasn't a big office) between 2016 and 2020
|
| An anecdote: during our weekly all-hands lunch in the big
| conference room, he mentioned he was getting a new car (his old
| one was pretty old, but well-kept) and he asked if anybody could
| use the old car. One of the staff raised his hand sheepishly and
| said his daughter was about to start driving. Doug gifted him the
| car on the spot, without a second thought.
|
| He also loved board games, and was in a D&D group with some
| others at the company. I was told he only ever played lawful good
| characters, he didn't know how to do otherwise :)
|
| Happy to answer what questions I can
| zitterbewegung wrote:
| I would expect lawful good because it would be the most
| logical.
| late25 wrote:
| I don't know much about him. What makes you start by saying
| he's blunt?
| brundolf wrote:
| It was a part of his personality, as it is for many people
| who are intelligent and opinionated, and some can mistake
| that for unkindness. But I wanted to emphasize that in his
| case it wasn't.
| late25 wrote:
| Got it. I was merely curious if there were any particular
| stories, rumors, or legends about his bluntness (like there
| are about Linus).
| brundolf wrote:
| No, it was never anything at that level. I would describe
| (pre-reformed) Linus as more than just "blunt"
| Rochus wrote:
| He was a hero of knowledge representation and ontology. A bit odd
| that we learn about his sad passing from a Wikipedia article,
| while at the time of this comment there is still no mention on
| e.g. https://cyc.com/.
| Rochus wrote:
| Thirteen hours later still no mention on the Cycorp website.
| Also the press doesn't seem to notice. Pretty odd.
|
| The post originally pointed to Lenat's Wikipedia page; now it's
| an obituary by Gary Marcus, which seems more appropriate.
| dang wrote:
| Related. Others?
|
| _Cyc_ - https://news.ycombinator.com/item?id=33011596 - Sept
| 2022 (2 comments)
|
| _Why AM and Eurisko Appear to Work (1983) [pdf]_ -
| https://news.ycombinator.com/item?id=28343118 - Aug 2021 (17
| comments)
|
| _Early AI: "Eurisko, the Computer with a Mind of Its Own"
| (1984)_ - https://news.ycombinator.com/item?id=27298167 - May
| 2021 (2 comments)
|
| _Cyc_ - https://news.ycombinator.com/item?id=21781597 - Dec 2019
| (173 comments)
|
| _Some documents on AM and EURISKO_ -
| https://news.ycombinator.com/item?id=18443607 - Nov 2018 (10
| comments)
|
| _One genius's lonely crusade to teach a computer common sense
| (2016)_ - https://news.ycombinator.com/item?id=16510766 - March
| 2018 (1 comment)
|
| _Douglas Lenat's Cyc is now being commercialized_ -
| https://news.ycombinator.com/item?id=11300567 - March 2016 (49
| comments)
|
| _Why AM and Eurisko Appear to Work (1983) [pdf]_ -
| https://news.ycombinator.com/item?id=9750349 - June 2015 (5
| comments)
|
| _Ask HN: Cyc - Whatever happened to its connection to AI?_ -
| https://news.ycombinator.com/item?id=9566015 - May 2015 (3
| comments)
|
| _Eurisko, The Computer With A Mind Of Its Own_ -
| https://news.ycombinator.com/item?id=2111826 - Jan 2011 (9
| comments)
|
| _Open Cyc (open source common sense)_ -
| https://news.ycombinator.com/item?id=1913994 - Nov 2010 (22
| comments)
|
| _Lenat (of Cyc) reviews Wolfram Alpha_ -
| https://news.ycombinator.com/item?id=510579 - March 2009 (16
| comments)
|
| _Eurisko, The Computer With A Mind Of Its Own_ -
| https://news.ycombinator.com/item?id=396796 - Dec 2008 (13
| comments)
|
| _Cycorp, Inc. (Attempt at Common Sense AI)_ -
| https://news.ycombinator.com/item?id=20725 - May 2007 (1 comment)
| symbolicAGI wrote:
| Doug Lenat, RIP. I worked at Cycorp in Austin from 2000-2006.
| Taken from us way too soon, Doug nonetheless had the
| opportunity to help our country advance military and intelligence
| community computer science research.
|
| One day, the rapid advancement of AI via LLMs will slow down and
| attention will again return to logical reasoning and knowledge
| representation as championed by the Cyc Project, Cycorp, its
| cyclists and Dr. Doug Lenat.
|
| Why? If NN inference were so fast, we would compile C programs
| with it instead of using deductive logical inference that is
| executed efficiently by the compiler.
| nextos wrote:
| Exactly. When I hear books such as _Paradigms of AI
| Programming_ are outdated because of LLMs, I disagree. They are
| more current than ever, thanks to LLMs!
|
| Neural and symbolic AI will eventually merge. Symbolic models
| bring much needed efficiency and robustness via regularization.
| optimalsolver wrote:
| The best thing Cycorp could do now is open source its
| accumulated database of logical relations so it can be ingested
| by some monster LLM.
|
| What's the point of all that data collecting dust and
| accomplishing not much of anything?
| halflings wrote:
| > If NN inference were so fast, we would compile C programs
| with it instead of using deductive logical inference that is
| executed efficiently by the compiler.
|
| This is the definition of a strawman. Who is claiming that NN
| inference is always the fastest way to run computation?
|
| Instead of trying to bring down another technology (neural
| networks), how about you focus on making symbolic methods
| usable to solve real-world problems; e.g. how can I build a
| robust email spam detection system with symbolic methods?
| symbolicAGI wrote:
| The point is that symbolic computation as performed by Cycorp
| was held back by the need to train the Knowledge Base by hand
| in a supervised manner. NNs and LLMs in particular became
| ascendant when unsupervised training was employed at scale.
|
| Perhaps LLMs can automate in large part the manual operations
| of building a future symbolic knowledge base organized by a
| universal upper ontology. Considering the amazing emergent
| features of sufficiently-large LLMs, what could emerge from a
| sufficiently large, reflective symbolic knowledge base?
| detourdog wrote:
| That's what I have settled on: the need for a symbolic library
| of standard hardware circuits.
|
| I'm making a sloppy version that will contain all the symbols
| needed to run a multi-unit building.
| nikolay wrote:
| Even though he was a controversial figure, he was one of my
| heroes. Getting excited about Eurisko in the '80s and '90s was a
| big driver for me at the time! Rest in peace, dear computer
| pioneer!
| brador wrote:
| Anyone know how he died? I can't find any information about it
| but someone mentioned heart attack on Reddit?
| detourdog wrote:
| I still intend to integrate OpenCyc.
| mindcrime wrote:
| If anybody wants to hear more about Doug's work and ideas, here
| is a (fairly long) interview with Doug by Lex Fridman, from last
| year.
|
| https://www.youtube.com/watch?v=3wMKoSRbGVs&pp=ygUabGV4IGZya...
| mistrial9 wrote:
| reading the bio of Lex Fridman on wikipedia.. "Learning of
| Identity from Behavioral Biometrics for Active Authentication"
| what?
| modeless wrote:
| Makes sense to me. He basically made a system that detects
| when someone else is using your computer by e.g. comparing
| patterns of mouse and keyboard input to your typical usage.
| It would be useful in a situation such as if you left your
| screen unlocked and a coworker sat down at your desk to prank
| you by sending an email from you to your boss (or worse,
| obviously). The computer would lock itself as soon as it
| suspects someone else is using it instead of you.
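|
| A rough sketch of one such signal in Python (the thresholds
| and features here are invented; real systems combine many
| more signals than keystroke timing):
|
|     # Flag a session whose inter-keystroke timing deviates
|     # from the owner's profile; lock the screen when it does.
|     from statistics import mean, stdev
|
|     def profile(intervals_ms):
|         # Owner's baseline: mean and spread of key intervals.
|         return mean(intervals_ms), stdev(intervals_ms)
|
|     def looks_like_owner(session_ms, owner, z_limit=3.0):
|         mu, sigma = owner
|         z = abs(mean(session_ms) - mu) / sigma
|         return z < z_limit  # False -> lock the screen
|
|     owner = profile([110, 95, 130, 105, 120, 98, 115])
|     print(looks_like_owner([112, 101, 125, 108], owner))  # True
|     print(looks_like_owner([260, 240, 280, 255], owner))  # False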
| dang wrote:
| Please don't go offtopic in predictable/nasty ways - more at
| https://news.ycombinator.com/item?id=37355320.
| lionkor wrote:
| Like anything reasonably complex, it means little to you if
| it's not your field - that said, I have no clue either.
| lern_too_spel wrote:
| Just search for Doug Lenat on YouTube. I can guarantee that any
| one of the other videos will be better than a Fridman
| interview.
| dang wrote:
| Hey you guys, please don't go offtopic like this. Whimsical
| offtopicness can be ok, but offtopicness in the intersection
| of:
|
| (1) generic (e.g. swerves the thread toward larger/general
| topic rather than something more specific);
|
| (2) flamey (e.g. provocative on a divisive issue); and
|
| (3) predictable (e.g. has been hashed so many times already
| that comments will likely fall in a few already-tiresome hash
| buckets)
|
| - is the bad kind of offtopicness: the kind that brings
| little new information and eventually lots of nastiness.
| We're trying for the opposite here--lots of information and
| little nastiness.
|
| https://news.ycombinator.com/newsguidelines.html
| mindcrime wrote:
| Only about two of them will be more contemporary though, and
| both are academic talks, not interviews. I get that you don't
| like Lex Fridman, which is a perfectly fine position to hold.
| But there is something to be said for seeing two people just
| sit and talk, as opposed to seeing somebody monologue for an
| hour. The Fridman interview with Doug is, IMO, absolutely
| worth watching. And so are all of the other videos by / about
| Doug. _shrug_
| yarpen_z wrote:
| I don't know this particular interview, but it's not
| necessarily about not liking Lex. I listened to many
| episodes of his podcast and while I appreciate the
| selection of guests from the CS domain, many of these
| interviews aren't very good. They are not completely
| terrible but they should have been so much better: Lex had
| so many passionate, educated, experienced and gifted
| guests, yet his ability to ask interesting and focused
| questions is not on the same level.
| pengaru wrote:
| He's a shitty interviewer. Often doesn't even engage with
| his guest's responses, as if he's not even listening to
| what they're saying, instead moving mechanically to his
| next bullet-point. Which is completely ridiculous for
| what's supposed to be a long-format conversational
| interview.
|
| The best episodes are ones where the guest drives the
| interview and has a lot of interesting things to say.
| Fridman's just useful for attracting interesting domain
| experts somewhere we can hear them speak for hours on
| end.
|
| The Jim Keller episodes are excellent IMO, despite
| Fridman. Guests like Keller and Carmack don't need a good
| interviewer for it to be a worthwhile listen.
| Jun8 wrote:
| Ahh, another one of the old guard has moved on. Here are two
| excerpts from the book _AI: The Tumultuous History Of The Search
| For Artificial Intelligence_ (a fantastic read of the early days
| of AI) to remember him by;
|
| "Lenat found out about computers in a a manner typical of his
| entrepreneurial spirit. As a high school student in Philadelphia,
| working for $1.00 an hour to clean the cages of experimental
| animals, he discovered that another student was earning $1.50 to
| program the institution's minicomputer. Finding this occupation
| more to his liking, he taught himself programming over a weekend
| and squeezed his competitor out of the job by offering to work
| for fifty cents an hour less. A few years later, Lenat was
| programming Automated Mathematician (AM, for short) as a doctoral
| thesis project at the Stanford AI Laboratory." p. 178
|
| And here's an account of an early victory for AI in gaming
| against humans by Lenat's EURISKO system
| (https://en.wikipedia.org/wiki/Eurisko):
|
| "Ever the achiever, Lenat was looking for a more dramatic way to
| prove teh capabilities of his creation. The identified the
| occasion space-war game called Traveler TCS, then quite popular
| with the public Lenat wanted to reach. The idea was for each
| player to design a fleet of space battleships according to a
| thick, hundred-page set of rules. Within a budget limit of one
| trillion galactic credits, one could adjust such parameters as
| the size, speed, armor thickness, autonomy and armament of each
| ship: about fifty adjustments per ship were needed. Since the
| fleet size could reach a hundred ships, the game thus offered
| ample room for ingenuity in spite of the anticlimactic character
| of the battles. These were fought by throwing dice following
| complex tables based on probability of survival of each ship
| according to its design. The winner of the yearly national
| championship was commissioned intergalactic admiral and received
| title to a planet of his or her choice outside the solar system.
|
| Several months before the 1981 competition, Lenat fed into
| EURISKO 146 Traveller concepts, ranging from the nature of games
| in general to the technicalities of meson guns. He then
| instructed the program to develop heuristics for making winning
| war-fleet designs. The now familiar routine of nightly computer
| runs turned into a merciless Darwinian contest: Lenat and EURISKO
| together designed fleets that battled each other. Designs were
| evaluated by how well they won battles, and heuristics by how
| well they designed fleets. This rating method required several
| battles per design, and several designs per heuristic, which
| amounted to a lot of battles: ten thousand in all, fought over
| two thousand hours of computer time.
|
| To participants in the national championship in San Mateo,
| California, the resulting fleet of ninety-six small,
| heavily armored ships looked ludicrous. Accepted wisdom dictated
| fleets of about twenty behemoth ships, and many couldn't help
| laughing. When engagements started, they found out that the weird
| armada held more than met the eye. One interesting ace up Lenat's
| sleeve was a small ship so fast as to be almost unstoppable,
| which guaranteed at least a draw. EURISKO had conceived of it
| through the "look for extreme cases" heuristic (which had
| mutated, incidentally, into "look for almost extreme
| cases")." p. 182
|
| If you're a young person working in AI, by which I mean you're
| younger than 30, and if you have not already done so, you should
| read about AI history in the three decades from the '60s to the
| '90s.
| brundolf wrote:
| I may be getting this wrong, but I think I remember hearing
| that his auto-generated fleets won Traveller so entirely,
| several years in a row, that they had to shut down the entire
| competition because it had been broken
|
| Edit: Fixed wrong name for the competition
| mindcrime wrote:
| I think you mean "EURISKO won the Traveller championship so
| entirely..."
|
| In which case, yes, something like that did happen. Per the
| Wikipedia page:
|
| _Lenat and Eurisko gained notoriety by submitting the
| winning fleet (a large number of stationary, lightly-armored
| ships with many small weapons)[3] to the United States
| Traveller TCS national championship in 1981, forcing
| extensive changes to the game's rules. However, Eurisko won
| again in 1982 when the program discovered that the rules
| permitted the program to destroy its own ships, permitting it
| to continue to use much the same strategy.[3] Tournament
| officials announced that if Eurisko won another championship
| the competition would be abolished; Lenat retired Eurisko
| from the game.[4] The Traveller TCS wins brought Lenat to the
| attention of DARPA,[5] which has funded much of his
| subsequent work._
| brundolf wrote:
| Whoops yes :)
| bpiche wrote:
| Worked with their ontologists for a couple of years. Someone once
| told me that they employed more philosophers per capita than any
| other software company. A dubious distinction, maybe. But it
| describes the culture of inquisitiveness there pretty well too
| nyx_land wrote:
| Weird, I interviewed with him summer 2021 hoping to be able to
| land an ontologist job at Cycorp. It went spectacularly badly
| because it turned out I really needed to brush up more on my
| formal logic skills, but I was surprised to even get an
| interview, let alone with the man himself. He still encouraged me
| to work on reviewing logic and to apply again in the future, but I
| stopped seeing listings at Cycorp for ontologists and kept
| putting off returning to that aspiration, thinking Cycorp had been
| around long enough that there was no rush. Memento mori.
| snowmaker wrote:
| I interviewed with Doug Lenat when I was a 17-year-old high school
| student, and he hired me as a summer intern for Cycorp - my first
| actual programming job.
|
| That internship was life-changing for me, and I'll always be
| grateful to him for taking a wild bet on literally a kid.
|
| Doug was a brilliant computer scientist, and a pioneer of
| artificial intelligence. Though I was very junior at Cycorp, it
| was a small company so I sat in many meetings with him. It was
| obvious that he understood every detail of how technology worked,
| and was extremely smart.
|
| Cycorp was 30 years ahead of its time and never actually worked.
| For those who don't know, it was essentially the first OpenAI -
| the first large-scale commercial effort to create general
| artificial intelligence.
|
| I learned a lot from Doug about how to be incredibly ambitious,
| and how to not give up. Doug worked on Cycorp for multiple
| decades. It never really took off, but he managed to keep funding
| it and keep hiring great people so he could keep plugging away at
| the problem. I know very few people who have stuck with an idea
| for so long.
| xNeil wrote:
| That sounds awesome! Was coming back to work at Cycorp
| permanently ever in the cards for you? Or did you think the
| internship was nice but you didn't want a career in the field?
|
| Also - what exactly did you do in the internship as a
| 17-year-old - what skills did you have?
| varjag wrote:
| Never met the guy but his work was one of my biggest inspirations
| in computing.
|
| I feel it's appropriate to link a blog post of mine from 2018.
| It's a quick recap of Lenat's work on the trajectory that brought
| him towards Cyc, with links to the papers.
|
| http://blog.funcall.org//lisp/2018/11/03/am-eurisko-lenat-do...
| hu3 wrote:
| The end of the article [1] reminds me to publish more of what I
| make and think. I'm no Doug Lenat and my content would probably
| just add noise to the internet but still, don't let your ideas
| die with you or become controlled by some board of stakeholders.
| I'm also no open-source zealot but open-source is a nice way to
| let others continue what you started.
|
| [1]
|
| "Over the last year, Doug and I tried to write a long, complex
| paper that we never got to finish. Cyc was both awesome in its
| scope, and unwieldy in its implementation. The biggest problem
| with Cyc from an academic perspective is that it's proprietary.
|
| To help more people understand it, I tried to bring out of him
| what lessons he learned from Cyc, for a future generation of
| researchers to use. Why did it work as well as it did when it
| did, why did it fail when it did, what was hard to implement, and
| what did he wish that he had done differently? ...
|
| ...One of his last emails to me, about six weeks ago, was an
| entreaty to get the paper out ASAP; on July 31, after a nerve-
| wracking false-start, it came out, on arXiv, Getting from
| Generative AI to Trustworthy AI: What LLMs might learn from Cyc
| (https://arxiv.org/ftp/arxiv/papers/2308/2308.04445.pdf).
|
| The brief article is simultaneously a review of what Cyc tried to
| do, an encapsulation of what we should expect from genuine
| artificial intelligence, and a call for reconciliation between
| the deep symbolic tradition that he worked in with modern Large
| Language Models."
| mrcwinn wrote:
| Here's one for you, Doug. My condolences.
|
| https://chat.openai.com/share/dbd59c92-696b-45d3-8097-c09a23...
| toomuchtodo wrote:
| https://en.wikipedia.org/wiki/Douglas_Lenat
|
| https://web.archive.org/web/20230901183515/https://garymarcu...
|
| https://archive.ph/icb92
| eigenvalue wrote:
| I have always thought of Cyc as being the AI equivalent of
| Russell and Whitehead's Principia--something that is technically
| ambitious and interesting in its own right, but ultimately just
| the wrong approach that will never really work well on a
| standalone basis, no matter how long you work on it or keep
| adding more and more rules. That being said, I do think it could
| prove to be useful for testing and teaching neural net models.
|
| In any case, at the time Lenat started working on Cyc, we didn't
| really have the compute required to do NN models at the level
| where they start exhibiting what most would call "common sense
| reasoning," so it makes total sense why he started out on that
| path. RIP.
| tunesmith wrote:
| It's fun reading through the paper he links just because I've
| always been enamored by taking a lot of those principles that
| they believe should be internal to a computer, and instead making
| them external to a community.
|
| In other words, I think it would be so highly useful to have a
| browseable corpus of arguments and conclusions, where people
| could collaborate on them and perhaps disagree with portions of
| the argument graph, adding to it and enriching it over time, so
| other people could read and perhaps adopt the same reasoning.
|
| I play around with ideas with this site I occasionally work on,
| http://concludia.org/ - really more an excuse at this point to
| mess around with the concept and also get better at Akka (Pekko)
| programming. At some point I'll add user accounts and editable
| arguments and make it a real website.
| frenchwhisker wrote:
| I've had the same idea (er, came to the same conclusion) but
| never acted on it. Awesome to see that someone has! Great name
| too.
|
| I thought of it while daydreaming about how to converge public
| opinion in a nation with major political polarization. It'd be
| a sort of structured public debate forum and people could
| better see exactly where in the hierarchy they disagreed and,
| perhaps more importantly, how much they in fact agreed upon.
| high_priest wrote:
| I don't think this is the goal of your project, so let me ask
| this way. Is there any similar project, where we provide
| truths and fallacies, combine them with logical arguments and
| have a language model generate sets of probable conclusions?
|
| Would be great for brainstorming.
| tomodachi94 wrote:
| So basically a multi-person Zettelkasten? The idea with a
| Zettelkasten (zk for short) is that each note is a singular
| idea, concept, or argument that is all linked together.
| Arguments can link to their evidence, concepts can link to
| other related concepts, and so on.
|
| https://en.m.wikipedia.org/wiki/Zettelkasten
| tunesmith wrote:
| Sort of, except that it also tracks truth propagation - one
| person disagreeing would inform others that that portion of
| the graph is contested. So the graph has behavior. And the
| links have logical meaning, beyond just "is related to" - it
| respects boolean logic.
|
| You can see some of the explanation at
| http://concludia.org/instructions .
| quickthrower2 wrote:
| You would need a highly disciplined and motivated set of
| people in the team. I have been on courses where teams do
| this on pen/paper and it is a real skill and it is _all_
| you do for days. Forget anything else like programming,
| finishing work, etc.
| couchand wrote:
| > it respects boolean logic.
|
| Intuitionist or classical?
| tunesmith wrote:
| Intuitionist. Truth is provability; the propagation model
| is basically digital logic. If you mark a premise to a
| conclusion false, the conclusion is then marked "false"
| but it really just means "it is false that it is proven";
| vitiated. Might still be true, just needs further work.
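|
| A minimal sketch of that propagation model (hypothetical
| Python with invented names, not the actual concludia.org
| implementation):
|
|     # "Truth is provability": a contested node is merely
|     # unproven (vitiated), not refuted.
|     class Node:
|         def __init__(self, name, premises=None):
|             self.name = name
|             self.premises = premises or []  # AND of premises
|             self.contested = False
|
|         def proven(self):
|             if self.contested:
|                 return False  # "false that it is proven"
|             # Proven only if every supporting premise is proven.
|             return all(p.proven() for p in self.premises)
|
|     a, b = Node("a"), Node("b")
|     c = Node("c", premises=[a, b])
|     print(c.proven())   # True: nothing contested yet
|     b.contested = True  # one reader disputes premise b
|     print(c.proven())   # False: c needs further work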
| dredmorbius wrote:
| Cyc ("Syke") is one of those projects I've long found vaguely
| fascinating though I've never had the time / spoons to look into
| it significantly. It's an AI project based on a comprehensive
| ontology and knowledgebase.
|
| Wikipedia's overview: <https://en.wikipedia.org/wiki/Cyc>
|
| Project / company homepage: <https://cyc.com/>
| jfengel wrote:
| I worked with Cyc. It was an impressive attempt to do the thing
| that it does, but it didn't work out. It was the last great
| attempt to do AI in the "neat" fashion, and its failure helped
| bring about the current, wildly successful "scruffy" approaches
| to AI.
|
| Its failure is no shade against Doug. Somebody had to try it,
| and I'm glad it was one of the brightest guys around. I think
| he clung on to it long after it was clear that it wasn't going
| to work out, but breakthroughs do happen. (The current round of
| machine learning itself is a revival of a technique that had
| been abandoned, but people who stuck with it anyway discovered
| the tricks that made it go.)
| Kuinox wrote:
| Why didn't it work out?
| jfengel wrote:
| I don't know if there's really an answer to that, beyond
| noting that it never turned out to be more than the sum of
| its parts. It was a large ontology and a hefty logic
| engine. You put in queries and you got back answers.
|
| The goal was that in a decade it would become self-
| sustaining. It would have enough knowledge that it could
| start reading natural language. And it just... didn't.
|
| Contrast it with LLMs and diffusion and such. They make
| stupid, asinine mistakes -- real howlers, because they
| don't understand anything at all about the world. If it
| could draw, Cyc would never draw a human with 7 fingers on
| each hand, because it knows that most humans have 5. (It
| had a decent-ish ontology of human anatomy which could
| handle injuries and birth defects, but would default reason
| over the normal case.) I often see ChatGPT stumped by
| simple variations of brain teasers, and Cyc wouldn't make
| those mistakes -- once you'd translated them into CycL (its
| language, because it couldn't read natural language in any
| meaningful way).
|
| But those same models do a scary job of passing the Turing
| Test. Nobody would ever have thought to try it on Cyc. It
| was never anywhere close.
|
| Philosophically I can't say why Cyc never developed "magic"
| and LLMs (seemingly) do. And I'm still not convinced that
| they're on the right path, though they actually have some
| legitimate usages right now. I tried to find uses for Cyc
| in exactly the opposite direction, guaranteeing data
| quality, but it turned out nobody really wanted that.
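|
| A crude illustration of reasoning over the normal case by
| default, sketched in Python rather than CycL (all names here
| are invented):
|
|     # Defaults hold for a kind unless a specific assertion
|     # (injury, birth defect, ...) overrides them.
|     DEFAULTS = {"Human": {"fingersPerHand": 5}}
|     EXCEPTIONS = {("patient17", "fingersPerHand"): 6}
|
|     def value_of(individual, kind, attribute):
|         # A specific assertion beats the default for the kind.
|         if (individual, attribute) in EXCEPTIONS:
|             return EXCEPTIONS[(individual, attribute)]
|         return DEFAULTS[kind][attribute]
|
|     print(value_of("alice", "Human", "fingersPerHand"))      # 5
|     print(value_of("patient17", "Human", "fingersPerHand"))  # 6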
| dredmorbius wrote:
| One sense that I've had of LLM / generative AIs is that
| they lack "bones", in the sense that there's no
| underlying structure to which they adhere, only outward
| appearances which are statistically correlated (using
| fantastically complex statistical correlation maps).
|
| Cyc, on the other hand, lacks flesh and skin. It's _all_
| skeleton and can generate facts but not embellish them
| into narratives.
|
| The best human writing has _both_, much as artists
| (traditional painters, sculptors, and more recently
| computer animators) have a _skeleton_ (outline, index
| cards, Zettelkasten, wireframe) to which flesh, skin, and
| fur are attached. LLM generative AIs are _too_ plastic,
| Cyc is _insufficiently_ plastic.
|
| I suspect there's some sort of a middle path between the
| two. Though that path and its destination also
| increasingly terrify me.
| ushakov wrote:
| Sounds similar to WolframAlpha?
| bpiche wrote:
| Had? Cycorp is still around and deploying their software.
| jfoutz wrote:
| Take a look at https://en.m.wikipedia.org/wiki/SHRDLU
|
| Cyc is sort of like that, but for everything. Not just a
| small limited world. I believe it didn't work out because
| it's really hard.
| ansible wrote:
| If we are to develop understandable AGI, I think that
| some kind of (mathematically correct) probabilistic
| reasoning based on a symbolic knowledge base is the way
| to go. You would probably need to have some version of a
| Neural Net on the front end to make it useful though.
|
| So you'd use the NN to recognize that the thing in front
| of the camera is a cat, and that would be fed into the
| symbolic knowledge base for further reasoning.
|
| The knowledge base will contain facts like the cat is
| likely to "meow" at some point, especially if it wants
| attention. Based on the relevant context, the knowledge
| base would also know that the cat is unlikely to be able
| to talk, unless it is a cat in a work of fiction, for
| example.
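|
| A toy sketch of that pipeline in Python (the neural net is
| stubbed out and every name is invented):
|
|     # NN front end recognizes the thing; the symbolic KB then
|     # reasons probabilistically about what it can do.
|     def neural_classifier(image):
|         return ("Cat", 0.97)  # stand-in for a vision model
|
|     KB = {
|         # (subject, predicate) -> (value, probability)
|         ("Cat", "canMeow"): (True, 0.95),
|         ("Cat", "canTalk"): (False, 0.999),  # barring fiction
|     }
|
|     def infer(image, predicate):
|         label, p_label = neural_classifier(image)
|         value, p_rule = KB.get((label, predicate), (None, 0.0))
|         # Chain perception confidence with the rule probability.
|         return value, p_label * p_rule
|
|     print(infer("photo.jpg", "canMeow"))  # (True, ~0.92)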
| DonHopkins wrote:
| As Roger Schank defined the terms in the 70's, "Neat" refers
| to using a single formal paradigm (logic, math, neural
| networks, LLMs), like physics. "Scruffy" refers to
| combining many different algorithms and approaches (symbolic
| manipulation, hand-coded logic, knowledge engineering,
| CYC), like biology.
|
| I believe both approaches are useful and can be combined and
| layered and fed back into each other, to reinforce and
| complement each other's advantages and transcend each
| other's limitations.
|
| Kind of like how Hailey and Justin Bieber make the perfect
| couple: ;)
|
| https://edition.cnn.com/style/hailey-justin-bieber-couples-f...
|
| Marvin L Minsky: Logical Versus Analogical or Symbolic Versus
| Connectionist or Neat Versus Scruffy
|
| https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
|
| https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
|
| "We should take our cue from biology rather than physics..."
| -Marvin Minsky
|
| >To get around these limitations, we must develop systems
| that combine the expressiveness and procedural versatility of
| symbolic systems with the fuzziness and adaptiveness of
| connectionist representations. Why has there been so little
| work on synthesizing these techniques? I suspect that it is
| because both of these AI communities suffer from a common
| cultural-philosophical disposition: They would like to
| explain intelligence in the image of what was successful in
| physics--by minimizing the amount and variety of its
| assumptions. But this seems to be a wrong ideal. We should
| take our cue from biology rather than physics because what we
| call thinking does not directly emerge from a few fundamental
| principles of wave-function symmetry and exclusion rules.
| Mental activities are not the sort of unitary or elementary
| phenomenon that can be described by a few mathematical
| operations on logical axioms. Instead, the functions
| performed by the brain are the products of the work of
| thousands of different, specialized subsystems, the intricate
| product of hundreds of millions of years of biological
| evolution. We cannot hope to understand such an organization
| by emulating the techniques of those particle physicists who
| search for the simplest possible unifying conceptions.
| Constructing a mind is simply a different kind of problem--
| how to synthesize organizational systems that can support a
| large enough diversity of different schemes yet enable them
| to work together to exploit one another's abilities.
|
| https://en.wikipedia.org/wiki/Neats_and_scruffies
|
| >In the history of artificial intelligence, neat and scruffy
| are two contrasting approaches to artificial intelligence
| (AI) research. The distinction was made in the 70s and was a
| subject of discussion until the middle 80s.[1][2][3]
|
| >"Neats" use algorithms based on a single formal paradigms,
| such as logic, mathematical optimization or neural networks.
| Neats verify their programs are correct with theorems and
| mathematical rigor. Neat researchers and analysts tend to
| express the hope that this single formal paradigm can be
| extended and improved to achieve general intelligence and
| superintelligence.
|
| >"Scruffies" use any number of different algorithms and
| methods to achieve intelligent behavior. Scruffies rely on
| incremental testing to verify their programs and scruffy
| programming requires large amounts of hand coding or
| knowledge engineering. Scruffies have argued that general
| intelligence can only be implemented by solving a large
| number of essentially unrelated problems, and that there is
| no magic bullet that will allow programs to develop general
| intelligence autonomously.
|
| >John Brockman compares the neat approach to physics, in that
| it uses simple mathematical models as its foundation. The
| scruffy approach is more like biology, where much of the work
| involves studying and categorizing diverse phenomena.[a]
|
| [...]
|
| >Modern AI as both neat and scruffy
|
| >New statistical and mathematical approaches to AI were
| developed in the 1990s, using highly developed formalisms
| such as mathematical optimization and neural networks. Pamela
| McCorduck wrote that "As I write, AI enjoys a Neat hegemony,
| people who believe that machine intelligence, at least, is
| best expressed in logical, even mathematical terms."[6] This
| general trend towards more formal methods in AI was described
| as "the victory of the neats" by Peter Norvig and Stuart
| Russell in 2003.[18]
|
| >However, by 2021, Russell and Norvig had changed their
| minds.[19] Deep learning networks and machine learning in
| general require extensive fine tuning -- they must be
| iteratively tested until they begin to show the desired
| behavior. This is a scruffy methodology.
| at_a_remove wrote:
| Neats and scruffies also showed up in The X-Files in their
| first AI episode.
| dredmorbius wrote:
| "Neat" vs. "scruffy" syncs well with my general take on Cyc.
| Thanks for that.
|
| I _do_ suspect that well-curated and hand-tuned corpora,
| including possibly Cyc's, _are_ of significant use to LLM
| AI. And will likely be more so as the feedback / autophagy
| problem worsens.
| pwillia7 wrote:
| Wow -- I hadn't thought of this but makes total sense.
| We'll need giant definitely-human-curated databases of
| information for AIs to consume as more information becomes
| generated by the AIs.
| dredmorbius wrote:
| There's a long history of informational classification,
| going back to Aristotle and earlier ("Categories"). See
| especially Melville Dewey, the US Library of Congress
| Classification, and the work of Paul Otlet. All are based
| on _exogenous classification_, that is, subjects and/or
| works classification catalogues which are _independent_
| of the works classified.
|
| Natural-language content-based classification as by
| Google and Web text-based search relies effectively on
| documents self-descriptions (that is, their content
| itself) to classify and search works, though a ranking
| scheme (e.g., PageRank) is typically layered on top of
| that. What distinguished early Google from prior full-
| text search was that the latter had _no_ ranking
| criteria, leading to keyword stuffing. An alternative
| approach was Yahoo, originally Yet Another Hierarchical
| Officious Oracle, which was a _curated and ontological_
| classification of websites. This was already proving
| infeasible by 1997/98 _as a whole_, though as training
| data for machine classification it might prove useful.
| rvbissell wrote:
| Why not combine the two approaches? A bicameral mind, of
| sorts?
| jfengel wrote:
| I'm sure somebody somewhere is working on it. I've already
| seen articles teaching LLMs to offload math problems onto a
| separate module, rather than trying to solve them via the
| murk of the neural network.
|
| I suppose you'd architect it as a layer. It wants to say
| something, and the ontology layer says, "No, that's stupid,
| say something else". The ontology layer can recognize
| ontology-like statements and use them to build and evolve
| the ontology.
|
| It would be even more interesting built into the
| visual/image models.
|
| I have no idea if that's any kind of real progress, or if
| it's merely filtering out the dumb stuff. A good service,
| to be sure, but still not "AGI", whatever the hell that
| turns out to be.
|
| Unless it turns out to be the missing element that puts it
| over the top. If I had any idea I wouldn't have been
| working with Cyc in the first place.
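|
| A toy version of that layering in Python (both components
| are stubs here; this only illustrates the architecture, not
| any real system):
|
|     def generator(prompt, attempt):
|         # Stand-in for an LLM proposing candidate statements.
|         candidates = ["humans have 7 fingers per hand",
|                       "humans have 5 fingers per hand"]
|         return candidates[min(attempt, len(candidates) - 1)]
|
|     def ontology_ok(statement):
|         # Stand-in for a symbolic checker with one constraint.
|         return "7 fingers" not in statement
|
|     def answer(prompt, max_attempts=3):
|         for attempt in range(max_attempts):
|             s = generator(prompt, attempt)
|             if ontology_ok(s):
|                 return s  # the ontology layer accepts
|             # else: "No, that's stupid, say something else"
|         return "no consistent answer found"
|
|     print(answer("describe a human hand"))  # accepted on try 2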
| mindcrime wrote:
| There are absolutely people working on this concept. In
| fact, the two day long "Neuro-Symbolic AI Summer School
| 2023"[1] just concluded earlier this week. It was two days
| of hearing about cutting edge research at the intersection
| of "neural" approaches (taking a big-tent view where that
| included most probabilistic approaches) and "symbolic" (eg,
| "logic based") approaches. And while this approach might
| not be _the_ contemporary mainstream approach, there were
| some heavy hitters presenting, including the likes of
| Leslie Valiant and Yoshua Bengio.
|
| [1]: https://neurosymbolic.github.io/nsss2023/
| sanderjd wrote:
| I'm so looking forward to the next swing of the pendulum back
| to "neat", incorporating all the progress that has been made
| on "scruffy" during this current turn of the wheel.
| DonHopkins wrote:
| The GP had the terms "neat" and "scruffy" reversed. CYC is
| scruffy, and neural nets are neat. See my sibling post
| citing Roger Schank who coined the terms, and quoting
| Minsky's paper, "Logical Versus Analogical or Symbolic
| Versus Connectionist or Neat Versus Scruffy" and the "Neats
| and Scruffies" wikipedia page.
|
| https://news.ycombinator.com/item?id=37354564
| kevin_thibedeau wrote:
| Definitely would be nice to have a ChatGPT that could
| reference an ontology to fact check itself.
| at_a_remove wrote:
| I've often thought that Cyc had an enormous value as some kind of
| component for AI, a "baseline truth" about the universe (to the
| degree that we understand it and have "explained" our
| understanding to Cyc in terms of its frames). AM (no relation to
| any need for screaming) was a taste of the AI dream.
| optimalsolver wrote:
| >I've often thought that Cyc had an enormous value as some kind
| of component for AI
|
| Same. I wonder if training an LLM on the database would make it
| more "grounded"? We'll probably never know as Cycorp will keep
| the data locked away in their vaults forever. For what purpose?
| Probably even they don't know.
|
| >AM (no relation to any need for screaming)
|
| heh.
| ftxbro wrote:
| Here's a 2016 Wired article about Doug Lenat, the guy who made
| Eurisko and CYC:
| https://www.wired.com/2016/03/doug-lenat-artificial-intellig...
| pinewurst wrote:
| How about a black bar for Doug?
| headhasthoughts wrote:
| Why? He shared little with the wider community, contributed to
| mass surveillance with Cyc's government collaborations, and
| hasn't really done anything of note.
|
| I don't dislike Lenat, but he doesn't fit the commercial value
| of people who get black bars, he doesn't fit the ideological
| one, and he doesn't fit the community-benefit one.
| rvz wrote:
| Not even Warnock got a black bar when one was asked for as a
| mark of respect: [0]
|
| I guess the black bar really is an ideological thing. Rather
| than being supposedly a 'mark of respect'.
|
| Regardless, RIP Doug.
|
| [0] https://news.ycombinator.com/item?id=37197852
| acqbu wrote:
| Wow, that is really mean!
| dang wrote:
| Wouldn't it also be a mark of respect to check, before
| saying something that mean, whether it's true or not?
|
| https://web.archive.org/web/20230821003655/https://news.ycom...
| [deleted]
| pinewurst wrote:
| Why do people have to have 'commercial value' to get black
| bars? Why do people have to pass the ideological police? Why
| isn't serving as a visible advocate of a certain logical
| model enough?
|
| I think my bias comes from having started my career in AI on
| the inference side and having (perhaps not so much long term
| :) seen Cyc as a shining city on a hill. Lenat certainly
| established that logical model even if we've since gone onto
| other things.
| vkou wrote:
| I believe the parent poster claims that a black bar should
| meet either a commercial, hacker-cultural, _or_ open-source
| contribution one.
| junon wrote:
| I think you don't understand the meaning of the black bar if
| "commercial value" is one of the metrics.
| vkou wrote:
| [flagged]
| mdp2021 wrote:
| > _by which criteria_
|
| Historic value.
| vkou wrote:
| And _which_ category of important-enough-to-be-historic
| contributions has he made?
| skyyler wrote:
| Take a moment to reflect on what you're doing right now.
|
| You're turning a celebration of life for a _very_
| recently departed figure into a _pissing contest_.
|
| Extremely distasteful.
| vkou wrote:
| I think you're misunderstanding the direction and intent
| of this subthread.
|
| You're right that talking about Jobs is off-topic,
| though.
| sgt101 wrote:
| Didn't he:
|
| - invent case-based reasoning
|
| - build Eurisko and AM
|
| - write a discipline defining paper ("Why AM and Eurisko
| appear to work")
|
| - undertake an ambitious but ultimately futile high risk
| research gamble with Cyc?
| steve_adams_86 wrote:
| While futile from a personal and business aspect, it's
| certainly valuable and useful otherwise. Maybe that's
| implied here as you're listing contributions, but I wanted
| to emphasize that it wasn't a waste outside of that narrow
| band of futility.
| zozbot234 wrote:
| Case-based reasoning is VERY old. It shows up prominently
| in the Catholic tradition of practical ethics, drawing on
| Aristotelian thought. Of course in a more informal sense,
| people have been reasoning on a case-by-case basis since
| time immemorial.
| EdwardCoffin wrote:
| I got a lot of value out of some of the papers he wrote, and
| what bits of _Building Large Knowledge-Based Systems_ I
| managed to read.
| ftxbro wrote:
| he is the patron hacker of players who use computers to break
| board games or war games
| toomuchtodo wrote:
| Consider giving more grace. Life is short, and kindness is
| free.
| jonahbenton wrote:
| Oh, so sorry to hear that. Good summary of his work - the Cyc
| project - on the Twitter thread. Had missed that last paper -
| with Gary Marcus - on Cyc and LLMs.
| mrmincent wrote:
| Sad to hear of his passing, I remember building my uni project
| around OpenCyc in my one "Intelligent Systems" class many many
| years ago. It was a dismal failure as my ambition far exceeded my
| skills, but it was so enjoyable reading about Cyc and the
| dedicated work Douglas had put in over such a long time.
___________________________________________________________________
(page generated 2023-09-01 23:00 UTC)