[HN Gopher] Draft Paper Discovered in Which Joseph Weizenbaum En...
___________________________________________________________________
Draft Paper Discovered in Which Joseph Weizenbaum Envisions ELIZA's
Applications
Author : abrax3141
Score : 80 points
Date : 2024-03-19 04:27 UTC (18 hours ago)
(HTM) web link (sites.google.com)
(TXT) w3m dump (sites.google.com)
| adamgordonbell wrote:
| I interviewed Jeff Shrager. He is one of the people behind this
| site and the effort to investigate Eliza.
|
| They do a lot of interesting work and the history of Eliza is
| more complicated than you would guess.
|
| (The interview was before ChatGPT was a big thing.) If you
| don't mind the plug:
|
| https://corecursive.com/eliza-with-jeff-shrager/
| MR4D wrote:
| I'd highly recommend that podcast as well. I've listened to
| that episode twice. It was really fascinating.
|
| EDIT - meant to add that the summary style of your interviews
| is great. Keep up the good work!
| alexvoda wrote:
| Same, another recommendation for the podcast.
| whyenot wrote:
| It's amazing to me that the "chat bot" interface of ELIZA,
| developed in the mid-1960s, really isn't very different from
| that of ChatGPT 4, 60 years later.
| hprotagonist wrote:
| I also recommend "Computer Power and Human Reason", his 1976
| treatise. It really presages all of the last few years of
| AIspew.
| vincent-manis wrote:
| To Weizenbaum's point, back in the 80s I used to cart a Teletype
| and an acoustic coupler to high schools to talk about computer
| science. The big demo was Eliza. Even after I explained that it
| was a simplistic program (getting it to say "Perhaps we
| fragisticulate each other in your dreams"), and showed the
| students the scripts it was using, I found students would want to
| have serious conversations with it, of the "please don't look at
| it just now" variety. Seeing that the students and teachers
| consistently missed the point, I stopped using it.
| TMWNN wrote:
| > Seeing that the students and teachers consistently missed the
| point, I stopped using it.
|
| In retrospect you (and everyone else) missed the point. As
| primitive as ELIZA is, it is actually closer to the LLM
| approach than, well, everything else in AI over the past 60
| years.
|
| People don't realize how simple LLMs' code is. We're talking a
| few hundred lines! That's not much longer than ELIZA.
|
| All the magic is in the data those lines process. That's what
| takes up gigabytes of storage and requires many gigabytes of
| memory and GPUs/Apple Silicon to run. (Quite possibly
| representative of how seven pounds of meat and 20 W of power can
| still outperform said gigabytes and silicon.) Had such compute
| power been available, there is no reason to think that
| Weizenbaum or others at the AI Lab could not have built a true
| language model in 1967.
|
| I suspect that, if anything, ELIZA's simplicity caused
| researchers to not pursue it further, as it was obvious that
| true AI would of course require millions of intricate lines of
| code. Thus we got 50 wasted years building ever more-complex
| expert systems (i.e., fancy versions of Twenty Questions, or
| Akinator), or attempts to replicate the human brain in hardware
| (Danny Hillis named his company Thinking Machines for a
| reason). A future history of AI may describe the 50 years
| between ELIZA and "Attention is All You Need" in a chapter
| called "The Route Not Taken".
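The script-driven mechanism the thread describes — keyword rules made of
a decomposition pattern and a reassembly template, with all the "magic"
in the script data rather than the code — can be sketched in a few lines
of Python. The rules below are illustrative stand-ins, not Weizenbaum's
original DOCTOR script:

```python
import re

# A minimal ELIZA-style script: each rule pairs a regex "decomposition"
# pattern with a "reassembly" template. These three rules are made up
# for illustration; the real DOCTOR script was far richer.
RULES = [
    (re.compile(r".*\bI need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r".*\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r".*\bmy (.*)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # stock reply when no keyword matches

def respond(utterance: str) -> str:
    """Return the first matching rule's reassembly, else a stock reply."""
    for pattern, template in RULES:
        m = pattern.match(utterance)
        if m:
            # Fill the captured fragment into the template, trimming
            # trailing punctuation so the reply reads naturally.
            return template.format(m.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I need a holiday"))      # -> Why do you need a holiday?
print(respond("The weather is nice"))   # -> Please go on.
```

The program itself is trivial; everything conversational lives in the
rule table — which is the point the comment above makes about ELIZA
anticipating the code-is-small, data-is-everything shape of LLMs.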
___________________________________________________________________
(page generated 2024-03-19 23:00 UTC)