https://billwadge.com/2022/11/13/gofai-is-dead-long-live-nf-ai/

GOFAI is dead - long live (NF) AI!

Posted on November 13, 2022 by Bill Wadge

Art is what you can get away with. - Marshall McLuhan

[All the images in this post were produced with generative AI - Midjourney, DALL-E 2, Stable Diffusion. Most are by Paul DelSignore, not by me.]

[image] The Mona Lisa 'by' Picasso

I used to teach the AI course at the University of Victoria - thank God I'm retired. I couldn't have kept up with the breakthroughs in translation, game playing, and especially generative AI. When I taught AI, it was mainly Good Old Fashioned AI (GOFAI). I retired in 2015, just before the death of GOFAI. I dodged a bullet.

I am in awe of NFAI (New-Fangled AI), yet I still don't understand how it works. But I do understand GOFAI, and I'd like to share my awe of NFAI and my understanding of why GOFAI is not awe-full.

Seek and Ye Shall Find

For a long time AI was almost a joke amongst non-AI computer scientists. There was so much hype, but the hyped potential breakthroughs never materialized. One common quip was that AI was actually natural stupidity. Many departments, like my own, basically boycotted the subject, maybe offering only a single introductory course.

The heart of GOFAI is searching - of trees and, more generally, graphs. For many decades the benchmark for tree searching was chess. Generations (literally) of AI researchers followed the program first proposed by Norbert Wiener in the 40s, based on searching the chess game tree. Every ten years AI evangelists would promise that computer chess mastery was only ten years away.

Wiener's idea, described in his pioneering book Cybernetics, was a min/max search of the game tree, resorting to a heuristic to evaluate positions when the search got too deep (a minimal sketch of this kind of search appears at the end of this section). The chess game tree gets big very quickly, and it wasn't until decades later (the late 1990s) that IBM marshalled the horsepower to realize Wiener's dream. They built a special-purpose machine, Deep Blue, capable of examining 100 million positions per second. Deep Blue eventually won first a game, then a whole match, against Garry Kasparov, the world champion.

Deep Blue was the high-water mark of GOFAI and there was no real followup. Deep Blue's successor, Watson, could win at Jeopardy!, but commercial applications never materialized.

AlphaGo and AlphaZero

I was impressed by Deep Blue but wondered about the game of Go (Baduk, Wei-chi). The board is 19x19 and the game tree is incomparably bigger than that of chess. If you'd asked me at the time I would have said Go mastery was inconceivable (which, if we had to use GOFAI, was true).

Then in 2016 the unthinkable occurred: a program called "AlphaGo" started beating Go champions. It did not use Wiener's approach; instead it used Machine Learning (ML) (don't ask me how that works). AlphaGo trained by playing millions of games against itself. Originally it was given hundreds of thousands of expert-level human games, but its successor AlphaZero dispensed with them and simply taught itself. It took only a few hours to reach expert level - a level that took humans hundreds of years to attain. Variants of the software mastered chess, checkers, and shogi in a similar fashion.
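To make the min/max recipe concrete, here is a minimal sketch of depth-limited minimax in Python, applied to a toy take-the-last-counter game. The Position class, the game, and the trivial heuristic are my own illustration - nothing here is taken from Cybernetics or from Deep Blue, which evaluated real chess positions and searched vastly deeper.

```python
# Minimal sketch of Wiener-style depth-limited minimax: search the game tree
# to a fixed depth, then fall back on a heuristic evaluation of the position.
# The game here is a toy: players alternately remove 1-3 counters; whoever
# takes the last counter wins.

class Position:
    def __init__(self, counters, to_move=+1):
        self.counters, self.to_move = counters, to_move

    def moves(self):
        # All positions reachable in one move (take 1, 2 or 3 counters).
        return [Position(self.counters - k, -self.to_move)
                for k in (1, 2, 3) if k <= self.counters]

    def is_terminal(self):
        return self.counters == 0

    def evaluate(self):
        # Heuristic evaluation. In chess this is where material and positional
        # scores would go; in this toy game we only score finished positions.
        if self.is_terminal():
            return -self.to_move   # the player who just moved took the last counter
        return 0

def minimax(pos, depth):
    """Value of `pos` from the maximizing player's (+1) point of view."""
    if depth == 0 or pos.is_terminal():
        return pos.evaluate()
    values = [minimax(child, depth - 1) for child in pos.moves()]
    return max(values) if pos.to_move == +1 else min(values)

def best_move(pos, depth=6):
    """Choose the child position that minimax rates best for the side to move."""
    key = lambda child: minimax(child, depth - 1)
    return (max if pos.to_move == +1 else min)(pos.moves(), key=key)

print(best_move(Position(5)).counters)   # prints 4: taking one counter is the winning move
```

Deep Blue's contribution was essentially brute force: special hardware that pushed this same basic search to 100 million positions per second.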
Google Translate

About the same time, users of Google Translate noticed a sudden dramatic increase in the quality of its translations, although Google at the time said nothing. In fact Google had switched from statistical methods to neural networks (don't ask).

Currently Google's translations are spectacularly good (and DeepL's even better). In translating into French, for example, it usually gets the pronouns right, as well as the gender/number agreements. If you want perfect output you still have to at least proofread it, because occasionally there are mistakes, usually minor. Here is a translation from today's Le Monde. It's perfect.

Since the beginning of the conflict, Russia has refused to describe its invasion of Ukraine as a war, calling it a "special military operation". Shortly after the conflict broke out, the authorities banned the Russian media from using any information other than official statements on the subject and banned the use of words such as "war" and "invasion", under penalty of being punished by the law.

GOFAI never produced really good translations, relying as it did on facts and rules. I remember the whole class getting a good laugh out of the mistakes. Once, translating a news article about Iraq from German, it claimed Saddam "warned the US before an attack" when what it should have said was "Saddam warned the US of an attack". Modern systems would never make such a basic mistake.

Generative Speech

Then came GPT-3 and the other systems that generate speech. Originally they were merely predictors that would complete incomplete sentences. But now they run on and produce whole articles, which (usually) make sense. Here's an example (a sketch of how to try this yourself with a small public model appears at the end of the Generative Art section below):

Author Prompt: Write me an article about creativity please

GPT-3 Response: I'll be honest. I don't know much about creativity. I'm not a psychologist and I'm not a creative myself. I'm just a guy who likes to read about psychology and creativity. What I can do, though, is write you an article about creativity that's based on what I have read. That way you get a proper article, but you don't have to pay for the expertise of a psychologist or a creative person.

GOFAI never achieved anything along these lines. But then my mind was well and truly boggled by ...

Generative Art

Along came DALL-E and DALL-E 2. But it wasn't till Stable Diffusion was released that I started paying attention. Of course there were the pictures of astronauts on horseback and cats wearing sunglasses. But what really impressed me was pictures in the style of well-known artists. Here are two of my favourites:

[image] "Lockers" 'by' Picasso

The first is an abstract image in the style of Picasso. I can't find the original but Midjourney's version is just marvellous. I'd have no hesitation to print it, frame it, and hang it on my wall.

My second favourite is a wonderful portrait of Superman - 'by' Rembrandt! As one observer commented, "those eyes have seen some shit!"

But even the cheesy astronaut image is impressive. The striking fact is that you can't see the astronaut's left leg. The image generator seems to understand that you can't see through opaque objects (namely, the horse). GOFAI would need literally hundreds of rules just about what to do when bodies overlap, what to show, what objects are transparent and to what degree, and so on.
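Before moving on, here is the sketch promised in the Generative Speech section above: the "predictor" behaviour reproduced with the small, freely downloadable GPT-2 model via Hugging Face's transformers library, as a stand-in for GPT-3 (which is only reachable through OpenAI's API). The prompt and sampling settings are illustrative, and the output will be far weaker than GPT-3's.

```python
# Requires the `transformers` library and a PyTorch backend.
from transformers import pipeline

# Load a small public text-generation model (GPT-2), a toy stand-in for GPT-3.
generator = pipeline("text-generation", model="gpt2")

prompt = "Write me an article about creativity please."
out = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)

# The result is the prompt followed by the model's continuation.
print(out[0]["generated_text"])
```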
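The image side can be tried the same way. Here is a minimal sketch of how Stable Diffusion, the openly released cousin of Midjourney and DALL-E, is typically invoked through Hugging Face's diffusers library. The checkpoint name, prompt, and settings are illustrative assumptions; this shows how the public tool is driven, not how Midjourney works internally.

```python
# Requires the `diffusers` library, PyTorch, and (realistically) a GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a commonly used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # move the model to the GPU

image = pipe(
    "an astronaut riding a horse, photorealistic",
    num_inference_steps=30,             # more steps: slower, usually cleaner
    guidance_scale=7.5,                 # how strongly to follow the prompt
).images[0]

image.save("astronaut.png")
```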
On reflection

OK, let's go all in - let's look at a cat wearing sunglasses.

Ew, cheesy - but there's something remarkable about the image. It's the reflections in the lenses of the sunglasses. Not only are they visible, but the reflections are, correctly, the same. How does Midjourney coordinate the images in separate parts of the picture?

A closer look

[image] Almost symmetrical ...

When I see this image I have to ask, where did all this come from? Midjourney is trained on 500 billion images but condenses this training to 5 GB. So there's not enough room to include exact copies of images found in the training set. We can assume that this (apparent) photo does not exist as is on the internet. In particular, what about the blue feathers on either side of the subject's neck (they are not mirror images)? Where did they come from? Did one of the training images have them?

The mystery is that this image is the result of combining training-set images, but how are they put together? The best GOFAI could do is chop up the training images and put them together like a badly fitting crossword puzzle, with visible seams and limited symmetry. I'm baffled.

The social implications of AI technology

It is questionable if all the mechanical inventions yet made have lightened the day's toil of any human being. ~ John Stuart Mill

There is a lot of controversy about Midjourney and other generative image programs.

The first question is, are these images art? I think some of the images presented here are definitely art, even good art. If you're not convinced, have another 'Rembrandt'.

The second question is, is imitating the style of certain artists fair? I don't know, but there seems to be no way to stop it. Currently nothing stops a human artist from studying living artists and imitating their styles. Midjourney etc. are just especially good at this. In a sense, this imitation broadens the exposure of the imitated artists. Now everyone can have, say, a Monet of their own.

Finally, a vital question is, how will this affect today's working artists? Here the answer is not so optimistic. Generative AI is not the first disruptive technology. There's photography (the closest analog), digital art in general, the telephone, the automobile, the record player, the printing press, and so on. Each of these had the effect of obsoleting the skills of whole professions. They didn't wipe those professions out, but the vast increase in productivity put large numbers out of work. And those who remained had to acquire and use the new tools. Because of economic competition they had to work harder than ever to keep up.

Labor-saving technology inevitably becomes profit-saving technology. The tractor is an example. Initially it (and farm machinery in general) was marketed as labor saving. But eventually competition forced every farmer to get machinery or sell out (which most had to do). The result was the same amount of food, or more, produced by a fraction of the former number of farmers, working their butts off.

So I predict AI will shrink the number of artists and force them to use Midjourney etc. For art consumers, it will be good news - like drinking from a firehose. A new individual Monet every week. Do-it-yourself illustrations for personal blogs. But no real change in society as a whole.
1 Response to "GOFAI is dead - long live (NF) AI!"

P.M. Lawrence says (November 14, 2022 at 7:10 am): What answer do you get if you ask NFAI "What do you recommend we use NFAI for?"