Post AdadelKziAb6afo7UW by John@socks.masto.host
 (DIR) Post #AdaLaGYQQHt8mgnOvw by simon@fedi.simonwillison.net
       2024-01-07T00:04:12Z
       
        1 like, 0 repeats
       
        It’s OK to call it Artificial Intelligence: I wrote about how people really love objecting to the term "AI" to describe LLMs and suchlike because those things aren't actually "intelligent" - but the term AI has been used to describe exactly this kind of research since 1955, and arguing otherwise at this point isn't a helpful contribution to the discussion. https://simonwillison.net/2024/Jan/7/call-it-ai/
       
 (DIR) Post #AdaLw0VWlUWpO3o7ZA by tychotithonus@infosec.exchange
       2024-01-07T00:08:10Z
       
       0 likes, 0 repeats
       
       @simon Hearty agreement - though we now have the challenge of how to manage the boom of popular awareness, balanced with their understandably non-technical definition of intelligence. I'd argue ;) that the "arguing otherwise" is often about exactly that vocabulary mismatch.
       
 (DIR) Post #AdaM70h8PR2t9YU1LM by simon@fedi.simonwillison.net
       2024-01-07T00:09:42Z
       
        1 like, 0 repeats
       
       Short version: "I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet."
       
 (DIR) Post #AdaMQvLC1aJGL8iR6W by serapath@mastodon.gamedev.place
       2024-01-07T00:14:00Z
       
       0 likes, 0 repeats
       
        @simon maybe, but because some science people working on it called it that doesn't mean we have to accept the word. The more general term hides the more specific, nuanced and more informative details. Also, once introduced into the mainstream vocabulary it might clash with other mainstream meanings, and it is easier for a small group to change their wording than for a large group. I generally think scientists should strive to simplify their language, but some actually hide behind it.
       
 (DIR) Post #AdaMe2OV6z1fypW3dI by simon@fedi.simonwillison.net
       2024-01-07T00:15:51Z
       
       0 likes, 0 repeats
       
        @serapath I think refusing to accept the word at this point actively hurts our ability to have important conversations about it. Is there an argument that refusing to use the word Artificial Intelligence can have a positive overall impact on conversations and understanding? I'm open to hearing one!
       
 (DIR) Post #AdaMpl6vCHw1qpsmfI by eliocamp@mastodon.social
       2024-01-07T00:17:02Z
       
       0 likes, 0 repeats
       
       @simon Bad assumption, I'd say.
       
 (DIR) Post #AdaN0wW38InsPRy8Xo by mia@front-end.social
       2024-01-07T00:18:56Z
       
       0 likes, 0 repeats
       
        @simon What makes this hard currently is that many of the loudest advocates *are explicitly* talking about Skynet (or "digital god" or whatever). And it seems like they're using this history of the term as cover with a general audience.
       
 (DIR) Post #AdaNAYhKXhxZR1pYsC by serapath@mastodon.gamedev.place
       2024-01-07T00:19:58Z
       
       0 likes, 0 repeats
       
        @simon I do think AI gives way too much credibility to it. People saw and read scifi movies/books and believe ChatGPT & co. despite all the confident bullshit it shares. Also, image recognition is different from a large language model, so what are we even talking about when talking about AI? It is way too broad to make useful statements, other than what we all saw in scifi movies at some point imho
       
 (DIR) Post #AdaNLcQN1o0EB5Ahg8 by zzzeek@hachyderm.io
       2024-01-07T00:20:20Z
       
       0 likes, 0 repeats
       
       @simon the term "AI" deeply misleads laypeople into thinking sentient minds are at play, leading to all kinds of misuse/harm. I dont have to list links to all the damage "AI" has done so far due to people putting it in charge of things since "it's intelligent".going to keep using technical terms like "machine learning" so that all the non-tech people I talk to understand a tech person like me does not consider this stuff to be "intelligent" in any way we usually define that term for humans
       
 (DIR) Post #AdaNLftY7m7qwRqY4m by zzzeek@hachyderm.io
       2024-01-07T00:22:31Z
       
       0 likes, 0 repeats
       
        @simon less harm was done in 1955, 1960, 1970 etc. because we didn't have machines that were so singularly focused on pretending to be (confident, authoritative) humans at such massive scales - there was little chance of misunderstanding back then. Now these machines have "I hope you misunderstand what I do" at their core
       
 (DIR) Post #AdaNYUWJj6uvQc5ijo by kevinbowrin@ottawa.place
       2024-01-07T00:20:38Z
       
       0 likes, 0 repeats
       
       @simon I was guilty of this just this morning, you've changed my mind. Thank you!
       
 (DIR) Post #AdaNiLy6NRBhnin2NE by not2b@sfba.social
       2024-01-07T00:20:40Z
       
       0 likes, 0 repeats
       
       @simon "AI" isn't wrong, but I think it is most helpful to use the most specific term that applies. So if you are talking about issues with LLMs in particular, better to say LLMs.
       
 (DIR) Post #AdaNsnDjuP8HDa5A4e by simon@fedi.simonwillison.net
       2024-01-07T00:21:04Z
       
       0 likes, 0 repeats
       
        @serapath@gamedev.place That's the exact position I'm arguing against. Yes, it's not "intelligent" like in science fiction - but we need to educate people that science fiction isn't real, not throw away a whole academic discipline and pick a different word! Image recognition is part of AI too
       
 (DIR) Post #AdaOCbrDrmN5le026a by simon@fedi.simonwillison.net
       2024-01-07T00:22:30Z
       
       0 likes, 0 repeats
       
       @not2b That's what I've been doing, but I think it's actually hurting my ability to communicate. I have to start every blog entry with "LLMs, Large Language Models, the technology behind ChatGPT and Bard" - and I'm not sure that's helping people understand my material better!
       
 (DIR) Post #AdaOQBIbPGypS6HH2e by tcook@hachyderm.io
       2024-01-07T00:34:09Z
       
       0 likes, 0 repeats
       
       @simon Feels a lot like how the government long ago adopted “cyber” to encompass any kind of computer/network discussions. Everybody in industry hates it but is forced to play along because it opens doors (and wallets) of those outside the industry.
       
 (DIR) Post #AdaOzBPiyI3LwaIbPk by scottjenson@social.coop
       2024-01-07T00:42:33Z
       
       0 likes, 0 repeats
       
       @simon I so want to agree with you. What's making me a ReplyGuy is that people outside the field put far too much weight on what AI means. Too many don't understand how narrow LLMs are, spinning doomsday scenarios far too easily. (but they ARE powerful!) I don't like to use the term just to back these people off the ledge
       
 (DIR) Post #AdaPJ6Elva6jgFfnIu by adr@mastodon.social
       2024-01-07T00:46:13Z
       
       0 likes, 0 repeats
       
       @simon yeah, I have been historically fairly resistant to calling LLM and related things AI and I figure I'm just being...obtuse for no reason and should relax about it.
       
 (DIR) Post #AdaPgiOeDbtGgpYLx2 by simon@fedi.simonwillison.net
       2024-01-07T00:50:26Z
       
       0 likes, 0 repeats
       
        @scottjenson Yeah, that's exactly why I was resistant to the term too - the "general public" (for want of a better term) knows what AI is, and it's Skynet / The Matrix / Data from Star Trek / Jarvis / Ultron. I decided to give the audience of my writing the benefit of the doubt that they wouldn't be confused by science fiction
       
 (DIR) Post #AdaPs86ixyEbcGc2JE by serapath@mastodon.gamedev.place
       2024-01-07T00:50:42Z
       
       0 likes, 0 repeats
       
        @simon hm, yeah no. I disagree. Mainstream people have as much right to their words as scientists, but mainstream is in the majority, and AI will also continue to be abused by marketing to make outrageous claims. I don't think AI helps anyone and I will continue to ignore anyone talking about AI
       
 (DIR) Post #AdaQ2bXQOmDVn161Wy by simon@fedi.simonwillison.net
       2024-01-07T00:52:46Z
       
       0 likes, 0 repeats
       
       @zzzeek That's a very strong argument. I'm going to add a longer section about science fiction to my post, because that's the reason I held off on the term for so long too
       
 (DIR) Post #AdaQGMJPnyqeca6NcW by lgw4@hachyderm.io
       2024-01-07T00:56:09Z
       
       0 likes, 0 repeats
       
       @simon "Artificial intelligence has been used incorrectly since 1955" is not a convincing argument to me (and means our predecessors are as much to blame for misleading the general public as contemporary hucksters claiming ChatGPT is going to cause human extinction).
       
 (DIR) Post #AdaR3m0v2Gd0WV0sBE by simon@fedi.simonwillison.net
       2024-01-07T01:05:44Z
       
       0 likes, 0 repeats
       
       @zzzeek Added that section here https://simonwillison.net/2024/Jan/7/call-it-ai/#argument-against
       
 (DIR) Post #AdaRG3noiuunAWkAYC by simon@fedi.simonwillison.net
       2024-01-07T01:06:46Z
       
       0 likes, 0 repeats
       
        @lgw4 I don't think they were wrong to coin a term in 1955 with a perfectly reasonable definition, then consistently apply that definition for nearly 70 years. It's not their fault that science fiction redefined it from under them!
       
 (DIR) Post #AdaRRZR9Rp1ySzd9Lk by simon@fedi.simonwillison.net
       2024-01-07T01:07:21Z
       
       0 likes, 0 repeats
       
       @serapath Which version of AI do you think doesn't help anyone? It's a pretty broad field!
       
 (DIR) Post #AdaReyCgg2E72mFpg0 by simon@fedi.simonwillison.net
       2024-01-07T01:09:59Z
       
       0 likes, 0 repeats
       
        I added an extra section to my post providing a better version of the argument as to why we shouldn't call it AI: https://simonwillison.net/2024/Jan/7/call-it-ai/#argument-against
       
 (DIR) Post #AdaS0LfCeHKPsumoa0 by Captainobservant@zeroes.ca
       2024-01-07T01:16:17Z
       
       0 likes, 0 repeats
       
       @simon The problem is people taking it literally. Yeah, AI is a field of computer science. But it's being marketed as a product. And it's being hyped as if it's now an achieved reality, instead of just software that mimics human conversation, art, etc.
       
 (DIR) Post #AdaT2slEfGTh4FNcxc by not2b@sfba.social
       2024-01-07T01:28:04Z
       
       0 likes, 0 repeats
       
       @simon I think that is fine though.
       
 (DIR) Post #AdaTidV7n4vfu0konA by zzzeek@hachyderm.io
       2024-01-07T01:35:36Z
       
       0 likes, 0 repeats
       
       @simon thanks!
       
 (DIR) Post #AdaTtPv6cmcqy2301w by trisweb@m.trisweb.com
       2024-01-07T01:36:18Z
       
       0 likes, 0 repeats
       
       @simon odd argument. I don’t see an argument in it.
       
 (DIR) Post #AdaU4i4BUrocQtg8Mi by crazyeddie@mastodon.social
       2024-01-07T01:38:52Z
       
       0 likes, 0 repeats
       
       @simon NegaMax is AI.  Mic drop.
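
        (For reference: negamax is the compact formulation of minimax game-tree search that has been taught as textbook AI for decades. A minimal Python sketch, assuming a hypothetical Game interface with legal_moves(), apply() and score() methods:)

            def negamax(game, depth, color):
                # Negamax exploits the zero-sum identity max(a, b) == -min(-a, -b),
                # so both players share a single maximizing code path.
                if depth == 0 or not game.legal_moves():
                    return color * game.score()  # static evaluation from the mover's point of view
                best = float("-inf")
                for move in game.legal_moves():
                    child = game.apply(move)  # hypothetical: returns the successor state
                    best = max(best, -negamax(child, depth - 1, -color))
                return best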
       
 (DIR) Post #AdaV0mppvQzQ1Bf1Rg by serapath@mastodon.gamedev.place
       2024-01-07T01:50:11Z
       
       0 likes, 0 repeats
       
        @simon The word AI does not help anyone with anything, because you also can't tell which version or part I even mean when saying that, hence it is just confusing. 😁 I just meant the term
       
 (DIR) Post #AdaVCj5hfhNPsgm8um by trisweb@m.trisweb.com
       2024-01-07T01:50:17Z
       
       0 likes, 0 repeats
       
       @simon nah. I ain’t callin it AI until it is, hard stop.
       
 (DIR) Post #AdaVPFcqXCyQGCFZJI by futuraprime@vis.social
       2024-01-07T01:51:19Z
       
       0 likes, 0 repeats
       
       @simon  I’ve been thinking about this too, but on a slightly different line. It’s not about science fiction, it’s that we so strongly tie language with intelligence. The Turing test is based on this connection. We measure children’s development in language milestones, and look for signs of language in animals to assess their intelligence. It goes back a long way—“dumb” in English has meant both “unable to speak” and “unintelligent” for 800 years. The confusion is reflexive and deep-seated.
       
 (DIR) Post #AdaVzJoASecbeokBOK by codinghorror@infosec.exchange
       2024-01-07T02:00:53Z
       
       0 likes, 0 repeats
       
       @simon comments like yours aren't a helpful contribution to the discussion, IMO. But what do I know, I just founded Stack Overflow.
       
 (DIR) Post #AdaWv1CHoYIyjSXS64 by trdebunked@mastodon.social
       2024-01-07T02:11:13Z
       
       0 likes, 0 repeats
       
       @simon personally, i refuse to call it artificial intelligence until it is REAL intelligence.
       
 (DIR) Post #AdaZSns04Rxv9XHtCK by zzzeek@hachyderm.io
       2024-01-07T01:37:19Z
       
       0 likes, 0 repeats
       
        @simon just as the center of my assertion "I hope you misunderstand what I do", I would use the "AI Safety" letter as the prime example, of billionaires and billionaire-adjacent types declaring that this "AI" is so, so close to total sentience that governments *must* stop everyone (except us! who should be gatekeepers) from developing this *so very dangerous and powerful!* technology any further. Lots of non-tech ppl signed onto that thing and it was quite alarming
       
 (DIR) Post #AdaZSoh30enZhrAelU by simon@fedi.simonwillison.net
       2024-01-07T02:39:50Z
       
       0 likes, 0 repeats
       
       @zzzeek urgh, yeah the thing where people are leaning into the science fiction definition to help promote the technology is really upsetting
       
 (DIR) Post #AdaaEzR2NJoavBeY0e by codinghorror@infosec.exchange
       2024-01-07T02:02:26Z
       
       0 likes, 0 repeats
       
       @simon I mean, I dunno, maybe you built something significant? What is it? Can you share it with us? Or are you another talking head? Feel free to enlighten us, oh master of terminology! What did you build that shaped the world?
       
 (DIR) Post #AdaaF09LiZGD8cNw4u by simon@fedi.simonwillison.net
       2024-01-07T02:43:23Z
       
       0 likes, 0 repeats
       
       @codinghorror I've built open source stuff, but hopefully my credibility in this particular space comes from having spent the last year working hard to help people understand what's going on - https://simonwillison.net/2023/Aug/3/weird-world-of-llms/ and suchlike
       
 (DIR) Post #AdaadQu9tYIP4DvhEe by SnoopJ@hachyderm.io
       2024-01-07T02:44:09Z
       
       0 likes, 0 repeats
       
        @simon to me, the relationship between "AI" and "LLM/etc." feels somewhat akin to the relationship between "speed" and "velocity" in common usage. It's not 1:1 or anything, but "AI" feels like it's in word-gruel territory more often. And it's probably fine if colloquial usage doesn't really care about how mushy that usage is.
       
 (DIR) Post #Adab09aiQswk1wfZ32 by richardsheridan@mastodon.sandwich.net
       2024-01-07T02:10:18Z
       
       0 likes, 0 repeats
       
       @codinghorror @simon Lol. Django among others. Aggressively building open tooling for LLM application. Mediocre dunk attempt. https://en.wikipedia.org/wiki/Simon_Willison
       
 (DIR) Post #Adab0AjGCbNLYjfu0O by codinghorror@infosec.exchange
       2024-01-07T02:19:04Z
       
       0 likes, 0 repeats
       
       @richardsheridan @simon I’ll take that. Stack Overflow far more relevant to world than “Django”
       
 (DIR) Post #Adab0BamzaC4EkieRM by codinghorror@infosec.exchange
       2024-01-07T02:19:48Z
       
       0 likes, 0 repeats
       
       @richardsheridan @simon I mean, is it a movie reference? Lame. Bring it. I’m ready for you. Let’s go. Let’s see who is on the right side of history here. Cmon. Let’s do this.
       
 (DIR) Post #Adab0CRFqWA2rTGYDY by codinghorror@infosec.exchange
       2024-01-07T02:31:42Z
       
       0 likes, 0 repeats
       
       @richardsheridan @simon these models have zero understanding of what they are “talking” about, it’s just scraped text statistical inference. Basically a fancy “summarize these 100 articles using the most common words in each article” feature. Which, to be fair, is more useful than cryptocurrency. But that is an absurdly low bar. So yeah, zero “intelligence”. I’ll die on this hill with gusto.
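
        (Taking that "most common words" analogy literally, purely as an illustration of the rhetorical point - real LLMs are transformer networks, not word counters. A toy Python sketch:)

            from collections import Counter

            def naive_summary(articles, top_n=10):
                # Toy "summarizer": rank words purely by how often they appear
                # across all of the supplied articles.
                counts = Counter()
                for text in articles:
                    counts.update(word.lower().strip(".,!?") for word in text.split())
                return [word for word, _ in counts.most_common(top_n)]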
       
 (DIR) Post #Adab0DCkzu9tEnUUG8 by simon@fedi.simonwillison.net
       2024-01-07T02:45:11Z
       
       0 likes, 0 repeats
       
        @codinghorror @richardsheridan I agree with you! These things are spicy autocomplete, they're not "artificial intelligence" in the science fiction definition. My argument here is that AI should mean what it's meant in academia since the 1950s, and we should reclaim it from science fiction
       
 (DIR) Post #Adab0EWI6lNVJfdbLU by codinghorror@infosec.exchange
       2024-01-07T02:32:27Z
       
       0 likes, 0 repeats
       
       @richardsheridan @simon sorry but that’s the truth. If the truth hurts, I’m sorry. Deal with it dot gif
       
 (DIR) Post #Adab0GTsogdlOmWhs0 by codinghorror@infosec.exchange
       2024-01-07T02:35:50Z
       
       0 likes, 0 repeats
       
       @richardsheridan @simon “I don’t think refusing to use the term AI is an effective way for us to do that.” hard disagree and espousing this viewpoint sets us back in computer science as a whole. You are actively causing harm to the field.
       
 (DIR) Post #AdabRA8cVWjDXrPHIu by chrisbrennan@mastodon.social
       2024-01-07T02:46:21Z
       
       0 likes, 0 repeats
       
       @simon Calling LLMs “AI” is a bald faced lie. The promoters try to excuse it by saying they’re using a different definition of intelligence now. But they know nobody else is using this novel definition. They are getting away with it because we live in the Era of Shamelessness.
       
 (DIR) Post #Adabd3yXFJATuIMe2q by simon@fedi.simonwillison.net
       2024-01-07T02:46:20Z
       
       0 likes, 0 repeats
       
       @codinghorror @richardsheridan I'm ready to be convinced of that - that calling it "AI" really does cause harm and that there are more useful terms we can be using - but you need to make the argument
       
 (DIR) Post #AdadR4PE1GWLfwG0iO by glyph@mastodon.social
       2024-01-07T02:59:06Z
       
       0 likes, 0 repeats
       
       @simon one argument that you’re not addressing here is that it dates anything you are writing, in a way that makes it hard to understand without first understanding its contemporaneous terminology. Our current view of AI as an actual *technology*—statistical machine-learning techniques, as opposed to just the chatbot UI paradigm—is quite new and quite *at odds with* previous understanding of the term (like, say, expert systems). It may be at odds with future understandings as well.
       
 (DIR) Post #AdadR6cPnCGAXQRZYG by glyph@mastodon.social
       2024-01-07T02:59:06Z
       
       0 likes, 0 repeats
       
        @simon Also, not for nothing but you are giving the lay public _way_ too much credit when it comes to understanding the limitations of LLMs and PIGs. Numerous people are doing additional jail time because even highly-educated, nationally-renowned *lawyers* cannot wrap their heads around this. The term very definitely obscures more than it reveals, and the “well, actually” pedantic conversation about its inappropriateness *does* drive deeper understanding of it.
       
 (DIR) Post #AdadR8rNSXPtUPSY9Q by glyph@mastodon.social
       2024-01-07T02:59:07Z
       
       0 likes, 0 repeats
       
       @simon I do absolutely appreciate your argument here too, I feel like pedantic linguistic prescriptivism needs to meet a *really* high bar in order to be considered worthwhile, and many are not holding it to that bar (simply ending with the thought-terminating “spicy autocomplete” cliche is probably worse than nothing at this point, discourse-wise) but this does still seem to me like the rare case where it is in fact warranted.
       
 (DIR) Post #AdadelKziAb6afo7UW by John@socks.masto.host
       2024-01-07T03:02:13Z
       
       0 likes, 0 repeats
       
        @simon @codinghorror @richardsheridan Current AIs do not "form abstractions and concepts" in the way early researchers imagined. They imagined knowledge graphs with confirmed links between "cat is feline" and "cat eats mouse" and suchlike. Not a black box of [indeterminate] probabilities.
       
 (DIR) Post #Adae337FGQREg8xP96 by Grizzlysgrowls@twit.social
       2024-01-07T03:04:07Z
       
       0 likes, 0 repeats
       
       @simon Well... not exactly...
       
 (DIR) Post #AdaeQb25KPoiWvltnk by jannem@fosstodon.org
       2024-01-07T03:10:17Z
       
       0 likes, 0 repeats
       
       @simon @codinghorror @richardsheridan Has "AI" ever carried connotations of actual intelligence in the CS field? "AI" used to mean expert systems, logical inference, playing chess, "fuzzy logic", and so on and so on - none of which had any more to do with actual intelligence than deep neural networks.
       
 (DIR) Post #AdaeomQMcsuwoJexG4 by pieist@qoto.org
       2024-01-07T03:24:37Z
       
       0 likes, 0 repeats
       
       @simon Is it also OK not to?
       
 (DIR) Post #AdafRlcJYrcFbIq7wu by simon@fedi.simonwillison.net
       2024-01-07T03:41:13Z
       
       0 likes, 0 repeats
       
        @jannem @codinghorror @richardsheridan right: that's my point: AI is a term we have used since the 1950s for technology that "isn't actually intelligent", so there's plenty of precedent for using it that way. That's why we have the term "AGI"
       
 (DIR) Post #AdafeOge7tKGFTt8uO by codinghorror@infosec.exchange
       2024-01-07T03:41:26Z
       
       0 likes, 0 repeats
       
       @simon @richardsheridan blog post incoming tomorrow. Expect an ICBM
       
 (DIR) Post #AdafsdzI2Ftzl0P03k by codinghorror@infosec.exchange
       2024-01-07T03:42:11Z
       
       0 likes, 0 repeats
       
        @simon @richardsheridan fair, but far too subtle to the point of irrelevancy
       
 (DIR) Post #AdagJ7QfC9qsb6pX3Q by simon@fedi.simonwillison.net
       2024-01-07T03:42:28Z
       
       0 likes, 0 repeats
       
        @carlana @glyph I love that idea that laypeople have more confidence in the definition than practitioners do!
       
 (DIR) Post #Adagb7hEwKDedhAWpc by simon@fedi.simonwillison.net
       2024-01-07T03:43:55Z
       
       0 likes, 0 repeats
       
        @pieist yes, absolutely - I think the thing that's not OK here is fiercely arguing that people who call LLMs AI shouldn't do that, to the point of derailing more useful conversations
       
 (DIR) Post #AdaheWCcNo8PSz1pZo by codinghorror@infosec.exchange
       2024-01-07T03:47:48Z
       
       0 likes, 0 repeats
       
       @simon it's fair, I will write up my viewpoint in substantially more detail tomorrow. There is no "intelligence" in LLMs.
       
 (DIR) Post #AdahrYYWwblIWvUjuS by b_cavello@mastodon.publicinterest.town
       2024-01-07T03:26:49Z
       
       0 likes, 0 repeats
       
       @codinghorror @simon do you always act like this much of a jerk? Weird flex.
       
 (DIR) Post #AdahrZOznXjH9e2dge by codinghorror@infosec.exchange
       2024-01-07T03:40:46Z
       
       0 likes, 0 repeats
       
       @b_cavello @simon if you are harming the entire industry and everyone in it, fuck yes
       
 (DIR) Post #AdahraeHADXv1KCM8u by simon@fedi.simonwillison.net
       2024-01-07T03:49:39Z
       
       0 likes, 0 repeats
       
        @codinghorror @b_cavello just to clarify, I'm not an "AI is the best thing ever" hype-merchant - I have written extensively about the many downsides and flaws of modern AI:
        - https://simonwillison.net/2022/Sep/5/laion-aesthetics-weeknotes/
        - https://simonwillison.net/2023/Apr/10/ai-safety/
        - https://simonwillison.net/series/prompt-injection/
        - https://simonwillison.net/tags/ai+ethics/
       
 (DIR) Post #Adai3qMUZItpEFiswC by b_cavello@mastodon.publicinterest.town
       2024-01-07T03:50:51Z
       
       0 likes, 0 repeats
       
        @simon I’m inclined to disagree, but I do think that it’s a bit of a lost battle. I’d rather encourage people to “yes, and” and just get more specific: https://www.aspendigital.org/report/ai-101/#section6
        I don’t take issue with the term “AI,” however, and I think that’s a handy alternative. Sisi Wei actually beat me to the punch on this in a recent #TalkBetterAboutAI conversation: https://youtu.be/KSsxuEtGgEg
       
 (DIR) Post #Adai3rirVcO5RvCGRc by b_cavello@mastodon.publicinterest.town
       2024-01-07T03:51:20Z
       
       0 likes, 0 repeats
       
        @simon I think you’re right that the academic community has largely had a shared sense of the term, but the public hasn’t (and I think there are some p compelling arguments for challenging conceptions of intelligence: https://buttondown.email/ninelives/archive/language-is-a-poor-heuristic-for-intelligence/)
       
 (DIR) Post #AdaiG7Nt6ZBlUxEFGa by lgw4@hachyderm.io
       2024-01-07T03:51:10Z
       
       0 likes, 0 repeats
       
       @simon Machines with intelligence similar to (or better than) that of humans (that is, the current popular concept of artificial intelligence) has been present in science fiction since the 19th century. Dystopian (and utopian) fantasies of humans subjugated (or assisted) by these machine intelligences have been science fiction tropes continuously since then. I would wager that John McCarthy was aware of this fact. No one "redefined it from under them."
       
 (DIR) Post #AdaiUfsvd3yGYQglsW by simon@fedi.simonwillison.net
       2024-01-07T03:51:44Z
       
       0 likes, 0 repeats
       
        @codinghorror I'm not arguing that there's any intelligence in them here - I'm arguing that reacting to that fact by trying to discourage the use of the term "AI" (which I myself have tried to do in the past) isn't the best use of our efforts
       
 (DIR) Post #Adaipz01kzZ3uzAaqO by simon@fedi.simonwillison.net
       2024-01-07T03:54:36Z
       
       0 likes, 0 repeats
       
       @lgw4 that's not an argument I'd heard before! I know science fiction had AI all the way back to Erewhon https://en.m.wikipedia.org/wiki/Erewhon but I was under the impression that the term itself was first used by McCarthy
       
 (DIR) Post #Adajghyzks7aUQpQaO by glyph@mastodon.social
       2024-01-07T04:02:54Z
       
        0 likes, 1 repeat
       
       @simon @carlana this strikes closer to the heart of my objection. A lot of insiders—not practitioners as such, but marketers & executives—use "AI" as the label not in spite of its confusion with the layperson's definition, but *because* of it. Investors who vaguely associate it with machine-god hegemony assume that it will be very profitable. Users assume it will solve their problems. It's a term whose primary purpose has become deceptive.
       
 (DIR) Post #Adajgjy0NWWAdwNfJw by glyph@mastodon.social
       2024-01-07T04:04:11Z
       
        0 likes, 1 repeat
       
       @simon @carlana At the same time, a lot of the deception is unintentional. When you exist in a sector of the industry that the public knows as "AI", that the media calls "AI", that industry publications refer to as "AI", that *other* products identify as "AI", going out on a limb and trying to build a brand identity around pedantic hairsplitting around "LLMs" and "machine learning" is a massive uphill battle which you are disincentivized at every possible turn to avoid.
       
 (DIR) Post #AdakEPstYD5ECk34F6 by codinghorror@infosec.exchange
       2024-01-07T04:06:16Z
       
       0 likes, 0 repeats
       
       @simon @jannem @richardsheridan "AGI is also known as strong AI,[11][12] full AI,[13] human-level AI[6] or general intelligent action" so many weasel words here it's hard to keep count. I will be destroying this tomorrow in context.
       
 (DIR) Post #AdakR6OdJnUxWmwVpw by ganonmaster@open3dlab.social
       2024-01-07T04:08:56Z
       
       0 likes, 0 repeats
       
       @simon I do tend to agree with your argument. It doesn't matter that much what we call it at this point - it's a clear umbrella term for the majority of the population. You can get more granular as discussion gets more specific and academic. I don't think my mom is going to understand the difference between AGI and a multi-modal large language model (MMLLM?) - it's absurd to expect otherwise. Meanwhile, these systems are becoming part of everyone's life - these nuances are meaningless.
       
 (DIR) Post #AdakR8aPB06SJsSwSm by ganonmaster@open3dlab.social
       2024-01-07T04:12:30Z
       
       0 likes, 0 repeats
       
       @simon Focusing on the semantics is a distraction from the real tangible impact that these systems are having on our daily lives. AI is causing measurable harm as we speak, and quarreling about semantics is a stupid, meaningless distraction from the real world impact that these systems are having. (power consumption/global warming, inaccurate/invalid results, cheap/slave labor used for data labeling, rights issues, privacy violations, etc.)
       
 (DIR) Post #AdaktPzW3gn8POhFTs by simon@fedi.simonwillison.net
       2024-01-07T04:13:38Z
       
       0 likes, 0 repeats
       
       @ganonmaster 100% this - my concern is that anyone who says "You know it's not even AI?" is wasting an opportunity to have a more useful conversation
       
 (DIR) Post #AdalcYP7crnRX4CCqu by simon@fedi.simonwillison.net
       2024-01-07T04:27:01Z
       
       0 likes, 0 repeats
       
        @glyph @carlana that's exactly it - I've been half-heartedly fighting the LLM hairsplitting fight for most of the last year and I got tired of it - it didn't feel like it was gaining anything meaningful
       
 (DIR) Post #AdanCWsYsEPTBIgU1Q by evan@cosocial.ca
       2024-01-07T04:38:03Z
       
       0 likes, 0 repeats
       
       @simon this is a solid take.
       
 (DIR) Post #AdanOpCQaJ4HTZq6mO by 3wordchant@social.coop
       2024-01-07T04:39:17Z
       
       0 likes, 0 repeats
       
       @simon "so-called AI", "technology marketed as 'AI'", or even just "AI" in quotes, seem to solve the "most people [..] don’t know what it means" issue, while contributing a lot less to the other problem: while "AI is [..] already widely understood", its common understanding is something way beyond what it actually does, which is dangerous for all the reasons we seem to be agreeing on in this thread.
       
 (DIR) Post #AdaoxipSMUXdo9Ib5s by glyph@mastodon.social
       2024-01-07T05:31:45Z
       
       0 likes, 0 repeats
       
       @simon @carlana personally I am trying to Get Into It over the terminology less often, but I will still stick to terms like "LLMs", "chatbots", and "PIGs" in my own writing. Not least of which because the tech behind PIGs/PVGs, LLMs, and ML classifiers are actually all pretty different, despite having some similar elements
       
 (DIR) Post #AdaqD8I64pvv4gbjVI by deadwisdom@fosstodon.org
       2024-01-07T05:46:45Z
       
       0 likes, 0 repeats
       
       @simon We need a new term.
       
 (DIR) Post #Adarv6XW382uL4LogK by simon@fedi.simonwillison.net
       2024-01-07T06:02:00Z
       
       0 likes, 0 repeats
       
        And another section trying to offer a useful way forward: Let’s tell people it’s “not AGI” instead. https://simonwillison.net/2024/Jan/7/call-it-ai/#not-agi-instead
       
 (DIR) Post #Adas5lFHYpDPTvu5Wy by simon@fedi.simonwillison.net
       2024-01-07T06:03:30Z
       
       0 likes, 0 repeats
       
       @deadwisdom I think we should keep AI and push AGI for the science fiction version https://simonwillison.net/2024/Jan/7/call-it-ai/#not-agi-instead
       
 (DIR) Post #AdasJUNX4ISpDxJC2S by simon@fedi.simonwillison.net
       2024-01-07T06:05:59Z
       
       0 likes, 0 repeats
       
       @glyph @carlana what are PIGs and PVGs? I tried digging around and couldn't figure those ones out!
       
 (DIR) Post #Adash91G8JJTvhAcUa by slangevi@cosocial.ca
       2024-01-07T06:06:54Z
       
       0 likes, 0 repeats
       
       @simon @codinghorror @richardsheridan yes, AI is a broad area of research in computer science and most of the sub-areas have not focussed on Artificial General Intelligence. The science fiction use of the term is far from the bulk of the research in this space. There was a period where Neural Networks fell out of favour due to their black box nature. Now with deep learning advances they are all the rage again, but still limited in their knowledge representation and ability to reason.
       
 (DIR) Post #AdastUbtgUAaMrCBJA by nen@mementomori.social
       2024-01-07T06:09:16Z
       
       0 likes, 0 repeats
       
       @simon @glyph @carlana I think it's those battles that can't be won, but will be lost even more badly if we stop fighting altogether.
       
 (DIR) Post #Adat7jnnztMHXUYexc by glyph@mastodon.social
       2024-01-07T06:13:51Z
       
       0 likes, 0 repeats
       
       @simon @carlana probabilistic image / video generators
       
 (DIR) Post #Adaw2sorLQStv4QaMC by deivudesu@mastodon.social
       2024-01-07T06:48:24Z
       
       0 likes, 0 repeats
       
        @simon A reasonably pragmatic option. Unfortunately undercut by the efforts of a sizeable portion of the LLM crowd (including more than a few clowns at OpenAI) to present LLMs ("AI") as merely a few incremental improvements short of AGI. I would go one step stronger and state that LLMs are not AGI, *nor will they ever be*. (I admit this is a controversial, and potentially risky, statement)
       
 (DIR) Post #AdawEJSicx1vjiWPmS by simon@fedi.simonwillison.net
       2024-01-07T06:50:11Z
       
       0 likes, 0 repeats
       
       ... OK, I'm cutting myself off now - I added one last section, "Miscellaneous additional thoughts", with further thinking inspired by the conversation here: https://simonwillison.net/2024/Jan/7/call-it-ai/#misc-thoughts - plus a closing quote from @glyph
       
 (DIR) Post #Adawbzck2NJ79uV9DE by Seruko@mstdn.social
       2024-01-07T06:51:14Z
       
       0 likes, 0 repeats
       
       @simon it's not AI at all. Don't let the push marketing sons of bitches claim the memetic space. "Auto complete at scale" ain't intelligent
       
 (DIR) Post #AdawoK4popjc4HWN4S by simon@fedi.simonwillison.net
       2024-01-07T06:51:48Z
       
       0 likes, 0 repeats
       
        @deivudesu I'm personally unexcited about this ongoing quest for AGI - I just want useful tools, pretty much LLMs with some of the sharper edges filed off. If AGI ever does happen my hunch is that LLMs may form a small part of a larger system, but certainly wouldn't be the core of it
       
 (DIR) Post #AdaxGm3POWjQwgQ8ki by simon@fedi.simonwillison.net
       2024-01-07T06:57:21Z
       
       0 likes, 0 repeats
       
        @Seruko I 100% agree that autocomplete at scale isn't intelligent, but I still think "Artificial Intelligence" is an OK term for this field of research, especially since we've been using it to describe non-intelligent artificial systems since the 1950s. I like "AGI" as the term to use for what autocomplete-at-scale definitely isn't
       
 (DIR) Post #AdaxrsEbNWIypRLYWG by deivudesu@mastodon.social
       2024-01-07T07:08:03Z
       
       0 likes, 0 repeats
       
        @simon I hear you, and I think the conflation of LLMs and AGI, in addition to being scientific nonsense, does a disservice to the field: focusses people's attention on non-problems (Skynet taking over the world), while ignoring real problems/limitations (training bias replicating social problems, generative models' inability to build logic-based representations…). People pushing LLMs to be viewed as the first step toward AGI are generally doing so out of either naivety or greed.
       
 (DIR) Post #AdayuwedVA3vMMYJ9s by karlhigley@recsys.social
       2024-01-07T07:24:15Z
       
       0 likes, 0 repeats
       
       @simon @jannem The term AI has always carried connotations of actual intelligence (which is what has fueled inflated expectations and AI winters.) It’s been applied to a series of things that we know *in hindsight* aren’t actually intelligent and won’t lead to “real” intelligence no matter how far we push them—but that’s not how the term was being used or intended at the time. If there’s a precedent here, it’s that anything we call AI turns out not to be.
       
 (DIR) Post #AdazfnbuwazIvAcejg by kittylyst@mastodon.social
       2024-01-07T07:32:33Z
       
       0 likes, 0 repeats
       
        @simon @glyph This is an interesting piece, Simon - thank you for writing it. I wonder if you're not somewhat undermining your own argument, though. There is no reason at all why the interface to an LLM needs to be a chat interface "like you're talking to a human". That is a specific choice - and we have known for decades that humans will attach undue significance to something that "talks like a person" - all the way back to Eliza. 1/
       
 (DIR) Post #Adb04uGpJuG8odjIWm by radiac@mastodon.cloud
       2024-01-07T07:34:54Z
       
       0 likes, 0 repeats
       
       @simon Your readers are probably fine, but the problem is this is the first time this has escaped into the real world. It is being put in front of muggles who have been trained on sci-fi and have wildly unrealistic expectations. We know LLMs are glorified photocopiers, but normal people who I've spoken with genuinely expect the "intelligence" bit to mean that answers come from human-like knowledge and thought. The danger is the AI label means they trust what LLMs generate without question.
       
 (DIR) Post #Adb0Jar7mk04wji8Rc by simon@fedi.simonwillison.net
       2024-01-07T07:36:25Z
       
       0 likes, 0 repeats
       
        @kittylyst @glyph I'm more than happy to undermine my own argument on this one - I don't have a particularly strong opinion here other than "I don't think it's particularly useful to be pedantic about the I in AI". 100% agree that the chat interface is a big part of it, and also something which isn't necessarily the best UI for working with these tools, see also: https://simonwillison.net/2023/Oct/17/open-questions/#open-questions.005.jpeg
       
 (DIR) Post #Adb0XBHpeDfJhOgLyK by kittylyst@mastodon.social
       2024-01-07T07:35:28Z
       
       0 likes, 0 repeats
       
       @simon @glyph Therefore, this is an explicit design choice on the part of the product designers from these companies - and I struggle to see any reason for it other than to deliberately exploit the blurring of the distinction between "AI" & AGI - for the purpose of confusing non-technical investors and thus to juice valuations - regardless of the consequences. 2/
       
 (DIR) Post #Adb0XCAmLvcMRoOEcK by simon@fedi.simonwillison.net
       2024-01-07T07:37:58Z
       
       0 likes, 0 repeats
       
        @kittylyst @glyph The thing I've found particularly upsetting here is the way ChatGPT etc talk in the first person - they even offer their own opinions on things some of the time! It's incredibly misleading. Likewise the thing where people ask them questions about their own capabilities, which they then convincingly answer despite not having accurate information about "themselves": https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/
       
 (DIR) Post #Adb0XDM9x6Jc7Oipzk by kittylyst@mastodon.social
       2024-01-07T07:37:21Z
       
       0 likes, 0 repeats
       
       @simon @glyph After all, it isn't *their* futures that will be decided by a capricious bureaucrat with an "AI" model in a few years time - and, you see, there is all this money out there, if it can just be claimed while the balls are still in the air. /3
       
 (DIR) Post #Adb0xNzb1vcodPeEee by simon@fedi.simonwillison.net
       2024-01-07T07:39:26Z
       
       0 likes, 0 repeats
       
       @radiac I do agree with that, but I'm not sure that's the battle worth fighting right now - my concern is that if we start the conversation with "you know it shouldn't really be called AI, right?" we've already put ourselves at a disadvantage with respect to helping people understand what these things are and what they can reasonably be used to do
       
 (DIR) Post #Adb1o0rFm4lkERq9AW by kittylyst@mastodon.social
       2024-01-07T07:45:27Z
       
       0 likes, 0 repeats
       
        @simon @glyph Absolutely - this is what I'm getting at when I say that these are explicit product design decisions without a convincing justification other than to cynically juice valuations.
       
 (DIR) Post #Adb2HpcI8MRc3U3o3M by dalias@hachyderm.io
       2024-01-07T04:45:18Z
       
        1 like, 0 repeats
       
       @glyph @simon @carlana "Has become"? I think that deceptiveness has always been core to "AI", going back to the 80s or 70s. Overselling what they were doing to get funding.
       
 (DIR) Post #Adb2JgdgU0tCn6b2gq by radiac@mastodon.cloud
       2024-01-07T07:56:48Z
       
       0 likes, 0 repeats
       
        @simon True, it's not like we can change the narrative now anyway - it's intentional, it's billionaire marketing. Trying to rebrand as "not AGI" is not going to work; the public have never heard of AGI and won't be interested in the difference. It's trolling vs abuse, or hacker vs cracker again - if I say in the real world "I enjoy trolling" I lose friends, or "I'm a hacker" they imagine me skating around train stations looking for landlines. Difference is, misnomers like that don't risk harm.
       
 (DIR) Post #Adb2WdxHlyhIEQwXE8 by UlrikeHahn@fediscience.org
       2024-01-07T07:57:33Z
       
       0 likes, 0 repeats
       
       @simon I’m with you on this. Nothing published in the journal Artificial Intelligence in the 50 years of its existence qualifies as “artificial intelligence” in the sense of the word that people concerned about its use impute. That people misinterpret a term used in academic research isn’t something to be fixed by changing academic terminology, but by changing lay understanding of what is and isn’t implied imo. The key thing is increasing understanding of what #LLMs do and don’t do - as you are!
       
 (DIR) Post #Adb2yhKSp3WMnuA6Hw by knutson_brain@sfba.social
       2024-01-07T08:03:14Z
       
       0 likes, 0 repeats
       
       @simon Sticking with #ArtificialStupidity …
       
 (DIR) Post #Adb3L6OkITGWvPqdIu by lanodan@queer.hacktivis.me
       2024-01-07T08:15:58.461101Z
       
       0 likes, 0 repeats
       
        @simon Another usage of the term AI is also inside video games, where it's effectively a glorified pathfinder or something that can somewhat play a game like chess. In fact, before the current LLM trend, that's what would pop into my head for "AI" in most contexts. Meanwhile, if you take a tool like Siri, it's typically called something like a voice-assistant, maybe to be clear it's not something like Skynet but instead something that became a rather common (and boring?) tool.
       
 (DIR) Post #Adb4L4lGTy7znpvro8 by DetersHenning@eupolicy.social
       2024-01-07T08:25:26Z
       
       0 likes, 0 repeats
       
       @simon @UlrikeHahn But isn't there a difference between the research and "those things" (i.e. recent consumer products like bing chat etc., which are not research about intelligence but consumer products marketed as intelligent)?
       
 (DIR) Post #Adb5bNUoh8j8UkZmiG by futuraprime@vis.social
       2024-01-07T08:38:03Z
       
       0 likes, 0 repeats
       
       @simon Doesn’t this just re-establish the same problem? AGI isn’t a well-known term, so you’re still left defining the terms of the debate you’re hoping to avoid in order to avoid misleading the reader.
       
 (DIR) Post #Adb6FcE5LcpRlMw9Wi by tml@urbanists.social
       2024-01-07T08:46:49Z
       
       0 likes, 0 repeats
       
       @simon But isn't "forming abstractions and concepts" as in your McCarthy quote exactly what large language models don't do? Or have I misunderstood?
       
 (DIR) Post #Adb6deUwYJe5l8HefQ by mario_angst_sci@fediscience.org
       2024-01-07T08:49:50Z
       
       0 likes, 0 repeats
       
        @simon @not2b I don't know - I actually think you are bringing nuance to the discussion (at the expense of grabbing a bit more attention by using AI, which by now is incredibly vague in general discourse) with a statement like "LLMs, a type of statistical model...", which is sorely needed. Also, I still try to use SALAMI whenever I can ;).
       
 (DIR) Post #Adb7OlW0HT6J8IFFM8 by phiofx@hachyderm.io
       2024-01-07T08:58:04Z
       
       0 likes, 0 repeats
       
        @simon the mathematical structure of algorithms is as objective as it gets in terms of classifying them, and "#AI" in its current #llm form is an incremental evolution of a vast prior body that ultimately goes back to linear regression. Fitting functions to data, extrapolating, and doing something with the outcome is bread and butter in many industries. I suspect that one factor reinforcing the (ab)use of the term "AI" is to decouple any regulatory discussion from historically established norms
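
        (The "fitting functions to data, extrapolating" core at its simplest: a least-squares line fit with numpy. The observations below are made up purely for illustration:)

            import numpy as np

            # Made-up observations to fit.
            x = np.array([1.0, 2.0, 3.0, 4.0])
            y = np.array([2.1, 3.9, 6.2, 8.1])

            slope, intercept = np.polyfit(x, y, deg=1)  # least-squares fit of a straight line
            prediction = slope * 5.0 + intercept        # extrapolate to an unseen input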
       
 (DIR) Post #Adb7ZaLEAIB3fBMcDY by strypey@mastodon.nzoss.nz
       2024-01-07T09:03:49Z
       
       0 likes, 0 repeats
       
        @simon
        > The most influential organizations building Large Language Models today are OpenAI, Mistral AI, Meta AI, Google AI and Anthropic. All but Anthropic have AI in the title; Anthropic call themselves “an AI safety and research company”. Could rejecting the term “AI” be synonymous with a disbelief in the value or integrity of this whole space?
        Rejecting those companies and their business models? Yes. For me "AI" is a marketing phrase and using it to describe #MOLE is doing unpaid PR work.
       
 (DIR) Post #Adb7pmceGwfiNxEhlI by strypey@mastodon.nzoss.nz
       2024-01-07T09:06:45Z
       
       0 likes, 0 repeats
       
        @simon
        > Slapping the label “AI” on something is seen as a cheap trick that any company can use to attract attention and raise money, to the point that some people have a visceral aversion to the term.
        Exactly. Well put.
       
 (DIR) Post #Adb9aRcEICLOnZqzwG by heydon@front-end.social
       2024-01-07T09:22:50Z
       
       0 likes, 0 repeats
       
       @simon I read "artificial" like "pseudo" so it works for me.
       
 (DIR) Post #AdbBMDcDkQLLESC5po by astrojuanlu@social.juanlu.space
       2024-01-07T09:42:32Z
       
       0 likes, 0 repeats
       
       @simon Counterargument: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/ "AI" as a term, like many other things, was a male ego thing. McCarthy: "I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him." https://en.m.wikipedia.org/wiki/History_of_artificial_intelligence "AI" is the biggest terminology stretch in the history of computing, and using it is "OK" only because everybody else is doing it, but that's a weak excuse.
       
 (DIR) Post #AdbGy7ZXAGi7GQpWJU by KarlPettersson@mastodon.nu
       2024-01-07T10:47:03Z
       
       0 likes, 0 repeats
       
       @simon I have seen claims that "smart" was used within industry to avoid claims of intelligence after the AI winters. But that term is, of course, also not very informative today.
       
 (DIR) Post #AdbJaO4uRZHfMwsctc by fpbhb@mastodon.social
       2024-01-07T11:15:43Z
       
       0 likes, 0 repeats
       
       @simon At least my motivation for challenging the term is _not_ that AI is not actually intelligent, but to spark discussion about the level of abuse by AGI proponents. Just look at OpenAI’s mission statement: they are actively abusing what the “I” implies to the general public with a pompous vision, intentionally shifting the meaning of “I”. They should call themselves ClosedAGI instead. We should focus on “Useful Computation”, whatever paradigms that requires.
       
 (DIR) Post #AdbKH9zyfP6w9cy8Om by happyborg@fosstodon.org
       2024-01-07T11:24:16Z
       
       0 likes, 0 repeats
       
        @simon I think there is a point, because something has changed. People are suddenly experiencing something uncannily like all the fictional AIs they've read about and watched in movies. Many people, including plenty I expect to know better, are seeing a conversational UX with a black box behind it, as opposed to a few lines of BASIC, and then make wildly overblown assumptions about what it is. Deliberately encouraged by those using deceptive framing such as 'hallucinations' to describe errors.
       
 (DIR) Post #AdbML8JT4kwfzd6jIG by _chris_real@kolektiva.social
       2024-01-07T11:47:08Z
       
       0 likes, 0 repeats
       
        @simon Using words that have achieved common meaning through time (despite their origin) is how we are able to communicate. This is a thoughtful justification, but it's also a support of common sense.
       
 (DIR) Post #AdbPkOpx4pX2LL2cNs by Jason_Dodd@mastodon.social
       2024-01-07T12:25:48Z
       
       0 likes, 0 repeats
       
       @simon I always thought that if it's actually intelligent then it would just be AI, Actual Intelligence.
       
 (DIR) Post #AdbdyUSmftbQPnUZIO by simon@fedi.simonwillison.net
       2024-01-07T15:03:18Z
       
       0 likes, 0 repeats
       
        @futuraprime maybe! My hunch is that it's easier to teach people that new term than convince them to reject a term that everyone else in society is already using
       
 (DIR) Post #AdbeBPA1hVXRjZsOlU by simon@fedi.simonwillison.net
       2024-01-07T15:06:40Z
       
       0 likes, 0 repeats
       
        @tml yeah, that's a point that could be argued. I think LLMs fit the general research area of /trying/ to get machines to do that - in the same way that creating the LISP programming language was part of attempts to build towards that goal
       
 (DIR) Post #AdbfG4HiqOS0OIGP44 by simon@fedi.simonwillison.net
       2024-01-07T15:19:36Z
       
        0 likes, 1 repeat
       
       @astrojuanlu I hadn't seen that quote regarding cybernetics before, that's fascinating!
       
 (DIR) Post #Adbfh5GvHY4DGrluro by jimgar@fosstodon.org
       2024-01-07T10:57:17Z
       
       0 likes, 0 repeats
       
       @serapath @simon Yeah, it’s polysemic. It means x to researchers, but y to laypeople who only know of ChatGPT. I honestly haven’t seen/heard anyone IRL immediately jumping into a conversation with “but it’s not actually intelligent!!”. What I have experienced is getting partway into a conversation and having to say it - because it has become obvious the other person DOES think “Intelligence” is human-like decision making.
       
 (DIR) Post #Adbfh5yse7EFTCL1No by simon@fedi.simonwillison.net
       2024-01-07T15:23:49Z
       
       0 likes, 0 repeats
       
       @jimgar @serapath that observation that the term AI is polysemic just expanded my understanding of the core issue here substantially! Thanks for that
       
 (DIR) Post #Adbhvk0AqVr9YT3Qp6 by serapath@mastodon.gamedev.place
       2024-01-07T15:49:09Z
       
       0 likes, 0 repeats
       
        @simon @jimgar and I learned a new term called "polysemic" 😁 Thanks. ...makes sense to me.
       
 (DIR) Post #AdbmMZZm7Hbqr5kxQ8 by jeffsussna@mastodon.social
       2024-01-07T16:38:56Z
       
       0 likes, 0 repeats
       
       @astrojuanlu @simon some (such as me) might claim everything about AI, not just the name, is a male ego thing. Also cybernetics was about much more than artificial intelligence.
       
 (DIR) Post #AdbocMD5mIFFwYniqW by darkuncle@infosec.exchange
       2024-01-07T17:01:58Z
       
       0 likes, 0 repeats
       
        @simon wired: “AI isn’t actually intelligent”
        tired: “crypto means cryptography”
        expired: “actually it’s GNU/Linux”
        In all cases, objectors are correct, but missing the point of general audience (as opposed to technical audience) communication.
       
 (DIR) Post #AdbrWf1xBuhYRGZB1E by faassen@fosstodon.org
       2024-01-07T17:35:28Z
       
       0 likes, 0 repeats
       
        @simon @astrojuanlu I didn't know that either. I can see why one would want to disassociate symbolic AI from cybernetics, but of course there's an irony given where AI ended up. The trend towards connectionism in AI was already well underway by the early 90s, though; considering neural networks as AI is nothing new.
       
 (DIR) Post #AdbwJxdIPrGNYYPsHo by codinghorror@infosec.exchange
       2024-01-07T18:30:42Z
       
       0 likes, 0 repeats
       
       @simon @jannem @richardsheridan then we need something like the SAE J3106 designations for "intelligence" see https://blog.codinghorror.com/the-2030-self-driving-car-bet/
       
 (DIR) Post #AdbxoNCq02lTVo6eZM by sanchom@mastodon.social
       2024-01-07T18:46:39Z
       
       0 likes, 0 repeats
       
       @simon I agree! I wrote a bit about the terminological critique here: https://sanchom.github.io/atlas-of-ai.html
       
 (DIR) Post #AdbyHt9wHFnf0kpB7Q by simon@fedi.simonwillison.net
       2024-01-07T18:51:55Z
       
       0 likes, 0 repeats
       
       @glyph Added this just now, I think I learned from https://social.juanlu.space/@astrojuanlu/111714012496518004 which gave me an excuse to link to https://99percentinvisible.org/episode/project-cybersyn/ (I'll never skip an excuse to link to that)
       
 (DIR) Post #AdbzCYfd7vVryLfak4 by simon@fedi.simonwillison.net
       2024-01-07T18:55:44Z
       
       0 likes, 0 repeats
       
       @codinghorror @jannem @richardsheridan Absolutely, something like that would help enormously - as it stands, any arguments that "it's not really intelligent" inevitably lead to a debate about what "intelligence" really is, which doesn't appear to have any useful conclusion yet
       
 (DIR) Post #AdbzeUhzS0WbnE5vsm by simon@fedi.simonwillison.net
       2024-01-07T19:04:56Z
       
       0 likes, 0 repeats
       
       Casual thought: maybe a good term for "artificial intelligence" that's actually intelligent... is intelligence!
       
 (DIR) Post #AdbzxfYXLORiwDofiq by sayrer@mastodon.social
       2024-01-07T19:07:32Z
       
       0 likes, 0 repeats
       
       @simon @glyph this is well covered in the older Norvig books (I just looked because I am sitting next to them). PAIP has a very humorous chapter on “GPS" the general problem solver, and AI: A Modern Approach covers the history very well in Section 1.3 (~page 17), and mentions escape from cybernetics, but not the personal stuff.
       
 (DIR) Post #Adc0qLvo0fL004m7do by Migueldeicaza@mastodon.social
       2024-01-07T19:20:26Z
       
       0 likes, 0 repeats
       
       @simon @jannem @codinghorror @richardsheridan but that feels like “this is what we have managed to deliver” - the name seemed to be the general aspiration of what they were trying to achieve.
       
 (DIR) Post #Adc1LUuUg0eepC0qBc by jcf@mastodon.social
       2024-01-07T19:21:30Z
       
       0 likes, 0 repeats
       
       @simon I’ve yet to hear rigorous definitions of either artificial or intelligence.
       
 (DIR) Post #Adc1ZiYgjHHVCyaseO by michaelgemar@mstdn.ca
       2024-01-07T19:25:19Z
       
       0 likes, 0 repeats
       
        @simon A problem I see is that the colloquial use of “intelligence” implies conscious agency, and brings with it a whole host of assumptions that are not warranted with artificial systems, and that can cause huge problems.
       
 (DIR) Post #Adc3CTV8kHIjkzykvA by jimmylittle@hachyderm.io
       2024-01-07T19:47:43Z
       
       0 likes, 0 repeats
       
        @simon In America at least, “intelligence” has a negative connotation of “spying”. Central Intelligence Agency, “gathering intelligence”, etc. Might be counterproductive to start telling people a computer is doing it. (Yes, I know computers have been doing it for decades. But lots of people are paranoid and stupid)
       
 (DIR) Post #Adc3cQLX68ejV8rIpc by happyborg@fosstodon.org
       2024-01-07T19:51:42Z
       
       0 likes, 0 repeats
       
       @simon we began debating this on the Safe Network forum and it quickly became obvious that it is incredibly hard to define. There are so many ways to look at phenomena that could be called intelligence, so many timescales and scopes.Really the first step is to clearly specify your terms. Anything ambiguous is pretty useless.
       
 (DIR) Post #Adc3tVuvdfCzhaZ1Wq by andrewfeeney@phpc.social
       2024-01-07T19:55:28Z
       
       0 likes, 0 repeats
       
       @simon I propose we split off the term “Eh Eye” to refer to the at best useless and at worst harmful hype driven vaporware emerging from the LLM boom, and leave the computer scientists, neuroscientists, philosophers and theologians to argue about the definition of Artificial Intelligence.
       
 (DIR) Post #Adc5qRSpwFuyvl5Sj2 by futuraprime@vis.social
       2024-01-07T20:17:14Z
       
       0 likes, 0 repeats
       
        @simon Yeah, that’s fair. Certainly everyone equates LLMs with AI. The other part of my reluctance is that lots of people are trying to broaden the term to capitalise on it—I’ve seen “AI” applied to all sorts of unsupervised learning tasks to make them sound fancier. The gulf between someone’s random forest classifier and GPT4 is so huge it makes me want to be more specific.
       
 (DIR) Post #Adc6EIXEmvEaBhWMQC by objectObject@hachyderm.io
       2024-01-07T20:19:00Z
       
       0 likes, 0 repeats
       
       @simon I feel like LLMs are one of the first technologies where "Artificial Intelligence" sort of applies. GPT4 can do things I cannot do, do tasks which it wasn't explicitly trained on, etc. It's not very good at a lot of this and has obvious limitations. But it seems much harder to explain it away as "just" doing XYZ, as with earlier AI technologies like symbolic calculus, expert systems or statistical classifiers.
       
 (DIR) Post #Adc6owbjHEoGtZ61r6 by zef@hachyderm.io
       2024-01-07T20:27:35Z
       
       0 likes, 0 repeats
       
       @simon and for the current iteration: artificial intellintish
       
 (DIR) Post #Adc6zwt0a5rcweQSTQ by marquiskurt@iosdev.space
       2024-01-07T20:27:52Z
       
       0 likes, 0 repeats
       
       @simon I suppose, if we're playing by Talos Principle rules... 🤔
       
 (DIR) Post #Adc87MoChTNlK7HOWu by gpshead@infosec.exchange
       2024-01-07T20:42:23Z
       
       0 likes, 0 repeats
       
        @simon The less you know the more confident you are. Just ask an LLM.
        I intentionally avoid the term AI and advise other technically minded folks to do the same because it is a purely Marketing term. It will never have a meaningful definition.
        Everything I've ever worked on to automate tasks with computers in the past 30 years would be called AI today by a Marketing Department despite none of it involving ML.
        Their definition is "this term attracts attention and money", oriented around their goal. The lay person hearing it has a definition of "hype buzzword bingo score for Product Name". It doesn't communicate anything.
        Elide the term AI from any context in which it gets used to describe something and it should still be just as meaningful. If not, nothing was being said.
        Be right back. I'm gonna go hit Tab in my command line so the shell's AI can do what I want for me. 😛
        @carlana @glyph
       
 (DIR) Post #Adc9kZxNqkDjLippWi by simon@fedi.simonwillison.net
       2024-01-07T21:00:55Z
       
       0 likes, 0 repeats
       
       @futuraprime I was tasked with delivering a recommendation system a while ago, and the product owners REALLY wanted it to use machine learning and AI... I eventually realized that what they wanted was "an algorithm", so I got something pretty decent working with a pretty dumb Elasticsearch query plus a little bit of SQL
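
        (A sketch of the kind of "pretty dumb" query that can power recommendations - more_like_this is a standard Elasticsearch query type, but the index name, fields and document id below are hypothetical, not details of Simon's actual system:)

            import requests

            # Find items whose text resembles a seed item ("item-123" is hypothetical).
            query = {
                "query": {
                    "more_like_this": {
                        "fields": ["title", "body"],
                        "like": [{"_index": "items", "_id": "item-123"}],
                        "min_term_freq": 1,
                    }
                },
                "size": 5,
            }
            resp = requests.post("http://localhost:9200/items/_search", json=query)
            recommendations = [hit["_id"] for hit in resp.json()["hits"]["hits"]]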
       
 (DIR) Post #AdcA5HxRxrChbkk0tU by bouncing@twit.social
       2024-01-07T20:48:35Z
       
       0 likes, 0 repeats
       
       @spacehobo @codinghorror @richardsheridan @simon I don’t think the musician is well known outside of jazz circles, but maybe I’m mistaken. Either way, who cares? It’s a great project.
       
 (DIR) Post #AdcA5IssWL8oTrbsPI by simon@fedi.simonwillison.net
       2024-01-07T21:04:31Z
       
       0 likes, 0 repeats
       
       @bouncing @spacehobo @codinghorror @richardsheridan Adrian named Django after Django Reinhardt because he's really into gypsy jazz - see https://www.youtube.com/watch?v=_6CNlqSF1oA for a recent example!