 (DIR) Post #AVDNpTrkz7JGFlA9yK by simon@fedi.simonwillison.net
       2023-05-01T15:14:24Z
       
       0 likes, 1 repeats
       
       @jeffjarvis An interesting thing about that article is that "When did The New York Times first report on AI?" is a question that's almost the perfect case of something a language model will be unable to answer correctly. Explaining why that is to people is so hard though! One of the big skills involved in using this tech productively is developing robust instincts in terms of what questions it will likely answer well and what questions it's going to blatantly mess up.
       
 (DIR) Post #AVDO4Cy261ooBc7Z0i by simon@fedi.simonwillison.net
       2023-05-01T15:17:00Z
       
       0 likes, 0 repeats
       
       @jeffjarvis A key message I've been trying to get across to people is that, while this stuff looks trivially easy to use, it really isn't - it takes a great deal of experience and knowledge to use it in a way that avoids the many, many traps and pitfalls. Once you climb that steep and invisible learning curve it's one of the most powerful new tools I've ever encountered - but figuring out how to help guide people there is proving really difficult.
       
 (DIR) Post #AVDOYngEABjwjjNd7w by ravigupta@mastodon.social
       2023-05-01T11:22:15Z
       
       0 likes, 0 repeats
       
       @jeffjarvis But they are acing medical tests somehow?
       
 (DIR) Post #AVDOYoI9tA56dN7vFY by simon@fedi.simonwillison.net
       2023-05-01T15:22:31Z
       
       0 likes, 0 repeats
       
       @ravigupta @jeffjarvis Most of those "it got an A on this test!" things are multiple choice questions. Multiple choice questions are the kind of thing a language model can be incredibly effective at - much, much easier than "when did the NY Times first mention AI?"
       
 (DIR) Post #AVDOkQtGijZrWy1j3Q by simon@fedi.simonwillison.net
       2023-05-01T15:24:42Z
       
       0 likes, 0 repeats
       
       @macloo @jeffjarvis Hah, "embryo", wow! And people these days are unhappy that "hallucinate" is too much anthropomorphism.
       
 (DIR) Post #AVDPWKj0wk64JE4TPU by jeffjarvis@mastodon.social
       2023-05-01T11:02:54Z
       
       0 likes, 3 repeats
       
       Sigh. LLMs don't hallucinate. They assemble words. They know nothing, understand nothing. They are being misused by tech platforms and misrepresented by media. When A.I. Chatbots Hallucinate https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucinatation.html?smid=tw-share
       
 (DIR) Post #AVDPYNseBaGOfsYSYa by 22@octodon.social
       2023-05-01T15:33:35Z
       
       0 likes, 0 repeats
       
       @simon @jeffjarvis this is the message of the classic xkcd 1425: “In CS, it can be hard to explain the difference between the easy and the virtually impossible”. As a developer who works daily with very highly skilled and technical people (financial asset traders), the opacity for them of what I do continually surprises me. The gentle “I’m not sure how long this will take but…” is just as often followed by a request that’s a one-liner change as by a request that needs three teams to coordinate work on it.
       
 (DIR) Post #AVDPa0LqdZ2WHOzBxI by seth@s3th.me
       2023-05-01T15:30:51Z
       
       0 likes, 0 repeats
       
       @jeffjarvis Spot on
       
 (DIR) Post #AVDQEuEQILwnAl14T2 by jakob@pxi.social
       2023-05-01T15:41:15Z
       
       0 likes, 0 repeats
       
       @simon @jeffjarvis I don't think it takes extraordinary skill - or at least it would not need to, if the mental models of people who drive public discourse were better informed. Of course you can't understand or explain a thing well if the metaphor you are using to describe it is ill-equipped and leads you down unhelpful paths. https://pxi.social/@jakob/110283919520935594
       
 (DIR) Post #AVDTWrDwW406v32jmi by jsit@social.coop
       2023-05-01T15:24:11Z
       
       0 likes, 0 repeats
       
       @jeffjarvis Calling them “AI” in the first place is a big part of the problem.
       
 (DIR) Post #AVDTWrtQ1rB4zgRrQu by shacker@zirk.us
       2023-05-01T16:19:59Z
       
       0 likes, 0 repeats
       
       @jsit @jeffjarvis I agree with this so strongly. We agree it’s not intelligence but call it that anyway, which means we’ll be jaded and unclear when AGI does arrive.
       
 (DIR) Post #AVDTmLk3Kk6JudRw7E by jsit@social.coop
       2023-05-01T16:22:31Z
       
       0 likes, 0 repeats
       
       @shacker And importantly this isn’t like a metaphysical distinction, like we’re arguing about whether it’s conscious or not. It’s not *designed* to give correct answers to questions like, say, Wolfram Alpha is. That’s almost just a side-effect.
       
 (DIR) Post #AVDYCZyRRPtxOkwV0q by jakob@pxi.social
       2023-05-01T15:48:28Z
       
       0 likes, 0 repeats
       
       @simon @jeffjarvis incidentally, "answering" is already an unhelpful frame. To answer a question, as an act of human-like communication, an interlocutor needs to access a knowledge model that binds truth values to propositions. An LLM cannot answer a question in that sense. Some prompts can generate useful outputs, though. That's how I would try to approach defining sensible constraints.
       
 (DIR) Post #AVDYCag2pImPZzLJya by simon@fedi.simonwillison.net
       2023-05-01T17:10:29Z
       
       0 likes, 0 repeats
       
       @jakob @jeffjarvis that's an illustration of the problem: despite lacking a knowledge model, they CAN answer many questions really effectively - they do that all the time. I'm not convinced that trying to explain to people that what they see LLMs do every day isn't technically correct or possible is a useful path.
       
 (DIR) Post #AVDae1YvAyrSgnDhtQ by jakob@pxi.social
       2023-05-01T17:37:05Z
       
       0 likes, 0 repeats
       
       @simon @jeffjarvis it is epistemically false to purport there is a schema of question and answer that LLMs can partake in. LLMs generate outputs from large N data pools, stochastic training algorithms and prompts. That you frame the input prompt as a question is a choice. People see things all the time where an initial model of what they see is incongruent with the underlying causes. That's where journalists and Benoit Blanc come in. Hopefully only one of them is fictional.
       
 (DIR) Post #AVDbhBjvtOAP4rRXwu by simon@fedi.simonwillison.net
       2023-05-01T17:49:16Z
       
       0 likes, 0 repeats
       
       @jakob @jeffjarvis I understand how they work - the thing I find so interesting about them is how many things they manage to be useful for despite the very clear flaws and limitations in the way they are built.
       
 (DIR) Post #AVDknIb2DDa9fRGnKK by sheldonrampton@mas.to
       2023-05-01T19:31:16Z
       
       0 likes, 0 repeats
       
       @simon @jakob @jeffjarvis I find the same thing is the case with human beings. Despite the very clear flaws and limitations in the way WE are built, we still often manage to be useful and occasionally even inspired.
       
 (DIR) Post #AXI0Pwoh1MqMMsgTmS by robertklaschka@mastodon.online
       2023-05-01T11:07:51Z
       
       0 likes, 0 repeats
       
       @jeffjarvis I've come to the conclusion that using words like "hallucinate" is part of a deliberate effort to make people think of these technologies as similar to human intelligence. Similar to the vast amount of time spent making robots dance so we anthropomorphise them.
       
 (DIR) Post #AXI0Pxo1LLtrR5NSN6 by _L1vY_@mstdn.social
       2023-05-01T17:07:45Z
       
       0 likes, 1 repeats
       
       @robertklaschka @jeffjarvis Those dancing robots got people to accept ultraviolent police machinery out on our streets in broad daylight 😤