Post AT1LzwsLNJriz79Mq8 by simon@fedi.simonwillison.net
 (DIR) More posts by simon@fedi.simonwillison.net
 (DIR) Post #AT0zlR8UO5BrV5NR4a by simon@fedi.simonwillison.net
       2023-02-24T20:03:48Z
       
       0 likes, 0 repeats
       
       Thoughts and impressions of AI-assisted search from Bing (also my weeknotes) https://simonwillison.net/2023/Feb/24/impressions-of-bing/
       
 (DIR) Post #AT101pB6layUhQPsDA by simon@fedi.simonwillison.net
       2023-02-24T20:06:45Z
       
       0 likes, 0 repeats
       
       Here's a controversial note from my post:
       
       > Something I’m struggling with here is the idea that this technology is *too dangerous* for regular people to use, even though I’m quite happy to use it myself. That position feels elitist, and justifying it requires more than just hunches that people might misunderstand and abuse the technology.
       
 (DIR) Post #AT10E3zfVboC7jtxFw by SnoopJ@hachyderm.io
       2023-02-24T20:08:59Z
       
       0 likes, 0 repeats
       
       @simon I think we're well past "hunch" when it comes to misunderstanding the technology, considering the various flavors of "wildly incorrect" that are floating around in the public sphere
       
       But I do appreciate your attention to this dimension of the "how does this technology integrate with us?" problem
       
 (DIR) Post #AT10SPOuiLEMZTYAWu by simon@fedi.simonwillison.net
       2023-02-24T20:10:51Z
       
       0 likes, 0 repeats
       
       @SnoopJ but does it actually matter that it produces wild inaccuracies? So does social media and regular search - wild inaccuracies are everywhere already
       
       The big question for me is how quickly people can learn that just because something is called an "AI" doesn't mean it won't produce bullshit
       
       I want to see real research into this!
       
 (DIR) Post #AT10gOHRg1Pf9Niwl6 by colin@hoagie.fun
       2023-02-24T20:12:09Z
       
       0 likes, 0 repeats
       
       @simon In the mind of many tech folks, there are two tiers of users. There’s them, and there’s “regular users”. Regular users don’t understand anything and don’t use any features of any app.
       
 (DIR) Post #AT10wdqhYoxM4Kxp1E by SnoopJ@hachyderm.io
       2023-02-24T20:16:43Z
       
       0 likes, 0 repeats
       
       @simon clarification: I was responding specifically to "people might misunderstand…the technology"
       
       For me, there is absolutely zero doubt about this: most people I know have misunderstood the principles of the technology, and often *very* badly.
       
       OTOH, as you say, there are plenty of examples of this outside of machine learning too, and even inside of that sandbox, we've seen almost this exact hype cycle before. The crystal ball is currently hazier than normal…
       
 (DIR) Post #AT11F7qelLqyK3ky3M by earth2marsh@hachyderm.io
       2023-02-24T20:17:35Z
       
       0 likes, 0 repeats
       
       @simon Thought guns?
       
 (DIR) Post #AT11m8pi7eO8vu3cH2 by jacob@social.jacobian.org
       2023-02-24T20:17:50Z
       
       0 likes, 0 repeats
       
       @simon I don’t have a position on AI specifically, but I do want to point out that the concept of technology that requires advanced training/certification/licenses is nothing new. There are all sorts of things on my wife’s vet truck that she uses daily that I can’t use for various reasons. Including things that could land both of us in jail if I were to use them. The idea of requiring a license to use something dangerous seems fine.
       
 (DIR) Post #AT11m9JUKwCwQFzOgi by simon@fedi.simonwillison.net
       2023-02-24T20:21:10Z
       
       0 likes, 0 repeats
       
       @jacob yeah that's such an interesting angle on this
       
       How dangerous is something that just spits out words?
       
       Incredibly dangerous! Words can start wars
       
 (DIR) Post #AT11xikYvMtRVrAdvs by n1k0@mamot.fr
       2023-02-24T20:21:37Z
       
       0 likes, 0 repeats
       
       @simon it’s not they’d abuse the technology, it’s that they’d be abused by it
       
 (DIR) Post #AT12sTVgG6uwkkGwnQ by mattwilcox@mstdn.social
       2023-02-24T20:38:42Z
       
       0 likes, 0 repeats
       
       @simon cars are that. You have to take a test to prove you understand how to safely use them. Is driving elitist?
       
       Do we need further proof that people will take whatever is returned at face value, unless trained otherwise?
       
 (DIR) Post #AT14GzwlPXmCG4CJii by smy20011@m.cmx.im
       2023-02-24T20:53:59Z
       
       0 likes, 0 repeats
       
       @simon I think it's not a technology for fact retrieval, but it's good for idea generation.
       
       I have been working with the AI art community for a long time, and we have exactly the same problem as LLMs. AI generates good-looking images but gets details wrong (fingers & body composition), and it's super hard to fix them.
       
       Six months after the release of Stable Diffusion, people are mostly using it for inspiration instead of using it as a product for customers. The final product still needs manual repainting and checking to make sure everything is correct, and sometimes fixing it is harder than drawing it from scratch.
       
 (DIR) Post #AT14YUY8splm5u1TTk by joelteitelbaum@mastodon.world
       2023-02-24T20:54:28Z
       
       0 likes, 0 repeats
       
       @simon the problem is less about “regular people” but rather bad actors. Call them “irregular people” perhaps…
       
 (DIR) Post #AT14jELPTVJJ1HBi64 by simon@fedi.simonwillison.net
       2023-02-24T20:55:17Z
       
       0 likes, 0 repeats
       
       @smy20011 that's what's so interesting about Bing though: it adds retrieval to an existing language model... and it seems to work a lot better than I would have expected
       
 (DIR) Post #AT14z5Vg5YbaFmYI4m by faassen@fosstodon.org
       2023-02-24T21:02:25Z
       
       0 likes, 0 repeats
       
       @simon @jacob Humans also produce dangerous words. What if a human uses a tool to produce dangerous words? Dangerous for who? The Soviet Union restricted photo copying. Licensing predictive text is too broad as it may restrict autocomplete on my phone.
       
       Drawing the line is a very tricky issue.
       
 (DIR) Post #AT15QG50tjTQr22Hx2 by simon@fedi.simonwillison.net
       2023-02-24T21:07:08Z
       
       0 likes, 0 repeats
       
       @faassen @jacob the license on the Chinese GLM-130B language model is pretty interesting! https://simonwillison.net/2023/Jan/10/the-glm-130b-license/
       
 (DIR) Post #AT1BUqLyZi10EpXVuC by mhp@mastodon.social
       2023-02-24T22:14:52Z
       
       0 likes, 0 repeats
       
       @simon That phrasing reminded me of another area that is considered too dangerous for most - rolling your own crypto. Haven't thought about it in too much depth but maybe there are similarities in terms of failures being non-obvious unless you are skilled in the field, lots of sharp edges and places you can make mistakes that don't even feel that risky, and the catastrophic consequences of failure!
       
 (DIR) Post #AT1JGA0W1GRWU10kYy by williamgunn@mastodon.social
       2023-02-24T23:42:25Z
       
       0 likes, 0 repeats
       
       @simon @jacob  It's not that any source is 100% accurate, but whether accuracy is the main thing or just a nice thing to have. Saying it's mostly correct and then talking about all the other things about it that are good is treating accuracy as a nice-to-have. Imagine a news outlet saying, "Our reporters mostly don't lie." Imagine a publisher of reference materials saying, "our reference manuals are mostly correct."
       
 (DIR) Post #AT1LzwsLNJriz79Mq8 by simon@fedi.simonwillison.net
       2023-02-25T00:10:47Z
       
       0 likes, 0 repeats
       
       @williamgunn @jacob yeah, I find the idea of a search engine that hallucinates and makes up facts out of thin air extremely uncomfortable
       
 (DIR) Post #AT1MGAzYetNrOoGKki by williamgunn@mastodon.social
       2023-02-25T00:15:53Z
       
       0 likes, 0 repeats
       
       @simon @jacob I find the idea of a product manager who isn't uncomfortable about that even more worrisome. Makes one wonder what would be too far.
       
 (DIR) Post #AT1MY36ciXOE9a5cau by simon@fedi.simonwillison.net
       2023-02-25T00:19:24Z
       
       0 likes, 0 repeats
       
       @williamgunn @jacob I have to admit my concern has been tempered slightly by actually getting to use the new Bing - it makes me wonder if maybe it's possible to pull this off well enough that the trade off is worthwhile
       
 (DIR) Post #AT1ZKKiziFvC9CpMbg by glyph@mastodon.social
       2023-02-25T02:42:22Z
       
       0 likes, 0 repeats
       
       @simon @jacob what proponents of AI need to be doing, at least in the US, is to make sure that LLMs are classified as a kind of handgun. The rest of us need to get them classified as a kind of truck
       
 (DIR) Post #AT1ZXbHpRoChHHRn1c by glyph@mastodon.social
       2023-02-25T02:44:57Z
       
       0 likes, 0 repeats
       
       @simon @williamgunn @jacob as you are using LLMs and finding your skepticism tempered by using them, please remember that their training criterion is *plausibility to humans*. This is a machine that is designed exclusively to make you think it’s worthwhile. You should calibrate your skepticism accordingly (i.e. keep it extremely, unreasonably high at all times, only be convinced by exhaustively thorough proof)
       
 (DIR) Post #AT1aWtIF76QcmyrmF6 by williamgunn@mastodon.social
       2023-02-25T02:55:54Z
       
       0 likes, 0 repeats
       
       @simon @jacob I'm sure it'll be worth it for some people! I've been using LLMs for several months now (early testing at Quora) and the more I use them, the more I get a sense of "this was created by and for people who do not share my values".
       
       My values include accuracy, attribution, consent, and concern for all the people who will be harmed through impersonation, scammed by very believable phishing scams, romance scams, appropriation of their intellectual property, etc.
       
 (DIR) Post #AT1xjdKe4AcLzq2FKS by jaanus@mastodon.justtact.com
       2023-02-25T07:15:57Z
       
       0 likes, 0 repeats
       
       @simon the opposite is far worse: not considering at all who might be harmed by your work.
       
       All technology has risks and potential for harm. It is prudent to consider those, but modern capitalism does not provide enough incentives to actually do it. So, it’s often just not done. Or we just don’t yet know how.
       
 (DIR) Post #AT1zME5jwXbwxk1uVc by geekwisdom@twit.social
       2023-02-25T07:34:05Z
       
       0 likes, 0 repeats
       
       @simon This applies to ALL tech. As creators we need to consider the impact to society in what we create.
       
       An example as simple as the microwave, while convenient, has created an "I need it now" world. And of course don't forget atomic energy and related weapons.
       
       Historically we suck at considering the scope and consequences of what we create.