Post AcqEg7Xmuj64WU2rtA by maria@thelife.boats
 (DIR) Post #Acq1J5b6galsQCHjxA by matt@toot.cafe
       2023-12-15T14:56:01Z
       
       0 likes, 0 repeats
       
       I'm getting tired of simplistic, indignant characterizations of generative AI like this one: https://social.ericwbailey.website/@eric/111584809768617532 "a spicy autocomplete powered by theft that melts the environment to amplify racism and periodically, arbitrarily lie"
       It's a tool like any other; it can be used for good as well as bad. Yes, the copyright issue is real, but we can presumably overcome it by using models whose developers are more scrupulous about their sources of training data, not throwing out the whole thing.
       
 (DIR) Post #Acq1J6nYDoJs957BzM by matt@toot.cafe
       2023-12-15T14:57:25Z
       
       0 likes, 0 repeats
       
       I'll mention again a more balanced take from @danilo that I posted the other day: https://redeem-tomorrow.com/the-average-ai-criticism-has-gotten-lazy-and-thats-dangerous
       I also like @simon's writing on generative AI.
       
 (DIR) Post #Acq1J7k2iL6j4UTu9w by simon@fedi.simonwillison.net
       2023-12-15T15:40:25Z
       
       0 likes, 0 repeats
       
       @matt @danilo "And so the problem with saying “AI is useless,” “AI produces nonsense,” or any of the related lazy critique is that destroys all credibility with everyone whose lived experience of using the tools disproves the critique, harming the credibility of critiquing AI overall." 💯
       
 (DIR) Post #Acq3BZx88rZe9s8Iwi by dalias@hachyderm.io
       2023-12-15T16:01:27Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo The core problem here, and I don't know how to solve it, is extreme ignorance about information provenance in these people going by their "lived experience" with AI. What AI produces is no less nonsense than the output of a magic 8 ball. The process by which it's produced has nothing to do with the truth of the statement.
       
 (DIR) Post #Acq5WrHn5mtzuE4U8O by nf3xn@mastodon.social
       2023-12-15T16:27:37Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo They are on the sidelines. It shows that they are not involved. The views are largely irrelevant. Most people haven't a clue what they are talking about and without a hint of irony just repeat shit they have heard, like a 'stochastic parrot' lol.
       
 (DIR) Post #Acq6N6gF6M1Ci1QFQu by simon@fedi.simonwillison.net
       2023-12-15T16:37:09Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo that's not true. 90% of the output I get from LLMs is genuinely useful to me. Comparing it to a magic 8-ball doesn't work for me, at all.
       
 (DIR) Post #Acq6Y57NFblV35LW88 by simon@fedi.simonwillison.net
       2023-12-15T16:38:18Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo a lot of the time I'm not using LLMs to look up "facts" about the world, because I know that's the pattern of use most likely to lead to problems
       
 (DIR) Post #Acq6iW6LmWNldGqKrQ by dalias@hachyderm.io
       2023-12-15T16:38:43Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo How do you distinguish the 90% from the 10%?
       
 (DIR) Post #Acq75d7zp6A85LdNsO by maria@thelife.boats
       2023-12-15T16:42:35Z
       
       0 likes, 0 repeats
       
       @simon honest question, please explain how you determine which is in the 90% and which is in the 10%
       You know it produces errors and gives wrong information; how do you deal with that?
       
 (DIR) Post #Acq7I23exYcUsaWHNg by simon@fedi.simonwillison.net
       2023-12-15T16:44:01Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo @maria same way I do with random information I find on Google, or stuff that a confident but occasionally confidently wrong teacher might tell me
       
 (DIR) Post #Acq7iBPKow7zPE7yiW by simon@fedi.simonwillison.net
       2023-12-15T16:46:38Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo @maria I genuinely think that the idea that "LLMs get things confidently wrong, so they're useless for learning" is misguided
       I can learn a TON from an unreliable teacher, because it encourages me to engage more critically with the information and habitually consult additional sources
       It's rare to find any single source of information that's truly infallible
       
 (DIR) Post #Acq7iG2pa267msdZpo by simon@fedi.simonwillison.net
       2023-12-15T16:49:30Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo @maria maybe there are people out there who can't learn from LLMs because they don't have the ability to responsibly consume unreliable information, but I would hope that everyone can learn information skills that overcome that - otherwise they're already in trouble from exposure to Google search
       
 (DIR) Post #Acq7tikfBl1s0QbbjE by dalias@hachyderm.io
       2023-12-15T16:48:51Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo @maria Do you understand that a lot of people are not doing that, but taking the output as the result of thought by a superintelligence?
       
 (DIR) Post #Acq8HzntROrDqGgwD2 by simon@fedi.simonwillison.net
       2023-12-15T16:50:59Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo @maria this is why I think it's so important to dispel the idea that these things are superintelligences
       They're spicy autocomplete... but it turns out spicy autocomplete can be incredibly useful if you take the time to learn how to use it effectively, which isn't nearly as easy as it looks at first
       
 (DIR) Post #Acq8Yt4lhxSttndhNA by simon@fedi.simonwillison.net
       2023-12-15T16:53:43Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo @maria one of the biggest challenges of this technology is that it looks easy to use, but that's actually very deceptive - it's extremely hard to use well
       Using it to get great results in a responsible way requires a ton of practice and knowledge about how the tech works, which is difficult to teach people because so much of it depends on developing intuition about what works reliably and what doesn't
       
 (DIR) Post #Acq8YxLDqj0H7zMtZg by simon@fedi.simonwillison.net
       2023-12-15T16:55:49Z
       
       0 likes, 0 repeats
       
       @dalias @matt @danilo @maria I encourage people who are getting started with it to try and find a situation where it confidently gives them a clearly incorrect result
       My hope is that the earlier you see it get something obviously wrong, the quicker you can form a mental model that it's not "intelligent" in the human sense of the word
       
 (DIR) Post #Acq8wuVW2Ob9HwbaHw by maria@thelife.boats
       2023-12-15T16:57:32Z
       
       0 likes, 0 repeats
       
       @simon I'm a writer and researcher and super attuned to the value of questionable sources and accounts. It's essential though that one also understand the *why* of the unreliability--biases, changes in perspective. But ChatGPT has confidently attributed words to me that I would never have written.
       Really struggling to see how the unchecked spread of disinformation like that could be anything but harmful.
       
 (DIR) Post #Acq99B4uFmUC9n0Zxw by dalias@hachyderm.io
       2023-12-15T16:59:14Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo @maria I disagree mostly with the comparison to search. Each search result is already attributed to a source which the user has a mental model for potentially being a party they do or don't trust. That is, except the featured snippets, which are an evil precursor to LLMs and are presented as Truth on the query with the source omitted (before) or buried and misattributed (now) to claim a legitimate source is saying something wrong (due to truncation/misformatting) in the snippet.
       
 (DIR) Post #Acq9PrZq0lc4b9QsjY by dalias@hachyderm.io
       2023-12-15T17:03:52Z
       
       1 likes, 0 repeats
       
       @simon @matt @danilo @maria Sadly I think a lot of ppl still insist on seeing this as a bug that's on the verge of being overcome rather than the fundamental nature of the tool...
       
 (DIR) Post #Acq9cqWpDmWJ6vDLIe by trochee@dair-community.social
       2023-12-15T17:05:14Z
       
       0 likes, 0 repeats
       
       @simon @dalias @matt @danilo @maria This has an uncanny resemblance to the antivax antimask crowd
       "I want to let my child's body build up natural immunities, and getting a bad cold is something I can tolerate, so I don't see why *we* should try to do anything to stop the spread of measles/COVID/flu/head-lice"
       
 (DIR) Post #Acq9pGsl43aKJvOjOC by simon@fedi.simonwillison.net
       2023-12-15T17:09:36Z
       
       0 likes, 0 repeats
       
       @maria right, that's one of the hundreds of completely unintuitive lessons people have to learn: this technology is wholly unsuited to providing useful attribution or citation of anything...
       ... in its default form at least. Mix in techniques like retrieval augmented generation (RAG) and citations get massively more useful, see this recent piece from O'Reilly publishing https://www.oreilly.com/radar/copyright-ai-and-provenance/
       There's so much people have to understand to use it responsibly - which is a huge problem!
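
       A minimal sketch of the RAG pattern mentioned above: retrieve relevant passages first, then ask the model to answer only from those passages and cite them. The search_passages() helper, prompt wording and model name below are illustrative assumptions (nothing from the thread); the API call assumes the OpenAI Python SDK, version 1.x.

       # Sketch of retrieval augmented generation (RAG) with citations.
       # search_passages() is a stand-in for whatever search index you have.
       from openai import OpenAI

       client = OpenAI()

       def search_passages(question):
           # Placeholder retrieval step - a real system would query an index here.
           return [
               {"id": 1, "source": "example.com/doc", "text": "Example passage text."},
           ]

       def answer_with_citations(question):
           passages = search_passages(question)
           context = "\n\n".join(
               f"[{p['id']}] (source: {p['source']})\n{p['text']}" for p in passages
           )
           prompt = (
               "Answer the question using ONLY the numbered passages below. "
               "Cite the passage id in square brackets after each claim. "
               "If the passages do not contain the answer, say so.\n\n"
               f"{context}\n\nQuestion: {question}"
           )
           response = client.chat.completions.create(
               model="gpt-4",  # illustrative model name
               messages=[{"role": "user", "content": prompt}],
           )
           return response.choices[0].message.content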
       
 (DIR) Post #AcqA9lTt0Ztb0qXJa4 by dalias@hachyderm.io
       2023-12-15T17:01:59Z
       
       0 likes, 0 repeats
       
       @tarasovich @simon @matt @danilo @maria The difference is you've been conditioned to evaluate whether you trust them and to believe that a large portion of ppl who aren't trying to deceive you have some reasonable basis for the things they tell you rather than just stringing together smart sounding words to bullshit you.
       
 (DIR) Post #AcqA9mTvHvWG7FYrHE by simon@fedi.simonwillison.net
       2023-12-15T17:12:13Z
       
       0 likes, 0 repeats
       
       @dalias @tarasovich @matt @danilo @maria that's why we need to help people learn the many weird and unintuitive things they need to understand to effectively evaluate output from LLMs
       Right now we are basically dropping people in the deep end and pretending they don't need to learn to swim
       
 (DIR) Post #AcqAPPZ4GEOhfoQ9KK by ZacBelado@hachyderm.io
       2023-12-15T17:13:54Z
       
       0 likes, 0 repeats
       
       @simon @danilo @dalias @matt @maria You != everyone
       Generalizations based on personal experience don't work as guidelines for the larger population.
       
 (DIR) Post #AcqAkuwuliWB5mpW40 by matt@toot.cafe
       2023-12-15T17:15:06Z
       
       0 likes, 0 repeats
       
       @ZacBelado @simon @danilo @dalias @maria On the other hand, I don't like assuming that the larger population isn't as smart, or discerning, or whatever, as us.
       
 (DIR) Post #AcqAkvi3wQERS0tAYK by simon@fedi.simonwillison.net
       2023-12-15T17:19:02Z
       
       0 likes, 0 repeats
       
       @matt @ZacBelado @danilo @dalias @maria absolutely
       12 months ago it felt like the discourse was leaning in the direction of "this technology isn't safe for anyone other than the experts to use, it's irresponsible to make it available to everyone" - I felt very uncomfortable with that
       
 (DIR) Post #AcqAzDygojygA9TxTc by luis_in_brief@social.coop
       2023-12-15T17:06:46Z
       
       0 likes, 0 repeats
       
       @dalias @simon @matt @danilo @maria "which the user has a mental model for potentially being a party they do or don't trust" lol/sob, they absolutely don't have that mental model, that's why so much disinfo was quite effective pre-llm.
       There's a fair critique that the use of first person pronouns and conversational style make it even harder for people to do that analysis. But it was pretty hard for most people already.
       
 (DIR) Post #AcqAzEy18j2BEMAw4G by simon@fedi.simonwillison.net
       2023-12-15T17:22:38Z
       
       0 likes, 0 repeats
       
       @luis_in_brief @dalias @matt @danilo @maria the first person pronoun thing is such a huge problem, I really wish that hadn't become the standard for how these tools work
       
 (DIR) Post #AcqBdMqFHV8xEjfrNI by dvogel@mastodon.social
       2023-12-15T17:30:35Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo I get where you're going with this and I don't disagree, but in the spirit of sharpening this argument, it needs to be paired with additional points. On its own it sounds like you're saying we should hold ourselves back from calling out horoscopes as cleverly worded nonsense that have no connection to reality because for many people the content seems to line up quite well with their lived experience of reading their horoscope daily.
       
 (DIR) Post #AcqBdP9Sh1heOugEbY by dvogel@mastodon.social
       2023-12-15T17:32:43Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo It might help to reframe this around a dividing point between the useful parts of LLM tech and the distracting parts. As people hone in on specific useful types of LLM outputs their lived experience will include fewer of the distractions, and thus the critique will lose credibility. This separates it from something like a horoscope which only provide distractions.
       
 (DIR) Post #AcqBpWgOWBHFEWrdhY by dalias@hachyderm.io
       2023-12-15T17:31:03Z
       
       0 likes, 0 repeats
       
       @simon @tarasovich @matt @danilo @maria No, we need to dethrone the SV garbage who are pushing these things and promoting them as intelligence.
       
 (DIR) Post #AcqC82oDoPey5FM6Hg by matt@toot.cafe
       2023-12-15T17:23:48Z
       
       0 likes, 0 repeats
       
       @luis_in_brief @dalias @simon @danilo @maria It would be interesting to see an LLM tuned for instruction-following, particularly iterative instruction following taking prior context into account, that is specifically trained not to emulate a person, e.g. no first-person pronouns.
       
 (DIR) Post #AcqC83lQGJ0z2r3NYm by simon@fedi.simonwillison.net
       2023-12-15T17:32:08Z
       
       0 likes, 0 repeats
       
       @matt @luis_in_brief @dalias @danilo @maria I just built one as an experiment: https://chat.openai.com/g/g-bno1OSvBy-objective-advisor
       
 (DIR) Post #AcqC8439CPC1vpLXBg by matt@toot.cafe
       2023-12-15T17:26:04Z
       
       0 likes, 0 repeats
       
       @luis_in_brief @dalias @simon @danilo @maria I use a screen reader often on my PC, and all the time on my phone. I use an older-generation, obviously synthetic-sounding text-to-speech engine, in part because I prefer my talking computers to sound like computers. I wonder what the equivalent of that would be for LLMs.
       
 (DIR) Post #AcqCLd7U7ZN0SoqNCy by MattHodges@mastodon.social
       2023-12-15T17:33:41Z
       
       0 likes, 0 repeats
       
       @simon @dalias @matt @danilo @maria I think it's both true that the UX of a chatbot can be a problematic way to surface information AND that critics of these tools are significantly under-appreciating that we humans still have cognition. Earlier this year I worked with Bing Chat to help plan my vacation and half a dozen replies genuinely asked "how can you know if this hotel is real?" and, well, I would hope people could figure out on their own how I might be able to know that.
       
 (DIR) Post #AcqCe3IHXqYyr0g7aS by luis_in_brief@social.coop
       2023-12-15T17:41:40Z
       
       1 likes, 0 repeats
       
       @simon @dalias @matt @danilo @maria me: AI regulation is a fiendish problem; we'll be lucky if we get out of our immediate local minima in my lifetime; we need to be patient and thoughtful
       also me: implement this proposal tomorrow https://crookedtimber.org/2023/05/22/ban-llms-using-first-person-pronouns/
       
 (DIR) Post #AcqCqI2aXfzi48Ao08 by simon@fedi.simonwillison.net
       2023-12-15T17:44:59Z
       
       0 likes, 0 repeats
       
       @riley @dalias @matt @danilo @maria I agree with everything you said there... and yet, I'm finding ways to put this technology to use on a daily basis that are improving my life and work
       That's why I'm so interested in helping other people learn how to use this stuff effectively - if you can climb the deceptive learning curve it's enormously beneficial
       
 (DIR) Post #AcqD04hJ0UdfZtssbI by dalias@hachyderm.io
       2023-12-15T17:45:56Z
       
       1 likes, 0 repeats
       
       @simon @matt @luis_in_brief @danilo @maria I love how it's declared itself to be "informative and objective" 🤣 🤡
       
 (DIR) Post #AcqDQqeXKglbq1M6ka by simon@fedi.simonwillison.net
       2023-12-15T17:49:12Z
       
       0 likes, 0 repeats
       
       @dalias @matt @luis_in_brief @danilo @maria ha, yeah the defaults it came up with are pretty laughable there - but it works as a quick prototype
       
 (DIR) Post #AcqDahxoUeERmk4d7o by dalias@hachyderm.io
       2023-12-15T17:53:22Z
       
       0 likes, 0 repeats
       
       @simon @riley @matt @danilo @maria I don't understand the motive to use it. With exceptional care and expertise, it might be possible to use without harming yourself. But you're still making use of something that's an environmental disaster (in the sense of both physical environment and information environment), that's empowering the worst people, and that's built on stolen labor. Something like that needs an extremely compelling public interest reason to use it, not "I'm smart so I can".
       
 (DIR) Post #AcqDrReCPVIu6UscfA by maria@thelife.boats
       2023-12-15T18:00:29Z
       
       0 likes, 0 repeats
       
       @simon This is a common argument but I confess it doesn't make much sense to me; I would prefer to learn from an informed person than an uninformed one
       If an 'unreliable teacher' (this is an oxymoron!) is so valuable, then why not just generate random argle-bargle about whatever it is you're researching?
       (srs question, it's like y'all have never heard of Oblique Strategies)
       
 (DIR) Post #AcqE4hakROxfJpaTPE by simon@fedi.simonwillison.net
       2023-12-15T18:01:31Z
       
       0 likes, 0 repeats
       
       @dalias @riley @matt @danilo @maria do you feel differently about the openly licensed models you can run on your own laptop? Those are getting shockingly good these days
       I'm still looking forward to someone producing a usable LLM trained entirely on public domain data - I think it's going to happen soon
       Microsoft's Phi-2 only cost (estimated) ~$35,000 to train - so building these things is getting much more accessible to smaller organizations
       
 (DIR) Post #AcqEEfZ0psmHvPWoV6 by simon@fedi.simonwillison.net
       2023-12-15T18:02:45Z
       
       0 likes, 0 repeats
       
       @wwahammy @dalias @matt @danilo @maria which LLMs are you using?
       I've found GPT-4 to be a huge improvement on that front
       
 (DIR) Post #AcqETS4vFwZeRdVEPY by dalias@hachyderm.io
       2023-12-15T18:06:14Z
       
       0 likes, 0 repeats
       
       @simon @riley @matt @danilo @maria Where are they going to get this "public domain data"?
       
 (DIR) Post #AcqEg7Xmuj64WU2rtA by maria@thelife.boats
       2023-12-15T18:07:22Z
       
       0 likes, 0 repeats
       
       @simon You keep saying it's beneficial but have not brought any receipts, to my knowledge
       
 (DIR) Post #AcqEvih74W8A5QqTB2 by simon@fedi.simonwillison.net
       2023-12-15T18:08:25Z
       
       0 likes, 0 repeats
       
       @wwahammy @dalias @matt @danilo @maria sounds like you've run into one of the many traps that plague this space: the free LLMs have very deep flaws, which causes discerning people to write off the entire technology class as pointless hype
       
 (DIR) Post #AcqF8eiFBfZNVBXhHk by simon@fedi.simonwillison.net
       2023-12-15T18:08:59Z
       
       0 likes, 0 repeats
       
       @dalias @riley @matt @danilo @maria Project Gutenberg plus Wikipedia is a good start
       
 (DIR) Post #AcqF8ivrUyq6aZwd4C by simon@fedi.simonwillison.net
       2023-12-15T18:11:49Z
       
       0 likes, 0 repeats
       
       @dalias @riley @matt @danilo @maria text produced by the US federal government, the European Union etc should be really valuable here too
       
 (DIR) Post #AcqFcbvw5Iv1ZPELwW by simon@fedi.simonwillison.net
       2023-12-15T18:15:01Z
       
       0 likes, 0 repeats
       
       @maria the really good LLMs at this point honestly feel like they're edging into the class of an over-confident undergraduate teaching assistant: they're right most of the time - especially about certain specialist subjects - but occasionally very confidently wrong
       But they are a TA who never gets frustrated, never condescends or dismisses you and is instantly available 24 hours a day
       That's worth putting up with occasional mistakes!
       
 (DIR) Post #AcqFn0gIvYf9DhsTcu by dalias@hachyderm.io
       2023-12-15T18:17:05Z
       
       0 likes, 0 repeats
       
       @simon @riley @matt @danilo @maria Wikipedia isn't PD, has license & attribution requirements that need to be met.
       
 (DIR) Post #AcqFyPMvTqFUyLFU4u by simon@fedi.simonwillison.net
       2023-12-15T18:17:34Z
       
       0 likes, 0 repeats
       
       @maria I've written a ton about this - a few links:
       - https://simonwillison.net/2023/Mar/27/ai-enhanced-development/
       - https://simonwillison.net/2023/Aug/27/wordcamp-llms/
       - https://simonwillison.net/2023/Sep/29/llms-podcast/
       - https://til.simonwillison.net/gpt3
       And everything tagged LLMs: https://simonwillison.net/tags/llms/
       
 (DIR) Post #AcqGE7YHUU48Lc3J0C by maria@thelife.boats
       2023-12-15T18:22:17Z
       
       0 likes, 0 repeats
       
       @simon Couldn't disagree more. I never met an undergraduate TA who would have literally invented jokes I would never have made in my wildest nightmare about Jim Jones being a Wilco fan (literally, this)
       https://popula.com/2023/04/30/yakkin-about-chatgpt-with-david-roth/
       
 (DIR) Post #AcqGRkfbMV0PR0yaEC by maria@thelife.boats
       2023-12-15T18:27:34Z
       
       0 likes, 0 repeats
       
       @simon I'll have a look, thanks... am in deep on my own nonsense just now
       
 (DIR) Post #AcqGyPVh3RIdMgulQO by simon@fedi.simonwillison.net
       2023-12-15T18:36:07Z
       
       0 likes, 0 repeats
       
       @maria yeah, maybe a TA with a sideline in wild conspiracy theories, who spends their free time on some of the worst Internet forums
       
 (DIR) Post #AcqOeyDD7hpEpgViGe by luis_in_brief@social.coop
       2023-12-15T19:44:53Z
       
       0 likes, 0 repeats
       
       @wwahammy @simon @dalias @matt @danilo @maria dunno, man, it's generating working python and shell that Does Shit for me. Maybe my scripts count as bullshit to you, or maybe the time I saved is bullshit to you, or maybe it's bullshit that I didn't hire somebody on fiverr to write them for me, but they're pretty useful bullshit to me.
       
 (DIR) Post #AcqOeyvsRdYR4DPNtA by dalias@hachyderm.io
       2023-12-15T19:50:54Z
       
       0 likes, 0 repeats
       
       @luis_in_brief @wwahammy @simon @matt @danilo @maria If you got working Python, it didn't "generate" it. It copy and pasted from a gigantic corpus of FOSS with licenses requiring attribution & possibly copyleft, and per its creators' explicitly programmed intent, it stripped enough to hide that.
       
 (DIR) Post #AcqOezXoAbtaxr9g0m by simon@fedi.simonwillison.net
       2023-12-15T20:00:24Z
       
       0 likes, 0 repeats
       
       @dalias @luis_in_brief @wwahammy @matt @danilo @maria the copy and pasting metaphor doesn't feel right to me
       I think of it more as it taking an /average/ of every example it's seen - still completely ignoring licensing and copyright issues
       I often use it to refactor my code - "extract this into a function" for example - where everything it outputs is "copied and pasted" from my own input that I gave it, just in a very slightly different shape
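
       The "extract this into a function" refactor described here is mechanical enough to show with a made-up example (none of this code is from the thread); everything in the "after" version is a reshaping of the "before" version:

       # Before: date formatting repeated inline.
       def report(rows):
           lines = []
           for row in rows:
               created = row["created"].strftime("%Y-%m-%d")
               updated = row["updated"].strftime("%Y-%m-%d")
               lines.append(f"{row['name']}: created {created}, updated {updated}")
           return "\n".join(lines)

       # After asking for "extract the date formatting into a function":
       def format_date(value):
           return value.strftime("%Y-%m-%d")

       def report(rows):
           lines = []
           for row in rows:
               lines.append(
                   f"{row['name']}: created {format_date(row['created'])}, "
                   f"updated {format_date(row['updated'])}"
               )
           return "\n".join(lines)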
       
 (DIR) Post #AcqPOGm2xQ9hQsPBPk by dalias@hachyderm.io
       2023-12-15T20:10:13Z
       
       1 likes, 0 repeats
       
       @simon @luis_in_brief @wwahammy @matt @danilo @maria It's like a windowed superimposition where there are often large supports where the window value against a particular piece of training data is 1.0. I don't see that as materially different from "copy and paste". It is very much "interpolated plagiarism", and it's been documented that there's explicit stripping of identifying characteristics to cover up the similarity to source material.
       
 (DIR) Post #AcqRV2zsmGr2djEieW by simon@fedi.simonwillison.net
       2023-12-15T20:34:02Z
       
       0 likes, 0 repeats
       
       @dalias @luis_in_brief @wwahammy @matt @danilo @maria "interpolated plagiarism" is a great way of describing it!
       I've called it "money laundering for copyrighted data" in the past https://simonwillison.net/2023/Aug/3/weird-world-of-llms/#how-theyre-trained
       
 (DIR) Post #AcqUd97mBDYpp3Twh6 by glyph@mastodon.social
       2023-12-15T21:05:40Z
       
       0 likes, 0 repeats
       
       @luis_in_brief @dalias @simon @matt @danilo @maria I personally know a couple of researchers who have studied digital literacy and disinformation and based on what they have told me about their work, at a population scale, approximately 0% of users have any sense of what the meta-context of a web page and browser chrome are telling them about the provenance of information.
       Watching google's "what is a web browser" vox pop videos is eye-opening.
       
 (DIR) Post #AcqUdA3CjhUwhALoCu by dalias@hachyderm.io
       2023-12-15T21:09:26Z
       
       1 likes, 0 repeats
       
       @glyph @luis_in_brief @simon @matt @danilo @maria That claim sounds sus. I could see it being the case with poor sampling (which is often what you're stuck with for this kind of study), but there is a large enough literate/educated population for some appreciable % to recognize these things.
       
 (DIR) Post #AcqUyVWM6bv5MhAGp6 by SnoopJ@hachyderm.io
       2023-12-15T17:15:23Z
       
       1 likes, 0 repeats
       
       @dalias It doesn't help that the industry has actively fostered this false belief. People would probably believe it either way, but the outright lies about the last 5 years of development (driven almost exclusively by scale) aren't doing favors for anybody except people who invested years ago who will profit from overvaluation, or orgs looking for easy-mark investors trying to get in on it post-hoc
       The exaggerations seem to be directly proportional to the amount of money sloshing around.
       @simon
       
 (DIR) Post #AcqYhdtC8epADG3qvQ by JessTheUnstill@infosec.exchange
       2023-12-15T21:50:48Z
       
       0 likes, 0 repeats
       
       I find the most novel part of LLM theory to be whether or not you can consider its output a "transformative work" (setting aside whatever current copyright law/policy says, but just the general philosophical concept of "transformative work")
       When a person creates a new transformative work, it - overall - increases the "entropy" of the space of human knowledge. There is something new about the world that someone has made. Nothing an LLM outputs can in and of itself increase the entropy of the human knowledge space, because it's incapable of being more than the sum of its parts. Everything that comes out of it is just a remix of what went into it. That doesn't mean that it couldn't hypothetically be valuable in various use cases. But even in a perfect use case, it is simply a research librarian who can spit back stuff that sounds like the things said in all the books she's read, but would be incapable of creating new works for addition to the library, and unable to actually do science and grow human understanding.
       It's applying layers and layers of funhouse mirrors over other peoples' artwork. It's remixing a billion samples of other peoples' music, but unlike when a human does it, it has no concept of what "looks good" or "sounds interesting". It's just filtering all that information through a shitload of linear algebra and giving something that rhymes with its input.
       @simon @dalias @luis_in_brief @wwahammy @matt @danilo @maria
       
 (DIR) Post #AcqZ6niUjnBpvp3KpU by simon@fedi.simonwillison.net
       2023-12-15T21:53:46Z
       
       0 likes, 0 repeats
       
       @JessTheUnstill @dalias @luis_in_brief @wwahammy @matt @danilo @maria Yeah, absolutely - I think the issue of if this is legally "fair use" (I'm not a lawyer, but from what I've seen consensus seems to be leaning towards "yes") should be considered independently from the moral/ethical answer to that question
       
 (DIR) Post #AcqZ6rxAz9JJ4Vx7Gi by simon@fedi.simonwillison.net
       2023-12-15T21:54:25Z
       
       0 likes, 0 repeats
       
       @JessTheUnstill @dalias @luis_in_brief @wwahammy @matt @danilo @maria Where it gets even more complicated is when there's human guidance and iteration involved. Publishing six paragraphs that an LLM spits out from a single prompt feels very different to me from me taking those six paragraphs and then prompting it a dozen times more to direct them in a direction that matches the thing I'm trying to communicate - even if I then copy and paste out the end result.
       
 (DIR) Post #AcqpCj7aKmvQgRdVHk by Biggles@qoto.org
       2023-12-16T00:59:19Z
       
       0 likes, 0 repeats
       
       @simon @matt @danilo @maria @dalias "Useless for learning" is a bit of a straw man. More accurate perhaps is "actively dangerous for the lazy or gullible". As an example, I point to *multiple* instances of lawyers turning in phony case citations. These people should absolutely know better - yet it's happened multiple times, and will happen again. The llm is presented in the news as an AI - artificial intelligence - and source of information. To most people, that brings to mind a trusted advisor, or subject matter expert - and when they say "provide 5 legal citations that support my argument" - boy, it sure sounds convincing, because the AI is generally incapable of saying "I don't know" - and that's the dangerous bit. Lots of tools human beings make are both useful and dangerous. Fire, the automobile, a chainsaw. We generally don't hand those out to people without some sort of training or warning. We regulate their use. But the law and human society are still catching up here. LLMs are useful in the right hands, very much so. But they need a wrapper preventing children, the gullible, and apparently lawyers from diving in without some warnings. You simply can't trust the output the same way you'd trust, say, a teacher of the subject.
       
 (DIR) Post #AcqqguH5WzHPut2I2S by simon@fedi.simonwillison.net
       2023-12-16T01:16:04Z
       
       0 likes, 0 repeats
       
       @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria I want to live in a world where every human has the ability to automate tedious tasks using a computer (which looks a lot like "writing scripts") - without first needing to get a computer science degree or equivalent
       If LLMs are the tech that gets us there I'll be pretty thrilled https://simonwillison.net/2023/Aug/27/wordcamp-llms/#helping-everyone
       
 (DIR) Post #AcqqriXDWIHegvjdCK by simon@fedi.simonwillison.net
       2023-12-16T01:18:14Z
       
       0 likes, 0 repeats
       
       @Biggles @matt @danilo @maria @dalias I agree, "actively dangerous for the lazy or gullible" is a good summary of where we are today
       That's why I spend so much effort trying to counter the hype and explaining to people that this stuff isn't science fiction AI, it's spicy autocomplete - it takes a surprising amount of work to learn how to use it effectively
       
 (DIR) Post #Acqr4sMSISWovqyYiG by Biggles@qoto.org
       2023-12-16T01:20:42Z
       
       0 likes, 0 repeats
       
       @danilo @maria @matt @simon @dalias "Spicy autocomplete" made me snort. Well played.
       
 (DIR) Post #AcqrHUfdgmWZDnxvBQ by simon@fedi.simonwillison.net
       2023-12-16T01:22:15Z
       
       0 likes, 0 repeats
       
       @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria I think we can have both
       LLMs can massively smooth the learning curve, and mean way more people can get to a point where they can build custom software
       Excel * 1000
       It's absurd to me how difficult it is to automate tedious things with computers right now
       
 (DIR) Post #AcqrSa2Q4bwx3OuJZQ by simon@fedi.simonwillison.net
       2023-12-16T01:23:05Z
       
       0 likes, 0 repeats
       
       @Biggles @danilo @maria @matt @dalias I wish I could take credit for that one but I've seen it pretty widely used by AI skeptics - I think it's a great short description!
       
 (DIR) Post #AcqrdOVfH9RgU7k8Aq by simon@fedi.simonwillison.net
       2023-12-16T01:24:24Z
       
       0 likes, 0 repeats
       
       @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria imagine a world in which genuine subject experts can write useful software themselves, rather than having to recruit non-subject-expert software engineers to do it for them
       
 (DIR) Post #AcqtNJ1Y6U1ykK73g0 by simon@fedi.simonwillison.net
       2023-12-16T01:46:13Z
       
       0 likes, 0 repeats
       
       @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria yeah, I worry about that possibility (automation primarily reducing salaries) a lot: https://simonwillison.net/2023/Dec/8/sebastian-majstorovic/
       The world currently runs on a huge array of shockingly poorly constructed spreadsheets, with practically no automated tests or version control to help maintain them
       I'm not convinced that a world with LLM-assisted code by non-professional programmers is going to be much worse than that
       
 (DIR) Post #AcqtjsiU52dVobxbKS by mnl@hachyderm.io
       2023-12-16T01:50:27Z
       
       0 likes, 0 repeats
       
       @simon @dalias @matt @danilo @maria I recommend people always press regenerate multiple times when starting with LLMs, to get a sense of how "wide" it spreads, and what that looks like. I still do it regularly to remind myself.
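
       Pressing regenerate repeatedly is the manual version of sampling the same prompt several times and comparing the answers. A small sketch, assuming the OpenAI Python SDK (1.x); the prompt and model name are placeholders:

       from openai import OpenAI

       client = OpenAI()

       prompt = "In two sentences, explain what a Bloom filter is."

       # Request several independent completions of the same prompt; at a nonzero
       # temperature the answers differ, which makes the "spread" visible.
       response = client.chat.completions.create(
           model="gpt-4",  # illustrative
           messages=[{"role": "user", "content": prompt}],
           n=5,
           temperature=0.8,
       )

       for i, choice in enumerate(response.choices, 1):
           print(f"--- sample {i} ---")
           print(choice.message.content)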
       
 (DIR) Post #AcquHG9HyqRQ2tPirI by mnl@hachyderm.io
       2023-12-16T01:56:27Z
       
       0 likes, 0 repeats
       
       @simon @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria if non software developers can build software the way they can build an ecommerce shop with etsy or shopify, please, let them take our jobs while we figure out something else.
       Ideas I have where LLMs can help:
       - a new OS built around the concept of low power consumption as its prime objective. LLMs can help so much with tedious code like drivers.
       1/
       
 (DIR) Post #AcquxEITFvqgSrinQ0 by axleyjc@federate.social
       2023-12-16T02:03:40Z
       
       0 likes, 0 repeats
       
       @simon @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria Counterpoint: PHP developers that don't work at Meta. I was probably one of them.
       
 (DIR) Post #Acqy2XvE19Rmdo4j2m by simon@fedi.simonwillison.net
       2023-12-16T02:38:35Z
       
       0 likes, 0 repeats
       
       @mawhrin @mnl @luis_in_brief @wwahammy @dalias @matt @danilo @maria the thing I find so fascinating about LLMs is the enormous array of things they are useful for despite their many flaws
       
 (DIR) Post #Acr2AOQBcl8VJCYQV6 by simon@fedi.simonwillison.net
       2023-12-16T03:24:19Z
       
       0 likes, 0 repeats
       
       @corbin @mnl @mawhrin @luis_in_brief @wwahammy @dalias @matt @danilo @maria have you spent any time with ChatGPT Code Interpreter?
       It's really quite shocking how much more useful LLMs get when you give them the ability to write and execute code, and then see the errors that occur and iterate in a loop to fix them
       Not sure it could help much with driver work, but I've used it to reverse-engineer mystery binary files a few times already
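
       The loop described here - the model writes code, the code runs, any error goes back to the model for another try - can be sketched roughly like this. ask_llm() is a hypothetical stand-in for a real model call, and unlike Code Interpreter this runs generated code locally with no sandbox, so it is only an illustration:

       import subprocess
       import sys
       import tempfile

       def ask_llm(prompt):
           # Hypothetical stand-in for a real model call; returns canned code here
           # so the loop can be exercised end to end.
           return "print('hello from generated code')"

       def solve_with_retries(task, max_attempts=3):
           prompt = f"Write a Python script that does the following:\n{task}"
           for _ in range(max_attempts):
               code = ask_llm(prompt)
               with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                   f.write(code)
                   path = f.name
               result = subprocess.run(
                   [sys.executable, path], capture_output=True, text=True, timeout=60
               )
               if result.returncode == 0:
                   return result.stdout  # success: return the script's output
               # Feed the traceback back and ask for a corrected script.
               prompt = (
                   f"This script failed:\n\n{code}\n\n"
                   f"Error:\n{result.stderr}\n\nReturn a corrected script."
               )
           raise RuntimeError("no working script after several attempts")

       print(solve_with_retries("print a greeting"))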
       
 (DIR) Post #Acr2LlgtRuOmfAnDv6 by neirbowj@mastodon.online
       2023-12-16T03:26:32Z
       
       0 likes, 0 repeats
       
       @simon @dalias @matt @danilo That's an important and underappreciated insight. The relative value vs danger has a lot to do with how commonly an AI consumer recognizes and internalizes that fundamental limitation.
       
 (DIR) Post #AcrArwfOyXtrbWwvWC by mnl@hachyderm.io
       2023-12-16T05:02:17Z
       
       0 likes, 0 repeats
       
       @simon @mawhrin@circumstances.run @luis_in_brief @wwahammy @dalias @matt @danilo @maria code produced/modified by LLMs is deterministic and takes the LLM out of the loop.
       When targeting a more declarative formal representation, there arguably isn't even code involved.
       
 (DIR) Post #AcrCSURUyXEYjlR680 by JessTheUnstill@infosec.exchange
       2023-12-16T05:20:08Z
       
       0 likes, 0 repeats
       
       Yeah, there is at least something to be said for the transformative process of a human selecting and editing the output of an LLM, the same as a DJ can make novel and new songs that are entirely comprised of samples of other people's songs. That's using an LLM simply as a tool that aids in the creation of new work, the same as Photoshop or a synth or a search engine. But the LLM itself isn't doing the creation, and we shouldn't let capitalists claim that the humans who are doing that editing, selecting, and curation should be compensated less.
       @simon @dalias @luis_in_brief @wwahammy @matt @danilo @maria
       
 (DIR) Post #AcrD2MPGKYuIErUiPY by paco@infosec.exchange
       2023-12-16T05:26:41Z
       
       0 likes, 0 repeats
       
       @simon A friend who’s confidently wrong sometimes has a history of right and wrong that you built up and understand over time. You factor it in to how you understand what they say. LLMs are more like total strangers whose specific history is unknown to you, so mentally you approximate it by assuming it is average. But people have a mind that you can simulate. You understand people intuitively because you are one. AI is super hard to simulate in your own head because it’s totally alien. A person saying something has qualifications, experiences, history with me, etc. I can use this metadata to help me apply the right level of trust or skepticism. But for LLMs, even people who are decently savvy still struggle to apply the right filter. It is alien. Until LLMs get better at explaining their reasoning, they will be hard for me to trust.
       
 (DIR) Post #AcrLkqsfBlTr7yyMam by mnl@hachyderm.io
       2023-12-16T07:04:11Z
       
       0 likes, 0 repeats
       
       @simon @corbin @mawhrin@circumstances.run @luis_in_brief @wwahammy @dalias @matt @danilo @maria it's great at doing a fair amount of the boilerplate part of driver dev (not that I'm a super driver dev, it's been a minute). Datasheet to structs, /proc entries, debugging/logging tools, nice cli utilities to exercise things from userland, fuzzing harnesses, consistent docs, example snippets for userland apis, etc… Which leaves you all the more time to enjoy the good parts of driver development…
       1/
       
 (DIR) Post #AcrLktz9e5SIkz25cu by mnl@hachyderm.io
       2023-12-16T07:04:11Z
       
       0 likes, 0 repeats
       
       @simon @corbin @mawhrin@circumstances.run @luis_in_brief @wwahammy @dalias @matt @danilo @maria which is of course figuring out that where the datasheet says to write 0x29 to register A before clearing the select line, you actually need to clear the select line, then write 0x29. 😂
       
 (DIR) Post #AcrVrmjlzWF7qVl7FA by tanepiper@tane.codes
       2023-12-16T08:57:19Z
       
       0 likes, 0 repeats
       
       @simon @wwahammy @dalias @matt @danilo @maria Bingo Simon, you hit the nail on the head.
       It's like filtered bottled water for everyone that can afford it, and toxic sludge from the tap for the rest of us.
       I've been working with these things for over a year now and yes - they are just spicy Markov Chains built on flawed and biased data. They have some uses too.
       
 (DIR) Post #AcsNw8TIWLWJw8Zmoy by simon@fedi.simonwillison.net
       2023-12-16T19:02:53Z
       
       0 likes, 0 repeats
       
       @corbin @mnl that's another big problem with this space: the difference between the best model (currently still GPT-4) and the smaller ones is enormous
       I love experimenting with local models but I never use them for code, because I know they're no good at that
       (That might change with Mixtral though, it's very impressive)
       Ethan Mollick wrote about this problem: https://simonwillison.net/2023/Dec/10/ethan-mollick/
       
 (DIR) Post #AcsPsglTHGyf2qYpnc by mnl@hachyderm.io
       2023-12-16T19:24:54Z
       
       0 likes, 0 repeats
       
       @simon @corbin mixtral and even mistral finetunes are extremely impressive, and for my purposes which is mostly “transpose from a semi formal to formal language” they are extremely interesting, since they beat gpt4 by so much speed wise. I am reasonably sure I’ll be able to shed gpt4 in the next 2 months.
       
 (DIR) Post #AcsVCUPVQ5FcI7Y1SK by zauberlaus@chaos.social
       2023-12-16T20:24:36Z
       
       0 likes, 0 repeats
       
       @simon in terms of code - you do know there are limits to software patents in europe? if it just basically works the same way it’s pretty much fair game here. @dalias @luis_in_brief @wwahammy @matt @danilo @maria
       
 (DIR) Post #AcsVggNnTGC46v95gu by simon@fedi.simonwillison.net
       2023-12-16T20:30:17Z
       
       0 likes, 0 repeats
       
       @zauberlaus @dalias @luis_in_brief @wwahammy @matt @danilo @maria absolutely no idea I'm afraid
       
 (DIR) Post #ActDHC5pLYf8ubOfHk by Homoevolutis0@austintexas.social
       2023-12-17T04:37:59Z
       
       0 likes, 0 repeats
       
       @simon @dalias @matt @danilo @maria but most can't.
       
 (DIR) Post #ActZfyuJhoXWFTEnDM by williampietri@sfba.social
       2023-12-17T08:49:30Z
       
       0 likes, 0 repeats
       
       @simon @dalias @matt @danilo I think you underestimate how useful people find semi-random output that poses as meaningful. The Magic Eight Ball. I Ching. Horoscopes. Tarot. Palmistry. Cold reading. Et cetera, ad nauseam. People manufacture meaning. And devotees will all tell you how useful it is.
       
 (DIR) Post #Acu4ipoJ5749TzGc1w by simon@fedi.simonwillison.net
       2023-12-17T14:37:35Z
       
       0 likes, 0 repeats
       
       @williampietri @dalias @matt @danilo yeah I understand that
       Companies sometimes get excited about Myers-Briggs style tests - effectively horoscopes for corporate America - which manage to be semi-useful because they give people an opportunity to talk and think about their working styles, even if the test itself is pseudoscience junk
       That's not what's happening with LLMs though - based on 18+ months of experience extensively using them now
       
 (DIR) Post #Acu53dpeoMdqPjouES by simon@fedi.simonwillison.net
       2023-12-17T14:40:23Z
       
       0 likes, 0 repeats
       
       @williampietri @dalias @matt @danilo a magic 8-ball or horoscope can't do this: https://simonwillison.net/2023/Apr/12/code-interpreter/
       
 (DIR) Post #Acu5FYcm02R7jmTYFU by dalias@hachyderm.io
       2023-12-17T14:42:55Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo "I'm not as easily duped as ~those people~ .. this is the real thing!" 🤔Please don't take that too seriously. It's kinda a joke. But there are important relationships.
       
 (DIR) Post #Acu5fTgD0R2HvnRUDA by simon@fedi.simonwillison.net
       2023-12-17T14:48:18Z
       
       0 likes, 0 repeats
       
       @dalias @williampietri @matt @danilo yeah the psychological trickery risks of LLMs are profound
       It's so easy to anthropomorphize them: people assign opinions and feelings to them, assume they are a super-intelligence, fall in love with them, question if it's ethical to "keep them locked up"
       All for statistical next token prediction / spicy autocomplete!
       My argument is that deceptive spicy autocomplete is still massively useful when you learn how to put it to work
       
 (DIR) Post #Acu5xF0UIYS5dWOjB2 by simon@fedi.simonwillison.net
       2023-12-17T14:51:02Z
       
       0 likes, 0 repeats
       
       @dlatchx @mnl @maria that's true for LLMs working on their own, but it gets more complex when you give them access to tools like the ability to run searches
       They're also really good at accurately quoting a source if you copy and paste chunks of content from that source into their token context as part of answering a question
       Here's a related prototype I built: https://til.simonwillison.net/llms/claude-hacker-news-themes#user-content-adding-attribution
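
       The quoting trick described here is just pasting the source into the prompt and asking the model to support its answer with verbatim quotes from that pasted text. A minimal sketch, again assuming the OpenAI Python SDK (1.x); the prompt wording is illustrative, not the prompt from Simon's prototype:

       from openai import OpenAI

       client = OpenAI()

       def answer_from_source(source_text, question):
           prompt = (
               "Using ONLY the document below, answer the question. "
               "Support each point with a short verbatim quote from the document, "
               "in quotation marks.\n\n"
               f"Document:\n{source_text}\n\nQuestion: {question}"
           )
           response = client.chat.completions.create(
               model="gpt-4",  # illustrative
               messages=[{"role": "user", "content": prompt}],
           )
           return response.choices[0].message.content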
       
 (DIR) Post #Acu7GFt3Lm5Fp2YDSK by dalias@hachyderm.io
       2023-12-17T15:06:09Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo Yes it's useful to get to copy snippets from other people's code or writing without having to think about where they came from... 🤔That's basically the whole purpose the big capitalist sponsors behind this stuff have in mind. Robbing and enclosing the commons.
       
 (DIR) Post #AcuBszTYhT4E2BLsvI by dalias@hachyderm.io
       2023-12-17T15:10:18Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo When things like that Python benchmark work, I'm always suspicious that there is very similar code somewhere in the training set which it largely copied with some boundary conditions changed to match your prompt, then stripped of identifying information. Essentially just accelerating reuse of FOSS code, if it had given you the details to credit & ensure license compliance - but it purposefully was designed not to do that.
       
 (DIR) Post #AcuBt0M9QUjglUtU12 by simon@fedi.simonwillison.net
       2023-12-17T15:57:44Z
       
       0 likes, 0 repeats
       
       @dalias @williampietri @matt @danilo my mental model is more that it's taking an average of every code example it's seen that's relevant to the challenge - like if you were to go and read every snippet of code on GitHub (via their code search) that uses a specific API, then build a solution based on the patterns you picked up from that comprehensive review of everything else(I do that myself on GitHub pretty often, this feels like a weird blurry automation of that process)
       
 (DIR) Post #AcuBt2Plm0op9IbOvw by dalias@hachyderm.io
       2023-12-17T15:13:12Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo I'll grant that this could be useful if, instead of disguising plagiarism & copyright infringement, this didn't behave generatively but instead as a pattern search to find code with suitable license easy to adapt to your problem, then suggested a patch to make it applicable to your problem. On top of legal & moral reasons, patch form would optimize for identifying where the machine may be introducing errors.
       
 (DIR) Post #AcuBt5LyqYZQGPquwa by dalias@hachyderm.io
       2023-12-17T15:14:20Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo But of course doing that would be contrary to the sponsoring orgs' explicit goal to deceive the public (and particularly the next round of "investors") that this is "AI" rather than spicy stack overflow c&p.
       
 (DIR) Post #AcuC94k0tJd4zvibUO by simon@fedi.simonwillison.net
       2023-12-17T16:00:20Z
       
       0 likes, 0 repeats
       
       @eestileib @dalias @williampietri @matt @danilo "The vast majority of what it's going to be used for is to entrench and automatically enforce racism, deny health coverage people are entitled to, and censor the speech of minorities."
       THAT is the conversation we need to be having about this stuff! It's why I get frustrated that so much of the conversation is about existential risk, science fiction terminator scenarios etc
       I want to see regulation of the /applications/ of these tools
       
 (DIR) Post #AcuCKpzTYk9mizYkIi by dalias@hachyderm.io
       2023-12-17T16:01:27Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo I think you're underestimating the degree of similarity of programs in the corpus to the "successful" outputs. We get lots of blog posts about successful outputs like that, but no quantitative data about success rate. Nobody who was trying to get something useful (rather than to dunk) blogs about the attempts where there was nothing sufficiently similar in the training data and the output was nonsense.
       
 (DIR) Post #AcuEuRcbq5tbTmyDZI by simon@fedi.simonwillison.net
       2023-12-17T16:31:42Z
       
       0 likes, 0 repeats
       
       @dalias @williampietri @matt @danilo that's the thing about programming though: unlike writing in human languages, the best code is predictable and boring
       I often see them hallucinate an API method that doesn't exist, but which is clearly a "good idea" in terms of consistency with how everything else works - so I use them for API design assistance!
       In that case even the nonsense output is useful to me
       
 (DIR) Post #AcuGRb4ZoyLkImQLeC by simon@fedi.simonwillison.net
       2023-12-17T16:48:34Z
       
       0 likes, 0 repeats
       
       @dalias @williampietri @matt @danilo I do find it extremely frustrating that I can't dig into the training data to research this myself because the training data isn't public - at least for the highest quality models that I spend the most time with
       
 (DIR) Post #AcuI3RIQibQ8wGVocy by dalias@hachyderm.io
       2023-12-17T17:03:22Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo I don't think a lot of ppl appreciate the rage of those of us who lived through having to tiptoe around cleanroom reimplementing trivial things proprietary software did under threat of being accused of copyright infringement, only to have LLMs do it at gigantic scale much more flagrantly, this time taking from our work and giving to the proprietary software landlords...
       
 (DIR) Post #AcuIpNB5KlSK47vunQ by simon@fedi.simonwillison.net
       2023-12-17T17:15:30Z
       
       0 likes, 0 repeats
       
       @dalias @williampietri @matt @danilo that comparison to cleanroom implementations is a really interesting one - this is absolutely an industrial scale automation of that process
       
 (DIR) Post #AcuJ1cgBWcJjcNJy9A by dalias@hachyderm.io
       2023-12-17T17:17:29Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo What LLMs are doing absolutely would not meet the standards early free software folks had to hold themselves to for "clean room" to mitigate risk of their creations being deemed derivative.
       
 (DIR) Post #AcuJPZZ6ks18uSQmmm by simon@fedi.simonwillison.net
       2023-12-17T17:21:10Z
       
       0 likes, 0 repeats
       
       @dalias @williampietri @matt @danilo ... to the point that I wonder how much of the legal precedent set by those cleanroom projects could be relevant to legal arguments about usage of LLMs
       Over 40 years of precedent now as far as I can tell: The first IBM BIOS cleanroom clone was released all the way back in 1982! https://en.m.wikipedia.org/wiki/Columbia_Data_Products
       
 (DIR) Post #AcuJyCHHKUSuotbMZc by dalias@hachyderm.io
       2023-12-17T17:26:04Z
       
       0 likes, 0 repeats
       
       @simon @williampietri @matt @danilo There are some commonalities with :weed: legalization: I'd like for copyright to be weaker here, but not for those of us wronged by it before to get no retroactive benefit/reprieve while big business gets a new opportunity to profit.
       
 (DIR) Post #AcuTYBntT5VtMX5umO by mnl@hachyderm.io
       2023-12-17T19:15:33Z
       
       0 likes, 0 repeats
       
       @simon @dlatchx @maria I do use perplexity.ai and web assistant mode (basically both using bing search and then sprinkling some sparkly ai dust on it), and a variety of research specific gpt agents that use Arxiv/semantic scholar and other indexes to surface documents. While the summary of documents often misses the point, it gives me a good insight if it's worth my time to dig deeper or move on.
       1/
       
 (DIR) Post #AcuTYEhyfXZ0N3LjTU by mnl@hachyderm.io
       2023-12-17T19:15:34Z
       
       0 likes, 0 repeats
       
       @simon @dlatchx @maria I especially like giving them a paper's reference section, asking them to retrieve the abstracts, and then allowing me to "ask questions" of the reference section, which can further lookup information in the index.
       "Which datasets do the papers about xyz out of MIT reference? Which paper is about philosophy more than CS?"
       Does it replace more traditional lit-research methods? No of course not, it's just a new method one can use.