(DIR) Post #AaIVEh5aGQqR1UJZMu by simon@fedi.simonwillison.net
2023-09-30T15:51:05Z
0 likes, 1 repeats
Honestly, at this point if you try to make the case that LLMs aren't actually useful for anything, I take a little bit of personal offense. You're effectively saying that people like me are deluding ourselves, falling for the hype when there's actually nothing there. There are plenty of valid reasons to criticize LLMs - but not being useful genuinely isn't one of them. (For more detailed thoughts on this, see the "comparing LLMs to crypto" section here: https://simonwillison.net/2023/Sep/29/llms-podcast/#comparing-llms-to-crypto)
(DIR) Post #AaIW1ihF4ngkCKJ3U8 by doug@union.place
2023-09-30T15:59:52Z
0 likes, 0 repeats
@simon I think all I've seen is how bad it is for what it is loudly espoused for, particularly in terms of mainstream adoption. I've tried generating summaries of bodies of text, and code analysis/suggestions, and it is often so incorrect it becomes difficult to take seriously in any context. What we need is more examples of where it excels, consistently.
(DIR) Post #AaIWEbXczRkXlsx9vc by scubbo@fosstodon.org
2023-09-30T16:01:35Z
0 likes, 0 repeats
@simon good for you! I completely agree - "LLMs/AIs are good for absolutely nothing" is even more lazy and ignorant a position than "they can do anything and everything!". While it's certainly true that they're currently being mis- and over-applied, there are plenty of valuable use-cases.
(DIR) Post #AaIWRU2OCUVuGNTzfM by impersonal@mastodon.social
2023-09-30T16:04:15Z
0 likes, 0 repeats
@simon Might turn out to be more of a niche tool than we initially thought during The Initial Hype?More than saying you have deluded yourself I think they are saying they have not looked into the topic deeply enough, lack understanding.But it does feel a bit like saying "but I can not till the earth of my fields with a velocipede. I'll stick to my horses thank you very much".
(DIR) Post #AaIWguGywMDzyTf98a by simon@fedi.simonwillison.net
2023-09-30T16:04:17Z
0 likes, 0 repeats
@doug We talk about that a fair bit in the podcast episode. I think the key to this is that it's actually quite difficult to use this stuff effectively - because it lays SO MANY traps for you, there are so many examples of things that it will get blatantly wrong, often in very convincing ways. Once you learn how to side-step those traps it becomes amazingly productive - but that takes quite a lot of effort
(DIR) Post #AaIWtgHOHL90L5wejQ by simon@fedi.simonwillison.net
2023-09-30T16:05:12Z
0 likes, 0 repeats
@impersonal Oh absolutely, if someone's criticism is that it's over-hyped I couldn't agree more with them
(DIR) Post #AaIX4ZCmv9XHaHz0ls by ncweaver@thecooltable.wtf
2023-09-30T16:06:51Z
0 likes, 0 repeats
@simon They are, however, of remarkably limited utility. For programming they are useful, because most programming is actually recycling boilerplate, so having a compressed view of all programs and auto-completing off of that is hugely useful. For translation they are useful, as it is a compressed representation of the two languages. Otherwise, they really are bullshit generators: spewing out an uncompressed representation of the lossy compressed data starting at a specific start.
(DIR) Post #AaIX4bonCp7lgjmOHo by ncweaver@thecooltable.wtf
2023-09-30T16:08:09Z
0 likes, 0 repeats
@simon And you can really tell. Copilot has been in evolution for years, as has Google Translate. The new hype was driven by the ELIZA effect on the bullshit generators. I think a good dividing line is "everything after ChatGPT went public" is in the bullshit category unless proven otherwise.
(DIR) Post #AaIX4dtTUO3e7pz9rU by ncweaver@thecooltable.wtf
2023-09-30T16:10:57Z
0 likes, 0 repeats
@simon Additionally, on their own, LLMs are bullshit generators in the technical sense: they have no sense of correctness. But for the co-pilot programming role, you have a post-bullshit correctness filter in the compiler that explicitly removes a huge amount of the problems.
(DIR) Post #AaIXEj8IrgHdGojKFM by briankrebs@infosec.exchange
2023-09-30T16:07:15Z
0 likes, 0 repeats
@simon AI is a machine. Machines are either a benefit, or a hazard. If it's a benefit, it's not my problem. Seriously, though... I think rather what I'm hearing most in criticism of AI is that we're not weighing the pros and cons of building and unleashing these machines. And that their utility becomes less important when a machine is actively and demonstrably causing harm (arguably of the reckless variety) for some subset of human beings.
(DIR) Post #AaIXQMSQzrnAQ2h5E0 by simon@fedi.simonwillison.net
2023-09-30T16:09:36Z
0 likes, 0 repeats
@dingemansemark Definitely. CFCs were super-useful for refrigeration, but we phased them out because the negative impacts (a giant hole in the ozone layer!) weren't worth the positive applications. Are the positive impacts of LLMs worth the negatives? I can't answer that question myself with any confidence - everything is still shaking out. I've been taking this attitude: they're here now, and they're not going to be uninvented, so let's figure out the most positive possible applications of them
(DIR) Post #AaIXhLeDGHt6pVIrHE by simon@fedi.simonwillison.net
2023-09-30T16:11:45Z
0 likes, 0 repeats
@mxfraud Right - those are examples of things I count as valid reasons to criticize LLMs. "They're not actually useful for anything" is the thing I'm arguing against here - I've absolutely seen people trying to make that argument
(DIR) Post #AaIXs0dhi5EerL9Hkm by simon@fedi.simonwillison.net
2023-09-30T16:15:06Z
0 likes, 0 repeats
@ncweaver I think one of the big challenges is that the things which people intuitively expect them to be good at - looking up facts, solving math problems, things that computers have been excellent at for decades - are the things that LLMs are most likely to mess up! It takes skill, knowledge and practice to use LLMs to even a fraction of their full potential - but no-one is telling people that, and there are few useful resources for teaching it
(DIR) Post #AaIYHseu0LGLdY2tJQ by simon@fedi.simonwillison.net
2023-09-30T16:17:36Z
0 likes, 0 repeats
@briankrebs That's criticism I will absolutely join in with - these things have the potential to be extremely disruptive (in both good and bad ways) to a wide range of things, and we need to be thinking very hard about those impacts and how to mitigate the negative ones. Saying "It's all just dumb hype, they're not actually useful for anything" gets in the way of those conversations in my opinion
(DIR) Post #AaIYUdjApTg55r7yHw by lawyerjsd@mastodon.social
2023-09-30T16:20:17Z
0 likes, 0 repeats
@simon hold up: when I see your posts I keep thinking of the Master of Laws degree (LLM), which is useless most of the time. What are you referring to?
(DIR) Post #AaIYhb8RmM1Som7zfM by ncweaver@thecooltable.wtf
2023-09-30T16:24:57Z
0 likes, 0 repeats
@simon Also you are using LLMs in a tight human-and-checking-computer coupled feedback loop. That ends up explicitly attacking the error/bullshit problem on the lossy decompression that is an LLM's execution of spicy autocomplete. But this also means that LLMs are a remarkably limited and weak tool, and provably can't go beyond that without some notion of "truth" being added on outside of the LLM itself.
(DIR) Post #AaIYwJM8AErY1JXnc0 by RileyNorman@masto.ai
2023-09-30T16:31:53Z
0 likes, 0 repeats
@simon "You're effectively saying that people like me are deluding ourselves" You should ask old internet denizens how often people have said that about whatever their newest tech is... Spoiler: It's really common. You'll hear that supporting argument with investors too, sometimes right before they lose their shirts.
(DIR) Post #AaIZA1V3Lj3Sh8FvWq by axleyjc@federate.social
2023-09-30T16:33:23Z
0 likes, 0 repeats
@simon @briankrebs use of the word "potential" in response to Brian's mention of harms to subsets of people comes off as dismissive of real harms happening now. May not have been your intent. Or you pivoted from harms to people to "disruption" in the abstract which is most often used to refer to jobs and industry impacts vs. individuals.
(DIR) Post #AaIZkZ481vw3Dp5QA4 by Rainer_Rehak@mastodon.bits-und-baeume.org
2023-09-30T16:41:54Z
0 likes, 0 repeats
@simon So your argument is that you are infallible? (No offense, really asking)
(DIR) Post #AaIaLdXgLh0TowaTGi by simon@fedi.simonwillison.net
2023-09-30T16:48:37Z
0 likes, 0 repeats
@dingemansemark if you want to use this as an analogy for LLMs, I'd say we are at a point where some people are raising valid questions about the (not quite yet proven) impact of CFCs on the ozone layer and refrigeration companies are paying no attention to them at all
(DIR) Post #AaIaj9Ou4AIpHvh30S by simon@fedi.simonwillison.net
2023-09-30T16:53:00Z
0 likes, 0 repeats
@lawyerjsd Large Language Models - see https://simonwillison.net/2023/Aug/27/wordcamp-llms/ and https://simonwillison.net/2023/Sep/29/llms-podcast/
(DIR) Post #AaIauyXmG1w95iTz6W by stiller_leser@mastodon.social
2023-09-30T16:22:47Z
0 likes, 0 repeats
@ncweaver @simon Not for junior devs though. Also sent me down one wrong route after another while learning...
(DIR) Post #AaIauzYWUk7yEJq5uC by simon@fedi.simonwillison.net
2023-09-30T16:53:51Z
0 likes, 0 repeats
@stiller_leser @ncweaver I think they can be incredibly useful for junior developers too: https://simonwillison.net/2023/Sep/29/llms-podcast/#does-it-help-or-hurt-new-programmers
(DIR) Post #AaIb84jYYRpq3OErCK by interpipes@thx.gg
2023-09-30T16:29:46Z
0 likes, 0 repeats
@ncweaver @simon @briankrebs does it though? The largest danger in LLMs isn’t really the stuff that’s hilariously, obviously wrong. A coding copilot failing to produce code that compiles ~= the melting point of eggs. There is plenty of code that will compile and look plausible but will not actually work correctly. For evidence of this, see every bug introduced into software by a human developer who thought they were writing the right thing.
(DIR) Post #AaIb85ouW1iDQHkeBM by simon@fedi.simonwillison.net
2023-09-30T16:56:57Z
0 likes, 0 repeats
@interpipes @ncweaver @briankrebs this is why I like the intern analogy: if you blindly land code written by your new intern without a detailed review you're going to have a bad time, but that doesn't mean you can't get useful work out of them https://simonwillison.net/2023/Sep/29/llms-podcast/#code-interpreter-as-a-weird-kind-of-intern
(DIR) Post #AaIbbXf84sriAjDcRs by simon@fedi.simonwillison.net
2023-09-30T17:02:42Z
0 likes, 0 repeats
@RileyNorman hah, yeah see also SOAP, XSLT, NFTs. Sometimes the critics are right! (Bundling SOAP and XSLT with NFTs isn't really fair on the former two though)
(DIR) Post #AaIbqlPVFHH39dAT9U by simon@fedi.simonwillison.net
2023-09-30T17:05:06Z
0 likes, 0 repeats
@Rainer_Rehak not as a general rule, but I have 14+ months of experience now doing useful work with LLMs (I was using GPT-3 on a regular basis before ChatGPT came out). Have I been deluding myself for over a year?
(DIR) Post #AaIc3HpuVpU1CFUW36 by simon@fedi.simonwillison.net
2023-09-30T17:07:28Z
0 likes, 0 repeats
@dingemansemark that's not to say the negative impact of LLMs isn't proven yet - there are plenty of well documented negative impacts already. Just none of them quite as negative as the hole in the ozone layer - at least not yet, from what I've seen
(DIR) Post #AaIdfqIquilf3Gyeem by standev@mastodon.online
2023-09-30T17:25:52Z
0 likes, 0 repeats
@simon to be fair crypto hype lasted a lot longer than a year
(DIR) Post #AaIdrV03waKcSMkMFM by simon@fedi.simonwillison.net
2023-09-30T17:27:33Z
0 likes, 0 repeats
@mxfraud Right: those are exactly the kinds of issues we need to be highlighting and discussing. Saying "actually it's all completely useless anyway" doesn't help us have those discussions - it actively distracts from and de-legitimizes them, which is why I'm arguing against the "it's not actually useful at all" framing here
(DIR) Post #AaIe3QxUnNyPOhEkz2 by simon@fedi.simonwillison.net
2023-09-30T17:28:31Z
0 likes, 0 repeats
@standev Yeah, and crypto had one extremely compelling use-case: if you got in early and convinced enough other people to buy in after you, you could make a lot of money out of it before the inevitable crash
(DIR) Post #AaIeEVqqwbrZl2PeVc by standev@mastodon.online
2023-09-30T17:30:56Z
0 likes, 0 repeats
@simon it is also pretty good for money laundering and evading capital controls
(DIR) Post #AaIePNAHa6I20BgAjI by xavdid@mastodon.social
2023-09-30T17:31:34Z
0 likes, 0 repeats
@simon LLMs are much more obviously useful, sure. The reason they (rightfully) draw comparisons is the level of hype and product adoption they're seeing, seemingly overnight. Before, tons of companies were adding crypto to their brand or product. Now, every SaaS under the sun has the phrase "AI Powered" somewhere on their landing page (regardless of how useful that is for the product). I think LLMs are here to stay in a different way, but I understand the comparison.
(DIR) Post #AaIefSkwYp0zybV1n6 by axleyjc@federate.social
2023-09-30T16:45:05Z
0 likes, 0 repeats
@interpipes @ncweaver @simon @briankrebs there's lots of code that compiles but is completely insecure! And the fact that many LLMs used for coding aren't trained on good code means there's a lot of latent bad code lossily compressed in the data set. If you use one, use one that curates the training data. This is a great balanced review of using an LLM for development: https://youtu.be/_nG6d6HSGB4
(DIR) Post #AaIefTg18cfWpcCbke by jernej__s@infosec.exchange
2023-09-30T17:32:32Z
0 likes, 0 repeats
@axleyjc @interpipes @ncweaver @simon @briankrebs There's one thing I'm really interested about Copilot – if there's no copyright infringement involved, why did Microsoft only train it on open source projects, when they have a huge library of code that they themselves have written?
(DIR) Post #AaIefUdZZCJ7oK4Aa0 by simon@fedi.simonwillison.net
2023-09-30T17:36:39Z
0 likes, 0 repeats
@jernej__s @axleyjc @interpipes @ncweaver @briankrebs My guess is that they were worried about trade secrets. Open source shouldn't have any trade secrets in it because people wouldn't choose to release it publicly if it did. Microsoft's internal code has all sorts of patterns in it that might be considered confidential.
(DIR) Post #AaIefUpGrhfIObXVoW by axleyjc@federate.social
2023-09-30T17:30:07Z
0 likes, 0 repeats
@interpipes @ncweaver @simon @briankrebs The most popular refrain on Stack Overflow when developers asked questions about problems with cross-origin web requests was to respond with a recommendation to add the header Access-Control-Allow-Origin: *. Replies from the original posters thank them for that "solution". Of course it "worked", but only if you narrowly define "worked" as "functions" vs "functions securely". The fallacy of LLMs as sources of truth is that frequency correlates with truth.
(DIR) Post #AaIfmG5IAkTqemp08e by slott56@fosstodon.org
2023-09-30T17:49:21Z
0 likes, 0 repeats
@simon @doug having to learn how to sidestep problems doesn’t sound like much of an endorsement. Indeed it sounds very much like it adds new problems. It seems remarkably unhelpful to add problems that require deep expertise to sidestep.
(DIR) Post #AaIgIbnayQFrGnyC3c by simon@fedi.simonwillison.net
2023-09-30T17:55:23Z
0 likes, 0 repeats
@slott56 @doug Right: if someone tells you "LLMs are easy! They'll give you a huge productivity boost right out of the gate" then that person is misleading you. My message is "LLMs are surprisingly difficult to use. I have managed to get enormous productivity boosts after investing a lot of effort in learning how to use them effectively."
(DIR) Post #AaIgTDN68O20hAdK6q by simon@fedi.simonwillison.net
2023-09-30T17:56:07Z
0 likes, 1 repeats
@slott56 @doug LLMs are a chainsaw disguised as a pair of pliers
(DIR) Post #AaIgsDpjYPv7Mpx4wC by slott56@fosstodon.org
2023-09-30T18:00:06Z
0 likes, 0 repeats
@simon @doug the large investment path isn’t as terrifying as having to sidestep problems. I want to focus on the fact that it introduces problems that require deep expertise to side-step. That’s daunting. Folks get incredible productivity gains from the pomodoro method and don’t have to sidestep subtle, difficult-to-even-identify problems.
(DIR) Post #AaIh4RqJIGluI83F4a by simon@fedi.simonwillison.net
2023-09-30T18:02:56Z
0 likes, 0 repeats
@slott56 @doug I'm increasingly seeing evidence that convinces me that it helps, rather than hurts, new programmers - some notes on that here: https://simonwillison.net/2023/Sep/29/llms-podcast/#does-it-help-or-hurt-new-programmers
(DIR) Post #AaIhQG87WQLwKsmYXQ by glyph@mastodon.social
2023-09-30T18:07:51Z
0 likes, 0 repeats
@simon @dingemansemark My educated guess about the net of their utility is probably slightly more negative than Simon's but I think that CFCs are an illuminating analogy. The harm from CFCs was not rhetorical, it was well quantified. Whereas some of the writing about LLM harms at this point is wildly distorted. I've seen a dozen articles at this point which claim something like "every time you ask an LLM a question, it destroys a gallon of fresh water!" Except you can run an LLM on a laptop.
(DIR) Post #AaIhdLyd5vEmEaaRTk by impersonal@mastodon.social
2023-09-30T18:09:09Z
0 likes, 0 repeats
@simon @doug what types of traps have you identified so far?
(DIR) Post #AaIhoa8cmemdL8cFge by MattHodges@mastodon.social
2023-09-30T18:09:39Z
0 likes, 0 repeats
@simon I've started approaching this with a different perspective. I was also previously annoyed, and felt that I needed to convince people of their valuable use cases. But then I decided that persuasion campaign isn't worth the toil. If people choose not to learn about new, powerful, tools, that's going to put them at a disadvantage, not me. I won't be offended by their selective detriment. I'm continuing to invest in learning the opportunities —and risks! — for my work and my teams' work.
(DIR) Post #AaIi0zvoi63ioOXtcu by ulidig@mastodon.social
2023-09-30T18:11:37Z
0 likes, 0 repeats
@simon I can't say that LLMs aren't good for anything. However, they aren't good for the vast majority of things that people use them for. https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/ "'Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose,' the team's paper concluded. '...ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style.' Among the set of preferred ChatGPT answers, 77 percent were wrong."
(DIR) Post #AaIiBrba0YLmAoGFeq by anthroposamu@mastodon.social
2023-09-30T18:11:50Z
0 likes, 0 repeats
@simon good read for me as an AI luddite. Thanks.
(DIR) Post #AaIiMdYqZ14tnm7V8i by ulidig@mastodon.social
2023-09-30T18:13:54Z
0 likes, 0 repeats
@simon @stiller_leser @ncweaver ChatGPT's odds of getting code questions correct are worse than a coin flip https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/ "'Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose,' the team's paper concluded. 'Nonetheless, ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style.' Among the set of preferred ChatGPT answers, 77 percent were wrong."
(DIR) Post #AaIjVFX7Mt1fqCywGO by simon@fedi.simonwillison.net
2023-09-30T18:30:58Z
0 likes, 0 repeats
@ulidig @stiller_leser @ncweaver I thought that one was a very poorly constructed paper. I looked at the data they collected for it and their prompting strategy was terrible, plus they were using very specific standards for marking things as incorrect which I didn't think were robust https://github.com/SamiaKabir/ChatGPT-Answers-to-SO-questions
(DIR) Post #AaIlK8UzNh0SkOoZ9M by lewiscowles1986@phpc.social
2023-09-30T18:51:27Z
0 likes, 0 repeats
@simon I'd encourage you to not take offence at technology choice. You're too smart for that. We all invest time in things we think are useful, and they are always more or less useful than we perceive, because usefulness is a longer-term thing. Let it gloss over you, or don't, but please do not become entrenched. It's not about you, or any individual. You do so much of worth, including around LLM education, regardless of their value.
(DIR) Post #AaIlVqBxGv9pRbUcHA by simon@fedi.simonwillison.net
2023-09-30T18:53:00Z
0 likes, 0 repeats
@impersonal @doug the biggest one by far is hallucination - they can produce extremely convincing answers to all sorts of things which are entirely invented and unrelated to reality. Actually less of a problem for code, because if they hallucinate an API the code won't work when you test it!
(DIR) Post #AaIluk9sqCuzyIOlPM by Npars01@mstdn.social
2023-09-30T18:58:09Z
0 likes, 0 repeats
@simon There's lots of things that are "useful" yet imprudent to use. Nuclear bombs ended WW2, but they started a nuclear arms race we still might not survive.
(DIR) Post #AaIm7rwDEDeBrcGL6u by UncivilServant@med-mastodon.com
2023-09-30T18:20:06Z
0 likes, 0 repeats
@ncweaver @simon I wonder if LLMs would be useful somehow as beta testers? I don't know how you would implement it, but an ideal beta tester should be incapable of making a grilled-cheese sandwich without blowing up their kitchen. I feel like creative incompetence is one area where LLMs might be able to match humans.
(DIR) Post #AaIm7t6AufD7SnvoHI by simon@fedi.simonwillison.net
2023-09-30T18:59:33Z
0 likes, 0 repeats
@UncivilServant @ncweaver my hunch is that they'd do pretty badly at that, because they'd simulate the "average" human, whereas for beta testing you want people who can exhibit really surprising edge cases you didn't think of
(DIR) Post #AaImJiipJofEkweWn2 by simon@fedi.simonwillison.net
2023-09-30T19:02:03Z
0 likes, 0 repeats
@Npars01 totally agree - I'm always keen to discuss the many, many drawbacks of LLMs and other machine learning approaches
(DIR) Post #AaIn5etCs4sSlmMmaO by simon@fedi.simonwillison.net
2023-09-30T19:11:13Z
0 likes, 0 repeats
@lewiscowles1986 I'm not offended if people choose not to use them - that's a very reasonable ethical choice, which I've compared to being a vegan in the past. What offends me is when people deny that there is any utility to them at all - indicating that anyone who is finding value in them is deceiving themselves. (And I'm aware I've been arguing that point with crypto advocates for years, so it's somewhat ironic I'm now on the other side of it)
(DIR) Post #AaInhSl6QB9u9QbPNo by smach@masto.machlis.com
2023-09-30T19:17:59Z
0 likes, 0 repeats
@simon I think some people are annoyed that LLMs take skill, iteration, and patience to learn and use well. The dream was that they'd know exactly what we need after just one generic question about a complex request. It would be helpful if people evaluate them for what they are, not based on an unrealistic sci-fi vision. (They definitely help me answer questions about my code. They will not, however, write a complex app from scratch based on a 1-sentence prompt.)
(DIR) Post #AaIoMA0b2Rfts6FuJU by slott56@fosstodon.org
2023-09-30T19:25:36Z
0 likes, 0 repeats
@simon @doug the presence of ethical rules is troubling, also. There are now two layers of ethical considerations: (1) should this even be done with computers? And now, this new, murky realm of (2): is the code or documentation produced by an ethical and trustworthy tool? It's introduced yet another problem.
(DIR) Post #AaIoa4KwvG8mKqPKiG by dys_morphia@sfba.social
2023-09-30T19:26:18Z
0 likes, 0 repeats
@simon @slott56 @doug this is an argument I’ve been trying to make for using them as a productivity tool for technical writers, but instead people want chatbots that surface the synthetic text directly to end users without a human in the loop. I’m all, please let’s not hand chainsaws to readers to make custom furniture. Let’s use chainsaws to make furniture faster so we can offer more kinds. (This analogy has become strained. As a specific example, by custom furniture I mean code samples with annotations. By more kinds I mean code samples in more languages.)
(DIR) Post #AaIqs5eCzKK5WM8Xfk by bjn@mstdn.social
2023-09-30T19:20:35Z
0 likes, 0 repeats
@ncweaver @simon So the compiler checks for syntactic correctness, how do you check for algorithmic correctness? Have the LLM generate test code as well? How do you know if that is correct?
(DIR) Post #AaIqs6R83RSFy51bvM by simon@fedi.simonwillison.net
2023-09-30T19:53:49Z
0 likes, 0 repeats
@bjn @ncweaver the same way you test code written by a junior intern. You can get GPT to write automated tests for you, but at some point you're going to actually have to do the work yourself to verify that the code behaves as it should
(DIR) Post #AaIsoZ1W1jupNXU7SS by gabboman@app.wafrn.net
2023-09-30T20:00:15.000Z
0 likes, 0 repeats
@simon@fedi.simonwillison.net saying that LLMs are useful is like saying piracy and slavery are useful. The only issue is that they rely on deeply immoral shit
(DIR) Post #AaIsoZvWfUicBFgqlE by simon@fedi.simonwillison.net
2023-09-30T20:15:24Z
0 likes, 0 repeats
@gabboman those comparisons are a little extreme for me, but they still work. The American South ran its economy on slavery for 100+ years - utterly indefensible, but the argument against it wasn't that it "wasn't useful", it's that it was morally abhorrent. I'm all for arguments against LLMs that detail the harm they are causing
(DIR) Post #AaIv85c1Hq9RrSzui0 by tomw@mastodon.social
2023-09-30T20:41:27Z
0 likes, 0 repeats
@simon The *best* case scenario is that it produces deeply unoriginal work. You may call that "useful" but I don't. In a way it's worse than being obviously useless, because it leads people to take and use low quality text.
(DIR) Post #AaJ00BwAO1IyvqTTUG by bjn@mstdn.social
2023-09-30T21:36:10Z
0 likes, 0 repeats
@simon @ncweaver sounds like it’s simpler to write it myself.
(DIR) Post #AaJ0ZGal4XWlVIBaaG by simon@fedi.simonwillison.net
2023-09-30T21:41:25Z
0 likes, 0 repeats
@bjn @ncweaver It is... until you learn all of the tricks necessary to take advantage of LLM assistance. I estimate I get a 2-5x productivity boost on the time I spend typing code into a text editor now thanks to LLMs. More notes on that here: https://simonwillison.net/2023/Sep/29/llms-podcast/#does-it-help-or-hurt-new-programmers
(DIR) Post #AaJ0mAqhjdd3SPncVU by gabboman@app.wafrn.net
2023-09-30T21:08:06.000Z
0 likes, 0 repeats
@simon@fedi.simonwillison.net well I am drunk now but AI is basically reliant on slave labor to classify data and ignoring copyright. so slave labor and stolen labor too #if-i-said-a-stupid-thing-ignore-it-pls #if-i-said-something-logical-not-that-much #please-respond-in-24-hours-if-the-argument-was-stupid
(DIR) Post #AaJ0mBa50vvPj91rEW by simon@fedi.simonwillison.net
2023-09-30T21:43:26Z
0 likes, 0 repeats
@gabboman New models are coming out so fast now and from such a diverse group of research labs that I wouldn't be surprised to see one trained entirely on out-of-copyright data at some point soon - I'm very much looking forward to seeing how well that works. Adobe have one of those for image generation now - their Firefly model was trained exclusively on licensed stock photography, and in fact that's one of their selling points for it
(DIR) Post #AaJ0zIZqJVo1BqAeEi by simon@fedi.simonwillison.net
2023-09-30T21:46:06Z
0 likes, 0 repeats
@coderanger That's more-or-less how I've been describing them to people - LLMs are a party trick that can guess the next word: https://simonwillison.net/2023/Sep/29/llms-podcast/#what-are-large-language-models
(DIR) Post #AaJ1A3ilt8GeIgThFw by simon@fedi.simonwillison.net
2023-09-30T21:47:08Z
0 likes, 0 repeats
@tomw I think generating reams of text is one of the least interesting ways to apply these tools
(DIR) Post #AaJ1KmyiTh9ZYi3Jw0 by simon@fedi.simonwillison.net
2023-09-30T21:48:55Z
0 likes, 0 repeats
@jgordon @doug Yeah, that fits my experience: I've found that over time I've started to develop a pretty strong intuition for which kinds of questions are appropriate for it and which will produce wildly inaccurate results. The challenge I'm finding is that teaching and explaining that intuition to other people is really hard! You kind of have to spend the time with them to develop it yourself
(DIR) Post #AaJ1WkHoJRiSgVgX4a by jemal@jemal.contact
2023-09-30T20:28:16Z
0 likes, 0 repeats
@simon I’m not here to argue that they aren’t useful to you, but merely to point out that because the power and server costs are VC-funded, you’re underestimating the cost to run the model post-training, which is a real environmental concern. Personally, I’ve never gotten an answer that a) worked or b) was obvious why it didn’t work unless you dug in, only to find that they were giving results mashed up from multiple incompatible versions of an API. I wasted a lot of time finding that out.
(DIR) Post #AaJ1WlKgQFblvi2LBo by simon@fedi.simonwillison.net
2023-09-30T21:50:29Z
0 likes, 0 repeats
@jemal I'm a lot less worried about the cost of running them than I used to be, because I can run reasonably capable models on my laptop (and even on my iPhone) nowOne of the biggest positive benefits of the openly licensed models that have emerged over the past six months is that a huge amount of effort has gone into optimizing them and finding tricks to get them to run on less powerful hardware
(DIR) Post #AaJ3JAlzErcygaSIZU by postmodern@ruby.social
2023-09-30T22:12:22Z
0 likes, 0 repeats
@simon can you melt an egg? Are there any countries in Africa that start with the letter K? The benefit of LLMs is that instead of searching StackOverflow or Quora and getting a wrong answer, an LLM can rephrase your question and generate the wrong answer for you, because it was trained on the same StackOverflow/Quora answers from 2020.
(DIR) Post #AaJ3VHp8qjNm1qhvFo by simon@fedi.simonwillison.net
2023-09-30T22:13:31Z
0 likes, 0 repeats
@postmodern Yeah, one of the most important (and very non-obvious) skills you have to develop with LLMs is how to identify questions and tasks that are a good fit for it
(DIR) Post #AaJ4iUtHg8MwH1VZb6 by brainwane@social.coop
2023-09-30T22:28:35Z
0 likes, 0 repeats
@simon I find it interesting that there is something a person could say, by implying you have been duped, that could offend you. Does it cause in you a subjective feeling of offense, of being offended? Or are you saying it is *objectively* offensive to say or imply that a group of people have been scammed? (As you know, I use Whisper so I do know ML tools are useful, btw.) @lewiscowles1986
(DIR) Post #AaJ57ITwwzxN1pU3dY by simon@fedi.simonwillison.net
2023-09-30T22:33:05Z
0 likes, 0 repeats
@brainwane @lewiscowles1986 It's a pretty mild effect, I'm not particularly offended - I just wanted to point out that if someone says "LLMs are useless" there's an implication they may not have considered which is "anyone who thinks they are useful is deluding themselves"
(DIR) Post #AaJ93TJ5A7anG7pe1A by brainwane@social.coop
2023-09-30T23:15:08Z
0 likes, 0 repeats
@simon Thanks for clarifying. The whole topic of offense, condescension, insult, honor, etc. gets very subtle and differs among cultures, among people with different psychologies, etc. so I appreciate you sharing more about what you meant, especially as I think I have rarely read an engineer explicitly telling his peers that he reads a particular critique of his work or approach and takes - even mild - personal offense at it. Culturally it's so unusual! @lewiscowles1986
(DIR) Post #AaJCs31WwR1ONIQk5Y by SnoopJ@hachyderm.io
2023-09-30T23:59:52Z
0 likes, 0 repeats
@simon this is a valid criticism for problems for which they *aren't* useful, and I am similarly insulted when I point that out in a narrow domain and someone jumps down my throat about how useful they are (in that person's opinion) in totally unrelated domains. I don't think *everyone* who finds them useful/interesting is a credulous fool taken in by dancing bearware, but I also do believe that such fools do exist in non-trivial numbers.
(DIR) Post #AaJE13AFt5alN6Mrq4 by simon@fedi.simonwillison.net
2023-10-01T00:12:15Z
0 likes, 0 repeats
@SnoopJ That's a very reasonable take. I love "dancing bearware"!
(DIR) Post #AaJUdClUEwDRPxzwbw by lewiscowles1986@phpc.social
2023-10-01T03:19:14Z
0 likes, 0 repeats
@simon After reading the replies, I'd suggest an innate [imo] or [to me] if that makes things easier. I assume this on every social media post, and sadly a lot of news pieces too. So long as the folks paying you keep paying... meh, it seems to be working out. Just please don't bet the farm, or encourage others to, on what seems like "speculative technology". Most folks' problem with AI is the bold, far-reaching claims, and the implications for those of us unable to opt out.
(DIR) Post #AaJcoq5ZArz2CIRsxM by gordotango@hachyderm.io
2023-10-01T04:50:42Z
0 likes, 0 repeats
@simon @slott56 @doug LLMs are a 3cm star key with four universal joints and a finicky electric drill attached. They can be used for very specific tasks with a lot of work and careful monitoring. They have no place on the public internet, or for generating content. If you have an analysis use case where they can generate domain-specific insights from data, have at it, but you’d best double-check the results.
(DIR) Post #AaJnqeiRWq2h02hZaq by kitten_tech@fosstodon.org
2023-10-01T06:54:39Z
0 likes, 0 repeats
@simon I feel that a grey cloud of depressed cynicism has fallen over the section of the computer nerd community I live in, largely driven by the enshittification of commercial software and services, and by how blockchain technology started as an interesting breakthrough in distributed algorithms and quickly turned into grift and waste, and now it's hard for anyone to get enthusiastic about anything any more.
(DIR) Post #AaJpWn0PCKS8pIl4uO by jonasvautherin@fosstodon.org
2023-10-01T07:13:10Z
0 likes, 0 repeats
@simon @briankrebs I am a bit confused because I see you talk a lot about the "it's not all bullshit" part while dismissing the "maybe we should think more before releasing new technology that could do harm" part by saying "well now it's out there, too late". And then you complain that the first (the one you fuel) gets in the way of the second. Or am I missing something? Can we please talk more about the ethical concerns behind releasing tech (for profit) without thinking about consequences?
(DIR) Post #AaJwY00j1uXF87Qq5Q by kittylyst@mastodon.social
2023-10-01T08:32:11Z
0 likes, 0 repeats
@simon I always enjoy reading what you write, and there are some really interesting parts in the piece - and you also, en passant, pointed out that my carbon maths was wrong for LLMs (they're not as bad as I was claiming). I'll also take the viewpoint that these things now exist, and so it is on us (as senior technologists) to see how we can use them to help new programmers - and that might, in turn, help democratise programming - which is, effectively, still a priest class today. 1/
(DIR) Post #AaJzDFx6rbdYLyXcx6 by emaad@sigmoid.social
2023-10-01T09:01:34Z
0 likes, 0 repeats
@simon @slott56 @doug can’t think of a single situation where I’d choose scissors over a chainsaw
(DIR) Post #AaK0t3Akrx7GplPt7g by Techychap@hachyderm.io
2023-10-01T09:20:20Z
0 likes, 0 repeats
@simon @impersonal @doug from a coding perspective, if the model had access to fast compilers or code analysers it could self-correct those hallucinations.
(DIR) Post #AaK1REe6iYfUGQRkZ6 by Techychap@hachyderm.io
2023-10-01T09:26:46Z
0 likes, 0 repeats
@simon I think there are better parallels with the .com bubble of early 2000 rather than the crypto bubble. It's not that the technology is useless, it is just harder to be a success than a lot of people realise. There is a lot of interest now and everyone is trying to get on that bandwagon. But it is hard to see who we should back, as many of them will turn out to be failures.
(DIR) Post #AaK6SnRbK7UIFWA1RY by Rainer_Rehak@mastodon.bits-und-baeume.org
2023-10-01T10:22:56Z
0 likes, 0 repeats
@simon Okay, but what is "useful work" in your understanding?
(DIR) Post #AaKHFRI5TeJ7GyXn9M by robertpi@functional.cafe
2023-10-01T12:23:54Z
0 likes, 0 repeats
@simon I agree. I think the hype machine for LLMs and crypto is largely the same, but the underlying technology is very different. LLMs have a lot of issues, but the technology is interesting in a way crypto never really was.
(DIR) Post #AaKKEWrzY4Gu9Iazk8 by kitten_tech@fosstodon.org
2023-10-01T06:58:05Z
0 likes, 0 repeats
@simon having said that, I've yet to find anything anybody has done with LLMs particularly useful for the problems I face in my own life, and I'm aware that's touching a raw nerve for me - too complex to detail here, but in summary I often feel like an outsider - and so I suppress the irrational anger that brings up for me, to avoid the temptation to join in with the hating on them - cathartic as it would be :-)
(DIR) Post #AaKKEXnQ6YD11PSrFw by kitten_tech@fosstodon.org
2023-10-01T07:24:50Z
0 likes, 0 repeats
@simon OTOH I think that NFTs could have been used to build a new DNS without the centralisation/enshittification problems of the current hierarchical commercial model (and there are now working systems using no PoW energy waste), and am also grumpy it's now impossible to discuss such things without being crushed between people getting excited about speculating on domain names and people getting angry about anything involving cryptocurrencies :-)
(DIR) Post #AaKKEYoWJwgQB6zFbs by kitten_tech@fosstodon.org
2023-10-01T08:00:00Z
0 likes, 0 repeats
@simon and I suppose I should elaborate a bit on my experience with LLMs, while trying not to be grumpy: everything I've seen people be excited about so far boils down to solving problems I don't have, or making me more productive at things I DO do by doing the fun parts for me, so I only have soulless drudgery left. They'll make programmers more productive in a burnout-inducing factory-line way!
(DIR) Post #AaKKEZl0oTTH6WLxmS by kitten_tech@fosstodon.org
2023-10-01T08:06:09Z
0 likes, 0 repeats
@simon the side of my work I'd love to get a tool to do - the "why did this fail in production and I can't recreate in dev" type of thing - might be helped by LLMs if there's a way to give it interactive access to the entire source tree, docs for third party components, and so on (that'll be more than the input token limits so it'll need an interactive browsing interface to find the relevant stuff) and some kind of supervised safe access to prod so it can investigate... Has anyone done that?
(DIR) Post #AaKKEaXDvE2HW2uSvY by simon@fedi.simonwillison.net
2023-10-01T12:57:22Z
0 likes, 0 repeats
@kitten_tech have you explored Code Interpreter / "Advanced Data Analysis" yet? That's the one that has access to a Python sandbox and can run code, re-execute if it sees an error etc. You can upload files to it, so I've experimented with uploading a zip full of code and prompting it to search and run that code itself, with occasional success. See also https://til.simonwillison.net/llms/code-interpreter-expansions
(DIR) Post #AaKKEbWCGWoCZ9R9xw by kitten_tech@fosstodon.org
2023-10-01T08:19:14Z
0 likes, 0 repeats
@simon but, I don't think that you're deluded about the value of LLMs. You just don't like doing the fun parts of programming, and want to build tools to automate away the things I like to do so I can't get paid to do them any more! I don't think you're bad. We just like different things.
(DIR) Post #AaKKiAZ52hFfetsqRc by kittylyst@mastodon.social
2023-10-01T08:36:00Z
0 likes, 0 repeats
@simon My concerns are twofold. First, which I think you addressed in your piece, is the "grain of salt", the skepticism that is required to take the output of LLMs and apply them. You and I know that what they are producing is "spicy autocomplete" or "eerily accurate madlibs", depending upon your viewpoint. But the general population does not. A population that has 20+% of people that do not understand how ranked-choice voting works. 2/
(DIR) Post #AaKKiBO7yu5KDDlc0m by kittylyst@mastodon.social
2023-10-01T08:39:03Z
0 likes, 0 repeats
@simon This population is likely to view the convincing, authoritative voice as something it is not (one reason why I think it was massively irresponsible to package LLMs with this style of interface, and especially with a style that contains faux-selfhood). FWIW, we've known about this problem since Eliza, and yet nothing was done. Second, and related, is the myth of algorithmic infallibility, e.g.: https://emptycity.substack.com/p/computer-says-guilty-an-introduction 3/
(DIR) Post #AaKKiCNSIt8pHQSabQ by kittylyst@mastodon.social
2023-10-01T08:45:55Z
0 likes, 0 repeats
@simon I'm not sure where we go from here - but in general, I feel like we should be defaulting towards caution in a way that we very much are not right now. One technical point that I do want to push back on - I don't see how LLMs evolve from their current state of statistical bins to a "model of the world", which they currently lack. Adding more parameters, and ingesting an increasingly polluted input set, just may not give any better results than today. /4
(DIR) Post #AaKKiD6TbV9bX3WXmC by simon@fedi.simonwillison.net
2023-10-01T13:02:39Z
0 likes, 0 repeats
@kittylyst I agree with everything you just said. I'm not convinced LLMs can "model the world" if we keep making them bigger, and I'm very worried that we won't find a way to teach people how not to fall victim to science fiction thinking about what these things are capable of. That's why I talk about LLMs and not "AI" - I'm trying to emphasize that this is mainly about a subset of machine learning - spicy autocomplete - not about Data/Jarvis/Skynet
(DIR) Post #AaKKzbOUXs5pn6F9WK by charles@akk.de-lacom.be
2023-10-01T08:36:05.623123Z
0 likes, 0 repeats
@simon I think before even asking if they're useful, there are a few more questions. Like "should we create yet another technology relying on exploitation?" or "are computers more important than human life?".
(DIR) Post #AaKKzcJD8zSmd0mRvc by simon@fedi.simonwillison.net
2023-10-01T13:05:04Z
0 likes, 0 repeats
@charles those are absolutely the conversations I want people to be focusing on. The problem is that critics lose credibility if they start their criticisms with "and the stuff isn't even useful!" - because the stuff IS useful. I want critics of this technology to have as much credibility as possible! There are so many problems we need to highlight here.
(DIR) Post #AaKLDAKJyfSrthlCHQ by simon@fedi.simonwillison.net
2023-10-01T13:08:31Z
0 likes, 0 repeats
I absolutely hate how LLMs imitate humans and use "I" pronouns and express their opinions - I talked about that here: https://simonwillison.net/2023/Aug/27/wordcamp-llms/#llm-work-for-you.036.jpeg
(DIR) Post #AaKLQNXQAZZwdVBiu8 by simon@fedi.simonwillison.net
2023-10-01T13:09:14Z
0 likes, 0 repeats
@emaad @slott56 @doug opening a package
(DIR) Post #AaKLbxr5U1vuSPuZF2 by simon@fedi.simonwillison.net
2023-10-01T13:10:36Z
0 likes, 0 repeats
@Techychap @impersonal @doug that's exactly what ChatGPT Code Interpreter / Advanced Data Analysis does - it has Python, but you can extend it to be able to run other languages too https://til.simonwillison.net/llms/code-interpreter-expansions
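The self-correcting loop described in these two posts - generate code, run it, feed any error back to the model, retry - can be sketched roughly as follows. This is a hypothetical illustration, not the actual Code Interpreter implementation; `generate_code` is a stand-in for a model call:

```python
import os
import subprocess
import sys
import tempfile

def run_with_retries(generate_code, task, max_attempts=3):
    """Generate code for `task`, execute it, and on failure feed the
    error output back to the generator for another attempt.

    `generate_code(task, error)` is a stand-in for a model call: it
    receives the task description plus the previous attempt's stderr
    (None on the first try) and returns Python source as a string.
    """
    error = None
    for _ in range(max_attempts):
        source = generate_code(task, error)
        # Write the generated source to a temporary file and run it
        # in a separate interpreter process.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, path], capture_output=True, text=True
            )
        finally:
            os.unlink(path)
        if result.returncode == 0:
            return result.stdout   # success: hand back the program's output
        error = result.stderr      # failure: loop again with the traceback
    raise RuntimeError(f"gave up after {max_attempts} attempts:\n{error}")
```

Swapping `sys.executable` for another interpreter or compiler is what lets the same loop "self-correct" in other languages.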
(DIR) Post #AaKLtApCW3PtbYEcwC by simon@fedi.simonwillison.net
2023-10-01T13:16:06Z
0 likes, 0 repeats
@Rainer_Rehak writing working (tested) code on my phone while walking my dog. Digital forensics against weird binary formats that I've not been able to open. Learning new things faster. Extracting structured data from unstructured documents. Brainstorming names for things. Understanding the jargon in academic papers. Using advanced GitHub features. Building better search tools. Writing software that uses AppleScript, Bash, jq and a myriad of other DSLs that I never learned.
(DIR) Post #AaKNOGGMRnAZOyKIzI by simon@fedi.simonwillison.net
2023-10-01T13:32:43Z
0 likes, 0 repeats
In case you're wondering why I keep actively engaging with AI skeptics: it's because I LIKE you if you're skeptical about AI. It means you have a critical mindset, and you care deeply about the ethics of technology. Most of the negative things people say about AI are 100% justified. I care that you have the best possible information to help inform your criticism, and that you can avoid arguments that are easily debunked so you can focus attention on the huge number of arguments that hold weight
(DIR) Post #AaKP6nhLb4KbnMkcYS by simon@fedi.simonwillison.net
2023-10-01T13:52:06Z
0 likes, 0 repeats
I sometimes wonder how much better our society would be if there had been better informed criticism of the automobile and highway systems during the decades that those were first being rolled out. (It's quite possible that the smartest, most well informed criticism in the world would have been ignored entirely)
(DIR) Post #AaKPgtto1G4rS5nAga by JMMaok@mastodon.online
2023-10-01T13:56:53Z
0 likes, 0 repeats
@simon I think most critics recognize LLMs will be good for some things. In particular, they will be good for productivity where high volume is good and low quality is OK. So we anticipate their use for things like creating marketing copy, rationalizing insurance claim denials, and generating various medium stakes application materials (yes, probably also for writing code). But if your job is more in the evaluation, decision, or coordination space, these make your job harder.
(DIR) Post #AaKPt0BoSQ7OR7z8sa by John@socks.masto.host
2023-10-01T13:57:36Z
0 likes, 0 repeats
@simon The US population was about 100M in 1920, which might have made a lot of things seem different.The humans to natural environment ratio was three times better?
(DIR) Post #AaKPt2MAOtaZ9oqRIO by John@socks.masto.host
2023-10-01T13:59:01Z
0 likes, 0 repeats
@simon I am not sure how to work that observation around modern AI though. Other than to say that a century from now people will be viewing it from an entirely different perspective. Either it will have worked, or it will be a funny thing that happened.
(DIR) Post #AaKQ7EqOlieTwPl0am by mhoye@mastodon.social
2023-10-01T14:00:09Z
0 likes, 0 repeats
@simon I think a lot of this has already happened; consider all the ai ethics researchers and their teams who got kicked to the curb as soon as there was money on the table.
(DIR) Post #AaKQLFI3uGWxNZNMDg by ZaneSelvans@social.coop
2023-10-01T14:01:13Z
0 likes, 0 repeats
@simon Have you read Peter D. Norton's book "Fighting Traffic?" It's about just this, in the period from 1915 to 1930.
(DIR) Post #AaKSR2l1cG5K9foc4G by webology@mastodon.social
2023-10-01T14:29:11Z
0 likes, 0 repeats
@simon I think there was criticism, but the benefit to people who literally never left their hometowns or states was rather life changing in ways that GPT chat bots are not.
(DIR) Post #AaKSexrabp7D1wWXc8 by liamcaffrey@data-folks.masto.host
2023-10-01T14:31:15Z
0 likes, 0 repeats
@simon https://www.theguardian.com/world/2023/sep/28/the-war-on-motorists-the-secret-history-of-a-myth-as-old-as-cars-themselves
(DIR) Post #AaKUcKBxKs2HJvv2x6 by ZaneSelvans@social.coop
2023-10-01T14:13:20Z
0 likes, 0 repeats
@simon in the 1920s city engineers already understood induced demand and the inefficiency and inequity of dedicating scarce urban public space to storing private automobiles. There were mass campaigns against giving the streets over to the "murder machines" that disproportionately killed children. Some major cities almost passed laws requiring speed governors that would limit cars to 20mph.
(DIR) Post #AaKUcKvgaqcDblJZEO by ZaneSelvans@social.coop
2023-10-01T14:23:29Z
0 likes, 0 repeats
@simon but the industry successfully fought back, creating an astroturf organization called AAA, funding a new urban planning school at UCLA that focused on auto centric design, creating the Institute of Transportation Engineers to provide traffic "experts" who would contradict existing city engineers, and running PR campaigns blaming children for their own deaths in the previously pedestrian dominated streets.
(DIR) Post #AaKUcLjJcKJY5gXCaW by ZaneSelvans@social.coop
2023-10-01T14:37:21Z
0 likes, 0 repeats
@simon Anyway, it's a great book! Lots of general relevance to how public policy and social norms are made during times of technological change, and I think more specific relevance to policymaking around autonomous cars today.
(DIR) Post #AaKUcMOR9RCw9Dm2gS by simon@fedi.simonwillison.net
2023-10-01T14:53:45Z
0 likes, 0 repeats
@ZaneSelvans It feels like there should be really valuable lessons from that for the current conversations about AI. Thanks for the recommendation, just bought the ebook
(DIR) Post #AaKUpeyPNjaVXOwP9U by simon@fedi.simonwillison.net
2023-10-01T14:56:27Z
0 likes, 0 repeats
@webology I guess the challenge back then - just like today - was how to both embrace the new potential of the automobile while finding the right balance against the negative consequences
(DIR) Post #AaKV2Ps2qPUoSj2zYG by danjac@masto.ai
2023-10-01T14:58:09Z
0 likes, 0 repeats
@simon @webology the problem then as now is that vested interests make those decisions, rather than an informed public. So for example, car manufacturers lobbying for the removal of trams from cities.
(DIR) Post #AaKW0v3Q2ANAYEDXY8 by webology@mastodon.social
2023-10-01T15:09:39Z
0 likes, 0 repeats
@simon There was a tremendous public benefit to mobilizing society. While I use LLMs and am skeptical of the gold rush we are seeing today to control LLMs, I do not see the same public benefit for everyone as other technical revolutions. The potential for misinformation and harm currently has no ceiling though.
(DIR) Post #AaKX7LovWwtfFz4NXM by ZaneSelvans@social.coop
2023-10-01T15:21:50Z
0 likes, 0 repeats
@simon Definitely some similar vibes in the pitch automakers were making to the public at the time. "Cars are modern. Cars are the future. You don't want to be against the FUTURE, do you?" Also "Hey, rich people like cars. And you want to be like rich people, right?" There's even a China angle... but it's very different than today.
(DIR) Post #AaKYIGPk4S3Bd53bSC by smurthys@hachyderm.io
2023-10-01T15:35:02Z
0 likes, 0 repeats
How much better our society would be if all electronics systems included a recycle/disposal plan. I mean, these systems are recent enough in human history that we should have been mindful about it, yet *even now* we don't require a recycle/disposal plan for electronics. 🤦♂️ Maybe some jurisdictions require a plan, maybe for certain electronics, but in general we are mindlessly flooding the planet with so much of it. Yay consumerism. 🥺
(DIR) Post #AaKZZXYXgsUrffyX2m by seeker@fosstodon.org
2023-10-01T15:49:18Z
0 likes, 0 repeats
@simon the most important criticism that's missing is the same as Google search vs Yahoo/dmoz directory from the 90s. How can anyone accept that a 7GB file I download from the Internet contains all the human knowledge without understanding what's inside it and how to break it down into 10x 1GB files representing different domains?
(DIR) Post #AaKZmLsARNyZCTGPrs by simon@fedi.simonwillison.net
2023-10-01T15:50:53Z
0 likes, 0 repeats
@webology Sure, there was huge public benefit. But did we really need to trash entire (mostly minority) neighbourhoods to build six-lane freeways? And did we really need to tear up the tram systems à la Who Framed Roger Rabbit? Europe kept its passenger train networks; America mostly dismantled them. I want us to be able to adopt this kind of technology without making those kinds of mistakes.
(DIR) Post #AaKa0OSlbgBtHv4vhY by dvogel@mastodon.social
2023-10-01T15:51:09Z
0 likes, 0 repeats
@simon The comparisons to crypto are not based on a lack of uses. They are based on a mismatch between the promises being made and the actual potential LLMs have. The similarity is in the resulting over-investment and inevitable bust. The actual uses for LLMs are manyfold over crypto, but the investment is also manyfold over crypto, adjusting for timeline. I've seen very little over-promising from you personally, though. So when people criticize LLMs, I don't think their criticisms are applicable to your work.
(DIR) Post #AaKeCFHTyF2S6OhuNc by Rainer_Rehak@mastodon.bits-und-baeume.org
2023-10-01T16:41:00Z
0 likes, 0 repeats
@simon Thanks, I see. :)
(DIR) Post #AaKfheSiUtw58DV9No by ncweaver@thecooltable.wtf
2023-09-30T17:45:56Z
0 likes, 0 repeats
@simon @jernej__s @axleyjc @interpipes @briankrebs But that is the same reason you worry about copyright: if you can get it to spit out a trade secret, it can spit out copyrighted data, so the output is derived from the copyrighted data.
(DIR) Post #AaKgn90OSwvxGHR7lA by axleyjc@federate.social
2023-10-01T17:10:02Z
0 likes, 0 repeats
@simon @jernej__s @interpipes @ncweaver @briankrebs This is definitely a concern, since the training data is encoded in the model and can be leaked. Not just trade secrets but also exposing internal details of software you would rather not. No need to make an adversary's job easier. Credentials that shouldn't be in code may be accidentally leaked... but it's hard to prove a negative: "there are no credentials at all in any of the code used to train the model"
(DIR) Post #AaKhMiREidZtI3xCEK by webology@mastodon.social
2023-10-01T17:16:45Z
0 likes, 0 repeats
@simon it's harder for me to draw parallels between the two. I understand your annoyance that you don't want people showing up to troll you when you talk about this tech. I don't see the global usefulness yet in LLMs, despite being hundreds of hours into exploring and using the tech. If you want to talk about the robber baron era of building and controlling America's rail systems versus how Big Tech is viewing this tech, then I'm all in on that comparison.
(DIR) Post #AaKhyRYBxJvzuyCHJ2 by simon@fedi.simonwillison.net
2023-10-01T17:23:32Z
0 likes, 0 repeats
@Rainer_Rehak right: whether or not the positives can outweigh the negatives is still a very open question! I'm trying to put my thumb on the scale in favour of the positives
(DIR) Post #AaKiA0ql3NoYaBOjI0 by kitten_tech@fosstodon.org
2023-10-01T17:23:43Z
0 likes, 0 repeats
@simon yeah, that's a step towards a useful debugging tool, just the things I work on don't generally fit in little Python VMs - what it needs is to be given SSH access to production servers, or the special hardware my app needs, or a box where the multi-terabyte test dataset lives, or whatever, instead :-) So, some system where you get to review and OK each command it wants to run, perhaps?
(DIR) Post #AaKkMCSxtG3jzdCxWa by paulmison@sfba.social
2023-10-01T17:49:53Z
0 likes, 0 repeats
@simon I have yet to really set this down, but I’d argue that private cars were (and are) a worse technology than atomic bombs / power. Maybe seeing Jeremy Irons read Autogeddon in BBC2 set me down that path.
(DIR) Post #AaKo1gAAgNNNpct6MC by simon@fedi.simonwillison.net
2023-10-01T18:30:49Z
0 likes, 0 repeats
@kitten_tech someone built that (a thing that shows you commands and gives you three seconds to cancel them before it runs them) as a 24 line PHP script on top of my LLM tool the other day! https://simonwillison.net/2023/Sep/6/hubcap/
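The pattern that script implements - print the proposed command, give the user a short cancellation window, then execute - might look something like this in Python. (A hypothetical sketch, not the actual hubcap code, which is PHP; `select()` on stdin is Unix-only.)

```python
import select
import subprocess
import sys

def enter_pressed_within(timeout):
    """Return True if the user hits Enter within `timeout` seconds.
    Uses select() on stdin, so this sketch is Unix-only."""
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        sys.stdin.readline()  # consume the line so it isn't left buffered
        return True
    return False

def confirm_then_run(command, timeout=3, cancel_check=enter_pressed_within):
    """Show a model-proposed shell command and give the user a short
    window to cancel before it actually executes.

    `cancel_check` is injectable so the cancellation mechanism can be
    swapped out (or stubbed in tests)."""
    print(f"About to run: {command!r} ({timeout}s to cancel, Enter aborts)")
    if cancel_check(timeout):
        print("Cancelled.")
        return None
    return subprocess.run(command, shell=True, capture_output=True, text=True)
```

The three-second delay is the entire safety model here: it turns "the LLM runs arbitrary shell commands" into "the LLM proposes shell commands a human can veto".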
(DIR) Post #AaKuPDpYH0c8dPqv0C by michaelrhanson@hachyderm.io
2023-10-01T19:42:36Z
0 likes, 0 repeats
@simon (1/2) It feels like one of those moments in the emergence of a new technology where practitioners are suddenly aware of a huge new range of features and products, because problems that had been intractable before are suddenly solvable. Rather like the web in 98-2002, or mobile in, say, 2010-2012.And of course we see the bad with the good, the way that the mindless hungry machine of commerce will roll forward over thoughtful concerns and humane design.
(DIR) Post #AaL3Wbz9lOlpAFwYgi by quoidian@mastodon.online
2023-10-01T21:24:39Z
0 likes, 0 repeats
@simon The beginning of the End was the invention of 'jaywalking.'
(DIR) Post #AaLAl7vQVYoVxQ4Eam by jpop32@mstdn.social
2023-10-01T22:45:54Z
0 likes, 0 repeats
@simon What I find funny is that I have the exact same reaction when people say that crypto is not useful for anything. :-) As your expertise in LLMs gives you the perspective and insight to make you cringe at generalizations about the space, consider that experts cringing at similar generalizations about the crypto space might actually have a point or two. But, hey, I am grateful for LLMs because they are siphoning the grifters, scammers and make-money-fast tourists away from crypto. :-)