Post AcyXbYP74eKkm2CeH2 by nazgul@infosec.exchange
 (DIR) Post #AcyNrGrN4VIPS9HBjc by neurovagrant@masto.deoan.org
       2023-12-19T16:24:42Z
       
       0 likes, 0 repeats
       
       Good @pluralistic column with a really good, digestible piece on the AI economic bubble (link below).

       It echoes a lot of my own thoughts on the state of the business - and the interesting part is, all those AI highway signs he talks about were up when I lived in San Mateo a few years ago.

       So we're not at the beginning of the "AI" bubble. We're probably closer to its end than its start. And if you follow the fortunes of "AI"-focused technology funds, they haven't rocketed up in years, just bloated laterally.

       https://pluralistic.net/2023/12/19/bubblenomics/#pop
       
 (DIR) Post #AcyNrHvf62K2lkI83s by mttaggart@infosec.town
       2023-12-19T16:32:22.632Z
       
       0 likes, 0 repeats
       
       @pluralistic @neurovagrant I have to say, I think this is a rare miss.

       While I agree about the high-risk applications and the low-value, risk-tolerant ones, I feel like a massive middle ground has been elided here for reasons that are unclear.

       We're already seeing content producers move to using LLMs to do this work in lieu of human labor. That decrease in cost is a value proposition that, as long as the cost of operating anything like a workable model doesn't explode, will continue to be appealing.

       Maybe, maybe there's a tipping point where we all collectively agree that LLM output isn't good enough to pass muster. But I find that highly unlikely.

       Instead, I suspect it will be good enough to slip by all but the closest scrutiny, and that is at once valuable for content creation farms and extraordinarily dangerous for everyone else.
       
 (DIR) Post #AcyQvHyTAa2PvkChua by pluralistic@mamot.fr
       2023-12-19T16:42:47Z
       
       0 likes, 0 repeats
       
       @mttaggart @neurovagrant Those are low-value applications. Individual "content producers" are low-waged and precarious, and while they may pay for automation, they can't afford much. Their bosses, meanwhile, will only pay a lot if they can fire their workforce - which would leave the news, textbooks, and other risk-intolerant applications in the automation trap.
       
 (DIR) Post #AcyQvJ0zIhe99qOETY by mttaggart@infosec.town
       2023-12-19T17:06:56.861Z
       
       0 likes, 0 repeats
       
       @pluralistic @neurovagrant I am not clear which "automation trap" you're referring to—sorry if I didn't do some prior reading!

       But I'm not sure that "the news" and other content sources are as risk-intolerant as is being claimed here. To say nothing of the means by which people ingest news—see 2016. It's not always via traditional outlets. There's plenty of money to be made in generating disinformation, which is both low-risk and lucrative at scale, a scale which is enabled by LLMs.

       But at any rate, when the bubble pops, if it pops, I contend the real casualty will be factual text as the majority of what's present on the internet.
       
 (DIR) Post #AcyRae340HCGGU4a2q by pluralistic@mamot.fr
       2023-12-19T17:08:58Z
       
       0 likes, 0 repeats
       
       @mttaggart @neurovagrant You can't ask an AI to produce the news unless you get a reporter to verify what the AI says. If that reporter does a good enough job to call it "the news," it will take nearly as many hours as reporting the news from scratch. The time-consuming, labor-intensive part of "the news" isn't writing the words.
       
 (DIR) Post #AcyRaeedkZFq91eacC by mttaggart@infosec.town
       2023-12-19T17:14:25.265Z
       
       0 likes, 0 repeats
       
       @pluralistic @neurovagrant Ideally, yes, but we are hardly in the era of responsible journalism. Call it what you want, but what people consume as "the news" falls far short of that standard now, and I see no reason to believe that access to LLMs would diminish that.
       
 (DIR) Post #AcyS2JmOxJUBVjTVJY by pluralistic@mamot.fr
       2023-12-19T17:15:29Z
       
       1 likes, 0 repeats
       
       @mttaggart @neurovagrant That's not "the news." That's just spam. It makes money from remnant ads at CPMs so low as to be nearly indistinguishable from zero. They do not have disposable income to buy high-dollar licenses. They are canonical low-value, risk-tolerant applications.
       
 (DIR) Post #AcySAHVTI5TAmHLxVg by mttaggart@infosec.town
       2023-12-19T17:20:51.461Z
       
       0 likes, 0 repeats
       
       @pluralistic @neurovagrant How would you classify something like, say, Gizmodo, or Ars Technica, or other "magazine" publications that have a demonstrated interest in cutting costs at the expense of quality content? Imagining a state of the art slightly improved from now, is there no incentive for them to use LLMs to reduce costs to produce content?
       
 (DIR) Post #AcySkf9Ur2MqXQQGTA by pluralistic@mamot.fr
       2023-12-19T17:22:34Z
       
       0 likes, 0 repeats
       
       @mttaggart @neurovagrant That is a gross mischaracterization of those two outlets. They employ both skilled investigative journalists and hard-nosed fact-checkers.

       I mean, this is a thing you are empirically wrong about.

       Source: I have written for both and have first-hand knowledge of their internal processes.
       
 (DIR) Post #AcySkfqOHYg8gSUWKO by mttaggart@infosec.town
       2023-12-19T17:27:25.681Z
       
       0 likes, 0 repeats
       
       @pluralistic @neurovagrant I know they do! And I like those outlets a lot, but don't they want to reduce costs?

       Regardless, I certainly don't have insider knowledge of these operations like you do, and so if there's really no way you can imagine LLMs being of value to them, or to any organization that produces content, then that's that.

       For what it's worth, I really, really hope you're right.
       
 (DIR) Post #AcySqtixaa5bTBz2Ku by sidereal@kolektiva.social
       2023-12-19T17:12:59Z
       
       0 likes, 0 repeats
       
       @pluralistic @mttaggart @neurovagrant This is why I don't understand what this type of AI is "for."

       People are like "you can use it to write a novel," but I actually like writing novels; I don't want to skip that part.

       People are like "you can use it to summarize a text," but how can I trust it?

       I just don't see a use case. I've been feeling like I live in The Emperor's New Clothes for like a year now.

       LLMs replace typing. The difficult part of writing/coding etc. is not the typing.
       
 (DIR) Post #AcySquZ4Spm04oMeYq by introversion@universeodon.com
       2023-12-19T17:23:45Z
       
       1 likes, 0 repeats
       
       @sidereal @pluralistic @mttaggart @neurovagrant
       > I just don't see a use case.

       The primary use-case is, “Make investors money.”

       A secondary use-case is, “Allow employers to replace employees with cheaper software.” That it can’t do that well isn’t a barrier for many.
       
 (DIR) Post #AcyTN27zsZjCnh19v6 by SidFudd@4bear.com
       2023-12-19T17:32:38Z
       
       0 likes, 0 repeats
       
       @sidereal @pluralistic @mttaggart @neurovagrant I think I just figured out the use case for AI - it's strictly for entertainment purposes. Accuracy: nil. Reliability: nil. But wow, what results! AI is a psychic hotline.
       
 (DIR) Post #AcyTN3J1V48sSBBTkG by mttaggart@infosec.town
       2023-12-19T17:34:21.864Z
       
       0 likes, 0 repeats
       
       @SidFudd @sidereal @pluralistic @neurovagrant This is mostly on-point, except: think about how many people make life-impacting decisions based on mystical thinking.

       The LLMs are oracles at Delphi, and the text they generate will be trusted enough by enough people to do material harm.
       
 (DIR) Post #AcyTwntUHK1D4zHtEu by LesserAbe@social.coop
       2023-12-19T17:40:33Z
       
       1 likes, 0 repeats
       
       @sidereal @pluralistic @mttaggart @neurovagrant What if you had an army of toddlers at your disposal? They can mostly understand language and follow directions with varying degrees of success. If we're talking unlimited toddlers, then smart people will figure out ways to use them and get work done. The limit in the case of AI is the support cost for servers etc. The real miss is assuming there will be no improvement in cost. We can run LLMs on commodity devices because people are iterating.
       
 (DIR) Post #AcyTyvMlT4H8pizcYq by troglodyt@mastodon.nu
       2023-12-19T17:40:59Z
       
       1 likes, 0 repeats
       
       @sidereal @pluralistic @mttaggart @neurovagrant in retail we've had this for a long time. E.g. Chinese corporations that buy or snatch a set of images, like a stock image set of hundreds or thousands of symbols, rent a factory, and produce ten thousand t-shirts per symbol and dump it all into a webshop, Hakenkreuz included.

       It's coming to every information sector, except a few niches for nerds.
       
 (DIR) Post #AcyW9S6smWeDBsiDKK by pluralistic@mamot.fr
       2023-12-19T18:04:15Z
       
       1 likes, 0 repeats
       
       @mttaggart @neurovagrant Yes, they want to reduce costs, but only in service to making more money. If they cut costs and this relegates them to running bottom-feeder remnant ads at low CPMs, they will see a net loss.
       
 (DIR) Post #AcyWHsVkVUhZOAMRYe by mttaggart@infosec.town
       2023-12-19T18:07:02.916Z
       
       0 likes, 0 repeats
       
       @pluralistic @neurovagrant Agreed. So it seems like at the end of the day, the issue is the threshold for acceptable content. I suspect my estimated threshold for what people will tolerate (and share) is lower than yours. And again, I really hope you're right.
       
 (DIR) Post #AcyWqA01zYYvTRs8HY by Shivaekul@infosec.exchange
       2023-12-19T18:12:56Z
       
       1 likes, 0 repeats
       
       @pluralistic @mttaggart @neurovagrant I've seen multiple organizations cost-cut themselves into oblivion, including my local newspaper and several of my old favourites, so I don't know why you think they won't do the same with LLMs. It will be counterproductive, sure, but the people with money and/or in charge either have different priorities or are just that dumb.
       
 (DIR) Post #AcyX8peRrzMTrqgFPc by darkuncle@infosec.exchange
       2023-12-19T18:15:19Z
       
       1 likes, 0 repeats
       
       @mttaggart @pluralistic @neurovagrant the encouraging flipside here is that as the deluge of disinfo and generated garbage content grows, the value of human-curated, trustworthy content also goes up (as its rarity increases).
       
 (DIR) Post #AcyXCmr0DFyiLjREUi by mttaggart@infosec.town
       2023-12-19T18:17:20.101Z
       
       0 likes, 0 repeats
       
       @darkuncle @pluralistic @neurovagrant Yes, indeed, although the same number of needles (or fewer) in an exponentially larger haystack is not a great state of affairs.
       
 (DIR) Post #AcyXbYP74eKkm2CeH2 by nazgul@infosec.exchange
       2023-12-19T18:19:46Z
       
       1 likes, 0 repeats
       
       @pluralistic @mttaggart @neurovagrant And furthermore, correcting and proofing generated content that is mostly correct, but wrong in weird ways, turns out to be a job that humans are not good at. This is a similar problem to Tesla autopilot. The technology is good enough to bore us into not noticing when it’s wrong.
       
 (DIR) Post #AcyXdU1sJJ2zmiFmiG by darkuncle@infosec.exchange
       2023-12-19T18:20:40Z
       
       1 likes, 0 repeats
       
       @mttaggart I have a pretty strong feeling that after the hype cycle dies down, and only the use cases with actual legitimate value remain, we will eventually see less garbage output. (Risk: quality improves as well, to the point that it becomes very difficult to sort the generated stuff from the real stuff, and eventually people start asking what even is the difference and how do you define "real"…)
       
 (DIR) Post #AcyY5j2o4vCqSf9klU by mttaggart@infosec.town
       2023-12-19T18:27:16.306Z
       
       0 likes, 0 repeats
       
       @darkuncle I just don't get the obsession with quality being some barrier here. I can use ChatGPT today to make a fake article about Joe Biden being found complicit in Hunter Biden's business dealings. And it'll be bullshit, but it'll exist. I can create a fake "incriminating" photo that goes along with it. And then I can share it with an inflammatory headline and it'll go halfway around the world before the fact-checkers get out the door.

       The damage, as the Russians know all too well, is the erosion of trust by flooding the zone. And this technology gives every actor in that space an orbital cannon to deploy for that end.
       
 (DIR) Post #AcyYxVbUGZx4Ph1WhE by RyunoKi@layer8.space
       2023-12-19T18:36:36Z
       
       1 likes, 0 repeats
       
       @darkuncle @mttaggart I hope before we reach that point, the impact on the environment (energy, water) becomes so grave that Big Data isn't sustainable.
       
 (DIR) Post #AcyZfCLBFNY3PbEFsm by mttaggart@infosec.town
       2023-12-19T18:44:52.967Z
       
       0 likes, 0 repeats
       
       @RyunoKi @darkuncle I mean, I'd like to avoid that impact altogether...

       And it kinda seems like, as we get smaller models trained, some of this is ameliorated. Also, we don't know what more specialized hardware will do to the compute demands.

       But broadly, yes, I'd love for this to stop before we get to an AI-induced energy crisis.
       
 (DIR) Post #Acyalb9nzbNS4RWYS0 by RyunoKi@layer8.space
       2023-12-19T18:56:54Z
       
       1 likes, 0 repeats
       
       @mttaggart @darkuncle Realistically, business will only start to care once we stop externalising the effects and make them pay.

       We have CO2 reports (that often look bleak) and Corporate Social Responsibility. That might be a start.

       A friend introduced me to the concept of #FrugalComputing
       https://limited.systems/articles/frugal-computing/
       
 (DIR) Post #Acyao4pEeqQYfF5Ilk by mttaggart@infosec.town
       2023-12-19T18:57:41.828Z
       
       0 likes, 0 repeats
       
       @RyunoKi @darkuncle This is great; thank you!
       
 (DIR) Post #Acyb016AL9fIa2hGtc by darkuncle@infosec.exchange
       2023-12-19T18:59:08Z
       
       1 likes, 0 repeats
       
       @mttaggart agreed - based on the utter nonsense people have been willing to believe previously, it may be completely unnecessary, but it will make things worse. (But this also goes to my earlier point about trusted sources becoming more valuable - but untrustworthy sources that are cited because of ideological bias will probably just use Gen AI to ramp up their content production as well.)
       
 (DIR) Post #AcybQIqwx40lAs78To by RyunoKi@layer8.space
       2023-12-19T18:58:31Z
       
       0 likes, 0 repeats
       
       @mttaggart @darkuncle As a web developer, I consider this interesting:
       https://github.com/thegreenwebfoundation/co2.js/
       
 (DIR) Post #AcybQJkFdSFNwNzIg4 by RyunoKi@layer8.space
       2023-12-19T19:00:18Z
       
       1 likes, 0 repeats
       
       @mttaggart @darkuncle As bloggers we could include API checks to
       https://www.thegreenwebfoundation.org/green-web-check/
       to nudge more towards sustainable hosting.
       
 (DIR) Post #Acybo9hWd8Ghjw4zcu by RyunoKi@layer8.space
       2023-12-19T19:05:11Z
       
       1 likes, 0 repeats
       
       @mttaggart @darkuncle You're welcome. There are papers on the subject if you want to drill deeper.