Post Aizngf19FawsbxtMJs by demiurg@fosstodon.org
 (DIR) More posts by demiurg@fosstodon.org
 (DIR) Post #AixfhWOXzO2uV09wTw by Wolven@ourislandgeorgia.net
       2024-06-15T19:30:04Z
       
       1 like, 0 repeats
       
       First Sundar Pichai, now Tim Cook, both admitting that generative "AI," by dint of its very foundations (cf. "Bias Optimizers"), will not and cannot stop bullshitting (cf. "On Bullshit Engines"). Look at me. Listen to me: If someone is selling you a product they claim is smarter and better and faster than you, but it literally cannot be guaranteed to give you simple true facts about consensus reality when asked, then that product DOES NOT WORK and SHOULD NOT BE USED. https://futurism.com/the-byte/tim-cook-admits-apple-ai-stop-lying
       
 (DIR) Post #Aixg2LS3s4wkdkjThA by wa7iut@mastodon.radio
       2024-06-15T19:33:56Z
       
       0 likes, 0 repeats
       
       @Wolven It’s not hard to understand really 😱
       
 (DIR) Post #AixiICtrdKLIexQcMq by Wolven@ourislandgeorgia.net
       2024-06-15T19:59:14Z
       
       0 likes, 0 repeats
       
       Use generative "AI" to spitball, hypothesize, overcome the tyranny of the blank page? I mean, if the environmental costs weren't astronomical and the training corpora weren't largely stolen, then yeah, sure. Use it for facts? Knowledge? To fully *Replace* thought, feeling, and creativity? Absolutely not.
       
 (DIR) Post #AixiSxwPafhTS0kZRQ by nora@blob.love
       2024-06-15T20:00:35Z
       
       0 likes, 0 repeats
       
       @Wolven yeah i think summaries might be an actual use case, for example
       
 (DIR) Post #AixmDDfzienPR5on4a by fabiocosta0305@ursal.zone
       2024-06-15T20:43:05Z
       
       0 likes, 0 repeats
       
       @Wolven for minus one people astonishment
       
 (DIR) Post #Aixme8VP3J0xDYhOmO by DemocracyMattersALot@mstdn.social
       2024-06-15T20:47:56Z
       
       0 likes, 0 repeats
       
       @Wolven Unless you like spaghetti cooked in gasoline.
       
 (DIR) Post #Aixmg1Zhn6yWImbzEG by glitzersachen@hachyderm.io
       2024-06-15T20:48:01Z
       
       0 likes, 0 repeats
       
       @Wolven I do not tire of pointing out that it's not lying, because it is not saying anything. "Saying s.th." requires a mind behind it that wants to express a thought (or achieve an effect). So-called "AIs" of 2023/2024 are just (mindlessly) producing statistically likely streams of words. Sometimes the listener/reader thinks they make sense. Sometimes they don't. When the produced (and the keyword is produced) word stream does not make sense, people say, "the AI is lying"...
       
 (DIR) Post #Aixn63fAeMMEJ7a8pM by glitzersachen@hachyderm.io
       2024-06-15T20:49:13Z
       
       0 likes, 0 repeats
       
       @Wolven ... But indeed it is not lying. It's only producing. Nonsense, in this case.
       
 (DIR) Post #Aixn9rl0rDG2Xa88hM by Wolven@ourislandgeorgia.net
       2024-06-15T20:53:13Z
       
       0 likes, 0 repeats
       
       @glitzersachen I did not write the headline, nor did I repeat it. I stated, as I have stated since the beginning of this particular "AI" phase, that they are bullshitting, which has a very specific technical meaning in relation to philosophy of knowledge and language: https://www.youtube.com/watch?v=9DpM_TXq2ws
       
 (DIR) Post #AixnZoK2FeiZPqsoXA by glitzersachen@hachyderm.io
       2024-06-15T20:58:25Z
       
       0 likes, 0 repeats
       
       @Wolven I know the meaning (Harry Frankfurt, right?). While I think it describes the form of the produced language well, I still think it's a term that should not be applied here. "Bullshitting" requires an intention behind it (that is: to manipulate the listener into a certain state of mind), whereas the AI has no intention. (It's actually the ultimate ZEN practitioner, totally void of any needs and drives and interpretation of the world. It just reacts.) But that's only my opinion.
       
 (DIR) Post #AixogDuW7wpOT7ZKhE by zombiewarrior@social.vivaldi.net
       2024-06-15T21:10:35Z
       
       0 likes, 0 repeats
       
       @Wolven a thought occurs to me (it happens) that a lot of people are focusing their efforts now on getting AI to do 'x', where, it's entirely possible that a non-AI solution, something conventional, would achieve the same or much better success with the same resources applied to its development
       
 (DIR) Post #AixomX9bUgDlt8dOzo by PJ_Evans@mastodon.social
       2024-06-15T21:11:29Z
       
       0 likes, 0 repeats
       
       @Wolven If you can't get your AI to stop lying, then stop putting your AI in everything. (It will also save you a lot of money for cooling and electricity.)
       
 (DIR) Post #AixrekJuxQasGjR9Ye by mousefriend@this.mouse.rocks
       2024-06-15T21:44:10Z
       
       0 likes, 0 repeats
       
       @Wolven It used to be that the absolute baseline functioning of any search engine (which LLMs functionally are) was its ability to accurately return relevant search results. It's wild to us that anybody would accept a calculator that cannot accurately do simple arithmetic, and that is exactly where "AI" and LLMs are currently.
       
 (DIR) Post #Aixs1HtKoxzsl7kfDN by Hcobb@spacey.space
       2024-06-15T21:48:10Z
       
       0 likes, 0 repeats
       
       @Wolven Off to write a SciFi script about a machine overlord that is humored because it is so hilariously out of touch with reality. Hence it has nightly broadcasts and legions of fanatics.
       
 (DIR) Post #Aixu16wzhPSmjbVUFk by independentpen@mas.to
       2024-06-15T22:10:37Z
       
       0 likes, 0 repeats
       
       @Wolven We talked about this in cultural anth too, informed by Haraway's cyborg theory. The students were AI-critical.
       
 (DIR) Post #Aixwemu1r6r636APWi by rfg@bark.lgbt
       2024-06-15T22:40:05Z
       
       0 likes, 0 repeats
       
       @Wolven All AI operations keep getting exposed as data-vacuuming user surveillance, yet there's no way to stop governments and the press from dancing around this garbage pile.
       
 (DIR) Post #AixySUKNI7dwsAkjWS by Bearded_Pip@c.im
       2024-06-15T23:00:12Z
       
       0 likes, 0 repeats
       
       @Wolven @fifilamoura you can, very easily. Turn it off and the lies stop.
       
 (DIR) Post #AixygHhJ2x0pkt3cTA by justafrog@mstdn.social
       2024-06-15T23:02:52Z
       
       0 likes, 0 repeats
       
       @Wolven It really isn't fit for purpose and I wish they'd just drop it like Eliza in the 60s.
       
 (DIR) Post #Aixzu3xj0hcDAGs4J6 by BlackDogActual@mastodon.sdf.org
       2024-06-15T23:16:30Z
       
       0 likes, 0 repeats
       
       @Wolven So. AI and Trump have that in common
       
 (DIR) Post #Aiy2TCeOtnxM9eiSZM by crschmidt@better.boston
       2024-06-15T23:45:14Z
       
       0 likes, 0 repeats
       
       @Wolven Search has never been guaranteed to give you simple true facts about consensus reality, back to the Yahoo and AltaVista days. I think the idea that search engines do not work and should not be used as a result is unreasonable.
       
 (DIR) Post #AiyEJoNWyJ69ih76e0 by crumbleneedy@aus.social
       2024-06-16T01:58:01Z
       
       0 likes, 0 repeats
       
       @Wolven to everyone responding to this with 'and you're surprised?' and other patronizing bullshit - spare us bro
       
 (DIR) Post #AiyMehP0u33DnZHIh6 by crumbleneedy@aus.social
       2024-06-16T02:00:38Z
       
       0 likes, 0 repeats
       
       @crschmidt @Wolven when did search return 'put glue on your pizza' as a recipe before now?
       
 (DIR) Post #AiyMei9S7OCK7b0O4u by crschmidt@better.boston
       2024-06-16T03:21:28Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven The standard set was "guaranteed to give you simple true facts about consensus reality". Google search has always had the ability to return bad results; and those results have appeared normative at least as long as Featured Snippets have been around (so at least a decade). Bing implemented features to display StackOverflow snippets directly in the search result (and sometimes picked the wrong answer). There are no guarantees that search is returning the correct thing.
       
 (DIR) Post #AiyMeiiY0uGpsRQPmS by crschmidt@better.boston
       2024-06-16T03:22:50Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven For example, the answer to "How many rocks should you eat per day" has had a Featured Snippet telling you to eat 3 servings of rocks per day for years. That isn't new, and it isn't AI (even if people see it surfaced through AI). But the fact that search can give you bad results doesn't mean that using search is worse, overall, than not using search.
       
 (DIR) Post #AiyMkXeS4OUhIHHk12 by Wolven@ourislandgeorgia.net
       2024-06-16T03:32:25Z
       
       0 likes, 0 repeats
       
       @crschmidt You've read Algorithms of Oppression, right? Because the use of algorithms to arrange search results due to profit motive has been an issue for a long time, and everything I said about LLMs? Yeah, I and many others long ago wrote about how that first applied to the original parameters of search. But you apparently work for Alphabet/Google; you know all this. Bye. @crumbleneedy
       
 (DIR) Post #AiyQD6Zo72BlaCQNCC by Wolven@ourislandgeorgia.net
       2024-06-16T04:11:20Z
       
       0 likes, 1 repeats
       
       And yes, the use of algorithms and machine learning to arrange search results based on profit motive has been a big problem for consensus knowledge for a long time, and everything here [https://ourislandgeorgia.net/@Wolven/112622292468046430], Safiya Noble and others long ago first wrote about as applying to said parameters of search. And LLM integrations make it Worse.
       
 (DIR) Post #AiyUWiI6Vyhhs3kW9I by davidradcliffe@mathstodon.xyz
       2024-06-16T04:59:42Z
       
       0 likes, 0 repeats
       
       @Wolven In my experience, generative models are most useful for transforming text from one format to another. I've used it successfully to create TikZ figures, even though I don't know the syntax. I describe the figure in precise detail, and it converts my description into code, usually correctly. I don't use it for anything that requires creativity.
       
 (DIR) Post #AiyZVymYDPuiUxSZcm by Wolven@ourislandgeorgia.net
       2024-06-16T05:55:35Z
       
       0 likes, 0 repeats
       
       @melanie Probably. A lot of people on the social medias don't read things very carefully.
       
 (DIR) Post #AiyZrHTltYo6ZFpjFo by tanyakaroli@expressional.social
       2024-06-16T05:59:26Z
       
       0 likes, 0 repeats
       
       @Wolven Exactly! I would absolutely use these tools to help me rewrite my own texts (especially helpful for us who don’t have English as our first language but have to write academic papers and research proposals in English all the time, which obviously puts us at a great disadvantage) IF they didn’t have such a horrendous impact on the environment and IF they weren’t built in such unethical ways. But they are, so I don’t.
       
 (DIR) Post #Aiye7Hzy8E8od0mYt6 by falcennial@mastodon.social
       2024-06-16T06:47:08Z
       
       0 likes, 0 repeats
       
       @Wolven nice, true, but I am extremely familiar with garbage, I have seen and produced much in my time. no such imbecile exists that will ever need the nature of garbage explained to them, or instruction in recognising it. least I really hope not.
       
 (DIR) Post #Aiyi5QsAZAinO9VJzM by oliver_schafeld@mastodon.online
       2024-06-16T07:31:37Z
       
       0 likes, 0 repeats
       
       In other words: The portion of internet/social media users that is already too lazy to do research or argue rationally (follow a link, read an article, provide a link) and rather just believes trolls/influencers/celebrities will soon be lied to instead by an AI, prompt-engineered by trolls/influencers/media oligarchs. 😑 If it wasn't for the resource costs and the "copyraid": same, same, but different.
       
 (DIR) Post #AiytSp5RntJudy4SUC by idlemoor@fosstodon.org
       2024-06-16T09:39:07Z
       
       0 likes, 0 repeats
       
       @Wolven I think the learning point widely missed is that we can now see that human intelligence is qualitatively overrated, and GenAI, with its deep propensity for worthless bullshit, is faithfully reflecting how most humans have been picking up and passing on their bullshit for aeons. If we as a species internalise this, instead of thinking of ourselves as gods worthy of emulation by GenAI, we might make a bit of progress.
       
 (DIR) Post #Aiz32wBPnRo2fH9qGe by simon_brooke@mastodon.scot
       2024-06-16T11:26:28Z
       
       0 likes, 0 repeats
       
       @Wolven IT'S NOT LYING. To lie is to know what the truth is, and to deliberately say something else. These so-called 'AI' systems are incapable of lying, because they have no semantic layer and consequently don't even have a concept of truth. They produce probable sequences of tokens, that's all. All of the tokens in the output they produce are meaningless -- even if they look like words.
       
 (DIR) Post #Aiz3unGVAYqmdyDYGW by Jennifer@bookstodon.com
       2024-06-16T11:36:11Z
       
       0 likes, 0 repeats
       
       @Wolven big tech really needs to just stop with AI assistant tech. It doesn't work and it's going to completely undo any progress we've made trying to deal with climate change.
       
 (DIR) Post #Aiz6j4u83rgrjorGds by adredish@neuromatch.social
       2024-06-16T12:07:43Z
       
       0 likes, 0 repeats
       
       @Wolven @inthehands Be careful using generative "AI" to spitball and vomit-draft initial versions. The danger is that many of the biases and falsehoods will be baked into that initial draft. Yes, it is possible to correct them, but it can also be hard to see them in that initial text. Like all the people arguing that fixing coding bugs is harder than generating good code in the first place, it is often hard to see implicit biases hidden in text. What I have seen work is to write the initial text yourself and then ask generative "AI" to do something to the text, which gives you a new perspective. This is analogous to the classic poetry-writing technique of inverting every word in a poem, then taking both together and seeing what you make of the combination. (Of course, the concerns about environmental costs and stolen corpora still hold.)
       
 (DIR) Post #Aiz9mufuKrqJ26j6Yq by djvanness@mastodon.social
       2024-06-16T12:42:00Z
       
       0 likes, 0 repeats
       
       @Wolven This doesn't just apply to AI, by the way. Apply this rule, and most consultants would go out of business.
       
 (DIR) Post #AizAEzOkMtaTn35U1Y by crschmidt@better.boston
       2024-06-16T12:47:03Z
       
       0 likes, 0 repeats
       
       @Wolven @crumbleneedy I have not read Algorithms of Oppression, though I agree with the broad summary as I understand it. But if you believe that search engines "SHOULD NOT BE USED" to find information on the internet, what do you think people should do in order to find information on the internet? My opinion is that any time you're using a tool, you need to know the limitations of using that tool towards your goals. Search engines are a tool, like any other, in that regard.
       
 (DIR) Post #AizC43p1mPpSEXISDg by rachaelspooky@cyberpunk.lol
       2024-06-16T13:07:24Z
       
       0 likes, 0 repeats
       
       @Wolven but tech would never lie to us! they are purely motivated by the pursuit of knowledge and a better tomorrow! ugh hold on i accidentally glued my daily rock to my pizza
       
 (DIR) Post #AizD2z9VkvRPPove1g by alexbfree@mastodon.social
       2024-06-16T13:18:34Z
       
       0 likes, 0 repeats
       
       @Wolven this is particularly scary because in today’s fake news world, ‘sometimes right’ is not good enough at all. Society needs trustworthy information. 100% trustworthy. I worry that companies like Apple and Google presenting AI responses as ‘answers’ makes them look like fact, diluting the amount of true and reliable information even more…
       
 (DIR) Post #AizSRcCTj8h6maWFMm by ceulig@rollenspiel.social
       2024-06-16T16:11:03Z
       
       0 likes, 0 repeats
       
       @Wolven Earlier today: Hey, microsoft copilot gpt-4, can you handle the reasonably simple numeric task of unit conversion between imperial and metric so I can write a joke?
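       (For contrast with the post above: imperial-to-metric conversion is deterministic arithmetic that needs no model at all. A minimal sketch, assuming the standard exact definitions of the units; the function names are illustrative.)

```python
# Unit conversion is exact arithmetic; no LLM required.
MILE_IN_KM = 1.609344      # international mile, exact by definition
POUND_IN_KG = 0.45359237   # avoirdupois pound, exact by definition

def miles_to_km(miles: float) -> float:
    """Convert miles to kilometres."""
    return miles * MILE_IN_KM

def pounds_to_kg(pounds: float) -> float:
    """Convert pounds to kilograms."""
    return pounds * POUND_IN_KG

def fahrenheit_to_celsius(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

print(round(miles_to_km(100), 4))            # 160.9344
print(round(fahrenheit_to_celsius(212), 1))  # 100.0
```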
       
 (DIR) Post #AizT9lITGclVjlNBCK by Wolven@ourislandgeorgia.net
       2024-06-16T16:19:03Z
       
       0 likes, 0 repeats
       
       @freezeanopensore Did you seriously just come into the mentions of someone you *Do Not Know* to berate them for a stylistic writing choice within their *Own Space*, when you could have just… gone about your business and lived your life… and then call *That Person* an asshole?That takes some serious lack of introspection and gall, I gotta say. Good bye.
       
 (DIR) Post #AizTLKzqsQUjdwBRrM by idlemoor@fosstodon.org
       2024-06-16T09:31:20Z
       
       0 likes, 0 repeats
       
       @melanie @Wolven his second sentence is "Look at me" ... which ofc affirms the point about people on social media.
       
 (DIR) Post #AizTixUduh0XhNqi6i by crumbleneedy@aus.social
       2024-06-16T02:07:22Z
       
       0 likes, 0 repeats
       
       @glitzersachen @Wolven you (glitzersachen) are confusing the lack of intention in the model/engine with the clear intent of the *product*, which is to provide a presumably accurate, factual, relevant *answer* to a question - that is the context in which the words are *intended* to be understood - and as such, the product, as Dr Williams correctly points out, is pure pernicious bullshit.
       
 (DIR) Post #AizTiy9lRntvkv5YCe by glitzersachen@hachyderm.io
       2024-06-16T12:32:10Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven The product cannot have intent. The listener interprets intent into the produced words, but actually it's only the listener bullshitting himself... BTW (1) I am not confusing anything here. I am intentionally talking about what AI is (and is not) currently, and not about properties of the produced word stream. It's IMHO a behaviorist mistake to focus on the output and think this can be fixed without understanding the producing system. ...
       
 (DIR) Post #AizTj0udCWZwIr20X2 by glitzersachen@hachyderm.io
       2024-06-16T12:35:58Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven ... We don't debug "normal" software this way and I don't think debugging so-called "AI" on this basis will get us anywhere. (2) I am not saying Dr Williams is wrong. I only state an additional opinion: that anthropomorphic terms need to be applied to AI with a big grain of salt. (3) You seem to somewhat misunderstand how dangerous and corrosive to human minds and society I consider the output of LLMs used as pseudo-AI. While I laud Dr Williams' judgment ...
       
 (DIR) Post #AizTj3pmL1TnMfmfsu by glitzersachen@hachyderm.io
       2024-06-16T12:39:11Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven ... as a big step in the right direction, I think the "AI" output is even more corrosive than "pure pernicious bullshit". The anthropomorphic term bullshit, though, evokes the impression that in the end there is an entity behind the utterances that could be reasoned with and that could be "cured" of its bad behavior. But there is not, and the problem is (literally) systemic. It's the impression explained above that I want to avoid by advising against using this term. ...
       
 (DIR) Post #AizTj6TYW6UBYcPTA8 by glitzersachen@hachyderm.io
       2024-06-16T12:42:15Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven ... Definitely not because I want to defend AI as not "bullshitting". Quite the opposite. "AI" systems are things. They are created by their creators to maintain some illusion that you are talking to an intelligent (quasi-human) partner. This is often tempting. We should not fall for it. They are things. Defective things created with malicious (!) intent.
       
 (DIR) Post #AizTmQiDjacNBkr5cW by thetaphi@noc.social
       2024-06-16T12:34:07Z
       
       0 likes, 0 repeats
       
       @nora @Wolven bundestagszusammenfasser.de uses GPT-4 for summaries of bills introduced in the German parliament. Quite neat, and appropriate, since most of the language used has a strained relationship with reality to begin with. For almost all other use cases I can think of, LLMs are bullshit. https://link.springer.com/article/10.1007/s10676-024-09775-5
       
 (DIR) Post #AizWCFlxELxwhH2jZo by Wolven@ourislandgeorgia.net
       2024-06-16T16:52:54Z
       
       0 likes, 0 repeats
       
       @falcennial 1) Roll it back with the ableist language. 2) Assuming that you can distinguish *All* faulty information produced and presented in an authoritative and convincing-sounding way from genuine, real information is to assume that you know everything. People who don't know anything about mushrooms can very easily be misled by an official-seeming "field guide." Hell, people with years, decades of training can be misled by a system that just happens to confirm their biases. Never assume you're somehow infallible.
       
 (DIR) Post #Aizf56klGT7uHi9WqG by Wolven@ourislandgeorgia.net
       2024-06-16T18:32:41Z
       
       0 likes, 0 repeats
       
       Since this has escaped containment, here's a note that "Cf." is an abbreviation that means "compare," also often used to mean "see also." So when I say 'Cf. "Bias Optimizers" … "On Bullshit Engines,"' I'm not just throwing out phrases. I'm directing you to compare what I said to two Specific things:
       https://www.americanscientist.org/article/bias-optimizers
       https://www.youtube.com/watch?v=9DpM_TXq2ws
       
 (DIR) Post #Aizju7r9K4mmI6iu9o by demiurg@fosstodon.org
       2024-06-16T19:26:42Z
       
       0 likes, 0 repeats
       
       @Wolven It is disruptive in the field of NLP. We measured in one case that it was 20% more accurate than humans for entity-recognition tasks. We automate workflows for our customers and they see a big ROI. It is here to stay, and it is more about automation than generating 'truth'. 'Truth' is also not well defined from a philosophical perspective. You always need to check your sources, but I think you are aware of this.
       
 (DIR) Post #AizlyZRFNHfzL3rUK8 by Wolven@ourislandgeorgia.net
       2024-06-16T19:49:52Z
       
       0 likes, 0 repeats
       
       @demiurg I'd like to see those experiments, and also what the comparative parameters were. Also I'd like to see the data preponderance of reinforced biases and disparities in those workflows, because it sounds like you think the automation of processes and the disposition of knowledge and belief and the implication of those two sets of things on people's lives don't have anything to do with each other, when actually they have Everything to do with each other.
       
 (DIR) Post #Aizm7QocntaBCY1PU0 by crumbleneedy@aus.social
       2024-06-16T17:40:55Z
       
       0 likes, 0 repeats
       
       @glitzersachen @Wolven 'the product cannot have intent' the product is *intended* by its designer to be used, understood, app/comprehended by the user in specific ways, and users will generally do so. or maybe you brush your teeth with a hammer.  muting now. good luck!
       
 (DIR) Post #Aizm7RxWYIIMkRC1zc by glitzersachen@hachyderm.io
       2024-06-16T19:36:39Z
       
       0 likes, 0 repeats
       
       @crumbleneedy @Wolven Man, what a chump. "Intent of the product" != "what the product is intended to do by its designer". But thanks for confirming my initial impression of your contribution.
       
 (DIR) Post #Aizngf19FawsbxtMJs by demiurg@fosstodon.org
       2024-06-16T20:09:05Z
       
       0 likes, 0 repeats
       
       @Wolven These are not experiments but numbers from a production ticketing system. The company measured errors for the same task before and after the automation. I am aware of the impact of automation, and I have worked for several years in the field. I am also sceptical about the technology and how it is handled. The observations I make are empirical, though. I had the impression you only talk about LLMs not generating facts, while for me this is clear and not the use case where they have the biggest impact.
       
 (DIR) Post #AizxE4enMkm9t2VI0m by dagnymol@pony.social
       2024-06-16T00:35:47Z
       
       0 likes, 0 repeats
       
       @nora @Wolven I know people who have used GenAI to take things they understand and format them into Appropriate Business Speak and it does that pretty well.  The overzealous mansplaining makes it worthless for anything where you're depending on it for knowledge.
       
 (DIR) Post #AizxE5pozFBpXWfbpw by qgustavor@urusai.social
       2024-06-16T21:24:55Z
       
       0 likes, 0 repeats
       
       @dagnymol @nora @Wolven I notice it as people trying to dismiss the technology by ignoring its strengths and focusing on its weak points, even if there are people working on those. Why? I guess a lot of people are genuinely trying to warn others about it being badly used (which is really common!), but I think a lot of people are afraid of it stealing jobs. As a hobbyist programmer, I love to see automation; yesterday I even took some of my free time to automate a boring part of my job. Thus I'm biased to like these things; I'm not afraid of them, but I guess many people are.
       
 (DIR) Post #Aj09QNWcEv2pKWJMrg by StarkRG@myside-yourside.net
       2024-06-17T00:12:39Z
       
       0 likes, 0 repeats
       
       @Wolven I really want AI to be awesome and useful, but everything being marketed right now is just absolute garbage. I think it's probably already ruined the term "AI" for any serious work. Even if we ignore the environmental and ethical concerns (which we shouldn't), AI isn't even good at most of the purposes they're pushing for it to be used for. AI seems best at providing fuzzy answers to vague questions, not truthful answers to specific questions.
       
 (DIR) Post #Aj0Bp923SdL1WqxCqG by Schouten_B@mastodon.social
       2024-06-17T00:39:34Z
       
       0 likes, 0 repeats
       
       @Wolven In itself this is not a fantastic argument, as every other method of information gathering (a classic search query, Wikipedia, word of mouth) bullshits as well. The question is the type, frequency and degree of bullshit. Generative AI products can clearly provide value; already on many queries they outperform a Google search and produce easily verifiable answers.
       
 (DIR) Post #Aj0MF3w228kMn78Ms4 by CStamp@mastodon.social
       2024-06-17T02:36:15Z
       
       0 likes, 0 repeats
       
       @Wolven Just what we need: sexist and racist computers.  ;)
       
 (DIR) Post #Aj0U5SkNj9BoTwcO3M by imabuddha@techhub.social
       2024-06-17T04:04:13Z
       
       0 likes, 0 repeats
       
       @Wolven but it is the ultimate example of ‘truthiness’.
       
 (DIR) Post #Aj0VtWJgpWDbzBZB4a by davidreinertson@sfba.social
       2024-06-17T04:24:27Z
       
       0 likes, 0 repeats
       
       @Wolven That's why it should be kept on a short leash. If something is just offering to call a car rental place when I buy a plane ticket, or remove duplicate photos, or write an email reply, it could save a little time, as long as I know to check for accuracy and sanity. Pretty much any time I've used ChatGPT, I've had to clean up and revise the results. Think "Sleeper" or "Brazil", not "Star Trek" or "Terminator." Fast, free, and not that smart.
       
 (DIR) Post #Aj0WIGU0prx47QC58S by davidreinertson@sfba.social
       2024-06-17T04:28:55Z
       
       0 likes, 0 repeats
       
       @Wolven Yes. I should have read this before responding to your last post. How does a teacher then respond to a paper written with a little or a lot of AI in it? In some ways, it might look better than a fully human paper, based on human reading and thinking. So are you grading looks, or reading and thinking?
       
 (DIR) Post #Aj0WXO3Az1pPvoqtX6 by davidreinertson@sfba.social
       2024-06-17T04:31:41Z
       
       0 likes, 0 repeats
       
       @Wolven Yes. If OpenAI isn’t paying Apple for the integration into the phones, do they hope to use the AI in the phone to make money for OpenAI or Microsoft?
       
 (DIR) Post #Aj0qwOt50R50qoKnFQ by bjeelka@ukrainian.network
       2024-06-17T08:20:13Z
       
       0 likes, 0 repeats
       
       @Wolven Well of course it spits bullshit, it doesn't even live in our world, it crudely simulates its own. It's like a weirdly mentally disabled person stuck in a horrible loop of dreaming or psychosis or whatever sick shit they've grown in a lab, without the slightest understanding of what it actually does. Why would anyone think this is a reliable source of factual information? The really scary part is that we can't really distinguish its hallucinations from deliberate lies.
       
 (DIR) Post #Aj0wzEFxaUaaTOlhnE by I_am_elena@mastodon.social
       2024-06-17T09:27:59Z
       
       0 likes, 0 repeats
       
       @Wolven @mastodonmigration Wish they'd come to the conclusion this raging dumpster fire of a "feature" is not worth their money (since it's so pricey, I'm optimistic they will). Like there's any need for even more lies.
       
 (DIR) Post #Aj12kDREwjeZJjGjho by lippyduck@mstdn.social
       2024-06-17T10:32:30Z
       
       0 likes, 0 repeats
       
       @Wolven I wish people would not use 'lie' in this context. You'd have to be very cynical to believe the AI was *intentionally* giving untrue results. 'Lie' has a specific meaning involving deliberate intent which is being lost from the language.
       
 (DIR) Post #Aj1AKKSn0KWvD1A2Eq by muzicofiel@mastodon.nl
       2024-06-17T11:57:29Z
       
       0 likes, 0 repeats
       
       @ErikJonker tip AI
       
 (DIR) Post #Aj1DJ75tWclVJlZ1gO by volkris@qoto.org
       2024-06-17T12:30:49Z
       
       0 likes, 0 repeats
       
       @Wolven Nonsense: there's plenty of use for things that are quick and dirty. The key is knowing that's what you're getting and being mindful about its limits. Often enough in everyday life we don't need an absolute guarantee.
       
 (DIR) Post #Aj2ERzIMBXtmqe9GMq by markstoneman@zirk.us
       2024-06-18T00:18:13Z
       
       0 likes, 0 repeats
       
       @Wolven @terrigarland Yeah, it's not great. It's possible, though, that more limited-purpose LLMs can be made useful, e.g., what Cohere seems to be offering. But I can't for the life of me understand why companies wouldn't want to spend their money on more skilled librarians instead, and editors and other language arts types, unless they think these services are going to be significantly cheaper. Anyway, nice to see someone influential talking about the emperor's lack of clothes. #LLMs
       
 (DIR) Post #Aj2KXwGSxsrguaU8Ei by sps@historians.social
       2024-06-18T01:26:43Z
       
       0 likes, 0 repeats
       
       @Wolven I keep on thinking of a world that insists on microwaves as a worthwhile kitchen appliance.
       
 (DIR) Post #Aj2dD7CZ0KMnsdRZM8 by gaggle@mastodon.social
       2024-06-18T04:55:49Z
       
       0 likes, 0 repeats
       
       @Wolven Who argues AI should "fully *Replace* thought, feeling, and creativity"?
       
 (DIR) Post #Aj2hMqsYwhP7y8zElE by dacig@mastodon.social
       2024-06-18T05:42:25Z
       
       0 likes, 0 repeats
       
       @Wolven I feel for the techbros, they must be running out of bullshit to sell by now. Most wallets don't have crypto, VR remains virtual, and they are saying AI, which stole the data from creators, cannot keep its story straight, or stable. What now? Actual useful things, maybe?