Post AoS5AtWfGeEZxWOfqa by futurebird@sauropods.win
 (DIR) More posts by futurebird@sauropods.win
 (DIR) Post #AoPuYwcztj9uiahkI4 by futurebird@sauropods.win
       2024-11-25T23:49:14Z
       
       0 likes, 1 repeats
       
       @ronaldtootall @hannu_ikonen LLMs are not reliable enough to "check facts"; that isn't even what they are designed to do well. What they are designed to do is generate plausible-seeming streams of text similar to existing sets of text. That is all. There is no logic behind that, no verification. It's pure chance. Do not use them to check facts, please.
       
 (DIR) Post #AoPvTNdgerkshywCeG by futurebird@sauropods.win
       2024-11-25T23:59:31Z
       
       0 likes, 1 repeats
       
       @ronaldtootall @hannu_ikonen They can do an OK job checking grammar and suggesting ways to reword things you have written. This is an OK use case, although consider how much energy it takes to generate these responses. IDK, I find it a little wasteful. But at least for things like word order and finding similar sentences ... this is in line with how they have been designed.
       
 (DIR) Post #AoPvw1Cp8xXFWy1pLc by lina@neuromatch.social
       2024-11-26T00:04:37Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen yeah LLMs aren't at all reliable for any of the use cases he proposed. 🙄😮‍💨 literally any modern word processor is more reliable for spelling/grammar checks, and the suggestion of using them for research is always nauseating.
       
 (DIR) Post #AoPw4rAHqR7Mzn3XTU by KateOfMind@mastodon.social
       2024-11-26T00:06:16Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen Seriously. I can do that for you, too, and I don't hallucinate unfacts, and won't dry up an entire inland sea to do it. I might ask for a cup of coffee, tops.
       
 (DIR) Post #AoPw64YJYVtwo1sHq4 by futurebird@sauropods.win
       2024-11-26T00:06:29Z
       
       0 likes, 1 repeats
       
       @lina @ronaldtootall @hannu_ikonen They can do grammar. In the sense that if you type an incorrect sentence and ask for better wording they will give you wording that is more similar to what most texts they have scanned use. That isn't the same as following grammar rules. But it's OK.
       
 (DIR) Post #AoPx2rkOoAMaB2YuEC by dawngreeter@dice.camp
       2024-11-26T00:07:43Z
       
       0 likes, 0 repeats
       
       @lina @futurebird @ronaldtootall @hannu_ikonen I dunno, it seems like LLMs are brilliant at fact checking as long as you have a crack squad of experts fact checking the fact checking. 60% of the time it works every time.
       
 (DIR) Post #AoPx2smuwHyJP8kQnA by futurebird@sauropods.win
       2024-11-26T00:17:07Z
       
       0 likes, 1 repeats
       
       @dawngreeter @lina @ronaldtootall @hannu_ikonen I think some people are impressed with the way they generate plausible text. I don't want it to seem like I'm not noticing that they do ... do something. But LLM companies and the media are not educating people about the limitations. And I think if we just say "it's bad, never use it," people who struggle with writing might feel like we aren't even looking & don't know what we are saying.
       
 (DIR) Post #AoPx5GgJ4icg5UnF2G by lina@neuromatch.social
       2024-11-26T00:16:29Z
       
       0 likes, 2 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen by portraying LLMs as a panacea, the hypemongers have successfully kneecapped a lot of people's ability to consider alternative tools that are often well known, widely used for decades, and solve the specific problem they're targeting with a 0% fail rate.
       i've had to tell project managers way too many times, "no, we don't need to pay OpenAI to fuck up at doing what regex does perfectly. all we're trying to do is extract a consistently-formatted string for fucks sake."
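lina's regex point can be sketched concretely. The actual string format her team was extracting isn't specified in the thread, so the invoice-ID format below is hypothetical; the point is that a regex either matches the documented format or it doesn't, with no chance of inventing a value that isn't in the input:

```python
import re

# Hypothetical format: invoice IDs like "INV-2024-00731".
# Deterministic extraction: matches the documented pattern or returns nothing.
INVOICE_RE = re.compile(r"\bINV-\d{4}-\d{5}\b")

def extract_invoice_ids(text: str) -> list[str]:
    return INVOICE_RE.findall(text)

print(extract_invoice_ids("Paid INV-2024-00731 and INV-2024-00812 today."))
# → ['INV-2024-00731', 'INV-2024-00812']
```

No API calls, no per-query cost, and the failure mode is an empty list rather than a plausible-looking fabricated ID.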
       
 (DIR) Post #AoPxEeJOaS6tyyYA7s by lina@neuromatch.social
       2024-11-26T00:18:08Z
       
       0 likes, 0 repeats
       
       @dawngreeter @futurebird @ronaldtootall @hannu_ikonen but when the marketing pitch is "get rid of the fact checkers and use this instead".......
       
 (DIR) Post #AoPxEfU4EGEzcMYCOm by futurebird@sauropods.win
       2024-11-26T00:19:16Z
       
       0 likes, 0 repeats
       
       @lina @dawngreeter @ronaldtootall @hannu_ikonen What a nightmare!
       I feel like we are on a fast track to ... some kind of illogical singularity. Not AI taking over, but information being just... broken. (I'm freaking out, someone please talk me down)
       
 (DIR) Post #AoPxNuHSaJcg7C3ing by futurebird@sauropods.win
       2024-11-26T00:20:56Z
       
       0 likes, 0 repeats
       
       @lina @dawngreeter @ronaldtootall @hannu_ikonen Or ... do some people think that "fact checking" just means making the text seem like nothing in it is wrong? Not... you know, finding out if it is actually correct, corroborated, supported? That is, if no one notices that it's wrong ... it's fine?
       
 (DIR) Post #AoPxaydRhoUtNwZiWe by dawngreeter@dice.camp
       2024-11-26T00:23:18Z
       
       0 likes, 0 repeats
       
       @futurebird @lina @ronaldtootall @hannu_ikonen GenAI should be clearly labeled "unreliable by design". Improvements do not mean it is more reliable; they mean it is better at obscuring unreliability.
       
 (DIR) Post #AoPy3CnCf87bOybDhA by lina@neuromatch.social
       2024-11-26T00:28:23Z
       
       0 likes, 0 repeats
       
       @futurebird @dawngreeter @ronaldtootall @hannu_ikonen yeah, i dunno... i think we have so many glaring, catastrophic issues when it comes to accessing, validating, communicating, and understanding information these days. LLMs are only exacerbating underlying issues and incentives.
       not a good talk down... i'm very worried about it as well 😬
       
 (DIR) Post #AoPy9jix8YMjH4j2nY by futurebird@sauropods.win
       2024-11-26T00:29:36Z
       
       0 likes, 1 repeats
       
       @lina @ronaldtootall @hannu_ikonen LLMs are a tool. Tools are not evil or good. The use of tools may be evil or it may be good. Asking one a factual question, asking it to fact check, is doing a kind of evil because you are setting yourself up to be bamboozled. They are VERY good at giving answers that sound coherent. The wording is authoritative, as if written by an expert. The vocabulary can even be technical and impressive. But it could all be nonsense.
       
 (DIR) Post #AoPyyVmgixq8oGgTfk by lina@neuromatch.social
       2024-11-26T00:38:45Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen some tools are designed for evil. WMDs, surveillance tech, corporate landlord decision making algorithms, etc. value-neutrality is a myth that's deployed by people trying to obscure their values. it enables them to present their products as "scientific" or "inevitable" or "the only legitimate option". no technology, model, or theory exists in a vacuum - they're always enmeshed with human decisions, value judgments, and material circumstances.
       accepting value-neutrality is how we end up with scientific racism and "there is no alternative to capitalism" and "guns don't kill people"
       
 (DIR) Post #AoPzDVD38xvFN3A9Eu by feelnotes@alaskan.social
       2024-11-26T00:41:27Z
       
       0 likes, 1 repeats
       
       @futurebird @lina @ronaldtootall @hannu_ikonen Agree that tools aren’t evil or good. A hammer is a hammer. I can use it to build a community bench or to threaten violence.
       But does it matter how I got the tool? Did I pay for the hammer, or steal it? Is building the community bench less virtuous if I used tools I stole from community members?
       
 (DIR) Post #AoPzHIlvKOS1xGHoiu by futurebird@sauropods.win
       2024-11-26T00:42:10Z
       
       0 likes, 0 repeats
       
       @lina @ronaldtootall @hannu_ikonen Surveillance tech? Or cameras?
       WMDs? Or powerful explosives?
       Corporate landlord decision-making algorithms? Or just algorithms?
       There is a tool in each of these. But yeah, it's possible to make an evil tool, I suppose, if it has limited function.
       
 (DIR) Post #AoPzKWMJGinaxjqBZA by sgillies@mastodon.social
       2024-11-26T00:42:31Z
       
       0 likes, 0 repeats
       
       @futurebird But what if one brand of tool required 1 gallon of water to manufacture and another required 100 gallons? I think it's not just about good vs evil.
       
 (DIR) Post #AoPzaQgy6bBi1L5beq by futurebird@sauropods.win
       2024-11-26T00:45:37Z
       
       0 likes, 1 repeats
       
       @sgillies There are MANY issues with AI. But if we can't at least get people to understand what these tools can and cannot do, we will have some very big problems fast. I have not found any good uses for them and I've tried. I don't think they are adding any value, but a lot of people don't see it that way; they just think that all the complaints are ... some kind of fear of the future, hippie nonsense. But if you use an LLM to tell you which mushroom you can eat, you may well DIE.
       
 (DIR) Post #AoPzff9XleYoJmx0BE by futurebird@sauropods.win
       2024-11-26T00:46:33Z
       
       0 likes, 0 repeats
       
       @sgillies It's like when everyone started driving SUVs but ... they are using the SUVs to iron their clothes or something. Not only is it bad ... but it's not even FOR that.
       
 (DIR) Post #AoPzozP7qKShjUgbiK by futurebird@sauropods.win
       2024-11-26T00:48:13Z
       
       0 likes, 0 repeats
       
       @medley56 @feelnotes @lina @ronaldtootall @hannu_ikonen Oh I can think of a few.
       
 (DIR) Post #AoPzsrzFMXxXt1QqKu by futurebird@sauropods.win
       2024-11-26T00:48:57Z
       
       0 likes, 0 repeats
       
       @medley56 @feelnotes @lina @ronaldtootall @hannu_ikonen Exactly.
       
 (DIR) Post #AoQ0A8DQHJ4f8qIGzA by msbellows@c.im
       2024-11-26T00:50:52Z
       
       0 likes, 1 repeats
       
       @feelnotes @futurebird @lina @ronaldtootall @hannu_ikonen And using an unnecessarily complicated diesel-powered hammer indoors without ventilation to drive a single nail for hanging a picture is evil, full stop, because it's wasteful and environmentally harmful and completely unnecessary for the purpose, which is how I see LLMs for 99.999% of their current uses.
       
 (DIR) Post #AoQ0IwCaJUpMUM6JU0 by lina@neuromatch.social
       2024-11-26T00:53:39Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen categories != instances ... by that logic, authoritarian dictators could be said to be neither good nor evil because they fall under the category "leaders". there's a big difference between a DSLR and a camera attached to a lamppost in a public area that feeds a facial recognition database. this sort of reductionism is one of the methods people use to claim value neutrality.
       
 (DIR) Post #AoQ0OBI9dZOMjkCqmm by sgillies@mastodon.social
       2024-11-26T00:54:35Z
       
       0 likes, 1 repeats
       
       @futurebird I'm with you, I don't find it useful in my work, and think it has a lot of potential for harm in a lot of situations. What I was getting at in my comment is that LLMs are a ridiculously expensive tool that companies are pitching for mundane applications that are already being handled well by older systems.
       
 (DIR) Post #AoQ0ZekmRJknIQ4FcG by futurebird@sauropods.win
       2024-11-26T00:56:40Z
       
       0 likes, 1 repeats
       
       @sgillies I have to shoo my students away from them when coding, and they think I'm being so old fashioned and just making more work for them. Maybe if I toss a ream of copy paper in the trash bin each time they fire it up they might start to get it. (They are oddly obsessed with sorting recycling, which I like, but they nag me about it.)
       
 (DIR) Post #AoQ0mafWwC5YONmNmK by sabik@rants.au
       2024-11-26T00:58:59Z
       
       0 likes, 0 repeats
       
       @futurebird @lina @ronaldtootall @hannu_ikonen Generally, tools have affordances; they're designed (explicitly or implicitly) for particular uses and goals, which can certainly be good or evil.
       The street finds its own uses for things, to be sure, but those too will be shaped by the affordances.
       
 (DIR) Post #AoQ0qS5KnHjsvcgrgG by futurebird@sauropods.win
       2024-11-26T00:59:38Z
       
       0 likes, 0 repeats
       
       @bri_seven @lina @ronaldtootall @hannu_ikonen But this is really hard to learn. I think a lot of people just... think it's right because it sounds right. :(
       
 (DIR) Post #AoQ1R3wzULrKn6FFpY by Impish4249@mastodon.social
       2024-11-26T01:06:17Z
       
       0 likes, 0 repeats
       
       @futurebird @lina @ronaldtootall @hannu_ikonen Totally but respectfully disagree.
       Needing an AI assistant to make up for the fact that a human is incapable of correctly using their native language is not OK, it is disgraceful.
       #AI #laziness #ignorance #idiocy #EndTimes #education #failure
       
 (DIR) Post #AoQ2Geu5rBKcAUUbwm by mcsquank@mastodon.online
       2024-11-26T01:15:39Z
       
       0 likes, 0 repeats
       
       @futurebird @sgillies can you work sorting algorithms into the lessons? there was this fun/amazing browser game in the 2000s called Fantastic Contraption and my favorite levels were the sorting ones.
       
 (DIR) Post #AoQ2PzbgFjdOz3Ka8G by futurebird@sauropods.win
       2024-11-26T01:17:21Z
       
       0 likes, 0 repeats
       
       @mcsquank @sgillies My advanced CS students do a lot with those; it's mostly the CS club that drives me nuts with trying to use AI all the time.
       
 (DIR) Post #AoQDUtMdzMeQZBpvOK by StarkRG@myside-yourside.net
       2024-11-26T03:21:26Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen The set of things deep learning is useful for (finding patterns in large amounts of data, patterns that then have to actually be checked by a human to see if they're legitimate) and the set of things deep learning is marketed for (literally everything marketed as "AI", particularly including LLMs) are two non-intersecting sets. If someone says "you should use AI for ___”, then you almost certainly should not.
       
 (DIR) Post #AoQFYKylyOmozRRRFw by Dervishpi@mastodon.social
       2024-11-26T03:44:29Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen They're also great at boosting the visibility of words, phrases, and spellings that are just wrong. If enough people misuse a phrase, they'll encourage others to misuse it in the same way.
       Someone needs to reign them in, and I'm waiting with baited breath for them to notice these errors. I know AI isn't just an escape goat, but these days I take it for granite that it won't flag any of these as bad.
       
 (DIR) Post #AoQTxA8eoWfJxynSMa by snaeqe@chaos.social
       2024-11-26T06:25:51Z
       
       0 likes, 0 repeats
       
       @futurebird You've defined the right-wing media business case pretty exactly. @lina @dawngreeter @ronaldtootall @hannu_ikonen
       
 (DIR) Post #AoRPe5GiUVN4hZAogC by crazyeddie@mastodon.social
       2024-11-26T17:12:15Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen There's lots of logic in LLMs.  There's verification.  That logic just has nothing to do with fact or truth and the verification is not checking that either.
       
 (DIR) Post #AoRbJzs82pOdcfrRB2 by staringatclouds@mastodon.social
       2024-11-26T19:23:07Z
       
       0 likes, 0 repeats
       
       @futurebird @ronaldtootall @hannu_ikonen LLMs can't produce anything that wasn't in their training data, which usually consists of work by a human, though lately other LLM output has crept in.
       They produce a statistically probable amalgam of the training data, picking a likely next word given the last few.
       It may be correct or not; there's no check.
       Whatever they produce is always the work of others.
       To paraphrase Morecambe & Wise: it's mostly the right notes, not necessarily in the right order.
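The "pick a likely next word given the last few" mechanism described above can be illustrated with a toy bigram model. This is a deliberately tiny sketch (real LLMs use neural networks over subword tokens), but the sampling loop has the same shape, and the model can only ever emit words that appeared in its training text:

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the
# training data, then sample continuations by those frequencies.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, n: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:          # dead end: no observed continuation
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # statistically plausible, but nothing is "checked"
```

Every output word is lifted from the training corpus; nothing in the loop consults facts, grammar rules, or the world.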
       
 (DIR) Post #AoRmWzc3Q1rW4xk2Vs by shuro@friends.deko.cloud
       2024-11-26T19:02:37Z
       
       0 likes, 1 repeats
       
       There are applications where AI based tools can help, though. E.g. fact checking. Sure, using them to query data directly isn't the best idea (same goes for search engines, btw, as these only provide potential sources, not answers), but AI does a pretty good job at things like text and video summarization or audio/video transcribing. It doesn't mean one should use the resulting data directly, but if you need to find some relevant research it might come in handy, helping you browse through lots of material and identify what is potentially interesting so you can evaluate it yourself.
       IMO "AI is God" and "AI is useless crap" are two equally radical, unproductive approaches. It is just a tool which is suited for some things and not so much for others.
       
 (DIR) Post #AoRmX18LlRI2nPrLUW by lina@neuromatch.social
       2024-11-26T19:17:41Z
       
       0 likes, 0 repeats
       
       @shuro @futurebird @hannu_ikonen @ronaldtootall 👎
       
 (DIR) Post #AoRmX2OL5TfqhILd3I by shuro@friends.deko.cloud
       2024-11-26T20:58:49Z
       
       0 likes, 1 repeats
       
       @lina What is your opinion on image search, which has incorporated neural networks for quite a while now? E.g. when you can search for similar images, or when a photo gallery allows you to search through untagged photos with terms like "dog in snow". Is this also not acceptable?
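The image search shuro describes works roughly like this: a network maps each image, and the text query, into the same vector space, and "search" is nearest-neighbour lookup by cosine similarity. A minimal sketch with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions and come from a trained encoder):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product of the vectors over the product
    # of their lengths.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical embeddings, as if produced by an image/text encoder.
gallery = {
    "IMG_001.jpg": [0.9, 0.1, 0.3],   # (imagine: a dog in snow)
    "IMG_002.jpg": [0.1, 0.8, 0.2],   # (imagine: a beach at sunset)
}
query = [0.85, 0.15, 0.25]            # embedding of the text "dog in snow"

best = max(gallery, key=lambda name: cosine(query, gallery[name]))
print(best)  # → IMG_001.jpg
```

Note the contrast with an LLM: this retrieval only ranks existing items by similarity, it never generates new content, which is part of why it is a less fraught use of neural networks.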
       
 (DIR) Post #AoRw6FMZkahYePO30a by futurebird@sauropods.win
       2024-11-26T23:15:58Z
       
       0 likes, 1 repeats
       
       @shuro @lina @hannu_ikonen @ronaldtootall Fact checking is the worst application. Asking an AI to check a fact will give you a reply that seems like the work was done. And if there is something similar enough in the training data it might even be right. But it could also be wrong. This kind of defeats the purpose. I don't understand why people keep suggesting fact checking is a good application.
       
 (DIR) Post #AoRwZOKxSs5ysfWlrU by hllizi@hespere.de
       2024-11-26T23:21:12Z
       
       0 likes, 0 repeats
       
       @futurebird @shuro @lina @hannu_ikonen @ronaldtootall I agree and would like to add that, given the resources the stuff consumes, "not entirely useless" is a rather disappointing outcome no matter what.
       
 (DIR) Post #AoRxyk2S13NrNeWSwa by semitones@tiny.tilde.website
       2024-11-26T23:37:00Z
       
       0 likes, 1 repeats
       
       @futurebird @shuro @lina @hannu_ikonen @ronaldtootall I'm disappointed by how much AI is used for summarizing. How can you rely on the AI to never leave out a detail, or make the wrong conclusion, in anything important? HR sent me an email about commuting benefits, including bike commuting. I emailed them about it; they apologized; ChatGPT made that up from the document it summarized. But since I was the only one who asked, they wouldn't be sending out a correction. Imagine if that was important.
       
 (DIR) Post #AoS1xtVJjm8IGSsNns by Flo_Rian@norden.social
       2024-11-27T00:17:54Z
       
       0 likes, 1 repeats
       
       @VulpineAmethyst @shuro @futurebird @lina @hannu_ikonen @ronaldtootall LLMs have also been found to make mistakes summarising text, to the point of falsehoods. Simple stuff like meeting notes.
       There are different types of “AI”s that can be used to identify relevant information from a database, and an LLM can then write up the results for you. But even then you have to check the actual results.
       
 (DIR) Post #AoS1xvblukU4n3uZ8q by futurebird@sauropods.win
       2024-11-27T00:21:40Z
       
       0 likes, 0 repeats
       
       @Flo_Rian @VulpineAmethyst @shuro @lina @hannu_ikonen @ronaldtootall People hear "check the results" and think "well of course I'd look it over and fix anything obviously wrong!"
       But the whole point is this is exactly how you generate things that will be wrong *and it won't be obvious* at all. (Although they also do "obviously wrong" from time to time, and this is what gets attention... but it's the not-obvious stuff... that's what concerns me.)
       
 (DIR) Post #AoS20y2XKVbFCVsaxc by DamonWakes@mastodon.sdf.org
       2024-11-26T23:41:47Z
       
       0 likes, 0 repeats
       
       @shuro @futurebird @lina @hannu_ikonen @ronaldtootall It's worth keeping in mind that the "AI is God" crowd are 100% wrong while the "AI is useless crap" crowd are 90% right.
       
 (DIR) Post #AoS20zNqKmElMsr7oG by lina@neuromatch.social
       2024-11-26T23:55:22Z
       
       0 likes, 0 repeats
       
       @DamonWakes @shuro @futurebird @hannu_ikonen @ronaldtootall i talk about this stuff with a lot of people, and i haven't met a single person who says something like "AI is useless crap" and doesn't have nuanced, technical, well informed reasons to back up the claim. i find that people who say "yeah some is crap but some is great" tend to be people without anything close to that level of understanding, and they often justify their "some is great" position with anecdotes and marketing (or marketing-derived journalism) talking points.
       
 (DIR) Post #AoS210UcD5FSoB230K by lina@neuromatch.social
       2024-11-27T00:21:15Z
       
       0 likes, 1 repeats
       
       @DamonWakes @shuro @futurebird @hannu_ikonen @ronaldtootall i think a lot of people are impressed by their experiences with and/or find some personal usefulness for genAI despite its shortcomings, and they get defensive because they feel personally attacked by the notion that genAI worsens so many social/ecological issues and sucks at doing what it claims to do well. it comes across as tho their position is based on falling for the hype and a lack of concern for the externalities and insidious motives associated with the products they've used. so they take a centrist position to save some face and dismiss the (valid) arguments coming from AI pessimists, while putting on an air of having a balanced perspective.
       it's understandable. imagine liking a tv show and then someone says stuff that sounds like "you're stupid and unethical for liking the show". but i don't think many people are actually putting that on lay users/enthusiasts, instead reserving their criticism for the people doing the building and professional hyping. these are highly technical systems and the propaganda machine behind them has infinite money. also, we've got a century's worth of scientism/cult of technology culture that lends itself to the hyped narratives and takes a loooot of work to deconstruct. dehyping and communicating about these issues for lay audiences can be a really hard task, and it has basically no funding.
       
 (DIR) Post #AoS2BVLEN1YRNaeFG4 by futurebird@sauropods.win
       2024-11-27T00:24:09Z
       
       0 likes, 0 repeats
       
       @lina @DamonWakes @shuro @hannu_ikonen @ronaldtootall If it helps, being fooled by AI can happen to anyone. And by "fooled" I mean that it feeds you a false or bad answer and it sounds good and you go with it. They work exactly the way I would design a system to produce this exact result. Only... why would you design that?
       
 (DIR) Post #AoS2EYo9wDzl02dW5Y by spacelizard@aus.social
       2024-11-27T00:24:12Z
       
       0 likes, 1 repeats
       
       @shuro @futurebird @lina @hannu_ikonen @ronaldtootall Assuming by "AI" you mean LLMs & GenAI, sure, they have their uses, but unfortunately, just like the previous tech fad (crypto), they are best suited to applications that are illegal and/or immoral.
       As a direct result of the way that they work, the output of LLMs is fundamentally untrustworthy¹. They cannot be relied on to be truthful or accurate, which makes them a poor choice for any application where that matters, e.g. fact checking and summarising. They really come into their own where truthfulness and accuracy are irrelevant, e.g. SEO filler content and disinfo/misinfo bots on social media. This pairs nicely with GenAI's deep fake abilities.
       It really should not be surprising that the killer app for technology that enables computers to (superficially) pass as human is deception, any more than it was surprising that unregulated pseudo-currency would turn out to be most useful for criminal activity.
       ¹ Hicks, M.T., Humphries, J. & Slater, J. "ChatGPT is bullshit." Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5
       
 (DIR) Post #AoS3H7ysEm4ucTR62q by Flo_Rian@norden.social
       2024-11-27T00:36:22Z
       
       0 likes, 0 repeats
       
       @futurebird @VulpineAmethyst @shuro @lina @hannu_ikonen @ronaldtootall In our company AI is used for preliminary work, like mood boards or writing simple code. But it’s used by experts and treated like the work of interns, if that makes sense. Nobody (hopefully) lets the intern write the official summary of a board meeting or push code to production. You need a subject matter expert to work with the results.
       
 (DIR) Post #AoS5ArofcjRsemo1dQ by johntimaeus@infosec.exchange
       2024-11-27T00:56:09Z
       
       0 likes, 0 repeats
       
       @Impish4249 @futurebird Or it will suggest a common wrong grammatical usage, because it doesn't know the rules -- only how words were used in the training corpus. @lina @ronaldtootall @hannu_ikonen
       
 (DIR) Post #AoS5AtWfGeEZxWOfqa by futurebird@sauropods.win
       2024-11-27T00:57:38Z
       
       0 likes, 0 repeats
       
       @johntimaeus @Impish4249 @lina @ronaldtootall @hannu_ikonen Thing is, with grammar, often what you want is the most common usage. But also, yes.
       
 (DIR) Post #AoS5PuGYnVXIbB1SV6 by johntimaeus@infosec.exchange
       2024-11-27T01:00:22Z
       
       0 likes, 0 repeats
       
       @futurebird I saw one not too long ago mix up "there" and "they're" at basically coin-flip frequency. Common usage as learned from reddit and 4chan isn't necessarily what you want. @Impish4249 @lina @ronaldtootall @hannu_ikonen
       
 (DIR) Post #AoSxT4InMNt0eCHgVU by xilebo@norden.social
       2024-11-27T11:06:00Z
       
       0 likes, 0 repeats
       
       @futurebird Depends on how you do it.
       Of course, asking "are my facts correct?" is a bad idea.
       But if I do the usual fact checking, and after that is finished, I ask "are there sources that contradict my facts? please include links to those sources", and then check those sources, I may catch an error that slipped past the initial fact checking.
       I don't know any better way to do this, and no unchecked input is entered directly.
       @shuro @lina @hannu_ikonen @ronaldtootall