Post ApyUNdwJsGw6swfBVw by fredb@mstdn.social
 (DIR) Post #ApySlkGLoiapXOOinI by futurebird@sauropods.win
       2025-01-11T13:44:54Z
       
       0 likes, 0 repeats
       
       My school is having a professional development day to discuss AI. If you could share an opinion, resource or idea with a bunch of very dedicated teachers about AI, what might it be? I'm going to be pushing for a uniform expectation that any content that used AI needs to be clearly identified, along with a push for faculty to encourage students to take more care with citing sources in their work in general. This will also mean setting a good example by citing sources in our lesson material.
       
 (DIR) Post #ApyT3K3bzG3tTWowGe by futurebird@sauropods.win
       2025-01-11T13:48:03Z
       
       0 likes, 1 repeats
       
       For example if I use an image in one of my worksheets it should have a caption saying where it came from or if I made it myself. This isn't generally done, and I think some people won't like it because it's a lot of work. But every image should have "provenance"; that's just the world we live in now. Even things like bog-standard scientific diagrams. Just because it's basic and well known isn't an excuse for not having a source. And "it's just for 3rd graders" isn't an excuse either.
       
 (DIR) Post #ApyT7Wo2WrggAoXgjQ by goatrodeo@mstdn.social
       2025-01-11T13:48:47Z
       
       0 likes, 0 repeats
       
       @futurebird AI MUST have its algorithms tuned to maximum wellbeing of the weakest among us rather than maximizing the wealth of the Musky parasites we politely call “oligarchs”
       
 (DIR) Post #ApyTOdOBJ6QlVrbWEK by helplessduck@mastodon.online
       2025-01-11T13:51:50Z
       
       0 likes, 1 repeats
       
       @futurebird I'm going to have to have a discussion with one of my instructors who asks us to use an AI tool to improve our writing in a Professional Writing class. I'm not at all comfortable feeding my work to the beast. I think there needs to be a clear opt-out policy. Aside: I know it all eventually ends up there anyway, but I'm able to maintain a 'moral firewall' if I'm not the one putting my work in its maw.
       
 (DIR) Post #ApyTPqwUOxRBpzETcO by pbloem@sigmoid.social
       2025-01-11T13:51:55Z
       
       0 likes, 0 repeats
       
       @futurebird I'm involved with the development of policy at our university. To summarize: Teachers should be forced to learn about it. They don't need to like it, but they need to understand it. Students don't need to be incentivized, but they should be taught the dangers. They need to experience the world without AI at least occasionally. Institutions should be held back from buying into the hype uncritically. They need to understand their own values and how they relate to current AI products.
       
 (DIR) Post #ApyTS6YvIXlwrRHdK4 by futurebird@sauropods.win
       2025-01-11T13:52:33Z
       
       0 likes, 0 repeats
       
       @Zumbador Thankfully I'm not the one doing the presenting but I know everyone is going to turn to me and expect me to have something they can do since I'm the tech person.
       
 (DIR) Post #ApyTUMZSkVOD5ec2bI by worldwidewerner@mastodon.social
       2025-01-11T13:52:57Z
       
       0 likes, 0 repeats
       
       @futurebird Imagine an engineer using AI to construct a bridge, and the bridge collapses. Investigation shows the AI made a mistake. Whose fault is it? A computer can never be held responsible, so who will pay? The people/company making/selling the AI? The engineering company/its leadership that decided to use the AI to cut costs? The engineer using the AI and not discovering its fault?
       
 (DIR) Post #ApyTeRMHnvDIpmFqKm by kechpaja@social.kechpaja.com
       2025-01-11T13:54:28Z
       
       0 likes, 1 repeats
       
       @futurebird I hope they are already covering this, but: pointing out the distinction between _generative AI_ and machine learning in general. I.e. generating your paper with ChatGPT vs using Google Translate on a foreign-language source that you will then cite and write your own paper about have very different implications for academic integrity.
       
 (DIR) Post #ApyTgLAJnhkoCpYx3w by futurebird@sauropods.win
       2025-01-11T13:55:08Z
       
       0 likes, 0 repeats
       
       @helplessduck I'm glad you are having that conversation. At the same time, engaging with these things and knowing what they do is very important. A general "AI is bad and evil" stance makes it mysterious... it's not. I've had my students try to get GPT to write proofs for them. It's not very good at it. They enjoy finding errors and it makes them feel more confident. But I also only do this with topics they know well enough to be able to spot the errors.
       
 (DIR) Post #ApyTjGXzQZEgkf1uaW by Nonya_Bidniss@infosec.exchange
       2025-01-11T13:55:34Z
       
       0 likes, 0 repeats
       
       @futurebird Some years ago when I was an intelligence analyst and OSINT was becoming a recognized discipline, there was a huge hoo-hah in the community over whether Wikipedia could be used as a source. My position was it could not be used as a primary source because it is by definition not one. But it could be used for research, to find primary sources, and was quite useful in this way. Today we have LLMs, which are understood to inject random information created out of thin air into their responses, sometimes sounding rational enough that a credulous person might take it at face value. In my view, LLMs may be of some use for finding new information to verify, but because of that, they may double your work. It would be irresponsible not to verify every single claim made in an LLM output. You simply can't take anything an LLM produces at face value, so you must spend the time to find the corroborating facts. For me, my time would be better spent skipping the LLM. I would not cite an LLM as a source unless my paper was about the LLM and what it said.
       
 (DIR) Post #ApyTx7PWC5rWT8fm8u by futurebird@sauropods.win
       2025-01-11T13:58:09Z
       
       0 likes, 1 repeats
       
       @passenger I need to find that report. Based on what I know about how these systems work, this is what I suspected: they will always be unreliable because of how they are made; they don't encode the structure of the concepts they try to express, so they can't really produce new reasoning with those concepts; they are working with text and globs of pixels, not ideas or the concept of objects. They are intrinsically limited. They will always *almost* be what they hype themselves up to be.
       
 (DIR) Post #ApyUAdw6Zlqz6sMdHM by futurebird@sauropods.win
       2025-01-11T14:00:35Z
       
       0 likes, 0 repeats
       
       @aaribaud I'm not following what you mean? I'm just saying if you have an image or a quote it should say "This photo was taken by Jane Smith from her website janephoto.com" or "This was generated by AIGlopper". Just basic attribution.
       
 (DIR) Post #ApyUGkdmYtONdnhqq0 by becha@social.v.st
       2025-01-11T14:00:56Z
       
       0 likes, 0 repeats
       
       @futurebird high level : AI Vision of municipality of Amsterdam https://assets.amsterdam.nl/publish/pages/1061246/amsterdam_visie_ai_wcag_engelse_versie.pdf
       
 (DIR) Post #ApyUNdwJsGw6swfBVw by fredb@mstdn.social
       2025-01-11T14:02:55Z
       
       0 likes, 0 repeats
       
       @futurebird Ben Williamson always posts a lot of great stuff on LinkedIn, but I would never ask anyone to willingly look in there. Try this: https://codeactsineducation.wordpress.com/
       
 (DIR) Post #ApyUVum7AGaZPqpmKW by fredb@mstdn.social
       2025-01-11T14:04:25Z
       
       0 likes, 0 repeats
       
       @futurebird This is great, too: https://openpraxis.org/articles/10.55982/openpraxis.16.4.777
       
 (DIR) Post #ApyUYXYxvHvGIGw4xs by futurebird@sauropods.win
       2025-01-11T14:04:47Z
       
       0 likes, 1 repeats
       
       @spacehobo I think this is a very typical stance, though personally I wonder how useful they are even for this kind of work. The only use I've found for AI is when I need to write something like a recommendation, or a cover letter: looking at AI makes me so angry and annoyed that I suddenly can write in my own voice with force and speed. I get inspired by how insipid and boring it is. I worry that putting AI in the creative loop could lead to a regression to the average.
       
 (DIR) Post #ApyUbhFwLBADE1XeWO by futurebird@sauropods.win
       2025-01-11T14:05:13Z
       
       0 likes, 0 repeats
       
       @spacehobo But I have been known to talk to a rubber duck, and why is that any better?
       
 (DIR) Post #ApyUmXljxBtKzFaXNw by kithrup@wandering.shop
       2025-01-11T14:06:51Z
       
       0 likes, 0 repeats
       
       @futurebird That "AI" is too broad a cover to know what exactly is being used, and also that none of it is *intelligent* so far. (And if they encounter an AI-pusher, ask said pusher to define what they think "intelligent" *means*.)
       
 (DIR) Post #ApyUnjMoJnv5FMUfnk by helplessduck@mastodon.online
       2025-01-11T14:07:29Z
       
       0 likes, 0 repeats
       
       @futurebird It's an online course... maybe I'll post on the Canvas discussion board. Hmmm. Even though I lean towards "it's bad," simply because we're seeing so much harmful use of it, I agree it needs to be a discussion. One of the ways I've used Copilot+ in research is outlining and summarizing sources that I'm working with. I will agonize over an outline, so it saves a ton of time. The mistakes are funny frequently enough that I don't mind them.
       
 (DIR) Post #ApyV5D1LJsqE8gmppw by TammyGentzel@awscommunity.social
       2025-01-11T14:10:46Z
       
       0 likes, 0 repeats
       
       @futurebird Never trust the results. Always fact check them. I once had AI tell me a word I was searching for that started with “A” was a word that started with “M”. That seems like a very basic request that it should not get wrong. If it can’t do that, imagine the errors that might result from a more complex request.
       
 (DIR) Post #ApyVFkDek7gxzyZOBk by DamonWakes@mastodon.sdf.org
       2025-01-11T14:12:36Z
       
       0 likes, 0 repeats
       
       @futurebird I find the most important thing is to get people to understand what "AI" is actually doing. Something I've found handy for that is MENACE, a machine that "learns" to play Noughts and Crosses but is made entirely of matchboxes full of beads: https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_Crosses_Engine Explaining how this thing plays a game with no understanding of the rules helps get across the way in which ChatGPT etc. spit out text without understanding it at all.
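The bead-and-matchbox learning described above can be sketched in a few lines of Python. This is a hypothetical simplification for illustration: the `Menace` class name, the initial bead counts, and the reward/punishment amounts are invented, not the historical machine's exact parameters.

```python
import random

class Menace:
    """Toy MENACE-style learner: one 'matchbox' of beads per board state."""

    def __init__(self):
        self.boxes = {}    # state -> {move: bead count}
        self.history = []  # (state, move) pairs played this game

    def choose(self, state, legal_moves):
        # Each state's box starts with 3 beads per legal move (assumed value).
        box = self.boxes.setdefault(state, {m: 3 for m in legal_moves})
        # Drawing a bead at random: moves with more beads are more likely.
        beads = [m for m, n in box.items() for _ in range(n)]
        move = random.choice(beads)
        self.history.append((state, move))
        return move

    def learn(self, won):
        # After the game, reinforce or punish every move that was played.
        for state, move in self.history:
            box = self.boxes[state]
            if won:
                box[move] += 3                     # add beads: reinforce
            else:
                box[move] = max(1, box[move] - 1)  # remove a bead: punish
        self.history.clear()
```

The point of the demonstration survives the simplification: nothing here knows the rules of the game, it only shifts bead counts, yet over many games the move distribution improves.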
       
 (DIR) Post #ApyW7vxBKay21shfzk by richpuchalsky@mastodon.social
       2025-01-11T14:22:29Z
       
       0 likes, 0 repeats
       
       @futurebird This is not really what you want, but here is my old thread on why "AI" is bad for librarians: https://mastodon.social/@richpuchalsky/112010213240446748
       
 (DIR) Post #ApyWmLcxIPoFEqVuO8 by RogerBW@discordian.social
       2025-01-11T14:29:47Z
       
       0 likes, 0 repeats
       
       @futurebird The policy I wrote for work is roughly "as an academic institution, we care about correctness and verifiability of facts, therefore we don't use AI."
       
 (DIR) Post #ApyWoIQk9Yw9u0jMeW by RachamimOnWheels@wandering.shop
       2025-01-11T14:30:03Z
       
       0 likes, 0 repeats
       
       @futurebird if nothing else, it's basic scaffolding to do this with the younger kids, because you want the older kids to do it themselves without needing to be told
       
 (DIR) Post #ApyX3p4j3wWTUapsX2 by simone_z@mastodon.social
       2025-01-11T14:09:10Z
       
       0 likes, 0 repeats
       
       @melioristicmarie @futurebird I rather disagree. Productivity) Writing rather complex texts is clearly faster with AI most of the time, with little supervision. The daughter may be left aside, but this is a logical fallacy. Prompt) This is probably true, but easy to fix, so not an AI limitation but rather a bad implementation. Learning) This is also inaccurate. Several AIs have a quite large prompt window, and Retrieval-Augmented Generation can give AI knowledge.
       
 (DIR) Post #ApyX3qOy8AJFbfJYiu by futurebird@sauropods.win
       2025-01-11T14:32:54Z
       
       0 likes, 0 repeats
       
       @simone_z @melioristicmarie "Writing rather complex texts is clearly faster with AI most of the times with little supervision. The daughter may be left aside, but this is a logic fallacy." The first sentence seems (if not just plain wrong) at least *very* debatable. The second sentence? Can you explain what you mean? I don't know what you are saying here.
       
 (DIR) Post #ApyX9Qh0d4KLboPqsa by DamonWakes@mastodon.sdf.org
       2025-01-11T14:24:52Z
       
       0 likes, 0 repeats
       
       @futurebird Another useful exercise I've found is to ask ChatGPT to produce a story in Pilish (https://en.wikipedia.org/wiki/Pilish), ideally on a specific subject so it can't simply regurgitate an existing example. ChatGPT is HORRENDOUS at this particular task, its failure is blindingly obvious, and despite that it'll happily claim that it absolutely nailed it. I find it great for getting across how confidently wrong these things can be.
       
 (DIR) Post #ApyX9Rwzx6i9Vgu8RM by DamonWakes@mastodon.sdf.org
       2025-01-11T14:32:48Z
       
       0 likes, 0 repeats
       
       @futurebird If pressed, ChatGPT will likely admit to the obvious mistake and offer a "correct" alternative that's arguably even more wrong.
       
 (DIR) Post #ApyX9T2htMs6tgaCye by futurebird@sauropods.win
       2025-01-11T14:33:56Z
       
       0 likes, 0 repeats
       
       @DamonWakes What on earth is "pillish" ?
       
 (DIR) Post #ApyXtpzEv1NBNU4XDs by michael_w_busch@mastodon.online
       2025-01-11T14:42:21Z
       
       0 likes, 0 repeats
       
       @futurebird I would simply say to never use the automated plagiarism machines.
       
 (DIR) Post #ApyY3URxqXffTk6v1E by kechpaja@social.kechpaja.com
       2025-01-11T14:44:06Z
       
       0 likes, 0 repeats
       
       @futurebird @spacehobo Perhaps this is just a different way of saying what @RachamimOnWheels is saying, but: the rubber duck doesn't talk back, it just quietly listens (or "listens") to you explain until you explain yourself into understanding. You're still the only person doing the work of understanding and explaining when you talk to a rubber duck; the duck just gives you someone to direct it at, which makes it easier to do. AI, in contrast, talks back, and it often says useless things which then stick, even if you know they're useless.
       
 (DIR) Post #ApyY73yuCMKqXuDqXw by DamonWakes@mastodon.sdf.org
       2025-01-11T14:44:33Z
       
       0 likes, 0 repeats
       
       @futurebird Sorry! I put a link in an earlier post but it might have been kind of buried: https://en.wikipedia.org/wiki/Pilish. ChatGPT's description of it is right, here: having the number of letters in each word correspond to the digits of pi. It's just the examples it gives that are wildly, wildly off.
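The constraint described here is trivially checkable by a program, which is part of what makes ChatGPT's confident failure so obvious. A minimal sketch under simplifying assumptions: the `is_pilish` helper is invented for illustration, and the only extra convention handled is the common one where a ten-letter word encodes the digit 0 (full Pilish has more rules for longer words).

```python
import re

# First digits of pi, as a string so each word maps to one digit.
PI_DIGITS = "31415926535897932384626433832795"

def is_pilish(text, digits=PI_DIGITS):
    """True if successive word letter-counts match successive digits of pi."""
    words = re.findall(r"[A-Za-z']+", text)
    for word, digit in zip(words, digits):
        letters = len(word.replace("'", ""))
        if letters % 10 != int(digit):  # 10-letter word encodes the digit 0
            return False
    return True
```

For example, the classic mnemonic "How I want a drink, alcoholic of course" passes (word lengths 3, 1, 4, 1, 5, 9, 2, 6), while almost any ordinary sentence fails on the first word or two.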
       
 (DIR) Post #ApyYHfmEWvAef9M19E by marymessall@mendeddrum.org
       2025-01-11T14:46:39Z
       
       0 likes, 0 repeats
       
       @futurebird I think educators need to put in a little more work to understand what large language models are actually doing. The best resource for this is this LONG article by Stephen Wolfram, of "Mathematica" and "Wolfram Alpha" fame. If I could assign it as homework to all educators and policy makers, I would. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
       
 (DIR) Post #ApyYQnHdzR0eYBOa36 by matrakuku@mastodon.social
       2025-01-11T14:48:18Z
       
       0 likes, 0 repeats
       
       @futurebird Geoffrey Hinton: m.youtube.com/watch?v=qrvK_KuIeJk
       
 (DIR) Post #ApyYfBEHkyKks4gUee by chrisamaphone@hci.social
       2025-01-11T14:50:54Z
       
       0 likes, 0 repeats
       
       @futurebird I recently watched this video from an engineering prof I follow on her classroom AI policy, and it makes a lot of great points about learning, attribution, privacy, and ethics: https://youtu.be/zOGrxJ4AAXk?si=p9UNIE8QtYCQvDEE It doesn't get at everything I personally find depressing and abhorrent about the whole *project* of OpenAI, but it seems like a good set of practical explanations.
       
 (DIR) Post #ApyZfuGCQtdmUCUR4C by simone_z@mastodon.social
       2025-01-11T15:02:16Z
       
       0 likes, 0 repeats
       
       @futurebird @melioristicmarie Space here is limited. I will try to explain better what I mean. For texts of low-to-average contextual complexity, such as those written by most secretaries, using AI is at least as effective as giving the task to a human collaborator, given that supervision (validating the text) is required anyway. How many millions of daily person-hours are spent today doing this? AI can do it instead.
       
 (DIR) Post #ApyadAueAB1NDZfcBs by semitones@tiny.tilde.website
       2025-01-11T15:12:47Z
       
       0 likes, 0 repeats
       
       @futurebird I saw a tv feature about Sal Khan's AI: the kids are assigned to use a special chatbot to help them learn chemistry.(https://www.cbsnews.com/news/how-khanmigo-works-in-school-classrooms-60-minutes/) It seemed to not give authoritative answers, but guided the students' work through problems, like a tutor. I am skeptical of anything that uses AI as authoritative. It also let the teacher drop into any of the chat histories and see what the students were working on, and raised flags if any students seemed to be struggling. (1/2)
       
 (DIR) Post #ApybHKz08q1E28rFg0 by MedeaVanamonde@chaosfem.tw
       2025-01-11T15:20:08Z
       
       0 likes, 0 repeats
       
       @futurebird @passenger They have no Dasein. They are just ready-to-hand hammers.
       
 (DIR) Post #ApyepzUuJbxPOaztfU by WizardOfDocs@wandering.shop
       2025-01-11T15:57:12Z
       
       0 likes, 0 repeats
       
       @futurebird my opinion: teach students to write and research without AI, so that when they start using it they understand just how stupid and unhelpful it is. Teach them to respect themselves and their art, and not give it away for free so someone else can profit from it.
       
 (DIR) Post #ApyfK5sr8TrHtEYN3w by faassen@fosstodon.org
       2025-01-11T16:05:33Z
       
       0 likes, 0 repeats
       
       @futurebird Also: learning to evaluate those sources. Emphasis on unreliability. The harder or more obscure the question, the less likely you are to get a good answer. I imagine for common school-level questions it can do well, which can lead to the wrong impression that they do the hard part well too. Our heuristics lead us to believe someone who can answer a lot of medium-level questions well must be an expert, but it doesn't work like that with AI.
       
 (DIR) Post #ApyfRRj2KMx1epSy48 by FantasticalEconomics@geekdom.social
       2025-01-11T16:06:48Z
       
       0 likes, 0 repeats
       
       @futurebird I would want all teachers and students to know the costs of AI before making decisions to use it. Environmental costs from energy, pollution and water use: https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about And much of it is based on stolen information (or at least folks have no real ability to opt out), making it ethically questionable in its creation even if it's 'used responsibly'.
       
 (DIR) Post #ApyfUGvqugBV3FoAq0 by va2lam@mastodon.nz
       2025-01-11T16:07:19Z
       
       0 likes, 0 repeats
       
       @futurebird yes, I do this, though I teach from shared slides where colleagues don't...
       
 (DIR) Post #Apyg9cKGHDbDH9wMDY by floppyplopper@todon.nl
       2025-01-11T16:14:49Z
       
       0 likes, 0 repeats
       
       @futurebird i think that while people can use an educational exemption for copyright rules, it's best practice to provide a good example. you're doing exactly the right thing imo.
       
 (DIR) Post #ApyjhRcEu15pleAev2 by david_chisnall@infosec.exchange
       2025-01-11T16:54:34Z
       
       0 likes, 0 repeats
       
       @futurebird Anyone in education considering AI should read this paper (AI use linked to decline in critical thinking ability). @didierlapin can talk about AI in assessment.
       
 (DIR) Post #ApylXCG7LNAP9T458i by didierlapin@mastodon.social
       2025-01-11T17:15:07Z
       
       0 likes, 0 repeats
       
       @futurebird from an educational assessment perspective, the JCQ has some interesting guidance which includes attribution https://www.jcq.org.uk/wp-content/uploads/2024/07/AI-Use-in-Assessments_Feb24_v6.pdf
       
 (DIR) Post #ApyuidoC9d2TMtXJJo by didierlapin@mastodon.social
       2025-01-11T17:17:21Z
       
       0 likes, 0 repeats
       
       @futurebird normally at this point I’d start talking about the importance of context validity, but I think you said elsewhere that your students are third graders? If so I’d guess you’re focussed on learning oriented rather than summative assessment, right?
       
 (DIR) Post #ApyuiewjvLT4tgXeHA by futurebird@sauropods.win
       2025-01-11T18:58:04Z
       
       0 likes, 0 repeats
       
       @didierlapin  My students are in grades 6-12.
       
 (DIR) Post #ApyuwNoKQlOBo29RoW by mavu@mastodon.social
       2025-01-11T19:00:31Z
       
       0 likes, 0 repeats
       
       @futurebird Besides all things mentioned, I fear normalizing reliance on LLMs will cause problems once they leave the current hype cycle and start to require payment. Will the school pay for pupils' LLM use? Will the school take the improved abilities of more expensive LLMs into account when grading? Will LLMs always be allowed? All of them? The time when those things are free to use will definitely come to an end. Better to think of the social implications before that.
       
 (DIR) Post #ApyvWMpBZghDAki8OG by futurebird@sauropods.win
       2025-01-11T19:07:03Z
       
       0 likes, 0 repeats
       
       @akamran @Anke  I try very hard to source everything even on social media and I wish more people would do this— it’s getting harder to do!
       
 (DIR) Post #ApywiwsbRqvgQNs9gm by grant_h@mastodon.social
       2025-01-11T19:20:31Z
       
       0 likes, 0 repeats
       
       @futurebird lots of good answers already. My take: 1. Never use it for marking, and never use an AI detector. These are inaccurate, and they target and destroy good kids as much as they possibly spot some cheats. 2. They are useful for reformatting. I can make a GIFT MCQ for Moodle from notes in 10 minutes. They can also be good for prompts. 3. If your students aren't invested in learning, you've already lost. Those who want to learn will get that AI doesn't get info into the brain. The others won't.
       
 (DIR) Post #ApyycubpB3MJYMH1yS by lienrag@mastodon.tedomum.net
       2025-01-11T19:41:48Z
       
       0 likes, 0 repeats
       
       @futurebird At least tell them to use the term "stochastic parrots" instead of "AI" in order to make clear to the students (and the staff!) what it is exactly. (And yes, I know that not all AI is stochastic parrots, but the one people talk about is, and making the distinction is important.) @Zumbador
       
 (DIR) Post #ApyzdjCfAYZaQFSR16 by alcinnz@floss.social
       2025-01-11T19:53:10Z
       
       0 likes, 0 repeats
       
       @futurebird I'll point you to @timnitGebru , she's very well educated on the subject & has some good papers!
       
 (DIR) Post #Apz5JD0PljCUUrKMPg by Blort@social.tchncs.de
       2025-01-11T20:56:31Z
       
       0 likes, 0 repeats
       
       @futurebird > If you could share an opinion, resource or idea with a bunch of very dedicated teachers about AI what might it be? Kill it with fire?
       
 (DIR) Post #Apz5NVyBCf3kTFKqUy by futurebird@sauropods.win
       2025-01-11T20:57:33Z
       
       0 likes, 0 repeats
       
       @Blort  I’m trying not to come off too strongly. lol
       
 (DIR) Post #Apz5sP4Qu6vA3BzNc8 by Blort@social.tchncs.de
       2025-01-11T21:02:59Z
       
       0 likes, 0 repeats
       
       @futurebird I'm probably the wrong person to listen to, then. 😜
       
 (DIR) Post #ApzK3oISqtl0pesxfs by futurebird@sauropods.win
       2025-01-11T23:41:59Z
       
       0 likes, 1 repeats
       
       @Blort "How do we address the impact of AI in the lives and studies of our students?" "Burn it. Burn the garbage boggle machine before it bewitches the young. It speaks two lies for every three truths and none can tell them apart. Burn it and give it little cement boots then place it in the East River"
       
 (DIR) Post #ApzKb2hAL9X32oHmAy by trochee@dair-community.social
       2025-01-11T23:47:59Z
       
       0 likes, 0 repeats
       
       @futurebird @Blort This is the Way
       
 (DIR) Post #ApzKuQLmcCdRuZWfj6 by bituur_esztreym@pouet.chapril.org
       2025-01-11T23:51:30Z
       
       0 likes, 0 repeats
       
       @futurebird @Blort great amount of "bury it in the desert" / "wear gloves" vibes in there...
       
 (DIR) Post #ApzNFQLc1KCPtOaTaK by TerryHancock@realsocial.life
       2025-01-12T00:17:38Z
       
       0 likes, 0 repeats
       
       @futurebird @Blort Yes! "Get on the cart, ya toadies!"* Shadows of the Butlerian Jihad coming. [*H2G2 Radio Show, if this is too obscure]
       
 (DIR) Post #ApzNSEmKLTLjmBC1Sa by Phosphorous@zirk.us
       2025-01-12T00:20:01Z
       
       0 likes, 0 repeats
       
       @futurebird @Blort And there needs to be an incantation to voice over the grave, ensuring it does not rise.
       
 (DIR) Post #ApzNX0fUEbkvSgGjtQ by futurebird@sauropods.win
       2025-01-12T00:20:52Z
       
       0 likes, 1 repeats
       
       @TerryHancock @Blort LOL What if in the world of Dune they became anti-AI not because it was too powerful and evil or whatever... but just because it was so terrible and made so much nonsense they got fed up with it?
       
 (DIR) Post #ApzNdQ54fD7810IHOS by jmax@mastodon.social
       2025-01-12T00:22:04Z
       
       0 likes, 0 repeats
       
       @futurebird @TerryHancock @Blort Yeah, the Butlerian Jihad is starting to look a lot more sensible.
       
 (DIR) Post #ApzNkCapD44ID6NZC4 by Mkagle@sfba.social
       2025-01-12T00:23:16Z
       
       0 likes, 0 repeats
       
       @futurebird @TerryHancock @Blort That's just Hitchhiker's Guide to the Galaxy
       
 (DIR) Post #ApzO1k9pfcwnES7NAm by preferred@expressional.social
       2025-01-12T00:26:27Z
       
       0 likes, 0 repeats
       
       @futurebird @TerryHancock @Blort "what if(tm)" .. Who's F pro-AI, I might ask.....
       
 (DIR) Post #ApzOZ6UZeUJA3dj8c4 by Seanochicago@mastodon.sdf.org
       2025-01-12T00:32:19Z
       
       0 likes, 0 repeats
       
       @futurebird I think stressing that "you're responsible for what you turn in. If you turn in junk, it's your fault." Too many people think "AI/the algorithm absolves me from responsibility." That's the opposite of what should happen. If you use an algorithm that kills somebody, you should be tried for murder. Yes, I think that should apply to corporations too. Criminal negligence should result in criminal charges.
       
 (DIR) Post #ApzPeOJqtmf2kWnU7U by Moss@beige.party
       2025-01-12T00:44:37Z
       
       0 likes, 0 repeats
       
       @futurebird @TerryHancock @Blort The Fremen would certainly be mortified by its water usage.
       
 (DIR) Post #ApzQAXpJ5P1lVDqFLU by tessala@sfba.social
       2025-01-12T00:50:22Z
       
       0 likes, 0 repeats
       
       @futurebird I would encourage educators to create assignments that explicitly direct students to engage critically with GenAI. For example, a history teacher could have students use ChatGPT to answer a prompt and then use 3 other sources to critique or expand on its answer. Or an English teacher could have students edit an essay ChatGPT wrote and write a reflection on what needed to be changed and why. These are the assignments I'd be doing if I were still in writing education.
       
 (DIR) Post #ApzSHLJqBeg4e9UdFY by ParadeGrotesque@mastodon.sdf.org
       2025-01-12T01:13:54Z
       
       0 likes, 0 repeats
       
       @futurebird @Blort - Nuke it from orbit, it's the only way to be sure. - But Sir! AI is in every data center on Earth! - It's a sacrifice I am willing to make.
       
 (DIR) Post #ApzX0l7nUKPbWtltA0 by paninid@mastodon.world
       2025-01-12T02:07:05Z
       
       0 likes, 0 repeats
       
       @futurebird In the era of AI, reading the outputs closely is a political act.
       
 (DIR) Post #ApzdNjukknugA30lFo by disappearinjon@wandering.shop
       2025-01-12T03:18:24Z
       
       0 likes, 0 repeats
       
       @futurebird teachers (and journalists) are particularly susceptible to thinking AI is intelligent because those professions over-index on facility with language as an indicator of ability. Be aware and adjust accordingly.
       
 (DIR) Post #ApzjS0wTzBypr6F3KK by strypey@mastodon.nzoss.nz
       2025-01-12T04:26:19Z
       
       0 likes, 0 repeats
       
       @alienghic > "LLMs are a parlor trick used to part rich fools from their money" Ae, it's a shell game: https://disintermedia.substack.com/p/invasion-of-the-mole-trainers @futurebird
       
 (DIR) Post #Aq0N2EqdetIrcXlwbA by cford@toot.thoughtworks.com
       2025-01-12T11:50:00Z
       
       0 likes, 0 repeats
       
       @futurebird @passenger This seems a very sound take to me, and I think your point about intrinsic limitation will hold any time we ask GenAI to "think". A complication is that many upcoming systems use GenAI to provide unstructured interaction with another component that does model the concepts.
       
 (DIR) Post #Aq0m35Kf1N9bFfJbdY by marymessall@mendeddrum.org
       2025-01-11T15:00:51Z
       
       0 likes, 0 repeats
       
       @futurebird But since that article is so long, a more practical reading assignment might be Ted Chiang's "ChatGPT Is a Blurry JPEG of the Web": https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web I think even non-math people who don't understand how statistics relate to file compression might have enough experience with JPEG artifacts to get the point on a metaphorical level, but I think it's a good description on a literal level too. The statistics on which LLMs are built really ARE a kind of lossy compression.
       
 (DIR) Post #Aq0m367E6o0BgI2OKu by marymessall@mendeddrum.org
       2025-01-11T15:05:42Z
       
       0 likes, 0 repeats
       
       @futurebird"To save space, the copier identifies similar-looking regions in the image & stores a single copy for all of them; when the file is decompressed, it uses that copy repeatedly to reconstruct the image. It turned out that the photocopier had judged the labels specifying the area of the rooms to be similar enough that it needed to store only one of them, & it reused that one for all three rooms when printing the floor plan." -Example from the essay of lossy compression "hallucinations."
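The photocopier failure quoted above can be mimicked with a toy substitution scheme. This sketch is illustrative only: the `compress`/`decompress` helpers and the distance threshold are invented, and real JBIG2-style pattern matching is far more sophisticated; the point is just that "similar enough, store one copy" makes genuinely different inputs come back identical.

```python
def compress(patches, tolerance=1):
    """Store one representative for each cluster of similar patches."""
    dictionary, indices = [], []
    for p in patches:
        for i, q in enumerate(dictionary):
            # "Similar enough": reuse the already-stored copy.
            if sum(abs(a - b) for a, b in zip(p, q)) <= tolerance:
                indices.append(i)
                break
        else:
            dictionary.append(p)
            indices.append(len(dictionary) - 1)
    return dictionary, indices

def decompress(dictionary, indices):
    """Rebuild the sequence; merged patches all decode to the same copy."""
    return [dictionary[i] for i in indices]
```

Feed it two slightly different "labels" and one distinct patch: the two labels are merged into a single dictionary entry, so decompression confidently reproduces the wrong label, with no indication anything was lost.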
       
 (DIR) Post #Aq0m36hjv3D1VX7YFU by marymessall@mendeddrum.org
       2025-01-11T15:07:42Z
       
       0 likes, 1 repeats
       
       @futurebird Here's a little more from the Ted Chiang piece...
       
 (DIR) Post #Aq0ojtu1akTeADOsaW by wtrmt@mastodon.social
       2025-01-12T17:00:16Z
       
       0 likes, 0 repeats
       
       @futurebird "You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this." This is an excellent starting point: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
       
 (DIR) Post #Aq0syrTVIeoYnKNuoS by antoinechambertloir@mathstodon.xyz
       2025-01-12T16:35:14Z
       
       0 likes, 0 repeats
       
       @marymessall @futurebird What I like in the Wolfram piece is that it clearly sets out the random part of ChatGPT, and also explains why random is not a synonym for rule-less. From that perspective, I believe it is important to reflect on whether such a system can be, or cannot ever be, appropriate for emulating thought.
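"Random but not rule-less" can be made concrete with a minimal sketch (my own illustration, with a made-up next-token distribution): each token is drawn at random, but from probabilities fixed by the model, so the randomness is tightly lawful.

```python
import random

# Hypothetical next-token probabilities after some prompt -- not real model output.
probs = {"mat": 0.7, "sofa": 0.2, "moon": 0.1}

def sample(dist, rng):
    """Draw one token at random, weighted by the distribution's probabilities."""
    r = rng.random()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the top end

rng = random.Random(0)          # seeded for reproducibility
draws = [sample(probs, rng) for _ in range(1000)]
print(draws.count("mat") / 1000)   # hovers near 0.7: random, yet rule-governed
```

No single draw is predictable, but the long-run frequencies are pinned to the distribution, which is the sense in which ChatGPT's sampling is random without being arbitrary.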
       
 (DIR) Post #Aq0sysPdoVJphdaLQm by marymessall@mendeddrum.org
       2025-01-12T16:41:41Z
       
       0 likes, 0 repeats
       
       @antoinechambertloir @futurebird Seems to me there is a lot more to "thought" than language, though. People without language still think. They feel, perceive, learn, plan, reason, desire, remember, etc. LLMs do none of that. There is a part of the human brain called "Broca's area," and when it is damaged, people suffer from "aphasia" - an inability to choose words and construct sentences. I think LLMs might be performing some functions of Broca's area. But brains do a lot more than that.
       
 (DIR) Post #Aq0sytDynLaKDl8XtQ by antoinechambertloir@mathstodon.xyz
       2025-01-12T17:41:09Z
       
       0 likes, 0 repeats
       
       @marymessall You are not the first one to tell me that people without language think; it seems that the cognitive sciences support that idea, but I have also read the opposite take. We could reconcile everybody with an appropriate definition of thought. My own experience of thought (where I have to write and speak to make it evolve) puts me on the side of language being required for thinking. Are there experiments that would support the idea that people with aphasia 1/ think, and 2/ don't have an inner language? @futurebird
       
 (DIR) Post #Aq0sytlem8WVuCtRNw by futurebird@sauropods.win
       2025-01-12T17:47:47Z
       
       0 likes, 0 repeats
       
       @antoinechambertloir @marymessall I'm not just sitting here like a petunia, but I also do not think in words or sentences most of the time. It takes a little effort to "put things into words"; that's part of the value of journaling. Although sometimes, no, ALWAYS, something is lost in that translation.
       
 (DIR) Post #Aq0t70OK25jQlUqPLc by antoinechambertloir@mathstodon.xyz
       2025-01-12T17:49:24Z
       
       0 likes, 0 repeats
       
       @futurebird @marymessall I definitely agree that sometimes it's difficult to put things into words.
       
 (DIR) Post #Aq1GJ3IlKt0CDCxPYO by sven_md@mastodon.social
       2025-01-12T22:09:21Z
       
       0 likes, 0 repeats
       
       you mean uniform in shape or patterns, where AI couldn't get into that HD or be missing bytes which fail yet? but: humans can sew those. I'd say AI might read graphics (had an example of a screenshot displaying an actor's nick, texted to someone by quoting it) & ChatGPT read what wasn't even there: the perfect movie title & main character too. AI in furbies (toys) seems the start, but I think it can be auditive + visually scary: face filters to save & errors for explaining screenreaders. https://www.youtube.com/watch?v=CgDFzWvH1NE
       
 (DIR) Post #Aq1H0F5ZO3fbDnNgn2 by williampietri@sfba.social
       2025-01-12T22:17:08Z
       
       0 likes, 0 repeats
       
       @futurebird I would mention rising contempt among the more serious text consumers for GenAI text. E.g., I know an executive who just fired a vendor because they kept sending obviously generated proposals. I see people saying in effect that students don't really need to learn to write because they can have a chatbot do it. But I expect good thinking and good writing will be at even more of a premium as automated mediocrity becomes more common. I hope teachers keep teaching it!
       
 (DIR) Post #Aq1HQG0MbUvDakLo7E by bobthomson70@mastodon.social
       2025-01-12T22:21:51Z
       
       0 likes, 0 repeats
       
       @futurebird @emilymbender and her content
       
 (DIR) Post #Aq1IcEFr2KEZaorvxQ by sobroquet@infosec.exchange
       2025-01-12T22:35:14Z
       
       0 likes, 0 repeats
       
       @futurebird Accentuate the Positive, NOT!
       "14 Risks and Dangers of Artificial Intelligence (AI)": AI has been hailed as revolutionary and world-changing, but it's not without drawbacks.
       https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence#:~:text=Is%20AI%20Dangerous%3F,biggest%20dangers%20posed%20by%20AI.
       "Data Centers & Energy Demand"
       https://www.pecva.org/our-work/energy-matters/data-centers-energy-demand/
       "The Missing AI Conversation We Need to Have: Environmental Impacts of Generative AI"
       https://www.justsecurity.org/96534/environmental-impacts-of-ai/
       "The Scariest Part About Artificial Intelligence": Between its water use, energy use, e-waste, and need for critical minerals that could better be used on renewable energy, A.I. could trash our chances of a sustainable future.
       https://newrepublic.com/article/179538/environment-artificial-intelligence-water-energy?utm_source=newsletter&utm_medium=email&utm_campaign=tnr_daily&vgo_ee=5IGNFv%2Fyr8A2K6Tog%2BI6lOwjzIkUA%2FpxsNRh94HNQvQDby%2FIsmLUieQ%3D%3AHbh1qxozLOQfhAQzZGgP21unaU5c4l8e
       
 (DIR) Post #Aq2QFl1a1ikNMDwpTk by didierlapin@mastodon.social
       2025-01-13T11:35:15Z
       
       0 likes, 0 repeats
       
       @futurebird Speed-running context validity: whatever's available to candidates in the real world is what should be available to them in an assessment. This means that they'll likely be activating the same knowledge and cognitive processes. For AI, I strongly believe that if your candidates can just lift and shift a task to AI, this isn't something it's worth assessing them on. As assessment designers, we need to be aware of AI capabilities and what the human-added value is.