 (DIR) Post #B0ccVZTbWbfWlCtOb2 by RachelThornSub@famichiki.jp
       2025-11-25T23:18:15Z
       
       0 likes, 1 repeat
       
       Poll only for #Blind/LowVision users who rely on #AltText. Is AltText generated with an LLM actually “better than nothing” as some argue? Please comment if you’re Blind or Low Vision, and please boost to get a good sample.
       
 (DIR) Post #B0ccVbBbAWSE3wU2oC by abucci@buc.ci
       2025-11-26T01:23:03Z
       
       1 like, 1 repeat
       
       @RachelThornSub@famichiki.jp I am low vision, and I refuse to accept the false zero-sum game being set up whereby I have to screw other people out of affordable electricity, fresh water, or a good-paying job that doesn't cause PTSD in order to have alt text. The cruelty of this technology makes it unusable, no matter how inconvenienced I end up feeling.
       
 (DIR) Post #B0d6PmxdTUVPT869Ue by CStamp@mastodon.social
       2025-11-26T00:57:59Z
       
       1 like, 0 repeats
       
       @RachelThornSub Would blind people know if the AI text is accurate?  As a sighted person, I've seen some alt text that in no way describes what is seen in the image.  I usually block the person using it.
       
 (DIR) Post #B0dJIRKOo0FrkjZVWC by fastfinge@fed.interfree.ca
       2025-11-26T02:39:32Z
       
       2 likes, 1 repeat
       
       @RachelThornSub So, as an actual blind user who uses AI regularly... no, not really. If you include AI-generated alt text, the odds are you're not checking it for accuracy. But I might not know that, so I assume the alt text is more accurate than it is. If you don't use any alt text at all, I'll use the AI tools built into my screen reader to generate it myself if I care, and I know exactly how accurate or trustworthy those tools may or may not be. This has a few advantages:
       
       1. I'm not just shoving images into ChatGPT or some other enormous LLM. I tend to start with deepseek-ocr, a 3b (3 billion parameter) model. If that turns out not to be useful because the image isn't text, I move up to one of the 90b Llama models. For comparison, ChatGPT and Google's LLMs are all 3 trillion parameters or larger. A model specializing in describing images can run on a single video card in a consumer PC. There is no reason to use a giant data center for this task.
       
       2. The AI alt text is only generated if a blind person encounters your image and cares enough about it to bother. If you're generating AI alt text yourself, and not bothering to check or edit it at all, you're just wasting resources on something that nobody may even read.
       
       3. I have prompts that I've fiddled with over time to get me the most accurate AI descriptions these things can generate. If you're just throwing images at ChatGPT, what it's writing is probably not accurate anyway.
       
       If you as a creator are providing alt text, you're making the implicit promise that it's accurate and that it attempts to communicate what you meant by posting the image. If you cannot, or don't want to, make that promise to your blind readers, don't bother just using AI. We can use AI ourselves, thanks. Though it's worth noting that if you're an artist and don't want your image tossed into the AI machine by a blind reader, you'd better be providing alt text. Because if you didn't, and I need or want to understand the image, into the AI it goes.
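
       The tiered workflow described in the post above (try a small OCR-specialized model first, escalate to a larger vision model only when the image turns out not to be text) can be sketched roughly as follows. This is a minimal illustration, not any real screen-reader API: `run_ocr_model` and `run_vision_model` are hypothetical stand-ins for local inference calls, and the 20-character threshold is an invented heuristic.
       
       ```python
       # Sketch of the tiered local-model approach described above.
       # run_ocr_model / run_vision_model are hypothetical placeholders
       # for local inference calls; they are stubbed here so the logic runs.
       
       def run_ocr_model(image: bytes) -> str:
           """Placeholder for a small (~3B) OCR model such as deepseek-ocr."""
           return ""  # stub: pretend no text was found in the image
       
       def run_vision_model(image: bytes) -> str:
           """Placeholder for a larger (~90B) image-description model."""
           return "A photo of a cat sitting on a windowsill."  # stub output
       
       def describe_image(image: bytes) -> str:
           # Step 1: cheap OCR pass. If the image is mostly text,
           # the small model's output is all we need.
           text = run_ocr_model(image).strip()
           if len(text) >= 20:  # invented heuristic: enough text to be useful
               return text
           # Step 2: only now fall back to the heavier descriptive model.
           return run_vision_model(image)
       
       print(describe_image(b"\x89PNG..."))
       ```
       
       The point of the escalation order is the resource argument made in the post: most images never need the large model, and the large model is only ever invoked on demand, for an image a reader actually wants described.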