Post AxBfRUypgHhy1CMMsa by aceryz@social.hackerspace.pl
 (DIR) Post #AxB6NrjFfBEx1BbW1Q by eff@mastodon.social
       0 likes, 0 repeats
       
       Setting fairness aside, biased AI models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors, even if you call the reduction “woke,” will ultimately make models more accurate, not less. https://www.eff.org/deeplinks/2025/08/president-trumps-war-woke-ai-civil-liberties-nightmare
       
 (DIR) Post #AxBCzVMjrAHMOCPA3c by Salvo@aus.social
       0 likes, 0 repeats
       
       @eff reminder that the insult “Woke” claims the subject is aware (or awake) of what is going on.
       
 (DIR) Post #AxBWdYtW3RZA2tYBJQ by Szwendacz@social.linux.pizza
       0 likes, 0 repeats
       
       @eff And who is to decide when a model is "biased"?
       
 (DIR) Post #AxBfRUypgHhy1CMMsa by aceryz@social.hackerspace.pl
       0 likes, 0 repeats
       
       @eff "AI" is a marketing buzzword and a misnomer, there's nothing intelligent about LLMs.If we value truth as statements corresponding to and based on facts, then LLMs are always wrong. There's no semantics in the model and none appears in its operation or output: it's all syntactic with choosing most probable next token – based on syntactic relations, not on meaning.LMM output can, at best, coincide with truth. It's amazing it so often does, but with no relation to facts, it's always wrong.
       
 (DIR) Post #AxC38BSiQEgKskezTc by ThreeSigma@mastodon.online
       0 likes, 0 repeats
       
       @eff It could be argued that training a network IS biasing it; all it “learns” is bias (much like how conservatives view all education as indoctrination). Biasing a model toward truth requires truthful training sets.