Post B1pvZbTGmxwKYd7bQe by JulianOliver@mastodon.social
(DIR) Post #B1pvZbTGmxwKYd7bQe by JulianOliver@mastodon.social
2026-01-01T06:51:30Z
0 likes, 1 repeats
An enviro defense group reached out asking for my security take on using BigAI in activism work. After giving them the take (basically "don't"), I took the liberty of talking about the ethics of using it in the first place, especially in the context of human rights and climate activism. Sharing here in case it's useful on the same grounds elsewhere. #bigai #bigtech #climate #humanrights
(DIR) Post #B1pvjZ2Mh1chu9HqIC by aral@mastodon.ar.al
2026-01-01T09:21:41Z
0 likes, 0 repeats
@JulianOliver @skry Let me help with my increased character limit 😉 Alt text (text in photo):

Beyond the many security concerns of using BigTech AI products are several ethical questions that I believe ought to be central to anyone considering using the tech.

What we popularly call 'AI' is a software product often engineered to give the appearance of sentience, even an 'ID'. Yet it is not an 'entity' as so many marketeers and authors of fiction might have us think. Rather, our ancient propensity for animism, and a natural tendency to anthropomorphise, are being actively exploited to hide a dark apparatus and project.

Behind the stage play, behind the interface, are whole datacenters serving output built from clusters of machines brute-forcing a statistically-generated collage of largely stolen human content and expression, refined and audited by a vast and poorly-paid force of unseen human labour. As you will read, those interviewed who perform this work will not themselves use the technology, with some parents cited saying they prohibit their children from using it. And who could blame them.

As a whole, the 'AI' industry is largely toxic, driven by an old capitalist desire to finally decouple work from human skill, and profit from it. BigAI is in a race to provide the sole means to this devastating production.

It is also, as we all know, environmentally destructive, with OpenAI (ChatGPT) alone looking to build multiple terawatt datacenters in the US, and in regions where locals are rising up trying to save their water and electricity from being drained by the industry. Behind this massive hardware surface are countless living histories in far-flung lands of conflict minerals and neo-colonisation.

BigAI is also climate poison, the digital equivalent of dirty diesel. Here is an article citing research finding that US coal demand has increased 20% between 2024 and 2025, in large part due to datacenter growth, the majority of which is attributed to the AI industry.
Therein is referenced a finding that ought to be of great interest: a 2024 Morgan Stanley report projected that datacenters will emit 2.5 billion tonnes of greenhouse gases worldwide by 2030 - triple the emissions that would have occurred without the development of generative AI technology.

Finally, as we are now seeing, an underlying motivation of the tech feudalists driving the industry is cognitive capture, a most grim centralisation and engineered dependence. And that's not to speak of the impacts of voice and identity theft, trickery, videos simulating real, feeling people, putting words in their mouths. The large-scale, long-term mental health impacts of this are yet to be seen, not least of all its impacts on human cultures, faith in sense-making and each other.

What began as Turing's (UK) 'Imitation Game' (later known as the Turing Test) has become a very dangerous project of mass, species-scale deception and automation whose Western apparatus is almost entirely within the hands of a deregulated US-based broligarchy at the bidding of a theocratic white supremacist project. A dark digital imperialism whose vast and pervasive export of right-wing US libertarian ideals now reaches into almost every phone, laptop and workstation.

Computationally, environmentally, energetically, socially, culturally - BigAI is an obscenity.

As all this is not part of the world that I for one thought we were building, I do not and will not use it. To use it is to support it. No two ways about it.
(DIR) Post #B1pw3f1O9ZC4FwPel6 by JulianOliver@mastodon.social
2026-01-01T09:24:51Z
0 likes, 0 repeats
@aral @skry Thank you!
(DIR) Post #B1pyjIeRul6AwofLpw by aral@mastodon.ar.al
2026-01-01T09:55:14Z
0 likes, 0 repeats
@JulianOliver @skry Anytime :) Here’s wishing you a happy new year, Julian. Hope our paths will cross again soon 💕
(DIR) Post #B1pzkJUyLlzNopHRzc by JulianOliver@mastodon.social
2026-01-01T10:06:06Z
0 likes, 0 repeats
@aral And vice versa! Keep up the good work. Wishing you and yours great health and a fulfilling year.
(DIR) Post #B1q0RvhuAR0Gly7xYm by deFractal@infosec.exchange
2026-01-01T10:14:26Z
0 likes, 0 repeats
@aral I suggest copy/pasting this into a text editor, replacing all copies of "Al" (with a lowercase "l") with "AI" (with a capital "I"), replacing all single line-breaks with dual line-breaks, and then editing the reply and pasting the corrected version back in. (Then screen readers will pronounce it correctly and pause briefly between paragraphs.) If you want to catch such OCR errors, I suggest using your computer's text-to-speech capability and listening to the text before pasting it into a reply. For future reference, @JulianOliver, if you can't fit your article in the alt text of a screenshot, then I recommend making the alt text "screenshot of text which is repeated in my reply below" (or "replies" if applicable), then pasting the text into as many consecutive self-replies as are necessary given the length limit on your Mastodon instance. Better that than waiting for someone like @skry to do it for you, especially because you (Julian) already have the original text, so your copy won't have errors from applying OCR to an image of text (in a font with ambiguous character pairs such as "l" and "I"), like the copy Julian shared.
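The manual find-and-replace procedure described above can also be scripted. A minimal sketch in Python (the function name and the separate `BigAl` case are my own additions, not part of the thread; word boundaries are used so legitimate words containing "Al" are left alone):

```python
import re

def fix_ocr(text: str) -> str:
    """Apply the two clean-up steps suggested in the thread to OCR'd alt text."""
    # 1. Replace the OCR misreading "Al" (capital A, lowercase L) with "AI".
    #    \b word boundaries avoid touching words like "Also" or "Although";
    #    "BigAl" has no word boundary before "Al", so it is handled separately.
    text = re.sub(r"\bAl\b", "AI", text)
    text = text.replace("BigAl", "BigAI")
    # 2. Double every single line break so screen readers pause between
    #    paragraphs; existing blank lines are left untouched.
    text = re.sub(r"(?<!\n)\n(?!\n)", "\n\n", text)
    return text
```

Note the limitation: this would also rewrite a genuine standalone "Al" (e.g. the name), so the output still deserves a read-through, as the text-to-speech check above suggests.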