2025-11-05 -- My (current) reasons against AI
=============================================

I'm talking about typical cloud-based tools here. Locally run models that you train with your own data are a different story (in some regards). This list is not exhaustive.

From a user's perspective
-------------------------

Everything that AI produces appears to be correct and legitimate. AI is confident. It always *looks* right. But it just isn't. It doesn't reason, it doesn't understand anything. It's just the string of words or code that is statistically likely to come next. And yet it looks so convincing. None of it is trustworthy.

This is dramatically different from internet search engines: They also link to incorrect content, all the time. But they don't try to make it look real. And they, by design, always link to the original source. This means that I, the reader, can build a skill over time: I can learn to estimate the trustworthiness of a search result. When I see a search result from a random blog, I know that it's just a random blog and it may well be wrong. When it's a search result from official documentation or a government page or whatever, then the trustworthiness increases. (You could draw the same analogy with libraries and physical books.)

AI automatically introduces a certain framing. It might tell you something like: "I found a solution to your problem, you can do foo and bar, because that does baz. Here are my sources: ..." It sounds and feels human, and that essentially tricks us into believing it. It doesn't just give you the sources, it gives you an "interpretation" of them, except that this interpretation is likely to be inaccurate. You have to learn to ignore this interpretation. I find this quite tricky, because it looks so good and convincing. By the time you reach the "here are my sources" part (if there even are any), you have already ingested possibly inaccurate information. To fight this, you should basically treat the output of AI like a random, bad blog. Then you should only trust the actual sources (based on their individual trustworthiness) and ignore everything else. And that just leaves you with ... a fancy search engine, but with lots of overhead.

In short, AI has the effect of gaslighting you.

I find it very hard to work with any of these AI tools, because I constantly have to second-guess something that looks legitimate at first sight. This is much more exhausting than just doing the research on my own (using search engines) or writing the code myself. It's like pair programming with a highly confident and charming intern with no work experience whatsoever -- you have to second-guess every move this person makes. This slows me down.

AI isn't trustworthy, and it doesn't understand anything (and never will, if we keep using the current techniques). Not only is it not useful to me, it is actively harmful.

From a moral/political perspective
----------------------------------

I'm already seeing the effects of AI on the (few) people that I teach: When confronted with a task, they immediately reach for an AI bot and then only work with that answer. They no longer do research on their own -- but that is an important skill. You must be able to read different sources, weigh them against each other, and know which sources are authoritative and which are not. You must have your own thought process, your own understanding and reasoning. When you blindly trust the output of an AI bot, you are worse off than before.
You're back to "but it said so", without having an actual understanding of your own.

I have already had to clean up the aftermath of "vibe coding": people not knowing what the hell they're doing, but being proud of it, because the shiny new AI did it. Interns will only learn how to use these AI tools; they will not learn the actual skills.

AI is designed and intended to make you work less. That sounds good, but in the case of programming or writing, it is bad. These are skills that you have to learn yourself. You must be the programmer or the writer. Here's an analogy that I once read: "You don't bring a forklift to a gym." That nails it.

Another analogy: You can't claim to speak a language if you only ever use translation tools. No sane person will hire you as a translator, either, just because you put "I know how to use Google Translate" in your resume.

If you don't learn these skills yourself, you will depend on the companies that sell you their AI tools, and that is my big concern. Do we really want that? Is that empowerment of the user? Is that better than depending on search engines like we already do? Does this make the world a better place?

*At the moment*, we still have a large percentage of people who are (mostly) able to use AI "responsibly": They can use it to get new ideas and learn about new approaches without falling victim to its gaslighting, because they can filter out the garbage (assuming they're not too lazy to actually do that). They can only do that, however, because of their pre-existing knowledge -- because they already are programmers or writers. When you take this knowledge away, it'll be a very different story. And I am already beginning to see these effects in younger people.

I want future generations to be good programmers. I want them to be able to write their own documentation in their own words (and thus incorporate their own knowledge about their own software). I want people to be independent of companies. I want people to be able to program when the internet is down. I want to see strong, independent, intelligent human beings.

From a webmaster's perspective
------------------------------

I host a website. It runs on a VPS at a cloud hoster. That website contains my little blog and my software.

This server keeps getting overrun by AI bots. They viciously scrape everything they can get their hands on. They come in huge waves: I've seen about 1000 requests per second, which isn't even that much -- others get way more requests than that. And they are not smart: They often re-scrape the same HTML page over and over, not even using If-Modified-Since (a polite client's conditional fetch is sketched at the end of this section). They maliciously try to hide their identity. All this puts a lot of strain on targets like me.

You've probably seen "this anime girl"[1] pop up on a lot of websites: That's Anubis, and it tries to counteract the effects of these bot attacks. Even the Git repo viewer of the Linux kernel[2] uses Anubis. This isn't good, this isn't sustainable.

As a workaround, I had to block several other cloud providers from accessing my website. Yes, not just some individual servers, but entire cloud hosters -- which means that you, a user of my website, can no longer do things like run your own RSS reader and fetch my feed. This is harmful to the internet at large. If this keeps going on, people will be discouraged even more from self-hosting. What spam is to email, AI bots will be to web hosting. This completely goes against the idea of a free and decentralized internet that I believe in.
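As for the If-Modified-Since point above: here is a minimal sketch of what a polite client would do, using only Python's standard library (the URL is a placeholder, not a real feed). After the first fetch, an unchanged page costs the server a cheap "304 Not Modified" instead of a full response -- this is exactly the step the scrapers skip:

    import urllib.error
    import urllib.request

    URL = "https://example.com/feed.xml"  # placeholder URL

    # First fetch: remember when the server says the page last changed.
    with urllib.request.urlopen(URL) as resp:
        last_modified = resp.headers.get("Last-Modified")
        cached_body = resp.read()

    # Every later fetch: send that timestamp back as If-Modified-Since.
    req = urllib.request.Request(URL)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)

    try:
        with urllib.request.urlopen(req) as resp:
            cached_body = resp.read()  # 200: page changed, take the new copy
            last_modified = resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code != 304:
            raise  # a real error
        # 304 Not Modified: the server sent no body, keep using cached_body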
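The blocking workaround itself is crude but simple. Assuming an nginx setup (purely as an example, any web server or firewall can do this), it boils down to a few deny rules. The fragment below uses RFC 5737 documentation ranges as stand-ins for a provider's real address blocks, which you would have to collect from published IP lists:

    # Hypothetical nginx fragment; 192.0.2.0/24 and 198.51.100.0/24 are
    # documentation ranges standing in for a cloud provider's real networks.
    deny  192.0.2.0/24;
    deny  198.51.100.0/24;
    allow all;

The crude granularity is exactly the problem described above: those ranges also contain well-behaved RSS readers, and the server cannot tell them apart from the scrapers.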
From a legal perspective
------------------------

There's a double standard at work here: For decades, we've been told that copyright is important, teenagers have been sued for downloads, and we pay extra fees on storage media like hard disks or USB sticks (at least we do in Germany: the "Urheberrechtsabgabe", a copyright levy). But now, all of a sudden, this doesn't matter anymore *for the AI companies*? Suddenly they can use the content of my website without asking and without compensation, completely disregarding any licenses? Huh?

Make them play by the same rules -- or open up everything to everyone. But don't bullshit me just because this is shiny new tech.

I am absolutely aware that, in many areas, the law is (effectively) different for rich people. But this double standard is an integral part of current AI tools. When you praise these tools, you either also praise this double standard, or you must advocate for a change in law that makes everything way more open.

And even if AI were perfect ...
-------------------------------

There's another factor at play, and it took me a while to realize this. Let's assume that AI were perfect and none of the complaints above were true. I still would not want to use it.

The main motivation behind almost everything I do is that I want to explore and to learn. A (hypothetical, perfect) AI takes that away from me.

I already dislike that I know so little about hardware and electronics. I dislike that I do not know how to fix my car. I dislike that so much of what we do is "in the cloud". All this makes me depend on other people (or rather, corporations), which is bad.

"But you can't know everything!" Yes, but I would like to. I *want* to dig through the problems, I *want* to write the code. I want to understand what's going on, not be a slave to some machine that spits out an answer.

____________________

[1]: https://anubis.techaro.lol/
[2]: https://git.kernel.org