**Gopherhole? Why? My personal fight against AI Slop - 5th Dec 2025**
By Kieren Nicolas Lovell

 _________________________________________
/ Why on earth are you writing a Phlog in \
\ a Gopherhole?                           /
 -----------------------------------------
   \    . _ .
    \   |\_|/__/|
       / / \/ \  \
      /__|O||O|__ \
     |/_ \_/\_/ _\ |
     | | (____) | ||
     \/\___/\__/  //
      (_/         ||
       |          ||
       |          ||\
        \        //_/
         \______//
        __ || __||
       (____(____)

A sane question, dear Reader. One that any sane person should
ask :) And on one hand, this is definitely not a sane project.
On the other, it's one of the most sane things I've ever done.
Let me explain.

Since 2023, the world has gone AI crazy. Now don't get me wrong,
I'm not against AI as a tech stack. I think there are absolutely
fantastic use cases for AI. However, I am a little bit at a loss.
I had the silly idea that we would use AI to improve our lives;
I kind of wanted AI to do the stuff that I don't like. AI
automation. Organising my taxes. That crap.

What it seems like is that we've decided, and by we, I mean our
tech overlords, that we want to automate the things that we
really enjoy, and a lot of people really do seem to want to join
this cult. The cult of becoming 'creative generalists'.

2025 has now become the year of AI Slop. Within my social media
feeds, the level of content is just utter dross. Impeccably
_fake_ beautiful people, chuntering out random content designed
not to impart knowledge, but either to fuel my echo chamber or
to enrage me into engaging. All content is just there to grab
engagement, comments, likes and interaction. Which is ironic,
because most of the comments on these Slopbots' articles and
posts are themselves from other Slopbots. The _circle of slop_
is not how I remember the Lion King, but here we are.

Let's see a basic example of how this works, in a crappy ASCII
diagram below.

         --------------------------
         |AI Slopbot makes Article|
         --------------------------
               ^            \
              /              \   (Humans engage -->)
             /                \  (and share further )
            /                  \
     ---------------    -----------------
     | New Slopbot |    | Slopbot #01   |
     | generates   |    | does differing|
     | new article |    | opinion       |
     ---------------    -----------------
            ^                  /
 (<- Feed )  \                /   (<- Humans argue)
 (  goes  )   \              /    (  and share    )
 ( viral  )    \            /
                \          /
              -----------------
              | Slopbot #02   |
              | produces      |
              | more comments |
              -----------------

         Figure 1.1 - The circle of Slop

Now this AI slop has entered all parts of our existence;

(a) Our personal entertainment in social media feeds
(b) Our professional feeds like LinkedIn
(c) YouTube and other video mediums
(d) Our news feeds (which are increasingly becoming just
    misinformation vomit)

For me, this has become a breaking point. On social media, the
only one I have left now is LinkedIn, and that's for professional
purposes. I keep LinkedIn because I have to. My Facebook, X
(formerly known as "Twatter" I believe), and Instagram profiles
have all been expunged. But even having left all of these, my
news, YouTube, and LinkedIn content has been contaminated by this
AI sewage. So let's go through these.

First up, news (if we can still call it that).

With misinformation, there is a term for the way these stories
take hold and start to become mainstream. It's called
/thought-germs/. The way to think of this is that you should
treat misinformation or extreme articles like a digital sneeze.
With each sneeze, this poisoned thought-germ spreads further to
more and more people. The more violent the reaction to the
thought-germ, the more that person will spread it.
This means it will target people who _really agree_ with the idea
inside their echo chamber, who in turn become super-spreaders,
and it will target people who really don't like it, so they
react. Either way, when we argue, engage, comment, and criticise,
we compound the spread.

Now, in the past, only a few of these thought-germs actually went
viral. Most of them burned out before they could take hold. And
yes, we've had a load of them. You most likely have heard that
we, as humans, only use 5% of our brain. That is not true, and if
you think about it for more than six seconds, of course it isn't.
A doctor doesn't say "bloody hell, you are very lucky that
high-speed projectile only liquidated 95% of your brain mass.
Take these two paracetamols, and if you have trouble call me in
the morning".

(Edit: Although, this might be true if you've asked Gemini or
ChatGPT for medical advice, as we all know it's important to eat
some rocks every day for your daily minerals, and doctors
recommend you smoke five cigarettes a day if you are pregnant.)

This is where the AI Slopbots come in. They make sure their
articles provoke more violent reactions, which in turn produce
more digital sneezes, which spread the thought-germ until we
enter a self-sustaining doom loop. The fuel and the fire become
one.

For us as a species, we need to be able to trust something. We in
cybersecurity always go on about "zero trust", but it is not
really "zero trust" in the way that it sounds. It's actually more
like the old proverb "trust, but verify". Humanity has always
aspired to have this within our specialisations.

Within science, first we state our intent, our ideas, the things
we postulate, and then cite our work against others, so you can
verify that what we plan to do within our research is based in
fact, verifiable by another source. Then within our methodology
we have to be clear, concise, and transparent, to make sure
others can check and verify that we are basing our assumptions on
sound approaches. We undergo ethical boards and checks to make
sure what we are doing is ethical and not causing harm. We try to
make sure that we use open data or other public data sources so
this in turn can be checked, and our research can be replicated.
And finally, this is then peer reviewed, before publication ...
or most likely, we go around again because reviewer number 2
spotted something :D

Either way, there are checks and balances. And even with all of
these checks and balances? We still get it wrong occasionally.
But we aspire and aim high so that mistakes, whether from honest
error or negligence, can be found, because we have the ability to
be reviewed critically. Even if you don't like it.

And even then. When we think everything is right... We still call
it a theory. New information comes out all of the time. We aspire
not to be right individually. We aspire to get closer to the
truth collectively, and we hope our small contribution gets us
all there.

It's completely the same with cybersecurity postures. You show me
a zero trust system, and I'll show you (eventually) where there
is a weak spot. This is also fine. You can't mitigate all risks.
The basis of checks and balances is what actually keeps us, as a
society, on track.

I may be using examples from scientific publishing and cyber and
information security, but it really is this part, peer review,
that keeps us, as a whole, from falling apart. In medicine we
have second opinions, we have medical reviews.
In law, we have judges, we have juries, we have the ability to
appeal. We have the public record. The systems are not perfect,
but they admit this; it's part of the checks and balances.

With journalism, we are breaking this check and balance. More and
more AI bots are generating news, and we are firing the people
who can verify it. These AI bots will, like a human would, make
mistakes. But now those mistakes are not being noticed and caught
in time. The damage has already happened. The AI thought-germ has
already taken hold, and is now impossible to put back in the
bottle.

As an example from my field of specialisation, MIT Sloan recently
published an article claiming that 90% of recent malware
infections are created by AI-generated means. That sounds very
scary, until you read the article (it has now been withdrawn)
[1]. You can then clearly see that, for the most part, it's
completely made up. You can tell, because WannaCry, one of the
infections it stated were developed by AI-generated means, was
around before these systems existed in the public domain. It was
actually shut down years before ChatGPT went public. Most of the
citations on this occasion do exist, but are completely
irrelevant to the point they are trying to back up. This research
was published by MIT Sloan. So... AI malware bots also now come
with time machines.

Those in the industry know this isn't true. Lots of us came
together to state clearly: this is not what's happening on the
ground. We cannot see this within logs or attacks. But that
doesn't matter. The thought-germ has spread. It has taken hold.
And if MIT Sloan are failing in this, we are all f**ked.

I've recently seen a wave of "juice jacking" articles on
LinkedIn, warning people that they should never, ever charge
their phone or device via USB ports at public airports, due to
possible infection of their devices. There has been zero evidence
of this in the wild. Anyone with a modern phone that has received
a security patch in the last three years will be absolutely fine.
No matter how hard we try to say this isn't true... more AI
articles get written about it, because it's interesting... and
still, it's not true and not seen in the wild [2, 3]. It is,
however, an interesting news story that divides opinion. Opinion
means more engagement. Engagement means more exposure. The doom
loop continues.

I know these are only silly examples in the grand scheme of
things, but for me they are really important. I have been working
in computing and cybersecurity for nearly 23 years. I've been
playing with computers from when I got my second-hand Spectrum
ZX81 all the way to my Pinebook Pro and shiny new Thinkpad. I've
seriously enjoyed all parts of my computing career.

Back in 1995, I used to be a paper boy so I could spend my pocket
money on my CompuServe monthly subscription (which I think, if
memory serves... was around 25 per month with three hours
included?) so I could meet and engage with people. First on
CompuServe forums, then on IRC and ICQ, then even on Orkut, where
I met my first Estonian friends.

The internet started out as a way for /people/ to connect. To
/bring/ people together. To advance our knowledge, to bring us
all closer to a point where we can understand each other's point
of view. My long-dead email address of 114704.566@compuserve.com
and ICQ UIN 219620 really was my gateway to another world. That
first time going onto your first Gopherhole site, for me, was
mind-blowing.
The same, I'm sure, when you could first communicate with new
people all around the world! Microsoft Live Messenger really did
bring people together! BBM really did make being a teenager
awesome.

And I really miss that. I miss being able to make a human
connection via this whole internet malarkey. Now don't get me
wrong, I'm a very social bear, and I've got loving friends and
family, and the best wife-to-be and the most amazing son that
anyone could ever ask for. What I'm talking about is the very
essence of what was magic about the internet: its humanity. The
ability for us to honestly connect. To share ideas, to advance,
to learn, to befriend...

That is what we've lost from the tech bros' AI advance. We've
extracted the open source soul from the internet. We have a
network of bots, talking to bots, creating slop, because they
only have a few objectives;

(a) more likes
(b) more comments
(c) more shares

Whether it actually makes _a real human connection_, or actually
_improves someone's life, or us as a whole_, no longer matters.

And that's why I have a Gopherhole Phlog. And this is why I'm
going to publish content on it. No ads. No media. No AI content.
No slop. You may or may not like my writing style, I'm more of a
technical writer than this ... but at least it won't sound like
every other article you've read.

And yes, you can also say, hey, you've shared this article,
aren't you a hypocrite, and to that I can say - yes, I most
likely am - but I've shared it there for a reason. To let even
just one of you, if you feel like I do within this new AI slop
nightmare of an internet, know that it's not just you. You are
not alone. And we can miss what the internet was like together.

So, for the TL;DNR: this phlog is just for me - to regain that
feeling I had back in 1994, when my modem had finished its
handshake to get onto the "information superhighway", to pull
emails from friends, to push electronic letters I'd prepared
beforehand, to chat to family and friends, to post phlogs to
strangers with similar interests whom I will never meet, and to
learn new ideas and interests. To enjoy the humanity of the
wibbly wobbly web.

This Gopherhole is my mental bunker to avoid this mental slop.

 _________________________________________
/ Thanks for reading. Hope you find an AI \
\ slop mental bunker too                  /
 -----------------------------------------
   \
    \    ____
        /# /_\_
       |  |/o\o\
       |  \\_/_/
      / |_   |
     |  ||\_ ~|
     |  |||   \/
     |  |||_
      \// |  ||
       |  | ||_
       \ \_|  o|
        /\___/ /
        ||||__
       (___)_)

References

[1] https://www.theregister.com/2025/11/03/mit_sloan_updates_ai_ransomware_paper/
[2] https://www.hacklore.org/letter
[3] https://www.msn.com/en-us/news/technology/ex-cisa-officials-cisos-dispel-hacklore-spread-cybersecurity-truths/ar-AA1R4pCO