 Post #Aw8R1rC79WmRWbkHbc by saraislet@infosec.exchange
       2025-07-14T11:32:47Z
       
       2 likes, 1 repeats
       
       One of the problems with vibe coding is that the hardest part of software engineering is not writing the code; it's *choosing* what to code, designing the system, and, later on, maintaining the code, operations, etc.

       The barrier and investment cost of writing code are themselves a *desirable* aspect of software engineering, because they force you to make careful, good choices before you invest in building something.

       Because the majority of the time spent on, say, curl is not writing the original tool but rather maintaining it over time, it's important to make good choices from the beginning, and at every major version change.
       
 Post #Aw8R1w7KqivcnEY2Lo by saraislet@infosec.exchange
       2025-07-14T11:34:03Z
       
       0 likes, 0 repeats
       
       Early in my career, I built a system for a customer that made it easier for their university to automate sending invitations and login instructions to new users. I missed key differences in some user environments that weren't covered in my testing, and I ended up sending thousands of malformed login invitations... twice.
       
 Post #Aw8R24F4dps7zYEKWW by saraislet@infosec.exchange
       2025-07-14T11:35:32Z
       
       1 likes, 0 repeats
       
       If I'd been forced to write separate login invitations manually for each segment of the user population, I would also have been forced to do more fine-grained testing, take more care to ensure it worked, and think through the details. But as a starry-eyed new engineer, I was too excited by the idea of automating the whole thing in one go. I made it too easy to distribute the impact of my mistake, and I wasted the time of thousands of users.
       
 Post #Aw8R29VZ3x2AK8z4QC by saraislet@infosec.exchange
       2025-07-14T11:41:43Z
       
       0 likes, 0 repeats
       
       I have a much wider and wiser perspective now: the tools my teams develop are operated by those same teams day in and day out, and used daily by thousands of engineers to maintain products that hundreds of millions of customers use, night and day, around the world.
       
 Post #Aw8R2EWSQ3idsMRM0W by saraislet@infosec.exchange
       2025-07-14T12:45:29Z
       
       0 likes, 0 repeats
       
       1. Those tools are maintained by security software engineers many years after the original authors have left the company. So it's rather important that those tools are easy to understand and maintain: readable code, composable parts, unit and integration tests, etc.
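
       To make that concrete, here's a hypothetical sketch in Python (the function, names, and messages are invented for illustration, not from any real tool): a small, composable helper plus pytest unit tests for it, the kind of thing that stays maintainable years after its author has moved on.

           # Hypothetical example: a composable invitation renderer and its tests.
           import pytest

           def render_invite(username: str, login_url: str) -> str:
               """Build a login-invitation message for one user."""
               if not username or not login_url:
                   raise ValueError("username and login_url are required")
               return f"Hi {username}, activate your account here: {login_url}"

           def test_invite_includes_login_url():
               msg = render_invite("ada", "https://example.edu/login")
               assert "https://example.edu/login" in msg

           def test_invite_rejects_missing_fields():
               with pytest.raises(ValueError):
                   render_invite("", "https://example.edu/login")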
       
 Post #Aw8R2JuOOUpsa2g2Vc by saraislet@infosec.exchange
       2025-07-14T12:58:56Z
       
       0 likes, 0 repeats
       
       2. On the order of 60 tools were historically maintained by about 8 people for most of the last 10 years. Granted, half of those tools are glorified scripts, tiny little things. But it was manageable for 8 people to operate and maintain 60 tools because they were built for reliability and resilience, within a resilient platform that made it easy to operate software, redeploy instances and entire clusters, etc. And it continues to be reasonable to maintain and operate because the code is written with clear logging, errors, monitoring, etc. (Okay, most of it. There are gaps, so this also involved a lot of luck 😅.)

       So it's important to think, from the design stage and regularly during maintenance, about how software will be operated, maintained, and eventually retired over the long term.
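
       As an illustration of "clear logging and errors," here's a minimal, hypothetical sketch using Python's standard-library logging module (the tool and cluster names are invented): every log line carries enough context that an operator who didn't write the tool can tell what happened and where.

           # Hypothetical sketch: logs with context so a tool can be operated
           # by engineers who never wrote it.
           import logging

           logging.basicConfig(
               level=logging.INFO,
               format="%(asctime)s %(levelname)s %(name)s: %(message)s",
           )
           log = logging.getLogger("invite-sender")  # invented tool name

           def redeploy(cluster: str) -> None:
               log.info("redeploy started cluster=%s", cluster)
               try:
                   pass  # the actual redeploy work would go here
               except Exception:
                   # log.exception records the traceback alongside the message
                   log.exception("redeploy failed cluster=%s", cluster)
                   raise
               log.info("redeploy finished cluster=%s", cluster)

           redeploy("prod-eu-1")  # invented cluster name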
       
 Post #Aw8R2PSFm1tNmVYeTw by saraislet@infosec.exchange
       2025-07-14T13:08:21Z
       
       0 likes, 0 repeats
       
       3. We think about when to RETIRE software! We probably haven't retired enough. Merge the useful parts into newer systems (or rewrite them if that makes sense). Hand off tools or systems that are no longer part of a team's scope, whether to more relevant teams, to the downstream teams that depend on them, or to upstream teams who are a better fit for owning that product. Whatever the direction, retirement of software is a topic to think about intentionally and regularly.
       
 Post #Aw8R2V40v44HB4GNX6 by saraislet@infosec.exchange
       2025-07-14T13:20:47Z
       
       0 likes, 0 repeats
       
       4. Infrastructure platforms are intricately collaborative sociotechnical systems of systems, with interactive, intertwined layers of domains, teams, and expertise across networking, security, hardware/cloud, developer-facing interfaces, and the many abstractions of/by/for each of these. So all of the above and below aspects of software happen within these collaborative sociotechnical systems. We design, implement, maintain, operate, and retire software collaboratively, planning in advance for things like the downstream dependencies affected if something goes wrong or if we consider retirement, and for how we're affected by upstream decisions, incidents, etc.
       
 Post #Aw8R2anDcQCMwiS3Bg by saraislet@infosec.exchange
       2025-07-14T13:32:15Z
       
       0 likes, 0 repeats
       
       5. Within security (with insecurity?), we always consider how the ever-evolving threat landscape may affect these systems.

       With all the complexity being discussed here, security can seem intimidating. But through tools (and abstractions) like threat modeling, we can focus on the most likely goals of threat actors, what we most want to protect or prevent (e.g., confidentiality/integrity/availability), and the steps of an attack chain where we can most effectively prevent or detect.
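
       To show the shape of that kind of analysis, here's a minimal, hypothetical threat-model sketch in Python (every goal, step, and control below is invented for illustration): each entry ties an attacker goal to the property at risk and to the attack-chain step where a control is most effective.

           # Hypothetical sketch of a threat model as data: attacker goals mapped
           # to the security property at stake and a control that addresses them.
           from dataclasses import dataclass

           @dataclass
           class Threat:
               actor_goal: str        # what the threat actor is after
               property_at_risk: str  # confidentiality, integrity, or availability
               chain_step: str        # e.g. initial access, execution, impact
               control: str           # where we can prevent or detect most effectively

           THREAT_MODEL = [
               Threat("steal credentials", "confidentiality", "initial access",
                      "phishing-resistant MFA"),
               Threat("tamper with build artifacts", "integrity", "execution",
                      "signed, provenance-checked builds"),
               Threat("take the service offline", "availability", "impact",
                      "rate limiting and redundant deployments"),
           ]

           def controls_for(prop: str) -> list[str]:
               """List the controls defending a given security property."""
               return [t.control for t in THREAT_MODEL if t.property_at_risk == prop]

           print(controls_for("confidentiality"))  # -> ['phishing-resistant MFA']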
       
 Post #Aw8R2fzoH2F15DNg0W by saraislet@infosec.exchange
       2025-07-14T13:55:17Z
       
       0 likes, 0 repeats
       
       Writing software involves considerations across many areas (including and beyond design, maintenance, operations, and security), all within collaborative sociotechnical systems.

       Writing software has evolved SO much since my mother was punching stacks of cards in Assembly! We have higher-level languages and compilers that optimize code, memory-safe languages, cloud computing, Infrastructure-as-Code, etc.

       A good example of how software engineering has evolved: one of the hardest problems 15 years ago was "which part of this complex distributed system is broken?" And although tracing methods go back to basic print debugging (in the 1960s, if not earlier), open source projects like OpenTelemetry are what turned this, virtually overnight, into a solved problem.
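
       To make the tracing point concrete, here's a minimal sketch using the OpenTelemetry Python SDK (the opentelemetry-sdk package; the span and tracer names are invented, and exporting to the console stands in for a real tracing backend): nested spans record which step of a request was slow or failed, which is exactly what makes "which part is broken?" answerable.

           # Minimal OpenTelemetry sketch: console-exported spans for one request.
           from opentelemetry import trace
           from opentelemetry.sdk.trace import TracerProvider
           from opentelemetry.sdk.trace.export import (
               ConsoleSpanExporter,
               SimpleSpanProcessor,
           )

           provider = TracerProvider()
           provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
           trace.set_tracer_provider(provider)

           tracer = trace.get_tracer("example.tracer")  # invented tracer name

           with tracer.start_as_current_span("handle_request"):      # parent span
               with tracer.start_as_current_span("query_database"):  # child span
                   pass  # a slow or failing step shows up as this span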
       
 Post #Aw8R2kiciMSrjMNWPg by saraislet@infosec.exchange
       2025-07-14T14:27:21Z
       
       0 likes, 0 repeats
       
       OpenTelemetry, and distributed tracing techniques generally, have been transformative for software engineering in exactly the way that GenAI has not been.

       I'm a mathematician, and I've loved machine learning since I first learned about Markov chains. Some of my earliest code was machine learning… back in the 1990s, when we called it mathematical modeling.

       In my opinion, one of the most transformative advances in machine learning is having cloud-based tools that make it easier to develop and iterate on models, handle data, etc. That, and tools like TensorFlow, nearly democratized the development of relatively good ML capabilities like recommendation systems and automated labeling.
       
 Post #Aw8naApMrTpbFlsHZY by saraislet@infosec.exchange
       2025-07-14T15:46:37Z
       
       0 likes, 0 repeats
       
       @dalias yeah, I agree. Though I would say that most of those harms of applying statistical models to human behavior at scale happened largely independently of broad access to those tools, because most of that was developed by profit-seeking companies that built those capabilities on their own, or would have done so regardless, whereas non-harmful use cases wouldn't necessarily have had the access to develop such capabilities. Does that make sense?
       
 Post #Aw8naBhFd8vtwt5JYm by saraislet@infosec.exchange
       2025-07-14T15:54:09Z
       
       0 likes, 0 repeats
       
       @dalias I guess, rather, I wonder whether I should be focusing more on the issues of how greed & (lack of) ethics lead to harmful applications of technology. Analyzing the qualities of technical capabilities, as I did, is a bit of a red herring against that. : /
       
 Post #Aw8naCaYJXAWiOxTl2 by strypey@mastodon.nzoss.nz
       2025-07-14T22:43:12Z
       
       0 likes, 0 repeats
       
       (1/?) @saraislet

       > how greed & (lack of) ethics lead to harmful application of technology

       At the risk of picking on your wording here and missing your point, I want to point you back to the post where you used the term "sociotechnical systems". You can drop the most perfectly ethical person into such a system, optimised for other values (eg "productivity", "efficiency", "profit"), and it will either corrupt them, or spit them out.

       @dalias
       
 Post #Aw8naDRN99Q5MDff5U by strypey@mastodon.nzoss.nz
       2025-07-14T22:51:01Z
       
       0 likes, 0 repeats
       
       (2/2) That's not to say that personal ethics don't matter; you can't reform sociotechnical systems without them. But you also can't reform them as an ethical lone gun, certainly not from the inside. Harmful application of tech happens because institutions with power over tech (both state and corporate) are not optimising for harmless application.

       It is a failure of ethics, but at the level of systems design and maintenance, not individual ethics. Just like software failures, as you point out.
       
 Post #Aw9bAcHRALNiZk0ioq by saraislet@infosec.exchange
       2025-07-15T09:53:23Z
       
       0 likes, 0 repeats
       
       @strypey it isn't easy, but theoretically enough individuals working collectively can create change in institutional power, through influence/strikes, or through regulation like GDPR.
       
 Post #AwAgn0VaEDMlePiFv6 by strypey@mastodon.nzoss.nz
       2025-07-15T22:31:07Z
       
       0 likes, 0 repeats
       
       @saraislet > working collectively can create change to institutional power, through influence/strikes, or through regulation like GDPR

       Exactly. I find Cory Doctorow's analysis really useful here, particularly the Four Horsemen of Progressification (just came up with that, WIP) that mitigate enshittification: labour rights, customer freedom, market competition, and public regulation. We need to work on strengthening and mobilising all 4 of these, internationally.
       
 Post #AwBnO4nZXfvB58oZai by saraislet@infosec.exchange
       2025-07-16T11:19:43Z
       
       0 likes, 0 repeats
       
       @strypey that's a great name for it, thanks. Didn't know he'd written about these
       
 Post #AwCszoeGjJNgvVEoaW by strypey@mastodon.nzoss.nz
       2025-07-16T23:57:19Z
       
       0 likes, 0 repeats
       
       @saraislet > Didn't know he'd written about these

       So often that I've got them memorised, and I dribble like Pavlov's dogs whenever I see or hear them laid out :P

       Here's an example from March: https://pluralistic.net/2025/03/28/street-pricing/#sportball-analogies

       There's a whole book about it he wrote with Professor Rebecca Giblin, called Chokepoint Capitalism.
       
 Post #AwCtffgSS02HHlDhmC by strypey@mastodon.nzoss.nz
       2025-07-17T00:04:53Z
       
       0 likes, 0 repeats
       
       (2/2) If Four Horsemen of Progressification catches on, though, I take full credit. It was prompted by a combo of Doctorow's comment that nobody had coined a word for the opposite of enshittification, and the fact that his principles number 4 (3, sir! No, in this case, definitely 4).

       Of course, the Four Horsemen of the Infopocalypse probably played a role in choosing that metaphor. But it's also a subtle reference to the fact that apocalypse is actually a synonym for enlightenment (true story).