medium_random_person.rss.xml - sfeed_tests - sfeed tests and RSS and Atom files
 (HTM) git clone git://git.codemadness.org/sfeed_tests
       ---
       medium_random_person.rss.xml (178310B)
       ---
            1 <?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
            2     <channel>
            3         <title><![CDATA[Stories by Wojtek Borowicz on Medium]]></title>
            4         <description><![CDATA[Stories by Wojtek Borowicz on Medium]]></description>
            5         <link>https://medium.com/@wojtekborowicz?source=rss-d8c08a305574------2</link>
            6         <image>
            7             <url>https://cdn-images-1.medium.com/fit/c/150/150/0*acRx0WlYAO3g_orM.jpg</url>
            8             <title>Stories by Wojtek Borowicz on Medium</title>
            9             <link>https://medium.com/@wojtekborowicz?source=rss-d8c08a305574------2</link>
           10         </image>
           11         <generator>Medium</generator>
           12         <lastBuildDate>Mon, 19 Oct 2020 12:33:43 GMT</lastBuildDate>
           13         <atom:link href="https://medium.com/feed/@wojtekborowicz" rel="self" type="application/rss+xml"/>
           14         <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
           15         <atom:link href="http://medium.superfeedr.com" rel="hub"/>
           16         <item>
           17             <title><![CDATA[An open letter about crunch to the leaders of CD Projekt RED]]></title>
           18             <link>https://medium.com/@wojtekborowicz/an-open-letter-about-crunch-to-the-leaders-of-cd-projekt-red-78cdc849f8d5?source=rss-d8c08a305574------2</link>
           19             <guid isPermaLink="false">https://medium.com/p/78cdc849f8d5</guid>
           20             <category><![CDATA[open-letter]]></category>
           21             <category><![CDATA[game-development]]></category>
           22             <category><![CDATA[videogames]]></category>
           23             <category><![CDATA[cyberpunk-2077]]></category>
           24             <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
           25             <pubDate>Mon, 05 Oct 2020 18:11:34 GMT</pubDate>
           26             <atom:updated>2020-10-09T15:01:32.241Z</atom:updated>
            27             <content:encoded><![CDATA[<p>(Tłumaczenie pod wersją angielską.)</p><p><em>To Marcin Iwiński, joint CEO at CD Projekt, and Adam Badowski, Head of CD Projekt RED.</em></p><p>A few days ago you announced CD Projekt RED would introduce a mandatory 6-day workweek until the launch of Cyberpunk 2077. This news came alongside a <a href="https://www.bloomberg.com/news/articles/2020-09-29/cyberpunk-2077-publisher-orders-6-day-weeks-ahead-of-game-debut">report</a> that some employees have already been putting in long hours and working weekends for more than a year. I want to respond to this as a player and fan of CD Projekt RED’s work, as well as a soon-to-be-former shareholder of CD Projekt.</p><p>I own a small amount of CD Projekt shares. They have doubled in value since I bought them. It’s not a ton of money — roughly enough to buy a used car or go on a nice vacation — but given the astronomic growth of the company’s valuation and an upcoming launch of a new franchise, who knows how much it could be worth in a few years. This week, however, I will be selling all my CD Projekt shares and donating the profits to <a href="https://www.ozzip.pl/">OZZ Inicjatywa Pracownicza</a>, one of the largest Polish labor unions.</p><p>I can’t stand to profit from your exploitation of labor. Your games are good and your company’s results are outstanding. But people are more important than both.</p><p>Last year, <a href="https://www.theverge.com/2020/9/29/21494499/cyberpunk-2077-development-crunch-time-cd-projekt-red">Marcin promised</a> the team working on Cyberpunk 2077 wouldn’t have to suffer through mandatory crunch. His LinkedIn <a href="https://www.linkedin.com/in/marciniwinski">bio</a> says he is <em>the guardian of values at CD PROJEKT</em>. 
How does breaking your word and making the employees suffer the consequences of your bad planning square with those values?</p><p>In his announcement, Adam wrote this was <em>one of the hardest decisions he’s had to make</em>. How so? A hard decision would have been to postpone the launch. A hard decision would have been to cut something from the game to reduce the scope of remaining work. Instead, you fell back on the default decision for corporate executives. You decided to exploit your employees.</p><p>Adam also <a href="https://www.bloomberg.com/news/articles/2020-09-29/cyberpunk-2077-publisher-orders-6-day-weeks-ahead-of-game-debut?sref=ExbtjcSG">said</a> he <em>takes it upon himself to receive the full backlash for the decision</em>. What does this mean? While the CD Projekt RED team is forced to put their health and family lives at risk to release Cyberpunk on schedule, you hide behind platitudes. And when the game is out, you will bask in the glory of glowing reviews and record-breaking sales results.</p><p>If you actually want to take responsibility, please explain what you will do to prevent this from happening again. Will you step down from your roles? Will you introduce binding policies to prevent mandatory crunch in the future? Will you support CDPR employees if they decide to unionize? And I hope they do, because it’s clear they need the power of collective bargaining to be able to push back against the leadership that doesn’t value their well-being.</p><p>Regards,</p><p>Wojtek Borowicz, your former supporter.</p><p><em>Do Marcina Iwińskiego, prezesa CD Projekt, i Adama Badowskiego, szefa studia CD Projekt RED.</em></p><p>Kilka dni temu ogłosiliście wprowadzenie w CD Projekt RED obowiązkowego sześciodniowego tygodnia pracy aż do premiery Cyberpunk 2077. Wiadomość ta ukazała się razem z doniesieniami o tym, że niektórzy pracownicy już od ponad roku wyrabiają nadgodziny i pracują w weekendy. 
Chcę się do tego odnieść jako gracz, fan gier CD Projekt RED, oraz jako (wkrótce były) udziałowiec CD Projekt.</p><p>Mam niewielki pakiet akcji CD Projekt. Ich wartość podwoiła się od czasu kiedy je kupiłem. Nie jest to żadna szalona suma. Mniej więcej tyle, żeby kupić używany samochód albo pojechać na fajne wakacje. Ale kto wie ile te akcje byłyby warte za kilka lat, biorąc pod uwagę dotychczasowy wzrost firmy i to, że wprowadzacie na rynek nową markę. Mimo to, w tym tygodniu zamierzam je sprzedać. Zyski przekażę <a href="https://www.ozzip.pl/">OZZ Inicjatywie Pracowniczej</a>.</p><p>Nie chcę czerpać korzyści z wyzysku. Wasze gry są dobre, a wyniki firmy świetne, ale ludzie są ważniejsi od jednego i drugiego.</p><p>W zeszłym roku pan Marcin obiecał, że zespół pracujący nad Cyberpunk 2077 nie będzie musiał przechodzić przez obowiązkowy crunch. W opisie na LinkedInie nazywa się <em>strażnikiem wartości CD Projekt</em>. Jak do tych wartości ma się łamanie danego słowa? Jak ma się spychanie na pracowników konsekwencji złego zarządzania?</p><p>W swoim ogłoszeniu, pan Adam nazwał decyzję o obowiązkowym sześciodniowym tygodniu pracy <em>jedną z najcięższych jakie musiał podjąć</em>. Czyżby? Ciężkim wyborem byłoby przełożyć premierę. Ciężkim wyborem byłoby wyciąć coś z gry na ostatniej prostej żeby zmieścić się w terminie. Zamiast tego, poszliście po linii najmniejszego oporu i zrobiliście to, co prezesom i menedżerom przychodzi najłatwiej. Postawiliście na wyzysk pracowników.</p><p>Pan Adam powiedział także, że <em>bierze na siebie całą krytykę za tę decyzję</em>. Ale co to w praktyce znaczy? Podczas gdy zespół CD Projekt RED będzie ryzykował zdrowiem i stosunkami rodzinnymi żeby dowieźć Cyberpunk w terminie, wy chowacie się za takim pustosłowiem. A kiedy gra się ukaże, będziecie kąpać się w blasku świetnych recenzji i znakomitych wyników sprzedaży.</p><p>Jeśli faktycznie chcecie wziąć odpowiedzialność, wyjaśnijcie proszę co zrobicie żeby ta sytuacja się nie powtórzyła. 
Zrezygnujecie ze stanowisk? Wprowadzicie wiążące zasady by zapobiec crunchowi? Wesprzecie pracowników CDPR jeśli postanowią założyć związek zawodowy? Bo mam nadzieję, że tak zrobią. Ewidentnie potrzebują tego, żeby móc skutecznie przeciwstawić się menedżerom, którzy za nic mają ich dobro.</p><p>Pozdrawiam,</p><p>Wojtek Borowicz, wasz były sympatyk.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=78cdc849f8d5" width="1" height="1" alt="">]]></content:encoded>
           28         </item>
           29         <item>
           30             <title><![CDATA[Computers Are Hard: building software with David Heinemeier Hansson]]></title>
           31             <link>https://medium.com/computers-are-hard/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e?source=rss-d8c08a305574------2</link>
           32             <guid isPermaLink="false">https://medium.com/p/c9025cdf225e</guid>
           33             <category><![CDATA[engineering]]></category>
           34             <category><![CDATA[software-development]]></category>
           35             <category><![CDATA[programming]]></category>
           36             <category><![CDATA[technology]]></category>
           37             <category><![CDATA[agile]]></category>
           38             <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
           39             <pubDate>Sun, 27 Sep 2020 23:49:31 GMT</pubDate>
           40             <atom:updated>2020-09-28T17:32:14.242Z</atom:updated>
            41             <content:encoded><![CDATA[<p><em>If you were to summarize the entire endeavor of software development, you’d say: ‘The project ran late and it got canceled’.</em></p><figure><img alt="Illustration showing a person designing an app interface on a whiteboard." src="https://cdn-images-1.medium.com/max/1024/1*C1pyLUtNRaSP5_em5T8K2w.png" /><figcaption><em>Building software with David Heinemeier Hansson. </em>Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>We’ve reached the end of Computers Are Hard. After several conversations about how individual components of software come to be — from printer drivers to password hashing — I wanted to wrap up with a look at the philosophy of building software products.</p><p>It’s perhaps a little embarrassing, but even after a couple of years in the industry, I never understood why tech companies are so obsessed with speed. And that obsession is baked into the very language of software, where work cycles are called sprints and the measure of progress is called velocity. But is it really so fundamental to ship software fast? I don’t know. I don’t build software myself but I troubleshoot it every day and boy, do I sometimes wish engineers worked a little slower.</p><p>I brought my questions about the methodology of building software to someone who’s had his share of heated debates about the topic. <a href="https://dhh.dk/">David Heinemeier Hansson</a> created Ruby on Rails, is a co-founder and CTO of Basecamp, and an author of business books such as <em>Rework</em>. He also has a reputation for speaking against the industry trends without mincing words, whether these trends are technical, like the rising popularity of microservices, or institutional, like venture capital becoming the default way of growing a business in tech. 
We talked about how software is built today, what it means for the people doing the building, and how it could be built instead.</p><p>Enjoy the final chapter of Computers Are Hard.</p><h4><strong>Wojtek Borowicz: Software methodology is an industry of its own. There is Scrum, and Agile, and coaches, and books, and all of that. But you and your team at Basecamp don’t follow these practices. Why?</strong></h4><p><strong>DHH: </strong>First of all, our approach to software development is heavily inspired by the Agile Manifesto and the Agile values. It is not so much inspired by the Agile practices as they exist today.</p><p>A lot of Agile software methodologies focus on areas of product development that are not where the hard bits lie. They are so much about the procedural structures. Software, in most cases, is inherently unpredictable, unknowable, and unshaped. It’s almost like a gas. It can fit into all sorts of different openings from the same basic idea. The notion of trying to estimate how long a feature is going to take doesn’t work because you don’t know what you’re building and because humans are terrible at estimating anything. The history of software development is one of late or canceled projects. If you were to summarize the entire endeavor of software development, you’d say: ‘The project ran late and it got canceled’. Planning work doesn’t work, so to speak.</p><p>What we do at Basecamp we chose to label Shape Up, simply because that is where we find the hard work to be. We’re trying to just accept the core constraint that it is impossible to accurately specify what software should do up front. You can only discover what software should do within constraints. But it’s not like we follow the idea that it’s done when it’s done, either. That’s an absolute abdication of product management thinking. What we say instead is: don’t do estimates, do budgets. The core of Shape Up is about budgets. 
Not how long is something going to take but what is something worth. Because something could take a week or four months. What is it worth?</p><p>Something could be worth the whole cycle of work — that’s usually the limit — and for us this is six weeks. That’s what we call a big batch. Or it could be worth less than that. Maybe only a week, maybe it’s two weeks. That’s a small batch. That takes the fuzzy project statement of ‘let’s add <em>Feature A</em>’, puts it under a constraint, and delegates figuring out implementation to the people doing the work. That’s the key insight here. If you have a big problem definition and a fixed boundary and you give creative, intelligent people the freedom to come up with a solution within those terms, they will do wonderful work they are very proud of.</p><blockquote>AGILE MANIFESTO</blockquote><blockquote>In 2001 a group of 17 men (and, yes, zero women) published a document that set the course for the next 20 years of software development. Their <a href="https://agilemanifesto.org/principles.html">Agile Manifesto</a> codified the principles of agile software development. The ideas were basic (for example: ‘Working software is the primary measure of progress’ and ‘Simplicity is essential’) but they spawned an entire universe of Agile coaches, consultants, and authors.</blockquote><h4><strong>So the problem with those methodologies is they put too much focus on estimating, which is inherently impossible with software?</strong></h4><p>I’d go even further and say that estimation is bullshit. It’s so imprecise as to be useless, even when you’re dealing with fixed inputs. And you’re not. No one is ever able to accurately describe what a piece of software should do before they see the piece of software. This idea that we can preemptively describe what something should do before we start working on it is bunk. 
Agile was sort of onto this idea that you need running software to get feedback but the modern implementations of Agile are not embracing the lesson they themselves taught.</p><h4><strong>But technology is almost religiously obsessed with speed. How does that work if you want to focus on speed but can’t trust the estimations?</strong></h4><p>We’re talking about progress and speed. Those are actually two different things. You can try to do things faster and faster and realize you’re actually not going any further. The interest for us in Shape Up is to go far. It’s to end up with projects that deliver meaningful, large batches of work that customers and implementers are proud of and happy with. And that is not improved by trying to shrink feedback loops to be impossibly small. There’s this idea that constant feedback is a good thing. Yes, within some reason! I don’t want to be constantly evaluated on what I do. For example, we don’t do sprints. Rejigging the work every two weeks as Scrum and other methodologies dictate is a completely oppressive and churning way of work that just exhausts everyone and doesn’t actually deliver anything. Most people can’t deliver real, big features in two weeks.</p><p>The magic really is in shifting your mindset from estimates to budgets. Don’t think about how long something takes. Think about how long you are willing to give something. This flips the entire idea. It lets the requirements float. The project definition that is vague is actually more realistic. Highly specific project definitions usually go astray very quickly. Vague enough definitions allow for creativity and selectivity for the people doing the work. And when you allow for those two things, you empower these people with the agency to do the best work they think they can do, not just follow the spec.</p><p>The whole Agile rebellion was about rejecting big, upfront design. But I think Agile didn’t take that conclusion far enough. 
It thought: ‘we don’t want big upfront design. We just want little upfront design’. That’s not really that much better. A lot of software methodology is myopically focused on the technical requirements of implementation. But the hard work in software is figuring out what it should do, not how to make it work. There is a mythology of the <em>10x programmer</em>. But that’s not the programmer who is heroic in their implementation of the problem. The 10x programmer is the programmer who restates the problem.</p><p>The problem re-statement really should be at the forefront of software methodology.</p><h4><strong>When teams are tied to the two-week cycles, estimations, and working to spec, do the quality of code and decision-making suffer?</strong></h4><p>Certainly. But even more than that. It results in — and I mean it in a little bit of a glib way but not that glib — human suffering. People who work under such regimes are simply eaten up and spat out. To be constantly reevaluating everything you’re doing every two weeks because that’s when the new sprint cycle starts — it’s going really fast, nowhere.</p><p>This is why we don’t do daily standups. This constant churn, just spinning around in a circle on a very tight leash. I think it’s actually dehumanizing. Again, Agile said: ‘Hey, you know what? That two-year software project you’re trying to plan upfront? That’s a completely bunk idea. It’s way too far out’. And yeah, absolutely! But then, Agile methodology as it’s practiced lately, overcorrected and went way too short. It says two weeks with the daily standups is this magic cycle. No. People need some slack. Some autonomy. Some space.</p><p>Through extensive experimentation we’ve found that about six weeks gives us enough room to breathe and to think. And as long as you’ve set boundaries, you end up making progress even if the day to day doesn’t necessarily look like it. 
You may see more activity on a team that is very tightly leashed in terms of the feedback loops. They’re frantically churning but don’t end up making more progress. Sometimes, the teams that go the fastest are the teams that look very calm. They are not constantly on some methodology treadmill or on some procedural clock that goes ding! ding! ding! every five seconds.</p><figure><img alt="Four smiley people are pointing at a computer during a work meeting." src="https://cdn-images-1.medium.com/max/1024/0*UeO_aWKlT9CaQbEy" /><figcaption>You can tell this is a stock photo because no one is that enthusiastic during a standup meeting. Photo by <a href="https://unsplash.com/@judmackrill?utm_source=medium&amp;utm_medium=referral">Jud Mackrill</a>.</figcaption></figure><h4><strong>Do you believe your Shape Up approach could work at a much larger organization? One that has 500 or 5,000 instead of 50 engineers?</strong></h4><p>First of all, you shouldn’t be thinking about software methodologies in terms of being able to scale from five people to five thousand. Trying to plan work for 5,000 people as one unit is a fool’s errand and no one actually does that anyway. The more interesting approach is: what is the appropriate team limit in general, for any company? Large company of 500 is, let’s say, a hundred teams of five. That’s how you make the comparison.</p><p>It’s not to say that six weeks is some sort of a magic number that can work for everyone. I think it can actually work in a surprisingly large number of instances, far more often than the two weeks approach. It’s far more generous. And far more realistic that you can ship whole things. But if you cannot ship complete features in six weeks, your feedback loop is still too short. 
If I can’t make the whole thing happen, from inception, to implementation, to shipping, within the time frame I had for my cycle, my cycle is too short.</p><p>If you work in native app development, which has a balls-up reputation, maybe six weeks is not enough time. Maybe you need a little more. Or maybe not. It depends on whether you can ship the things you want to ship. But the industry at large has already gelled around thinking ‘oh, two weeks is a great time frame’. What? How did we get to be so sure? Even for a 5,000 people company, six weeks is a more realistic starting point.</p><h4><strong>You have also spoken in less than favorable terms about other trends that have emerged in software development, like microservices and serverless or Test-Driven Development. Are there any trends in software engineering that you actually find appealing?</strong></h4><p>That’s a tough question. It’s much easier to pick out all the shit that I don’t like.</p><p>Obviously, I’m a little biased because I am pushing those things. I’m implementing Ruby on Rails and Shape Up and I’m sharing the things that I think are how you should do software development. That doesn’t mean there aren’t other ways of doing it. There are all sorts of technology stacks and approaches to web development that give me a smile. I’m very happy to see the advances we’ve made in JavaScript development. The level-ups that have happened in front-end have been great at the atomic level. I think where we’ve gone astray is the sort of molecule level. The frameworks and the approaches people take to front-end are not great but I think the underpinning technical improvements are with transpilers, polyfilling, and core progress on JavaScript over the versions.</p><p>A lot of what we’re doing with the web is working with fundamentals that are 25 years old. The core innovations that happened are more about recognizing where to put the emphasis, what’s important, what to push for. 
For example, the web industry approach that front-end development should go through JSON, where the server side is just responsible for generating an API and then the API returns a JSON, is a bad detour. We should go back to embracing HTML. Have HTML at the center of what we do, send HTML over the wire so we’re returning fully formed web documents on the first load, and then do subsequent hydration or updates through HTML as well.</p><h4><strong>I wanted to go back to microservices. One of the engineers I spoke to earlier talked about them as a response to the monolithic software becoming too complex to maintain. What’s your beef with microservices?</strong></h4><p>Let’s start with the premise. Monolithic software <em>becoming </em>so complicated… what is this ‘becoming’? Is this just happening to us? Are we completely innocent bystanders? The complexity is just rolling over us and there’s nothing we can do? That is a bullshit assumption that we need to refute. Complete and utter nonsense. You don’t have to let complexity roll over you. You choose to. And once you’ve chosen to be flooded with complexity, then a natural response is to try shove that complexity into more different boxes because you just can’t handle it all at once, right? Wrong! Just deal with the fucking thing in the first place. Why are things so complicated? Do they have to be so complicated? Could they not be so complicated? In my opinion, the answers are: yeah, they don’t have to be so complicated. Yeah, we can do something about it. No, it doesn’t mean we have to just submit.</p><p>I want to tackle the root cause here. Web development in large terms should be simpler than it ever was. We’ve made tremendous steps forward in compressing conceptual overhead from a large number of domains that used to be very complicated and people needed to think very carefully about. That people somehow still end up with monolithic applications overwhelming them with complexity is a beast of their own making. 
Rather than try to think what can we do, the response is: ‘where can we put the complexity?’ How about we center the discussion around why we have the complexity in the first place?</p><p>Developers talk about accidental and inherent complexity. Accidental complexity is in the implementation, inherent complexity is the complexity of the domain in which we’re working. Inherent complexity in most web applications is the same it ever was. Where we regressed is in introducing a tremendous amount of accidental complexity. If you are unable to contain the complexity of a monolithic application, what on earth makes you think you are competent to distribute that complexity across a fleet of services that now have to interact and deal with network unreliability, retries, two-phase commits, and all these other complexities that simply do not exist when you deal with method calls, parameters, and the basics of running a singular process. There’s very little worse harm you can do to an application on the complexity level than distribute it. You can take even the smallest problem and as soon as you distribute it, it grows an order of magnitude in complexity.</p><h4><strong>Other industries and even politicians look up to tech as a source of innovation. Meanwhile, I hear developers more and more often say that the entire field is fundamentally broken…</strong></h4><p>No, no, no. This comes from a misconception that most software development is engineering. I don’t believe it is. When you look at software development through the eyes of an engineer, yeah, things look broken. And then the engineer would go like: ‘Well, your specs are really loose. Your tolerances are undefined’. All these things, all the engineering assessment, blah, blah, blah. And it’s a fundamental misunderstanding of what software development is. Software development is not engineering in the same sense as building a bridge. 
They’re not just a different branch of the same root discipline.</p><p>This whole self-loathing a lot of software engineers engage in is entirely unproductive and is never going to be resolved. The idea that software development is a young industry and if we just give it another 30 years of ISO compliance or whatever rigor, we’re going to arrive at a romanticized notion of engineering they have in aerospace, or elevators, or bridges… no, we’re not. This is a fundamentally different domain that requires a fundamentally different approach.</p><p>We already have many of the answers. We’re simply afraid of embracing them. For example, in traditional engineering estimates are a huge part. Things run on estimates and on critical path diagrams because that’s simply the way you build a skyscraper. You don’t get to reconfigure how the pylons go after you pour the concrete. Software development is nothing like that. Software in many ways is far closer to the creative process of writing, game making, movies. Experiences where you design the unknown and you don’t know whether it’s good or not until you see it.</p><p>Look at movie making. We’ve been making movies for a hundred years. Haven’t we figured out the creative process yet? No! We haven’t. You can take a great director, a great cast, and still make a totally shitty movie. Versus in building, largely speaking, if you take a great architect, a great engineering firm, and a great general contractor, you’re gonna arrive at a building that works. You may make minor mistakes but the basic structure is going to be sound, unless someone makes a completely negligent error. In movie making, in music, in software things fail all the time. Even when good people who know the techniques of how to build things get together and work on something, they still end up failing.</p><h4><strong>One more thing engineers feel strongly about is their choice of programming language. 
Is there such a thing as a good and a bad language?</strong></h4><p>Yes, for a person. A programming language can be better or worse for an individual. I think they can also be better or worse on an objective level, but that discussion is almost uninteresting. The interesting discussion for me is one of personal truth.</p><p>For example, one of the long-running debates about what makes a programming language good or not is whether you should use static or dynamic typing or strongly or weakly typed languages. In Ruby you don’t have a statically typed language, and there’s a certain class of refactorings or mistakes that approach does not do well with. On the other hand you have something like Java, just to take a standard example of the strongest-typed language, that works in a different way. For different people, with different brains, different languages either speak to them or they don’t. It’s similar when you look at learning styles. Some people learn visually, some people learn audibly, and these styles are absolutely right or wrong for the individual. If you are a visual learner, trying to learn in an audible or a tactile way just doesn’t work for you. For me, Ruby is a far superior programming language to any other I ever tried because it fits my brain like a glove.</p><p>We should look and embrace the personal truths that arise from different brains but we shouldn’t shy away from people with different brain types arguing about what is better and what is worse. There is tremendous value in the clash of opinions. Even if you have one person, like me, who says Ruby is the greatest language ever and another person says Java is the greatest language ever. These are things we’re supposed to embrace. It’s like atoms hitting each other. Then we get light, we get energy, we get excitement. And that’s good! Engineers are so fucking conflict-shy that they can’t take two people disagreeing without backing off and going ‘Trade offs! Trade offs! It depends!’. 
It’s like crying uncle, which I think is a completely counterproductive way to learning, to inspiration, to anything.</p><p>When I debate software development and my choices and opinions, I do it with the full force of conviction about what’s right for me. And the spectators can decide who they’re more like. They can try for themselves. They can see whether the arguments I put forward about my romantic affair with Ruby resonate with them. And if they don’t? Who gives a hoot!</p><blockquote>TYPE CHECKING</blockquote><blockquote>Data is classified into types and each programming language has its own rules for what you can do with which type. Type checking means enforcing those rules, so that the program knows, for example, whether the value <em>30</em> you assigned to a variable should be treated as a number or a string of two characters. Languages are statically or dynamically typed depending on when the type check occurs and strongly or weakly typed depending on how it’s done.</blockquote><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a 
href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c9025cdf225e" width="1" height="1" alt=""><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e">Computers Are Hard: building software with David Heinemeier Hansson</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Computers Are Hard: representing alphabets with Bianca Berning]]></title>
            <link>https://medium.com/computers-are-hard/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343?source=rss-d8c08a305574------2</link>
            <guid isPermaLink="false">https://medium.com/p/bc8c9a498343</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[typography]]></category>
            <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
            <pubDate>Sun, 27 Sep 2020 23:16:41 GMT</pubDate>
            <atom:updated>2020-09-27T23:55:28.346Z</atom:updated>
            <content:encoded><![CDATA[<p><em>Provided you’re storing and processing text in Unicode, it just works.</em></p><figure><img alt="Illustration showing a keyboard connected to a monitor with characters from the Japanese alphabet on display." src="https://cdn-images-1.medium.com/max/1024/1*XslHRcsuYhCrzE7tiZwdjw.jpeg" /><figcaption>Representing alphabets with Bianca Berning. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>If you look deep enough, all data on our computers is binary code. Strings of 0s and 1s are the only language that processors understand. But since humans are not nearly as adept at reading binary, there are multiple layers of translation that data goes through before it’s presented to us in a legible way. And we rarely think about it, but that’s a herculean task.</p><p>Unlike computers, we don’t have a unified way of communicating. We speak thousands of languages, written in hundreds of scripts. We write equations and use emoji. Many engineers spent years trying to untangle the messiness of language for the purpose of representing it in software and then they spent some more years trying to agree on a common approach.</p><p>To find out more about what goes into representing languages and alphabets in software, I reached out to <a href="http://bberning.com/">Bianca Berning</a>, an engineer, designer, and Creative Director at the type design studio Dalton Maag. We talked a little about the history of encoding standards, about how fonts come to be, and about how much work it takes to create one. We also touched on tofu, but not the edible kind.</p><h4><strong>What’s character encoding and why are there different standards for that? What’s the difference between Unicode and ASCII, for example?</strong></h4><p>Character encodings assign numeric values to characters. 
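That mapping from characters to numbers (and, via an encoding such as UTF-8, to bytes) is easy to see from any Unicode-aware language. A minimal Ruby sketch, with a sample word chosen purely for its non-ASCII letters:

```ruby
# Characters are numbers: Unicode assigns each one a codepoint.
text = "Zażółć"  # a Polish sample with several letters outside ASCII

p text.codepoints  # => [90, 97, 380, 243, 322, 263]
p text.length      # => 6 characters
p text.bytesize    # => 10 bytes in UTF-8 (the non-ASCII letters take two each)

# 7-bit ASCII simply has no numbers for these characters:
begin
  text.encode("US-ASCII")
rescue Encoding::UndefinedConversionError => e
  puts "no ASCII codepoint for #{e.error_char.inspect}"
end
```

Six characters, ten bytes, one list of codepoints: the same layers of translation described in the interview.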
They’re used because digital data is inherently numeric, so anything which isn’t a number needs to be mapped to one. There have been many competing standards for encoding characters over the past hundred years, but ASCII and Unicode are the best known.</p><p>ASCII, from around 1960, used seven bits to represent characters, meaning it could encode only 128 different characters. This is just about acceptable for English, but will struggle to faithfully represent the alphabets of other languages. Straightforward extensions of ASCII, using eight bits for 256 different characters, appeared rapidly, but they were many and incompatible. Each language, or at best each group of languages, needed its own encoding, as 256 characters is still not enough.</p><p>There were standardization efforts which led to some rationalization. For example, there is a range of ISO standards for 8-bit encodings which group languages together, such as Latin-1 (formally known as ISO/IEC 8859-1, first published in the early 1980s) for the languages of Western Europe.</p><p>It was Latin-1 which was used as the basis for Unicode in 1988. By growing that 8-bit encoding to a 31-bit encoding, there were finally enough codepoints for every character from every language, while retaining some easy compatibility with one of the most common 8-bit encodings.</p><blockquote><em>UNICODE</em></blockquote><blockquote>Unicode is the globally adopted standard for character encoding, maintained by the <a href="https://home.unicode.org/membership/members/">Unicode Consortium</a>, which has some of the largest tech companies as its members. Unicode supports most writing systems, current and ancient, as well as emoji 🦛. That’s why every platform has the same set of emoji, even though they look different on iOS/Mac, Android, and Windows.</blockquote><h4><strong>What are control and format characters? 
What purpose do they serve?</strong></h4><p>Control characters control the behavior of the device which is displaying the text. Many are rooted in the physical nature of early teletype devices, such as backspace, which moved the carriage back one character (it didn’t delete), and bell, which rang a little bell on a typewriter.</p><h4><strong>Say you’re building a chat service. You’d obviously want to support as many alphabets and writing systems as possible. Is this something that software development tools provide by default, or does it require extra work?</strong></h4><p>I’m probably not the right person to talk about implementation, but in general modern operating systems and application development environments are fully Unicode-aware and compliant.</p><h4><strong>What happens if someone tries to paste unsupported text into your app?</strong></h4><p>Provided you’re storing and processing text in Unicode, it just works. If you’re not, you’ll get a lot of missing-glyph tofu characters.</p><blockquote><em>TOFU</em></blockquote><blockquote>Each font should include a .notdef glyph. It’s the glyph that appears when a website or an app is trying to display an unsupported character. Usually, the .notdef glyph is a white square (like this □), which is why it’s called tofu.</blockquote><h4><strong>What writing systems are the most complicated to support and why?</strong></h4><p>There are writing systems, such as Arabic and Devanagari, in which letter shapes vary depending on the context in which they appear. While their Unicode characters are as straightforward as any other writing system’s, they require an additional stage of processing, known as shaping, to get from a sequence of characters to correctly formatted glyphs.</p><h4><strong>Most physical keyboards are based on the Latin alphabet. How do you make typing in a vastly different writing system possible with this interface?</strong></h4><p>It depends on the writing system and the language. 
As an example, Japanese keyboards have both the English QWERTY layout and hiragana indicated on their keys.</p><p>Many minority scripts don’t have a history of physical keyboards, but digital-only keyboards can often be installed to allow input in languages that use those scripts. It’s far from perfect and far from complete, but people have adapted.</p><h4><strong>Is it possible to create your own custom font for your project? If so, how do you go about this? How do you make sure it’s properly rendered across all platforms?</strong></h4><p>Creating a custom font can be easy or hard depending on the scope and ambition of the project. Each font is a collection of graphic representations of characters, known as glyphs. The more glyphs there are, the more complex the behaviour, and the more diverse the script systems being supported, the more specialist knowledge and skill will be required.</p><p>To guarantee cross-platform compatibility, we’re relying, again, on industry and formal standards. The most common file format for fonts, OpenType, is an ISO standard (ISO/IEC 14496-22) and the most common encoding for accessing the glyphs is Unicode.</p><h4><strong>When we look at fonts, we only see the aesthetic side. But what are the technical steps to creating a new font?</strong></h4><p>We refer to the technical steps by the umbrella term “engineering”. It includes everything from rules for getting from characters to glyphs, to adding extra instructions to help a glyph make best use of the available pixels when it is displayed on screen.</p><figure><img alt="Alphabet written in the Comic Sans font." src="https://cdn-images-1.medium.com/max/1024/1*swVixjdDm_lBahywJUckrA.png" /><figcaption>Obligatory Comic Sans joke. Illustration by <a href="https://en.wikipedia.org/wiki/Comic_Sans#/media/File:ComicSansSpec3.svg">GearedBull</a>.</figcaption></figure><h4><strong>If you want to use an existing font, how do you know what alphabets it supports? 
Do you need to test it manually?</strong></h4><p>There are some existing tools, such as <a href="https://wakamaifondue.com/">Wakamai Fondue</a>, that can provide you with a list of languages that are likely supported by a font. Most of them are based on Unicode’s CLDR which has the largest and most extensive repository of locale data available but it should by no means be considered complete or absolute.</p><h4><strong>And if you want to use a font but it doesn’t work with all alphabets you want to support, can you just edit and expand it?</strong></h4><p>You will need to check the license agreement you agreed to when you downloaded that font to find out what you can and can’t do with it — terms vary from supplier to supplier. If you chose an open source font for your project, the same applies. The terms often allow expansion of the font but there might be restrictions on, or requirements for, how you can distribute the result.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a 
href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bc8c9a498343" width="1" height="1" alt=""><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343">Computers Are Hard: representing alphabets with Bianca Berning</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Computers Are Hard: accessibility with Sina Bahram]]></title>
            <link>https://medium.com/computers-are-hard/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7?source=rss-d8c08a305574------2</link>
            <guid isPermaLink="false">https://medium.com/p/a3ce25b1f7b7</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[accessibility]]></category>
            <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
            <pubDate>Sun, 27 Sep 2020 23:06:44 GMT</pubDate>
            <atom:updated>2020-09-28T17:35:02.874Z</atom:updated>
            <content:encoded><![CDATA[<p><em>I have no interest in people thinking about accessibility. I have interest in people doing something about accessibility.</em></p><figure><img alt="Illustration showing a hand, eye, head, and an ear." src="https://cdn-images-1.medium.com/max/1024/1*kMrJbXN4qzDqhs1FrORQ7w.jpeg" /><figcaption>Accessibility with Sina Bahram. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>Accessibility features in software used to be mostly invisible to me. Sure, I believed they were important, but only in an abstract sense. Then my eyesight started getting worse and I found out it’s hard to enjoy yourself when you’re squinting all the time. I started paying attention to subtitle size options in movies and video games, because I needed them.</p><p>I wish I — and everyone in tech — had spared more thought for accessibility before it became a personal issue. Because accessibility done right benefits everyone and yet it’s rarely done right. There are still lots of games and video players that don’t offer something as simple as enlarging subtitles. And if even things like this are neglected, you can imagine the daily frustrations of people who need assistive technology to use software.</p><p>I discussed the problems tech has with accessibility with <a href="https://www.sinabahram.com/">Sina Bahram</a>, who runs Prime Access Consulting, an inclusive design firm that works with organizations to make their products more accessible. We also talked about what engineers should do for their software to be inclusive and how that’s a question of good coding hygiene rather than some arcane technical knowledge.</p><h4>Wojtek Borowicz: How do you make a product accessible? 
Do you have to think about it from the beginning or do you include accessibility at a certain stage of development?</h4><p><strong>Sina Bahram: </strong>That’s three questions and they have several premises built into them. We should first define what we mean by something being accessible and something being inclusive. Accessibility is often used as a catch-all term that means someone has done things necessary for people who use assistive technology to interact with their product. It’s rooted in a definition that depends upon disability. And if that is the case, it’s critical to understand there are many ways of looking at disability. The two most prominent are the medical model and the social model.</p><p>The medical model of disability says something is broken. Someone’s blind, can’t walk, can’t hear. And therefore, we try to fix that thing. And if we can’t fix it, then that person has an impairment, is disabled. The social model says: instead of putting the burden on the individual, we recognize it’s the environment that’s not inclusive. It’s not the individual that is disabled; it’s the environment that is disabling.</p><p>The reason I say this is because the second part of your question asked: should accessibility — and I’m gonna change that to say <em>inclusivity</em> — be considered at the beginning or as a discrete step in the process. The answer is both. It needs to be woven into the entire product development lifecycle, from conception until post-production and maintenance. There is no other way. This has been proven across tons of projects. Not to mention there is a huge cost implication there. It is fractions of pennies on the dollar to make things inclusive up front compared to remediating after implementation.</p><p>Should there also be specific points in the development lifecycle where accessibility could be considered more? Absolutely. There are places in QA/QC and in testing. And when making feature requests you can be explicit about accessibility. 
For example, asking for a screen reader user to have access to the images on your platform. By being more accessible in this one way you become more inclusive. Accessibility is the outcome and the methodology is inclusion.</p><h4>So you need to think about it from the start and also have parts of the process specifically driven by accessibility, including specifications, QA, and so on?</h4><p>Yes, but not just think about it. Thinking about it doesn’t do any good other than making you feel better. I have no interest in people thinking about accessibility. I have interest in people doing something about accessibility. That means even at the conceptual design stage you are weaving it into your design directives. That means color contrast, text on top of images, making sure that if there’s a menu with hover functionality, it has a little arrow beside it to show it’s a dropdown. Things like these need to be done before even a single line of code is written.</p><h4>And if you don’t do them early on, is it difficult to make an inaccessible service accessible?</h4><p>Sometimes it can be a reasonable amount of effort. Other times, it can be incredibly difficult. It really depends on the product and has a lot to do with the underlying structure and the caliber of the coding. If something has been thrown together with WYSIWYG tools, automatically generated code, and very inexpensive programming labor, you will have more than just accessibility problems, but accessibility definitely tends to suffer disproportionately.</p><p>On the other hand, there are iPhone apps that are 95 percent accessible and the developers weren’t even trying. What they were doing was following good coding hygiene, best practices, and examples from Apple. I would reach out to them to talk about some issues and they would write back and say:</p><p>‘I didn’t even know blind people could use my app. Of course I’m gonna fix this one thing for you. 
The release will be out next week.’</p><p>Conversely, there could be a situation where you didn’t think about it at all and you’re making an app like Instagram, heavily image-based. Now you have a lot of work to do not only because of the engineering effort, but also because the entire core product needs a treatment of accessibility and inclusivity.</p><figure><img alt="Apple VoiceOver icon with a human figure inside a circle." src="https://cdn-images-1.medium.com/max/700/1*upT61LmVve20xjcFMravGw.png" /><figcaption>iOS and Mac come with a built-in screen reader, called VoiceOver.</figcaption></figure><h4>Is it even possible to make an app like Instagram accessible to visually impaired users?</h4><p>Yes, absolutely. Visual descriptions are just one consideration for users with low or no vision, but we must also consider switch-based access, color contrast, users with cognitive differences who prefer different UI layouts, etc. However, those are just accessibility considerations. To make something like Instagram inclusive, it requires promoting affordances that will make that happen. Things like crowdsourcing descriptions or allowing people to search only for images with descriptions or making sure to promote and facilitate content about persons with disabilities and workflows around generating that content. This is only a small fraction, but these actions are not just done for some small percentage of the user base. They help everyone. They do not, contrary to naive belief, make the experience worse. They also, not that this should be the only goal, enhance value and support all popular return on investment (ROI) arguments.</p><h4>You said that some of the accessibility problems can stem from inappropriate tools. Are there existing tools for building accessible products or do engineers need to make them for themselves? And does the quality of tools differ between platforms?</h4><p>There’s a lot of different components to be aware of here. 
Let’s just talk about the web. There’s tooling that results in mostly accessible stuff. If you use a vanilla instance of WordPress and you use a readable theme, chances are your website is going to be okay. Not going to be great, not going to be accessible in all the technical ways, but it will be somewhat usable because the underlying tooling supports basic structure, good semantics, and the ability for the user to go in and fill in alt text, facilitate keyboard access, and caption videos just to name a very few things. So even if there is something wrong, the system provides a facility to fix it. Whereas any type of WYSIWYG editor, like Squarespace for example, is a black box. They don’t let you go in and modify what you need to make the site accessible. And you generally end up with very, truly inaccessible code. Because of the way this tool has been designed, it’s going to map everything as a <em>div</em>, it’s going to nest things inside of other things, it’s just not going to be accessibility-aware. And you’re also not gonna get a lot for free. The reason is that Squarespace, even though they know about the issues, have refused to put effort into fixing them or even providing the basic functionality for the content creators on their platform to address these needs, which is so incredibly unfortunate and frustrating.</p><p>When we talk about integrated development environments (IDEs) like Visual Studio or Xcode, there are plugins and libraries that can help enhance or test for accessibility issues, whether it’s on the testing side or it’s a UI library that generates more accessible components. This is true on the web as well, for example with the latest version of Bootstrap or with components like CodeMirror. When using anything, a library or a framework, it’s very important to be aware of what is being done for developers — and of what we need to do. And if the tool doesn’t do something but at least we’re able to fix it, that’s one thing. 
If the tool doesn’t do anything about accessibility and prevents you from doing anything, then you have yet another circle of problems. It’s a multi-layered issue.</p><h4>Would it then be correct to say that a big part of making a product accessible is choosing the right tools and following the principles of good programming?</h4><p>I would flip the order of those. I would say following the best practices, using the recommendations in the specs (like the ARIA spec or the HTML 5 spec), writing compliant code… these are gonna make a huge difference. Because if you commit to those things, you will pick frameworks that support you in them. If you don’t care about code validation, then you’re not gonna care about the code validation of the framework you’re using. On the other hand, if you have a strict requirement that all code in your project must be validated, you’re just not gonna let in a module that generates crappy HTML. So I would say the order of priorities should go that way.</p><blockquote>ARIA</blockquote><blockquote><a href="https://www.w3.org/TR/wai-aria-1.1/">Accessible Rich Internet Applications</a> is a technical specification for making web content usable and friendly to people relying on assistive technology, like screen readers and dictation tools. ARIA is maintained by the World Wide Web Consortium (W3C) and is a widely adopted standard for web accessibility.</blockquote><h4>Is learning to build accessible products more a matter of adopting the correct mindset rather than learning specific technical skills?</h4><p>Exactly. Then the technical skills help you facilitate the natural objectives that arise from such a healthy mindset. You can learn more about ARIA, you can learn more about how assistive technologies interact with your stuff, but a lot of that has to come from expert users. 
You need to involve expert users for that knowledge transfer, read and educate yourself on these matters, and simply do those things that any professional does to enhance and perfect their craft.</p><h4>Does not having personal needs for accessible products make it more difficult for teams to design, develop, and test with accessibility in mind?</h4><p>It can if the culture has not prioritized inclusion. But if it has, team members don’t need accessibility needs of their own to get it right. It’s also important to understand you will be very hard-pressed to find me a team of people who don’t have accessibility needs. A lot of times those are silent and people don’t talk about them. But 20 to 25 percent of people have some form of disability. So if you talk about any development group of a reasonable size, you’re gonna find people with those needs. And this diversity helps, of course. It’s just that they might not be as profound as for someone who is a screen reader user or for someone who needs to enlarge the text 400 percent or is profoundly deaf or hard of hearing.</p><h4>We mentioned screen readers a couple of times because it’s the first thing that comes to mind when talking about accessibility. What other considerations and solutions are there?</h4><p>In general, we should take a more inclusive strategy and not make assumptions about the abilities of the user. Instead of assuming we may have a blind person using this product, we should instead ask ourselves: are we relying on the use of vision to complete a task? Why are we doing that if we don’t need to? It’s a subtlety but makes a big difference, because then you don’t accidentally leave out a group you didn’t think about at the beginning.</p><p>Now, with respect to things that are not thought about as much as screen reader users and users with low and no vision. People who are deaf and hard of hearing rely on captions, transcripts, and sign language. 
Folks with cognitive differences will benefit from plain-language explanations of controls and non-confusing layouts. People with dexterity differences may be unable to drag a component across the screen, so providing a keyboard equivalency not only allows a screen reader user to benefit from that but also a sighted keyboard user. There are a lot of different ways and groups of people to be thinking about here.</p><figure><img alt="Subtitle settings menu on Netflix." src="https://cdn-images-1.medium.com/max/1024/1*mu3TLcJgZ9bwfwSVdonquw.png" /><figcaption>Wish every video player had subtitle settings like Netflix.</figcaption></figure><h4>What are some low-hanging fruits you wish developers paid more attention to?</h4><p>Label your controls. The number of unlabeled buttons that screen reader users come across is frankly ridiculous. It’s just not acceptable. This is something that can be automated and should just be a best practice. When you make a button, give it a damn label. It’s not hard. That’s one thing that would go a long way.</p><p>Structure is something a lot of developers don’t think about. Heading order matters. Don’t just use a level three heading because you think it looks good. That’s what CSS is for. Headings are for conveying semantic structure. So, again, it all comes down to using technologies the way in which they were designed. Don’t use a link and style it as a button. We literally have a button tag. Use it. And when you do, you get so much for free. You don’t have to do much to make a button accessible. That’s because everything in HTML is born accessible. Lots of hard-working people spend their time making sure that’s true.</p><p>That all of a sudden alleviates this perceived burden, which you haven’t asked about but is a very commonly held myth, that this stuff is hard and intricate. It need not be. 
If you want to talk to me about a multi-user drag-and-drop scenario with a real-time requirement, okay, yes, we gotta go through some design thinking on that. But a video player, a blog, a to-do list, or a messaging app like Signal or something, that’s not hard. Those are known patterns and so many of them have been implemented by various libraries and operating systems for developers. Developers just have to use the components the way they were intended to and stop reinventing the wheel as a primary first step. It’s important to point out that ‘developers’ is a catch-all term here for the development/design team.</p><blockquote>CSS</blockquote><blockquote>Cascading Style Sheets is a style language for customizing the look of websites. It’s usually used in combination with HTML and JavaScript and is one of the primary tools for web developers.</blockquote><h4>Should every front-end developer know these things, because it’s just proper coding?</h4><p>Yes, that’s exactly right.</p><h4>What are the best examples of accessible apps or websites?</h4><p>Definitely <a href="https://twitterrific.com/">Twitterrific</a> on iOS. They’ve done so many things right. They didn’t just do bare accessibility. They tried to make the experience pleasant for a VoiceOver user, for a low vision user, for a high contrast user. They had night mode before everyone was talking about it. They had rotor actions where you can flick up and down with one finger if you’re a VoiceOver user and immediately retweet something, reply, like a tweet, or bring up your profile, all without making you go through three screens. They got it right. Twitter has, unfortunately, made API changes that have made the app less useful, because Twitter is prioritizing their ad revenue over the usability of their platform. But Twitterrific? Iconfactory are the folks who made it and massive props from me. I have nothing but respect.</p><p>WhatsApp is pretty accessible currently. 
It definitely has a few things that could be improved. Facebook’s web accessibility is somewhat suboptimal in my opinion but their app experiences on iOS tend to be more accessible. I’m terrified that a more ad-driven model will be used for WhatsApp in the future, though, making things less accessible. Let’s hope that doesn’t happen.</p><p>Zoom has done a reasonable job. It’s not perfect but they have definitely tried and are taking accessibility seriously. That’s why I don’t use Skype anymore, by the way. I and others informed Microsoft about various usability and accessibility issues with Skype but to no avail. So, we moved all of our client interactions over to Zoom. With Skype, VS Code, and other offerings, I feel Microsoft talks a great deal about accessibility, but it is nowhere near the usability or accessibility I and many others expect and deserve.</p><p>Netflix has really done a good job surfacing audio description and captioning for many of their shows and movies. These days I am surprised when something doesn’t have audio description as opposed to being surprised that it does. And that speaks to a very good job on their part. And they didn’t start that way. They started with the National Association of the Deaf suing them over captions. They took it seriously and instead of going for minimum compliance or trying to fight it legally, they really upped their game and I gotta tell you, they’ve encouraged other platforms. It is my firm belief that none of the new streaming platforms would offer audio description if Netflix wasn’t doing that. They not only made their platform better, they made the entire industry better.</p><p>There’s an app called <a href="https://www.reddit.com/r/Blind/comments/9mwkag/i_made_a_fully_accessible_reddit_app_for_ios_its/">Dystopia</a>, which is a Reddit reader. This is really important because oftentimes people with disabilities are excluded from popular platforms and emerging technologies. 
Reddit is by no means an emerging technology, it’s been around forever, but believe it or not: there hasn’t been an accessible Reddit mobile client until a few months ago. My understanding is that it’s just this one guy who is still at school and just wanted to make Reddit accessible. And it’s night and day. He has truly taken feedback and, like the Twitterrific way of doing things, not only made the experience fully accessible but went well above and beyond to make it fully inclusive. And not at the expense of any other group.</p><p>This last point is key. I have no interest in something that is very accessible but is not pleasant to look at or to hear. It needs to be good for everybody. I want to be really clear about this. Inclusive design is beautiful design, which happens to also be usable by as many people as possible.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina 
Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7">Computers Are Hard: accessibility with Sina Bahram</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Computers Are Hard: app performance with Jeff Fritz]]></title>
            <link>https://medium.com/computers-are-hard/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1?source=rss-d8c08a305574------2</link>
            <guid isPermaLink="false">https://medium.com/p/94aaaa5267b1</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[apps]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[performance]]></category>
            <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
            <pubDate>Sun, 27 Sep 2020 22:49:53 GMT</pubDate>
            <atom:updated>2020-09-28T12:44:40.546Z</atom:updated>
            <content:encoded><![CDATA[<p><em>If the next person comes in and they can’t understand your code, it doesn’t matter how good it is.</em></p><figure><img alt="Illustration showing a race car competing with an old car." src="https://cdn-images-1.medium.com/max/1024/1*OkerZwyxpNcWsxQC_LDHZA.jpeg" /><figcaption>App performance with Jeff Fritz. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>Everyone wants fast software. We all know the agony of operating a touch screen that’s slow to respond, or trying to play a shooter with requirements way above what our PCs can handle. We hate loading screens, animated hourglasses, and spinning beach balls. There’s plenty of <a href="http://www.websiteoptimization.com/speed/tweak/psychology-web-performance/">research</a> into the psychology of human-computer interaction that boils down to this: if the app or website doesn’t respond immediately, we get frustrated, bored, or both.</p><p>But a computer has only so much processing power, memory, and storage. And it can have many applications competing for these resources. As I’m writing this, I have 25 processes open just in Chrome. There are about 150 others, keeping Windows and a bunch of apps running. They’re using 35% of my PC’s memory and I’m not really even doing anything.</p><p>How do software engineers operate in these circumstances, where everyone wants every app to load immediately and run smoothly but at the same time not use too much RAM or processing power? How do they cope with never-ending demands for better performance? I talked about this with <a href="https://jeffreyfritz.com/">Jeff Fritz</a>. Jeff works at Microsoft as a Program Manager on the team responsible for ASP.NET, an open source web development framework. He walked me through how he thinks about and works on writing performant software.</p><h4><strong>Wojtek Borowicz: How do you define app performance? 
Is it how fast it loads and runs? How efficient it is with resource consumption? How stable?</strong></h4><p><strong>Jeff Fritz: </strong>I look at performance as a mashup of all those things. If there are memory leaks, if there are issues with processor utilization, that application or system is going to crash sooner than later because it’s gonna run out of resources. It’s not so much a question of how much processor or memory is being used but whether the application is making the best use of them. It’s one thing to say ‘oh yeah, I’ve got very small utilization of processor and memory’. Well, if you didn’t need a lot of processor and memory, who cares? But if your application is really big and it does a lot of heavy processing, it’s doing some artificial intelligence, modelling, and calculation, well… it needs a lot of processor. Is it using that efficiently?</p><p>And finally, there is the user’s perception. If the time from when I launch an application to when it starts responding is very long, that’s terrible performance. There are ways to hide that. Push it to background threads or put up spinners and waiting indicators to let folks know that ‘I’m doing some work in the background, you can continue doing something over here while I finish over there’.</p><h4><strong>So part of an app’s performance is in the user’s perception and not just in how efficiently it utilizes the system’s resources?</strong></h4><p>If I had to rate those types of performance measurement, I would put them in that order: stability, the time and ability for the user to interact, and the efficiency of the utilization of processor and memory.</p><p>If stability drops quickly, that’s terrible performance. The application is not performing well because it keeps falling down. The user’s perception of performance is number two because okay, we’re stable, but is it returning to me properly? And third would be resource utilization.</p><h4><strong>Is there much difference between platforms? 
Is it more difficult to develop fast apps for Windows, Linux, or Mac OS? How about mobile?</strong></h4><p>I don’t think there is much difference in how you tune for performance between various platforms. You always want to figure out how you can use other threads and push resource-intensive things into background processes, while you’re still servicing the user. You see that same concern whether it’s Windows, Mac, or Linux. Whether it’s desktop, or server, or cloud, or mobile. It’s just a difference in how much resources you have available. When you’re on mobile, you have the phone’s processor. For IoT, you might only have a little processor on a Raspberry Pi. You’re on the cloud? You have terabytes of memory available, dozens of processors, each with a handful of cores… and you’re still gonna run into very similar concepts.</p><p>Tricks come in when you have one thread. Maybe it’s JavaScript or Web Assembly inside a browser. You’re in a sandbox, in a very constrained environment. You can’t play games with background threads. There’s tricks you can pull but it’s still the same thing: where can you stash resources so the processor you have inside a web browser sandbox is able to properly respond and paint the screen in a quick manner?</p><blockquote>THREADS</blockquote><blockquote>When you launch an app, it starts a process on your computer. You can look it up in Task Manager (Windows) or Activity Monitor (Mac). Processes request the CPU to perform instructions. A sequence of such instructions is called a thread and it’s a basic unit of CPU utilization.</blockquote><h4><strong>Are there programming languages and frameworks that are inherently more performant than others?</strong></h4><p>I wouldn’t say that. But there are tools that allow you to interact with platforms at a lower level and act more directly with the processor, with memory, with network. If you can program with C++, then you have full access to every bit of the processor. 
To every register in memory. But you don’t really have garbage collection. You actually have so much control that it’s not the right tool to build a website with and deploy something that’s gonna run in a browser and interact with much higher level languages like JavaScript and HTML.</p><p>There are tools that are better for some environments. There are some general purpose ones. You’re kind of forced into Objective-C and Swift if you want to compile for iOS. You have to be in a Java-like environment if you want to be on Android. JavaScript works in many places. But do you really want to write JavaScript to build a game? Are you gonna get the best performance out of that? Nah. Because in JavaScript, you don’t have control over memory utilization and processor. And you can run into issues based on the traits of the language. To each their own.</p><p>I like C#. It will compile and run everywhere and it’s really good for me to be productive. It improves my developer performance. Does that give me the best experience everywhere? No, I’m not gonna get the best performance in an application deployed to Android or iPhone. But I can take advantage of things that will speed up performance in those.</p><h4><strong>While we’re on the subject of different platforms. Is using Electron or Progressive Web Apps always going to be a performance trade-off versus building native apps?</strong></h4><p>There is a bit of a performance trade-off. There is a choice that you’re making. When you build Progressive Web Apps, you’re building a website that’s going to be able to be installed and run as a local application on Windows, Mac, Linux, iPhone, Android. Nice! But it’s a web application. You’re not going to be able to really touch and interact with sensors and devices. You can hit some Web APIs that will allow you to get to, say, GPS, cameras, and stuff. The little things. 
But you’re not going to be able to build high performance applications.</p><p>Electron is effectively a Progressive Web App that comes with its own browser. That being said, you can tune and really squeeze performance out of that. One of the most used Electron apps is Visual Studio Code and it’s tremendous with performance. I don’t work with the Visual Studio Code team but I see the delivery of their work. It’s a phenomenal editor that many, many people use and for an Electron app, it’s pretty impressive. But am I going to do artificial intelligence and machine learning inside an Electron app? Probably not. I might use an Electron app to view the output of a high-end application that is doing calculations around those things. Or maybe I’m a game developer and I’m rendering scenes for my next 3D game. I’m gonna have distributed computing running on big servers. Maybe they’re in the cloud, maybe in a local data center. Using an Electron app to monitor and paint that performance on screen is a perfect use for that type of technology. But I wouldn’t have an Electron app rendering lighting and maps and ray tracing… you can do it, but you’re not gonna get the best performance from it.</p><blockquote>ELECTRON</blockquote><blockquote><a href="https://www.electronjs.org/">Electron</a> is an open-source software development framework that allows web developers working with languages like JavaScript and HTML to package their software as apps for Mac, Windows, and Linux. It’s popular because it allows you to write one app and release it on multiple platforms. Among others, WhatsApp, Twitch, and Figma use Electron. But it has plenty of opponents, too, because of the performance drawbacks of Electron apps compared to native software on each platform.</blockquote><h4><strong>Hardware obviously plays a major role here. Apps will run smoother on a system with a faster CPU and plenty of RAM. But what about software? Can other apps running on the same system affect your program? 
If so, can you work around that?</strong></h4><p>This is where virtual machines and containers come into play. Some really intelligent folks have built these to be isolated across process boundaries, so that you can’t run into an issue where you’re significantly affected by other processes. You can be better isolated and be guaranteed that this container I’m running in has been allocated 4 GB of RAM or has been allocated X amount of processor cycles.</p><p>You see this on iPhone and Android. They will time out and sleep applications that you’re not currently using. Desktop operating systems (Mac OS, Windows 10, Linux) try to share the processor appropriately, so getting into more of a containerized application will prevent that side effect of stealing all the processor and knocking something down. But there are some applications where I want to own the entire processor. If I’m a big database, if I’m Microsoft SQL Server, MySQL, or Postgres — I want to be the only application running on the system. You better believe I want 100% of the processor and I want all of the RAM.</p><h4><strong>If a customer reports your app uses too much RAM or runs slowly, how do you approach this?</strong></h4><p>That’s when you break out a profiling tool and something that will help you analyze memory utilization. That’s when you’re gonna run your application through a suite of integration tests. See if over the course of half an hour or an hour memory utilization is trending up or it’s staying as close to flat as possible. All applications are at some point going to use more memory. The question is: how fast is acceptable?</p><figure><img alt="Screenshot of Windows Task Manager with Chrome using 1.2 GB of RAM." 
src="https://cdn-images-1.medium.com/max/726/1*7aE38x8oYehu6uFvqKvocQ.png" /><figcaption>Turns out writing a blog can be quite memory-intensive.</figcaption></figure><h4><strong>Are those profiling and integration testing suites standard tools or is it something you need to develop yourself alongside your app?</strong></h4><p>You shouldn’t have to build a profiling tool. In my case, in the .NET world, Microsoft Visual Studio comes with a profiling tool. JetBrains makes a tool called dotTrace that will help you analyze your memory utilization as well. There’s a bunch of them out there from a number of different vendors.</p><p>Integration tests should be tests that you run as part of your quality assurance process. Before you ship something you should have a series of tests. A test can be as simple as a script that says explicitly ‘click this button’, ‘type this’, ‘click this’, and you make sure everything on the script works properly. Or maybe it’s an automation that folks know how to execute on any number of machines so you can test everything in parallel very quickly.</p><p>The trick then is to measure. Measure what the processor or memory use was before the test ran and then measure them at the end and see what the delta is. If that change is acceptable, great! But if you started at 2% of the processor and at the end you’re using 10%, that’s a fivefold increase. I can’t say that’s unacceptable: if it’s a database system, if it’s MySQL or Postgres, and after scaling up and running a bunch of tests it’s only using 10% of the processor, that might be great. That might be tremendous. But if it’s a calculator app… what are you doing?</p><figure><img alt="Screenshot from Chrome’s profiling tool, showing performance test results for facebook.com." 
src="https://cdn-images-1.medium.com/max/1024/1*BpBuhzq4OTiY7ArFdWKUrQ.png" /><figcaption>Testing Facebook’s web app’s performance in Chrome.</figcaption></figure><h4><strong>To what extent does an app have to be designed with performance in mind? Is this something you can always fix later or if you don’t take care of it early on, you’ll never pay back that debt?</strong></h4><p>This is one of the greatest challenges for developers. When do I start thinking about performance? And I hate to say it, but it depends. If you’re building a game, performance is critical. When folks are aiming their guns in a first-person shooter, they need to have great performance. Performance is a differentiator. But if you’re building a little website that’s gonna show information about upcoming conferences, brochureware, performance can be thought about later. You don’t need to design for performance up front. There are considerations you can make that will give you better performance without doing extra work. I can design a static website, so that when my user experiences it, they’re only getting static content, instead of generating pages on every request. Do you need to make that decision upfront? No, not all the time.</p><h4><strong>Are there limits to performance optimization? Is there a point where you say this is as fast and stable as it gets or is there always a margin to improve?</strong></h4><p>There is a school of thought that says you can always make something faster. But you will hit the limit of physics at some point and be like ‘We have X amount of threads running at the same time, I can only get those threads running so fast on this processor’. Also, as you’re optimizing things, you want to make sure they’re maintainable. If you engineer your system in a way that is incredibly ingenious and you squeeze that last bit of performance out of it in a very tricky way, that makes it now less maintainable. Is that a valid trade off? Is that valuable? 
When the next person comes in and they can’t understand your code, it doesn’t matter how good it is. They’re gonna say ‘I can’t understand this. I need to rewrite it’.</p><h4><strong>What common misconceptions do you notice among software engineers when it comes to performance?</strong></h4><p>There are misconceptions that there is one right way. That there is a golden framework or tool set that you can use and you will get the best performance available. That’s just not the case. You can get great performance in your applications with any number of tools, whether it’s C++, Java, PHP, Rust, Go, .NET tools… for whatever task it is, for whatever reason you’re building that application, you can get great performance with whatever tool you choose. There’s limits to how much you can get out of those things, but you can get great performance out of everything.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina 
Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1">Computers Are Hard: app performance with Jeff Fritz</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Computers Are Hard: hardware with Greg Kroah-Hartman]]></title>
            <link>https://medium.com/computers-are-hard/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126?source=rss-d8c08a305574------2</link>
            <guid isPermaLink="false">https://medium.com/p/4be2d31c3126</guid>
            <category><![CDATA[hardware]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[operating-systems]]></category>
            <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
            <pubDate>Sun, 27 Sep 2020 22:26:21 GMT</pubDate>
            <atom:updated>2020-09-27T23:56:29.381Z</atom:updated>
            <content:encoded><![CDATA[<p><em>A printer is a very complex thing.</em></p><figure><img alt="Illustration showing a microphone, printer, and a computer mouse ‘speaking’ in binary code." src="https://cdn-images-1.medium.com/max/1024/1*EsbjOFlvBHIx-CBo3zWVlQ.jpeg" /><figcaption>Hardware with Greg Kroah-Hartman. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>Once, I was troubleshooting with a customer adamant that desktop notifications from our app weren’t firing for him. Standard stuff. Usually, the answer would be misconfigured settings or the operating system interfering. Except I ran a diagnostic and the output was crystal clear: notifications were showing up fine. No errors, all green across the board.</p><p>I threw everything but the kitchen sink at that case. After a few days of back and forth with the customer (bless your patience, sir) I finally got it. His laptop monitor and secondary display had different resolutions and scaling settings. Notifications were supposed to show up on the external monitor but because of a bug, the app was rendering them based on the laptop’s screen settings. The logs told us everything was fine because we were, indeed, showing the notification. A few centimeters beyond the edge of the screen.</p><p>When software meets the messy reality of physical devices, fuckups like this are bound to happen. But at the end of the day, hardware <em>mostly</em> works. Webcams mostly work. Mice and keyboards mostly work. Notifications mostly display in a part of the screen that actually exists. Printers… fine, these are fifty-fifty.</p><p>I asked <a href="http://www.kroah.com/log/">Greg Kroah-Hartman</a> to tell me about the work that goes into making computer peripherals do — mostly — what we ask them to. Greg is the maintainer of the Linux kernel’s stable releases and an author of books about writing Linux drivers. 
He took me on a journey from a tiny processor embedded in a mouse to deep inside the guts of an operating system.</p><p>Oh, and he explained printers to me, too.</p><h4><strong>Wojtek Borowicz: Let’s start with the very basics. I slide a mouse across the desk and it makes the cursor on my monitor move. How does that even happen?</strong></h4><p><strong>Greg Kroah-Hartman: </strong>Oh wow, this is an operating system interview question if I have ever seen one. To be up front, this is not <em>basics</em>. A <em>simple</em> task like this shows a lot about how systems work these days. I’ll make some assumptions to make it easier: the mouse is a USB mouse, not serial, Bluetooth, PS/2, or whatever. Your system is running Linux. I will not go above the operating system level in much detail, as that’s where my knowledge gets blurry.</p><p>First off, there is a tiny processor in your USB mouse. The code it’s running is very small and compact. It’s responsible for two things: reading the state of the mouse movements and button clicks, and responding to the computer when it’s asked if it has done anything different since the last time it was asked. One of the main goals when the USB protocol was created was that a mouse could be made for less than $1. Because of this, a processor that controls a mouse can be made very cheaply and all of the harder computations involved in dealing with mice are done by the operating system: Linux in our case.</p><p>Before the kernel can ask the mouse for data, it has to know that a device is even plugged into it. In the old days objects that were plugged into systems had to be configured so that the system knew what type of device they were, where they were plugged in, and what type of protocol the device ‘spoke’. 
The goal of USB was to try to unify all of that and create a standard that grouped common types of devices together to speak the same way, and create a way for the host system to ask ‘what type of device are you?’ As part of the USB specification process, a huge number of common devices were defined, such as mice, keyboards, disk devices, video cameras, foot pedals, electronic scales, and so on, such that any manufacturer could create a device that spoke the same type of protocol and no custom code would have to be written on the host system. Once the code was written for the operating system to talk to one USB mouse, all USB mice that followed the specification would instantly work. That was a huge step forward in standardization of devices and has done more to make systems easier to use than almost anything else in the past few decades.</p><p>Anyway, back to our mouse. The host system now knows a mouse is plugged into it, so it will go and ask the mouse every few milliseconds or so: ‘do you have any more data for me?’ If it does, it converts that data into a standard form that can be used by programs, and then exposes it to user space. In the case of a mouse, the data is usually a simple ‘I have moved in the X direction so many units, and in the Y direction so many units and button N is now pressed (or released)’.</p><p>A user space program (a program that lives outside of the system’s kernel; every app you use runs in the user space) is running and has either told the operating system ‘wake me up when a mouse has sent you data’, or asks at regular intervals ‘do you have any more mouse data for me?’ The operating system replies to the program, which then converts the data into another unified standard and provides it to the program that wants to represent the mouse pointer on the screen. 
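The ‘moved so many units in X and Y, button N pressed’ report Greg describes has a standard shape for mice: the three-byte boot-protocol report defined in the USB HID specification. A rough decoding sketch in Python (the function names here are mine; the field layout follows the spec):

```python
# Decode a 3-byte USB HID "boot protocol" mouse report.
# Layout (per the USB HID specification):
#   byte 0: button bitmask (bit 0 = left, bit 1 = right, bit 2 = middle)
#   byte 1: X displacement, signed 8-bit ("so many units in the X direction")
#   byte 2: Y displacement, signed 8-bit

def to_signed(byte: int) -> int:
    """Interpret one byte as a signed 8-bit integer (two's complement)."""
    return byte - 256 if byte > 127 else byte

def decode_mouse_report(report: bytes) -> dict:
    buttons = report[0]
    return {
        "left": bool(buttons & 0x01),
        "right": bool(buttons & 0x02),
        "middle": bool(buttons & 0x04),
        "dx": to_signed(report[1]),
        "dy": to_signed(report[2]),
    }

# A report saying "left button down, moved 5 units in X, -3 units in Y":
event = decode_mouse_report(bytes([0x01, 0x05, 0xFD]))
```

By HID convention positive Y points down the screen, so a negative dy means the mouse moved up; the kernel turns streams of these little deltas into the cursor motion you see.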
Drawing that mouse pointer on the screen is a whole other set of sequences that are much more complex than the mouse data pipeline, due to different hardware protocols that are not standardized in places.</p><blockquote>KERNEL</blockquote><blockquote>One of the most important parts of an operating system is its kernel. It manages the communication between hardware and software and allocates memory to other software running in the system.</blockquote><h4><strong>So when you have different computer peripherals, be it keyboards, mice, printers and whatnot, they run their own software too?</strong></h4><p>It is very rare that any peripheral made in the past 10 years would not have a processor running software written for it. See the USB mouse example above. A keyboard has to have code written in it to scan all of the different keys to determine what is currently pressed, and be able to send that data to the host computer when it is asked for it.</p><p>A printer is a very complex thing. My first job out of college in the early 1990s was writing software that was embedded inside printers that printed airline tickets and different types of packing labels. The software had to handle the data that described what text and barcodes needed to be printed and where on the page, as well as control the motors that fed the paper to the printer, monitor the sensors to verify that the paper was present and where it needed to be at that moment in time, drive the print head so it did not burn the paper incorrectly, talk to different chips that handled button pressing, time of day, different font cartridges, persistent memory, and much, much more. There was an internal operating system controlling all of these tasks running at the needed speed in order to keep it all moving smoothly. Modern printers are even more complex, having to talk to wireless networks, handle scanning, and different apps written on the printer itself. 
Usually, Linux runs inside printers in order to make it easier for printer developers to focus on the things they need to do to make a printer work, instead of having to rewrite the basic things like ‘talk to USB’ or ‘talk to a network’.</p><figure><img alt="A wireless printer from Brother." src="https://cdn-images-1.medium.com/max/960/1*TuOJH6TDRBfFsTlQeoklQw.png" /><figcaption>A wireless printer. The most feared enemy of every office worker.</figcaption></figure><h4><strong>Does building your own device require writing a lot of custom code or is everything already baked into existing operating systems?</strong></h4><p>It all depends on what type of device you want to make. It is pretty simple to create your own keyboard these days such that it will ‘just work’, running open source code that talks the standard USB keyboard protocol. That is due to the standardization of many common types of devices.</p><p>But if you want to create something that has never been done before, yes, you will have to write custom code for the operating system to be able to control and talk to your device.</p><h4><strong>Speaking of operating systems, how different is writing hardware drivers between Linux, Mac, and Windows? If I developed Windows drivers for a printer, can I just convert them to Linux and OS X?</strong></h4><p>Hardware drivers are very different between different operating systems. Traditionally, writing a driver for Linux results in about one third less code than for other operating systems, due to the huge amount of common code that Linux provides for you to use. 
The fact that all drivers for Linux are contained directly within the main source tree of Linux has allowed us to see common features that multiple drivers use, and consolidate that code into functions that live outside of the driver and are provided by the operating system, making the driver much simpler and easier to write and maintain over time.</p><h4><strong>What devices are the most difficult to make talk to a computer?</strong></h4><p>Custom ones that have never been done before, as no one has written the code for them yet.</p><h4><strong>Is building support for wireless devices any different than for wired ones?</strong></h4><p>In some ways yes, and in other ways no. Take a mouse again. The USB protocol for mice is called <a href="https://en.wikipedia.org/wiki/Human_interface_device">HID</a>, which stands for <em>Human Interface Device</em>. Manufacturers realized that once this protocol was made and operating systems supported it, they could use the same communication protocol across other transport mediums. So if an operating system could add support for a new transport method, then it could instantly start talking to a device for which it already knew a different transport method. So while there is some plumbing involved in turning a USB mouse into a Bluetooth mouse, the data sent to the operating system to describe how the mouse is moving is the same.</p><h4><strong>Can different peripherals interfere with each other? Is it possible that, for example, my microphone isn’t working because of the webcam drivers? Or my headphones don’t connect because of the Wi-Fi adapter?</strong></h4><p>Hopefully not. For most hardware protocols these days, devices can not even see that there is any other device in the system at all. 
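</p><p>To make the ‘same data over a different transport’ point concrete: a HID boot-protocol mouse report is just three bytes, a button bitmap followed by signed X and Y deltas. A minimal decoding sketch in Python (the function name is my own; the byte layout is the HID boot protocol for mice):</p>

```python
import struct

def parse_boot_mouse_report(report):
    """Decode a 3-byte HID boot-protocol mouse report.

    Byte 0: button bitmap (bit 0 = left, bit 1 = right, bit 2 = middle).
    Bytes 1-2: X and Y movement as signed 8-bit deltas.
    """
    buttons, dx, dy = struct.unpack("<Bbb", report[:3])
    return {
        "left": bool(buttons & 0x01),
        "right": bool(buttons & 0x02),
        "middle": bool(buttons & 0x04),
        "dx": dx,
        "dy": dy,
    }

# Left button held, pointer moved 5 right and 5 up (0xFB is -5 signed):
event = parse_boot_mouse_report(bytes([0x01, 0x05, 0xFB]))
```

<p>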
All they have the ability to do is to answer the simple question from the host system: ‘do you have any data for me?’</p><p>Drivers for specific classes of devices, or different types of custom devices, should not be able to see other devices in the system either, as they are not controlling them. It’s different in cases where some drivers control multiple things at once in order to get the device to work properly, but those are the exception, not the rule.</p><h4><strong>Similarly, is it likely apps would trip over individual devices? Imagine you built support for back and forward mouse buttons into an app and there’s one model of a mouse where it won’t work. How do you debug that?</strong></h4><p>The job of an operating system is to provide a unified view of all hardware to programs, so you should not have to worry about a different type of device. All you should need to focus on is: ‘did the mouse change position?’ But of course, hardware being hardware, there are loads of exceptions and ways that hardware designers can mess things up and do things differently, either on purpose or by accident.</p><p>Because of this, there are huge tables of hardware quirks that an operating system accumulates to smooth things over. But sometimes, for more complex devices, the operating system can not handle these differences, and so a user space library needs to get involved in order to figure things out and fix the data up. That is why there are common libraries that all programs have come to use in order to talk to devices like mice, so that they do not have to duplicate that logic in their own code. Those libraries do not live in the operating system, but are part of the low level plumbing that has been created around it. 
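</p><p>Conceptually, such a quirk table is just a lookup keyed by hardware IDs. A toy sketch (the IDs and flags below are made up for illustration and do not come from any real kernel table):</p>

```python
# Hypothetical quirk table keyed by (vendor_id, product_id).
# Operating systems accumulate much larger tables like this to
# paper over hardware that deviates from the specification.
QUIRKS = {
    (0xAAAA, 0x0001): {"invert_scroll": True},
    (0xBBBB, 0x0002): {"needs_reset_on_resume": True},
}

def quirks_for(vendor_id, product_id):
    """Return the quirk flags for a device, or an empty dict if none apply."""
    return QUIRKS.get((vendor_id, product_id), {})
```

<p>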
The specific library for mice and input devices that does this is called <a href="https://www.freedesktop.org/wiki/Software/libinput/">libinput</a>.</p><blockquote>LIBRARIES</blockquote><blockquote>In computing, a library is a tool in the form of pre-written code that handles a specific task. Engineers use libraries to avoid reinventing the wheel every time they build an app. Greg shared an example of libinput: a Linux library for interacting with input devices like mice, touchpads, and graphic tablets.</blockquote><h4><strong>If you find a bug in device drivers, how do you get the fix to the users?</strong></h4><p>For Linux, you fix the driver and send a change to the owner of the driver and the development community for that subsystem. The change is reviewed by the developers and accepted by the maintainers of that subsystem, and then sent on to the kernel maintainer for inclusion in the next release. When the fix shows up in a public release, it can be backported to older stable releases of Linux at the same time.</p><p>Fixes like this happen all the time. We are averaging about 20–40 fixes a day for Linux at the moment that are being backported to the stable kernels. This is in contrast to the main development cycle of Linux, which is averaging about 9 changes an hour every day, adding new features and functionality for new things that people come up with.</p><h4><strong>We connect devices to computers and to each other through USB, USB-C, Lightning, HDMI… why do we have so many interfaces? Is there much difference between them?</strong></h4><p>There are lots of differences between them at the hardware levels in some ways, and in other ways, they all seem to work the same way on a physical layer (they use <a href="https://en.wikipedia.org/wiki/Differential_signaling">differential signaling</a>) to transmit data across two wires at very high speeds. 
The data that is sent can be in a standard format (like to describe mice), or in a lower-level format, to emulate another type of device to make it look like it is directly connected to the main system over an older style of connection (i.e. PCI).</p><p>New form factors are created all the time in order to provide for higher speeds, increased distances, lower power consumption, and different design goals. HDMI fits a very different specific need than USB-C does. Interfaces are not just created for fun but to solve real issues.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a 
href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4be2d31c3126" width="1" height="1" alt=""><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126">Computers Are Hard: hardware with Greg Kroah-Hartman</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
           97         </item>
           98         <item>
           99             <title><![CDATA[Computers Are Hard: security and cryptography with Anastasiia Voitova]]></title>
          100             <link>https://medium.com/computers-are-hard/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d?source=rss-d8c08a305574------2</link>
          101             <guid isPermaLink="false">https://medium.com/p/d55ce5c0855d</guid>
          102             <category><![CDATA[technology]]></category>
          103             <category><![CDATA[cybersecurity]]></category>
          104             <category><![CDATA[engineering]]></category>
          105             <category><![CDATA[software-development]]></category>
          106             <category><![CDATA[cryptography]]></category>
          107             <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
          108             <pubDate>Sun, 27 Sep 2020 22:06:10 GMT</pubDate>
          109             <atom:updated>2020-09-27T23:56:45.418Z</atom:updated>
          110             <content:encoded><![CDATA[<p><em>There’s no single action that will allow your application to be secure. It’s a cycle.</em></p><figure><img alt="Illustration showing a locked padlock with source code in the background." src="https://cdn-images-1.medium.com/max/1024/1*Gct5uS0SRNOk04JTiyRh7w.jpeg" /><figcaption>Security and cryptography with Anastasiia Voitova. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>Cybersecurity is a weird beast. It ranges from using complex mathematical functions to encrypt data, to saying things like ‘you shouldn’t write your password on a sticky note’<em> </em>and ‘please, for the love of god, enable two-factor authentication’ over and over. Because no matter how sophisticated the protections, we all have a story about a family member who got phished (in my family, that’s me). Not to even mention regular news about breaches and data leaks from major companies.</p><p>For all the talk about how crucial security is, how vulnerable our networks and websites and even governments are, I have to admit I don’t really know much beyond the basic precautions. Set strong passwords, don’t click on suspicious links, things like that. But how are passwords stored? How does a website know I typed in the right one? If an app encrypts messages I send and can then decrypt them, why can’t a hacker do that? It all feels esoteric. So I asked Anastasiia Voitova, a Security Software Engineer at Cossack Labs whom the Twitterfolk among you might know better as <a href="https://twitter.com/vixentael">vixentael</a>, to shed more light on security engineering and her specialty: applied cryptography.</p><p>Let’s dive into how engineers protect their products and our data from malicious actors. 
But first: please, for the love of god, enable two-factor authentication.</p><h4><strong>Wojtek Borowicz: What’s the most common way for security incidents to happen?</strong></h4><p><strong>vixentael: </strong>If you <a href="https://www.google.com/search?ei=hG73XcibIMP66QTJqZegAw&amp;q=filetype%3Aenv+DB_USERNAME&amp;oq=filetype%3Aenv+DB_USERNAME&amp;gs_l=psy-ab.3...4436.5067..5251...0.2..0.54.107.2......0....1..gws-wiz.......0i71.3RfyQ0FmkbM&amp;ved=0ahUKEwjI7faQjLrmAhVDfZoKHcnUBTQQ4dUDCAs&amp;uact=5">search Google for <em>.env</em> files</a>, you will find a lot — a lot! — of public environment files in plaintext with logins and passwords for databases and internal services. You can use such a password to log in and you basically <em>hacked</em> a company.</p><h4><strong>So the most typical source of data breaches is not some sort of sophisticated attack but putting your password a Google search away from malicious actors?</strong></h4><p>Yeah, exactly. To exploit a security breach, attackers sometimes don’t need specific knowledge or even a lot of time. There are so many low hanging fruits: public access, misconfiguration, or sticking with default credentials like <em>admin/admin</em> or <em>root/root</em>. Recently, I was at a conference and I was showing the organizers why their app wasn’t very secure. It took me a couple of hours to <em>hack</em> the app, access some details about attendees, and point out to the organizers how they can improve. It’s not complicated.</p><p>A company often wouldn’t even know they’ve been hacked until long after the fact. If they have security and anomaly monitoring systems (SIEM), they can notice someone is reading too much data from their database or accessing resources they shouldn’t. Otherwise, they find out from the news. 
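</p><p>A basic mitigation for the exposed-credentials problem described above is to keep secrets out of committed or publicly served files and read them from the process environment, failing loudly when one is missing. A minimal sketch (the variable name and helper are arbitrary):</p>

```python
import os

def require_secret(name):
    """Read a secret from the environment, refusing to fall back to a default.

    Hardcoded fallbacks like 'admin/admin' are exactly the default
    credentials attackers try first.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError("required secret %r is not set" % name)
    return value

# Stands in for a value set by the real deployment environment:
os.environ["DB_PASSWORD"] = "example-only"
password = require_secret("DB_PASSWORD")
```

<p>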
I read that typically, it takes more <a href="https://www.itgovernanceusa.com/blog/how-long-does-it-take-to-detect-a-cyber-attack">than 170 days</a> for companies to realize they’ve been hacked.</p><figure><img alt="Screenshot of Google search results for database credentials." src="https://cdn-images-1.medium.com/max/1024/1*4GOD9dP63ivV5yY_lsjOcw.png" /><figcaption>If you’re not careful, your database password is just a Google search away.</figcaption></figure><h4><strong>Let’s assume we took the basic steps and made sure our password isn’t published anywhere and isn’t just <em>admin1</em>. What other precautions can software engineers take to keep their application secure?</strong></h4><p>In software development in general, we have cycles. First we try to understand the user’s problem, then we create the prototype, then we code it, and we perform user testing. In security it’s the same. There’s no single action that will allow your application to be secure. It’s a cycle.</p><p>First of all, you need to understand what you are trying to protect. In my experience, many companies don’t. In many industries, there are regulations that <a href="https://www.cossacklabs.com/blog/what-we-need-to-encrypt-cheatsheet.html">explicitly say what data to protect</a>. For example in healthcare or in finance. Now there’s also GDPR. But regulations don’t cover everything. Other — typically non-regulated data — might be sensitive for a specific business. Let’s say your company has an app that collects users’ likes. Thanks to that, you show content and ads based on users’ interests. So for you, the data to protect would be those likes and users’ profiles. Technically speaking, it’s not sensitive data because it’s not regulated. But for your business, it’s critical.</p><p>So step number one is to define the data scope. 
And I don’t mean just binary data, but all the assets, access, and infrastructure points… basically anything that will lead to financial or reputation losses if someone gains access to, modifies, or deletes it. Now, even if you understand the data scope, you still most likely can’t protect <em>everything</em>. There’s not enough time, or the budget is too tight, you know, the real world happens. You need to focus and prioritize. Understand losing what data would lead to the most severe consequences. In security, we call that risk management. It’s complicated. I often see software developers putting more effort towards obfuscating the source code, rather than encrypting user data or spending time on proper authentication.</p><h4><strong>Does implementing those methods, like obfuscation and encryption, make it more difficult to build and maintain software?</strong></h4><p>Yes. And this brings us to step number three. When you understand what to protect, you implement ways to do that. That’s what we call security controls or security measures. Usually, you want to have more than one. This is called <a href="https://www.cossacklabs.com/blog/defense-in-depth-with-acra.html"><em>defense in depth</em></a>: when you have multiple layers of security measures to protect the same assets. Unfortunately, there is no finish line here. There’s no sign that says ‘hello, you’ve done everything and are 100% secure’. You can take the basic steps and as a company, you will be fine against most threats. But new vulnerabilities are discovered every day, so you need to be updating these layers of defense.</p><h4><strong>Is it common that your application becomes exposed to a threat because of someone else? Like a vulnerability on the side of a vendor or a library you use?</strong></h4><p>Of course, it happens all the time. There are companies whose main business it is to keep an eye on dependencies. 
They follow the libraries you’re using and alert you when you need to update them.</p><h4><strong>Let’s say I’m running an online store. Which security layers would you recommend I use?</strong></h4><p>First of all, you operate under some regulations. You gather regulated data from customers and you need to protect it. To be compliant with GDPR and still be able to use the data for the purpose of analytics, you might need encryption, anonymization, and pseudonymization.</p><p>Since this is an e-commerce app, you’ll be handling payments. That’s another regulated industry, with PCI DSS and financial regulations. You either implement them yourself and keep an eye on credit card information, or you use a third-party solution. If you do the latter, you need to make sure it’s a trusted vendor. You can’t allow anyone to intercept the data in transit between your application and your vendor’s library.</p><p>Your store also has some inventory. And if you lose the database of your items, you can’t sell them. So you want to back up that database and do that every night. And you need to make sure you’re really backing up the data and are able to retrieve from the backups. Because another typical mistake is creating empty backups. And <a href="https://about.gitlab.com/blog/2017/02/01/gitlab-dot-com-database-incident/">until something happens</a>, no one realizes there was nothing backed up.</p><p>You probably also have different apps for different platforms. Like an iOS app, Android app, a web app, and some backend. You need to protect the infrastructure layer and make sure that data transmitted from the mobile application is transmitted in a secure way to the backend application. To do that, you need to make sure transport encryption is configured properly. Most likely that’s TLS. 
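</p><p>‘Configured properly’ mostly means refusing weak settings. With Python’s standard <em>ssl</em> module, for example, a client context can be pinned to modern protocol versions; a sketch of the idea, not a complete hardening guide:</p>

```python
import ssl

# A client-side TLS context that refuses anything older than TLS 1.2.
# Allowing downgrades to old protocol versions and weak ciphers is the
# kind of misconfiguration that lets attackers intercept a connection.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() also turns on certificate validation
# (check_hostname and CERT_REQUIRED), so the server has to prove
# it is who it claims to be.
```

<p>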
Bonus points if you create an <a href="https://www.cossacklabs.com/blog/end-to-end-encryption-in-bear-app.html">end-to-end encrypted app</a>, but that’s overkill for online stores.</p><p>Then you have authentication, authorization, and access control policies. That leads to a step many companies forget about. Some of your staff has access to user data. Like customer support. It makes sense to keep an eye on staff accounts and monitor their behavior. For example, if someone from tech support is accessing gigabytes of data from the database, that’s most likely a sign of something wrong. It could be a disgruntled employee trying to sell the data. The last thing you want is information leaking from insiders.</p><blockquote>TLS</blockquote><blockquote>Transport Layer Security is a standard internet security protocol. It has three fundamental components: encryption (the data can’t be read during transmission), authentication (the data can’t be exchanged unless both sides of the connection prove they are who they claim to be), and integrity (the data can’t be tampered with). To verify their identity, apps and websites need a TLS certificate (also known as an SSL certificate).</blockquote><h4><strong>You mentioned that when you’re transmitting data, for example when connecting to your payments vendor, you need to watch out for it to not be picked up during transmission. How is it possible for a third party to listen in on the data you’re sending?</strong></h4><p>It’s either through the transmission layer — if you set up TLS but allow downgrading to the old TLS versions (weak ciphers), the attacker can intercept that connection — or it’s through your logs. Many people believe that TLS is enough but unfortunately, TLS is terminated outside of the application’s code. The data that was encrypted during transmission reaches your application and is translated into plain text. Here it can be logged. Logging sensitive data is another typical story. 
It happened with Twitter and Facebook. They realized they were logging plaintext passwords of users. Developers might use application-level encryption to encrypt sensitive data fields before sending them using TLS. This way, the data will be encrypted twice, with different methods.</p><h4><strong>Does the hacker need to be very technically sophisticated to intercept this data in transit?</strong></h4><p>If the data is logged and no one protects the logs, then no. Attackers just need to find these logs. They need to either be lucky or to understand where to look.</p><h4><strong>We touched upon encryption a few times already. Is that the main method of protection in software engineering?</strong></h4><p>If you asked security engineers with a different background, they wouldn’t answer in the same way. But for us, cryptographers, yes: encryption is the main security control to protect the data. That’s because if data is properly encrypted, it can’t leak in plaintext. Instead of monitoring the whole data flow, we can monitor only decryption services. In other words, if we use good encryption and we understand what we’re doing, we make defensive security easier.</p><p>But that’s the tricky part. Doing encryption correctly is quite a sophisticated job. <a href="https://www.cossacklabs.com/themis/">Which libraries to use?</a> Which ciphers? How to store keys? How to revoke them? How to rotate them? If we ask ourselves what’s simpler: to implement encryption in an app and handle all the difficulties of encryption and key management, or not implement it at all but set up a lot of other security measures, the answer will be — <em>it depends</em>.</p><p>If you start from scratch with encryption, then it will be faster and easier. 
But if you already have the application working, especially if it’s a large application, it’s not easy to build encryption into it.</p><h4><strong>Okay, but if I’m a malicious actor who can already access your encrypted data, what’s stopping me from also accessing the keys?</strong></h4><p>Because keys are usually stored separately, in key management systems or HSM (hardware security modules). But storing keys alongside the data is another mistake I’ve seen a lot. In this case, encryption doesn’t make a lot of sense.</p><figure><img alt="Thales hardware security module." src="https://cdn-images-1.medium.com/max/565/1*zb_PIjuSaWS21OACcnSuFQ.png" /><figcaption>Hardware Security Modules can store and generate encryption keys.</figcaption></figure><h4><strong>It’s like locking your safe and leaving the combination on your desk.</strong></h4><p>Or like having your long and secure password written on a piece of paper under your keyboard.</p><h4><strong>But if the keys are securely stored, does it automatically mean your data is safe?</strong></h4><p>It means the attacker would need a lot of time to decrypt it.</p><h4><strong>So they can do it even if they don’t have the keys?</strong></h4><p>It’s just a question of time. It could be days, months, or a hundred years. What we’re trying to achieve with the use of modern ciphers is making decryption with brute force take so long, it would still take ages even if you rented a whole Amazon cluster to do it.</p><p>If you use old ciphers, they can be decrypted in hours or minutes. Same if you use a correct, modern cipher but do it wrong. For example, if the attacker has access to the database, they can send plaintext to your service and see what cipher text is returned. 
Then they can guess the nature of the encryption and try <a href="https://en.wikipedia.org/wiki/Attack_model">many possible attacks</a> — <em>known-plaintext</em> attack, <em>chosen-plaintext</em> attack, <em>side-channel attack</em>, etc.</p><h4><strong>If it takes ages to decrypt data encrypted with modern ciphers, why would anyone keep using old ciphers?</strong></h4><p>Typically, you don’t write the ciphers yourself. You just use a library. Some libraries don’t support new ciphers. But if you’re a developer and you don’t have any cryptographic background, you don’t know which ciphers are modern and good and which are old and bad. It’s a question of expertise. There are still people who use Base64 as encryption.</p><blockquote>BASE64</blockquote><blockquote>Base64 is an encoding algorithm dating back to the 80s. It uses an alphabet of 64 characters, hence the name. It can convert files, such as images, videos, or music, into strings of text. Engineers use Base64 when they need to store files somewhere that doesn’t support non-textual data. Base64 can be easily decoded and is not an encryption method.</blockquote><h4><strong>Can you encrypt any type of data? Is there a difference between encrypting videos, images, and audio recordings?</strong></h4><p>Yeah, there is. There are different kinds of ciphers (block, stream) and different ways to encrypt (e.g. authenticated encryption). Which one to use depends on how much data you have, which hardware you use, and what performance drawbacks you can handle.</p><h4><strong>What about securing data online (e.g. website passwords) versus offline (e.g. password protected files)?</strong></h4><p>There’s no difference on an abstract level. It’s more that they are different types of data. When you protect your password, you don’t encrypt it — you hash it. You use a password hashing function, like scrypt, bcrypt, or PBKDF2, to create a hash from the password. 
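</p><p>That hash-then-compare flow can be sketched with Python’s standard library, which ships scrypt (the cost parameters below are illustrative; tune them for your own hardware):</p>

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with scrypt and a random salt; store both values."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash the same way and compare in constant time.

    Hashing is one-way: the original password is never recovered
    from the stored digest.
    """
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
```

<p>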
When you encrypt a file, you probably just use a block cipher to encrypt the contents of the file. Encryption and hashing are different mathematical functions and you use them for protecting different types of data.</p><h4><strong>What’s the difference between encrypting and hashing?</strong></h4><p>That’s easy. Without digging into details — they are both mathematical functions. Hashing is a one-way function. You have data, you hash it, and <a href="https://twitter.com/cybergibbons/status/1203291585473110016">you can’t get it back to the original state</a>. Encryption is a two-way function. You have data, you encrypt it, then you decrypt it and have it back in plaintext.</p><h4><strong>So how does a website know I entered the correct password if it stores the password hashed and cannot unhash it?</strong></h4><p>When you try to log in, it calculates a hash from your password the same way as when you created the password. Then it compares the new hash with the stored one. But password-based authentication <a href="https://hackernoon.com/how-do-you-authenticate-mate-f2b70904cc3a">is not the only one</a> that exists.</p><h4><strong>Is it possible to make a complex application that would be entirely secure and impenetrable?</strong></h4><p>Nothing is impenetrable. You can protect your application from the most common threats but you cannot protect it against vulnerabilities that will be revealed, say, next month. You can, however, have a security strategy and implement changes continuously. <a href="https://www.cossacklabs.com/blog/how-to-prepare-for-security-incidents.html">It’s like a roadmap</a>. Month by month you add or improve security properties of your application. We can’t tell the future. 
What we can do is make our application good enough against threats we know about right now.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d55ce5c0855d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d">Computers Are Hard: security and cryptography with Anastasiia Voitova</a> was originally published in <a 
href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
          111         </item>
          112         <item>
          113             <title><![CDATA[Computers Are Hard: networking with Rita Kozlov]]></title>
          114             <link>https://medium.com/computers-are-hard/computers-are-hard-networking-with-rita-kozlov-6bf251991083?source=rss-d8c08a305574------2</link>
          115             <guid isPermaLink="false">https://medium.com/p/6bf251991083</guid>
          116             <category><![CDATA[technology]]></category>
          117             <category><![CDATA[internet]]></category>
          118             <category><![CDATA[engineering]]></category>
          119             <category><![CDATA[software-development]]></category>
          120             <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
          121             <pubDate>Sun, 27 Sep 2020 21:40:29 GMT</pubDate>
          122             <atom:updated>2020-09-27T23:57:01.285Z</atom:updated>
          123             <content:encoded><![CDATA[<p><em>The internet is a living thing.</em></p><figure><img alt="Illustration showing an antenna sending signal to a mobile phone." src="https://cdn-images-1.medium.com/max/1024/1*Pl3KeB4iAaFVpbBQwNStsQ.jpeg" /><figcaption>Networking with Rita Kozlov. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>It’s easy to visualize how the internet works. We type an address into the browser and then our computer connects to a server and that server shows us a website. Simple enough. But then you start thinking about what’s really going on and it turns out just the number of acronyms involved is enough to make you dizzy. ISP looks up a DNS and finds an IP. Your computer connects to the IP over HTTP. On its way, it hits a CDN, likely hosted by AWS or GCP.</p><p>What?</p><p>The internet is an enormous ecosystem of technologies and organizations. They make sure electrons zipping through thousands of kilometers of cables laid across the world end up as pixels on your screen, whether you’re fragging enemies in Fortnite or sending holiday pics to friends. Both these scenarios generate traffic handled by multiple companies and communication protocols: some fairly new and others decades old. And as you’re reading these words, engineers at Medium are probably making changes to the website’s code. All the time, the internet is stirring and bursting and bubbling. It never stops.</p><p>To untangle this quagmire, I talked to <a href="https://twitter.com/ritakozlov_">Rita Kozlov</a>. She’s a Product Manager at Cloudflare, a popular provider of web infrastructure. I asked Rita what <em>really</em> happens when you try to open YouTube and it led us all the way down to those pipes laid across the ocean floor.</p><h4><strong>Wojtek Borowicz: I launch the browser, type in an address — say, <em>YouTube.com</em> — and hit enter. 
What happens under the hood before I’m able to watch videos?</strong></h4><p><strong>Rita Kozlov: </strong>First is a DNS lookup. When you go to a website like youtube.com, you need to open communication with a server, which is basically another computer somewhere else in the world. Your computer finds that server by an IP address. But people are bad at remembering numbers and pretty decent at remembering names, so we came up with DNS: <em>Domain Name System</em>. It’s basically a large phonebook that makes the translation from the name to the IP address.</p><p>Once the browser has the IP address, it can open the connection to the server. Before we can start talking, we need to establish a channel of communication. That’s the handshake. It will generally start with something called a <em>client hello</em>, where the computer will initiate the connection. This is also called a <em>SYN</em>, a synchronization request. The server will say ‘I acknowledge you want to connect’<em> </em>and the computer will receive a <em>SYN/ACK</em>. We’ve established a channel of communication.</p><p>Now that we’re talking to each other, the browser will send a request over the HTTP protocol. If it was a phone call, HTTP would be the language being spoken. Then the server will send an HTTP response that the browser will take and render. And that’s how the YouTube web page shows up in your browser.</p><h4><strong>Going back to that first step, you said DNS translates names into IP addresses. Is DNS something that you, the website developer, control? Or is that a global directory?</strong></h4><p>There are two parties here, similar to the client-server situation. There is an authoritative DNS provider and there is a DNS resolver. If I want to publish my website, I’m going to need an authoritative DNS provider. That could be Cloudflare, for example, which is where I work. 
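That phonebook lookup is itself a tiny binary protocol. As a rough illustration (a simplified sketch of the DNS wire format, not any particular resolver’s implementation), here is how a DNS question for example.com could be encoded in Python using only the standard library:

```python
import struct

def encode_qname(name):
    # Each label is prefixed with its length; a zero byte terminates the name.
    out = b"".join(bytes([len(label)]) + label.encode("ascii")
                   for label in name.split("."))
    return out + b"\x00"

def build_query(name, query_id=0x1234):
    # Header: id, flags (0x0100 = "recursion desired"), 1 question,
    # 0 answer / authority / additional records.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: encoded name, type A (IPv4 address), class IN (internet).
    question = encode_qname(name) + struct.pack(">HH", 1, 1)
    return header + question

query = build_query("example.com")
```

A resolver sends bytes like these over UDP to port 53 of a name server and parses the answer records out of the response in the same length-prefixed format.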
The resolver is usually provided through your ISP, or if you’re working at a company, there is often an internal DNS resolver.</p><p>When you type <em>example.com</em> into the browser, DNS does a recursive lookup. So we’re going to start from the very end, which is <em>.com</em>. Generally, the idea of the internet is that it’s not controlled by any single entity. This requires a few organizations, like IANA or ICANN, to run root servers that contain information about the TLD registries. A TLD stands for <em>top-level domain</em>. Examples of that are .com, .org, or .net.</p><p>The recursive resolver will first go to the root and ask ‘hey, do you know where .com is?’<em> </em>and the root server will respond: ‘yes, this server will know all about .com websites’. Then you go to the .com registry and check where <em>example.com</em> is and it will return an IP. Or you ask where is <em>www.example.com</em>, which is called the subdomain, and the .com registry will send you to the authoritative DNS provider for <em>example.com</em> and then you will connect to that name server and ask where <em>www.example.com</em> is. At that point you will get back an IP.</p><h4><strong>All those servers are physical computers. Does their location matter? If I’m connecting to a service based in the UK, is it going to be slower if I’m in Australia compared to Germany?</strong></h4><p>Many users of the internet just imagine the cloud as this thing floating somewhere, but yeah, computers are absolutely physical and cloud providers generally have regions where those servers are. A very random place called Ashburn, Virginia, is an Amazon AWS region otherwise referred to as us-east-1. If you’re in France and accessing a service hosted in the US, say a shopping site with thousands of pictures and JavaScript and CSS assets — for every single one of those assets you will need to travel over wires from France all the way to the east coast of the US and back. 
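The cost of that distance is easy to estimate with back-of-the-envelope numbers (the distances and the ~200,000 km/s signal speed in optical fiber below are rough assumptions, not figures from the interview):

```python
# Light in optical fiber travels at roughly two-thirds the speed of light
# in a vacuum, about 200,000 km per second.
FIBER_KM_PER_S = 200_000

def round_trip_ms(distance_km):
    # One request/response pair has to cover the distance twice.
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# Very rough great-circle distances to a US east coast data center.
print(round_trip_ms(6_200))   # France:    ~62 ms per round trip
print(round_trip_ms(16_000))  # Australia: ~160 ms per round trip
```

Multiply that by dozens of assets fetched one after another and the page feels sluggish, which is exactly the problem moving content closer to the user is meant to attack.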
And if you’re in Australia, literally the internet is slower for users there because everything has to travel such a long way.</p><blockquote>US-EAST-1</blockquote><blockquote>us-east-1 is the codename of the first data center Amazon launched as part of Amazon Web Services. It went online in 2006 and became the cornerstone of AWS, which is now the largest cloud services provider in the world. Whether you’re watching Netflix or signing up for health insurance, it’s all happening on Amazon’s infrastructure.</blockquote><h4><strong>So how fast you can load websites is limited by the laws of physics?</strong></h4><p>You’re literally limited by the speed of light.</p><h4><strong>What can engineers do about that?</strong></h4><p>Until we figure a way to beat the speed of light, the best thing you can do is make sure the assets are as close to the end user as possible. One technology that enables you to do that is called CDN: <em>content delivery network</em>. Without a CDN, if I’m in Australia and I visit your website, I’m gonna travel all the way to the US for every single asset. And if another person then visits that website, they will have to make those journeys all over again. A CDN basically places servers at many different locations around the world, so when you send your request, it can leverage those locations for caching, which is the ability to make a local copy of an asset. For static content, which doesn’t change between users, once I’ve gone to a shopping site and downloaded all those images, the next user in Australia can be served those images from a data center much closer to them.</p><figure><img alt="Map of submarine cables." src="https://cdn-images-1.medium.com/max/1024/1*Ng9jvwd44ey-NL_BcYVcOg.png" /><figcaption>Map of submarine cables. Source: <a href="https://www.submarinecablemap.com/">submarinecablemap.com</a>.</figcaption></figure><h4><strong>What about scale? 
Is a website going to run slower if a million people are using it at the same time versus a few thousand?</strong></h4><p>Yes. Servers, like anything else, have limited capacity. You can get too much traffic, whether it’s intentional (your company is growing) or in the form of an attack called <em>Distributed-Denial-of-Service</em> or DDoS, where an attacker would send so much traffic to your server that it can’t handle the legitimate requests.</p><p>A CDN can definitely help you scale by caching things so that not everything is hitting your server. Another way to scale is by adding more servers and load balancing between them, which is determining how much traffic goes to each one.</p><h4><strong>So when you want to scale your infrastructure, you just add more servers?</strong></h4><p>It depends on how you chose to architect and build your application in the first place. If you went and bought a server, you’re probably going to need to go and buy another server and buy a box that balances the load between them.</p><p>Luckily, there’s been the emergence of the cloud and the idea is that you, as the developer, won’t need to worry about buying the boxes anymore. A provider like AWS will be able to purchase those on your behalf and you can specify how much load you’re expecting. Typically, you would buy something called a VPS or <em>virtual private server</em>. AWS would spin up a server for you, and no matter how much traffic you get, even if it’s very little, it’s on standby for you.</p><p>The next iteration of that is called serverless, where you should only worry about the code you’re writing and your cloud provider will take care of scaling for you. And you only pay for as much as you’re using, rather than having this thing standing by the whole time even if it’s not getting traffic. 
Or inversely, when you get a ton of traffic, you shouldn’t have to do anything because the cloud provider will figure out a way to scale.</p><h4><strong>Does the type of service you build determine the infrastructure? How would this differ for a game store versus a video calling app?</strong></h4><p>Definitely the type of thing you’re building would influence the type of infrastructure you end up using. For a game store, the primary functions you need are the ability to serve large files and the user database. And you need to be able to authenticate these users and authorize them based on whether or not they bought the game. These are all very well served by the HTTP protocol and can be easily built with a simple storage solution. You can have Amazon S3 for storing the games and then you can have a serverless function that does the authentication and maintains the user database.</p><p>For a video conferencing app, you would definitely want to consider a different stack and you would want to think about how to connect users from a point that’s the nearest to both of them, so they are able to communicate with as little delay as possible. HTTP might not be the best choice. At a lower level, HTTP is based on something called TCP and the idea is that you have a consistent connection you’re communicating over. This is built for services that require all of the data to be always accounted for and transferred. If I buy a game and I have a blip in the connection, I still need to be able to get that game later. When I’m calling you and I had a blip in the connection, it would be very disruptive if you suddenly heard words I said five minutes ago. So there is a different protocol, called UDP, which is better optimized for performance but not so much for consistency.</p><blockquote>UDP</blockquote><blockquote>User Datagram Protocol is one of the standard communication protocols used for data transmission over the internet. 
It’s prone to dropping data packets but it’s also very fast, making it a common choice for streaming or online gaming.</blockquote><h4><strong>A fast and stable internet connection is not the default for millions of people around the world, especially in developing countries. When building a service, do you take into account that some of your users will be on spotty connections or will be using cellular networks instead of Wi-Fi?</strong></h4><p>Around 50 or 60% of internet connections today are coming from mobile devices and as places like India, China and sub-Saharan Africa become more connected, you want to think about your users there. Generally, when countries connect to the internet, the beginning of that journey will be on pretty cheap mobile devices that don’t have the latest hardware and that might be running relatively old software that can be easily overwhelmed.</p><p>As a developer, especially when you’re building something intended for an international audience, you will want first of all to consider how much logic you’re cramming into the client side. On one hand, you want quite a bit of logic to live on the server side in case the device is not able to handle all of the computation that you’re trying to ship to it. On the other hand, you try to place as much of the logic as possible with the end user, so their devices can get connected to the server quickly.</p><h4><strong>Is that difficult to test for when your team is based in a modern office in San Francisco, with everyone using brand-new MacBooks connected to a superfast network?</strong></h4><p>There are lots of tools that can help you test for things like that. One of my favorites is Google Lighthouse, which is available with any Chrome browser right now. If you’re a developer, especially a front-end developer, you’re probably already familiar with Chrome dev tools but there is a new tab now called Audits. Under those Audits, you can select what kind of device you want to run tests for. 
You can choose whether it’s mobile or desktop, you can test for performance, you can choose artificial network throttling to help you understand the experience of someone on a slower internet connection. But it can also help you improve your business with testing for things like SEO and accessibility.</p><h4><strong>All these elements we’ve talked about — DNS, CDN, hosting providers — create a complicated map of middlemen between the user and the service. How much of this is under the engineer’s control? If your CDN fails, is there anything you can do?</strong></h4><p>There are a few approaches you can take pre-emptively. One that’s becoming more prominent is the idea of multi-cloud. You know, AWS has failures and GCP has failures too, so if you have a layer in front of them that balances the load or you set up health checks, then you can make sure your service stays online. You can also have primary and secondary DNS providers. We encourage people to have a very robust infrastructure. At Cloudflare, we have many data centers around the world, so even if one of them goes down you will get routed to the next one and you’re up and available at all times. But ultimately it’s just a bunch of wires hacked together. People are writing code that has bugs, as anyone’s code does. Sometimes there is not much you can do about outages.</p><h4><strong>We opened with a hypothetical so let’s close with one, too. Why can the same app, used from the same network and the same device, feel slower on one day and faster on another?</strong></h4><p>The internet is a living thing. It can be something as basic as a snowstorm that knocked over some of the wires in your connection. Or someone at the ISP fat-fingered something and now the connection isn’t going through. Another possibility: if there is an emergency happening and everyone is getting online at the same time, the internet routes get congested, which causes slowness. 
Just like water pipes, the internet pipes can get clogged up too.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6bf251991083" width="1" height="1" alt=""><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-networking-with-rita-kozlov-6bf251991083">Computers Are Hard: networking with Rita Kozlov</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> 
on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
          124         </item>
          125         <item>
          126             <title><![CDATA[Computers Are Hard: bugs and incidents with Charity Majors]]></title>
          127             <link>https://medium.com/computers-are-hard/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8?source=rss-d8c08a305574------2</link>
          128             <guid isPermaLink="false">https://medium.com/p/252813ae9ce8</guid>
          129             <category><![CDATA[debugging]]></category>
          130             <category><![CDATA[technology]]></category>
          131             <category><![CDATA[software-development]]></category>
          132             <category><![CDATA[engineering]]></category>
          133             <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
          134             <pubDate>Sun, 27 Sep 2020 21:18:52 GMT</pubDate>
          135             <atom:updated>2020-09-30T15:40:57.931Z</atom:updated>
          136             <content:encoded><![CDATA[<h4><em>Everything’s failing all the time so we’re gonna embrace that and lean into it instead of being afraid.</em></h4><figure><img alt="Illustration showing a laptop in flames." src="https://cdn-images-1.medium.com/max/1024/1*VsnGNg5CYsqqicyeUea3Og.jpeg" /><figcaption>Bugs and incidents with Charity Majors. Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>One of the more memorable moments of working at an enterprise software company was seeing a very senior engineering manager stumble between desks, laptop in one hand and phone in the other, screaming: ‘Fuck, fuck, fuck! We’re going down!’.<em> </em>She then barged into a conference room that is permanently booked for the incident response team and started paging people in an office halfway across the globe. It was still the middle of the night for them, but when you’re on call, you’re on call.</p><p>The first minutes of an emerging outage are frantic. Alerts start pinging, the number of support tickets goes through the roof, engineers and customer service scramble to assemble the response team. But then it settles down. Tech companies have runbooks written to make sure a service failure is addressed quickly and efficiently. In action, the process looks like a NASA launch center. A bunch of people are staring at screens and responding to what they see in perfect synchronization, with everybody knowing exactly what to do.</p><p>But it begs the question: why did tech companies need to operationalize every step of incident response? There is no other industry so accustomed to its products breaking that it’s considered part of the daily routine. Think about it. If one day every Prius in the world refused to start for a couple of hours, Toyota would face international scrutiny and would be forced to recall hundreds of thousands of cars. 
If an app isn’t working, you google if it’s down for everyone or just for you and come back later.</p><p>What makes software so special that we have to live with the continuous fixing of bugs and the emergence of new ones? To find out, I reached out to <a href="https://charity.wtf/">Charity Majors</a>, co-founder and CTO of Honeycomb, a company building tools for developers to debug and better understand systems they work with. We talked about how and why software breaks, what engineers do when that happens, and about Charity’s own adventures in debugging routers in Romania.</p><h4><strong>Wojtek Borowicz: Say you run an online service, be it a store, or an app, or something else. And then one day it’s down. How can an outage happen? What are some of the causes?</strong></h4><p><strong>Charity Majors: </strong>We’re shipping software all the time. There’s far more to do, and fix, and build than we have time for in our entire lives. So we’re constantly pushing changes and every time we change something, we introduce risk to the system. There are edge cases, things you didn’t test for, and then there are subtler things. Not directly something you changed, but the interaction of two, or three, or four, or five, or more systems. This is really hard to anticipate and test for in advance. That’s why we’re increasingly thinking less about how do we prevent failure and more about how we can build our systems so that lots of things have to fail before the users ever notice.</p><p>It’s mind-boggling to sit back and actually think about how complex these systems are. What amazes me is not that things fail but that, more or less, things work.</p><h4><strong>What does it mean that we’re constantly introducing changes? Do developers deploy new code once a week? Daily? Every hour?</strong></h4><p>It depends on the system. 
But your system is built on top of another system, built on top of another system… so it could be someone introducing a change you have no visibility into and no control over. Timing doesn’t matter. You should just assume that changes are happening literally all the time. That’s the only way to plan for risk.</p><h4><strong>So it’s entirely plausible that your service went down even though you didn’t make a change yourself?</strong></h4><p>Oh yeah! And changes are not just code. They can be other components, too. Like if a piece of hardware fails. Or there’s a storm on the East Coast. Or you need to add more capacity to handle increased load.</p><h4><strong>On the user’s end, most of those failures look the same though. Twitter’s <em>Fail Whale</em> is perhaps the most popular example…</strong></h4><p>These are the easy ones! The ones that tell you: <em>hey, I failed </em>— those are the lucky ones. Most failures are not that graceful. Most failures don’t give you a Fail Whale. Suddenly, things are just not working the way you expect them to. And sometimes you will never figure it out or even notice it.</p><figure><img alt="Fail Whale: a drawing of a whale, captioned ‘Twitter is over capacity’." src="https://cdn-images-1.medium.com/max/507/1*sXbNtGCCM9_jOpe7GWZEYw.png" /><figcaption>Twitter used to go down so much, its outages had their own mascot.</figcaption></figure><h4><strong>When the system is down, how do you recognize which component failed?</strong></h4><p>This speaks to exactly what I’m doing with my life right now. Because we’re in the middle of this great shift, from the monolith to microservices or from one to many. The old world was one where you had <em>The App</em>. You deployed The App and you had The Database. You could kind of fit the whole system in your head and visualize it. You could see where requests were going and you could reason about it. And we monitored those systems and found thresholds. 
As long as some metric was between this and that, we’d call it good.</p><p>That whole model is starting to completely disintegrate. Now, instead of The App and The Database you have tens, dozens, hundreds. You’re depending on all those loosely coupled, far-flung services that aren’t even yours, yet you’re still responsible for your availability. So increasingly instead of monitoring the problems, you really just need to focus on building visibility, so that while you’re shipping code you can look through the lens of your instrumentation and see: am I shipping what I think I am? Did I build what I wanted to? Is it behaving the way I expected?</p><p>If engineers just developed the muscle memory of pushing to master, looking at it as it was being deployed, and asking themselves those questions, 80–90% of problems would be caught before users notice. But it’s scary to people because it’s very open-ended and exploratory. There are no answers and no dashboards that say: <em>there’s the problem</em>. The problem is usually like half a dozen impossible things combined.</p><h4><strong>Does moving to this model of tens or hundreds entangled services also make it tens to hundreds times more difficult for engineers to diagnose problems?</strong></h4><p>More than that. It’s exponentially harder. The hardest problem is not fixing the bug, it’s finding out where in the system is the piece of code that you need to debug… well, I should backtrack. It’s very hard when you’re using the tools we’ve had for the past two decades. It is not harder when you get used to it. But there’s a learning curve. It’s a shift from the mindset of control to the mindset of: everything’s failing all the time so we’re gonna embrace that and lean into it instead of being afraid. We’re gonna get our hands wet every single day, looking at prod, interacting with prod, and not gonna be scared.</p><p>I come from Ops and Ops are notorious for telling developers to get out of our way. 
Stay out of production, we don’t trust you, it’s scary here. And that’s a huge mistake that we’re just beginning to make up for. Instead of building this terrifying glass castle, we should have built a playground with bumpers and safety guards. Your kid should be able to run around in prod, get a bloody nose, eat a lot of dirt, but not kill themselves. It shouldn’t be scary. Engineers should grow up learning how to conduct themselves in production.</p><h4><strong>Is it fair to say then that software has become so complex that you cannot build it in such a way to prevent failure?</strong></h4><p>You should build it with the assumption that it’s failing all the time and that’s mostly fine. Instead of getting too hung up on failures, we need to define SLOs — Service Level Objectives. It’s like a contract everyone in the organization makes with our users. We’re saying that this is the level of service that’s acceptable and you’re paying us to provide it. So like, 0.5% failure rate or whatever. Anything better than that we don’t have to obsess about. We can go and build product features until that threshold starts to be threatened in which case it’s all hands on deck. This is really liberating. It’s a number we’ve all agreed on, so it has potential to ease a lot of frustrations that many teams have had for years and years.</p><h4><strong>Let’s go back to outages. Now that we identified one, how do we go about fixing it?</strong></h4><p>Well, step one is, figure what the outage actually <em>is</em>. When it started, what is the scope, who it impacts, any dependencies… this is harder than you might think. Fixing the problem is usually trivial compared to figuring out precisely what is happening. Because of this, we have a tendency to jump straight into fix mode and just start blindly doing things that we have seen fix prior outages. This is terrible! 
You can easily work yourself into a much worse state than you were to begin with.</p><p>So the first step to fixing it is truly understanding it, and making sure that you understand it, and communicating it to other stakeholders in case there’s something they know that you don’t know. How you fix it depends entirely on what <em>it</em> is.</p><h4><strong>Is it common in tech companies that when a service is down, people who respond to the outage are not the people who built the element that failed?</strong></h4><p>It is common but I think it’s changing. It’s hard to build on-call rotations and feedback loops that are tight and virtuous. One thing I like to do with my teams is anytime we get an alert for a deploy that’s just gone out, we page the person who merged the diff. It’s super simple and 95% of the time it is that person’s responsibility. And they want to know when it’s still fresh in their head. They don’t want to find out five hours later when it’s gone through all the escalation points. Everyone wins! But it’s hard to do and it’s not something anyone gets right on the first try. Microservices help with this because in theory you’ll only get alerts for the stuff that you own.</p><p>Now, there’s always going to be a tier of system specialists who literally specialize in the system as a whole — often called Site Reliability Engineers (SRE). But they don’t want to be the ones who get all the pages either. They want to be escalated to when it’s clear that the problem is bigger than any one component.</p><p>This gets to the bottom of something super important: the idea of ownership. We don’t just write code and fling it over a wall. We own it. Some people think of it as scary, as something that won’t let them sleep at night. It’s not. Ownership means you care. You deeply care about the quality of your work — it’s your craft, right? You want to build something well and you want your users to be happy. 
We all have that desire but it has been squeezed out of many of us by shitty on-call rotations and frustrating times when you’re responsible for something but don’t have the tools or the authority to make the change that needs to be made. That’s just a recipe for frustration.</p><h4><strong>Have you ever been in a situation when you were on call or responsible for a problem and you looked into it and had no idea what’s broken?</strong></h4><p>Of course.</p><figure><img alt="Animated GIF of the scene from IT Crowd with Moss working during a fire." src="https://cdn-images-1.medium.com/max/500/1*Kz2MjtcytYWdy9bjkV2SpA.gif" /><figcaption>You didn’t think I was going to <em>not use this GIF, did you?</em></figcaption></figure><h4><strong>What’s your next step when this happens?</strong></h4><p>You start digging. Using my own tooling, Honeycomb, the right thing to do is generally start at the edge and start bisecting. Start following the trail of breadcrumbs until you find something. This is hard to explain to people who are used to the dashboard style of debugging, where you kind of have to form a hypothesis in your mind and then go flipping through dashboards to verify it. That’s kind of how debugging has worked… but that’s not debugging. That’s not science. That’s magic, gut intuition, and pattern matching.</p><p>It is very different when you have instrumentation for your entire stack and you just start at the edge, start slicing and dicing… for example, instead of going ‘I see a spike in errors. It smells like the time Memcached was out. I’m gonna look at some Memcached dashboards’, you would be like: ‘There’s a spike. Let’s slice by errors. And now slice by endpoints. Which endpoints are erroring? Looks like it’s the ones that write to databases. Are all of the write endpoints erroring? No, only some of them. What do they have in common?’</p><p>On every step of the way, you examine the result and take another small step. 
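That slicing workflow doesn’t require any particular tool; over plain event records it is just repeated grouping and filtering. A toy sketch (the events and field names here are made up for illustration):

```python
from collections import Counter

# Hypothetical request events, one dict per request.
events = [
    {"endpoint": "/checkout", "writes_db": True,  "error": True},
    {"endpoint": "/checkout", "writes_db": True,  "error": True},
    {"endpoint": "/profile",  "writes_db": True,  "error": False},
    {"endpoint": "/search",   "writes_db": False, "error": False},
]

# Slice by errors, then by endpoint: which endpoints are erroring?
errors_by_endpoint = Counter(e["endpoint"] for e in events if e["error"])

# Next small step: do the erroring endpoints have something in common?
erroring = {e["endpoint"] for e in events if e["error"]}
all_write_to_db = all(e["writes_db"] for e in events if e["endpoint"] in erroring)

print(errors_by_endpoint)   # which endpoints error, and how often
print(all_write_to_db)      # do they share the database-write property?
```

Each result narrows the next question, which is the point: every step is driven by the data you just saw, not by a hunch about what broke last time.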
And there are no leaps of faith there — you just follow the data.</p><blockquote>MEMCACHED</blockquote><blockquote>An open-source technology for caching data. It allows web applications to take memory from parts of the system with plenty available and make it accessible to parts that are short on memory. This way engineers can use the system’s resources more efficiently.</blockquote><h4><strong>From what you’re saying it sounds like this is a novel approach. Would you say that most companies still base their incident response on gut reactions and pattern matching?</strong></h4><p>Yes, absolutely. These are the dark old days and I’m trying to get people to see it can be so much better.</p><h4><strong>Is there such a thing as a completely unpredictable outage?</strong></h4><p>Yeah, absolutely. Let me give you an example. When I was working at Parse, one day the support team told us push notifications were down. I was like: ‘push notifications are definitely not down’. They were in the queue and I was receiving push notifications, so they couldn’t be down. Two or three days passed and the support team is back telling us people are really upset because pushes are down. I went to look into it. Android devices used to have to hold a socket open to the server to subscribe to pushes. We made a change that caused the response to exceed the UDP packet size. Which is fine. Usually, the DNS would just fall back to TCP. And it did: for everyone except one router in eastern Romania.</p><p>You can’t predict this stuff. You shouldn’t even try. You should just have the instruments and be good at debugging your system. That’s all you can do.</p><h4><strong>How much of a factor in bugs and outages is human error? How much responsibility can you assign to one person or one team?</strong></h4><p>I really don’t like the phrase <em>human error</em>. It’s never a single thing. People who do this for a living, resilience engineers, always stress that there are many contributing factors. 
Even if a human was the last link in a chain, that is still a long chain that led them to think something was the right thing to do. No one is maliciously doing it. Everyone’s doing their best. And when you try to pinpoint humans as the source of the problem, people just freeze. They start pointing fingers and stop being willing to share what they know. Then you’re not gonna make any progress whatsoever. People have to feel emotionally safe, they have to feel supported, they have to know they’re not gonna get fired because we’re all in this together.</p><p>I like to think of computers as socio-technical problems. It’s not just social, it’s not just technical. You can rarely solve a problem just by looking at the tools and you can rarely solve a problem just by looking at humans. They need to work in concert with each other.</p><h4><strong>Why do some outages take much longer to fix than others? What has to happen for an incident to be so catastrophic that a service stays down for days?</strong></h4><p>Usually it comes down to data. Data has gravity and mass — that’s how I like to think of it. Everything gets scarier, and longer, and more permanent the closer you get to disk. You never want to be in a situation where you only have one copy of the data because you could go out of business in the blink of an eye.</p><p>Here’s a thing that happened to me at Parse. We had a bunch of databases with multiple copies of the data. Sometimes, all of the replicas would die except the primary. I could not turn back on access until I copied the entire replica set. That could take a very long time. Other times it wasn’t even a question of best practices and being safe. It could be a case where the database won’t start back up until it performs a consistency check or until we copy over the only remaining copy from the tape archive. 
There are all sorts of things that can happen when you’re dealing with data so it can take a lot of time.</p><blockquote>TAPE DRIVES</blockquote><blockquote>Storage devices that store data on magnetic tapes. They’ve been around for decades but have been pushed out of personal computing by technologies that allow much faster access, like HDD or SSD. Many companies, however, still back up their data to tape drives. Tapes are secure and extremely durable: they can go decades without maintenance and remain functional.</blockquote><h4><strong>So when you’re experiencing a particularly nasty outage as a user, it’s due to how fast data can be backed up and not because engineers on the other end are not typing code fast enough.</strong></h4><p>I guarantee you they’re working as fast as they can. There’s an amount of time it takes you to figure out what the problem is. And then there’s the amount of time it takes you to recover. There’s also a lot of stuff like, maybe it’s not down for everyone but this particular shard is gone for days because a backup broke down? Or maybe they’re writing tools to help recover for as many people as possible?</p><p>It’s kind of ironic that in our quest to keep everything resilient and redundant and up 100% of the time, we’ve sliced and diced and spread everything around so much that now there are a hundred points of failure instead of just one.</p><h4><strong>Some companies suffer from outages more often than others even though, at least on paper, Silicon Valley attracts the best engineering talent in the world. Why?</strong></h4><p>First of all, I’m gonna push back on the idea that Silicon Valley companies have the best engineers in the world. They don’t. Maybe they do for a very narrow definition of <em>better</em>. But, you know, I’ve been in those hiring meetings and people will straight up admit there is no correlation between the questions that they ask, how well the interviewees do on them, and how good of an engineer they are. 
Some of the best engineers I know are not in Silicon Valley. Some of the worst I’ve met are here. It’s definitely a magnet but I hate that idea that the best engineers are here. It’s not true and it’s harmful.</p><p>When we were hiring for Honeycomb, I could have just gone out and hired all of the most senior, awesome engineers I’ve worked with at Parse and Facebook. I didn’t do that because I knew we were building a tool for everyone and I wanted to have diverse backgrounds. And I’m gonna admit something a little bit embarrassing. For a while, I thought it was too bad my team wouldn’t have the experience of working with those excellent engineers I worked with. But here’s the thing — this team kicks the ass out of any other team I have worked with. They ship more consistently, they ship better quality code, and I’ve had to reckon with my own snobbery and bias. I no longer think that <em>best</em> engineers make for the best teams. They don’t. The best teams are made of people who feel safe with each other, who communicate, who care passionately about what they’re doing, and can learn from their mistakes. The whole best engineers thing is total bullshit.</p><p>Now, back to your question. Why can’t Silicon Valley get it right? Well, we’re solving new problems in Silicon Valley. Like problems of scale. Google’s solutions don’t work for anyone but Google. It’s a hard set of problems and it’s hardest the first time. 
After it’s been solved once, or twice, or three times, we can learn from each other and it gets a lot easier.</p><h3>Computers Are Hard</h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-ed82bccc5c87">Introduction</a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8">Computers Are Hard: bugs and incidents with Charity Majors</a> was originally published in <a 
href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
          137         </item>
          138         <item>
          139             <title><![CDATA[Computers Are Hard]]></title>
          140             <link>https://medium.com/computers-are-hard/computers-are-hard-ed82bccc5c87?source=rss-d8c08a305574------2</link>
          141             <guid isPermaLink="false">https://medium.com/p/ed82bccc5c87</guid>
          142             <category><![CDATA[technology]]></category>
          143             <category><![CDATA[engineering]]></category>
          144             <category><![CDATA[software-development]]></category>
          145             <category><![CDATA[interview]]></category>
          146             <dc:creator><![CDATA[Wojtek Borowicz]]></dc:creator>
          147             <pubDate>Sun, 27 Sep 2020 18:08:46 GMT</pubDate>
          148             <atom:updated>2020-09-29T16:36:54.747Z</atom:updated>
          149             <content:encoded><![CDATA[<figure><img alt="Illustration showing hammers smashing a computer screen with the shrug icon on display." src="https://cdn-images-1.medium.com/max/1024/1*Cqh8mLvVNp5W6lf_fC6_HA.jpeg" /><figcaption>Illustration by <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a>.</figcaption></figure><p>Everyone has heard that the most terrifying words in the world are supposed to be ‘I’m from the government and I’m here to help’. But I’ve been doing tech support for several years now and I can tell you there’s nothing more dreadful than when a support ticket starts with ‘I’m a software engineer myself and…’</p><p>From there, it can go one of two ways. Either the customer is going to prove patient and understanding, or they will ignore everything I say and proceed to pronounce me and the entire company I work for a bunch of lazy morons who don’t know what they’re doing.</p><p>I get the customers’ frustration. Software can be maddening even when it works as expected and much more so when it doesn’t. But as much as some days it feels like apps exist just to spite us, they don’t. Software users, even savvy ones, often lack understanding of how complex things can be under a sleek interface. It takes you a second to click ‘Log in’ on a website, but that second triggers a cascade of processes. A signal from the mouse to the computer, an HTTP request, an authentication handshake, fetching assets from the CDN. And everything is in continuous deployment and connected to systems that are also in continuous deployment. Then on top of that there are humans making decisions about the way to put these pieces together. 
And those decisions can take weeks of deliberations because when you pull what looks like a loose thread on a simple feature, layers upon layers of intertwined systems start unraveling.</p><p>Or, as software engineer Steve Klabnik <a href="https://twitter.com/steveklabnik/status/1248315210395463680">put it</a>, ‘motherfucker, some of us ran fucking in-person mediation to try and resolve the intense disputes around this decision’.</p><p>The sentiment of people mad about something they don’t fully understand but perceive as simple is all too common. I catch myself doing that more often than I’d like to admit. And I wish to change this. That’s why I set out to interview a group of software engineers, each specializing in a different aspect of the field. The goal was to explore and explain why software is more complicated than it looks even to a trained eye. Some of those reasons are technical, while others stem from human fallibility and how the discipline of software development bears the consequences of these failings. Each conversation started from the same premise. I asked my interlocutors questions that were simple to an expert, but with one caveat:</p><p>‘Explain it to me <strong>not</strong> like I was five, but like I was a reasonably smart 30-year-old who doesn’t know anything about what you do’.</p><p>Welcome to Computers Are Hard.</p><p>Below you will find eight interviews. In the first one we dive into the thorny questions around why software breaks and how engineers approach fixing bugs and outages. In the three conversations that follow, we dive deeper into the guts of apps and websites to investigate different components that make them work (or not): networking, hardware integration, and security. From there we move on to talk about how engineers work on ways we perceive and interact with software. We discuss speed and performance, accessibility, and representing different alphabets and languages on screen. 
Finally, we go all the way back to how software is made and why the mainstream approach to software engineering might not be the best there is.</p><p>Thank you for tagging along. I hope you enjoy the journey. I’d also like to thank the eight people who agreed to be interviewed for Computers Are Hard, <a href="https://www.instagram.com/gabikrakowska/">Gabi Krakowska</a> for drawing the illustrations, Rafał and Pikor for their advice when I was starting this project, as well as Karen for proofreading.</p><h3><strong>Computers Are Hard</strong></h3><ol><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-bugs-and-incidents-with-charity-majors-252813ae9ce8"><em>Bugs and incidents with Charity Majors</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-networking-with-rita-kozlov-6bf251991083"><em>Networking with Rita Kozlov</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-hardware-with-greg-kroah-hartman-4be2d31c3126"><em>Hardware with Greg Kroah-Hartman</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-security-and-cryptography-with-anastasiia-voitova-d55ce5c0855d"><em>Security and cryptography with Anastasiia Voitova</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-app-performance-with-jeff-fritz-94aaaa5267b1"><em>App performance with Jeff Fritz</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-accessibility-with-sina-bahram-a3ce25b1f7b7"><em>Accessibility with Sina Bahram</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-representing-alphabets-with-bianca-berning-bc8c9a498343"><em>Representing alphabets with Bianca Berning</em></a></li><li><a href="https://medium.com/@wojtekborowicz/computers-are-hard-building-software-with-david-heinemeier-hansson-c9025cdf225e"><em>Building software with David Heinemeier Hansson</em></a></li></ol><p>If you like this series, check out my other 
interviews:</p><ul><li><a href="https://medium.com/inner-worlds/inner-worlds-building-a-universe-in-11-conversations-b3949109c5c2"><strong>Inner Worlds</strong></a>, about alternate universes.</li><li><a href="https://medium.com/does-work-work/intro-b656a19baac5"><strong>Does Work Work</strong></a>, about the place of work in society.</li></ul><p><strong>P.S.</strong> Software permeates our lives, politics, and economy. And it’s not neutral. Spread of misinformation, demolition of privacy, upheaval in labor and housing markets: these are all consequences of how software is made, by whom it’s made, and what incentives and values drive people to make it. These are important conversations, but they often lack nuance. Discourse around the problems plaguing the tech industry — and problems the tech industry brought upon us — doesn’t distinguish between malice, incompetence, and lack of foresight. And we can’t engage in those conversations productively if we don’t understand what we’re talking about. I purposefully focused Computers Are Hard on the technical aspects of building software and I hope it will make understanding this weird, complex world a little easier.</p><hr><p><a href="https://medium.com/computers-are-hard/computers-are-hard-ed82bccc5c87">Computers Are Hard</a> was originally published in <a href="https://medium.com/computers-are-hard">Computers Are Hard</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
          150         </item>
          151     </channel>
          152