[HN Gopher] Understanding how bureaucracy develops
       ___________________________________________________________________
        
       Understanding how bureaucracy develops
        
       Author : dhruvmethi
       Score  : 70 points
       Date   : 2024-10-17 14:32 UTC (2 days ago)
        
 (HTM) web link (dhruvmethi.substack.com)
 (TXT) w3m dump (dhruvmethi.substack.com)
        
       | robwwilliams wrote:
       | Greatly enjoyed this commentary. The example of IRB approvals for
       | biomedical research is unfortunately on the mark.
       | 
       | How much of a full time researcher's time is bureaucratic
       | overhead? In my case no more than 10% but it feels like 30%.
        
         | dhruvmethi wrote:
         | Thank you for reading!
         | 
         | Even if it takes up only a small percentage of time, it
         | probably takes up the majority of frustration (that's at least
         | true for me)
        
       | nine_zeros wrote:
       | This is a very well written article. And I firmly agree with this
       | from first-hand experience.
       | 
        | Organizational malleability is key. But it wouldn't work in a
        | FAANG-style standardized performance review regime.
        
         | dhruvmethi wrote:
         | Agreed - as organizations scale, it's like some kind of
         | fundamental law of thermodynamics that says they must become
         | more bureaucratic in order to remain competitive. I think it's
         | because organizations can only work at scale if they minimize
         | the variance of each individual business unit, and malleability
         | threatens that. I still think that good enough leadership and
         | communication should allow for malleable units to coexist well
         | together, but that may be a naive ideal.
        
           | marcosdumay wrote:
           | > I think it's because organizations can only work at scale
           | if they minimize the variance of each individual business
           | unit, and malleability threatens that.
           | 
           | It's because of the principal agent problem.
           | 
           | As organizations grow, people inside it become less and less
           | oriented towards the organizational goal. The rigidity
           | appears to fight that.
        
             | toomuchtodo wrote:
             | Very insightful. When an organization is small, the
             | individuals protect the org, and are incentivized to. The
             | org cannot survive without strong alignment between
             | individuals. At some point, when sufficient scale has been
             | achieved, the org crystallizes to protect itself from
             | certain actors that prioritize the individual over the org.
             | The rigidity is a defense mechanism, an immune system of
             | sorts.
        
             | nyrikki wrote:
             | There is a school of thought about management by intent
             | that tries to address this, following the ideas born out of
             | the Prussian army in the early 1800s.
             | 
             | But many of our current problems are more directly related
             | to Taylorism and an intentional separation of design from
             | production.
             | 
              | GM's failure to learn from Toyota at the NUMMI plant is a
              | well-documented modern example; Japan intentionally
              | targeting the limits of Taylorism to basically destroy
              | the US industrial market is another.
             | 
              | The centralized command and control model also ignored
              | finite cognitive ability and assumed rational,
              | omniscient actors.
             | 
             | The funny thing is that multiple Nobel prizes have been
             | awarded related to some of the above, and we know that
             | Taylor faked some of his research and most business tasks
             | are not equivalent to loading pig iron onto train cars
             | anyway.
             | 
              | Even TOGAF and ITIL recently made changes after the Feds
              | updated the Clinger-Cohen Act, moving away from this
              | model, and every modern military uses mission command
              | rather than C2, but management education still teaches
              | the pseudo-scientific management school of thought and
              | does not address the needs of modern situations.
             | 
             | The incentive models are largely a reason for this and
             | recent developments like 'impact' scores push things back
             | even more.
             | 
              | You can still have a principal-agent relationship yet
              | delegate, focus on intent, and avoid this trap, but it
              | requires trust and bidirectional communication.
             | 
              | Really, IMHO, it comes down to plans feeling safe, being
              | high effort, and being compatible with incentives.
             | 
              | Those plans never survive the real world, because of the
              | actors' bounded rationality and knowledge.
             | 
              | A book that is potentially a useful lens into this is
              | 'The Art of Action', but it is just a lens.
             | 
             | Organization 'goals' are often poorly communicated and
             | useless because 'planning' is insufficient and not durable
             | enough.
             | 
              | When we are way past any horizon that can be planned
              | for, actionable concepts of shared intention and purpose
              | are still not communicated.
             | 
             | Toyota gave teams concrete goals to obtain, allowed them to
             | self organize and choose how to deliver.
             | 
             | GM meticulously copied what those teams had done and forced
             | Detroit teams to follow those procedures and it failed.
             | 
              | It was empowering the teams, which understood the
              | context and constraints of their bounded problems, that
              | worked, not the procedures themselves.
             | 
             | Amazon's API mandate resulted in a product mindset and
             | scaled better than almost everyone until culture erosion
             | killed that.
             | 
             | Delegating works, but centralized process needs to be
             | restricted to the minimum necessary.
             | 
              | Unfortunately bureaucracy seems artificially successful
              | in the short term, while the negative aspects of setting
              | things in concrete are long-tailed.
             | 
             | The growing disengagement problem is one of those long
             | tails IMHO.
        
               | marcosdumay wrote:
               | Well, yes to all of that.
               | 
                | Taylorism is actually an attempt to make organizations
                | flexible, given the assumption that the more
                | subordinate people are completely unaligned with the
                | organization's goals while management is in closer
                | alignment. It's a very direct consequence of that
                | assumption.
                | 
                | Of course, the irony of it is that reality is often
                | closer to the other way around.
        
       | sevensor wrote:
       | When you treat every negative outcome as a system failure, the
       | answer is more systems. This is the cost of a blameless culture.
       | There are places where that's the right answer, especially where
       | a skilled operator is required to operate in an environment
       | beyond their control and deal with emergent problems in short
       | order. Aviation, surgery. Different situations where the cost of
       | failure is lower can afford to operate without the cost of
       | bureaucratic compliance, but often they don't even nudge the
       | slider towards personal responsibility and it stays at "fully
       | blameless."
        
         | hypeatei wrote:
         | I've never seen it put so succinctly but this is the issue I
          | have with blameless culture. We can design CI pipelines,
          | linters, whatever it is, to stop certain issues with our
          | software from being released, but if someone is incompetent,
          | they don't care and _will_ find a way to fuck something up,
          | and you can only automate so much.
        
           | liquidpele wrote:
           | There's a 2x2 matrix you can put employees into with one side
           | being smart/idiot and the other being lazy/industrious. There
           | is no greater threat than the industrious idiot.
        
         | linuxlizard wrote:
         | >When you treat every negative outcome as a system failure, the
         | answer is more systems.
         | 
         | Holy crap, I'm going to save that quote forever. I have a co-
         | worker who treats every line of bad code committed as a reason
         | to add ever more layers to CI. Yo, we caught it in testing.
         | There's no reason to add another form we have to fill out.
        
         | SupremumLimit wrote:
         | This is a wonderfully insightful comment!
         | 
         | I've encountered a similar phenomenon with regard to skill as
         | well: people want to ensure that every part of the software
         | system can be understood and operated by the least skilled
         | members of the team (meaning completely inexperienced people).
         | 
         | But similarly to personal responsibility, it's worth asking
         | what the costs of that approach are, and why it is that we
         | shouldn't have either baseline expectations of skill or
         | shouldn't expect that some parts of the software system require
         | higher levels of expertise.
        
           | jiggawatts wrote:
            | This is the reason Haskell and F# are relatively unpopular
            | while Go has a much wider footprint in the industry: high
            | expertise levels don't scale. You can hire 100 juniors but
            | not 100 seniors all trained up in the _same_ difficult
            | abstractions.
           | 
           | Conversely, one skilled senior can often outperform a hundred
           | juniors using simpler tools, but management just doesn't see
           | it that way.
        
             | SupremumLimit wrote:
             | Indeed, specialist knowledge is a real constraint, but I
             | think it's possible to at least _orient_ towards building
             | systems that require no baseline level of skill (the fast
             | food model I guess) or towards training your staff so they
             | acquire the necessary level of skills to work with a less
             | accessible system. I suspect that the second pathway
             | results in higher productivity and achievement in the long
             | term.
             | 
             | However, management tends to align with reducing the
             | baseline level of skill, presumably because it's convenient
             | for various business reasons to have everyone be a
             | replaceable "resource", and to have new people quickly
             | become productive without requiring expensive training.
             | 
             | Ironically, this is one of the factors that drives ever
             | faster job hopping, which reinforces the need for
             | replaceable "resources", and on it goes.
        
         | poulsbohemian wrote:
         | But there's also an element where this isn't due to system
         | failure, but rather design. Companies want to make their
         | processes bureaucratic so that you won't cost them money in
         | support and so you won't cancel your subscription - making the
         | process painful is the point. Likewise in government - it isn't
         | that government can't be efficient, it's that there are people
         | and organizations who want it to be encumbered so that they can
         | prove their political point that government is inept. One side
         | wants to create funding for a program, the other side puts in
         | place a ton of controls to make spending the money for the
         | program challenging so they can make sure that the money isn't
         | wasted - which costs more money and we get more bureaucracy.
        
         | schmidtleonard wrote:
         | Just one tiny problem: I've played the blame game before. I've
         | worked there. You can't sell me the greener grass on the other
         | side of the road because I've been to the other side of the
         | road and I know the grass there is actually 90% trampled mud
         | and goose shit.
         | 
         | The blame game drives the exact same bureaucratization process,
         | but faster, because all of the most capable and powerful
         | players have a personal incentive to create insulating
         | processes / excuses that prevent them from winding up holding
         | the bag. Everyone in this thread at time of writing is
         | gleefully indulging in wishful thinking about finally being
         | able to hold the team underperformer accountable, but these
         | expectations are unrealistic. Highly productive individuals do
         | not tend to win the blame game because their inclinations are
         | the exact opposite of the winning strategy. The winning
         | strategy is not to be productive, it's to maximize safety
         | margin, which means minimizing responsibility and maximizing
         | barriers to anyone who might ask anything of you. Bureaucracy
         | goes up, not down, and anyone who tries to be productive in
         | this environment gets punished for it.
         | 
         | "Blaming the system" doesn't prevent bureaucracy from
         | accumulating, obviously, but it does prevent it from
         | accumulating in this particular way and for this particular
         | reason.
        
           | tdeck wrote:
           | This also multiplies with hierarchy. In a blame-based
           | culture, your manager is partly to blame for what you do.
           | Their manager is partly to blame for what your manager does.
           | Therefore everyone in a reporting chain is incentivized
           | through fear to double check your work. That means more sign-
           | off and review and approval processes so that people can
           | avoid any kind of fuckup, and it also often means a toxic
           | environment where everyone is spending at least 20% of their
           | brain power worrying about internal optics which in my
           | experience is not a good thing for people engaged in creative
           | work.
        
         | cyanydeez wrote:
         | Geez.
         | 
         | Someone has no idea how modern human psychology is the only
         | thing creating any of these super structures and their
         | frailties.
         | 
         | We aren't ever going to be your super ant organism, get over
         | it.
        
         | jancsika wrote:
         | > When you treat every negative outcome as a system failure,
         | the answer is more systems.
         | 
         | Eloquently put. Also, likely false.
         | 
          | E.g., soft-realtime audio synthesis applications like Pure
          | Data and Supercollider have had essentially the same audio
          | engines since the 1990s. And users see _any_ missed audio
          | deadline as a negative outcome. More to the point-- for a
          | wide array of audio synthesis/filtering/processing use
          | cases, the devs who maintain these systems consider such
          | negative outcomes as systemic failures which must be fixed
          | by the devs, _not_ the users. I can't think of a more
          | precise example of "blameless culture" than digital
          | artists/musicians who depend on these devs to continue
          | fitting this realtime scheduling peg into the round hole
          | that is the modern (and changing!) multitasking OS.
         | 
          | While there have been some changes over the last 30 years,
          | in no way have any of these applications seen an explosion
          | in the number of systems they employ to avoid negative
          | outcomes.
         | There's one I can think of in Pure Data, and it's optional.
         | 
         | IMO, there's nothing noteworthy about what I wrote-- it's just
         | one domain in probably many across the history of application
         | development. Yet according to your "law" this is exceptional in
         | the history of systems. That doesn't pass the smell test to me,
         | so I think we need to throw out your ostensible law.
        
         | gamblor956 wrote:
          | A negative outcome is a system failure even when a personal
          | failure drove it, because the system failed to prevent
          | personal failures from causing negative outcomes.
         | 
         | You can't stop personal failures from happening because people
          | are people. You can design processes to minimize or
          | eliminate the chance that those personal failures yield
          | negative outcomes.
        
         | tom_ wrote:
         | The past was allowed to play itself out. Why not the present
         | too?
        
       | alexashka wrote:
       | Modern society is wage slavery. Wage slaves respond to their
       | condition with malicious compliance [0].
       | 
       | The rest is talk.
       | 
       | [0] https://en.wikipedia.org/wiki/Malicious_compliance
        
       | wavemode wrote:
       | In my experience, the more siloed an organization, the more
       | bureaucracy you end up with.
       | 
        | We've all had those experiences where the problem we are trying
       | to solve isn't inherently difficult - it's difficult because
       | three separate teams are in charge of parts A, B, and C of the
       | problem, and getting them to talk to each other, align their
       | roadmaps, change their processes, etc. is impossible, unless you
       | can go above their heads and force them to do it.
       | 
       | I think about organization design similarly to software design.
       | It's tempting to think about your software design from the top-
       | down, and design a hierarchy of siloed interfaces with
       | encapsulated private data and strictly separated concerns. This
       | looks beautiful on paper, but then in practice you now have to
       | navigate through a sea of objects and abstractions and
       | redundancies - getting anything meaningful done often requires
       | "punching holes" through the siloes so data can be shared.
       | 
       | Organizations are the same way. Paul Graham wrote an essay[0]
       | recently about the differences between "founder mode" and
       | "manager mode". In a nutshell, managers usually think about
       | organizations as silos - we divide up the company into a
       | hierarchy of departments and teams and levels, so that only
       | directors talk to middle managers and only middle managers talk
       | to supervisors and only supervisors talk to the individual
       | contributors. Again, it looks great on paper, and is what most
       | people are used to.
       | 
       | But "founder mode" is when someone with a lot of political
       | capital can step in and say, "you know what, I want to talk to
       | the people on the ground. I want to find out what's actually
       | going on below the surface in the org, not just the pre-packaged
       | PowerPoint version I hear from my directors. I want to pull
       | together people from across teams and across levels and across
       | departments - whoever is best suited to making this project a
       | success." I think that sort of "hole punching" can be really
       | powerful, if the company's culture is amenable to it.
       | 
       | [0]: https://paulgraham.com/foundermode.html
        
         | liquidpele wrote:
         | To a founder, the success of the company means their own
         | success. To a manager, success is to climb a ladder, any
         | ladder. Their incentives are very different, and thus how they
         | approach things will be.
        
       | GlenTheMachine wrote:
       | Here's an example from my corner of the Defense Department:
       | 
       | In order to publish a research paper, it has to be reviewed for
       | suitability for public release. This process is more than a
       | little silly, because it requires seven levels of review, of
       | which exactly one - my immediate supervisor - will have any idea
       | what the paper is about. But fine.
       | 
       | There used to be a paper form. You'd fill it out and either route
       | it around for signatures, or if you had a time crunch, walk it
       | around yourself. Eventually they replaced the paper form with a
       | web form, so now there's an automated queuing system that emails
       | people when they have a paper waiting to be reviewed.
       | 
       | The web form has all of the same info as the paper form, with one
       | addition. They scanned the paper form and turned it into a pdf,
       | and they make you fill out both the web form AND the pdf version
       | of the original paper form. So to sign off on a paper, you now
       | have to download the pdf, digitally sign it, upload it again,
       | _and_ hit the  "Approve" button on the web form.
       | 
       | Because God help us if anybody does an audit and we don't have
       | all of the forms correctly signed.
        
         | ok_dad wrote:
         | At a medical device manufacturer I worked at, it's even worse:
         | you print a copy of the thing to sign and sign it, then upload
         | that to the digital system and digitally sign there too. You
         | end up with several people printing huge documents, not just
         | the signature page, and each signing a different copy which is
         | uploaded then thrown away. That's right, one paper copy per
         | person is signed, scanned, then shredded.
        
           | IIAOPSW wrote:
           | At least tell me the shredded paper is recycled so that the
           | next document can be printed signed and shredded on it.
        
         | toomuchtodo wrote:
         | Is a list or inventory maintained of research papers that
         | aren't published? What happens to those papers?
        
           | GlenTheMachine wrote:
            | In my experience no research paper _ever_ gets rejected,
            | at least not for reasons that have anything to do with its
            | content.
           | content. If your paper gets rejected, it is almost always
           | because you failed to put the appropriate markings on the
           | paper, or filled the form out wrong, and then you missed the
           | conference deadline so the whole thing was OBE.
           | 
           | There is indeed a list of rejected papers. The system logs
           | all of them. Generally they're recycled, updated, and
           | published elsewhere.
        
       | IIAOPSW wrote:
       | I'm pretty torn about this, because I am also deeply skeptical of
       | exactly the sort of situations an IRB is set up to prevent.
       | Things like requiring documents to be signed in pen are an
       | important part of a secure audit trail. And an appropriate audit
       | trail with proper safe guards is absolutely essential especially
       | given the way personal health related things are conducted in the
       | inherent darkness of confidentiality. The privacy protections of
        | personal health records also happen to be just as effective at
        | keeping evidence of corrupt conduct within the system private.
       | 
       | Maybe the real problem is that there is (at least to a degree) a
       | trilemma between effective, safe from research misconduct, and
       | respectful of individual privacy.
        
       ___________________________________________________________________
       (page generated 2024-10-19 23:00 UTC)