[HN Gopher] It's Always the Process, Stupid
___________________________________________________________________
It's Always the Process, Stupid
Author : DocIsInDaHouse
Score : 248 points
Date : 2025-11-29 14:20 UTC (8 hours ago)
(HTM) web link (its.promp.td)
(TXT) w3m dump (its.promp.td)
| Lapalux wrote:
| >It is the first technology that is truly useful for handling
| unstructured data.
|
| >Processes that rely on unstructured data are usually
| unstructured processes.
|
| I appreciate someone succinctly summing up this idea.
| defaultcompany wrote:
| This doesn't ring true to me. Having processes which rely on
| communication between humans using natural language can of
| course be either structured or unstructured. Plenty of highly
| functioning companies existed well before structured data was
| even a thing.
| Spooky23 wrote:
| Technology folks often confuse structured data needed for
| their computing function as being needed for the business
| process.
| yannyu wrote:
| Structured data doesn't have to be a database. It can be a
| checklist, a particular working layout, or even just a
| defined process. Many high functioning companies spent a lot
| of time on those kinds of things, which became a competitive
| advantage.
| wavemode wrote:
| "Talk to the vendor and see what they say" is an unstructured
| process relying on unstructured data.
|
| "Ask the vendor this set of 10 compliance questions. We can
| only buy if they check every box." is a structured process
| based on structured data.
|
| Both kinds of processes have always existed, long before
| modern technology. Though only the second kind can be
| reliably automated.
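| The distinction above can be made concrete: a structured process
| reduces to a checkable predicate, which is exactly why it can be
| automated. A minimal sketch; the questions and vendor answers are
| invented for illustration, not taken from any real checklist:

```python
# Hypothetical structured procurement check. The questions and the
# vendor's answers are illustrative placeholders.
COMPLIANCE_QUESTIONS = [
    "Is customer data encrypted at rest?",
    "Is SSO via SAML or OIDC supported?",
    "Is there a documented incident-response SLA?",
]

def can_buy(answers: dict) -> bool:
    """Structured rule: buy only if the vendor checks every box."""
    return all(answers.get(q, False) for q in COMPLIANCE_QUESTIONS)

# A vendor that answers yes to everything passes; any gap fails.
compliant_vendor = {q: True for q in COMPLIANCE_QUESTIONS}
partial_vendor = {COMPLIANCE_QUESTIONS[0]: True}
print(can_buy(compliant_vendor))  # True
print(can_buy(partial_vendor))    # False
```

| The "talk to the vendor and see what they say" version has no such
| predicate, so there is nothing mechanical to automate.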
| evrydayhustling wrote:
| Best lines in this article. But IMO it doesn't get to a very
| important point: why can't these processes easily be
| structured? Here are some good reasons:
|
| - Your process interacts with an unstructured external world
| (physical reality, customer communication, etc.)
|
| - Your process interacts with _differently structured_
| processes, and unstructured is the best agreed-upon transfer
| protocol (could be external, like data sources, or even
| internal between teams with different taxonomies)
|
| - Your process must support a wild kind of variability that is
| not worth categorizing (e.g. every kind of special delivery
| instruction a customer might provide)
|
| Believing you can always solve these with the right taxonomy
| and process diagram is like believing there is always another
| manager to complain to. Experienced process design instead
| pushes semi-structured variability to the edges, acknowledges
| those edges, and watches them like a hawk for danger.
|
| We should ABSOLUTELY be applying those principles more to AI...
| if anything, AI should help us decouple systems and overreach
| less on system scope. We should get more comfortable building
| smaller, well-structured processes that float in an
| unstructured soup, because it has gotten much cheaper for us to
| let every process have an unstructured edge.
| softwaredoug wrote:
| In my 13 years working in the search field, I have always seen
| this trend:
|
| Leaders think <buzzy-technique> is a good way to save money, but
| <buzzy-technique> actually is a thing that requires deeper
| investment to realize more returns, not a money saver.
| stuartjohnson12 wrote:
| That's why you need consultants to tell you that <buzzy-
| technique> has problems, but <rebadged-buzzy-technique> is
| really how you save money, and that's why working with a
| <rebadged-buzzy-technique> expert is how you can overhaul your
| business and manage operational costs.
| chrisweekly wrote:
| > _Let's rip the Band-Aid off immediately: If your underlying
| business process is a mess, sprinkling "AI dust" on it won't turn
| it into gold. It will just speed up the rate at which you
| generate garbage.
|
| In the world of Business IT, we get seduced by the shiny new toy.
| Right now, that toy is Artificial Intelligence. Boardrooms are
| buzzing with buzzwords like LLMs, agentic workflows, and
| generative reasoning. Executives are frantically asking, "What is
| our AI strategy?"
|
| But here is the hard truth:
|
| There is no such thing as an AI strategy. There is only Business
| Process Optimization (BPO)._
|
| This is well-expressed, and almost certainly true for an
| overwhelming majority of companies.
| vanschelven wrote:
| Although I very much agree with the sentiment, "here's the
| hard truth" is LinkedIn speak / an LLM tell to me.
| ashu1461 wrote:
| The OP probably used LLMs to structure the document, but the
| sentiment `Automating stupidity = faster stupidity` is a great
| takeaway.
| PaulHoule wrote:
| There was a guy who wrote a blog post in that style who was
| wondering how it was he'd posted hundreds of messages to
| people on LinkedIn and gotten no replies.
|
| There are some people who insist on spamming out splog posts
| in that style. Some of them think they are blogging, not
| splogging, and maybe they have good intentions, but that style
| screams "SPAM!", and unfortunately the people writing it
| don't understand how it comes across.
| wavemode wrote:
| A similar observation commonly comes up related to software
| development - "it's not tech debt, it's org debt" (or to put it
| another way, "trying to use a technical solution to solve a
| social problem").
| __MatrixMan__ wrote:
| I hear that one a lot but pretty frequently it's applied to
| "social problems" which were caused by technology. It seems
| to imply some kind of technology/society boundary which
| doesn't actually exist.
| DocTomoe wrote:
| Mild disagree.
|
| The saying "you can't solve social problems with
| technology" usually means - at least in the places I have
| heard / used it - "If your workforce fights a process -
| because the process is stupid, the tools are slow,
| incentives don't align with policy, whatever - especially
| a control step, no amount of mandatory tech enforcement of
| that step will yield better results." At best you get
| garbled data because someone mashed the keyboard to fill
| in mandatory fields; sometimes the process moves OUTSIDE
| of the system through informal channels because 'work
| still needs to be done'; at worst, you get a mutiny.
|
| You have to fix the people('s problems) by actually talking
| to them and taking the pain points away; you do not go to
| 'computer says no' territory first.
|
| In my experience, no org problem is only social, and no
| tech problem is merely technical. Finding a sustainable
| solution in both fields is what distinguishes a staff
| engineer from a junior consultant.
| ozim wrote:
| Another point of view on that.
|
| I work on a SaaS platform as an engineer. People from
| customer A ask for a bunch of fields to be made mandatory -
| only for people from that same company to come nagging six
| months later that our platform sucks because of those
| fields. Well, no: their process and requirements suck - we
| didn't decide which fields are mandatory.
| criemen wrote:
| I've been thinking a lot about that lately, and I agree.
| I used to be firmly in the "you can't solve social problems
| with technical solutions" camp, but that's not the whole
| truth. If people aren't using your thing, sure, you can
| brand that as a social problem (lack of buy-in on the
| process, people not being heard during rollout, ...).
| However one way of getting people to use your
| thing/process is to make it easier to use. Integrate it
| well into the workflow they're already familiar with,
| bring the tooling close, reduce friction, provide some
| extra value to your users with features, etc. Those are
| technical solutions, but if you choose them based on
| knowledge of the "social problem", they can be quite
| effective.
| __MatrixMan__ wrote:
| This is what I was trying to express, perhaps poorly:
|
| > no org problem is only social, and no tech problem is
| merely technical.
|
| I was going for "the intersection is clearly nonempty"
| but maybe the better argument is "the intersection is
| pretty much everything."
| rootnod3 wrote:
| There very much is that boundary. Jira by tech itself is a
| good product, but now try shoving it down people's throats
| and see how that goes.
|
| Or on a bigger scale look at FB/Social media and society.
| There definitely without a doubt is a boundary. They
| interact and overlap.
| TeMPOraL wrote:
| Not to mention, technical solutions are usually the only
| viable ones. It's not like, in practice, we solve social
| problems in other ways.
| dkdcio wrote:
| "tech is easy, people are hard"
| dclowd9901 wrote:
| Oh, now I have a name for the epidemic pervasive through our
| company.
|
| Almost all of the tech debt we have was introduced by
| leadership guidance to ignore it. And all the additional
| debt incurred to manage or ameliorate it (since problems
| don't just go away) also comes from leadership guidance to
| fast-track fixes.
|
| What happened to the days where software engineers were the
| experts who decided tech priority?
| dragonwriter wrote:
| > What happened to the days where software engineers were
| the experts who decided tech priority?
|
| Outside of a very small number of firms that were called
| out as notable for being led in a way that enabled that,
| often by engineers who were themselves still hands-on,
| those days never existed. Even there, it was "business
| leadership that happened to also be engineers, and made
| decisions based on business priorities informed by their
| understanding of software engineering", not "software
| engineers in their walled-off citadels of pure
| engineering". And in successful firms it usually involved
| considerable willingness to accept tech debt, just as
| business leadership can often not be shy about accepting
| financial debt.
| mananaysiempre wrote:
| > business leadership can often not be shy about
| accepting financial debt
|
| Business leadership is not shy about accepting financial
| debt when business leadership has decided it should
| accept financial debt. Technical leadership should
| ostensibly not be shy about accepting technical debt
| because _business_ leadership has decided it should
| accept technical debt. The distribution of agency and
| responsibility in the two situations is different.
| Aperocky wrote:
| It can both be Business Process Optimization and an AI
| strategy.
|
| In fact, if an AI strategy becomes business process
| optimization, I'd say that AI strategy for that company is
| successful.
|
| There are too many AI strategies today that aren't even
| business process optimization, are detached from the bottom
| line, and are just pure FOMO from the C-suite. Those
| probably won't end well.
| ToucanLoucan wrote:
| Almost every problem a modern corpo has can be solved with an
| appropriate head-count of appropriately trained/educated
| people, and that's why none of them get solved.
|
| The processes suck because of decades of corner cutting and
| "fat" trimming while the executives congratulate themselves for
| only making the product a biiiit worse in exchange for a
| 0.0005% cost reduction, before then offsetting any gains by
| giving themselves all the money that would've gone to whatever
| is now dead.
|
| Repeat this process for 30 years and you have companies like
| Microsoft that can barely ship anything that works anymore, and
| our 4 Big Websites frequently just fail to load pages for no
| explicable reason, Amazon goes down and takes 1/3 of the
| internet with it, and AI companies are now going to devour the
| carcass of our internet and shit it back to us in LLM waffle
| while charging us money for the privilege to eat it.
| A4ET8a8uTh0_v2 wrote:
| Honestly, I don't know if throwing people at a problem is
| the way to go. Doubly so given that a good chunk of my
| projects lately deal with third-party vendors, and those
| are so embedded that even getting basic requirements and
| documentation is an uphill battle (which, to me, seems
| insane). I have zero pull, so I do what I can, note the
| insanity for CYA, and move on.
|
| I do agree on execs congratulating themselves afterwards
| though. It was obscene last year. This year it was mildly
| muted.
| balamatom wrote:
| You can always throw other things at those people instead.
| jjk166 wrote:
| Throwing people at a problem is very different from
| allocating an appropriate head-count of appropriately
| trained/educated people. A small but skilled team can
| accomplish a lot, whereas a lot of the wrong people can't
| do much at all. Generally there are more than enough warm
| bodies available - big companies are full of those. The
| issue is that skilled people aren't fungible: the team of
| 12 working on this project seems to be moving at a snail's
| pace because really it's two people doing the real work,
| both of whom are split across several other projects
| simultaneously, while everyone else does stuff that is
| likely unnecessary if not straight-up counterproductive. It
| takes skill, effort, and discipline to cultivate a team
| that actually has all the skills it needs to succeed, in
| the form of people who mutually work well together, to keep
| those people around over an extended period, and to not try
| to split them up onto different projects and plug the gaps
| with the wrong people.
| Ologn wrote:
| It's The Mythical Man Month idea. Programming software is a
| different thing than working on an assembly line, or a call
| center, or in retail sales. You're much better off having
| four programmers who are worth paying $200k a year than ten
| programmers who are worth paying $75k a year.
| PaulHoule wrote:
| I'm going to argue that, at scale, process beats the
| quality of the people you're using -- and also that there
| are toxic cultures, around Google and C++, where very smart
| people get seduced into spending all their time and effort
| fighting complexity, battling 45 minute builds, etc.
| zahlman wrote:
| > and also that there are toxic cultures, around Google
| and C++, where very smart people get seduced into
| spending all their time and effort fighting complexity,
| battling 45 minute builds, etc.
|
| Not sure what you mean here. "Fighting" as in "seeking to
| prevent", or "putting up with", or what exactly? Is this
| supposed to be bad because it's exploitative, or because
| it's a poor use of the smart person's time, or what
| exactly?
| PaulHoule wrote:
| Essentially that the idea that people can hold 7 ± 2
| things in their head simultaneously is basically true, such
| that when your tools make a demand on your attention it
| subtracts from the attention you can put on other
| things.
|
| There are many sorts of struggle. There is struggle
| managing essential complexity and also the struggle,
| especially in the pre-product phase, of getting consensus
| over what is "essential" [1]. When it comes to accidental
| complexity you can just struggle following the process or
| struggle to struggle less in the future by some
| combination of technical and social innovations which
| themselves can backfire into increased complexity.
|
| Google can afford to use management techniques that would
| be impossible elsewhere because of the scale and
| profitability of their operations. Many a young person
| goes there thinking they'll learn something transferable
| but the market monopolies are the one thing that they
| can't walk out with.
|
| [1] Ashby's law https://www.edge.org/response-detail/27150
| best exemplified by the Wright flyer, which could fly
| without tumbling because it controlled roll, pitch _and_
| yaw.
| nlawalker wrote:
| _> Almost every problem a modern corpo has can be solved with
| an appropriate head-count of appropriately trained /educated
| people_
|
| Not really, because solving those problems with headcount
| defeats the point. Part of the definition of those kinds of
| problems is that solutions involving headcount are invalid.
| TimPC wrote:
| I feel like this is an inside view from the BPO community and
| the only part of AI they see is the part that affects BPO. But
| for most businesses AI strategy is not about AI for internal
| use but AI to either improve customer funnels or launch new
| products. Most of the companies I've talked to in the past year
| wanted a strategy for customer facing AI not internal AI.
| wizzwizz4 wrote:
| LLMs do not improve customer funnels. Well-designed decision
| trees _can_, but we're not calling those "AI" at the moment.
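| A "well-designed decision tree" in this sense is just explicit,
| auditable branching. A toy sketch; the questions and actions
| below are invented, not taken from any real funnel:

```python
# A toy customer-support decision tree: each node is either a yes/no
# question or a leaf action. All values here are illustrative.
TREE = {
    "question": "Is the order already shipped?",
    "yes": {"action": "offer_tracking_link"},
    "no": {
        "question": "Was payment captured?",
        "yes": {"action": "offer_cancellation"},
        "no": {"action": "close_as_unpaid"},
    },
}

def walk(tree: dict, answers: list) -> str:
    """Follow yes/no answers down the tree until a leaf action."""
    node = tree
    for a in answers:
        node = node["yes" if a else "no"]
        if "action" in node:
            return node["action"]
    raise ValueError("ran out of answers before reaching a leaf")

print(walk(TREE, [False, True]))  # offer_cancellation
```

| Every path through the tree is enumerable and testable, which is
| what makes it reliable in a way a free-running LLM reply is not.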
| bathtub365 wrote:
| The last time a company put an AI chat bot between me and
| actual customer service it didn't listen to my problem and
| hallucinated something I didn't say.
| bluecheese452 wrote:
| There is an ai strategy if you are selling hype instead of
| products.
| wombatpm wrote:
| I worked for a fortune 300 company that engaged in a Business
| Process Redesign initiative. After spending 90 million on the
| project they pulled the plug.
|
| My takeaway was that the project was doomed because it was
| named wrong. Should have been called Business Process Design.
|
| They are now owned by Private Equity. I can only wonder what
| madness they would have wrought with AI.
|
| They tried to implement a system whereby a customer has a
| single customer number. Between mergers, acquisitions, and
| shutdowns it was impossible to keep straight and keep the
| history tracked. It impacted rates, contracts, sales
| commissions, division revenue - everything. In the end they
| gave everyone a new number while still using the old ones.
| xnx wrote:
| Relevant in so many contexts: https://xkcd.com/927/
| Starlevel004 wrote:
| LinkedIn Standard English, tab closed
| zkmon wrote:
| This should go to all CEOs. They should realize that the real
| problem AI solves is handling of text and unstructured data. That
| is the core ability.
|
| But I don't blame them. Process optimization is hard. If a new
| tool promises more speed without changing the process, they are
| ready to pour money into it.
| watermelon0 wrote:
| Text and unstructured data mainly relate to NLP/LLMs, not to
| AI as a whole.
| zkmon wrote:
| If you take out LLMs (text/image/voice processing) from all
| the models, I'm curious to know: what else is left that can
| be called AI today?
| Spooky23 wrote:
| Well, that's a pretty powerful capability.
|
| I recently did a pilot project where we reduced the time for a
| high friction IT Request process from 4 day fulfillment to
| about 6 business hours. By "handling text and unstructured
| data", the process was able to determine user intent, identify
| key areas of ambiguity that would delay the request, and
| eliminate the ambiguity based on data we have (90%) or by
| asking a yes/no question to someone.
|
| All using GCP tools integrating with a service platform, our
| ERP and other data sources. Total time ~3 weeks, although we
| cheated because we understood both the problem and process.
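| The triage flow described above can be sketched as a pipeline.
| Every field name, intent label, and lookup below is an
| assumption for illustration, not the actual GCP implementation:

```python
# Illustrative triage pipeline for an IT request, loosely following the
# comment above: infer intent, spot ambiguous fields, resolve what we
# can from data on hand, and fall back to yes/no questions for the rest.
REQUIRED_FIELDS = ("system", "duration")  # hypothetical required fields

def triage(request_text: str, known_data: dict) -> dict:
    # Toy intent detection; a real system would use an LLM or classifier.
    intent = "access_request" if "access" in request_text.lower() else "other"
    ambiguous = [f for f in REQUIRED_FIELDS if f not in known_data]
    return {
        "intent": intent,
        "resolved": dict(known_data),  # filled from data we already have
        "questions": [f"Use the default {f}? (yes/no)" for f in ambiguous],
    }

result = triage("Please grant access to the ERP", {"system": "ERP"})
print(result["intent"])     # access_request
print(result["questions"])  # ['Use the default duration? (yes/no)']
```

| The thread's point stands either way: the AI part is only the
| intent and ambiguity step; the speedup comes from the
| surrounding process being well defined.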
| orev wrote:
| I suspect that could have also been accomplished without any
| kind of AI. Most processes are inefficient simply because
| nobody has taken the time to optimize them (and rightly so if
| they're not used often enough to justify the time; premature
| optimization and all that). The act of simply deciding to
| optimize something, and then looking at it, usually results
| in significant gains just because of that, regardless of what
| tools were used.
| CPLX wrote:
| In fairness, that is an extraordinary talent. The reality is
| that a huge amount of processes that exist have critical steps
| where humans have to make judgments because the information
| (was thought to be) not clear and structured enough for a
| machine.
|
| For many processes that have just suddenly changed, somewhat
| subjective evaluations can be made reliably by an AI. At least
| as reliably as was being done before by relatively junior or
| outsourced staff.
|
| Replacing low-level employees relying on a decision matrix
| playbook-type document with AI has a LOT of applications.
| NoNameHaveI wrote:
| I am compelled to quote Fred Brooks: "There is no silver bullet".
| https://en.wikipedia.org/wiki/No_Silver_Bullet
| 1970-01-01 wrote:
| Yet another blogger conflating AI with LLMs again. AI will
| absolutely transform your business process if you're not yet
| another software shop vibing container deployment scenarios. ex:
| https://viterbischool.usc.edu/news/2025/10/researchers-inven...
| notpachet wrote:
| How is what you linked a business process? Cancer
| identification is a step in a larger process, specifically the
| "analyze unstructured data" part that the author alludes to.
|
| AI won't take a shoddy process (say, your process for reviewing
| and accepting forms from patients) and magically make it better
| if you don't have an idea of what "better" actually entails.
|
| "Improving a system requires knowing what you would do if you
| could do exactly what you wanted to. Because if you don't know
| what you would do if you could do exactly what you wanted to,
| how on earth are you going to know what you can do under
| constraints?"
|
| - Russ Ackoff
| 1970-01-01 wrote:
| Hi Russ,
|
| Did you read the example? The business process of human bias
| is gone in the cancer detection phase. AI eliminated it.
| notpachet wrote:
| Cancer detection is not a process. That's a discrete step
| in a process.
|
| My name isn't Russ. Russ Ackoff was a business process
| optimization leader from the last century -- a contemporary
| of Deming and the Toyota school etc.
| 1970-01-01 wrote:
| The article mentions other steps are changing due to AI.
| It's a ship of Theseus result. Multiple steps are
| changing, some are eliminated, some are still needed, yet
| the overall name of the process (cancer detection)
| doesn't change.
| jmye wrote:
| Did _you_ read it? Their process is highlighting
| "interesting" cells, per the article. Human process simply
| isn't gone, nor is the "AI" resolving anything beyond
| "these cells look like other bad cells".
|
| Do you understand the treatment process, here? I don't ask
| that to be shitty, but I feel like you're hand-waving away
| the entirety of the process because image detection is
| interesting.
|
| It smells like a "disrupt healthcare" statement, of which
| there are many and of which none have any basis or value.
| wolfi1 wrote:
| I've seen several introductions of new ERPs in companies.
| Usually they wanted the same processes they had, just with the
| new software. The customizing turned out to be a nightmare, as
| the consultants usually accepted their wishes and the
| programmers had to bend the ERP system accordingly. It was
| never on budget or on time.
| ashu1461 wrote:
| I think there is one counterargument: LLMs are speeding up
| everything, including the speed of learning, which implies
| that companies with bad processes may learn and move to good
| processes along the way as well.
|
| Example, one of many things, in our SDLC process, now we have
| test cases and documentation which never existed before (coming
| from a startup).
| alexpotato wrote:
| One of my favorite stories about processes and documentation:
|
| - Work at a hedge fund
|
| - Every evening, the whole firm "cycles" to start the next
| trading day
|
| - Step 7 of 18 fails
|
| - I document Step 7 and then show it to a bunch of folks
|
| - I end up having a meeting where I say: "Two things are true: 1.
| You all agree that Step 7 is incorrectly documented. 2. You all
| DISAGREE on what Step 7 should be doing"
|
| I love this story as it highlights that JUST WRITING DOWN what's
| happening can be a giant leap forward in terms of getting people
| to agree on what the process actually IS. If you don't write it
| down, everyone may go on basing decisions on an incorrect
| understanding of the system.
|
| A related story:
|
| "As I was writing the documentation on our market data system,
| multiple people told me 'You don't need to do that, it's not that
| complicated'. Then they read the final document and said 'Oh, I
| guess it is pretty complicated' "
| Telaneo wrote:
| I've been in discussions about Step 7, and my god, the
| experience was soul crushing. Even more soul crushing was that
| the result of that discussion was to not document Step 7,
| because doing that might enforce the idea of what it should be
| for and why it should be done.
|
| Writing stuff down is great since it provides a baseline to
| agree upon, and later additions to the team will take it as
| given and not start to discuss minutiae and bog down
| discussions into nothingness. And if some point really is worth
| discussing, it shouldn't be hard to find support to change it.
| I've heard some wild misunderstandings of how things were based
| on how they were being done, and now I never want to do
| anything of any significant size without there being a clear
| and obvious process to it.
| alexpotato wrote:
| > the result of that discussion was to not document Step 7,
| because doing that might enforce the idea of what it should
| be for and why it should be done.
|
| In Charlie Beckwith's book about Delta Force [0] there is a
| line where he says (paraphrasing):
|
| "The SAS never wanted to write down what their role was and
| what tasks they were trained for. Why? Because they didn't
| want to get pigeon holed into a role. ... They also never
| wrote down their SOPs b/c the argument was that 'if you can't
| keep it in your head, you shouldn't be in the Regiment'. At
| Delta, we were going to write down our mission AND write down
| our SOPs."
|
| 0 - https://amzn.to/4ahIAJV
| Telaneo wrote:
| For a force whose goals can change at any moment, this
| seems pretty reasonable. The SAS shouldn't be trained for
| anything in particular, but rather for anything and
| everything.
|
| Step 7 in a process which already has defined end-goals
| though? The fact that there were disagreements in the first
| place baffled me. The fact that it was impossible to write
| anything down about it without invoking heaven's wrath made
| me quit.
| Waterluvian wrote:
| What drives me nuts is how many people can't separate those two
| tasks/projects.
|
| We're going to write down what Step 7 currently is/does. No,
| now is not the time to start discussing what it ought to do.
| Please let us just get through sorting out what Step 7
| currently is. Yes, some people do it differently. That's why we
| hit a snag. Let's just pick one of those wrong ways, document
| it, and do it all wrong together. We'll fix it as a separate
| step. Now isn't the time to fix it, as much as it feels like a
| convenient time to.
| gishh wrote:
| Yeesh. I've never worked with a smart group of people who
| came to that conclusion. That sounds toxic. :(
| alwa wrote:
| Which way sounds toxic--wanting to get it right now that
| they've become aware it's a problem? Or getting _something_
| down now, as close as possible to what happened yesterday
| and the day before, to unblock the larger process--then
| refining it after the fires are out?
|
| Seems like horses for courses to me: I can imagine my very
| happy healthy teams needing to operate in either mode,
| depending on the specific problem. I also can imagine us
| needing the person closest to the problem to tell us which
| direction applies.
|
| (To your point though, I also can imagine that any type of
| pressures like these would really bring out the dysfunction
| in "toxic" teams.)
| gishh wrote:
| > Or getting something down now, as close as possible to
| what happened yesterday and the day before, to unblock
| the larger process--then refining it after the fires are
| out?
|
| In my experience, the refining never happens.
| danaris wrote:
| But at least, in that scenario, _the process is
| unblocked_.
|
| The other way, you've blocked the process until every
| subcommittee of the committee assigned to fix the process
| has delivered their Final Report Draft 8 FINAL (1) (13)
| (1).docx. And that could be preventing an entire
| department from working _at all_.
| gishh wrote:
| I think you identified the problem.
|
| > subcommittee of the committee assigned to fix the
| process
|
| That bit, is the problem.
| machomaster wrote:
| Sometimes blocking the process is the best thing to do.
| Blocking gives you leverage and lets you fix long-standing
| imbalances.
|
| Imagine you have been slaving away for a low salary under an
| abusive boss who constantly promises but never delivers. If
| shit hits the fan and you are desperately needed, that is
| the perfect time to talk and solidify improvements. The game
| does not run on gratitude.
|
| The same rule unfortunately also applies to
| relationships.
| hashstring wrote:
| What do you mean exactly?
| BinaryIgor wrote:
| Writing is such a powerful and often underrated and
| underutilized tool; I don't think it's an overstatement to say
| that it's on par with fire and belongs in the top 5 of
| humanity's all-time inventions/discoveries.
| hammock wrote:
| Your story and the article's thesis that AI is for acceleration
| and automation (not other things like design/intelligence)
| remind me of one particular CEO's five step product process:
|
| 1) Design smart(er) requirements - i.e., beat up the ask and
| rewrite the problem statement correctly. 1b: every
| requirement has a person's name attached who is
| traceable/responsible for its inclusion - not a department.
|
| 2) delete features you don't need or which are hedges (if you
| aren't adding back 10% of the time, then you aren't deleting
| enough)
|
| 3) simplify or optimize. This step must come after 1 and 2 so
| you aren't wasting effort optimizing the wrong thing
|
| 4) accelerate
|
| 5) automate
|
| This way it is very clear where AI plugs in - and, more
| importantly, WHEN it plugs in.
|
| Also, plenty of times people try to run this process backwards,
| with poor outcomes.
| crims0n wrote:
| I have complicated feelings towards process, especially in large
| enterprises. On one hand, I know process is how you get good work
| out of average people - and that has a lot of value in big
| businesses because statistically, most people are going to be
| around average.
|
| On the other hand, I have seen process stifle above average
| people or so called "rockstars". The thing is, the bigger your
| reliance on process, the more you need these people to swoop in
| and fill in the cracks, save the day when things go horribly
| wrong, and otherwise be the glue that keeps things running (or
| perhaps oil for the machine is more apt).
|
| I know it's not "fair", and certainly not without risk, but the
| best way I have (personally) seen it work is where the above
| average people get special permissions such as global admin or
| exception from the change management process (as examples) to
| remove some of the friction process brings. These people like to
| move fast and stay focused, and don't like being bogged down by
| petty paperwork, or sitting on a bridge asking permission to do
| this or that. Even as a manager, I don't blame them at all, and
| all things being equal so long as they are not causing problems I
| think the business would prefer them to operate as they do.
|
| In light of those observations, I have been wrestling a lot with
| what it says about process itself. Still undecided.
| NeutralForest wrote:
| The Agile Manifesto says "People over process", this can be
| interpreted in many ways. But ideally you follow the 80/20 rule
| and have clear cut processes for the most frequent cases and/or
| liability/law/SLA stuff you can't do without. But you should
| have fast escape hatches as well imo where a good engineer
| having admin access on a platform or deploying a hot-fix is
| also possible.
| tetha wrote:
| One thing process protects against is lazy people.
|
| Like, we recently had an incident where someone just pasted
| "401 - URL" into the description and sent it off. We recently
| asked someone to open the incident through the correct
| channels. We got a service request "Fix" with the mail thread
| attached to it in a format we couldn't open. We get incidents
| "System is slow, infrastructure is problem" from random
| "DevOps" people.
|
| Sadly, that is the crap you need to deal with. This is the crap
| that grinds away cooperative culture by pure abuse. Before a
| certain dysfunctional project was dumped on us as "Make it
| SaaS", people were happy to support ad-hoc, ambitious and
| strange things.
|
| We are now forced by this project to enforce procedure and if
| this kills great ideas and adventures, so be it. The crappy,
| out-of-process things cost too much time.
| throaway54 wrote:
| The lazy are also most likely to push back against the
| process, even though they're the ones who can most benefit.
| Telaneo wrote:
| Providing the rockstars with a sandbox where they can do
| anything and work independently, while being shielded from all
| the processes and paperwork that slow them down (while also
| having people to pick up that slack), is a fairly good method,
| but depending on the work that isn't always viable. Their work
| has to come out of the sandbox at some point, and there will be
| some back-and-forth which will probably put blocks on the team
| in that case.
|
| I doubt there's much that can be done to a specific process to
| minimise the problems for the rockstars without also causing
| problems further down the ladder, short of making exceptions
| like you said. It's probably just an
| emergent behaviour of processes like this intended to raise
| average quality. You pull up the bottom floor, but the roof
| gets lower as a result. You can find similar problems in
| schooling.
| DocTomoe wrote:
| I think it is a managerial failure to have rockstar-type
| employees work the menial, process-managed stuff. Those should
| work on the unusual, the new, the moonshots. Stuff that has not
| yet been formalized in BPMN 2.0.
| jjk166 wrote:
| > On the other hand, I have seen process stifle above-average
| people or so-called "rockstars". The thing is, the bigger your
| reliance on process, the more you need these people to swoop in
| and fill in the cracks, save the day when things go horribly
| wrong, and otherwise be the glue that keeps things running (or
| perhaps oil for the machine is more apt).
|
| This is a case of bad process. No process is perfect, but the
| whole point of process is so when things go wrong they don't go
| horribly wrong, and that you don't need rockstars to fill in
| the cracks. It should be making your rockstars faster because
| the stuff they need others to take care of gets done well.
| Unnecessary friction that slows people down is generally a sign
| of management mistaking paperwork for process.
| SpicyLemonZest wrote:
| Very often paperwork is the necessary process. I've seen
| multiple engineering teams who used to accept essentially any
| customer escalation, for example, until they found themselves
| essentially being DDoSed by poorly explained tickets filed at
| much too high of a priority. Now they have forms that
| customer-facing folks have to fill out explaining in detail
| what's going wrong, why an escalation is required, and naming
| the senior person who's accountable for the accuracy of that
| form.
|
| Is it slow and annoying to jump through these hoops? Without
| a doubt! I've also seen people on the other side of the
| process who are very frustrated that they can't just escalate
| when they _know_ devs would want to hear about it. But it's
| not acceptable for people to get woken up every week because
| the new support engineer filed a customer error as a global
| outage, and smart people tried and failed to put a stop to it
| through training. I don't know what the alternative could be.
| andai wrote:
| >They think artificial intelligence brings intelligence. It
| doesn't.
|
| What does it bring?
| NebulaStorm456 wrote:
| If your answer requires clustering and assembling disparate
| facts strewn about on the internet or in company data/documents,
| and reasoning over them, then LLMs can help with that. At least
| that's what I did when I used to answer questions on
| stackoverflow.
| zahlman wrote:
| You did read https://meta.stackoverflow.com/questions/421831
| , yes?
| NebulaStorm456 wrote:
| My point was that before AI, when I used to answer stackoverflow
| questions out of curiosity, I would manually search around on
| the internet to properly answer the question. This is exactly
| the process LLMs help with.
| andai wrote:
| >If you automate a stupid decision, you just make stupid
| decisions at light speed.
|
| What's the prompt for that one? ;)
| kace91 wrote:
| I'm like 99% sure that text is llm-written. "Mess/gold"
| comparisons, meta paragraph expressions like "here is the truth",
| "it's not this, it's that"...
| lhmiles wrote:
| Yeah
| jsrozner wrote:
| This is AI generated, which is annoying.
| ronbenton wrote:
| I have done general process automation work (usually designing
| new web-based tools) for 20 years now. The underlying idea has
| always applied, even before AI: if your process is ill-defined
| and/or nonsensical, trying to "automate" it isn't going to work
| out.
|
| I have seen a smattering of instances along the way where the act
| of defining requirements forced companies to define processes
| better. Usually, though, companies are unwilling to do this and
| instead will insist on adding flexibility to the automation
| tooling, to the point where the tool is of no help.
| Waterluvian wrote:
| I think this is what I'm running into. Other teams want my team
| to make tooling to simplify some data processing workflow. Nice
| UIs and such. But they can't and generally won't show me the
| written-down process for how it's actually done today. Or how
| they're going to do it manually as they develop their side of
| things.
|
| Which leads us to turning into a different team: we have to go
| figure out what the process engineering even is, which means
| becoming a bigger expert than they are at the process they want
| us to make tooling for.
| DenisM wrote:
| > There is no such thing as an AI strategy. There is only
| Business Process Optimization (BPO).
|
| Here's your AI strategy: every few months re-evaluate agent
| fitness and start switching over. Remember backstops and
| canaries.
|
| Details:
|
| Businesses usually assign responsibilities to somewhat flaky
| employees, with understanding there will be a percentage of
| errors. This works ok so long as errors don't fluctuate wildly
| and don't amplify through the system. Most business processes are
| a mess and that works ok.
|
| Once agents become less flaky and there are enough backstops to
| contain occasional damage, businesses will start switching.
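| The canary-and-backstop idea above can be sketched as a toy
| harness. All names and thresholds here are invented for
| illustration, not a prescription: route a small fraction of
| tasks to the agent, keep the rest on the human path, and halt
| the rollout if the agent's observed error rate exceeds a
| backstop threshold:

```python
import random

def run_canary(tasks, agent, human, canary_frac=0.05,
               backstop_error_rate=0.10, min_sample=20, seed=0):
    """Route a small canary fraction of tasks to the agent and the
    rest to the existing human path. Halt the canary (the backstop)
    once the agent's observed error rate exceeds the threshold."""
    rng = random.Random(seed)
    agent_tasks, errors = 0, 0
    for task in tasks:
        if rng.random() < canary_frac:
            agent_tasks += 1
            if not agent(task):  # agent returns True on success
                errors += 1
            # Backstop: stop routing to the agent once it proves
            # flaky, but only after a minimum sample size.
            if (agent_tasks >= min_sample
                    and errors / agent_tasks > backstop_error_rate):
                return {"halted": True, "agent_tasks": agent_tasks,
                        "errors": errors}
        else:
            human(task)
    return {"halted": False, "agent_tasks": agent_tasks,
            "errors": errors}
```

| Re-evaluating "every few months" then just means re-running this
| with a wider canary_frac as the error rate earns it.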
| zbentley wrote:
| Bold presumption that businesses have any useful way to
| evaluate agent fitness. Hell, they struggle to evaluate _human_
| fitness and do basic things like plan and execute OKRs. What
| makes you think they'd be any good at continuous quality
| improvement on entities that can't correctly explain their own
| reasoning?
| siliconc0w wrote:
| The other nuance is that by definition all of these undocumented
| workflows are out-of-distribution for the model - so it won't be
| particularly great at them.
| nishantjani10 wrote:
| "Processes that rely on unstructured data are usually
| unstructured processes." - that's a brilliant take, seriously!
| lhmiles wrote:
| This is terrible advice. Legibilizing everything illegible is a
| fast way to ruin everything, especially culture.
| cyrusradfar wrote:
| I agree with the article's core point that placement matters.
|
| The useful framing is not "where can we bolt on AI" but "what
| does the system look like if AI is a first-class component." That
| requires mapping the workflow, identifying the decision points,
| and separating deterministic steps from judgment calls.
|
| Most teams try to apply AI inside existing org boundaries.
|
| That assumes the current structure is optimal. The better
| approach is to model the business as a set of subsystems, pick
| the one with the highest operational cost or latency, and
| simulate what happens if that subsystem becomes an order of
| magnitude more efficient. The rest of the architecture tends to
| reconfigure from that starting point.
|
| For example, in insurance (just an illustration, not a claim
| about any specific firm), underwriting, sales, and support
| dominate cost. If underwriting throughput improves by an order of
| magnitude, the downstream constraints shift: pricing cycles
| compress, risk models refresh faster, and the human-in-the-loop
| boundary moves. That's the level where AI changes the system
| shape and acts beyond the local workflow.
|
| This lens seems more productive than incremental insertion into
| existing silos.
| ChrisMarshallNY wrote:
| I feel like this article hits the nail on the head.
|
| I have learned to be careful of "too much process", but I find
| that the need for structure never disappears.
|
| AI deals well with structure. You can adjust your structure to
| accept less-structured data, but you still need the structure,
| for after that.
|
| Just maybe not _too much_ structure[0].
|
| [0] https://littlegreenviper.com/various/concrete-galoshes/
| BinaryIgor wrote:
| "There is no such thing as an AI strategy.
|
| There is only Business Process Optimization (BPO)."
|
| Exactly, that's the fundamental truth. The shiny tool of the day
| doesn't change it at all.
| wallfacer wrote:
| One core assertion seems less true every day:
|
| > The intelligence (knowing what a "risk" actually means) still
| requires human governance.
|
| Less and less. Why trust a human who has considered 5,000
| assessments to understand "risks" and process the next 50
| better than an LLM that has internalized untold millions of
| assessments?
| scrubs wrote:
| Totally agree with this post. I've had many hours of convos with
| a program manager about his project (an enterprise security
| master for trading) where AI has been misapplied and made the
| mess a bigger mess - just one current example from my own
| backyard.
| idopmstuff wrote:
| I have always found writing documentation to be incredibly
| helpful for clarifying my thinking. It prevents me from doing
| mental hand-waving around details, and oftentimes writing down a
| process that I have done a thousand times is the thing that makes
| me realize how I can cut steps or improve it.
|
| I'm now in the process of trying to hand off chunks of the work I
| do to run my business to AI (both to save time but also just as
| my very broad, practical eval). It really is all about
| documentation. I buy small e-commerce brands, and they're simple
| enough that current SOTA models have more than enough
| intelligence to take a first pass at listings + financials to
| determine whether I should take a call with the seller. To make
| that work, though, I've got a prompt that's currently at six
| pages, which is just every single thing I look at when
| evaluating a business, codified.
|
| Using that has really convinced me that people are overrating the
| importance of intelligence in LLMs in terms of driving real
| economic value. Most work is like my evaluations - it requires
| intelligence, but there's a ceiling to how much you need. Someone
| with an IQ of 150 wouldn't do any better at this task than
| someone with an IQ of 100.
|
| Instead, I think what's going to drive actual change is the
| scaffolding that lets LLMs take on increasing numbers of tasks.
| My big issue right now is that I have to go to the listing page
| for a business that's for sale, screenshot the page, download the
| files, upload that all to ChatGPT and then give it the prompt.
| I'm still waiting for a web browsing agent that can handle all of
| that for me, so I can automate the full flow and just get an
| analysis of each listing sent to me without having to do
| anything.
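| The manual flow described above (saved listing text + financials
| + the long checklist prompt, answered in one model call) is easy
| to sketch once a browsing agent can fetch the inputs. Everything
| below - the function name, the payload shape - is a hypothetical
| illustration, not the commenter's actual tooling:

```python
def build_evaluation_request(listing_text, financials,
                             checklist_prompt):
    """Assemble a chat-style payload: the long evaluation prompt
    as the system message, the scraped listing and financials as
    the user message. The result can go to any chat-completion
    API."""
    user_msg = (
        f"LISTING:\n{listing_text}\n\n"
        f"FINANCIALS:\n{financials}\n\n"
        "Should I take a call with the seller? "
        "Answer yes/no with reasons."
    )
    return [
        {"role": "system", "content": checklist_prompt},
        {"role": "user", "content": user_msg},
    ]
```

| A browsing agent (or a plain scraper) would fill in
| listing_text and financials; the six-page checklist goes in
| unchanged as checklist_prompt, so the "intelligence ceiling"
| lives entirely in that document rather than in the model.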
___________________________________________________________________
(page generated 2025-11-29 23:01 UTC)