[HN Gopher] The costs of microservices (2020)
       ___________________________________________________________________
        
       The costs of microservices (2020)
        
       Author : kiyanwang
       Score  : 272 points
       Date   : 2023-10-30 14:33 UTC (8 hours ago)
        
 (HTM) web link (robertovitillo.com)
 (TXT) w3m dump (robertovitillo.com)
        
       | jihadjihad wrote:
        | Microservices:
        | 
        | grug wonder why big brain take hardest problem, factoring
        | system correctly, and introduce network call too
        | 
        | seem very confusing to grug
       | 
       | https://grugbrain.dev/#grug-on-microservices
        
         | sethammons wrote:
         | grug not experience large teams stepping on each other's
         | domains and data models, locking in a given implementation and
          | requiring large organizational efforts to get features out
         | at the team level. Team velocity is empowered via microservices
         | and controlling their own data stores.
         | 
         | "We want to modernize how we access our FOO_TABLE for
         | SCALE_REASONS by moving it to DynamoDB out of MySQL -
         | unfortunately, 32 of our 59 teams are directly accessing the
         | FOO_TABLE or directly accessing private methods on our classes.
         | Due to competing priorities, those teams cannot do the work to
         | move to using our FOO_SERVICE and they can't change their query
         | method to use a sharded table. To scale our FOO_TABLE will now
         | be a multi-quarter effort providing the ability for teams to
         | slow roll their update. After a year or two, we should be able
         | to retire the old method that is on fire right now. In the
         | meanwhile, enjoy oncall."
         | 
          | Compare this to a microservice: Team realizes their table won't
         | scale, but their data is provided via API. They plan and
         | execute the migration next sprint. Users of the API report that
         | it is now much faster.
        
           | jjtheblunt wrote:
           | > Team velocity is empowered via microservices and
           | controlling their own data stores.
           | 
           | Bologna. Choosing clear abstractions is an enabler of focus,
           | but that doesn't necessarily imply those abstractions are a
           | network call away.
        
             | sethammons wrote:
             | our team of 300 - we _can't_ enforce the clear
             | abstractions. New dev gets hired, team feels pressured to
             | deliver despite leadership saying to prioritize quality,
             | they are not aware of all the access controls, they push a
             | PR, it gets merged.
             | 
             | We have an org wide push to get more linting and more
             | checks in place. The damage is done and now we have a
             | multi-quarter effort to re-organize all our code.
             | 
             | This _can_ be enforced via well designed modules. I've just
             | not seen that succeed. Anywhere. Microservices are a pain
             | for smaller teams and you have to have CI and observability
              | and your pains shift and are different. But for stepping on
              | each other? I've found microservices to be a superpower for
             | velocity in these cases. Can microservices be a shitshow?
             | Absolutely, esp. when they share data stores or have
             | circular dependencies. They also allow teams to be
             | uncoupled assuming they don't break their API.
        
               | ChrisMarshallNY wrote:
               | _> leadership saying to prioritize quality_
               | 
               | Well, that's different.
               | 
               | My experience is that "leadership" often finds quality to
               | be expensive and unnecessary overhead.
               | 
               | That's one reason that I stayed at a company that took
                | Quality seriously. It introduced many issues that folks
                | hereabouts would find unbearable, but they consistently
               | shipped some of the highest-Quality (and expensive) kit
               | in the world.
               | 
               | Quality is not cheap, and it is not easy. It is also not
               | really the path to riches, so it is often actively
               | discouraged by managers.
        
               | nicoburns wrote:
               | > They also allow teams to be uncoupled assuming they
               | don't break their API.
               | 
               | Presumably you would still need tooling to enforce teams
               | not breaking their API, and also to prevent people just
               | modifying services to expose private internals that
               | should not be part of the public API?
        
               | jacquesm wrote:
               | You can't really retrofit culture and behavior, you can
               | only very gradually move towards a particular goal and
               | usually that's a multi-year effort, if it can be done at
               | all.
        
               | shadowgovt wrote:
               | If you want to change that model, there's three things
               | that need to happen:
               | 
               | 1) engineering needs a reason to embrace abstraction.
               | Because the only thing that stops a new cowboy engineer
               | is their peers explaining to them in what ways this isn't
                | the Wild West. One can assume that if rules would benefit
                | the team, they'd be following them already, so why don't
                | they? Maybe they perceive feature output velocity to be too
                | high to risk changing methods. Maybe decisionmaking power
               | rests in the hands of one system architect who is holding
               | the whole machine in their head so things that look
               | complex to others seem simple to them (in that case, the
               | team needs to spread around peer review signoff
                | responsibilities on purpose, so one engineer _can't_ be
               | the decisionmaker and the architecture itself _must_
               | self-describe). Maybe (this is how I see it usually
               | happen) they were a three-person startup and complexity
               | crept up on them like a boiling frog. Whatever the
                | reason, if you're gonna convince them otherwise,
               | someone's gonna have to generate hard data on how
               | changing the abstraction could make their jobs easier.
               | 
               | 2) If management has no idea what "prioritize quality"
               | means (meaning no metrics by which to measure it and no
               | real grasp of the art of software engineering), the
               | engineers will interpret buzzwords as noise and route
               | around them. Management needs to give actionable goals
               | _other_ than  "release feature X by Y date" if they want
               | to change engineering culture. That can take many forms
               | (I've seen _rewarded_ fixit weeks and externally-reported
               | issue burndown charts as two good examples).
               | 
               | 3) Engineering leadership needs time and bandwidth to do
               | training so they can see outside their prairie-dog hole
               | over to how other teams solve problems. Otherwise, they
               | get locked into the solutions they know, and the only way
               | new approaches ever enter the team is by hires.
               | 
               | And the key thing is: microservices may not _be_ the tool
               | for the job when all is said and done. This is an
               | approach to give your engineering team ways to discover
               | the right patterns, not to trick them into doing
               | microservices. Your engineering team, at the end of the
                | day, is still best equipped to know the
               | machinery of the product and how to shape it.
        
               | Pet_Ant wrote:
               | > we _can't_ enforce the clear abstractions
               | 
                | Really, you can't? Then I struggle to see how you'll get
               | anything else right. I've done it by using separate build
               | scripts. That way only the interfaces and domain objects
               | are exposed in the libraries. Now you lock down at the
               | repository level access to each sub-project to the team
               | working on it. There you go: modularity, boundaries, all
               | without network hops.
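                | 
                | A rough sketch of that shape in a Rust workspace (names
                | are made up, not anyone's real code): the crate exposes
                | only a trait plus domain types, and keeps the storage
                | module private, so other teams' crates cannot even
                | compile against the internals.
                | 
                |     // foo_service/src/lib.rs (hypothetical crate)
                |     pub mod domain {
                |         pub struct Foo {
                |             pub id: u64,
                |             pub name: String,
                |         }
                |     }
                | 
                |     // The only surface other teams may depend on.
                |     pub trait FooService {
                |         fn get(&self, id: u64) -> Option<domain::Foo>;
                |     }
                | 
                |     // Private: invisible outside this crate, so nobody
                |     // can couple to the table layout by accident.
                |     mod storage {
                |         pub(crate) struct FooTable;
                |     }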
        
               | sethammons wrote:
               | sure - if you do that from the start. Most don't. The
               | codebase organically grows and then lines are blurred and
               | then you have to come in and refactor. When this refactor
               | affects several teams, it gets harder in combinatorial
               | fashion.
               | 
               | With an HTTP API boundary, you don't get to reach into my
               | code - it is literally impossible.
        
               | ndriscoll wrote:
               | But the work required to factor things out behind an HTTP
               | boundary is a superset of the work required to factor
               | things out into a module as GP describes. So if you
               | _were_ going to do microservices, you could do that same
               | factoring and then just stop when you get to the part
                | where you'd add in networking and deployment scripts.
        
               | wg0 wrote:
                | How is the whole open source ecosystem working fine and
                | delivering software, with projects depending upon each
                | other for decades, all the while never being in the same
                | room and having no microservices?
               | 
               | I mean take your pick, anything open source be it desktop
               | or web has a huge and deep dependency tree all the way
               | down to libc.
               | 
               | Just wondering.
               | 
               | EDIT: Typos and desktop.
        
               | ahoka wrote:
               | Someone does the integration work for you, that's why it
               | works. Try running some distro that just grabs the latest
               | upstream versions and see how often things break.
        
               | wg0 wrote:
               | And that's my point.
        
               | mardifoufs wrote:
               | Open source projects rarely involve live services, or
               | providing SaaS. In those situations I think microservices
               | are much more helpful
        
               | wg0 wrote:
                | But microservices have nothing to do with scaling or
                | deployment. The main selling point is that they scale
                | across teams, etc.?
        
               | mardifoufs wrote:
               | I think it's a bit of both IME. It helps a lot with
               | deployment across teams.
        
           | simonw wrote:
            | > Team realizes their table won't scale, but their data is
           | provided via API. They plan and execute the migration next
           | sprint.
           | 
           | ... followed by howls of anguish from the rest of the
           | business when it turns out they were relying on reports
           | generated from a data warehouse which incorporated a copy of
           | that MySQL database and was being populated by an
           | undocumented, not-in-version-control cron script running on a
           | PC under a long-departed team member's desk.
           | 
           | (I'm not saying this is good, but it's not an unlikely
           | scenario.)
        
             | sethammons wrote:
             | another reason why you provide data only over API - don't
             | reach into my tables and lock me into an implementation.
        
               | simonw wrote:
               | An approach I like better than "only access my data via
               | API" is this:
               | 
               | The team that maintains the service is also responsible
               | for how that service is represented in the data
               | warehouse.
               | 
               | The data warehouse tables - effectively denormalized
               | copies of the data that the service stores - are treated
               | as another API contract - they are clearly documented and
               | tested as such.
               | 
               | If the team refactors, they also update the scripts that
               | populate the data warehouse.
               | 
               | If that results in specific columns etc becoming invalid
               | they document that in their release notes, and ideally
               | notify other affected teams.
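                | 
                | As a hypothetical sketch of what "treated as another API
                | contract" can mean in code (made-up names): the exported
                | row is its own type and the only path into the warehouse
                | goes through an explicit mapping, so refactoring the
                | internal model forces the team to revisit the contract
                | deliberately rather than by accident.
                | 
                |     // Internal model - free to refactor at will.
                |     struct Foo {
                |         id: u64,
                |         name: String,
                |         owner_id: u64,
                |     }
                | 
                |     // Documented warehouse contract: one field per
                |     // exported column, versioned in release notes.
                |     struct FooWarehouseRow {
                |         foo_id: u64,
                |         foo_name: String,
                |         owner_id: u64,
                |     }
                | 
                |     fn to_warehouse_row(foo: &Foo) -> FooWarehouseRow {
                |         FooWarehouseRow {
                |             foo_id: foo.id,
                |             foo_name: foo.name.clone(),
                |             owner_id: foo.owner_id,
                |         }
                |     }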
        
               | hermanradtke wrote:
               | This same thing can be applied to contracts when firing
               | events, etc. I point people to
               | https://engineering.linkedin.com/distributed-systems/log-
               | wha... and use the same approach to ownership.
        
               | simonw wrote:
               | Yeah, having a documented stream of published events in
               | Kafka is a similar API contract the team can be
               | responsible for - it might even double as the channel
               | through which the data warehouse is populated.
        
             | mason55 wrote:
             | > _... followed by howls of anguish from the rest of the
             | business when it turns out they were relying on reports
             | generated from a data warehouse which incorporated a copy
             | of that MySQL database and was being populated by an
             | undocumented, not-in-version-control cron script running on
              | a PC under a long-departed team member's desk._
             | 
              | Once you get to this point, there's no path forward. Either
              | you have to make some breaking changes or your product is
              | calcified at that point.
             | 
             | If this is a real concern then you should be asking what
             | you can do to keep from getting into that state, and the
             | answer is encapsulating services in defined
             | interfaces/boundaries that are small enough that the team
             | understands everything going on in the critical database
             | layer.
        
             | pjc50 wrote:
             | This was why the "if you breach the service boundary you're
             | fired" Amazon memo was issued.
        
             | mjr00 wrote:
             | > they were relying on reports generated from a data
             | warehouse which incorporated a copy of that MySQL database
             | and was being populated by an undocumented, not-in-version-
             | control cron script running on a PC under a long-departed
             | team member's desk.
             | 
             | This definitely happens but at some point someone with
             | authority needs to show technical leadership and say "you
             | cannot do this no matter how desperately you need those
             | reports." If you don't have anyone in your org who can do
             | that, you're screwed regardless.
        
               | simonw wrote:
               | A lot of organizations are screwed.
        
               | mjr00 wrote:
               | I do agree with that. Microservices are _not_ a good idea
               | whatsoever for organizations with weak senior technical
               | people. Which is probably 90%+ of businesses.
        
           | shadowgovt wrote:
           | As with so many software solutions, the success of
           | microservices is predicated upon having sufficient
           | prognostication about how the system will be used to
           | recognize where the cut-points are.
           | 
           | When I hear success stories like that, I have to ask "Is
           | there some inherent benefit to the abstraction or did you get
           | lucky in picking your cleave-points?"
        
             | esafak wrote:
             | That comes with experience, but you can let time be the
             | judge if you factor your monolith early enough. If the
             | factorization proves stable, proceed with carving it into
             | microservices.
        
           | charcircuit wrote:
           | Other teams can access your data directly even if it's in
           | your own separate database.
        
           | sally_glance wrote:
           | This applies to performance optimizations which leave the
           | interface untouched, but there are other scenarios, for
           | example:
           | 
           | - Performance optimizations which can't be realized without
           | changing the interface model. For example, FOO_TABLE should
           | actually be BAR and BAZ with different update patterns to
           | allow for efficient caching and querying.
           | 
            | - Domain model updates, adding/updating/removing
            | properties or entities.
           | 
           | This kind of update will still require the 32 consumers to
           | upgrade. The API-based approach has benefits in terms of the
            | migration process and backwards-compatibility though (an HTTP
           | API is much easier to version than a DB schema, although you
           | can also do this on the DB level by only allowing queries on
           | views).
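            | 
            | A toy sketch of that versioning point (hypothetical names,
            | following the FOO/BAR/BAZ example above): the storage model
            | is split into BAR and BAZ, while the old FOO shape survives
            | only as the v1 response assembled from both, so the 32
            | consumers can migrate on their own schedule.
            | 
            |     // New internal model after the split.
            |     struct Bar { id: u64, name: String }
            |     struct Baz { bar_id: u64, status: String }
            | 
            |     // Old shape, kept alive only as the /v1 payload.
            |     struct FooV1 { id: u64, name: String, status: String }
            | 
            |     fn foo_v1(bar: &Bar, baz: &Baz) -> FooV1 {
            |         FooV1 {
            |             id: bar.id,
            |             name: bar.name.clone(),
            |             status: baz.status.clone(),
            |         }
            |     }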
        
           | crabbone wrote:
           | Found the author of CISCO / Java Spring documentation.
           | 
            | I honestly expected people to use passive-aggressive corpo-
            | slang only for work or ironically. But this reads as
            | intentionally obfuscated.
           | 
            | But, to answer the substance: you for some reason assumed
            | that whoever designed the first solution was an idiot and
            | whoever designed the second was at least clairvoyant.
           | 
           | The problem you describe isn't a result of inevitable design
           | decisions. You just described a situation where someone
           | screwed up doing something and didn't screw up doing
           | something else. And that led you to believe that whatever
           | that something else is, it's easier to design.
           | 
           | The reality of the situation is, unfortunately, the reverse.
            | Upgrading microservices is much harder than replacing
            | components in monolithic systems, because in a monolith it's
            | easier to discover all users of a feature. There are, in
            | general, fewer components in monolithic systems, so fewer
            | things will need to change. Deploying a monolithic system is
            | also much more likely to surface problems created by an
            | incorrectly implemented upgrade.
           | 
           | In my experience of dealing with both worlds, microservices
           | tend to create a maze-like system where nobody can be sure if
           | any change will not adversely affect some other part of the
           | system due to the distributed and highly fragmented nature of
           | such systems. So, your ideas about upgrades are
           | uncorroborated by practice. If you want to be able to update
           | with more ease, you should choose a smaller, more cohesive
           | system.
        
             | perrylaj wrote:
             | I think I view the situation in a similar fashion as you.
             | There's absolutely nothing preventing a well architected
             | modular monolith from establishing domain/module-specific
             | persistence that is accessible only through APIs and
             | connectable only through the owning domain/module. To
             | accomplish that requires a decent application
             | designer/architect, and yes, it needs to be considered
             | ahead of time, but it's 100% doable and scalable across
             | teams if needed.
             | 
             | There are definitely reasons why a microservice
             | architecture could make sense for an organization, but "we
             | don't have good application engineers/architects" should
             | not be one of them. Going to a distributed microservice
             | model because your teams aren't good enough to manage a
             | modular monolith sounds like a recipe for disaster.
        
           | nijave wrote:
           | Microservices don't magically fix shared schema headaches.
           | Getting abstraction and APIs right is the solution regardless
           | of whether it's in-memory or over the network.
           | 
           | Instead of microservices, you could add a static analysis
           | build step that checks if code in packages is calling private
           | or protected interfaces in different packages. That would
           | also help enforce service boundaries without introducing the
           | network as the boundary.
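            | 
            | A bare-bones sketch of such a check (assuming a hypothetical
            | layout of packages/<name>/src, with non-public code kept in
            | an `internal` module); a real linter would lean on the
            | language's own tooling, this only shows the shape of the
            | build step:
            | 
            |     use std::fs;
            | 
            |     // Flag `use` lines that reach into another package's
            |     // `internal` module.
            |     fn check(pkg: &str, src: &str) -> Vec<String> {
            |         let mut hits = Vec::new();
            |         for (n, line) in src.lines().enumerate() {
            |             let l = line.trim_start();
            |             if l.starts_with("use ")
            |                 && l.contains("::internal")
            |                 && !l.contains(&format!("{pkg}::internal"))
            |             {
            |                 hits.push(format!("{pkg}:{}: {}", n + 1, l));
            |             }
            |         }
            |         hits
            |     }
            | 
            |     fn main() {
            |         let mut bad = Vec::new();
            |         for pkg_dir in fs::read_dir("packages").unwrap() {
            |             let dir = pkg_dir.unwrap().path();
            |             let pkg = dir.file_name().unwrap()
            |                 .to_string_lossy().into_owned();
            |             for f in fs::read_dir(dir.join("src")).unwrap() {
            |                 let path = f.unwrap().path();
            |                 let src = fs::read_to_string(&path).unwrap();
            |                 bad.extend(check(&pkg, &src));
            |             }
            |         }
            |         for b in &bad {
            |             eprintln!("{b}");
            |         }
            |         if !bad.is_empty() {
            |             std::process::exit(1); // fail the build
            |         }
            |     }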
        
         | agumonkey wrote:
         | new role: lead grug
        
         | insanitybit wrote:
         | Network calls are a powerful thing to introduce. It means that
         | you have an impassable boundary, one that is actually
         | physically enforced - your two services _have_ to treat each
         | other as if they are isolated.
         | 
         | Isolation is not anything to scoff at, it's one of the most
         | powerful features you can encode into your software. Isolation
         | can improve performance, it can create fault boundaries, it can
         | provide security boundaries, etc.
         | 
         | This is the same foundational concept behind the actor model -
         | instead of two components being able to share and mutate one
         | another's memory, you have two isolated systems (actors,
         | microservices) that can only communicate over a defined
         | protocol.
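          | 
          | A tiny in-process illustration of that idea using plain std
          | channels (nothing microservice-specific, just the shape): the
          | two sides share no memory and only exchange messages over a
          | defined protocol.
          | 
          |     use std::sync::mpsc;
          |     use std::thread;
          | 
          |     // The entire "protocol" between the two sides.
          |     enum Msg {
          |         Add(u64),
          |         Get(mpsc::Sender<u64>),
          |     }
          | 
          |     fn main() {
          |         let (tx, rx) = mpsc::channel::<Msg>();
          | 
          |         // "Actor": owns state, reachable only by message.
          |         thread::spawn(move || {
          |             let mut total = 0;
          |             for msg in rx {
          |                 match msg {
          |                     Msg::Add(n) => total += n,
          |                     Msg::Get(reply) => {
          |                         let _ = reply.send(total);
          |                     }
          |                 }
          |             }
          |         });
          | 
          |         tx.send(Msg::Add(2)).unwrap();
          |         tx.send(Msg::Add(3)).unwrap();
          |         let (rtx, rrx) = mpsc::channel();
          |         tx.send(Msg::Get(rtx)).unwrap();
          |         assert_eq!(rrx.recv().unwrap(), 5);
          |     }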
        
           | nicoburns wrote:
           | > Network calls are a powerful thing to introduce. It means
           | that you have an impassable boundary, one that is actually
           | physically enforced - your two services have to treat each
           | other as if they are isolated.
           | 
            | That is not true at all. I've seen "microservice" setups
           | where one microservice depends on the state within another
           | microservice. And even cases where service A calls into
           | service B which calls back into service A, relying on the
           | state from the initial call being present.
           | 
           | Isolation is good, but microservices are neither necessary
           | nor sufficient to enforce it.
        
             | insanitybit wrote:
             | Well, I'd say you've seen SoA setups that do that, maybe.
             | But those don't sound like microservices :) Perhaps that's
             | not a strong point though.
             | 
             | Let me be a bit clearer on my point because I was wrong to
             | say that you have to treat a service as being totally
              | isolated; what I should have said is that they _are_ isolated,
             | whether you treat them that way or not. There is a
             | _physical_ boundary between two computers. You can try to
             | ignore that boundary, you can implement distributed
             | transactions, etc, but the boundary is there - if you do
              | the extra work to try to pretend it isn't, that's a lot of
             | extra work to do the wrong thing.
             | 
              | Concretely, you can write:
              | 
              |     rpc_call(&mut my_state)
             | 
             | But under the hood what _has_ to happen, physically, is
             | that your state has to be copied to the other service, the
             | service can return a new state (or an update), and the
             | caller can then mutate the state locally. There is no way
             | for you to actually transfer a mutable reference to your
             | own memory to another computer (and a service should be
             | treated as if it may be on another computer, even if it is
             | colocated) without obscene shenanigans. You can try to
             | abstract around that isolation to give the appearance of
             | shared mutable state but it is just an abstraction, it is
             | effectively impossible to implement that directly.
             | 
             | But shared mutable state is _trivial_ without the process
              | boundary. It's just... every function call. Any module can
             | take a mutable pointer and modify it. And that's great for
             | lots of things, of course, you give up isolation sometimes
             | when you need to.
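              | 
              | Roughly, the difference looks like this (a sketch, not any
              | particular RPC framework; names are invented):
              | 
              |     #[derive(Clone)]
              |     struct State { counter: u64 }
              | 
              |     // In-process: sharing and mutating memory is trivial.
              |     fn bump_local(state: &mut State) {
              |         state.counter += 1;
              |     }
              | 
              |     // "Remote": the best you can do is send a copy and
              |     // apply whatever comes back.
              |     fn bump_remote(state: &State) -> State {
              |         let mut copy = state.clone(); // sent over the wire
              |         copy.counter += 1; // the work happens elsewhere
              |         copy // the caller gets a new state back
              |     }
              | 
              |     fn main() {
              |         let mut s = State { counter: 0 };
              |         bump_local(&mut s);
              |         s = bump_remote(&s); // caller applies the update
              |         assert_eq!(s.counter, 2);
              |     }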
        
               | nicoburns wrote:
               | Hmm... I guess I very rarely use shared mutable state in
               | web services anyway. The last job I worked at, _all_
               | state was either in the database (effectively another
               | service anyway) or stored on the client (e.g. auth
               | tokens). So anything that was mutating a function
               | parameter would already be subject to extra scrutiny
               | during code review (Why is it doing that? What scope  /
               | module boundary is the mutation limited to?).
        
               | insanitybit wrote:
               | Shared mutable state also goes beyond a mutable
               | reference. If you call a function and that function
               | throws an exception you are tying the caller/callee's
               | states together. In a SoA the callee machine can
               | literally blow up and your caller state is preserved.
               | 
               | If your web service is generally low-state and these
               | problems are manageable for the complexity scale you're
               | solving for, microservices aren't really something to
               | even consider - I mean, you basically have a microservice
               | already, it's solving a single problem within a bounded
               | context, give or take. It's just... one service and one
               | context.
        
           | manicennui wrote:
           | It is trivial to tightly couple two services. They don't have
           | to treat each other as isolated at all. The same people who
           | create tightly coupled code within a single service are
           | likely going to create tightly coupled services.
        
             | insanitybit wrote:
             | I think I've covered this in another comment just below the
             | one you've replied to.
        
           | crabbone wrote:
           | The reality of this situation is that the tool everyone is
           | using to build microservices is Kubernetes. It imposes a huge
            | tax on communication between services. So your aspirations of
            | improving performance fly out of the window.
           | 
           | On top of this, you need to consider that most of the
           | software you are going to write will be based on existing
           | components. Many of these have no desire to communicate over
           | network, and your micro- or w/e size services will have to
           | cave in to their demands. Simple example: want to use Docker?
           | -- say hello to UNIX sockets. Other components may require
           | communication through shared memory, filesystem, and so on.
           | 
           | Finally, isolation is not a feature of microservices,
           | especially if the emphasis is on _micro_. You have to be able
           | to control the size and where you want to draw the boundary.
           | If you committed upfront to having your units be as small as
           | possible -- well, you might have function-level isolation,
            | but you won't have class- or module- or program-level
            | isolation, to put it in more understandable terms. This is
            | where your comparison between the actor model and
            | microservices breaks down: the first doesn't prescribe the size.
        
             | insanitybit wrote:
             | Microservices definitely predate k8s, but sure, lots of
             | people use k8s. I don't know what penalty you're referring
             | to. There is a minor impact on network performance for
             | containers measured in microseconds under some
             | configurations. Maybe Kubernetes makes that worse somehow?
             | I think it does some proxying stuff so you probably pay for
             | a local hop to something like Envoy. If Envoy is on your
             | system and you're not double-wrapping your TLS the
             | communication with it should stay entirely in the kernel,
             | afaik.
             | 
             | http://domino.research.ibm.com/library/cyberdig.nsf/papers/
             | 0...
             | 
             | In no way is this throwing out performance. It's sort of
             | like saying that Kafka is in Java so you're throwing away
             | performance when you use it, when there are massive
             | performance benefits if you leverage partition isolation.
             | 
             | > Many of these have no desire to communicate over network,
             | and your micro- or w/e size services will have to cave in
             | to their demands. Simple example: want to use Docker? --
             | say hello to UNIX sockets. Other components may require
             | communication through shared memory, filesystem, and so on.
             | 
             | I'm not sure what you're referring to. Why would that
             | matter at all? I mean, ignoring the fact that you can
             | easily talk to Docker over a network.
             | 
             | > Finally, isolation is not a feature of microservices,
             | 
             | Isolation is a feature of any process based architecture,
             | whether it's SoA, actors, or microservices.
             | 
             | > well, you might have function-level isolation, but you
             | won't have class- or module- or program-level isolation,
             | 
             | You get isolation at the service layer. I don't see why
             | that would be contentious, it's obvious. If you're saying
             | you want more isolation, ok, you can write your code to do
             | that if you'd like.
             | 
             | > first doesn't prescribe the size.
             | 
             | Yep, the actor model is very low level. Microservice
             | architecture is far more prescriptive. It's one of the
             | reasons why I think Microservice architecture has been far
             | more successful than actor based systems.
        
         | manicennui wrote:
         | I just wanted to note that static typing isn't required for
         | autocomplete. JetBrains has IDEs for languages like Ruby and
         | Python that can do it. If you open the REPL in a recent version
         | of Ruby you get much of what you expect from an IDE with a
         | statically typed language (with regards to autocomplete and
         | syntax checking).
        
           | manicennui wrote:
           | Also, DRY is not about repeated code. This drives me crazy.
           | Ruby developers love to make code worse by trying to "DRY it
           | up".
        
             | Capricorn2481 wrote:
              | What are you replying to? The article is about how DRY
              | shouldn't be over-applied.
              | 
              | DRY is literally "Don't Repeat Yourself" and is definitely
              | pushed for cleaning up redundant code, so it's not
              | unreasonable for people to think that's what it's about.
              | It's only recently that people have pointed out that there's
              | a difference between duplicated code and repeated code.
        
               | manicennui wrote:
               | Redundant code is not code that looks the same. It's only
               | reasonable for people to believe it is about code that
               | looks the same if they have never bothered to learn what
               | it means.
        
           | 998244353 wrote:
           | You are correct, but static typing does make it a lot easier.
           | Working with Rider feels like working with an IDE that fully
           | understands the code, at least structurally. Working with
           | PyCharm feels like working with an IDE that makes intelligent
           | guesses.
        
             | manicennui wrote:
             | The REPL in current versions of Ruby is probably a better
             | example of how it should be done. Because it is actually
             | running the code it has much better information.
             | 
             | https://ruby-doc.org/core-3.1.0/NEWS_md.html#label-
             | IRB+Autoc...
        
       | nevinera wrote:
       | The right time to extract something into a separate service is
       | when there's a problem that you can't tractably solve without
       | doing so.
       | 
       | Increasing architectural complexity to enforce boundaries is
       | never a solution to a lack of organizational discipline, but
       | midsize tech companies _incessantly_ treat it like one. If you're
       | having trouble because your domains lack good boundaries, then
       | extracting services _is not going to go well_.
        
         | avelis wrote:
          | Sounds like a SISP (solution in search of a problem). Or
          | throwing a solution at a problem without understanding its root
          | issue.
        
         | jeffbee wrote:
         | I find it odd that there is this widespread meme--on HN, not in
         | the industry--that microservices are never justified. I think
         | everyone recognizes that it makes sense that domain name
         | resolution is performed by an external service, and very few
         | people are out there integrating a recursive DNS resolver and
         | cache into their monolith. And yet, this long-standing division
         | of responsibility never seems to count as an example.
        
           | mjr00 wrote:
           | > I find it odd that there is this widespread meme--on HN,
           | not in the industry--that microservices are never justified.
           | 
           | There's a few things in play IMO.
           | 
           | One is lack of definition -- what's a "microservice" anyhow?
           | Netflix popularized the idea of microservices literally being
           | a few hundred lines of code maintained by a single developer,
           | and some people believe that's what a microservice is. Others
           | are more lax and see microservices as being maintained by
           | small (4-10 person) development teams.
           | 
           | Another is that most people have not worked at a place where
           | microservices were done well, because they were implemented
           | by CTOs and "software architects" with no experience at
           | companies with 10 developers. There are a lot of problems
           | that come from doing microservices poorly, particularly
           | around building distributed monoliths and operational
           | overhead. It's definitely preferable to have a poorly-built
           | monolith than poorly-built microservice architectures.
           | 
           | I've been at 4 companies that did microservices (in my
           | definition, which is essentially one service per dev team).
           | Three were a great development experience and dev/deploy
           | velocity was excellent. One was a total clusterfuck.
        
             | insanitybit wrote:
             | It doesn't lack a definition, there's lots of people
             | talking about this. In general you'll find something like
             | "a small service that solves one problem within a single
             | bounded context".
             | 
             | > It's definitely preferable to have a poorly-built
             | monolith than poorly-built microservice architectures.
             | 
             | I don't know about "definitely" at all. Having worked with
             | some horrible monoliths, I really don't think I agree.
             | Microservices can be done poorly but at minimum there's a
             | fundamental isolation of components. If you don't have any
             | isolation of components it was never even close to
             | microservices/SoA, at which point, is it really a fair
             | criticism?
        
               | mjr00 wrote:
               | > It doesn't lack a definition, there's lots of people
               | talking about this. In general you'll find something like
               | "a small service that solves one problem within a single
               | bounded context".
               | 
               | How small is small? Even within this comment section
               | there are people talking about a single developer being
               | the sole maintainer of multiple microservices. I'm a
               | strong advocate of (micro?)service architecture but I
               | would never recommend doing the "all behavior is 100-line
               | lambda functions" approach.
               | 
               | A horrible monolith vs horrible microservices is
               | subjective, of course, but IMO having everything self-
               | contained to one repository, one collection of app
               | servers, etc. at least gives you some _hope_ of
               | salvation, often by building new functionality in
               | separate services, ironically. Horrible microservices
               | that violate data boundaries, i.e. multiple services
               | sharing a database which is a sadly common mistake, is a
               | much harder problem to solve. (both are bad, of course!)
        
               | insanitybit wrote:
               | "Small" is a relative term, and not an ideal one, but
               | what it generally means is "no larger than is needed" -
               | that is, if you have one concrete solution within a
               | bounded context, "small" is the code necessary to
               | implement that solution. It's not a matter of LOC.
               | 
               | > IMO having everything self-contained to one repository
               | 
               | I highly recommend keeping all microservices in a single
               | repository. It's even more important in a microservice
               | world to ensure that you can update dependencies across
               | your organization atomically.
               | 
               | > Horrible microservices that violate data boundaries,
               | i.e. multiple services sharing a database which is a
               | sadly common mistake, is a much harder problem to solve.
               | 
               | But that's not microservices. Maybe this is in and of
               | itself an issue of microservice architecture, the fact
               | that people think they're implementing microservices when
               | they're actually just doing SoA, but microservice
               | architecture would absolutely _not_ include multiple
               | services with a single database, that would _not be_
               | microservices.
               | 
               | So I think the criticism would be "people find it hard to
               | actually implement microservices" and not "microservice
               | architecture leads to these problems", because
               | microservice architecture is going to steer you _away_
               | from multiple services using one database.
        
               | sally_glance wrote:
               | A little off topic but there are even more sinister
               | patterns than the shared database which some architects
               | are actively advocating for, like
               | 
               | 1. The "data service" layer, which (if done improperly)
               | is basically just a worse SQL implemented on top of HTTP
               | but still centralised. Though now you can claim it's a
               | shared service instead of a DB.
               | 
               | 2. The "fat cache" database - especially common in
               | strongly event-based systems. Basically every service
               | decides to store whatever it needs from events to have
               | lower latency access for common data. Sounds great but in
               | practice leads to (undocumented) duplicate data, which
               | theoretically should be synchronised but since those
               | service-local mirror DBs are usually introduced without
               | central coordination it's bound to desync at some point.
        
           | Scarblac wrote:
           | _Services_ are obviously a good idea (nobody is arguing
           | something like PostgreSQL or Redis or DNS or what have you
           | should all run in the same process as the web server).
           | 
           | _Microservices_ attract the criticism. It seems to assume
           | something about the optimal size of services ("micro") that
           | probably isn't optimal for all kinds of service you can think
           | of.
        
             | xnorswap wrote:
             | That may be true, but the article describes a services
              | architecture and labels it microservices, and indeed goes
              | on to say:
             | 
             | > The term micro can be misleading, though - there doesn't
             | have to be anything micro about the services
        
               | Scarblac wrote:
               | Yes, but the "widespread meme" as I see it is about
               | microservices, not services in general.
        
               | butlerm wrote:
               | The smaller the service, the more likely that the
               | overhead of having a separate service exceeds the benefit
               | of doing so. It isn't at all normal for a service to have
               | its own database unless it provides a substantial piece
               | of functionality, for example, and there are non-trivial
               | costs to splitting databases unnecessarily. If you are
               | not very careful, it is a good way to make certain things
               | a dozen times slower and a dozen times more expensive to
               | develop.
        
             | mjr00 wrote:
             | It's funny because the term "microservices" picked up in
             | popularity because previously, most "service-oriented
             | architecture" (the old term) implementations in large
             | companies had services that were worked on by dozens or
             | hundreds of developers, at least in my experience. So going
             | from that to services that were worked on by a single
             | development team of ~10 people was indeed a "microservice"
             | relatively speaking.
             | 
             | Now, thanks to massive changes in how software is built
             | (cloud, containers et al) it's a lot more standard for a
             | normal "service" with no prefix to be built by a small team
             | of developers, no micro- prefix needed.
        
               | Scarblac wrote:
               | I can understand wanting to split things off so that one
               | team handles one service, roughly.
               | 
               | There was a recent thread here where someone mentioned he
               | personally was responsible for about four of their
               | microservices...
        
               | nicoburns wrote:
               | Yeah, my last company had 10 microservices (the entirety
               | of their codebase) managed by a single team when I
               | started. Some of them had fewer than 5 API endpoints (and
               | weren't doing anything complex to justify that).
        
             | jeffbee wrote:
             | This seems subjective. It's like putting "compatible with
             | the character of the neighborhood" in a city's zoning
             | codes.
        
             | arethuza wrote:
             | You do see people arguing for running SQLite in-process
             | rather than using a separate database server like
             | PostgreSQL?
        
           | pjc50 wrote:
            | DNS resolution is genuinely reusable, though. Perhaps that's
            | the test: is this something that could conceivably be used by
           | others, as a product in itself, or is it tied very heavily to
           | the business and the rest of the "microservices"?
           | 
           | Remember this is how AWS was born, as a set of
           | "microservices" which could start being sold to external
           | customers, like "storage".
        
             | jeffbee wrote:
             | What about a mailer that's divided into the SMTP server and
             | the local delivery agent?
        
               | pjc50 wrote:
               | qmail probably counts as "microservices" as well, yes. In
               | that case, for security separation of authority in a
               | multi-user environment. Nowadays we don't really do
               | multi-user environments and separation would be by
               | container.
        
           | nevinera wrote:
           | You're certainly misunderstanding me. Microservices are
           | definitely justifiable in plenty of cases, and _services_
           | even more often. But they _need to be technically justified_
           | - that's the point I'm making.
           | 
           | The majority of SOA adoption in small-to-medium tech
           | companies is driven by the wrong type of pain, by technical
           | leaders that can see that if they had their domains already
           | split out into services, their problems would not exist, but
           | don't understand that reaching that point involves _solving
           | their problems first_.
        
             | foobiekr wrote:
              | Whenever someone on the projects I'm attached to tries to
              | create a new service, first I ask them what data it works
              | with, and then I ask them what the alternative to making
             | this a service would be. Usually, by the time we start
             | answering the second question, they realize that, actually,
             | adding the service is just more work.
             | 
             | To me, what's amazing is that in almost no organization is
             | there a requirement to justify technically the addition of
             | a new service, despite the cost and administrative and
             | cognitive overhead of doing so.
        
           | mdekkers wrote:
           | > I find it odd that there is this widespread meme--on HN,
           | not in the industry--that microservices are never justified.
           | 
           | Many HN patrons are actually working where the rubber meets
           | the road.
           | 
           | DNS is a poor comparison. Pretty much everything, related to
           | your application or not, needs DNS. On the other hand, the
            | only thing WNGMAN [0] may or may not do is help with finding
           | the user's DOB.
           | 
           | https://m.youtube.com/watch?v=y8OnoxKotPQ
        
           | rqtwteye wrote:
           | "I find it odd that there is this widespread meme--on HN, not
           | in the industry--that microservices are never justified"
           | 
           | I think the problem is the word "micro". At my company I see
            | a lot of projects that are run by three devs and have 13
            | microservices. They are easy to develop but the maintenance
            | overhead is enormous. And they never get shared between
            | projects, so you have 5 services that do basically the same
            | thing.
        
           | manicennui wrote:
           | I rarely see anyone claiming that microservices are never
           | justified. I think the general attitude toward them is due to
           | the amount of Resume Driven Development that happens in the
           | real world.
        
             | nevinera wrote:
             | Eh, I think a lot more of it is caused by optimism than RDD
             | - people who haven't _tried to do it_ look at the mess
             | they've got, and they can see that if it were divided into
             | domain-based services it would be less of a mess.
             | 
             | And the process seems almost straightforward until you
             | _actually try to do it_, and find out that it's actually
             | fractally difficult - by that point you've committed your
             | organization and your reputation to the task, and "hey,
             | that was a mistake, oops" _after_ you've sunk that kind of
             | organizational resources into such a project is a kind of
             | professional suicide.
        
           | bcrosby95 wrote:
           | Can you point to a comment on HN saying microservices are
           | never justified?
        
         | arzke wrote:
         | "The last responsible moment (LRM) is the strategy of delaying
         | a decision until the moment when the cost of not making the
         | decision is greater than the cost of making it."
         | 
         | Quoted from: https://www.oreilly.com/library/view/software-
         | architects-han...
        
           | TeMPOraL wrote:
           | Still trying to unlearn that one. Turns out, most decisions
           | are cheap to revert or backtrack on, while delaying them
            | until the Last Responsible Moment often ends in shooting past
           | that moment.
        
             | jlundberg wrote:
             | Interesting term and I am curious to learn a few examples
              | of overshooting here!
             | 
             | My experience is that the new data available when
             | postponing decisions can be very very valuable.
        
             | cle wrote:
             | I don't think you need to unlearn it, but update your cost
             | function.
        
         | insanitybit wrote:
         | > Increasing architectural complexity to enforce boundaries is
         | never a solution to a lack of organizational discipline,
         | 
         | And yet we do this all the time. Your CI/CD blocking your PRs
         | until tests pass? That's a costly technical solution to solve
         | an issue of organizational discipline.
        
           | vrosas wrote:
           | The other problem is that these self-imposed roadblocks are
           | so engrained in the modern SDLC that developers literally
           | cannot imagine a world where they do not exist. I got
           | _reamed_ by some "senior" engineers for merging a small PR
           | without an approval recently. And we're not some megacorp,
           | we're a 12 person engineering startup! We can make our own
           | rules! We don't even have any customers...
        
             | insanitybit wrote:
             | Indeed, dogmatic adherence to arbitrary patterns is a huge
             | problem in our field. People have strong beliefs, "X good"
             | or "X bad", with almost no idea of what X even is, what the
             | alternatives are, why X was something people did or did not
             | like, etc.
        
               | jacquesm wrote:
               | Another problem is people making decisions with
               | potentially far reaching consequences well above their
               | paygrade without authorization.
        
               | insanitybit wrote:
               | I don't think that problem is nearly as significant.
        
               | jacquesm wrote:
               | What you think is usually the limit of your experience,
               | which effectively makes it anecdata. I've looked at
               | enough companies and dealt with the aftermath of enough
               | such instances that I beg to differ. It's possible that
               | due to the nature of my business my experience is skewed
               | in the other direction which makes that anecdata as well.
               | But it is probably more important than you think (and
               | less important than I think).
        
             | jacquesm wrote:
             | Your 'senior' engineer is likely right: they are trying to
             | get some kind of process going and you are actively
             | sabotaging that. This could come back to haunt you later on
             | when you by your lonesome decide to merge a 'small PR' with
             | massive downtime as a result of not having your code
             | reviewed. Ok, you say, I'm perfect. And I believe you. But
             | now you have another problem: the other junior devs on your
             | team who see vrosas commit and merge stuff by themselves
             | will see you as their shining example. And as a result they
              | by their lonesomes decide to merge 'small PRs' with massive
             | downtime as a result.
             | 
             | If you got _reamed_ you got off lucky: in plenty of places
             | you'd be out on the street.
             | 
             | It may well be that you had it right but from context as
             | given I hope this shows you some alternative perspective
             | that might give you pause the next time you decide to throw
                | out the rulebook. Even in emergencies - especially in
             | emergencies - these rules are there to keep you, your team
             | _and_ the company safe. In regulated industries you can
             | multiply all of that by a factor of five or so.
        
               | vrosas wrote:
               | I'm challenging my team to actually think about that
               | process, why it's in place, how it's helping (or actively
               | hurting!) us. Comparing ourselves to companies that have
               | regulatory requirements (spoiler: we don't and likely
               | won't for a long, long time) just furthers my point that
               | no one really thinks about these things. They just cargo
               | cult how everyone else does it.
        
               | jacquesm wrote:
               | You can challenge them without actually violating
               | established process. I wasn't comparing you to companies
               | that have regulatory requirements, I was merely saying
               | that all of the above will factor in much, much stronger
               | still in a regulated industry.
               | 
               | But not being in a regulated industry doesn't mean there
               | isn't a very good reason to have a code review in your
               | process, assuming it is used effectively and not for
               | nitpicking.
               | 
               |  _Not_ having a code review step is usually a bad idea,
               | unless everybody on your team is of absolutely amazing
                | quality and they never make silly mistakes. I've yet to
               | come across a team like that, but maybe you are the
               | exception to the rule.
        
               | insanitybit wrote:
               | > Your 'senior' engineer is likely right: they are trying
               | to get some kind of process going and you are actively
               | sabotaging that
               | 
               | Why? Because it's a "good practice"? They have 12 people
               | and no customers, they can almost certainly adopt a very
               | aggressive developer cycle that optimizes almost
               | exclusively for happy-path velocity. You'd never do that
               | at 50+ engineers with customers but for 12 engineers who
               | have no customers? It's fine, in fact it's _ideal_.
               | 
               | > with massive downtime as a result.
               | 
                | They have _no customers_, downtime literally does not
               | exist for them. You are following a dogmatic practice
               | that is optimizing for a situation that literally does
               | not exist within their company.
        
               | jacquesm wrote:
               | You establish a process before you need it, and code
               | review, especially when starting up is a fantastic way to
               | make sure that everybody is on the same page and that you
               | don't end up with a bunch of latent issues further down
               | the line. The fact that they have no customers _today_
                | doesn't mean that they won't have any in the future and
               | mistakes made _today_ can cause downtime further down the
               | line.
               | 
               | If you're wondering why software is crap: it is because
               | every new generation of coders insists on making all the
               | same mistakes all over again. Learn from the past,
               | understand that 'good practice' has been established over
               | many years of very expensive mistakes. 12 engineers is
               | already a nice little recipe for pulling in 12 directions
               | at once and even if they're all perfect they can still
               | learn from looking at each others code and it will ensure
               | that there are no critical dependencies on single
               | individuals (which can bite you hard if one of them
               | decides to leave, not unheard of in a startup) and that
               | if need be labor can be re-divided without too much
               | hassle.
        
               | insanitybit wrote:
               | I'm not advocating for having no processes, I'm
               | advocating for a process that matches their situation. A
               | company with no customers should not be worrying about
               | causing a production outage, they should be worried about
               | getting a demoable product out.
               | 
               | Dogmatic adherence to a process that limits developer
               | velocity and optimizes for correct code is very likely
               | the _wrong_ call when you have no customers.
        
               | jacquesm wrote:
               | If it is dogmatic, then yes: but you have no knowledge of
                | that and besides there are always people who believe
                | there is too much process and people who believe there
                | is too little. If you want to challenge the process you
               | do that by talking about it not by breaking the process
               | on purpose. That's an excellent way to get fired.
               | 
               | I don't know the context and I don't know the particular
               | business the OP is talking about. What I _do_ know is
               | that if you feel that your management is cargo culting
               | development methodology (which really does happen) you
               | can either engage them constructively or you can leave
               | for a better company. Going in with a confrontational
                | mindset isn't going to be a good experience for anybody
               | involved. Case in point: the OP is still upset enough
               | that he feels it necessary to vent about this in an
               | online forum.
               | 
               | Note that this is the same person who in another comment
               | wrote:
               | 
               | "On the flip side I'm trying to convince my CTO to fire
               | half our engineering team - a group of jokers he hired
               | during the run-up who are now wildly overpaid and
               | massively under-delivering. With all the tech talent out
               | there I'm convinced we'd replace them all within a week."
        
               | vrosas wrote:
               | Heh, both can be true. Process doesn't make good
               | engineers any better. Bad code gets approved and merged
               | every day. I'd rather have a team I could trust to merge
               | and steward their code to production on their own instead
               | of bureaucracy giving people a false sense of security.
               | 
               | > Case in point: the OP is still upset enough that he
               | feels it necessary to vent about this in an online forum.
               | 
               | I apologize for the conversation starter.
        
               | nevinera wrote:
               | With no customers, one of the purposes of code-review is
               | removed, but it's the lesser one anyway. The primary goal
               | of code-review should _not_ be to "catch mistakes" in a
               | well-functioning engineering team - that's a thing that
               | happens, but mostly your CI handles that. Code-review is
               | about unifying approaches, cross-pollinating strategies
               | and techniques, and helping each other to improve as
               | engineers.
               | 
               | Your attitude towards code-review on the other hand is
               | one I've seen before several times, and I was glad when
                | each of those people was fired.
        
               | withinboredom wrote:
               | We did post-merge code reviews. But half our team was on
               | the other side of the planet from the other half (4
                | people on the team: US, EU, APAC, and AU).
               | 
               | If we waited for reviews before merging, we'd be waiting
               | weeks to merge a single PR. Thus, you wrote your code,
               | opened a PR, did a self-review, then deployed it. We had
               | millions of customers, downtime was a real possibility.
               | So you'd watch metrics and revert if anything looked
               | slightly off.
               | 
               | You would wake up to your PR being reviewed. Sometimes
               | there would be mistakes pointed out, suggestions to
               | improve it, etc. Sometimes it was just a thumbs up emoji.
               | 
               | The point is, there are many ways to skin this cat and to
               | "ream" someone for merging without deploying is
               | incredibly immature and uncreative. You can still review
               | a merged PR.
        
               | nevinera wrote:
               | That process sounds fine to me, especially in a context
               | with either good integration coverage or low downtime
               | cost.
               | 
               | > to "ream" someone for merging without deploying is
               | incredibly immature and uncreative.
               | 
               | I'd agree, but I _highly_ doubt that description was an
               | accurate one. Read through the other comments by the same
               | person and you'll get a picture of their personality
               | pretty quickly.
               | 
               | It's likely that there was already an ongoing conflict
               | either in general or specifically between them about this
               | issue. They probably got a moderately harsh comment to
               | the effect of "hey, you're expected to wait for code-
                | reviews now, knock it off".
        
               | vrosas wrote:
               | I suggested it's possible to write, commit and own code
               | without others' approval to increase productivity and
               | people get _extremely_ defensive about it. It's so odd.
               | It happened in real life and it's happening in this
               | thread now, too. They even attack your character over it.
        
               | withinboredom wrote:
               | Yes. Some people get personally attached to code. It's
               | incredibly frustrating. Some people use reviews to push
               | dogmatic approaches to architecture and/or exert some
               | kind of control over things. Whenever I meet these people
               | in a code review, and they make unnecessary suggestions
               | or whatever, my favorite phrase to say is, "I can get
               | behind that, but I don't think it's worth the time to do
               | that right now," or, "I disagree, can you give an
                | argument grounded in computer science?" With the latter
               | only being used twice in my career, when someone left a
               | shitload of comments suggesting variable name changes,
               | and then again, when someone suggested rewriting
               | something that was O(n) to O(n^2) and claimed it was
               | better and wouldn't give up.
               | 
               | You want to get the team to a point where you can
                | disagree and commit; no code will ever be perfect and
                | there is no reason to spend 3-4 rounds of change requests
                | trying. I think the worst code review I ever had ended
               | with me saying, "if you're going to be this nitpicky, why
               | don't you take the ticket?" (It was extremely complex and
               | hard to read -- and there wasn't any getting around it,
               | lots of math, bit shifting, and other shenanigans. The
               | reviewer kept making suggestions that would result in
               | bugs, and then make more suggestions...)
               | 
               | He came back the next day and approved my PR once he
               | understood the problem I was trying to solve.
               | 
               | Even these days, where I work on a close team IRL, I've
               | been known to say, "if there are no objections, I'm
               | merging this unreviewed code." And then I usually get a
               | thumbs up from the team, or they say something like "oh,
               | I wanted to take a look at that. Give me a few mins I got
               | sidetracked!" And I've even heard, "I already reviewed
               | it, I just forgot to push approve!"
               | 
               | Communication is key in a team. Often, if the team is
               | taking a long time to review, give them the benefit of
               | the doubt, but don't let yourself get blocked by a
               | review.
        
               | insanitybit wrote:
               | You have no idea what my attitude towards code-review is.
        
               | nevinera wrote:
               | Well, partly that was a mistaken impression because I
               | thought that your comment was also from vrosas. But I
               | think there's enough in there to assess your attitude
               | toward code-review at least a _bit_:
               | 
               | > They have 12 people and no customers, they can almost
               | certainly adopt a very aggressive developer cycle that
               | optimizes almost exclusively for happy-path velocity.
               | You'd never do that at 50+ engineers with customers but
               | for 12 engineers who have no customers? It's fine, in
               | fact it's ideal.
               | 
               | 12 engineers churning out code with no code-review at
               | all? That'll produce velocity, for sure. It'll also
               | produce not just an unmaintainable mess, but an
               | interesting experiment, in which you get to find out
               | which of your engineers are socially capable enough to
               | initiate technical communication independently and
               | construct technical rapport _without_ that process
               | helping them to do so. Hope none of them hold strong
               | technical opinions that clash!
        
               | insanitybit wrote:
               | Who said "no code-review at all"? Not me, not vrosas as
               | far as I saw.
        
               | namtab00 wrote:
               | I cannot agree more, and have nothing to add except "can
               | I work with you?"..
        
               | mgkimsal wrote:
               | Then... the people responsible for this should have
               | blocked PRs without a review. Or protected the target
          | branch. Or... something. If what OP did is such a cardinal
          | sin, but the 'senior' folks didn't put in actual guardrails
          | to prevent it... OP is not entirely at fault.
        
             | elzbardico wrote:
              | Really man. I have almost two decades of developing
              | software and yet I feel a lot more comfortable having all
              | my code reviewed. If anything, I get annoyed by junior
              | developers on my team when they just rubber-stamp my PRs
              | because supposedly I am this super senior guy who can't
              | err. Code reviews are supposed to give you peace of mind,
              | not be a hassle.
              | 
              | During all this time, I've seen plenty of "small changes"
              | have completely unexpected consequences, and sometimes all
              | it would have taken to avoid them was someone else seeing
              | it from another perspective.
        
               | vrosas wrote:
               | Not a man.
               | 
               | I'm not convinced bad code would get merged more often if
               | we didn't require approvals. I am convinced we'd deliver
               | code faster though, and that's what I'm trying to
               | optimize for. Your company and engineering problems are
               | not the same as mine.
        
               | foobiekr wrote:
               | At 30 years of coding professionally in great
               | engineering-focused organizations, and that on top of
               | nearly a lifetime of having coded for myself, I've
               | concluded code reviews barely work.
               | 
               | I agree with everything you say here, but honestly it's
               | quite ineffective. I wish we had found something better
               | than unit tests (often misleading and fragile) and the
               | ability of a qualified reviewer to maintain attention and
               | context that a real review entails.
        
           | nevinera wrote:
           | That's technical, and not architectural. I'm _all about_
           | technical solutions to lack of discipline, and in fact I
           | think technical and process solutions are the only immediate
           | way to create cultural solutions (which are the long-term
           | ones). I'd even consider minor increases to architectural
           | complexity for that purpose justifiable - it's a real
           | problem, and trading to solve it is reasonable.
           | 
           | But architectural complexity has outsized long-term cost, and
           | service-orientation in particular has a _lot_ of it. And in
           | this particular case, it doesn't actually solve the problem,
           | since you _can't_ successfully enforce those domain
           | boundaries unless you already have them well-defined.
        
             | jaxr wrote:
              | Agreed. Isn't that why strongly typed languages made a
             | comeback?
        
             | insanitybit wrote:
             | Can you explain the salient distinction between a
             | "technical" versus "architectural" solution? Candidly, I'm
             | not convinced that there is one.
             | 
             | > But architectural complexity has outsized long-term cost
             | 
             | As do technical solutions, of course. CI/CD systems are
             | very expensive, just from a monetary perspective, but also
              | impose significant burdens on developers in terms of
             | blocking PRs, especially if there are flaky or expensive
             | tests.
             | 
             | > And in this particular case, it doesn't actually solve
             | the problem, since you _can't_ successfully enforce those
             | domain boundaries unless you already have them well-
             | defined.
             | 
             | Ignoring microservices, just focusing on underlying SoA for
             | a moment, the boundary is the process. That is an
             | enforceable boundary. I think what you're saying amounts
             | to, in microservice parlance, that there is no way to
             | prevent a single microservice from crossing multiple
             | bounded contexts, that it ultimately relies on developers.
             | This is true, but it's also just as true for good
             | monolithic designs around modules - there is no technical
              | constraint preventing a module from expanding into other
              | domains, becoming cluttered and overly complex.
             | 
             | Microservices do not make that problem harder, but SoA does
             | give you a powerful technical tool for isolation.
        
               | nevinera wrote:
               | > Can you explain the salient distinction between a
               | "technical" versus "architectural" solution? Candidly,
               | I'm not convinced that there is one.
               | 
               | Not concisely in the general case, but in this case the
               | difference is fairly straightforward - CI/CD doesn't
               | affect the structure of your executing application at
               | all, only the surrounding context. I don't want to spend
               | the hours it would take to characterize architecture as
               | distinct from implementation, but the vast number of
               | textbooks on the topic all generally agree that there is
               | one, though they draw the lines in slightly different
               | places.
               | 
               | > I think what you're saying amounts to,
               | 
               | Very much no - my point is about the process of
                | implementation. The services... _do_ enforce boundaries,
               | but the boundaries they enforce may not be good ones.
               | 
               | In order to successfully extract services from a
               | monolith, you have to go through a process that includes
               | finding and creating those domain boundaries for the
               | domain being extracted. If it's your first time, you
               | might be doing that implicitly and without realizing it's
               | what you're doing, but under the hood it's the bulk of
               | the mental effort.
               | 
               | The part where you actually _introduce a service_ can be
               | anywhere from a tenth to half of the work (that fraction
               | varies a lot by technical stack and depending on how
               | coupled the domain in question is to the central
               | behavioral tangle in the monolith), but by the time
               | you've gotten the domain boundary created you've _already
               | solved_ the original problem. Now you're facing a trade
               | of "extract this well-factored domain out to a separate
               | service application, to prevent its now-clear boundaries
               | from being violated in the future". And I contend that
               | that's a trade that should rarely be made.
        
               | danielovichdk wrote:
                | Which is exactly why the real challenge around (micro)
                | services is to a large extent an organisational one,
                | mixed with how those boundaries map to which business
                | capabilities.
               | 
               | The technical part of services is easy enough really but
               | if the organisation leaks its boundaries, which always
               | flow downstream into the actual teams, you are proper
               | fucked.
               | 
                | It takes a different level of maturity, both in the
                | organisation and the team, to build service-oriented
                | software.
               | 
               | That's my experience at least.
        
               | nevinera wrote:
               | Right! What I'm fundamentally saying is that the majority
               | of orgs trying to adopt SOA are doing it for the wrong
               | reasons, and will deeply regret it. In general, the
               | "right way" to adopt SOA is to extract services one at a
               | time because you have to, to solve various problems
               | you've encountered (scaling problems, technical problems,
               | stability issues), and then realize that you are in fact
               | currently using a service-oriented architecture, and have
               | been for several years.
        
             | jcstauffer wrote:
             | I would hope that there is more process in place protecting
             | against downtime than code review - for example automated
             | tests across several levels, burn-in testing, etc.
             | 
             | People are not reliable enough to leave them as the only
             | protection against system failure...
        
               | nevinera wrote:
               | Did you mean to reply to somebody else? I'm a huge
               | believer in automated testing, and if I said something
               | that can be interpreted otherwise I'd like to clarify it.
        
               | marcosdumay wrote:
               | I guess the GP's issue is because automated tests (and
                | every other kind of validation) impose architectural
               | constraints on your system, and thus are an exception to
               | your rule.
               | 
               | I don't think that rule can be applied as universally as
               | you stated it. But then, I have never seen anybody
               | breaking it in a bad way that did also break it in a good
               | way, so the people that need to hear it will have no
               | problem with the simplified version until they grow a
               | bit.
               | 
               | Anyway, that problem is very general of software
               | development methods. Almost every one of them is
               | contextual. And people start without the maturity to
               | discern the context from the advice, so they tend to
               | overgeneralize what they see.
        
               | nevinera wrote:
               | Hm. I think maybe you're using "system" to mean a
                | different thing than I am? I'm thinking of "the system" as
               | the thing that is executing in production - it provides
               | the business behavior we are trying to provide; there is
               | a larger "system" surrounding it, that includes the
               | processes and engineers, and the CI/CD pipelines - it too
               | has an "architecture", and _that_ architecture gets
               | (moderately) more complex when you add CI/CD. Is that
               | where our communication is clashing?
               | 
               | Because the complexity of that outer system is also
               | important, but there are a few very major differences
               | between the two that are probably too obvious to belabor.
               | But in general, architectural complexity in the inner
               | system costs a lot more than it does in the outer system,
               | because it's both higher churn (development practices
               | change much slower than most products) and higher risk
               | (taking production systems offline is much less
               | permissible than freezing deployments)
        
               | marcosdumay wrote:
               | > I think maybe you're using "system" to mean a different
               | thing than I am?
               | 
               | No, I'm not. Are you overlooking some of the impacts of
               | your tests and most of the impact of static verification?
               | 
               | Those do absolutely impact your system, not only your
               | environment. For tests it's good to keep those impacts at
               | a minimum (for static verification you want to maximize
               | them), but they still have some.
        
               | nevinera wrote:
               | I don't think I'm overlooking any major ones, but we are
               | probably working in quite different types of systems -
               | I'm not aware of any type of static verification I'd use
               | in a rails application that would affect _architecture_
               | in a meaningful way (unless I would write quite
               | terrifying code without the verifier I suppose).
               | 
               | I'm not sure about the tests - it depends on what you'd
               | consider an impact possibly; I've been trained by my
               | context to design code to be decomposable in a way that
               | _is_ easily testable, but I'm not sure that is really an
               | 'impact of the tests' (and I'm fairly confident that it
               | makes the abstractions less complex instead of more).
               | 
               | Would you mind explaining what you mean in more detail?
        
           | mvdtnz wrote:
           | No, not at all. CI/CD blocking pull requests is in place
           | because large systems have large test suites and challenging
           | dependencies which mean that individual developers literally
           | can't run every test on their local machine and can often
           | break things without realising it. It's not about
           | organisational discipline, it's about ensuring correctness.
        
             | bluGill wrote:
             | I can run every test on my machine if I want. It would be a
             | manual effort, but wouldn't be hard to automate if I cared
             | to try. However it would take about 5 days to finish. It
             | isn't worth it when such tests rarely fail - the CI system
              | just spins them out to many AWS nodes, and if something
              | fails I run just that test locally and get results in a few
              | hours (some of the tests are end-to-end integration tests
              | that need more than half an hour).
             | 
             | Like any good test system I have a large suite of "unit"
             | tests that run quick that I run locally before committing
             | code - it takes a few minutes to get high code coverage if
             | you care about that metric. Even then I just run the tests
             | for x86-64, if they fail on arm that is something for my CI
             | system to figure out for me.
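              | 
              | One way to express that split, as a toy pytest sketch (the
              | marker name and file are made up purely for illustration):
              | 
              |     # test_example.py -- mark the slow stuff so the quick
              |     # suite can be run locally before committing
              |     import pytest
              | 
              |     def test_parsing_is_quick():
              |         assert int("42") == 42
              | 
              |     @pytest.mark.slow
              |     def test_end_to_end_integration():
              |         ...  # stands in for the multi-hour runs CI farms out
              | 
              | Locally you run pytest -m "not slow"; CI runs the whole
              | thing and fans the slow tests out across its nodes.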
        
         | mike_ivanov wrote:
         | I want this printed and neatly framed. Where to order?
        
       | lcfcjs wrote:
       | Interesting article.
       | 
       | Although one point I'd like to contest is the first "pro" which
       | is you can use a different language for each service. We tried
       | this approach and it failed fantastically. You're right about the
        | cons, it becomes unmaintainable.
       | 
       | We had 3 microservices that we maintained on our team, one in
       | Java, one in Ruby and one in Node. We very quickly realized we
       | needed to stick to one, in order to share code, stop the context
       | switching, logging issues, etc.
       | 
       | The communication piece is something that solid monoliths should
        | practice as well (as is touched on in the article). Calling a
        | 3rd party API without a timeout is not a great idea (to put it
        | lightly), monolith or microservice.
       | 
       | Thought-provoking nevertheless, thank you for sharing.
        
         | codesnik wrote:
         | lemme guess, you all converged to Java?
        
       | jeffbee wrote:
        | The article seems to take for granted that your development org is
       | completely broken and out of control. They can't decide what to
       | work on during sprints, they furtively introduce third party
       | libraries and unknown languages, they silently ship incompatible
       | changes to prod, etc. I guess microservices are easier if your
       | developers aren't bozos.
        
         | alt227 wrote:
         | Unfortunately there are very often bozos in your team or
          | complete teams of bozos working on the same project as you. I'm
          | sure microservices are easier if you work in a development team
          | of smart, competent, intelligent developers. However, I'm sure
         | everything would be easier then!
        
         | simonw wrote:
         | Every developer is a bozo for their first few months in a new
         | job, simply because it takes time to absorb all of the
         | information needed to understand how the existing systems work.
        
           | jeffbee wrote:
           | Acculturating new developers is one of the main tasks of an
           | organization. I don't think it's very difficult to
           | communicate that some company uses language[s] X[, Y and Z]
           | only.
        
             | simonw wrote:
             | That depends on the culture of each specific organization.
             | Are there top-down engineering decisions? Is there a push
             | for more team autonomy?
             | 
             | My experience is that many organizations have something of
             | a pendulum swinging between those two positions, so the
             | current state of that balance may change over time.
             | 
             | Also: many new developers, when they hear "microservices",
             | will jump straight to "that means I can use any language I
             | want, right?"
             | 
             | (Maybe that's less true in 2023)
        
       | avelis wrote:
       | > Once the monolith is well matured and growing pains start to
       | rise, then you can start to peel off one microservice at a time
       | from it.
       | 
       | Curious what is considered a growing pain of the monolith vs tech
       | debt that hasn't been tackled. If the issue is the monolith can't
        | scale in performance, then a service-oriented architecture
       | doesn't necessarily give you that automatically unless you know
       | where the bottleneck of your monolith is.
        
       | harry_ord wrote:
       | My current job hasn't been a great experience with microservices.
        | It's an industry where I've worked with a monolith that did a lot
        | more, but having everything split between like 17 different
        | services makes managing anything not so fun. The other big
        | blocker is a small team, and only one of us has met the original
        | staff who designed this.
        
       | ThinkBeat wrote:
       | Side issue: What software is used to make those diagrams?
        
         | FL410 wrote:
         | Looks like Excalidraw
        
       | timwaagh wrote:
       | I liked this well balanced approach. What I think is necessary is
       | more capability to have hard modularisation within a monolith, so
       | decoupling is not a reason to introduce microservices.
       | Performance should be the main/only reason to do it. It's a shame
       | few languages support this.
        
         | esafak wrote:
         | You mean like namespacing? What language did you have in mind?
        
       | ranting-moth wrote:
       | The article mentions Development Experience, but doesn't mention
       | what I think is an overlooked huge cost.
       | 
        | A bad Development Experience results in unhappy and/or frustrated
        | developers. An unhappy and/or frustrated developer usually
        | performs considerably worse than their happier self would.
        
       | lr4444lr wrote:
       | Don't disagree with the article, but to play Devil's Advocate,
       | here are some examples of when IME the cost IS worth it:
       | 
       | 1) there are old 3rd party dependency incompatibilities that you
       | can spin off and let live separately instead of doing a painful
       | refactor, rebuilding in house, or kludgy gluing
       | 
        | 2) there are deploy limitations on mission-critical,
        | high-availability systems that should not hold up deployments of
        | other systems with different priorities or sensitive
        | business-hours time windows
       | 
       | 3) system design decisions that cannot be abstracted away are at
       | the mercy of some large clients of the company that are unable or
       | unwilling to change their way of doing things - you can silo the
       | pain.
       | 
       | And to be clear, it's not that these things are "cost free". It's
       | just a cost that is worth paying to protect the simpler monolith
       | from becoming crap encrusted, disrupted with risky deploys, or
       | constrained by business partners with worse tech stacks.
        
         | eterevsky wrote:
         | 1) Why is it better than wrapping it in an interface with a
         | clear API without extracting it into a separate service?
        
           | paulddraper wrote:
           | Those dependencies might cross language boundaries, or
           | interfere with your other dependencies running in the same
           | process.
        
             | bingemaker wrote:
              | On the contrary, when they fail silently, it is hard to debug
             | "dependent" services
        
               | mark_undoio wrote:
               | In the Java world, we've developed record/replay across
               | microservice boundaries, so it's effectively possible to
               | step from a failure in one microservice and into the
               | service that sent it bad data.
               | 
               | https://docs.undo.io/java/Microservices.html
               | 
               | [Edit: for which, by the way, any feedback is welcome -
               | it's technically quite promising but real world
               | experience is worth a lot!]
        
           | voxic11 wrote:
           | Because of dependency issues like he mentioned. If I am using
           | Library A which depends on version 1 of Library C and I need
           | to start using Library B which depends on version 2 of
           | Library C then I have a clear problem because most popular
           | programming languages don't support referencing multiple
           | different versions of the same library.
        
             | furstenheim wrote:
              | Node got this just right. You only get this issue for
             | big stateful libraries, like frameworks
        
             | rmbyrro wrote:
             | Can't we just isolate these two services in containers,
             | using different library versions?
             | 
             | They don't need to be microservices in order to isolate
             | dependencies, do they?
             | 
             | In Python, for instance, you don't even need containers.
             | Just different virtual environments running on separate
             | processes.
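              | 
              | A toy sketch of what I mean (the venv path, script name and
              | library versions are all made up):
              | 
              |     # run the dependency-conflicted code in a separate
              |     # process that uses its own virtual environment
              |     import json
              |     import subprocess
              | 
              |     def call_isolated(venv_python, script, payload):
              |         """Pass the request as JSON, read the JSON reply."""
              |         proc = subprocess.run(
              |             [venv_python, script],
              |             input=json.dumps(payload),
              |             capture_output=True, text=True, check=True,
              |         )
              |         return json.loads(proc.stdout)
              | 
              |     # e.g. .venv-legacy pins library-c 1.x, ours uses 2.x
              |     result = call_isolated(".venv-legacy/bin/python",
              |                            "legacy_task.py", {"n": 42})
              | 
              | Swap the subprocess for a long-running process behind a
              | local socket and, admittedly, it starts to look like a
              | service.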
        
               | mjr00 wrote:
               | If you have two separate processes running in two
               | separate containers... those are two separate services.
               | You need to solve the same problems that would come with
               | running them on two different EC2 instances: what's the
               | method for communicating with the other container? What
               | happens if the other container I'm calling is down? If
               | the other container is down or slow to respond to my API
               | calls, am I dealing with backpressure gracefully or will
                | the system start experiencing a cascade of failures?
        
               | jethro_tell wrote:
               | I suppose if you do that, they will communicate over the
               | network likely via api and have the ability to scale
               | independently.
               | 
               | You just invented microservices, lol.
        
             | foobiekr wrote:
             | You're not wrong here but usually this comes down to having
              | one or a small number of version-specific container
              | processes (and I do not mean container in the Docker
              | sense) to host those functions. Not microservices.
             | 
             | That said, almost always, if you are seriously trying to
             | keep old unsupported dependencies running, you're violating
             | any reasonable security stance.
             | 
              | I'm not saying it never happens, but often the issue is that
             | the code is not understood, and no one is capable of
             | supporting it, and so you are deep in the shit already.
        
             | slowmovintarget wrote:
             | OSGi and Java Modules would like a word.
             | 
             | Too few developers use the facilities available for that
             | kind of in-process isolation, even when it is possible.
             | (Don't tell me Java isn't popular... It may be the new
             | COBOL, but it's still mainstream.)
        
         | paulddraper wrote:
         | And those would apply to < 10% of your total code base.
        
         | eduction wrote:
         | Wouldn't this just be "having one or two services"? I don't
         | think that's the same as "microservices".
         | 
         | Correct me if I'm wrong, but isn't "microservices" when you
         | make internal components into services _by default_ , instead
         | of defaulting to making a library or class?
        
           | Dudester230602 wrote:
           | Exactly, just use well-factored services of any size and
           | smack anyone saying "micro..." with a wet towel for they are
           | just parroting some barely-profitable silicon valley money
           | sinks.
        
           | rmbyrro wrote:
           | Exactly my thoughts
        
             | sroussey wrote:
             | Microservices is a newer term than SOA.
        
               | PedroBatista wrote:
               | They are not the same thing! Microservices is a discount
               | version of SOA with a new coat of paint.
        
               | sroussey wrote:
               | Clearly. But it's also SOA taken to the extreme!
        
           | lr4444lr wrote:
           | I don't want to get too tied up in the terminology, but
           | "microservices-first" does not seem to be the problem the
           | post is describing:
           | 
           |  _One way to mitigate the growing pains of a monolithic
           | backend is to split it into a set of independently deployable
           | services that communicate via APIs. The APIs decouple the
           | services from each other by creating boundaries that are hard
           | to violate, unlike the ones between components running in the
           | same process_
        
             | wongarsu wrote:
             | Without getting tied up in the whole "every language except
             | assembly, Python and possibly JavaScript already solved
             | this problem by forcing people to adhere to module-level
             | APIs" argument, I think the crux of the issue is that the
             | article just defines microservice architecture as any
             | architecture consisting of multiple services, and
             | explicitly states "there doesn't have to be anything micro
             | about the services". Which imho is watering down the term
             | microservices too much. You don't have microservices as
             | soon as you add a second or third service.
        
               | hinkley wrote:
               | I think we should start calling this Pendulum Blindness.
               | 
               | We just go from 'one' to 'too many' as equally unworkable
               | solutions to all of our problems, and each side (and each
               | person currently subscribed to that 'side') knows the
               | other side is wrong. The assumption is that this means
                | their side is right, instead of the reality, which is
                | that nobody is right.
               | 
               | The moderates are right, but their answers are wiggly and
               | they're too busy getting stuff done to argue with the
               | purists. But the optics are bad so here we go again on
               | another swing of the swingset.
               | 
               | 'Some Services' is probably right, but it varies with
               | domain, company size, and company structure (Conway's
               | Law). And honestly developer maturity, so while 7 may be
               | right for me and my peers today, in a year it might be 6,
               | or 9. There's no clever soundbite to parrot.
        
           | gedy wrote:
           | For some reason, most of the people I've worked with recently
           | are either fully into monoliths or lots of fine grained,
           | interdependent microservices.
           | 
           | They don't seem to understand there's a useful middleground
           | of adding fewer, larger data services, etc. It's like SOA
           | isn't a hot topic so people aren't aware of it.
        
             | andorov wrote:
             | I've been using the term 'macro-services' to describe this
             | middle ground.
        
           | philwelch wrote:
           | This is why I prefer the old term "service oriented
           | architecture" over "microservices". "Microservices" implies
           | any time a service gets too big, you split it apart and
           | introduce a network dependency even if it introduces more
           | problems than it solves.
        
         | j45 wrote:
          | There are many ways to architect well that don't involve
          | prematurely introducing microservices.
         | 
         | I'm a fan of microservices btw.
         | 
          | Premature optimization and scaling are almost as bad a form
          | of technical debt as any other when you later have to optimize
          | in a completely different manner and direction.
        
         | jupp0r wrote:
         | To add another to your list:
         | 
         | Being able to easily use different programming languages. Not
         | every language is a good fit for every problem. Being able to
          | write your machine learning inference services in Python, your
         | server side rendered UI in Rails and your IO and concurrency
         | heavy services in Go might justify the additional overhead of
         | having separate services for these three.
        
       | agumonkey wrote:
        | Has anyone approached microservices from the good old Brooks
        | modularization perspective? When and how to split your app into
        | modules / services, whatever.
        
       | abound wrote:
       | I'm as much of a "build a monolith until you can't" person as
       | any, but one motivation for using microservices that I haven't
       | seen mentioned here is differing resource/infra requirements +
       | usage patterns.
       | 
       | Throw the request/response-oriented API with bursty traffic on
       | something serverless, run the big async background tasks on beefy
       | VMs (maybe with GPUs!) and scale those down when you're done. Run
       | the payments infra on something not internet-facing, etc.
       | 
       | Deploying all those use cases as one binary/service means you've
       | dramatically over-provisioned/underutilized some resources, and
        | your attack surface is larger than it should be.
        
         | charcircuit wrote:
         | A load balancer can make this work with a monolith. Requests
         | for expensive services can be routed to a separate tier of
         | servers for example.
        
       | Symmetry wrote:
       | In the robotics world it's pretty common to have something that
       | looks a lot like microservices as an organizing principle. ROS[1]
        | would probably be the robotics framework that everybody cuts their
       | teeth on and in that you have a number of processes communicating
       | over a pub/sub network. Some of the reasons for organizing this
       | way would be familiar to someone doing websites, but you also
       | have factors like vendors for sensors you need providing
       | libraries that will occasionally segfault, and needing to be able
       | to recover from that.
       | 
       | [1]https://en.wikipedia.org/wiki/Robot_Operating_System
        
       | eterevsky wrote:
       | In my experience, if a piece of code is stateful and is called
       | from several different services, then it should be its own
       | service. Otherwise it is usually better off as a library.
        
       | sorokod wrote:
       | _eventual consistency_
       | 
       | A bit of tangent but one must admire the optimistic attitude
       | reflected by this name.
       | 
       | "Most of the time inconsistent" says the same thing but is not
       | great for marketing.
        
       | spott wrote:
       | "Microservices" has always been weird to me.
       | 
        | People argue for them from two drastically different points of
        | view:
       | 
       | * Microservices become necessary at some point because they allow
       | you to independently scale different parts of the application
       | depending on load.
       | 
       | * Microservices become necessary at some point because they allow
       | you to create hard team boundaries to enforce code boundaries.
       | 
       | Personal opinion: micro services are always thrown out there as a
       | solution to the second problem because it is easier to split a
       | service out of a monolith than it is to put the genie of bad code
       | boundaries back in the box.
       | 
       | An application goes through a few stages of growth:
       | 
       | 1) When it starts, good boundaries are really hard to define
       | because no-one knows what the application looks like. So whatever
       | boundaries are defined are constantly violated because they were
       | bad abstraction layers in the first place.
       | 
       | 2) After a while, when the institutional knowledge is available
       | to figure out what the boundaries are, it would require
       | significant rewrites/refactoring to enforce them.
       | 
       | 3) Since a rewrite/major refactor is necessary either way,
       | everyone pushes to go to micro services because they are a good
       | resume builder for leadership, and "we might need the ability to
       | scale", or people think it will be easier ("we can splinter off
       | this service!", ignoring the fact that they can splinter it off
       | within the monolith without having to deal with networks and REST
       | overhead).
       | 
       | Unfortunately, this means that everyone has this idea that micro
       | services are _necessary_ for code boundaries because so many
       | teams with good code boundaries are using micro services.
        
         | soco wrote:
         | Granular performance and team boundaries are both valid points.
          | But I haven't yet seen (around me) monolith applications so
          | complex as to need multiple teams working on them. I've instead
          | seen applications developed by two people where some higher-ups
         | requested splitting them into microservices just because (no,
         | scaling wasn't needed).
        
       | paulddraper wrote:
       | > The term micro can be misleading, though - there doesn't have
       | to be anything micro about the services. In fact, I would argue
       | that if a service doesn't do much, it just creates more
       | operational toll than benefits. A more appropriate name for this
       | architecture is service-oriented architecture, but unfortunately,
       | that name comes with some old baggage as well.
       | 
       | What baggage is that?
       | 
       | Because in my experience 90% of the argument revolves around the
       | word "micro."
       | 
       | If "micro" is indeed irrelevant to "microservices," let's name
       | things better, yes?
        
         | wholesomepotato wrote:
         | https://www.service-architecture.com/articles/web-services/s...
         | 
         | After enough people retire that remember how big of a mess this
         | was, we can probably re-use "Service Oriented".
        
       | fwungy wrote:
       | Just use Rust. Rust solves all of this.
        
       | aiunboxed wrote:
       | If your company has a microservice architecture but doesn't have
        | proper knowledge of how services should communicate, how they
        | should share code, etc., then it is the worst thing possible.
        
       | astonex wrote:
       | There's a spectrum between each function is a separate service,
       | to everything is in one binary. The engineering part is finding
       | the happy medium.
        
       | exabrial wrote:
       | > Getting the boundaries right between the services is
       | challenging - it's much easier to move them around within a
       | monolith until you find a sweet spot. Once the monolith is well
       | matured and growing pains start to rise, then you can start to
       | peel off one microservice at a time from it.
       | 
       | I couldn't agree more. "Do not optimize your code" applies here.
        
       | resters wrote:
       | There should be a distinction between doing a migration of a
       | "monolithic" system to "microservices" vs adding functionality to
       | a monolithic system by implementing a "microservice" that will be
       | consumed by the monolithic system.
       | 
       | In some cases, microservices are helpful for separation of
       | concerns and encapsulation.
       | 
       | But teams often get the idea that their monolithic system should
       | be rewritten using "microservice" patterns. It's important to
       | question whether the monolithic system is using the correct
        | abstractions and whether splitting it into different services is
       | the only way to approach the deficiencies of the system.
       | 
       | In most systems there are bits that can (and probably should) be
       | quite decoupled from the rest of the system. Understanding this
       | is a more fundamental aspect of system design and does not apply
       | only to monolithic or microservice approaches.
        
       | aleksiy123 wrote:
       | I recently had some discussions and did some research on this
       | topic and I feel like there is a lot people don't talk about in
       | these articles.
       | 
        | Here are some more considerations in the microservices vs.
        | monolith tradeoff. It's also important to treat these two things
        | as a spectrum and not a binary decision.
        | 
        | 1. Isolation. Failure in one service doesn't fail the whole
        | system. Smaller services have better isolation.
        | 
        | 2. Capacity management. It's easier to estimate the resource
        | usage of a smaller service because it has fewer
        | responsibilities. This can result in efficiency gains. An
        | extension of this is that you can also give optimized resources
        | to specific services: a prediction service can use a GPU while a
        | web server can use CPU only. A monolith may need compute with
        | both, which could result in less optimized resources.
        | 
        | 3. DevOps overhead. In general a monolith has less management
        | overhead because you only need to manage/deploy one or a few
        | services rather than many.
        | 
        | 4. Authorization/permissions. Smaller services can be given
        | smaller-scoped permissions.
        | 
        | 5. Locality. A monolith can share memory and therefore has
        | better data locality. Small services use networks and have higher
        | overhead.
        | 
        | 6. Ownership. Smaller services can have more granular ownership.
        | It's easier to transfer ownership.
        | 
        | 7. Iteration. Smaller services can move independently of one
        | another and can release at separate cadences.
        
         | tazard wrote:
         | 1. Isolation
         | 
         | With a well built monolith, a failure on a service won't fail
         | the whole system.
         | 
         | For poorly built microservices, a failure on a service
          | absolutely does bring down the whole system.
         | 
         | Not sure I am convinced that by adopting microservices, your
         | code automatically gets better isolation
        
           | aleksiy123 wrote:
            | It's not automatic, but it has the potential for more isolation
           | by definition.
           | 
            | If your service has a memory leak or crashes, it only takes
            | down that service. It is still up to your system to handle
            | such a failure gracefully. If such a service is a critical
            | dependency then your system fails, but if it is not, the rest
            | of your system can still partially function.
           | 
            | If your monolith has a memory leak or crashes, it takes down
            | the whole monolith.
        
           | scurvy_steve wrote:
           | I work on low code cloud ETL tools. We provide the
           | flexibility for the customer to do stupid things. This means
           | we have extremely high variance in resource utilization.
           | 
            | An on demand button press can start a process that runs for
           | multiple days, and this is expected. A job can do 100k API
           | requests or read/transform/write millions of records from a
           | database, this is also expected. Out of memory errors happen
            | often and are expected. It's not our bad code, it's the
           | customer's bad code.
           | 
           | Since jobs are run as microservices on isolated machines,
           | this is all fine. A customer(or multiple at once) can set up
           | something badly, run out of resources, and fail or go really
            | slow and nobody is affected but them.
        
         | manicennui wrote:
         | 1. Except that a single process usually involves multiple
         | services and the failure of one service often makes entire
         | sequences impossible.
        
           | aleksiy123 wrote:
           | But not all sequences. It depends on your dependencies. Some
            | services are critical for some processes. In the monolith
            | design it's a critical dependency for all processes.
        
       | js8 wrote:
       | I think people get the modularity wrong. Modularity is important,
        | but I came to the conclusion there is another important
        | architectural
       | principle, which I call "single vortex principle".
       | 
       | First, a vortex in a software system is any loop in a data flow.
       | For example, if we send data somewhere and then get it back
       | processed, or are in any way influenced by it, we have a vortex.
       | A mutable variable is an example of a really small vortex.
       | 
       | Now, the single vortex principle states that there ideally should
       | be only one vortex in the software system, or, restated, every
       | component should know which way its vortex is going.
       | 
       | The rationale shows up when we have two vortices and we want to
       | compose the modules that form them into a single whole. If the
       | vortices are oriented the same way, composition is easy. If they
       | have opposite orientations, the composition is tricky and
       | requires a decision on how the new vortex is oriented. Therefore,
       | it is best if all the vortices in all the modules have the same
       | orientation, and thus form a single vortex.
       | 
       | This principle is a generalization of ideas such as Flux pattern,
       | CQRS, event sourcing, and immutability.
        
         | salvadormon wrote:
         | I find this vortex concept interesting. Do you have any books
         | or online sources that I could use to study this?
        
           | js8 wrote:
           | No, I made it all up. I wish I had time and interest to
           | formalize it.
        
           | ahoka wrote:
           | It's called engineering. /s
        
         | Freebytes wrote:
         | This is a very good point, and you could probably write quite a
         | few articles about this particular subject. You may even have a
         | service A that calls service B that calls service C that calls
         | service A. Then you have a problem. Or, you have C get blocked
         | by something happening in A that was unexpected. Ideally, you
         | only have parents calling children without relying on the
         | parents whatsoever, and if you fail in this, you have failed in
         | your architecture.
        
       | nightowl_games wrote:
       | I tried to use an eBPF sampling profiler to produce a flame
       | graph/marker chart of our c/c++ desktop application. I struggled
       | to get it going, seems like the existing tools are for kernel
       | level profiling and more complicated stuff.
       | 
       | Anyone recommend a simple technique to produce a quality profiler
       | report for profiling a desktop application? Honestly the
       | chrome/firefox profiler is so great that I do wasm builds and use
       | those. Ideally I'd like a native profiler that can output .json
       | that chrome/ff can open in its profile analyzer.
        
       | sazz wrote:
       | Speaking from a Release Management point of view, going from a
       | monolith to microservices is often done for the wrong reasons.
       | 
       | The only valid reason for actually doing the change seems to be
       | scaling due to performance bottlenecks. Everything else is just
       | shifting complexity from software development to system
       | maintenance.
       | 
       | Of course, developers will be happy that they have that huge
       | "alignment with other teams" burden off their shoulders. But the
       | clarity about when and how a feature is implemented, properly
       | tested across microservices, and then activated and hypercared
       | on production will be much harder to reach if the communication
       | between the development teams is not mature enough (which is
       | often the actual reason for breaking up the monolith).
        
         | altairTF wrote:
         | This is the sole reason we're considering breaking this out
         | into a separate component of our app. It's become too large to
         | maintain effectively. The rest of the app will remain
         | unchanged.
        
       | avandekleut wrote:
       | We started with a microservices architecture in a greenfield
       | project. In retrospect we really should have started with a
       | monolith. Every 2-3 days we have to deal with breaking changes to
       | API schemas. Since we're still pre-production it doesn't make
       | sense to have a dozen major versions up side-by-side, so we're
       | just sucking up the pain for now. It's definitely a headache
       | though.
       | 
       | We also are running on AWS API Gateway + lambda so the
       | availability and scalability are the same regardless of monolith
       | or not...
        
       | holoduke wrote:
       | I think that chunking up your application layer into smaller
       | parts is always a good idea. But when do you say it's a
       | microservice? When it's completely isolated, with its own
       | database, etc.? Are many small applications running as separate
       | binaries/processes/on different ports talking to one database
       | endpoint also microservices?
        
         | never_inline wrote:
         | > Are many small applications running as separate
         | binaries/processes/on different ports talking to one database
         | endpoint also microservices?
         | 
         | One database endpoint as in what? You can use different schemas
         | and have no relations between tables used by different
         | services, or at the other extreme have services which write to
         | the same table.
         | 
         | I have read in a book that the most important criterion is
         | independent deployability. I forget the name of the book.
        
           | esafak wrote:
           | https://samnewman.io/books/building_microservices_2nd_editio.
           | .. ?
        
       | wnolens wrote:
       | If a monolith is well-factored, what is the difference between it
       | and co-located microservices?
       | 
       | Probably just the interface - function calls become RPC. You
       | accept some overhead for some benefits of treating components
       | individually (patching!)
       | 
       | What is the difference between distributed microservices vs. co-
       | located microservices?
       | 
       | Deployment is more complex, but you get to intelligently assign
       | processes to more optimal hardware. Number of failure modes
       | increases, but you get to be more fault tolerant.
       | 
       | There's no blanket answer here. If you need the benefits, you pay
       | the costs. I think a lot of these microservice v.s. monolith
       | arguments come from poor application of one pattern or the other,
       | or using inadequate tooling to make your life easier, or mostly -
       | if your software system is 10 years old and you haven't been
       | refactoring and re-architecting as you go, it sucks to work on no
       | matter the initial architecture.
        
         | marcosdumay wrote:
         | From the developer POV, the difference is in the interface. And
         | while we have all kinds of tools to help us keep a monolith in
         | sync and correct, the few tools we have to help with IPC cannot
         | do static evaluation and are absolutely not interactive.
         | 
         | From the ops POV, "deploy is more complex" is a large
         | understatement. Each package is one thing that must be managed,
         | with its own specific quirks and issues.
        
         | hu3 wrote:
         | > Probably just the interface - function calls become RPC.
         | 
         | Network calls introduce new failure modes and challenges which
         | require more plumbing.
         | 
         | Can I retry this RPC call safely?
         | 
         | How about exponential backoff?
         | 
         | Is my API load-balancer doing its thing correctly? We'll need
         | to monitor it.
         | 
         | How about tracing between microservices? Now we need
         | OpenTelemetry or something like that.
         | 
         | How much harder is it to debug with breakpoints in a
         | microservice architecture?
         | 
         | How can I undo a database transaction between 2 microservices?
         | 
         | Suddenly your simple function call just became a lot more
         | problematic.
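         | 
         | For illustration, a minimal sketch of the retry/backoff
         | plumbing each such call suddenly needs (Python; the endpoint
         | name and the idempotency assumption are made up):
         | 
         |     import random
         |     import time
         | 
         |     import requests  # assumed HTTP client
         | 
         |     def call_with_backoff(url, retries=5, base=0.2):
         |         # retrying is only safe if the endpoint is idempotent
         |         for attempt in range(retries):
         |             try:
         |                 resp = requests.get(url, timeout=2)
         |                 if resp.status_code < 500:
         |                     return resp  # success or client error
         |             except requests.RequestException:
         |                 pass  # refused, DNS failure, timeout, ...
         |             # exponential backoff with jitter
         |             delay = base * 2 ** attempt
         |             time.sleep(delay + random.random() * base)
         |         raise RuntimeError(f"gave up on {url}")
         | 
         |     # e.g. call_with_backoff("http://foo-svc.internal/foo/42")
         | 
         | And that still says nothing about tracing, load balancing, or
         | undoing a half-finished transaction on the other side.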
        
           | wnolens wrote:
           | My first Q was re: monolith on one host to many services on
           | one host.
           | 
           | And yes, when going off-host, you'll have these issues. One
           | should not employ network hops unnecessarily. Engineering is
           | hard. Doesn't make it not worth doing.
        
         | elevation wrote:
         | Monolith->microservice is not a trivial change no matter how
         | well-factored it is to begin with -- though being poorly
         | architected could certainly make the transition more difficult!
         | 
         | > Probably just the interface - function calls become RPC.
         | 
         | This sounds simple, but once "function calls become RPC" then
         | your client app also needs to handle:
         | 
         | * DNS server unreachable
         | 
         | * DNS server reachable but RPC hostname won't resolve
         | 
         | * RPC host resolves but not reachable
         | 
         | * RPC host reachable but refuses connection
         | 
         | * RPC host accepts connection but rejects client authentication
         | 
         | * RPC host presents untrusted TLS cert
         | 
         | * RPC accepts authentication but this client has exceeded the
         | rate limit
         | 
         | * RPC accepts authentication but says 301 moved permanently
         | 
         | * RPC host accepts request but it will be x seconds before
         | result is ready
         | 
         | * RPC host times out
         | 
         | Even for a well-factored app, handling these execution paths
         | robustly probably means rearchitecting to allow you to
         | asynchronously queue and rate limit requests, cache results,
         | handle failure with exponential back-off retry, and operate
         | with configurable client credentials, trust store, and resource
         | URLs (so you can honor 301 Moved Permanently), and log all the
         | failures.
         | 
         | You'll also need additional RPC functions and parameters to
         | provide data that had previously been in context for local
         | function calls.
         | 
         | Then the monolith's UI may now need to communicate network
         | delays and failures to the user that were impossible before,
         | when no network segmentation could split the app itself.
         | 
         | Refactoring into microservices will require significant effort
         | even for the most well built monolith.
        
           | ReflectedImage wrote:
           | Microservices not being able to talk to each other over the
           | network basically never comes up.
           | 
           | What you are saying is outright ridiculous.
        
           | Aerbil313 wrote:
           | Honestly we just need better tooling. Where is my micro-
           | service-fleet creator, which handles all these RPC failure
           | modes built in?
        
           | wnolens wrote:
           | I was offering that simplification to compare monolith single
           | host to multi-service single host. No network hops.
           | 
           | But yes, you proceed to explain (some of) the complexities of
           | RPC over the network. Security concerns make this even worse.
           | I could go on..
           | 
           | Engineering is hard
        
         | crabbone wrote:
         | When discussing microservices, the proponents almost always
         | forget the key aspect of this concept, the _micro_ part. The
         | part that makes this stand out (everyone has already heard
         | about services, there's no convincing necessary here; if you
         | feel like you need a service, you just make one, and don't
         | fret about it).
         | 
         | So, whenever someone advocates for this concept, they "forget"
         | to factor in the fragmentation caused by _requiring_ that
         | services be very small. To make this more concrete: if you have
         | a monolith + microservice, you don't have microservices,
         | because the latter implies everything is split into tiny
         | services, no monoliths.
         | 
         | Most of the arguments in favor of microservices fall apart as
         | soon as you realize that it _has to be micro_. And once you
         | back out of the "micro" requirement, you realize that nothing
         | new or deep is being offered.
        
       | dec0dedab0de wrote:
       | Modern micro services are ridiculous, with few exceptions. IIRC
       | the original idea of micro-services was that each team in a large
       | company should provide a documented API for the rest of the
       | company to use instead of just a web page or a manual process.
       | This is in contrast to there being only one development team that
       | decides what gets worked on. It allows individual employees to
       | automate bigger processes that include steps outside of their own
       | department. Somehow that changed to people spinning up containers
       | for things that could be a function.
        
       | jedberg wrote:
       | Last time I did a survey of my peers, companies that were all in
       | on microservices in the cloud spent about 25% of their
       | engineering time on the support systems for microservices. That
       | number is probably a bit lower now since there are some pretty
       | good tools to handle a lot of the basics.
       | 
       | And the author sums up the advice I've been giving for a long
       | time perfectly:
       | 
       | > Splitting an application into services adds a lot of complexity
       | to the overall system. Because of that, it's generally best to
       | start with a monolith and split it up only when there is a good
       | reason to do so.
       | 
       | And usually that good reason is that you need to optimize a
       | particular part of the system (which often manifests as using
       | another language) or your organization has grown such that the
       | overhead of microservices is cheaper than the management
       | overhead.
       | 
       | But some of the things in this article are not quite right.
       | 
       | > Nothing forbids the use of different languages, libraries, and
       | datastores for each microservice - but doing so transforms the
       | application into an unmaintainable mess.
       | 
       | You build a sidecar in a specific language, and any library you
       | produce for others to consume is for that language/sidecar. Then
       | you can write your service in any language you want. Chances are
       | it will be one of a few languages anyway that have a preferred
       | onramp (as mentioned in the article), and if it's not, then there
       | is probably a good reason another language was chosen, assuming
       | you have strong technical leadership.
       | 
       | > Unlike with a monolith, it's much more expensive to staff each
       | team responsible for a service with its own operations team. As a
       | result, the team that develops a service is typically also on-
       | call for it. This creates friction between development work and
       | operational toll as the team needs to decide what to prioritize
       | during each sprint.
       | 
       | A well run platform removes most of the ops responsibility from
       | the team. The only thing they really have to worry about is if
       | they have built their system in a way to handle failures well
       | (chaos engineering to the rescue!) and if they have any runaway
       | algorithms. Otherwise it's a good thing that they take on that
       | ops load, because it helps them prioritize fixing things that
       | break a lot.
        
       | rvrs wrote:
       | I feel like this article conflates monoliths with monorepos. You
       | can have a more flexible setup (though not as "flexible" as
       | microservices, perhaps for the better) by just
       | following the "libraries" pattern mentioned in the article and
       | splitting each of the libraries into their own repos, and then
       | having version bumps of these libraries as part of the release
       | process for the main application server.
       | 
       | Doing it this way gets you codebase simplicity, release schedule
       | predictability, and makes boundaries a bit more sane. This should
       | get you a lot farther than the strawmanned "one giant repo with a
       | spaghetti mess of code" model that the author posits.
        
       | bob1029 wrote:
       | > And think of the sheer number of libraries - one for each
       | language adopted - that need to be supported to provide common
       | functionality that all services need, like logging.
       | 
       | This is the #1 reason we quit the microservices game. It is
       | simply a complete waste of mental bandwidth to worry about with
       | the kind of tooling we have in 2023 (pure cloud native /
       | infiniscale FaaS), especially when you have a customer base (e.g.
       | banks & financial institutions) who will rake you over hot coals
       | for _every single_ 3rd party dependency you bring.
       | 
       | We currently operate with one monolithic .NET binary distribution
       | which is around 250 megs (gzipped). Not even the slightest hint
       | of cracks forming. So, if you are sitting there with a 10~100 meg
       | SaaS distribution starting to get nervous about pedantic things
       | like "my exe doesn't fit in L2 anymore", then rest assured - your
       | monolithic software journey hasn't even _begun_ yet.
       | 
       | God forbid you find yourself with a need to rewrite one of these
       | shitpiles. Wouldn't it be a hell of a lot easier if it was all in
       | one place where each commit is globally consistent?
        
         | _a_a_a_ wrote:
         | Word. BTDT, all services getting out of sync etc. Not worth it.
        
         | tsss wrote:
         | No one says you can't have all of your microservices use the
         | same language...
        
           | bob1029 wrote:
           | Absolutely. This is how we operated when we were at the peak
           | of our tech showmanship phase. We had ~12 different services,
           | all .NET 4.x, built the exact same way.
           | 
           | This sounds like it could work, but then you start to wonder
           | about how you'd get common code into those 12 services. Our
           | answer at the time was nugets, but we operate in a sensitive
           | domain with proprietary code, so public nuget services were a
           | no go. So, we stood up our own goddamn nuget server just so
           | we could distribute our own stuff to ourselves in the most
           | complicated way possible.
           | 
           | Even if you are using all the same language and patterns
           | everywhere, it still will not spare you from all of the
           | accidental complexity that otherwise becomes _essential_ if
           | you break the solution into arbitrary piles.
        
             | ValtteriL wrote:
             | Can't you share them via shared libraries in .NET?
        
               | jcmontx wrote:
               | YMMV but I think you can only do that if you have a
               | monorepo with the shared library and all the
               | microservices
        
               | Atotalnoob wrote:
               | You can, but it's just kind of hinky, you end up relying
               | on a specific path for the shared library
        
           | ljm wrote:
           | Ideally you would use the same language if it had distributed
           | support. Use Erlang or Elixir for example and you have
           | everything you need for IPC out of the box. Might take a
           | little bit more effort if you're on Kubernetes.
           | 
           | One of my problems with microservices isn't really the
           | services themselves, but the insane amount of tooling that
           | creeps in: GRPC, Kafka, custom JSON APIs, protobufs, etc.
           | etc. and a lot of them exist in some form to communicate
           | between services.
        
             | tsss wrote:
             | If you do so much IPC that you have to design your language
             | around it you're probably doing it wrong. I don't think
             | that moving to a monolith would really cut down that much
             | on other technologies. Maybe you can do without Kafka, but
             | certainly you will need some other kind of persistent
             | message queue. You will still need API docs and E2E
             | integration tests.
        
               | jocaal wrote:
               | > If you do so much IPC that you have to design your
               | language around it you're probably doing it wrong...
               | 
               | I'm gonna have to disagree with this. Often when
               | languages add features to their core, it allows for a
               | very different (and often better) approach to writing
               | code. Think Lisp with first class functions, or Rust with
               | the borrow checker. Why should concurrency be any
               | different? This feels like a Blub moment and I would
               | highly recommend you give erlang or elixir a try, just to
               | see what it is about.
        
         | superfrank wrote:
         | I think this is a false dichotomy. Most places I've worked with
         | microservices had 2 or 3 approved languages for this reason
         | (and others) and exceptions could be made by leadership if a
         | team could show they had no other options.
         | 
         | Microservices don't need to mean it's the wild west and every
         | team can act without considering the larger org. There can and
         | should be rules to keep a certain level of consistency across
         | teams.
        
           | orochimaaru wrote:
           | Not sure why you're downvoted - but you're right. We heavily
           | use microservices but we have a well defined stack.
           | Python/gunicorn/flask/mongodb with k8s. We run these on Kafka
           | or REST APIs. We even run jobs and cron jobs in k8s.
           | 
           | Functional decomp is left to different teams. But the
           | libraries for logging, a&a, various utilities etc are common.
           | 
           | No microservices that don't meet the stack unless they're
           | already developed/open source - eg open telemetry collectors.
           | 
           | Edit: I think the article is a path to a book written by the
           | author. It's more of an advertisement than an actual
           | assessment. At least that's my take on it.
        
           | tcgv wrote:
           | > Most places I've worked with microservices had 2 or 3
           | approved languages for this reason (and others) and
           | exceptions could be made by leadership if a team could show
           | they had no other options.
           | 
           | This works well if you have knowledge redundancy in your
           | organization, i.e., multiple teams that are experienced in
           | each programming language. This way, if one or more
           | developers experienced in language 'A' quit, you can easily
           | replace them by rearranging developers from other teams.
           | 
           | In small companies, this flexibility of allowing multiple
           | languages can result in a situation in which developers
           | moving to other jobs or companies will leave a significant
           | gap that can only be filled with recruiting (then
           | onboarding), which takes much more time and will
           | significantly impact the product development plan.
           | 
           | More often than not, the choice between Microservices and
           | Monoliths is more of a business decision than a technical one
           | to make.
        
           | nijave wrote:
           | I think it's fair to say microservices increase the need for
           | governance, whether manual or automated systems. When you
           | start having more than 1 thing, you create the "how do I keep
           | things consistent and what level of consistency do I want"
           | problem
        
         | cpill wrote:
         | > God forbid you find yourself with a need to rewrite one of
         | these shitpiles.
         | 
         | Actually, this is much easier with microservices as you have a
         | clear interface you need to support and the code is not woven
         | into the rest of the monolith like a French plait. The best
         | code is the easiest to throw away and rewrite, because let's
         | face it, the older the code is, the more hands it's been
         | through, the worse it is, but more importantly the less
         | motivated anyone is to maintain it.
        
           | lemmsjid wrote:
           | If the monolithic application is written in a language with
           | sufficient encapsulation and good tooling around multi-module
           | projects, then you can indeed have well known and
           | encapsulated interfaces within the monolith. Within the
           | monolith itself you can create a DAG of enforced interfaces
           | and dependencies that is logically identical to a set of
           | services from different codebases. There are well known
           | design issues in monoliths that can undermine this approach
           | (the biggest one that comes to mind is favoring composition
           | over inheritance, because that's how encapsulation can be
            | most easily broken as messages flow across single-
            | application interfaces, but then I'd also throw in enforced
           | immutability, and separating data from logic).
           | 
           | It takes effort to keep a monolithic application set up this
           | way, but IMHO the effort is far less than moving to and
           | maintaining microservices. I think a problem is that there's
           | very popular ecosystems that don't have particularly good
           | tooling around this approach, Python being a major example--
           | it can be done, but it's not smooth.
           | 
           | To me the time when you pull the trigger on a different
           | service+codebase should not be code complexity, because that
           | can be best managed in one repo. It is when you need a
           | different platform in your ecosystem (say your API is in Java
           | and you need Python services so they can use X Y or Z
           | packages as dependencies), or when you have enough people
           | involved that multiple teams benefit from owning their own
           | soup-to-nuts code-to-deployment ecosystem, or when, to your
           | point, you have a chunk of code that the team doesn't or
           | can't own/maintain and wants to slowly branch functionality
           | away from.
        
             | protomolecule wrote:
             | "composition over inheritance, because that's how
             | encapsulation can be most easily broken as messages flow
             | across a single-application interfaces, but then I'd also
             | throw in enforced immutability, and separating data from
             | logic"
             | 
             | Could you elaborate on this? I see how "separating data
             | from logic" is a problem but what about the other two?
        
           | antonhag wrote:
           | This assumes a clear interface. Which assumes that you get
           | the interfaces right - but what's the chance of that if the
           | code needs rewriting?
           | 
            | Most substantial rewrites cross module boundaries. With
            | microservices, changing a module boundary is harder than in
            | a monolith, where it can be done in a single commit/deploy.
        
           | butlike wrote:
           | I disagree. The older the code is, the more hands it's been
           | through, which generally means it's stronger, hardened code.
           | Each `if` statement added to the method is an if statement to
           | catch a specific bug or esoteric business requirement. If a
           | rewrite happens, all that context is lost and the software is
           | generally worse for it.
           | 
           | That being said, I agree that maintaining legacy systems is
           | far from fun.
        
           | wernercd wrote:
           | > The best code is the easiest to throw away and rewrite,
           | because let's face it, the older the code is, the more hands
           | it's been through, the worse it is, but more importantly the
           | less motivated anyone is in maintaining it.
           | 
            | The older the code, the more testing that's been done and
            | the more stable it should be.
           | 
           | The argument for new can be flipped because new doesn't mean
           | better and old doesn't mean hell.
        
         | starttoaster wrote:
         | > Wouldn't it be a hell of a lot easier if it was all in one
         | place where each commit is globally consistent?
         | 
         | I always find this sentence to be a bit of a laugh. It's so
         | commonly said (by either group of people with a dog in this
         | fight) but seemingly so uncommonly thought of from the other
         | group's perspective.
         | 
         | People that prefer microservices say it's easier to
         | change/rewrite code in a microservice because you have a
         | clearly defined contract for how that service needs to operate
         | and a much smaller codebase for the given service. The monolith
         | crowd claims it's easier to change/rewrite code in a monolith
         | because it's all one big pile of yarn and if you want to change
         | out one strand of it, you just need to know each juncture where
         | that strand weaves into other strands.
         | 
         | Who is right? I sure don't know. Probably monoliths for tenured
         | employees that have studied the codebase under a microscope for
         | the past few years already, and microservices for everyone
         | else.
        
           | tklinglol wrote:
           | My first gig out of school was a .net monolith with ~14
           | million lines of code; it's the best dev environment I've
           | ever experienced, even as a newcomer who didn't have a mental
           | map of the system. All the code was right there, all I had to
           | do was embrace "go to definition" to find the answers to like
           | 95% of my questions. I spend the majority of my time
            | debugging distributed issues across microservices these days;
           | I miss the simplicity of my monolith years :(
        
           | iterateoften wrote:
           | Not so much a ball of yarn but more like the cabling coming
           | out of a network cabinet. It can be bad and a big mess if you
           | let it, but most professionals can organize things in such a
           | way that maintenance isn't that hard.
        
             | starttoaster wrote:
             | Well, the point I was making was that the same can easily
             | be true of microservice architectures. If you have people
             | that don't know what they're doing architecting your
             | microservices, you'll have a difficult time maintaining
             | them without clear and strict service contracts.
             | 
             | It's not clear to me that we're ever comparing apples to
             | apples in these discussions. It would seem to me that
             | everyone arguing left a job where they were doing X
             | architecture the wrong way and now they only advocate for Y
             | architecture online and in future shops.
        
           | disintegore wrote:
           | I've noticed that a lot of industry practices demonstrate
           | their value in unexpected ways. Code tests, for instance,
           | train you to think of every piece of code you write as having
           | at minimum two integrations, and that makes developers who
           | write unit tests better at separating concerns. Even if they
           | were to stop writing tests altogether, they would still go on
           | to write better code.
           | 
           | Microservices are a bit like that. They make it extremely
           | difficult to insert cross cutting concerns into a code base.
           | Conditioning yourself to think of how to work within these
           | boundaries means you are going to write monolithic
           | applications that are far easier to understand and maintain.
        
           | protomolecule wrote:
           | "because you have a clearly defined contract for how that
           | service needs to operate"
           | 
           | But if you need to change the contract the changes span the
           | service and all of its clients.
        
         | bibabaloo wrote:
         | How many developers do you have working on that monolith
         | though? The size of your binary isn't usually why teams start
         | breaking up a monolith.
        
         | haolez wrote:
         | This sounds reasonable and correct, but my personal experience
         | includes more monolithic messes than microservices ones.
        
         | duncan-donuts wrote:
         | I was in a meeting to talk about our logging strategy at an old
         | company that was starting microservices and experiencing this
         | problem. In the meeting I half-heartedly suggested we write the
         | main lib in C and write a couple wrapper libs for the various
         | languages we were using. At the time it felt kinda insane but
         | in hindsight it probably would have been better than the
         | logging mess we created for ourselves.
        
       | 0xbadcafebee wrote:
       | So where's "the costs of monoliths" post? They don't show up
       | here, because everyone is out there cluelessly implementing
       | microservices and only sees those problems. If everyone were out
       | there cluelessly implementing monoliths, we'd see a lot of
       | "monoliths bad" posts.
       | 
       | People don't understand that these systems lead to the same
       | amount of problems. It's like asking an elephant to do a task, or
       | asking 1000 mice. Guess what? Both are going to have problems
       | performing the task - they're just different problems. But you're
       | not going to get away from having to deal with problems.
       | 
       | You can't just pick one or the other and expect your problems to
       | go away. _You will have problems either way._ You just need to
       | pick one and provide solutions. If 'what kind of architecture'
       | is your _biggest hurdle_, please let me work there.
        
       | wg0 wrote:
       | "Monolith first."
       | 
       | Also, a call stack can go pretty deep. You can't go that deep
       | with microservices; everything has to be a first hop or you risk
       | adding latency.
       | 
       | Also, transactional functionality becomes a lot more challenging
       | to implement across services and databases because it's
       | blasphemous to share a database among microservices.
        
       | DerekBickerton wrote:
       | Separation of Concerns as a Service
        
       | Sohcahtoa82 wrote:
       | I've found that microservices are fine. In fact, they're pretty
       | damn neat!
       | 
       | But what isn't fine are _nano_ services, where a service is
       | basically just a single function. Imagine creating a website
       | where you have a RegisterService, LoginService,
       | ForgotPasswordService, ChangePasswordService, Add2FAService, etc.
       | 
       | While I haven't seen it THAT bad, I've seen things pretty close
       | to it. It becomes a nightmare because the infra becomes far more
       | complex than it needs to be, and there ends up being _so much
       | repeated boilerplate_.
       | 
       | Also, obligatory: https://youtu.be/y8OnoxKotPQ
        
       | mrinterweb wrote:
       | A bit over a decade ago, I was sold on the concept of
       | microservices. After implementing, maintaining, and integrating
       | many microservices; I have realized how incredibly exhausting it
       | is. Sure, microservices have their place, but the often higher
       | costs associated with microservices should be considered when
       | developing a new service. Focus on the costs that
       | are less tangible: siloed knowledge, integration costs, devops
       | costs, coordinating releases, cross-team communication, testing,
       | contract versioning, shared tooling. I could go on, but that's
       | just a sample of some of the less obvious costs.
       | 
       | I feel that the collective opinion of microservices has shifted
       | towards trepidation, as many have experienced the pains
       | associated with microservices.
        
       | mattbillenstein wrote:
       | I think most organizations should take a stab at scaling out with
       | in-process services first - i.e., instead of an RPC it's a
       | function call; instead of a process, it's a module, etc.
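       | 
       | A rough sketch of what that in-process boundary can look like
       | (Python; the package layout and names are just placeholders):
       | 
       |     # billing/api.py - the only module other teams may import
       |     from billing import _internal  # implementation detail
       | 
       |     def charge(customer_id: str, cents: int) -> str:
       |         # in-process 'service' call; could later become an RPC
       |         return _internal.create_charge(customer_id, cents)
       | 
       |     # orders/checkout.py
       |     from billing.api import charge  # not billing._internal
       | 
       |     def checkout(customer_id: str, total_cents: int) -> str:
       |         return charge(customer_id, total_cents)
       | 
       | The boundary is the same as with a microservice - a narrow,
       | documented call surface - just without the network in between.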
       | 
       | In a well-factored app, stepping on one another's toes should be
       | the exception, not the rule.
       | 
       | Unfortunately, I think modern web frameworks don't do a good job
       | of enabling this. They often encourage "God object" patterns and
       | having code/data very tightly coupled.
        
       | andersrs wrote:
       | Goldilocks went inside the Git Hut. First she inspected the git
       | repo of the great, huge bear, and that was 2TB which was far too
       | large and monolithic for her. And then she tasted the 281 repos
       | of the middle bear, and they were far too small and numerous for
       | her. And then she went to the 10 repos of the little, small wee
       | bear, and checked out those repos. And they were neither too
       | large nor too small and numerous, but just right; each had its
       | own reason to be and she liked it so well, that she sat down and
       | cloned it all.
        
       | parentheses wrote:
       | The biggest negative of Microservices is that it creates hard
       | ownership boundaries.
       | 
       | This affects planning - more effort is needed to understand
       | boundaries that need adjustment or crossing.
       | 
       | This affects execution - needing to cross a boundary comes with
       | inefficiencies and an automatic need to move in lockstep.
       | 
       | This affects future projects - service boundary clarity works
       | against flexibility.
        
       | PedroBatista wrote:
       | Similar to frontend devs thinking a "modern web-app" can only
       | be built with frontend frameworks/libs like React/Vue/Svelte,
       | etc, lately I feel there's an idea floating around that
       | "monolith" equals running that scary big black ball of tar as a
       | single instance and therefore "it doesn't scale", which is
       | insane.
       | 
       | Another observation is that the overall amount of code is much
       | bigger and most of these services are ~20% business/domain code
       | and ~80% having to deal with sending and receiving messages from
       | other processes over the network. You can hide it all you want,
       | but at the end of the day it's there and you'll have to deal with
       | the network in one way or another.
       | 
       | Just like the frontend madness, this microservice cult will only
       | end once the economy goes to crap and there's no money to support
       | all these Babel Towers of Doom.
       | 
       | PS: microservices have a place, which is inside a select few
       | companies that get more out of them than the cost they pay.
        
         | superfrank wrote:
         | I think the thing people normally miss about microservices is
         | that the goal of microservices is usually to solve organization
         | and people problems and not technological ones. There's some
         | tech benefits like allowing services to scale independently,
         | but the biggest benefit is clear ownership boundaries and
         | preventing code from becoming overly coupled where it doesn't
         | need to be.
         | 
          | If you're a small team working on a single product, you
         | probably don't need microservices since you likely don't have
         | the type of organizational problems that microservices solve.
         | Microservices are likely premature optimization and you're
         | paying the price to solve problems you don't yet have.
        
           | koliber wrote:
           | Yep.
           | 
           | There are situations where microservices genuinely add net
           | value. In discussions like this one, people make valid points
           | for microservices and for monoliths. Often, words like
           | "large" and "many" are used without providing a sense of
           | scale.
           | 
            | Here is a heuristic. It's not a hard rule. There are likely
           | good counter examples. It does sketch a boundary for the
           | for/against equation for microservices. Would love to hear
           | feedback about whether "100" is that number or a different
           | heuristic would be more accurate.
           | 
           | Engineering departments with fewer than 100 engineers should
           | favor monoliths and seriously question efforts to embrace
           | microservices. Above that, there's an increasing chance the
           | microservices could provide value, but teams should stick to
           | a monolith as long as possible.
        
         | economist420 wrote:
         | While I agree with you that a lot of websites overuse
         | javascript and frameworks, can you tell me what else I'm
         | supposed to use if I'm going to build a desktop-class web app
         | without it becoming a huge mess, or without ending up
         | reinventing the same concepts that already exist in these
         | frameworks?
        
           | PedroBatista wrote:
           | "desktop class web app" is a subset of "modern web-app". If
           | you need frameworks, use them.
        
             | economist420 wrote:
              | The meaning behind these is pretty blurry; what's
              | considered a website vs. an app is a spectrum. I consider
              | a web app something that has a similar UI experience to a
              | desktop app, as the word "application" is derived from
              | the desktop.
        
       | highspeedbus wrote:
       | Monoliths are like heavy trucks.
       | 
       | Microservices are like motorcycles.
       | 
       | Surely you can deliver 15 metric tons of wood with either of
       | them. You will just need _a lot_ of motorcycles to match
       | capacity.
       | 
       | Somehow a big part of industry became convinced that _of course_
       | that approach is better, since motorcycles are cheaper*, easier
       | to scale*, and solve some managerial problems*.
       | 
       | * They don't, IMHO.
       | 
       | >its components become increasingly coupled over time
       | 
       | >The codebase becomes complex enough that nobody fully
       | understands every part of it
       | 
       | >if a change introduces a bug - like a memory leak - the entire
       | service can potentially be affected by it.
       | 
       | I believe the only real solution to those challenges is
       | Competency, or Programmer Professionalism. Uncle Bob had some
       | great points on this topic.
       | 
       | https://www.youtube.com/watch?v=BSaAMQVq01E
        
       | kgeist wrote:
       | >As each team dictates its own release schedule and has complete
       | control over its codebase, less cross-team communication is
       | required, and therefore decisions can be taken in less time.
       | 
       | In my experience, it's the opposite. When a feature spans several
       | microservices (and most of them do), there's much more
       | communication to coordinate the effort between several isolated
       | teams.
        
       | ph4evers wrote:
       | I still don't fully understand what makes something a monolith.
       | 
       | For example, I have a big app which is serving 90% of the API
       | traffic. I also have three separate services, one for a camera,
       | one to run ML inference, and one to drive a laser welding
       | process. They are split up because they all need specific
       | hardware and preferably a separate process (one per gpu for
       | example).
       | 
       | Is this a monolithic app or a microservice architecture?
        
         | flakes wrote:
         | I would call that a micro-service architecture using a mono-
         | repo.
         | 
         | I think a lot of people here are conflating mono-repo/poly-repo
         | with a mono-deployment. You can easily add in extra entrypoint
         | executables to a single mono-repo. That allows initiating parts
         | of the system on different machines, allowing scaling for
         | skewed API rates across your different request handlers.
         | 
         | Similarly, you can create a monolith application built from a
         | poly-repo set of dependencies. This can be easier depending on
         | your version control system, as I find git starts to perform
         | really poorly when you enter multi-million SLOC territory.
         | 
         | At my job we have a custom build system for poly-repos that
         | analyzes dependencies and rebuilds higher level leaf packages
         | for lower level changes. Failing a dependency rebuild gets your
         | version rejected, keeping the overall set of packages green.
        
         | stavros wrote:
         | > Is this a monolothic app or a micro service architecture?
         | 
         | Yes.
         | 
         | To be less facetious, the continuum goes from "all the code in
         | a single process" to "every line of code is a separate
         | service". It's not either-or.
        
         | SanderNL wrote:
         | I tend to call these things "distributed monoliths" assuming
         | the camera, ML and laser driver are integral to the
         | functionality of the system.
         | 
         | There is no hard rule that I know of but my heuristic is
         | something like "can I bring instances up and down randomly
         | without affecting operations (too much)". If so, I'd call that
         | a microservice(ish) architecture.
         | 
         | The waters have been muddied though. Microservices were IMO
         | once associated with Netflix-like scalability, bringing up and
         | down bunches of "instances" without trouble. But nowadays what
         | used to be good old service-oriented architecture (SOA) tends
         | to also be called microservices.
         | 
         | SOA can also be about scalability, but it tended to focus more
         | on how to partition the system on (business) boundaries. I
         | guess like you did.
         | 
         | It's all a bit muddy in my experience.
        
       | Aerbil313 wrote:
       | Looking at the "costs" section and considering the recent
       | developments, it seems better tooling will increasingly be
       | helpful in running a software company. See for example Nix
       | Flakes. You can now have instant and performant declaratively
       | defined identical environments in production and development, and
       | across teams.
       | 
       | IMHO widespread use of such technologies (as opposed to
       | heavyweight ones like Docker) could relieve one of the biggest
       | costs of microservices: They are expensive to run.
        
       | john-tells-all wrote:
       | Monoliths are _so_ much easier to test and deploy!
       | 
       | Even if one service needs to be developed separately, it can
       | still live in the monolith. For dev, it's easier to test. For
       | production, deploy the _same_ monolith but have it only handle a
       | single service, depending on how it's deployed. You get all the
       | benefits of a monolith plus a little of the benefit of separate
       | services.
       | 
       | Microservices are okay, but "kick up" a lot of issues that can be
       | slow/awkward to solve. Testing being the big one. If the business
       | decides it's worth the exponential complexity then that's fine.
        
       | Gluber wrote:
       | Every company I have advised jumped on the microservice
       | bandwagon some time ago... Here is what I tell them:
       | 
       | 1. Microservices are a great tool... IF you have a genuine need
       | for them.
       | 
       | 2. Decoupling with services, in and of itself, is not a goal.
       | 
       | 3. Developers who are bad at writing proper modular code in a
       | monolithic setting will not magically write better code in a
       | microservice environment. Rather it will get even worse, since
       | APIs (be it gRPC, RESTful or whatever) are even harder to
       | design.
       | 
       | 4. Most developers have NO clue about consistency or how to
       | achieve certain guarantees in a microservice setting. (Fun
       | fact: my first question to developers in that area is: define
       | your notion of consistency, before we get to the fun stuff
       | like RAFT or PAXOS.)
       | 
       | 5. You don't have the problems where microservices shine (e.g.
       | banks with a few thousand RPS).
       | 
       | 6. Your communication overhead will dramatically increase.
       | 
       | 7. An application that does not genuinely need microservices
       | will be much cheaper as a monolith with proper code separation.
       | 
       | Right now we have a generation of developers and managers who
       | don't know any better, and just do what everybody else seems to
       | be doing: microservices and Scrum... and then wonder why their
       | costs explode.
        
       | camgunz wrote:
       | People really underestimate the eventual consistency thing. It's
       | 100% the case your app will behave weirdly while it's processing
       | some sort of change event. The options for dealing with this are
       | real awkward, because the options for implementing transactions
       | on top of microservices are real awkward. Product will absolutely
       | hate this, and there won't really be anything you can do about
       | it. Also fun fact: this scales in reverse, as in the bigger and
       | better your company gets, the worse this problem becomes, because
       | more services = more brains = more latency.
       | 
       | Relatedly, you can't really do joins. Like, let's say you have an
       | organizations service and a people service. You might want to
       | know what people are in which organizations, and paginate
       | results. Welcome to a couple of unpalatable options:
       | 
       | - Make some real slow API calls to both services (also implement
       | all the filters and ordering, etc).
       | 
       | - Pull all the data you need into this new 3rd service, and
       | justify it by saying you're robust in the face of your dependency
       | services failing, but also be aware you're adding yet another
       | layer of eventual consistency here too.
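       | 
       | To make the first option concrete, roughly what the "join" ends
       | up looking like in application code (Python; the client objects
       | and their methods are hypothetical):
       | 
       |     def org_members_page(org_client, people_client, org_id,
       |                          page_size=50, cursor=None):
       |         # one call to the org service for a page of member ids
       |         page = org_client.list_members(
       |             org_id, limit=page_size, cursor=cursor)
       |         # N more calls to the people service to hydrate rows
       |         people = [people_client.get(i) for i in page.ids]
       |         # sorting/filtering the database used to do now lives
       |         # here, over data that may already be stale
       |         people.sort(key=lambda p: p["name"])
       |         return people, page.next_cursor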
       | 
       | This is the simplest this problem can be. Imagine needing data
       | from 3 or 13 services. Imagine needing to synchronize UUIDs
       | from one service to 20 others. Imagine needing to also delete a
       | user/organization from the 30 other services that have "cached"
       | it. Etc etc.
       | 
       | I used to think microservices were an adaptation to Conway's law,
       | but now I really think it's a response to the old days when we
       | didn't have databases that scaled horizontally. But we do now
       | (Cockroach, Big Query, etc) so we really should just move on.
        
         | fullspectrumdev wrote:
         | The joins problem is fucking _killing_ me in some personal
         | projects that will need a total rewrite at this point
        
       | tantaman wrote:
       | Really doesn't make sense to me that people jump to microservices
       | rather than creating module boundaries in their monolith.
        
       | bullen wrote:
       | Microservices don't cost as much if you deploy them everywhere so
       | you get both vertical and horizontal scaling without lookup cost.
       | 
       | Of course then you need the same stack which allows multiple apps
       | deployed everywhere.
        
       | aimon1 wrote:
       | Microservices are necessary and the best way to architect
       | something new that is going to be used at scale. In my experience
       | working with monolithic architecture with 20+ teams at a large
       | Tech company, I have found it takes multiple years to convert to
       | microservices. Rebuilding is generally possible in half as much
       | time, and gives you the opportunity to hire good talent, motivate
       | existing employees and use the latest tech. Thoughts?
        
       | shoelessone wrote:
       | Anytime these discussions come up I always wonder if there are
       | any great examples in the form of a deep technical analysis of
       | microservice architecture?
       | 
       | I've built things that I've called microservices, but generally
       | have failed to do a complete job, or am left with something
       | that feels more like a monolith with a few bits broken out to
       | match scaling patterns.
       | 
       | I know there are talks and papers by Netflix for example, but if
       | anybody knows of any smaller scale, easier-to-grok talks or
       | papers that go into common pitfalls and solve them for real (vs
       | giving a handwavey solution that sounds good but isn't concrete)
       | I'd love to check it out.
        
         | screamingninja wrote:
         | This is my gold reference: https://microservices.io/
         | 
         | Most issues I have seen around microservices circle around bad
         | design decisions or poor implementations. Both are also major
         | issues for monoliths, but since those issues do not surface
         | early on, it is easier to start developing a monolith and take
         | on the technical debt to be dealt with later on.
         | 
         | Microservices architecture takes time and effort, but pays huge
         | dividends in the long run.
        
       | benreesman wrote:
       | I've always thought the monolith vs. microservice debate misses
       | the point.
       | 
       | Taking an RPC call has costs. But sometimes you need to. Maybe
       | the computation doesn't fit on any of your SKUs, maybe it is best
       | done by a combination of different SKUs, maybe you need to be
       | able to retry if an instance goes down. There are countless
       | reasons that justify taking the overhead of an RPC.
       | 
       | So, do you have any of those reasons? If yes, take the RPC
       | overhead in both compute and version management and stuff.
       | 
       | If no, don't.
       | 
       | Why is this so contentious?
        
       ___________________________________________________________________
       (page generated 2023-10-30 23:00 UTC)