[HN Gopher] Starting with microservices
___________________________________________________________________
Starting with microservices
Author : feross
Score : 70 points
Date : 2022-01-19 20:03 UTC (2 hours ago)
(HTM) web link (arnoldgalovics.com)
(TXT) w3m dump (arnoldgalovics.com)
| yawnxyz wrote:
| I don't understand why people treat microservices or monoliths as
| an either-or choice. If I find my various projects using the same
| functionality over and over, then I'll split that off as a
| microservice to serve my monolith projects of various sizes.
|
| It's like functions: don't turn code into a function until you
| need it in three different places.
| galovics wrote:
| I agree, but the trend still seems to prefer one or the other,
| and that's why it's so important to talk about the potential
| downsides instead of closing our eyes and saying everything is
| gonna be better if we do microservices.
| foobarian wrote:
| Maybe it's just the activation energy. Until you have the setup
| for making a microservice, you're kinda limited to a monolith.
| Once you pay the setup cost though, making incrementally more
| services is trivial so you end up on the other end of the
| spectrum.
| scanr wrote:
| I have a theory that auto-wired Dependency Injection in a single
| DI container is partly to blame for monolith spaghetti. Once an
| app reaches a certain size and anything can depend on anything
| else, reasoning about the whole can become difficult.
|
| I think there is value in wiring a monolith together in such a
| way that each coarse-grained subcomponent exposes a constrained
| interface to the rest of the system (payments, orders,
| shipping, customers, etc.) before needing to break it into
| distributed microservices.
|
| Note: I quite like Dependency Injection, I just think the 1 giant
| bag of dependencies can lead to complexity at scale.
| loevborg wrote:
| Check out Polylith for a description of this great pattern
| https://polylith.gitbook.io/polylith/
| bob1029 wrote:
| > Once an app reaches a certain size and anything can depend on
| anything else, reasoning about the whole can become difficult.
|
| I have experienced this pain so many times and in so many
| different varieties. Often, you cannot meaningfully subdivide
| the problem space without ruining the logical semantics per the
| business (i.e. bowl of spaghetti).
|
| An alternative is to embrace the reality that circular
| dependencies are actually an inevitable and appropriate way to
| model many things.
|
| The example scenario I like to use is that of a typical banking
| customer. One customer likely has a checking and savings
| account. For each of those accounts, there are potentially
| multiple customers (joint ownership). Neither of these logical
| business types will ever "win" in the real world DI graph.
| Certainly you can start to invent bullshit like AccountCustomer
| and CustomerAccount, but that only gets you 1 layer deeper into
| an infinitely-recursive rabbit hole of pain and suffering.
| There also exists the relational path, but I have heavily
| advocated for that elsewhere and it is not always applicable
| when talking about code-time BL method implementation. Being
| able to model things just as they are in the real world is a
| big key to success in the more complicated problem domains.
|
| Instead of trying to control what _depends_ on what, I shifted
| my thinking to:
|
| > What needs to start up and in what order?
|
| Turns out, most things don't really care in practice. The only
| thing I have to explicitly initialize before everything else in
| my current code base is my domain model working set (recovery
| from snapshot/event logs) and settings. I decided to not use DI
| for any business services. Instead, all services become a
| static class with static members that can be invoked from
| anywhere. This also includes the domain model instance which is
| used as the in-memory working set. This type just contains an
| array of every subordinate domain type (Customers, Accounts,
| etc.). By having the working set available as a public static
| type, every service can directly access it without requiring
| method injection. If I was working with a different problem
| domain (or certain bounded context within this one), I might
| prefer method-level injection.
|
| Yes - according to every book on programming style you ever
| read, this is an abominable practice. Unit testing this would
| be difficult/impossible. But you know what? It works. It's
| simple. I can teach a novice how to add a new service in an
| hour. A project manager stumbling into AccountService might
| actually walk away enlightened. You can circularly-reference
| things at runtime if you need to. I've got some call graphs
| that bounce back and forth between customer & account services
| 5+ times. And it totally makes sense to model it that way too
| as far as the business is concerned. Everyone is happy.
| mcv wrote:
| I notice there are a lot of comments here saying this exact same
| thing, and it's also what seems most sensible to me. Yet
| microservices are getting all the hype. Should we hype
| thorough modular design more?
| 4rb1t wrote:
| In theory it all sounds great and makes total sense, but when
| the company grows into a billion-dollar firm and hires engineers
| to work on said monolith, that's when things break down. The
| founding team was a closely knit team and ensured there was no
| spaghetti code, but the moment you move on from that closely
| knit group it's hard to enforce constraints.
|
| I have worked at multiple companies that started out as a
| monolith and are still running the monolith in some form or
| other while breaking it down into microservices.
| throwaway894345 wrote:
| I've never been part of a monolith that used a DI framework,
| but I've seen quite a few monoliths fail (as in "the project
| becomes too convoluted and iteration slows to a crawl until the
| project is canceled or effectively rewritten") and I certainly
| believe that one important reason microservices do well is that
| they enforce the modularity that you describe. That said, a lot
| of critics of microservices describe similar issues of
| indiscernible chaos, so either I've been very fortunate or
| microservice critics are gaslighting us. :)
| lmilcin wrote:
| I wouldn't blame (badly implemented) DI for all the problems of
| monoliths.
|
| I think the real main issue is lack of discipline when dividing
| the application into modules.
|
| Spaghetti is basically defined as an application where real
| modularisation does not exist and everything talks to
| everything.
|
| It is much easier to work with an application when you can
| abstract parts of it away while solving your problem. You
| effectively work on a much smaller part of the application.
|
| Spaghetti == you effectively have to take into account the
| possibility of the piece of code you are looking at interacting
| with any other piece of code in the application.
|
| Well modularised application == you only need to take into
| account the contents of your current module and the interface
| of the other modules you are using.
|
| One reason why microservices sort of work (when done well) is
| because they force people to think about APIs and how those
| services talk to each other.
|
| In most cases you could just put these microservices as modules
| in a monolithic application and expend the effort on ensuring
| APIs and application structure.
|
| I have successfully rolled back a couple of microservice
| initiatives by integrating the services into monoliths. This
| usually results in the team getting back a lot of their time
| because their environment suddenly becomes much simpler. Fewer
| applications to manage, less network communication, fewer
| possible ways for things to break, fewer frameworks, fewer
| resources needed to run the application, fewer processes (like
| processes around deployment, release management, etc.), less
| boilerplate code, and so on. The list is very long.
|
| Of course, when you work on a large monolith vs a lot of small
| microservices, it is now important to be able to structure your
| applications. But there is also an opportunity for improvement.
| jstimpfle wrote:
| Ravioli == you have so many small distinct things that are
| hard to stick together, which makes it hard to build a larger
| structure out of them.
| lmilcin wrote:
| Then try to not make the modules too small?
|
| Surely, if you were ready to make an entire separate
| application for it, it will not be worse when you turn it
| into one package of a monolith?
|
| Also you can build nested structures with modules (like a
| larger module consisting of smaller submodules).
|
| Divide and conquer. This is how abstraction works in a
| nutshell. You start with a large problem, divide it into
| smaller parts (modules), each part you treat as a separate
| problem dividing it into smaller modules, and so on.
| malort wrote:
| This is exactly what Shopify does with their monolithic Rails
| app [1]. I worked at a company with ~200 engineers that used
| the same general architecture and I really enjoyed it. We got a
| lot of the benefits (clear interfaces, teams able to work on
| their system without having to understand the whole platform,
| build optimization, etc) without any of the operational
| headaches that come with microservices.
|
| [1]: https://shopify.engineering/shopify-monolith
| kosolam wrote:
| The author needs to express themselves more succinctly; maybe
| that will also help with their architecture struggles.
| marcosdumay wrote:
| Do you know what kind of software has an absurd level of
| horizontal scalability by default? Web servers.
|
| The idea of splitting your web servers into multi-tiered web
| servers for scalability is, well, weird. Yet, somehow it's the
| main reason people keep pushing it. Even this article repeats
| this.
|
| There's nothing on microservices that adds scalability. They make
| it convenient to deal with data partitioning, but it's an
| ergonomics change, not one of capability.
| monocasa wrote:
| I agree with you for the most part, but to play devil's advocate,
| I believe the argument is that the data stores don't tend to
| horizontally scale the same way except in the most 'cdn'-like
| data flows.
| delusional wrote:
| > but it's an ergonomics change, not one of capability.
|
| Isn't that a bit of an empty argument? Literally anything
| Turing complete is an ergonomics change when compared to
| anything else Turing complete.
|
| We are writing for the same computer. There's nothing different
| (fundamentally) about the kernel and my program.
| smrtinsert wrote:
| It's scalability with regard to parallelism of work streams and
| business units. One module can respond to the market faster if
| all the other business units and teams don't have to weigh in.
| JamesSwift wrote:
| It seems over time some nuance has been lost in translation on
| this. Microservices weren't 'more scalable' than monoliths,
| they were 'more appropriately scalable'. In other words, you
| could scale parts independently and shape your infrastructure
| more effectively for your workload. e.g. If your bottleneck is
| logins, you scale the LoginService and don't need extra copies
| of the AppointmentService running to keep up.
| aflag wrote:
| Sure, but I think the point is that it's unlikely the
| bottleneck will be in the web server, but in whatever
| database you're using for your logins. Just because you have
| a single program, it doesn't mean it is required to use a
| single database.
|
| But even if the webserver is the problem, it feels unlikely
| that you'll end up saving much in infrastructure cost by
| scaling just the login service rather than just deploying
| more copies of your webserver.
|
| Not saying it is impossible, but it's unlikely for that to be
| a good justification to adopt microservices. I'm not saying
| that there aren't good reasons to use microservices, but I
| agree that scalability is not a good selling point. I'd
| actually argue that it's harder to build scalable
| microservices than monoliths.
| antihero wrote:
| I guess the idea is to split the workload into parts so if one
| part of the workload is proving to use more resources, it can
| then get a larger dedicated pool of resources.
|
| Plus being able to choose different tech for different types of
| work, and also being able to split between teams when your org
| gets more vast.
|
| Though I guess this is more about using services than
| microservices, per se.
| Nextgrid wrote:
| I never got the "scalability" argument of microservices. You
| can trivially deploy multiple instances of your monolithic web
| application - chances are you're already doing so by running
| multiple workers/threads in your application server. Spreading
| that to other machines is trivial.
|
| The real issue is in scaling the data store. Microservices
| typically work around that problem by each having their own
| separate database, but nothing prevents your monolith from also
| talking to different DBs.
| baash05 wrote:
| Totally.. and adding read replicas into the mix too. Even Rails
| allows you to stipulate the DB each model lives in, in a
| rather trivial manner.
| pas wrote:
| And there are DB proxies that can hide this complexity from
| the application (to a degree).
| pas wrote:
| Monolith becomes a problem when it becomes too big to
| build/test/deploy/debug in a sane fashion.
|
| And when you have alternatives, great. But for example game
| devs don't. Or operating system devs. Though of course these
| are all active areas of ongoing research (for decades!).
|
| If the problem/subject were that easy we would have already
| solved it and it wouldn't be a hot topic.
|
| And since it's likely a nonlinear problem it's hard to map
| the problem-solution space, hard to get a good mental model
| for it, so we use the next best thing: stories. We have
| success stories and parables on what not to do. (And whole
| conferences dedicated to telling them :))
| marcosdumay wrote:
| Building, ok, but for testing or debugging microservices
| aren't really an alternative. Just like scalability, they
| add no new capability here. The most they can do is adapt
| better to some set of procedures than a monolith.
|
| For deploying they are nothing but a very large hindrance.
| a45a33s wrote:
| i think everything is already covered with this video
| https://www.youtube.com/watch?v=y8OnoxKotPQ
| jiggawatts wrote:
| Architecture astronauts _love_ microservices.
|
| I'm watching a government customer take simple, cohesive systems
| developed by a small team (4-5 people) and _split it up_ into
| tiny little pieces scattered across different hosting platforms
| and even clouds.
|
| Why?
|
| Because it's "fun" for the architects and pads out their resume.
|
| Just now, I'm watching a "digital interactions" project that will
| have dozens of components across two clouds, including multiple
| ETL steps. In all seriousness, half of that could be replaced by
| a _database index_ , and the other half with a trigger.
|
| They're seriously going to deploy clusters of stuff on Kubernetes
| for 100 MB of data to make sure it "scales"... to 200 MB. Maybe.
| Eventually.
|
| What kills me is that now that they've made the decision to over-
| engineer the thing, my consultancy firm can't bid on the tender
| because we don't have the appropriate experience building over-
| engineered monstrosities!
|
| The simple and effective solutions to problems we've delivered in
| the past are _disqualifying_ us from work.
|
| You guessed it: my next project will be an architect's wet dream
| and will be over engineered just so that we can say on tender
| applications that "yes, we have the relevant experience".
|
| Gotta play the game by the rules...
| AtNightWeCode wrote:
| noob
| marginalia_nu wrote:
| Lifecycle management is also a big part of it, I think.
|
| My search engine is a hybrid architecture, with some monoliths
| and some microservices.
|
| * Index - 5+ minute start time and uses 60% of system RAM
|
| * Search Server (query parser, etc.) - 30 second start time due
| to having to load a bunch of term frequency data, low resources,
| ephemeral state
|
| * Assistant Server (dictionary lookups etc.) - fast start, medium
| resources, stateless
|
| * Crawler - only runs sometimes, high resources, stateful
|
| * Archive Server - fast start, low resources, ephemeral state
|
| * Public API gateway - 5 second start time, low resources,
| ephemeral state
|
| + a few others
|
| A lot of how it's divided comes down to minimizing disruption
| when deploying changes. I don't have the server
| resources to run secondary instances of any of these, so I'm
| working with what I've got. If I patch the crawler, I don't want
| the search function to go down. If I patch the search function, I
| don't want to have to wait 5 minutes to restart the index.
|
| It would certainly be cleaner to say break apart the Index
| service in terms of design, as it does several disparate things,
| but those things have a resource synergy which means I can't, not
| without buying another server and running a small 100 GbE network
| between them. Seems silly for a living room operation.
| teekert wrote:
| We have a big codebase where I work and it's written as a set of
| microservices. I'm a bit of a noob, but I thought that
| microservices were always containerized. Turns out we don't do
| that and it all runs in a very bespoke and poorly documented VM.
| Can we still call this a microservices-based architecture? I feel
| like we cannot; it feels like the worst of both worlds...
| monocasa wrote:
| I've come to the conclusion that microservices as usually
| implemented are simply a Conway's law concern.
|
| We haven't had a great track record defining the semantics of
| component boundaries, but HTTP is simple enough that it
| constrains code flow in a particular way, taking a lot of
| misunderstandings and bikeshedding discussions off the table,
| and it is the lingua franca for external services owned by other
| orgs, so your engineers can be universally expected to understand
| those semantics. There is also a nice decoupling in letting teams
| make technology choices: letting that drift on a per-team basis
| allows experimentation with different technologies without having
| to get the whole org on board.
|
| Maybe I need to go on the talk-giving consultant circuit,
| reselling a restricted view of DI as "hybrid service
| architecture! Write your code as microservices but DevOps can
| colocate your apps in the same process if that's more efficient!"
| Lolz
|
| It's also quite a nerd trap as we developers love to separate
| components into as small of boxes as possible, but just like I've
| seen super deep object hierarchies that didn't gain you anything
| at the end of the day, it's easy to fall into the trap of
| spinning every tiny bit of code into its own service.
|
| I'm creating my own backend arch right now from scratch, and
| really trying to look at each network traversal from a "what does
| drawing this boundary actually get me" perspective. And I'm only
| splitting that out because the third party protocol being
| implemented is inherently separate services in an interesting
| way.
| [deleted]
| bob1029 wrote:
| I have done the full theme park ride on
| monolith->microservices->monolith. Both have ups and downs.
|
| The most important thing I learned is that "microservices" in
| absolutely no way necessitates "bullshit spread across multiple
| cloud vendors and other scenarios involving more than 1
| computer". What part of microservices says things _must_ be
| separated by way of an arbitrary wire protocol?
|
| We now have a "monolith" (process) that is comprised of many
| "microservices" (class files), each of which is responsible for
| its own database, business logic, types, etc. These services are
| able to communicate amongst themselves using the simple path of
| direct method invocation.
|
| If you are looking to scale up your capacity or even make things
| more resilient, microservices vs monolith is really not the
| conversation you need to be having. If you are trying to better
| organize your codebase, you might be in the right place but you
| also need to be super careful with how you proceed.
|
| We wasted a solid 3 years trying to make microservices solve all
| of our problems. Looking back, we spent more time worrying about
| trying to put Humpty Dumpty back together again with JSON APIs
| and other technological duct tape than what our customers were
| complaining about.
| jayd16 wrote:
| Farming out libraries or subprojects to separate teams is an OK
| way to go as long as you don't mind a single language/build
| system. It's not as flexible, but you maybe get some perf out of
| it.
|
| You don't get the convenience of a single datastore with simple
| transactions but it sounds like you prefer the flexibility of
| services owning their own data.
|
| As it turns out there's no silver bullet. Do what works for
| you.
| frozenport wrote:
| >> single language/build system
|
| Imagine designing your entire architecture around your build
| system.
| brundolf wrote:
| You could even have a mono-repo that holds services which run
| as separate processes, if you need that. That way you still get
| the benefits of sharing code, types, version numbers, build
| processes, etc, which seem like the main headaches with the
| usual approach.
| bob1029 wrote:
| We passed right by this mode of operation while migrating
| from microservices=>monolith. Everything is a separate
| process but running on the same box. Ran that way for quite
| some time. We pulled services into the main process one-by-
| one. It went about as smoothly as one could have hoped for.
| rapind wrote:
| Pretty sure microservices will be remembered as a horribly
| convoluted stopgap once we have better language
| modularization.
|
| They encourage modular patterns, which is usually good, but all
| that plumbing will eventually become unnecessary. I can't help
| but remember building 6 versions of each class in the old EJB
| days whenever I hear microservices hype.
| antishatter wrote:
| Pretty clearly the pros and cons come down to the application
| and necessary stability. Need ultra stable? More and smaller
| microservices usually better. Need fairly stable? Bigger
| services is fine. Need it to work generally? Build it however
| you can.
| withinboredom wrote:
| Currently at my day job, we have a monolithic code base with
| multiple entry points. If you update a library, you have to
| update the client or keep the changes backwards compatible. I
| really prefer this over multiple micro services.
| speed_spread wrote:
| TL;DR - don't. (which I agree with)
| m4l3x wrote:
| In my opinion, implementing strong interfaces and good
| modularization is something we should talk about more than doing
| microservices. In the end it might be easy to rip out a
| microservice, when needed, if the code is well structured.
| fatnoah wrote:
| > In my opinion, implementing strong interfaces and good
| modularization is something we should talk about more than
| doing microservices.
|
| This is my position as well. Strong interfaces, good logical
| separation of function under the covers, etc. should allow
| splitting off things at whatever level of micro you prefer.
| alexvoda wrote:
| Indeed, the principle seems to often be forgotten. The why of
| monolith vs citadel vs microservices is ignored by some people.
|
| This results in K8s-driven-development instead of
| microservices.
| baash05 wrote:
| I love that sentence.
| jerf wrote:
| This is what I do in practice. I've seen it called a
| "distributed monolith".
|
| One of the good reasons to spend time with Erlang or Elixir is
| it'll _force_ you to learn how to write your programs with a
| variety of actors. Actors are generally easy cut points for
| turning into microservices if necessary. As with many
| programming languages I appreciate not being _forced_ to use
| that paradigm everywhere, but it's great to be forced to do it
| for a while to learn how, so you can more comfortably adapt the
| paradigm elsewhere. My Go code is not 100% split into actors
| everywhere, but it definitely has actors embedded into it where
| useful, and even on my small team I can cite multiple cases
| where "actors" got pulled out into microservices. It wasn't
| "trivial", but it was relatively easy.
| ravi-delia wrote:
| That's where a lot of the fun of Elixir comes from I think.
| It's viscerally satisfying to split off functionality into
| what almost feels like an independent machine that happens to
| be in the same codebase. It clicked in a way normal object
| oriented programming never did for me, I guess since it's not
| feasible to mint 10,000 genservers to use as more complicated
| structs.
| cultofmetatron wrote:
| haha I was thinking the same thing. The code base of my
| startup is a monolith, but in reality it's a fork-on-request
| webserver sending messages to a collection of genservers
| acting as services.
|
| In essence, creating a microservice with elixir is about
| the same amount of effort as adding a controller in rails.
| baash05 wrote:
| Yeah.. rails hits this same nail with jobs. I use jobs in
| a synchronous manner, and it's great. It's a rather functional
| way of working within an OO world.
| guntars wrote:
| There's something to be said about the benefits of having a
| serialization boundary between parts of the code that's a natural
| consequence of microservices, but it's something that could be
| employed in a monolith just as well. Then if at any point you
| actually need to split out a module into its own process or
| machine, it would be trivial to do.
| noizejoy wrote:
| I have no idea how much sense it makes to have this discussion
| without a reasonably specific underlying business context.
| dmitriid wrote:
| > Starting a greenfield project with microservices is easy.
|
| That's an overstatement if I've ever heard one. Starting a
| greenfield project with microservices is a nightmare. In a
| greenfield monolith you can code and deploy your dependencies
| together. In greenfield microservices? Oh, right. Set them all up
| for deployment, figure out networking and access between them,
| any change requires the fix-test-deploy dance, etc.
___________________________________________________________________
(page generated 2022-01-19 23:00 UTC)