[HN Gopher] The Prime Video microservices to monolith story
       ___________________________________________________________________
        
       The Prime Video microservices to monolith story
        
       Author : mparnisari
       Score  : 311 points
       Date   : 2023-05-07 16:36 UTC (6 hours ago)
        
 (HTM) web link (adrianco.medium.com)
 (TXT) w3m dump (adrianco.medium.com)
        
       | qwertywert_ wrote:
       | This has been posted like 6 times already man.
        
       | iamleppert wrote:
        | If you have a predictable workload (i.e. we ingest 100,000
        | videos a month of size n), you should be looking at it from that
       | perspective -- how much compute you are going to need, and when.
       | Serverless works well for predictably unpredictable, infrequent
       | workloads that don't need to have perfect output ("good enough").
       | 
       | The big mistake I see people make is trying to be overly clever
       | and predict the future workload needs. It never works out like
       | that. Design for your current workload now, if your architecture
        | can handle 10x of that, great! Each time your workload scales
        | 10x you will likely need to redesign the system anyway, so
        | you shouldn't pay that tax unless you absolutely have to.
       | 
        | There are a lot of limitations to serverless. The big one I
        | experienced was the inability to control the host environment,
        | and the limitations on the slice of memory/CPU you get, such
        | that you must take that into consideration when designing your
        | atomic work units. Also, the distributed computing tax is real
        | and occurs at both development time and runtime -- things like
        | job tracking and monitoring are important when you have 10,000
        | processes running. You start to get into the territory of
        | things that basically never happen on a single machine or
        | small cluster, but become problems with thousands of disparate
        | machines / processes.
        
         | goostavos wrote:
         | >Design for your current workload
         | 
         | Please be my friend.
         | 
         | The bulk of my job these days is sitting in design reviews
         | trying to convince people that just making up scenarios where
         | you need to scale to 100x isn't actually engineering. It's
         | exhausting self-indulgence. Nothing is easier than inventing
         | scenarios where you'll need to "scale". It's borderline
         | impossible to get software folks to just live in the world that
         | exists in front of their eyes.
         | 
         | "Scale" should go back to its original meaning: change relative
         | to some starting point. Slap a trend line on what you know. Do
         | some estimation for what you don't. Design accordingly.
        
           | ShroudedNight wrote:
           | > just making up scenarios where you need to scale to 100x
           | isn't actually engineering.
           | 
           | Even for "peak" Amazon's concern seemed limited to about ~5x
           | daily TPS maximums unless one had extraordinary evidence.
           | 
           | The counter-balance to limiting resource balooning to 5x
           | scale is introducing Erlang-B modelling. Depending on how
           | many 9s you require, you may need way more service
           | availability than expected.
           | 
            | The 100x calculations are probably doubly wrong (both too
            | large and too small), providing false confidence with
            | negative value.
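            | 
            | For the curious, Erlang-B is cheap to compute with the
            | standard recurrence. A rough TypeScript sketch (the numbers
            | are illustrative, not a capacity plan):
            | 
            |     // Erlang-B blocking probability for offered load E
            |     // (in Erlangs) and m servers, via the recurrence
            |     // B(E,0) = 1; B(E,k) = E*B/(k + E*B).
            |     function erlangB(E: number, m: number): number {
            |       let b = 1;
            |       for (let k = 1; k <= m; k++) {
            |         b = (E * b) / (k + E * b);
            |       }
            |       return b;
            |     }
            | 
            |     // e.g. at 100 Erlangs of offered load, 110 servers
            |     // still block a few percent of requests; the 9s you
            |     // need drive the headroom, not a flat multiplier.
            |     console.log(erlangB(100, 110));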
        
           | jpgvm wrote:
            | Yeah, unless you are designing the replacement for a system
            | that has already reached its scalability limits, you
            | shouldn't worry too much beyond not doing very silly things
            | architecturally.
            | 
            | When you are designing replacements, though, you need to
            | have an idea of what your scalability runway is before the
            | next re-design will be possible. Sometimes that is just 2x,
            | often it's 10x, occasionally it's 100x, but it's all
            | situational.
        
       | jack_squat wrote:
       | Am I alone in feeling there is nothing left to be said on this
       | topic? Correct application design, and balancing that against the
       | constraints of getting a product out the door, the requirements
       | of a particular application, and the resources on hand (skill-
       | sets, developers, infrastructure) does not in my opinion boil
       | down to "microservices vs monolith".
       | 
       | Both strategies have tradeoffs, both have produced successes and
       | failures, and choice of approach seems (obviously) too specific
       | to a given context and application for general advice to apply.
        
         | TeeWEE wrote:
          | Exactly, tool vs hammer. Sometimes you make the wrong choice
          | of tool, and then you switch tools. Nothing wrong with that.
          | A craftsman just knows their tools better. There is no magic
          | bullet here.
        
           | matisseverduyn wrote:
            | > Exactly, tool vs hammer. Sometimes you make the wrong
            | choice of tool, and then you switch tools. Nothing wrong
            | with that. A craftsman just knows their tools better. There
            | is no magic bullet here.
           | 
            | Rational take, but I see the debate as similar to Roman vs
            | Arabic numerals.
           | 
           | Keeping a tally? Roman. Need to use operators? Arabic.
           | Sometimes you can keep a tally in Arabic (not ideal), and
           | sometimes you can do basic operations on Roman numerals (not
           | ideal).
           | 
           | However, when you want to start using variables, only one
           | tool enables this easily.
           | 
            | With a monolith, I can't architect the kinds of redundant,
            | properly separated, interoperable systems that
            | microservices enable.
           | 
           | So the desire to move forward isn't the need to find a magic
           | bullet, but the next evolution of an existing ability that
           | unlocks new capabilities...
           | 
           | (I don't think calculus would have been discovered using
           | Roman numerals)
        
             | TeeWEE wrote:
              | No, but you can start a business with a single system and
              | keep things well decoupled (hard; software engineering is
              | hard). Then you gradually run some parts of the system in
              | different Docker containers, and at some point you have
              | completely decoupled systems with different teams handling
              | different service boundaries. You could still have the
              | core system be a bit bigger, and sometimes referred to as
              | a monolith. But "monolith vs microservices" polarises the
              | discussion; it's a gradual scale. It's the engineering,
              | and keeping things separated, that matters.
        
         | [deleted]
        
         | mattmanser wrote:
          | I think there are an increasing number of people who believe
          | that microservices are not in the same risk category as a
          | normal application design. A design we really need to stop
          | calling "monolith", as to PMs, management, etc. it sounds
          | ancient.
          | 
          | So the claim that "both strategies have tradeoffs" is not true
          | for a lot of people. Microservices have bigger tradeoffs and
          | consequences than a normal application design.
        
         | alexashka wrote:
         | Of course there is nothing _technical_ to be said. There is
         | plenty of personal anecdote and associated feelings to be
         | shared however, which is what homo sapiens primarily enjoy :)
         | 
         | You get double the thrill if you _think_ you are engaging in a
         | technical conversation all the while arguing anecdotal
         | experiences, beliefs and biases.
        
       | kennu wrote:
       | Many comments seem to miss the point that nowadays native AWS
       | cloud applications are meant to be built by combining various AWS
       | services using Lambda as the glue between them. It's not about
       | serverless vs non-serverless: it's about using cloud services for
       | the purpose they are each good at. In many cases you might
       | actually have more IaC code than Lambda handler code, especially
       | if you can leverage services like AppSync and Step Functions. The
       | application is no longer a set of microservices; it's a set of
       | CDK / CloudFormation stacks and the resources that they define,
       | some of which might be microservices.
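        | 
        | To make that concrete, here is a minimal CDK sketch (from
        | memory, so treat the names and details as illustrative):
        | almost all of the "application" below is resource wiring, with
        | one small Lambda as the glue.
        | 
        |     import * as cdk from 'aws-cdk-lib';
        |     import { aws_s3 as s3, aws_lambda as lambda,
        |              aws_s3_notifications as s3n } from 'aws-cdk-lib';
        | 
        |     // Most of the "code" is wiring, not handler logic.
        |     export class IngestStack extends cdk.Stack {
        |       constructor(scope: cdk.App, id: string) {
        |         super(scope, id);
        |         const bucket = new s3.Bucket(this, 'Uploads');
        |         const fn = new lambda.Function(this, 'OnUpload', {
        |           runtime: lambda.Runtime.NODEJS_18_X,
        |           handler: 'index.handler',
        |           code: lambda.Code.fromAsset('handler'),
        |         });
        |         bucket.grantRead(fn);
        |         // Invoke the glue Lambda whenever an object lands.
        |         bucket.addEventNotification(
        |           s3.EventType.OBJECT_CREATED,
        |           new s3n.LambdaDestination(fn));
        |       }
        |     }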
        
       | balls187 wrote:
       | I don't quite understand the point in all this. Design tradeoffs
       | are still a thing; did we all just forget?
       | 
       | "Abortions for some... miniature American flags for others!"
        
         | mattgreenrocks wrote:
         | Software architecture is (incorrectly) seen as NP-hard by
         | practitioners, so they avoid it, and instead focus on
         | implementing the current "best practices." However, "best
         | practices" can't be best if they're situational, so we arrive
         | at very local maxima, such as microservices for all.
        
       | joanne123 wrote:
       | [dead]
        
       | foffoofof wrote:
       | The whole industry is stupid at this point. I don't think anyone
       | knows what they are doing. Everyone just wants to play with the
       | new shiny thing. Half the engineers probably have ADHD which
       | leads to this behavior of seeking novelty.
       | 
       | Everyone is obsessed with their shitty stack they used at their
       | last company, and this shit will keep going on and on.
       | 
        | The only thing that matters is how easy something is to debug
        | and to change. Think about how often you have heard that as any
        | kind of priority. So many shitty decisions would be avoided if
        | people started thinking about that.
        
         | projectileboy wrote:
          | Alan Kay once pointed out something along the lines of: when
          | excitement exceeds knowledge, you get fashion. For 10 years
          | I've been in and around languages that were declared dead by
          | the HN crowd because there wasn't enough new hype around them.
          | It's not
         | a perfectly inverse correlation, but honestly at this point
         | when I see HN is excited about something new it raises red
         | flags for me.
        
           | jack_riminton wrote:
            | I see this more on Twitter, to be honest. HN, of late, has
            | been a great source of info on the 'boring' stacks, e.g.
            | Rails.
           | 
           | Twitter is where you get the JavaScript 'influencers' who
           | appeal to a core demographic of engineers with FOMO who
           | signal their 'in status' by waxing lyrical about the latest
           | framework
        
           | vishnugupta wrote:
            | Not to pat myself on the back, but fashion was the exact
            | analogy I came up with while describing the situation. When
            | everyone begins focusing on tech as an end in itself, as
            | opposed to it serving a bigger purpose, we end up here.
            | 
            | And add to that the zero cost of capital of the 2010s: it
            | led to engineers being given almost free rein, which is
            | generally a bad idea. "We are a technology company" would
            | attract VC money, as if tech itself was the product of every
            | darn startup.
        
             | namaria wrote:
              | > And add to that the zero cost of capital of the 2010s
             | 
              | I suspect the timing of the ChatGPT release, just as
              | interest rates started going up, is a ploy to layer more
              | obscurity into tech choices to drive the hype harder and
              | keep the gravy train going even as capex becomes more
              | costly.
        
         | lemmsjid wrote:
         | It's easy to sit back and say that, but I can almost guarantee
         | that if you actually follow this up with a set of opinions
         | about what stack / architecture / approach is easiest to debug
         | and change, you will find yourself back in the argument with
         | the rest of us stupid people.
         | 
          | I believe there are a lot more variables than "how easy is
          | something to debug and change". The issue is that how easy
         | something is to debug and change is very contextual to the
         | point in the lifetime of an application, the industry the
         | application will live in, and the changing requirements as the
         | application grows in complexity.
         | 
         | Even the simplest interpretation of your statement leads to
         | considerable argument. Often the ease of change and the ease of
         | debugging are at odds with one another, where a highly dynamic
         | approach might make change a breeze, but make debugging more
         | difficult.
        
           | Timpy wrote:
           | I agree that "easy to debug" is subjective. But it's insanity
           | to act like that doesn't mean that some architectures are
           | objectively easier to debug than others. How can splitting
           | something up into different services and introducing network
           | calls make it easier? You're not reducing complexity at all.
           | It makes debugging harder.
        
             | lemmsjid wrote:
             | It would indeed be insanity to argue that, which is why I
              | didn't. The argument for splitting into microservices is
             | generally that particular teams would own particular
             | services and therefore make debugging/change easier for
              | them, at the cost of increased complexity of the overall
              | system. Anyone arguing it reduces complexity is
             | being very hopeful.
             | 
              | I personally think an org should generally cling to a
              | so-called monolith as long as it possibly can. But it's very
             | hard to be prescriptive without knowing particulars of the
             | problems an org is trying to solve.
        
         | WinLychee wrote:
         | There are a lot of financial incentives pushing engineers to
         | seek novelty, even if they might not necessarily want to.
         | Promotions and raises are heavily tied to projects introducing
         | or inventing shiny new tech. Heck, even VC funding for starting
         | a business is tied to this. One might also view this as the
         | human condition: a bias to solve problems by adding rather than
         | subtracting.
        
           | Nextgrid wrote:
           | > even VC funding for starting a business is tied to this
           | 
           | This is the main problem.
           | 
           | Excluding VC, businesses exist to make profit. The purpose of
           | IT and engineering is to enable that business or make it
           | operate more efficiently - engineering and technology should
           | never be a goal in and of itself.
           | 
           | Given this situation, the most efficient technology (that
           | enables faster delivery of business software) would win.
           | "Shiny" stuff would never win in terms of mindshare unless
           | it's actually good and enables faster delivery than the
           | status quo.
           | 
           | Now add VCs to the mix. Suddenly you have an infinite supply
           | of free money that distorts the usual market dynamics,
           | allowing businesses to keep operating with no regard to
           | profitability. The downwards pressure of the market pushing
           | for businesses to operate more profitably is no longer there.
           | As an engineer, you no longer have the pressure of building
           | technology for the purposes of making the business more
           | efficient - you lost that valuable feedback loop.
           | 
           | Now run this for 10 years. Entire careers will be built on
           | this - why even _try_ to build a profitable business when you
            | can be a "serial startup CEO" and merely use the right
           | amount of shiny to solicit endless VC funding to continue the
           | adventure? Imagine a well-meaning, budding engineer joining
           | such a startup and learning such a corrupted idea of
           | "engineering" with none of the usual objectives, pressures
           | and feedback loops of engineering.
           | 
           | In the end, you end up with "senior" engineers (with over-
           | inflated titles) whose entire careers are built on technology
           | that wouldn't actually pass muster in an _actual_ business
            | under the normal market pressures. But they don't even know
           | anything else or why that is wrong (because they've started
           | their career during this charade and never had any taste of
           | what _real_ engineering is), so they keep pushing for these
           | (now-inadequate) solutions, deluded that they are doing the
           | right thing.
        
         | pmontra wrote:
         | I don't think ADHD had anything to do with it.
         | 
          | Young people with little knowledge (because they are young)
          | try every new piece of technology and start projects with it.
          | I've been there.
         | 
         | Older people that eventually found good tools to solve problems
         | have less time and enthusiasm to try new things. They still do
         | but only when they think it's worth it. I'm there now.
         | 
         | Most developers and many CTOs are young.
        
           | mattgreenrocks wrote:
           | > Older people that eventually found good tools to solve
           | problems have less time and enthusiasm to try new things.
           | They still do but only when they think it's worth it. I'm
           | there now.
           | 
           | The best thing about this phase is you just get so much shit
           | done instead of screwing around with 10 half-baked over-
           | marketed frameworks.
        
         | gherkinnn wrote:
         | Half the engineers are made to believe they are God's gift to
         | humanity and like so totally clever. But we all spend our time
         | building banal and ultimately dumb shit nobody wants. But
         | people construct their entire identity around tech anyway.
         | 
         | It is no surprise the place is littered with ivory towers.
        
         | zwischenzug wrote:
        | I've often mused about creating a language that puts
        | readability, maintainability, logging, etc. over, say,
        | performance.
        
           | Nextgrid wrote:
           | Performance is absolutely a non-issue when modern
           | "engineering" insists on running everything on underpowered
           | VMs with high-latency network-based storage. Even if your
           | language is truly, actually slow, you can get an instant
           | performance boost just moving that workload to a modern bare-
           | metal server (cloud providers basically ate all the
           | performance improvements of modern hardware), or scale that
           | (most likely stateless) workload horizontally if a single
           | bare-metal machine isn't enough.
        
           | Arbortheus wrote:
           | You might enjoy Go. Lots of emphasis has been put into the
           | development tooling and ecosystem. There is an impressive
           | commitment to avoiding breaking changes.
           | 
            | We have a legacy Go microservices codebase that's over a
            | decade old. Because the Go maintainers have paid so much
            | attention to avoiding breaking changes, and guarantee never
            | to create a "Go 2", the codebase has aged much better than
            | stuff written at the same time in Python (some of which is
            | still in Python 2 because it is too risky to upgrade, and
            | that functionality will be killed instead because we don't
            | know how to safely update it). Most of the time we don't
            | need to make any changes when bumping Go versions, just fix
            | up things our linter warns are deprecated.
        
         | mk89 wrote:
         | I don't have ADHD but I find your comment quite offensive.
         | 
         | It also seems that people without ADHD don't like innovation?
         | That either came out wrong, or ...boh?
        
           | beerpls wrote:
           | I do have ADHD and op is spot on.
           | 
           | The software industry is filled to the brim with midwits who
            | think _their_ idea is _the only right idea_ even though it's
            | nothing special and they're ignoring actual problems.
           | 
           | As op said - when's the last time you heard someone talk
           | about writing code that's easy to debug? Never. Yet 90%+ of
           | companies will make you pass arbitrary algorithms tests
           | 
           | The field is dominated by A types who confuse their
           | neuroticism and ego for talent
        
             | mk89 wrote:
              | But why bring ADHD into this? This I don't understand.
              | 
              | I do understand the fact that everyone is following one or
              | two bandwagons, but I don't feel the need to say that
              | someone must have a mental condition for doing that.
              | 
              | There are MANY more reasons for adopting technologies:
              | marketing, corporate culture, hiring, trying things out,
              | wrong people in leadership positions (as the article
              | shows!), etc. It's not always just the minions doing
              | things wrong.
        
               | cactusplant7374 wrote:
               | Because people with ADHD crave stimulation and pursuing
               | new and exciting technologies is very satisfying.
        
               | faeriechangling wrote:
               | Does adopting serverless actually accomplish that? With
               | servers you have a bunch of technologies to learn about
               | that are abstracted away with serverless. Every person I
                | know with ADHD in the IT field (admittedly not many) is
                | into servers.
        
               | faeriechangling wrote:
               | Why? It's a consequence of labelling a sizeable swathe of
               | the IT workforce as "Disordered", having a "Deficit", and
               | a "Mental condition" (which is NOT to be talked about in
               | any context unapproved by the grand council of ADHD).
               | 
                | If you didn't slap derogatory labels on people, just let
                | people who wanted to use amphetamines and
                | methylphenidate do so, and addressed problems like
                | organisation and hyperactivity independently, people
                | would end up with roughly the same quality of life but
                | without the othering and stereotyping. Which is to say,
                | this sort of stereotyping is an inevitable and continual
                | iatrogenic consequence of the sociomedical status quo.
                | When something is labelled a deficit and a disorder,
                | people almost naturally compare that thing to other
                | things they do not like.
        
           | luuurker wrote:
           | > I don't have ADHD but I find your comment quite offensive.
           | 
           | I don't mean this in a bad way, but people not part of
           | <group> need to dial it down a bit. Too many get offended by
           | things that people in <group> don't see as a problem.
        
       | mr_tristan wrote:
        | What's funny to me is that I focused on something completely
        | different: how independent these teams at Amazon are. The move
        | was between pretty different infrastructure services, all at the
        | same company. I can't think of a single place I've worked at in
        | 20 years where these infra teams wouldn't either: a.) mandate,
        | i.e., "you will stick with lambda/kubernetes/whatever", or b.)
        | review and be involved in the architectural decision-making of
        | the video service approach, i.e., make you politick your way
        | around any deviation.
       | 
       | I guess the shift from "mono to micro" just isn't very
       | interesting to me. You can usually change your definition of
       | either concept to fit your architecture. This just seemed like
       | the team did the math and revised their approach. Good for them!
        
         | iLoveOncall wrote:
         | This is in part where the "It's always Day 1" mentality
         | manifests at Amazon.
         | 
         | There's only one programming language that is forbidden (PHP -
         | for wrong reasons), and only one cloud vendor that is mandated
          | (obvious). But besides that, teams are able to use literally
          | whatever technology they want. There are some technologies
          | that impose themselves because of the ecosystem, but you can
          | always make your own sauce if needed.
         | 
         | I've been in two teams that had basically 90% different tech
         | stacks, but it's never a problem, and it doesn't really ever
         | come up in design reviews (unless there's a real reason, not
         | for personal preferences).
        
           | jpgvm wrote:
           | Curious though, what is the reason for banning PHP? It must
           | be pretty specific if they singled out just PHP.
        
             | iLoveOncall wrote:
             | I don't know the full history because it's from way before
             | I joined, but it is honestly a terrible take (by a guy
             | who's now Distinguished Engineer at Apple, no less!)
             | explaining that it is impossible to write secure
             | applications in PHP.
             | 
             | Basically, they used "the historical number of
             | vulnerabilities identified in applications developed using
             | a technology" as metric to determine how insecure a
             | technology is.
             | 
             | For the argument here, they looked at the CVE list, where
             | at the time (in 2012, with the list existing since 1999)
             | 40% of all software vulnerabilities recorded were in PHP
             | applications. This led to the conclusion that PHP is
             | insecure by nature.
             | 
             | Of course, he didn't mention that at the time, PHP was also
             | used by 80% of all websites, because that would have made
             | his argument worthless.
             | 
             | That wiki page explaining that is still up. It's so
             | baffling to me when the argument violates so many of
             | Amazon's leadership principles.
        
               | jpgvm wrote:
                | Yeah, that is a pretty poor reason. Thanks for the
                | nugget though, that is pretty interesting.
               | 
               | I could think of a bunch of legitimate reasons to want to
               | ban it but they would also hit a bunch of other languages
               | as a result - hence why I was curious how it could be so
               | specific.
               | 
                | It wouldn't surprise me if, LOC for LOC, PHP written
                | today is much more secure than JS because of the same
                | dynamic; it's just that JS is now the language of 80% of
                | new code written by beginners.
        
               | beastcoast wrote:
               | Even worse than that - Amazon spent dozens of engineer
               | years migrating their wiki from PHP MediaWiki to Java
               | XWiki, pretty much for that reason only, and for dubious
               | customer benefit. There was a very epic post mortem at
               | the end of it.
        
         | beastcoast wrote:
         | Amazon is actually better thought of as a collection of
         | independent startups with their own software architectures,
         | roadmaps and P&Ls. There's commonly used infrastructure but
         | very little is really mandated at the company level.
        
       | bhauer wrote:
       | Obviously, the article is microservice apologia, but...
       | 
       | > _They were able to re-use most of their working code by
       | combining it into a single long running microservice that is
       | horizontally scaled using ECS..._
       | 
       | No, it was no longer a microservice; it became a plain service,
        | as in SOA. It was no longer _micro_. That's the whole point.
       | 
       | They could have saved time and money had they just made the thing
       | a plain service to begin with. This thing was not sufficiently
       | complicated to warrant the complexity of starting with a bunch of
       | microservices.
       | 
       | The article says many hot takes have missed the point, but I
       | think what we're seeing here is an example of the two sides
       | talking past one another since the author hasn't appreciated the
       | opposition's arguments at all.
        
       | lr1970 wrote:
       | Dup. Already discussed on HN less than a day ago [0].
       | 
       | [0] https://news.ycombinator.com/item?id=35847988
        
         | jlund-molfese wrote:
         | No, it's a different link. The one in this topic is addressing
         | the link you posted.
        
       | EMM_386 wrote:
       | > I do think microservices were over sold as the answer to
       | everything, and I think this may have arisen from vendors who
       | wanted to sell Kubernetes with a simple marketing message that
       | enterprises needed to modernize by using Kubernetes to do cloud
       | native microservices for everything
       | 
        | Yes, they absolutely were oversold, and Kubernetes marketing was
        | part of it. There was also _a lot_ of resume-driven development,
        | and simply "look, new shiny object!". A lot of young engineers
       | entered the workforce over the last couple of decades. That often
       | comes along with "forget this old stuff, what's the new stuff?".
       | 
        | I use KISS as much as possible with every project I'm involved
        | in designing.
       | 
        | The microservices push got to the point where dev shops with 10
        | people had it all broken into microservices and running
       | Kubernetes to manage low-traffic front ends, an API and a
       | database. I would see some of these systems and the only thing I
       | could think was "... but why?".
       | 
       | Do microservices have a place? Yes. Have I seen them used where
        | they shouldn't be? Yes. Have I seen more poorly implemented
        | microservices architectures than good ones? _Yes_.
       | 
       | Over-engineering the tech stack, or selecting one part of it vs.
       | another based on which is trendier, only leads to more
        | frustration, difficulty reasoning about it, added development
       | time ... all of which means it will cost more.
        
       | lopkeny12ko wrote:
       | The original, unedited title is "So many bad takes -- What is
       | there to learn from the Prime Video microservices to monolith
       | story"
        
       | goodpoint wrote:
       | The points in the Prime Video article still stand. People overuse
       | microservices and serverless and stuff gets expensive quickly.
        
         | j45 wrote:
          | It's interesting that there are folks who think there must be
          | only one architecture that is best in all cases.
          | 
          | It reeks of inexperience: being a highly opinionated follower
          | of popular opinion instead of having played with both
          | personally.
          | 
          | So few projects actually reach the scale where microservices
          | are a benefit.
          | 
          | One's interpretation of the benefits of microservices is
          | fine. It's just not guaranteed to hold in many use cases.
         | 
          | First, the line where scaling starts to benefit from
          | microservices is also moving higher each year, for one
          | underlying reason:
          | 
          | Monolith or microservices, the increased performance of Linux
          | networking and the availability of inexpensive RAM and CPU,
          | along with much more optimized databases like Postgres or
          | MySQL compared to 10-15 years ago, seem to have been missed by
          | many people who set an interpretation in place and never
          | looked back.
          | 
          | When horsepower gets faster and cheaper each year, the best
          | practice of adopting microservices was likely relative to its
          | year of adoption.
         | 
         | What to do?
         | 
          | Keep your stack unapologetically simple. Don't handcuff your
          | development speed; when the complexity arrives you'll
          | rearchitect things anyway, because the first version will have
          | the wrong assumptions about what customers want.
         | 
         | How?
         | 
          | Consider monorepos to let you start as a monolith for velocity
          | and move to microservices as needed.
          | 
          | Monolithic apps can have a service-based architecture
          | internally (monorepos can be pretty handy), which in turn lets
          | startups ship much more regularly than they might think
          | possible with monoliths.
          | 
          | You can design the folders in a monorepo at the start to fuel
          | a monolith and keep a path to go to microservices, where and
          | if needed.
          | 
          | To know this, enough experience (both good and bad) is needed
          | with both microservices and monoliths.
        
           | groestl wrote:
            | > Monolithic apps can have a service-based architecture
            | internally
           | 
            | Not only that, but if you absolutely must (for example, for
            | different scaling requirements), forking out a service from
            | a monolith and deploying it as a microservice is relatively
            | straightforward, provided source code interdependencies have
            | been maintained cleanly before, which can be done with
            | support from tools.
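            | 
            | A hand-wavy TypeScript sketch of what "maintained cleanly"
            | buys you (the names here are made up): if callers only ever
            | see an interface, swapping the in-process implementation
            | for an RPC client is mechanical.
            | 
            |     // The monolith depends on an interface, not a
            |     // concrete implementation.
            |     interface ThumbnailService {
            |       render(videoId: string): Promise<Uint8Array>;
            |     }
            | 
            |     // Day 1: runs in-process inside the monolith.
            |     class LocalThumbnails implements ThumbnailService {
            |       async render(id: string): Promise<Uint8Array> {
            |         // ...decode and render locally...
            |         return new Uint8Array();
            |       }
            |     }
            | 
            |     // Later: same interface, backed by the forked-out
            |     // service over HTTP. Callers don't change.
            |     class RemoteThumbnails implements ThumbnailService {
            |       constructor(private baseUrl: string) {}
            |       async render(id: string): Promise<Uint8Array> {
            |         const res =
            |           await fetch(`${this.baseUrl}/render/${id}`);
            |         return new Uint8Array(await res.arrayBuffer());
            |       }
            |     }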
        
             | gadflyinyoureye wrote:
             | How do you work around DB transactions? For example, in
             | many Spring apps the transaction is common across all DB
             | calls for an HTTP request. Would you put @RequiresNew? Many
             | data sources don't support this for a singular DB.
        
             | tonyarkles wrote:
             | I definitely agree. More than once I've taken a monolith
             | and sliced out a small piece of it to run as a separate
             | service. That's been a significantly less painful task than
             | trying to aggregate multiple microservices into a less
             | expensive monolith, especially when multiple microservices
             | use the same underlying packages but different conflicting
             | versions or different global configuration.
        
               | j45 wrote:
               | The interesting thing about doing this is how things kind
               | of end back at n-tier architecture, where one can have
                | abstractions between each layer of the cake to more
               | easily swap in/out technologies as well.
        
           | tonyarkles wrote:
            | > It's interesting that there are folks who think there must
            | be only one architecture that is best in all cases.
            | 
            | > To know this, enough experience (both good and bad) is
            | needed with both microservices and monoliths.
           | 
           | Over the years I have worked with a huge number of people who
           | are looking for Silver Bullets and Best Practices, generally
           | for one (or both) of these reasons:
           | 
           | - To be blameless if the architecture doesn't work out
           | (building with microservices is a Best Practice, it's not my
           | fault it got very expensive)
           | 
           | - To cut down on the number of deep-thinking decisions they
           | have to make
           | 
           | And to be fair on the second point, I don't necessarily
           | disagree with the premise; if these kinds of decisions allow
           | you to get going quicker, it's not necessarily the wrong
           | choice at the moment. What everyone needs to keep in mind,
           | though, is that if you don't do deep architectural thinking
           | and planning at the start of a project (or even if you do
           | sometimes...), you'll have to revisit it later. Often the
           | same forces and pressures that led to taking shortcuts at the
           | early architectural stages of a project will continue to be
           | present down the road, and it'll be more work to fix it later
           | than to do it right the first time.
           | 
           | It's a tough balance. If there's a high risk that the project
           | won't be alive in 6 months it probably doesn't matter. If
           | it's a prototype that's very likely to get into and stay in
           | production, then it really does matter.
        
             | j45 wrote:
             | Totally agree on the tough balance. At best it's a trade
             | off of different forms of technical, process, and business
             | debt.
             | 
             | Doing deep enough architectural thinking and planning at
             | the start to focus on maintaining flexibility (which can be
             | achieved in many ways) without premature optimization or
              | engineering can go a long way. Basically, "don't work
              | yourself into a corner, or pour concrete too quickly by
              | trying to do too much or too little" is an approach I try
              | to keep in mind. Technical and architectural debt comes
              | with interest as well.
             | 
             | One way to reduce the architectural risk is to make the
             | first version a throwaway. It can burn through assumptions.
             | Similar to design thinking, going broad and converging
              | twice can help get a better sense of vector.
        
           | jchw wrote:
           | There are fundamental reasons why microservices have become a
           | bit less popular though. If you see a good, elegant
           | microservice architecture, it looks very appealing:
           | independently scalable parts, greater redundancy, etc.
           | 
           | The problem comes when you try to start a new project and
           | begin to realize that except in simple cases, it's
           | exceptionally hard to know _ahead of time_ how you would best
           | be served to split an application up into services. Once
            | there's a network boundary between two pieces of code,
           | there's now an API that needs to be managed carefully:
           | deployments of the two pieces of code may not be in lockstep,
           | rolling one back may be necessary, etc. If you got the split
           | wrong, the refactoring to fix that could be extremely hard.
           | 
           | By comparison, splitting logical services out of a monolith
           | isn't always a cakewalk, but it's usually more
           | straightforward, especially since as you mentioned, it's
           | totally possible to just share the same codebase between
           | services. It could even be the same binary invoked
           | differently.
           | 
           | My feeling is that it's easier to split a monolith than to
           | join microservices that were developed separately together;
           | they might have different codebases and possibly even be
           | written in different programming languages. Therefore, it's
           | probably better to start with a monolith even if you're
           | pretty damn sure you will need some services split off later
           | to scale.
           | 
           | Serverless on the other hand I am more skeptical of. I like
           | certain products that provide generic-ish abstractions as a
           | service: Cloud Run for example seems like a good idea, at
           | least on paper, and Amazon Aurora seems relatively
           | reasonable, too. But I think that they are often too
           | expensive and do not provide enough value to justify it,
           | especially since a lot of serverless tools don't provide
           | terribly generic APIs or services, leading to substantial
           | lock-in.
           | 
           | Both microservices and serverless APIs were sold as near-
           | panaceas to the woes of traditional monoliths, but it's a lot
           | more complicated than that.
           | 
           | edit: When I typed this response, the message I was replying
           | to was different/much shorter.
        
             | j45 wrote:
             | Apologies, I hit submit accidentally sooner than intended.
             | 
              | A monolith in a monorepo can be split into microservices a
              | bit more easily than going the other way. There's a lot
              | more tooling becoming available.
              | 
              | The commitment to cloud, serverless, and devops occurred
              | when the complexity of administering servers was high.
              | Banking on progress, administering servers has since
              | become much simpler. The recent project from the Basecamp
              | guys (mrsk?) is an example of this.
              | 
              | You could almost roll your own "serverless" with enough
              | Postgres, exposing itself as a queue and/or API. Once you
              | outgrow it, so be it. Tools like Hasura are quite
              | interesting.
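              | 
              | The usual trick for the queue half, sketched from memory
              | in TypeScript with the `pg` client (table and column
              | names made up):
              | 
              |     import { Pool } from 'pg';
              |     const pool = new Pool();
              | 
              |     // Claim one job atomically; SKIP LOCKED lets many
              |     // workers poll one table without colliding.
              |     async function claimJob() {
              |       const { rows } = await pool.query(`
              |         UPDATE jobs SET state = 'running'
              |         WHERE id = (
              |           SELECT id FROM jobs WHERE state = 'queued'
              |           ORDER BY id LIMIT 1
              |           FOR UPDATE SKIP LOCKED)
              |         RETURNING id, payload`);
              |       return rows[0]; // undefined when queue is empty
              |     }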
        
           | foffoofof wrote:
           | Even monorepos are not perfect for evolving to microservices.
           | It's incredibly hard to refactor generally. We still lack
           | good monorepo tooling. Tests take huge amounts of time to
            | run. Multiple languages become painful, especially ones with
            | slow compile speeds like Rust.
           | 
           | Someone needs to make a programming language that encourages
           | you to write code that can be easily split and recombined.
           | Something that can do remoting well.
        
       | jbverschoor wrote:
        | Can't we just ask the "Application Deployer" to upload an EAR,
       | published by the "Application Assembler", and run it on any of
       | the spec-compatible servers?
       | 
        | These problems were solved decades ago. 2.5 decades, to be exact.
        
       | bravoteam5 wrote:
       | [dead]
        
       | nforgerit wrote:
       | I'm starting to be convinced that the EU should regulate
       | acceptable software design patterns. This will prevent us all
        | from having to implement techbro cargo cult pseudo-innovations
       | like "Serverless for everything" /s
        
         | ipaddr wrote:
         | EU should stop regulating what is acceptable and let society
         | naturally come to conclusions. Limiting software designs seems
         | like an area the government shouldn't be involved with.
         | 
          | Regulate cloud providers if necessary, not abstract patterns.
        
       | enterthestartup wrote:
       | Very interesting that they were able to reduce costs by 90%,
        | though I also agree with the opinions mentioned here.
        
       | th3h4mm3r wrote:
        | What if Amazon makes bad choices in "something" and my company
        | is tied to its services? I know that Prime is still "Amazon"
        | and that's okay. But for a startup (and not only startups),
        | imho it's better to start with something more "standard" than
        | something like serverless, which is obviously an abstraction
        | above other technologies.
        
       | ThalesX wrote:
       | I completed my role as a CTO for a company several years ago. I
       | chose a boring stack with plenty of developers available: MySQL,
       | PHP with Laravel, and Angular 2. Since then, I've been a founding
       | engineer at some startups using the latest, cutting-edge tech
       | stacks. I'm proud to announce that the stack I managed as CTO is
       | still running stable and smooth with minimal intervention, and it
       | remains easy to modify. Meanwhile, the startups with flashy
       | stacks and nearly impossible-to-debug architectures have gone out
       | of business.
       | 
       | It amazes me how some companies still adopt the "build it and
       | they will come" approach, and when confronted with harsh reality,
       | they double down on building (and especially increasing
       | architectural complexity). CEOs, if you can't attract customers
       | with an idea or don't have a vertical that people are eager to
       | pay for, cramming 1000 half-baked features into a microservice
       | architecture and using the latest programming language / paradigm
       | won't save your company!
        
         | o_m wrote:
          | There is a middle ground to this. Sooner or later you have to
          | upgrade. There might be a security update, or the old system
          | doesn't play nice with other systems. So you are forced to
          | upgrade to something more non-boring, even if you are using
          | the same tech. It also gets harder to hire developers. I doubt
          | there are many devs who want to work on Angular 2 these days.
        
           | ThalesX wrote:
            | The middle ground is that once the tech becomes the _actual_
            | problem, it might be time to investigate possibilities.
            | Until then, customers don't care, business people don't care
            | and the bottom line doesn't care. There are not that many
            | companies in the world where the tech stack is their
            | competitive advantage.
           | 
            | > I doubt there are many devs who want to work on Angular 2
            | these days.
           | 
            | From my experience, a product that makes money and is able
            | to pay people at market rates will find said people. Not
            | everyone is an Emacs-wielding Linux wizard who refuses to
            | touch tech out of idealistic concerns.
           | 
           | The problem, in my experience, is more often lack of PMF than
           | tech stack (I've seen customers give insane leeway to
           | actually valuable products).
        
         | goodrubyist wrote:
          | Did those startups that went out of business do so because of
          | their stacks/architecture, or are you confusing correlation
         | with causation? And, there is a good reason people shy away
         | from PHP, and it has nothing to do with trying to be "flashy."
         | There should be a name for this kind of fallacy.
        
           | iLoveOncall wrote:
           | > there is a good reason people shy away from PHP
           | 
           | Please share it. And don't rely on articles that are more
           | than a decade and 3 or 4 versions of PHP old.
        
             | 5Qn8mNbc2FNCiVV wrote:
              | Afaik you cannot distinguish null from undefined. So, "not
              | there" vs. "null"? Either that or I'm going to have to
              | have a good talk with my colleague.
        
               | ThalesX wrote:
                | > Afaik you cannot distinguish null from undefined.
               | 
                | This is a discussion for us tech folks. When it comes to
                | business, customers don't give a rat's ass about this as
                | long as the product provides them with value.
               | 
               | If you're flooded with customers and the tech stack is a
               | pain in the ass, perhaps it would be a good time to start
               | considering this, but until you have this problem, the
               | technical aspects are just wasting investor money and /
               | or engineering time.
        
               | iLoveOncall wrote:
               | Go have your talk [1]. But that's a pretty weak argument
               | given that:
               | 
               | 1. Other languages also do not have this distinction
               | (Python for example AFAIK),
               | 
               | 2. It's relatively useless. I get the value of it in
               | JavaScript which has a lot of asynchronous code and
               | frontend where the behavior may indeed be very different
               | between null and undefined, but in a server-side
               | application I don't think the nuance is necessary at
               | runtime, and potential programming mistakes involving
               | that can be caught by a linter.
               | 
               | [1] https://stackoverflow.com/questions/6905491/how-to-
               | tell-whet...
        
           | ThalesX wrote:
            | > Did those startups that went out of business do so because
            | of their stacks/architecture, or are you confusing
            | correlation with causation?
           | 
            | As far as I am aware and can tell, it wasn't because of
            | the stacks / architecture, but lack of product market fit.
            | The fact that the response to lack of PMF was to double down
            | on product features and tech (instead of a business pivot),
            | in my opinion, was what made the investor money go poof and
            | the startup die. I've seen this happen in too many
            | places (as an employee or consultant) to be a coincidence.
           | 
           | > And, there is a good reason people shy away from PHP, and
           | it has nothing to do with trying to be "flashy." There should
           | be a name for this kind of fallacy.
           | 
           | Different stacks for different use cases. If I can't get
           | money with a shitty PHP product offering, it's probably best
           | to figure out why instead of attributing it to my tech stack.
        
       | getcrunk wrote:
       | Seeing as how a lot of players are going in on serverless
       | (vercel, netlify, deno, cloudflare) the cynic in me wonders if
        | this is a calculated move on the part of Amazon to bring people
        | back to using EC2, etc.
        
       | mellosouls wrote:
       | The nth posting of this story.
       | 
       | https://hn.algolia.com/?dateRange=pastWeek&page=0&prefix=fal...
        
       | bdcravens wrote:
       | Serverless isn't "cheap". It's rightsized. If you aren't serving
       | 24/7, it makes sense to move to an on-demand model. However if
       | you compare the unit of compute pricing, Lambda is comparable to
       | other services, and likely more expensive than buying compute
       | wholesale. You can just buy it in smaller portions.
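        | 
        | Back-of-the-envelope with the list prices I remember (so
        | double-check before quoting me):
        | 
        |     // Rough unit-of-compute comparison, circa-2023
        |     // us-east-1 list prices; illustrative only.
        |     const lambdaPerGbSec = 0.0000166667; // x86 Lambda
        |     const m5LargePerHr = 0.096;          // 2 vCPU, 8 GiB
        |     const ec2PerGbSec = m5LargePerHr / (8 * 3600);
        |     console.log(ec2PerGbSec);                  // ~0.0000033
        |     console.log(lambdaPerGbSec / ec2PerGbSec); // ~5x premium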
        
       | zwischenzug wrote:
        | I still don't really get it. There are so many simple, easy-to-
        | build, cheap-to-run three-tier frameworks out there, so why
        | bother with all the hassle of porting at all?
       | busy then you can repurpose the code with higher performance and
       | well tuned DBs later. I spent 15 years doing that for some of the
       | busiest systems in the world, but those started on a single cpu.
        
         | gregoriol wrote:
         | The point is: managers and keywords
        
         | gadflyinyoureye wrote:
          | At my client, we go serverless-first in a bastardized state:
          | NestJS. The benefit is quick development at a low cost. We
          | have a series of calculations. Each reads from a common S3
          | bucket and writes back to it. This allows the teams to do
          | their work with independent deployments.
         | 
          | Each one costs about $5 a month. That's rather hard to get in
          | the AWS EC2 world. They are also easier to manage. We don't
          | have to manage keys or certs. There is no redeploying every 90
          | days like with EC2.
         | 
          | However, these are low-access, slowish apps. Maybe 1,000 calls
          | a day. They can take a minute to do their thing. Seems like a
          | nice match for lambdas.
         | 
          | The benefit of NestJS is that if we ever needed to move out of
          | Lambda, we'd have a traditional Express app.
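          | 
          | The wiring is roughly this (from memory, so treat it as a
          | sketch): we boot Nest once per warm container, and the same
          | AppModule runs locally as a normal Express server.
          | 
          |     import { NestFactory } from '@nestjs/core';
          |     import serverlessExpress
          |       from '@vendia/serverless-express';
          |     import type { Handler } from 'aws-lambda';
          |     import { AppModule } from './app.module';
          | 
          |     let cached: Handler | undefined;
          | 
          |     // Lambda entry point: create the Nest app once per
          |     // container, then proxy API Gateway events to Express.
          |     export const handler: Handler = async (evt, ctx, cb) => {
          |       if (!cached) {
          |         const app = await NestFactory.create(AppModule);
          |         await app.init();
          |         cached = serverlessExpress({
          |           app: app.getHttpAdapter().getInstance(),
          |         });
          |       }
          |       return cached(evt, ctx, cb);
          |     };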
        
           | zwischenzug wrote:
            | Useful insight, but whether it's $5 or $50 a month, compared
           | to dev costs that's a rounding error pretty much everywhere.
        
             | gadflyinyoureye wrote:
             | That's why we went with NestJS. It's like Spring but for
             | TypeScript. We can run everything locally. Boot up the
             | NestJS service at localhost:8080 with LocalStack running on
              | the side for access to S3- and DynamoDB-compatible,
              | working services.
        
       | amluto wrote:
       | > they had some overly chatty calls between AWS lambda functions
       | and S3. They were able to re-use most of their working code by
       | combining it into a single long running microservice that is
       | horizontally scaled using ECS, and which is invoked via a lambda
       | function. This is only one of many microservices that make up the
       | Prime Video application. The problem is that they called this
       | refactoring a microservice to monolith transition, when it's
       | clearly a microservice refactoring step
       | 
       | If I understood the post at all, I disagree.
       | 
       | One can spend hundreds of HN comments discussing technology
       | stacks, monoliths, etc, and this is important: it affects
       | maintainability, developer hours, and money spent on
       | orchestration. For some applications, almost the entire workload
       | is orchestration, and this discussion makes sense.
       | 
       | But for this workload, actual work is being done, and it can be
       | quantified and priced. So let's get to the basics: an
       | uncompressed 1080p video stream is something like 3 Gbps. It
       | costs a certain amount of CPU to decompress. It costs a certain
       | amount of CPU to analyze.
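        | 
        | (Sanity-checking that figure, assuming 8 bits per channel,
        | 3 channels, 60 fps:)
        | 
        |     // Uncompressed 1080p at 60 fps, 3 bytes per pixel.
        |     const bitsPerFrame = 1920 * 1080 * 3 * 8;
        |     const gbps = (bitsPerFrame * 60) / 1e9;
        |     console.log(gbps.toFixed(2)); // ~2.99, i.e. "about 3"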
       | 
       | These have price tags. You can pay for that CPU at EC2 rates or
       | on-prem rates or Lambda rates or whatever. You can calculate
       | this! And you pay for that bandwidth. You can pay for 3Gbps per
       | stream using S3: you pay S3 and you pay EC2 (or whatever) because
       | that uses 1/10th of the practical maximum EC2 <-> S3 bandwidth
       | per instance. Or you pay for the fraction of main memory
       | bandwidth used if you decode and analyze on the same EC2 instance
       | (or Lambda function or whatever). Or you pay for the fraction of
        | cache bandwidth used if you decode partial frames and analyze
       | without ever sending to memory.
       | 
       | And you'll find that S3 is not "chatty". It's catastrophic. In
       | fact, any use of microservices here is likely catastrophic if not
       | carefully bound to the same machine.
        
       | distantsounds wrote:
       | microservices? to sling video files over the internet? tell me
       | you're over-engineering without telling me you're over-
       | engineering
        
       | jpgvm wrote:
        | Serverless first is a mistake. It should be "serverless where
        | its distinct advantages are actually useful", which is way less
        | common than people think.
       | 
        | Also, the general notion that building things with containers
        | + k8s actually takes more time than serverless is a horseshit
        | take IMO. It doesn't, and if it's taking you longer you are
        | probably doing something wrong.
       | 
        | All the points noted in the slides etc. just come from poor
        | engineering leadership, i.e. decisions/choices, etc. Constraints
        | do speed up development, but they can be enforced by leadership,
        | not by putting yourself in an endless pit of technical debt.
       | 
       | The only world where I can make a good case for serverless for
       | small teams is if you have said poor leadership and serverless
       | helps you escape their poor decision making.
       | 
       | Otherwise assuming you have the mandate necessary to get stuff
       | built the right way just stick with the Boring Tech stack of JVM
       | + PG + k8s. It solves all problems regardless of size, there are
       | minimal decisions to be made (really just Spring or Quarkus atm)
       | and you won't spend time dicking around with half-baked
       | serverless integration crap. Instead you can just use battle
       | tested high performance libraries for literally everything.
       | 
       | Don't get sucked into this serverless nonsense. It's always been
       | bad and it probably always will be. When it's out of fashion
       | Boring Tech will still be here.
        
         | emodendroket wrote:
         | I can't help but laugh at this when I think of how easy it is
         | to imagine the exact same tenor in a post dismissing Kubernetes
         | as newfangled, overcomplicated garbage.
        
           | jpgvm wrote:
           | Yeah there probably were people that wrote those. Not me
           | though, I had been through the whole wringer before k8s so I
           | knew what it was when I saw it.
           | 
           | I was working on a competitor to k8s at the time though so my
           | knowledge of the subject matter definitely influenced my
           | opinion of it.
           | 
            | I thought it was unstable though, largely due to the
            | dependency on etcd, which was also unstable at the time.
            | Thankfully that got sorted out.
        
         | winrid wrote:
          | When I talked to people at Amazon it sounded like there was a
          | huge push to dogfood Lambda. The guy was almost worshipping it
         | in the interview and appalled that I wasn't interested in it.
         | :)
        
         | richardw wrote:
         | Stop with the confident silver bullets. There are effective
         | teams using every stack out there, including serverless. The
         | professional chooses the right tool for the job and team. This
         | is an instance of one change between options that happened to
         | blow in the direction you like, but there are good options at
         | all points of the continuum.
         | 
         | I've used most languages in on-prem and distributed settings,
         | from app to web app to micro services to serverless to single
         | DB's of all kinds to distributed everything, in all clouds, and
         | all have their place and time and team. Sometimes I'd arrive
         | with a perfectly confident view and then learn that actually
         | there is another truth. Sometimes a 50-page stored procedure is
          | perfect. Microservices are awesome in the right context,
          | monoliths in the right context; just stop with the simple
          | answers because you have a particular lens that works for you.
         | 
          | What a single lens does really well is remove options, and
          | that makes you faster at solving most problems. Good
         | for you, simplicity is excellent. But don't diss professionals
         | and their leaders who have made choices and then adjust them
         | based on the data.
        
           | jpgvm wrote:
           | There is a big difference between learning the nuances of
           | engineering as you go and re-adjusting your viewpoint vs
            | bandwagoning on a new tool because it's cool even though you
            | can't tie any of its advantages back to your problem
            | statement.
           | 
           | That isn't making a choice and adjusting based on the data,
           | that is just shoddy engineering.
           | 
           | It should have been obvious to anyone at a cursory glance
           | that not only did serverless not offer anything to this
           | application it would be actively detrimental. Simple napkin
           | math of data rate of a 1080p video stream should have made
           | that immediately clear.
           | 
           | Just because you can use something to get a task done doesn't
           | make it a good choice. I can still write pretty decent Perl.
           | It gets the job done and for some protocols that haven't
           | changed in a long time like LDAP the existing CPAN libraries
           | are still some of the best. But that isn't the right choice
           | is it?
           | 
            | I don't think it's defensible to argue that we should build
            | things sub-standardly just to give people more choice. That
            | is like arguing that real engineers should have more freedom
            | of choice in how they design bridges. No one would agree
            | with that.
           | 
            | We get away with it because our bridges are much cheaper to
            | replace, and no one dies if they break or have to be
            | rewritten because they rotted too much to run anymore.
           | 
            | Right now, as it stands, arguments for serverless aren't
            | technical in nature. They are at best an appeal to "the
            | traditional way is slow" or some notion of "infinite
            | scalability"; the first isn't true at all and the
            | latter doesn't matter for anyone who is arguing in favor of
            | serverless.
           | 
            | Also, even if it was "faster" (I don't agree, but let's
            | postulate), that probably wouldn't make it a good decision
            | either unless it was proportionally faster vs its costs (not
            | just dollar costs: architecture, complexity of managed
            | objects, etc). Which I highly doubt would be the case.
           | 
           | As you can see from a technical POV it's just very very hard
           | to rationalise a position for serverless > Boring Tech. For
           | me it would take too much mental gymnastics.
           | 
           | I was perhaps a little strong but my points are solid.
           | 
            | If it was good it would have proven advantages by now. The
            | k8s adoption curve vs the serverless adoption curve tells
            | the real story of quality IMO, especially given how much
            | bigger a move it was to k8s from what preceded it.
        
         | websap wrote:
         | Serverless first is a great way to start building your
         | business. As long as you don't have strict latency
         | requirements. You focus on your business, while the
         | infrastructure requirements are handled for you.
         | 
          | The point of engineering is to deliver the best solution at
          | the right cost. Cost needs to be evaluated on multiple
          | dimensions - actual dollars, time required to build, time
          | required to maintain.
         | 
         | Nothing stops you from using Postgres with Lambda. AWS has done
         | a bunch of work to make this possible.
         | 
          | I would recommend creating the correct layer of separation in
          | your code, so that when the time comes to migrate from Lambda
          | or other serverless functions to Fargate, Cloud Run, or even
          | containers run on VMs managed by K8S, you have to make minimal
          | changes to get up and running.
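         |
         | A rough sketch of that separation (the pg wiring and handler
         | shape here are illustrative, not a prescribed API; something
         | like RDS Proxy in front of Postgres helps absorb Lambda's
         | connection churn):
         |
         |     import { Pool } from 'pg';
         |
         |     // Small pool per container; a proxy in front of
         |     // Postgres absorbs churn across instances.
         |     const pool = new Pool({
         |       connectionString: process.env.DATABASE_URL,
         |       max: 1,
         |     });
         |
         |     // Transport-agnostic core: nothing Lambda-specific.
         |     export async function getUser(id: string) {
         |       const { rows } = await pool.query(
         |         'SELECT * FROM users WHERE id = $1', [id]);
         |       return rows[0];
         |     }
         |
         |     // Thin Lambda adapter; moving to Fargate or K8S
         |     // means replacing only this wrapper.
         |     export const handler = async (event: any) => ({
         |       statusCode: 200,
         |       body: JSON.stringify(
         |         await getUser(event.pathParameters.id)),
         |     });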
        
         | Tainnor wrote:
         | I don't care for serverless, but your suggestion that the only
         | right way to build software is by using the JVM, and the only
         | right way to use the JVM is to use Spring or Quarkus is...
         | puzzling to say the least.
         | 
         | I might as well say that you should run everything on PHP. I
         | guess that's boring too?
        
           | jpgvm wrote:
           | You can sub JVM for something else but you should probably
           | stick with PG and k8s.
           | 
           | What I am saying is the highest quality tools are still the
           | best and will remain that way. As other things come and go
           | they absorb the best ideas from the new stuff without losing
           | what makes them good.
           | 
            | Sometimes the new stuff survives (Node/TS/Go/Rust),
            | sometimes it doesn't (Clojure, Scala, arguably Ruby), and
            | for some we are waiting to find out.
           | 
            | But betting on the JVM and .NET has been a winning strategy
            | from a technical perspective for a long time now.
           | 
            | Also worth mentioning: no, PHP probably doesn't work here,
            | because it's not "general purpose" enough IMO. The JVM can
            | handle almost any workload shape; the main one it doesn't do
            | well in is memory-constrained environments (though it's
            | possible if you really want to). PHP, Node/TS, Python, and
            | Ruby all suffer from being single-threaded, so they can't
            | take advantage of properly large machines without a ton of
            | extra complexity, and you pay a high memory overhead for
            | most of those strategies.
           | 
           | Go is OK, I hate writing it but it can do all the things so
           | if that floats your boat go for it. Rust is also perfectly
           | fine, I just find it is actually pretty slow to write because
           | changing requirements often mean changing structure which can
           | be harder to manage in Rust vs Java/Kotlin/C#.
           | 
           | Anyways, wasn't meant to be a thing about languages. Just
           | pick a good runtime or AoT lang of choice, write everything
           | in it, stick to conventional frameworks, choose PostgreSQL,
           | run the lot on k8s. That is the core takeaway.
        
             | Tainnor wrote:
             | I mean, I currently work on the JVM, I just have enough
             | experience with different languages (including Ruby, which
             | I don't think is quite dead yet) and tech stacks to know
             | that they all suck in their own way. The JVM, and
             | specifically Java, and even more specifically Spring, have
             | a lot of annoying warts, it's just that other languages
             | have different warts. I agree that you shouldn't try to
             | rewrite everything in an obscure, untested technology, but
             | beyond that, there's a number of options with enough
             | tradeoffs. But it seems we're largely in agreement there.
             | 
             | I don't necessarily agree with the "keep everything in one
             | language / stack" sentiment. There's certainly companies
             | that overdo it, having 20 different languages at the same
             | time. But there's also valid reasons for keeping certain
             | tech stacks separate (e.g. having to do ML / data analysis
             | stuff in Python). In the worst case, the "use only a single
             | language" mindset leads to disasters like GWT, which is
             | something that my current company unfortunately still uses
             | because some Java developers (that don't work for the
             | company any more) thought that JavaScript was beneath them.
             | 
             | I also wouldn't choose PHP, for the record. I would like to
             | trash it here, but I've never actually been forced to use
             | it, so I can't even comment on it.
             | 
             | I'm not so sure about k8s. I think you can get away with
             | much simpler deployment models, too. At some point the
             | overhead of having to manage everything manually (or a need
             | for zero-downtime deployments) might make the added
             | complexity of k8s more bearable, but it's certainly added
             | complexity and not necessary everywhere and all the time.
             | 
             | I 100% agree about postgres, it's so good and feature-rich
             | that you should probably really only ever use another
             | database if you have very specific needs (except maybe also
             | adding redis as a cache, because redis is also very good).
        
               | DanHulton wrote:
               | Quarkus also has a lot of annoying warts, as does just
               | plain ol' Java.
               | 
               | Lord, if I never see another Builder pattern again in my
               | life, I may die a happy man.
        
             | foffoofof wrote:
             | Agreed re: jvm and .net.
             | 
             | It's easy to imagine a resurgence in the future.
        
           | [deleted]
        
         | erulabs wrote:
          | The amount of complexity created in the desperate attempt to
          | avoid learning k8s APIs has, at this point, dramatically out-
          | complexified k8s itself.
        
           | politelemon wrote:
           | The quadratic function that represents the hand waving away
           | of k8s complexities causes a stack overflow exception.
        
             | [deleted]
        
           | Arbortheus wrote:
           | I see the anti-K8S sentiment so much on hackernews. Sure,
           | there's a time and a place for K8S, but I see a lot of point-
           | blank objection to it for often weak reasons.
        
             | nova22033 wrote:
              | _It's bloated_
              |
              | Often this translates to "It has features I PERSONALLY
              | don't need or understand"
        
               | jpgvm wrote:
               | I generally find people that don't like k8s don't
               | understand it or more commonly even the reasons why it
               | exists.
               | 
               | If your job hasn't been to write Cloudformation/Terraform
               | /Pulumi/Chef/Puppet/Salt/Ansible/Capistrano/Packer for a
               | living, you might not appreciate it.
               | 
               | As someone that has been around the block I think k8s is
               | the best thing since sliced bread.
        
         | afandian wrote:
         | I'm building something with Spring Boot, Kotlin, Postgres and
         | containers, with a small team. It's far from boring, it's the
         | most fun I've had in ages.
        
           | jpgvm wrote:
           | Yeah I very much prefer Kotlin now. It's Boring Tech but at
            | the same time pleasant because of its high expressivity and
           | tooling.
           | 
           | Also I find it way easier to convince people coming from a
           | dynamic language background to try it out vs Java which they
           | find less approachable. Which is ironic as Kotlin is by far
           | the more complicated language vs Java.
        
           | switch007 wrote:
           | Spring is fun until you need to do anything mildly complex
           | and then you tear your hair out trying to find the right
           | documentation. You have to navigate myriad of sub projects.
           | They really really need to sort out the SEO. It's atrocious.
           | 
           | The only way to learn Spring is to read the latest manuals of
           | every subproject you think you might need, cover to cover.
           | 
           | Even then you might not figure out what awkward collection of
           | beans you need to override the magic Spring does.
        
             | jpgvm wrote:
             | I found Spring to have a little bit of an understandability
             | cliff. Coming from Guice though once I understood the
             | container and had a feel for the general patterns it was
             | easy.
             | 
             | I think if you try to use it without understanding the
             | magic you can probably get pretty far but you will always
             | feel lost when things don't work just like the example.
             | Understanding how all the runtime wiring works at the
             | DI/IoC layer generally makes it pretty easy to work out
             | what is going on though.
        
         | jimbob45 wrote:
         | How are you going to throw away the competitive advantage of
         | serverless first if you're a startup? You're just asking to
         | lose to whichever of your competition that goes serverless
         | first.
         | 
         | However, if you're a massive company like Amazon working on a
         | less-than-cutting edge project, you certainly have the
         | resources to avoid serverless up front to avoid rewrite costs
         | down the line.
        
           | glutamate wrote:
           | LOL.
           | 
           | Competitive advantage comes from understanding user needs,
           | great UX, and a smooth running sales machine. No-one cares
           | that you built with technology X, whatever that is. There is
           | no silver bullet.
        
             | namaria wrote:
             | While I agree with the conclusion, competitive advantage is
             | by definition very relative. It might be what you say, if
             | you're producing some sort of saas offering. But it might
             | be something completely different as well.
        
         | ChicagoDave wrote:
         | Wow what an emotional and biased take.
         | 
         | Architecture isn't like Lord of the Rings where one ring rules
         | them all.
         | 
         | What if the dev team doesn't know Java? Are you suggesting they
         | be forced to learn it?
         | 
         | What if the company has 20 applications with complex
         | interactions? Are you suggesting they all be baked into k8s and
         | then have someone manage containers full-time? What if no one
         | in the company understands containers?
         | 
         | I've been at this for 40 years and will tell you there is no
         | common architecture environment.
         | 
         | There can be any mix of business logic, services, data storage,
         | containers, events, queues, redundancy, reports, analytics,
         | external systems, 3rd party black boxes, skill sets and levels,
         | budget, tech stack, and leadership.
         | 
         | Trying to pigeonhole any architecture into that mix is the very
         | definition of amateur.
        
           | jjtheblunt wrote:
           | > Trying to pigeonhole any architecture into that mix is the
           | very definition of amateur.
           | 
            | I've also been at this 40 years, and what you ended with is
            | worded unnecessarily crassly, in that folks of course learn
            | perspective over time.
        
             | ChicagoDave wrote:
             | I think I was matching the same tone as the previous
             | comment.
        
         | sfvisser wrote:
          | Really strong language without any backing arguments
          | whatsoever. You can't just pick some random tech and decide
          | that this is boring tech so it's good, and the other is bad
          | and always has been. This type of comment is so incredibly
          | meaningless and infuriating.
         | 
         | Please explain _why_ you are saying what you're saying so at
         | least we can learn from it.
        
           | mjr00 wrote:
           | I do think specifying JVM/PG/k8s in that comment was a
           | mistake. Boring tech will be different for different people
           | and companies. Boring tech doesn't just mean it's time-tested
           | and battle-hardened, it means you have deep knowledge across
           | the company because you've used it for so long. I'm in a
           | Python shop, so Python is my boring default tech stack
           | choice. If I introduced C++ into our tech stack, it would be
           | not-boring tech for us, despite C++ not being a cutting edge
           | language.
           | 
           | k8s is funny to call "boring tech" because it's notoriously
           | complex. _But_ if your company has set up dozens of k8s
           | clusters and has a bunch of k8s experts on staff, it _can_ be
           | considered  "boring tech." It's all relative.
        
             | jpgvm wrote:
             | Yeah probably. I should have specified "sufficiently boring
             | tech stack of choice".
        
               | dylan604 wrote:
               | or append "for example"
        
           | bawolff wrote:
            | I think the big thing with "boring" vs "exciting" tech is
            | people pick exciting tech for reasons other than it being a
            | good technical fit (let's board the hype train!!!). Bad
            | results then happen.
            |
            | Boring tech is usually more stable and more mapped out.
            | Everyone knows where it's good and where it's not. Other
            | people have already had whatever problems you have.
           | 
           | I'll always bet on the boring tech over the hyped exciting
           | tech.
        
           | jpgvm wrote:
           | Serverless actual pros:
           | 
           | "infinite" (meaning wallet level) scalability.
           | 
           | Integrated deployment tooling usually.
           | 
           | Integrated ingress usually.
           | 
            | Those last 2 really don't matter though, as neither problem
            | is that difficult if you know what you are doing. They both
            | would have been a huge selling point in a world where k8s
            | doesn't exist. Alas it does, and thus they're mostly moot.
           | 
           | The first however is a big deal as managing very bursty loads
           | on k8s can actually be a bit of a challenge. This is doubly
           | true when you are using some sort of queue and don't have
           | good queue scaling metrics for whatever reason. Serverless
           | lets you cop out of managing that by using infinite
           | concurrency.
           | 
           | Ok so now cons of serverless:
           | 
            | Turns all your code into spaghetti; this is probably the
            | biggest one.
            |
            | Cold boot performance is highly variable and generally
            | worsens with added function complexity, but this depends on
            | tech choice.
           | 
            | Infinite concurrency is a bit of a curse. Want to use that
            | MySQL DB that contains all your business-critical data?
            | Better use some MySQL serverless proxy nonsense, because
            | unbounded connections aren't fun.
           | 
           | Lock-in is awful, migration story is awful.
           | 
            | The amount of Infrastructure as Code overhead (a la
            | Terraform, CloudFormation, whatever else floats your boat)
            | to support serverless is insane. The sheer number of objects
            | that need to be defined, kept in sync, etc. is just bonkers.
            | Not to mention if you have multiple services set up like
            | this, it's all boilerplate you either need to abstract or
            | carry around.
           | 
            | Hides serious programming deficiencies like memory leaks,
            | because context is thrown away after servicing a request.
           | 
            | An unpredictable runtime makes observability more
            | complicated, probably forcing you into poor decisions as a
            | result, e.g. the Prometheus push gateway.
           | 
           | Runtime/execution time limits mean people trying to do all-
           | serverless end up doing horrible things to accomplish long-
           | running tasks.
           | 
           | All of the above leads to increasing pressure to make poor
           | architectural decisions to work around limitations of the
           | runtime.
           | 
           | I could probably go on but that is a reasonable set for you.
           | 
            | EDIT: Actually I forgot the worst con. The cost. The
            | $/CPU/RAM cost of Serverless is beyond bonkers. When I was
            | pricing up Cloud Functions vs Spot instances on GKE I think
            | I came to the conclusion Cloud Functions are about 300x as
            | expensive. If your code can run on serverless, it can
            | probably run on Spot instances, which you can just add to
            | your GKE deployment using another node pool; schedule only
            | things that can handle nodes going away within 60s there,
            | and instantly reap the ~300x cost savings. This is because
            | Serverless is sold at a massive premium to CPU list prices,
            | while Spot is sold at like ~30-40% of list prices depending
            | on machine type. (For those curious: when I last checked,
            | T2D was the best price/performance for highly single-thread-
            | performance-dependent workloads, a la Node.js.)
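           |
           | (A napkin-math sketch of that comparison; the rates below
           | are placeholders, not list prices, so the exact multiple you
           | land on depends on region, machine type, and how much CPU
           | your functions actually get:)
           |
           |     // Hypothetical per-GB-hour compute rates.
           |     const serverlessRate = 0.06; // assumed
           |     const spotRate = 0.002;      // assumed spot rate
           |
           |     // Steady 1 GB workload over a 730-hour month:
           |     const serverless = serverlessRate * 730;
           |     const spot = spotRate * 730;
           |     console.log(serverless.toFixed(0),
           |                 spot.toFixed(2),
           |                 Math.round(serverless / spot) + 'x');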
        
             | kpw94 wrote:
             | > cons of serverless:
             | 
             | > Turns all your code into spagetti, this is probably the
             | biggest one.
             | 
             | Overall agree with you, but I don't think spaghetti is the
             | right term here - even if the typo is corrected :)
             | 
             | Spaghetti code is more of a term for monolithic mess where
             | you have a hard time following all the code logic
             | (cyclomatic complexity) and dependencies.
             | 
              | Understanding & tracking dependencies is actually easier
              | with serverless; it's pretty clear what dependencies that
              | one tiny microservice has.
             | 
              | But where it becomes worse is the code logic mess: you've
              | now turned it into a distributed-systems problem where, in
              | addition to having an even harder time with cyclomatic
              | complexity, you also have to think about transient
              | failures/retries at each service boundary.
             | 
             | That and the other problems you describe (performance hit
             | etc)
        
               | bawolff wrote:
               | > Spaghetti code is more of a term for monolithic mess
               | where you have a hard time following all the code logic
               | (cyclomatic complexity) and dependencies.
               | 
               | The only thing worse than a monolithic mess, is the same
               | thing but distributed
        
               | bcrosby95 wrote:
               | How about spaghetti architecture.
        
               | jpgvm wrote:
               | I like that one.
        
               | hyperhopper wrote:
               | Prior art:
               | 
               | https://preview.redd.it/y3pw78z0s3oz.jpg?auto=webp&v=enab
               | led...
        
               | goalieca wrote:
               | > Spaghetti code is more of a term for monolithic mess
               | where you have a hard time following all the code logic
               | (cyclomatic complexity) and dependencies.
               | 
               | I started using distributed monolith.
        
               | ohmahjong wrote:
               | I saw this described recently as the "Lambda Pinball"
               | architecture.
               | 
               | * https://www.thoughtworks.com/radar/techniques?blipid=20
               | 19110...
        
               | jpgvm wrote:
               | When even Thoughtworks thinks you are doing it wrong you
               | know you are in trouble.
        
               | jpgvm wrote:
               | Yeah I completely agree with all of that.
        
         | scarface74 wrote:
         | "I don't think you need unnecessary complexity. You should use
         | Kubernetes"
         | 
         | There is a certain lack of self awareness.
         | 
         | What exactly do you think that serverless is besides spinning
         | up VMs on demand where you don't have to maintain the
         | orchestration?
        
         | foffoofof wrote:
         | Is jvm pg and k8s boring enough?
         | 
         | K8s is overkill. Jvm is heavy. Pg is fine but relational dbs
         | generally suck.
        
           | beckingz wrote:
           | elaborate on how relational databases suck?
        
             | vbezhenar wrote:
             | They're not web scale.
             | 
             | https://youtu.be/b2F-DItXtZs
        
           | sanderjd wrote:
           | What is better than relational dbs (and for what purposes)?
           | 
          | I've been learning k8s fundamentals recently, after using
          | borg for a long time and then a while of kind of just poking
          | at k8s, trying to use it without learning it. Honestly, it
          | seems about as simple as a solution to such a broad set of
          | real, common problems could plausibly be.
        
         | opmelogy wrote:
         | > All the points noted in the slides etc just come from poor
         | engineering leadership. i.e decisions/choices, etc.
         | 
         | Hindsight is 20/20. It's easy to make these claims after the
         | fact. In the moment it's far more complex and the decisions
         | made at the time take in a lot of factors. Oversimplifying
         | things and placing blame on a tiny thing doesn't push the
         | discussion and learnings forward.
         | 
         | Lambda came out in late 2014 and Step Functions came out in
         | late 2016. It's entirely likely that this team was an early
          | adopter of the tech. Internally, a lot of these AWS
          | technologies have roadmaps that sound like they will solve a
          | lot of your issues, but those roadmaps get tossed fairly
          | regularly as priorities shift at the product level and the
          | AWS-wide level.
         | 
          | We won't be able to get the info on what went on here (likely
          | the team that improved things doesn't even know, due to
          | turnover). But watering it down to "poor engineering
          | leadership" is off the mark and way too simplistic.
        
           | jpgvm wrote:
            | I wasn't talking about the Prime Video case, I was talking
            | about the slides in Adrian's blog post which compare
            | Serverless with traditional development.
           | 
            | IMO if you are facing those problems it has to be poor
            | leadership; there is no excuse for not being able to make
            | effective decisions if you know what you are doing.
        
         | sanderjd wrote:
         | Doesn't JVM here result in some amount of "dicking around" with
         | respect to which JVM language to use? Or do you mean that it
         | doesn't matter, as long as it's all JVM? Or do you mean to just
         | pick java?
         | 
          | One thing I've seen in my experience, while generally buying
          | this "standardize on boring tech" take, is that it's really
          | common
         | these days for a company to be doing some amount of "serious"
         | data science work, and most people seem to really want to do
         | that in python, and it also seems to benefit pretty
         | significantly from some kind of OLAP database system. So adding
         | those things onto your "boring" stack becomes
         | JVM+psql+k8s+python+snowflake/databricks/duckdb(?).
         | 
         | But I agree with your real point about serverless.
        
           | jpgvm wrote:
           | Generally speaking your data-science stuff wouldn't fit into
           | the serverless bucket anyway though.
           | 
           | Also generally Python is just used for terminal processing in
           | the Big Data world, all the pieces in between are JVM anyway.
        
             | disgruntledphd2 wrote:
             | Yeah, I mean if you're already a JVM shop then you could
             | probably run Spark yourself (which can be painful) and use
             | SparkSQL to dump to Python (maybe on databricks, though I'm
             | not a massive fan) and do your fancy ML stuff there.
             | 
             | Alternatively, you could just use a read-only PG replica
             | and Python here, which would suffice for most DS needs
             | (unless your core product is DS related or you have
             | huuuuuggggggggeeee data (which you probably don't)).
        
               | jpgvm wrote:
               | Yeah that is fair too. Most people have tiny data. Then
               | there is places I have worked... where 10TB qualified as
               | small data that you "just query with Presto" and where
               | more exotic things were needed for actually interacting
               | with the "big data".
               | 
               | All relative in the end.
        
           | vbezhenar wrote:
            | For backend, JVM implies Java. Although Kotlin is a safe
            | choice as well, it's not as popular, so I wouldn't suggest
            | using it unless you have good reasons to (like a team of
            | Android developers who decided to build a backend).
        
             | jpgvm wrote:
             | I actually use Kotlin. :)
             | 
             | I am happy with either Kotlin or Java though.
             | 
            | The reason for Kotlin is mostly nullability typing, though
            | @NotNull gets you pretty far in Java these days with the
            | tooling configured correctly.
             | 
             | I like the syntax a bit more and extension functions are a
             | nice sugar vs util classes with static functions.
             | 
             | The main reason however is I have a lot more luck
             | convincing Typescript developers to try Kotlin than Java
             | which is relevant when most of my colleagues have primarily
             | done Typescript.
             | 
             | Either or though IMO, both are great choices for backend
             | development and they interoperate really well.
        
         | Jasper_ wrote:
         | Hearing K8s described as "boring" made me chuckle.
        
           | NickBusey wrote:
           | It's just yaml. It's pretty boring.
        
         | throwawa23432 wrote:
         | I'm confused by your language since you can do "serverless" or
         | "FaaS" or "cloudfunctions" on k8s.
        
         | MuffinFlavored wrote:
         | > Boring Tech stack of JVM + PG + k8s.
         | 
         | We live in a new time where k8s is "boring" and grouped in as
         | such.
        
           | jpgvm wrote:
           | For infra folk that lived through the "before time" it
           | certainly is and damn is it refreshing that it's become
           | boring at most places. :)
        
         | jonfromsf wrote:
         | What is PG?
        
           | jpgvm wrote:
           | PostgreSQL
        
         | [deleted]
        
         | romanhn wrote:
         | As someone who went with Lambda for my current project, I don't
         | agree with this take at all. In fact, it's as boring of an
         | implementation as you can imagine - a single "fat function"
         | developed in .NET serving the entire web application. Can be
         | easily transplanted onto any other hosting paradigm, but Lambda
         | is incredibly cheap (free at my current usage) and incredibly
          | scalable should I hit some inflection point. Learning K8s
          | would have been overkill.
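         |
         | For flavor, the same "fat function" shape in Node terms (mine
         | is .NET, so serverless-http and the route here are purely
         | illustrative):
         |
         |     import express from 'express';
         |     import serverless from 'serverless-http';
         |
         |     // Ordinary Express app; nothing Lambda-specific
         |     // about the routes themselves.
         |     const app = express();
         |     app.get('/health', (_req, res) =>
         |       res.json({ ok: true }));
         |
         |     // One handler serving the whole web application,
         |     // sitting behind API Gateway.
         |     export const handler = serverless(app);
         |
         |     // Transplanting to another host just means
         |     // listening directly instead:
         |     // app.listen(8080);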
        
           | fallat wrote:
            | k8s is not as hard as it's advertised to be
        
             | vbezhenar wrote:
              | I implemented self-hosted k8s at my current company and
              | had to learn it from the ground up. I'm not a
              | professional, but I learned a thing or two.
             | 
             | My opinion is as follows:
             | 
              | Installing kubernetes with kubeadm is easy. You can do it
              | in a few hours if you're fluent with Linux.
              |
              | Kubernetes storage is where it starts to be hard. We were
              | lucky to have OpenStack hosting, which allows us to use
              | network block storage (ceph) without any additional work.
              | Using kubernetes without any storage might severely limit
              | the apps you can run there, although it's possible. Using
              | local storage is an option, but with many "buts". Rolling
              | out ceph yourself: I've heard it's hard.
             | 
             | Load balancing: again I was lucky to have OpenStack load
             | balancer. Otherwise, I think that it's possible, but
             | definitely not within a few hours, unless you did it
             | before.
             | 
             | Kubernetes networking might be tricky. I wasn't able to
             | implement it in a way I like. I used calico with overlay
              | mode in the end, but I'd prefer a "flat" network. Calico
             | overlay mode was easy.
             | 
              | Upgrading kubernetes manually is easy. At least until it
              | breaks; it hasn't broken for me yet. kubeadm just works.
             | 
              | Node autoscaling is hard. There are no simple solutions,
              | at least I didn't find any. Our cluster does not
              | autoscale. Manual scaling is not hard, though. I could
              | implement autoscaling with loads of bash/terraform/cloud-
              | init scripts, but I'd rather not. I think node
              | autoscaling is what managed kubernetes offerings do well.
             | 
              | Now that we have kubernetes, the story is not really
              | over. We need to collect logs for nodes, for services,
              | for pods. We need to collect metrics. We need to set up
              | alerts. We need to have traces. A service mesh looks
              | really nice so you can see how services interact with
              | each other. Those things are not a given. They take a lot
              | of time and expertise. And while they're not strictly
              | about kubernetes, usually you need them.
        
               | __turbobrew__ wrote:
               | Agree on storage being the hard part. You either need a
               | robust SAN to store persistent volumes or you need to
               | store persistent volumes on the local storage of kubelets
               | which is a major pain. Once a pod binds a local pvc to a
               | kubelet that pod must now run on that kubelet for the
               | rest of eternity. If you want to get rid of a kubelet
               | with a local pvc you either need the application be
               | durable to data loss (which many are not) or you need to
               | implement some sort of stateful pod draining logic.
               | 
               | When rolling your own, I think you need to invest in a
               | battle tested SAN or just not do stateful on k8s. For the
               | handful of stateful services I have run on k8s it would
               | have been better to just use a managed stateful solution.
        
             | ghaff wrote:
             | One of my recent conclusions is that it's not so much that
             | Kubernetes on its own is so very complicated; it's that if
             | someone tries to glue their own container platform together
             | from Kubernetes plus another dozen or so cloud-native
             | projects and integrate the whole thing, it can get wickedly
             | complex very quickly.
        
               | derefr wrote:
               | Right -- Kubernetes was evolved in the context of being a
               | Google Cloud service, and despite now having other
               | deployments, still is mostly architected around the
               | assumption that your Kubernetes cluster is surrounded by
               | a featureful IaaS platform providing things like managed
               | load balancers, VM auto-scaling pools, iSCSI volumes on a
               | dedicated SAN network, non-throughput-bottlenecked object
               | storage (and a container image repo on top of that!), a
               | centralized logs and metrics sink, etc. A k8s cluster
               | expects these like a computer expects a bootable storage
               | device.
               | 
               | If you're using a _managed_ Kubernetes (in GCP, or AWS,
               | or Azure, etc) then the owner of the platform presumably
               | has all this stuff going on somewhere (whether they
                | expose it to you or not), and so is solving all these
                | problems for you. But if you're _not_ using managed
                | Kubernetes -- i.e. if you're ever typing `k3s deploy` or
                | whatever equivalent -- then you'd better realize that
               | you're signing up to either _also_ deploy and manage your
               | own bare-metal OpenStack deployment for your k8s cluster
               | to plug into; or to tediously glue together a grab-bag of
               | services from your hosting provider + third-parties to
               | achieve the same effects.
               | 
               | (And this is why Google's "Anthos" is an interesting
               | offering to many enterprises: rather than running your
               | own on-prem k8s cluster implying the need to own+manage
               | on-prem supporting infra, you can instead have a GCP
               | project that has all the managed supporting infra, and
               | maybe some hosted k8s clusters; and then your on-prem k8s
               | nodes can just think of themselves as part of that GCP
               | project, relying on the same cloud infra that the hosted
               | clusters are relying on -- while still being on-prem
               | nodes, and so still having data security guarantees, low-
               | latency links to your other on-prem systems, etc.)
        
               | ghaff wrote:
               | There are other container platforms like Red Hat
               | OpenShift as well (disclaimer: I work there) either
               | operated for you or on-prem (doesn't need to be on top of
               | OpenStack and typically isn't these days). I'm biased but
               | Kubernetes complexity tends to be conflated with the
               | complexity of a DIY container platform which can be
               | significant, especially given that Kubernetes itself has
               | generally made the decision to keep its scope relatively
               | narrow.
        
           | bob1029 wrote:
            | We are thinking the exact same thing (w/ Az Functions).
            | Keeping it as simple as possible means you can forklift the
            | whole thing in one afternoon to a different provider. It
            | also means you could potentially even go back to an on-prem
            | stack or whatever.
           | 
           | We are looking at taking our .NET6 monolith and "inverting"
           | it so the entry point is our logical HTTP resources instead
           | of the various DI monstrosities that emerge from main.
           | 
           | The reasoning for us is that we don't want ownership over all
           | of the horrible things we could get wrong. If a <TLS1.2
            | connection somehow gets established and something naughty
           | happens because we forgot to update some startup.cs
           | boilerplate, it's currently our pain today. Like legally. If
           | we are just eating pre-approved/filtered/etc HTTP requests
           | from inside our hermetically-sealed function environment, we
           | get to cast a lot more blame than we otherwise would be able
           | to.
           | 
           | I really don't get why you'd go to serverless and then shit
           | it up with a lot of extra complexity. The whole point in my
           | view is to inherit the compliance packages of your
            | hyperscaler by way of directly using their native primitives
           | in as standard a way as possible. This means you get to
           | 'sail' through audits and engage up-market clients that are
           | typically out of reach for everyone else.
        
             | ricardobeat wrote:
             | If you have a monolithic entrypoint, is it really
             | serverless? You're simply using the Lambda infra as a run-
             | on-demand server, something Heroku has been offering for
             | 10+ years. Why would you pay extra when you can deploy to a
             | simple VM or managed platform costing 100x less?
        
               | romanhn wrote:
               | What makes it serverless is the ability to spin up
               | instances on demand (including down to zero when not in
               | use), while paying only when the resources are in use.
               | Given Lambda's generous free tier, I'm literally paying
               | nothing for it. That would not be the case for any other
               | platform.
        
               | rhodysurf wrote:
                | Because it doesn't actually cost less? You have to have
                | a LOT of invocations to match the cost of the entry-
                | level 6 dollar Digital Ocean box. And with that box you
                | have to manage it, whereas with serverless you don't
                | have to spend engineering time (which costs an order of
                | magnitude more) on maintenance, a big deal for small
                | shops without dedicated/capable ops teams.
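                |
                | (Sketch of that napkin math, using Lambda's published
                | $0.20 per 1M requests and assuming execution stays
                | inside the compute free tier; real bills vary:)
                |
                |     const boxMonthly = 6.0;     // $/month VM
                |     const perMillion = 0.20;    // Lambda requests
                |     const freeReqs = 1e6;       // free tier
                |     const breakEven = freeReqs +
                |       (boxMonthly / perMillion) * 1e6;
                |     // ~31M requests/month before the box wins
                |     console.log(breakEven.toLocaleString());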
        
               | lukevp wrote:
               | What you described would be more akin to using ECS
               | Fargate to run the workload. What they described sounds
               | like a single .NET lambda behind API Gateway, so the
               | infrastructure of ingesting requests (and handling TLS
               | termination, DDoS mitigation, WAF, etc.) all happens on
               | the AWS side. All you have to do is handle the routing
               | inside of the lambda using regular ASP.NET Core. It would
               | also scale up infinitely and down to zero with no work on
               | their end. This isn't similar to running something on
               | Heroku on an allocated server with fixed overhead at all.
        
             | [deleted]
        
         | xg15 wrote:
          | > _Serverless first is a mistake. It should be "serverless
          | where its distinct advantages are actually useful" which is
          | way less common than people think._
         | 
         | Wouldn't that approach take away the remaining advantages of
         | Serverless while still leaving you with the cost?
         | 
          | My understanding was that Serverless is supposed to free you
          | from the burden of maintaining a complete software stack,
          | including configuring the db, server, application runtime,
          | etc., and just focus on the business logic - a bit like one
          | of the old shared-host LAMP setups from a few decades ago.
         | 
         | If that is true, then you'd be well-advised to either go all-in
         | and structure your entire application as lambda functions, or
         | don't use functions at all and go with a "traditional"
         | kubernetes deployment.
         | 
         | As both strategies come with considerable complexity cost,
         | combining them would give you the cost of both. I think you'd
         | want to avoid that if possible.
        
           | vbezhenar wrote:
            | The main issue of lambda and similar offerings is that
            | you're pretty much bound to the cloud.
            |
            | Kubernetes is a much more open standard. It's much easier
            | to move to another cloud or to self-host.
           | 
           | It's more about insurance and keeping other options.
           | 
           | Managed Kubernetes, managed databases are a thing and you
           | should use them if you can. Just don't depend on cloud-
           | specific stuff too much.
        
           | jpgvm wrote:
            | Somewhat. I think if I was ever to use serverless for
            | something, it would be in concert with a traditional setup,
            | so in a way yes, you pay 2x the deployment complexity
            | because now you are orchestrating 2 environments.
            |
            | However, if you have a use case that is absolutely a
            | perfect fit for serverless -- incredibly bursty,
            | unpredictable, very short runtime, to the point it would be
            | to the detriment of other services hosted on the same
            | infrastructure (i.e. it would need, say, an isolated
            | cluster anyway) -- then yeah, serverless could be a cost
            | worth paying.
        
         | optimuspaul wrote:
         | I disagree very much. I don't think any blanket statement like
         | that makes sense.
         | 
         | I recently started building a service and need to keep costs
         | down. Serverless is perfect for it. My service has low to
         | moderate traffic, costs pennies to run on AWS Lambda and
         | MongoDB Atlas. If I had gone the boring route of JVM + PG +
          | k8s, putting aside the time it would take to refamiliarize
         | myself with anything on the JVM, the cost to run this service
         | would have been in the hundreds of dollars a month vs the
         | pennies. Interestingly the most expensive part of my current
         | setup is the NAT Router on my VPC. With JVM + PG + k8s it would
         | have been PG or K8S depending on where I ran PG.
         | 
          | I do agree that there is a misconception about containers
          | taking longer than serverless. I don't think either takes
          | longer, considering the current tooling available.
         | 
          | Seems like you got burned on Serverless at some point; I'm
          | sorry that happened. But for many people and teams it is a
          | productivity multiplier and can be a big cost-cutting
          | solution.
        
         | GuB-42 wrote:
         | JVM+PG+k8s is absolutely not "boring tech" to me.
         | 
          | LAMP, this is boring tech. You can switch Apache for Nginx,
          | and MySQL for another SQL database, but that's the idea.
          | People still develop in PHP today, and performance-wise,
          | assuming you follow standard good development practices and
          | have good servers, it can take a lot. If it was good enough
          | to get Facebook started, it is most likely good enough for
          | you, and the situation has much improved since the early days
          | of Facebook, a lot of it thanks to Facebook itself, and also
          | better hardware. PHP security is not as terrible as it once
          | was, either.
         | 
          | At work we have a couple of Spring projects, and I don't
          | think I have hated a tech stack more than that. If you want
          | to illustrate the "action at a distance" anti-pattern, look
          | no further. I'm sure most Spring web apps could be
          | implemented in PHP at a fraction of the size, both in terms
          | of application code and memory. I don't do k8s, so I have no
          | real opinion except that it looks like overkill in many
          | cases. As for Postgres, nothing against it, it is a very good
          | SQL database, though my opinion is that you should limit
          | yourself to basic SQL as much as you can, so that you are not
          | dependent on a particular database.
        
         | waboremo wrote:
         | Actually your comment is a great demonstration of poor
         | engineering leadership. You've already decided on an outcome
         | and are shifting the narrative to adhere to this outcome.
         | 
          | The labelling of JVM + k8s as boring, to posture why it's
          | automatically superior to "serverless integration crap", is
          | exactly the sort of thing poor engineering leadership does to
          | avoid having the sort of difficult conversations necessary.
          | There are no metrics involved, no studies, no prototypes, no
          | financial assessments, none of that; it's purely based on
          | emotion.
         | 
         | You can envision the same exact thing happening (but to the
         | preference of serverless) on the Prime Video team.
        
           | jpgvm wrote:
           | It's not emotion. I make money from porting people off of
           | poor technical choices onto solid ones. It comes from
           | experience working with all these various systems.
           | 
           | I have used (at large scale) GAE, Google Cloud Functions and
           | Lambda, before that I also used all the previous "serverless-
           | like" things including Beanstalk (anyone else even remember
           | that?), Heroku and bunch of other PaaS things for various
           | stuff.
           | 
            | It's all come and gone. To me serverless is just the latest
            | in a long line of things that I get paid to replace with
            | things that actually solve the problem and don't turn into
            | a ball of spaghetti that no one is able to modify anymore.
        
             | unusualmonkey wrote:
              | Bit of survivorship bias there! No one is going to hire
              | you to port off of serverless if they're happy with it.
             | 
             | Ergo you only see cases where it isn't working well.
        
               | jpgvm wrote:
               | Yeah that is fair. There is probably serverless chugging
               | away fine in lots of places.
               | 
               | Doesn't make it better than alternatives but still a
               | valid point.
        
               | unusualmonkey wrote:
                | I use serverless successfully - handling tens of
                | millions of annual revenue.
               | 
               | We do this in a handful of small services, none of which
               | gets much traffic. The alternatives (e.g. app-engine,
               | container in vm, or k8s) all require more infrastructure,
               | would cost more (no scaling to zero), and are overall
               | less suited for the kinds of problems we expect to see.
        
             | emodendroket wrote:
             | > I make money from porting people off of poor technical
             | choices onto solid ones
             | 
             | So it was successful enough for long enough to get them to
             | the point where someone cared about optimizing and paid you
             | to come in, right? I don't see how you are disproving the
             | article's argument that this is a great way to prototype
             | ideas or get a first version of the product out there.
        
               | rat9988 wrote:
               | "So it was successful enough for long enough " Which
               | doesn't necessarily make it a good choice at any point.
               | It just says it wasn't enough of a bad technical choice
               | to make the product sink.
        
               | emodendroket wrote:
               | OK, sure, it's not ironclad proof that it was the right
               | decision. But it's certainly not proof it was the wrong
               | decision when it was made.
        
               | namaria wrote:
               | "It works and makes money" is at the root of so much
               | shoddy workmanship in software I think you're making a
               | different point than what you're trying to.
        
               | jpgvm wrote:
               | My opinion from doing both these ports and greenfield is
               | this:
               | 
               | None of the tools that say they help you move faster
               | matter after the first 3 months.
               | 
               | It's true they take less code and pieces to get started
               | with. No k8s manifests, no Dockerfiles (usually) but if
               | you don't hate your future self you will still use IaC to
               | create the API Gateways/function
               | contexts/DynamoDB/whatever, so you will still be dicking
               | around with at least one of Terraform/Pulumi/similar.
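                | 
                | For illustration, even one table means IaC plumbing.
                | A minimal Pulumi TypeScript sketch (names made up):
                | 
                |     import * as aws from "@pulumi/aws";
                | 
                |     // A single DynamoDB table for the "serverless"
                |     // app; the gateway, functions and permissions
                |     // all need the same treatment.
                |     const table = new aws.dynamodb.Table("events", {
                |       attributes: [{ name: "pk", type: "S" }],
                |       hashKey: "pk",
                |       billingMode: "PAY_PER_REQUEST",
                |     });
                |     export const tableName = table.name;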
               | 
                | The thing is, the setup cost of a traditional lang +
                | pg + k8s stack amortised over 3 months is essentially
                | nothing, and yet you have a setup that will last you
                | effectively forever.
               | 
                | If your company wasn't going to die because of being
                | a few days slower in that initial 3 month period, you
                | have come out ahead. You have a forever architecture
                | and you avoided a whole bunch of costs and technical
                | debt in the process.
               | 
               | Maybe the calculus is different if you don't have the
               | same depth of infrastructure experience I do but for me
               | it's obvious this is a faster and better way of
               | delivering a prototype that gives it proper room to grow
               | into production software.
        
               | nostrebored wrote:
               | The IAC piece is the same across both. Except with k8s
               | you're managing infrastructure across cloud, k8s,
               | potentially frameworks like helm, etc.
               | 
               | And on a large team there's very little likelihood that
               | everyone understands the different configuration layers.
               | There's very little likelihood that people understand the
               | scope that changes to configuration might touch either.
               | But when you work with CDK, it's one tool to understand.
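                | 
                | For example, a rough CDK TypeScript sketch (names made
                | up) where the function, the route and the permissions
                | all live in one construct tree:
                | 
                |     import { Stack, StackProps } from "aws-cdk-lib";
                |     import { Construct } from "constructs";
                |     import * as lambda from "aws-cdk-lib/aws-lambda";
                |     import * as apigw from
                |       "aws-cdk-lib/aws-apigateway";
                | 
                |     export class ApiStack extends Stack {
                |       constructor(scope: Construct, id: string,
                |                   props?: StackProps) {
                |         super(scope, id, props);
                |         const handler = new lambda.Function(
                |           this, "Handler", {
                |             runtime: lambda.Runtime.NODEJS_18_X,
                |             handler: "index.handler",
                |             code: lambda.Code.fromAsset("dist"),
                |           });
                |         // Wires routes + invoke permissions for us.
                |         new apigw.LambdaRestApi(this, "Api",
                |           { handler });
                |       }
                |     }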
               | 
                | "Faster and better way to grow into production"
                | ignores team-level problems and team-level
                | constraints. At 10,
               | 20, 100 engineers this is fine. But when you have dozens
               | of teams you're almost certainly losing efficiency by
               | being prescriptive.
               | 
                | I find that this type of pedantry really surrounds the
                | k8s ecosystem. I think k8s is an enterprise-scale tool
                | that has uses in enterprise applications. But the idea
                | of suggesting it to everyone is lunacy.
        
               | emodendroket wrote:
               | I am more of a generalist than you are but I have worked
               | with a Kubernetes setup and I've also worked with going
               | all-in on Lambda and other AWS services managed by CDK
               | and I don't share your intuition. Besides the initial
               | setup, the former also has other ongoing maintenance
               | things that are harder to deal with. It does give you
                | more flexibility if whatever you're doing isn't a good
                | fit for those tools (though obviously nothing stops
                | you from provisioning EC2 instances or whatever
                | through other tools).
        
             | trollied wrote:
             | Amen, brother. Love you.
        
             | vlovich123 wrote:
             | Have you tried Cloudflare Workers? I work at Cloudflare but
             | I'm genuinely curious what your feedback is on it as it
              | does work quite differently from Lambda/Cloud
              | Functions.
        
               | jpgvm wrote:
                | Yes actually, we're currently using it in production. I
               | advocated for the move to Cloudflare and we ended up
               | adopting workers because Cloudflare unfortunately has
               | broken stale-while-revalidate support. So I built an
               | implementation in Workers that abuses the Cache API to
               | store our own TTLs and also our revalidating state. It
               | works pretty well as we don't end up with too many
               | isolates per POP but it's not perfect.
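                | 
                | Rough shape of it (header name and TTLs made up, and
                | the "already revalidating" lock elided):
                | 
                |     export default {
                |       async fetch(req: Request, env: unknown,
                |                   ctx: ExecutionContext) {
                |         const cache = caches.default;
                |         const hit = await cache.match(req);
                |         if (hit) {
                |           const exp = Number(
                |             hit.headers.get("x-swr-expires") ?? 0);
                |           if (Date.now() < exp) return hit;
                |           // Stale: serve now, refresh out of band.
                |           ctx.waitUntil(refresh(req, cache));
                |           return hit;
                |         }
                |         return refresh(req, cache);
                |       },
                |     };
                | 
                |     async function refresh(req: Request,
                |                            cache: Cache) {
                |       const origin = await fetch(req);
                |       const res = new Response(origin.body, origin);
                |       // Our own freshness window, since the edge
                |       // cache won't honour stale-while-revalidate.
                |       res.headers.set("x-swr-expires",
                |         String(Date.now() + 60_000));
                |       res.headers.set("Cache-Control",
                |         "public, max-age=86400");
                |       await cache.put(req, res.clone());
                |       return res;
                |     }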
               | 
               | I didn't end up using KV or other features yet, they
               | weren't a good fit for the cache revalidation
               | implementation.
               | 
                | I think, at least for now, I see it differently to
                | FaaS services like Lambda/Cloud Functions because I
                | don't intend to use it to implement application
                | logic, rather just to augment (or in this case shore
                | up deficiencies in) the edge.
               | 
                | We may consider doing more stuff at the edge later for
                | latency advantages. This to me changes the calculus,
                | because Lambda (well, vanilla Lambda, not Lambda@Edge)
                | and CF solve my usual problems, just with higher
                | costs. Cloudflare Workers actually let me do things I
                | couldn't otherwise do. So the cost is still just as
                | high as FaaS platforms BUT importantly I actually get
                | something for it, something I can't otherwise have and
                | important enough that I am willing to pay the costs
                | and overheads for.
               | 
                | Please advocate for real stale-while-revalidate
                | support internally! It would make tons of folks super
                | happy.
        
               | vlovich123 wrote:
                | Interesting. Since I TL KV currently I'm particularly
                | interested in the ways it's not working for you. We're
                | in the process of rolling out major changes that might
                | help you, so hit me up on the KV Discord or my
                | username at cloudflare.com. I actually do have a
                | meeting with the Cache team on Monday about something
                | related and I'll ask about it if you can message me
                | what specifically is broken / what semantics you want
                | (I'm not intimately familiar with all the HTTP
                | semantics).
        
               | jpgvm wrote:
               | Shot you an email.
               | 
               | I must admit I didn't go super deep on KV so I will come
                | by Discord at some point. I can share what we are doing
               | and maybe it turns out KV is better suited.
        
             | Spivak wrote:
              | Seconded, just got done with one of these. Literally
              | 1200 separate Lambdas, Step Functions, hundreds of
              | Kinesis streams, and SQS queues, plus a smattering of
              | ECS Fargate tasks for when, invariably, something takes
              | longer than the max Lambda time. And oh god, all the
              | stupid supporting infrastructure to build layers.
              | 
              | Their primary app was Rails; you know what replaced all
              | of it? Running the synchronous tasks on the app servers
              | themselves, the async ones on Sidekiq, and changing the
              | max instances on the two ASGs.
             | 
             | The only downside has been not being able to meme about how
             | we're web scale anymore.
        
               | jpgvm wrote:
               | Not being able to make jokes about how fast it is to
               | serve requests from /dev/zero must be absolutely killing
               | your team.
               | 
               | However, great job. Once they get to that scale of
               | objects it becomes really hard to clean it up -
                | especially if most of the people that wrote them have
                | already left the company.
               | 
               | Curious if you ended up also saving money in the process?
        
             | noduerme wrote:
             | As a one man shop, I made a decision years ago to try
             | Beanstalk for a particular app backend / API. It's the only
             | "serverless" deployment I've done. It still works fine for
             | that purpose, and it's less trouble than manually deploying
             | to multiple servers. Forced updates are an occasional
             | headache but at least they're all in one place. The code
             | does a lot of serverside computation for a lot of client
             | connections, so that part needs to scale, and since there's
             | relatively little database stuff it just connects to a
             | single RDS instance.
             | 
             | I can't think of any other code of mine for which I'd use
             | Beanstalk or something similar now, but it doesn't hurt to
             | evaluate each project independently and pick the right
             | tools for the job.
        
               | jpgvm wrote:
                | Yeah, Beanstalk was pretty simple and decent for its
                | time.
               | 
                | I told anyone that wasn't comfortable with CloudFormation
               | + Packer and wanted to be on AWS to use it during that
               | era. At that time things were pretty binary, you used
               | PaaS like Beanstalk or Heroku or you were off in the wild
                | west where there were no standards at all. People were
               | doing immutable AMIs on ASGs at the high end and then
               | everything down to pet AWS instances w/Capistrano or
               | similar tools at the low end.
               | 
                | These days things are different; containers were a big
                | deal. If you target k8s your application is pretty
               | portable, can be run locally in something like kind to
               | integration test the infra components, etc. You can take
               | it to the 3 major clouds and bring most of your ingress,
               | secrets, volume, etc configuration with you.
               | 
               | For me k8s has been a godsend because for once I haven't
                | had to learn a whole new set of idioms at each new
                | place that are all equivalent but different in their
                | own special snowflake ways.
        
               | emodendroket wrote:
               | > You can take it to the 3 major clouds and bring most of
               | your ingress, secrets, volume, etc configuration with
               | you.
               | 
               | How often have you seen someone actually do this, and was
                | it ever actually as "easy" as they imagined? I feel
                | like the advantages of "going with the flow" and using
                | the stuff specific to the cloud environment you're in
                | are substantial, and the efforts to avoid vendor
                | lock-in usually start to look insufficient once you
                | actually try to execute.
        
               | jpgvm wrote:
                | I have done one massive AWS + kops -> GKE migration and
               | several smaller migrations between AWS/GCP/Azure.
               | 
               | My experience leads me to believe the main problems stem
                | from adopting IaaS/PaaS specific stuff outside of k8s.
                | If you aren't doing that, workloads are very portable,
                | and even if you are, if it's standard stuff like
                | managed PostgreSQL/MySQL then it's still easy.
               | 
                | Where that breaks down is very vendor-specific stuff,
                | a la Spanner, or queues/logs like SQS/PubSub. If you are
               | using those and your architecture is tied to specific
               | semantics of them you are in for a rough time.
               | 
               | I personally try to avoid any and all of those. Not for
               | vendor lock-in reasons but because I greatly dislike not
               | being able to run everything locally in exactly the same
               | configuration it will be deployed in.
               | 
               | So I run PostgreSQL in k8s using Patroni, Kafka or Pulsar
               | for queue/log, etc and that is about it usually. I
                | generally don't need/use other things. Maybe memcached
                | if there is a good need for it over, say, Caffeine (an
                | in-memory cache).
               | 
               | This is awesome because it means it's super super easy to
                | have a staging/UAT environment or even preview environments
               | on PRs and I can run integration tests against the whole
               | stack E2E without spinning up/manipulating cloud provider
               | resources.
        
             | plugin-baby wrote:
              | Has GAE come and gone? Surely it's still there, and
              | capable of running the JVM? K8s seems like overkill for
              | small teams which don't already have experience with it.
        
               | jpgvm wrote:
                | GAE is actually still pretty good tbh. However it has
                | its limitations, and even though there are things like
                | Cloud Run et al I still find GKE is the best option.
        
             | babyshake wrote:
              | One potential reason to advocate for serverless
              | architecture is that it is relatively stateless and
              | event-driven, and therefore modular and easy to reason
              | about.
              | 
              | Can you be more specific about how serverless tends to
              | turn into a ball of spaghetti? Is it simply because the
              | lack of state becomes more of a bug than a feature?
        
               | xvinci wrote:
                | Statelessness is really a prerequisite for proper
                | horizontal scaling (with small exceptions), so best
                | practices for e.g. Spring Boot web applications will
                | lead to a stateless architecture as well. Same goes
                | for event-driven: AMQP, MQTT (and JMS for Java) plug
                | easily into most modern web frameworks, so you can be
                | as event-driven as you want (which in my experience is
                | overhead relative to plain HTTP/REST that needs to be
                | justified).
        
               | ShroudedNight wrote:
                | > Statelessness is really a prerequisite for proper
                | horizontal scaling
               | 
                | I could understand making this claim w.r.t.
                | idempotence. Claiming statelessness as a requirement
                | strikes me as non-trivial enough to require some
                | evidence or a citation thereto.
        
               | namaria wrote:
               | >stateless and event-driven and therefore modular
               | 
               | Non-sequitur
        
               | jpgvm wrote:
                | State can be part of it. For example it can be way
                | simpler to use an in-memory cache than a network-
                | attached cache, and by doing so you save yourself a
                | dependency, a potential network request that can
                | fail, etc.
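                | 
                | e.g. the whole "cache tier" can be a toy like this
                | (TypeScript, purely illustrative):
                | 
                |     type Entry = { value: unknown; expires: number };
                |     const cache = new Map<string, Entry>();
                | 
                |     function set(key: string, value: unknown,
                |                  ttlMs: number): void {
                |       cache.set(key,
                |         { value, expires: Date.now() + ttlMs });
                |     }
                | 
                |     function get(key: string): unknown {
                |       const hit = cache.get(key);
                |       if (!hit || Date.now() > hit.expires) {
                |         cache.delete(key); // expired or missing
                |         return undefined;
                |       }
                |       return hit.value;
                |     }
                | 
                | No extra dependency, no network hop - the tradeoff
                | is that the state dies with the process.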
               | 
                | Also, for event processing, consider the case where
                | you need to process events in order. That is a ton
                | easier with stateful consumers, batching, etc., which
                | while possible in serverless is usually more
                | difficult / off the beaten path.
               | 
               | However by spaghetti I'm more referring to the
               | architecture of your application being turned into
                | networked components, i.e. a distributed system.
               | 
                | It's pretty much an accepted fact now that distributed
                | systems are by far the most challenging area of both
                | computer science and software engineering.
               | 
               | Serverless doesn't technically -mandate- this, you could
               | put all your code in a giant function. However it
                | strongly incentivises you not to: a) generally its
                | deployment times and cold start issues get worse with
                | function size, and b) it pushes you to adopt other
                | "building blocks" which are themselves network
                | attached.
               | 
               | Essentially it leads people into a massively distributed
               | micro-service architecture right out of the gate. For
               | most applications this is definitely the wrong choice
                | even if you have the folks on staff that specialise in
                | distributed systems; ironically, those folks are
                | usually the last to advocate for anything distributed.
               | 
                | Getting into the weeds as to why this is hard and all
               | the problems you can run into is probably a bit much for
               | a single HN comment but if you really are curious I
                | suggest reading Leslie Lamport and John Ousterhout.
                | Both have covered the problem space in depth; Leslie's
                | "Time, Clocks, and the Ordering of Events in a
                | Distributed System" in particular is foundational.
        
               | scarface74 wrote:
                | > Also, for event processing, consider the case where
                | you need to process events in order. That is a ton
                | easier with stateful consumers, batching, etc., which
                | while possible in serverless is usually more
                | difficult / off the beaten path.
               | 
               | By off the beaten path you mean a few lines of yaml to
               | configure a FIFO SQS queue and have it trigger a Lambda?
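                | 
                | In CDK TypeScript terms, roughly (names made up, and
                | this sits inside a Stack constructor):
                | 
                |     import { Duration } from "aws-cdk-lib";
                |     import * as sqs from "aws-cdk-lib/aws-sqs";
                |     import * as lambda from "aws-cdk-lib/aws-lambda";
                |     import { SqsEventSource } from
                |       "aws-cdk-lib/aws-lambda-event-sources";
                | 
                |     const queue = new sqs.Queue(this, "Events", {
                |       fifo: true, // ordered per message group
                |       contentBasedDeduplication: true,
                |       visibilityTimeout: Duration.seconds(60),
                |     });
                |     const fn = new lambda.Function(this, "Consumer", {
                |       runtime: lambda.Runtime.NODEJS_18_X,
                |       handler: "index.handler",
                |       code: lambda.Code.fromAsset("dist"),
                |     });
                |     fn.addEventSource(
                |       new SqsEventSource(queue, { batchSize: 10 }));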
        
               | eternalban wrote:
               | (quite impressed by the sheer number of comments you have
               | made in this thread, btw)
               | 
                | > For example it can be way simpler to use an
                | in-memory cache than a network-attached cache, and by
                | doing so you save yourself a dependency, a potential
                | network request that can fail, etc.
               | 
                | Elsewhere you decided against using Cloudflare's KV
                | because it was a poor fit for "cache invalidation" --
                | "way simpler"? -- and then it of course turns out you
                | really don't know if it's a bad fit.
               | 
               | > by spaghetti I'm more referring to the architecture of
               | your application being turned into networked components,
                | i.e. a distributed system.
               | 
                | That is a novel definition of "spaghetti". It's a
                | pejorative term in the software sense. Why?
               | 
               | > It's pretty much accepted fact now that distributed
               | systems are by far the most challenging area of both
               | computer science and software engineering.
               | 
                | Ahem. So is caching and cache invalidation. But that
                | didn't stop you, so exactly how is this line you seem
                | to draw with such vocal authority between "bad hard"
                | and "good hard" _informed technically_?
        
               | jpgvm wrote:
                | > Elsewhere you decided against using Cloudflare's KV
                | because it was a poor fit for "cache invalidation" --
                | "way simpler"? -- and then it of course turns out you
                | really don't know if it's a bad fit.
               | 
               | I evaluated it at the time. I don't have the context
               | loaded to remember exactly why it didn't work for my use-
                | case but I'm going to look again because CF's platform
               | moves pretty fast and perhaps it could be simpler than
               | what we are currently doing.
               | 
                | > That is a novel definition of "spaghetti". It's a
                | pejorative term in the software sense. Why?
               | 
               | Elsewhere it's been coined Lambda Pinball, perhaps you
               | find that a more pleasing description?
               | 
                | > Ahem. So is caching and cache invalidation. But that
                | didn't stop you, so exactly how is this line you seem
                | to draw with such vocal authority between "bad hard"
                | and "good hard" informed technically?
               | 
                | Yes, both of which are generally necessary in
                | distributed systems. I.e. not only is it a hard
                | problem in and of itself, it requires solving
                | sub-problems which are themselves some of the hardest
                | problems.
               | 
               | I think you may have made my points for me.
        
               | _gabe_ wrote:
                | > That is a novel definition of "spaghetti". It's a
                | pejorative term in the software sense. Why?
               | 
               | I don't necessarily agree with everything OP is saying,
               | and I certainly don't speak for them, but I completely
               | understand what they mean by a big ball of microservice
                | spaghetti. The spaghetti is distributed across a
                | massive, literal network of dependencies instead of
                | across a giant graph of object dependencies, or goto
                | statements, or something. The spaghetti is real and
               | just as messy in the microservice world as the monolith
               | world. I should know, we inherited a massive one.
               | 
               | But this shouldn't be too much of a revelation since
               | there is no silver bullet in taming software
               | architecture. Architecting software is very subjective,
               | and more often than not we make the wrong decisions and
               | have to untangle the mistakes further down the road.
        
               | eternalban wrote:
               | > I completely understand what they mean by a big ball of
               | microservice spaghetti
               | 
               | Sure, I understand that too, but that's not what was
               | said. Here it is again:
               | 
               | > by spaghetti I'm more referring to the architecture of
               | your application being turned into networked components,
                | i.e. a distributed system.
        
               | ricardobeat wrote:
               | Modular, maybe (though sometimes it's just introducing a
               | network hop in between dependencies). Event-driven being
               | easy to reason about? No way, never.
        
               | erikpukinskis wrote:
               | Yah, I was confused by that too. Out of:
               | 
               | 1) event-driven
               | 
               | 2) procedural
               | 
               | 3) functional
               | 
               | 4) inheritance
               | 
               | I would say event-driven is tied for last as the hardest
               | to reason about.
        
               | emodendroket wrote:
               | I'd say it comes down to your choice of events. I also
               | find these systems very easy to reason about.
        
               | tharkun__ wrote:
                | Instead of your IDE very easily telling you, at a
                | click, the callers or callees of things, you'd rather
                | have events where you have no idea who might be
                | processing or sending a particular event in your
                | system of hundreds of microservices?
               | 
               | Yeah, totally easy!
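                | 
                | A toy in-process version of the problem (event name
                | made up):
                | 
                |     import { EventEmitter } from "node:events";
                | 
                |     const bus = new EventEmitter();
                | 
                |     // Findable only by grepping for the string
                |     // "order.created"; no caller/callee edge for
                |     // the IDE to follow.
                |     bus.on("order.created", (order: { id: number }) => {
                |       /* handle it */
                |     });
                | 
                |     // Somewhere else entirely:
                |     bus.emit("order.created", { id: 42 });
                | 
                | Now spread the .on() and .emit() across hundreds of
                | services with a queue in the middle.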
        
               | ShroudedNight wrote:
               | I think you're conflating problems here. IDLs allow
               | interface definitions for interactions that cross compute
               | and administrative boundaries, and event-driven
               | architectures do not require compute distribution
               | (Erlang/beam).
               | 
               | Different people have different strengths and anxieties.
               | The suggestion that, with some regularity, the lack of
               | (local) ambiguity of event message arrival may be of
               | higher leverage relative to complexity than increasing
               | the surface of an invokable interface does not seem
               | beyond the pale.
               | 
               | Ultimately relinquishing a total temporal ordering is
               | going to be painful regardless of whether it's local or
               | distributed.
        
             | stickfigure wrote:
             | > I make money from porting people off of poor technical
             | choices
             | 
             | I make money by delivering features my customers care
             | about.
             | 
             | We tried k8s, it was a ton of mental effort and time wasted
             | when we could have been building more features. That
             | particular bit of infrastructure is currently running in
              | Elastic Beanstalk, where it's cheap and I don't have to
             | think about it.
        
             | grogenaut wrote:
              | I remember Beanstalk. We used it for years at Twitch. We
              | got a shit ton of mileage out of it. It's easy to talk
              | shit about the problems of the old tech, and Beanstalk
              | has a lot
             | of issues. But it also let us build a ton of stuff much
             | easier than the alternatives.
             | 
             | This cycle will forever be this way. You only think that
             | the new stuff is the best if this is your first rodeo. If
             | it's not, it's just the right or wrong tradeoff for the
             | point in time you are at.
        
             | golemiprague wrote:
             | [dead]
        
           | stcroixx wrote:
           | I agree with him that serverless should not be the default
           | and there should be a clear justification for needing those
           | features.
           | 
            | He then fails to apply the same rule to k8s. Almost
            | nobody needs that either, and treating it as the default
            | is not a good strategy.
        
           | echelon wrote:
           | > poor engineering leadership.
           | 
           | Or fantastic engineering leadership.
           | 
           | OP has constrained the technology choices of the org into a
           | very capable set of primitives. The entire org can move
           | faster by sitting atop the same rails.
           | 
           | K8s + tried and true tech lets the org spin up additional
           | compute anywhere and easily migrate off a particular vendor.
           | Lift and shift is work, but not nearly as bad when the rails
           | are simple.
        
             | jpgvm wrote:
             | Having done the dreaded AWS + kops -> GKE migration I must
             | agree, it was hard but nowhere near as hard as it could
             | have been.
        
             | nostrebored wrote:
             | > simple
             | 
             | > k8s
             | 
             | Pick one
        
               | jpgvm wrote:
               | Eh. There are different types of complexity.
               | 
               | k8s has a nice contained local complexity that I prefer.
               | 
               | It's not simple internally but it's simple to use if you
               | know how it works.
               | 
               | So really it's not "k8s isn't simple" it's more "nothing
               | is simple" so pick your preferred type of complexity.
        
           | mk89 wrote:
           | > Actually your comment is a great demonstration of poor
           | engineering leadership. You've already decided on an outcome
           | and are shifting the narrative to adhere to this outcome.
           | 
            | For me it's actually the decision of a CTO of one of the
            | top 10 companies in the world in terms of software
            | size/value/etc. to push it down to very smart people -
            | that's bad leadership.
           | 
            | Why? Because I guess this thing didn't go as planned - a
            | self-fulfilling prophecy meant to show how awesome
            | serverless is. It backfired in a period where people are
            | questioning whether all these mega-hypes are actually
            | just marketing. Yes, he is responsible for that, for
            | pushing down to his people that "everything new must be
            | started with serverless" first, when actually in some
            | situations, as the article showed, a monolith or a (big)
            | microservice would do the job better.
           | 
           | Why the hell hire super smart people when you have to tell
           | them how to do things?
        
             | ec109685 wrote:
             | Those smart engineers should be encouraged to devote their
             | brainpower to other aspects of the business and technology
             | stack.
             | 
             | Far too often teams use an opportunity to build something
              | new as a science project to use all the latest whiz-bang
             | cloud features when the boring stack would work just fine.
             | 
             | For those that truly want to innovate on that aspect of the
             | stack, they should be part of a central infrastructure
             | team, whose job it is to build scalable solutions for the
             | group.
        
             | hyperhopper wrote:
             | > Why the hell hire super smart people when you have to
             | tell them how to do things?
             | 
             | A very hard question. One side is that every smart engineer
             | will do what's best for their use case.
             | 
              | The other is that now you might have 100 smart engineers
              | with 100 different smart solutions, which can cause
              | repeated work, integration issues, missed common
              | solutions, etc.
             | 
              | Both approaches have lots of pros, cons, and tradeoffs.
        
               | jpgvm wrote:
               | Generally the best approach there is to elevate a bunch
               | of your smartest engineers into an architecture team that
                | a) sets good standards but also b) appreciates the nuance
               | of problems sufficiently to know when to deviate.
               | 
               | A small team can probably only handle one tech stack, an
               | org with 10 teams can probably handle 10 stacks if they
               | needed to but can get efficiency by standardising on some
               | number less than 10.
               | 
                | However, making those calls shouldn't be coming down
                | from the CTO; they should be made among the architects
                | that actually design and build the systems.
        
               | phil-m wrote:
                | Whereas I'd consider a (good) CTO to be somehow
                | responsible for the high-level architectural
                | decisions: accumulate enough expertise and know-how
                | (via "architects" or rather very senior engineers),
                | then discuss the pros/cons and at the end weigh each
                | argument and decide what may work best.
        
           | AndrewKemendo wrote:
          | Fully agree with this - it completely ignores the context
          | and domain for a one-size-fits-all solution, which is
          | precisely what is wrong with engineering today - lack of
          | holism.
           | 
           | The OP's comment "Also generally the conception that building
           | things with containers + k8s actually takes more time than
           | serverless is a horseshit take IMO. It doesn't and if it's
           | taking you longer you are probably doing something wrong."
           | 
          | I couldn't disagree with that more from a hands-on,
          | practical perspective. That will 100% take anyone longer
          | (and cost more) to spin up for an MVP than spinning up a
          | FreeBSD server, and I'm unaware of anything that needs more
          | than that for an MVP.
           | 
           | This isn't even mentioning the fact that every second your
          | data sits in an AWS/GCP/Azure datacenter - you are not only
           | telling them what kind of data you have (don't pretend like
           | the hosts don't know and can't tell) but you're increasing
           | your switching/egress costs
        
             | maccard wrote:
              | > I couldn't disagree with that more from a hands-on,
              | practical perspective. That will 100% take anyone longer
              | (and cost more) to spin up for an MVP than spinning up a
              | FreeBSD server, and I'm unaware of anything that needs
              | more than that for an MVP.
             | 
             | I agree with OP. If you hand me a binary and instructions
             | on how to run it, I can have it running on either AWS or DO
             | in a container, connected to a managed database with
             | snapshots enabled in about 30 minutes. If the toolchain is
             | easy to set up, I can have CI/CD with 0-downtime
             | deployments with auto-rollback in another 30 minutes, and
             | you won't need to touch either of those until you're
             | hitting thousands of concurrent users for a typical web
             | app.
             | 
             | > you are not only telling them what kind of data you have
             | (don't pretend like the hosts don't know and can't tell)
             | 
              | This is FUD of the nth degree - if you suspect AWS are
             | spying on your managed RDS data, I don't know what to tell
             | you.
        
               | AndrewKemendo wrote:
                | Great... So you quickly put yourself in a position
                | where the longer you're on the service, the more
                | expensive it will be to migrate.
               | 
                | And yes, Amazon uses AWS utilization to keep track of
                | promising startups, or of sectors where they want
                | signal about market changes.
        
               | maccard wrote:
                | > Great... So you quickly put yourself in a position
                | where the longer you're on the service, the more
                | expensive it will be to migrate.
               | 
               | With 30 minutes work, I've set up a production quality
               | solution that I won't have to touch for months/years, and
               | has almost no vendor lock-in. Containers are portable,
               | you can run them on AWS, DO, GCP, or install docker on a
               | bare metal instance, it's up to you. Until you _do_
               | decide to migrate, it's pretty much 0 upkeep (I checked
               | one of the work projects I stood up and we haven't
               | touched the infrastructure side of it for 9 months).
               | 
                | > And yes, Amazon uses AWS utilization to keep track
                | of promising startups, or of sectors where they want
                | signal about market changes.
               | 
               | That's not what you said at first, you said "you are not
               | only telling them what kind of data you have (don't
               | pretend like the hosts don't know and can't tell)".
               | You're telling AWS that you're running containers and a
               | database.
        
               | scarface74 wrote:
               | Yes, and I'm sure after you set things up with "no vendor
               | lock in", you came back years later when the company grew
               | and organized a large migration where you had to deal
               | with budgets, transferring data, regression testing,
               | going through architecture review boards, the PMO,
               | permissions, compliance, training, etc.
               | 
                | I'm also sure that after many years no vendor-specific
                | corner cases snuck in that you didn't realize, and
                | that you did all of that with no regressions.
        
         | hn_throwaway_99 wrote:
         | Your comment highlights that you really don't understand what
         | "serverless" is. We run a monolith running on App Engine
         | "NodeJS Flexible". All of our code is basically a single Node
         | monolith, and under the covers many of GCP's newer serverless
         | options (like Cloud Run and Cloud Functions V2) run on the same
          | infrastructure - it's basically just containers, and they even
         | offer versions that let you simply deploy any Docker
         | containerized app.
         | 
         | But compared to the maintenance burden I've seen with other
         | similarly sized startups that use k8s, we just spend way, way
         | less time on infrastructure management.
         | 
         | On a related note, this kind of language, "The only world where
         | I can make a good case for serverless for small teams is if you
         | have said poor leadership and serverless helps you escape their
         | poor decision making." which is basically "if you don't agree
          | with me you're an idiot", says a lot more about the speaker
          | than the speaker usually realizes, and it's a huge, giant
          | red flag.
        
           | jpgvm wrote:
           | Yeah I am familiar with those (Cloud Run, Fargate et al) and
           | they are much less egregious than Lambda and Cloud Functions.
            | As for whether they qualify as serverless, I don't know;
            | the term seems poorly defined.
           | 
           | How about I clarify that my stance mostly relates to
           | Functions as a Service so it's clear that while I wouldn't
           | choose Cloud Run over GKE that is hugely different to FaaS.
        
             | maccard wrote:
             | I think Cloud Run, Fargate and Azure Container Apps all
             | fall more into the category of "platform-as-a-service"
             | rather than "function-as-a-service", in the same way that
              | EKS or GKE could be categorised as a PaaS.
             | 
             | EDIT: I also think they're _excellent_ to use for
              | development. The migration from ECS + Fargate to EKS +
              | EC2 is not difficult from the application side, and can
              | be done relatively piecemeal, in my experience.
        
       | AndrewKemendo wrote:
        | Look, the point is simple: AWS marketing says you can do
        | everything, all the time, with microservices and their
        | 'DevOps' infrastructure that will "abstract away" all the
        | complex - you know, engineering - work that you cannot
        | automate effectively while keeping a long-running, robust
        | system.
       | 
        | So it's humorous to see an Amazon team that is doing things
        | correctly and holistically _in contrast to the sales and
        | marketing bullshit_ that AWS and Cloud evangelists spew.
       | 
       | Us nerds have a great way of missing the point and bikeshedding
       | on the technicals (which is also great sometimes!)
        
         | namaria wrote:
         | It's amazing how cloud companies managed to convince so many
         | people in software engineering that software engineering wasn't
         | a core competence in producing software.
        
       ___________________________________________________________________
       (page generated 2023-05-07 23:00 UTC)