[HN Gopher] The Architecture Behind a One-Person Tech Startup (2...
___________________________________________________________________
The Architecture Behind a One-Person Tech Startup (2021)
Author : thunderbong
Score : 211 points
Date : 2024-06-16 01:51 UTC (21 hours ago)
(HTM) web link (anthonynsimon.com)
(TXT) w3m dump (anthonynsimon.com)
| mitchbob wrote:
| (2021). Discussion of this when it was first submitted (320
| comments): https://news.ycombinator.com/item?id=26737771
| extr wrote:
| Would be interested to know which parts have better solutions as
| of 2024. But still feels pretty relevant.
| _puk wrote:
| If you click the link, the original SaaS (Panel Bear) is now "a
| part of" cronitor.io.
|
| Wonder if they kept it as is, and how many people manage it
| nowadays.
| amzans wrote:
| Hey I'm the author of the post.
|
| We ran Panelbear for about a year before merging it into
| Cronitor's codebase.
|
| Cronitor is also a Django monolith, but on EC2, and had already
| been humming along nicely for several years through frequent
| traffic spikes.
|
| We had no reliability issues with K8s, but it came down to
| this: as a small team, we decided to have fewer moving parts.
|
| The new setup is not too different from what I describe here:
| https://anthonynsimon.com/blog/kamal-deploy/
| esafak wrote:
| This is pretty sophisticated for one person.
| swagasaurus-rex wrote:
| Fairly simple and explainable for an organization
| EuAndreh wrote:
| s/sophisticated/complex/
|
| FTFY
| welanes wrote:
| Things are much easier for one-person startups these days--it's a
| gift.
|
| I remember building a todo app as my first SaaS project, and
| choosing something called Stormpath for authentication. It
| subsequently shut down, forcing me to do a last-minute migration
| from a hostel in Japan using Nitrous Cloud IDE (which also shut
| down). Just pain upon pain.[1]
|
| Now, you can just pick a full-stack cloud service and run with
| it. My latest SaaS[2] is built on Google Cloud, so
| authentication, Cloud Functions, Docker containers, logging,
| etc. come straight out of the box.
|
| Not to mention, modern JavaScript and CSS are finally good. With
| so many fewer headaches, it's a great time to build.
|
| [1] Admittedly, I was new to software dev and made some rather
| poor tech choices
|
| [2] https://simplescraper.io
| robertlagrant wrote:
| How do you find the costs involved with a cloud service like
| that? I remember getting bitten once by a bill from Azure
| because a service went wild with logging.
| joshstrange wrote:
| Also a solo founder here. For myself, you just have to watch
| costs like a hawk when you make any big changes. AWS and
| friends have calculators, but there is only so much you can
| estimate, and it's hard to know the usage patterns till
| something is live.
|
| I'm lucky that my work is event-based, as it is used by in-
| person live events, so my usage comes in waves (pre-sales a
| month or two out, steady traffic the week leading up to the
| event, and high traffic the day before or week/day of the
| event). This means that at worst I only have to ride out the
| current "wave" and then I have some amount of time before the
| next event (which gives me an opportunity to fix runaway
| costs).
|
| One of my big runaway costs was when I tried to use something
| like Datadog/NewRelic/Baseline. You work yourself up to the
| cost of the service, make your peace with it (the best you
| can, since it's also hard to estimate), then get hit with AWS
| fees (that none of the providers call out) for things like
| CloudWatch when they are pulling logs/metrics out. I've had
| the CloudWatch bills be 4-6x as expensive as the service
| itself and it's a complete surprise (or was the first time).
| Thankfully AWS refunded it when it happened: I caught it after
| 2 days and had run up a few hundred dollars in that time. I
| could have handled it, but I'm glad they refunded it.
|
| The second runaway cost was Google Maps: once you fall off
| that free tier, the costs accumulate quickly. In just a few
| days I had a couple hundred dollars in fees from that. I
| scrambled to switch to ProtonMaps and took my costs down to a
| couple dollars a month.
| FlyingSnake wrote:
| Came here expecting a tangled mess of disparate frameworks, but
| was pleasantly surprised to see sane choices, with Django as
| the base. It shows why the product got acquired.
| dmwilcox wrote:
| Context: I mostly write Python for a living and run a Django
| app as a major responsibility
|
| I always wonder if Python, auto-scaling, and the cloud change
| the kinds of businesses (and margins) required. Running 60+
| pods of gunicorn for an app incurs lots of overhead, especially
| if one is using small VMs.
|
| I can't count the number of war stories I've heard from SRE
| friends who joined some company and realized they were taking
| over a Django stack for the core of a business that was blowing
| gaskets left and right.
|
| This sort of thing just leaves me wondering if margins have to be
| high to support a slower language, slower framework, and
| complicated deployment model to work around the Python GIL.
|
| Call me jaded, but I just wonder if it's worth the time to
| market to do Python these days versus, say, Go, where you can
| deploy a static binary and use all your CPU cores simply. K8s
| is way less tempting for a "one-process architecture" (minus
| database and maybe nginx) until the system is much larger (a
| couple of systemd units would sort you out instead).
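|
| To be concrete, the usual GIL workaround is just multiple
| worker processes - a minimal gunicorn.conf.py sketch (the
| project name is a placeholder):
|
|       # gunicorn.conf.py: scale worker processes with cores so
|       # a multi-core VM gets fully used despite the GIL
|       import multiprocessing
|
|       bind = "0.0.0.0:8000"
|       # commonly cited rule of thumb: 2 * cores + 1 workers
|       workers = multiprocessing.cpu_count() * 2 + 1
|       wsgi_app = "myproject.wsgi:application"  # placeholder
|
| Each worker is a separate OS process, so whether that's 8
| processes on one VM or 60+ small pods is a packing decision,
| not a language one.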
| lifeisstillgood wrote:
| I'm not sure I understand - Python is "just" a process once you
| kick it off; want to run 8 processes on 8 CPUs, kick off 8
| processes.
|
| Are you saying that to run these 8 processes you suddenly need
| 8 VMs running in the cloud - yeah. That does seem expensive.
| It's very tempting to say one should not start in the cloud but
| host locally, but that seems an unpopular view.
|
| But maybe as we start to see more and more local workspaces
| catering to people working from home, we will see more and more
| "mini data centres" - it might even be Oxide's sweet spot.
|
| Edit: """
|
| because it actually winds up being less DevOps work, on
| average, to support open-source systems running on bare VMs,
| than to try to keep up with Google's deprecation treadmill
|
| """ https://steve-yegge.medium.com/dear-google-cloud-your-
| deprec...
|
| Mr Yegge makes my point much better ofc
| schrodinger wrote:
| We went through a migration from Ruby on Rails (as a JSON API)
| to Go and it really does make a difference when you need 1/50th
| the servers to run it (don't remember the exact multiple, but
| it really was that dramatic--the concurrency model enabled so
| many more parallel requests per instance compared to Ruby).
|
| It's kinda like using a map reduce cluster when a beefy
| server with a few Linux commands piped together could have
| handled the "big data" needs. Both have their time and place,
| but it's amazing how far a simple setup can go, so don't
| overengineer prematurely.
| p_l wrote:
| Stack Overflow used to claim that the money they spent on
| licensing etc. to run their stack on .NET was well recouped in
| reduced hardware and energy spend, because .NET AOT compilation
| meant they got way more performance per buck.
| CuriouslyC wrote:
| Pro tip: the strangler fig pattern is great for taming those
| oversized Django monoliths. FastAPI is a good replacement if
| you want to stay in Python; it's much faster.
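|
| A minimal sketch of the pattern, assuming the legacy Django
| monolith still listens on localhost:8000 (routes and URLs are
| made up): migrated endpoints get real FastAPI handlers, and
| everything else is proxied through to the monolith:
|
|       import httpx
|       from fastapi import FastAPI, Request, Response
|
|       app = FastAPI()
|       # hypothetical address of the old Django monolith
|       legacy = httpx.AsyncClient(base_url="http://localhost:8000")
|
|       @app.get("/stats")  # already strangled out of the monolith
|       async def stats():
|           return {"pageviews": 0}
|
|       # anything not yet migrated falls through to the old app
|       @app.api_route("/{path:path}",
|                      methods=["GET", "POST", "PUT", "DELETE"])
|       async def proxy(path: str, request: Request):
|           upstream = await legacy.request(
|               request.method, "/" + path,
|               content=await request.body())
|           return Response(upstream.content,
|                           status_code=upstream.status_code)
|
| Over time you move routes above the catch-all until the proxy
| is dead code.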
| schrodinger wrote:
| Totally agreed. Such a fan of Go for this. It obviates just
| about any reason to use docker. You compile a single binary,
| for any platform from any platform, and just... copy it onto a
| server.
|
| Hell there's even first class support to compile assets into it
| if you wanna throw your whole site into the binary, or many
| other use cases that would otherwise make docker nice.
|
| Damn I miss the startup that was a Go build and deploy to 3
| static VMs over my current company's dozens of microservices +
| ecs + kafka + aurora postgres.
|
| Startup A had the servers all sitting around at 2% cpu all day,
| trivially scalable by adding more vms. No downtime over 3
| years.
|
| Startup B had 10x the users, but an elaborate Docker + ECS +
| dozens of microservices (that could have just been libraries,
| avoiding all that complexity) + Aurora Postgres. Downtime all
| the time. The most recent was so hard to detect: a DB server
| running fine at 40% load and our application around the same,
| yet DB timeouts left and right, so it was considered an
| official outage. AWS had throttled the network throughput
| between the DB and app with no warning! We even had an AWS
| support person on the call and it took them 2 hours to figure
| it out.
| williamdclt wrote:
| I don't see how using a Go binary vs dockerized microservices
| is related to database concerns? You need a database either
| way, and the problem you describe doesn't seem related to the
| application level.
| redman25 wrote:
| Web apps are generally IO-bound by the database. Hardly any
| time gets spent in CPU, even in Python. There could be a number
| of things contributing to having to horizontally scale Python
| more than compiled frameworks. Many apps are still stuck on
| non-async frameworks. Many devs are also bad at knowing how not
| to block server processes.
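|
| That last point bites a lot of people. A self-contained
| illustration of the failure mode (pure asyncio, no framework):
|
|       import asyncio, time
|
|       async def bad(n):
|           time.sleep(0.1)  # blocking call freezes the event loop
|
|       async def good(n):
|           loop = asyncio.get_running_loop()
|           # run the blocking call in a thread; the loop stays free
|           await loop.run_in_executor(None, time.sleep, 0.1)
|
|       async def main():
|           for handler in (bad, good):
|               t0 = time.perf_counter()
|               await asyncio.gather(*(handler(i) for i in range(10)))
|               print(handler.__name__,
|                     f"{time.perf_counter() - t0:.2f}s")
|
|       asyncio.run(main())  # bad ~1.0s, good ~0.1-0.2s
|
| One blocking call per handler and your "async" server is back
| to one request at a time.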
| marcosdumay wrote:
| Python can waste 90% or 99% of your ops budget. But auto-
| scaling starts from 99%, and can go all the way to 4 or 5 9s of
| wastage. Of course, if you combine them, you add up the 9s.
|
| Anyway, it's patently obvious that increasing your unit costs
| by 7 orders of magnitude will constrain the kinds of business
| you can run. A few people will deny that, but the cloud is so
| impactful that those are very few nowadays.
|
| What a lot of people will deny is the relative importance of
| those two. If you take your C code from an auto-scaling
| environment and replace it with bare metal Python, you often
| get a few orders of magnitude gain.
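|
| To spell out the "add up the 9s" arithmetic above (illustrative
| numbers only):
|
|       lang_waste = 0.99      # Python wasting 99% -> 100x cost
|       infra_waste = 0.99999  # five 9s of wastage -> 100_000x
|       combined = 1 / (1 - lang_waste) / (1 - infra_waste)
|       print(f"{combined:.0e}")  # 1e+07: the 7 orders of magnitude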
| nostrademons wrote:
| This is sort of like why plumbers hate chemical drain cleaners:
| you call the plumber only on the times that it _doesn't_ work,
| and then pay them handsomely to clean up your mess. For all the
| times that Drano works, you never call the plumber in the first
| place, and so they never see it.
|
| Likewise, you only hire a SRE if a.) you have the budget to
| hire at all, which the vast majority of startups do not and b.)
| reliability is shitty enough that you need to hire someone to
| fix it. All the Django setups that hum along fine do not need
| SREs; the founders continue to rake in the money and don't need
| to hire anyone.
| andrewstuart wrote:
| I closed the page at "Kubernetes", despite the "you may not
| need this", because that was all I needed to hear to know this
| setup has little to do with the ordinary one-person tech
| startup.
| dimitar wrote:
| I would normally avoid recommending k8s, but perhaps the
| founder was already pretty comfortable with k8s from a previous
| job and knows what to avoid.
|
| There are no production DBs in k8s, no service meshes, and
| monitoring is only sidecars that send data to an off-cluster
| backend.
|
| The automatic deployment with Flux seems pretty nice!
| p_l wrote:
| Having learnt k8s once, I found that it greatly simplifies
| things for me when I want to run new projects, even if it's
| single-server only (there I use k3s).
|
| Especially if there's a chance I need to run more than one
| client, or environment, on a given server, which is more
| important for a "solo" enterprise than for a bigger company,
| because I do not have the money or time to take care of
| multiple servers.
| KronisLV wrote:
| I guess some people are just very used to it nowadays, hence it
| will work pretty well for them - same as how someone might have
| 10+ years of experience in Java and pick it for a project and
| use it successfully, whereas someone else could find the pace
| of development in it slower than the alternatives (despite how
| decent it is nowadays).
|
| Personally, I think that containers are a good choice for most
| webdev projects.
|
| If you are just starting out, then Docker Compose is probably
| very much sufficient for anything that touches upon a single
| node.
|
| Realistically, once you need to move past that, I feel like
| most folks would be served just fine by something like Docker
| Swarm + Portainer. You'd still need to run your own web server
| in front of your containers if you want something similar to an
| ingress, but in my eyes that's a plus, given how simple the
| config is (if you've ever installed Apache/Nginx/Caddy, it's
| very much like that), or maybe go with Traefik if you must.
| There's scaling, resource reservations and limits, overlay
| networking between nodes, port mapping, storage, restart
| policies, configuration and pretty much _most_ of the things
| you might reasonably need. Something like Hashicorp Nomad could
| also be mentioned here, but Docker Swarm seems like the simpler
| option to me.
|
| Past that, it's not like you can't run a cut-down version of
| Kubernetes that decreases its surface area somewhat - K0s, K3s
| and others all try to do this. I personally rather like K3s; it
| also integrates with Portainer/Rancher nicely if you need a
| dashboard, and you can take advantage of Helm charts and all
| that other good stuff. I found the choice of K3s using Traefik
| by default (when I last used it) a bit suspect (configuring a
| custom non-ACME wildcard cert as the default for the ingress
| wasn't exactly well documented), but overall it was an okay
| experience and the resource usage wasn't anything crazy either.
|
| For my personal stuff nowadays, I still run a Docker Swarm
| cluster and I've no complaints there. It doesn't have
| autoscaling out of the box, but then again, my workloads are
| too boring to need that and I enjoy the predictability in
| billing I get.
| udev4096 wrote:
| Well, let's hear it. Is it because you heard somewhere that
| "k8s is overwhelming and unnecessary for everyone" or do you
| actually have some constructive argument for not using k8s?
| EuAndreh wrote:
| Not GP, but nothing wrong with kubernetes per se. It is
| simply unnecessary.
|
| You can get to gigantic success and extraordinary scale by
| growing vertically, without needing to add the very complex
| layer that Kubernetes is.
|
| Think of the many literal millions of lines of code, and
| thus, potential bugs and liabilities one inherits by "just
| using kubernetes".
|
| It's not a requirement, nor is it problematic. It's simply
| unnecessary for anything remotely resembling a "one-person
| tech startup".
|
| Besides unnecessary features like horizontal scaling, necessary
| features like restarting a service already exist, and most
| likely are already installed in the base distro and being used
| by the rest of the system. It's not like this couldn't be done
| before; all Kubernetes provides is a convenient and streamlined
| package for all that, at the cost of black-hole levels of extra
| complexity via several added indirections.
| andrewstuart wrote:
| >> do you actually have some constructive argument for not
| using k8s
|
| It adds vast complexity and is not needed.
| danjl wrote:
| Assuming that "ordinary" means the same thing to everyone is a
| bit of a generalization.
| dakiol wrote:
| I wish they had explained the deployment/release part via CI/CD
| in more depth.
| wiradikusuma wrote:
| My backend setup for my one-person startup ("case study" for my
| book, https://opinionatedlaunch.com) is GitHub (for code repo
| and CI/CD), Firebase (auth, analytics, push notifications),
| CloudFlare (static pages, DNS stuff), and Fly.io. Less work but
| more done.
|
| My only complaint is that Fly.io doesn't have a managed DB
| (https://fly.io/docs/postgres/getting-started/what-you-should...)
| or a task queue.
|
| At work we use GCP, but try to be as high as possible in the
| abstraction layer (e.g. use Cloud Run and Cloud SQL).
| vindex10 wrote:
| I was able to use managed Postgres ~a year ago on Fly.io. They
| seem to have offloaded its maintenance to Supabase, but it is
| still seamlessly interfaced with Fly.io.
|
| Moreover, they still allow 1 free db per account:
|
| > Supabase offers one free, resource-limited database per
| Fly.io user
|
| Are you concerned that it is not directly managed by Fly.io?
|
| * also Kafka: https://fly.io/docs/reference/kafka/
| qeternity wrote:
| I see a lot of comments to the effect that this is a complicated
| setup, especially for one person. I really think most people need
| to re-evaluate their stacks without all of the marketing of cloud
| providers and gigascalers.
|
| This setup is very similar to the startup that I run. We have
| used k8s from day one, despite plenty of warnings when we did,
| and it has been rock solid for 3 years. We also decided to run
| Postgres and Redis on k8s using Patroni + Sentinel and the LVM
| storage class. This means we get highly performant local
| storage on each node and we push HA down to the application.
|
| Was there a learning curve? Yes, it took a solid week to figure
| out what we were doing. And we've had the odd issue here and
| there, but our uptime has exceeded what we would have had with a
| cookie cutter AWS setup, and our costs have been a fraction
| (including human capital costs).
| malux85 wrote:
| I am the solo builder and runner of https://atomictesselator.com
| and I have used k8s from day one. I also agree with you: it's
| been rock solid, and the ability to horizontally scale so
| easily is great. I don't even know which physical machines in
| my rack the jobs/pods are running on; I just have a high-level
| dashboard (Lens) and I can see the CPU+GPU utilization hit 100%
| as everything self-balances. It's great, and it has been very
| helpful scaling my GPU workload, which has increased a lot
| recently.
| FlyingSnake wrote:
| Looks like the setup didn't survive a death hug from HN; the
| site's not loading for me.
| azornathogron wrote:
| It's just a typo in the URL. It should be
| https://atomictessellator.com (two Ls)
| marcinzm wrote:
| Same. The simple fact that every PR can deploy a full stack,
| including RDS and managed Redis if desired, automatically in
| its own namespace with proper DNS pointing to services is a
| massive win for us. As in, if you label the PR then it all
| happens automatically, and it all shuts down automatically when
| the PR is closed.
| williamdclt wrote:
| Do you have any resources on how to set that up? I'm
| interested!
| marcinzm wrote:
| Should make a blog post but in short:
|
| - Every service is deployed via a Helm chart and using
| containers
|
| - GitHub actions build the container and deploy the helm
| chart
|
| Some of the details that matter:
|
| - ACK is used to create AWS services (RDS, Redis, etc.) via
| Helm charts (we also have a container option for helm
| charts as it's faster and less expensive)
|
| - External Secrets is used to create secrets in the new
| namespace and also do things like generate RDS passwords
|
| - ExternalDNS creates DNS entries in Route53 from the
| Ingress objects
|
| - Namespace name is generated automatically from the branch
| name
|
| - Docker images use the git hash for the tag
|
| Some things that are choices:
|
| - Monorepo although each service aims to be as self-
| contained as possible.
|
| - Docker context is the git root, as this allows a service to
| include shared libraries from the repo when creating a
| container. This is for the cases where we break the previous
| rule.
| CuriouslyC wrote:
| K8s is overkill though. Google Cloud Run gives you the good
| bits with less work, and it's still reasonably priced. Azure
| Container Apps in theory should be similarly good, but it's
| kind of buggy and poorly engineered.
| qeternity wrote:
| Why do you say it's overkill? Why do people think k8s (or
| k3s) is some impossible monster?
|
| Have you deployed and managed k8s before? What did you find
| more complicated than learning the ins and outs of the cloud
| variants?
| CuriouslyC wrote:
| I'm not saying it's an impossible monster, but I can set up
| and deploy a project from scratch with Cloud Run in ~5 gcloud
| CLI commands while being somewhat protected from my own
| ignorance.
| mitjam wrote:
| Caution: Working with CloudRun with ignorance on any
| level can lead to very large bills:
| https://news.ycombinator.com/item?id=25372336 - The
| article discussed there is a good cautionary tale:
| https://archive.is/D5qc8 - I prefer to explore and
| experiment on a less scalable solution which is usually
| also less scalable in terms of costs.
| dinkleberg wrote:
| In my experience, people who think k8s is really hard to use
| are either: a) someone who had k8s forced on them without
| getting a chance to learn it (or had no interest in the first
| place) and was stuck with its complexity, or b) someone who has
| never actually used k8s and is just parroting what they've
| heard.
|
| The former situation does suck. K8s is amazing and once you
| understand it, it is easy to work with. But if you haven't
| learned the concepts and core resources, it will definitely
| appear as a black box with a ton of complexity.
|
| But I think for many of us who have used Kubernetes a lot, it
| is a no-brainer in a lot of situations: it doesn't matter which
| cloud provider you're using (for the most part), you get a
| common and familiar interface.
| qeternity wrote:
| I usually find that it's B. The number of professional
| programmers who don't have the slightest clue how a
| database actually works, much less the ability to just
| spin up a container containing one, is shocking.
| dinkleberg wrote:
| Sadly I think that extends to most people about most
| things. I'm certainly guilty of stating something I've
| heard but don't actually know from time to time (though I
| do make a conscious effort not to).
|
| It is funny (or depressing, depending on your point of
| view) to catch people doing this by asking them when was
| the last time they did X or how they know whatever
| they've stated and then seeing the gears turn.
| EuAndreh wrote:
| From the fact that the LoC count of the implementation is
| beyond "millions".
|
| How about this equivalence: I appreciate how extremely
| sophisticated GCC is, and the very well optimized output it
| generates. It is still 100x more complex internally (and thus
| more error-prone, buggy, and harder to modify when needed) than
| TinyCC, for instance.
| qeternity wrote:
| Um, ok. I think you're massively underestimating the LoC
| of the services mentioned (I would bet significant sums
| that cumulatively it would exceed a basic k8s deploy).
|
| I'm also not really sure what the relevance of LoC is here,
| though? The Linux kernel is a large codebase... but surely you
| don't object to using that?
| victor106 wrote:
| > Azure container apps in theory should be similarly good,
| but it's kind of buggy and poorly engineered.
|
| 100% agree.
|
| At a mega corp we were mandated to use Azure (worst decision)
| and we chose Azure Container Apps.
|
| Nothing worked: from the deployment ARM templates to
| scalability.
|
| It was a disaster.
|
| This was even with the highest level of support from Azure.
|
| The engineer working with us on this all but admitted it
| shouldn't have been released in its current form.
|
| I can't for the life of me understand how Microsoft releases
| such products!!!
| mitjam wrote:
| CloudRun with Django works well, but for a complete setup,
| you need to configure, manage and pay other services, too,
| like Cloud SQL for PostgreSQL, Memorystore for Redis, API
| Gateway, Observability, Cloud Build, Artifact Registry, Cloud
| Key Management, IAM, Pub/Sub, CDN, and Cloud Storage. When
| you have a live service, you probably also want to pay for
| Google Cloud Customer Care. The same is true for AWS and
| Azure.
|
| Depending on training and experience, it may be easier to set
| up a GCP-based than a Kubernetes-based solution, but it's still
| not exactly trivial. And, once a fitting Kubernetes platform is
| up and running, it's almost trivial to add new services, as the
| article describes.
|
| I think, even in 2024, and even for a one-person business
| like the author's, starting with Kubernetes is not a bad
| decision. I'm currently building my own Kubernetes based
| platform, but on a less expensive service (Hetzner Cloud). I
| use a separate single-node Kubernetes "cluster" with Rancher
| for management, and ArgoCD for continuous deployment from
| git. Currently I only have a single-node "playground" cluster
| up and running for my "playground" workloads, to save costs,
| but I will upgrade this to a proper HA cluster before a
| service goes live.
|
| Later, I can still upgrade to GCP, or in my case probably
| Azure. Until then, I don't need to worry about large cloud
| bills, which is also a good thing.
| FlyingSnake wrote:
| For a startup that was generating revenue and got acquired,
| this is par for the course. What are people expecting instead?
| cj wrote:
| Small secret:
|
| The best tech stack when starting a startup is one you don't have
| to learn.
|
| There's a million things you learn when starting a company.
| Don't make yourself learn an entirely new tech stack on top of
| everything else. This advice means you'll be using whatever
| you've used in the past, which might not be the sexiest or
| newest technology, but your users won't care. Your users want a
| working product. Choose the stack that will result in a working
| product the quickest.
|
| Refactor and migrate to a better stack later, if necessary. (It
| rarely is)
|
| For me, that meant deploying to Elastic Beanstalk (I know,
| boring, no one talks about it, but it works!) and using Mongo
| because I was already comfortable with it. It also meant not
| using React, at first. This was the right answer for me, but
| might be the wrong answer for you! Build your app on the
| technology you know.
| CuriouslyC wrote:
| That's true if you're optimizing for startup success. The odds
| of a startup succeeding are pretty low, though, and tech is
| mostly not the cause of failure. When people learn a new stack
| for a startup, they're optimizing for the startup failing, so
| they can still derive some value out of the failure.
|
| This just comes down to a napkin calculation of expected value
| of building a startup given P(success). If your startup is
| looking like a real business out of the gate, using the simple
| stack you know is the way to go, and if it's more of a hobby
| project that you'd like to monetize eventually, learn something
| new.
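|
| The napkin version, with all the numbers invented for
| illustration:
|
|       p_success = 0.25        # most startups fail
|       payoff = 40.0           # value if the business works
|       learning = 5.0          # what you keep even if it fails
|
|       known_stack = p_success * payoff                   # 10.0
|       # a new stack slows you down -> lower odds of success
|       new_stack = 0.7 * (p_success * payoff) + learning  # 12.0
|
| With numbers like these, the learning term flips the choice;
| with a credible shot at a real business, it doesn't.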
| NameError wrote:
| This lines up with how I make tech stack decisions for my own
| projects. But I think it's not always obvious going into
| something whether it's going to end up being a money-making
| endeavor or just an educational project, so I'll fall somewhere
| in between.
|
| What makes the most sense to me is to be really selective
| about which new technologies to use, and to try to really learn
| ~one thing per project. E.g. my current project is a small
| search engine, and I've spent a lot of time exploring /
| figuring out how to use LLM embedding models and vector
| indices for search relevance (vs. falling back on using
| ElasticSearch the same way we use it at work), but I'm using
| tools that are familiar to me for the UI/db/infrastructure.
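|
| The core of the embedding-relevance part really is tiny - a toy
| sketch, where the hashing "model" just stands in for whichever
| real embedding model you call:
|
|       import numpy as np
|
|       def embed(text: str) -> np.ndarray:
|           # toy stand-in for a real embedding model: hashed bag
|           # of words, unit-normalized so dot == cosine similarity
|           v = np.zeros(256)
|           for tok in text.lower().split():
|               v[hash(tok) % 256] += 1.0
|           n = np.linalg.norm(v)
|           return v / n if n else v
|
|       docs = ["deploying django on kubernetes",
|               "terraform for beginners"]
|       doc_vecs = np.stack([embed(d) for d in docs])
|
|       def search(query: str, k: int = 2):
|           scores = doc_vecs @ embed(query)
|           top = np.argsort(scores)[::-1][:k]
|           return [(docs[i], float(scores[i])) for i in top]
|
| A real model would also match "deploying" to "deployment",
| which the toy hash can't; a vector index only matters once the
| corpus gets large.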
| kugelblitz wrote:
| For my 12 year side project, I recently moved from Elastic
| Beanstalk after 6-7 years to Platform.sh, because "it just
| works" even more so and it was way easier to debug (EB just
| says "error in step 18_install_yarn" or something).
|
| I use Symfony (Php) and have not used a full SPA after I
| retired AngularJS (v1) like 10 years ago. What people now call
| server side rendering (SSR) is just how Symfony works with its
| regular Twig templating language (heavily inspired by Django's
| templating language).
|
| As I gained more experience, I rewrote it. Once from vanilla
| PHP to Laravel, then later to Symfony.
| Bilal_io wrote:
| Hey, I'd be curious to know what made you move from Laravel
| to Symfony.
|
| I've not been exposed to either of the two and have only dealt
| with PHP as part of messing with WordPress in the past.
| throwAGIway wrote:
| Server-side rendering of React applications doesn't just refer
| to the fact that the page is generated on the server like you
| do with PHP.
|
| The main difference is that there are IDs that allow the
| client side code to seamlessly attach event handlers (this is
| called hydration) to the DOM - and that there is no
| difference between server and client side code.
|
| In your case, you'd have to do that manually - a huge
| difference (speaking as someone who used to do that with
| jQuery).
| lopatin wrote:
| ElasticBeanstalk is wonderful. To each his own, but for me it
| was the answer to the question "I agree Kubernetes is overkill
| for me... but then what should I use?".
|
| I certainly didn't want to be manually creating instances,
| installing Java / Node / Docker, starting / stopping services
| and doing deploys by hand, because that's also annoying work :)
| Elastic Beanstalk + the GitHub Actions deploy plugin is dead
| simple and just works.
| eyphka wrote:
| Seconded (and also started on Elastic Beanstalk!)
|
| Has kept surprisingly well over time, with some hiccups around
| platform version changes.
| grogenaut wrote:
| If anyone tells you different: you can take stacks like
| Beanstalk VERY FAR. Probably way further than you want to. We
| just moved our highest TPS (transactions per second) and OPS
| (objects per second) service off of Beanstalk after 7 years. At
| peak this service was serving around 1.1M TPS and around 8M OPS
| with peaks up to 3M TPS and 14M OPS. It was horizontally scaled
| and run by about .25 engineers. That was actually the main
| problem. That, and the company got real good at running ECS.
| Our services are all golang, so Beanstalk eventually was
| getting in the way, and one of our last major issues had to do
| with the nginx proxy on Beanstalk not keeping up with our Go
| service. There are several other edge cases with Beanstalk,
| like how to do health checks right, and the speed of deploys,
| especially when they get wedged. But if you know it you can, as
| we found, go very far with it, powering a core feature of a
| very large web property (check my bio).
|
| Am I happy we moved off? Yes. Was I impressed how far we took a
| very simple "starter" setup? Also yes. Again, one of the
| biggest reasons was that the company's competence was no longer
| on Beanstalk but on ECS, where we had solved the issues we had
| with it, with more complex setups more in our control. If I
| were starting again, would I use Beanstalk? No, not me
| personally, but when we started, ECS didn't exist and Beanstalk
| was better than bare EC2; now I know and am comfortable with
| ECS. Did it help the company? Very much so: we leveraged the
| heck out of Beanstalk to scale the company horizontally and
| vertically.
| nunez wrote:
| Honestly, this is a pretty solid stack.
|
| I wonder how they make time to manage it all _and_ iterate on
| projects _and_ have a life outside of that.
|
| This is the biggest problem I've long struggled with. I don't
| know of a good solution.
| danesparza wrote:
| A solid Kubernetes stack kind of manages itself. This will help
| you have a life. Boundaries will also help (it sounds like you
| need to time-box certain activities). You can do this. Just
| keep trying.
| ACV001 wrote:
| The stack and the technical part are rarely the important
| factor in success. The idea and marketing seem to be the
| important bits. The stack can be the cause of failure, but it
| can never be the reason for the success of a startup.
| danesparza wrote:
| Because you'll be fighting a lot of forces that don't care if
| you fail (as a startup entrepreneur), confidence in your stack
| will help you sleep at night. And this is a good thing.
| enahs-sf wrote:
| I think the nice benefit here is that, because of OP's
| infrastructure experience and process, he can now quickly run
| and test new ideas and projects for a reasonable marginal cost.
| Seems legit. Very clean setup.
| mattbillenstein wrote:
| Guy lost me at k8s and terraform - just put your thing on a
| single VM and yolo.
| oooyay wrote:
| I can understand why you'd say that, but I don't think the
| world's that black and white.
|
| Many people would say a detached TypeScript frontend is less
| simple than SSR. At this point in my career, writing a bunch of
| ad-hoc, per-page JavaScript would be more toilsome than writing
| components. Thus, the prevailing idea of "simple" would be less
| simple for me.
|
| In that same way, if you've spent a large chunk of your career
| in Kubernetes, then _managed_ Kubernetes is probably a lot
| simpler than maintaining an image production pipeline and a
| disjointed Terraform deploy. You'd have to make a choice
| between immutability and non-immutability with VMs as well, and
| design lifecycles for them if you choose immutability. That
| choice is made for you with Kubernetes and has well-established
| patterns.
|
| That's to say, simple is highly subjective. Like another
| comment said, simple is the technology you know best _today_
| (and that you can easily hire for).
| marcosdumay wrote:
| > You'd have to make a choice between immutability and non-
| immutability with VMs as well
|
| I'd like to emphasize the yolo from the GP. It really does not
| imply image maintenance and immutability. (By the way, "non-
| immutability" is called "mutability".)
|
| And no, even if you know Kubernetes by heart, it's still very
| often easier to start the grug way and build ops up only after
| you have something to run on it. (But, of course, there are
| exceptions. There are always exceptions.)
| tracerbulletx wrote:
| k8s is easy once you know it, imo. For my recent project I made
| a DigitalOcean cluster and some GitHub Actions to deploy to it
| in like half an hour, and it would be easy to port to another
| cloud or a self-hosted k8s cluster. It can auto-scale nodes, do
| zero-downtime and blue/green deployments, and has automated
| certificate provisioning; it does everything I need.
| mrj wrote:
| I don't know why this is being downvoted; this is my
| experience too. I don't mess with networking or volumes in
| k8s but it gives me a lot of stuff out of the box for when I
| need it later. It takes a couple of minutes to copy and paste
| the Deployment definition and change a few things.
|
| The trouble with "single vm and yolo" it is tech debt of the
| worst kind. It's not a shortcut you might have to expand on
| someday, it's the kind where whole platform changes are
| necessary. Someday that vm process won't be enough and
| that'll require changing everything about the deploy,
| potentially at a time when big changes aren't desirable. If
| the business is starting to pick up so that we need something
| more reliable, that's the wrong time to want to replatform.
| I'd rather but in a small amount of upfront knowing that I
| won't have a big tech debt to pay back later.
|
| So I keep it real simple, and keep away from parts that I
| know will give me a headache. I use a managed service and
| plan to give it to somebody else when we're big enough to
| have that somebody else. Then they can worry about the parts
| I didn't.
|
| It's wise not to do too much upfront, for sure. But it's wise
| not to back into corners that are hard to undo later, too.
| jonas21 wrote:
| > _Someday that vm process won 't be enough_
|
| For a surprisingly large set of cases, that day will never
| come. I've scaled services to millions of users on a single
| VM. The best debt is the kind you never have to pay off.
| codeisawesome wrote:
| I'd love to read a blog post from successful one-person
| startups in highly competitive spaces like DevOps tooling
| (uptime, analytics...) on how they go to market and
| differentiate.
___________________________________________________________________
(page generated 2024-06-16 23:01 UTC)