[HN Gopher] Take a look at Nomad before jumping on Kubernetes
___________________________________________________________________
Take a look at Nomad before jumping on Kubernetes
Author : sofixa
Score : 133 points
Date : 2021-02-28 09:30 UTC (13 hours ago)
(HTM) web link (atodorov.me)
(TXT) w3m dump (atodorov.me)
| kashif wrote:
| Much better and easier to run/maintain than Kubernetes. I say
| this after using it for 5 years.
| ahachete wrote:
| The absence of CRDs is a significant shortcoming. They enable
| abstraction, composability and reusability. Even if Kubernetes
| may be more complex overall, deploying complex workloads on
| Nomad may become (much) harder than the CRD-powered equivalent
| on Kubernetes.
|
| I'm also not sure if Nomad has an equivalent of the listeners
| that you have on Kubernetes to implement reconciliation loops,
| i.e. the controller part of operators.
| sdotsen wrote:
| I looked into Nomad and my problem is having to run a minimum of
| six machines in a cluster for a few microservices. The argument
| could be made that we wouldn't need container orchestration for a
| few, but if we plan on scaling I don't want to have to deal with
| a fleet of servers. With ECS or EKS, I would only have to worry
| about the worker nodes, so there's less to manage.
| jiveturkey wrote:
| why do you have a microservices environment for "just a few"?
| core-questions wrote:
| You can totally do it on three machines, each running triple
| duty as consul, nomad, and worker machines.
| sofixa wrote:
| Or, if reduced redundancy is acceptable, on one or two (either
| one machine running everything as both server and client, or
| two, with one running the Nomad, Vault and Consul servers and
| the other the clients).
| markbnj wrote:
| We're all-in on kubernetes and cri where I work, but Nomad is
| cool and if we were starting out now we'd certainly take a close
| look at it. We use other Hashicorp tools, terraform for example,
| so we're already familiar with HCL (which I don't necessarily
| prefer over yaml or json for declarative tasks but that's another
| debate). Most of the simplicity argument comes from the single
| server/worker binary and ease of deployment on the ops side,
| which if you use managed k8s like GKE might not mean all that
| much. If you're going to implement and run your own clusters
| that's a bigger deal.
|
| Looking at it from the consumer side and comparing like to like I
| don't feel it's a _lot_ simpler. Task groups and pods, tasks and
| containers, config, networking, storage, it is all different on
| nomad but not, imo, tremendously simpler. That makes sense given
| the nature of the problem the systems are addressing. Over the
| last five years kubernetes has grown horizontally a great deal to
| encompass more and more enterprise features and use cases, and I
| feel in general this is a repeating pattern: a thing is created,
| it works well and becomes popular, more people use it and join
| the community bringing their own use cases, which drives the
| addition of features to the thing, which eventually gets
| complicated enough that people are attracted to a new thing that
| does largely the same stuff but hasn't had a lot of the new
| features added yet. Rinse and repeat.
|
| On a somewhat unrelated note: I'm not very attracted to server-
| side features for tracking and manipulating versions of
| deployments and their history. I'm a fan of git ops, and what I
| want out of the server-side runtime is reliable, fast
| reconciliation of the infrastructure state with the release
| branch in my repo. If I want to roll back I would much rather
| revert in git and redeploy. It seems troublesome and error prone
| to rely on a cli command to force the server side state to a
| previous iteration that is not in sync with what master in the
| repo says should be the canonical release version. Interested in
| others' thoughts, but maybe it's a separate topic.
| justinsaccount wrote:
| I think nomad really needs a k3s equivalent.
|
| The equivalent of k8s is not nomad, it's nomad+consul+vault
|
| If you read through something like kubernetes-the-hard-way most
| of the really tricky parts are setting up the PKI infrastructure
| and bootstrapping etcd.
|
| https://learn.hashicorp.com/tutorials/nomad/security-enable-...
| starts to look a lot like
| https://github.com/kelseyhightower/kubernetes-the-hard-way/b...
|
| Vault is better than k8s secrets, but getting nomad working
| doesn't magically give you a working vault cluster.
| steeleduncan wrote:
| > The equivalent of k8s is not nomad, it's nomad+consul+vault
|
| This was my stumbling block.
|
| Nomad alone is clearly easier than k8s, and if it alone will
| suit your needs then you are in luck, but I didn't find
| installing and configuring 3 separate services any easier than
| k8s alone.
| insanejudge wrote:
| I've never seen a reliable, scaled and coherent k8s operation
| without at least vault, and often consul as well, so that
| argument never held any water for me
| leetrout wrote:
| I love nomad so much. It never really gets a fair shake IMO. The
| world needs both k8s and nomad and I hope it continues to grow
| and see more adoption.
|
| And many years ago we used it as a feature in a product written
| in Go. We could just tap into the APIs directly. It is amazing
| for batch jobs.
| justinko wrote:
| Or ECS for that matter.
| sofixa wrote:
| If you're on AWS and don't intend to move, yep, it's a decent
| option.
| rileymichael wrote:
| If multi-region deployments weren't an enterprise only feature,
| Nomad would be a lot more appealing for me since Kubefed is still
| nowhere near ready.
| jxf wrote:
| The main advantage of Kubernetes isn't really Kubernetes anymore
| -- it's the ecosystem of stuff around it and the innumerable
| vendors you can pay to make your problems someone else's
| problems.
| orthecreedence wrote:
| The main advantage of Nomad is Nomad. It strikes the perfect
| balance between simple and capable, enough so that I don't have
| to pay someone else. You can read the bulk of the docs in a day
| or two and deploy it that afternoon.
|
| Consul/Nomad deployment is easy and operation is a breeze. As
| the sole ops person at my company, Nomad has made it possible
| for me to scale our infrastructure without an entire team
| dedicated to it.
|
| I'm not saying that it's better than K8S, but for a small team
| who wants to focus on _building things_ , it's an amazing time
| saver. Obviously, read the docs and make your own decisions,
| but there is definitely an infrastructure void that Nomad has
| filled perfectly.
| peter_l_downs wrote:
| As a counterpoint, everything you just said about
| Consul/Nomad applies to using hosted K8s like GKE. I moved my
| company's infra to it last year and now we barely think about
| it. We're a small team, I'm the only person doing devops, and
| there are very, very few problems requiring my attention.
|
| I'm skeptical that anything allowing flexibility in this
| space can ever be that simple because the domain is actually
| pretty complicated. Networking, secrets, scaling,
| healthchecking, deployment strategies, containerization --
| you need to know something about all of these before you can
| use Nomad or Kubernetes.
| jxf wrote:
| I think that is the point, though. If you're small and you
| want to focus on building things, it doesn't matter very much
| what it's running on; it's almost always going to be a better
| use of your time to make it someone else's problem.
|
| If you're doing your own bespoke ops or you need to own more
| of the stack, then Nomad is a powerful choice.
| rualca wrote:
| > The main advantage of Nomad is Nomad.
|
| Is it, really? It feels like that is its main disadvantage, in
| the sense that Kubernetes is a first-class citizen offered by
| practically all major cloud providers, while Nomad... is
| anyone actually offering Nomad at all?
| jamesog wrote:
| Managed Kubernetes exists because Kubernetes is so mind-
| bendingly complex.
|
| Nomad, on the other hand, is pretty simple and easy to run,
| and a single binary. There's probably no benefit to having
| a managed offering.
|
| That said, I wouldn't be surprised if HashiCorp add Nomad
| to HashiCorp Cloud Platform, which currently lets you
| deploy Consul and Vault to cloud providers via their
| system.
| jpgvm wrote:
| Managed Kubernetes exists for the same reason managed
| PostgreSQL, MySQL, Redis, ElasticSearch so on and so
| forth exist.
|
| Which is to say it's useful enough to a large amount of
| people to make it viable as a product offering. Nomad would
| fail as a managed offering here, just like Deis, Flynn, Convox
| and probably 100 other container management platforms that
| came before.
|
| As a niche tool to manage sub-1000 boxen it's probably
| ok. But k8s has won the greater war.
|
| Disclaimer: Worked on Flynn, still run it for my own
| personal stuff but $DAY_JOB is all k8s.
| tutfbhuf wrote:
| If public cloud providers offered Nomad, it would only be a
| question of time until managed Nomad became a thing.
|
| Managed _something_ doesn't mean it's "mind-bendingly
| complex", but rather that people don't want to take care of it
| and would rather focus on their own stuff, like building
| applications.
| KaiserPro wrote:
| or, just hear me out, not use k8s and avoid 90% of the problems
| in the first place.
|
| There are some specific places where k8s can shine, but easy
| scaling (1000+ nodes) and low management overhead aren't among
| them.
| ozim wrote:
| Upvote, because that really is the main thing to consider when
| evaluating the actual needs of a project.
|
| My actual need is to deploy my application and not care where
| or how it is running. Most people should not even think about
| running k8s on their own. That should be the job of a service
| provider, and there should be a service provider for running
| Nomad.
| joana035 wrote:
| I don't get why people are so afraid of kubernetes, the thing is
| pretty much hands off.
| rualca wrote:
| > I don't get why people are so afraid of kubernetes, the thing
| is pretty much hands off.
|
| To be fair, this time around the ones afraid of Kubernetes are
| the companies trying to sell a Kubernetes competitor.
| sofixa wrote:
| To be clear, I (the author of the article) am not affiliated
| with Hashicorp in any way besides using some of their (free,
| open source) products.
|
| I just used Kubernetes at work, inherited a poorly maintained
| cluster, and when the time came to change, due to very limited
| time available, we opted for Nomad. A few years in, I'm
| sharing what I've learned (while still running a k8s cluster
| for home use, so I'm not entirely disconnected from the k8s
| ecosystem).
| rualca wrote:
| I see, I apologize for the unintended misrepresentation.
|
| However, one of the things that made me jump to the assumption
| that the blog post was stealth advertising for Nomad was a few
| exaggerated claims that feel like quite a stretch, made to
| come up with anything in favour of Nomad over Kubernetes.
|
| One of them, for example, is that
|
| Nomad is supposed to be "significantly easier than microk8s
| or minikube or kind" to run locally as a dev environment.
| Well, anyone familiar with minikube is well aware that to
| get it up and running you only need to install it and
| kubectl, and quite literally just run 'minikube start'. The
| Nomad agent is not only far more convoluted to get up and
| running according to Nomad's official documentation, but it
| is also explicitly stated that its use is highly
| discouraged. Don't you feel that you're misrepresenting
| Nomad's selling points?
| sofixa wrote:
| I see.
|
| > Nomad is supposed to be "significantly easier than
| microk8s or minikube or kind" to run locally as a dev
| environment. Well, anyone familiar with minikube is well
| aware that to get it up and running you only need to
| install it and kubectl, and quite literally just run
| 'minikube start'
|
| It would appear that I wasn't up to date on minikube - the
| last time I installed it (a few years back; since then I've
| switched to microk8s for local stuff), it required a VM
| intermediary and driver, but that's no longer the case, with
| direct Docker now supported. That's drastically easier than
| it was before, but there's still a slight advantage to Nomad
| (see below).
|
| > Nomad agent is not only far more convoluted to get up
| and running according to Nomad's official documentation,
| but it is also explicitly stated that it's use is highly
| discouraged
|
| How so? Download a binary, run `nomad agent -dev`, and that's
| it - there's nothing convoluted. It's highly discouraged the
| same way minikube is - for production use. With `nomad agent
| -dev` you get a local, ephemeral Nomad instance you can do
| whatever you want with, but should never use in production.
|
| The small advantage I talked about is that minikube is a
| separate thing you have to download (and there's minikube vs
| microk8s vs kind); with Nomad, it's the same binary as the
| CLI you use to interact with the cluster. It's as if there
| were a `kubectl minikube start` - you can go from working
| with a cluster to having a local one for testing in a single
| command. A very slight advantage, indeed, but still one IMHO.
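For reference, the local workflow described above amounts to a dev agent plus a single job file; a minimal sketch (job name, image and resource figures are arbitrary illustrations, not from the article):

```hcl
# hello.nomad - run against a throwaway local agent with:
#   nomad agent -dev           (in one terminal)
#   nomad job run hello.nomad  (in another)
job "hello" {
  datacenters = ["dc1"]   # the -dev agent registers itself in "dc1"
  type        = "service"

  group "web" {
    count = 1

    task "nginx" {
      driver = "docker"   # requires a local Docker daemon

      config {
        image = "nginx:alpine"
      }

      resources {
        cpu    = 100      # MHz
        memory = 128      # MB
      }
    }
  }
}
```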
| sgt wrote:
| I think you have to ask yourself - why are you running
| Nomad locally as a dev environment? It's indeed very
| useful to test running jobs on Nomad just to learn it,
| but to develop using Nomad would be defeating the
| purpose. Just develop your own code as usual: e.g. with a
| Python stack you end up with a Docker container, and with
| Java you end up with a JAR file you can run from Nomad.
| sofixa wrote:
| Is it? If GCP is launching an even more managed version of
| its managed Kubernetes because the current one is too complex
| for some, I'd say it isn't that hands-off, even with a
| managed service.
|
| It's gotten drastically better, but updates are still serious
| work (case in point: all cloud providers bar Scaleway are
| usually multiple versions behind).
| tutfbhuf wrote:
| It's a matter of personal preference and requirements. I can
| easily see that there are requirements where Nomad is a better
| fit. However, I also have to say that Kubernetes has clearly
| won the battle and will be the safe choice for most companies
| for years to come. It will be much easier to find a
| knowledgeable DevOps and SRE workforce for k8s than for any
| other orchestrator alternative.
| atmosx wrote:
| Any engineer worth his salt will be able to hit the ground
| running on both systems pretty quickly. The concepts are
| pretty similar.
| qbasic_forever wrote:
| It's really only in the last couple years that kubernetes has
| started to get tamed and be more approachable. For the longest
| time the only way to get a simple dev cluster up on your laptop
| was to make a VM with like 8-16GB of RAM and four cores
| dedicated to it--it was pretty outrageous. Nowadays kind, k3s,
| etc. make it a simple one line install and sip resources. But I
| agree, there is definitely a misperception that k8s is too
| complex.
| corobo wrote:
| Honestly my turn off was seeing Google Cloud and later
| DigitalOcean providing managed versions of it. If they see
| value in doing that I assume it's gonna be a pain in the arse
| to solo sysadmin it.
|
| I later realised my blog and little side projects do not need
| a cluster, but that's another story. I was looking at it more
| to learn than to properly utilize it.
|
| Completely off topic, sorry. Whatever happened to Flynn.io?
| That was my favorite of the clustery things but it just sort of
| fell off the internet
|
| e: oh. Answered by visiting the site just now. "Flynn is no
| longer being developed." That is a shame.
| mrkurt wrote:
| We use (and abuse) Nomad for Fly.io workloads. My favorite thing
| about it is that it's simple to understand. You can read the
| whole project and pretty much figure out what's going on.
|
| We've written:
|
| * A custom task driver to launch and manage Firecracker VMs
|
| * A device plugin for managing persistent disks
|
| The second is interesting - we looked hard at CSI, but the
| complexity cost of CSI is very high. The benefit is pluggable
| storage providers. We don't need pluggable storage providers,
| though.
|
| Likewise, we don't need pluggable networking.
|
| Device plugins are a really nice way to extend Nomad.
| junon wrote:
| Neither have ever really made me happy as a systems designer.
| They both have some massive drawbacks.
| gravypod wrote:
| I personally feel HCL (or any specific thing) is a large
| disadvantage for any orchestration platform. Kubernetes is an API
| for infrastructure. This is why it uses json/yaml. It doesn't
| mean your configuration should use json/yaml natively.
|
| You can easily write tools that generate json/yaml for you based
| on your needs. For most applications you're deploying you'll
| need:
|
| 1. A Deployment
|
| 2. A Service that points to that Deployment
|
| 3. An Ingress or similar
|
| In some cases you'll want to manage persistent state on disk and
| you'll need a StatefulSet.
|
| A good example of this can be seen at
| https://youtu.be/keT8ixRS6Fk?t=1317 and
| https://youtu.be/muvU1DYrY0w?t=581
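The generate-rather-than-template approach described above is easy to sketch. Assuming plain stdlib Python and the standard `apps/v1`/`v1` manifest shapes, a hypothetical helper (names and defaults are this sketch's own) might look like:

```python
import json


def deployment(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as plain data."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector must match the pod template labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }


def service(name, port=80):
    """Build a Service that selects the Deployment's pods by label."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {"selector": {"app": name}, "ports": [{"port": port}]},
    }


if __name__ == "__main__":
    # Emit JSON that `kubectl apply -f -` can consume directly.
    print(json.dumps([deployment("web", "nginx:1.21"), service("web")], indent=2))
```

Because the output is plain data, static typing and unit tests come for free, which is the point made in the replies below about native clients.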
| xyzzy_plugh wrote:
| A thousand times this.
|
| With Kubernetes, you can even use one of the many native
| clients (and/or protobuf) which makes converting from something
| else a breeze, avoiding the JSON/YAML entirely. You get static
| typing, you can write tests, etc.
| sofixa wrote:
| Well, it's the same thing with Nomad - there are native SDKs,
| protobuf and a classical JSON REST API for everything.
| sofixa wrote:
| > You can easily write tools that generate json/yaml for you
| based on your needs. For most applications you're deploying
| you'll need:
|
| Can you, with YAML? Helm tries but isn't that great; Jinja
| does an OK job (depending on how it's done - Jinja in YAML as
| in Ansible, or Jinja to YAML as in SaltStack), but
| fundamentally, managing a language which uses whitespace for
| logic with code is hard, and YAML certainly wasn't created to
| be managed that way.
|
| JSON, I agree - you can easily convert your language of
| choice's data structures to it. But do you have to? It adds
| another layer of complexity, and you can have unexpected bugs
| (since you have a logic layer that creates the JSON, which
| could be wrong due to a logical error).
|
| HCL is a DSL that needs learning and that few use outside of
| Hashicorp products, but IMHO it's not that hard (the docs are
| great, tutorials are plenty) and it's worthwhile to have
| data+logic in the same human- and machine-readable layer.
|
| In any case, Nomad accepts JSON for everything (mostly for
| when non-humans interact with it).
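To make the JSON path concrete: Nomad's HTTP API registers jobs via `PUT /v1/jobs`, with the job expressed as plain data. A hedged sketch follows; the field names mirror Nomad's JSON job representation as I recall it (the shape `nomad job inspect` prints), so verify them against the API docs for your version before relying on them:

```python
import json
import urllib.request


def nomad_job_payload(job_id, image):
    """Build a minimal Nomad job as plain data for the JSON jobs API.

    Field names are an illustrative assumption based on Nomad's JSON
    job representation; check them against your Nomad version's docs.
    """
    return {
        "Job": {
            "ID": job_id,
            "Name": job_id,
            "Datacenters": ["dc1"],
            "TaskGroups": [{
                "Name": "web",
                "Count": 1,
                "Tasks": [{
                    "Name": job_id,
                    "Driver": "docker",
                    "Config": {"image": image},
                }],
            }],
        }
    }


def submit(payload, addr="http://127.0.0.1:4646"):
    """PUT the job to /v1/jobs on a (local) Nomad agent."""
    req = urllib.request.Request(
        f"{addr}/v1/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)
```

The payload builder is just data manipulation, so it can be unit-tested without a cluster; only `submit` needs a running agent.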
| gravypod wrote:
| Helm, in my personal opinion, is one of the biggest mistakes
| of the kubernetes ecosystem. Not embracing Jsonnet, Cue,
| Dhall, or even an in-language DSL like Pulumi has set the
| usability of kube back quite far.
| qbasic_forever wrote:
| Helm isn't that connected to the broader Kubernetes system.
| It was popular in the early days when Kubernetes knowledge
| was sparse and things were still being figured out. None of
| the official Kubernetes docs really mention or describe
| using helm, and I see more getting-started tutorials starting
| to avoid it too. The official kubectl tool supports a -k mode
| which applies Kustomize logic, and I'd argue Kustomize is
| probably the official way forward for app deployments by the
| Kubernetes project.
| jpgvm wrote:
| $DAY_JOB replaced helm with a custom tool that does the
| equivalent of kubectl apply but uses Jsonnet as the
| templating engine.
|
| I intend to write an OSS version of it, because Jsonnet
| really is probably the best fit here (though Cue/Dhall are
| interesting, I think Jsonnet is the easiest to get others to
| adopt).
| gravypod wrote:
| I've done the same in the past. The important thing is
| making it so it's easy to build up abstractions and to
| work in a sensible deployment flow.
|
| kubectl is close but doesn't understand
| secrets/variables, the concept of clusters, or changing
| image tags programmatically. It's difficult to get right.
| jpgvm wrote:
| Yeah this is all stuff this tool understood, multiple
| clusters, managing kube contexts, external secret storage
| or k8s secrets, native git integration, etc.
|
| Definitely agree that it's difficult to get right but the
| authors of the tool definitely struck a really good
| balance.
|
| Hopefully if I implement it myself I can do their
| original design and implementation justice.
| mdaniel wrote:
| > kubectl is close but doesn't understand
| secrets/variables, the concept of clusters, or changing
| image tags programmatically. It's difficult to get right.
|
| Isn't that the problem kustomize (which kubectl now has
| native support for applying) is trying to solve? I ask
| that honestly, because I haven't used kustomize in anger
| in order to compare its workflow to that of helm
| gravypod wrote:
| My mistake. I meant kubecfg which is built off of
| jsonnet.
| jpgvm wrote:
| I did the full gamut of helm 1/2, tiller/tiller-less, and
| kustomize. Personally I find kustomize only marginally better
| than helm.
|
| I need to check out kubecfg, I haven't used it yet.
| etse wrote:
| I also did something like this a few years ago (with Go
| templating instead of Jsonnet), but we recently replaced
| it with Kustomize (https://kustomize.io/) as it worked
| quite well for the basic use case of getting shared
| Kubernetes configs to deploy code to different
| environments at different times.
| atmosx wrote:
| The fact that trivial things like ingresses don't have
| transformers is a PITA.
| qbasic_forever wrote:
| No need to build it, there's already kubecfg:
| https://github.com/bitnami/kubecfg
| rickspencer3 wrote:
| We use this heavily at InfluxData to manage multiple
| clusters in multiple regions of multiple clouds. One of
| the maintainers of kubecfg recently joined InfluxData as
| well.
| geofft wrote:
| A team at my workplace uses Jsonnet for this problem and it
| seems to work very well. It's a nicely human-readable DSL for
| generating JSON in a way that respects JSON structures - you
| can construct lists of objects (deployments, services, etc.)
| by looping over templates and composing modifications.
|
| That seems a lot better than either SaltStack's use of Jinja
| (using text-based templating on a structured format) or
| Ansible's (using YAML as syntax for an imperative programming
| language that has Jinja-evaluation functionality).
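The structured composition described above can be approximated in any language. A toy Python sketch of Jsonnet-style object merging follows (the merge semantics are a deliberate simplification of Jsonnet's `+`/`+:`; manifest contents are invented for illustration):

```python
def merge(base, override):
    """Recursively merge two manifest dicts: nested dicts are merged,
    everything else (lists, scalars) is replaced by the override.
    Roughly mimics Jsonnet object composition, without its lazy
    evaluation or `self`/`super` references."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out


# A shared base manifest, specialised per environment by composing
# structured overrides rather than by text templating.
base_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {"replicas": 1},
}

prod = merge(base_deployment, {
    "metadata": {"labels": {"env": "prod"}},
    "spec": {"replicas": 5},
})
```

Because the override is structured data, there is no way to produce syntactically invalid output, which is exactly the failure mode of text-based Jinja templating over YAML.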
| nunez wrote:
| Unless you start messing with Operators (which are hot right
| now) for desired state configuration within your cluster, which
| can, theoretically, be written in anything, but realistically
| tend to be written in Golang.
| gravypod wrote:
| Yep, exactly. KubeDB is a great example of this.
| tutfbhuf wrote:
| I think operators are great and true to the k8s spirit. You
| describe your desired state (e.g. a postgres database)
| according to a CRD. And the operator will make your yaml
| definition become reality. I think the declarative approach
| is a big improvement over the imperative step-by-step setup.
| I'd rather describe how I would like things to be, instead of
| having to create the desired state myself.
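The declarative pattern described above boils down to a control loop: compare desired state with actual state and emit the actions that close the gap. A framework-free toy sketch (resource names and specs here are hypothetical, not any real operator's API):

```python
def reconcile(desired, actual):
    """One pass of a reconciliation loop: compute the actions needed
    to move `actual` (what currently exists) towards `desired` (the
    declared spec). Real operators watch the API server for changes
    and re-run this continuously."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions


# Declared state vs what the cluster currently runs (invented data).
desired = {"db": {"replicas": 3}, "cache": {"replicas": 1}}
actual = {"db": {"replicas": 1}, "old-job": {"replicas": 2}}
```

This is the "controller part of operators" mentioned earlier in the thread: the value is that the loop converges to the spec regardless of how the actual state drifted.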
| okso wrote:
| HCL is JSON compatible, and Nomad can load JSON natively.
| mrkurt wrote:
| It can only load json with the api, the nomad CLI can't
| launch json jobs.
| core-questions wrote:
| But who would use the Nomad CLI for serious work? I use the
| Terraform provider for Nomad to launch all my jobs.
| mrkurt wrote:
| Me? It's handy to be able to dump -> modify -> load jobs
| when I'm poking at things. No one said anything about
| serious work, though, they said "load json natively".
| gravypod wrote:
| I assumed that was a possibility, but that still means HCL
| isn't a benefit, since kubectl can load JSON both from the
| CLI and via the apiserver.
| haolez wrote:
| I'd consider Nomad if I needed to manage my own orchestration
| infrastructure somehow.
|
| My first option would be to use a managed Kubernetes service.
| If that proved to be unreliable or not cost-effective in the
| long run, I'd then consider bringing this in-house and using
| Nomad instead.
| nunez wrote:
| I'm surprised that Nomad is gaining traction _now_, five years
| later. Nomad has always been "container orchestration, the easy
| HashiCorp way," but finding case studies from others using it has
| been challenging.
| jokethrowaway wrote:
| In my experience, Nomad and other Hashicorp tools required
| more learning to use all the pieces together / figure out
| edge cases / debug.
|
| K8s was definitely easier to set up.
|
| I feel like nomad is appealing to people who don't really need
| orchestration but feel like they do.
|
| It's perfectly fine to run apps with just docker. I have small
| projects on the side which are just a couple of servers with
| docker: one nginx container pointing at a few docker containers
| containing some services.
|
| Once my needs grow, I'll just roll them over to k8s.
| sofixa wrote:
| > From my experience nomad and other hashicorp tools required
| > more learning how to use all the pieces together / figuring
| > out edge cases / debugging. K8s was definitely easier to
| > setup
|
| How so? I can't imagine any production setup of Kubernetes
| from scratch (not managed by a cloud provider) being easier,
| initially or on day 2, than Nomad+Consul+Vault, but maybe I'm
| missing something.
| francis-io wrote:
| As a cloud engineer in the UK, it's almost required these days to
| know K8s in one form or another. I wish it wasn't the case, but I
| think the battle is over.
| sgt wrote:
| Learning k8s is not a problem, but if you are faced with the
| decision to implement either k8s or Nomad at a company, it's
| worth evaluating both and what fits better for your team.
|
| Recruitment should not be an issue since anyone who understands
| k8s should be able to grasp Nomad, and vice versa (although in
| the latter case, with much more effort).
___________________________________________________________________
(page generated 2021-02-28 23:01 UTC)