[HN Gopher] Nginx Unit: open-source, lightweight and versatile a...
___________________________________________________________________
Nginx Unit: open-source, lightweight and versatile application
runtime
Author : thunderbong
Score : 145 points
Date : 2024-06-01 06:18 UTC (16 hours ago)
(HTM) web link (unit.nginx.org)
(TXT) w3m dump (unit.nginx.org)
| ngrilly wrote:
| Neat! What is the benefit of using this over "standalone" nginx?
| The HTTP API enabling configuration change at runtime without
| downtime (like Caddy)? No need for a process supervisor like
| supervisord or systemd as nginx Unit is managing the backends?
| supriyo-biswas wrote:
| For me, I'd rather ship a single binary with PHP support in it
| when using containers.
| la_fayette wrote:
| Can you elaborate on that? Especially, where is the php
| runtime and the webserver?
| 9dev wrote:
| It's pretty much like Caddy vs. nginx: Language runtime, static
| asset serving, TLS, routing and so on bundled in a single
| package. That makes it very easy to deploy a container, for
| example.
|
| Think of a typical PHP app, which exposes both dynamically
| routed endpoints and static assets. With a traditional setup,
| you'd let nginx handle all paths as static assets and fall back
| to the index.php file to serve the app. When you package that
| as a container, you either have to use separate PHP-FPM and
| nginx containers, or run two processes in a single container.
| Neither of which is ideal. And it gets even more complex with
| TLS, and so on.
|
| Using unit or caddy, you can simplify this to a single
| container that achieves it all, easily.
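|
| For illustration, a minimal sketch of what such a single-container
| Unit config could look like (port, paths and app name here are
| made up, not taken from any real setup):
|
|     {
|       "listeners": { "*:8080": { "pass": "routes" } },
|       "routes": [ {
|         "action": {
|           "share": "/www/public$uri",
|           "fallback": { "pass": "applications/php" }
|         }
|       } ],
|       "applications": {
|         "php": {
|           "type": "php",
|           "root": "/www/public",
|           "script": "index.php"
|         }
|       }
|     }
|
| Static files are served from the share, and anything that
| doesn't match falls back to index.php, all in one process tree.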
| SahAssar wrote:
| What is Caddy's language runtime? Which languages does it
| support? Or are you thinking of the FrankenPHP plugin?
| 9dev wrote:
| Caddy can either use something sophisticated like
| FrankenPHP, which I very much look forward to using soon
| now that it seems stable, or a regular old FastCGI SAPI.
| SahAssar wrote:
| But nginx also supports FastCGI, and you need to run the
| FastCGI server as a separate process (like php-fpm),
| right?
|
| I don't see how caddy (without stuff like frankenphp) is
| any closer to a complete single binary reverse-proxy AND
| language runtime than nginx.
| chuckadams wrote:
| Unit has its own SAPI for PHP and executes it directly,
| no php-fpm needed. I'm using it to serve Wordpress right
| now, works pretty well.
| SahAssar wrote:
| Right, but the parent comment seemed to imply that the
| same was true for Caddy. I was asking what Caddy's
| language runtime was (besides via plugins like FrankenPHP)?
| chuckadams wrote:
| Plugins would be it, so you could say Go is Caddy's
| runtime. Which is of course duh, but since the official
| mechanism to extend it is by statically compiling in go
| code, it's also accurate. It's not like nginx and apache
| are that much different, their "language runtimes" also
| boil down either to extensions linked into the server or
| proxying to a backend through another protocol like
| FastCGI. Caddy supports fcgi out of the box, even using
| PHP's default settings with one line of config, but I'm
| not a big fan of php-fpm, and I like having just one
| daemon to supervise.
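|
| For reference, that "one line" looks roughly like this in a
| Caddyfile (domain and php-fpm socket path are placeholders that
| depend on your setup):
|
|     example.com {
|         root * /var/www/html
|         php_fastcgi unix//run/php/php-fpm.sock
|         file_server
|     }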
| SahAssar wrote:
| So walking back to the comment
| (https://news.ycombinator.com/item?id=40543839) this
| started at:
|
| > It's pretty much like Caddy vs. nginx: Language
| runtime, static asset serving, TLS, routing and so on
| bundled in a single package. That makes it very easy to
| deploy a container, for example.
|
| > Using unit or caddy, you can simplify this to a single
| container that achieves it all, easily.
|
| With caddy this is not true, unless you have compiled in
| your own plugin (custom or frankenphp), right?
|
| All I was asking is what they thought the language
| runtime for caddy was.
| attentive wrote:
| It's an app server. It can run your ASGI or WSGI app.
| callahad wrote:
| Exactly, they're complements: you'd deploy your application on
| Unit and put that behind NGINX or another reverse proxy like
| Caddy or Traefik.
|
| Unit can serve static assets, directly host Python / PHP /
| WebAssembly workloads, automatically scale worker processes on
| the same node, and dynamically reconfigure itself without
| downtime.
|
| Unit _cannot_ do detailed request/response rewriting, caching,
| compression, HTTP/2, or automatic TLS... yet. ;)
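|
| As a rough sketch, that per-app scaling is driven by a
| "processes" object on the application; the app name and numbers
| below are just examples:
|
|     "applications": {
|       "myapp": {
|         "type": "python 3.11",
|         "path": "/app",
|         "module": "wsgi",
|         "processes": { "max": 10, "spare": 2, "idle_timeout": 20 }
|       }
|     }
|
| Unit keeps "spare" idle workers around, forks up to "max" of
| them on demand, and retires the extras after "idle_timeout"
| seconds.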
| geenat wrote:
| At this point I'd rather be good at Caddyfile and have a project
| folder of: /home/me/project/caddy
| /home/me/project/Caddyfile
|
| No sudo, no config spew across my filesystem. Competition is
| good, and I had a lot of fun with nginx back in the day but it's
| too little too late for me.
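|
| That workflow is roughly the following (assuming the caddy
| binary lives in the project dir; the XDG overrides just keep
| Caddy's certificate/state storage inside the project as well):
|
|     cd /home/me/project
|     XDG_DATA_HOME=$PWD/data XDG_CONFIG_HOME=$PWD/config \
|       ./caddy run --config ./Caddyfile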
| kaptainscarlet wrote:
| Caddy sounds like the go-to tool for people who care a lot
| about getting things done. It's time for me to try it.
| diarrhea wrote:
| A coworker of mine dislikes it as it bundles everything into
| a single binary. For example, to have ACME DNS-01 challenges
| for certificate issuance working, I need to compile in a
| Google DNS-specific plugin.
|
| But then it... just works. Try the same with most other web
| servers/proxies and you're in for a world of pain. Having
| this much functionality bundled into a single binary is as
| much a curse as it is a blessing.
|
| That said, having your own little 'Cloudflare Workers' in the
| form of Nginx Unit with wasm sounds great. Not sure Caddy can
| do that.
| TheCapeGreek wrote:
| For me, the promise of Caddy and especially tools around it
| like FrankenPHP make the "everything in a single binary"
| idea the MORE enticing option, not less.
|
| Sure we already have repeatable infrastructure, containers,
| etc. but I also love the idea of just building and shipping
| a PHP app binary that includes the webserver. It makes
| server provisioning even less of a priority, especially if
| I have reasons to not use serverless or PaaS tools.
| AbraKdabra wrote:
| Having a single binary is definitely what drives me to use
| certain software, Deno is one of them.
| beeboobaa3 wrote:
| It's great until you want to include a non-standard
| plugin and need to compile your own binaries.
|
| Now that single binary deployment requires you to compile
| the software yourself. Caddy has nice tooling for this
| but it'd be far more convenient to just drop a dll/so
| file in the right directory.
|
| Single binary deployments are great if someone else did
| the compiling for you. If you need to compile yourself it
| truly does not matter if you need to ship a single binary
| or a directory or whatever.
| 9dev wrote:
| If you want to see a real-life example of what Caddy can do,
| feel free to check the configuration of my iss-metrics
| project:
|
| https://github.com/Radiergummi/iss-
| metrics/blob/main/caddy/C...
|
| I was in the same boat as you and wanted to try out what
| Caddy is capable of. I was immediately convinced. So many
| features, where you expect them. Consistent configuration
| language. Environment interpolation, everywhere. Flexible
| API. It's really all there.
| ac130kz wrote:
| At first glance it doesn't look convincingly better than a
| generic, manually polished nginx configuration. Are there
| any other benefits to Caddy?
| corobo wrote:
| It does all the Let's Encrypt stuff for you - certbot is
| not a massive hassle if you're just serving the one
| domain, of course, but I really liked it for that when I
| was setting up a redirect server (corps do love buying
| TheirBrand.everytld haha)
|
| I set the config up with CI/CD and can now just edit the
| config and git push, knowing Caddy will handle the rest
| ac130kz wrote:
| Seems to be a middle ground between doing certs on a small
| scale with cron jobs and a fully-fledged automated
| Kubernetes cluster.
| 9dev wrote:
| If you choose to start the project with docker compose,
| you'll notice how it will immediately bring up a fully
| functional reverse proxy setup with TLS support on
| localhost; set the SITE_DOMAIN environment variable to
| your proper domain instead, and you'll find that
| configured as well, along with a proper, ACME-issued
| certificate. Add a bit more effort, and you'll also get
| mTLS for all services automatically.
|
| All of this is more or less doable with nginx, I've done
| it often enough. But read the Caddyfile and tell me this
| isn't miles ahead in clarity.
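|
| The pattern described above boils down to something like
| this (a trimmed sketch, not the actual iss-metrics
| Caddyfile; the app:8000 upstream is a placeholder):
|
|     {$SITE_DOMAIN:localhost} {
|         encode gzip
|         reverse_proxy app:8000
|     }
|
| With no SITE_DOMAIN set, Caddy issues a locally trusted
| certificate for localhost; pointed at a public domain, it
| switches to ACME on its own.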
| otabdeveloper4 wrote:
| Better Docker integration out of the box, I guess.
|
| I don't use docker so I don't care.
| page_fault wrote:
| It's a fine project right up to the point where you need
| additional functionality that's split out into one of the
| plugins. Since Go applications don't support proper .so
| plugins in practice, you have to build your own binaries or
| rely on their build service, and this puts the responsibility
| of supporting and updating such a custom configuration on you.
|
| So no setting up unattended-upgrades and forgetting about it.
| thegeekpirate wrote:
| I think that's what https://caddyserver.com/docs/command-
| line#caddy-upgrade (and the following commands) are for ;)
| beeboobaa3 wrote:
| > experimental
|
| also totally non-standard, apt unattended-upgrades won't
| be doing that for you.
|
| sure you can do a cronjob, but, non-standard
| bavell wrote:
| Eh, it's a bit overhyped imo, although I do like the config
| format and built-in ACME. My production clusters all run
| nginx, though, and give me minimal fuss with a lot of
| flexibility.
| NetOpWibby wrote:
| I recently set up a Flarum forum and the instructions
| mentioned Apache and Nginx. I sighed until I saw Caddy
| immediately below.
|
| Caddy really is the most pleasant webserver software I've
| ever used.
| asmor wrote:
| has anyone figured out why caddy is substantially slower
| (throughput, not latency) than nginx at reverse proxying? i've
| switched it around for my seafile and it's a night and day
| difference.
| ac130kz wrote:
| Garbage collection pauses might have something to do with
| that.
| dboreham wrote:
| Unlikely.
| tempest_ wrote:
| Also, it lacks nginx's 20 years of optimization, during which
| nginx was one of the larger open-source web servers and so
| got a lot of attention.
|
| In 2024 people are more likely to turn the cloud knob up to
| pay for throughput (if they need it) and save on dev time
| with the comparably better dev ex that caddy offers.
| nickjj wrote:
| > In 2024 people are more likely to turn the cloud knob
| up to pay for throughput (if they need it) and save on
| dev time with the comparably better dev ex that caddy
| offers.
|
| This seems like a weird trade off to me.
|
| The "learning tax" is really only paid once with nginx.
| Once you understand how it works and configured a
| reasonably end-to-end example with it then you can carry
| that over to your next project with minimal changes.
|
| I've hosted countless Flask, Django, Rails, etc. apps
| over the years and very little changes on the nginx side
| of things. I'd rather learn this tool once and have
| better runtime performance all the time across all
| projects.
|
| With that said, the performance difference probably won't
| be very noticeable for most sites but still, I personally
| wouldn't want to give in to running a less efficient
| solution when I know a more efficient solution exists
| right around the corner that requires no application code
| changes to use -- just a little elbow grease to configure
| nginx once. This is especially true when nginx has a ~20
| year track record of stability and efficiency.
| jacob019 wrote:
| Right. Every time nginx comes up, someone has to bring up
| how much better caddy is. After using nginx everywhere
| for nearly two decades I have no desire to learn a new
| tool. Nginx does everything I want, even some exotic
| stuff and plugins for certain use cases. It is highly
| configurable, has plenty of good documentation, is well
| supported in every distro, and is extremely performant. I
| don't care how much easier caddy is and that it can
| configure certs for me. I prefer the unix philosophy
| anyway, and it's not like I'm spending a significant
| amount of time on nginx configs or certs. I use acme.sh
| for certs; it only takes a couple of minutes to provision
| a new instance with nginx and acme.sh, just the way I want
| it. End rant.
| lelanthran wrote:
| > I don't care how much easier caddy is and that it can
| configure certs for me.
|
| I always find this a weird selling point, TBH.
|
| It's probably a selling point for people who don't
| already know the existing $FOO.
|
| For me, putting effort into learning the new thing only
| to use it exactly as the old thing is wasted effort.
|
| I don't know what I gain by moving to the new $FOO,
| usually.
| tempest_ wrote:
| Sure, but your list of things there describes a workflow
| that is becoming a bit old.
|
| I would wager the vast majority of Nginx "installs" are
| running in a container nowadays.
|
| The distro doesn't matter and few are provisioning an
| instance of anything; that's a container orchestration
| job.
|
| Last week I was trying to coax nginx into setting a CSP
| nonce in a web app's index.html, which apparently meant I
| would need to custom-build Nginx, or build a container
| with a custom-built Nginx, to install a plugin to do it.
| This type of stuff adds up, and having a bunch of features
| hidden in Nginx Plus doesn't help either.
|
| I think Nginx is a great piece of software; it's just that
| people don't need all its offerings. They just want to
| host some tiny JS and proxy to an API, and things like
| caddy were built for that. The limited throughput doesn't
| matter when Cloudflare or CloudFront cache most of the
| things it is serving anyway.
| jacob019 wrote:
| I'm pretty much always in a container too, and I'm
| perfectly happy with the workflow, it scales just fine
| and can be orchestrated as well. When I'm deploying a new
| project, bringing up a new container with nginx is like
| 0.1% of the work, why mess with that? I like Debian, I
| use it in my containers. Yes there are slimmer and
| lighter things, I don't care, my stack is rock solid.
|
| As for the CSP nonce, I'm surprised that there isn't a
| plugin, but compiling isn't a big deal, just annoying.
| Alternatively, NGINX is scriptable, or you can do it at the
| application level as well. If caddy is easier for you or
| for that use case, then that's great, use it with my
| blessing.
| asmor wrote:
| I'd consider only pushing 20Mbps on a 2.5GbE network more
| than a lack of optimization. Supposedly you can tune some
| buffer sizes to make it better, but it's still laughably
| bad for serving larger files.
| asmor wrote:
| Go isn't Java; especially after the GC rewrite in 1.5 (and
| smaller-scale changes in 1.19), the GC doesn't pause long or
| often enough to affect throughput.
| cpach wrote:
| AFAIK, nginx doesn't require root. If you're thinking about the
| ability to bind port 80/443, you should be able to do that via
| CAP_NET_BIND_SERVICE.
|
| With that said, Caddy is pretty rad.
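|
| For example (the binary path varies by distro; under systemd,
| AmbientCapabilities is often the cleaner route):
|
|     # let the binary bind ports below 1024 without root
|     sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx
|
|     # or, in the systemd unit file:
|     #   AmbientCapabilities=CAP_NET_BIND_SERVICE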
| klabb3 wrote:
| This is more of a linuxism no? I agree though, I have used
| Linux for decades but I never remember the usr, bin, local, etc
| permutations of magical paths, nor do I think it makes any
| sense. It's a mess honestly, and is almost never what I want.
| When I was younger I thought I was holding it wrong but these
| days I'm sure it will never, ever map well to my mental model.
| It feels like a lot of distro-specific trivia that's leaking
| all over the floor.
|
| It seems like a lot of the hipster tooling dropped those things
| from the past and honestly it's so much nicer to have things
| contained in a single file/dir or at most two. That may be a
| big reason why kids these days prefer them, honestly.
|
| As for nginx itself it's actually much better suited for high
| performance proxies with many conns, imo. I ran some benchmarks
| and the Go variants (traefik, caddy) eat a lot of memory per
| conn. Some of that's unavoidable because of minimum per-
| goroutine stacks. Now I'm sure they're better in many ways but
| I was very impressed with nginx's footprint.
| gjvc wrote:
| honestly
| klabb3 wrote:
| Making me self conscious, honestly. I'm not a patient and
| careful writer. That's why I'm lurking in the comments.
| nevermore24 wrote:
| I don't know if not being able to remember filesystem
| conventions is Linux's fault. Computers have a lot of
| esoterica and random facts to recall. How is this one any
| different?
|
| See also:
| https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html
| kbenson wrote:
| Windows has the same thing, it's just much less exposed. And
| none of the Linux paths are magical; they're well defined and
| mostly adhered to across all major distros. The core of how it
| works in Linux is fairly straightforward.
|
| The main difference is in how additional software is handled.
| Windows, because of its history with mostly third party
| software being installed, generally installed applications
| into a folder and that folder contained the application...
| Mostly. Uninstalling was never as simple as that might imply.
|
| Linux distros had an existing filesystem layout (from Unix)
| to conform to, so when they started developing package
| managers, they had to support files all over the place, so
| they make sure packages include manifests. Want to know where
| user executables are? Check bin. Superuser executables?
| Check sbin (don't want those cluttering the available utils
| in the path of regular users). Libs go in lib.
|
| /bin and /usr/bin and the others are holdovers from the long
| past when disks were small, and recent distros often symlink
| /bin to /usr/bin so they're different in name only. /usr/local
| is for local admin modifications that are not handled through
| a package. /opt is for whatever, and often used for software
| installed into a contained folder, like in Windows.
|
| Just know what bin, sbin, lib, opt and etc are for, and most
| of the rest is irrelevant as long as you know how to query the
| package manager for what files a package provides, or ask it
| what package a specific file belongs to. If you looked into
| Windows and the various places it puts things, I suspect you'd
| find it _at least_ as complicated, if not much more.
|
| Note: what I said may not match the FHS (which someone else
| usefully posted) perfectly, but for the most part it should
| work as a simple primer.
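|
| On Debian-family systems, those package-manager queries look
| like, for example:
|
|     dpkg -L nginx            # files a package installed
|     dpkg -S /usr/sbin/nginx  # which package owns this file
|     # rpm equivalents: rpm -ql nginx / rpm -qf /usr/sbin/nginx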
| renewiltord wrote:
| These are things for sure, but nginx config files are well
| understood by LLMs so I get good advice from them. That's
| really the limiting factor for most equivalent tools for me
| these days, how well the LLM handles it.
|
| If someone hooks them up to a man page I think it might level
| the playing field.
| sofixa wrote:
| I'm really not a fan of Caddy. It tries to be too smart and
| make things simple for users, which means that the second
| something goes wrong or you step out of the expected box, you
| get weird behaviour which is hard to debug.
|
| Fun example from last week: a colleague was trying out ACME
| with a custom ACME server, and configured it. For some
| reason Caddy was not using it and instead used its own internal
| cert issuer, even if explicitly told to use the ACME provider
| as configured. Turns out that if you use the .local domain,
| Caddy will insist on using its own cert issuer even if there's
| an ACME provider configured. Does that make sense? Yeah,
| somewhat, but it's the kind of weird implicit behaviour that
| makes me mistrust it.
|
| My go-tos are nginx for static stuff, Traefik for dynamic
| stuff.
| gnaman wrote:
| Unfortunately, Caddy does not support, and does not plan to
| support, anything other than HTTP/HTTPS. These days I find
| myself going back to nginx only for TCP/UDP reverse proxying.
| simonw wrote:
| For some reason I had a thought lodged in my head that Unit
| wasn't open source, but I just checked their GitHub repo and it's
| been Apache 2 since they first added the license file seven years
| ago.
|
| I must have been confusing it with NGINX Plus.
| rvnx wrote:
| "Oops, sorry, thank you for letting us know, we will change
| that to the proprietary license instead"
| vdfs wrote:
| Usually it's done on purpose; they wait until it gets very
| popular and used everywhere before pulling the rug
| callahad wrote:
| I wouldn't bet on that. :)
|
| F5 isn't the most visible corporation in terms of grassroots
| engagement, but NGINX itself has remained F/OSS all these
| years and newer projects like the Kubernetes Ingress
| Controller [0], Gateway Fabric [1], and NGINX Agent [2] are
| all Apache 2.0 licensed. Just like Unit.
|
| We _do_ have commercial offerings, including the
| aforementioned NGINX Plus, but I think we've got a decent
| track record of keeping useful things open.
|
| [0]: https://github.com/nginxinc/kubernetes-ingress
|
| [1]: https://github.com/nginxinc/nginx-gateway-fabric
|
| [2]: https://github.com/nginx/agent
| rvnx wrote:
| Ok, seems better than the industry then :)
|
| I have trauma from Aerospike, Redis, and a couple of others,
| so it may have affected my perception.
| g15jv2dp wrote:
| I don't know the other one, but what's your gripe with Redis,
| exactly? Can you articulate it?
| casperb wrote:
| I tried a setup with Nginx Unit and php-fpm inside a Docker
| container, but the way to load the config is so cumbersome
| that I was never confident enough to use it in production. It
| feels like I am
| doing something wrong. Is there a way to just load a config file
| from the filesystem?
| jonatron wrote:
| https://unit.nginx.org/howto/docker/#apps-in-a-containerized...
|
| > We've mapped the source config/ to /docker-entrypoint.d/ in
| the container; the official image uploads any .json files found
| there into Unit's config section if the state is empty.
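|
| If you build your own image on top of the official one, the
| same mechanism can be reused; a hypothetical Dockerfile (the
| image tag and paths are illustrative only):
|
|     FROM unit:php8.3
|     COPY . /www/
|     # read by the entrypoint on first start, while state is empty
|     COPY config.json /docker-entrypoint.d/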
| casperb wrote:
| I saw that, but I like to build my own container, so I did
| roughly the same steps they do. But it feels complicated.
| jonatron wrote:
| Can you copy the official image's script?
| https://github.com/nginx/unit/blob/0e79d961bb1ea68674961da17...
| gawa wrote:
| The docs mention:
|
| > The control API is the single source of truth about Unit's
| configuration. There are no configuration files that can or
| should be manipulated; this is a deliberate design choice
|
| (https://unit.nginx.org/controlapi/#no-config-files)
|
| So yeah, the way to go is to run something like `curl -X PUT
| --data-binary @/config.json --unix-socket
| /var/run/control.unit.sock http://localhost/config/` right
| after you start your nginx-unit.
|
| How you manage that separate config step depends on how you
| run the nginx-unit process (systemd, Docker, Podman,
| Kubernetes...). Here's an example I found where the command is
| put in the entrypoint script of the container (see toward the
| end): https://blog.castopod.org/containerize-your-php-
| applications...
| casperb wrote:
| I did that, but sometimes it takes a short moment before Unit
| is started, so you need a loop to check if Unit is responding
| before you can send the config. In total it was around 20
| lines just to load the config. It feels like I'm doing
| something wrong, or using the wrong tool.
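|
| For reference, the kind of entrypoint being described is
| roughly this (socket and config paths are assumptions, and
| signal handling is left out):
|
|     #!/bin/sh
|     SOCK=/var/run/control.unit.sock
|     unitd --no-daemon &
|
|     # wait until the control API answers
|     until curl -s --unix-socket "$SOCK" http://localhost/ \
|         >/dev/null; do
|       sleep 0.2
|     done
|
|     curl -X PUT --data-binary @/config.json \
|       --unix-socket "$SOCK" http://localhost/config/
|
|     wait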
| ajayvk wrote:
| I am building https://github.com/claceio/clace. It allows you
| to install multiple apps. Instead of messing with routing
| rules, each app gets a dedicated path (can be a domain). That
| way you cannot break one app while working on another.
|
| Clace manages the containers (using either Docker or Podman),
| with a blue-green (staged) deployment model. Within the
| container, you can use any language/framework.
| callahad wrote:
| We're very actively working on improving Unit's UX/DX along
| those lines. Our official Docker images will pick up and read
| configuration files from `/docker-entrypoint.d/`, so you can
| bind mount your config into your container and you should be
| off to the races. More details at
| https://unit.nginx.org/installation/#initial-configuration
|
| But that's still kinda rough, so we're also overhauling our
| tooling, including a new (and very much still-in-development)
| `unitctl` CLI which you can find at
| https://github.com/nginx/unit/tree/master/tools/unitctl. With
| unitctl today, you can manually run something like `unitctl
| --wait-timeout-seconds=3 --wait-max-tries=4 import
| /opt/unit/config` to achieve the same thing, but expect further
| refinements as we get closer to formally releasing it.
| casperb wrote:
| That sounds much better, thanks for the effort.
| RantyDave wrote:
| Django without gunicorn? I'll give it a go...
| move-on-by wrote:
| I've largely enjoyed gunicorn. What do you dislike about it?
| DataDive wrote:
| I largely agree here that of all the components in a stack,
| gunicorn seems to be the least troublesome and almost
| invisible.
|
| I have never had a problem that I would have traced back to
| gunicorn not working ...
|
| On the other hand not having to run gunicorn as another
| separate service might be an advantage.
| anentropic wrote:
| I'm wondering what are the pros and cons of that vs this
|
| Years ago every website needed Apache or Nginx, then lately
| I've hardly used it at all... usually have a container with
| gunicorn behind a load balancer that is part of the cloud
| platform
|
| It's easy to see how to get Nginx Unit working, but not sure
| exactly how it fits into the utility picture vs other options
| teitoklien wrote:
| Nginx is a reverse proxy: it can work as a load balancer, a
| static asset server, a response-caching server, an
| authorization server, an HLS streaming server, etc.
|
| Nginx packs a lot of use cases into one package, and most
| larger companies and more technical workplaces use Nginx or
| similar alternatives.
|
| Nginx Unit is typically meant for unifying the entire
| application space of multiple programming languages under one
| server (which also acts as a static server).
|
| So you can serve Go, Python and PHP code and applications all
| under a single application server called Nginx Unit, and
| dynamically change its routes with API calls too.
|
| This lets you have one root process in a Docker container
| controlling your Python FastAPI or Go API processes, without
| needing one container for nginx and one container for the
| Python/Go process, or a supervisord-like init system
| controlling two or more processes inside one container.
|
| Everything sits under Nginx Unit, and Nginx Unit runs as the
| main process inside the container.
|
| Moreover, it is also much faster in terms of response time
| than most language-specific application servers like
| gunicorn/uvicorn for Python 3, etc. [1]
|
| [1](https://medium.com/@le_moment_it/nginx-unit-discover-and-
| ben...)
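|
| A rough sketch of that kind of unified config (app names, paths
| and URIs below are invented for the example):
|
|     {
|       "listeners": { "*:80": { "pass": "routes" } },
|       "routes": [
|         { "match": { "uri": "/api/*" },
|           "action": { "pass": "applications/fastapi" } },
|         { "match": { "uri": "/blog/*" },
|           "action": { "pass": "applications/php" } },
|         { "action": { "share": "/www/static$uri" } }
|       ],
|       "applications": {
|         "fastapi": { "type": "python 3.11", "path": "/app",
|                      "module": "main", "callable": "app" },
|         "php": { "type": "php", "root": "/www/blog" }
|       }
|     }
|
| Each piece can then be changed at runtime with a PUT or DELETE
| against the matching sub-path of the control API, such as
| /config/routes.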
| attentive wrote:
| Unit replaces gunicorn. It should also be much faster, but do
| your own tests.
| e-brake wrote:
| I am using NGINX Unit with Django in a bunch of production
| workloads, with high traffic. Works really well!
|
| Most of the time was spent building from source in Docker for
| ARM support, and going down a rabbit hole of targeting the
| minor Python version that apt was installing on some Debian
| releases (without a virtualenv) instead of the one it
| defaulted to.
|
| I'm a fan. High performance, easily configurable ASGI server
| for so many flavors.
| random_savv wrote:
| How does this compare to OpenResty? Could it somehow help with
| OIDC support (e.g. by integrating a relevant nodejs lib)?
| pjmlp wrote:
| There is a certain irony that, a decade after the
| application-server bashing phase, everyone is doing their own
| version.
| throwaway894345 wrote:
| What is an application server exactly and who besides nginx is
| building one?
| pjmlp wrote:
| Apache with mod_tcl, mod_perl, mod_php, Websphere, JBoss (now
| Wildfly), Payara, IIS/.NET, Erlang/BEAM...
|
| Basically a full stack experience from programming language,
| web framework and networking protocols, possibly with a
| management dashboard.
|
| As for who is building one: everyone that is now trying to
| sell the idea of packaging WebAssembly into containers,
| deploying them into managed Kubernetes clusters, or
| alternatively cloud managed applications, like Vercel,
| Netlify, Azure Container Apps, Cloud Run, ...
| anentropic wrote:
| > everyone that is now trying to sell the idea of packaging
| WebAssembly into containers, deploying them into managed
| Kubernetes clusters, or alternatively cloud managed
| applications
|
| Does Nginx Unit really fit into this picture though?
|
| Is there a place for an all-in-one app server in that
| scenario? I would have thought you'd want each component to
| be separated (wasm host, load balancer, etc.) for
| commoditisation and independent scaling of different layers
|
| (This is not a criticism in form of a question... I am
| honestly curious)
| callahad wrote:
| Absolutely keep your load balancer for multi-node
| scaling, but _how_ are you going to run your WebAssembly
| workloads within a given node? Unit can do that.
|
| Or what if you have a single logical service that's
| composed of a mix of Wasm endpoints and static assets
| augmenting a traditional Python application? Unit pulls
| that all together into a single, unified thing to
| configure and deploy.
|
| If you're writing Node, Go, or Rust you haven't had to
| think about application servers for a long time. Folks
| writing Python and PHP still do, and WebAssembly will
| require the same supporting infrastructure since Wasm --
| by definition -- is not a native binary format for any
| existing platform. :)
| anentropic wrote:
| Well there are other dedicated "WASM in k8s" solutions
| like SpinKube
|
| and my Python apps have not been behind Nginx for a long
| time, they're mostly wrapped in a zero-config gunicorn
| runner in a Docker container, static assets in S3 via a
| CDN
|
| am wondering who wants a single-node heterogeneous
| application server these days
|
| TBH the simplicity of it is appealing though
| callahad wrote:
| IMHO, it's still a few years too early for pure-play Wasm
| solutions, though Fermyon is doing exceptional work to
| manifest that future.
|
| My hope is that Unit can offer a pragmatic bridge: run
| your existing applications as-is, and when you want to
| sprinkle in some Wasm, we're ready. That's not to say
| Wasm is Unit's only use case, but I do believe it's what
| will get people thinking about application servers again.
| :)
|
| > _my Python apps have not been behind Nginx for a long
| time, they're mostly wrapped in a zero-config gunicorn
| runner in a Docker container, static assets in S3 via a
| CDN_
|
| ...and are there any reverse proxies, load balancers, or
| caches on the network path between your end user and your
| container? ;)
| xandrius wrote:
| I read this and expected some sort of unit testing for nginx
| configurations.
|
| I'd love to have something like that: provide a configuration and
| automatically check all the paths that the configuration enables.
| Maybe throw in some LLM for some comments and tips to improve
| performance/security.
| nginxdud wrote:
| Another dud from the people that bought stolen code.
|
| Nginx is awful, archaic software.
| victorbjorklund wrote:
| What do you use instead?
| OneOffAsk wrote:
| Nginx is a state machine that efficiently handles lots of L4-L7
| protocols. Seems weird to feel any emotions about it.
| fastball wrote:
| We've been using Nginx Unit in production for a Python backend
| for about a year now and it's been working pretty well. Some
| thoughts in no particular order:
|
| - "Nginx Unit" is an annoying name to google when you have a
| problem. You get a lot of nginx results that are of course
| completely irrelevant to what you're looking for, as there is
| zero actual overlap between the two things. Using quoted search
| terms is not sufficient to overcome this.
|
| - When it works, the all-in-one system feels great.
|
| - However sometimes the tightly-coupled nature can be slightly
| annoying. For example, they publish packages for the various
| runtimes (ours is python) in the various registries, but only for
| the "defaults. Concrete example: we are currently running Ubuntu
| 23.04 but wanted to upgrade to Python 3.12. However Nginx Unit
| only pre-packages a Python 3.11 package for unit on Ubuntu 23.04
| as that is the system-included Python. Had to build our own
| support from source, which was fairly easy, but still more
| difficult than our pre-Nginx Unit infra, where all I would have
| to do is install Python 3.12 and I'm good to go (because the
| python runtime wasn't at all coupled with the webserver when our
| stack was Nginx + Python + Gunicorn)
|
| - I never properly benchmarked the two in a comprehensive
| comparison, but Nginx Unit is definitely faster than the
| aforementioned previous stack. I tested some individual routes
| (our heaviest/most important) and the speedup was 20-40%.
|
| - Even when I tell it to keep a minimum number of worker
| processes around, it kinda seems... not to? I haven't properly
| tested, but sometimes it feels more like serverless, where if I
| haven't sent a request in a while it takes a bit of extra time to
| spin up a process, but after that requests are snappier.
| Definitely need to properly investigate this but haven't gotten
| around to it yet. It might just be the difference between
| allocated memory and not rather than spinning up processes.
|
| - It's a shame it doesn't support SSL via Let's Encrypt out-of-
| the-box, like Caddy. To me that is the biggest (really only)
| missing piece at the moment.
|
| - I much prefer using the HTTP system to manage config than
| files, and find the Nginx Unit JSON config much, much more
| readable than either Nginx or Apache configs I've worked with in
| the past. I'd also give it a slight edge over caddy
| configuration.
|
| - That said, managing the config (and system in general) can
| sometimes be annoyingly opaque. The error messages are somewhat
| terse when updating config fails, so you need to dig into the
| actual logs to see the error. Just feels a little cat-and-mousey
| when you could just tell me what the error is up-front, when I'm
| sending the config request.
|
| In summary, overall I've liked using Nginx Unit, but wish they
| would: change the name to something less confusing, add built-in
| Let's Encrypt support ala Caddy, and make the errors and overall
| stack feel a little less opaque / black boxy.
| callahad wrote:
| Hi! I'm currently in charge of Unit. If you're using it, I'd love
| to chat with you to understand what sucks, what doesn't, and
| what's missing. I have my own ideas, but external validation is
| always nice. :)
|
| Contact info is in my profile.
___________________________________________________________________
(page generated 2024-06-01 23:01 UTC)