[HN Gopher] We're migrating many of our servers from Linux to Fr...
___________________________________________________________________
We're migrating many of our servers from Linux to FreeBSD
Author : NexRebular
Score : 314 points
Date : 2022-01-24 13:53 UTC (9 hours ago)
(HTM) web link (it-notes.dragas.net)
(TXT) w3m dump (it-notes.dragas.net)
| nix23 wrote:
| Congratulations! I did the same ~5 years ago and couldn't be any
| happier: jails, bhyve, dtrace, ZFS/UFS, pf, GEOM compression,
| pkg/ports, etc. etc. Nearly every day I find some useful feature,
| and when I try them out... they work!!
| kodah wrote:
| > Linux has Docker, Podman, lxc, lxd, etc. but... FreeBSD has
| jails!
|
| Docker, Podman, LXC, LXD, etc. are userland components. Linux has
| cgroups and namespaces.
|
| FreeBSD jails are a bit more complicated because FreeBSD isn't
| distributed the way Linux is. Linux is distributed as _just_ the
| kernel, whereas FreeBSD is a base OS. This probably could've
| been phrased better as, "Linux has no interest in userland and I
| want some userland consistency". That's fair: Linux was built
| around the idea that operating system diversity was a good thing
| long term; FreeBSD was more interested in consistency. I'm
| reading between the lines a bit here, because of the critique of
| systemd (note: not all Linuxes use systemd).
|
| Personally speaking, I like both Linuxes and FreeBSD, but I don't
| think debating the two is important. Rather, I'd encourage
| turning your attention to the fact that every other component in
| a system runs an OS-like interface that we don't make open OSes
| or "firmware" for.
| CyberRabbi wrote:
| To be fair all of these reasons come down to personal preference
| (sans the TCP performance claim). E.g. he prefers FreeBSD's
| performance monitoring tools to Linux's monitoring tools, or he
| prefers FreeBSD's user land to Linux's user land. That's fine but
| it's not very persuasive.
| rwaksmunski wrote:
| vmstat -z gives me counters of kernel limit failures on
| FreeBSD. Very useful when debugging errors in very
| high-performance environments. Does anybody know what the
| Linux equivalent is? Say I need to know how many times file
| descriptor/network socket/firewall connection state/accepted
| connection limits were reached?
| CyberRabbi wrote:
| If one doesn't exist out of the box, you can relatively easily
| roll your own using bcc: https://github.com/iovisor/bcc
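For reference, Linux does scatter some of these counters across /proc rather than offering a single vmstat -z view: accept-queue overflows show up as ListenOverflows/ListenDrops in /proc/net/netstat, and system-wide file-handle usage lives in /proc/sys/fs/file-nr. A minimal sketch of reading them (which counters are present varies by kernel version):

```python
# Sketch: read a few Linux kernel limit counters from /proc.
# /proc/net/netstat comes as pairs of lines: a header of counter
# names and a matching line of values, each prefixed "TcpExt:" etc.

def parse_proc_counters(text):
    """Turn header/value line pairs into {"TcpExt:ListenOverflows": n, ...}."""
    counters = {}
    lines = text.strip().splitlines()
    for header, values in zip(lines[0::2], lines[1::2]):
        proto, names = header.split(":", 1)
        _, nums = values.split(":", 1)
        for name, num in zip(names.split(), nums.split()):
            counters[f"{proto}:{name}"] = int(num)
    return counters

if __name__ == "__main__":
    # Accept-queue overflows (listen backlog limit reached):
    with open("/proc/net/netstat") as f:
        c = parse_proc_counters(f.read())
    print("listen overflows:", c.get("TcpExt:ListenOverflows"))
    # Allocated vs. maximum file handles, system-wide:
    with open("/proc/sys/fs/file-nr") as f:
        allocated, _free, maximum = f.read().split()
    print(f"file handles: {allocated}/{maximum}")
```

It's not one consolidated tool the way vmstat -z is, which is arguably the grandparent's point.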
| quags wrote:
| I use FreeBSD for one project on Node.js and for SSH jumper
| boxes. I have also been admining Linux boxes since '99. I have no
| hate for systemd - there was just a learning curve. I like the
| basic rc.conf setup that FreeBSD has: everything can go in this
| one file for startups. Binary updates have been around for years,
| so doing security updates is easy with no need to rebuild world
| or compile. You can use pkg for third-party installs (binaries),
| although they don't always follow the version in ports.
| Security-wise there's kern_securelevel / ugidfw. freebsd-update
| also allows for easy upgrades across major OS releases. ZFS on
| root just works on FreeBSD. PF / ipfw to me makes much more sense
| than iptables (I haven't really moved to nftables).
|
| When I compare to Ubuntu, which is the OS I mostly use for Linux
| now:
|
| * KVM is superior to bhyve in every way
| * Automating security updates via apt is better than a
| combination of freebsd-update/pkg updates. Plus the deb packages
| are made by Ubuntu and just work; ports/pkgs are third party on
| FreeBSD
| * Rebootless kernel updates exist for Ubuntu
| * It is easier to find people familiar with Linux right away
|
| Really though the learning curve of freebsd <-> linux is not
| high.
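To illustrate the rc.conf point above: one file can enable daemons, networking, the firewall, and ZFS at boot. A hypothetical fragment (the hostname, the em0 NIC, and the nginx package are placeholders):

```sh
# Hypothetical /etc/rc.conf fragment - one file drives startup
hostname="web1.example.com"
ifconfig_em0="DHCP"       # network config for the em0 NIC
sshd_enable="YES"         # base-system sshd
nginx_enable="YES"        # third-party daemon installed via pkg
pf_enable="YES"           # PF firewall
pf_rules="/etc/pf.conf"
zfs_enable="YES"          # mount ZFS datasets at boot
```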
| smlacy wrote:
| Bikeshedding at its finest!
| themerone wrote:
| The Wireguard debacle scared me off from FreeBSD. It seems they
| put too much trust in committers and don't have a solid enough
| review process.
| loeg wrote:
| That Ars article is so distorted that it should best be seen as
| creative fiction.
| themerone wrote:
| Do you have a link to the real story?
| danachow wrote:
| Who mentioned anything about an Ars article? I read the whole
| debacle on mailing lists and Twitter. That was enough to form
| my own conclusions and I did not come away from that
| impressed with the FreeBSD development environs. It
| definitely flies in the face of the article's claim that
| FreeBSD somehow upholds technical excellence above business
| interests.
| kevans91 wrote:
| It was quite blown out of proportion even outside of the
| published articles.
| Melatonic wrote:
| What was the debacle?
| themerone wrote:
| FreeBSD 13 came very close to shipping with a WireGuard
| implementation with many bugs and vulnerabilities that were
| quickly identified by the creator of the WireGuard protocol
| shortly after he learned about the update.
|
| Attempts were made to fix it, but they eventually decided to
| ship 13.0 without WireGuard.
|
| It was very political because the company that sponsored the
| development had already promised the feature to customers.
| ppg677 wrote:
| I could have sworn I read the same exact thing in 1997.
| m-i-l wrote:
| Not sure about 1997, but definitely in 1998: I've been a Linux
| user since 1995, when I worked for a (then small now big)
| software company. I set up their web site on Slackware Linux,
| building an early content management system, early content
| delivery network etc. But after I left in 1998, one of the
| first things my replacements did was change all the web servers
| to run NetBSD rather than Linux, because they said it had a
| better networking stack.
| xianwen wrote:
| Did you hear what happened to the NetBSD servers afterwards?
| densone wrote:
| First off FreeBSD FTW. I use it everywhere over Linux now for the
| first time in 25 years and couldn't be happier. My only wish is
| that BSD had a better non-CoW file system. Databases and
| blockchains are already CoW, so it does irk me slightly to use
| ZFS for them. That said, I've never had a problem because
| it.
| moonchild wrote:
| > databases
|
| direct i/o?
| csdvrx wrote:
| ZFS performance on RAIDs of NVMe drives is quite bad. If you
| need performance, use XFS over mdadm.
| kazen44 wrote:
| this highly depends on how ZFS is configured.
|
| for instance, what is your ARC configuration in this case? It
| can have a massive impact on performance.
|
| getting ZFS to perform well takes a bit of work, but in my
| opinion performance is on par with most filesystems. (and it
| has a ton of additional features).
| csdvrx wrote:
| No, it doesn't, there's a hard cap. I spent a long time
| trying to replicate the performance I was accustomed to in
| XFS.
|
| L2ARC can improve cached reads, but it's not magical,
| especially not for random reads... or writes. (And yes, I
| know about SLOG, but doing async is faster than improving
| sync.)
|
| And don't get me started on how ZFS doesn't use mirrors to
| improve read speed (unlike mdadm; cf. the difference between
| the o3, n3, and f3 layouts) or how it can't take advantage of
| mixed arrays (e.g. a fast NVMe plus a regular SSD or HDD to
| add redundancy: all the reads should go to the NVMe! The
| writes should go async to the slow media!)
|
| If you don't have a RAID of fast NVMe that are each given
| all the lanes they need, you may not see a difference.
|
| But if you are running bare metal close to 100% of what your
| hardware allows, and have the choice of everything you want
| to buy and deploy, you'll hit these limits very soon.
|
| In the end, I still chose ZFS most of the time, but there
| are some use cases where I think XFS over mdadm is still the
| best choice.
| reincarnate0x14 wrote:
| How, out of curiosity?
|
| I haven't made much use of them, but mirrors and raidzs
| seemed to perform more or less in line with expectations
| (consumer hardware may not have the PCIe lanes really
| available to run multiple fast NVMe devices well).
| csdvrx wrote:
| > How, out of curiosity?
|
| Compare with XFS over mdadm, say in a RAID10 with 3 legs and
| the f3 layout, then cry.
|
| > consumer hardware may not have the PCIe lanes really
| available to run multiple fast NVMe devices well
|
| Trust me, I have all the lanes I need, even if I'll always
| wish I had more :)
| ivoras wrote:
| That's one of the areas where FreeBSD is bad: it's not really
| possible to get info on the current "normal" file system, UFS2.
|
| Its latest version has something called "journaled soft
| updates": it's a metadata-journaled system, i.e. the actual
| data is passed through, and it's non-CoW.
| chalst wrote:
| If your complaint about UFS is the lack of journalling, you
| might be interested in
| https://docs.freebsd.org/en/articles/gjournal-desktop/
| trasz wrote:
| Do not use gjournal though; use the more recent SUJ. (I
| believe it's enabled by default these days.)
| densone wrote:
| My issue is performance. But I've only read about UFS
| performance. So it might be fine?
| gorgoiler wrote:
| Things I actually care about: kernel that supports my hardware,
| ZFS for secure snapshotted data, scriptable tools to manage NICs
| and ppp and VPNs, a fast optimised C++ compiler, the latest
| versions of dynamic language runtimes, a shell, a text editor, a
| terminal multiplexer, a nerdy window manager and an evergreen
| browser.
|
| On that playing field, the integrated nature of FreeBSD is nice
| but it's an asterisk on top of the kernel rather than anything
| approaching what makes up the "system" part of an _Operating
| System_. Almost everything else comes from a third party (and I'm
| fine with that.)
|
| I haven't used FreeBSD as a daily OS for over a decade though.
| What's the new coolness?
| ianai wrote:
| " The system is consistent - kernel and userland are created and
| managed by the same team"
|
| Their first reason is really saying a lot but with few words. For
| one, there's no systemd. The init system is maintained alongside
| the entire rest of the system which adds a lot of consistency.
| The documentation for FreeBSD is also almost always accurate and
| standard. Etc etc
|
| I think you also largely don't need a docker or etc in it since
| jails have been native to the OS for decades. I'd want to do some
| cross comparison first though before committing to that
| statement.
|
| It shouldn't be lost that the licensing is also much friendlier
| to business uses. There's AFAIK no equivalent to RHEL, for that
| matter. This goes both ways, though: how would you hire a
| FreeBSD admin based on their resume without an RHCE-like FreeBSD
| certification program?
|
| Edit - I'll posit that since FreeBSD is smaller, an entity
| wishing to add features to the OS might face either less
| backlash or at least enjoy more visibility from the top
| developers of the OS.
| Linus, for instance, just has a larger list of entities vying for
| his attention on issues and commits.
| 29athrowaway wrote:
| Also remember that Darwin, the kernel used in macOS, is in part
| derived from FreeBSD.
| acdha wrote:
| > Consider systemd - was there really a need for such a system?
| While it brought some advantages, it added some complexity to an
| otherwise extremely simple and functional system. It remains
| divisive to this day, with many asking, "but was it really
| necessary? Did the advantages it brought balance the
| disadvantages?"
|
| This is really telling about the level of analysis done: systemd
| has been the target of a small number of vocal complainers, but
| most working sysadmins only notice it in that they routinely deal
| with tasks which are now a couple of systemd stanzas instead of
| having to cobble together some combination of shell scripts and
| third-party utilities. Confusing noise with numbers is a
| dangerous mistake here because almost nobody sits around randomly
| saying "this works well".
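For what it's worth, the "couple of stanzas" point is concrete: supervision, privilege dropping, resource caps, and filesystem sandboxing - each once a hand-rolled wrapper script - are single directives in a unit file. A hypothetical example (the worker service and its binary path are placeholders):

```ini
# example.service - hypothetical unit replacing a supervisor script
[Unit]
Description=Example worker
After=network-online.target

[Service]
ExecStart=/usr/local/bin/worker
# Drop privileges, respawn on crash, cap memory, sandbox the fs:
User=worker
Restart=on-failure
MemoryMax=512M
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```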
| TurningCanadian wrote:
| "Btrfs is great in its intentions but still not as stable as it
| should be after all these years of development." may have been
| true years ago, but doesn't seem to be anymore.
| csdvrx wrote:
| Give it another 10 years, and we may get a stable alternative
| to ZFS that will live in the kernel tree.
| dralley wrote:
| And maybe it will be bcachefs :)
| traceroute66 wrote:
| How's the old RAID56 problem in BTRFS coming along? ;-)
| TurningCanadian wrote:
| Pick any given feature and a FS may or may not support it
| well. RAID56 did have a major update, but it still has the
| write hole and isn't recommended for production.
|
| I think the point still stands though. There has been lots of
| stabilization work done to BTRFS, and anyone using
| production-recommended features should consider the
| filesystem stable.
| Melatonic wrote:
| I have been forced to work far too much with Citrix NetScaler
| virtual networking appliances, and while I can see how it was
| probably a great product before Citrix purchased it, the number
| of bugs and regular security holes in it is insane. Especially
| for a damn networking appliance!
|
| That being said it also forced me to use FreeBSD a lot more than
| I ever would have otherwise and I have a lot of respect for the
| OS itself. I would not use it everywhere but it has amazing
| latency which makes it obviously great for networking.
| blakesterz wrote:
| "Some time ago we started a complex, continuous and not always
| linear operation, that is to migrate, where possible, most of the
| servers (ours and of our customers) from Linux to FreeBSD."
|
| I don't really disagree with any of the stated reasons, but I
| also didn't see a reason that would make me even consider making
| the move with our servers, or even bother with some small number
| of servers. At least for me, I'd need a bunch of REALLY GOOD
| reasons to consider a move like that. A huge cost savings AND
| some huge time savings in the future might do it.
| johnklos wrote:
| Some people see the bigger picture and recognize that a medium
| amount of work now is better than lots of small amounts of work
| stretched over many years.
|
| Likewise, some people and many businesses see the immediate now
| and aren't always the best at planning for long term, and/or
| are overly optimistic that their pain points will eventually be
| fixed.
| pm90 wrote:
| > Likewise, some people and many businesses see the immediate
| now and aren't always the best at planning for long term,
| and/or are overly optimistic that their pain points will
| eventually be fixed.
|
| But that's the thing. The pain points mentioned in this
| article aren't really that strong to need to ditch the OS and
| move to a new one. These kinds of decisions have huge
| tradeoffs.
|
| One could argue, in fact, that moving to a non-traditional OS
| will make it much harder to hire experts or hand off the
| system to another team in the future.
| washadjeffmad wrote:
| I think of it this way: if you interpret "a rolling stone
| gathers no moss" to mean you've always got to keep pushing
| forward or you'll become obsolete, choose Linux. If you
| venerate moss as proof of the stability granted by doing
| the same thing simply and perfectly, you're probably
| already using FreeBSD.
|
| After stable CentOS was cancelled, I migrated a number of
| our platforms to FreeBSD because it met our needs and I
| enjoyed working on it. No surprises, nothing breaking, and
| most importantly, no drama.
| djbusby wrote:
| What? FreeBSD a non-traditional OS? I must disagree;
| FreeBSD may not be as common as Windows or Linux, but it's
| not like Plan 9 (from outer space) or BeOS.
| toast0 wrote:
| I agree that the stated reasons don't sound very compelling.
| Maybe in aggregate, but not individually.
|
| But they left out one of the bigger reasons, IMHO. FreeBSD
| doesn't tend to churn user (admin) facing interfaces. This
| saves you time, because you still use ifconfig to configure
| interfaces, and you still use netstat to look at network
| statistics, etc; so you don't have to learn a new tool to do
| the same thing but differently every couple of years. Sure,
| there's three firewalls, but they're the same three firewalls
| since forever.
| crazy_hombre wrote:
| ifconfig/netstat were deprecated more than a decade ago;
| that's more than a couple of years, don't you think?
| [deleted]
| comex wrote:
| Deprecated on Linux. But I for one can't consign them to
| the dustbin of my memory, because on my Mac, they are not
| deprecated, while the `ip` command that replaces them on
| Linux does not exist. With this part of macOS being derived
| from FreeBSD, I don't know whether that makes FreeBSD a
| savior or a villain.
|
| Personally I blame all of the major Unix-derived operating
| systems (Linux, macOS, BSDs), as none of them show any
| interest in standardizing any APIs or commands invented
| this millennium. The subset that's common to all of them is
| frozen in time, and is slowly being replaced by new bits
| that aren't. From epoll/kqueue to eBPF, containers/jails to
| seccomp/pledge, DBus/XPC to init systems... from low-level
| network configuration (ifconfig, ip) to high-level network
| configuration (whatever that is this month).
| i386 wrote:
| wait wait they were deprecated? why on earth?
| ori_b wrote:
| On Linux, the last commit on it was about a decade ago.
|
| On FreeBSD, the last commit on it was last week.
|
| They're not the same tool, and FreeBSD didn't abandon their
| core tools, because they're part of the base system.
| Datagenerator wrote:
| The FreeBSD POLA ("Principle Of Least Astonishment") design
| principle makes sure fundamental tools don't disappear or,
| worse, get duplicated over the years. Linux distributions
| differ vastly from vendor to vendor.
| NoSorryCannot wrote:
| Since it was just an example, I don't think refuting this
| particular item will nullify the opinion. The idea, I
| think, is that there are always more pieces in a state of
| deprecation and replacement at any given time in Linux land
| than in FreeBSD land.
| phkahler wrote:
| I think that's just due to the pace of development. The
| BSDs are resource constrained, so they have to pick and
| choose what to work on. That is both a good thing and a
| bad thing. Here the benefit is less churn. On the
| downside, they're just catching the Wayland train
| recently. On the up side, by catching it late they didn't
| suffer a lot of the growing pains.
| raverbashing wrote:
| I agree.
|
| Nothing there seems to deliver explicit customer value when
| switching to FreeBSD.
|
| How will switching help you deliver your service? Or is it just
| a "nice to have thing"?
| linksnapzz wrote:
| Under what circumstances does the choice of backend OS ever
| deliver "explicit customer value", so much so that the
| customer would care about said choice?
| lp0_on_fire wrote:
| Normally the customer doesn't care. They will care,
| however, if something goes wrong during the migration or
| some unforeseen issue comes up later that degrades their
| experience.
| nine_k wrote:
| I'd venture to say that it delivers customer value via two
| avenues: (1) better SLAs, and (2) lower expenses.
|
| If your backend OS starts to run software more efficiently,
| costs less to host, has less maintenance downtime, has fewer
| security incidents, has fewer crashes, etc., then having changed
| to it _has_ produced customer value.
| brimble wrote:
| It is definitely the case that underlying architecture,
| certainly all the way down to the OS, can be the difference
| between "I can create, test, and deploy this feature in a
| week, and it will be rock-solid" and "it'll take months and
| still fail on some edge cases--and fixing those is not
| remotely in our budget".
| scrubs wrote:
| https://news.ycombinator.com/item?id=28584738 not Linux.
| embik wrote:
| The characterisation of systemd in this post really bothers me,
| particularly this:
|
| > 70 binaries just for initialising and logging
|
| It's just not true. Those 70 binaries provide much more
| functionality than an init system: they cover a significant
| portion of system management, including a local DNS resolver,
| network configuration, and system time management. You can dislike
| the fact everything is so tightly integrated (which feels ironic
| given that the post goes on to praise a user space from one
| team), but let's at least be correct about this.
| [deleted]
| betaby wrote:
| TL;DR - for no compelling reasons.
| eatonphil wrote:
| I don't think that's fair. Maybe half of their reasons are more
| on the subjective side but half of them are actual technical
| choices like wanting ufs/zfs and jails.
| ianai wrote:
| It's even made easy to TLDR by their reasons being headlined
| in larger font and bold from the rest of the text.
|
| My question is how well moving their systems will shake out
| longer term.
| betaby wrote:
| zfs on FreeBSD is from the same sources as in Linux. There is
| not a single word in the article about why to choose jails. The
| article is a '90s-style rant, which is not a bad thing.
| zinekeller wrote:
| > zfs on FreeBSD is from the same sources as in Linux.
|
| You should know that due to license incompatibilities (CDDL
| and GPLv2), it's not really as smooth as the BSD
| integration (I wonder how Canonical avoids this issue).
| OpenZFS's codebase is a monorepo (for the most part), but
| the in-kernel integration is vastly different.
| mnd999 wrote:
| AFAIK, Canonical are just hoping nobody sues them.
|
| I use Zfs as my boot file system on FreeBSD and Arch
| Linux and both work great. Linux was a bit of a faff to
| set up though, FreeBSD was easy.
| quags wrote:
| I have found this too. ZFS on root just works, no hacks
| needed to set it up on freebsd.
| trasz wrote:
| The issue isn't any different from using closed source
| NVidia drivers with FreeBSD kernel.
| [deleted]
| densone wrote:
| Jails. They're the main reason I chose to use FreeBSD over
| Linux, and they have only proven to be the better choice for
| what I do on a daily basis.
|
| I guess to each their own but I dislike docker. I think it's
| bloated and over complicated.
| Cloudef wrote:
| Just use bwrap (bubblewrap). Linux has namespaces.
| markstos wrote:
| I ran FreeBSD servers for about a decade. Now all my servers are
| Linux with systemd. I liked FreeBSD then; I'm happy with
| systemd now. I have commits in both.
|
| I'm glad there are some people who use and prefer FreeBSD and
| other init systems now, because diversity in digital ecosystems
| benefits the whole just as diversity in natural ecosystems does.
|
| The shot taken at systemd here was disingenuous, though. The
| author complained about the number of different systemd binaries
| and the lines of source code, but all these tools provide a
| highly consistent "system layer" with standardized conventions
| and high quality documentation-- it's essentially the same
| argument made to support FreeBSD as a large body of kernel and
| userspace code that's maintained in harmony.
| Thaxll wrote:
| FreeBSD is most likely slower than Linux in most scenarios. ZFS
| is supported natively in Linux (Ubuntu), jails are terrible
| compared to Docker, and since Docker is very popular there are
| millions of tools built around it; it's not just for sandboxing,
| it's part of a complete development process. Who cares about the
| boot process on a server, seriously?
|
| "FreeBSD's network stack is (still) superior to Linux's - and,
| often, so is its performance."
|
| This is wrong. If it were the case, most large companies would
| use BSD; at the moment they all use Linux. The only large
| company using BSD is Netflix, because they added some TLS
| offloading in the kernel for their CDN - which could have been
| done in Linux, btw.
|
| IMO don't use tech that is not widely used; you're going to
| reinvent the wheel in a worse way because tool a.b.c is missing.
| tester756 wrote:
| >imo don't use tech that is not widely used, you're going to
| reinvent the wheel in a worse way because tool a.b.c is
| missing.
|
| so kinda windows with wsl v2 is the way to go
| NexRebular wrote:
| Any source on all large companies using Linux instead of e.g.
| Windows?
| tombert wrote:
| I don't have enough experience with FreeBSD (outside of FreeNAS
| seven years ago), but I've never had any success getting it to
| run on a laptop. Every time I've tried installing it on a laptop,
| I get issues: either the WiFi card not working, the 3D
| accelerator not working at all, or the close-the-lid-to-sleep
| functionality not working.
|
| I've been using Linux since I was a teenager, so it's not like I
| am a stranger to fixing driver issues, but it seemed like no
| amount of Googling was enough for me to fix these problems
| (Googling is much harder when you don't have functioning WiFi).
| As a result I've always just stuck with Linux (or macOS semi-
| recently, which I suppose is kind of BSD?).
| GordonS wrote:
| I've had similar issues when trying to run it in Hyper-V or
| VirtualBox - issues with network adapters and disks not being
| recognised. I've tried FreeBSD and DragonFlyBSD, and I give it
| another try every year or so, but always hit the same walls.
| toast0 wrote:
| I've run FreeBSD in Hyper-V and VirtualBox with no issues,
| are you starting from the virtual machine images or the
| installer images? If you haven't tried the virtual machine
| images, they're linked from release notes, and 13.0 is
| available here: https://download.freebsd.org/ftp/releases/VM-
| IMAGES/13.0-REL...
|
| I set up Hyper-V for the first time last summer, and I seem
| to recall it just working.
| GordonS wrote:
| I was installing from ISOs. Didn't realise there were
| prebuilt VM images, so thanks for that.
| lordgroff wrote:
| I've recently switched my home server to FreeBSD after it was
| on Debian for who even knows how long (Debian is still
| virtualized on bhyve for some tasks).
|
| My take: I love FreeBSD as a server OS. It's really, really
| well designed. Spend a bit of time with the handbook and after
| a while it's really simple to hack on. I really like the
| separation of the base OS, I like the init, I like jails, and
| the documentation puts every Linux distribution to SHAME.
|
| But on a laptop?... Unless you scour for exactly the parts that
| will work (esp. for WiFi), you're going to have a bad time, even
| on old equipment. At the same time as I rebooted my server, I
| had an old ThinkPad I decided I wanted to put FreeBSD on for the
| hell of it. I gave up on it. It may have been possible, but it
| was just too much work. In this day and age, when almost any
| Linux desktop distro boots without a hitch on hardware that's
| not completely brand spanking new, it was just not worth it.
| tombert wrote:
| That was basically my experience. I had tried multiple times
| on old laptops, thinking that maybe someone had developed a
| better driver by this point, and it just didn't happen.
|
| Usually when this happens in Linux, a few hours of Googling,
| swearing, and retrying is enough to fix my problems, and I'm
| sure that with enough time that approach would have worked in
| FreeBSD as well, but I always grew impatient.
|
| > almost any Linux desktop distro boots without a hitch on
| hardware that's not completely brand spanking new
|
| Can't speak for anyone else, but even for brand spanking new
| hardware, as long as I've stuck with AMD drivers, nowadays
| Linux "Just Works" when I boot it. Obviously YMMV between
| systems, but I feel like Linux has finally become competitive
| with Windows and macOS from a usability standpoint.
| Melatonic wrote:
| I would probably just run a bare-metal hypervisor on the
| laptop and then make FreeBSD a VM, but you might still run
| into a lot of hardware issues.
| mrtweetyhack wrote:
| kitsunesoba wrote:
| I haven't actually verified it for myself, but I've read
| several times that for BSD on laptops, OpenBSD is generally a
| better experience, supposedly because the OpenBSD devs heavily
| dogfood (using OpenBSD to develop OpenBSD) whereas FreeBSD devs
| tend to use other operating systems (predominantly macOS,
| apparently) on their laptops/desktops.
| tombert wrote:
| I have heard that too, and at some point I might try it out.
|
| Honestly I just wish there was a way to run ZFS as a core
| partition on macOS. APFS is pretty good, but a full on ZFS
| thing is really cool, and makes FreeBSD so much more
| appealing as a result.
| philkrylov wrote:
| Have you seen https://openzfsonosx.org/wiki/ZFS_on_Boot ?
| tombert wrote:
| I have not... if I weren't afraid of losing all my stuff
| I'd do it tonight...
|
| Maybe I'll just back up to S3 or something. You have
| piqued my interest.
| passthejoe wrote:
| I agree.
| koeng wrote:
| I use OpenBSD on my laptop. Honestly it was a better
| experience than even Arch Linux or the like. More things
| worked out of box (ran on thinkpad) and there is love put
| into some simple commands - zzz and ZZZ alias built-ins are
| actually super useful and you can really tell the people
| building the OS use em themselves.
| sandworm101 wrote:
| >> lid-close go to sleep functionality
|
| I've read hundreds of posts surrounding this issue. I've used
| laptops for decades, but I don't think I have ever used this
| feature. It doesn't even take a second to hit a couple keys and
| sleep a laptop. The only feature lower on my priority list
| would be syncing the RGB keyboard to soundcloud. But that's
| just me. Evidently the lid-close-sleep thing is of vital import
| to millions of laptop users. It's one of those things where I
| just shake my head in bewilderment.
| jodrellblank wrote:
| Maybe get out of your rut and try it before mocking everyone
| who uses it and posting long rambling comments about how you
| "don't understand" but still think you have enough
| understanding to judge "millions" of people as beneath you?
| trasz wrote:
| I use it all the time; it boils down to a single line in
| sysctl.conf:
|
| hw.acpi.lid_switch_state="S3"
| Melatonic wrote:
| One of the features I always actually make sure to turn off -
| I am in the same boat and there are many times when I want to
| close the lid of my laptop but still have some stuff chugging
| along. Saves a lot of battery that way!
| tombert wrote:
| I mean, sure, I'm an engineer, I'm sure I can figure out a
| workaround, but I _like_ the feature of going to sleep on lid
| close. It 's a feature I use, and I couldn't get it working
| in FreeBSD.
| brimble wrote:
| Having computers do things automatically is kinda the whole
| point of what we do.
|
| > Evidently the lid-close-sleep thing is of vital import to
| millions of laptop users.
|
| It's one of _many_ little things that take a laptop from
| feeling like some weird, barely-functioning misfit of a toy to
| an actual portable tool that you don't have to worry about
| or baby.
| lazyant wrote:
| FreeBSD is a nicer, more logical Unix than Linux in general. Now,
| as soon as you have a package or hardware that you want to use
| and it's not supported by FreeBSD, let us know how that goes.
| edgyquant wrote:
| It may be a more "logical unix" but most engineers (including
| me) don't care about that. I started using Linux in the mid
| 2000s and had never heard of unix at the time and only
| discovered it by reading about Linux.
| AnonHP wrote:
| I have a tangential question on this part:
|
| > I sometimes experienced severe system slowdowns due to high
| I/O, even if the data to be processed was not read/write
| dependent. On FreeBSD this does not happen, and if something is
| blocking, it blocks THAT operation, not the rest of the system.
|
| I've seen this for a long time in Windows, where any prolonged
| I/O brings the entire system down to its knees. But it also seems
| to affect macOS (which is based on FreeBSD) as a system, though
| it's not as bad as on Windows. Has Windows improved on this over
| the years? I'm unable to tell.
| detaro wrote:
| > _macOS (which is based on FreeBSD)_
|
| That's somewhat overstated, so I wouldn't draw such
| conclusions. (https://wiki.freebsd.org/Myths )
| bastardoperator wrote:
| The funny part about this is I actually end up installing a
| lot of the GNU tools so I can have some level of parity when
| writing (mostly shell) code on MacOS.
| toast0 wrote:
    | Adding onto this: the parts of the kernel that came from
    | FreeBSD were taken two decades ago, without much (if any)
    | attempt to follow up and rebase. I don't know about the disk
    | I/O subsystem, or even whether it was taken from FreeBSD, but
    | the 2000-era FreeBSD TCP stack was scalable for 2000-era
    | machines (although it would have been nice if Apple had taken
    | it after syncache/syncookies landed); it needs changes from
    | the 2010s to work well with modern machines. I'm sure similar
    | improvements have happened for disk I/O, but I just don't
    | know the details. Not a lot of people would run a FreeBSD 4.x
    | kernel today, but that's effectively what's happening with
    | the FreeBSD-derived kernel bits in macOS.
| yehNonsense0 wrote:
    | Either the disk is the bottleneck, or the GUI process is
    | taking a back seat while the OS prioritizes the "real work"
    | requested.
|
| Computers get faster, we throw more at them.
|
| Physics is still the law of the land.
|
| Truh truh truh trade offfss; in computer eng; truh truh truh
| trade offs _sorry Bowie_
| philliphaydon wrote:
| I have no idea how operating systems work, but at a wild guess.
  | If you're running the OS off the same disk that's thrashing
  | the I/O, and it's queued too many read/write operations to
  | handle the reads for the OS? And maybe FreeBSD is just so
  | small it effectively caches itself into memory and doesn't
  | need the I/O, so it appears to function still?
|
| Curious to know the reason.
| philliphaydon wrote:
| I guess based on the downvotes I'm wrong but they also don't
| know?
|
    | Reason I want to know is when I tried running Ubuntu off a
    | spinning disk and wanted to move some games to the drive, it
    | would transfer about 1 GB then lock up. Didn't happen with an
    | SSD.
| loeg wrote:
| Yeah, in general wild speculation (especially if it happens
| to be incorrect, as it is in this case) is discouraged on
| this forum.
| bopbeepboop wrote:
| formerly_proven wrote:
| Windows seems to rely on disk accesses in a lot of critical
| processes which is why it tends to have GUI lockups and
| slowdowns under I/O load. Even opening task manager, or just
| switching tabs in it, while the system is loaded can take a few
| seconds (~dozens to hundreds of billions of cycles).
| marcodiego wrote:
| Linux took over many markets. HPC, for example, has been 100%
| Linux in the TOP500 for a few years already. Monopoly by FLOSS is
| still monopoly. Healthy competition is good for users and forces
| options to improve; see LLVM vs GCC.
|
| To sum up: healthy FLOSS competition is welcome and needed.
| pjmlp wrote:
  | Agreed, if UNIX as a concept is ever to evolve, it cannot be
  | bound to the UNIX === Linux equation that many now seem to
  | take for granted.
| marcodiego wrote:
| Hi pjmlp!
|
| This may not be the best place, but I have to. I'd like to
| tell you that, although we've had some disagreements on HN, I
| carry no bad feelings and, for as much as possible, have
| extreme respect for you and your opinions.
|
| I'm glad our previous disagreements don't prevent us from
| posting when we do agree.
| pjmlp wrote:
      | Sure, most of the stuff I kind of rant about tends to be
      | founded on experience; you will seldom see me ranting
      | about stuff I never worked with on a daily basis, and it
      | always goes both ways, just like coins have two sides.
      |
      | So it's ok to agree to disagree. :)
| Shared404 wrote:
    | I think it's somewhat too late for UNIX to evolve in
    | general.
    |
    | There are too many decades of cruft and backwards
    | compatibility built up. AFAICT, most interesting new OSes
    | being built right now are similar to UNIX, but very
    | explicitly not UNIX.
| trenchgun wrote:
| GNU is not UNIX, though.
| marcodiego wrote:
| I don't know about UNIX per se, but consider Linux and
| MacOS progress in the last two decades. MacOS showed that
| it is possible for UNIX to be successful on the desktop.
| During the same period, Linux scaled from embedded
| computers and smartphones to supercomputers and servers.
|
      | In terms of innovations, I'd bet macOS has evolved too.
      | Although "logical partition"-like solutions had already
      | been known for some time, Linux made them widespread
      | through containers; io_uring allows high-throughput,
      | syscall-less, zero-copy data transfer, and futex2 allows
      | implementing the NT synchronization semantics that are
      | very common in game development. And all that is ignoring
      | just how much the desktop changed!
|
| The UNIX children are definitely not sitting still.
| Shared404 wrote:
| Good points all around, I stand corrected.
| Melatonic wrote:
| I have always wondered what the world might be like if
| Apple had also focused big time on developing a server
| version of MacOS. I am very much not a fan of Apple as a
| company in general but I have always liked OSX quite a
| bit.
|
| Then again they would probably charge 10x as much for a
| 2U rackmount server where the main difference was a nice
| stainless steel front fascia...
| pjmlp wrote:
          | They had two goes at it, A/UX and OS X Server, and
          | ironically they do need servers for iCloud.
| kaladin-jasnah wrote:
| What does iCloud run on?
| astrange wrote:
| macOS's best POSIX-level innovations are probably
| sandbox, xpc/launchd, and libdispatch. These have been
| copied elsewhere as Capsicum, systemd, and libuv (TBB?),
| but the originals are more consistently used.
| pjmlp wrote:
      | To some extent you are right; however, as POSIX actually
      | won the server room, it will stay around for decades to
      | come, even when it is not fully exposed, as you mention.
| Bayart wrote:
| >There is controversy about Docker not running on FreeBSD but I
| believe (like many others) that FreeBSD has a more powerful tool.
| Jails are older and more mature - and by far - than any
| containerization solution on Linux.
|
| If FreeBSD jails and Solaris zones were equivalent to Linux
| containers, we'd have seen them take over the backend already. We
| haven't. They're really useful, and they provide a degree of
| safety and peace of mind for multi-tenancy, but they're not
| granular enough for what's done with $CONTAINER_RUNTIME these
| days.
|
| Jerome Petazzoni has an old talk where he touches upon container
| primitives and compares them to jails:
| https://www.youtube.com/watch?v=sK5i-N34im8
| area51org wrote:
| Jails are not a replacement for containers.
| yjftsjthsd-h wrote:
| I think the problem is that docker is an excellent frontend,
| and zones and jails are excellent backends. People who say
| jails are better are probably right but they're missing the
| point, because they're not really solving the same problem;
| until I can use jails to create a container image, push it to a
| registry, and pull it from that registry and run it on a dozen
| servers - and do each of those steps in a single trivial
| command - jails are not useful for the thing that people care
| about docker for.
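For readers unfamiliar with the jail side of this comparison, here is a minimal sketch (all names, paths, and addresses are illustrative): a jail is declared in /etc/jail.conf and driven by the ordinary service machinery. Note that nothing in this flow corresponds to building, pushing, or pulling an image, which is the gap described above.

```shell
# /etc/jail.conf -- a minimal declarative jail definition (illustrative)
web {
    path = "/usr/local/jails/web";          # extracted base system lives here
    host.hostname = "web.example.org";
    ip4.addr = "192.0.2.10";                # documentation-range address
    exec.start = "/bin/sh /etc/rc";         # boot the jail's own rc system
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}

# Managed like any other FreeBSD service:
#   sysrc jail_enable=YES
#   service jail start web
#   jls                # list running jails
#   jexec web sh       # get a shell inside the jail
```

The declaration and lifecycle are arguably simpler than a Dockerfile plus runtime, but distribution of the filesystem under `path` is left entirely to the administrator.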
| [deleted]
| oneplane wrote:
| While I get the author's reasoning, it makes me wonder at what
| scale, and with what portability, automation, and disposability,
| all of this is done.
|
| Even if an OS is 'better', a VM with a short lifetime will
| generally be 'good enough' very quickly. If you add a very large
| ecosystem and lots of support (both open source and community as
| well as commercial support) and existing knowledge, FreeBSD
| doesn't immediately come to mind as a great option.
|
| If I were to go for an 'appliance' style system, that's where I
| would likely consider FreeBSD at some point, especially with ZFS
| snapshots and (for me) the reliable and fast BTX loader. Pumping
| out BSD images isn't hard (great distro tools!) and complete
| system updates (thanks to the mentioned "one team does the whole
| release") are a breeze as well. This is of course something we
| can do with systemd and things like debootstrap too, but from an
| OS-image-as-deployable perspective this will do just fine.
| idoubtit wrote:
| The reasons are, for a large part, not on the technical side. I
| was surprised, because this is a lot of work for little visible
| gain. Here are the reasons, slightly abbreviated:
|
| > The whole system is managed by the same team
|
| Mostly philosophical.
|
| > FreeBSD development is less driven by commercial interests.
|
| Mostly philosophical.
|
| > Linux has Docker but FreeBSD has jails!
|
| IMO, this comparison is a mistake. In the Linux world, systemd's
| nspawn is very similar to jails. It's a glorified chroot, with
| security and resource management. All the systemd tools work
| seamlessly with nspawn machines (e.g. `systemctl status`).
| Containers a la Docker are a different thing.
|
| BTW, I thought the last sentence about security issues with
| Docker images was strange. If you care about unmaintained images,
| build them yourself. On the other hand, the FreeBSD official
| documentation about jails has a big warning that starts with
| "Important: the official Jails are a powerful tool, but they are
| not a security panacea."
|
| > Linux has no official support for zfs and such
|
| Fair point, though I've heard about production systems with zfs
| on Linux.
|
| > The FreeBSD boot procedure is better than grub.
|
| YMMV
|
| > FreeBSD's network is more performant.
|
| Is there some conclusive recent benchmark about this? The post
| uses a 2014 post about ipv6 at Facebook, which I think is far
| from definitive today. Especially since it "forgot" to mention
| that Facebook intended to enhance the "Linux kernel network
| stack to rival or exceed that of FreeBSD." Did they succeed over
| these 8 years?
|
| > Straightforward system performance analysis
|
| The point is not about the quality of the tools, but the way each
| distribution packages them. Seems very very low impact to me.
|
| > FreeBSD's Bhyve against Linux's KVM
|
| The author reluctantly admits that KVM is more mature.
| boring_twenties wrote:
| The systemd-nspawn man page includes the following:
|
| > Like all other systemd-nspawn features, this is not a
| security feature and provides protection against accidental
| destructive operations only
|
| Doesn't seem very similar to jails to me.
| LeonenTheDK wrote:
| I have essentially the same take. The sysadmin at my company
| prefers FreeBSD for all these reasons (as such that's what
| we're running), and he's engaged me a tonne about FreeBSD but
| all I see is an operating system that's just as good as the
| other mainstream server Linux distributions. Except now we've
| got a system that's more difficult to hire for. "Any competent
| admin can learn it easily" is something I've been told but how
| many will want to when they could easily go their whole career
| without encountering it again?
|
| I like your point about Docker vs Jails, I haven't seen it
| discussed like that before. I keep hearing Jails are more
| secure than anything else, I'll have to read more into it.
|
| As far as the networking goes, I haven't seen any recent
| benchmarks to substantiate those claims either. However,
| considering Netflix uses FreeBSD on their edge nodes and has
| put a lot of work into the upstream to improve networking
| (among other things), it wouldn't surprise me if it's
| technically superior to the Linux stack. Clearly though Linux's
| networking isn't an issue for most organizations.
|
| And regarding ZFS, ZFS on Linux and FreeBSD's ZFS
| implementation are now one and the same. It would be nice to
  | see some of the big distributions (or even the Linux kernel)
| integrate it more directly. This is probably a solid point in
| favor of FreeBSD, but it's not like it doesn't work in Linux.
| I'm not a systems guy, so I'm probably out of the loop on this,
| but Proxmox is the only distribution I've seen with ZFS as a
| filesystem out of the box, but I don't know how much production
| use Proxmox sees. I only run it on my home server.
|
| All that to basically say, I like FreeBSD conceptually. I'm
| just still not convinced that it's doing enough things better
| to warrant using it over a common Linux distribution for
| general computing purposes.
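On the "one and the same" point above: since OpenZFS 2.0, the FreeBSD and Linux ports share a single code base, so day-to-day administration is identical on both; only device naming differs. A small sketch (pool, dataset, and device names are illustrative, and these commands require root and real disks):

```shell
# Create a mirrored pool (FreeBSD-style device names shown;
# a Linux host would use e.g. /dev/sda /dev/sdb instead).
zpool create tank mirror /dev/ada0 /dev/ada1

# Datasets, snapshots, and rollback work the same on either OS.
zfs create -o compression=lz4 tank/data
zfs snapshot tank/data@pre-upgrade
zfs rollback tank/data@pre-upgrade
zfs list -t snapshot
```

The practical difference is packaging: on FreeBSD this is in the base system, while on Linux the `zfs` kernel module and userland come from a separate (out-of-tree) package.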
| Melatonic wrote:
| I have mostly only seen FreeBSD used for virtual networking
| devices so I think if that is your speciality (maybe someone
| who is a devops / networking role - is there a name for
| that?) you probably encounter it quite a bit. I do not think
| I would bother learning it or using it for much outside of
| networking just because it would make hiring competent people
| harder. Many times you also need to think about what may not
| be the very best solution from a purely technical perspective
| but also what is the best from both a people and technical
| perspective.
|
    | One of the things that annoys me to no end (and I am not
    | sure if this is something specific to Netscalers or FreeBSD
    | networking appliances in general) is that the damn VMs are
    | the only thing I run that does not properly report memory and
    | CPU usage to VMware. From the hypervisor perspective they are
    | constantly running at 100% CPU (their support has told me
    | that the system reserves the full amount of CPU given but is
    | not actually running that high, and when you actually
    | connect to the box, it self-reports the numbers correctly).
    | Not a huge deal, but it annoys me to have that little red
    | "X" in the GUI next to the VM name all the time.
|
| That being said I also have not seen any recent FreeBSD vs
| Linux benchmarks but I would imagine if a company as large as
| Netflix is already using it en masse then there would need to
| be not just parity but tangible benefits to swap it all out
| for some other linux distro. An org starting from scratch of
| course would be a different beast.
| asveikau wrote:
| I'm seeing a lot about how hiring competent people would be
| harder. But I think competence is independent of this, so
| perhaps what you're saying is Linux use masks incompetence?
| I would really question the chops of somebody who says they
| can work with Linux but not FreeBSD. People who really know
| their stuff on Linux would be able to figure it out.
|
| And if you never see FreeBSD again... So what? The stuff
| you are building on top is probably more relevant than
| questions like this.
| LeonenTheDK wrote:
| Solid point about networking. No one really seems to
| dispute its presence there and rightly so it seems (again,
| I'm a programmer not an admin so my personal experience is
| limited).
|
| > Many times you also need to think about what may not be
| the very best solution from a purely technical perspective
| but also what is the best from both a people and technical
| perspective.
|
| This is huge when it comes to running the company's
| software imho. It's useless having the perfect solution if
| you can't get anyone to run it when "good enough" will have
| a plethora of folks ready and willing. Especially when it
| comes to managing the bus factor. When I asked the admin at
| my org about that, he explicitly said he didn't care about
| it (that's more an indictment of him than FreeBSD though).
|
| Overall the human factor is something I've been trying to
| embrace more lately when it comes to work. For personal
| stuff though I'll be as esoteric as I like.
|
| > the damn VM's are the only thing I run that does not
| properly report memory and CPU usage to VMware
|
| Well that's just plain frustrating.
| ashtonkem wrote:
| > However, considering Netflix uses FreeBSD on their edge
| nodes and has put a lot of work into the upstream to improve
| networking (among other things), it wouldn't surprise me if
| it's technically superior to the Linux stack.
|
| Possible. But it's also possible that the founder of the team
| was a BSD person. You occasionally get cases where the
| preferences of one key person affects things in the long run.
    | At this point I'm sure that they're baked in, because
    | removing all that code would not be worth the effort now, so
    | even if it was better when Netflix went online, that doesn't
    | guarantee that it's still true today.
|
| That being said, Linux is also used in a ton of places,
| including high network jobs. It would take a decent sized
| amount of evidence to convince me that all of those other
| places were wrong and basically only Netflix was right to use
| BSD in network heavy cases.
|
| My suspicion is that either:
|
    | 1. The difference is minimal to negligible, and not enough to
    | justify mixed-OS development
|
| or
|
| 2. The difference is significant, but you have to be pushing
| your network much harder than most teams ever do for it to
| show up.
| area51org wrote:
| Exactly. Personal preference is all fine and good, and he's got
| a right to his own opinions, absolutely.
|
| But to imply that these are empirical differences between
| FreeBSD and Linux, well, that's nonsense.
| area51org wrote:
| All the "advantages" of FreeBSD are really just personal
| preferences, and little more. E.g., FreeBSD jails are not a
| replacement for containerization in any way. The FreeBSD network
| stack is better? I'll bet you can talk to a Linux kernel expert
| who will explain why exactly the opposite is true. And things
| being "simpler" in *BSD? Simpler is not always better. SystemD
| may be somewhat over-engineered, but it's also powerful as hell
| and can do things the old rc.X system couldn't dream of doing.
|
| There's nothing wrong with switching to another OS, but implying
| it's because the other OS is somehow empirically "better" is
| misguided.
| redm wrote:
| I used FreeBSD for many years (on servers) between 2001-2009. I
| also used it as a personal machine in the 90's. We used it for
| stability, at which it did well. The real problem was that
| everything was moving to Linux. The Linux kernel and community
| kept up with bleeding-edge hardware and software. Stability of
| Linux continued to improve, and most people stopped compiling
| custom kernels anyway. I used to compile most user-space software
| too, and almost never do now. That largely negated the FreeBSD
| benefits.
| mbreese wrote:
| I ran a FreeBSD ZFS NFS server for a cluster for quite a while. I
| loved it. It was simple and stable. The thing that led me away
| from FreeBSD (aside from IT not being happy with an "alternative"
| OS), was that I needed a clustered filesystem. We outgrew the
| stage where I was comfortable with a single node and where
| upgrading storage meant a new JBOD.
|
| Are there any FreeBSD-centric answers to Ceph or Gluster or
| Lustre or BeeGFS?
| mikehotel wrote:
| The handbook and wiki has info on FreeBSD Highly Available
| Storage:
|
| https://docs.freebsd.org/en/books/handbook/disks/#disks-hast
|
| https://wiki.freebsd.org/HAST
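For context, HAST replicates a block device between exactly two nodes rather than providing a clustered filesystem, so it only partially answers the Ceph/Gluster question. A handbook-style /etc/hast.conf is small (hostnames, device, and addresses here are illustrative):

```shell
# /etc/hast.conf -- one replicated resource across two nodes (illustrative)
resource shared {
        on hasta {
                local /dev/ada1
                remote 172.16.0.2
        }
        on hastb {
                local /dev/ada1
                remote 172.16.0.1
        }
}
# With hastd running on both nodes, the primary sees the mirrored
# device as /dev/hast/shared, usable as a UFS device or ZFS vdev.
```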
| arminiusreturns wrote:
| I haven't checked on it in a while but dragonfly has HAMMER2,
| the last docs
| https://gitweb.dragonflybsd.org/dragonfly.git/blob/57614c517...
|
| Might be a future alternative in the space.
| loeg wrote:
| HAMMER2 is not yet clustered, as far as I know. They're still
| working on single-node functionality (I think).
|
| https://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEA.
    | .. was last updated in 2018, and at the time much of the
    | clustering logic was described as "under development" or
    | "not specced."
| diekhans wrote:
| Ceph is supported on FreeBSD
|
| https://www.freshports.org/net/ceph14/
| adamcstephens wrote:
| Ceph 14 was EOL on 2021-06-30. There are two major releases
| past v14 that don't seem to be available through freshports.
| I'm not sure I'd qualify this as supported, and definitely
| not actively supported.
|
| https://docs.ceph.com/en/latest/releases/#active-releases
| [deleted]
| prakhunov wrote:
| This changed a couple of years ago but GlusterFS does run on
| FreeBSD now.
|
| It's not BSD centric but it works.
|
| https://www.freshports.org/net/glusterfs/
| johnklos wrote:
| This articulates most of my frustrations with the Linux world.
|
| Some of the distros are very good, but some of us who have work
| to do cringe at the thought of bringing up newer versions of an
| OS just to check all the things that've broken and changed
| needlessly.
| briffle wrote:
| I hear people always complaining about 'needless' changes like
| systemd over init. But I sure like how my modern database
| servers reboot in less than 30 seconds, vs 5-12 minutes when
| they were running RHEL 5. (yes, some of that is because we
| moved from BIOS to UEFI)
| mnd999 wrote:
| I wouldn't consider reboot time to be a massive selling point
| for a database server though. Surely you don't want to be
| doing that very often?
| regularfry wrote:
| On the other hand, when you do, you want it back up _now_.
| kazen44 wrote:
        | actually, I want it back up correctly.
        |
        | For instance, the system had better check all memory
        | banks and disks before it finishes booting.
        |
        | Having a system boot slower is an annoyance when
        | downtime occurs, but having a system boot incorrectly is
        | far worse.
| 1500100900 wrote:
| If it's something like a galera cluster then not really,
| I think. You want the updated machine to become
| operational again in a reasonable amount of time, because
| during the time it's down you have fewer points of
| failure. But the difference between 2 minutes and 5
| minutes in this case is not a big deal, in my opinion.
| mnd999 wrote:
| Yeah, maybe you're right. I actually had this
| conversation with my PM the other day - I work on a
| fairly well known database. They were asking us to
| improve the startup time of the product.
| brimble wrote:
| > 5-12 minutes
|
| I've run a lot of Linux machines on a lot of hardware for a
| lot of years and have no idea what could have been going on
| that would cause a 5+ minute boot, that was also unnecessary
| enough that a different BIOS could get it down to 30s. Back
| in the bad old days of circa ~2000 when I ran Linux on a
| variety of garage-sale potatoes, I don't think any took 5
| minutes to boot.
| lordgroff wrote:
| Sure (although 5-12 minutes is very odd no matter how you
| slice it), but that's a bit of a false choice. I have nothing
| against systemd personally, but I recognize more and more
| that a good part of the reason is because I never had to use
| it in anger.
|
| There were other choices that Linux could have taken rather
| than sysvinit vs init.d and some distros through the years
| have taken different approaches. FreeBSD's rc.d is not
| sysvinit either.
| throwawaysysd wrote:
| >There were other choices that Linux could have taken
| rather than sysvinit vs init.d
|
| I don't think that is true. If you look at those other
| distros you'll mostly find that it either didn't pan out,
| or it's quickly approaching the same design and
| architecture of systemd, where distros start using
      | declarative programming to integrate tooling around common
| workflows. This is inevitable when you consider the
| constraints on the system as a whole. Other related service
| management tools like Docker are under the same constraints
| and you'll notice those have a similar architecture too.
| throwawayboise wrote:
| The Linux machines that I have managed that took anywhere
| close to that long to boot (older IBM and HP) just have a
| very lengthy BIOS/POST start-up process. Once you got
| through that, the OS (RHEL 5 at the time) booted up pretty
| quickly.
| zinekeller wrote:
    | Hmm, why do I feel that a) the older servers verify memory
    | banks fully before boot (which takes up minutes) and b) the
    | change to solid-state media has the biggest impact, not
    | systemd. Can you confirm that at least a) is not the reason
    | for the slow boot times?
| antoinealb wrote:
| I know that anecdote is not data, but I had an Arch Linux
| laptop when the distribution moved from Init scripts (SysV
| init ?) to systemd, and the boot time was easily cut in
| three, going from 30s to 10s, on exactly the same laptop.
| Of course switching from HDD to SSD was a huge improvement,
| but don't discount what systemd's efficient boot
| parallelism was able to achieve.
| trasz wrote:
        | It's worth keeping in mind that the old Linux rc scripts
        | were quite mediocre compared to their BSD counterparts.
| They didn't even have a dependency mechanism. So, it's
| fine to compare sysv Linux scripts to systemd, but please
| don't extrapolate that to other systems.
| gtsop wrote:
| It feels like the title is wrong. Instead of saying "Linux is bad
| because I encountered X problem in production, which would have
| been prevented by BSD" the author goes on to list why BSD is
| better in general outside his specific use case.
|
| Nothing wrong with the comparison probably, but I got the
| impression the author just really wanted to do the migration and
| found some reasons to do so, without actually needing it. Nothing
| wrong with that either. It's just the expectations set by the
| title that are off.
| tomxor wrote:
  | I had a similar feeling; throughout reading the post I wanted
  | to know what the specific issues were that made BSD better
  | suited, but it's all too abstract.
|
  | I think many of us (including me) have a tendency to try to
  | quickly generalise our experiences, even when it's not
  | appropriate, and when we go on to explain things to others
  | without the original context it can sound too abstract or come
  | across as evangelical. Either way, it loses meaning without
  | real examples.
| eatonphil wrote:
| Neither the HN title nor the blog title is saying Linux is bad
| though. The title seems pretty in line with the article to me.
| gtsop wrote:
    | The blog title says "why we are migrating...". I didn't get
    | any of that. I got "why I like BSD, and thus I am migrating."
| acatton wrote:
| Funny enough, I decided to play with FreeBSD for personal
| projects in 2020. I gave up and I am reverting all my servers to
| Linux in 2022, for the opposite of the reasons mentioned in this
| article.
|
| * Lack of systemd. Managing services through shell scripts is
| outdated to me. It feels very hacky: there is no way to specify
| dependencies or auto-restarts in case of crashes. Many FreeBSD
| devs praise launchd; well... systemd is a clone of launchd.
|
| * FreeBSD jails are sub-optimal compared to systemd-nspawn. There
| are tons of tools to create FreeBSD jails (manually, ezjail,
| bastillebsd, etc...); half of them are deprecated. In the end all
| of your jails end up on the same loopback interface, making them
| hard to firewall. I couldn't find a way to have one network
| interface per jail. With Linux, debootstrap + machinectl and
| you're good to go.
|
| * Lack of security modules (such as SELinux) -- Edit: I should
| have written "Lack of _good_ security module "
|
| * nftables is way easier to grasp than pf, and as fast as pf, and
| has atomic reloads.
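The "debootstrap + machinectl" flow mentioned above looks roughly like this on a Debian-family host (the suite, path, and machine name are illustrative, and the commands require root):

```shell
# Build a minimal root filesystem for the container
debootstrap stable /var/lib/machines/web

# First boot, interactively, to set a root password etc.
systemd-nspawn -D /var/lib/machines/web --machine=web --boot

# Day-to-day management via machinectl, which wraps
# systemd-nspawn@web.service; that unit gives the machine its
# own veth network interface by default, so it can be firewalled
# separately rather than sharing loopback.
machinectl start web
machinectl shell web
machinectl list
```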
| lordgroff wrote:
  | There's a lot to unpack here. For example, there are certainly
  | other ways to network jails, and the three tools you've
  | mentioned for maintaining jails are not deprecated.
|
| Security modules do exist, they're different from Linux. Are
| you sure you're not just expecting FreeBSD-as-Linux?
|
| As for init... What can I say, I've never been anti-systemd,
| not even remotely, but rc.d is much nicer than sysvinit, and I
| find it much simpler to understand than systemd. In fact, I
| think rc.d is an example of how Linux could have alternatively
| migrated from sysvinit without pissing some people off.
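For a sense of what rc.d looks like in practice, here is a minimal script in the standard style (the service name and binary path are invented for illustration). rc.subr supplies the start/stop/status/restart plumbing, and the PROVIDE/REQUIRE lines give rcorder its dependency information:

```shell
#!/bin/sh

# PROVIDE: exampled
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name=exampled
rcvar=exampled_enable
command="/usr/local/sbin/${name}"   # hypothetical daemon binary
pidfile="/var/run/${name}.pid"

load_rc_config $name
: ${exampled_enable:=NO}            # enabled via exampled_enable=YES in rc.conf

run_rc_command "$1"
```

Unlike classic sysvinit scripts, the dependency ordering and the standard subcommands come for free from the framework rather than being reimplemented in every script.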
| acatton wrote:
| > Security modules do exist, they're different from Linux.
| Are you sure you're not just expecting FreeBSD-as-Linux?
|
| You're right. My original message was wrong, I edited it,
| while keeping the original content. What I meant is "good
| security module".
|
| SELinux on CentOS is "enabled by default and forget about
| it", unless you do something weird. MAC (= Mandatory Access
| Control) on FreeBSD requires much more configuration. They
| have some cool stuff like memory limits, but it's not as
| powerful as SELinux.
| zie wrote:
| The security posture is quite different, so it's not as
| easy as just oh, turn on some magic module and be done.
| Security requires work, regardless of OS.
|
| A fair bit is included in the base system already, see
| capsicum for example. Also, see HardenedBSD, which is
| arguably better than anything Linux has built-in.
| YATA0 wrote:
| >which is arguably better than anything Linux has built-
| in.
|
| No it isn't. There's a reason why government and military
| servers run hardened Linux with SELinux, and not any of
| the BSDs.
| trasz wrote:
| "Government and military servers" tend to run Windows ;-)
| SELinux looks nice on paper - another box to check - but
| it's just another mitigation later, not something that
| can be considered "trusted".
| YATA0 wrote:
| Microsoft Server systems for government use have been
| audited and have strict controls for implementation,
| hardening, securing, etc.
|
| They're probably more secure than BSD.
|
            | >SELinux looks nice on paper - another box to check -
            | but it's just another mitigation layer, not something
            | that can be considered "trusted".
|
| This is fractally wrong.
| zie wrote:
          | I said built-in, you seem to have missed that part.
          | SELinux is not built-in (though it is for certain
          | distributions of Linux).
|
| Security is hard to define, let alone prove. Everyone has
| a very different definition of security. So first one has
| to ask, secure from what?
|
| I imagine most of the reason around BSD not on the
| official list(s) is because it's not as popular. I mean
| GenodeOS[0] is arguably one of the most secure OS's
| around these days, but I doubt you can find any public
| Govt support(by any govt) for running it in production
| today.
|
| Going back to my original comment, security is
| complicated, and there is no "secure", but hopefully for
| a given set of security threats, there is a "secure
| enough".
|
| The same exists in physical security. Our home door locks
| are notoriously not secure, but they are generally secure
| enough for most home needs. But your average home door
| lock would obviously be idiotic as protection for Fort
| Knox's gold deposit door.
|
| Comparing BSD to Linux security is complicated, but for
| most high value targets, the answer probably is, run more
| than one OS. Root DNS servers and other highly critical
| internet infrastructure all do this as a matter of common
| practice. If you are mono-culture Linux only, I worry for
| your security, as you are effectively a single zero-day
| away from being owned. Linux, BSD, Windows, etc will all
| have RCE's and zero-days as a normal part of existing.
|
          | 0: formally proven secure (seL4), for some definitions
          | of provable, even: https://genode.org/
| YATA0 wrote:
| >I said built-in, you seem to have missed that part.
|
| I did not miss that part, you're just mistaken.
|
| >SELinux is not built-in(though it is for certain
| distributions of Linux).
|
| Wrong. SELinux is 100% "built-in" to Linux. That's like
| saying btrfs or Wireguard are not "built-in" to Linux
| because certain distros may or may not have them compiled
| in. Nonetheless, SELinux is part of the kernel [0].
|
| The rest of your dribble is a painful Gish gallop because
| you were decisively proven wrong. Mature up a bit and
| take the L. Fomenting about being proven wrong is against
| the Guidelines here.
|
| [0] https://lore.kernel.org/selinux/
| zie wrote:
| I'm not trying to hate on SELinux, it's great stuff, for
| what it is. I'm not trying to hate on you either, though
| clearly you seem to have hatred towards me, which is just
| sad.
|
| I'm happy to accept that SELinux is now built-in to
| Linux, the kernel parts do indeed seem to be built in
| now, news to me, thanks for that. I don't follow Linux
| kernel stuff much anymore, I haven't contributed to Linux
| in over a decade.
|
| You seem to assume SELinux is the end-all be all of Linux
| security. It isn't. I recognize, based on your other
          | comment, that you are fairly new to the field (a whole
| decade, go you!). Please open your mind and accept
| differing perspectives, it will do wonders for your
| ability to reason about security properly.
|
| HardenedBSD[0] essentially implements grsecurity for
| FreeBSD, and FreeBSD has built-in capability support with
| Capsicum[1], true capability-based security, which is very
| different from SELinux's MAC approach. If you don't believe
| me, go read the Capsicum paper[1] and come to your own
| conclusions; it might prove enlightening.
|
| Also, see CheriBSD. :)
|
| 0: https://hardenedbsd.org/content/easy-feature-comparison
| 1: https://papers.freebsd.org/2010/rwatson-capsicum/
|
| If you just want to continue hating on me, no reason to
| respond, we can go our separate ways. If you want to have
| a reasoned discussion about security, then I'm happy to
| continue.
| YATA0 wrote:
| >You seem to assume SELinux is the end-all be all of
| Linux security.
|
| Never said or implied anything of the sort.
|
| >I recognize, based on your other comment, that you are
| fairly new to the field (a whole decade, go you!).
|
| I've been implementing secure, hardened UNIX and Linux
| probably longer than you've been alive. I just
| specifically worked on DoD TS+ systems for a decade.
|
| The rest of your Gish gallop is nonsense. Linux also has
| capability based security ON TOP of all the other aspects
| of security, SELinux included.
|
| >If you want to have a reasoned discussion about security
|
| That's not possible with you. You instantly showed how
| little you know about security in general when you
| flaunted your lack of SELinux knowledge, then you
| proceeded to Gish gallop and sealion because you've been
| called out.
|
| Stop it.
| NexRebular wrote:
| Any source for this statement?
| YATA0 wrote:
| There are no DoD STIGs[0] for the BSDs, meaning they
| cannot run on military servers. Similarly, there are no
| CIS guides[1].
|
| What would I know, only designed and implemented TS+
| systems for a decade!
|
| [0] https://public.cyber.mil/stigs/downloads/
|
| [1] https://www.cisecurity.org/
| NexRebular wrote:
| There are a lot of them for Windows... should I then
| trust that linux is as secure for government use as
| Microsoft systems are?
| trasz wrote:
| It's not about providing any real security, it's about
| ticking checkboxes. So yes, if the checklist says "Linux
| and Windows are ok" then you can mark the checkbox, and
| with FreeBSD you couldn't.
| YATA0 wrote:
| Now you're shifting the goal posts after I provided
| sources that the US gov does not use BSD.
|
| Microsoft Server systems for government use have been
| audited and have strict controls for implementation,
| hardening, securing, etc.
|
| They're probably more secure than BSD.
| hestefisk wrote:
| Btw, HardenedBSD is a good security-hardened option and is
| available.
| krageon wrote:
| Systemd has only one advantage, which is also its prime
| disadvantage: its paws are _everywhere_ in your system. Before
| there was Systemd, init systems worked okay too - in that space
| nothing was changed by it.
| acatton wrote:
| Every company I worked at (before systemd was mainstream) was
| running most of their services under supervisord[1], which was
| started by sysvinit.
|
| I'm not sure sysvinit "worked okay".
|
| [1] http://supervisord.org/
| spamizbad wrote:
| init scripts were "great" if you were a unix wizard. For
| mere mortals they were frustrating.
| DarylZero wrote:
| Systemd is so fucking great. LOL @ the haters. They don't even
| know what they're missing.
| freedomben wrote:
| > _well... systemd is a clone of launchd_
|
| sort of. Systemd took a lot of lessons from launchd but also
| from sysv and upstart. For anyone who hasn't read Lennart
| Poettering's "Rethinking PID 1" post[1] I highly recommend it.
| You'll understand the history, but most importantly you'll
| understand systemd a ton better.
|
| [1]: http://0pointer.de/blog/projects/systemd.html
| toast0 wrote:
| > At the end all of your jails end up on the same loopback
| interface, making it hard to firewall. I couldn't find a way to
| have one network interface per jail.
|
| You may want to look at vnet, which gives jails their own
| networking stack; then you can give interfaces to the jail. If
| you use ipfw instead of pf, jail id/name is a matchable
| attribute on firewall rules; although it's not perfect, IIRC I
| couldn't get incoming SYNs to match by jail id, but you can
| match the rest of the packets for the connection. And that
| brings up the three firewalls of FreeBSD debate; maybe you had
| already picked pf because it met a need you couldn't (easily)
| meet with ipfw; you can run both simultaneously, but I wouldn't
| recommend it. Nobody seems to run ipf, though.
|
| Edit: you may also just want to limit each jail to a specific
| IP address, and then it's easy to firewall.
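toast0's vnet suggestion can be sketched in jail.conf; the jail name, path, and interface below are hypothetical:

```conf
# /etc/jail.conf -- hypothetical jail with its own vnet network stack
web {
    vnet;                           # give the jail a private network stack
    vnet.interface = "epair0b";     # jail side of an epair(4) pair
    path = "/usr/local/jails/web";
    host.hostname = "web.example.org";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

The host side of the epair can then be bridged or firewalled like any other interface, so per-jail pf rules become straightforward.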
| SpaceInvader wrote:
| Regarding jails - I do use separate loopback device per jail,
| plus pf with nat. No issues with firewalling.
| densone wrote:
| Same... I don't use the separate loopback. Just firewall
| what I need to firewall.
| HotHotLava wrote:
| My boss graduated from Berkeley, and so I occasionally had to
| administer a FreeBSD box he kept around for the "superior OS
| philosophy".
|
| My biggest annoyance, apart from the obvious lack of systemd,
| was a social one: Any time I had to look up how to accomplish
| some non-trivial task I would inevitably find a thread on the
| FreeBSD forums by someone else who has had exactly the same
| problem in the past, together with a response by some dev along
| the lines of "why would anyone ever need to do that?".
| throwawayboise wrote:
| > I would inevitably find a thread on the FreeBSD forums
|
| That was your mistake. With BSDs, you use the man pages ;-)
| passthejoe wrote:
| I needed to script something for a FreeBSD server, and I was
| doing the development in Linux. It would have been OK if I
| had used Ruby/Python/Perl, etc., but I decided to do it in
| Bash. I had to make quite a few fixes when I got the script
| on the FreeBSD box. Then I deployed it to an OpenBSD system.
| More fixes.
| gunapologist99 wrote:
| Even though I _severely_ dislike systemd, and am a fan of
| FreeBSD 's stability and simplicity, I also have found this
| attitude to be an annoyance. For example, if I want to
| upgrade a fleet of a hundred or so FreeBSD boxes remotely,
| the answer seems to be, "Don't do that. You must upgrade each
| one by hand."
|
| This is the principal reason why I've leaned toward Debian
| stable (which also aims at being a stable UNIX like FreeBSD)
| for decades, even though Debian has also been infected with
| systemd and has made other questionable decisions.
| Alternatively, I've also had good luck with Void, Alpine, and
| Artix. (I've had difficulties with electron apps on Void
| desktop, but it runs flawlessly on servers.)
| simcop2387 wrote:
| > For example, if I want to upgrade a fleet of a hundred or
| so FreeBSD boxes remotely, the answer seems to be, "Don't
| do that. You must upgrade each one by hand."
|
| either that or "don't do an upgrade, build new ones and
| replace the old ones instead"
| throw0101a wrote:
| > _there is no way to specify dependencies_
|             # PROVIDE: mumbled oldmumble
|             # REQUIRE: DAEMON cleanvar frotz
|             # BEFORE: LOGIN
|             # KEYWORD: nojail shutdown
|
| * https://docs.freebsd.org/en/articles/rc-scripting/
|
| * https://www.freebsd.org/cgi/man.cgi?query=rcorder
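A minimal rc.d script using those keywords might look like this sketch (the daemon name and paths are invented; FreeBSD's rc.subr does the heavy lifting):

```sh
#!/bin/sh
# PROVIDE: mydaemon
# REQUIRE: NETWORKING DAEMON
# BEFORE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name="mydaemon"
rcvar="mydaemon_enable"
command="/usr/local/sbin/mydaemon"
pidfile="/var/run/${name}.pid"

load_rc_config $name
run_rc_command "$1"
```

rcorder(8) reads the PROVIDE/REQUIRE/BEFORE comments to compute the boot ordering, which is what the caveat about REQUIRE above refers to.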
| acatton wrote:
| From your link
| https://www.freebsd.org/cgi/man.cgi?query=rcorder
|
| > The `REQUIRE' keyword is misleading: It does not describe
| which daemons have to be running before a script will be
| started.
|
| > It describes which scripts must be placed before it in the
| dependency ordering. For example, if your script has a
| `REQUIRE' on `sshd', it means the script must be placed after
| the `sshd' script in the dependency ordering, not necessarily
| that it requires sshd to be started or enabled.
| fullstop wrote:
| I've grown to love systemd. It solves a lot of my problems with
| init scripts, particularly the ones involving environment /
| PATH at boot time. I've made init scripts for things before
| which work when they are invoked manually, but not at boot time
| because PATH was different. With systemd I am confident that if
| it works through systemctl it will work at boot.
|
| Maybe I am not tuning Linux appropriately, but I have been in
| situations where a Linux system is overwhelmed / overloaded and
| I am unable to ssh to it. I have _never_ had that experience
| with FreeBSD -- somehow ssh is always quick and responsive even
| if the OS is out of memory.
|
| Most of the systems that I deal with are Linux, but I still
| have a few FreeBSD systems around and they are extraordinarily
| stable.
| tored wrote:
| On my personal linux workstation I experienced multiple times
| that I couldn't ssh into it after the desktop froze, it was
| due to too small swap file.
| themodelplumber wrote:
| That's interesting, what proportion are you using to
| determine swap size? For example 1x RAM, 2x RAM, etc.
| tored wrote:
| I run at 1xRAM. Seems that my problems went away after
| that.
|
| Ubuntu by default sets it to 2GB even if you have
| something like 16GB RAM. Swap file can then fill up
| pretty quickly.
| themodelplumber wrote:
| > Ubuntu by default
|
| Yep, one of my systems got this treatment. It's annoying
| because OOM has been an issue ever since. I'm playing
| with EarlyOOM and trying to remember if it'll be a huge
| pain to resize the partition for more swap space. Thanks
| for your reply.
| eptcyka wrote:
| I think there's something pathological in the I/O subsystems
| on Linux that make it a bad experience - I've experienced
| horrible UI latencies in desktop and server settings with
| Linux when there was any kind of I/O load, and found FreeBSD
| to always be a breath of fresh and quick air in this regard.
| pedrocr wrote:
| There are definitely pathological cases around. Here's an 8
| year old bug, still valid, that's a common extreme slowdown
| when copying to/from a USB drive:
|
| https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1208993
| travisgriggs wrote:
| > I've grown to love systemd.
|
| I think I've grown to appreciate it, but not love it. It
| feels like both systems are at opposite ends of a pendulum.
|
| The RC/unit system was very approachable for me. You could
| explain it easily, and you could get inside of it and mess
| around very easily. That affordance/discoverability was
| awesome.
|
| With systemd, simple things are nicely templated. I can find
| an example and tweak it to achieve what I want. Complicated
| things get complicated real fast.
| fullstop wrote:
| The RC system was simple, I'll give you that.
|
| I love not having to track down pid files. I love not
| having to check if the pid file contents match a valid
| instance of the expected binary.
|
| The RC system was simple and also full of exceptions and
| corner cases.
| tluyben2 wrote:
| What is the use case for that? I never had these needs on
| servers. Maybe you are doing something very different
| from me, so I am curious. I use RC on servers and systemd
| on clients; things run robustly for 15+ years on many
| servers for me (with security updates included).
| fullstop wrote:
| Let's say that you're starting Apache Tomcat. You have to
| dig into the tomcat startup scripts and figure out where,
| if anywhere, it writes the pid file to disk so that you
| can later use it to determine if the process is running
| or not. If java happened to crash, though, this pid file
| is stale and _might_ not point to a valid process. There's
| a chance that the pid has been reused, and it could
| have been reused by anything -- even another java
| process!
|
| This is important, because this pid file is used to
| determine which process receives the kill signal. If you
| get it wrong, and have the right permissions, you can
| accidentally kill something that you did not intend to
| kill.
|
| This is further complicated if you want to run multiple
| instances of Tomcat because now you need to have a unique
| path for this pid file per tomcat instance.
|
| If the thing that you're trying to run doesn't fork, you
| then have to execute it in the background and then store
| the result of $! somewhere so that you know how to kill
| the process later on.
|
| It's all very error prone and the process for each daemon
| is often different.
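The fragility described above can be sketched in a few lines of shell; the service name and pidfile path are made up, and the stale PID is simulated deliberately:

```shell
#!/bin/sh
# Sketch of the status check a classic init script must hand-roll.
PIDFILE=/tmp/myservice.pid

# Simulate a pidfile left behind by a crashed daemon.
echo 99999999 > "$PIDFILE"    # a PID that almost certainly doesn't exist

status() {
    [ -f "$PIDFILE" ] || { echo "not running"; return 1; }
    pid=$(cat "$PIDFILE")
    # kill -0 only tests that *some* process has this PID -- it cannot
    # tell whether that process is still our daemon or a recycled PID.
    if kill -0 "$pid" 2>/dev/null; then
        echo "running (pid $pid)"
    else
        echo "stale pidfile (pid $pid is gone)"
        return 1
    fi
}

status || true    # prints: stale pidfile (pid 99999999 is gone)
```

systemd sidesteps all of this by tracking a service's processes in a cgroup instead of trusting a number in a file.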
| herpderperator wrote:
| > Let's say that you're starting Apache Tomcat. You have
| to dig into the tomcat startup scripts and figure out
| where, if anywhere, it writes the pid file to disk so
| that you can later use it to determine if the process is
| running or not.
|
| https://tomcat.apache.org/tomcat-8.5-doc/windows-service-
| how...
|
| The commandline argument is --PidFile and --LogPath.
| Most, if not all, programs allow you to customise this.
| It should never be a guessing game, especially when you
| are the one creating the init file, therefore you are the
| one in control of the running program.
| fullstop wrote:
| Those arguments are for the Windows service and don't
| appear to have a corresponding Linux option. On the Linux
| side, if things are still done the same way as they were
| in the past, you had to set CATALINA_PID before starting
| the java process.
|
| It's still a guessing game, though, even with
| CATALINA_PID. It is entirely possible for Java to crash
| (something which RC scripts do not handle, at all) and
| another java process starting up which happens to be
| assigned the same process id as the dead java process.
| This can not happen with systemd units because each
| service unit is its own Linux cgroup and it can tell
| which processes belong to the service.
| toast0 wrote:
| You could just run every daemon in its own jail. You
| don't need to chroot (unless you want to), and don't need
| to do any jail specific network config (unless you want
| to), but you could use the jail name to ensure no more
| than one tomcat (or tomcat-config) process, and you could
| use the jail name/id to capture process tree to kill it
| without ambiguity.
|
| With respect to servers that crash, approaches differ,
| but you could use an off the shelf, single purpose daemon
| respawner, or you could change the whole system, or you
| could endeavor to not have crashing servers, such that if
| it crashes, it's worth a human taking a look.
| trasz wrote:
| Your description is fine, but it's missing one crucial
| detail: it's specific to Linux. In FreeBSD this is
| already taken care of by rc infrastructure; there is no
| need for the user or sysadmin to mess with it.
| Spivak wrote:
| You have to do it for correctness. Every use case needs
| this handled.
|
| Okay so you have a PID file somewhere
| /var/myservice/myservice.pid. The contents of that file
| is a number which is supposed to correspond to a process
| you can find in proc.
|
| But PIDs are recycled or more likely your PID files
| didn't get cleaned up on reboot. So you look at your file
| and it says 2567, you look up 2567 and see a running
| process! Done right? Well it just so happens that a
| random other process was assigned that PID and your
| service isn't actually running.
|
| pidfd's are the real real solution to this but the short
| and long tail of software uses pidfiles.
| fullstop wrote:
| > But PIDs are recycled or more likely your PID files
| didn't get cleaned up on reboot. So you look at your file
| and it says 2567, you look up 2567 and see a running
| process! Done right? Well it just so happens that a
| random other process was assigned that PID and your
| service isn't actually running.
|
| If you're unlucky, though, pid 2567 might match another
| myservice instance. This can easily happen if you're
| running many instances of the same service. Even checking
| /proc/$PID/exe could give you a false positive.
| dataflow wrote:
| I don't expect many programs do this (and I agree the
| real solution would be handles) but it should be possible
| to check the timestamp on the PID file and only kill the
| corresponding process if its startup time was earlier.
|
| There might still be race conditions but this should cut
| down the chance dramatically.
| capitainenemo wrote:
| How about keeping pidfiles on tmpfs mounts so they do get
| cleaned up? I guess that'd be an organisational thing to
| get all the apps to change to a consistent /var/tmp so
| you could mount it..
| fullstop wrote:
| That's still not good enough. pids wrap and eventually it
| could point to something valid, especially on a busy
| system.
|
| systemd uses linux cgroups so that it knows exactly which
| pids belong to the group.
|
| The defaults have surely changed over the years, but
| pid_max used to be ~32K by default. On the system I'm
| typing this comment on, /proc/sys/kernel/pid_max is set
| to 4194304.
| cesarb wrote:
| > The defaults have surely changed over the years, but
| pid_max used to be ~32K by default. On the system I'm
| typing this comment on, /proc/sys/kernel/pid_max is set
| to 4194304.
|
| The commit which changed the defaults is this one:
| https://github.com/systemd/systemd/commit/45497f4d3b21230756...
| znpy wrote:
| Your nice shell-scripts in /etc/rc.d just won't handle a
| service crashing for any reason, at all (systemd does
| that)
|
| Your nice shell-scripts in /etc/rc.d will start
| EVERYTHING and ANYTHING just in case, even if you don't
| always need it (systemd does support socket activation)
|
| Your nice shell-scripts can't handle parallelism (systemd
| can)
|
| Your nice shell-scripts can't reorder stuff at boot, you
| have to specify it by hand (systemd can, via
| Wants=/Requires=/After= etc)
|
| Your nice shell-scripts are worthless if you need to run
| more than one instance of a service per server (with
| systemd having many instances of the same service is a
| feature, not an after-thought)
|
| Your nice shell-scripts won't help you troubleshoot a
| failing service (systemd will, via systemctl status AND
| journalctl).
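A couple of the points above can be illustrated with a hypothetical templated unit (the service name and binary are invented):

```ini
# /etc/systemd/system/myapi@.service
# Start several instances: systemctl start myapi@8080 myapi@8081
[Unit]
Description=My API on port %i
After=network.target

[Service]
ExecStart=/usr/local/bin/myapi --port %i
Restart=on-failure      # respawn automatically if the process crashes
RestartSec=2

[Install]
WantedBy=multi-user.target
```

The `%i` specifier carries the instance name, so running multiple copies of the same service is a one-liner rather than a copy of the script per instance.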
| gerdesj wrote:
| Complicated? https://lwn.net/Articles/701549/ If you have
| time, read the first two articles mentioned early on.
| wyager wrote:
| > At the end all of your jails end up on the same loopback
| interface, making it hard to firewall.
|
| I suppose you didn't use vnet? It's a vastly better jail
| networking experience. You can pretend jails are separate
| machines, connected via ethernet.
|
| > I couldn't find a way to have one network interface per jail.
|
| I think vnet is what you want?
| jsiepkes wrote:
| > Many FreeBSD devs praise launchd, well... systemd is a clone
| of launchd.
|
| No it is not. Systemd has a way bigger scope than launchd.
| It's like saying a truck is the same thing as a family car
| because they both solve the mobility problem.
|
| > FreeBSD jail are sub-optimal compared to systemd-nspawn.
|
| systemd-nspawn isn't a container. For example it doesn't manage
| resources such as CPU or IO. Again, the scope is way different
| and in this case there is a whole slew of things systemd-nspawn
| isn't going to manage for you.
|
| BTW launchd doesn't have a feature like 'systemd-nspawn'.
|
| > Lack of security modules (such as SELinux) -- Edit: I should
| have written "Lack of good security module"
|
| And how is SELinux a good security module? With its design,
| SELinux fell into the same pitfall dozens of security systems
| did before it: ACL hell, role hell, and now with SELinux we
| also have label hell.
| nottorp wrote:
| > Systemd has a way bigger scope then launchd.
|
| I hear it's taken over dns. Has it taken over sound too?
|
| When will it be able to read mail?
| Arnavion wrote:
| systemd the init system has nothing to do with DNS.
|
| systemd the family of tools has a DNS server, systemd-
| resolved. Like all other tools in the family, using the
| init system does not require using the other tools, and
| sometimes also vice versa.
|
| In the broader "Poetteringware" family of tools, sound is
| handled by pulseaudio, though the server side of pulseaudio
| is in the process of being replaced by pipewire.
| acatton wrote:
| > systemd-nspawn isn't a container. For example it doesn't
| manage resources such as CPU or IO. Again, the scope is way
| different and in this case there is a whole slew of things
| systemd-nspawn isn't going to manage for you.
|
| It does[1]. In the end, systemd-nspawn runs in a unit, which
| can be (like all systemd units) resource-controlled.
|
| > BTW launchd doesn't have a feature like 'systemd-nspawn'.
|
| Neither does systemd. systemd-nspawn is just another binary,
| like git and git-annex. The only difference from git and git-
| annex is that systemd and systemd-nspawn are maintained by
| the same team.
|
| But like git and git-annex, most systemd installations don't
| have systemd-nspawn, but systemd-nspawn needs systemd.
|
| > And how is SELinux a good security module? SELinux with
| it's design fell in the same pitfall dozens of security
| systems did before it; ACL-Hell, Role-hell and now with
| SELinux we also have Label-hell.
|
| SELinux is generic enough to allow for sandboxing of
| services. But in its mainstream use, SELinux is well designed
| in that it allows for reuse of policies. Most people don't
| care about SELinux: they just run CentOS, install RPMs from
| the repository, and everything works out of the box with
| heightened security (i.e. if there is a vulnerability in any
| of these packages, the blast radius of the attack is limited
| thanks to SELinux's Mandatory Access Control).
|
| [1] https://www.freedesktop.org/software/systemd/man/systemd.
| res...
| traceroute66 wrote:
| >Lack of systemd. Managing services through shell scripts is
| outdated to me.
|
| This, this and this again !
|
| After decades of cron, I discovered systemd timers via a
| passing comment I read here on HN.
|
| My god is it amazing. No more hacky work-arounds in my scripts,
| systemd now takes care of all the magic such as random timings
| etc.
|
| I'll never go back to cron.
| torstenvl wrote:
| Can you elaborate? I'm a macOS laptop/FreeBSD server guy.
| What are systemd timers, how do they work, and why do you
| feel they solve your problem better?
| zaarn wrote:
| Generally I would say the biggest difference is that with
| timers, you get more control.
|
| For example, you can schedule a timer to run 10 minutes
| after boot. Or a timer that activates 10 minutes after it has
| last _finished_ running (note: not when it started last
| time but when it finished! So if the proc takes 10 hours,
| there is a 10 minute gap between runs. If it takes 10
| minutes, there is still a 10 minute gap).
|
| You can also schedule something 10 minutes after a user
| logs in (or 10 seconds later, etc.).
|
| Additionally you get Accuracy and RandomizedDelay. The
| former lets you configure how accurate the timer needs to
| be, down to 1 sec or up to a day. So your unit now runs
| somewhere on the day it's supposed to run. And with the
| latter you can ensure that there is no predictable runtime;
| this can be important for monitoring.
|
| My biggest favorite is Persistent=. If true, systemd will
| check when the service last ran. And if it should have been
| scheduled at least once since then, it'll activate. I use
| this for backing up my home PC. When I do a quick restart,
| no backups are done but when I shutdown for the night,
| first thing in the morning my PC has a backup done.
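As a concrete sketch of those options, a hypothetical backup.timer (paired with a backup.service of the same name) might look like:

```ini
# /etc/systemd/system/backup.timer
[Timer]
OnCalendar=daily
Persistent=true               # catch up at next boot if a run was missed
RandomizedDelaySec=15min      # no predictable start time
AccuracySec=1min              # how precisely the deadline is honored

[Install]
WantedBy=timers.target
```

OnUnitInactiveSec= would instead schedule relative to when the last run *finished*, which is the gap-between-runs behavior described above.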
| throwawayboise wrote:
| Counterpoint is that I've never had a requirement for
| timing scenarios like this, so now I'm dragging all that
| along for the ride for no advantage. Bloat is great when
| you need that rare thing I guess; otherwise it's just
| bloat and complexity for no benefit.
| waynesonfire wrote:
| agreed, and more importantly it's possible to implement
| these scenarios with cron if they're needed. i'll take a
| simple building block over bloat. where does the bloat
| stop? i can come up with tens more scenarios that timers
| doesn't cover.
| zaarn wrote:
| Maybe because you've never had the ability to use such
| timing requirements. Having more advanced tools available
| also makes you think more in them. If you only ever used
| cron, there has been no reason to think about a service
| running every hour or immediately if it missed the
| schedule. Because the only tools are every hour and on
| reboot. And no option to make "every hour" mean "between
| script invocations" instead of "on the hour of the
| schedule".
| freedomben wrote:
| This was my experience. Cron was great because it did
| what I needed it to do and I knew how to use it. But over
| the years I accumulated all sorts of hacks to avoid
| pitfalls. When I learned systemd timers I didn't like the
| complexity, but as new needs arose I thought about both
| cron and systemd and realized that systmd timers were
| better for 75% of my needs.
| throwawayboise wrote:
| There are lots of things that I have the _ability_ to do
| that I 've never had the _requirement_ to do. I call
| those things "bloat" because they are unnecessary
| features that add complexity and bugs to the system.
| KronisLV wrote:
| Not OP, but this page seems to have a nice writeup:
| https://wiki.archlinux.org/title/Systemd/Timers
| michaelmrose wrote:
| Fcron can also do random timings, jitter, running if the time
| passed occurred while the system was off, timing based on runtime
| not elapsed time, delay until system is less loaded, avoid
| overlapping instances of the same job.
|
| http://fcron.free.fr/doc/en/fcrontab.5.html
|
| 1.0 released 21 years ago.
| JediPig wrote:
| I like how userland and the kernel are in sync in FreeBSD. I
| disdain how Linux works in that regard. However, Linux has
| its core advantages; some distros are well suited and tested
| for certain scenarios. I used to have a bunch of FreeBSD
| boxes, but nowadays I use containers and lightweight
| runtimes. With Lambdas and Serverless, most, but not all,
| business APIs are streamlined to where servers don't matter.
| It's the runtime.
|
| Serverless is killing containers. It's killing the need to
| care about whether it's FreeBSD or Linux -- does it run my
| API fast enough?
| passthejoe wrote:
| This whole "userland and kernel" thing sounds good, but in my
| BSD use, the trade-off is that most desktop app packages (and
| ports) get a whole lot less attention and are more buggy than
| those in Linux. I imagine it's not a problem for servers.
| hnlmorg wrote:
| > _Lack of systemd. Managing services through shell scripts is
| outdated to me. It feels very hacky, there is no way to specify
| dependencies, and auto-restarts in case of crashes. Many
| FreeBSD devs praise launchd, well... systemd is a clone of
| launchd._
|
| I'm not a fan of systemd personally but I do understand it has
| some good parts to it (such as the ones you've listed). That
| all said, you can still specify dependencies in FreeBSD with
| the existing init daemon. Albeit it's a little hacky compared
| with systemd (comments at the top of the script). But it does
| work.
|
| > _FreeBSD jail are sub-optimal compared to systemd-nspawn.
| There are tons of tools to create freebsd jails (manually,
| ezjail, bastillebsd, etc...) half of them are deprecated. At
| the end all of your jails end up on the same loopback
| interface, making it hard to firewall. I couldn 't find a way
| to have one network interface per jail. With Linux, debootstrap
| + machinectl and you're good to go._
|
| It's definitely possible to have one interface per jail, I've
| done exactly that. However back when I last did it (5+ years
| ago?) you needed to manually edit the networking config and
| jail config to do it. There might be a more "linuxy" way to do
| this with Jails now though but its definitely possible. eg
| https://etherealwake.com/2021/08/freebsd-jail-networking/
| hestefisk wrote:
| "Managing services through shell scripts is outdated." By
| inference, since most things in Unix like systems are built on
| the notion of shell (and automation using the shell), this is
| saying a large part of the foundation of Unix is outdated. A tool
| is a tool, but I would take a shell script from rc.d any time
| over a binary blob from systemd.
| crazy_hombre wrote:
| systemd unit files are text files, not binary blobs. And it
| is much easier to grok a unit file than a 500 line init
| script.
| waynesonfire wrote:
| At what cost? That's not the entirety of systemd's complexity
| curve.
|
| Your systemd unit file is backed by pages and pages of docs
| that must be comprehended to understand and hack on it. Unix
| developers have all they need from the script. Furthermore,
| it's all in the context of existing Unix concepts, and thus
| your Unix experience is paying dividends.
| NikolaeVarius wrote:
| How dare a system *checks notes* have too much
| documentation describing how it works.
| astrange wrote:
| FreeBSD has a lot of documentation, which is something
| people like about it.
|
| I think it actually shows a problem, which is that BSD is
| designed for all your machines to be special snowflakes
| with individual names, edited config files, etc instead
| of being mass managed declaratively. So you need to know
| how to do everything because you're the one doing it.
| Datagenerator wrote:
| Our research company hosts 15 PB of data on what we call
| Single System Imaged FreeBSD with ZFS. All systems pull
| the complete OS from one managed rsync repo during
| production hours. Doing this for ten years, never ever
| any problems. Config files are included using the
| hostname to differentiate between servers. Adding servers
| doesn't add manual labor, it's the borg type of setup
| which handles it all.
| crazy_hombre wrote:
| Now you're just exaggerating. Are you really saying
| spending 5-10 mins skimming through a couple of man pages
| is that hard? Are you saying that a lot of documentation
| is a bad thing? (I thought FreeBSD fans liked to harp
| about their handbook..) And besides, there are already
| hundreds of systemd unit files on your system that you
| can easily copy and make relevant changes for your own
| services. Not having to deal with finicky shell features
| is a major advantage IMO.
| waynesonfire wrote:
| I think the disconnect we're having is your inability to
| perceive complexity. And I don't blame you, it's not easy
| to quantify. I suggest you start with, Out of the Tar Pit
| by Ben Moseley. I'm not knocking on documentation, it's a
| vital property that I consider when adopting new
| technology.
|
| What I'm saying is that systemd's documentation currency
| (if you accept my metaphor) is spent on covering its
| accidental complexity, and it's voluminous. If you
| disagree with me, that's fine. This is just my experience
| as a linux user that's had to deal with systemd.
|
| If your claim is that systemd man pages are well written
| documentation then I think you're exaggerating and I'll
| wager you've relied on stackoverflow examples or tutorial
| blogs to solve your systemd issues--because I have. The
| reason for this is because the number of concepts and
| abstractions that you have to piece together to solve
| your problem is massive. But yeah, it's just a 5 line
| Unit file. I prefer strawberry kool-aid, thanks.
| crazy_hombre wrote:
| I genuinely don't see what's so complex about a service
| unit file. It's a simple INI file that has multiple
| sections that describe the service, specify what command
| to run, and declare any dependencies. It's literally the
| same thing that init scripts do except in a much more
| concise and efficient manner. And as I said before,
| there's a ton of systemd service unit files on any Linux
| system that you can take a look at and use as inspiration
| for your own services. Taking a little time to learn the
| ways of systemd is not a huge burdensome task like you're
| making it seem to be. I don't see why you think everyone
| should conflate systemd with complexity.
|
| And about the voluminous documentation: well, man pages
| are supposed to be comprehensive and cover every single
| aspect of the tools being described. They're not there
| just to be an intro to systemd for new users and
| administrators. If you want something like that, look no
| further than the "systemd for Administrators" series of
| articles written by the systemd author himself.
| https://github.com/shibumi/systemd-for-administrators/blob/m....
| trasz wrote:
| But they only provide fixed functionality, while shell
| scripts allow for practically unlimited customization.
|
| As for 500 lines - take a look at proper rc scripts, e.g.
| the ones in FreeBSD. They are mostly declarative; nothing
| like Linux's SysV scripts, which were in some ways already
| obsolete when first introduced (runlevels? In the '90s,
| seriously?)
| matthewmacleod wrote:
| _But they only provide fixed functionality, while shell
| scripts allow for practically unlimited customization._
|
| This is the _exact opposite_ of a good thing.
| crazy_hombre wrote:
| If you need extra customization capabilities, just run a
| shell script via the ExecStart= parameter and boom, you
| have all the power of systemd and the shell combined.
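| As a sketch of that combination (the script path is
| hypothetical), the unit simply delegates the custom logic
| to a shell script while systemd keeps supervising it:

```ini
# hypothetical unit: shell flexibility, managed by systemd
[Service]
ExecStart=/bin/sh -c '/usr/local/libexec/myapp-start.sh --verbose'
Restart=on-failure
```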
| Spivak wrote:
| You can even do one better, since systemd can natively run
| rc scripts. If you're on a systemd-based distro, peek at
| /etc/init.d. You can even manage services with
| /etc/init.d and the service command.
|
| The amount of effort systemd went through to make
| existing software work is genuinely heroic.
| lordgroff wrote:
| Yeah, this conversation seems a bit like people arguing
| past each other. But it's a result of the fact that the
| story on Linux was stuck for so long (e.g., sysvinit on
| Debian, Upstart with some sharp-edged hacks on Ubuntu).
| Systemd as the solution seems to have sucked all the air
| out of the room: either it's great and people are idiots,
| or it's the worst thing on the planet and people using it
| are sheep.
| passthejoe wrote:
| Yes. Exactly.
| dralley wrote:
| > But they only provide fixed functionality, while shell
| scripts allow for practically unlimited customization.
|
| Why is unlimited customization a good thing in the
| context of a system init?
| trasz wrote:
| For the same reason it's a good thing in other contexts.
| It's the main reason Unix got popular - because it can be
| made to fit whatever requirement you have.
| howinteresting wrote:
| Large parts of the foundation of Unix are absolutely,
| obviously outdated, starting from filesystems. There is
| nothing better yet for big chunks of Unix, but systemd
| (despite all its flaws) is a notable exception.
| all2 wrote:
| Can you expand on "starting from filesystems"?
| howinteresting wrote:
| The fact that filesystems casually allow TOCTTOU races--
| and that those races cause security vulnerabilities
| unless you delve into arcane, OS-specific syscalls like
| renameat2--is an embarrassment.
| all2 wrote:
| Username checks out. :D
|
| If I understand correctly, the issue is mutex on file
| objects at the kernel level. Basically it is a failing of
| the implementation of the "file" abstraction. Or perhaps
| a failing of the "file" abstraction itself?
|
| Per Wikipedia [0] (because I had no idea what a TOCTOU
| race condition was): "In the context of file system TOCTOU
| race conditions, the fundamental challenge is ensuring
| that the file system cannot be changed between two system
| calls. In 2004, an impossibility result was published,
| showing that there was no portable, deterministic
| technique for avoiding TOCTOU race conditions.[9] Since
| this impossibility result, libraries for tracking file
| descriptors and ensuring correctness have been proposed by
| researchers."
|
| It seems to me that solutions to the problem from inside
| the "files as an abstraction" space won't solve the
| problem.
|
| I was curious to see how a different abstraction would
| avoid this problem. Plan 9 FS appears to have had this
| issue at one point [1], but notably not because of the FS
| implementation itself. Rather, the problem was caused by
| the underlying system executing outside of P9FS's ordered
| access to a given file (someone please tell me if my
| understanding is incorrect).
|
| Here's an article that talks about the problems of this
| (and other) race conditions from the level of abstraction
| [2].
|
| NOTE: I'm not claiming that P9FS is immune from this sort
| of attack, I'm only commenting on what I've found.
|
| [0] https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use
|
| [1] https://bugs.launchpad.net/qemu/+bug/1911666
|
| [2] https://gavinhoward.com/2020/02/computing-is-broken-and-how-...
| dathinab wrote:
| That is a flawed argument, as there is a big difference
| between using the shell in places where it makes sense and
| managing services with shell scripts.
|
| I mean, the shell is still used everywhere, e.g. to
| configure and control systemd.
|
| Still, I would say a lot of core shell tools are indeed
| outdated (kept for backwards compatibility).
| irthomasthomas wrote:
| 1.5M lines of code is not an init system, it is a time bomb...
| and with dodgy wiring.
|
| I think it was a ZFS dev who complained he had to update 150
| files to port ZFS to systemd. Simples?
|
| And you have to love all them binary log files, with their
| dynamic and spontaneous API. Yes, everything is always new and
| exciting with systemD.
|
| I use Devuan, a Debian fork with a choice of init systems
| (well, everything but systemD ;). It took the dev(s) 2 years
| just to swap out/revert the init system.
|
| Also, just learned Devuan's website is JS- and cookie-free.
| https://www.devuan.org/
|
| Edit: I will concede that systemD is great for a lot of people.
| I honestly wish them success. The work that goes into building,
| maintaining, and using it is substantial. It must be a boon for
| the economy and job creation.
| fullstop wrote:
| > 1.5M lines of code is not an init system, it is a time
| bomb... and with dodgy wiring.
|
| https://news.ycombinator.com/item?id=21935186
| irthomasthomas wrote:
| Oh that's rich. That is rich. The [Flagged] tag is what
| really makes it. I'm tempted to screenshot that post and
| make an NFT from it. It really does capture something about
| the zeitgeist.
|
| Seriously, though, read the article. The 1.2M-line figure
| includes removed lines from refactoring. So it is not
| completely made up, as implied in that comment.
| https://www.phoronix.com/scan.php?page=news_item&px=systemd-...
| fullstop wrote:
| Is this better?
|
| https://twitter.com/pid_eins/status/1214268577509003266
|
| https://www.phoronix.com/misc/systemd-eoy2019/files.html
|
| Including the number of lines in static tables and
| documentation hardly seems like a meaningful comparison.
| nine_k wrote:
| Make no mistake, systemd is not (just) an init system.
| It's a replacement of more and more Linux userland. I
| suppose later on it will replace much of the filesystem,
| network, and process security management tools, stuff like
| xattrs, ip / ipfw, and selinux.
|
| I suppose that the end goal of the systemd project is an
| ability to deploy a production Linux system with just systemd
| and busybox, and run all software from containers.
|
| Not that it's a bad thing to strive for. But it's not going
| to be Unix as we know it.
| dullgiulio wrote:
| > But it's not going to be Unix as we know it.
|
| Right, that's Plan9.
| StreamBright wrote:
| > It's a replacement of more and more Linux userland.
|
| To be more specific, it is a replacement of more and more
| Linux userland that nobody asked for. I would be OK if
| systemd were a process management system that replaces
| init with a standardised way of managing services (it
| would be amazing if it were written in a safe language, if
| it respected configuration files, if it did not take over
| managing system limits, etc. etc.).
|
| Unfortunately it is trying to do too much.
| jiripospisil wrote:
| > FreeBSD's network stack is (still) superior to Linux's - and,
| often, so is its performance.
|
| Where is this coming from exactly? The linked article about
| Facebook is 7 years old. The following benchmark shows the exact
| opposite: Linux's network stack has long surpassed FreeBSD's. And
| I would expect nothing else given the amount of work that has
| gone into Linux compared to FreeBSD.
|
| https://matteocroce.medium.com/linux-and-freebsd-networking-...
| drewg123 wrote:
| It depends on your workload. For static content, especially
| kTLS encrypted static content, FreeBSD is quite a bit better.
| jiripospisil wrote:
| Do you have any numbers you can share publicly? What's the
| reason it's better (Linux has in-kernel TLS as well,
| correct?)? It would make for a great topic for the Netflix
| TechBlog.
| drewg123 wrote:
| Comparing to Linux? No. There are no public numbers.
|
| However, I'm up to serving 709Gb/s of TLS-encrypted
| traffic from a single host to real Netflix customers with
| FreeBSD.
| Melatonic wrote:
| Have you guys done a ton of your own customization and
| modification to the FreeBSD stack?
|
| Basically just wondering if both of you are correct: the
| default implementation of FreeBSD is in fact losing to
| Linux, but the networking-customized OSes (not even just
| thinking of Netflix at this point) are still superior.
| drewg123 wrote:
| Most of our customizations and modifications have been
| upstreamed. We get similar performance on production 100g
| and 200g boxes when running the upstream kernel. I see no
| reason I shouldn't be able to hit 380Gb/s on a single-
| socket Rome box with an upstream kernel. I just haven't
| tried yet.
|
| Most of the changes that I have for the 700g number are
| changes to implement Disk centric NUMA siloing, which I
| would never upstream at this point because they are a
| pile of hacks. They are needed in order to change the
| NUMA node where memory is DMAed, so as to better utilize
| the xGMI links between AMD CPUs.
| doublerabbit wrote:
| > At this point, we're able to serve 100% TLS traffic
| comfortably at 90 Gbps using the default FreeBSD TCP stack.
|
| https://netflixtechblog.com/serving-100-gbps-from-an-open-co...
| loeg wrote:
| That was 2017 -- it's quite a bit higher now.
| Prolixium wrote:
| I honed in on this as well. I can't speak much to running
| FreeBSD as a server, but I can say that using it as a
| router is not a great experience compared to Linux. I
| can't even get the latest ECMP (ROUTE_MPATH) feature
| working with FRR (or even by hand).
| Cloudef wrote:
| Isn't BSD's TCP stack single-threaded?
| rwaksmunski wrote:
| Netflix is pushing 400Gbit/s of TLS traffic per server with 60%
| CPU load. WhatsApp was doing millions of concurrent TCP
| connections per server. FreeBSD's networking has been multi-
| threaded for a long time now.
| hestefisk wrote:
| Esp thanks to in-kernel TLS, which is fantastic.
| technofiend wrote:
| As seen most recently here
| https://news.ycombinator.com/item?id=28584738
| Beermotor wrote:
| It has been multi-threaded since at least the early 2000s and
| most of the work for handling scaling beyond 32 cores was
| completed in 2012.
|
| https://www.cl.cam.ac.uk/teaching/1516/ConcDisSys/2015-Concu...
| trasz wrote:
| Well, to be honest, a whole lot has been done for FreeBSD
| network stack scalability quite recently (14-CURRENT),
| e.g. the introduction of epoch(9) and the routing nexthop
| patches.
| hyperionplays wrote:
| It's multithreaded.
|
| I run FRR Routing punching 400gbit+ @ 75% CPU load on FreeBSD
| 12.
|
| On the same hardware, Debian fell over at 1.8gbit.
| Melatonic wrote:
| How long ago did you compare them?
|
| I have been using FreeBSD for networking for a while now,
| but not at those levels.
| bell-cot wrote:
| tl;dr - FreeBSD has ample nice features for their use case, and
| is considerably simpler. Linux has loads of unneeded (for their
| use case) features, and so many cooks in the kitchen that the
| ongoing cognitive load (to keep track of the features and
| complexity and changes) looks worse than the one-time load of
| switching over to FreeBSD.
| znpy wrote:
| It's 2022, and if you still can't see the good in systemd,
| you're choosing ignorance.
|
| Related: https://www.youtube.com/watch?v=o_AIw9bGogo -- The
| tragedy of systemd.
|
| Where Benno Rice (FreeBSD Committer / FreeBSD Core member)
| explains the value of something like systemd.
___________________________________________________________________
(page generated 2022-01-24 23:02 UTC)