[HN Gopher] Using CMake and Managing Dependencies
       ___________________________________________________________________
        
       Using CMake and Managing Dependencies
        
       Author : ingve
       Score  : 32 points
       Date   : 2021-05-24 14:55 UTC (1 day ago)
        
 (HTM) web link (eliasdaler.github.io)
 (TXT) w3m dump (eliasdaler.github.io)
        
       | gspr wrote:
       | Why people think it's a good idea for the _build systems_ to
       | reach out and grab code from the internet is beyond me.
        
         | hoseja wrote:
         | It's not a build system, it's a meta build system.
        
           | mathstuf wrote:
           | [ Full disclosure: CMake developer. ]
           | 
           | Eh. CMake is a build system. It supports multiple build
            | _tools_, but it's not like one can just tell CMake "here's
           | some literal Make code, please put it into the generated
           | files". The generated builds are still restricted to the
           | semantics that CMake itself provides via its interface (e.g.,
           | no dyndep support in `add_custom_command` because only the
           | Ninja generator supports that reasonably well; it's possible
           | with Make, but I don't know how easy it would be to make it
           | work for arbitrary custom commands).
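As an illustration of those fixed semantics, a custom step is declared through CMake's own interface with explicit inputs and outputs, and CMake renders it for whichever generator is in use (file and target names here are made up):

```cmake
# A custom command declared via CMake's abstraction: explicit
# OUTPUT/DEPENDS, no raw Make or Ninja rule syntax.
add_custom_command(
  OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/generated.c
  COMMAND ${CMAKE_COMMAND} -E copy
          ${CMAKE_CURRENT_SOURCE_DIR}/template.c
          ${CMAKE_CURRENT_BINARY_DIR}/generated.c
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/template.c
  COMMENT "Generating generated.c"
)
add_executable(app main.c ${CMAKE_CURRENT_BINARY_DIR}/generated.c)
```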
        
             | iamcreasy wrote:
              | [Not a C/C++ dev] I am under the impression that CMake
              | does not perform the build itself: it produces build
              | files for Make, Ninja, etc. and has those tools run them
              | to produce the final binary. If that's correct, wouldn't
              | that make CMake a meta build system?
        
               | mathstuf wrote:
               | Right, I'm being more semantic here. CMake is a build
               | system with the concepts of targets, linking, compiling,
               | languages, toolchains, etc. The fact that it gets
               | "rendered" into a `build.ninja`, `.vcxproj`,
                | `.xcodeproj`, or `Makefile` is (largely) incidental (I
               | call these "build tools" or "build executors"). Other
               | instances would be `gn` or Meson. They're the build
               | system; ninja is "just" the build tool they use to
               | actually implement their semantics. Still others like
               | build2, Boost.build, or the Bazel family are both: the
               | build system and the build tool.
               | 
               | One cannot use CMake to write arbitrary Makefiles using
               | all the fancy features available through macro expansion,
               | rule templates, etc. All you have are CMake's semantics
               | that can then be executed by a tool that implements POSIX
               | makefile execution behaviors.
               | 
               | A (probably poor) analogy is a network resource. It can
               | be provided over HTTP, Gopher, or whatever. The semantics
               | of the resource are the same, but the transport mechanism
               | is just how it interacts with the rest of the world.
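To make the analogy concrete: the same target-level description below (names are illustrative) is rendered into a `build.ninja` with `cmake -G Ninja` or into a `Makefile` with `cmake -G "Unix Makefiles"`, without changing a line of it:

```cmake
cmake_minimum_required(VERSION 3.15)
project(demo LANGUAGES CXX)

# Targets, linking, and usage requirements are CMake's semantics;
# the generated build files are just one rendering of them.
add_library(greeter STATIC greeter.cpp)
target_include_directories(greeter PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE greeter)
```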
        
         | mr_tristan wrote:
         | It's the pragmatic solution to a hard problem.
         | 
          | Supporting distribution for all the different kinds of Linux
          | package managers alone isn't easy, and once you take in how
          | macOS and Windows do things, I'm not even sure where you'd
          | start.
         | 
         | It's kind of sad, but there's multiple walled gardens in play
         | here. I mean, Java's been around for 25 years, and it's just
         | starting to get tooling for making OS installers for
         | applications (jpackage).
         | 
         | So, programming toolchains just invent package management that
         | is easy for everyone to use. Everyone needs a problem solved
         | today for multiple platforms, and it's not like OSes are
         | working together to make this easy.
         | 
         | Personally, I've come to prefer vendored dependencies with the
         | sources, and building scripts around vendoring, separating the
         | build from dependency distribution. But not a lot of
         | programmers are comfortable with this. It's not a common
          | practice, and the most popular version control system (git)
          | isn't great at managing binaries.
        
           | gspr wrote:
           | The point isn't the whole debate about vendoring. I have
           | strong opinions on that matter, and they are different from
           | yours, but independently of whether one wants vendoring or
           | not: It seems to me to be highly advantageous to decouple
            | _building_ from _dependency management_ (no matter whether
            | said dependencies are vendored or not).
        
         | jasonpeacock wrote:
         | How else would you add/manage external dependencies for your
         | project?
         | 
         | Do you manually install them on your local host? That's not
         | reproducible and doesn't support multiple versions for
         | different projects.
         | 
         | Do you use an _uber_ build system like Yocto to wrap your CMake
         | build, and let it pull the packages?
         | 
         | Do you manually copy all the external deps into your own
         | project repo, and then manually keep them updated with the
         | latest changes?
        
           | gspr wrote:
           | > How else would you add/manage external dependencies for
           | your project?
           | 
           | Either using a package manager, or doing it "manually" (like
           | you suggest) - a process which can of course also be
           | automated _outside of the build system_.
           | 
           | > Do you manually install them on your local host? That's not
           | reproducible and doesn't support multiple versions for
           | different projects.
           | 
           | Of course that's reproducible and supports multiple versions.
           | Any build system worth its salt can be directed to look for
           | dependencies at specific locations. Then multiple versions of
           | things can be installed at multiple locations. This is
           | literally how things have been done for ages.
           | 
           | > Do you manually copy all the external deps into your own
           | project repo, and then manually keep them updated with the
           | latest changes?
           | 
           | I abhor bundling dependencies, so I'm probably not the right
           | person to ask, but supposing that you do want to bundle
           | dependencies: yes (a process that can be scripted just as
           | easily as it can be handed off to the build system!)
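In CMake terms, the decoupling gspr describes is what `find_package` plus `CMAKE_PREFIX_PATH` already provide: the build only declares what it needs, and how the dependency got installed is someone else's job (the package name and path below are illustrative):

```cmake
# CMakeLists.txt fragment: declare the dependency, nothing more.
find_package(fmt 8 REQUIRED)
target_link_libraries(app PRIVATE fmt::fmt)
```

At configure time, `cmake -DCMAKE_PREFIX_PATH=/opt/fmt-8.0.1 ..` points the build at a specific installed version, so two projects on the same host can use different versions with no network access during the build.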
        
             | howinteresting wrote:
              | Build systems that manage dependencies _are_ the
              | automation you're talking about, except they're easy to
              | reuse across many projects. That's incredibly valuable.
             | 
              | Being able to ship software, especially across many
              | platforms, is much more important than maintaining some
              | sort
             | of purist separation between build system and package
             | manager. Your view of affairs, after being dominant for
             | decades, is finally receding into the background, and the
             | world is a better place for it.
        
               | gspr wrote:
               | We shall see.
               | 
               | The separation of concerns is of far greater value than
               | just purism. It can ease cross-compilation, it can help
               | building in offline environments, it can make dependency
               | customization far easier, it can make swapping out
               | compilers much easier, etc.
               | 
               | These are all things that have been valuable in the past,
               | and there's little reason to think that they aren't still
               | valuable and will be valuable in the future.
        
               | howinteresting wrote:
               | cargo supports all of the things you talked about,
               | though.
               | 
               | Cross-compiles are part of the fundamental model of cargo
               | (target vs host dependencies).
               | 
               | cargo can be used in a completely offline manner easily,
               | through either caching dependencies in an online step or
               | vendoring them (or just not using dependencies).
               | 
               | Cargo has pretty good support for dependency patching.
               | 
               | The compiler can be swapped out through an environment
               | variable, RUSTC_WRAPPER.
               | 
               | I agree that all of those things are valuable! The cargo
               | developers think so too which is why cargo supports all
               | of them.
               | 
               | I suspect the future is more in making tools like cargo
               | support all the use cases people are concerned about,
               | than trying to find an orthogonality where one doesn't
               | exist.
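For the offline case specifically, `cargo vendor` copies every dependency into a local `vendor/` directory and prints a snippet along these lines for `.cargo/config.toml`, after which `cargo build --offline` needs no network at all:

```toml
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```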
        
             | Joker_vD wrote:
             | Why do I want the _package manager_ to fetch and install
                | dependencies for a project? I don't want to put all
                | kinds
             | of random junk into my /usr/lib before running "make" in
             | the project's directory stops failing. And I've seen
             | makefiles with "sudo apt install libqt5core libxml2-dev
             | ..." in them and I didn't like that.
        
               | gspr wrote:
               | > Why do I want the package manager to fetch and install
               | dependencies for a project?
               | 
               | You don't have to do that. If you separate the task of
                | _building_ from the task of _managing software_, you can
               | mix and match solutions for each separate task as you
               | wish.
               | 
               | > And I've seen makefiles with "sudo apt install
               | libqt5core libxml2-dev ..." in them and I didn't like
               | that.
               | 
               | That's obviously terrible too.
        
           | enriquto wrote:
           | > Do you manually copy all the external deps into your own
           | project repo, and then manually keep them updated with the
           | latest changes?
           | 
           | That is the saner option among the ones that you mention.
           | 
           | The sanest, of course, is to not update your dependencies
           | unless really necessary, to avoid introducing new bugs.
        
             | jasonpeacock wrote:
             | > The sanest, of course, is to not update your dependencies
             | unless really necessary, to avoid introducing new bugs.
             | 
             | So instead you live with all the existing (and unknown)
             | bugs and security issues?
             | 
             | And when you finally do update those frozen dependencies it
             | becomes a nightmare because of all the drift, not just in
             | that package but also its dependencies.
             | 
             | You should be updating regularly; if you're concerned about
              | stability then stay a release or two behind latest, but
              | that's not guaranteed to be more stable than latest.
             | 
             | Too often I see people take the approach you're espousing
             | to only suffer _more_ later when they are forced to take an
             | update, and then they become so scared of future updates
             | that they never update again, manually backporting patches,
             | and creating a nightmare legacy application.
        
           | krapht wrote:
           | I think parent is saying to use a package manager.
        
             | dagmx wrote:
             | That would be the same issue though, just moved to a
             | different spot
        
               | nerdponx wrote:
               | Not necessarily. Package managers can cache things,
               | especially if you're fetching the same version over and
               | over.
               | 
               | Also, separation of concerns is a good thing IMO.
        
               | howinteresting wrote:
               | Cargo etc also cache dependencies in a central location.
               | 
               | "Separation of concerns" is ill-defined -- you can use
               | cargo as just a package manager, just a build system, or
               | both. Most people use it as both but there are definitely
               | places (e.g. bigcos) that use just one of the aspects of
               | cargo.
        
               | gspr wrote:
               | Separation of tasks. The build system builds. Another
               | system (like a package manager, if you are so inclined,
               | or not if not) manages packages.
        
               | howinteresting wrote:
               | Systems that combine the two generally let you use just
               | one or the other. For example, cargo fetch just acts as
               | the package manager. cargo build can be configured
               | (through vendoring or just not using any external
               | dependencies) to not perform any package management.
               | 
               | What is missing is a modern cross-language, cross-
               | platform package manager, I agree. But there are really
               | good reasons for that, chief among them being that I
               | don't think it's possible to define the scope of such a
               | thing in a coherent fashion.
        
               | gspr wrote:
               | > Systems that combine the two generally let you use just
               | one or the other. For example, cargo fetch just acts as
               | the package manager. cargo build can be configured
               | (through vendoring or just not using any external
               | dependencies) to not perform any package management.
               | 
               | Indeed, and I'm still highly skeptical. Learning Rust has
               | been an amazingly rare sequence of "wow that's perfect"
               | uttered from my mouth - except for Cargo. I couldn't
               | disable the networking functionality of Cargo fast
               | enough!
        
             | gspr wrote:
             | Yes. Or not, if it's not suited for the task. But it seems
             | insane to me to shoehorn this into _the build system_. I
              | don't even understand why the latter should contain
             | networking code at all.
        
           | bluGill wrote:
            | One of the above: depending on my current situation, each
            | of the above (and a few others you didn't mention) can
            | make the most sense. That is why the cmake for a project
            | shouldn't pull in dependencies; it should just build with
            | whatever happens to be there or throw an error if
            | something is missing.
           | 
           | Now there do exist "uber build systems" that are written in
           | cmake. I'm fine with those existing (though I would probably
            | not choose cmake to write it in; that is the author's choice).
           | However when building a project you should not force a method
           | to get dependencies on your users. The package management is
           | a system level concern and you should not mess with it.
           | 
           | Note that I just put a lot of hate on systems like Cargo,
           | pip, or hackage. Or google's pull everything into the one
           | repository. Dependencies belong to tools like apt, rpm,
            | yocto, snap, conan (when not building packages to install) -
           | Microsoft/Apple are deficient for not providing them. Your
           | program should just attempt to build with what it is given,
           | not try to subvert my system - whatever it is. Yes I know it
           | makes your life easier to not have to deal with more than one
           | [possibly buggy] version of a dependency, but it makes my
           | life worse.
        
         | krapht wrote:
         | People also curl and execute shell scripts from the internet
         | every day.
        
           | gspr wrote:
           | Yes. And it's a terrible idea.
        
       | vadersb wrote:
        | Very nice tutorial on the subject! It's probably a noob
        | question, but with such a CMake setup, will it be possible to
        | step into dependencies' source code when debugging?
        
       | gomoboo wrote:
       | Excellent guide. I wish I had had it when struggling to learn
       | CMake initially. There's good content there for those past the
       | basics as well. For example, I had no idea FetchContent could be
       | used to download release zips. I will be trying that out tonight.
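A minimal sketch of that FetchContent pattern, using fmt 8.0.1 as a stand-in dependency (any release archive URL works the same way):

```cmake
include(FetchContent)

FetchContent_Declare(
  fmt
  URL      https://github.com/fmtlib/fmt/releases/download/8.0.1/fmt-8.0.1.zip
  # Pinning a checksum makes the download reproducible; fill in
  # the real SHA-256 of the archive.
  URL_HASH SHA256=...
)
FetchContent_MakeAvailable(fmt)

# Assuming an existing my_app target elsewhere in the project.
target_link_libraries(my_app PRIVATE fmt::fmt)
```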
        
       | xvilka wrote:
          | I wish CMake had kept the engine but changed the syntax to
          | something more sensible.
        
         | flohofwoe wrote:
         | Agreed. What cmake does under the hood is actually very
         | sensible and useful, but the arcane scripting language makes
         | even simple things hard. Adding a "proper" alternative
         | scripting language would be a good reason to bump the version
          | number to 4.0.
        
           | hoseja wrote:
            | It is serviceable after a while, and if you don't need
            | anything too fancy. Just remember that _basically
            | everything_ is a string.
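A few lines of CMake illustrate the point: a list is just a semicolon-delimited string, and an unset variable silently expands to the empty string:

```cmake
set(ITEMS a b c)
message(STATUS "${ITEMS}")            # prints: -- a;b;c
if(ITEMS STREQUAL "a;b;c")
  message(STATUS "a list is just a string")
endif()
message(STATUS ">${NO_SUCH_VAR}<")    # prints: -- ><  (no error)
```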
        
           | wmkn wrote:
           | CMake as a library with bindings for various scripting
           | languages (e.g. Python) would make a lot of sense to me. Have
           | the power of a proper scripting language and let the CMake
           | library deal with generating the build system.
           | 
           | I once looked into extracting the CMake build system
            | generation code and making it into a standalone library. But I
           | soon realized that it's not exactly a weekend project.
        
           | klapatsibalo wrote:
           | Let's make a language that compiles to CMake! :D
        
             | xvilka wrote:
             | Let's call it Autocmake.
        
               | MaxBarraclough wrote:
               | Or perhaps _CTools_.
        
         | umvi wrote:
         | They should have kept it as a frontend and backend like llvm so
         | people could use any language to interface with it
        
           | therealjumbo wrote:
           | Couldn't they still do this? I think there was a project a
            | ways back to add a Lua frontend to CMake. It's since been
            | abandoned, but I don't see why it couldn't be revitalized. I
           | guess the project just doesn't see the value in doing it, or
           | they don't have enough manpower.
        
             | mathstuf wrote:
             | See my sibling comment. The way CMake's language has
             | evolved is very organic. There is a declarative core now
             | with targets and usage requirements, but due to backwards
             | compatibility guarantees, actually making it the only thing
             | available is nigh impossible.
             | 
             | Lua was also deemed not suitable in the long run because of
             | our backwards compatibility guarantees. We'd need to ship N
             | Lua interpreters internally because Lua is not compatible
             | between 5.1, 5.2, 5.3, etc. Or say "we only support Lua
             | 5.2, we cannot support 5.4" which...distros _really_
             | dislike.
        
               | abz10 wrote:
               | I may not be understanding something; wouldn't the Lua be
               | embedded? So yeah, you'd pick a version. Why wouldn't
               | distros like it?
        
               | mathstuf wrote:
                | Multiple issues:
                | 
                | - Distros like unbundling (vcpkg, Anaconda, Spack,
                |   etc. also do)
                | - Old releases of Lua are not maintained, so it would
                |   be our problem
                | - Compiled Lua packages need to match our build (and
                |   since we'd likely need multiple Lua versions, our
                |   symbols would be mangled, so nothing would work
                |   unless explicitly compiled against CMake's version).
                |   I really don't want to deal with "why can't I use
                |   Lua package XYZ?" issues every week
        
               | enriquto wrote:
               | 10 lua interpreters would still be less than 1% of the
               | cmake code base. Shipping all of them would be orders of
               | magnitude better than the current clusterfuck that cmake
               | is. But in reality you would only need one.
        
               | mathstuf wrote:
               | Except that our testing and behavior matrix now explodes.
               | It's not the code I'd be worried about maintaining: it's
               | the effects and interactions of that code.
        
               | enriquto wrote:
                | It has already exploded, fractally, several times
                | over. My main portability problem when building
                | packages is the very existence of different cmake
                | versions, and hardcoded cmake version requirements in
                | CMakeLists files.
        
               | mathstuf wrote:
               | > My main portability problem when building packages is
               | the very existence different cmake versions
               | 
               | Why? You should _always_ be able to update CMake and have
                | it continue to work. If not, that's a regression in
               | CMake and we take those very seriously.
        
               | krapht wrote:
               | I think parent is referring to the fact that for many
               | Linux distributions, you must build packages with the
               | version of CMake they have in the repositories.
               | 
               | Even today, in 2021, I have to use CMake 2.8.12 / GCC
               | 4.8.5 in order to build code that will deploy on a CentOS
               | 7 server. That means I have to backport the CMake file of
                | every dependency that declares a
                | cmake_minimum_required of 3+.
        
               | mathstuf wrote:
               | If you want to be in that distro repo, yeah. If you're
               | just building on CentOS 7, download newer binaries
               | (https://github.com/Kitware/CMake/releases) and use them
               | instead. I do this all the time for our CentOS 7
               | deployments (and we use gcc 8 or 9; I forget when we last
               | uplifted the devtoolset).
               | 
               | FWIW, `cmake3` is packaged in CentOS 7 (though it might
               | be EPEL?).
               | 
               | All that said, given CMake's backwards compatibility
               | guarantees, it is a bit silly that distributions don't
               | update CMake more aggressively.
        
               | ihnorton wrote:
               | > You should always be able to update CMake and have it
               | continue to work.
               | 
               | CMake hard-codes the absolute path to itself all over the
               | place. Any time a package manager that relies on symlinks
               | (eg homebrew) updates the CMake package, every build that
               | relies on it breaks. (last time that happened I ended up
               | downloading the cmake.org build and recreating the
               | symlinks because I couldn't afford hours of rebuild)
        
               | mathstuf wrote:
               | Hmm? I know it puts the absolute path of the CMake that
               | generated the cache file in there, but what else cares
               | about the CMake version that shouldn't also be
               | reconfigured when CMake changes anyways?
               | 
               | FWIW, CMake is highly relocatable and doesn't really care
               | where it ends up living. That Homebrew removes versions
               | you were using out from under you is not something we can
               | fix. Or maybe I'm not understanding the problem fully (I
               | don't use Homebrew with any consistency to know what the
               | issue is here).
        
           | mathstuf wrote:
           | [ Full disclosure: CMake developer. ]
           | 
           | That presumes that we knew it would have been such a problem
           | when CMake was started (back in the late 90's; I started
           | contributing in 2010). The issue is that a lot of semantics
           | are tied up in the way the language works between the list
           | representation, variable lookup, scoping, policies, etc.
        
         | mxcrossr wrote:
         | Also I wish they could nuke from orbit all the tutorials and
         | stack exchange answers that are completely wrong about how to
         | use it!
        
         | dgellow wrote:
         | So much this. Their DSL is so full of weird details.
        
         | cprecioso wrote:
          | That's kinda Meson, I guess. I found it super pleasant once
          | you grasp the concepts and terminology, and the website is
          | very informative.
        
           | xvilka wrote:
            | Meson has a different engine and requires Python. CMake is
            | already nearly perfect in everything except syntax. Moreover,
            | it requires only C++ to be built, so bootstrapping is easier
            | than in Meson's case.
        
             | ironman1478 wrote:
             | Python doesn't seem like a huge barrier to entry to me.
             | CMake might be great in many ways, but the syntax is truly
             | awful. It's just not intuitive to many people. I think due
              | to how spacing works in CMake, you can have incorrect things
             | run and not fail. That's a huge issue, especially when
             | onboarding people who aren't familiar with CMake.
             | 
              | Meson has some warts, especially around documentation,
              | but it's just easier and generally more predictable. I've
             | converted some CMake projects internally to meson and
             | people were able to contribute and make changes with
             | minimal learning and training, because the structure is
             | intuitive. That means a lot.
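One example of the "incorrect things run and not fail" class, assuming an existing `app` target: whitespace separates arguments in CMake, so quoting a pair of flags turns them into a single compiler argument. CMake configures without complaint; the error only surfaces at compile time:

```cmake
# One string, NOT two flags: the compiler is invoked with the
# single argument "-Wall -Wextra", which gcc/clang reject, yet
# CMake itself configures this without any warning.
set(FLAGS "-Wall -Wextra")
target_compile_options(app PRIVATE ${FLAGS})

# What was meant: a two-element list.
target_compile_options(app PRIVATE -Wall -Wextra)
```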
        
       ___________________________________________________________________
       (page generated 2021-05-25 23:01 UTC)