[HN Gopher] Show HN: I built a Cargo-like build tool for C/C++
       ___________________________________________________________________
        
       Show HN: I built a Cargo-like build tool for C/C++
        
       I love C and C++, but setting up projects can sometimes be a pain.
       Every time I wanted to start something new I'd spend the first hour
       writing CMakeLists.txt, figuring out find_package, copying
       boilerplate from my last project, and googling why my library isn't
       linking. By the time the project was actually set up I'd lost all
       momentum.  So, I built Craft - a lightweight build and workflow
       tool for C and C++. Instead of writing CMake, your project
       configuration goes in a simple craft.toml:
        [project]
        name = "my_app"
        version = "0.1.0"
        language = "c"
        c_standard = 99

        [build]
        type = "executable"

        Run craft build and Craft generates the CMakeLists.txt
        automatically and builds your project. Want to add dependencies?
        That's just a simple command:

        craft add --git https://github.com/raysan5/raylib --links raylib
        craft add --path ../my_library
        craft add sfml

        Craft will clone the dependency, regenerate the CMake, and
        rebuild your project for you.

        Other Craft features:
        craft init - adopt an existing C/C++ project into Craft or
        initialize an empty directory.
        craft template - save any project structure as a template to be
        initialized later.
        craft gen - generate header and source files with starter
        boilerplate code.
        craft upgrade - keeps itself up to date.
        CMakeLists.extra.cmake for anything that Craft does not yet
        handle.
        Cross platform - macOS, Linux, Windows.

        It is still early
       (I just got it to v1.0.0) but I am excited to be able to share it
       and keep improving it.  Would love feedback. Please also feel free
       to make pull requests if you want to help with development!
        
       Author : randerson_112
       Score  : 154 points
       Date   : 2026-04-09 16:04 UTC (19 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | wg0 wrote:
       | Yesterday I had to wrestle with CMake.
       | 
        | But how does this tool figure out where the header files and
        | build instructions for the included libraries are? Is there an
        | expected layout or industry-wide consensus?
        
         | integricho wrote:
          | I believe it only supports projects that have a working
          | cmake setup, no extra magic
        
         | flohofwoe wrote:
         | I suspect it depends on a specific directory structure, e.g.
         | look at this generated cmake file:
         | 
         | https://github.com/randerson112/craft/blob/main/CMakeLists.t...
         | 
         | ...and for custom requirements a manually created
         | CMakeLists.extras.txt as escape hatch.
         | 
         | Unclear to me how more interesting scenarios like compiler- and
         | platform-specific build options (enable/disable warnings,
         | defines, etc...), cross-compilation via cmake toolchain files
         | (e.g. via Emscripten SDK, WASI SDK or Android SDK/NDK) would be
         | handled. E.g. just trivial things like "when compiling for
         | Emscripten, include these source files, but not those others".
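That kind of platform-conditional can at least be expressed in hand-written CMake; a sketch, assuming a hypothetical target and source files:

```cmake
# Hypothetical target "my_app": compile one backend source file when
# targeting Emscripten, and a different one everywhere else.
if(EMSCRIPTEN)
    target_sources(my_app PRIVATE src/backend_wasm.c)
else()
    target_sources(my_app PRIVATE src/backend_native.c)
endif()
```

Whether a generator like Craft can express this without falling back to the CMakeLists.extra.cmake escape hatch is exactly the open question.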
        
         | eliemichel wrote:
          | CMake piles up various generations of idioms so there are
          | multiple ways of doing it, but personally I've learned to
          | steer away from find_package() and other magical functions.
          | Get all your dependencies as subdirectories (whichever way
          | you prefer) and use add_subdirectory(). Use find_package()
          | only in so-called "config" mode, where you explicitly
          | instruct cmake where to find the config, and only for large
          | precompiled dependencies.
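A minimal sketch of that layout (directory, package, and target names are illustrative, not from the thread):

```cmake
# Vendored dependency: built as part of this project.
add_subdirectory(third_party/fmt)

# Large precompiled dependency: find_package() in explicit config mode,
# pointed at a known install prefix instead of relying on magic search.
find_package(Qt6 CONFIG REQUIRED PATHS /opt/qt6/lib/cmake)

add_executable(my_app src/main.cpp)
target_link_libraries(my_app PRIVATE fmt::fmt Qt6::Core)
```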
        
       | duped wrote:
       | FWIW: there is something fundamentally wrong with a meta-meta
       | build system. I don't think you should bother generating or
       | wrapping CMake, you should be replacing it.
        
         | SpaceNoodled wrote:
         | My thoughts exactly. I thought this was going to be some new
         | thing, but it's just yet another reason that I'll stick with
         | Makefiles.
        
           | flohofwoe wrote:
           | Do your Makefiles work across Linux, macOS and Windows
           | (without WSL or MingW), GCC, Clang and MSVC, or allow loading
           | the project into an IDE like Xcode or Visual Studio though?
           | That's why meta-build-systems like cmake were created, not to
           | be a better GNU Make.
        
             | uecker wrote:
             | There is something fundamentally wrong with Windows or
             | Visual Studio that it requires ugly solutions.
        
               | delta_p_delta_x wrote:
               | Windows and Visual Studio solutions are perfectly fine.
               | MSBuild is a declarative build syntax in XML, it's not
               | very different from a makefile.
        
               | uecker wrote:
                | XML is already terrible. But the main problem seems
                | to be that they created something similar to but
                | incompatible with make.
        
               | flohofwoe wrote:
               | Ok, then just cl.exe instead of gcc or clang. Completely
               | different set of command line options from gcc and clang,
               | but that's fine. C/C++ build tooling needs to be able to
               | deal with different toolchains. The diversity of C/C++
               | toolchains is a strength, not a weakness :)
               | 
               | One nice feature of MSVC is that you can describe the
               | linker dependencies in the source files (via #pragma
               | comment(lib, ...)), this enables building fairly complex
               | single-file tools trivially without a build system like
               | this:                  cl mytool.c
               | 
               | ...without having to specify system dependencies like
               | kernel32 etc... on the cmdline.
        
         | flohofwoe wrote:
         | Cmake is doing a lot of underappreciated work under the hood
         | that would be very hard to replicate in another tool, tons of
         | accumulated workarounds for all the different host operating
         | systems, compiler toolchains and IDEs, it's also one of few
         | build tools which properly support Windows and Visual Studio.
         | 
         | Just alone reverse engineering the Xcode and Visual Studio
         | project file formats for each IDE version isn't fun, but this
         | "boring" grunt work is what makes cmake so valuable.
         | 
         | The _core ideas_ of cmake are sound, it 's only the scripting
         | language that sucks.
        
         | SleepyMyroslav wrote:
         | Another fresh example of what you don't like:
         | https://www.youtube.com/watch?v=ExSlx0vBMXo Building C++: It
         | Doesn't Have to be Painful! - Nicole Mazzuca - Meeting C++ 2025
         | 
         | Build systems don't plan to converge in the future =)
        
       | flohofwoe wrote:
       | Heh, looks like cmake-code-generators are all the rage these days
       | ;)
       | 
       | Here's my feeble attempt using Deno as base (it's _extremely_
       | opinionated though and mostly for personal use in my hobby
       | projects):
       | 
       | https://github.com/floooh/fibs
       | 
       | One interesting chicken-egg-problem I couldn't solve is how to
       | figure out the C/C++ toolchain that's going to be used without
       | running cmake on a 'dummy project file' first. For some
       | toolchain/IDE combos (most notably Xcode and VStudio) cmake's
       | toolchain detection takes a _lot_ of time unfortunately.
        
         | apparatur wrote:
         | I'm intrigued by the idea of writing one's own custom build
         | system in the same language as the target app/game; it's
         | probably not super portable or general but cool and easy to
         | maintain for smaller projects:
         | https://mastodon.gamedev.place/@pjako/115782569754684469
        
       | lgtx wrote:
       | The installation instructions being a `curl | sh` writing to the
       | user's bashrc does not inspire confidence.
        
         | ori_b wrote:
         | They did say it was inspired by cargo, which is often installed
         | using rustup as such:                   curl --proto '=https'
         | --tlsv1.2 -sSf https://sh.rustup.rs | sh
        
         | uecker wrote:
         | This is fitting for something simulating cargo, which is a huge
         | supply chain risk itself.
        
         | bikelang wrote:
         | I don't love this approach either (what a security
         | nightmare...) - but it is easy to do for users and developers
         | alike. Having to juggle a bunch of apt-like repositories for
         | different distros is a huge time sink and adds a bunch of build
         | complexity. Brew is annoying with its formulae vs tap vs cask
         | vs cellar - and the associated ruby scripting... And then
         | there's windows - ugh.
         | 
         | I wish there was a dead simple installer TUI that had a common
         | API specification so that you could host your installer spec on
         | your.domain.com/install.json - point this TUI at it and it
         | would understand the fine grained permissions required, handle
         | required binary signature validation, manifest/sbom validation,
         | give the user freedom to customize where/how things were
         | installed, etc.
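No such spec exists today; purely as a sketch of what an install.json of the kind described might contain (every field name here is invented):

```json
{
  "name": "craft",
  "version": "1.0.0",
  "artifacts": [
    {
      "os": "linux-x86_64",
      "url": "https://your.domain.com/craft-linux-x86_64.tar.gz",
      "sha256": "<artifact digest>"
    }
  ],
  "permissions": ["write:$HOME/.local/bin", "append:$HOME/.bashrc"],
  "signature": { "scheme": "minisign", "pubkey": "<publisher key>" }
}
```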
        
         | maccard wrote:
         | Given you're about to run a binary, it's no worse than that.
        
           | hyperhopper wrote:
            | It is definitely worse. At least a binary is constant, on
            | your system, and can be analyzed. Curl|sh can give you
            | different responses than just curling. Far far worse
        
             | maccard wrote:
              | Only if you download and analyse it. You're free to
              | download the install script and analyze that too in the
              | same way. The advantage the script has is that it's
              | human-readable, unlike the binary you're about to
              | execute blindly.
        
       | spwa4 wrote:
        | Just switch to bazel, copy my hermetic build config and just
        | use it ... yes, you can hate me now.
        
       | cherryteastain wrote:
       | Seems to solve a problem very similar to Conan or vcpkg but
       | without its own package archive or build scripts. In general,
       | unlike Cargo/Rust, many C/C++ projects dynamically link libraries
       | and often require complex Makefile/shell script etc magic to
       | discover and optionally build their dependencies.
       | 
       | How does craft handle these 'diamond' patterns where 2
       | dependencies may depend on versions of the same library as
       | transitive dependencies (either for static or dynamic linking or
       | as header-only includes) without custom build scripts like the
       | Conan approach?
        
       | Surac wrote:
        | Uses CMake? Sorry, not for me. Call me old but I prefer good
        | old make or batch. Maybe it's because I can understand those
        | tools. Debugging CMake build problems made me hate it. Also I
        | code for embedded CPUs, and most of the time CMake is just
        | overkill and does not play well with the compiler/binutils
        | provided. The platform independence just does not happen in
        | those environments.
        
         | bluGill wrote:
          | Make is easier for simple projects, I will grant. However,
          | when your project gets at all complex, make becomes a real
          | pain and cmake becomes much easier.
         | 
         | Cmake has a lot of warts, but they have also put a lot of
         | effort into finding and fixing all those weird special cases.
         | If your project uses CMake odds are high it will build
         | anywhere.
        
           | tosti wrote:
           | Odds are high the distro maintainer will lose hair trying to
           | package it
        
           | lkjdsklf wrote:
           | Also, for better or worse, cmake is pretty much the
           | "standard" for C/C++ these days.
           | 
            | Fighting the standard often creates its own set of
            | problems and nightmares that just aren't worth it.
            | Especially true in C++, where you often have to integrate
            | with other projects and their build systems. Way easier if
            | you just use cmake like everyone else.
           | 
           | Even the old hold outs, boost and google open source, now use
           | cmake for their open source stuff.
        
         | delta_p_delta_x wrote:
         | > most of the time CMAKE is just overkill and does not play
         | well the compiler/binutils provided
         | 
         | You need to define a CMake toolchain[1] and pass it to CMake
         | with --toolchain /path/to/file in the command-line, or in a
         | preset file with the key `toolchainFile` in a CMake preset.
         | I've compiled for QNX and ARM32 boards with CMake, no issues,
         | but this needs to be done.
         | 
         | [1]: https://cmake.org/cmake/help/latest/manual/cmake-
         | toolchains....
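For reference, a bare-bones cross toolchain file of the kind passed via --toolchain; the compiler and sysroot paths here are illustrative:

```cmake
# arm32-toolchain.cmake -- minimal cross-compile toolchain sketch.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Search headers/libs only in the target sysroot, never on the host.
set(CMAKE_FIND_ROOT_PATH /opt/sysroots/arm32)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Invoked as `cmake --toolchain arm32-toolchain.cmake -B build`.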
        
         | vnorilo wrote:
         | When you need a configuration step, cmake will actually save
         | you a lot of time, especially if you work cross platform or
         | even cross compile. I love to hate cmake as much as the next
         | guy, and it would be hard to design a worse scripting language,
         | but I'll take it any time over autoconf. Some of the newer
         | tools may well be more convenient - I tried Bazel, and it sure
         | wasn't (for me).
         | 
         | If you're happy to bake one config in a makefile, then cmake
         | will do very little for you.
        
         | Night_Thastus wrote:
         | For toy projects good old Make is fine...but at some point a
         | project gets large enough that you need something more
         | powerful. If you need something that can deal with multiple
         | layers of nested sub-repositories, third-party and first-party
         | dependencies, remote and local projects, multiple build
         | configurations, dealing with non-code assets like
         | documentation, etc, etc, etc - Make just isn't enough.
        
       | looneysquash wrote:
       | Nice. I have been thinking of making something similar. Now
       | hopefully I don't have to!
       | 
       | Not sure how big your plans are.
       | 
       | My thoughts would be to start as a cmake generator but to
       | eventually replace it. Maybe optionally.
       | 
        | And to integrate support for existing package managers like
        | vcpkg.
       | 
        | At the same time, I'd want to remain modular enough that it's
        | not all or nothing. I also don't like lock-in.
       | 
       | But right now package management and build system are decoupled
       | completely. And they are not like that in other ecosystems.
       | 
       | For example, Cmake can use vcpkg to install a package but then I
       | still have to write more cmake to actually find and use it.
        
         | psyclobe wrote:
         | > For example, Cmake can use vcpkg to install a package but
         | then I still have to write more cmake to actually find and use
         | it.
         | 
         | I have this solved at our company. We have a tool built on top
         | of vcpkg, to manage internal + external dependencies. Our cmake
         | linker logic leverages the port names and so all you really do
         | is declare your manifest file (vcpkg.json) then declare which
         | one of them you will export publicly.
         | 
         | Everything after that is automatic including the exported cmake
         | config for your library.
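The manifest side of that flow is standard vcpkg (the internal tooling described above is not public); a minimal vcpkg.json looks roughly like:

```json
{
  "name": "my-library",
  "version": "0.1.0",
  "dependencies": [
    "fmt",
    { "name": "boost-asio", "version>=": "1.84.0" }
  ]
}
```

With manifest mode enabled, vcpkg installs these automatically at configure time; the extra CMake to find and link them is the gap the parent comments discuss.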
        
       | tombert wrote:
       | This certainly seems less awful than the typical C building
       | process.
       | 
       | What I've been doing to manage dependencies in a way that doesn't
       | depress me much has been Nix flakes, which allows me a pretty
       | straightforward `nix build` with the correct dependencies built
       | in.
       | 
        | I'm just a bit curious though; a lot of C libraries are
        | system-wide and usually require the system package manager
        | (e.g. libsdl2-dev). Does this have an elegant way to handle
        | those?
        
         | randerson_112 wrote:
          | Yes, it is true that many libraries are system-wide. System
          | dependencies are something I had on the list of features to
          | add. Thank you for the feedback!
        
       | delduca wrote:
       | Compared to Conan, what are the advantages?
        
         | randerson_112 wrote:
         | Craft has project management and generates starter project
         | structure. You can generate header and source files with
         | boilerplate starter code. Craft manages the building of the
         | project so you don't need to write much CMake. You can also
         | save project structures as templates and instantiate those
         | templates in new projects ready to go.
        
           | delduca wrote:
            | How can you be better than CMake?
        
       | bluGill wrote:
       | Anyone can make a tool that solves a tiny part of the problem.
       | however the reason no such tool has caught on is because of all
       | the weird special cases you need to handle before it can be
       | useful. Even if you limit your support to desktop: OS/X and
       | Windows that problem will be hard, adding various linux flavors
       | is even more difficult, not to mention BSD. The above is the
       | common/mainstream choices, there Haiku is going to be very
       | different, and I've seen dozens of others over the years, some of
       | them have a following in their niche. Then there are people
       | building for embedded - QNX, vxworks, or even no OS just bare
       | metal - each adding weirdness (and implying cross compiling which
       | makes everything harder because your assumptions are always
       | wrong).
       | 
       | I'm sorry I have to be a downer, but the fact is if you can use
       | the word "I" your package manager is obviously not powerful
       | enough for the real world.
        
         | the__alchemist wrote:
          | I will categorize this as a pattern I've seen which leads to
          | stagnation, or is at least aiming for it. Usually these are
          | built on one or more assumptions which don't hold. The flow
          | of this pattern:
          | 
          | - Problem exists
          | - Proposals of solutions (varying quality), or not
          | - "You can't just solve this. It's complicated! This problem
          |   must exist." (The post I'm replying to)
          | - Problem gets solved, hopefully.
         | 
          | Anecdotes I'm choosing based on proximity to this particular
          | problem: uv and cargo. uv because people said the same thing
          | about python packaging, and cargo because it's adjacent to C
          | and C++ in terms of being a low-level compiled language used
          | for systems programming, embedded/bare-metal etc.
         | 
         | The world is rich in complexity, subtlety, and exceptions to
         | categorization. I don't think this should block us from solving
         | problems.
        
           | bluGill wrote:
           | I didn't say the problem couldn't be solved. I said the
           | problem can't be solved by one person. There is a difference.
           | (maybe it can be solved by one person over a few decades)
        
             | tekne wrote:
             | I mean -- if I'm going to join a team to solve the hard
             | 20%, I'd like to see the idea validated against the easy
             | 80% first.
             | 
             | If it's really bad, at least the easy 20%.
        
             | randerson_112 wrote:
              | This is true. There is no way I could solve a problem of
              | this scale by myself. That is why this is an open source
              | project, open to everyone to make changes. There is
              | still much more to improve; this is only day 1 of
              | release to the public.
        
         | omcnoe wrote:
         | There are so many reasons why C/C++ build systems struggle, but
         | imo power is the last of them. "Powerful" and "scriptable"
         | build systems are what has gotten us into the swamp!
         | 
         | * Standards committee is allergic to standardizing anything
         | outside of the language itself: build tools, dependency
         | management, even the concept of a "file" is controversial!
         | 
         | * Existing poor state of build systems is viral - any new build
         | system is 10x as complex as a clean room design because you
         | have to deal with all the legacy "power" of previous build
         | tooling. Build system flaws propagate - the moment you need
         | hacks in your build, you start imposing those hacks on
         | downstream users of your library also.
         | 
         | Even CMake should be a much better experience than it is - but
         | in the real world major projects don't maintain their CMake
         | builds to the point you can cleanly depend on them. Things like
         | using raw MY_LIB_DIR variables instead of targets, hacky/broken
         | feature detection flags etc. Microsoft tried to solve this
         | problem via vcpkg, ended up having to patch builds of 90% of
         | the packages to get it to work, and it's still a poor
         | experience where half the builds are broken.
         | 
         | My opinion is that a new C/C++ build/package system is actually
         | a solvable problem now with AI. Because you can point Opus 4.6
         | or whoever at the massive pile of open source dependencies, and
         | tell it for each one "write a build config for this package
         | using my new build system" which solves the gordian knot of the
         | ecosystem problem.
        
           | bluGill wrote:
           | No scripts sounds nice until you are doing something weird
           | that the system doesn't cover. Cmake is starting to get all
           | the possible weirdness right without scripts but there are
           | still a few cases it can't handle.
        
       | seniorThrowaway wrote:
       | Having to work around a massive C++ software project daily, I
       | wish you luck. We use conan2, and while it can be very
       | challenging to use, I've yet to find something better that can
       | handle incorporating as dependencies ancient projects that still
       | use autoconf or even custom build tooling. It's also very good at
       | detecting and enforcing ABI compatibility, although there are
       | still some gaps. This problem space is incredibly hard and
       | improving it is a prime driver for the creation of many of the
       | languages that came after C/C++
        
         | mgaunard wrote:
         | I find that conan2 is mostly painful with ABI. Binaries from
         | GCC are all backwards compatible, as are C++ standard versions.
         | The exception is the C++11 ABI break.
         | 
         | And yet it will insist on only giving you binaries that match
         | exactly. Thankfully there are experimental extensions that
         | allow it to automatically fall back.
        
       | gavinray wrote:
       | The least painful C/C++ build tool I've used is xmake
       | 
       | https://github.com/xmake-io/xmake
       | 
       | The reason why I like it (beyond ease-of-use) is that it can spit
       | out CMakeLists.txt and compile_commands.json for IDE/LSP
       | integration and also supports installing Conan/vcpkg libraries or
        | even Git repos.
        | 
        | set_project("myapp")
        | set_languages("c++20")
        | add_requires("conan::fmt/11.0.2", {alias = "fmt"})
        | add_requires("vcpkg::fmt", {alias = "fmt"})
        | add_requires("git://github.com/fmtlib/fmt v11.0.2", {alias = "fmt"})
        | 
        | target("myapp")
        |     set_kind("binary")
        |     add_files("src/*.cpp")
        |     add_packages("fmt")
       | 
        | Then you use it like:
        | 
        | # Generate compile_commands.json and CMakeLists.txt
        | $ xmake project -k compile_commands
        | $ xmake project -k cmake
        | 
        | # Build + run
        | $ xmake && xmake run myapp
        
         | delta_p_delta_x wrote:
         | Agreed, xmake seems very well-thought-out, and supports the
         | most modern use-cases (C++20 named modules, header unit
         | modules, and `import std`, which CMake still has a lot of
         | ceremony around). I should switch to it.
        
         | ethin wrote:
         | I would happily switch to it in a heartbeat if it was a lot
         | more well-documented and if it supported even half of what
         | CMake does.
         | 
         | As an example of what I mean, say I want to link to the FMOD
         | library (or any library I legally can't redistribute as an
         | SDK). Or I want to enable automatic detection on Windows where
         | I know the library/SDK is an installer package. My solution, in
         | CMake, is to just ask the registry. In XMake I still can't
         | figure out how to pull this off. I know that's pretty niche,
         | but still.
         | 
          | The documentation gap is the biggest hurdle. A lot of the
          | functions/ways of doing things are poorly documented, if
          | they are documented at all - including a CMake library that
          | isn't in any of the package managers, for example. It also
          | has some weird quirks: automatic/magic scoping (which is NOT
          | a bonus) along with a hacky "import" function instead of
          | using native require.
         | 
         | All of this said, it does work well when it does work.
         | Especially with modules.
        
         | IshKebab wrote:
         | I've had some experience with this but it seems to be rather
         | slow, very niche and tbh I can't see a reason to use it over
         | CMake.
        
         | NekkoDroid wrote:
          | Similar to premake, I have never been a fan of the global
          | state for defining targets. Give me an object or some handle
          | that I call functions on/pass to functions. CMake eventually
          | got this somewhat right by moving to target-based definition
          | for its stuff, and since I've really learned it I have been
          | kinda happy with it.
        
         | eqvinox wrote:
         | actually looks very similar to Meson [https://mesonbuild.com/],
         | which is getting a lot of traction in FOSS
         | [https://mesonbuild.com/Users.html]
         | 
          | e.g. from their docs:
          | 
          | project('sdldemo', 'c',
          |   default_options: 'default_library=static')
          | 
          | sdl2_dep = dependency('sdl2')
          | sdl2_main_dep = dependency('sdl2main')
          | 
          | executable('sdlprog', 'sdlprog.c',
          |   win_subsystem: 'windows',
          |   dependencies: [sdl2_dep, sdl2_main_dep])
        
           | cassepipe wrote:
            | Meson is a python layer over the ninja builder, like cmake
            | can be. xmake is both a build tool and a package manager,
            | fast like ninja, and has no DSL - the build file is just
            | lua. It's more like cargo than meson is.
        
             | eqvinox wrote:
             | I didn't claim it was a package manager, just that it
             | looked similar. The root post said "build tool", and that's
             | what Meson is as well.
             | 
             | Other than that, both "python layer" and "over the ninja
             | builder" are technically wrong. "python layer" is off since
             | there is now a second implementation, Muon
             | [https://muon.build/], in C. "over the ninja builder" is
             | off since it can also use Visual Studio's build
             | capabilities on Windows.
             | 
             | Interestingly, I'm unaware of other build-related systems
             | that have multiple implementations, except Make (which is
             | in fact part of the POSIX.1 standard.) Curious to know if
             | there are any others.
        
       | shevy-java wrote:
       | Will take C only 51 years to adopt.
        
       | dima55 wrote:
       | If you think cmake isn't very good, the solution isn't to add
       | more layers of crap around cmake, but to replace it. Cmake itself
       | exists because a lot of humans haven't bothered to read the gnu
       | make manual, and added more cruft to manage this. Please don't
       | add to this problem. It's a disease
        
         | dymk wrote:
         | As much of a dog as cmake is, "just use make!" does not solve
         | many of the problems that cmake makes a go at. It's like saying
         | go write assembler instead of C because C has so many footguns.
        
           | dima55 wrote:
           | GNU Make has a debugger. This alone makes it far superior to
           | every other build tool I've ever seen. The cmake debugging
           | experience is "run a google search, and try random stuff
           | recommended by other people that also have no idea how the
           | thing works". This shouldn't be acceptable.
        
             | beckford wrote:
              | That hasn't been true for a few years at least.
              | https://www.jetbrains.com/help/clion/cmake-debug.html -
              | CLion has had CMake debugging since cmake 3.27. Ditto
              | for vscode and probably other C IDEs I am not familiar
              | with. So does Gradle for Java. GNU make is hardly
              | exclusive.
        
         | wiseowise wrote:
         | I'm all for shitting on CMake, but Jesus, to suggest Make as a
         | replacement/improvement is an unhinged take.
        
           | dima55 wrote:
           | I'm suggesting that people creating build systems read the
           | make manual. Surely this isn't controversial?
        
             | nnevatie wrote:
             | People using CMake might want to build the same code on
             | multiple platforms - this is trivially achievable, unlike
             | with Make.
        
         | randerson_112 wrote:
          | This is very true. My thought process was that since the
          | majority of projects already run on CMake, I would simply
          | build off of that and take advantage of what CMake is good
          | at while making the more difficult operations easier. Thank
          | you for your feedback!
        
       | mutkach wrote:
       | Please consider adding `cargo watch` - that would be a killer
       | feature!
        
         | randerson_112 wrote:
         | Yes! This is definitely on the list of features to add. Thank
         | you for the feedback!
        
       | looneysquash wrote:
       | Besides Cargo, you might want to take a look at Python's
       | pyproject.toml standard.
       | https://packaging.python.org/en/latest/guides/writing-pyproj...
       | 
       | It's similar, but designed for an existing ecosystem. Cargo is
       | designed for `cargo`, obviously.
       | 
       | But `pyproject.toml` is designed for the existing tools to all
       | eventually adopt. (As well as new tools, of course.)
        
       | kjksf wrote:
        | In the age of AI, tools like this are pointless. Especially new
       | ones, given existence of make, cmake, premake and a bunch of
       | others.
       | 
       | C++ build system, at the core, boils down to calling gcc foo.c -o
        | foo.obj / link foo.obj foo.exe (please forgive me if I got the
        | syntax wrong).
       | 
       | Sure, you have more .c files, and you pass some flags but that's
       | the core.
       | 
       | I've recently started a new C++ program from scratch.
       | 
       | What build system did I write?
       | 
       | I didn't. I told Claude:
       | 
       | "Write a bun typescript script build.ts that compiles the .cpp
       | files with cl and creates foo.exe. Create release and debug
       | builds, trigger release build with -release cmd-line flag".
       | 
       | And it did it in minutes and it worked. And I can expand it with
       | similar instructions. I can ask for release build with all the
       | sanitize flags and claude will add it.
       | 
       | The particulars don't matter. I could have asked for a makefile,
       | or cmake file or ninja or a script written in python or in ruby
       | or in Go or in rust. I just like using bun for scripting.
       | 
       | The point is that in the past I tried to learn cmake and good
        | lord, it's days spent learning something that I'll spend 1 hr
       | using.
       | 
       | It just doesn't make sense to learn any of those tools given that
        | claude can give me any working build system in minutes.
       | 
       | It makes even less sense to create new build tools. Even if you
       | create the most amazing tool, I would still choose spending a
       | minute asking claude than spending days learning arbitrary syntax
       | of a new tool.
        
         | duped wrote:
         | You're missing finding library/include paths, build
         | configuration (`-D` flags for conditional compilation),
         | fetching these from remote repositories, and versioning.
        
         | randerson_112 wrote:
         | This is a fair and valid point. However, why leave your
         | workflow to write a prompt to an AI when you can run simple
          | commands in your workspace? Also, you are most likely paying to
         | use the AI while Craft is free and open source and will only
         | continue to improve. I respect your feedback though, thank you!
        
         | nnevatie wrote:
         | The same AI tool could have written a de-facto CMakeLists.txt
         | file for you.
        
       | adev_ wrote:
        | Feedback from someone who is used to managing a large (>1500
        | component) software stack in C / C++ / Fortran / Python / Rust /
        | etc.:
       | 
        | - (1) Provide a way to compile without internet access and to
        | specify the associated dependency paths manually. This is
       | absolutely critical.
       | 
       | Most 'serious' multi-language package managers and integration
       | systems are building in a sandbox without internet access for
       | security reasons and reproducibility reasons.
       | 
        | If your build system does not allow building offline with
        | manually specified dependencies, you will make the lives of
        | integrators and package managers miserable and they will avoid
       | your project.
       | 
        | - (2) _Never_ _ever_ build with '-O3 -march=native' by default. This
       | is always a red flag and a sign of immaturity. People expect code
       | to be portable and shippable.
       | 
       | Good default options should be CMake equivalent of
       | "RelWithDebInfo" (meaning: -O2 -g -DNDEBUG ).
       | 
       | -O3 can be argued. -march=native is always always a mistake.
       | 
        | - (3) Allow your build tool to be built by another build tool
       | (e.g CMake).
       | 
       | Anybody caring about reproducibility will want to start from
        | sources, not from a pre-compiled binary. This also matters for
       | cross compilation.
       | 
       | - (4) Please offer a compatibility with pkg-config
       | (https://en.wikipedia.org/wiki/Pkg-config) and if possible CPS
       | (https://cps-org.github.io/cps/overview.html) for both
       | consumption and generation.
       | 
       | They are what will allow interoperability between your system and
       | other build systems.
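        | For reference, a pkg-config file is just a small text stanza; a
        | hypothetical one for a library named 'craftlib' might look like:

```
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: craftlib
Description: Hypothetical example library
Version: 0.1.0
Cflags: -I${includedir}
Libs: -L${libdir} -lcraftlib
```

        | Consumers then query it with 'pkg-config --cflags --libs
        | craftlib'.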
       | 
        | - (5) Last but not least: seriously consider the cross-
        | compilation use case.
       | 
       | It is common in the world of embedded systems to cross compile.
       | Any build system that does not support cross-compilation will be
       | de facto banned from the embedded domain.
        
         | moralestapia wrote:
         | >15000
         | 
         | 15000 what?
        
           | adev_ wrote:
           | 1500 C/C++ individual software components.
           | 
           | The 15000 was a typo on my side. Fixed.
        
             | moralestapia wrote:
              | I see, thanks. I didn't mind the number, it just wasn't
              | clear what it was about.
        
         | tgma wrote:
         | > _-march=native is always always a mistake_
         | 
         |  _Gentoo user_ : hold my beer.
        
           | jjmarr wrote:
           | It's also an option on NixOS but I haven't managed to get it
           | working unlike Gentoo.
        
           | CarVac wrote:
           | Gentoo binaries aren't shipped that way
        
             | greenavocado wrote:
             | Gentoo..... _distributes binaries_?
        
               | rascul wrote:
               | Yes
               | 
                | https://wiki.gentoo.org/wiki/Gentoo_Binary_Host_Quickstart
        
               | digitalPhonix wrote:
               | But not with march=native?
               | 
                | The distributed binaries use two standard instruction
                | sets for x86-64 and one for arm, like "march=x86-64-v3".
               | 
                | https://wiki.gentoo.org/wiki/Gentoo_binhost/Available_packag...
        
         | CoastalCoder wrote:
          | > Never ever build in '-O3 -march=native' by default. This is
         | always a red flag and a sign of immaturity.
         | 
         | Perhaps you can see how there are some assumptions baked into
         | that statement.
        
           | eqvinox wrote:
            | What assumptions would those be?
           | 
           | Shipping anything built with -march=native is a horrible
           | idea. Even on homogeneous targets like one of the clouds, you
           | never know if they'll e.g. switch CPU vendors.
           | 
           | The correct thing to do is use microarch levels (e.g.
           | x86-64-v2) or build fully generic if the target architecture
           | doesn't have MA levels.
        
             | tempest_ wrote:
             | I build on the exact hardware I intend to deploy my
             | software to and ship it to another machine with the same
             | specs as the one it was built on.
             | 
             | I am willing to hear arguments for other approaches.
        
               | eqvinox wrote:
               | I'm willing to hear arguments for your approach?
               | 
                | It certainly has scale issues when you need to support
               | larger deployments.
               | 
               | [P.S.: the way I understand the words, "shipping" means
               | "passing it off to someone else, likely across org
               | boundaries" whereas what you're doing I'd call
               | "deploying"]
        
               | teo_zero wrote:
               | So, do you see now the assumptions baked in your
               | argument?
               | 
               | > when you need to support larger deployments
               | 
               | > shipping
               | 
               | > passing it off to someone else
        
               | zahllos wrote:
               | Not the OP, but: -march says the compiler can assume that
               | the features of that particular CPU architecture family,
               | which is broken out by generation, can be relied upon. In
               | the worst case the compiler could in theory generate code
               | that does not run on older CPUs of the same family or
               | from different vendors.
               | 
               | -mtune says "generate code that is optimised for this
                | architecture" but it doesn't enable arch-specific
                | features.
               | 
               | Whether these are right or not depends on what you are
               | doing. If you are building gentoo on your laptop you
               | should absolutely -mtune=native and -march=native. That's
               | the whole point: you get the most optimised code you can
               | for your hardware.
               | 
               | If you are shipping code for a wide variety of
               | architectures and crucially the method of shipping is
               | binary form then you want to think more about what you
               | might want to support. You could do either: if you're
               | shipping standard software pick a reasonable baseline
               | (check what your distribution uses in its cflags). If
               | however you're shipping compute-intensive software
               | perhaps you load a shared object per CPU family or build
               | your engine in place for best performance. The Intel
               | compiler quite famously optimised per family, included
               | all the copies in the output and selected the worst one
                | on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)
        
               | dijit wrote:
               | What?! seriously?!
               | 
               | I've never heard of anyone doing that.
               | 
               | If you use a cloud provider and use a remote development
               | environment (VSCode remote/Jetbrains Gateway) then you're
               | wrong: cloud providers swap out the CPUs without telling
                | you, and can sell newer CPUs at older prices if there's
               | less demand for the newer CPUs; you can't rely on that.
               | 
               | To take an old naming convention, even an E3-Xeon CPU is
               | not equivalent to an E5 of the same generation. I'm
               | willing to bet it mostly works but your claim "I build on
               | the exact hardware I ship on" is much more strict.
               | 
               | The majority of people I know use either laptops or
               | workstations with Xeon workstation or Threadripper CPUs--
               | but when deployed it will be a Xeon scalable datacenter
               | CPU or an Epyc.
               | 
               | Hell, I work in gamedev and we cross compile basically
               | everything for consoles.
        
               | ninkendo wrote:
               | ... not everyone uses the cloud?
               | 
               | Some people, _gasp_ , run physical hardware, that they
               | bought.
        
               | lkjdsklf wrote:
               | We use physical hardware at work, but it's still not the
               | way you build/deploy unless it's for a workstation/laptop
               | type thing.
               | 
               | If you're deploying the binary to more than one machine,
               | you quickly run into issues where the CPUs are different
               | and you would need to rebuild for each of them. This is
               | feasible if you have a couple of machines that you
               | generally upgrade together, but quickly falls apart at
               | just slightly more than 2 machines.
        
               | dijit wrote:
                | And all your deployed and dev machines run the same
                | spec - the same CPU entirely?
               | 
               | And you use them for remote development?
               | 
               | I think this is highly unusual.
        
               | izacus wrote:
                | So you buy the exact same generation of Intel and AMD
                | chips for your developers as for your servers and your
                | customers? And encode this requirement into your
                | development process for the future?
        
               | tom_ wrote:
               | On every project I've worked on, the PC I've had has been
               | much better than the minimum PC required. Just because
               | I'm writing code that will run nicely enough on a slow
               | PC, that doesn't mean I need to use that same slow PC to
               | build it!
               | 
               | And then, the binary that the end user receives will
               | actually have been built on one of the CI systems. I bet
               | they don't all have quite the same spec. And the above
               | argument applies anyway.
        
               | eslaught wrote:
               | Just popping in here because people seem to be surprised
               | by
               | 
               | > I build on the exact hardware I intend to deploy my
               | software to and ship it to another machine with the same
               | specs as the one it was built on.
               | 
               | This is exactly the use case in HPC. We always build
               | -march=native and go to some trouble to enable all the
               | appropriate vectorization flags (e.g., for PowerPC) that
               | don't come along automatically with the -march=native
               | setting.
               | 
               | Every HPC machine is a special snowflake, often with its
               | own proprietary network stack, so you can forget about
               | binaries being portable. Even on your own machine you'll
               | be recompiling your binaries every time the machine goes
               | down for a major maintenance.
        
               | pjmlp wrote:
                | So I take it you don't do cloud, embedded, game
                | consoles, or mobile devices.
               | 
               | Quite hard to build on the exact hardware for those
               | scenarios.
        
           | izacus wrote:
           | Not assumptions, experience.
           | 
           | I fully concur with that whole post as someone who also
           | maintained a C++ codebase used in production.
        
           | PufPufPuf wrote:
           | The only time I used -march=native was for a university
           | assignment which was built and evaluated on the same server,
           | and it allowed juicing an extra bit of performance. Using it
           | basically means locking the program to the current CPU only.
           | 
           | However I'm not sure about -O3. I know it can make the binary
           | larger, not sure about other downsides.
        
             | hmry wrote:
             | -O3 also makes build times longer (sometimes
             | significantly), and occasionally the resulting program is
             | actually slightly slower than -O2.
             | 
             | IME -O3 should only be used if you have benchmarks that
             | show -O3 actually produces a speedup for your specific
             | codebase.
        
               | fyrn_ wrote:
                | This varies a lot between compilers. Clang for example
                | treats O3 perf regressions as bugs (in many cases at
                | least) and is a bit more reasonable with O3 on. GCC
                | goes full mad max and you don't know what it's going to
                | do.
        
             | adev_ wrote:
             | > The only time I used -march=native
             | 
             | It is completely fine to use -march=native, just do _not_
             | make it the default for someone building your project.
             | 
             | That should always be something to opt-in.
             | 
              | The main reason is that software _is_ a composite of
              | (many) components. It quickly becomes a maintainability
              | pain in the ass if any tiny library somewhere tries to
              | sneak in '-march=native', which will make the final
              | binary randomly crash with an illegal instruction error
              | if executed on _any_ CPU that is not exactly the same as
              | the host.
             | 
             | When you design a build system configuration, think for the
             | others first (the users of your software), and yourself
             | after.
        
             | pclmulqdq wrote:
             | If you have a lot of "data plane" code or other looping
             | over data, you can see a big gain from -O3 because of more
             | aggressive unrolling and vectorization (HPC people use -O3
              | quite a lot). CRUD-like applications and other things that
              | are branchy and heavy on control flow will often see a
              | mild performance regression from -O3 compared to -O2,
              | because of frequency throttling triggered by AVX
              | instructions and the larger binary size.
        
         | Teknoman117 wrote:
         | As someone who has also spent two decades wrangling C/C++
         | codebases, I wholeheartedly agree with every statement here.
         | 
         | I have an even stronger sentiment regarding cross compilation
         | though - In any build system, I think the distinction between
         | "cross" and "non-cross" compilation is an anti-pattern.
         | 
         | Always design build systems assuming cross compilation. It
         | hurts nothing if it just so happens that your host and target
         | platform/architecture end up being the same, and saves you
         | everything down the line if you need to also build binaries for
         | something else.
        
           | bsder wrote:
           | > In any build system, I think the distinction between
           | "cross" and "non-cross" compilation is an anti-pattern.
           | 
           | This is one of the huge wins of Zig. Any Zig host compiler
           | can produce output for any supported target. Cross compiling
           | becomes straightforward.
        
         | pjmlp wrote:
         | Agree with the feedback.
         | 
          | Also, the problem isn't creating a cargo-like tool for C and
          | C++, that is the easy part. The problem is getting a bigger
          | user base than vcpkg or conan, for it to matter for those
          | communities.
        
       | nesarkvechnep wrote:
       | As long as it's for C/C++ and not C or C++, I'm skeptical.
        
         | randerson_112 wrote:
         | Why do you say this? I respect it, I'm just curious.
        
           | avadodin wrote:
            | C/C++ is HR-newspeak out of the 1990s (at the time it was
            | not clear that anyone would still want to use C, and MSVC
            | did move their compiler to C++).
           | 
           | It signals that the speaker doesn't understand that the two
           | are different languages with very different communities.
           | 
           | I don't really think that C users are entirely immune to
           | dependency hell, if that's what OP meant, though. It is
           | orthogonal.
           | 
           | As a user, I do believe it sucks when you depend on something
            | that is not included by default on all target platforms
            | (and you fail to include it and maintain it within your
            | source tree).
        
             | unclad5968 wrote:
             | What part of the build process is different for C?
        
               | avadodin wrote:
               | I explained why C/C++ rubbed op the wrong way. It has
               | nothing to do with a build process.
               | 
                | It is probably true that the average C program is more
                | likely than the average C++ program to build with a
                | plain Makefile, or even without one, though.
               | 
               | You can of course add dependencies on configure scripts,
               | m4, cmake, go, python or rust when building a plain self-
               | contained C program and indeed many do.
        
       | einpoklum wrote:
       | Impression before actually trying this:
       | 
        | CMake is a combination of a warthog of a specification language
        | and mechanisms for handling a zillion idiosyncrasies and corner
        | cases of everything.
        | 
        | I doubt that < 10,000 lines of C code can cover much of that.
       | 
       | I am also doubtful that developers are able to express the exact
       | relations and semantic nuances they want to, as opposed to some
       | default that may make sense for many projects, but not all.
       | 
       | Still - if it helps people get started on simpler or more
       | straightforward projects - that's neat :-)
        
       | randerson_112 wrote:
       | Thank you everyone for the feedback so far! I just wanted to say
       | that I understand this is not a fully cohesive and functional
       | project for every edge case. This is the first day of releasing
       | it to the public and it is only the beginning of the journey. I
       | do not expect to fully solve a problem of this scale on my own,
       | Craft is open source and open to the community for development. I
       | hope that as a community this can grow into a more advanced and
       | widely adopted tool.
        
         | alonsovm wrote:
          | This project is something I'd do - I had the idea about the
          | same time as you did, "why doesn't something like cargo for
          | C++ exist?", and you did it, thanks I guess.
        
       | wild_pointer wrote:
       | What about cmkr?
       | 
       | https://cmkr.build/
        
       | thegrim33 wrote:
        | The project description is AI generated, even the HN post is AI
        | generated. Why should I spend any energy looking into your
        | project when all you're doing is slinging AI slop around and you
        | couldn't be bothered to put any effort in yourself?
        
       | forrestthewoods wrote:
       | Cmake is infamously not a build system. It is a build system
       | generator.
       | 
       | This is now a build system generator generator. This is the wrong
       | solution imho. The right solution is to just build a build system
       | that doesn't suck. Cmake sucks. Generating suck is the wrong
       | angle imho.
        
         | nnevatie wrote:
          | CMake might suck, but it is arguably the de facto standard
          | now. It's not an official standard, since the C++ committee
          | does not want to deal with the real world (tooling).
        
           | forrestthewoods wrote:
           | Python was also a shitshow and UV became the new standard in
           | literally less than a year.
           | 
           | That's an existence proof that a new tool that doesn't suck
           | can take over an ecosystem.
        
             | nnevatie wrote:
             | Completely agreed. However, typically a new tool needs to
             | be significantly better for that to happen. In many ways, I
             | see Meson already being that but it hasn't really gained
             | traction at scale.
        
       | littlestymaar wrote:
       | "Show HN" has really become a Claude code showcase in the last 6
       | months, maybe it's time to sunset the format at this point ...
        
         | bangaladore wrote:
         | Yup, I read "-- think Cargo, but for C/C++." and closed the
         | tab.
        
           | nnevatie wrote:
           | The em dash - it's always the em dash.
        
       | sebastos wrote:
       | The tough truth is that there already is a cargo for C/C++:
       | Conan2. I know, python, ick. I know, conanfile.py, ick. But
       | despite its warts, Conan fundamentally CAN handle every part of
       | the general problem. Nobody else can. Profiles to manage host vs.
       | target configuration? Check. Sufficiently detailed modeling of
       | ABI to allow pre-compiled binary caching, local and remote?
       | Check, check, check. Offline vs. Online work modes? Check.
       | Building any relevant project via any relevant build system,
       | including Meson, without changes to the project itself? Check.
       | Support for pulling build-side requirements? Check. Version
       | ranges? Check. Lockfiles? Check. Closed-source, binary-only
       | dependencies? Check.
       | 
       | Once you appreciate the vastness of the problem, you will see
       | that having a vibrant ecosystem of different competing package
       | managers sucks. This is a problem where ONE standard that can
       | handle every situation is incalculably better than many different
       | solutions which solve only slices of the problem. I don't care
       | how terse craft's toml file is - if it can't cross compile, it's
       | useless to me. So my project can never use your tool, which
       | implies other projects will have the same problem, which implies
       | you're not the one package manager / build system, which means
       | you're part of the problem, not the solution. The Right Thing is
       | to adopt one unilateral standard for all projects. If you're
       | remotely interested in working on package managers, the best way
       | to help the human race is to fix all of the outstanding things
       | about Conan that prevent it from being the One Thing. It's the
       | closest to being the One Thing, and yet there are still many
       | hanging chads:
       | 
       | - its terribly written documentation
       | 
       | - its incomplete support for editable packages
       | 
       | - its only nascent support for "workspaces"
       | 
       | - its lack of NVIDIA recipes
       | 
       | If you really can't stand to work on Conan (I wouldn't blame
       | you), another effort that could help is the common package
       | specification format (CPS). Making that a thing would also be a
       | huge improvement. In fact, if it succeeds, then you'd be free to
       | compete with conan's "frontend" ergonomics without having to
       | compete with the ecosystem.
        
         | looneysquash wrote:
         | > The tough truth is that there already is a cargo for C/C++:
         | Conan2
         | 
         | Is it though?
         | 
         | When I read the tutorial:
         | https://docs.conan.io/2/tutorial/consuming_packages/build_si...
         | 
         | It says to hand write a `CMakeLists.txt` file. This is before
         | it has me create a `conanfile.txt` even.
         | 
         | I have the same complaint about vcpkg.
         | 
          | It seems like it takes `(conan | vcpkg) + (cmake | autotools)
          | + (ninja | make)` to do the basics of what cargo does.
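          | For comparison, the conanfile.txt half is short; roughly
          | (version number illustrative):

```
[requires]
zlib/1.3.1

[generators]
CMakeDeps
CMakeToolchain
```

          | but you still hand-write the CMakeLists.txt that consumes
          | what those generators produce.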
        
       | singpolyma3 wrote:
       | Next build a nice way to use normal Makefile with rust
        
       | linzhangrun wrote:
       | Well done, but it's been a struggle. C++ has such a heavy
       | history, and 2026 is already too late.
        
       | resonancel wrote:
        | Can't take this lib seriously when there're lots of gems like
        | these in the codebase:
        | 
        |     // Open source directory
        |     dir_t* dir = open_dir(source_dir);
        | 
        |     // Find where dot is
        |     char* dot = strrchr(file, '.');
        | 
        | I thought Show HN had banned LLM-generated content; I couldn't
        | have been more wrong.
        
       | Panzerschrek wrote:
       | > You describe your project in a simple craft.toml
       | 
        | I don't like it. Such a format is inherently restricted (it is
        | not Turing-complete), which doesn't allow for anything non-
        | trivial - for example, choosing dependencies or compilation
        | options based on non-trivial conditions. That's why CMake is
        | basically a programming language with variables, conditions,
        | loops and even arithmetic.
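        | The kind of non-trivial conditional meant here, in CMake's
        | language (illustrative snippet, not from Craft):

```cmake
# Choose a dependency based on the target platform - logic a static
# TOML file cannot express on its own.
if(WIN32)
    target_link_libraries(my_app PRIVATE ws2_32)
elseif(UNIX)
    target_link_libraries(my_app PRIVATE m)
endif()
```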
        
         | kakwa_ wrote:
         | While I do get why CMake is a scripted build system, I cannot
         | help but notice that other languages don't need it.
         | 
         | In Rust, you have Cargo.toml, in go, it's a rather simple
         | go.mod.
         | 
          | And even in embedded C, you have PlatformIO, which manages
          | to make do with a few .ini files.
         | 
         | I would honestly love to see the cpp folks actually
         | standardizing a proper build system and dependency manager.
         | 
          | Today, just building a simple Qt app is usually a daunting
         | task, and other compiled ecosystems show us it doesn't have to
         | be.
        
           | fisf wrote:
            | PlatformIO is not simple by any means. Those few .ini files
            | generate a whole bunch of Python, which in turn relies on
            | SCons as the build system.
           | 
           | That's a nice experience as long as you stay within
           | predefined, simple abstractions that somebody else provided.
           | But it is very much a scripted build system, you just don't
           | see it for trivial cases.
           | 
           | For customizations, let alone a new platform, you will end up
            | writing Python scripts and digging through the 200-page
            | documentation when things go wrong.
        
       | hulitu wrote:
       | Supply chain attack made easy.
        
       | chris_wot wrote:
       | Can it handle modules?
        
       | macgyverismo wrote:
       | I have to say, since CMakes FetchContent module has been
       | available I have not had a need for a dependency manager outside
       | of CMake itself.
       | 
       | What exactly is it you do/need that can't be reasonably solved
       | using the FetchContent module?
       | 
       | https://cmake.org/cmake/help/latest/module/FetchContent.html
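        | For anyone who hasn't used it, pulling a dependency with
        | FetchContent looks roughly like this (tag illustrative):

```cmake
include(FetchContent)
FetchContent_Declare(
  raylib
  GIT_REPOSITORY https://github.com/raysan5/raylib
  GIT_TAG        5.0
)
FetchContent_MakeAvailable(raylib)
target_link_libraries(my_app PRIVATE raylib)
```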
        
       | 0xMalotru wrote:
       | Relevant XKCD: https://xkcd.com/927/
        
       | sourcegrift wrote:
       | Given how pathetic toml is with arrays, kinda sad people go with
       | it.
        
       ___________________________________________________________________
       (page generated 2026-04-10 12:01 UTC)