Feedback from someone who is used to managing a large (>1500 packages) software stack in C / C++ / Fortran / Python / Rust / etc.:
- (1) Provide a way to compile without internet access and to specify the associated dependency paths manually. This is absolutely critical.
Most 'serious' multi-language package managers and integration systems build in a sandbox without internet access, for security and reproducibility reasons.
If your build system does not allow building offline with manually specified dependencies, you will make the lives of integrators and package managers miserable, and they will avoid your project.
- (2) Never ever build in '-O3 -march=native' by default. This is always a red flag and a sign of immaturity. People expect code to be portable and shippable.
Good default options should be CMake equivalent of "RelWithDebInfo" (meaning: -O2 -g -DNDEBUG ).
-O3 can be argued. -march=native is always always a mistake.
- (3) Allow your build tool to be built by another build tool (e.g. CMake).
Anybody caring about reproducibility will want to start from sources, not from a pre-compiled binary. This also matters for cross compilation.
These properties are what will allow interoperability between your system and other build systems.
- (5) last but not least: Consider seriously the cross-compilation use case.
It is common in the world of embedded systems to cross compile. Any build system that does not support cross-compilation will be de facto banned from the embedded domain.
Teknoman117 22 hours ago [-]
As someone who has also spent two decades wrangling C/C++ codebases, I wholeheartedly agree with every statement here.
I have an even stronger sentiment regarding cross compilation, though: in any build system, I think the distinction between “cross” and “non-cross” compilation is an anti-pattern.
Always design build systems assuming cross compilation. It hurts nothing if it just so happens that your host and target platform/architecture end up being the same, and saves you everything down the line if you need to also build binaries for something else.
sebastos 7 hours ago [-]
Amen. It always baffled me that cross compiling was ever considered a special, weird, off-nominal thing. I’d love to understand the history of that better, because it seems like it should have been obvious from the start that building for the exact same computer you’re compiling from is a special case.
bsder 21 hours ago [-]
> In any build system, I think the distinction between “cross” and “non-cross” compilation is an anti-pattern.
This is one of the huge wins of Zig. Any Zig host compiler can produce output for any supported target. Cross compiling becomes straightforward.
pjmlp 13 hours ago [-]
Agree with the feedback.
Also, the problem isn't creating a cargo-like tool for C and C++; that is the easy part. The problem is getting a bigger userbase than vcpkg or conan, for it to matter to those communities.
CoastalCoder 24 hours ago [-]
> Never ever build in '-O3 -march=native' by default. This is always a red flag and a sign of immaturity.
Perhaps you can see how there are some assumptions baked into that statement.
eqvinox 24 hours ago [-]
What assumptions would those be?
Shipping anything built with -march=native is a horrible idea. Even on homogeneous targets like one of the clouds, you never know if they'll e.g. switch CPU vendors.
The correct thing to do is use microarch levels (e.g. x86-64-v2) or build fully generic if the target architecture doesn't have MA levels.
tempest_ 22 hours ago [-]
I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.
I am willing to hear arguments for other approaches.
zahllos 22 hours ago [-]
Not the OP, but: -march tells the compiler it can rely on the features of that particular CPU architecture family, which is broken out by generation. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or from different vendors.
-mtune says "generate code that is optimised for this architecture", but it doesn't enable arch-specific features.
Whether these are right or not depends on what you are doing. If you are building gentoo on your laptop you should absolutely -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.
If you are shipping code for a wide variety of architectures and crucially the method of shipping is binary form then you want to think more about what you might want to support. You could do either: if you're shipping standard software pick a reasonable baseline (check what your distribution uses in its cflags). If however you're shipping compute-intensive software perhaps you load a shared object per CPU family or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)
account42 7 hours ago [-]
> Not the OP, but: -march tells the compiler it can rely on the features of that particular CPU architecture family, which is broken out by generation. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or from different vendors.
Or on newer CPUs of the same vendor (e.g. AMD dropped some instructions in Zen that Intel didn't pick up) or even in different CPUs of the same generation (Intel market segmenting shenanigans with AVX512).
eslaught 15 hours ago [-]
Just popping in here because people seem to be surprised by
> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.
This is exactly the use case in HPC. We always build -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with the -march=native setting.
Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for a major maintenance.
tempest_ 2 hours ago [-]
If you get enough of them they can start to look like cattle.
Still, they are all the same breed.
eqvinox 22 hours ago [-]
I'm willing to hear arguments for your approach?
It certainly has scale issues when you need to support larger deployments.
[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]
teo_zero 15 hours ago [-]
So, do you see now the assumptions baked into your argument?
> when you need to support larger deployments
> shipping
> passing it off to someone else
pjmlp 13 hours ago [-]
So I gather you don't do cloud, embedded, game consoles, or mobile devices.
Quite hard to build on the exact hardware for those scenarios.
tom_ 19 hours ago [-]
On every project I've worked on, the PC I've had has been much better than the minimum PC required. Just because I'm writing code that will run nicely enough on a slow PC, that doesn't mean I need to use that same slow PC to build it!
And then, the binary that the end user receives will actually have been built on one of the CI systems. I bet they don't all have quite the same spec. And the above argument applies anyway.
dijit 21 hours ago [-]
What?! seriously?!
I’ve never heard of anyone doing that.
If you use a cloud provider and a remote development environment (VSCode Remote/JetBrains Gateway), then you're wrong: cloud providers swap out the CPUs without telling you, and can sell newer CPUs at older prices if there's less demand for them; you can't rely on that.
To take an old naming convention, even an E3 Xeon CPU is not equivalent to an E5 of the same generation. I'm willing to bet it mostly works, but your claim "I build on the exact hardware I ship on" is much stricter than that.
The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs— but when deployed it will be a Xeon scalable datacenter CPU or an Epyc.
Hell, I work in gamedev and we cross compile basically everything for consoles.
ninkendo 21 hours ago [-]
… not everyone uses the cloud?
Some people, gasp, run physical hardware, that they bought.
lkjdsklf 19 hours ago [-]
We use physical hardware at work, but it's still not the way you build/deploy unless it's for a workstation/laptop type thing.
If you're deploying the binary to more than one machine, you quickly run into issues where the CPUs are different and you would need to rebuild for each of them. This is feasible if you have a couple of machines that you generally upgrade together, but quickly falls apart at just slightly more than 2 machines.
dijit 15 hours ago [-]
And all your deployed and dev machines run the same spec- same CPU entirely?
And you use them for remote development?
I think this is highly unusual.
ninkendo 8 hours ago [-]
Lots of organizations buy many of a single server spec. In fact that should be the default plan unless you have a good reason to buy heterogeneous hardware. With the way hardware depreciation works they tend to move to new server models “in bulk” as well, replacing entire clusters/etc at once. I’m not sure why this seems so foreign to folks…
Nobody is saying dev machines are building code that ships to their servers though… quite the opposite, a dev machine builds software for local use… a server builds software for running on other servers. And yes, often build machines are the same spec as the production ones, because they were all bought together. It’s not really rare. (Well, not using the cloud in general is “rare” but, that’s what we’re discussing.)
tempest_ 3 hours ago [-]
There is a large subset of devs who have worked their entire career on abstracted hardware which is fine I guess, just different domains.
The size of your L1/L2/L3 cache or the number of TLB misses doesn't matter too much if your python web service is just waiting for packets.
izacus 13 hours ago [-]
So you buy the exact same generation of Intel and AMD chips for your developers as for your servers and your customers? And encode this requirement into your development process for the future?
PufPufPuf 23 hours ago [-]
The only time I used -march=native was for a university assignment which was built and evaluated on the same server, and it allowed juicing an extra bit of performance. Using it basically means locking the program to the current CPU only.
However I'm not sure about -O3. I know it can make the binary larger, not sure about other downsides.
adev_ 21 hours ago [-]
> The only time I used -march=native
It is completely fine to use -march=native, just do not make it the default for someone building your project.
That should always be something to opt-in.
The main reason is that software is a composite of (many) components. It quickly becomes a maintainability pain in the ass if any tiny library somewhere tries to sneak in '-march=native', which will make the final binary randomly crash with an illegal instruction error when executed on any CPU that is not exactly the same as the host.
When you design a build system configuration, think of the others first (the users of your software), and of yourself after.
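In CMake terms, that opt-in could look like this (a sketch; `MYLIB_NATIVE_ARCH` and the target name are made up):

```cmake
# Hypothetical: native tuning is available but never the default.
option(MYLIB_NATIVE_ARCH "Tune for the build host CPU (non-portable!)" OFF)

add_library(mylib src/lib.c)
if(MYLIB_NATIVE_ARCH)
  # PRIVATE so the flag never leaks into consumers of the library.
  target_compile_options(mylib PRIVATE -march=native)
endif()
```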
hmry 23 hours ago [-]
-O3 also makes build times longer (sometimes significantly), and occasionally the resulting program is actually slightly slower than -O2.
IME -O3 should only be used if you have benchmarks that show -O3 actually produces a speedup for your specific codebase.
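A quick way to run that check (assumes gcc; the toy kernel is made up, and your real workload is what actually matters):

```shell
# Hypothetical sketch: measure whether -O3 beats -O2 for *your* code
# before making it the default.
cat > kernel.c <<'EOF'
#include <stdio.h>
int main(void) {
    long s = 0;
    for (long i = 0; i < 50000000; i++) s += i % 7;
    printf("%ld\n", s);
    return 0;
}
EOF
gcc -O2 -o kernel_o2 kernel.c
gcc -O3 -o kernel_o3 kernel.c
ls -l kernel_o2 kernel_o3   # compare binary size
# Then compare wall time, e.g. `time ./kernel_o2` vs `time ./kernel_o3`,
# and keep -O3 only if it actually wins.
./kernel_o2
./kernel_o3
```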
fyrn_ 13 hours ago [-]
This varies a lot between compilers. Clang, for example, treats -O3 perf regressions as bugs (in many cases at least) and is a bit more reasonable with -O3 on. GCC goes full Mad Max and you don't know what it's going to do.
pclmulqdq 17 hours ago [-]
If you have a lot of "data plane" code or other looping over data, you can see a big gain from -O3 because of more aggressive unrolling and vectorization (HPC people use -O3 quite a lot). CRUD-like applications and other things that are branchy and heavy on control flow will often see a mild performance regression from use of -O3 compared to -O2 because of more frequent frequency hits due to AVX instructions and larger binary size.
atiedebee 6 hours ago [-]
I made a program with some inline assembly and tried -O3 with clang once. Because the assembly was in a loop, the compiler probably didn't have enough information about the actual code and decided to fully unroll all 16 iterations, making performance drop by 25% because the cache locality was completely destroyed. What I'm trying to say is that loop unrolling is definitely not a guarantee of faster code in exchange for binary size.
izacus 23 hours ago [-]
Not assumptions, experience.
I fully concur with that whole post as someone who also maintained a C++ codebase used in production.
criticalfault 7 hours ago [-]
Since you have a lot of experience, can I ask what you think about the following:
- skipping cmake completely? would this be feasible?
- integration of other languages in the project?
- how to handle qt?
adev_ 2 hours ago [-]
> skipping cmake completely? would this be feasible?
Feasible but difficult. CMake has a tremendous user mass, so you do want to be able to use a CMake-based project as a dependency. But the CMake Target/Config export system exposes CMake internals, which makes it difficult to consume a CMake-built project without CMake.
The cleanest way to do it is probably what xmake does: call cmake and extract the target information from CMake into your own build system with some scripting. It is flaky, but xmake has proven it is doable.
That said, CPS should make this easier in the longer term.
Please also consider that CMake does a lot of work under the hood to contain compiler quirks, which you would then have to do manually.
> integration of other languages in the project?
Trying to integrate higher level languages (Python, JS) in package managers of lower level languages (C, C++) is generally a bad idea.
The dependency relation is inverted, and interoperability between package managers is always poor. Diamond dependencies and conflicting versions quickly become a problem.
I would advise just exposing your build system properly, with the properties I described, and using a multi-language package manager (e.g. Nix) or, failing that, the higher-level language's package manager (e.g. uv with a scikit-build-core equivalent) on top of it.
This will be one order of magnitude easier to do.
> how to handle qt?
Qt is nothing special to handle.
Qt is a multi-language framework (C++, MOC, QML, JS, and even Python for PySide) and needs to be handled as such.
tgma 1 days ago [-]
> -march=native is always always a mistake
Gentoo user: hold my beer.
CarVac 1 days ago [-]
Gentoo binaries aren't shipped that way
account42 4 hours ago [-]
They are shipped to a new system when you upgrade because reinstalling is for suckers.
The reason why I like it (beyond ease-of-use) is that it can spit out CMakeLists.txt and compile_commands.json for IDE/LSP integration and also supports installing Conan/vcpkg libraries or even Git repos.
# Generate compile_commands.json and CMakeLists.txt
$ xmake project -k compile_commands
$ xmake project -k cmake
# Build + run
$ xmake && xmake run myapp
ethin 1 days ago [-]
I would happily switch to it in a heartbeat if it were much better documented and if it supported even half of what CMake does.
As an example of what I mean, say I want to link to the FMOD library (or any library I legally can't redistribute as an SDK). Or I want to enable automatic detection on Windows where I know the library/SDK is an installer package. My solution, in CMake, is to just ask the registry. In XMake I still can't figure out how to pull this off. I know that's pretty niche, but still.
The documentation gap is the biggest hurdle. A lot of the functions/ways of doing things are poorly documented, if they are documented at all. Including a CMake library that isn't in any of the package managers, for example. It also has some weird quirks: automatic/magic scoping (which is NOT a bonus) along with a hacky "import" function instead of using native require.
All of this said, it does work well when it does work. Especially with modules.
delta_p_delta_x 1 days ago [-]
Agreed, xmake seems very well-thought-out, and supports the most modern use-cases (C++20 named modules, header unit modules, and `import std`, which CMake still has a lot of ceremony around). I should switch to it.
NekkoDroid 24 hours ago [-]
Similar to premake, I have never been a fan of the global state for defining targets. Give me an object or some handle that I call functions on / pass to functions. CMake eventually got this somewhat right when it moved to target-based definitions, and since I've really learned it I have been kinda happy with it.
Meson is a python layer over the ninja builder, like cmake can be. xmake is both a build tool and a package manager, fast like ninja, and it has no DSL; the build file is just lua. It's more like cargo than meson is.
eqvinox 22 hours ago [-]
I didn't claim it was a package manager, just that it looked similar. The root post said "build tool", and that's what Meson is as well.
Other than that, both "python layer" and "over the ninja builder" are technically wrong. "python layer" is off since there is now a second implementation, Muon [https://muon.build/], in C. "over the ninja builder" is off since it can also use Visual Studio's build capabilities on Windows.
Interestingly, I'm unaware of other build-related systems that have multiple implementations, except Make (which is in fact part of the POSIX.1 standard.) Curious to know if there are any others.
IshKebab 1 days ago [-]
I've had some experience with this but it seems to be rather slow, very niche and tbh I can't see a reason to use it over CMake.
It's similar, but designed for an existing ecosystem. Cargo is designed for `cargo`, obviously.
But `pyproject.toml` is designed for the existing tools to all eventually adopt. (As well as new tools, of course.)
randerson_112 1 days ago [-]
Thank you everyone for the feedback so far! I just wanted to say that I understand this is not a fully cohesive and functional project for every edge case. This is the first day of releasing it to the public and it is only the beginning of the journey. I do not expect to fully solve a problem of this scale on my own, Craft is open source and open to the community for development. I hope that as a community this can grow into a more advanced and widely adopted tool.
alonsovm 16 hours ago [-]
This project is something I'd do; I had the idea around the same time as you did ("why doesn't something like cargo for C++ exist?") and you did it, so thanks, I guess.
bluGill 1 days ago [-]
Anyone can make a tool that solves a tiny part of the problem. However, the reason no such tool has caught on is all the weird special cases you need to handle before it can be useful. Even if you limit your support to desktop (macOS and Windows) that problem will be hard; adding various Linux flavors is even more difficult, not to mention BSD. Those are the common/mainstream choices; then Haiku is going to be very different, and I've seen dozens of others over the years, some of which have a following in their niche. Then there are people building for embedded - QNX, VxWorks, or even no OS, just bare metal - each adding weirdness (and implying cross compiling, which makes everything harder because your assumptions are always wrong).
I'm sorry I have to be a downer, but the fact is, if you can use the word "I", your package manager is obviously not powerful enough for the real world.
omcnoe 23 hours ago [-]
There are so many reasons why C/C++ build systems struggle, but imo power is the last of them. "Powerful" and "scriptable" build systems are what has gotten us into the swamp!
* Standards committee is allergic to standardizing anything outside of the language itself: build tools, dependency management, even the concept of a "file" is controversial!
* Existing poor state of build systems is viral - any new build system is 10x as complex as a clean room design because you have to deal with all the legacy "power" of previous build tooling. Build system flaws propagate - the moment you need hacks in your build, you start imposing those hacks on downstream users of your library also.
Even CMake should be a much better experience than it is - but in the real world major projects don't maintain their CMake builds to the point you can cleanly depend on them. Things like using raw MY_LIB_DIR variables instead of targets, hacky/broken feature detection flags etc. Microsoft tried to solve this problem via vcpkg, ended up having to patch builds of 90% of the packages to get it to work, and it's still a poor experience where half the builds are broken.
My opinion is that a new C/C++ build/package system is actually a solvable problem now with AI. Because you can point Opus 4.6 or whoever at the massive pile of open source dependencies, and tell it for each one "write a build config for this package using my new build system" which solves the gordian knot of the ecosystem problem.
bluGill 15 hours ago [-]
No scripts sounds nice until you are doing something weird that the system doesn't cover. CMake is starting to get all the possible weirdness right without scripts, but there are still a few cases it can't handle.
the__alchemist 1 days ago [-]
I will categorize this as a pattern I've seen which leads to stagnation, or is at least aiming for it. Usually these are built on one or more assumption which doesn't hold. The flow of this pattern:
- Problem exists
- Proposals of solutions, (varying quality), or not
- "You can't just solve this. It's complicated! This problem must exist." (The post I'm replying to.)
- Problem gets solved, hopefully.
Anecdotes I'm choosing based on proximity to this particular problem: uv and cargo. uv because people said the same thing about Python packaging, and cargo because it's adjacent to C and C++ in being for a low-level compiled language used for systems programming, embedded/bare-metal, etc.
The world is rich in complexity, subtlety, and exceptions to categorization. I don't think this should block us from solving problems.
bluGill 1 days ago [-]
I didn't say the problem couldn't be solved. I said the problem can't be solved by one person. There is a difference. (maybe it can be solved by one person over a few decades)
randerson_112 1 days ago [-]
This is true. There is no way I could solve a problem of this scale by myself. That is why this is an open source project and open to everyone to make changes on. There is still much more to improve, this is only day 1 of release to the public.
tekne 1 days ago [-]
I mean -- if I'm going to join a team to solve the hard 20%, I'd like to see the idea validated against the easy 80% first.
If it's really bad, at least the easy 20%.
looneysquash 1 days ago [-]
Nice. I have been thinking of making something similar. Now hopefully I don't have to!
Not sure how big your plans are.
My thoughts would be to start as a cmake generator but to eventually replace it. Maybe optionally.
And to integrate support for existing package managers like vcpkg.
At the same time, I'd want to remain modular enough that it's not all or nothing. I also don't like lock-in.
But right now package management and build system are decoupled completely. And they are not like that in other ecosystems.
For example, CMake can use vcpkg to install a package, but then I still have to write more CMake to actually find and use it.
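A sketch of the two halves being described, assuming `fmt` is listed in vcpkg.json and the build is configured with vcpkg's toolchain file (names are illustrative):

```cmake
# vcpkg downloads and builds fmt; this part you still write by hand:
find_package(fmt CONFIG REQUIRED)
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE fmt::fmt)
```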
psyclobe 1 days ago [-]
> For example, CMake can use vcpkg to install a package, but then I still have to write more CMake to actually find and use it.
I have this solved at our company. We have a tool built on top of vcpkg to manage internal + external dependencies. Our CMake linker logic leverages the port names, so all you really do is declare your manifest file (vcpkg.json), then declare which of them you will export publicly.
Everything after that is automatic including the exported cmake config for your library.
seniorThrowaway 1 days ago [-]
Working daily on a massive C++ software project, I wish you luck. We use conan2, and while it can be very challenging to use, I've yet to find something better that can handle incorporating, as dependencies, ancient projects that still use autoconf or even custom build tooling. It's also very good at detecting and enforcing ABI compatibility, although there are still some gaps. This problem space is incredibly hard, and improving it was a prime driver for the creation of many of the languages that came after C/C++.
mgaunard 1 days ago [-]
I find that conan2 is mostly painful with ABI. Binaries from GCC are all backwards compatible, as are C++ standard versions; the exception is the C++11 ABI break.
And yet it will insist on only giving you binaries that match exactly. Thankfully there are experimental extensions that allow it to automatically fall back.
lgtx 1 days ago [-]
The installation instructions being a `curl | sh` writing to the user's bashrc does not inspire confidence.
ori_b 1 days ago [-]
They did say it was inspired by cargo, which is often installed using rustup as such:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
account42 4 hours ago [-]
Sure but being inspired by something doesn't mean you have to cargo cult the worst aspects of it.
bikelang 1 days ago [-]
I don’t love this approach either (what a security nightmare…) - but it is easy to do for users and developers alike. Having to juggle a bunch of apt-like repositories for different distros is a huge time sink and adds a bunch of build complexity. Brew is annoying with its formulae vs tap vs cask vs cellar - and the associated ruby scripting… And then there’s windows - ugh.
I wish there was a dead simple installer TUI that had a common API specification so that you could host your installer spec on your.domain.com/install.json - point this TUI at it and it would understand the fine grained permissions required, handle required binary signature validation, manifest/sbom validation, give the user freedom to customize where/how things were installed, etc.
uecker 1 days ago [-]
This is fitting for something simulating cargo, which is a huge supply chain risk itself.
maccard 1 days ago [-]
Given you're about to run a binary, it's no worse than that.
hyperhopper 1 days ago [-]
It is definitely worse. At least a binary is constant, on your system, and can be analyzed. curl|sh can give you different responses than just curling. Far, far worse.
maccard 23 hours ago [-]
Only if you download and analyze it. You're free to download the install script and analyze that too, in the same way. The advantage the script has is that it's human-readable, unlike the binary you're about to execute blindly.
jjgreen 1 days ago [-]
[flagged]
Bjartr 1 days ago [-]
If you'd just left off "to fuck" you'd end up way less downvoted, if it even happened at all.
account42 4 hours ago [-]
Probably not. This isn't prime time TV, some foul language is tolerated - but complaining about down-votes, especially preemptively, has a predictable response (IMO rightfully so).
jjgreen 1 days ago [-]
With fucks, without fucks, in iambic pentameter, anything vaguely critical of Rust will be downvoted. As you can see.
KPGv2 1 days ago [-]
[flagged]
jvanderbot 1 days ago [-]
Knowing the reason something is considered bad does not immediately change the fact that it is considered bad.
Social / emotional signals still exist around that word.
Panzerschrek 15 hours ago [-]
> You describe your project in a simple craft.toml
I don't like it. Such a format is generally restricted (not Turing-complete), which doesn't allow doing anything non-trivial, for example choosing dependencies or compilation options based on non-trivial conditions. That's why CMake is basically a programming language, with variables, conditions, loops, and even arithmetic.
kakwa_ 14 hours ago [-]
While I do get why CMake is a scripted build system, I cannot help but notice that other languages don't need one.
In Rust you have Cargo.toml; in Go it's a rather simple go.mod.
And even in embedded C, you have PlatformIO, which manages to make do with a few .ini files.
I would honestly love to see the cpp folks actually standardizing a proper build system and dependency manager.
Today, just building a simple Qt app is usually a daunting task, and other compiled ecosystems show us it doesn't have to be.
fisf 9 hours ago [-]
PlatformIO is not simple by any means. Those few .ini files generate a whole bunch of Python, which in turn relies on SCons as the build system.
That's a nice experience as long as you stay within the predefined, simple abstractions that somebody else provided. But it is very much a scripted build system; you just don't see it for trivial cases.
For customizations, let alone a new platform, you will end up writing Python scripts and digging through the 200-page documentation when things go wrong.
mixmastamyk 4 hours ago [-]
Would be great if there were a standard for these toml/ini project files across languages and tools.
flohofwoe 1 days ago [-]
Heh, looks like cmake-code-generators are all the rage these days ;)
Here's my feeble attempt using Deno as base (it's extremely opinionated though and mostly for personal use in my hobby projects):
One interesting chicken-and-egg problem I couldn't solve is how to figure out the C/C++ toolchain that's going to be used without first running cmake on a 'dummy project file'. For some toolchain/IDE combos (most notably Xcode and Visual Studio), cmake's toolchain detection unfortunately takes a lot of time.
apparatur 1 days ago [-]
I'm intrigued by the idea of writing one's own custom build system in the same language as the target app/game; it's probably not super portable or general but cool and easy to maintain for smaller projects: https://mastodon.gamedev.place/@pjako/115782569754684469
macgyverismo 13 hours ago [-]
I have to say, since CMake's FetchContent module has been available, I have not had a need for a dependency manager outside of CMake itself.
What exactly is it you do/need that can't be reasonably solved using the FetchContent module?
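For reference, a minimal FetchContent sketch (library, tag, and target names are illustrative):

```cmake
include(FetchContent)
FetchContent_Declare(
  fmt
  GIT_REPOSITORY https://github.com/fmtlib/fmt.git
  GIT_TAG        10.2.1
)
# Downloads at configure time and adds fmt's targets to this build:
FetchContent_MakeAvailable(fmt)

add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE fmt::fmt)
```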
Uses CMake? Sorry, not for me. Call me old, but I prefer good old make or batch, maybe because I can understand those tools. Debugging CMake build problems made me hate it. Also, I code for embedded CPUs, and most of the time CMake is just overkill and does not play well with the compiler/binutils provided. Platform independence just doesn't happen in those environments.
delta_p_delta_x 1 days ago [-]
> most of the time CMake is just overkill and does not play well with the compiler/binutils provided
You need to define a CMake toolchain file[1] and pass it to CMake with --toolchain /path/to/file on the command line, or in a preset file with the key `toolchainFile`. I've compiled for QNX and ARM32 boards with CMake with no issues, but this does need to be done.
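A minimal toolchain file of that kind might look like this (compiler names assume a Debian-style arm-linux-gnueabihf cross toolchain):

```cmake
# arm32-linux.cmake (hypothetical) - pass with: cmake --toolchain arm32-linux.cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
# Find libraries/headers in the target sysroot, but programs on the host:
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```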
When you need a configuration step, cmake will actually save you a lot of time, especially if you work cross platform or even cross compile. I love to hate cmake as much as the next guy, and it would be hard to design a worse scripting language, but I'll take it any time over autoconf. Some of the newer tools may well be more convenient - I tried Bazel, and it sure wasn't (for me).
If you're happy to bake one config in a makefile, then cmake will do very little for you.
Night_Thastus 1 days ago [-]
For toy projects good old Make is fine... but at some point a project gets large enough that you need something more powerful. If you need something that can deal with multiple layers of nested sub-repositories, third-party and first-party dependencies, remote and local projects, multiple build configurations, non-code assets like documentation, and so on, Make just isn't enough.
bluGill 1 days ago [-]
For simple projects, sure. Make is easier for simple things, I will grant. However, when your project gets at all complex, make becomes a real pain and cmake becomes much easier.
Cmake has a lot of warts, but they have also put a lot of effort into finding and fixing all those weird special cases. If your project uses CMake odds are high it will build anywhere.
lkjdsklf 18 hours ago [-]
Also, for better or worse, cmake is pretty much the "standard" for C/C++ these days.
Fighting the standard often creates its own set of problems and nightmares that just aren't worth it. That's especially true in C++, where you often have to integrate with other projects and their build systems. Way easier if you just use cmake like everyone else.
Even the old hold outs, boost and google open source, now use cmake for their open source stuff.
tosti 1 days ago [-]
Odds are high the distro maintainer will lose hair trying to package it
nesarkvechnep 1 days ago [-]
As long as it's for C/C++ and not C or C++, I'm skeptical.
randerson_112 1 days ago [-]
Why do you say this? I respect it, I'm just curious.
avadodin 21 hours ago [-]
C/C++ is HR-newspeak out of the 1990s (at the time it was not clear that anyone would still want to use C, and MSVC did move their compiler to C++).
It signals that the speaker doesn't understand that the two are different languages with very different communities.
I don't really think that C users are entirely immune to dependency hell, if that's what OP meant, though. It is orthogonal.
As a user, I do believe it sucks when you depend on something that is not included by default on all target platforms(and you fail to include it and maintain it within your source tree*).
unclad5968 19 hours ago [-]
What part of the build process is different for C?
avadodin 18 hours ago [-]
I explained why C/C++ rubbed op the wrong way. It has nothing to do with a build process.
It is probably true that more average C programs can be built with plain Makefiles or even without a Makefile than C++, though.
You can of course add dependencies on configure scripts, m4, cmake, go, python or rust when building a plain self-contained C program and indeed many do.
ethanc8 8 hours ago [-]
KDE already has a meta-build tool for C++ called Craft, which handles dependency management and cross-compilation for CMake-built applications and libraries.
resonancel 15 hours ago [-]
Can't take this lib seriously when there're lots of gems like these in the codebase.
// Open source directory
dir_t* dir = open_dir(source_dir);
// Find where dot is
char* dot = strrchr(file, '.');
I thought Show HN had banned LLM-generated content; I couldn't have been more wrong.
cherryteastain 1 days ago [-]
Seems to solve a problem very similar to Conan or vcpkg but without its own package archive or build scripts. In general, unlike Cargo/Rust, many C/C++ projects dynamically link libraries and often require complex Makefile/shell script etc magic to discover and optionally build their dependencies.
How does Craft handle these 'diamond' patterns, where two dependencies may depend on different versions of the same library as transitive dependencies (either for static or dynamic linking, or as header-only includes), without custom build scripts like the Conan approach?
delduca 1 days ago [-]
Compared to Conan, what are the advantages?
randerson_112 24 hours ago [-]
Craft has project management and generates starter project structure. You can generate header and source files with boilerplate starter code. Craft manages the building of the project so you don’t need to write much CMake. You can also save project structures as templates and instantiate those templates in new projects ready to go.
delduca 21 hours ago [-]
How can you be better than CMake?
tombert 1 days ago [-]
This certainly seems less awful than the typical C building process.
What I've been doing to manage dependencies in a way that doesn't depress me much has been Nix flakes, which allows me a pretty straightforward `nix build` with the correct dependencies built in.
I'm just a bit curious though; a lot of C libraries are system-wide and usually require the system package manager (e.g. libsdl2-dev). Does this have an elegant way to handle those?
randerson_112 1 days ago [-]
Yes, it is true that many libraries are system-wide. This is something I had on the list of features to add: system dependencies. Thank you for the feedback!
kjksf 1 days ago [-]
In the age of AI tools like this are pointless. Especially new ones, given existence of make, cmake, premake and a bunch of others.
A C++ build system, at its core, boils down to calling gcc foo.c -o foo.obj / link foo.obj foo.exe (please forgive me if I got the syntax wrong).
Sure, you have more .c files, and you pass some flags but that's the core.
I've recently started a new C++ program from scratch.
What build system did I write?
I didn't. I told Claude:
"Write a bun typescript script build.ts that compiles the .cpp files with cl and creates foo.exe. Create release and debug builds, trigger release build with -release cmd-line flag".
And it did it in minutes and it worked. And I can expand it with similar instructions. I can ask for release build with all the sanitize flags and claude will add it.
The particulars don't matter. I could have asked for a makefile, or cmake file or ninja or a script written in python or in ruby or in Go or in rust. I just like using bun for scripting.
The point is that in the past I tried to learn cmake and, good lord, it's days spent learning something that I'll spend 1 hr using.
It just doesn't make sense to learn any of those tools given that claude can give me a working build system in minutes.
It makes even less sense to create new build tools. Even if you create the most amazing tool, I would still choose spending a minute asking claude than spending days learning arbitrary syntax of a new tool.
randerson_112 1 days ago [-]
This is a fair and valid point. However, why leave your workflow to write a prompt to an AI when you can run simple commands in your workspace. Also you are most likely paying to use the AI while Craft is free and open source and will only continue to improve. I respect your feedback though, thank you!
nnevatie 15 hours ago [-]
The same AI tool could have written a de-facto CMakeLists.txt file for you.
duped 1 days ago [-]
You're missing finding library/include paths, build configuration (`-D` flags for conditional compilation), fetching these from remote repositories, and versioning.
thegrim33 1 days ago [-]
Project description is AI generated, even the HN post is AI generated, why should I spend any energy looking into your project when all you're doing is just slinging AI slop around and couldn't be bothered to put any effort in yourself?
littlestymaar 1 days ago [-]
“Show HN” has really become a Claude code showcase in the last 6 months, maybe it's time to sunset the format at this point …
bangaladore 1 days ago [-]
Yup, I read "— think Cargo, but for C/C++." and closed the tab.
Please consider adding `cargo watch` - that would be a killer feature!
randerson_112 1 days ago [-]
Yes! This is definitely on the list of features to add. Thank you for the feedback!
wg0 1 days ago [-]
Yesterday I had to wrestle with CMake.
But how does this tool figure out where the header files and build instructions are for the libraries that are included? Is there an expected layout or industry-wide consensus?
integricho 1 days ago [-]
I believe it supports only projects having a working cmake setup, no extra magic
flohofwoe 1 days ago [-]
I suspect it depends on a specific directory structure, e.g. look at this generated cmake file:
...and for custom requirements a manually created CMakeLists.extras.txt as escape hatch.
Unclear to me how more interesting scenarios like compiler- and platform-specific build options (enable/disable warnings, defines, etc...), cross-compilation via cmake toolchain files (e.g. via Emscripten SDK, WASI SDK or Android SDK/NDK) would be handled. E.g. just trivial things like "when compiling for Emscripten, include these source files, but not those others".
eliemichel 1 days ago [-]
CMake piles up various generations of idioms, so there are multiple ways of doing it, but personally I've learned to steer away from find_package() and other magical functions. Get all your dependencies as subdirectories (whichever way you prefer) and use add_subdirectory(). Use find_package() only in so-called "config" mode, where you explicitly instruct cmake where to find the config, and only for large precompiled dependencies.
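A sketch of that layout, with placeholder names (the in-tree dependency and the Qt install path are assumptions for illustration):

```cmake
# Small deps: vendored as source and built in-tree.
add_subdirectory(third_party/fmt)

# Large prebuilt deps: find_package in explicit config mode only,
# pointing CMake at the SDK's own config files.
find_package(Qt6 CONFIG REQUIRED COMPONENTS Core
  PATHS /opt/qt/6.7.0/gcc_64/lib/cmake)

target_link_libraries(myapp PRIVATE fmt::fmt Qt6::Core)
```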
mathstuf 3 hours ago [-]
FD: I am a CMake developer.
Yes, config packages are better. But I think doing find_package everywhere is better, assuming you install an SDK for others to use your project. If you're a "product", vendor away. The issue comes when you want to vendor X and Y and both vendor Z independently. Then you're stuck de-vendoring at least one and figuring out how to provide it yourself internally. IMO, better to just let Z make its own install tree and find it as a package from there.
One can write good Find modules, but there is some "taste" involved. I wish we had more good examples to use as templates.
linzhangrun 19 hours ago [-]
Well done, but it's been a struggle. C++ has such a heavy history, and 2026 is already too late.
azizam 5 hours ago [-]
Great, no issue that a bit of LLM slop can't fix. Why even say "I built X"? I'd respect it more if you just said "Claude built X" or something.
dima55 1 days ago [-]
If you think cmake isn't very good, the solution isn't to add more layers of crap around cmake, but to replace it. Cmake itself exists because a lot of humans haven't bothered to read the gnu make manual, and added more cruft to manage this. Please don't add to this problem. It's a disease
dymk 1 days ago [-]
As much of a dog as cmake is, "just use make!" does not solve many of the problems that cmake makes a go at. It's like saying go write assembler instead of C because C has so many footguns.
dima55 1 days ago [-]
GNU Make has a debugger. This alone makes it far superior to every other build tool I've ever seen. The cmake debugging experience is "run a google search, and try random stuff recommended by other people that also have no idea how the thing works". This shouldn't be acceptable.
beckford 1 days ago [-]
That hasn't been true for a few years at least. CLion (https://www.jetbrains.com/help/clion/cmake-debug.html) has had CMake debugging since cmake 3.27. Ditto for VS Code and probably other C IDEs I am not familiar with. So does Gradle for Java. GNU make is hardly exclusive.
randerson_112 1 days ago [-]
This is very true. My thought process was that since majority of projects already run on CMake, I would simply build off of that and take advantage of what CMake is good at while making the more difficult operations easier. Thank you for your feedback!
wiseowise 1 days ago [-]
I'm all for shitting on CMake, but Jesus, to suggest Make as a replacement/improvement is an unhinged take.
dima55 1 days ago [-]
I'm suggesting that people creating build systems read the make manual. Surely this isn't controversial?
nnevatie 14 hours ago [-]
People using CMake might want to build the same code on multiple platforms - this is trivially achievable, unlike with Make.
forrestthewoods 1 days ago [-]
Cmake is infamously not a build system. It is a build system generator.
This is now a build system generator generator. This is the wrong solution imho. The right solution is to just build a build system that doesn’t suck. Cmake sucks. Generating suck is the wrong angle imho.
nnevatie 15 hours ago [-]
Cmake might suck, but is arguably the de-facto now. It's not standard, since the C++ committee does not want to deal with the real world (tooling).
forrestthewoods 13 hours ago [-]
Python was also a shitshow and UV became the new standard in literally less than a year.
That’s an existence proof that a new tool that doesn’t suck can take over an ecosystem.
nnevatie 12 hours ago [-]
Completely agreed. However, typically a new tool needs to be significantly better for that to happen. In many ways, I see Meson already being that but it hasn't really gained traction at scale.
forrestthewoods 1 hours ago [-]
also agreed.
UV was so good it was just obviously significantly better.
All I really want is Bazel/Buck but in a simple and easy to use way. I feel like this can be done.
singpolyma3 21 hours ago [-]
Next build a nice way to use normal Makefile with rust
duped 1 days ago [-]
FWIW: there is something fundamentally wrong with a meta-meta build system. I don't think you should bother generating or wrapping CMake, you should be replacing it.
flohofwoe 1 days ago [-]
Cmake is doing a lot of underappreciated work under the hood that would be very hard to replicate in another tool, tons of accumulated workarounds for all the different host operating systems, compiler toolchains and IDEs, it's also one of few build tools which properly support Windows and Visual Studio.
Just alone reverse engineering the Xcode and Visual Studio project file formats for each IDE version isn't fun, but this "boring" grunt work is what makes cmake so valuable.
The core ideas of cmake are sound, it's only the scripting language that sucks.
Build systems don't plan to converge in the future =)
SpaceNoodled 1 days ago [-]
My thoughts exactly. I thought this was going to be some new thing, but it's just yet another reason that I'll stick with Makefiles.
flohofwoe 1 days ago [-]
Do your Makefiles work across Linux, macOS and Windows (without WSL or MingW), GCC, Clang and MSVC, or allow loading the project into an IDE like Xcode or Visual Studio though? That's why meta-build-systems like cmake were created, not to be a better GNU Make.
uecker 1 days ago [-]
There is something fundamentally wrong with Windows or Visual Studio that it requires ugly solutions.
account42 4 hours ago [-]
At least you can use the compiler/linker without them.
delta_p_delta_x 1 days ago [-]
Windows and Visual Studio solutions are perfectly fine. MSBuild is a declarative build syntax in XML, it's not very different from a makefile.
uecker 1 days ago [-]
XML is already terrible. But the main problem seems to be that they created something similar but incompatible to make.
flohofwoe 1 days ago [-]
Ok, then just use cl.exe instead of gcc or clang. It has a completely different set of command-line options from gcc and clang, but that's fine: C/C++ build tooling needs to be able to deal with different toolchains. The diversity of C/C++ toolchains is a strength, not a weakness :)
One nice feature of MSVC is that you can describe the linker dependencies in the source files (via #pragma comment(lib, ...)), this enables building fairly complex single-file tools trivially without a build system like this:
cl mytool.c
...without having to specify system dependencies like kernel32 etc... on the cmdline.
account42 4 hours ago [-]
> Completely different set of command line options from gcc and clang, but that's fine.
Clang does have clang-cl with similar command-line options.
sebastos 22 hours ago [-]
The tough truth is that there already is a cargo for C/C++: Conan2. I know, python, ick. I know, conanfile.py, ick. But despite its warts, Conan fundamentally CAN handle every part of the general problem. Nobody else can. Profiles to manage host vs. target configuration? Check. Sufficiently detailed modeling of ABI to allow pre-compiled binary caching, local and remote? Check, check, check. Offline vs. Online work modes? Check. Building any relevant project via any relevant build system, including Meson, without changes to the project itself? Check. Support for pulling build-side requirements? Check. Version ranges? Check. Lockfiles? Check. Closed-source, binary-only dependencies? Check.
Once you appreciate the vastness of the problem, you will see that having a vibrant ecosystem of different competing package managers sucks. This is a problem where ONE standard that can handle every situation is incalculably better than many different solutions which solve only slices of the problem. I don't care how terse craft's toml file is - if it can't cross compile, it's useless to me. So my project can never use your tool, which implies other projects will have the same problem, which implies you're not the one package manager / build system, which means you're part of the problem, not the solution. The Right Thing is to adopt one unilateral standard for all projects. If you're remotely interested in working on package managers, the best way to help the human race is to fix all of the outstanding things about Conan that prevent it from being the One Thing. It's the closest to being the One Thing, and yet there are still many hanging chads:
- its terribly written documentation
- its incomplete support for editable packages
- its only nascent support for "workspaces"
- its lack of NVIDIA recipes
If you really can't stand to work on Conan (I wouldn't blame you), another effort that could help is the common package specification format (CPS). Making that a thing would also be a huge improvement. In fact, if it succeeds, then you'd be free to compete with conan's "frontend" ergonomics without having to compete with the ecosystem.
looneysquash 22 hours ago [-]
> The tough truth is that there already is a cargo for C/C++: Conan2
It says to hand write a `CMakeLists.txt` file. This is before it has me create a `conanfile.txt` even.
I have the same complaint about vcpkg.
It seems like it takes: `(conan | vcpkg) + (cmake | autotools) + (ninja | make)`
to do the basics what cargo does.
einpoklum 1 days ago [-]
Impression before actually trying this:
CMake is a combination of a warthog of a specification language, and mechanisms for handling a zillion idiosyncracies and corners cases of everything.
I doubt that < 10,000 lines of C code can cover much of that.
I am also doubtful that developers are able to express the exact relations and semantic nuances they want to, as opposed to some default that may make sense for many projects, but not all.
Still - if it helps people get started on simpler or more straightforward projects - that's neat :-)
- (1) Provide a way to compile without internet access and specify the associated dependencies path manually. This is absolutely critical.
Most 'serious' multi-language package managers and integration systems are building in a sandbox without internet access for security reasons and reproducibility reasons.
If your build system does not allow to build offline and with manually specified dependencies, you will make life of integrators and package managers miserable and they will avoid your project.
(2) Never ever build with '-O3 -march=native' by default. This is always a red flag and a sign of immaturity. People expect code to be portable and shippable.
Good default options should be CMake equivalent of "RelWithDebInfo" (meaning: -O2 -g -DNDEBUG ).
-O3 can be argued. -march=native is always always a mistake.
- (3) Allow your build tool to be built by another build tool (e.g. CMake).
Anybody caring about reproducibility will want to start from sources, not from a pre-compiled binary. This also matters for cross-compilation.
- (4) Please offer compatibility with pkg-config (https://en.wikipedia.org/wiki/Pkg-config) and, if possible, CPS (https://cps-org.github.io/cps/overview.html), for both consumption and generation.
They are what will allow interoperability between your system and other build systems.
- (5) last but not least: Consider seriously the cross-compilation use case.
It is common in the world of embedded systems to cross compile. Any build system that does not support cross-compilation will be de facto banned from the embedded domain.
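Point (2) maps to a common CMake idiom: pick RelWithDebInfo when the user didn't choose anything, instead of baking aggressive flags in. A sketch:

```cmake
# Only single-config generators (Make, Ninja) use CMAKE_BUILD_TYPE;
# leave multi-config generators (Visual Studio, Xcode) alone.
if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
  set(CMAKE_BUILD_TYPE RelWithDebInfo CACHE STRING "Build type" FORCE)
endif()
# RelWithDebInfo expands to roughly -O2 -g -DNDEBUG on GCC/Clang;
# users can still opt into -march=native themselves via their own flags.
```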
I have an even stronger sentiment regarding cross compilation though - In any build system, I think the distinction between “cross” and “non-cross” compilation is an anti-pattern.
Always design build systems assuming cross compilation. It hurts nothing if it just so happens that your host and target platform/architecture end up being the same, and saves you everything down the line if you need to also build binaries for something else.
This is one of the huge wins of Zig. Any Zig host compiler can produce output for any supported target. Cross compiling becomes straightforward.
Also the problem isn't creating a cargo like tool for C and C++, that is the easy part, the problem is getting more userbase than vcpkg or conan for it to matter for those communities.
Perhaps you can see how there are some assumptions baked into that statement.
Shipping anything built with -march=native is a horrible idea. Even on homogeneous targets like one of the clouds, you never know if they'll e.g. switch CPU vendors.
The correct thing to do is use microarch levels (e.g. x86-64-v2) or build fully generic if the target architecture doesn't have MA levels.
I am willing to hear arguments for other approaches.
-mtune says "generate code that is optimised for this architecture" but it doesn't trigger arch specific features.
Whether these are right or not depends on what you are doing. If you are building Gentoo on your laptop you should absolutely use -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.
If you are shipping code for a wide variety of architectures and crucially the method of shipping is binary form then you want to think more about what you might want to support. You could do either: if you're shipping standard software pick a reasonable baseline (check what your distribution uses in its cflags). If however you're shipping compute-intensive software perhaps you load a shared object per CPU family or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)
Or on newer CPUs of the same vendor (e.g. AMD dropped some instructions in Zen that Intel didn't pick up) or even in different CPUs of the same generation (Intel market segmenting shenanigans with AVX512).
> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.
This is exactly the use case in HPC. We always build -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with the -march=native setting.
Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for a major maintenance.
Still, they are all the same breed.
it certainly has scale issues when you need to support larger deployments.
[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]
> when you need to support larger deployments
> shipping
> passing it off to someone else
Quite hard to build on the exact hardware for those scenarios.
And then, the binary that the end user receives will actually have been built on one of the CI systems. I bet they don't all have quite the same spec. And the above argument applies anyway.
I’ve never heard of anyone doing that.
If you use a cloud provider and a remote development environment (VSCode Remote/JetBrains Gateway), then you're wrong: cloud providers swap out the CPUs without telling you, and can sell newer CPUs at older prices if there's less demand for the newer CPUs; you can't rely on that.
To take an old naming convention, even an E3-Xeon CPU is not equivalent to an E5 of the same generation. I’m willing to bet it mostly works but your claim “I build on the exact hardware I ship on” is much more strict.
The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs— but when deployed it will be a Xeon scalable datacenter CPU or an Epyc.
Hell, I work in gamedev and we cross compile basically everything for consoles.
Some people, gasp, run physical hardware, that they bought.
If you're deploying the binary to more than one machine, you quickly run into issues where the CPUs are different and you would need to rebuild for each of them. This is feasible if you have a couple of machines that you generally upgrade together, but quickly falls apart at just slightly more than 2 machines.
And you use them for remote development?
I think this is highly unusual.
Nobody is saying dev machines are building code that ships to their servers though… quite the opposite, a dev machine builds software for local use… a server builds software for running on other servers. And yes, often build machines are the same spec as the production ones, because they were all bought together. It’s not really rare. (Well, not using the cloud in general is “rare” but, that’s what we’re discussing.)
The size of your L1/L2/L3 cache or the number of TLB misses doesn't matter too much if your python web service is just waiting for packets.
However I'm not sure about -O3. I know it can make the binary larger, not sure about other downsides.
It is completely fine to use -march=native, just do not make it the default for someone building your project.
That should always be something to opt-in.
The main reason is that software is a composite of (many) components. It quickly becomes a maintainability pain in the ass if any tiny library somewhere tries to sneak in '-march=native', which will make the final binary randomly crash with an illegal-instruction error when executed on any CPU that is not exactly the same as the host's.
When you design a build system configuration, think of the others first (the users of your software), and of yourself after.
IME -O3 should only be used if you have benchmarks that show -O3 actually produces a speedup for your specific codebase.
I fully concur with that whole post as someone who also maintained a C++ codebase used in production.
- skipping cmake completely? would this be feasible?
- integration of other languages in the project?
- how to handle qt?
Feasible but difficult. CMake has a tremendous user mass, so you do want to be able to use a CMake-based project as a dependency. The CMake Target/Config export system exposes CMake internals and makes it difficult to consume a CMake-built project without CMake.
The cleanest way to do that is probably what xmake does: call cmake and extract target information from CMake into your own build system with some scripting. It is flaky, but xmake has proven it is doable.
That said: CPS should make this easier in the longer term.
Please also consider that CMake does a lot of work under the hood to contain compiler quirks, work that you will otherwise have to do manually.
> integration of other languages in the project?
Trying to integrate higher level languages (Python, JS) in package managers of lower level languages (C, C++) is generally a bad idea.
The dependency relation is inverted, and interoperability between package managers is always poor. Diamond dependencies and conflicting versions will quickly become a problem.
I would advise just exposing your build system properly with the properties I described and using a multi-language package manager (e.g. Nix) or, failing that, the higher-level language's package manager (e.g. uv with a scikit-build-core equivalent) on top of that.
This will be one order of magnitude easier to do.
> how to handle qt?
Qt is nothing special to handle.
Qt is a multi-language framework (C++, MOC, QML, JS and even Python for PySide) and needs to be handled as such.
Gentoo user: hold my beer.
https://wiki.gentoo.org/wiki/Gentoo_Binary_Host_Quickstart
The distributed binaries use two standard instruction sets for x86-64 and one for ARM, like "-march=x86-64-v3".
https://wiki.gentoo.org/wiki/Gentoo_binhost/Available_packag...
15000 what?
The 15000 was a typo on my side. Fixed.
https://github.com/xmake-io/xmake
The reason why I like it (beyond ease-of-use) is that it can spit out CMakeLists.txt and compile_commands.json for IDE/LSP integration and also supports installing Conan/vcpkg libraries or even Git repos.
Then you use it like ...
As an example of what I mean, say I want to link to the FMOD library (or any library I legally can't redistribute as an SDK). Or I want to enable automatic detection on Windows where I know the library/SDK is an installer package. My solution, in CMake, is to just ask the registry. In XMake I still can't figure out how to pull this off. I know that's pretty niche, but still.
The documentation gap is the biggest hurdle. A lot of the functions/ways of doing things are poorly documented, if they are documented at all. Including a CMake library that isn't in any of the package managers, for example. It also has some weird quirks: automatic/magic scoping (which is NOT a bonus), along with a hacky "import" function instead of using native require.
All of this said, it does work well when it does work. Especially with modules.
e.g. from their docs:
Other than that, both "python layer" and "over the ninja builder" are technically wrong. "python layer" is off since there is now a second implementation, Muon [https://muon.build/], in C. "over the ninja builder" is off since it can also use Visual Studio's build capabilities on Windows.
Interestingly, I'm unaware of other build-related systems that have multiple implementations, except Make (which is in fact part of the POSIX.1 standard.) Curious to know if there are any others.
It's similar, but designed for an existing ecosystem. Cargo is designed for `cargo`, obviously.
But `pyproject.toml` is designed for the existing tools to all eventually adopt. (As well as new tools, of course.)
I'm sorry I have to be a downer, but the fact is if you can use the word "I" your package manager is obviously not powerful enough for the real world.
* Standards committee is allergic to standardizing anything outside of the language itself: build tools, dependency management, even the concept of a "file" is controversial!
* Existing poor state of build systems is viral - any new build system is 10x as complex as a clean room design because you have to deal with all the legacy "power" of previous build tooling. Build system flaws propagate - the moment you need hacks in your build, you start imposing those hacks on downstream users of your library also.
Even CMake should be a much better experience than it is - but in the real world major projects don't maintain their CMake builds to the point you can cleanly depend on them. Things like using raw MY_LIB_DIR variables instead of targets, hacky/broken feature detection flags etc. Microsoft tried to solve this problem via vcpkg, ended up having to patch builds of 90% of the packages to get it to work, and it's still a poor experience where half the builds are broken.
My opinion is that a new C/C++ build/package system is actually a solvable problem now with AI. Because you can point Opus 4.6 or whoever at the massive pile of open source dependencies, and tell it for each one "write a build config for this package using my new build system" which solves the gordian knot of the ecosystem problem.
The world is rich in complexity, subtlety, and exceptions to categorization. I don't think this should block us from solving problems.
If it's really bad, at least the easy 20%.
Not sure how big your plans are.
My thoughts would be to start as a cmake generator but to eventually replace it. Maybe optionally.
And to integrate support for existing package managers like vcpkg.
At the same time, I'd want to remain modular enough that it's not all or nothing. I also don't like lock-in.
But right now package management and build system are decoupled completely. And they are not like that in other ecosystems.
For example, CMake can use vcpkg to install a package, but then I still have to write more CMake to actually find and use it.
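A minimal sketch of that two-step dance (the package name is illustrative): vcpkg installs the library via its toolchain file, but the CMakeLists.txt still has to find and link it by hand:

```cmake
# Step 1: configure with the vcpkg toolchain so installs are visible to CMake:
#   cmake -B build -DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake
# Step 2: still write the find/link boilerplate yourself:
find_package(fmt CONFIG REQUIRED)
target_link_libraries(my_app PRIVATE fmt::fmt)
```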
I have this solved at our company. We have a tool built on top of vcpkg, to manage internal + external dependencies. Our cmake linker logic leverages the port names and so all you really do is declare your manifest file (vcpkg.json) then declare which one of them you will export publicly.
Everything after that is automatic including the exported cmake config for your library.
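For reference, a vcpkg manifest of the kind described is just a short JSON file (names and dependencies are illustrative):

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": [ "fmt", "zlib" ]
}
```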
And yet it will insist on only giving you binaries that match exactly. Thankfully there are experimental extensions that allow it to automatically fall back.
I wish there was a dead simple installer TUI that had a common API specification so that you could host your installer spec on your.domain.com/install.json - point this TUI at it and it would understand the fine grained permissions required, handle required binary signature validation, manifest/sbom validation, give the user freedom to customize where/how things were installed, etc.
Social / emotional signals still exist around that word.
I don't like it. Such a format is generally restricted (it is not Turing-complete), which doesn't allow doing anything non-trivial, for example choosing dependencies or compilation options based on non-trivial conditions. That's why CMake is basically a programming language, with variables, conditions, loops, and even arithmetic.
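A sketch of the kind of conditional logic a purely declarative manifest can't express but CMake's language can (the option and target names are made up for illustration):

```cmake
# Pick the TLS backend based on platform and a user-settable option.
option(USE_SYSTEM_SSL "Link against the system OpenSSL" ON)

if(UNIX AND USE_SYSTEM_SSL)
  find_package(OpenSSL REQUIRED)
  set(ssl_target OpenSSL::SSL)
else()
  add_subdirectory(third_party/mbedtls)   # fall back to a vendored copy
  set(ssl_target mbedtls)
endif()

target_link_libraries(my_app PRIVATE ${ssl_target})
```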
In Rust, you have Cargo.toml; in Go, it's a rather simple go.mod.
And even in embedded C, you have PlatformIO, which manages to make do with a few .ini files.
I would honestly love to see the C++ folks actually standardize a proper build system and dependency manager.
Today, just building a simple Qt app is usually a daunting task, and other compiled ecosystems show us it doesn't have to be.
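For comparison, a minimal PlatformIO manifest of the sort mentioned above looks roughly like this (board and library names are illustrative):

```ini
[env:uno]
platform  = atmelavr
board     = uno
framework = arduino
; dependencies are declared, not scripted
lib_deps  = bblanchon/ArduinoJson
```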
That's a nice experience as long as you stay within predefined, simple abstractions that somebody else provided. But it is very much a scripted build system, you just don't see it for trivial cases.
For customizations, let alone a new platform, you will end up writing Python scripts and digging through the 200-page documentation when things go wrong.
Here's my feeble attempt using Deno as base (it's extremely opinionated though and mostly for personal use in my hobby projects):
https://github.com/floooh/fibs
One interesting chicken-and-egg problem I couldn't solve is how to figure out the C/C++ toolchain that's going to be used without running cmake on a 'dummy project file' first. For some toolchain/IDE combos (most notably Xcode and Visual Studio), cmake's toolchain detection takes a lot of time, unfortunately.
What exactly is it you do/need that can't be reasonably solved using the FetchContent module?
https://cmake.org/cmake/help/latest/module/FetchContent.html
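For readers who haven't used it, a typical FetchContent setup looks roughly like this (the library and tag are chosen for illustration):

```cmake
include(FetchContent)

# Download and build fmt as part of this project's configure step.
FetchContent_Declare(
  fmt
  GIT_REPOSITORY https://github.com/fmtlib/fmt.git
  GIT_TAG        10.2.1
)
FetchContent_MakeAvailable(fmt)

target_link_libraries(my_app PRIVATE fmt::fmt)
```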
You need to define a CMake toolchain[1] and pass it to CMake with --toolchain /path/to/file in the command-line, or in a preset file with the key `toolchainFile` in a CMake preset. I've compiled for QNX and ARM32 boards with CMake, no issues, but this needs to be done.
[1]: https://cmake.org/cmake/help/latest/manual/cmake-toolchains....
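A minimal toolchain file for an ARM32 Linux target might look like this (the compiler names and sysroot path are assumptions; adapt them to your SDK):

```cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Search only the target sysroot for headers/libraries, never the host.
set(CMAKE_FIND_ROOT_PATH /opt/arm-sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Then configure with `cmake -B build --toolchain arm-linux.cmake`.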
If you're happy to bake one config in a makefile, then cmake will do very little for you.
CMake has a lot of warts, but they have also put a lot of effort into finding and fixing all those weird special cases. If your project uses CMake, odds are high it will build anywhere.
Fighting the standard often creates its own set of problems and nightmares that just aren't worth it. That's especially true in C++, where you often have to integrate with other projects and their build systems. It's way easier if you just use CMake like everyone else.
Even the old holdouts, Boost and Google open source, now use CMake for their open-source stuff.
It signals that the speaker doesn't understand that the two are different languages with very different communities.
I don't really think that C users are entirely immune to dependency hell, if that's what OP meant, though. It is orthogonal.
As a user, I do believe it sucks when you depend on something that is not included by default on all target platforms (and you fail to include and maintain it within your source tree*).
It is probably true that more average C programs can be built with plain Makefiles or even without a Makefile than C++, though.
You can of course add dependencies on configure scripts, m4, cmake, go, python or rust when building a plain self-contained C program and indeed many do.
How does craft handle these 'diamond' patterns where 2 dependencies may depend on versions of the same library as transitive dependencies (either for static or dynamic linking or as header-only includes) without custom build scripts like the Conan approach?
What I've been doing to manage dependencies in a way that doesn't depress me much has been Nix flakes, which allows me a pretty straightforward `nix build` with the correct dependencies built in.
I'm just a bit curious though; a lot of C libraries are system-wide, and usually require the system package manager (e.g. libsdl2-dev) does this have an elegant way to handle those?
A C++ build system, at the core, boils down to calling gcc foo.c -o foo.obj / link foo.obj foo.exe (please forgive me if I got the syntax wrong).
Sure, you have more .c files, and you pass some flags but that's the core.
I've recently started a new C++ program from scratch.
What build system did I write?
I didn't. I told Claude:
"Write a bun typescript script build.ts that compiles the .cpp files with cl and creates foo.exe. Create release and debug builds, trigger release build with -release cmd-line flag".
And it did it in minutes and it worked. And I can expand it with similar instructions. I can ask for release build with all the sanitize flags and claude will add it.
The particulars don't matter. I could have asked for a makefile, or cmake file or ninja or a script written in python or in ruby or in Go or in rust. I just like using bun for scripting.
The point is that in the past I tried to learn cmake and, good lord, it's days spent learning something that I'll spend 1 hr using.
It just doesn't make sense to learn any of those tools given that Claude can give me a working build system of any kind in minutes.
It makes even less sense to create new build tools. Even if you create the most amazing tool, I would still choose spending a minute asking claude than spending days learning arbitrary syntax of a new tool.
https://cmkr.build/
But how does this tool figure out where the header files and build instructions for the included libraries are? Is there an expected layout or industry-wide consensus?
https://github.com/randerson112/craft/blob/main/CMakeLists.t...
...and for custom requirements a manually created CMakeLists.extras.txt as escape hatch.
Unclear to me how more interesting scenarios like compiler- and platform-specific build options (enable/disable warnings, defines, etc...), cross-compilation via cmake toolchain files (e.g. via Emscripten SDK, WASI SDK or Android SDK/NDK) would be handled. E.g. just trivial things like "when compiling for Emscripten, include these source files, but not those others".
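For what it's worth, the "these sources only for Emscripten" case is a one-liner in hand-written CMake; the open question is how a generated CMakeLists would expose it (the file names here are invented):

```cmake
# EMSCRIPTEN is set by the Emscripten toolchain file (emcmake).
if(EMSCRIPTEN)
  target_sources(my_app PRIVATE src/backend_web.cpp)
else()
  target_sources(my_app PRIVATE src/backend_native.cpp)
endif()
```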
Yes, config packages are better. But I think doing find_package everywhere is better. Assuming you install an SDK for others to use your project. If you're a "product", vendor away. The issue comes when you want to vendor X and Y and both vendor Z independently. Then you're stuck de-vendoring at least one and figuring out how to provide it yourself internally. IMO, better to just let Z make its own install tree and find it as a package from there.
One can write good Find modules, but there is some "taste" involved. I wish we had more good examples to use as templates.
This is now a build system generator generator. This is the wrong solution imho. The right solution is to just build a build system that doesn’t suck. Cmake sucks. Generating suck is the wrong angle imho.
That’s an existence proof that a new tool that doesn’t suck can take over an ecosystem.
uv was so good it was just obviously significantly better.
All I really want is Bazel/Buck but in a simple and easy to use way. I feel like this can be done.
Just alone reverse engineering the Xcode and Visual Studio project file formats for each IDE version isn't fun, but this "boring" grunt work is what makes cmake so valuable.
The core ideas of cmake are sound, it's only the scripting language that sucks.
Build systems don't plan to converge in the future =)
One nice feature of MSVC is that you can describe the linker dependencies in the source files (via #pragma comment(lib, ...)), this enables building fairly complex single-file tools trivially without a build system like this:
...without having to specify system dependencies like kernel32 etc. on the cmdline. Clang does have clang-cl with similar command-line options.
Once you appreciate the vastness of the problem, you will see that having a vibrant ecosystem of different competing package managers sucks. This is a problem where ONE standard that can handle every situation is incalculably better than many different solutions which solve only slices of the problem. I don't care how terse craft's toml file is: if it can't cross compile, it's useless to me. So my project can never use your tool, which implies other projects will have the same problem, which implies you're not the one package manager / build system, which means you're part of the problem, not the solution.
The Right Thing is to adopt one unilateral standard for all projects. If you're remotely interested in working on package managers, the best way to help the human race is to fix all of the outstanding things about Conan that prevent it from being the One Thing. It's the closest to being the One Thing, and yet there are still many hanging chads:
- its terribly written documentation
- its incomplete support for editable packages
- its only nascent support for "workspaces"
- its lack of NVIDIA recipes
If you really can't stand to work on Conan (I wouldn't blame you), another effort that could help is the common package specification format (CPS). Making that a thing would also be a huge improvement. In fact, if it succeeds, then you'd be free to compete with conan's "frontend" ergonomics without having to compete with the ecosystem.
Is it though?
When I read the tutorial: https://docs.conan.io/2/tutorial/consuming_packages/build_si...
It says to hand write a `CMakeLists.txt` file. This is before it has me create a `conanfile.txt` even.
I have the same complaint about vcpkg.
It seems like it takes: `(conan | vcpkg) + (cmake | autotools) + (ninja | make)` to do the basics what cargo does.
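Concretely, the minimal Conan 2 flow the tutorial describes needs a conanfile.txt like this (package and version are illustrative) on top of the hand-written CMakeLists.txt:

```text
[requires]
fmt/10.2.1

[generators]
CMakeDeps
CMakeToolchain
```

followed by something like `conan install . --build=missing` and then a normal CMake configure pointed at the generated toolchain file.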
CMake is a combination of a warthog of a specification language and mechanisms for handling a zillion idiosyncrasies and corner cases of everything.
I doubt that < 10,000 lines of C code can cover much of that.
I am also doubtful that developers are able to express the exact relations and semantic nuances they want to, as opposed to some default that may make sense for many projects, but not all.
Still - if it helps people get started on simpler or more straightforward projects - that's neat :-)