[HN Gopher] The C-Shaped Hole in Package Management
       ___________________________________________________________________
        
       The C-Shaped Hole in Package Management
        
       Author : tanganik
       Score  : 53 points
       Date   : 2026-01-27 10:34 UTC (20 hours ago)
        
 (HTM) web link (nesbitt.io)
 (TXT) w3m dump (nesbitt.io)
        
       | rwmj wrote:
       | Please don't. C packaging in distros is working fine and doesn't
       | need to turn into crap like the other language-specific package
       | managers. If you don't know how to use pkgconf then that's your
       | problem.
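        | 
        | (For anyone who genuinely doesn't know how: a minimal sketch,
        | assuming pkgconf or pkg-config is on the PATH and the library
        | ships a .pc file; "libcurl" is just an example module name.)
        | 
        |     cc app.c $(pkg-config --cflags --libs libcurl) -o app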
        
         | aa-jv wrote:
         | ^ This.
         | 
          | Plus, we already have great C package management. It's called
          | CMake.
        
           | rwmj wrote:
            | I hate autotools, but I have Stockholm syndrome so I still
           | use it.
        
             | aa-jv wrote:
              | It's not so hard once you learn it. Of course, you will
             | carry that trauma with you, and rightly so. ;)
        
             | kergonath wrote:
              | I hated autotools until I had to use cmake. Now, I still
              | hate autotools, but I hate cmake more.
        
           | bluGill wrote:
            | CMake is not a package management tool; it is a build tool.
           | It can be abused to do package management, but that isn't
           | what it is for.
        
         | JohnFen wrote:
         | I agree entirely. C doesn't need this. That I don't have to
         | deal with such a thing has become a new and surprising
         | advantage of the language for me.
        
           | sebastos wrote:
           | I find this sentiment bewildering. Can you help me understand
           | your perspective? Is this specifically C or C++? How do you
           | manage a C/C++ project across a team without a package
           | manager? What is your methodology for incorporating third
           | party libraries?
           | 
            | I have spent the better part of 10 years navigating around
           | C++'s deplorable dependency management story with a slurry of
           | Docker and apt, which had better not be part of everyone's
           | story about how C is just fine. I've now been moving our team
           | to Conan, which is also a complete shitshow for the reasons
           | outlined in the article: there is still an imaginary line
           | where Conan lets go and defers to "system" dependencies, with
           | a completely half-assed and non-functional system for
           | communicating and resolving those dependencies which doesn't
            | work at all once you need to cross-compile.
        
             | spauldo wrote:
             | You're confusing two different things.
             | 
             | For most C and C++ software, you use the system packaging
             | which uses libraries that (usually) have stable ABIs. If
             | your program uses one of those problematic libraries, you
             | might need to recompile your program when you update the
             | library, but most of the time there's no problem.
             | 
             | For your company's custom mission critical application
             | where you need total control of the dependencies, then yes
             | you need to manage it yourself.
        
               | sebastos wrote:
               | Ok - it sounds like you're right, but I think despite
               | your clarification I remain confused. Isn't the linked
                | post all about how those two things always mingle at the
                | boundary? Like, suppose I want to develop and distribute
                | a C++ user-space application in a cross-platform way. I
                | want to manage all my dependencies at the
               | language level, and then there's some collection of
               | system libraries that I may or may not decide to rely on.
               | How do I manage and communicate that surface area in a
               | cross platform and scalable way? And what does this feel
               | like for a developer - do you just run tests for every
               | supported platform in a separate docker container?
        
         | hliyan wrote:
         | When I used to work with C many years ago, it was basically:
         | download the headers and the binary file for your platform from
         | the official website, place them in the header/lib paths,
         | update the linker step in the Makefile, #include where it's
         | needed, then use the library functions. It was a little bit
         | more work than typing "npm install", but not so much as to
         | cause headaches.
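          | 
          | (Roughly, the Makefile step looked like this; the paths and
          | the library name here are invented for illustration:)
          | 
          |     CFLAGS  += -Ithird_party/foo/include
          |     LDFLAGS += -Lthird_party/foo/lib
          |     LDLIBS  += -lfoo
          | 
          |     app: main.c
          |             $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $< $(LDLIBS)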
        
           | fredrikholm wrote:
            | And with header-only libraries (like stb) it's even less than
           | that.
           | 
           | I primarily write C nowadays to regain sanity from doing my
           | day job, and the fact that there is zero bit rot and
            | setup/fixing/fiddling to get things running is in stark
           | contrast to the horrors I have to deal with professionally.
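            | 
            | (The stb pattern, for anyone who hasn't seen it; a minimal
            | sketch using stb_image's actual API, file name invented:)
            | 
            |     /* in exactly one .c file: */
            |     #define STB_IMAGE_IMPLEMENTATION
            |     #include "stb_image.h"
            | 
            |     int w, h, n;
            |     unsigned char *px = stbi_load("in.png", &w, &h, &n, 4);
            |     /* every other file just includes the header */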
        
           | zbentley wrote:
           | What do you do when the code you downloaded refers to symbols
           | exported by libraries not already on your system? How do you
           | figure out where those symbols should come from? What if it
           | expects version-specific behavior and you've already
           | installed a newer version of libwhatever on your system (I
           | hope your distro package manager supports downgrades)?
           | 
           | These are very, very common problems; not edge cases.
           | 
           | Put another way: y'all know we got all these other package
           | management/containerization/isolation systems in large part
            | _because_ people tried the C-library-install-by-hand/
            | system-package-all-the-things approaches and found them severely
           | lacking, right? CPAN was considered a godsend for a reason.
           | NPM, for all its hilarious failings, even moreso.
        
             | JohnFen wrote:
             | > These are very, very common problems; not edge cases.
             | 
             | Honestly? Over the course of my career, I've only rarely
             | encountered these sorts of problems. When I have, they've
             | come from poorly engineered libraries anyway.
        
               | bengarney wrote:
               | Here is a thought experiment (for devs who buy into
                | package managers). Take the hash of a program and all its
                | dependencies. Behavior is different for every unique hash.
               | With package managers, that hash is different on every
               | system, including hashes in the future that are
               | unknowable by you (ie future "compatible" versions of
               | libraries).
               | 
               | That risk/QA load can be worth it, but is not always. For
               | an OS, it helps to be able to upgrade SSL (for instance).
               | 
                | In my use cases, all this is a strong net negative. npm-
                | based projects randomly break when new "compatible"
                | versions of libraries install for new devs. C/C++ projects
               | don't build because of include/lib path issues or lack of
               | installation of some specific version or who knows what.
               | 
               | If I need you to install the SDL 2.3.whatever libraries
               | exactly, or use react 16.8.whatever to be sure the app
               | runs, what's the point of using a complex system that
               | will almost certainly ensure you have the wrong version?
               | Just check it in, either by an explicit version or by
               | committing the library's code and building it yourself.
        
               | sebastos wrote:
               | Check it in and build it yourself using the common build
               | system that you and the third party dependency definitely
               | definitely share, because this is the C/C++ ecosystem?
        
             | tpoacher wrote:
             | You are conflating development with distribution of
             | binaries (a problem which interpreted languages do not
             | have, I hasten to add).
             | 
             | 1. The accepted solution to what you're describing in terms
             | of development, is passing appropriate flags to
             | `./configure`, specifying the path for the alternative
             | versions of the libraries you want to use. This is as
             | simple as it gets.
             | 
             | As for where to get these libraries from in the event that
             | the distro doesn't provide the right version, `./configure`
             | is basically a script. Nothing stopping you from printing a
             | couple of ftp mirrors in the output to be used as a target
             | to wget.
             | 
             | 2. As for the problem of distribution of binaries and
             | related up-to-date libraries, the appropriate solution is a
             | distro package manager. A c package manager wouldn't come
             | into this equation at all, unless you wanted to compile
             | from scratch to account for your specific circumstances, in
             | which case, goto 1.
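              | 
              | (To make point 1 concrete, a sketch; the prefix and the
              | library are invented for illustration:)
              | 
              |     ./configure PKG_CONFIG_PATH=/opt/foo-1.2/lib/pkgconfig \
              |                 CPPFLAGS=-I/opt/foo-1.2/include \
              |                 LDFLAGS=-L/opt/foo-1.2/lib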
        
           | krautsauer wrote:
           | And then you got some minor detail different from the
              | compiled library and boom, UB because some struct is laid
              | out differently or the calling convention is wrong or you
           | compiled with a different -std or ...
        
             | rwmj wrote:
             | Which is exactly why you should leave it to the distros to
             | construct a consistent build environment. If your distro
             | regularly gets this wrong then you do have a problem.
        
         | zbentley wrote:
         | I mean ... it clearly isn't working well if problems like "what
         | is the libssl distribution called in a given Linux distro's
         | package manager?" and "installing a MySQL driver in four of the
         | five most popular programming languages in the world requires
         | either bundling binary artifacts with language libraries or
         | invoking a compiler toolchain in unspecified, unpredictable,
         | and failure-prone ways" are both incredibly common and
         | incredibly painful for many/most users and developers.
         | 
         | The idea of a protocol for "what artifacts in what languages
         | does $thing depend on and how will it find them?" as discussed
         | in the article would be incredibly powerful...IFF it were
         | adopted widely enough to become a real standard.
        
           | rwmj wrote:
           | Assuming that your distro is, say, Debian, then you'll know
           | the answer to that is always libssl-dev, and if you cannot
           | find it then there's a handy search tool (both CLI and web
           | page: https://packages.debian.org) to help you.
           | 
           | I'm not very familiar with MySQL, but for C (which is what
           | we're talking about here) I typed mysql here and it gave me a
            | bunch of suggestions:
            | https://packages.debian.org/search?suite=default&section=all...
            | Debian doesn't ship binary blobs, so I guess that's not a
            | problem.
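            | 
            | (Concretely, on Debian it's something like the below; check
            | the package names with the search above, they may drift:)
            | 
            |     apt-get install libssl-dev default-libmysqlclient-dev
            |     pkg-config --modversion openssl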
           | 
           | "I have to build something on 10 different distros" is not
           | actually a problem that many people have.
           | 
           | Also, let the distros package your software. If you're not
           | doing that, or if you're working against the distros, then
           | you're storing up trouble.
        
             | lstodd wrote:
             | Actually "build something on 10 different distros" is not a
             | problem either, you just make 10 LXC containers with those
              | distros on a $20/mo second-hand Hetzner box, sic Jenkins
              | with trivial shell scripts on them, and forget about it for
             | a couple years or so until a need for 11th distro arrives,
             | in which case you spend half an hour or so to set it up.
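              | 
              | (Per-distro setup is roughly this; a sketch using LXC's
              | download template, container and script names invented:)
              | 
              |     lxc-create -n build-bookworm -t download -- \
              |         -d debian -r bookworm -a amd64
              |     lxc-attach -n build-bookworm -- sh /ci/build.sh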
        
           | fc417fc802 wrote:
           | > what is the libssl distribution called in a given Linux
           | distro's package manager?
           | 
           | I think you're going to need to know that either way if you
           | want to run a dynamically linked binary using a library
           | provided by the OS. A package manager (for example Cargo)
           | isn't going to help here because you haven't vendored the
           | library.
           | 
           | To match the npm or pip model you'd go with nix or guix or
           | cmake and you'd vendor everything and the user would be
           | expected to build from scratch locally.
           | 
           | Alternatively you could avoid having to think about distro
           | package managers by distributing with something like flatpak.
           | That way you only need to figure out the name of the libssl
           | package the one time.
           | 
            | Really, issues shouldn't arise unless you try to use a library
           | that doesn't have a sane build system. You go to vendor it
           | and it's a headache to integrate. I guess there's probably
           | more of those in the C world than elsewhere but you could
           | maybe just try not using them?
        
         | duped wrote:
         | > C packaging in distros is working fine
         | 
         | GLIBC_2.38 not found
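          | 
          | (You can see which symbol versions a binary demands versus
          | what the host glibc provides; "./app" is a placeholder:)
          | 
          |     objdump -T ./app | grep GLIBC_   # versions the binary wants
          |     ldd --version                    # glibc actually installed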
        
           | Joker_vD wrote:
           | Like, seriously. It's impossible to run Erlang/OTP 21.0 on a
           | modern Ubuntu/Debian because of libssl/glibc shenanigans so
           | your best bet is to take a container with the userspace of
            | Ubuntu 16 (which somehow works just fine on a modern kernel,
            | what a miracle! Why can't Linux's userspace do something like
           | that?) and install it in there. Or just listen to "JuST doN'T
            | rUN ouTdaTED SoftWAre" advice. Yeah, thanks a lot.
        
           | amiga386 wrote:
           | If you have a distro-supplied binary that doesn't link with
           | the distro-supplied glibc, something is very very wrong.
           | 
           | If you're supplying your own binaries and not
           | compiling/linking them against the distro-supplied glibc,
           | that's on you.
        
             | duped wrote:
             | Linking against every distro-supplied glibc to distribute
             | your own software is as unrealistic as getting
             | distributions to distribute your software for you. The
             | model is backwards from what users and developers expect.
             | 
             | But that's not the point I'm making. I'm attacking the idea
             | that they're "working just fine" when the above is a bug
             | that nearly everyone hits in the wild as a user and a
             | developer shipping software on Linux. It's not the only one
             | caused by the model, but it's certainly one of the most
             | common.
        
               | amiga386 wrote:
                | It's hardly unrealistic - _most_ free software has been
                | packaged by each distro. Very handy for the developer:
                | just email the distro maintainers (or post on your
                | mailing list) that the new version is out; they'll get
               | round to packaging it. Very handy for the user, they just
               | "apt install foo" and ta-da, Foo is installed.
               | 
               | That was very much the point of using a Linux distro (the
               | clue is in the name!) Trying to work in a Windows/macOS
               | way where the "platform" does fuck-all and the developer
               | has to do it all themselves is the opposite of how
               | distros work.
        
               | duped wrote:
               | User now waits for 3rd party "maintainers" to get around
               | to manipulating the software they just want to use from
               | the 1st party developer they have a relationship with. If
               | ever.
               | 
               | I understand this is how distros work. What I'm saying is
               | that the distros are wrong, this is a bad design. It
               | leads to actual bugs and crashes for users. There have
               | been significant security mistakes made by distro
               | maintainers. Distros strip bug fixes and package old
               | versions. It's a mess.
               | 
               | And honestly, a lot of software is not free and won't be
               | packaged by distros. Most software I use on my own
               | machines is not packaged by my distro. _ALL_ the software
               | I use professionally is vendored independently of _any_
                | distribution. And when I've shipped to various
               | distributions in the past, I go to great lengths to
               | _never_ link anything if possible that could be from the
               | distro, because my users do not know how to fix it.
        
         | Joker_vD wrote:
         | Well, if you're fine with using 3-year old versions of those
         | libraries packaged by severely overworked maintainers who at
         | one point seriously considered blindly converting everything
          | into Flatpaks and shipping _those_ simply because they can't
          | muster enough manpower, sure.
         | 
         | "But you can use 3rd party repositories!" Yeah, and I also can
         | just download the library from its author's site. I mean, if I
         | trust them enough to run their library, why do I need
         | opinionated middle-men?
        
           | rwmj wrote:
           | If this is a concern (which it rarely is) then you can pitch
           | in with distro packaging. Volunteers are always welcome.
           | 
           | > "But you can use 3rd party repositories!"
           | 
           | That's not something I said.
        
             | sebastos wrote:
             | >(which it rarely is)
             | 
              | You're saying it's _rare_ for developers to want to advance
              | a dependency past the ancient version contained in
              | <whatever the oldest release they want to support>?
             | 
             | Speaking for the robotics and ML space, that is simply the
             | opposite of a true statement where I work.
             | 
             | Also doesn't your philosophy require me to figure out the
             | packaging story for every separate distro, too? Do you just
             | maintain multiple entirely separate dependency graphs, one
             | for each distro? And then say to hell with Windows and Mac?
             | I've never practiced this "just use the system package
             | manager" mindset so I don't understand how this actually
             | works in practice for cross-platform development.
        
         | amluto wrote:
         | I've contemplated this quite a bit (and I personally maintain a
         | C++ artifact that I deploy to production machines, and I
         | generally prefer not to use containers for it), and I think I
         | disagree.
         | 
         | Distributions have solved a very specific problem quite nicely:
         | they are building what is effectively one application (the
         | distro) with many optional pieces, it has one set of
         | dependencies, and the users update the whole thing when they
         | update. If the distro wants to patch a dependency, it does so.
         | ELF programs that set DT_INTERP to /lib/ld-linux-[arch].so.1
         | opt in to the distro's set of dependencies. This all works
         | remarkably well and a lot of tooling has been built around it.
         | 
         | But a lot of users don't work in this model. We build C/C++
         | programs that have their own set of dependencies. We want to
         | try patching some of them. We want to try omitting some. We
         | want to write programs that are hermetic in the sense that we
         | are guaranteed to notice if we accidentally depend on something
         | that's actually an optional distro package. The results ... are
         | really quite bad, unless the software you are building is built
         | within a distro's build system.
         | 
         | And the existing tooling is _terrible_. Want to write a program
          | that opts out of the distro's library path? Too bad --
         | DT_INTERP really really wants an absolute path, and the one and
         | only interpreter reliably found at an absolute path will not
         | play along. glibc doesn't know how to opt out of the distro's
         | library search path. There is no ELF flag to do it, nor is
         | there an environment variable. It doesn't even really support a
         | mode where DT_INTERP is not used but you can still do dlopen!
          | So you _can't_ do the C equivalent of Python venvs without a
         | giant mess.
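          | 
          | (The closest workarounds I know of are rpath at link time
          | and rewriting the interpreter after the fact; a sketch,
          | paths invented, patchelf is a separate third-party tool:)
          | 
          |     cc app.c -o app -Wl,-rpath,'$ORIGIN/lib'
          |     patchelf --set-interpreter /opt/env/ld-linux-x86-64.so.2 app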
         | 
         | pkgconf does absolutely nothing to help. Sure, I can write a
         | makefile that uses pkgconf to find the distro's libwhatever,
          | and _if I'm willing to build from source on each machine_ (or
          | I'm writing the distro itself) and _if libwhatever is an
          | acceptable version_ and _if the distro doesn't have a
          | problematic patch to it_, then it works. This is completely
         | useless for people like me who want to build something remotely
         | portable. So instead people use enormous kludges like
         | Dockerfile to package the entire distro with the application in
         | a distinctly non-hermetic way.
         | 
         | Compare to solutions that actually do work:
         | 
         | - Nix is somewhat all-encompassing, but it can simultaneously
         | run multiple applications with incompatible sets of
         | dependencies.
         | 
         | - Windows has a distinct set of libraries that are on the
          | system side of the system vs ISV boundary. They spent decades
         | doing an admirable job of maintaining the boundary. (Okay, they
         | seem to have forgotten how to maintain anything in 2026, but
         | that's a different story.) You can build a Windows program on
         | one machine and run it somewhere else, and it works.
         | 
         | - Apple bullies everyone into only targeting a small number of
         | distros. It works, kind of. But ask people who like software
         | like Aperture whether it still runs...
         | 
         | - Linux (the syscall interface, not GNU/Linux) outdoes
         | Microsoft in maintaining compatibility. This is part of why
         | Docker works. Note that Docker and all its relatives basically
         | completely throw out the distro model of interdependent
         | packages all with the same source. OCI tries to replace it with
         | a sort-of-tree of OCI layers that are, in theory, independent,
         | but approximately no one actually uses it as such and instead
         | uses Docker's build system and layer support as an incredibly
         | poorly functioning and unreliable cache.
         | 
         | - The BSDs are basically the distro model except with one
         | single distro each that includes the kernel.
         | 
         | I would _love_ functioning C virtual environments. Bring it on,
         | please.
        
         | geraldcombs wrote:
         | What "distro" package manager is available on Windows and
         | macOS? vcpkg doesn't provide binary packages and has quite a
         | few autotools-shaped holes. Homebrew is great as long as you're
         | building for your local machine's macOS version and
         | architecture, but if you want to support an actual user
         | community you're SOL.
        
         | dminik wrote:
         | No, just no.
         | 
         | Using system/distro packages is great when you're writing
         | server software and need your base system to be stable.
         | 
         | But, for software distributed to users, this model fails hard.
         | You generally need to ship across OSs, OS versions and for that
         | you need consistent library versions. Your software being
         | broken because a distro maintainer has decided that a 3 year
         | old version of your dependency is close enough is terrible.
        
           | MiiMe19 wrote:
            | If your software is not being distributed by that distribution
           | and is using some external download tool, it is inherently
           | not supported and the only way to make sure it works is to
           | compile from source.
        
       | Piraty wrote:
       | very related:
       | https://michael.orlitzky.com/articles/motherfuckers_need_pac...
        
         | manofmanysmiles wrote:
         | One of my favorite blog posts. I enjoy it every time I read it.
         | I've implemented two C package managers and they... were fine.
         | I think it's a pretty genuinely hard thing to get right outside
         | of a niche.
         | 
         | I've written two C package managers in my life. The most recent
         | one is mildly better than the first from a decade ago, but
         | still not quite right. If I ever build one I think is good
          | enough I'll share, only to most likely learn about 50 edge
         | cases I didn't think of :)
        
         | smw wrote:
         | The fact that the first entry in his table says that apt
         | doesn't have source packages is a good marker of the quality of
         | this post.
        
       | josefx wrote:
       | > Conan and vcpkg exist now and are actively maintained
       | 
       | I am not sure if it is just me, but I seem to constantly run into
       | broken vcpkg packages with bad security patches that keep them
       | from compiling, cmake scripts that can't find the binaries,
       | missing headers and other fun issues.
        
         | fsloth wrote:
         | C++ community would be better off without Conan.
         | 
         | Avoid at all cost.
        
         | adzm wrote:
         | I've never had a problem with vcpkg, surprisingly. Perhaps it
         | is just a matter of which packages we are using.
        
         | Piraty wrote:
          | Yes, I found Conan appears to have lax rules regarding package
          | maintenance, which leads to inconsistent recipes.
        
       | xyzsparetimexyz wrote:
       | C*** shaped?
        
       | CMay wrote:
       | I don't trust any language that fundamentally becomes reliant on
       | package managers. Once package managers become normalized and
       | pervasively used, people become less thoughtful and investigative
       | into what libraries they use. Instead of learning about who
       | created it, who manages it, what its philosophy is, people
       | increasingly just let'er rip and install it then use a few
       | snippets to try it. If it works, great. Maybe it's a little
       | bloated and that causes them to give it a side-eye, but they can
       | replace it later....which never comes.
       | 
        | That would be fine if it only affected that first layer, of a
       | basic library and a basic app, but it becomes multiple layers of
       | this kind of habit that then ends up in multiple layers of
       | software used by many people.
       | 
       | Not sure that I would go so far as to suggest these kinds of
       | languages with runaway dependency cultures shouldn't exist, but I
       | will go so far as to say any languages that don't already have
       | that culture need to be preserved with respect like uncontacted
       | tribes in the Amazon. You aren't just managing a language, you
       | are also managing process and mind. Some seemingly inefficient
       | and seemingly less powerful processes and ways of thinking have
       | value that isn't always immediately obvious to people.
        
       | duped wrote:
       | Missing in this discussion is that package management is tightly
       | coupled to module resolution in nearly every language. It is not
       | enough to merely install dependencies of given versions but to do
       | so in a way that the language toolchain and/or runtime can find
       | and resolve them.
       | 
       | And so when it comes to dynamic dependencies (including shared
       | libraries) that are not resolved until runtime you hit language-
       | level constraints. With C libraries the problem is not merely
       | that distribution packagers chose to support single versions of
       | dependencies because it is _easy_ but because the loader
        | (provided by your C toolchain) isn't designed to support it.
       | 
       | And if you've ever dug into the guts of glibc's loader it's 40
       | years of unreadable cruft. If you want to take a shot at the
       | C-shaped hole, take a look at that and look at decoupling it from
        | the toolchain and adding support for multiple-version resolution
        | and other basic features of module resolution in 2026.
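        | 
        | (For a sense of what the loader can and can't express today,
        | a sketch; libfoo and foo_init are invented names:)
        | 
        |     #include <dlfcn.h>
        |     #include <stdio.h>
        | 
        |     int main(void) {
        |         /* you can pin a major version via the soname, but the
        |          * loader has no notion of version ranges or of two
        |          * versions coexisting in one namespace */
        |         void *h = dlopen("libfoo.so.2", RTLD_NOW | RTLD_LOCAL);
        |         if (!h) { fputs(dlerror(), stderr); return 1; }
        |         int (*init)(void) = (int (*)(void))dlsym(h, "foo_init");
        |         return init ? init() : 1;
        |     }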
        
         | pif wrote:
         | > And if you've ever dug into the guts of glibc's loader it's
         | 40 years of unreadable cruft.
         | 
         | You meant: it's 40 years of debugged and hardened run-
         | everywhere never-fails code, I suppose.
        
           | duped wrote:
           | No, I meant 40 years of unreadable cruft. It's not hard to
           | write a correct loader. It's very hard to understand glibc's
           | implementation.
        
       | krautsauer wrote:
       | Why is meson's wrapdb never mentioned in these kinds of posts, or
       | even the HN discussion of them?
        
         | johnny22 wrote:
         | probably because meson doesn't have a lot of play outside
         | certain ecosystems.
         | 
         | I like wrapdb, but I'd rather have a real package manager.
        
       | conorbergin wrote:
       | I use a lot of obscure libraries for scientific computing and
       | engineering. If I install it from pacman or manage to get an AUR
       | build working, my life is pretty good. If I have to use a Python
        | library, the faff becomes unbearable: make a venv, delete the
       | venv, change python version, use conda, use uv, try and install
       | it globally, change python path, source .venv/bin/activate. This
       | is less true for other languages with local package management,
       | but none of them are as frictionless as C (or Zig which I use
        | mostly). The other issue is .venvs, node_modules and equivalents
       | take up huge amounts of disk and make it a pain to move folders
       | around, and no I will not be using a git repo for every throwaway
       | test.
        
         | auxym wrote:
          | uv has mostly solved the Python issue. IME its dependency
          | resolution is fast and just works. Packages are hard-linked
         | from a global cache, which also greatly reduces storage
         | requirements when you work with multiple projects.
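            | 
            | (The happy path, assuming uv is installed; "requests" is
            | just an example dependency:)
            | 
            |     uv init myproj && cd myproj
            |     uv add requests
            |     uv run main.py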
        
           | amluto wrote:
           | uv does nothing to help when you have old, crappy, barely
           | maintained Python packages that don't work reliably.
        
           | storystarling wrote:
           | uv is great for resolution, but it seems like it doesn't
           | really address the build complexity for heavy native
           | dependencies. If you are doing any serious work with torch or
           | local LLMs, you still run into issues where wheels aren't
           | available for your specific cuda/arch combination. That is
           | usually where I lose time, not waiting for the resolver.
        
           | drowsspa wrote:
            | You still need to compile when those libraries are not
            | precompiled.
        
         | megolodan wrote:
          | compiling an open-source C project isn't time-consuming?
        
       | advael wrote:
       | I think system package managers do just fine at wrangling static
       | library dependencies for compiled languages, and if you're
       | building something that somehow falls through the cracks of them
       | then I think you should probably just be using git or some kinda
       | vcs for whatever you're doing, not a package manager
       | 
        | But on the other hand, I am used to Arch, which both does
        | package management à la carte as a rolling-release distro and has
       | a pretty extensively-used secondary open community ecosystem for
       | non-distro-maintained packages, so maybe this isn't as true in
       | the "stop the world" model the author talks about
        
       | pornel wrote:
       | Has anyone here even read the article?! All the comments here
       | assume they're building a package manager for C!
       | 
       | They're writing a tool to discover and index all indirect
       | dependencies across languages, including C libraries that were
       | smuggled inside other packages and weren't properly declared as a
       | dependency anywhere.
       | 
       | "Please don't" what? Please don't discover the duplicate and
       | potentially vulnerable C libraries that are out of sight of the
       | system package manager?
        
       | arkt8 wrote:
        | The biggest difficulty is not that; it's the many assumptions
        | you need when writing a makefile, and how to use different
        | versions of the same library. LD_PATH is something regarded as
        | potentially risky. Not that it is... but assumptions from the
        | past, like big monsters, are a barrier to simpler C tooling.
        
       ___________________________________________________________________
       (page generated 2026-01-28 07:01 UTC)