_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
 (HTM) Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
 (HTM)   Go is portable, until it isn't
       
       
        mbrumlow wrote 1 day ago:
         I really hate this type of blog. It pollutes the world with this
         attitude of “I messed up, now I have to frame the problem, and write
         a blog, in a way that lets my ego stay intact”, which results in
         blogs like this showing up in decision-making processes as “why you
         should not use Go”. And mostly people never look past the title.
        
         The fact is Go is portable: it provides the ability to cross compile
         out of the box and runs reasonably well on the other platforms it
         supports. But in this case, a decision that had little to do with Go,
         the desire to use C code from a non-Go project with their Go project,
         made things harder.
        
        These are not “just a set of constraints you only notice once you
        trip over them”, this is trivializing the mistake.
        
         The entire blog can be simplified to the following.
        
         We were ignorant, and then had to do a bunch of work because we were
         ignorant. It’s a common story in software. I don’t expect
         everybody to get it right the first time. But what we don’t need is
         sensationally titled blogs full of fluff that try to reason readers
         out of concluding the obvious: somebody in charge made uninformed
         decisions, and as a result the project became more complicated and
         probably took longer.
       
        p0w3n3d wrote 1 day ago:
         Basically everything is portable until it isn't. Java - the same. We
         fly in abstractions until we need to delete a file.
       
        regularfry wrote 1 day ago:
        All of this, every last bit of complexity and breakage and sweat, is
        downstream of this:
        
        > Journal logs are not stored in plain text. They use a binary format
        
        And it was entirely predictable and predicted that this sort of problem
        would be the result when that choice was made.
       
          dwattttt wrote 13 hours 23 min ago:
          That's why I hexify all binary files, to make it easier to understand
          them.
       
          valbaca wrote 1 day ago:
          Unix philosophy strikes again
       
        r_lee wrote 1 day ago:
        more like C is portable, until it isn't
       
        thomashabets2 wrote 1 day ago:
        The portability story for Go is awful. I've blogged about this before:
         [1] It's yet another example of the Go authors implementing the
         least-effort solution without even a slight thought to what it would
         mean down the line, creating a huge liability/debt forever in the
         language.
        
 (HTM)  [1]: https://blog.habets.se/2022/02/Go-programs-are-not-portable.ht...
       
        colonwqbang wrote 1 day ago:
         This has nothing to do with Go. You added a dependency which is not
         portable. It is well known that the systemd project only targets
         Linux.
        
        Vendorise systemd and compile only the journal parts, if they are
        portable and can be isolated from the rest. Otherwise just shell out to
        journalctl.
       
        orochimaaru wrote 1 day ago:
        So you can’t pull in c libraries built for different distributions
        and expect this to work.
        
         If you use pure Go, things are portable. The moment you use a C API,
         that portability doesn’t exist. This should be apparent.
       
          jeremyjh wrote 1 day ago:
          My assumption was that they were using a C API just from reading the
          headline. I don't use Go but these sorts of problems are common to
          any project doing that in just about any language.
       
        novoreorx wrote 1 day ago:
        This article reminds me of the days before LLMs ruled the world, when
        the word "agent" was most commonly used in the DevOps area,
        representing the program that ran on a remote machine to execute
        dispatched jobs or send metrics. Now I wonder how many developers would
        look at "agent" and think of this meaning.
       
        nicman23 wrote 1 day ago:
        so like every other language
       
        arianvanp wrote 1 day ago:
         FWIW I maintain an official implementation of the journal wire format
         in Go now, [1] so you can at least log to the journal without CGO.
         
         But that's just the journal wire format, which is a lot simpler than
         the disk format.
        
        I think a journal disk format parser in go would be a neat addition
        
 (HTM)  [1]: https://github.com/systemd/slog-journal
       
          mbrock wrote 1 day ago:
          I've got a pure Go journald file writer that works to some
          extent—it doesn't split, compress, etc, but it produces journal
          files that journalctl/sdjournal can read, concurrently. Only stress
          tested by running a bunch of parallel integration tests, will most
          likely not maintain it seriously, total newbie garbage, etc, but may
          be of interest to someone. I haven't really seen any other working
          journald file writers.
          
 (HTM)    [1]: https://github.com/lessrest/swash/tree/main/pkg/journalfile
       
        combiBean wrote 1 day ago:
         Was there not a third option: calling the journalctl CLI as a child
         process and consuming the parsed logs from its standard output? This
         might have avoided both the requirement to use CGO and the need to
         write a custom parser. But I guess I am missing something.
       
          tkone wrote 1 day ago:
          Ironically this is EXACTLY what the journald receiver for
          OpenTelemetry does, which, as they noted, is written in go.
          
          Specifically because you're only supposed to use that OR the c
          bindings by design because they want the ability to change in the
          internal format when it's necessary.
       
          IshKebab wrote 1 day ago:
          It's generally less robust to run CLI tools and scrape the output.
          Usually it isn't intended to be machine readable, and you have to
          handle extra failure modes, like incompatible tool versions, missing
          tools, incorrect parsers, etc.
          
          It's the lazy-but-bad solution.
       
            1718627440 wrote 3 hours 4 min ago:
            > Usually it isn't intended to be machine readable
            
             It usually is, because that is the UNIX philosophy, and programs
             that intermingle output with layout often stop doing so when they
             aren't writing to a terminal.
       
            zbentley wrote 1 day ago:
            I think a lot is riding on that “generally”. You’re right
            that the default approach/majority of cases should avoid shelling
            out wherever possible, but there are a large minority of situations
            where doing that does make sense, including:
            
            Calling a CLI tool which will be present everywhere your program
            might reasonably be installed (e.g. if your program is a MySQL
            extension, it can probably safely assume the existence of mysqld).
            
             The CLI tool you want to call is vendored into or downloaded by
             your wrapper program, reducing installation overhead (this is not
             always a good idea for other reasons, but it does address a
             frequently cited reason not to shell out).
            
             The CLI tool’s functionality is both disjoint from the rest of
             your program and something that you have a frequent need to
             hard-kill. (Forking is much more error-prone than running a
             discrete subprocess; you can run your own program as a subprocess
             too, but in that case the functionality is probably not
             disjoint).
            
            Talking to POSIX CLI tools in a POSIX compatible way (granted most
            things those tools do are easier/faster in a language’s stdlib).
       
              mxey wrote 8 hours 36 min ago:
              I recently wrote some Go code for running containers and chose to
               use the docker CLI instead of an API client. The CLI is better
               known and better documented, and it is what replacements like
               Podman support. When there’s a problem, it’s easier to
              reproduce it by running the same CLI command. It also meant I
              wouldn’t need a whole lot of dependencies, and we needed the
              docker CLI anyway.
              
              Obviously you shouldn’t try to parse human-readable output.
       
            jeremyjh wrote 1 day ago:
            journalctl is designed for these use cases and has options to solve
            those issues. The lazy part here is you not doing any research
            about this tool before dismissing it as "not best practice", which
            is exactly what the fuckups who wrote this article did.
       
              khazit wrote 1 day ago:
              We dismissed using journalctl at the very start. We’ve had
              similar experiences with other CLI tools: the moment you start
              embedding them inside a program, you introduce a whole new class
              of problems. What if journalctl exits? What if it outputs an
              error? What if it hangs? On top of that, you have to manage the
              subprocess lifecycle yourself. It’s not as easy as it may seem.
              
              You can also argue that sd_journal (the C API) exists for this
               exact reason, rather than shelling out to journalctl. These are
               technical trade-offs; it doesn't mean we're fuckups.
       
                mxey wrote 8 hours 39 min ago:
                > You can also argue that sd_journal (the C API) exists for
                this exact reason, rather than shelling out to journalctl.
                
                 Quoting from [1]:
                 
                 > If you need access to the raw journal data in serialized
                 stream form without C API our recommendation is to make use
                 of the Journal Export Format, which you can get via
                 journalctl -o export or via systemd-journal-gatewayd.
                
                Certainly sounds like running journalctl, or using the gateway,
                is a supported option.
                
 (HTM)          [1]: https://systemd.io/JOURNAL_FILE_FORMAT/
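                 
                 A rough sketch of consuming that export stream from Go
                 (framing per the export format spec: text fields are
                 NAME=value lines, a field name line without '=' introduces a
                 value prefixed by a little-endian uint64 length, and a blank
                 line ends an entry); a sketch, not production-grade:
                 
                   package main
                   
                   import (
                     "bufio"
                     "encoding/binary"
                     "fmt"
                     "io"
                     "log"
                     "os/exec"
                     "strings"
                   )
                   
                   func main() {
                     cmd := exec.Command("journalctl", "-o", "export",
                       "--no-pager", "-n", "10")
                     out, err := cmd.StdoutPipe()
                     if err != nil {
                       log.Fatal(err)
                     }
                     if err := cmd.Start(); err != nil {
                       log.Fatal(err)
                     }
                     r := bufio.NewReader(out)
                     entry := map[string]string{}
                     for {
                       line, err := r.ReadString('\n')
                       if err == io.EOF {
                         break
                       }
                       if err != nil {
                         log.Fatal(err)
                       }
                       line = strings.TrimSuffix(line, "\n")
                       switch {
                       case line == "": // blank line ends an entry
                         fmt.Println(entry["MESSAGE"])
                         entry = map[string]string{}
                       case strings.Contains(line, "="): // plain text field
                         k, v, _ := strings.Cut(line, "=")
                         entry[k] = v
                       default: // binary field: LE uint64 size, data, newline
                         var size uint64
                         if err := binary.Read(r, binary.LittleEndian, &size); err != nil {
                           log.Fatal(err)
                         }
                         data := make([]byte, size+1) // +1 for trailing newline
                         if _, err := io.ReadFull(r, data); err != nil {
                           log.Fatal(err)
                         }
                         entry[line] = string(data[:size])
                       }
                     }
                     if err := cmd.Wait(); err != nil {
                       log.Fatal(err)
                     }
                   }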
       
                jeremyjh wrote 1 day ago:
                Does Go really not have any libraries capable of supervising an
                external program? If you'd considered journalctl, why didn't
                you mention it in the article? As many have pointed out here,
                it is the obvious and intended way to do this, and the path you
                chose was harder for reasons that seemed to surprise you but
                were entirely foreseeable.
       
                  mxey wrote 8 hours 38 min ago:
                  JFTR, of course it has a library for it
                  
 (HTM)            [1]: https://pkg.go.dev/os/exec
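                   
                   For reference, a hedged sketch of supervising such a child
                   process with os/exec and a context timeout (the journalctl
                   invocation is just illustrative):
                   
                     package main
                     
                     import (
                       "context"
                       "errors"
                       "log"
                       "os/exec"
                       "time"
                     )
                     
                     func main() {
                       // Kill the child if it hangs or we are shutting down.
                       ctx, cancel := context.WithTimeout(
                         context.Background(), 30*time.Second)
                       defer cancel()
                       
                       cmd := exec.CommandContext(ctx, "journalctl",
                         "-o", "json", "--no-pager", "-n", "10")
                       out, err := cmd.Output() // stderr is kept in *exec.ExitError
                       if err != nil {
                         var exitErr *exec.ExitError
                         if errors.As(err, &exitErr) {
                           log.Fatalf("journalctl exited with %d: %s",
                             exitErr.ExitCode(), exitErr.Stderr)
                         }
                         log.Fatal(err) // not found, context deadline, etc.
                       }
                       log.Printf("read %d bytes of journal output", len(out))
                     }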
       
            sigwinch wrote 1 day ago:
            journalctl with -o export produces a binary interchange format.
            Would you rather have bugs or API rot from that, or in an internal
            tool?
       
          redrove wrote 1 day ago:
          Yeah looks like they missed the forest for the trees.
          
          I see this kind of thing in our industry quite often; some Rube
          Goldberg machine being invented and kept on life support for years
          because of some reason like this, where someone clearly didn’t do
          the obvious thing and everyone now just assumes it’s the only
          solution and they’re married to it.
          
          But I’m too grumpy, work me is leaking into weekend me. I had
          debates around crap like this all week and I now see it everywhere.
       
          cookiengineer wrote 1 day ago:
           Also there is an -o json (or --output=json) option for journalctl
           which will output line-based JSON log entries. And it can simply be
           called with a Command as you pointed out.
       
          dardeaup wrote 1 day ago:
          This was the first thought that occurred to me too when I saw this
          post.
       
        ifh-hn wrote 1 day ago:
        This stuff is out of my frame of reference. I've never used Go before
        and have never had the need to go this low level (C APIs, etc); so
        please keep this in mind with my following questions, which are likely
        to sound stupid or ignorant.
        
         Can this binary not include compiled dependencies alongside it? I'm
         thinking like how on Windows, portable apps include the DLLs and
         other dependent exes in subfolders?
         
         Out of interest, and in relation to a less well liked Google
         technology, could Dart produce what they are after? My understanding
         is Dart can produce static binaries, though I'm not sure if these are
         truly portable in the compile-once-run-everywhere sense.
       
        trashburger wrote 1 day ago:
        Cross-compiling doesn't work because you're not defining your
        dependencies correctly and relying on the existence of things like
         system libraries and libc. Use `zig cc` with Go, which will let you
         compile against a stub glibc, or go all the way and use a hermetic
         build system (you should always do this anyhow).
       
        vb-8448 wrote 1 day ago:
         I wonder, for their use case, why not just submit the journal in
         binary format to the server and let the server do the parsing?
       
          xmodem wrote 1 day ago:
           It's crucial to be able to do some processing locally to filter out
           sensitive/noisy logging sources.
       
        hollow-moe wrote 1 day ago:
         Hashicorp's Vault Go binary is a whopping 512 MB beast. I recently
         considered using its agent mode to grab secrets for applications in
         containers, but the size of the layer it adds is unviably big. And
         they don't seem interested in making a split server/client binary
         either...
       
        davvid wrote 1 day ago:
        > We did not want to spend time maintaining a backward compatible
        parser or doing code archaeology. So this option was discarded.
        
        Considering all of the effort and hoop-jumping involved in the route
        that was chosen, perhaps this decision might be worth revisiting.
        
        In hindsight, maintaining a parser might be easier and more
        maintainable when compared to the current problems that were overcome
        and the future problems that will arise if/when the systemd libraries
        decide to change their C API interfaces.
        
        One benefit of a freestanding parser is that it could be made into a
        reusable library that others can use and help maintain.
       
          khazit wrote 1 day ago:
          There is an existing pure Go library [1] written by someone else. The
          issue is that we weren’t confident we could ship a reliable parser.
          We even included an excerpt from the systemd documentation, which
          didn’t exactly reassure us:
          
          > Note that the actual implementation in the systemd codebase is the
          only ultimately authoritative description of the format, so if this
          document and the code disagree, the code is right
          
          This required a lot of extra effort and hoop-jumping, but at least
          it’s on our side rather than something users have to deal with at
          deploy time.
          
          [1] 
          
 (HTM)    [1]: https://github.com/Velocidex/go-journalctl
       
          bb88 wrote 1 day ago:
          That's what I was thinking too.  A go native library is 10 times
          better in the go ecosystem than a c library linked to a go
          executable.
          
           Also, in the age of AI it seems possible to have it do the rewrite
           for you, which you can then iterate on further.
       
        ksajadi wrote 1 day ago:
         I think the title is a bit misleading. This is about very low level
         metrics collection from the system, which by definition is very
         system dependent. The term “portable” in a programming language
         usually means portability for applications, but this is more about
         the portability of utilities.
        
        Expecting a portable house and a portable speaker to have the same
        definition of portable is unfair.
       
        Rucadi wrote 1 day ago:
        If you really need a portable binary that uses shared libraries I would
        recommend building it with nix, you get all the dependencies including
        dynamic linker and glibc.
       
        larusso wrote 1 day ago:
         I think this is true for nearly all compiled languages. I had the
         same fun with Rust, OpenSSL and glibc. OP didn’t mention the fun
         with glibc when compiling on a fairly recent distro and trying to run
         the result on an older one. There is the “manylinux” project
         which provides docker images with a minimum glibc version installed
         so it’s compatible with newer ones.
         The switch to a newer OpenSSL version on Debian/Ubuntu created some
         issues for my tool. I replaced it with rustls to remove the
         dynamically linked library. I prefer completely statically linked
         binaries though. But that is really hard to do and damn near
         impossible on Apple systems.
       
        liampulles wrote 1 day ago:
        Well now you've gone and linked to a fascinating tool which I'm going
        to have to dive into and learn: [1] Thanks.
        
 (HTM)  [1]: https://kaitai.io/
       
        nasretdinov wrote 1 day ago:
         Go was never truly portable on Linux, unfortunately, due to its
         dependency on libc for DNS and user name resolution (because of PAM
         and other C-only APIs). Sure, a pure Go implementation exists, but it
         doesn't cover all cases, so in order to build a "good" binary for
         Linux you still needed to build the binary on the oldest supported
         Linux distro.
         
         If your production doesn't have any weird PAM or DNS setup then you
         can indeed just cross-compile everything and it works.
       
        0xbadcafebee wrote 1 day ago:
        There's no such thing as a portable application; only programs limited
        enough to be lucky not to conflict with the vagaries of different
        systems.
        
        That said, in my personal experience, the most portable programs tend
        to be written in either Perl or Shell. The former has a crap-ton of
        portability documentation and design influence, and the latter is
        designed to work from 40 year old machines up to today's. You can learn
        a lot by studying old things.
       
        t43562 wrote 1 day ago:
        Systemd. Binary logs are wonderful aren't they?
       
          LtWorf wrote 1 day ago:
           It's not that hard to read them without linking their library. The
           format is explained in their documentation.
          
 (HTM)    [1]: https://github.com/appgate/journaldreader
       
        pjmlp wrote 1 day ago:
         And a set of people rediscovered why cross compiling only works up to
         a certain extent, regardless of the marketing on the tin.
         
         The moment one needs to touch APIs that only exist on the target
         system, the fun starts, regardless of the programming language.
        
        Go, Zig, whatever.
       
          dwattttt wrote 1 day ago:
          You're thinking of cross platform codebases. There's nothing about
          cross compilation that stops the toolchain from knowing what APIs are
          present & not present on a target system.
       
            pjmlp wrote 1 day ago:
             Cross compilation and cross platform are synonymous in compiled
             languages, in regard to many of the issues one needs to care
             about.
             
             Cross platform goes beyond that in regard to UI, directory
             locations, user interactions,...
             
             Yeah, if you happen to have systemd Linux libraries on macOS to
             facilitate cross compilation into a compatible GNU/Linux system,
             then it works; that is how embedded development has worked for
             ages.
             
             What doesn't work is pretending that isn't something to care
             about.
       
              IshKebab wrote 1 day ago:
              > Cross compilation and cross platform are synonymous in compiled
              languages
              
              Err, no. Cross-platform means the code can be compiled natively
              on each platform. Cross-compilation is when you compile the
              binaries on one platform for a different platform.
       
                pjmlp wrote 1 day ago:
                Not at all, cross platform means executing the same application
                in many platforms, regardless of the hardware and OS specific
                features of each platform.
                
                 Cross-compilation is useless if you don't actually get to
                 execute the created binaries on the target platform.
                
                 Now, how do you intend to compile from GNU/Linux into z/OS,
                 so that we can execute the binary generated by the C compiler
                 ingesting code written on the GNU/Linux platform, in the z/OS
                 language environment inside an enclave not configured in
                 POSIX mode?
                 
                 Instead of z/OS, if you're feeling more modern, it can be a
                 UWP sandboxed application with identity on Windows.
       
                  IshKebab wrote 1 day ago:
                  > cross platform means executing the same application in many
                  platforms, regardless of the hardware and OS specific
                  features of each platform.
                  
                  That is a better definition yes. But it's still not
                  synonymous with cross-compilation, obviously. Most
                  cross-platform apps are not cross-compiled because it's
                  usually such a pain.
       
        cyberax wrote 1 day ago:
        Cgo is terrible, but if you just want some simple C calls from a
        library, you can use [1] to generate the bindings.
        
        It is a bit cursed, but works pretty well. I'm using it in my
        hardware-backed KMIP server to interface with PKCS11.
        
 (HTM)  [1]: https://github.com/ebitengine/purego
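         
         A hedged sketch of the purego pattern, using libc/getpid purely as a
         stand-in for whatever library you actually need (see the package
         docs for calling-convention limits):
         
           //go:build linux
           
           package main
           
           import (
             "fmt"
             
             "github.com/ebitengine/purego"
           )
           
           func main() {
             // Resolve the library at runtime instead of linking it at build
             // time, so the binary still starts where the library is absent.
             libc, err := purego.Dlopen("libc.so.6",
               purego.RTLD_NOW|purego.RTLD_GLOBAL)
             if err != nil {
               fmt.Println("libc not available, feature disabled:", err)
               return
             }
             var getpid func() int32
             purego.RegisterLibFunc(&getpid, libc, "getpid")
             fmt.Println("pid:", getpid())
           }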
       
        nurettin wrote 1 day ago:
        Go is portable until you have to deploy on AS/400
       
        CGamesPlay wrote 1 day ago:
        Use dlopen? I haven’t tried this in Go, but if you want a binary that
        optionally includes features from an external library, you want to use
        dlopen to load it.
       
          immibis wrote 1 day ago:
          It only works in a dynamically-linked binary, because the dynamic
          linker needs to be loaded.
       
        cosmin800 wrote 1 day ago:
        Well, that was pretty obvious that the portability is gone, especially
        when you start linking into systemd, even on the host system you have
        to link with the shared libs into systemd, you cannot link statically.
       
        jen20 wrote 1 day ago:
        From the article:
        
        > In the observability world, if you're building an agent for metrics
        and logs, you're probably writing it in Go.
        
        I'm pretty unconvinced that this is the case unless you happen to be on
        the CNCF train. Personally I'd write in Rust these days, C used to be
        very common too.
       
        ghola2k5 wrote 1 day ago:
        I’ve had some success using Zig for cross compiling when CGO is
        required.
       
          dilyevsky wrote 1 day ago:
           There are still some bugs when interacting with gold and
           cross-compiling to linux/arm64, but they're fixable with some
           workarounds...
       
          hansvm wrote 1 day ago:
          That's Uber's approach, right?
       
            sgt wrote 1 day ago:
            Is Uber using Zig for other things by now?
       
              hansvm wrote 1 day ago:
              I haven't heard.
       
        nunez wrote 1 day ago:
         You hit this real quick when trying to build container images from
         scratch. Theoretically you can drop a Go binary into a blank rootfs
         and it will run. This works most of the time, but anything that
         depends on Go's Postgres client requires libpq which requires libc.
         Cue EFILE runtime errors after running the container.
       
          nateb2022 wrote 1 day ago:
          > anything that depends on Go's Postgres client requires libpq which
          requires libc
          
          Try
          
 (HTM)    [1]: https://github.com/lib/pq
       
            mxey wrote 1 day ago:
            > For users that require new features or reliable resolution of
            reported bugs, we recommend using pgx which is under active
            development.
       
            AlbinoDrought wrote 1 day ago:
            I've also seen [1] used in many projects
            
 (HTM)      [1]: https://github.com/jackc/pgx
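             
             For reference, pgx used through database/sql stays pure Go, so a
             CGO_ENABLED=0 build keeps working; a minimal sketch with a
             placeholder connection string:
             
               package main
               
               import (
                 "database/sql"
                 "log"
                 
                 // Registers the "pgx" driver; pure Go, no libpq/cgo needed.
                 _ "github.com/jackc/pgx/v5/stdlib"
               )
               
               func main() {
                 db, err := sql.Open("pgx",
                   "postgres://user:pass@localhost:5432/mydb")
                 if err != nil {
                   log.Fatal(err)
                 }
                 defer db.Close()
                 if err := db.Ping(); err != nil {
                   log.Fatal(err)
                 }
                 log.Println("connected without cgo")
               }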
       
        kccqzy wrote 1 day ago:
        Interesting that it uses the C API to collect journals. I would’ve
        thought to just invoke journalctl CLI. On platforms like macOS where
        the CLI doesn’t exist it’s an error when you exec, not a build time
        error.
       
          LtWorf wrote 1 day ago:
          I did this a while ago but it only reads journal files sequentially
          and I didn't implement the needed stuff to use the indexes.
          
 (HTM)    [1]: https://github.com/appgate/journaldreader
       
          amiga386 wrote 1 day ago:
           That's also what gopsutil does, IIRC: it tries to look up process
           information with kernel APIs but can fall back to invoking
           /usr/bin/ps (which is setuid root on most systems) at the cost of
           being much less performant.
       
          ajross wrote 1 day ago:
           That's really not such a weird choice. The systemd library is
           pervasive and compatible.
          
          The weird bit is the analysis[1], which complains that a Go binary
          doesn't run on Alpine Linux, a system which is explicitly and
          intentionally (also IMHO ridiculously, but that's editorializing)
          binary-incompatible with the stable Linux C ABI as it's existed for
          almost three decades now.  It's really no more "Linux" than is
          Android, for the same reason, and you don't complain that your Go
          binaries don't run there.
          
           [1] I'll just skip without explanation how weird it was to see the
           author complain that the build breaks because they can't get
           systemd log output on... a Mac.
       
            khazit wrote 1 day ago:
            The macOS bit wasn’t about trying to get systemd logs on mac. The
            issue was that the build itself fails because libsystemd-dev
             isn’t available. We (naively) expected journal support to be
             something that we could detect and handle at runtime.
       
              ajross wrote 1 day ago:
               Well... yeah. It's a Linux API for a Linux feature only
               available on Linux systems. If you use a platform-specific API
               in a multiplatform project, the portability work falls on you.
               Do you expect to be able to run your SwiftUI code on Windows?
               Same thing!
       
        mmulet wrote 1 day ago:
        I ran into this issue when porting term.everything[0] from typescript
        to go. I had some c library dependencies that I did need to link, so I
        had to use cgo. 
        My solution was to do the build process on alpine linux[1] and use
        static linking[2]. This way it statically links musl libc, which is
        much friendlier with static linking than glibc.
        Now, I have a static binary that runs in alpine, Debian, and even bare
        containers.
        
        Since I have made the change, I have not had anyone open any issues
        saying they had problems running it on their machines. (Unlike when I
        was using AppImages, which caused  much more trouble than I expected)
        
        [0] [1] look at distribute.sh and the makefile to see how I did it.
        
         [1] in a podman or docker container
        
        [2] -ldflags '-extldflags "-static"'
        
 (HTM)  [1]: https://github.com/mmulet/term.everything
       
          pansa2 wrote 1 day ago:
          > do the build process on alpine linux and […] statically link musl
          libc
          
          IIRC it used to be common to do builds on an old version of RHEL or
          CentOS and dynamically link an old version of glibc. Binaries would
          then work on newer systems because glibc is backwards compatible.
          
          Does anyone still use that approach?
       
            Xylakant wrote 1 day ago:
            If you need glibc for any kind of reason, that approach is still
            used. But that won’t save you if no glibc is available. And since
            the folks here want to produce a musl build anyways for alpine, the
            easier approach is to just go for musl all the way.
       
          imcritic wrote 1 day ago:
          What troubles did you have with AppImages?
       
            mmulet wrote 1 day ago:
            List of troubles: [1] [2]  (although this issue later gets
            sidetracked to a build issue) [3] [4]
            
 (HTM)      [1]: https://github.com/mmulet/term.everything/issues/28
 (HTM)      [2]: https://github.com/mmulet/term.everything/issues/18
 (HTM)      [3]: https://github.com/mmulet/term.everything/issues/14
 (HTM)      [4]: https://github.com/mmulet/term.everything/issues/7
       
          johnisgood wrote 1 day ago:
           I use `-ldflags '-extldflags "-static"'` as well.
          
          From the .go file, you just do `// #cgo LDFLAGS: -L. -lfoo`.
          
          You definitely do not need Alpine Linux for this. I have done this on
          Arch Linux. I believe I did not even need musl libc for this, but I
          potentially could have used it.
          
          I did not think I was doing something revolutionary!
          
          In fact, let me show you a snippet of my build script:
          
            # Build the Go project with the static library
            if go build -o $PROG_NAME -ldflags '-extldflags "-static"'; then
              echo "Go project built with static library linkage"
            else
              echo "Error: Failed to build the Go project with static library"
              exit 1
            fi
          
            # Check if the executable is statically linked
            if nm ./$PROG_NAME | grep -q "U "; then
              echo "Error: The generated executable is dynamically linked"
              exit 1
            else
              echo "Successfully built and verified static executable
          '$PROG_NAME'"
            fi
          
          And like I said, the .go file in question has this:
          
            // #cgo LDFLAGS: -L. -lfoo
          
          It works perfectly, and should work on any Linux distribution.
       
            mmulet wrote 1 day ago:
             I use alpine for this [1] reason, but I will admit that this is a
             premature optimization. I haven’t actually run into the problem
             myself.
            
            ——
            
            Your code is great, I do basically the same thing (great minds
            think alike!). The only thing I want to add is that cgo supports
            pkg-config directly [2] via
            
              // #cgo pkg-config: $lib
            
            So you don’t have to pass in linker flags manually. It’s
            incredibly convenient. [1]
            
 (HTM)      [1]: https://stackoverflow.com/questions/57476533/why-is-static...
 (HTM)      [2]: https://github.com/mmulet/term.everything/blob/def8c93a3db...
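             
             As a concrete, if contrived, example of that directive, assuming
             pkg-config and the zlib development headers are installed (zlib
             stands in for whatever C library you actually link):
             
               package main
               
               // #cgo pkg-config: zlib
               // #include <zlib.h>
               import "C"
               
               import "fmt"
               
               func main() {
                 // pkg-config supplies the include and linker flags for zlib.
                 fmt.Println("linked against zlib",
                   C.GoString(C.zlibVersion()))
               }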
       
          nly wrote 1 day ago:
          > and even bare containers.
          
             Strange, I thought the whole point of containers was to solve
             this problem.
       
            jdub wrote 1 day ago:
            Depends how much you care about the size and security footprint of
            your container images.
       
          nickcw wrote 1 day ago:
          That is a nice approach. I'll have to give that a try with rclone. I
           tried lots of things in the past, but not using Alpine, which is a
           great idea.
          
           Another alternative is [1]. You can use this to dynamically load
           shared objects / DLLs, so in the OP's example they could disable
           systemd support if the systemd shared object did not load.
          
          This technique is used in the cgofuse library ( [2] ) rclone uses
          which means rclone can run even if you don't have libfuse/winfsp
          installed. However the rclone mount subcommand won't work.
          
          The purego lib generalizes this idea. I haven't got round to trying
          this yet but it looks very promising.
          
 (HTM)    [1]: https://github.com/ebitengine/purego
 (HTM)    [2]: https://github.com/winfsp/cgofuse
       
            kokada wrote 1 day ago:
            I am using purego indirectly in two pet projects of mine. While it
            has its own issues it definitely solves the issue of
            cross-compilation.
            
            In this particular case it may be that they will need to write a
            wrapper to abstract differences between the systemd C API if it is
            not stable, but at least they still can compile a binary from macOS
            to Linux without issues.
            
             The other option, as others said, is to use journalctl and just
             parse the JSON format. Very likely this would be way more stable,
             but I am not sure if it is performant enough.
       
          tasuki wrote 1 day ago:
          Huh. Does term.everything just work, or are there some gotchas? This
          seems like it could be supremely useful!
       
            mmulet wrote 1 day ago:
            It works so far! No major gotchas that I know of yet. From the
            perspective of the apps, they are just talking to a normal Wayland
            compositor, so everything works as expected.
            Just try it for your workflow, and if you run into any problems
            just open an issue and I’ll fix it.
       
          apitman wrote 1 day ago:
          Note that you don't have to compile on an Alpine system to achieve
          this. These instructions should work on most distros:
          
 (HTM)    [1]: https://www.arp242.net/static-go.html
       
          jchw wrote 1 day ago:
          IMO this is the best approach, but it is worth noting that musl libc
          is not without its caveats. I'd say for most people it is best to
          tread carefully and make sure that differences between musl libc and
          glibc don't cause additional problems for the libraries you are
          linking to.
          
          There is a decent list of known functional differences on the musl
          libc wiki: [1] Overall, though, the vast majority of software works
          perfectly or near perfectly on musl libc, and that makes this a very
          compelling option indeed, especially since statically linking glibc
          is not supported and basically does not work. (And obviously, if
          you're already using library packages that are packaged for Alpine
          Linux in the first place, they will likely already have been tested
          on musl libc, and possibly even patched for better compatibility.)
          
 (HTM)    [1]: https://wiki.musl-libc.org/functional-differences-from-glibc...
       
          cxr wrote 1 day ago:
          I didn't see an explanation in the README that part of what the first
          GIF[1] shows is an effect created by video editing software (and not
          a screencapture that's just demonstrating the program actually
          running).  "Screen images simulated" are the words usually chosen to
          start off the disclaimers in fine print shown at the bottom of the
          screen when similar effects appear in commercials.  I think that it
          would make sense to adopt a similar explanation wrt the effect used
          for the GIF.
          
          1. < [1] >
          
 (HTM)    [1]: https://github.com/mmulet/term.everything/blob/main/resource...
       
            nebezb wrote 1 day ago:
            > “in commercials where such effects appear”
            
            Good thing this isn’t a commercial then.
       
            plufz wrote 1 day ago:
            Why would an open source project need to have any disclaimer? They
            are not selling anything.
       
              nofriend wrote 1 day ago:
              Because lying is wrong even when open source projects do it.
       
                cxr wrote 1 day ago:
                I don't feel that the person I responded to is lying or being
                intentionally deceptive.
       
                plufz wrote 1 day ago:
                I think it is a big stretch calling this visual effect lying.
                
                I don’t know if it is a cultural American thing or just
                difference in interpretation but I had no difficulty
                understanding that this was a visual effect. But in my country
                ads don’t come with disclaimers. Do you feel like these
                disclaimers are truly helpful?
       
        bitbasher wrote 1 day ago:
         Once you use CGO, portability is gone. Your binary is no longer
         statically compiled.
         
         This can happen subtly without you knowing it. If you use a function
         in the standard library that happens to call into a CGO function, you
         are no longer static.
         
         This happens with things like os.UserHomeDir or some networking
         things like DNS lookups.
         
         You can "force" Go to do static compiling by disabling CGO, but that
         means you can't use _any_ CGO. Which may not work if you require it
         for certain things like sqlite.
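         
         For the DNS case specifically, the pure Go resolver can also be
         requested explicitly (or via GODEBUG=netdns=go); a minimal sketch:
         
           package main
           
           import (
             "context"
             "fmt"
             "log"
             "net"
           )
           
           func main() {
             // PreferGo asks for Go's built-in resolver instead of the
             // cgo/libc one.
             r := &net.Resolver{PreferGo: true}
             addrs, err := r.LookupHost(context.Background(), "example.com")
             if err != nil {
               log.Fatal(err)
             }
             fmt.Println(addrs)
           }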
       
          hiAndrewQuinn wrote 1 day ago:
          You don't need CGO for SQLite in most cases; I did a deep dive into
          it here.
          
 (HTM)    [1]: https://til.andrew-quinn.me/posts/you-don-t-need-cgo-to-use-...
       
          PunchyHamster wrote 1 day ago:
          > Which may not work if you require it for certain things like
          sqlite.
          
           there is a cgo-less sqlite implementation [1], though it seems not
           to be maintained much
          
 (HTM)    [1]: https://github.com/glebarez/go-sqlite
       
            IceWreck wrote 1 day ago:
            You're linking to a different version - this is the one that most
            people use
            
 (HTM)      [1]: https://github.com/modernc-org/sqlite
       
              debugnik wrote 1 day ago:
              Yes and no, the package above is a popular `database/sql` driver
              for the same SQLite port you linked.
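               
               Usage-wise it is a drop-in database/sql driver, so a
               CGO_ENABLED=0 build works; a minimal sketch, with the driver
               name as documented by modernc.org/sqlite:
               
                 package main
                 
                 import (
                   "database/sql"
                   "log"
                   
                   // Pure-Go SQLite port; registers the "sqlite" driver.
                   _ "modernc.org/sqlite"
                 )
                 
                 func main() {
                   db, err := sql.Open("sqlite", "app.db")
                   if err != nil {
                     log.Fatal(err)
                   }
                   defer db.Close()
                   if _, err := db.Exec(
                     "CREATE TABLE IF NOT EXISTS logs (msg TEXT)"); err != nil {
                     log.Fatal(err)
                   }
                   log.Println("sqlite without cgo works")
                 }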
       
          silverwind wrote 1 day ago:
          > This happens with things like os.UserHomeDir or some networking
          things like DNS lookups.
          
          The docs do not mention this CGO dependency, are you sure?
          
 (HTM)    [1]: https://pkg.go.dev/os#UserHomeDir
       
            purpleidea wrote 1 day ago:
            I was surprised too, that I had to check the docs, so I assume the
            user was misinformed.
       
              bitbasher wrote 1 day ago:
               Perhaps I misremembered or things changed? For instance, using
               os/user results in a dynamically linked executable: [1] There
               are multiple standard library functions that do it... I recall
               some in "net" and some in "os".
              
 (HTM)        [1]: https://play.golang.com/p/7QsmcjJI4H5
       
                telotortium wrote 1 day ago:
                 os.UserHomeDir is specified to read the HOME environment
                 variable, so it doesn’t require cgo. os/user does, but only
                 to support NSS and LDAP, which are provided by libc. That’s
                 also why net requires cgo: for getaddrinfo using resolv.conf.
       
          ncruces wrote 1 day ago:
          There are at least a couple of ways to run SQLite without CGO.
       
            tptacek wrote 1 day ago:
            I think the standard answer here is modernc.org/sqlite.
       
              apitman wrote 1 day ago:
              Careful, you're responding to the author of a wasm-based
              alternative.
       
                ncruces wrote 1 day ago:
                No need to be careful. I won't bite. ;)
       
          swills wrote 1 day ago:
          You can definitely use CGO and still build statically, but you do
          need to set ldflags to include -static.
       
            tptacek wrote 1 day ago:
            You can even cross-compile doing that.
       
              swills wrote 1 day ago:
              Yes, indeed, I do.
       
        daviddever23box wrote 7 days ago:
        This is an (organizational) tooling problem, not a language problem -
        and is no less complicated when musl libc enters the discussion.
       
          laladrik wrote 7 days ago:
           The conclusion of the article says that it's not a language problem
           either, under the title "So, is Go the problem?" Or do you mean
           something else here?
       
            saghm wrote 1 day ago:
            Given that the title implies the opposite, I think it's a fair
            criticism. Pointing out clickbait might be tedious, but not more so
            than clickbait itself.
       
        necovek wrote 7 days ago:
        This seems to imply that Go's binaries are otherwise compatible with
        multiple platforms like amd64 and arm64, other than the issue with
        linking dynamic libraries.
        
        I suspect that's not true either even if it might be technically
        possible to achieve it through some trickery (and why not risc-v, and
        other architectures too?).
       
          cxr wrote 1 day ago:
          For a single binary that will actually run across both architectures,
          see < [1] >.
          
          Original discussion: < [2] >.
          
 (HTM)    [1]: https://cosmo.zip/
 (HTM)    [2]: https://news.ycombinator.com/item?id=24256883
       
          khazit wrote 7 days ago:
          Of course you still need one binary per CPU architecture. But when
             you rely on a dynamic link, you need to build on the same
             architecture as the target system. At that point cross-compiling
             stops being reliable.
       
            vbezhenar wrote 1 day ago:
               Is it some tooling issue? Why is it an issue to cross-compile
               programs with dynamic linking?
       
              cxr wrote 1 day ago:
              It's a tooling issue.  No one has done the work to make things
              work as smoothly as they could.
              
              Traditionally, cross-compilers generally didn't even work the way
              that the Zig and Go toolchains approach it—achieving
              cross-compilation could be expected to be a much more trying
               process. The Zig folks and the Go folks broke with tradition by
               choosing to architect their compilers more sensibly for the
               21st century, but the effects of the older convention remain.
       
              dekhn wrote 1 day ago:
              In general, cross compilers can do dynamic linking.
       
                spijdar wrote 1 day ago:
                In my experience, the cross-compiler will refuse to link
                against shared libraries that "don't exist", which they usually
                don't in a cross compiler setup (e.g. cross compiling an
                aarch64 application that uses SDL on a ppc64le host with
                ppc64le SDL libraries)
                
                The usual workaround, I think, is to use dlopen/dlsym from
                within the program. This is how the Nim language handles
                libraries in the general case: at compile time, C imports are
                converted into a block of dlopen/dl* calls, with compiler
                options for indicating some (or all) libraries should be passed
                to the linker instead, either for static or dynamic linking.
                
                Alternatively I think you could "trick" the linker with a stub
                library just containing the symbol names it wants, but never
                tried that.
       
                  1718627440 wrote 2 hours 54 min ago:
                   Well, you need to link against them, and you can't do that
                   when they don't exist. I don't understand the purpose of a
                   stub library; it is also only a file, and if you need to
                   provide that, you can just as well provide the real thing
                   right away.
       
                  dwattttt wrote 1 day ago:
                  You just need a compiler & linker that understand the target
                  + image format, and a sysroot for the target. I've cross
                  compiled from Linux x86 clang/lld to macOS arm64, all it took
                  was the target SDK & a couple of env vars.
                  
                  Clang knows C, lld knows macho, and the SDK knows the target
                  libraries.
       
            swills wrote 1 day ago:
            I happily and reliably cross build Go code that uses CGO and
            generate static binaries on amd64 for arm64.
       
            necovek wrote 6 days ago:
             I am complaining about the language (phrasing) used: a Python,
             TypeScript or Java program might be truly portable across
             architectures too.
             
             Since architectures are only brought up in relation to dynamic
             libraries, it implied Go is otherwise as portable as the above
             languages.
            
            With that out of the way, it seems like a small thing for the Go
            build system if it's already doing cross compilation (and thus has
            understanding of foreign architectures and executable formats). I
            am guessing it just hasn't been done and is not a big lift, so
            perhaps look into it yourself?
       
              arccy wrote 1 day ago:
              they're only portable if you don't count the architecture
              specific runtime that you need to somehow obtain...
              
               Go doesn't require dynamic linking for C; if you can figure out
               the right C compiler flags you can cross compile statically
               linked Go+C binaries as well.
       
       