[HN Gopher] Using Git Offline
       ___________________________________________________________________
        
       Using Git Offline
        
       Author : l_nk
       Score  : 153 points
       Date   : 2024-01-09 09:44 UTC (13 hours ago)
        
 (HTM) web link (www.gibbard.me)
 (TXT) w3m dump (www.gibbard.me)
        
        | zelphirkalt wrote:
        | I'm doing this for private notes that I don't want on a git
        | hoster. Of course, without network delay everything is super
        | snappy. You only need to make sure that you have backups, in
        | case one of your disks goes up in flames.
        
       | wdfx wrote:
       | And don't forget that each git clone can have multiple remotes.
       | 
       | So your working copy can be simultaneously linked to any or all
       | of GitHub, usb, local network, nas, etc.
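A minimal sketch of that setup, using throwaway temp dirs to stand in for a USB mount or NAS path (the remote names and the GitHub URL are arbitrary):

```shell
# One working copy, several remotes.
usb=$(mktemp -d)/usb.git
git init --bare "$usb"

cd "$(mktemp -d)"
git init work && cd work
git -c user.email=you@example.com -c user.name=You \
    commit --allow-empty -m "first commit"
git branch -M main

git remote add usb "$usb"                          # removable media
git remote add github git@github.com:me/proj.git   # hosted copy (hypothetical)

git push usb main    # works entirely offline
```

Each remote holds its own copy of the history; you push to whichever ones are reachable at the moment.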
        
         | cbm-vic-20 wrote:
         | Most users believe the 'd' in "git" (or "github") stands for
         | "distributed".
        
           | wdfx wrote:
           | The real problem is that even though being distributed is
           | cool and can help to ensure that your work exists in more
           | than one place either for reasons of collaboration or backup,
           | humans still like to cling to the idea of having a "single
           | source of truth".
           | 
           | So, if Alice collaborates by pull/pushing with Bob, and Bob
           | works by patches over email with Charlie, and David exchanges
           | with Charlie and Alice - then what is the "true" state of the
           | repository exactly?
           | 
           | The easy way out is that everyone pulls/pushes with github
           | and we use that as "truth".
        
              | Double_a_92 wrote:
              | Yeah, either there is a central "source of truth" or all
              | distributed copies have to always be kept in sync somehow.
              | 
              | I don't understand all those people who romanticize
              | "distributed" things but then just stop thinking about the
              | practical implications of that...
        
               | wdfx wrote:
               | > or all distributed copies have to always be kept in
               | sync somehow
               | 
               | This isn't actually true. The collaborators simply have
               | to ... collaborate. It depends on what the goal is with
               | the data being kept in git. Git doesn't tell you how to
               | collaborate.
               | 
               | It could be that the group elects one person (e.g. Alice)
               | to do releases, so, it stands that only that code which
               | reaches Alice will ever get released. If that doesn't
               | happen, you haven't collaborated correctly.
               | 
               | It could be that any one of the group could release, in
               | that case you collaborate to get your commits to any of
               | those people.
               | 
               | It could be that there are no releases ever made, and the
               | group loosely exchanges their branches to build whatever
               | interests them.
        
           | baz00 wrote:
           | Most of the people I work with can barely tell the difference
           | between git and github.
        
           | e12e wrote:
           | Looks like the "d" in "git" was upside down from the start...
        
         | globular-toast wrote:
         | How do people even use GitHub without knowing this? You have to
         | add your fork as a separate remote.
        
           | wdfx wrote:
           | Can you not also push back to your fork and GH guides you to
           | make a PR from your fork/branch to the upstream? IIRC, you
           | don't have to deal with the upstream locally?
        
             | kevincox wrote:
             | Yes. You only need to add the upstream repo if you want to
             | fetch the newest changes. So it is something that you will
             | probably only run into on your second PR.
             | 
             | I wouldn't be surprised if the vast majority of GitHub
              | users never send a second PR to a third-party project.
             | Probably the majority of users only contribute to their own
             | (or their company's own) repos. Then a small number send at
             | least one pull request to another user's repo. Fewer still
             | would send more than one in a way that requires pulling new
             | changes from upstream.
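That second-PR setup can be sketched with local stand-ins (the "upstream" dir plays the third-party repo, the clone plays your fork's working copy; all names are made up):

```shell
cd "$(mktemp -d)"
git init upstream
git -C upstream -c user.email=you@example.com -c user.name=You \
    commit --allow-empty -m "upstream history"
git -C upstream branch -M main

git clone upstream fork && cd fork
# On a second contribution, add the original repo to catch up:
git remote add upstream ../upstream
git fetch upstream
git rebase upstream/main
```

In real use the `upstream` remote would be the original project's URL rather than a sibling directory.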
        
       | dangus wrote:
       | It never occurred to me that the remote location could be a file
       | path!
        
          | dailykoder wrote:
          | I just learned that last year, too, when my last employer did
          | not have any versioning tool whatsoever, except Windows
          | network shares (yes, there are more than enough companies
          | that do it that way; it's the second time I encountered such
          | a horror scenario). So I just set up a bare git repository on
          | the network share and used it to keep my project there. Glad
          | they understood that git does make sense and set up a GitLab
          | server soon after.
          | 
          | After all it seems kind of obvious, but hey.
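The bare-repo-on-a-share pattern looks roughly like this; a temp dir stands in for the network share here, but any path both sides can reach works the same way:

```shell
share=$(mktemp -d)                       # stand-in for /mnt/share etc.
git init --bare "$share/project.git"     # one-time setup

# Developer A publishes:
cd "$(mktemp -d)"
git init a && cd a
git -c user.email=a@example.com -c user.name=A \
    commit --allow-empty -m "shared work"
git branch -M main
git push "$share/project.git" main
# Point the bare repo's HEAD at the pushed branch so clones check it out:
git -C "$share/project.git" symbolic-ref HEAD refs/heads/main

# Developer B clones the same path:
cd .. && git clone "$share/project.git" b
```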
        
         | lordgrenville wrote:
         | I learned that from a different article posted here last week
         | :)
         | https://jeremymikkola.com/posts/2017_07_15_move_commits_betw...
        
         | FactualActuals wrote:
         | I had to set up an entire government team's workflow that was
         | required to be isolated completely from any network. I had to
         | create multiple remotes that were filepaths that pointed to
         | specific USB drives. Depending on which USB drive was connected
         | to their laptop, a developer was able to push and pull any
         | changes to their codebase.
         | 
         | It felt unintuitive at the time but thinking back, this team
         | was able to produce code much faster than other teams that
         | didn't have a similar workflow set up.
        
         | OskarS wrote:
         | Another cool thing that I believe git does in these situations
         | is that it hard-links the blobs in the .git folder if you do a
         | local clone. It makes sense: the blobs are immutable and
         | content-addressed, no need to store two copies! Just have two
         | links to the same file system object, save a bunch of disk
         | space.
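The hard-linking can be observed directly (plain-path clones link objects by default; `--no-hardlinks` forces full copies):

```shell
cd "$(mktemp -d)"
git init src
git -C src -c user.email=you@example.com -c user.name=You \
    commit --allow-empty -m "first"
git clone src linked

# A loose object in src now has a link count of 2 or more,
# because the clone shares the same file system object:
obj=$(find src/.git/objects -type f | head -n1)
stat "$obj"
```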
        
        | fjfaase wrote:
        | It is also possible to exchange differences between locations
        | using patch files with the git commands 'format-patch' and
        | 'apply'. Patch files are usually a bit smaller and can also be
        | easily mailed.
        
         | vifon wrote:
         | A minor correction: it's usually preferable to apply commits
         | with `git am` instead of `git apply`, as it applies the commit
         | with all its metadata, not just the diff.
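A round trip with `format-patch` and `am` might look like this (repo paths are throwaway; in practice the patch files travel by mail or removable media):

```shell
cd "$(mktemp -d)"
git init a && cd a
echo hello > greeting.txt
git add greeting.txt
git -c user.email=dev@example.com -c user.name=Dev \
    commit -m "add greeting"
git format-patch -1 -o ../patches   # writes 0001-add-greeting.patch

cd .. && git init b && cd b
git -c user.email=dev@example.com -c user.name=Dev \
    commit --allow-empty -m "initial"
git -c user.email=dev@example.com -c user.name=Dev \
    am ../patches/*.patch           # applies with author, date, message
```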
        
         | palata wrote:
         | > Patch file are usually a bit smaller and can also be easily
         | mailed.
         | 
         | The best way to email patches is to use `git send-email`. So
         | that the receiver can directly apply it. It's easy to get it
         | wrong when done manually.
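A minimal sketch of the setup (all values are placeholders; `send-email` ships in a separate git-email package on some distros, and the final command needs a reachable SMTP server):

```shell
# One-time configuration:
git config --global sendemail.smtpServer smtp.example.com
git config --global sendemail.smtpUser you@example.com

# Send the last commit to a list; recipients apply it with `git am`:
git send-email --to=dev-list@example.com -1
```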
        
       | shoover wrote:
       | Local clones and bundles are cool. I used to use hg bundle for
       | one-off transport workflows back in the day.
       | 
        | An offline cheat sheet of those git commands comes in handy
        | occasionally, too.
        
        | dusted wrote:
        | I often use local bare repos for experimentation, and for
        | backup to other machines on my LAN, in case my main hard drive
        | goes dead or github decides to unexist.
        
       | palata wrote:
       | It's amazing how git is a distributed system, but everybody
       | chooses to use it in a centralized manner with PRs (mostly on
       | GitHub).
       | 
       | BTW GitHub is having issues today.
        
         | chaxor wrote:
         | It would be wonderful if it were distributed by default more
         | easily. For instance, an ipfs or torrent backend which
         | automatically provides a backbone of thousands of computers
         | with the repo on them as the remote, rather than just the
         | single github server.
         | 
         | Ipfs remote for git should be the default.
        
           | wepple wrote:
           | How reliable is IPFS for things like this?
           | 
           | If I wanted to use git on ipfs, I could host a node and push
           | all my git commits there, but I'd have to convince others to
            | host it as well, correct? So perhaps it would be awesome
            | for majorly popular projects, but smaller projects would
            | still end up with a SPOF, right?
        
             | chaxor wrote:
             | The idea is to have the daemon automatically start and
             | serve the git repo anytime a git clone is done, that way
             | the number of nodes hosting the repo (speed of download)
             | grows with the popularity of the repo. That way you could
             | saturate a 10G connection on a popular repo, whereas with
             | github you'll likely be pretty throttled or limited.
        
           | palata wrote:
           | It was designed around the email workflow, which is
           | distributed by default: you send your patches to a mailing
           | list, which is distributed between all the email servers and
           | clients. And it's easy to host mirrors, too.
           | 
           | GitHub and the PR workflow tend to make it all centralized.
        
             | chaxor wrote:
             | Sure, but the work of typing out 1000 people's IP addresses
             | as remotes and managing that sounds like a nightmare that
             | should be offloaded somehow. Adding "ipfs" to the remote
             | and having it managed by a system to push to thousands of
             | devices (however many have cloned the repo) is much more
             | concise and simple.
        
               | palata wrote:
               | > the work of typing out 1000 people's IP addresses as
               | remotes and managing that sounds like a nightmare
               | 
               | Not sure I get that. You do `git send-email
               | --to=<mailing-list-address>` and that's it. Everyone on
               | the mailing list gets a copy of your patch, that they can
               | apply if they want.
        
               | chaxor wrote:
               | Wow, perhaps I am unaware of some of the wonderful
               | capabilities of git!
               | 
               | So if everything on the entirety of github.com was
               | deleted, you could still do a git clone somehow and it
               | would pull it from everyone who has ever cloned that
               | repo?
               | 
               | Because that's what I'm referring to here - p2p not hub-
               | spoke architectures.
               | 
                | I'm fairly certain git doesn't inherently have this
                | feature, unless the backend remote could automatically
                | start an ipfs or torrent daemon and deal with torrent
                | or ipfs for pulling and pushing.
        
               | palata wrote:
               | I am talking about the collaboration process: sharing
               | patches with collaborators.
               | 
               | I think you are talking about distributing the git server
               | itself over p2p. Which I find less important because I
               | don't think that the server bandwidth is usually a
               | problem. Or is it?
        
         | Double_a_92 wrote:
         | How would that even work without merging all the relevant
         | changes at some point? If I want to use some open source
         | software, I don't want to connect to every single forked
         | repository to see if they might have local changes that I could
         | need. ... and then dealing with all sorts of merge conflicts.
        
           | palata wrote:
           | Of course, at some point you probably want an authoritative
           | main branch from which you make your releases (though there
           | can be forks, too).
           | 
           | I meant more in terms of development. The way git is used
           | most of the time is by having a main branch on GitHub, that
           | contributors clone, work on a feature branch, then make a PR,
           | have QA test it, and merge it. If that cycle is too slow,
           | many times the devs will start complaining because they are
           | "stuck until the branch is merged". Because the assumption is
           | that everybody branches from the main branch.
           | 
           | But git was designed around the email workflow. Instead of a
           | PR, you make a patch (or a group of patches), that you can
           | share with others. For instance, you can share a patch with a
           | colleague from your team, who will review it and incorporate
           | it in their branch before it gets merged into main. That way
           | they are testing it by using it. At some point your changes
           | get merged into the main branch, and your coworkers can just
           | apply their patches on top of it.
           | 
           | We can imagine a workflow with a hierarchy of maintainers:
           | devs at the lower level send patches to their supervisor, who
           | after a while sends them up to their supervisor, up to the
           | main branch.
           | 
           | PRs flatten all that. Devs typically never learn how to deal
           | with patches, and the review process ends up being "rebase on
           | main, run the CI, then I'll skim through your code, I'll
           | complain about some variable names and I'll approve without
           | even pulling your code" (I'm exaggerating a bit, of course).
           | 
           | I think there is a lot of value in learning the git email
           | workflow.
           | 
           | This post (not mine) is about this: https://web.archive.org/w
           | eb/20170823054920/https://dpc.pw/bl...
           | 
           | And of course SourceHut (https://sourcehut.org/, not mine) is
           | a great forge built around the email workflow.
        
             | avgcorrection wrote:
             | > I meant more in terms of development. The way git is used
             | most of the time is by having a main branch on GitHub, that
             | contributors clone, work on a feature branch, then make a
             | PR, have QA test it, and merge it. If that cycle is too
             | slow, many times the devs will start complaining because
             | they are "stuck until the branch is merged". Because the
             | assumption is that everybody branches from the main branch.
             | 
             | That my VCS is distributed is a goes-without-saying at this
             | point. Nothing else is good enough. But a completely
             | different dimension is integration. And I want to be as
             | integrated as possible. And for everyone else as well. I
             | don't want three different cliques working on disparate
             | things. And I warn against such things at work as well
             | (i.e., hey let's make a branch for this project which 3/9
             | of us is going to work on for a month...).
             | 
             | > But git was designed around the email workflow. Instead
             | of a PR, you make a patch (or a group of patches), that you
             | can share with others. For instance, you can share a patch
             | with a colleague from your team, who will review it and
             | incorporate it in their branch before it gets merged into
             | main. That way they are testing it by using it. At some
             | point your changes get merged into the main branch, and
             | your coworkers can just apply their patches on top of it.
             | 
             | This certainly has merit. I mean peer-to-peer integration
             | branching. But you can do the same with forges and
             | branches. In fact it's more difficult to keep track of
             | patches via email, i.e. what comes from where, have I
             | included this already, etc. Just consider the conversations
             | that seem to keep coming up on how to deal with identifying
             | patches.[1]
             | 
             | > We can imagine a workflow with a hierarchy of
             | maintainers: devs at the lower level send patches to their
             | supervisor, who after a while sends them up to their
             | supervisor, up to the main branch.
             | 
             | I mean you should expect Whatever The Law about the org
             | hierarchy being reflected in processes, not vice versa.
             | Apparently there isn't a process for this command hierarchy
             | to be reflected in.
             | 
             | > PRs flatten all that. Devs typically never learn how to
             | deal with patches, and the review process ends up being
             | "rebase on main, run the CI, then I'll skim through your
             | code, I'll complain about some variable names and I'll
             | approve without even pulling your code" (I'm exaggerating a
             | bit, of course).
             | 
             | Patches or not doesn't really change that. If one single
             | email thread becomes the focal point of a month-long
             | development "PR" then that's the same thing.
             | 
             | But yes. Getting out of that dang "fork" mindset is good.
             | You can be more loosey goosey via email since you just
             | inline your suggestions as patches (either with a commit
             | message or without).
             | 
             | [1] https://lore.kernel.org/git/bdbe9b7c1123f70c0b4325d778a
             | f1df8...
        
           | riddley wrote:
           | I may be wrong, but I think when Torvalds created it, he
           | envisioned an email based workflow using patch files.
           | 
           | In his use case, he is the single source of truth who decides
           | what gets merged.
        
              | melagonster wrote:
              | I remember he mentioned this in a speech: he can merge
              | changes from trusted people, and everyone has a personal
              | list of people they trust... so he doesn't need to check
              | everything by himself, and if anything goes wrong, they
              | can always go back to the last version.
        
           | jayd16 wrote:
           | You don't look for them. If you're the project owner, they
           | request that you pull. Instead of a website, it can be done
           | with email.
           | 
            | If you're simply using the software, you would pull from
            | the maintainer or whoever is hosting the main branch or
            | what have you.
        
         | colonwqbang wrote:
         | PRs are decentralised. Everyone has their own "fork" of the
         | repo, makes changes there, then tries to convince the
         | maintainer of the "main" repo to pull.
         | 
         | This is exactly what the decentralised model was meant to do.
         | It doesn't mean that every copy of the repo is equal in
         | importance, but they are equal in functionality.
         | 
         | (Or, maybe you mean that it's "centralised" in the sense that
         | it's all on github.com?)
        
           | palata wrote:
           | Well the git repo is decentralized (everyone has a copy of
           | it). PRs are not. All the PRs are living on one centralized
           | server (usually github.com), such that if that server is
           | down, nobody has access to the PR.
           | 
           | When you send your patch to a mailing list (with `git send-
           | email`), then that patch is immediately distributed to all
           | the mail servers. This is decentralized: everyone gets a copy
           | of your email (which is the equivalent of a PR in the email
           | workflow).
           | 
           | > Or, maybe you mean that it's "centralised" in the sense
           | that it's all on github.com?
           | 
           | That's the other thing: the PR workflow tends to naturally
           | push towards a monopoly. It is annoying to have to create an
           | account on every GitLab instance under the sun just to send a
           | PR. More and more, if your project is not on GitHub, people
           | will not bother to create an account for you.
           | 
           | With the email workflow, you don't need an account: you just
           | send your email to the mailing list (you don't even have to
           | subscribe to the mailing list) or to whoever you want to send
           | it. This makes it much easier to deal with different servers
           | (contributing to 100 projects on 100 different servers does
            | not make you create 100 accounts).
        
         | 8organicbits wrote:
          | I used a pull-request-over-email workflow at my first job
          | ~14 years ago. This is decentralized, as email is
          | decentralized. It works quite well, but I prefer the
          | centralized tools.
         | 
         | Linux development is the most public example of this, here's an
         | example from today:
         | 
         | https://lore.kernel.org/linux-block/20240109071332.2216253-1...
        
           | palata wrote:
           | Do you know SourceHut? https://sourcehut.org/
           | 
           | I would be really curious to have your opinion about it as
           | compared to how it was 14 years ago. I feel like SourceHut
           | does a really good job helping on the tooling side (it's
           | super easy to setup a mailing list).
           | 
           | Also I find Aerc very cool for that. The author describes it
           | here: https://drewdevault.com/2022/07/25/Code-review-with-
           | aerc.htm...
        
             | 8organicbits wrote:
             | I haven't used it, but I'm watching the progress. I really
             | like that you can collaborate even if you don't have an
             | account.
             | 
             | Unfortunately, all my projects require tools that claim to
             | be past "alpha" status, so I can't consider it yet.
        
       | classified wrote:
       | I never knew I can use a folder path as a remote in git. Very
       | handy.
        
       | heresie-dabord wrote:
       | git is foundational tooling, i.e. one of the tools that a
       | developer must know.
       | 
       | But git is useful beyond the development community. A huge impact
       | for a "weekend project".
       | 
       | "The development of Git began on 3 April 2005. Torvalds announced
       | the project on 6 April and became self-hosting the next day. The
       | first merge of multiple branches took place on 18 April. Torvalds
       | achieved his performance goals; on 29 April, the nascent Git was
       | benchmarked recording patches to the Linux kernel tree at a rate
       | of 6.7 patches per second. On 16 June, Git managed the kernel
       | 2.6.12 release." [1]
       | 
       | [1] _ https://en.wikipedia.org/wiki/Git#History
        
         | ubnt00 wrote:
         | I agree, it's very beneficial for non-devs. I use it to backup
         | important configs.
        
            | tranceylc wrote:
            | I even use git for my resume. It makes changing the
            | wording or rearranging things for a specific company
            | slightly easier on me.
        
           | JamesLeonis wrote:
            | I back up my Minecraft server with git, and use tags to
            | track when I updated the version.
        
         | never_inline wrote:
         | Git is foundational because it's a bunch of tools dealing with
         | a very general data structure (the DAG of file versions, or
         | whatever the semantically correct thing to say is).
         | 
         | Docker is something similarly powerful. It wraps around a few
         | things (bunch of kernel namespaces, kinda reproducible, layered
         | image format) and it is useful in many use-cases beyond
         | microservices.
         | 
         | There are few other tools which I can say the same about. `jq`
         | and curl are powerful and ubiquitous. But jq is a language, and
         | curl is a tool for interacting with so many protocols. I don't
         | know if I can put them in the same ballpark as git and docker.
        
           | flexagoon wrote:
           | Another one is nginx. It seems like any time I google
           | something http-related the answer is always "nginx".
        
       | chriswarbo wrote:
       | Cool, I didn't know about 'git bundle'; nice to have another tool
       | in my arsenal :)
       | 
       | I like to keep a bare copy of each repo locally, and use those as
       | remotes for my "working copies". The `git worktree` command can
       | be used in a similar way, but I feel safer using separate clones.
       | 
       | The article focuses on removable media (USB drives, CDs, etc.)
       | which make automation awkward. If your remotes are more reliable
       | (e.g. on the same machine, a LAN, or indeed the Internet) then
       | git hooks can be useful, e.g. to propagate changes. For example,
       | my local bare repos used hooks to (a) push to remotes on
       | chriswarbo.net, (b) push to backups on github, (c) generate
       | static HTML of the latest HEAD, and copy that to chriswarbo.net
       | and IPFS.
       | 
       | Since the article mentions bundles, a related feature is git's
       | built-in mail support. This can be used to convert commits into a
       | message, and apply a message as a patch. I've used this a lot to
        | e.g. move files from one project to another (say, helper
       | functions from an application to a library) in a way which
       | preserves their history (thanks to
       | https://stackoverflow.com/a/11426261/884682 )
        
         | TacticalCoder wrote:
         | And converting a Git repository to a bare Git repository is
         | super easy (and can be done at any time).
         | 
         | I keep my bare Git repositories on another machine on my LAN
         | and push/pull/fetch/whatever using SSH (typically from Emacs,
         | using Magit).
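The conversion is a one-liner; `git clone --bare` copies just the repository data, with no worktree:

```shell
cd "$(mktemp -d)"
git init work
git -C work -c user.email=you@example.com -c user.name=You \
    commit --allow-empty -m "history to publish"

git clone --bare work work.git                   # just the .git contents
git -C work.git rev-parse --is-bare-repository   # prints "true"
```

The resulting `work.git` can then be moved to another machine and used as a push/pull target over ssh.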
        
         | eichin wrote:
         | I'm a little surprised that they talk that much about removable
         | media without ever mentioning Git Annex...
        
           | kristjansson wrote:
           | Does git-annex bring anything if you're directly committing
           | everything to git?
        
       | cookiengineer wrote:
       | What's amazing is that you can also use git locally, without a
       | server, using "git daemon".
       | 
       | I've created a small little bash function for that, and then I
       | just pull from machine to machine (or IP to IP) directly without
       | needing to bother with the internet. Git is very useful this way
       | on hackathons or when there's not much internet bandwidth to
       | begin with.
       | 
        | git-serve() {
        |     git daemon --reuseaddr --verbose --base-path="$PWD" \
        |         --export-all --enable=receive-pack -- "$PWD/.git"
        | }
        
          | nofunsir wrote:
          | You don't need a daemon. You can push and pull to any url,
          | including file urls.
          | 
          | As long as you have access to the url (e.g. local file
          | permission, or network authentication with a samba server,
          | or a shared folder on another PC), you can use it as a
          | remote.
        
           | ramses0 wrote:
           | Including ssh, of course. `git clone
           | $USER@example.com:/tmp/blah.git`, and `git init --bare` (for
           | a non-checked-out, non-working-dir, "just the .git folder"
           | file location).
        
              | lloeki wrote:
              | Back when GitHub was in its infancy, git-web was the top
              | UI, our tracker was either Trac or Redmine, CI was a
              | Hudson/Jenkins hellscape, the word "cloud" referred to
              | water in the sky, reliable VMs were a distant dream, and
              | servers were not cheap, our team of three or four people
              | set up decentralised git over ssh.
             | 
             | Each developer workstation had a git user with ssh enabled
             | and restricted to some git invocation I can't recall, and
             | chgrp git / chmod g+rwS a conventional path, and remotes
             | named from team members. PRs were literally that: either
             | emails or one shouting to another over our desk that
             | someone could git pull from one's machine straight from
             | their (non-bare) repo.
             | 
             | The whole development process was entirely decentralised,
             | any one's machine was as worthy as the next one and there
             | was no single point of failure.
        
                | _ZeD_ wrote:
                | I would have used svn and a proper server.
                | 
                | And if you start bragging about "decentralization"...
                | well, there is a reason everyone now uses a gitHUB,
                | where there is a proper, centralized version online.
        
               | palata wrote:
               | > where there is a proper, centralized, version online
               | 
               | Linux has a proper, centralized, version online. Still
               | their collaboration workflow (they use the email
               | workflow) is decentralised.
               | 
               | I think everybody uses GitHub now because they don't know
               | the email workflow, and it's more convenient to have one
               | account on GitHub than having one account on every
               | instance of GitLab you contribute to. And I guess most
               | people only know the GitHub web interface and don't
               | really feel like using something else.
               | 
               | My point being that there are many reasons why everybody
               | is using GitHub, but it does not mean that the PR
               | workflow is better. What do you think?
        
               | pmontra wrote:
               | All my current customers are using Bitbucket, but it's
               | the same.
               | 
               | They may look like centralized repositories but they are
               | not. We have our own local repositories, sometimes even
               | more than one for the same project, and we use the GitHub
               | or Bitbucket one only to sync some branches between
               | developers.
               | 
               | It's not what used to be with centralized systems. The
               | only copy of the repository was on the server. Locally
               | developers only had files sometimes with a global lock
               | such that nobody could work on the same file at the same
               | time.
               | 
               | That's the real advantage of having a system like git in
               | combination with a server like GitHub.
               | 
               | The PRs are nice but not everybody uses them.
               | Sometimes it's only merge and push.
        
               | palata wrote:
               | > That's the real advantage of having a system like git
               | in combination with a server like GitHub.
               | 
               | I would say that it is the real advantage of having a
               | system like git, period. Right?
        
               | chrisfinazzo wrote:
               | One other distinction I would make between this and
               | the email-centric workflow: (almost) every mailing
               | list I know of these days includes a publicly
               | accessible URL with an index of messages.
        
               | palata wrote:
               | Not sure I get that. Are you saying that they lack an
               | index of messages?
        
               | Jenk wrote:
               | This reads like a chapter out of my own biography.
               | 
               | This experience started on a self-hosted SVN + Hudson
               | server (not a VM). That server was repurposed to run
               | ESX, which then hosted a VM of its former self. Which
               | felt a bit pointless.
               | 
               | Then we moved to git from svn but kept everything else
               | the same. Had groovy scripts coming out of our build and
               | deployment ears. _shudder_
        
             | nofunsir wrote:
             | yes, and it's worth mentioning that the next
             | evolutionary step after ssh was gitosis/gitolite, which
             | manage who has access to which repos behind a single
             | ssh entry point, by taking advantage of the fact that
             | multiple authorized keys can be mapped to one local ssh
             | user.
             | 
             | enter two companies to rip off and monetize
             | gitosis/gitolite, and eventually rewrite them into their
             | own service, and presto, everyone has forgotten that git is
             | both free and decentralized
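The mechanism described above relies on ssh forced commands: every user's public key lands in the single git account's `authorized_keys`, each line pinned to a command that identifies the key's owner. An illustrative fragment (not actual gitolite output; the shell path is hypothetical):

```
# ~git/.ssh/authorized_keys -- one line per user, all mapping to the
# same Unix account; the forced command tells the server who connected.
command="/usr/local/bin/gitolite-shell alice",no-port-forwarding,no-pty ssh-ed25519 AAAA... alice@laptop
command="/usr/local/bin/gitolite-shell bob",no-port-forwarding,no-pty ssh-ed25519 AAAA... bob@desktop
```

The wrapper shell then consults its own access rules to decide which repositories alice or bob may read or write.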
        
         | avgcorrection wrote:
         | > What's amazing is that you can also use git locally, without
         | a server, using "git daemon".
         | 
         | I don't think that's amazing. What was amazing to me was back
         | when I couldn't set up a Subversion repository without a
         | "server".
         | 
         | Someone asks on StackOverflow about why `git push` behaves a
         | certain way? The principle of Laziness dictates that you try to
         | reproduce the issue with two sibling directories where one is a
         | remote of the other one. Like they apparently do in one of
         | Git's integration tests:
         | 
         |     git remote add up ../mirror
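That two-sibling-directories setup takes seconds to reproduce. A minimal sketch (all names illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q --bare mirror          # plays the role of the remote
git init -q work
cd work
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m "initial"
git remote add up ../mirror        # a plain relative path is a valid remote URL
git push -q up HEAD:main
# ...now poke at whatever push/pull behaviour the question is about.
```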
        
           | tempodox wrote:
           | You can run a Subversion repository on localhost with the
           | svnserve(1) command:
           | 
           | https://www.visualsvn.com/support/svnbook/ref/svnserve/re/
        
             | avgcorrection wrote:
             | Yes, another kind of server.
        
             | COMMENT___ wrote:
             | Or in serverless mode without any server at all. Just open
             | the repository via file://.
        
         | adastra22 wrote:
         | Sorry, I'm honestly not trying to be snarky, but what's amazing
         | here? `man git-daemon` displays:
         | 
         | > git-daemon - A really simple server for Git repositories
         | 
         | So you can "use git locally, without a server"... by using a
         | server?
         | 
         | Btw you don't need a daemon at all. You can just use file URLs.
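As a sketch of the file-URL point: both a plain path and an explicit file:// URL work as clone sources, with no daemon involved (paths illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q --bare origin.git

# A plain path works as a remote URL...
git clone -q origin.git by-path

# ...and so does an explicit file:// URL (which skips the local
# hardlink optimization and goes through the transport machinery).
git clone -q "file://$PWD/origin.git" by-url
```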
        
         | e12e wrote:
         | Fwiw I think fossil excels for this type of use-case.
        
         | I_complete_me wrote:
         | Ok, git_noobie here. What is git http-backend [1] for then?
         | 
         | [1] https://git-scm.com/docs/git-http-backend
         | 
         | I discovered this by tabbing git ?TAB in a zsh shell and going
         | through the alphabet.
         | 
         | FYI - no commands starting with e, j, k, o, w, x, y or z using
         | this method. Is there a definitive list of all git commands I
         | wonder?
        
           | eichin wrote:
           | If you want to use git through a web server, it's one way of
           | doing that. (For example, if you want to use some existing
           | http auth for your users.) You don't _particularly_ need to
           | do that - it's just an option; git over ssh works fine too,
           | as does git with file: local urls (or without "remotes" at
           | all.) Professionally, ssh auth (especially with
           | ControlPersist) has always been the better/faster option, but
           | that's more of a cultural thing.
           | 
           | (My hobby workflow is to just start writing in a directory,
           | and then after a little bit do a "git init"... and then once
           | I have enough to pick a name, I have a "git save-project
           | projectname" script that goes off and does a git init --bare
           | on a homelab server and git push --set-upstream so it's now a
           | remote. Just gradually escalating how "seriously" I'm
           | treating it and therefore how much plumbing I bother with.)
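A rough sketch of that kind of save-project helper. The function name, paths, and the use of a local directory standing in for the homelab ssh host are all illustrative:

```shell
set -e

# save-project: create a bare "remote" for the current repo and push
# to it with tracking set up. A real version would target an ssh URL
# such as homelab:/srv/git/$1.git; a local directory keeps this runnable.
save_project() {
  remote_base=${REMOTE_BASE:-$HOME/git-remotes}   # stand-in for the server
  mkdir -p "$remote_base"
  git init -q --bare "$remote_base/$1.git"
  git remote add origin "$remote_base/$1.git"
  git push -q --set-upstream origin "$(git branch --show-current)"
}

# Usage: start scrappy, name the project later.
export REMOTE_BASE=$(mktemp -d)
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name You
echo notes > notes.txt
git add notes.txt
git commit -q -m "first thoughts"
save_project myproject
```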
        
           | Jenk wrote:
           | git help -a
           | 
           | lists all available commands.
           | 
           | It's important to note that the git binaries are (usually)
           | both client and server - by design, what with the whole
           | "decentralised" thing.
           | 
           | So the default package will let you run server functionality
           | such as the http-backend (which allows the machine you run
           | it on to serve repositories over the http(s):// scheme).
        
         | chrisfinazzo wrote:
         | How is this different if you work alone, on one machine, and
         | simply don't use a remote?
         | 
         | I realize this is a bastardization of the concept, but for
         | personal projects, it still sounds reasonable.
         | 
         |  _Reads further..._
         | 
         | Eh, I _guess_ USB drives count as separate machines in this
         | case ¯\_(ツ)_/¯
        
       | bazil376 wrote:
       | I had to use git bundle at a government contract job where they
       | took over a month to issue my hardware that was able to access
       | their GitHub repo. Pretty convenient actually (compared to
       | whatever the alternatives may be)
        
       | avgcorrection wrote:
       | There's also the "patch workflow" for when you have access to the
       | upstream repo but you (perhaps) don't have your own clone on the
       | Internet.
       | 
       | https://linux.die.net/man/7/gitworkflows
        
         | palata wrote:
         | I call this the "email workflow", as usually the natural way to
         | share the patches is over email :-)
        
       | globular-toast wrote:
       | Far too many people think of git as a tool to push/pull code from
       | a remote location. A glorified scp basically. Git is a
       | distributed version control system.
        
       | minroot wrote:
       | Is this viable as a file synchronization system?
        
         | dogleash wrote:
         | depends on your use case and requirements
         | 
         | it's a source control tool, the whole purpose is file
         | synchronization tailored to a specific use case
        
         | urda wrote:
         | git is not a great choice for large files, which is why git-lfs
         | is a thing too.
        
       | tonymet wrote:
       | git is great for non-centralized workflows. Sometimes your local
       | copy lacks internet or authentication credentials ...
       | 
       | * git-ssh to "deploy" to a remote repo that is not on the
       | internet or lacks authentication keys
       | 
       | * git remote with files (local) to push changes to a deployment
       | outside of the development directory -- e.g. to /etc , /bin or
       | another build location
       | 
       | * bare git repo for config, e.g. /etc or for dotfiles (~/).
       | Then git fetch to your proper repo for recording history.
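The bare-repo dotfiles trick in the last bullet is usually wired up with a small wrapper so that one git invocation carries both --git-dir and --work-tree. A self-contained sketch, using a temp directory in place of $HOME (names illustrative):

```shell
set -e
home=$(mktemp -d)        # stand-in for $HOME so the sketch is runnable

git init -q --bare "$home/.cfg"

# The usual pattern: an alias or function wrapping git with the right
# --git-dir and --work-tree, so the repo lives out of the way in .cfg.
cfg() { git --git-dir="$home/.cfg" --work-tree="$home" "$@"; }

cfg config user.email you@example.com
cfg config user.name You
cfg config status.showUntrackedFiles no   # don't list everything in $HOME

echo "set -o vi" > "$home/.profile"
cfg add "$home/.profile"
cfg commit -q -m "Track .profile"
```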
        
       | jpc0 wrote:
       | Seeing as it's come up a ton in this comment section...
       | 
       | https://git-scm.com/docs/gitworkflows
       | 
       | Also there...
       | 
       | https://git-scm.com/docs/giteveryday
        
       | velcrovan wrote:
       | Fossil (https://www.fossil-scm.org) is superior for this use case
       | in almost every way I can think of. It was in many ways designed
       | for this use case.
       | 
       | A fossil repository is a single-file SQLite database. You can
       | copy that single file to another computer and treat it like a
       | remote, sync with it, etc. with a simple "fossil sync" command.
       | The single file includes all the ticketing (issues), wiki,
       | discussion forum and all branches, and those are all synced as
       | well. There's no need to do any special packaging or bundling.
       | Plus you get a built in web UI.
        
       | nayuki wrote:
       | Great article. Also note that git-bundle can be used to manually
       | transfer a range of commits between two computers. Suppose the
       | sender's repository is at version 10 but the receiver is at
       | version 4. On the sender's side, you can request to create a
       | bundle of versions 5 through 10 and save that as a single file.
       | You can move the file to the receiver using whatever method you
       | choose. On the receiver side, you can essentially "git pull" that
       | set of patches. This technique has helped me in quite a few
       | environments.
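A runnable sketch of that sender/receiver exchange, with empty commits standing in for "versions" 1 through 6 (all names illustrative):

```shell
set -e
cd "$(mktemp -d)"

# Sender's repository at "version 3"...
git init -q sender
cd sender
git config user.email a@example.com
git config user.name Sender
for i in 1 2 3; do git commit -q --allow-empty -m "v$i"; done
branch=$(git branch --show-current)

# ...which the receiver clones before falling behind.
cd ..
git clone -q sender receiver

# Sender advances to "version 6", then bundles only the missing range.
cd sender
for i in 4 5 6; do git commit -q --allow-empty -m "v$i"; done
git bundle create ../delta.bundle "$branch~3..$branch"

# Transfer delta.bundle by USB, email, etc. The receiver can check that
# it has the bundle's prerequisites, then pull straight from the file.
cd ../receiver
git bundle verify ../delta.bundle
git pull -q ../delta.bundle "$branch"
```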
        
       ___________________________________________________________________
       (page generated 2024-01-09 23:01 UTC)