[HN Gopher] BorgBackup 2 has no server-side append-only anymore
       ___________________________________________________________________
        
       BorgBackup 2 has no server-side append-only anymore
        
       Author : jaegerma
       Score  : 178 points
       Date   : 2025-06-07 18:39 UTC (1 day ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | LeoPanthera wrote:
       | Is that a big deal? You should probably be doing this with zfs
       | immutable snapshots anyway. Or equivalent feature for your
       | filesystem.
        
         | topato wrote:
         | I'm also completely confused why this was at the top of my
         | hacki, seems completely innocuous
        
           | ajb wrote:
            | Ideally a backup system should be implementable in such a
            | way that no credential on the machines being backed up
            | enables the deletion or modification of existing backups.
            | That's so that if your machines are hacked, a) the backups
            | can't be deleted or encrypted in a ransomware attack, and
            | b) if you can figure out when the first compromise
            | occurred, you know that the backup data from before that
            | date is not compromised.
           | 
           | I guess some people might have been relying on this feature
           | of borgbackup to implement that requirement
        
         | philsnow wrote:
         | The purpose of the append-only feature of borgbackup is to
         | prevent an attacker from being able to overwrite your existing
         | backups if they compromise the device being backed up.
         | 
         | Are you talking about using ZFS snapshots on the remote backup
         | target? Trying to solve the same problem with local snapshots
         | wouldn't work because the attack presumes that the device
         | that's sending the backups is compromised.
        
           | LeoPanthera wrote:
           | > Are you talking about using ZFS snapshots on the remote
           | backup target?
           | 
           | Yes.
        
         | homebrewer wrote:
          | There's not much sense in using these advanced backup tools
          | if you're already on ZFS, even if it's just on the backup
          | server; I would stick with something simpler. Their whole
          | point is reliable checksums, incremental backups,
          | deduplication, and snapshotting on top of a 'simple'
          | classical filesystem. Sound familiar to any ZFS user?
        
           | nijave wrote:
           | Dedupe is efficient in Borg. The target needs almost no RAM
        
           | globular-toast wrote:
           | Are there any good options for an off-site zfs backup server
           | besides a colo?
           | 
           | Would be interested to know what others have set up as I'm
           | not really happy with how I do it. I have zfs on my NAS
            | running locally. I back up to that from my PC via rsync
           | triggered by anacron daily. From my NAS I use rclone to send
           | encrypted backups to Backblaze.
           | 
           | I'd be happier with something more frequent from PC to NAS.
           | Syncthing maybe? Then just do zfs sync to some off site zfs
           | server.
        
             | gaadd33 wrote:
             | I think Rsync.net supports zfs send/receive
        
             | aeadio wrote:
             | Aside from rsync.net which was mentioned in a sibling
             | comment, there's also https://zfs.rent, or any VPS with
             | Linux or FreeBSD installed.
        
               | globular-toast wrote:
               | zfs.rent is in the wrong location and I can't see
               | anything about zfs send/receive support on rsync.net.
               | What kind of VPS product has multiple redundant disks
               | attached? Aren't they usually provided with virtual
               | storage?
        
               | Tharre wrote:
               | It's documented here:
               | https://www.rsync.net/products/zfsintro.html
               | 
               | Do note the 5 TiB minimum order for it though, it's not
               | something that's enabled on other accounts.
        
           | PunchyHamster wrote:
            | well, till lightning fries your server. Or you fat-finger a
            | command and fuck something up.
        
       | aborsy wrote:
       | Borg2 has been in beta testing for a very long time.
       | 
        | Does anyone know when it will come out of beta?
        
         | ThomasWaldmann wrote:
         | The usual answer: "when it is ready".
         | 
         | For low-latency storage (like file: and maybe ssh:) it already
         | works quite nicely, but there might be a lot to do still for
         | high-latency storage (like cloud stuff).
        
           | dawnerd wrote:
            | It's a shame because the current version has had bugs for a
            | while that v2 supposedly fixes.
        
             | ThomasWaldmann wrote:
             | Bugs?
             | 
             | I don't know about any show-stoppers in borg 1.x.
             | 
             | Design limitations?
             | 
              | Yes, there are some; that's why borg2 will be quite
              | different. But these are not easy or small changes.
              | 
              | Also, borg2 will be a breaking release (offering borg
              | transfer to copy existing archives from borg 1.x repos).
              | It is taking long because we try to put all the breaking
              | changes into borg2, so you won't have to transfer again
              | too soon after the borg2 release.
        
       | mrtesthah wrote:
       | FYI for those using restic, you can use rest-server to achieve a
       | server-side-enforced append-only setup. The purpose is to protect
       | against ransomware and other malicious client-side operations.
        
       | homebrewer wrote:
       | For anyone looking to migrate off borg because of this, append-
       | only is available in restic, but only with the rest-server
       | backend:
       | 
       | https://github.com/restic/restic
       | 
       | https://github.com/restic/rest-server
       | 
        | which has to be started with --append-only. I use this systemd
        | unit:
        | 
        |     [Unit]
        |     After=network-online.target
        | 
        |     [Install]
        |     WantedBy=multi-user.target
        | 
        |     [Service]
        |     ExecStart=/usr/local/bin/rest-server --path /mnt/backups --append-only --private-repos
        |     WorkingDirectory=/mnt/backups
        |     User=restic
        |     Restart=on-failure
        |     ProtectSystem=strict
        |     ReadWritePaths=/mnt/backups
       | 
       | I also use nginx with HTTPS + HTTP authentication in front of it,
       | with a separate username/password combination for each server.
       | This makes rest-server completely inaccessible to the rest of the
       | internet and you don't have to trust it to be properly protected
       | against being hammered by malicious traffic.
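        | 
        | On the client side, restic then just points at that HTTPS
        | endpoint; roughly like this (hostname, username and password
        | are placeholders for whatever nginx authenticates):
        | 
        |     restic -r rest:https://myhost:PASSWORD@backup.example.com/myhost/ init
        |     restic -r rest:https://myhost:PASSWORD@backup.example.com/myhost/ backup /home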
       | 
       | Been using this for about five years, it saved my bacon a few
       | times, no problems so far.
        
         | champtar wrote:
         | If you want to use some object storage instead of local disk,
         | rclone can be a restic server:
         | https://rclone.org/commands/rclone_serve_restic/
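          | 
          | Roughly like this (remote and bucket names are placeholders):
          | 
          |     rclone serve restic --append-only --addr 127.0.0.1:8000 s3remote:backups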
        
         | rsync wrote:
          | You can achieve append-only without _exposing_ a rest server
          | provided that 'rclone' can be called on the remote end:
          | 
          |     rclone serve restic --stdio
          | 
          | You add something like this to ~/.ssh/authorized_keys:
          | 
          |     restrict,command="rclone serve restic --stdio --append-only backups/my-restic-repo" ssh-rsa ...
          | 
          | ... and then run a command like this:
          | 
          |     ssh user@rsync.net rclone serve restic --stdio ...
         | 
         | We just started deploying this on rsync.net servers - which is
         | to say, we maintain an arguments allowlist for every binary you
         | can execute here and we never allowed 'rclone serve' ... but
         | now we do, IFF it is accompanied by --stdio.
        
           | zacwest wrote:
            | You then use `restic` telling it to use rclone like...
            | 
            |     restic ... --option=rclone.program="ssh -i <identity> user@host" --repo=rclone:
            | 
            | which has it use the rclone backend over ssh.
           | 
           | I've been doing this on rsync.net since at least February;
           | works great!
        
         | snickerdoodle12 wrote:
         | I use restic+rclone+b2 with an api key that can't hard delete
         | files. This gives me dirt-cheap effectively append-only object
         | storage with automatic deletion of soft deleted backups after X
         | days.
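          | 
          | Roughly (assuming an rclone remote named "b2" configured
          | with a B2 application key that lacks the deleteFiles
          | capability; names are placeholders):
          | 
          |     restic -r rclone:b2:my-bucket/restic backup ~/data
          | 
          | The "delete after X days" part is a lifecycle rule on the B2
          | bucket rather than anything restic does.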
        
           | fl0id wrote:
            | Which is exactly what the borg devs suggest in their issue.
        
         | cvalka wrote:
         | Use rustic instead of restic!
        
           | Too wrote:
           | Care to explain more?
        
           | champtar wrote:
           | https://github.com/rustic-rs/rustic?tab=readme-ov-
           | file#stabi...
           | 
           | rustic currently is in beta state and misses regression
           | tests. It is not recommended to use it for production
           | backups, yet.
        
         | twhb wrote:
         | restic's rest-server append-only mode unfortunately doesn't
         | prevent data deletion under normal usage. More here: https://re
         | stic.readthedocs.io/en/stable/060_forget.html#secu.... Their
         | workaround is pretty weak, in my opinion: a compromised client
         | can still delete all your historic backups, and you're on a
         | tight timeline to notice and fix it before they can delete the
         | rest of your backups, too.
        
         | mike-cardwell wrote:
          | You say "only with the rest-server backend" but you can do it
          | with a simple Nginx backend too
          | https://www.grepular.com/Nginx_Restic_Backend - The rest-server
          | part is redundant
        
         | JeremyNT wrote:
         | I'm curious if there is any reason to use Borg these days.
         | 
         | I had the impression that in the beginning Borg started as a
         | fork of Restic to add missing features, but Restic was the more
         | mature project.
         | 
         | Is there still anything Borg has that Restic lacks?
        
           | remram wrote:
           | My number one problem with Restic is the memory usage. On
           | some of my workloads, Restic consumes _dozens of gigabytes_
           | of memory during backup.
           | 
           | I am very much in the market for a replacement (looking at
           | Rustic for example).
        
             | nadir_ishiguro wrote:
              | That's very interesting. I've never noticed anything like
              | that. What kind of workloads are you seeing this with?
        
           | lutoma wrote:
           | Borg is a fork of Attic, not restic. Restic is also written
           | in Go while Attic/Borg is in Python.
           | 
           | For me the reason to use Borg over Restic has always been
           | that it was _much_ faster due to using a server-side daemon
           | that could filter/compress things. The downside being you
           | can't use something like S3 as storage (but services like
           | Borgbase or Hetzner Storage Boxes support Borg).
           | 
           | That's probably changed with the server backend, but with the
           | same downside.
        
             | KingOfCoders wrote:
             | We used borg with the very nice people at rsync.net in two
             | startups.
        
         | nine_k wrote:
         | While at it, what do you think about Kopia [1]? It seems to use
         | architectural decisions similar to Restic and Borg, but appears
         | to be much faster in certain cases by exploiting parallel
         | access. It's v0.20 though.
         | 
         | [1]: https://kopia.io/docs/
        
       | dblitt wrote:
       | It seems the suggested solution is to use server credentials that
       | lack delete permissions (and use credentials that have delete for
       | compacting the repo), but does that protect against a compromised
       | client simply overriding files without deleting them?
        
         | qeternity wrote:
         | Append-only would imply yes. There is no overwriting in append-
         | only. There is only truncate and append.
        
           | mosselman wrote:
           | You have misread I think.
           | 
           | There used to be append-only, they've removed it and suggest
           | using a credential that has no 'delete' permission. The
           | question asked here is whether this would protect against
           | data being overwritten instead of deleted.
        
             | ThomasWaldmann wrote:
             | Yes, it also disallows overwriting.
        
         | throwaway984393 wrote:
         | No. Delete and overwrite are different. You need overwrite
         | protection in addition to delete protection. The solution will
         | vary depending on the storage system and the use case. (The
         | comment in the PR is not an exhaustive description of potential
         | solutions)
        
         | ThomasWaldmann wrote:
         | no-delete disallows any kind of deleting information, that
         | includes object deletion and object overwriting.
        
       | TheFreim wrote:
       | I've been using Borg for a while, I've been thinking about
       | looking at the backup utility space again to see what is out
       | there. What backup utilities do you all use and recommend?
        
         | TiredOfLife wrote:
         | Kopia
        
           | conception wrote:
            | Kopia is surprisingly good. I use it with a B2 backend, it
            | has percentage-based restore verification for regulatory
            | items, and it is super fast. Only downside is the lack of
            | enterprise features/centralized management.
        
         | Saris wrote:
         | Restic is nice. Backrest if you like a webUI.
        
         | singhrac wrote:
         | I spent too long looking into this and settled on restic. I'm
         | satisfied with the performance for our large repo and datasets,
         | though we'll probably supplement it with filesystem-based
         | backups at some point.
         | 
         | Borg has the issue that it is in limbo, i.e. all the new
         | features (including object storage support) are in Borg2, but
         | there's no clear date when that will be stable. I also did not
          | like that it was written in Python, because backups are not
          | always I/O bound (we have some very large directories, etc.).
         | 
         | I really liked borgmatic on Borg, but we found resticprofile
         | which is pretty much the same thing (it is underdiscussed).
          | As a tip: after some testing I think it is important to set
          | the GOGC and read-concurrency parameters. All the GUIs are
          | very ugly, but we're fine with a CLI.
         | 
         | If rustic matures enough and is worth a switch we might
         | consider it.
        
         | muppetman wrote:
         | restic
         | 
         | Single binary, well supported, dedup, compression, excellent
         | snapshots, can mount a backup to restore a single file easily
         | etc etc.
         | 
         | It's made my backups go from being a chore to being a joy.
        
           | rsync wrote:
           | ... also you can point restic at any old SFTP server ...
        
         | actuallyalys wrote:
         | I still use borg for local backups but use restic for all my
          | offsite backups. Off-hand I don't think restic lacks any
          | feature borg has (although there's probably at least one)
          | after they added compression a few years ago.
        
       | seymon wrote:
       | Borg vs Restic vs Kopia ?
       | 
       | They are so similar in features. How do they compare? Which to
       | choose?
        
         | aborsy wrote:
         | Restic is the winner. It talks directly to many backends, is a
         | static binary (so you can drop the executable in operating
         | systems which don't allow package installation like a NAS OS)
         | and has a clean CLI. Kopia is a bit newer and less tested.
         | 
         | All three have a lot of commands to work with repositories.
         | Each one of them is much better than closed source proprietary
         | backup software that I have dealt with, like Synology
         | hyperbackup nonsense.
         | 
         | If you want a better solution, the next level is ZFS.
        
           | seymon wrote:
           | I am already using zfs on my NAS where I want my backups to
           | be. But I didn't consider it for backups till now
        
             | aeadio wrote:
             | You can consider something like syncthing to get the
             | important files onto your NAS, and then use ZFS snapshots
             | and replication via syncoid/sanoid to do the actual backing
             | up.
        
               | aborsy wrote:
               | Or install ZFS also on end devices, and do ZFS
               | replication to NAS, which is what I do. I have ZFS on my
               | laptop, snapshot data every 30 minutes, and replicate
               | them. Those snapshots are very useful, as sometimes I
               | accidentally delete data.
               | 
                | With ZFS, the whole filesystem is replicated. The
                | backup will be consistent, which is not the case with
                | file-level backup. With the latter, you also have to
                | worry about lock files, permissions, etc. The restore
                | will be more natural and quick with ZFS.
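                | 
                | A minimal sketch of that workflow (dataset and host
                | names are placeholders):
                | 
                |     zfs snapshot tank/home@now
                |     zfs send -i tank/home@prev tank/home@now |
                |         ssh nas zfs recv -u backup/laptop/home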
        
               | fc417fc802 wrote:
               | I can't speak to zfs but I don't find btrfs snapshots to
               | be a viable replacement for borgbackup. To your
               | filesystem consistency point I snapshot, back the
               | snapshot up with borg, and then delete the snapshot. I
               | never run borg against a writable subvolume.
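                | 
                | Roughly (paths are placeholders, and this assumes
                | BORG_REPO is set):
                | 
                |     btrfs subvolume snapshot -r /home /snaps/home-tmp
                |     borg create ::home-{now} /snaps/home-tmp
                |     btrfs subvolume delete /snaps/home-tmp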
        
           | PunchyHamster wrote:
            | Kopia is VERY similar to Restic; the main difference is
            | Kopia having a half-decent UI vs Restic being a bit more
            | friendly for scripting
           | 
           | > If you want a better solution, the next level is ZFS.
           | 
           | Not a backup. Not a bad choice for storage for backup server
           | tho
        
             | jopsen wrote:
             | IMO the UI is a killer feature.
             | 
             | I don't need to configure and monitor cron jobs.
        
         | the_angry_angel wrote:
          | Kopia is awesome. With the exception of its retention
          | policies, which work like no other backup software that I've
          | experienced to date. I don't know if it's just my stupidity,
          | being stuck in 20-year-old thinking, or just the fact it's
          | different. But for me, it feels like a footgun.
         | 
         | The fact that Kopia has a UI is awesome for non-technical
         | users.
         | 
         | I migrated off restic due to memory usage, to Kopia. I am
         | currently debating switching back to restic purely because of
         | how retention works.
        
           | zargon wrote:
           | I'm confused. Is Kopia awesome or is it a footgun? (Or are
           | words missing?)
        
         | spiffytech wrote:
         | I picked Kopia when I needed something that worked on Windows
         | and came with a GUI.
         | 
         | I was setting up PCs for unsophisticated users who needed to be
         | able to do their own restores. Most OSS choices are only
         | appropriate for technical users, and some like Borg are *nix-
         | only.
        
         | noAnswer wrote:
          | I have used Borg for eight years and it has never let me down.
         | Including a full 8TB disaster restore. It's super resilient to
         | crashes.
         | 
         | When I tested Restic (eight years ago) it was super slow.
         | 
         | No opinion about Kopia, never heard of it.
        
           | liotier wrote:
           | Same here: my selection boiled down to Borg vs. Restic. I
            | started with Restic because my friends used it and, while
            | it was perfectly satisfactory functionally, I found it
            | unbearably slow with large backups. I changed to Borg and
            | I've been happy ever after!
        
         | herewulf wrote:
         | I don't know about the other two but restic seems to have a
         | very good author/maintainer. That is to say that he is very
         | active in fixing problems, etc..
        
       | jbverschoor wrote:
       | Moved to duplicacy. Works great for me
        
         | jbverschoor wrote:
         | Not to be confused with duplicati or duplicity
        
       | neilv wrote:
       | I used to have a BorgBackup server at home that used append-only
       | and restricted-SSH.
       | 
       | It wasn't perfect, but it did protect against some scenarios in
       | which a device could be majorly messed up, yet the server was
       | more resistant to losing the data.
       | 
        | For work, the backup schemes include separate additional
        | protection of the data server or media, so append-only on top
        | of that would be nice as redundant protection, but it isn't as
        | necessary.
        
       | nathants wrote:
       | Do something simpler. Backups shouldn't be complex.
       | 
       | This should be simpler still:
       | 
       | https://github.com/nathants/backup
        
         | yread wrote:
         | Uh, who has the money to store backups in AWS?!
        
           | nathants wrote:
           | Depends how big they are. My high value backups go into S3,
           | R2, and a local x3 disk mirror[1].
           | 
           | My low value backups go into a cheap usb hdd from Best Buy.
           | 
           | 1. https://github.com/nathants/mirror
        
           | seized wrote:
           | Glacier Deep Archive is the cheapest cloud backup option at
           | $1USD/month/TB.
           | 
              | Google Cloud Storage's Archive tier is a tiny bit more.
        
             | mananaysiempre wrote:
             | Both would be pretty expensive to actually restore from,
             | though, IIRC.
        
               | fc417fc802 wrote:
               | Quite expensive, but it should only ever be a last resort
               | after your local backups have all failed in some way or
               | another. For $1/mo/TB you purchase the opportunity to pay
               | an exorbitant amount to recover from an otherwise
               | catastrophic situation.
        
             | ikiris wrote:
              | To quote the old mongodb video: if you don't care about
              | restores, /dev/null is even cheaper, and it's webscale.
        
           | PunchyHamster wrote:
            | Support for S3 means you can just have a minio server
            | somewhere acting as backup storage (and minio is pretty
            | easy to replicate). I have local S3 on my NAS replicated
            | to a cheapo OVH server for backup
        
         | orsorna wrote:
         | Is this a joke?
         | 
          | I don't see what value this provides that rsync, tar and `aws
          | s3 cp` (or AWS SDK equivalent) don't already provide.
        
           | nathants wrote:
           | How do you version your rsync backups?
        
             | iforgotpassword wrote:
             | Dirvish
        
               | nathants wrote:
               | Perl still exists?
        
             | somat wrote:
              | I use rsync's --link-dest
              | 
              | abridged example:
              | 
              |     rsync --archive --link-dest 2025-06-06 \
              |         backup_role@backup_host:backup_path/ 2025-06-07/
             | 
              | Actual invocation is this huge hairy furball of an rsync
              | command that appears to use every single feature of rsync
              | as I worked on my backup script over the years.
              | 
              |     rsync_cmd = [
              |         '/usr/bin/rsync',
              |         '--archive',
              |         '--numeric-ids',
              |         '--owner',
              |         '--delete',
              |         '--delete-excluded',
              |         '--no-specials',
              |         '--no-devices',
              |         '--filter=merge backup/{backup_host}/filter.composed'.format(**rsync_params),
              |         '--link-dest={cwd}/backup/{backup_host}/current/{backup_path}'.format(**rsync_params),
              |         '--rsh=ssh -i {ssh_ident}'.format(**rsync_params),
              |         '--rsync-path={rsync_path}'.format(**rsync_params),
              |         '--log-file={cwd}/log/{backup_id}'.format(**rsync_params),
              |         '{remote_role}@{backup_host}:/{backup_path}'.format(**rsync_params),
              |         'backup/{backup_host}/work/{backup_path}'.format(**rsync_params),
              |     ]
        
               | nathants wrote:
               | This is cool. Do you always --link-dest to the last
               | directory, and that traverses links all the way back as
               | far as needed?
        
               | somat wrote:
                | Yes, this adds a couple of nice features: it is easy
                | to go back to any version using only normal filesystem
                | access, and because they are hard links it only uses
                | space for changed files, and you can cull old versions
                | without worrying about losing the backing store for
                | the diff.
                | 
                | I think it sort of works like Apple's Time Machine but
                | I have never used that product so... (shrugs)
                | 
                | Note that it is not, in the strictest sense, a very
                | good "backup", mainly because it is too "online". To
                | solve that I have a set of removable drives that I
                | rotate through, so with three drives, each ends up
                | with every third day.
        
               | DavideNL wrote:
               | Sounds like "rsnapshot" :
               | 
               | https://rsnapshot.org/
        
         | Too wrote:
         | Index of files stored in git pointing to a remote storage. That
         | sounds exactly like git LFS. Is there any significant
         | difference? In particular in terms of backups.
        
           | nathants wrote:
           | Definitely similar.
           | 
           | Git LFS is 50k loc, this is 891 loc. There are other
           | differences, but that is the main one.
           | 
           | I don't want a sophisticated backup system. I want one so
           | simple that it disappears into the background.
           | 
           | I want to never fear data loss or my ability to restore with
           | broken tools and a new computer while floating on a raft down
           | a river during a thunder storm. This is what we train for.
        
         | ajb wrote:
         | Cool, but looks like it's going to miss capabilities, so not
         | suitable for a full OS backup (see
         | https://github.com/python/cpython/issues/113293)
        
           | nathants wrote:
           | Interesting. I'm not trying to restore bootable systems, just
           | data. Still, probably worthwhile to rebuild in Go soon.
        
       | puffybuf wrote:
        | I've been using device mapper + encryption to back up my files
        | to an encrypted filesystem on regular files (cryptsetup on
        | Linux, vnconfig+bioctl on OpenBSD). Is there a reason for me to
        | use borgbackup? Maybe to save space?
        | 
        | I even wrote python scripts to automatically clean up and
        | unmount if something goes wrong (not enough space etc). On
        | OpenBSD I can even double-encrypt with blowfish (vnconfig -K)
        | and then a different algorithm for bioctl.
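        | 
        | (The Linux side of that is roughly, with sizes and paths as
        | placeholders:
        | 
        |     truncate -s 100G backup.img
        |     cryptsetup luksFormat backup.img
        |     cryptsetup open backup.img backupvol
        |     mkfs.ext4 /dev/mapper/backupvol
        |     mount /dev/mapper/backupvol /mnt/backup
        | 
        | and the OpenBSD equivalent goes through vnconfig + bioctl.)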
        
         | anyfoo wrote:
         | Does your solution do incremental backups at all? I have
         | backups going back years, because through incremental backups
         | each delta is not very large.
         | 
            | Every once in a while things get thinned out, so that for
            | example I have daily backups for the recent past, but only
            | monthly and then even yearly ones for further back.
        
           | hcartiaux wrote:
           | I maintain my incremental backups and handle the rotation
           | with a shell script (bontmia) based on rsync with `--link-
           | dest` (it creates hard links for unchanged files from the
           | last backup). I've been using this on top of
           | cryptsetup/luks/ext4 or xfs for > 10 years.
           | 
           | Bonus: the backups are readable without any specific tools,
           | you don't have to be able to reinstall a backup software to
           | restore files, which may or may not be difficult in 10 years.
           | 
           | This is the tool I use: https://github.com/hcartiaux/bontmia
           | 
           | It's forked from an old project which is not online anymore,
           | I've fixed a few bugs and cleaned the code over the years.
        
       | gausswho wrote:
       | My current approach is restic, but I'd prefer to have asymmetric
       | passwords, essentially the backup machine only having write
       | access (while maintaining deduplication). This way if the backup
       | machine were compromised, and therefore the password it needs to
       | write, the backup repo itself would still be secure since it
       | would use a different password for reading.
       | 
       | Is this what append-only achieved for Borg?
        
       | antoniomika wrote:
       | This has been replaced with a permissions feature that still
       | provides both delete and overwrite protections. The difference is
       | the underlying store needs to implement it rather than running a
       | server that understands the permission differences. You can read
       | more about this change here:
       | https://github.com/borgbackup/borg/issues/8823#issuecomment-...
        
         | bayindirh wrote:
         | This comment needs to be pinned, alongside what the developers
         | say [0] since the change is very misunderstood.
         | 
         | > The "no-delete" permission disallows deleting objects as well
         | as overwriting existing objects.
         | 
         | [0]:
         | https://github.com/borgbackup/borg/pull/8798#issuecomment-29...
        
           | zargon wrote:
           | Isn't this "no-delete permission" just a made-up mode for
           | testing the borg storage layer while simulating a lack of
           | permissions for deleting and overwriting? In actual
           | deployment, whatever backing store is used must have the
           | access control primitives to implement such a restriction. I
           | don't know how to do this on a posix filesystem, for example.
           | Gemini gave me a convoluted solution that requires the client
           | to change permissions after creating the files.
        
             | antoniomika wrote:
             | Currently, you can either provide the
             | `BORG_REPO_PERMISSIONS` env var to borg [0] or
             | `--permissions` flag to `borg serve` [1]. You can then
             | enforce this as part of your `authorized_keys` command, for
             | example.
             | 
             | [0] https://github.com/borgbackup/borg/blob/3cf8d7cf2f36246
             | ded75...
             | 
             | [1] https://github.com/borgbackup/borg/blob/3cf8d7cf2f36246
             | ded75...
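              | 
              | A rough sketch of what that could look like in
              | ~/.ssh/authorized_keys (untested; the key and comment are
              | placeholders):
              | 
              |     restrict,command="borg serve --permissions=no-delete" ssh-ed25519 AAAA... client1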
        
               | zargon wrote:
               | Ah, I was searching borgstore for no-delete, but it gets
               | exploded into itemized permissions in borg. Documentation
               | seems to be non-existent, as the only mention seems to be
               | the changelog where it suggests this only exists for
               | testing. But I suppose it's not released yet.
        
             | ThomasWaldmann wrote:
             | at first it was implemented to easily test permission
             | restricted storages (can't easily test on all sorts of
             | cloud storages).
             | 
             | it was implemented for "file:" (which is also used for
             | "ssh://" repos) and there are automated tests for how borg
             | behaves on such restricted permissions repos.
             | 
             | after the last beta I also added cli flags to "borg serve",
             | so it now also can be used via .ssh/authorized_keys more
             | easily.
             | 
             | so it can now also be used for practical applications, not
             | just for testing.
             | 
             | not for production yet though, borg2 is still in beta.
             | 
             | help with testing is very welcome though!
        
         | jaegerma wrote:
         | Thanks for that link. That issue somehow didn't come up when I
         | researched the removal of append-only. The only hint I had was
         | the vague "remove remainders of append-only and quota support"
         | in the change log without any further information.
        
         | formerly_proven wrote:
         | The old append-only mode was a hack that wasn't very useful in
         | practice anyway, because there were no tools to dissect changes
         | in a repository and the datastructures wouldn't support that
         | anyway.
         | 
         | Making e.g. snapshots on the backing storage was always the
         | better approach.
        
       | 3036e4 wrote:
       | I use rsync.net for borg backups. They create daily ZFS snapshots
       | that are read-only to the user, specifically for ransomware
       | protection.
       | 
       | But this was a good reminder I should probably figure out some
       | good way to monitor my borg repo for unintended changes. Having
       | snapshots to roll back to is only useful if a problem is detected
       | in time.
        
       | radarsat1 wrote:
        | I've been using btrbk with a local Linux machine I use as a
        | file server. It works well for incremental snapshot backups:
        | no need to "unthaw" an image, I can directly fetch files from
        | a previous snapshot. The only thing I haven't figured out with
        | btrfs is how to efficiently handle incremental backups to S3.
        | I guess there's not much choice but to use image diffs via
        | btrfs send, because you don't have hard/ref links. But I don't
        | like this, because then if I want to retrieve a file from some
        | version I'd have to have an extra 30 TB free to restore the
        | base image and progressively apply all the diffs up to the
        | point I want to retrieve, which seems a lot harder. So to make
        | this reasonable I'd have to make periodic non-incremental base
        | images, and it starts getting complicated.
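        | 
        | (The incremental-diff approach I mean is roughly this, with
        | paths and bucket names as placeholders:
        | 
        |     btrfs send -p /snaps/2025-06-07 /snaps/2025-06-08 |
        |         zstd | aws s3 cp - s3://bucket/snaps/2025-06-08.zst
        | 
        | and restoring means replaying the base image plus every diff
        | in order.)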
        
       | ThomasWaldmann wrote:
       | borgbackup developer here:
       | 
       | TL;DR: don't panic, all is good. :-)
       | 
       | Longer version:
       | 
       | - borg 1.x style "append-only" was removed, because it heavily
       | depended on how the 1.x storage worked (it was a transactional
       | log, always only appending PUT/DEL/COMMIT entries to segment
       | files - except when compacting segments [then it also deleted
       | segment files after appending their non-deleted entries to new
       | segments])
       | 
       | - borg 2 storage (based on borgstore) does not work like that
       | anymore (for good reasons), there is no "appending". thus "--
       | append-only" would be a misnomer.
       | 
       | - master branch (future borg 2 beta) has "borg serve
       | --permissions=..." (and BORG_PERMISSIONS env var) so one can
       | restrict permissions: "all", "no-delete", "write-only", "read-
       | only" offer more functionality than "append only" ever had. "no-
       | delete" disallows data deleting as well as data overwriting.
       | 
       | - restricting permissions in a store on a server requires
       | server/store side enforced permission control. "borg serve"
       | implements that (using the borgstore posixfs backend), but it
       | could be also implemented by configuring a different kind of
        | store accordingly (like some cloud storage). it's hard to test
        | that with all sorts of cloud storage providers though, so
        | implementing it in posixfs, which is much easier to test
        | automatically, was also a motivation to add the permissions
        | code.
       | 
       | Links:
       | 
       | - docs: https://github.com/borgbackup/borg/pull/8906/files
       | 
       | - code: https://github.com/borgbackup/borg/pull/8893/files
       | 
       | - code: https://github.com/borgbackup/borg/pull/8844/files
       | 
       | - code: https://github.com/borgbackup/borg/pull/8837/files
       | 
       | Please upvote, so people don't get confused.
        
         | nadir_ishiguro wrote:
         | Thank you for borg
        
       ___________________________________________________________________
       (page generated 2025-06-08 23:01 UTC)