[HN Gopher] Timeshift: System Restore Tool for Linux
___________________________________________________________________
Timeshift: System Restore Tool for Linux
Author : gballan
Score : 327 points
Date : 2024-07-22 21:23 UTC (1 days ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| yuumei wrote:
| Has the btrfs subvolume quota bug been fixed yet? I always had
| issues when using it.
| sschueller wrote:
| I don't know, but Synology uses BTRFS now as well, and if
| something crucial like that was broken I don't think they would
| support it on a NAS.
| marcus0x62 wrote:
| Synology uses custom extensions to BTRFS for much of their
| functionality.
| metadat wrote:
| Can timeshift work with ext4 filesystems?
|
| I know it won't have the atomicity of a CoW fs, but I'd be fine
| with that, as the important files on my systems aren't often
| modified, especially during a backup - I'd configure it to
| disable the systemd timers while the backup process is running.
| gballan wrote:
| Just getting started with it--but I think so, using rsync.
| mbreese wrote:
| Can't you also snapshot LVM volumes directly? So if you have an
| LVM volume, it shouldn't matter what the filesystem is,
| provided it is sync'd... in theory.
|
| (I've only done this on VMs that could be paused before the
| snapshot, so YMMV.)
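|
| Roughly, the idea is something like this (a sketch; the volume
| group, LV and mount point names are just illustrative):
|
|     # freeze a point-in-time view of the LV, back it up, drop it
|     lvcreate --snapshot --size 5G --name root-snap /dev/vg0/root
|     mount -o ro /dev/vg0/root-snap /mnt/snap
|     rsync -a /mnt/snap/ /backups/root/      # or any backup tool
|     umount /mnt/snap
|     lvremove --yes /dev/vg0/root-snap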
| nijave wrote:
| Yeah, you can take live snapshots with LVM. You can use wyng-
| backup to incrementally take and back them up somewhere
| outside LVM. This has been working pretty well for me to
| backup libvirt domains backed by LVs
| tamimio wrote:
| Yep, been using it for a while, including on ext4. You can have
| scheduled snapshots too. It has saved my arse a few times,
| especially when you install something that cannot be easily
| uninstalled, like Hyprland or similar.
| tombert wrote:
| This reminds me of the default behavior of NixOS. Whenever you
| make a change in the configuration for NixOS and rebuild it, it
| takes a snapshot of the system configurations and lets you
| restore after a reboot if you screw something up.
|
| Similarly, it doesn't do anything in regards to user files.
| choward wrote:
| I can't tell you the number of times I see a project and think
| to myself "NixOS already solves that problem but better."
| fallingsquirrel wrote:
| In fairness, this app supports snapshotting your home
| directory as well, and that's not solvable with Nix alone. In
| fact, I'm running NixOS and I've been meaning to set up
| Timeshift or Snapper for my homedir, but alas, I haven't
| found the time.
| __MatrixMan__ wrote:
| Is there something about your home directory that you'd
| want to back up that is not covered by invoking home
| manager as a nix module as part of nixos-rebuild?
|
| https://nix-community.github.io/home-
| manager/index.xhtml#sec...
|
| To me, it's better than a filesystem-backup because the
| things that make it into home manager tend to be exactly
| the things that I want to back up. The rest of it (e.g.
| screenshots, downloads) aren't something I'd want in a
| backup scheme anyhow.
| fallingsquirrel wrote:
| I want to keep snapshots of my work. I run nightly
| backups which have come in handy numerous times, but
| accessing the cloud storage is always slow, and sometimes
| I've even paid a few cents in bandwidth to download my
| own files. It would be a lot smoother if everything was
| local and I could grep through
| /.snapshots/<date>/<project>.
| SAI_Peregrinus wrote:
| Data (documents, pictures, source code, etc.) is not
| handled by home-manager. Backing up home.nix saves your
| config, but the data is just as if not more important.
| __MatrixMan__ wrote:
| Hmm, different strokes I guess. Maybe it's just that too
| much kubernetes has gone to my head, but I see files as
| ephemeral.
|
| Code and docs are in source control. My phone syncs
| images to PCloud when I take them. Anything I download is
| backed up... wherever I downloaded it from.
| SAI_Peregrinus wrote:
| Cloud sync != backup. Cloud sync won't help if you
| accidentally delete the file, backups will. Cloud sync
| won't help if you make an undesired edit, backups will.
| autoexecbat wrote:
| I've seen the configuration.nix file, it doesn't look like it
| captures specific versions. How does it handle snapshotting?
| somnic wrote:
| For managing your configuration.nix file itself you can
| just use whichever VCS you want, it's a text file that
| describes one system configuration and managing multiple
| versions and snapshots within that configuration file is
| out of scope.
|
| For the system itself, each time you run "nixos-rebuild
| switch" it builds a system out of your configuration.nix,
| including an activation script which sets environment
| variables and symlinks and stops and starts services and so
| on, adds this new system to the grub menu, and runs the
| activation script. It specifically _doesn't_ delete any of
| your old stuff from the nix store or grub menu, including
| all your older versions of packages, and your old
| activation scripts. So if your new system is borked you can
| just boot into a previous one.
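|
| In day-to-day terms it looks roughly like this (a sketch; the
| profile path is the standard one, the rest is just what I'd
| type):
|
|     # build the config and make it the new boot default
|     sudo nixos-rebuild switch
|
|     # see every generation still kept in the boot menu
|     sudo nix-env --list-generations \
|         --profile /nix/var/nix/profiles/system
|
|     # flip back to the previous generation without a reboot
|     sudo nixos-rebuild switch --rollback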
| alfalfasprout wrote:
| The problem, unfortunately, is that Nix often finds itself in
| a chicken and egg scenario where nixpkgs fails to provide a
| lot of important packages or has versions that are old(er).
| But for there to be more investment in adding more packages,
| etc. you need more people using the ecosystem.
| atlintots wrote:
| Luckily Nix is also an excellent build system, and does
| provide escape hatches here and there when you really need
| them (e.g. nix-ld).
| NoThisIsMe wrote:
| What are you talking about? Nixpkgs is one of the largest
| and most up-to-date distro package repos out there.
| arianvanp wrote:
| Nixpkgs is the largest _and_ most up to date package
| repository according to https://repology.org/
|
| I'm honestly curious what packages you have a problem with
| SAI_Peregrinus wrote:
| Proprietary package vendors often provide a .deb that
| assumes Ubuntu. Maybe also a .rpm for RedHat if you're
| lucky.
| tombert wrote:
| That's definitely true, but maybe I've just been lucky;
| pretty much every proprietary program I've wanted to
| install in NixOS _has_ been in Nixpkgs.
|
| Skype, Steam, and Lightworks are all directly available
| in the repos and seem to work fine as far as I can tell.
| I'm sure there are proprietary packages that don't work
| or aren't in the repo, but I haven't really encountered
| them.
| SAI_Peregrinus wrote:
| I've unfortunately encountered a few. TotalPhase's Data
| Center software for their USB protocol analyzers is my
| current annoyance, someday I'll figure out how to get it
| to work but thus far it's been easier to just dedicate a
| second laptop to it.
| pmarreck wrote:
| Imagine installing an entirely new window manager without
| issue, and then undoing it without issue.
|
| NixOS does that. And I'm pretty sure that no other flavor of
| Linux does. First time I realized I could just blithely "shop
| around window managers" simply by changing a couple of
| configuration lines, I was absolutely floored.
|
| NixOS is the first Linux distro that made me actually feel
| like I was free to enjoy and tinker with ALL of Linux at
| virtually no risk.
|
| There is nothing else like it. (Except Guix. But I digress.)
| tombert wrote:
| Completely agree; being able to transparently know what the
| system is going to do by just looking at a few lines of
| text is sort of game-changing. It's trivial to add and
| remove services, and you can be assured that you _actually_
| added and removed them, instead of just being "pretty
| sure" about it.
|
| Obviously this is just opinion (no need for someone to
| supply nuance) but from my perspective the NixOS model is
| so obviously the "correct" way of doing an OS that it
| really annoys me that it's not the standard for every
| operating system. Nix itself is an annoying configuration
| language, and there are some more arcane parts of config
| that could be smoothed over, but the _model_ is so
| obviously great that I 'm willing to put up with it. If
| nothing else, being able to trivially "temporarily" install
| a program with nix-shell is a game-changer to me; it
| changes the entire way of how I think about how to use a
| computer and I love it.
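|
| For example, something as simple as
|
|     # throwaway shell that has these tools on PATH
|     nix-shell -p ffmpeg imagemagick
|
| gives you the programs for the task at hand, and they're gone
| from your PATH again as soon as you exit the shell.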
|
| Flakes mostly solve my biggest complaint with NixOS, which
| was that it was kind of hard to add programs that weren't
| merged directly into the core nixpkgs repo.
| pmarreck wrote:
| > but from my perspective the NixOS model is so obviously
| the "correct" way of doing an OS that it really annoys me
| that it's not the standard for every operating system
|
| - Literally every person who's read the Nix paper and
| drank the kool-aid thinks this lol.
|
| I STILL don't completely understand every element of my
| nix config, but it's still quite usable. Adding software
| requires adding it to the large-ish config file, largely
| because I created overlay namespaces of
| "master.programname", "unstable.programname" and
| "stable.programname" (with the default being "unstable"
| in my case). Those would all ideally be moved out into
| 2 text files, 1 for system level (maybe called
| system_packages.txt) and one for a named user (perhaps
| called <username>_packages.txt), and if those could be
| imported somehow into the configuration.nix, I think that
| would make things a bit easier for end-users, at least
| initially.
|
| The commandline UI (even the newer `nix` one) could still
| use an overhaul IMHO. The original CL utils were CLEARLY
| aimed directly at Nix developers, and not so much at end-
| users...
|
| I've been working on my own wrapper to encapsulate the
| most common use-cases I need the underlying TUI for
| https://github.com/pmarreck/ixnay < and that's it so far.
| phoe-krk wrote:
| I'd like some sort of a comparison with Duplicity/Deja Dup that
| seems to be the default on Gnome/Cinnamon.
| fallingsquirrel wrote:
| Different categories of app. Duplicity is geared toward backing
| up files to a separate machine, and this tool snapshots your
| filesystem on the same machine.
| phoe-krk wrote:
| OK, thanks. I was confused because Time Machine is capable of
| backing up to a remote device.
| mkesper wrote:
| Is that usable nowadays? Last time I checked it was hellishly
| slow compared to borg.
| phoe-krk wrote:
| Usable enough for me. I don't mind since it's running in the
| background anyway.
| exe34 wrote:
| Oh, this brings back memories. I found a script that did this
| about 15 years ago. It kept three versions of backups using rsync
| and hard-links to avoid duplication.
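|
| The trick, roughly (a sketch; directory names are illustrative):
|
|     # rotate the snapshot directories, keeping three
|     rm -rf /backups/daily.2
|     mv /backups/daily.1 /backups/daily.2
|     mv /backups/daily.0 /backups/daily.1
|
|     # copy changed files; unchanged ones become hard links to the
|     # previous snapshot, so they take (almost) no extra space
|     rsync -a --delete --link-dest=/backups/daily.1 \
|         /home/ /backups/daily.0/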
| nijave wrote:
| https://rsnapshot.org/ ?
| exe34 wrote:
| > rsnapshot was originally based on an article called Easy
| Automated Snapshot-Style Backups with Linux and Rsync, by
| Mike Rubel.
|
| must have been this one :-D thanks for finding it!
| dmitrygr wrote:
| > similar to the System Restore feature in Windows and the Time
| Machine tool in Mac OS
|
| This makes no sense! System Restore is a useless wart that just
| wastes time making "restore points" at every app/driver install
| and can rarely (if ever) produce a working system when used to
| "restore" anything. It does not back up user data at all. Time
| Machine is a whole-system backup solution that seems to work
| quite well and does back up user data.
|
| To me the quoted statement might as well read "a tool similar to
| knitting needles (in hobby shops) and dremels (in machine shops)"
|
| Reading their description further, it seems like they are
| implementing something similar to Time Machine (within the
| confines of what Linux makes possible), and not at all like
| "System Restore". This seems sane as this implements something
| that is actually useful. They, sadly, seem to gloss over what the
| consequences are of using non-btrfs FS with this tool, only
| mentioning that btrfs is needed for byte-exact snapshots. They do
| not mention what sort of byte-inexactness ext4 users should
| expect...
| nijave wrote:
| I believe System Restore takes a registry backup and can
| recover from a bad driver install but it's been years since I
| used it last. I think just about anything System Restore does
| can be replicated by "just fixing it" in Safe Mode but I think
| System Restore is geared for less technical folks.
|
| Newer versions of Windows have File History to back up user data
| (I don't think they have an integrated system/file solution
| quite like Time Machine though).
|
| However it makes some sense to keep system/user data separate.
| You don't want to lose your doc edits because you happened to
| have a bad driver upgrade at the same time. Likewise, you don't
| want to roll your entire system back to get an old version of a
| doc.
|
| Time Machine is trivial to implement (without the UI) with disk
| snapshots (that's what it does--store disk snapshots to an
| external disk)
| twodave wrote:
| My main use of System Restore was to return to a "clean"
| install plus just the bare minimum installs I needed, back when
| Windows was more likely to atrophy over time. I agree it is
| mostly useless today.
| magicalhippo wrote:
| They're talking about the Volume Shadow Copy Service[1], which
| effectively provides snapshots[2] of the filesystem.
|
| Which files are part of a shadow copy is determined by the one
| creating a shadow copy, so it could include user data.
|
| You can view and access the files in a shadow copy using
| ShadowExplorer[3] if you don't have the pro versions.
|
| [1]: https://learn.microsoft.com/en-us/windows-
| server/storage/fil...
|
| [2]: https://learn.microsoft.com/en-us/windows/win32/vss/the-
| vss-...
|
| [3]: https://www.shadowexplorer.com/
| ThinkBeat wrote:
| A bit of a side note and a bit of old man reveal, it would be
| nifty to have the backup system write the snapshots to
| cd/dvd/bluray disk.
|
| I remember working in a company that had a robot WORM system. It
| would grab a disc, it would be processed, take it out, place it
| among the archives. If a restore was needed the robot would find
| the backup, and read off the data.
|
| I never worked directly on the system, and I seem to remember
| there was a window that the system could keep track of
| (naturally) but older disks were stored off site somewhere for
| however long that window was.
|
| (Everything was replicated to a fully 100% duplicate system
| geographically highly separated from the production system.)
| gballan wrote:
| AFAIK timeshift can use any mount. I tried a USB stick, but it
| was too slow. Now I'm experimenting with a partition on a
| second drive.
| umvi wrote:
| > Creates filesystem snapshots using rsync+hardlinks
|
| Sounds like it works similarly to git fork on GitHub? That is, if
| no files have changed, the snapshot doesn't take up any extra
| room?
| Izkata wrote:
| Directories and hardlinks take up space, just very little.
|
| It would make sense to hardlink a directory if everything in
| that tree was unchanged, but no filesystem will allow
| hardlinking a directory due to the risk of creating a loop
| (hardlinking to a parent directory), so directories are always
| created new and all files in the tree get their own hardlink.
|
| Apple's Time Machine was given an exception in their filesystem
| to allow it, since they have control over it and can ensure no
| such loops are created. So it doesn't pay that penalty of
| creating hardlinks for every single individual file every time.
| nurettin wrote:
| Timeshift saved my system so many times over the past 6-7 years.
| Botched upgrades, experimenting with desktop environments,
| destroying configuration defaults, it works and does what it says
| on the tin.
| tamimio wrote:
| Can't agree more with this, it does what it says!
| prmoustache wrote:
| How can you "botch" upgrades so many times?
|
| I may have had only one update that went wrong in 30 years of
| using Linux and that was just a bug introduced by a gfx driver
| in a new minor kernel version. I downgraded it and waited for
| the bug to be fixed upstream and that was it.
| nurettin wrote:
| bravo, I guess?
| e12e wrote:
| Hmm, this doesn't appear to be what I hoped it was:
|
| > Timeshift is similar to applications like rsnapshot, BackInTime
| and TimeVault but with different goals. It is designed to protect
| only system files and settings. User files such as documents,
| pictures and music are excluded.
|
| On the other hand, a quick search looking for "that zfs based
| time machine thing" did reveal a new (to me) project that looks
| very interesting:
|
| https://github.com/kimono-koans/httm
| tamimio wrote:
| You can include the user files too in the home directory. I
| have some snapshots that include them and some that do not, so
| you are covered both ways.
| OldMatey wrote:
| I adore Timeshift. It has made my time on Linux so much more
| trouble free.
|
| I have used Linux for 10+ years, but over the years I have spent hours,
| days and weeks trying to undo or fix little issues I introduce by
| tinkering around with things. Often I seem to break things at the
| worst times, right as I am starting to work on some new project
| or something that is time sensitive.
|
| Now, I can just roll back to an earlier stable version if I don't
| want to spend the time right then on troubleshooting.
|
| I've enabled this on all my family members' machines and teach
| them to just roll back when Linux goes funky.
| pmarreck wrote:
| While it's not quite average-user-friendly (YET), one of the
| reasons I switched to NixOS is because it provides this out-of-
| the-box. I was frustrated with every other Linux for the
| reasons you cite, but NixOS I can deal with, since 1) screwing
| up the integrity of a system install is hard to begin with, 2)
| if you DO manage to do it, you can reboot into any of N
| previous system updates (where you set N).
|
| Linux is simultaneously the most configurable and the most
| brittle OS IMHO. NixOS takes away all the brittleness and
| leaves all the configurability, with the caveat that you have
| to declaratively configure it using the Nix DSL.
| rrix2 wrote:
| NixOS also has out of the box support for zfs auto snapshots,
| where you can tell it to keep 3 months, four weeks, 24
| hourly, and frequent snapshots every fifteen minutes so you
| can time shift your home directory, too
| pmarreck wrote:
| I'm on ZFS on root and haven't set that up yet! I should.
| gooseyman wrote:
| I enabled this four months ago and I have had the same
| experience.
|
| It's not that I couldn't retype the config file I accidentally
| wrote over while tinkering, but I like the safety that comes
| with Timeshift to try and fail a few times.
|
| Hard lessons come hard. This softens those lessons a little
| while maintaining the learning.
| LorenDB wrote:
| I prefer using openSUSE, which is tightly integrated with
| snapper[0], making it simple to recover from a botched update.
| I've only ever had to use it when an update broke my graphics
| drivers, but when you need it, it's invaluable.
|
| Snapper on openSUSE is integrated with both zypper (package
| manager) and YaST (system configuration tool) [1], so you get
| automatic snapshots before and after destructive actions. Also,
| openSUSE defaults to btrfs, so the snapshots are filesystem-
| native.
|
| [0]: http://snapper.io/
|
| [1]: https://en.opensuse.org/Portal:Snapper
| Arnavion wrote:
| And it's also integrated into the bootloader (if you use one of
| the supported ones). The bootloader shows you one boot entry
| per snapshot so you can boot an old snapshot directly.
| jwrallie wrote:
| Very nice, sometimes people claim that the only difference
| between distros is the repository and package management
| tools.
|
| It is when the defaults make the parts integrate nicely like
| this that "the whole is greater than the sum of its parts"
| comes into play.
| Spunkie wrote:
| This is a feature I've really been missing since switching
| from grub to systemd-boot.
|
| Has anyone figured out an easy way to get this back with
| systemd-boot?
| Arnavion wrote:
| Some time ago they did add systemd-boot as a supported
| option and apparently it also generates one entry per
| snapshot.
|
| https://news.opensuse.org/2024/03/05/systemd-boot-
| integratio...
|
| https://en.opensuse.org/Systemd-
| boot#Installation_with_full_...
|
| https://github.com/openSUSE/sdbootutil
|
| I haven't tried it though so I don't know for sure. (I have
| my own custom systemd-boot setup that predates theirs, and
| since my setup uses signed UKIs and theirs doesn't, I don't
| care to switch to theirs. I can still switch snapshots
| manually with `btrfs subvol` anyway; it just might require
| a live CD in case the default snapshot doesn't boot.)
| Vogtinator wrote:
| I'm using Tumbleweed with btrfs snapshots, systemd-boot
| and transparent disk encryption (using TPM + measured
| boot), works fine.
|
| Currently this needs to be set up semi-manually (select
| some options in the installer, then run some commands
| after install), but it'll be automatic soon.
| boomboomsubban wrote:
| systemd-boot has relatively recently added support for
| loading filesystems, https://github.com/systemd/systemd/blo
| b/71e5a35a5be99a1f244d... meaning you should be able to set
| up something similar. I wouldn't describe it as "easy" yet.
| Barrin92 wrote:
| openSUSE honestly is so criminally underrated. I've been using
| Tumbleweed for a few years for my dev/work systems and YaST is
| just great. Also that they ship fully tested images for their
| rolling release is just so much saner. OBS is another fantastic
| tool that I see so few people talking about, despite software
| distribution still being such a sore point in the linux
| ecosystem.
| Rinzler89 wrote:
| _> openSUSE honestly is so criminally underrated_
|
| Because it's not very popular in the US which has mostly
| cemented around fedora/ubuntu/arch so you don't hear much
| about any other distros, and most other countries around the
| world tend to just adopt what they learn from the US, due to
| the massively influential gravitational field the US has on
| the tech field.
|
| But in the German-speaking world many know about it. It's a
| shame that despite the internet being relatively borderless
| it's still quite insular and divided. I'm not a native German
| speaker, but it helps to know it, since there's a lot of good
| Linux content out there that's written in German.
| whiztech wrote:
| I use btrfs-assistant with Kubuntu because I can't get
| Timeshift to work properly. It's basically some kind of front-
| end for snapper and btrfsmaintenance.
|
| [0]: https://gitlab.com/btrfs-assistant/btrfs-assistant
| abbbi wrote:
| For RHEL-based distributions you can do the same with LVM
| and the boom boot manager.
|
| https://github.com/snapshotmanager/boom-boot
| pixelmonkey wrote:
| I've probably spent way too much time thinking about Linux backup
| over the years. But thankfully, I found a setup that works really
| well for me in 2018 or so, used it for the last few years, and I
| wrote up a detailed blog post about it just a month ago:
|
| https://amontalenti.com/2024/06/19/backups-restic-rclone
|
| The tools I use on Linux for backup are restic + rclone, storing
| my restic repo on a speedy USB3 SSD. For offsite, I use rclone to
| incrementally upload the entire restic repository to Backblaze
| B2.
|
| The net effect: I have something akin to Time Machine (macOS) or
| Arq (macOS + Windows), but on my Linux laptop, without needing to
| use ZFS or btrfs everywhere.
|
| Using restic + some shell scripting, I get full support for de-
| duplicated, encrypted, snapshot-based backups across all my
| "simpler" source filesystems. Namely: across ext4, exFAT, and
| (occasionally) FAT32, which is where my data is usually stored.
| And pushing the whole restic repo offsite to cloud storage via
| rclone + Backblaze completes the "3-2-1" setup straightforwardly.
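|
| The day-to-day loop is roughly this (a sketch; the repo path,
| bucket name and retention policy are illustrative, not exactly
| what the post prescribes):
|
|     # encrypted, deduplicated snapshot into the local repo on the SSD
|     restic -r /mnt/ssd/restic-repo backup /home /etc
|
|     # apply a retention policy and free space from expired snapshots
|     restic -r /mnt/ssd/restic-repo forget \
|         --keep-daily 7 --keep-weekly 4 --prune
|
|     # push the whole repository to Backblaze B2, incrementally
|     rclone sync /mnt/ssd/restic-repo b2:my-bucket/restic-repo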
| tlavoie wrote:
| One question, why use rclone for the Backblaze B2 part? I use
| restic as well, configured with autorestic. One command backs
| up to the local SSD, local NAS, and B2.
| pixelmonkey wrote:
| I explain in the post. Here's a copypasta of the relevant
| paragraph:
|
| "My reasoning for splitting these two processes -- restic
| backup and rclone sync -- is that I run the local restic
| backup procedure more frequently than my offsite rclone sync
| cloud upload. So I'm OK with them being separate processes,
| and, what's more, rclone offers a different set of handy
| options for either optimizing (or intentionally throttling)
| the cloud-based uploads to Backblaze B2."
| tlavoie wrote:
| So you did! Sorry, hadn't read the post beforehand. Oh, and
| I too mourned the loss of CrashPlan. Being in Canada, I
| didn't have the option offered to have a restore drive sent
| if needed, but thought it was a brilliant idea. On the
| other hand, I think Backblaze might!
| ratorx wrote:
| One problem with file based backups is that they are not atomic
| across the filesystem. If you ever back up a database (or
| really any application that expects atomicity while it's
| running), then you might corrupt the database and lose data.
| This might not seem like a big problem, but can affect e.g.
| SQLite, which is quite popular as a file format.
|
| Then again, the likelihood that the backup will be inconsistent
| is fairly low for a desktop, so it's probably fine.
|
| I think the optimal solution is:
|
| 1) file system level atomic snapshot (ZFS, BTRFS etc)
|
| 2) Backup the snapshot at a file level (restic, borg etc)
|
| This way you get atomicity as well as a file-based backup which
| is redundant against filesystem-level corruption.
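|
| As a rough sketch with btrfs and restic (paths and the repo
| location are illustrative):
|
|     # 1) atomic, read-only snapshot of the live subvolume
|     btrfs subvolume snapshot -r / /.snapshots/backup-src
|
|     # 2) back up the frozen snapshot instead of the moving target
|     restic -r /mnt/backup/repo backup /.snapshots/backup-src
|
|     # drop the snapshot once the backup has finished
|     btrfs subvolume delete /.snapshots/backup-src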
| pixelmonkey wrote:
| I agree with you, of course. On macOS, Arq uses APFS
| snapshots, and on Windows, it uses VSS. It'd be nice to use
| something similar on Linux with restic.
|
| In my linked post above, I wrote about this:
|
| "You might think btrfs and zfs snapshots would let you create
| a snapshot of your filesystem and then backup that rather
| than your current live filesystem state. That's a good idea,
| but it's still an open issue on restic for something like
| this to be built-in (link). There's a proposal about how you
| could script it with ZFS in this nice article (link) on the
| snapshotting problem for backups."
|
| The post contains the links with further information.
|
| My imperfect personal workaround is to run the restic backup
| script from a virtual console (TTY) occasionally with my
| display server / login manager service stopped.
| vladvasiliu wrote:
| I run this from a ZFS snapshot. What I want backed up from
| my home dir lives on the same volume, so I don't have to
| launch restic multiple times. I have dedicated volumes for
| what I specifically want excluded from backups and ZFS
| snapshots (~/tmp, ~/Downloads, ~/.cache, etc).
|
| I've been thinking of somehow triggering restic by zrepl
| whenever it takes a snapshot, but I haven't figured a way
| of securely grabbing credentials for it to unlock the
| repository and to upload to s3 without requiring user
| intervention.
| magicalhippo wrote:
| Windows' Volume Shadow Copy Service[1] allows applications
| like databases to be informed[2] when a snapshot is about to
| be taken, so they can ensure their files are in a safe state.
| They also participate in the restore.
|
| While Linux is great at many things, backups is one area I
| find lacking compared to what I'm used to from Windows. There
| I take frequent incremental whole-disk backups. The backup
| program uses the Volume Shadow Copy Service to provide a
| consistent state (as much as possible). Being incremental
| they don't take much space.
|
| If my disk crashes I can be back up and running like (almost)
| nothing happened in less than an hour. Just swap out the disk
| and restore. I know, as I've had to do that twice.
|
| [1]: https://learn.microsoft.com/en-us/windows/win32/vss/the-
| vss-...
|
| [2]: https://learn.microsoft.com/en-
| us/windows/win32/vss/overview...
| lmz wrote:
| LVM snapshots are copy on write and can be used the same
| way.
| magicalhippo wrote:
| Any backup software that utilizes LVM in this way?
|
| Ie automatically creates a snapshot and sends the
| incremental changes since previous snapshot to a backup
| destination like a NAS or S3 blob storage.
| lmz wrote:
| I don't think the diffs are usable that way. They're
| actually more like an "undo log" in that the snapshot
| space is taken by "old blocks" when the actual volume is
| taking writes. It's useful for the same reasons as volume
| shadow copy: a consistent snapshot of the block device.
| (Also this can be very bad for write performance, as any
| writes are doubled - to the snapshot and to the real
| device.)
| magicalhippo wrote:
| Yeah ok, that makes sense. Write performance is a
| concern, but usually the backups run when there's little
| activity.
| _flux wrote:
| I think block-level snapshots would be very difficult to
| use this way.
|
| I just make full deduplicated backups from LVM snapshots
| with kopia, but I've set that up only on one system; on
| others I just use kopia as-is.
|
| It takes some time, but that's fine for me. Previous
| backup of 25 GB an hour ago took 20 minutes. I suppose if
| it only walked files it knew were changed it would be a
| lot faster.
| magicalhippo wrote:
| Thanks, sounds interesting. So you create a snapshot,
| then let kopia process that snapshot rather than the live
| filesystem, and then remove the snapshot?
|
| > I suppose if it only walked files it knew were changed
| it would be a lot faster.
|
| Right, for me I'd want to set it up to do the full disk,
| so could be millions of files and hundreds of GB. But
| this trick should work with other backup software, so
| perhaps it's a viable option.
| _flux wrote:
| Exactly so.
|
| Here's the script, should it be of benefit to someone,
| even if it of course needs to be modified:
| #!/bin/sh
| success=false
|
| # unmount everything and remove the snapshot LVs; || true so
| # it's safe to run even when nothing is mounted yet
| teardown() {
|     umount /mnt/backup/var/lib/docker || true
|     umount /mnt/backup/root/.cache || true
|     umount /mnt/backup/ || true
|     for lv in root docker-data; do
|         lvremove --yes /dev/hass-vg/$lv-snapshot || true
|     done
|     if [ "$1" != "no-exit" ]; then
|         $success
|         exit $?
|     fi
| }
|
| set -x
| set -e
| # clean leftovers from a previous run, and clean up again on exit
| teardown no-exit
| trap teardown EXIT
|
| # snapshot the LVs and mount the snapshots for backup
| for lv in root docker-data; do
|     lvcreate --snapshot -L 1G -n $lv-snapshot /dev/hass-vg/$lv
| done
| mount /dev/hass-vg/root-snapshot /mnt/backup
| mount /dev/hass-vg/docker-data-snapshot /mnt/backup/var/lib/docker
| mount /root/.cache /mnt/backup/root/.cache -o bind
|
| # back up the snapshot via chroot, then /boot from the live system
| chroot /mnt/backup kopia \
|     --config-file="/root/.config/kopia/repository.config" \
|     --log-dir="/root/.cache/kopia" \
|     snap create / /var/lib/docker
| kopia \
|     --config-file="/root/.config/kopia/repository.config" \
|     --log-dir="/root/.cache/kopia" \
|     snap create /boot /boot/efi
|
| success=true
| magicalhippo wrote:
| Awesome, thanks!
| abbbi wrote:
| wyng-backup does this. It uses the device mapper's
| thin_dump tools to allow for incremental backups between
| snapshots, too:
|
| https://github.com/tasket/wyng-backup
|
| edit: requires lvm thin provisioned volumes
|
| There is also thin-send-recv which basically does the
| same as zfs send/recv just with lvm:
|
| https://github.com/LINBIT/thin-send-recv
|
| it uses the same functions of the device mapper to allow
| incremental sync of lvm thin volumes.
| magicalhippo wrote:
| Thanks for the pointers, looks very relevant.
|
| It's just such a low-effort peace of mind. Just a few
| clicks and I know that regardless what happens to my disk
| or my system, I can be up and running in very little time
| with very little effort.
|
| On Linux it's always a bit more work, but backup and
| restore is one of those things I prefer not be too
| complicated, as the stress level is usually high enough
| when you need to do a restore without also worrying about
| forgetting some incantation steps.
| abbbi wrote:
| It depends. Doing a complete disaster recovery of a
| Windows system IMHO can be a real struggle. Especially if
| you have to restore a system to different hardware, which
| the system state backup that Microsoft offers does not
| support afaik.
|
| Backing up a linux system in combination with REAR:
|
| https://github.com/rear/rear
|
| and a backup utility of your choice for the regular
| backup has never failed me so far. I used it to restore
| Linux systems to completely different hardware without any
| troubles.
| magicalhippo wrote:
| For my cases it's been quite easy, but then I've mostly
| had quite plain hardware so didn't need vendor drivers to
| recover.
|
| While I've had to recover in anger twice, I've used the
| same procedure to migrate to new hardware many times.
| Just restore to the new disk in the new machine, and let
| Windows reboot a few times and off I went.
|
| REAR looks useful, hadn't seen that before.
| _flux wrote:
| You can also use lvm2 and then you get atomic snapshots with
| any file system (I think it needs to support fsfreeze, I
| guess all of them do).
| pixelmonkey wrote:
| I never knew this. Thanks for sharing!
| Am4TIfIsER0ppos wrote:
| lvm requires unallocated space in the volume which makes it
| kind of garbage to use for snapshots
| hashworks wrote:
| While I do that, is that really the case? I can imagine
| database snapshots are consistent most of the time, but it
| can't be guaranteed, right? In the end it's like a server
| crash, the database suddenly stops.
| lmz wrote:
| Your DB is supposed to guarantee consistency even in server
| crashes. (The Consistency, Durability part of ACID).
| mdavidn wrote:
| That consistency is built on assumptions about the
| filesystem that may not hold true of a copy made
| concurrently by a backup tool.
|
| e.g. The database might append to write-ahead logs in a
| different order than the order in which the backup tool
| reads them.
| grumbelbart2 wrote:
| That's why you do a filesystem snapshot before the
| backup, something supported by all systems. The snapshot
| is constant to the backup tool, and read order or
| subsequent writes don't matter.
|
| The main difference is that Windows and MacOS have a
| mechanism that communicates with applications that a
| snapshot is about to be taken, allowing the applications
| (such as databases) to build a more "consistent" version
| of their files.
|
| In theory, of course, database files should always be in
| a logically consistent state (what if power goes out?).
| Sakos wrote:
| > something supported by all systems
|
| Well, supported by Windows and MacOS. Linux only if you
| happen to use zfs or btrfs, and also only if the backup
| tool you use happens to rely on those snapshots.
| c45y wrote:
| I believe basically any filesystem will work if you have
| it on LVM. Bonus of lv snaps being thin snapshots too
| jlokier wrote:
| That works if the backup uses a snapshot of the filesystem
| or a point in time. Then the backup state is equivalent to
| what you'd get if the server suddenly lost power, which all
| good ACID databases handle.
|
| The GP is talking about when the backup software reads
| database files gradually from the live filesystem at the
| same time as the database is writing the same files. This
| can result in an inconsistent "sliced" state in the backup,
| which is different from anything you get if the database
| crashes or the system crashes or loses power.
|
| The effect is a bit like when "fsync" and write barriers
| are not used before a server crash, and an inconsistent mix
| of things end up in the file. Even databases that claim to
| be append-only and resistant to this form of corruption
| usually have time windows where they cannot maintain that
| guarantee, e.g. when recycling old log space if the backup
| process is too slow.
| bongobingo1 wrote:
| Do you have much of an opinion on why you went with Restic over
| Borg? The single Go binary is an obvious one, perhaps that
| alone is enough. I remember some people having unbounded memory
| usage with Restic, but that might have been a very old version.
| hashworks wrote:
| I use both, and I never had problems with any of them. Restic
| has the advantage that it supports a lot more endpoints than
| ssh/borg, e.g. S3 (or anything that rclone supports). Also
| borg might be a little bit more complicated to get started
| with than restic.
| dsissitka wrote:
| The big one for me was
| https://borgbackup.readthedocs.io/en/stable/faq.html#can-
| i-b....
| _flux wrote:
| This was basically one big reason why I went with
| https://kopia.io . The other might have been its native S3
| support.
| pixelmonkey wrote:
| For me, these traits made restic initially attractive:
|
| - encrypted, chunk-deduped, snapshotted backups
|
| - single Go binary, so I could even backup the binary used to
| create my backups
|
| - reasonable versioning and release scheme
|
| - I could read, and understand, its design document:
| https://github.com/restic/restic/blob/master/doc/design.rst
|
| I then just tried using it for a year and never hit any
| issues with it, so kept going, and now it's 6+ years later.
| marcus0x62 wrote:
| I use both to try to mitigate the risk of losing data due to
| a backup format/program bug[1]. If I wasn't worried about
| that, I'd probably go with Borg but only because my offsite
| backup provider can be made to enforce append-only backups
| with Borg, but not Restic, at least not that I could find.[2]
| Otherwise, I have not found one to be substantially better
| than the other in practice.
|
| 1 - some of my first experiences with backup failures were
| due to media problems -- this was back in the days when
| "backup" pretty much meant "pipe tar to tape" and while the
| backup format was simple, tape quality was pretty bad. These
| days, media -- tape or disk -- is much more reliable, but
| backup formats are much more complex, with encryption, data
| de-dup, etc. Therefore, I consider the backup format to be at
| least as much of a risk to me now as the media. So, anyway, I
| do two backups: the local one uses restic, the cloud backup
| uses borg.
|
| 2 - I use rsync.net, which I generally like a lot. I wrote up
| my experiences with append-only backups, including what I did
| to make them work with rsync.net here:
| https://marcusb.org/posts/ransomware-resistant-backups/
| bobek wrote:
| I have ended up with something very similar. Restic/rclone is an
| awesome combo. https://bobek.cz/restic-rclone/
| PhilippGille wrote:
| Do you only back up your home directory, or also others? I
| didn't find info about that in your post.
| pixelmonkey wrote:
| I backup everything except for scratch/tmp/device style
| directories. Bytes are cheap to store, my system is a
| rounding error vs my /home, and deduping goes a long way.
| PhilippGille wrote:
| I'm less worried about the size and more about something
| breaking when doing a recovery.
|
| Let's say you're running Fedora with Gnome and you want to
| switch to KDE without doing a fresh install. You make a
| backup, then go through the dozens of commands to switch,
| with new packages installed, some removed, display managers
| changed etc. Now something doesn't work. Would recovering
| from the restic backup reliably bring the system back in
| order?
|
| The tool from the original post seems to be geared towards
| that, while most Restic and rclone examples seem to be
| geared towards /home backup, so I wonder how much this is
| actually an alternative.
| pixelmonkey wrote:
| Oh, I see what you're saying. I personally wouldn't use
| it to do a 100% filesystem restore. For the sake of
| simplicity, I'd just use dd/ddrescue to make a .img file
| and then load that .img file directly into a partition to
| boot from a new piece of hardware. Likewise if I were
| doing a big system change like GNOME to KDE or vice
| versa, I'd just make an .img file before and restore from
| it if it went wrong.
|
| I think of restic system backups covering something like
| losing a customized /etc file in an apt upgrade and
| wanting to get it back.
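|
| For the raw image, something like this (device and destination
| are illustrative; boot a live USB first so nothing is writing
| to the disk):
|
|     dd if=/dev/sda of=/mnt/external/laptop.img \
|         bs=4M status=progress conv=fsync
|
| Restoring is the same command with if= and of= swapped, and
| ddrescue is the better choice if the source disk is failing.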
| kmarc wrote:
| For home backup, I have a similar setup with dedup,
| local+remote backups.
|
| Borgbackup + rclone (or aws) [1]
|
| It works so well, I even use this same script on my work
| laptop(s). rclone enables me to use whatever quirky file
| sharing solution the current workplace has.
|
| [1]:
| https://github.com/kmARC/dotfiles/blob/master/bin/backup.sh
| dikei wrote:
| I used to use restic with scripting, then I discovered
| resticprofile, and swiftly replaced all my scripts with it.
|
| https://github.com/creativeprojects/resticprofile
|
| I also use Kopia as an alternative to Restic, in case some
| critical bugs happen to either one of them.
|
| https://kopia.io/
| AdaX wrote:
| Personally, I've had some issues with Kopia.
|
| I found their explanation here:
|
| https://github.com/kopia/kopia/issues/1764
|
| https://github.com/kopia/kopia/issues/544
|
| Still not solved after many years :(
|
| Now I use Borg + Restic and I am happy
|
| + GUI for Restic https://github.com/garethgeorge/backrest
|
| + GUI for Borg https://github.com/borgbase/vorta
| e12e wrote:
| I've been mulling over setting up restic/kopia backups - and
| recently discovering that httm[1] supports restic directly in
| addition to zfs (and more) - I think I finally will.
|
| [1] https://github.com/kimono-koans/httm
| pixelmonkey wrote:
| I only discovered httm thanks to this thread, and I'll
| definitely be trying it out for the first time today. Maybe
| I'll add an addendum to my blog post about it.
| bulletmarker wrote:
| I have used pretty much the same setup for the last 6 years. I
| run borg to a small server then rclone the encrypted backup
| nightly to B2 storage.
| carderne wrote:
| Enjoyed the post, thanks. One question: why don't you use
| restic+rclone on macOS? They both support it and I'd assume you
| could simplify your system a bit...
| pixelmonkey wrote:
| I only have one macOS system (a Mac Mini) and Arq works well
| for me. Also I prefer to use Time Machine for the local
| backups (to a USB3 SSD) on macOS since Apple gives Time
| Machine all sorts of special treatment in the OS, especially
| when it comes time to do a hardware upgrade.
| setopt wrote:
| I've also found Arq to be brilliant on MacOS. It's
| especially nice on laptops, where you can e.g. set it to
| pause on battery and during working hours. Also, APFS
| snapshots is a nice thing given how many Mac apps use
| SQLite databases under the hood (Photos, Notes, Mail,
| etc.).
|
| On Linux, the system I liked best was rsnapshot: I love its
| brutal simplicity (cron + rsync + hardlinks), and how easy
| it is to browse previous snapshots (each snapshot is a real
| folder with real files, so you can e.g. ripgrep through a
| date range). But when my backups grew larger I eventually
| moved to Borg to get better deduplication + encryption.
| pixelmonkey wrote:
| rsnapshot was definitely my favorite Linux option before
| restic. I find that restic gives me the benefits of
| chunk-based deduplication and encryption, but via `restic
| find` and `restic mount` I can also get many of the
| benefits of rsnapshot's simplicity. If you use `restic
| mount` against a local repo on a USB3 SSD, the FUSE
| filesystem is actually pretty fast.
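|
| For reference, the two commands I mean (repo path illustrative):
|
|     # browse every snapshot as a plain directory tree over FUSE
|     restic -r /mnt/ssd/restic-repo mount /mnt/restic
|
|     # or locate a file across snapshots without mounting anything
|     restic -r /mnt/ssd/restic-repo find 'notes/*.md'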
| setopt wrote:
| Thanks for the info, I'll have a closer look at Restic
| then. Borg also has a FUSE interface, but last time I
| tried it I found it abysmally slow - much slower than
| just restoring a folder to disk and then grepping through
| it. I used a Raspberry Pi as my backup server though, so
| the FUSE was perhaps CPU bound on my system.
| pixelmonkey wrote:
| Yea, I don't want to oversell it. The restic FUSE mount
| isn't anywhere near "native" performance. But, it's fast
| enough that if you can narrow your search to a directory,
| and if you're using a local restic repo, using grep and
| similar tools is do-able. To me, using `restic mount`
| over a USB3 SSD repo makes the mount folder feel sorta
| like a USB2 filesystem rather than a USB3 one.
| jenscow wrote:
| I use BackInTime, which works in a similar way but is much more
| configurable. I have hourly backups of all my code for the past
| day, then a single daily for the past week, etc.
|
| Saved my ass a few times.
| Springtime wrote:
| Sounds like rsnapshot (rsync with hardlinks and scheduling) but
| the BackInTime repo doesn't mention any comparison of how it's
| different, though Timeshift says they're similar. Anyone have
| experience with BiT vs rsnapshot?
| bayindirh wrote:
| BackInTime works similarly to Apple Time Machine. It uses
| hardlinks + new files. Plus, it keeps settings for that
| backup inside the repository itself, so you can install the
| tool, point it at the folder, and start restoring.
|
| On top of that BiT supports network backups and multiple
| profiles. I've been using it on my desktop systems with multiple
| profiles for years and it's very reliable.
|
| However it's a GUI first application, so for server
| applications Borg is a much better choice.
| raudette wrote:
| I've used BackInTime since 2010. I loved that, even without
| using the tool, you could just poke through the file structure,
| and get an old version of any backed up file.
| gchamonlive wrote:
| I use a series of scripts to make daily Borg backups to a local
| repository: https://github.com/gchamon/borg-automated-backups
|
| Currently the local folder is a samba mount so it's off-site.
|
| The only tip I'd have for people using Borg is to verify your
| backups frequently. It can get corrupted without much warning.
| Also, if you want quick and somewhat easy monitoring that backups
| are being created, you can use webmin to watch for modifications
| in the backup folder and send an email if a backup hasn't been
| made in a while. Similarly, you can regularly scan the Borg repo
| and send an email in case of failures for manual investigation.
|
| This is low tech, at least lower tech than elastic stack or
| promstack, but it gets the job done.
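|
| Verifying is a single command (repo path illustrative); the
| --verify-data pass is slow but actually re-reads the chunks:
|
|     borg check /path/to/repo
|     borg check --verify-data /path/to/repo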
| khimaros wrote:
| I've had a positive experience with borgmatic, which is available
| in the Debian repos.
| gchamonlive wrote:
| Neat! I'll take a look, thanks!
| pmarreck wrote:
| Yet another solution that is wholly unnecessary in NixOS. Nice
| idea, though, since you can too easily screw up every other
| Linux.
| Lord_Zero wrote:
| I just switched from Windows to Mint and the first thing it asked
| me was to configure backups and snapshots and stuff. Pretty cool!
| Groxx wrote:
| Mint's first-launch welcome-list is excellent. It's a
| relatively small thing but it helps _a lot_.
| stevefan1999 wrote:
| Can someone recommend a solution that works well with immutable
| distros such as Project Bluefin or Fedora Kinoite/Silverblue? We
| just need to back up maybe /etc and dotfiles. Also great if it
| can back up NixOS too.
| trinsic2 wrote:
| Don't forget Aptik, great for migrating a system to a new distro.
| sieve wrote:
| ZFS Snapshots + Sanoid and Syncoid to manage and trigger them is
| what people should be doing. Unfortunately, booting from ZFS
| volumes seems to be some form of black art unless things have
| changed over the last couple of years.
|
| The license conflict and OpenZFS always having to chase kernel
| releases often resulting in delayed releases for new kernels
| means I cannot confidently use them with rolling release distros
| on the boot drive. If I muck something up, the data drives will
| be offline for a few minutes till I fix the problem. Doing the
| same with the boot drive is a pain I can live without.
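|
| The manual equivalent of what Sanoid/Syncoid automate, as a
| sketch (dataset and host names are illustrative):
|
|     # take a named snapshot before doing anything risky
|     zfs snapshot rpool/ROOT/default@pre-upgrade
|
|     # list what's available to roll back to
|     zfs list -t snapshot rpool/ROOT/default
|
|     # revert the dataset after a botched upgrade
|     zfs rollback rpool/ROOT/default@pre-upgrade
|
|     # syncoid replicates snapshots to another pool or machine
|     syncoid rpool/ROOT/default backupbox:tank/backups/root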
| rabf wrote:
| Best option to date: https://github.com/zbm-dev/zfsbootmenu
|
| A shame most distros' installers don't support it natively, but
| an encrypted rootfs on ZFS is great once you get it set up.
| sieve wrote:
| Yeah.
|
| I am somewhat wary of trying this, mucking something up and
| wasting a lot of time wrestling with it. Will probably play
| around with it in a vm and use it during the next ssd
| upgrade.
|
| Would have been so much better if the distros showed more
| interest in ZFS
| aeadio wrote:
| In principle there's no reason you can't install this next
| to GRUB in case you're wary. If you're not using ZFS native
| encryption, and make sure not to enable some newer zpool
| features, GRUB booting should work for ZFS-on-root.
|
| That said, I've been using the tool for a while now and
| it's been really rock solid. And once you have it installed
| and working, you don't really have to touch it again, until
| some hypothetical time when a new backward-incompatible
| zpool feature gets added that you want to use, and you need
| a newer ZFSBootMenu build to support it.
|
| Because it's just an upstream Linux kernel with the OpenZFS
| kmod, and a small dracut module to import the pool and
| display a TUI menu, it's mechanically very simple, and
| relying on core ZFS support in the Linux kernel module and
| userspace that's already pretty battle tested.
|
| After seeing people in IRC try to diagnose recent GRUB
| issues with very vanilla setups (like ext4 on LVM), I'm
| becoming more and more convinced that the general approach
| used by ZFSBootMenu is the way to go for modern EFI
| booting. Why maintain a completely separate implementation
| of all the filesystems, volume managers, disk encryption
| technologies, when a high quality reference implementation
| already exists in the kernel? The kernel knows how to boot
| itself, unlock and mount pretty much any combination of
| filesystem and volume manager, and then kexec the
| kernel/initrd inside.
|
| The upsides to ZFSBootMenu, OTOH:
|
| * Supports all ZFS features from the most recent OpenZFS
| versions, since it uses the OpenZFS kmod
| * Select boot environment (and change the default boot
| environment) right from the boot loader menu
| * Select specific kernels within each boot environment (and
| change the default kernel)
| * Edit kernel command line temporarily
| * Roll back boot environments to a previous snapshot
| * Rewind to a pool checkpoint
| * Create, destroy, promote and orphan boot environments
| * Diff boot environments to some previous snapshot to see
| all file changes
| * View pool health / status
| * Jump into a chroot of a boot environment
| * Get a recovery shell with a full suite of tools available
| including zfs and zpool, in addition to many helper scripts
| for managing your pool/datasets and getting things back
| into a working state before either relaunching the boot
| menu, or just directly booting into the selected
| dataset/kernel/initrd pair.
| * Even supports user mode SecureBoot signing -- you just need
| to pass the embedded dracut config the right parameters to
| produce a unified image, and sign it with your key of choice.
| No need to mess around with shim and separate kernel signing.
| croniev wrote:
| Timeshift does not work for me because I encrypted my SSD and
| decrypt on boot, but linux sees every file twice, once encrypted
| and once decrypted, thinking that my storage is full, and thus
| Timeshift refuses to make backups due to no storage. At least
| that's as far as I understand it atm.
| sulandor wrote:
| > linux sees every file twice, once encrypted and once
| decrypted
|
| fixing this should prove profitable
| kkfx wrote:
| Nice UI :-)
|
| Random notes/suggestions
|
| - rsync is not a snapshot tool, so while in most cases we can
| rsync a live volume without issues on a desktop, it's not a good
| idea to do so
|
| - zfs support in 2024 is a must, btrfs honestly is the proof of
| how NOT to manage storage, like stratis
|
| - it seems to be not so much a backup tool, which is perfectly
| fine, but since the target seems to be end users who are not very
| IT literate, it should be stated clearly...
| ivanjermakov wrote:
| The magical thing about Timeshift is that you can use it straight
| from your live CD. It will find the root partition and the
| backups, and restore them together with the boot partition.
| Shorel wrote:
| My system is different and simpler:
|
| The root partition / and the home partition /home are different.
|
| There's a /home/etc/ folder with a very small set of
| configuration files I want to save, everything else is nuked on
| reinstall.
|
| When I do a reinstall, the root partition is formatted, the /home
| partition is not.
|
| This allows me to test different distros and not be tied to any
| particular distro or any particular backup tool, if I test a
| distro and I don't like it, then it is very easy to change it.
| ijustlovemath wrote:
| /home/etc or ~/etc?
| birdiesanders wrote:
| Those are equivalent.
| michaelmior wrote:
| On most systems, that is not the case. Typically a user's
| home directory is `/home/USERNAME` so `~/etc` would be
| `/home/USERNAME/etc`.
| execat wrote:
| No. ~etc is equivalent to /home/etc. ~/etc is the same as
| /home/<current user>/etc.
| ijustlovemath wrote:
| Try it for yourself:
|
| [ /home/etc = ~/etc ] || echo theyre different
| dataflow wrote:
| The implication here is that your home directory can actually
| work across distros? How in the world do you do that? Surely
| you have to encounter errors sometimes when cached data or
| configs point to nonexistent paths, or other incompatibilities
| come up?
| ijustlovemath wrote:
| Typically ~ contains user specific config files for
| applications, which are (usually) programmed to be distro
| agnostic. If you're installing the same applications across
| distros, I don't see why this wouldn't work without too much
| effort. After all, most distros are differentiated by just
| two things:
|
| - their package management tooling
|
| - their filesystem layout (eg where do libraries etc go)
| 8organicbits wrote:
| I've found Debian Stable to be extremely stable, especially in
| recent years; I honestly don't think about system restore as much
| as I worry about a drive crashing or a laptop getting stolen. I
| assumed Linux Mint LTS was similarly stable.
|
| Folks who have run into issues, what was the root cause?
| nubinetwork wrote:
| Isn't timeshift what apple calls their snapshot/backup thingy?
| aaronmdjones wrote:
| No, that's Time Machine.
| tracker1 wrote:
| I've just got a simple script that uses rclone for most of my
| home directory to my NAS. For nearly everything else, I don't
| mind if I have to start mostly from scratch.
| crabbone wrote:
| My first "real" experience with Linux was with Wubi (Ubuntu
| packaged as a Windows program). I think it was based on Ubuntu
| version 6 or 8.
|
| I also tried to update it when the graphical shell displayed a
| message saying that an update was available. Of course, it bricked
| the system.
|
| I've switched from Ubuntu to Mint to Debian to Fedora to Arch to
| Manjaro for personal use and had to support a much wider variety
| of distributions professionally. My experience so far has been
| that upgrades inevitably damage the system. Most don't survive
| even a single upgrade. Arch-like systems survive several major
| package upgrades, but also start falling apart with time. Every
| few years enough problems accumulate that merit either a complete
| overhaul or just starting from scratch.
|
| With this lesson learned, I don't try to work with backups for my
| own systems. When the inevitable happens, I try to push forward
| to the next iteration, and if some things are to be lost, then so be
| it. To complement this, I try to make the personal data as small
| and as simple to replicate and to modify moving forward as
| possible. I.e. I would rule against using filesystem snapshots in
| favor of storing the file contents. I wouldn't use symbolic links
| (in that kind of data) because they can either break or not be
| supported in the archive tool. I wouldn't rely on file ownership
| or permissions (god forbid ACLs!). I try to remove as much
| "formatting" information as possible... so I end up with either
| text files or images.
|
| This is not to discourage someone from building automated systems
| that can preserve much richer assembly of data. And for some data
| my approach would simply be impossible due to requirements. But,
| on a personal level... I think it's less of a software problem
| and more of a strategy about how not to accumulate data that's
| easy to lose.
___________________________________________________________________
(page generated 2024-07-23 23:08 UTC)