Helmut10001 a day ago

BorgBackup user here and really happy. It was set-and-forget for me, and after 7 years the deduplicated backup is still working flawlessly each week. I recommend pairing it with borgmatic [1], which abstracts away some of the complexities of the underlying borg backup.

[1]: https://github.com/borgmatic-collective/borgmatic

  • dudu24 a day ago

    My problem is I learn some tool like this, set it, and then indeed forget it. Then I avoid testing my backups because of the work it takes to un-forget it. Because of this, I'm leaning more and more towards rsync or tools that have GUI frontends.

    • ohthehugemanate 8 hours ago

      At a minimum you need backup, regular restore tests, and alerts when backups stop or restore tests fail.

      Personally I automate restore testing with cron. I have a script that picks two random files from the filesystem: an old one (which should be in long term storage) and a new one (which should be in the most recent backup run, more or less), and tries restoring them both and comparing md5sums against the live files. I like this for two reasons: 1. it's easy to alert when a cronjob fails, and 2. I always have a handy working snippet for restoring from backups when I inevitably forget how to use the tooling.
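
      Stripped down, that kind of check can look like this (repo path made up, cmp standing in for the md5sum comparison, and only one file checked):

        #!/bin/sh
        # restore spot-check: pull one random live file back out of the
        # most recent borg archive and compare it byte-for-byte
        set -e
        REPO=/srv/borg-repo
        FILE=$(find /home -type f | shuf -n 1)          # one random live file
        ARCHIVE=$(borg list --short "$REPO" | tail -1)  # newest archive
        TMP=$(mktemp -d)
        cd "$TMP"
        borg extract "$REPO::$ARCHIVE" "${FILE#/}"      # borg stores paths without the leading /
        cmp "$FILE" "./${FILE#/}"                       # any mismatch fails the cronjob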

      IMO alerting is the trickiest part of the whole setup. I've never really gotten that down on my own.

      • amjd 3 hours ago

        I use ntfy.sh for sending push notifications from scripts and such. It's open source and free (they have paid plans as well now, but I didn't encounter any limitations in the free plan).

        Not an endorsement, just a happy user.
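
        For reference, publishing from a script is a single HTTP request (topic name made up):

          # fire a push notification if the backup command fails
          borg create ... || curl -d "borg backup failed on $(hostname)" https://ntfy.sh/my-backup-alerts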

        • pohuing an hour ago

          +1 for ntfy: it's also trivial to self host

      • Helmut10001 7 hours ago

        I recently set up email alerting through a Telegraf-InfluxDB-Grafana stack: Telegraf's syslog agent collects the logs, InfluxDB filters for the specific syslog entries, and Grafana handles the email alerting.

        On another VM, I used postfix to email logs after each cronjob run (failed or passed), which also works great.

    • belthesar 21 hours ago

      Rather than avoid tools that work well, I would encourage you to adopt solutions that solve your use cases. For instance, if you aren't getting notifications that a backup is running, completing, or failing, then all you've set up is a backup job, not a BDR (backup and disaster recovery) process. If you're looking for a tool to cover your entire BDR plan, then you're looking at a commercial solution that bakes in automated restore testing, and so on.

      Not considering all the aspects of a BDR process is what leads to this problem. Not the tool.

  • mbrumlow a day ago

    > set and forget for me and after 7 years

    Please tell me you verify your backups now and then?

    • Helmut10001 a day ago

      Borgmatic runs consistency checks [1] once a month on all repositories and archives, and I occasionally retrieve older versions of selected files (archives with --verify-data only once a year, or whenever I feel the need - there's 9TB of data in the borg repo, which takes a while to scan). Note though that borg is not my main backup; it is the fallback "3" in the 3-2-1 principle, where my primary data sits on a ZFS RAIDZ2 and my primary backup is an offsite ZFS RAIDZ2 in pull mode. I added borg because I did not want to rely on a single piece of software (ZFS), although this fear has proven unfounded so far.

      [1]: https://borgbackup.readthedocs.io/en/stable/usage/check.html

    • dewey a day ago

      This always gets repeated; it sounds good and makes sense in theory, but in reality there's no good way to do it manually, and it should be the job of a computer anyway.

      Restoring one file from the backup works, but what if something else is corrupted?

      Restoring the system from the image works, but what if some directory is not in the backup and you don't notice that while testing?

      • l33tman a day ago

        I think the point is that if your data is valuable enough to you, you can't really trust that option in the backup tool to work - maybe you misunderstood some config option and the test isn't really being run, the tool is broken, or it only runs on some of the backup files or dirs, etc. Or your original config might have missed a folder because it was mounted through some other filesystem (this happened to me with Borg, actually, and my whole /home/user dir wasn't backed up for the first 6 months I ran it :).

        It seems good to have another tool, run manually or automatically on a regular schedule, that tries to locate random files from your existing file system in the backups. Something like that... though that other tool might be broken as well, of course... :/

        • dewey a day ago

          It's a very hard problem. In the end everyone has an increasing amount of data, where double checking it manually is not feasible any more, and a perfect solution is maybe not possible.

          Relying on software with good defaults that a lot of people use is probably a relatively safe bet, combined with a second or third backup system (personally I use Backblaze and Time Machine).

      • witten a day ago

        borgmatic's "spot" check (probabilistically) protects against both of those failure modes: https://torsion.org/borgmatic/docs/how-to/deal-with-very-lar...

        • dewey a day ago

          Indeed, I think these kinds of automated checks are much more helpful than telling people they have to "test" their backups. If a backup software doesn't do that automatically and report if something is off, it's not good software or user experience.

    • selcuka a day ago

      > Please tell me you verify your backups now and then?

      Then one can't call it "set and forget", right?

      • semanticist a day ago

        Backup testing can be automated. I don't do this for my personal stuff, but at work there's a box that restores our primary DB from backups, loads it into MySQL, and runs some smoke tests to make sure it looks roughly right. A quick script and a cronjob, and backups get tested every night.

        I'm sure there are more thorough ways to do this kind of testing, but whatever level of confirmation you need, automating it should be viable - and then you only have to pay attention if/when something breaks.
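
        A rough sketch of such a nightly job (the backup-fetching step and table name here are hypothetical):

          #!/bin/sh
          set -e
          fetch-latest-db-backup > /tmp/dump.sql    # however your tooling retrieves the dump
          mysql -e 'DROP DATABASE IF EXISTS smoketest; CREATE DATABASE smoketest'
          mysql smoketest < /tmp/dump.sql
          # smoke test: a table that should never be empty
          rows=$(mysql -N -e 'SELECT COUNT(*) FROM users' smoketest)
          [ "$rows" -gt 0 ]    # non-zero exit fails the cronjob and triggers alerting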

  • rubenbe a day ago

    Does someone know a good Android client?

    • uhartelightning 14 hours ago

      I syncthing stuff I want off of my phone onto a computer, from which I borg-back-it-up.

    • ThomasWaldmann 21 hours ago

      Some people have installed / used borg on Android, but I guess that isn't suitable for end users - rather for the nerds.

      Maybe try SeedVault?

krick a day ago

Currently I'm just using bare rclone to back up to my own remote machines, but obviously this isn't a very professional solution. I was thinking of adding Backblaze B2 as a remote, but I guess using rclone wouldn't be a state-of-the-art solution here. After all, it isn't really a backup tool, is it? It has some built-in encryption, but it's a bit clunky, and I'd think a proper backup tool should automatically divide data into blocks of suitable size (instead of just creating a file per file) to be S3/B2 API-friendly, encode whole directories as tar (if needed to preserve links, for example), do deduplication, and whatever other best practices I have no idea about, but which backup-proficient people probably invented a long time ago.

Does anybody have a recommendation?

I briefly looked at restic and duplicati, but surprisingly neither is as simple to use as I'd expect a dedicated backup tool to be (I don't need, and kinda don't want, a GUI; I'd like all configuration to be stored in a single config file I can just back up to a different location like everything else, and re-create on any new machine). More than that, I've read some scary stories about these tools fucking up their indexes so that data turns out to be non-restorable, which sounds insane, since this is something you must be absolutely sure your backup tool would never do, no matter what - because what's even the point of making backups then?

  • RockRobotRock a day ago

    >I'd like all configuration to be stored in a single config-file I can just back-up to a different location like everything else, and re-create on any new machine

    You might want to look into kopia. It accomplishes the same task as restic, but handles configs in a way you might find more appealing. Further reading: https://news.ycombinator.com/item?id=34154052

    Don't even bother with duplicati. I've tried to make it work so many times, but it's just a buggy mess that always fails. It's a shame too, because I really like the interface.

  • unaindz 21 hours ago

    I've been using bupstash since trying to do backups on an RPi and finding Borg too slow to be usable. Since then I've upgraded to a proper server at home but kept bupstash, as I found it to just work better for the most part. Keep in mind there's not been much progress since the last release two years ago, and it's still tagged as beta by the author. Tbf, I think he has a higher quality standard than other projects that aren't tagged as such.

    Useful backup tool comparison: https://github.com/deajan/backup-bench

  • Mister_Snuggles a day ago

    I'm very happy with Restic backing up to BackBlaze B2.

    I have a "config file", which is really just a shell script to set up the environment (repository location, etc), run the desired backups, and execute the appropriate prune command to implement my desired retention schedule.
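
    Roughly this shape, with placeholder bucket/paths and an illustrative retention policy:

      #!/bin/sh
      # environment for restic + B2 (values are placeholders)
      export RESTIC_REPOSITORY="b2:my-backup-bucket:server1"
      export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
      export B2_ACCOUNT_ID="..."
      export B2_ACCOUNT_KEY="..."

      restic backup /etc /home /srv
      restic forget --prune --keep-daily 7 --keep-weekly 5 --keep-monthly 12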

    I've been using this setup for years with great success. I've never had to do a full restore, but my experience restoring individual files and directories has been fine.

    Do you have any links related to the index corruption issue? I've never encountered it, but obviously a sample size of one isn't very useful.

  • scorpioxy a day ago

    Whether something is simple or not depends on the use case, I'd say. But I found borg to be great. I'd recommend you check it out and go through the quickstart guide in the documentation. It does de-duplication and encryption. It does a lot more, but you don't have to use those features if you don't need them. I couple it with borgmatic to implement a backup and disaster recovery procedure meant to decrease the risk of data loss. I also use BorgBase, and they have a good service, but using something like B2 with this rclone support would be a cheaper alternative if you don't need the extras that BorgBase provides.

    I've been using it for quite a while now both for my personal projects and paid work and have had a good experience with it.

  • abhinavk a day ago

    restic + autorestic/resticprofile.

    Borg 2 is still beta, and Kopia is also out there. Kopia is newer, so I am testing it as another redundant backup on the same machine. I have the space, so why not?

    Every once in a while I run an integrity check (with data) so I can trust that metadata and data are fine.

nickcw a day ago

Writing an rclone backend for borg is something I have wanted to do for a long time.

However, I found that the backends weren't abstracted well enough in v1 to make that easy.

For v2, Thomas Waldmann has made a nicely abstracted interface, and the rclone code ended up being only <300 lines of Python, which only took an afternoon or two to make.

https://github.com/borgbackup/borgstore/blob/master/src/borg...

  • ThomasWaldmann 21 hours ago

    Thanks a lot for writing the rclone backend!

scorpioxy a day ago

Oh, very interesting. This has been a requested feature for a while, especially with the rise in popularity and decreased cost of object storage.

Borg working with object storage was not supported, though some people did use it that way. From my understanding, most would duplicate a repo and upload that, instead of borg directly writing/manipulating it. This could be problematic: if the original repo was corrupt, the corruption would be duplicated too. So this will make things much easier and allow for a more streamlined workflow. Having the tool support rclone instead of specific services seems like a wise and more future-proof choice to me.

dang a day ago

Related. Others?

Borg 2.0 beta (deduplicating backup program with compression and encryption) - https://news.ycombinator.com/item?id=40990425 - July 2024 (1 comment)

Borgctl – borgbackup without bash scripts - https://news.ycombinator.com/item?id=39289656 - Feb 2024 (1 comment)

BorgBackup: Deduplicating archiver with compression and encryption - https://news.ycombinator.com/item?id=34152369 - Dec 2022 (177 comments)

Emborg – Front-End to Borg Backup - https://news.ycombinator.com/item?id=30035308 - Jan 2022 (2 comments)

Deduplicating Archiver with Compression and Encryption - https://news.ycombinator.com/item?id=27939412 - July 2021 (71 comments)

BorgBackup: Deduplicating Archiver - https://news.ycombinator.com/item?id=21642364 - Nov 2019 (103 comments)

Borg – Deduplicated backup with compression and authenticated encryption - https://news.ycombinator.com/item?id=13149759 - Dec 2016 (1 comment)

BorgBackup (short: Borg) is a deduplicating backup program - https://news.ycombinator.com/item?id=11192209 - Feb 2016 (1 comment)

Mister_Snuggles a day ago

Does anyone have an up-to-date comparison of Borg vs Restic? Or a compelling reason to switch from Restic to Borg?

I've previously used Borg, but the inability to use anything other than local files or ssh as a backend became a problem for me. I switched to Restic around the time it gained compression support. So for my use-case of backing up various servers to an S3-compatible storage provider, Restic and Borg now seem to be equivalent.

Obviously I don't want to fix what isn't broken, but I'd also like to know what I'm missing out on by using Restic instead of Borg.

  • kornnflake a day ago

    +1, I'm in a similar situation and would be curious too about an up-to-date comparison.

    • ThomasWaldmann 21 hours ago

      Comparisons might be interesting, but one needs to be aware that they would be a bit apples to oranges:

      - unreleased code that is still in heavy development (borg2, especially the new repository code inside borg2).

      - released code (restic) with practically proven "cloud support" for quite a while now.

      borg2 is using rclone for the cloud backend, so that part is at least quite proven, but the layers above that in borg2 are all quite fresh and not much optimized / debugged yet.

cstuder a day ago

If you're looking for cheap online storage for your backups, know this: a Microsoft 365 Single subscription comes with 1 TB of OneDrive space (Family subscriptions come with 1 TB per person).

I've been using it with restic + rclone successfully for years. It's not very fast, but it works.

  • delusional a day ago

    I'd recommend having a look at Hetzner's "storage box" products. It's hard to beat 4€ a month for 1TB of SSH-accessible storage.

jjice a day ago

For personal use, at what point would one recommend using Borg over plain rsync?

I currently use rsync to back up a set of directories on a drive to another drive and a remote service (rsync.net). It's been working great, but I'm not sure if my use-case is just simple enough that this is a good solution, or if I'm missing a big benefit of Borg. I do envy Borg's encryption, but the complexity of a new tool, tied with the paranoia of maybe screwing up all my data, has had me on edge a bit about making the leap. I don't have a ton of data to back up - say about 5TB at the moment.

  • zimpenfish a day ago

    For me, the deduping and compression save a lot of storage. My mail backup (17 backups covering the last 6 months) is 837GB of original data, compressed to 312GB and dedupe'd to 19GB. Same with Postgres - 25GB to 7GB to 900MB.

    You could probably use rsync's hard linking to save space on the mail backup but I'm not sure you'd get it as small without faffing about.

  • remram a day ago

    The usual problem: if you delete/corrupt a file and find out two days later, your daily backup is not going to help you. Having more than one snapshot is very valuable.

    http://www.taobackup.com/ etc

    Rsync is also very slow with lots of files, and doesn't deal with renamed files (it will transfer them again).

    • eikenberry 19 hours ago

      Rsync backups can be set up to deal with this. I have rsync doing daily incremental backups: the main sync goes to a 'current' folder, and the old versions of changed files stay in a weekday-named folder (e.g. Monday). So I have a rotating 7-day window to recover files. On top of that, I have a monthly long-term backup of the last old version of that month. This provides an arbitrarily long monthly window to recover from. Rsync is very versatile.
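
      A sketch of that weekday rotation (host and paths made up):

        #!/bin/sh
        # rsync moves the old version of any changed/deleted file aside
        # into a folder named after the weekday, e.g. /backups/Monday
        DAY=$(date +%A)
        rsync -a --delete \
              --backup --backup-dir="/backups/$DAY" \
              /home/ backuphost:/backups/current/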

      • remram 18 hours ago

        Yeah, with enough scripting you can rebuild a slow equivalent of a real backup program that also uses 10x the disk space.

        • eikenberry 15 hours ago

          Yep. But it works and has worked for over 20 years. Various backup software has come and gone in that time but rsync has been a rock.

          • leetnewb 6 hours ago

            FWIW, Borg is coming up on 10 years - 12 if you include the project it forked from. I like the simplicity of rsync approaches, but Borg seems to have longevity and widespread use.

  • ibizaman 19 hours ago

    With rsync, you're replicating only the latest state. With borg, you can see all the backups that were made and roll back to any previous snapshot. This is true of a lot of backup solutions, btw.

    Concretely, if you inadvertently delete a file and the deletion gets rsynced, you cannot use the backup to restore that file. With borg you can.
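
    For example (borg 1.x syntax; repo and file names made up):

      borg list /srv/borg-repo                  # see the available snapshots
      borg extract /srv/borg-repo::pc-2024-06-01 home/user/deleted-file.txt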

anotherevan 13 hours ago

I'll use Restic or BorgBackup for servers, but ended up going with Kopia for machines that are not always on, like laptops. It has the advantage of taking a somewhat opportunistic approach: it will start backing up if it hasn't done so in a while, and it seems able to restart with aplomb if it gets interrupted (machine shutdown or laptop lid closed).

That, and being able to have multiple machines writing to a shared repository at the same time is handy. I have the kids' Windows computers both backing up to the same repo to save a bit of storage. (Now if only Kopia supported VSS on Windows without mucking around with dubious scripts.)

elric a day ago

I've happily been writing borg backups to rsync.net for years. They support forcing borg in the ssh session using force-command, and borg has options that can prevent deletion (should the backup ssh key be compromised).

Overall it's a robust solution that isn't too painful to set up.
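
On a self-managed server, the pattern looks roughly like this (rsync.net wires up the force-command on their side; repo path and key are placeholders):

  # ~/.ssh/authorized_keys on the backup server
  command="borg serve --append-only --restrict-to-repository /backups/laptop",restrict ssh-ed25519 AAAA... backup@laptop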

  • unbrice a day ago

    I second this. I was looking for a solution that prevented a compromised host from deleting its own backups. Forcing the command as you mentioned works for rsync.net, and its snapshots also provide protection against fat-finger errors.

mendym 2 days ago

Is there a reason to use the borg encryption [1] over rclone crypt [2], or vice versa?

1. https://borgbackup.readthedocs.io/en/2.0.0b11/quickstart.htm...

2. https://rclone.org/crypt/

  • aborsy a day ago

    Rclone crypt is not much related to Borg. Rclone is a tool for copying files from one machine to another - in this case, encrypting them before copying. It's essentially rsync that works with the cloud.

    Borg is a different kind of tool, built for backup. It deduplicates, encrypts, snapshots, checksums, and compresses source directories into a single repository. It doesn't work with files, but rather with blocks of data. It includes commands for repository management, like searching data, pruning or merging snapshots, etc. You then transfer or sync the repository to wherever you want, with a tool such as rsync/SSH or rclone. Rclone is now natively supported, so you don't need to store the repository both locally and on the remote - you can back up directly to the remote.

  • misanthr0pe 2 days ago

    I would also wonder what the difference between this package and Restic is, as far as efficiency and encryption go.

  • freeqaz a day ago

    How good is the deduping when encryption is enabled? I was looking at rsync.net and it killed me that they don't support encryption in a sane way.

    • djbusby a day ago

      It's very sane: encrypt the bits, then send it to the host.

      Curious what you think is not right with their methods.

      • freeqaz a day ago

        Sure, but there is some requirement to not just blindly copy everything over and over, and that is where I've seen things get tricky before. If you enable encryption, you have to re-upload the entire snapshot periodically.

        It's annoying, because if you have TBs of stuff that blows. I'm just curious what systems exist for incremental, encrypted backups that don't require fully uploading new snapshots.

        See the NOTE section here. Re-reading it, this might be a limitation of Duplicity: https://www.rsync.net/resources/howto/duplicity.html

        • prirun a day ago

          Author of HashBackup here.

          Duplicity is very old backup software that uses the "full + incremental" strategy on a file-by-file basis, like tape backup systems. The full backup must be restored first and then all of the incrementals. This becomes impractical over time, so as with tapes, you must periodically repeat the full backup so the incremental chains do not become too long.

          Modern backup programs split files into blocks and keep track of data at the block level. You still do an initial full backup followed by incrementals, but block tracking allows you to restore any version of any file without restoring the full first and all following incrementals. The trade-off is in complexity: tracking blocks is more complex than tracking files.

          It has nothing to do with encryption.

    • mendym a day ago

      > they don't support encryption in a sane way.

      Should the storage provider provide support for encryption on their end? Would you not want to store the keys locally?

      • immibis a day ago

        The provider should not. It provides a false sense of security.

aquafox a day ago

Also a BorgBackup user here: I'm running it on a Raspberry Pi to back up important documents to a Hetzner storage box via ssh. The Pi also runs OpenMediaVault to provide an SMB share on my home LAN. So whenever I scan a new document, I just put it on the SMB share, and from there it's backed up automatically every day.

wzyboy a day ago

I've always been doing "two-pass" backups to achieve the "3-2-1" goal: the first pass runs BorgBackup to back up devices to my home server; the second pass uses rclone to transfer the repos on the home server to an object storage service (B2).

With rclone support built in, the setup becomes much easier.
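
The old two-pass setup, sketched with borg 1.x syntax (repo path and remote name made up; the sync is only safe while the repo isn't being written to):

  borg create /srv/borg-repo::'{hostname}-{now}' /home /etc   # pass 1: local repo
  rclone sync /srv/borg-repo b2:my-bucket/borg-repo           # pass 2: push to B2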

  • cl3misch a day ago

    I think this is heavily discouraged? Instead you should have multiple separate borg repos to minimize the risk of misconfiguration and data corruption.

    • cl3misch a day ago

      I managed to find the source for my statement:

      https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-c...

      It is not "heavily" discouraged. But you have to pay extra attention to perform a clean copy of the borg repo's files, and ideally check both instances regularly for integrity. I would assume it's easy to forget validating the "cold storage" copy of your borg repo in practice.

singhrac a day ago

A while ago it wasn't recommended to use Borg 2.0 because it wasn't baked enough. Has that changed? Are people using Borg 2?

  • aborsy a day ago

    It is still in beta, and has been in this state for a long time. At some point, the developer decided to delay it further and introduce whatever breaking changes are needed in this release.

    Note that if you use, say, the 2.11 version, you cannot upgrade to 2.12, and you cannot go back to 1.X either. People like me were stuck; it turned out you have to discard the repo. Sometime later they clarified this point better:

    >> Borg2 is currently in beta testing and might get major and/or breaking changes between beta releases (and there is no beta to next-beta upgrade code, so you will have to delete and re-create repos).

    I have a 2.X repo. It’s working fine and backs up. I have a lot of snapshots in that repo. If someone knows how to transfer them to a 2.X version once it’s out of beta, let me know.

  • DistractionRect a day ago

    Author calls that out right at the top of the changelog:

    > Beta releases are only for testing on NEW repos - do not use for production.

    • singhrac a day ago

      That's fair. There's a lot of software that is generally ok to use in beta, however, and this has been in beta for a long time.

      • Freak_NL a day ago

        Beta is beta. Given the way some other projects treat 'beta', it's hard to fault the developer for using the label correctly, with a clear notice.

rmoriz a day ago

It's a joy that the OSS world has so many active and really good backup tooling projects like Borg, restic and all the fancy wrapper/GUI tools. I use many of them in different environments for customer setups, desktops and my own cloud setup. It's essential to have several different options and each project has its own USP. A big thanks to everyone involved!

  • ensignavenger a day ago

    As you seem familiar with the landscape, do you care to share what you think the strengths/USP and weaknesses of each option you are familiar with are? Or do you know of a high quality blog post somewhere that does?

eternityforest a day ago

So, should I plan to switch to this rather than keep using Kopia?

I was using it for years on an external drive, but then I got a NAS, and did not want to fuss with community packages to get Borg working.

Kopia works fine, aside from the confusing GUI setup process, but it seems to be the least popular up and coming option.

Now it seems that this can directly target SFTP? I wonder what that means for the future of Kopia.

  • _flux a day ago

    Kopia has also supported rclone for a long time, though: https://kopia.io/docs/reference/command-line/common/reposito... . However, in my experience, backing up over sftp with kopia can be very slow. I suspect it's unable to use parallel sessions (or pipelining, but the rclone API probably doesn't do that).

    My reason for going with kopia was that previously you were not able to back up multiple hosts into the same repo without great inefficiencies. I'm not sure if that has since been resolved. Another was its native S3 support, which I use with Ceph.

    A perhaps more superficial personal reason is that at least Go is a statically typed language, even if its type system isn't that great..

tandav 2 days ago

The lack of support for S3-like remotes was the reason I switched from borg to restic.

  • synergy20 a day ago

    rclone has dedupe; I think it does what restic can do, plus multiple cloud support.

    rclone crypt also does encryption.

    so far I think rclone has it all for me.

    • aborsy a day ago

      Rclone doesn’t have deduplication. Its dedupe command just finds files with the same name, which is different from the deduplication used in backup software.

      Think of grinding data up in a big machine and removing the blocks that are redundant. Even if every file exists only as a single copy, you can still get a significant space reduction.

      • leetnewb a day ago

        Rclone also has a hash-based dedupe mode. Still different from borg, but it can be a little more robust than name-based.

    • wongogue a day ago

      They also work together: restic uses rclone for backends other than the 7 officially supported ones, and rclone also has a built-in restic integration.

      rclone on its own is a syncing solution, not backup.

      • RockRobotRock a day ago

        >restic uses rclone for backends

        This is cool. It sounds like I can set up restic to copy my backups to multiple S3 buckets, or even to an S3 bucket and a local drive at the same time, using a union remote (https://rclone.org/union/).
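
        An untested sketch of that idea (remote/bucket names made up; whether writes fan out to all upstreams depends on the union's create_policy):

          rclone config create s3remote s3 provider=AWS env_auth=true
          rclone config create mirror union \
              upstreams="/mnt/backup-disk s3remote:restic-bucket" create_policy=all
          restic -r rclone:mirror: init
          restic -r rclone:mirror: backup /home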

    • larschdk a day ago

      rclone and restic are not direct alternatives. They have a slight overlap, but are also different. Rclone is more versatile for moving/copying files. Restic has snapshotting, pruning, client side encryption, deduplication, and compression. Restic actually supports rclone as a backend.

    • thangngoc89 a day ago

      Restic also offers encryption and compression. That’s the selling point for me when dealing with dozens of TB

locusm a day ago

I quite like Borg; others worth checking out are Restic and Kopia. Restic didn't have a UI for a long time - not sure that's changed...

  • wongogue a day ago

    Nope. No configuration file either. But they added compression recently.

    autorestic or resticprofile fill the gap well. Backrest does UI.

    • locusm a day ago

      Backrest looks like a great find.

gmuslera a day ago

I used BorgBackup 1.x with rclone, backing up to a local repository and then sending it to S3 with rclone. Because of the way borg works with files, most of the historic data lies in old, untouched files in the repository; only a catalog and the new blocks create new or updated files. So it was great for using the S3 tiers that automatically move files that don't change for some period to a cheaper storage class.

Having both together makes this kind of use case easier.

swoorup a day ago

New to Borg, and backups in general.

Does borg have the ability to split chunks over multiple repositories of varying sizes? For example, I might have just 15GB of Google Drive storage, whereas elsewhere I might have 100GB available.

ThomasWaldmann a day ago

Please note that this is rather recent "bleeding edge" code from master branch.

It is available as 2.0.0beta11, but not suitable for production yet.

"Beta" also means that there won't be repository migration code from beta N to N+1.

wg0 a day ago

How can we use it for large database backups (> 500 GB), and is anyone doing that on a daily basis?

  • prirun a day ago

    Author of HashBackup here.

    To use modern block-based backup programs for large databases and VM images (a similar situation), you must use a very small block size for dedup to work well. For VM images, that's 4K. For databases, it's the page size, which is 4K for SQLite and 16K for InnoDB by default.

    With very small block sizes, most block-based backup programs kind of fall over, and start downloading lots of data on each backup for the block index, using a lot of RAM for the index, or both. So it's important that you test programs with small block sizes if you expect high dedup across backups. Some backup programs allow you to set the block size on a per-file basis (HashBackup does), while others set it at the backup repo level.

    To backup a database, there are generally a couple of options:

    1. Create an SQL text dump of the database and back that up (see the sketch after this list). For this to dedup well, variable-sized blocks must be used, and the smaller the block size, the higher the dedup ratio.

    2. Backup the database while running with a fixed block size equal to the db page size. You could lock the database and do the backup, but it's better to do two backup runs, the first with no locking, and the second with a read lock. The first backup cannot be restored because it would be inconsistent if any changes occur to the database during the backup. But it does not lock out any database users during the backup. The second backup will be much faster because the bulk of the database blocks have already been saved and only the changed blocks have to be re-saved. Since the second backup occurs with a read lock held, the second backup will be a consistent snapshot of the database.

    3. The third way is to get the database write logs involved, which is more complex.
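
    For instance, option 1 might look like this with borg 1.x (database name hypothetical); dumping to stdout avoids a temp file, and borg chunks the stream so unchanged regions dedup across runs:

      # dump the db and chunk the stream directly into the repo
      mysqldump --single-transaction mydb \
          | borg create --stdin-name mydb.sql /srv/borg-repo::'mydb-{now}' -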

  • Havoc a day ago

    As long as borg is running on both ends it should hash it I think

cyberax a day ago

Interesting. How robust is it in practice?

I've been using Duplicacy for a long while, and I've been pretty happy with it. But I'd love to switch to a fully open-source solution (Duplicacy is proprietary with sources publicly available).

jas39 a day ago

Too late, Borg! I specifically chose Restic for the rclone support. Can't change backup strategy now. It also saved me once.

Pcloud lifetime + Restic, all in one repo, to benefit from dedup.

mrbigbob a day ago

I remember reading quite a few years ago about people working to get Borg to work with windows. Has there been any recent progress with that?

  • ThomasWaldmann 21 hours ago

    Basic stuff might work in a "posix-like" environment, like when using cygwin or WSL.

    The main problem is that there currently aren't active Windows developers within the borgbackup project who continuously test and improve the Windows-specific code parts.

    We recently got some working CI on Windows again, so at least that part is fixed.

lasr_velocirptr a day ago

Can anybody familiar with the tool comment on whether the encryption code in borgbackup is audited? Or is there a tool whose encryption portion has been audited to ensure there are no glaring bugs in the encryption scheme?

IshKebab a day ago

I used to use Borg, but the fact that it can't work with a dumb storage device like an SMB share meant I eventually moved to Rustic, which is even better:

https://github.com/rustic-rs/rustic

  • snorremd a day ago

    I see they have added support for S3 (and other storage providers) via OpenDAL. Might need to revisit rustic for my backup needs then! I once started looking at what it would take to build a GUI using Tauri (Rust backend <-> JS/Web frontend), but didn't have time to figure out the APIs.

    What I really like about Rustic is that it understands .gitignore natively so you can backup your entire workspace without dragging a lot of dependencies, compiled binaries, and other unnecessary data with you into your backups.

immibis a day ago

That's really big and really nice. For anyone unfamiliar, rclone is to online storage what ffmpeg is to multimedia files: a Swiss army knife that adapts anything to anything. It supports everything from S3 and Azure storage, to Google Drive and Dropbox, to sftp mounts; encryption and compression layers too.

0xbadcafebee a day ago

It's kind of ridiculous that there is better tooling for Kubernetes to sync files two ways than there is for the Linux desktop. Rclone is a maze of options, which vary based on version/distro. The configurator is a slow readline console script without enough information. The one decent GUI for Rclone has been abandoned, and despite being able to save "tasks", it had no ability to just... schedule one every 10 minutes. And yet if you go into most distros and look at packaged Internet apps, or things on Flatpak, you will find 1,000 different open source GUIs for an RSS reader, BitTorrent client, or chat client.

I would love it if there were some kind of "Linux Desktop co-op", with a couple of staff. Users pay membership dues, vote on apps/features, and some devs get paid to develop it, in addition to "resume fame" that can translate over to a higher paying gig. But something tells me the Linux Desktop is so small and nerd-focused that we'd just end up funding more RSS readers and chat clients.

  • RockRobotRock a day ago

    I'm not sure what you're on about. This started as a rant on rclone's CLI options, but ended on desktop Linux.

    rclone is mostly the work of one guy. You can donate to him if you'd like. Making a GUI for a complex, rapidly evolving CLI is not an easy thing to do. There's probably a hundred different attempts to make a good interface for ffmpeg, but you can't please everyone.

  • darthrupert a day ago

    Syncthing?

    • nine_k a day ago

      Syncthing is great, but it's peer-to-peer and requires block storage ("filesystem"). It also has no notion of point-in-time snapshots. Syncthing is, like unison, rsync, etc., basically a mirroring tool.

      By contrast, Borg, Restic, and Kopia (anything else?) use object storage, aka binary blobs, like S3 or R2 or OneDrive. They store both full copies and small diffs on top of them, much like video codecs, or like git. You can look at the filesystem you've backed up as it was at a particular moment, and you may have a history of many such moments - say, daily snapshots for a month - stored economically, not as 30 full copies. And it's all encrypted on top. If your source FS supports snapshots (ZFS, XFS on LVM, Btrfs), your backups can be entirely consistent views of your filesystem, or of its relevant subtrees.

      • oever 20 hours ago

        This is how I run backups privately and at work: hourly local snapshots with btrbk, and hourly remote backups of filesystem snapshots with Borg. The local snapshots are great for quick restores or finding out about recent file changes.

        Prometheus alerts check that the latest backup is at most two hours old and that the filesystem is not reporting errors. This setup has been running for more than a year now and gives great peace of mind.

        • nine_k 19 hours ago

          That's an appreciable integrity assurance!

          Mine is simpler: Syncthing with staggered versioning for important data, periodic Restic backups of the home directory (excluding caches), keeping several recent backups and a couple of older backups.

          I've restored from these backups 4 times, both due to crashes and when moving to a new machine, without any adventures in the process.

      • TiredOfLife a day ago

        • nine_k a day ago

          Indeed; I know and use it. But it's per-file, and not very configurable.

          It's not very helpful if you e.g. have a 1 GB file that gets appended 100 kB every day; Syncthing would store a new full-size copy in each version (immediately usable), while Borg / Restic / Kopia would only store the deltas (and would require slow mounting to access a particular version).

          Different tools for different jobs.

  • dddw a day ago

    NextCloud?