Server backup with backup2l

Discussion in 'Installation/Configuration' started by Ovidiu, Oct 21, 2005.

  1. Ovidiu

    Ovidiu Active Member

    hi guys,

    I am using this howto as the basis of my backup: http://wiki.hetzner.de/index.php/Backup2l

    Anyone else doing something similar?
    I have some questions concerning backup2l; maybe someone can answer them.
    First of all, here is how I understand backup2l: it backs up the locations I specify (i.e. /usr /var /etc) into a format I specify (i.e. tar.gz), stores the archives locally, encrypts the files for the transfer, puts them on the remote server via ncftp, and then deletes the encrypted versions of the files.

    Can someone who has read that howto and maybe the manpage (which I did, by the way ;-)) explain how exactly the next differential backup is done? I mean, does it do a complete backup, compare it to the one still stored locally, and transfer just the diff, or does it somehow do only a diff backup, perhaps storing information about the full backup somewhere and using that data for the comparison?
    Do I have to keep one backup on the hard disk all the time? It looks to me like one backup is always kept on disk. Also, several months ago when doing my first tests, I had it running so that a backup from, let's say, two weeks ago was only deleted once a new one had been performed; a deletion only took place when a new backup was ready, so I always had the minimum number of backups I specified in the config. At the moment, when backups reach a certain age specified by me, all old ones are deleted and I am left with only one full backup (the latest one)...

    maybe someone can share his configuration with me or give me some hints.
     
  2. Ovidiu

    Ovidiu Active Member

    solved the problem that backups were deleted when a new cycle was started. I changed the following settings:
    max level to 1
    max per level to 6
    max full to 4
    generations to 4

    this means I will get a full backup, then 6 diff backups, then another full one, then 6 diffs, and so on. The last 4 full backups and 6*4 diff backups will always be available, which roughly equals one month of backups.
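    (For reference, these settings correspond to variables in /etc/backup2l.conf. A minimal sketch, assuming a stock backup2l install; the variable names follow the sample config shipped with backup2l:)

    Code:
    # /etc/backup2l.conf (excerpt)
    MAX_LEVEL=1        # number of levels of differential backups (1..9)
    MAX_PER_LEVEL=6    # maximum number of differential backups per level (1..9)
    MAX_FULL=4         # maximum number of full backups (1..8)
    GENERATIONS=4      # number of generations to keep per level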
    still I am wondering if the backups really have to be lying around on my server even though I have already transferred them to my backup space?

    @falko: how should I change this so my SQL dump will also be selected?
    FILES=`find . -name 'all.*' -newer timestamp ! -type d` — at the moment this selects only the backups, without the SQL dump
     
  3. falko

    falko Super Moderator ISPConfig Developer

    You could simply rename your SQL dump to something like all.sqldump.sql. :)
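    (If you'd rather not rename the dump, extending the find pattern with an -o alternative should also work — an untested sketch, assuming the dump file name ends in .sql:)

    Code:
    FILES=`find . \( -name 'all.*' -o -name '*.sql' \) -newer timestamp ! -type d`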
     
  4. Ovidiu

    Ovidiu Active Member

    wow, man, what a simple solution. I was already intimidated by the line above: I was thinking about concatenating the search pattern 'all.*' with another one and already had nightmares about having to read through a lot of man pages...

    *just kidding* but thx for the easy solution, I'll implement it right now

    still I can't find an answer to this:
     
    Last edited: Oct 22, 2005
  5. Ovidiu

    Ovidiu Active Member

    AND btw, all these
    max level to 1
    max per level to 6
    max full to 4
    generations to 4
    settings only affect local backups; once I put them on the remote storage, they accumulate until it's full :-((

    looks like Reoback is much smarter, although it's not doing real incremental backups, and it seems it's no longer active/alive...

    any other solutions?
     
  6. falko

    falko Super Moderator ISPConfig Developer

  7. killfrog

    killfrog New Member

    NFS

    I was about to suggest that you set up an NFS share on the backup server, but I think the tutorial falko gave you is quite good.
    Anyway, you could set up an NFS share on your backup server that is mounted in a /backup dir, for example, and then set it up on the server you want to back up so that it connects to that folder. (In NFS, the folder looks to the server as if it were a local folder, so the backup program can make the diff comparison for incremental backups just as you configured it.)
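    Something like this, for example (a sketch with hypothetical host names and paths; NFS must be configured on the backup server side, which may not be possible with plain FTP backup space):

    Code:
    # on the backup server, export the directory in /etc/exports:
    #   /backup   yourserver.example.com(rw,sync,no_subtree_check)
    # on the server being backed up, mount it:
    mount -t nfs backupserver.example.com:/backup /backup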
    Ziv
     
  8. Ovidiu

    Ovidiu Active Member

    thx for the input, BUT I only have backup space on an FTP server given to me by my server provider. I have seen a lot of rdiff howtos on the net, but all needed a setup on the backup server, which I guess is not possible in my case (using free backup space from Strato). AND as far as I have understood, to be able to mount NFS, it needs to be set up on the backup server as well. Unfortunately, as far as I am informed (mind you, I might be wrong), Strato only gives away FTP storage space...

    talking about Reoback - I guess I'll have to look up that howto again.
     
  9. siggma

    siggma New Member

    Old thread, new issue:

    Greetings.

    I recently upgraded to a multi-core processor and have been looking for a way to leverage multiple CPUs when writing a backup. I found pigz, a multi-threaded compressor that seems to work fine here: http://www.zlib.net/pigz/

    However, it does not play nice with the tar cfz or xzf command options. It leaves a leading "/" on filenames in the archive, causing restores to fail. To remedy this I've created a driver that pipes tar output through pigz to create the archive. Below is the driver. It works OK for me.

    Code:
    USER_DRIVER_LIST="DRIVER_TAR_GZ_PIGZ"
    
    DRIVER_TAR_GZ_PIGZ () {
    # NOTES: USE ONLY WITH A MULTI-CORE CPU
    # REQUIRES YOU TO DOWNLOAD AND COMPILE PIGZ
    # http://www.zlib.net/pigz/
    # Copy it to your system path somewhere, e.g. /usr/bin/pigz
    # -create uses pigz for threaded, multi-file compression
    # -extract uses standard tar xz so you can restore without pigz
        case $1 in
            -test)
                require_tools tar gzip pigz cat
                echo "ok"
                ;;
            -suffix)
                echo "tgz_pigz"
                ;;
            -create) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
                tar cf - --files-from="$4" | pigz --best > "$3" 2>&1
                ;;
            -toc) # Arguments: $2 = BID, $3 = archive file name
    #         Using tar tz for the toc verifies that the pigz archive is usable
    #         in case you don't have pigz installed when you restore
                cat "$3" | tar tz | sed 's#^#/#'
                ;;
            -extract) # Arguments: $2 = BID, $3 = archive file name, $4 = file list file
    #         The next line should work to decompress using pigz, but it's not well tested
    #         cat "$3" | pigz -d | tar x --same-permissions --same-owner --files-from="$4" 2>&1
                cat "$3" | tar xz --same-permissions --same-owner -T "$4" 2>&1
                ;;
        esac
    }
    Updated driver script available here: http://www.trbailey.net/tech/backup2l.html
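    To actually use the driver, it has to be selected in backup2l.conf. A sketch, assuming a standard backup2l config where CREATE_DRIVER picks the archive driver:

    Code:
    # in /etc/backup2l.conf
    CREATE_DRIVER=DRIVER_TAR_GZ_PIGZ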
     
    Last edited: Jul 19, 2009
  10. killfrog

    killfrog New Member

    You should try "tar cfzP" and then "tar xzfB"; that always solved the leading "/" problems for me
     
  11. siggma

    siggma New Member

    And this relates to backup2l.conf how?

    Neither of those options changes the way pigz works. The issue is pigz, not tar. pigz is stated to be a "drop-in replacement" for gzip. For some reason, if you just overwrite gzip with pigz and run tar with a 'z' option, it does not work the same way as gzip did. I found out the hard way when I tried to restore the contents of a backup made using backup2l. I overwrote gzip with pigz and ran a few backup tests, and it looked OK; the files listed fine. But when I needed to restore a backup file, it failed. Opening the archive revealed that all the root directories had a leading "/", but the file list did not.

    In either case, backup2l has an "internal" driver for tar.gz, so altering the default backup2l tar compression requires writing a separate driver block in backup2l.conf.
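    The symptom is easy to spot by listing the archive members directly. A sketch, with a hypothetical archive name using the driver's tgz_pigz suffix:

    Code:
    tar tzf all.101.tgz_pigz | head -1
    # prints "/etc/..." (leading slash) instead of the "etc/..."
    # that a stock "tar cfz" archive would contain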

    I also added this to my minuscule "tech" page. Newer versions will be posted here: http://www.trbailey.net/tech/backup2l.html
    -Tom
     
  12. killfrog

    killfrog New Member

    This relates to the tar commands I've seen both in your post and within the script. Sorry, I was just trying to help; if it's not relevant, forget it.
     
  13. siggma

    siggma New Member

    The "script" is actually part of a configuration file for the "backup2l" product.

    Backup2l is a set of scripts that uses native Linux system utilities to perform automated differential system backups. It is useful mostly for those of us running a web server.

    The backup2l.conf file allows you to create a pseudo driver (script based) so you can customize the commands used to perform the actual backup. For example, I recently made a pseudo driver that uses 7-Zip (p7zip) to compress a system backup; the core of it might look like the sketch below. However, the format is somewhat proprietary, and since it's not included in a standard Linux install, its usefulness in conjunction with a system backup is dubious.
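    (For illustration, the create/extract lines of such a p7zip driver could be something like this — an untested sketch, relying on 7za's documented -si/-so stdin/stdout options:)

    Code:
    # -create: pipe an uncompressed tar stream into 7za via stdin
    tar cf - --files-from="$4" | 7za a -si "$3" 2>&1
    # -extract: decompress to stdout and hand the tar stream back to tar
    7za x -so "$3" | tar x --same-permissions --same-owner -T "$4" 2>&1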

    The backup2l script uses tar with gz compression by default, and there is no pseudo driver for it. However, the standard gzip package under Linux does not use parallel compression when it's presented with a list of files to compress. pigz is a front-end for the gzip library that spins off several gzip threads, allowing a multi-core CPU to compress files into an archive in parallel, much faster. This thread, as old as it is, is still very relevant, and the pseudo driver listed above allows pigz (pig-zee) to run correctly with backup2l.

    pigz is stated to be a drop-in replacement for the Linux gzip binary, and in most cases you can delete gzip, rename pigz to gzip, and it works fine. But for whatever reason it does not work the same as gzip when it's called from tar using "tar cfz". For some reason it fails to strip the leading "/" from the file paths in the archive. This causes an issue when trying to restore from a backup archive, because the catalog indicates a particular file exists but the default restore command can't find the file in the actual archive. It also prevents restoring the archive anywhere but the system root, which is not always desirable.
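    If you want the parallel compression without relying on the driver above, the general pattern is to keep paths relative when building the archive. A sketch, not backup2l-specific:

    Code:
    # create: -C / plus relative paths keeps the leading "/" out of member names
    tar cf - -C / etc usr/local | pigz -9 > backup.tar.gz
    # restore to an arbitrary target directory
    pigz -dc backup.tar.gz | tar xf - -C /tmp/restore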

    So, your comment about the tar command is a bit out of context in the overall discussion of backup2l, as the title of the thread indicates.
    -Tom
     
  14. killfrog

    killfrog New Member

    thanks for the clarification, and again, my apologies for giving useless and out-of-context suggestions
     
  15. siggma

    siggma New Member

    Debian backup2l & PIGZ update

    Apparently the pigz package has now appeared in both the Sid and Squeeze Debian repositories as a drop-in alternative to gzip. I haven't checked other downstream Debian distributions, but I suspect it will be there as well in upcoming releases if it's not already. Looks like it's available in Karmic now, at least as source: https://launchpad.net/ubuntu/karmic/+source/pigz/2.1.4-1

    Be warned that as of this writing (July 2009), piping tar output through pigz and other external archivers can cause UTF-8 and other i18n- and l10n-encoded file names to be excluded from a backup2l-initiated backup. It's unclear to me at this time whether this issue occurs with "tar cfz" as well and is simply not reported as an error, or whether it's unique to piping. At least one known Debian package contains unprintable file names; see Debian's console-tools as an example, specifically /usr/share/doc/console-tools/examples.
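    A quick way to check whether a tree contains such names before backing it up (a sketch; the glob matches any byte outside the printable ASCII range):

    Code:
    LC_ALL=C find /usr/share/doc/console-tools -name '*[! -~]*'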

    That's it.
     
