Hi, I need to set up a backup system for my users so that they can download their backups. After searching the Internet I found that setting $go_info["server"]["do_automated_backups"] = 1 in /home/admispconfig/ispconfig/lib/config.inc.php did the trick for ISPConfig 2. So I searched for the config file of ISPConfig 3 and found /usr/local/ispconfig/server/lib/config.inc.php, but this file does not contain a PHP array $go_info, so I assume that setting it anyway will not work. Is there a way to use this feature with ISPConfig 3? schildhans
Daily backups can easily be implemented with tar running from a cron job. This would be external to the ISPConfig framework. http://www.howtoforge.com/forums/showthread.php?t=36714
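A minimal sketch of that cron approach, assuming the sites live under /var/www and a /backup directory already exists as the target (both paths are examples, adjust them to your layout):

Code:
# /etc/cron.d/web-backup (hypothetical file) - nightly tar of the web root at 02:30
30 2 * * * root tar -czf /backup/www-$(date +\%Y\%m\%d).tar.gz /var/www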
backup
If you have a real working server, my general advice is to install BackupPC on another (backup) server and back up the /var/clients, /var/www, /var/vmail, /home, /etc, /usr/local/ispconfig folders (did I forget anything?). Or, easier, just back up everything from the root, excluding some useless stuff like /var/lib/mysql, /var/log and /var/cache. To store the database, just create an SQL export into the /var/clients/sql folder with mysqldump before every backup (there is an option in BackupPC to run any script before a backup). BackupPC will then pick up the new files created by mysqldump. In the case of big databases a small optimization is desirable. Here is my script to optimize the disk space used by backups. It exports all tables of all the databases into separate files. It should be run by BackupPC every time.

$ cat /var/clients/sql/export_sql.sh
Code:
#!/bin/sh
# create this dir first!
SQL_DIR=/var/clients/sql/mysql
# create a user with global read permissions in your mysql
DB_USER="backup_reader"
DB_PASS="put its password there"

umask 0077

# here we create the list of databases, and exclude some of them which we don't need to back up
DATABASES=`mysql -u$DB_USER -p$DB_PASS --default-character-set=utf8 --batch --skip-column-names --execute="SHOW DATABASES" | grep -v "test" | grep -v "prosearch" | sort`

# we walk through each database and take the names of its tables
for DBNAME in `echo $DATABASES`
do
    DB_DIR="$SQL_DIR/$DBNAME/"
    mkdir $DB_DIR > /dev/null 2>&1
    # first we delete all the old sql exported in the previous backup
    rm -f $DB_DIR*.sql.bz2
    TABLES=`mysql -u$DB_USER -p$DB_PASS --default-character-set=utf8 --batch --skip-column-names --execute="SHOW TABLES" $DBNAME | sort`
    for TableName in `echo $TABLES`
    do
        # then we dump each table ...
        /usr/bin/mysqldump -u$DB_USER -p$DB_PASS --default-character-set=utf8 --result-file=$DB_DIR/$TableName.sql $DBNAME $TableName
        # ... and bzip2 each sql file
        /usr/bin/bzip2 $DB_DIR/$TableName.sql
    done
done

This is extremely useful when you only need to restore a single table or part of one. With huge tables it simply saves the time of cutting apart long SQL files.
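For reference, the "run any script before backup" option mentioned above is BackupPC's $Conf{DumpPreUserCmd}. A rough sketch of how it could call the export script over ssh from the per-host config file (the host name, ssh-as-root setup and config file location are assumptions about your setup):

Code:
# in the per-host BackupPC config (e.g. pc/yourserver.pl) - run the SQL export before every dump
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /var/clients/sql/export_sql.sh';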
I hear that bzip2 uses a lot more processor power than gzip (tgz). If you only have an 'average' server (like most of us do) then you might want to use gzip compression instead. The backups will take more room, but the load on the webserver's processor will be much lighter.
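If CPU load is a concern, the bzip2 call in the script above could be swapped for gzip (and the cleanup line changed to rm -f $DB_DIR*.sql.gz). A rough, untested variant of that one line; the gzip path may differ on your system:

Code:
# lighter on the CPU than bzip2, slightly larger files
/bin/gzip $DB_DIR/$TableName.sql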
By far the easiest backup solutions are, in this order:

1) Proxmox virtualisation and snapshot backups with a web control panel.
- VPS restoration can be done with one command on the host server:
* vzdump --restore vzdump-132.tgz 132
- You can create as many backup schedules for VPSs as you want (something like nightly backups, with weekend backups in different folders. NOTICE!!! You have to manually create the backup folders first, etc.). Version 1.4 will support storage options like iSCSI. (See the sketch after this list.)

2) Webmin's backup.
- Not really that efficient, but an easy web control panel and no need for the command line at all.
- REALLY EASY MySQL backup!!! Just back up all databases.

3) BackupPC
- A really good product for those who do not fear command-line work.

Personally I have found it good to use all of these options combined: Proxmox for 'Ghost'-like total Virtual Private Server (VPS) images, Webmin for easy MySQL backups inside the VPS, and BackupPC for the rest. (I have excluded the /backup folder from backups on the hosts, otherwise the amount of data would just be too BIG.) All of these give me quite easy backup/restoration options on both the file and server levels.
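A rough sketch of such a nightly vzdump schedule, assuming OpenVZ container 132 and a /backup/nightly dump directory that you have already created by hand; the exact flags vary between vzdump/Proxmox versions, so treat this purely as an illustration:

Code:
# /etc/cron.d/vzdump-nightly (example) - snapshot backup of container 132 at 01:00
0 1 * * * root vzdump --compress --snapshot --dumpdir /backup/nightly 132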
In my opinion (as an administrator), ISPConfig 2's best feature was the easy backups of web/db/mail/log. If restoration of those had also been implemented, it would have been perfect. After all, backup/restoration is the most important aspect of any system. It is also a big part of why virtualisation is so powerful: it gives us easy ways to back up, restore, move and copy data. At the moment this is still taking its baby steps, mostly at the level of server virtualisation only. I hope to see the day when we can move mailboxes and websites from server to server the way virtual servers can now be moved from host to host. Imagine the power of such a system.
Hi, maybe this script will help you: http://www.howtoforge.com/forums/showthread.php?p=214894 It's a script which can be used to back up any Linux system and it's run from cron. You can find more info in the script header.
thx, but... You're missing the point. As an admin, backups and restoration are not really that big of a problem for me (personally). I'm talking about easy backup/restoration that a ReSales person can be taught to do (i.e. a non-nerd).
You need to be careful using mysqldump: it can effectively take a server down for the duration of the dump. Experimenting led me to the following parameters: --skip-lock-tables --quick --lock-tables=false
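For example, with those flags added (the user and database names below are just placeholders):

Code:
# dump one database without locking tables for the whole run
mysqldump -u backup_reader -p --skip-lock-tables --quick --lock-tables=false exampledb > exampledb.sql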