Hi,

Ubuntu Server 14.04.1, Samba file server, rsync.

I'm building a file server. No problems with the file server itself, but I'm trying to figure out which cron level to use. I want a few backup scripts to run via cron. After reading several pages/sites I've learned that "crontab -e" is an editor used at user level to add these routines. I want these backup scripts to run at root level (admin) - isn't that pretty normal when it's a backup of the whole data disk onto a backup disk? I want to back up the whole content of the data disk (RAID1 - md1), all users' data and the admin home directory as well.

Here's one of the suggestions I found:

"The simplest method for adding a cron job is to edit, as root, /etc/crontab. In your case the entry would look something like this:

Code:
0 9,15 * * * root sh /path/to/your/backup_script

That runs the job at 0900 and 1500. Don't put scripts into the /etc/cron.* directories. To run a job daily, create the script somewhere else (I use /usr/local/sbin for all my root scripts), then put a symlink to the file in /etc/cron.daily."

And this suggestion, which says a lot I think:

"cron also reads /etc/crontab, which is in a slightly different format (see crontab(5)). Additionally, cron reads the files in /etc/cron.d: it treats the files in /etc/cron.d the same way as the /etc/crontab file (they follow the special format of that file, i.e. they include the user field). However, they are independent of /etc/crontab: they do not, for example, inherit environment variable settings from it. The intended purpose of this feature is to allow packages that require finer control of their scheduling than the /etc/cron.{daily,weekly,monthly} directories to add a crontab file to /etc/cron.d.

So, if you need environment variables to be loaded, use /etc/crontab; if you don't, then you can go with a file in /etc/cron.d. Another reason for putting a file in /etc/cron.d can be, as pointed out in the man page, finer scheduling control, which you also have in /etc/crontab (but that includes the environment variables)."

It's just that I can't find "/etc/crontab" - maybe that's an older version. I'm not sure which one to choose and exactly how to use it. Do I use "/etc/cron.d" or do I use one of the others, e.g. cron.daily? And is it OK to edit the file directly using vi?

Backup script: I've made a script using rsync to make a backup from "/md1/home/ --> /sdc1/".

Script name and location: "/home/admin/rsync_dailybackup.sh"

Content:

Code:
#!/bin/bash
rsync -av /md1/home/ /sdc1/

Cron routine - make the backup script run every morning at 06.00:

Code:
0 6 * * * root sh /home/admin/rsync_dailybackup.sh

Questions:
1. What about the "--delete" flag? E.g. rsync -av --delete /Directory1/ /Directory2/
2. Do I need "root sh" in the cron command line?

Looking forward to hearing from some of you cron gurus.
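For reference, a minimal sketch of the two formats discussed above (the file name and schedule are only examples). A file dropped in /etc/cron.d uses the system crontab format, i.e. it includes the user field:

Code:
# /etc/cron.d/rsync-backup - system crontab format: minute hour dom month dow user command
0 6 * * * root /home/admin/rsync_dailybackup.sh

With "sudo crontab -e" you edit root's personal crontab instead, and there the user field is left out:

Code:
# root's crontab (edited via "sudo crontab -e") - no user field
0 6 * * * /home/admin/rsync_dailybackup.sh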
You can add --delete to your script, or did I miss something? "root sh" is not needed in the crontab; just make the script executable (chmod +x).
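For example, --delete added to the script from the first post would look roughly like this:

Code:
#!/bin/bash
# --delete removes files from the destination that no longer exist in the source
rsync -av --delete /md1/home/ /sdc1/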
Hi Dan, hope you are well. The best way, I would suggest, is to create the cron entry as the root user, with content along the lines of the entry you already posted. Keep in mind that rsync_dailybackup.sh must have execute permissions; if it doesn't, set them. After making the changes, restart the cron service.
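A rough sketch of those steps, assuming the script path from the first post. Open root's crontab:

Code:
sudo crontab -e

and add a line like this (it is root's own crontab, so no user field is needed):

Code:
0 6 * * * /home/admin/rsync_dailybackup.sh

Then make the script executable and restart cron:

Code:
sudo chmod +x /home/admin/rsync_dailybackup.sh
sudo service cron restart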
Hi Florian,

Thanks! No, you are quite right. The reason I ask is that I'm unfamiliar with rsync and trying to find out how it's usually done, you know, "the common way".

And is there a chance that the "--delete" flag will one day delete a file that was not intended? It's a scary flag, but actually I thought rsync would do this automatically! If a file is removed from the source, then you don't want it anymore, right? But I read somewhere that it works both ways!? And because of this I'm not sure I know the consequences of "--delete". In case it works "both ways" and e.g. a file does not get fully copied/rsync'ed to the destination drive, will rsync then cause problems the next time the script is executed? I haven't thought it through; I'm hoping you guys know the facts of this tool.

Hi Srijan,

Thanks, I'm well, and you too I hope. So even though I want to make a cron routine at admin/root level, it's OK to use the crontab -e editor? I was about to modify the cron.d file manually! I read somewhere that by using crontab -e you are setting the cron routine as the current user. So if I've claimed su rights, as I always do when working on the system, then it should be OK to use crontab -e?

Regarding rsync, I found a pretty nice how-to here: https://help.ubuntu.com/community/rsync

Regarding the script: when making backups to 2 disks, is this the way to do it, or should I make a real script with loops/routines which makes backup 1 first and then backup 2 last? How will rsync handle this little script:

Code:
#!/bin/bash
rsync --delete -avv /home/ /backupdisk1/
rsync --delete -avv /home/ /backupdisk2/
echo "MyServer 06.00 - Daily Backup Successful: $(date)" >> /home/admin/logs/mybackup.log

BTW, I made a stupid mistake! I believed that I had to use the system path to the source, the whole path, like in my first post ("/md1/home/ --> /sdc1/"):

Code:
rsync -av /md1/home/ /sdc1/

I quickly learned that I had to use the mounted paths. As you might remember, we mounted the backup disks as "/media/backupdisk1" and "/media/backupdisk2" back then. So the way I've done it in the script above, using "/home/" and "/backupdisk1/", is OK? It's not usually done another way? Does it work at all when trying to use e.g. "/md1" or e.g. "sdd1/some/destination/"? I learned that the slash after the source was pretty important too.
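One thing worth noting about the two-disk script above: as written, the "Successful" line is logged even if one of the rsync runs fails. A minimal sketch that only logs success when both runs exit cleanly (same paths as above) could look like this:

Code:
#!/bin/bash
# Run both backups and remember whether either rsync returned a non-zero exit code.
status=0
rsync --delete -avv /home/ /backupdisk1/ || status=1
rsync --delete -avv /home/ /backupdisk2/ || status=1

if [ "$status" -eq 0 ]; then
    echo "MyServer 06.00 - Daily Backup Successful: $(date)" >> /home/admin/logs/mybackup.log
else
    echo "MyServer 06.00 - Daily Backup FAILED: $(date)" >> /home/admin/logs/mybackup.log
fi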
To sync everything from /md1/home to /sdc1 (incl. all directories), use:

Code:
rsync OPTIONS /md1/home/ /sdc1

BTW: I prefer --delete-after instead of --delete. And you should also have a look at the permissions, like user and group.
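For example, with archive mode (which preserves permissions, owner and group, among other things) and --delete-after, a run could look roughly like this, using the example paths from the line above:

Code:
# -a preserves permissions, ownership, group and timestamps (keeping owner/group
# on the destination generally requires running as root); --delete-after removes
# extraneous files on the destination only after the transfer has finished.
rsync -av --delete-after /md1/home/ /sdc1/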
Hi Florian,

What's the difference? I didn't know that flag!? Sounds interesting.

I'm not sure I explained it properly. What I meant was that it doesn't work when I'm using the "system path" including the drives. md1 is a RAID1 configuration which is mounted at "/" (root), or whatever it's called. And the 2 backup drives are mounted as "backupdisk1" and "backupdisk2". In fstab they are set up as "/media/backupdisk1" and "/media/backupdisk2". The only way I could make it work was using the mount points, like this:

Code:
rsync -av /home/ /backupdisk1/

Here's my "/etc/fstab":

Code:
UUID=23e37e48-fee8-3e6d-af42-53b23916e813 /                   ext4  errors=remount-ro      0 1
# swap was on /dev/md0 during installation
UUID=f4ccdcfe-7760-4c12-be5a-c8c5aa7df3d7 none                swap  sw                     0 0
UUID=59918EAB7F03D5A5                     /media/backupdisk1  ntfs  defaults,uid=1000,rw   0 0
UUID=c11b3a15-1f40-ff38-b9dc-61b4ac84d7fc /media/backupdisk2  ext4  rw,user,exec           0 0

And another one showing the mount points:

Code:
# df -kh
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1        1.8T  1.5G  1.7T   1% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            1.8G  4.0K  1.8G   1% /dev
tmpfs           365M  968K  364M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            1.8G     0  1.8G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/sdb1       932G   94M  932G   1% /media/backupdisk1
/dev/sdd1       917G   72M  871G   1% /media/backupdisk2

So when I use the disk names in the commands I get errors, like in this example:

Code:
rsync -av /md1/home/ /sdc1/backups/

The mount-point version works just fine! Isn't that the way to do it? Or do you use the disk names, or whatever they're called ("md" being "multi-disk", part of an array, and "sdc1"), when defining a command for rsync? E.g.:

Code:
rsync -av /md1/source/ /sdc1/destination/

Samples from the Ubuntu docs:

Local backup:

Code:
sudo rsync -azvv /home/path/folder1/ /home/path/folder2

Backup over network:

Code:
sudo rsync --dry-run --delete -azvv -e ssh /home/path/folder1/ [email protected]:/home/path/folder2

And a question regarding the shell script and making it executable! I've noticed that there are sometimes issues when using the command "chmod +x ...". Why I'm having issues sometimes I don't know. But there are 3 different ways to make a file executable, according to my understanding. So why, what and how? Is it just "the same difference"? Here's what I know.

Your suggestion:

Code:
chmod +x /home/admin/rsync_dailybackup.sh

From the Ubuntu docs (https://help.ubuntu.com/community/Beginners/BashScripting):

Code:
chmod 700 /home/admin/rsync_dailybackup.sh

And this one with another flag, "a":

Code:
chmod a+x /home/admin/rsync_dailybackup.sh
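On the chmod question, the three commands are not quite the same thing. A quick illustration, using the script path from the thread:

Code:
# +x adds execute permission; with no "who" given it behaves like a+x for most
# umask settings, leaving the existing read/write bits untouched.
chmod +x  /home/admin/rsync_dailybackup.sh

# a+x explicitly adds execute permission for user, group and others.
chmod a+x /home/admin/rsync_dailybackup.sh

# 700 replaces the whole mode: the owner gets read/write/execute,
# group and others get nothing at all.
chmod 700 /home/admin/rsync_dailybackup.sh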
From the manpage:

Code:
--delete         delete extraneous files from dest dirs
--delete-after   receiver deletes after transfer, not during

You can sync between two directories / mount points, i.e. /home to /backup. I never tried it with an unmounted partition or by using the volume name.
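In other words, with the mount points from the fstab above, a run could look roughly like this (just a sketch, the exact options are up to you):

Code:
# source and destination are both referenced by their mount points,
# not by device names like /dev/md1 or /dev/sdb1
rsync -av --delete-after /home/ /media/backupdisk1/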
Hi Florian,

1. Thanks...

2. It was just because you showed a sample using the volume name etc. I guess it was because I used it myself in a sample earlier on!? Anyway, let's not spend any more time on that one. I get it, I think.

3. What's the advantage? That there will be no corrupt files in case of a bad connection? Anyway, it sounds better to me! If you can explain it any better, and you've got the patience, please do.

4. A quote from an earlier post from you: "And you should also have a look at the permissions, like user and group." Not sure what to think about this one!? What is it you were thinking of? Please explain, because I read this in the man page:

Code:
-a, --archive    archive mode; equals -rlptgoD (no -H,-A,-X)

And these flags do this: descend recursively into all directories (-r), copy symlinks as symlinks (-l), preserve file permissions (-p), preserve modification times (-t), preserve groups (-g), preserve file ownership (-o), and preserve devices as devices (-D).

5. Question: If you use the -z flag and compress the backups, is it the data on the backup disk that ends up compressed, or is the data only compressed during the transfer? I think it's the latter, but I'm not sure. I read about it, but as always, a lot of samples and other explanations make it impossible to understand. I think you have to have quite a bit of know-how before these man pages make sense. Here's a sample of what I'm talking about:

Code:
-r, --recursive
       This tells rsync to copy directories recursively. See also --dirs (-d).

       Beginning with rsync 3.0.0, the recursive algorithm used is now an incremental scan that uses much less memory than before and begins the transfer after the scanning of the first few directories have been completed. This incremental scan only affects our recursion algorithm, and does not change a non-recursive transfer. It is also only possible when both ends of the transfer are at least version 3.0.0.

       Some options require rsync to know the full file list, so these options disable the incremental recursion mode. These include: --delete-before, --delete-after, --prune-empty-dirs, and --delay-updates. Because of this, the default delete mode when you specify --delete is now --delete-during when both ends of the connection are at least 3.0.0 (use --del or --delete-during to request this improved deletion mode explicitly). See also the --delete-delay option that is a better choice than using --delete-after.

What about --delete-delay? Is there any reason this should be better? It depends on what is rsync'ed, I guess.

Code:
--delete
       This tells rsync to delete extraneous files from the receiving side (ones that aren't on the sending side), but only for the directories that are being synchronized. You must have asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for the directory's contents (e.g. "dir/*") since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files' parent directory. Files that are excluded from the transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section).

       Prior to rsync 2.6.7, this option would have no effect unless --recursive was enabled. Beginning with 2.6.7, deletions will also occur when --dirs (-d) is enabled, but only for directories whose contents are being copied.

       This option can be dangerous if used incorrectly!
       It is a very good idea to first try a run using the --dry-run option (-n) to see what files are going to be deleted.

       If the sending side detects any I/O errors, then the deletion of any files at the destination will be automatically disabled. This is to prevent temporary filesystem failures (such as NFS errors) on the sending side from causing a massive deletion of files on the destination. You can override this with the --ignore-errors option.

       The --delete option may be combined with one of the --delete-WHEN options without conflict, as well as --delete-excluded. However, if none of the --delete-WHEN options are specified, rsync will choose the --delete-during algorithm when talking to rsync 3.0.0 or newer, and the --delete-before algorithm when talking to an older rsync. See also --delete-delay and --delete-after.

--delete-before
       Request that the file-deletions on the receiving side be done before the transfer starts. See --delete (which is implied) for more details on file-deletion.

       Deleting before the transfer is helpful if the filesystem is tight for space and removing extraneous files would help to make the transfer possible. However, it does introduce a delay before the start of the transfer, and this delay might cause the transfer to timeout (if --timeout was specified). It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see --recursive).

--delete-during, --del
       Request that the file-deletions on the receiving side be done incrementally as the transfer happens. The per-directory delete scan is done right before each directory is checked for updates, so it behaves like a more efficient --delete-before, including doing the deletions prior to any per-directory filter files being updated. This option was first added in rsync version 2.6.4. See --delete (which is implied) for more details on file-deletion.

--delete-delay
       Request that the file-deletions on the receiving side be computed during the transfer (like --delete-during), and then removed after the transfer completes. This is useful when combined with --delay-updates and/or --fuzzy, and is more efficient than using --delete-after (but can behave differently, since --delete-after computes the deletions in a separate pass after all updates are done). If the number of removed files overflows an internal buffer, a temporary file will be created on the receiving side to hold the names (it is removed while open, so you shouldn't see it during the transfer). If the creation of the temporary file fails, rsync will try to fall back to using --delete-after (which it cannot do if --recursive is doing an incremental scan). See --delete (which is implied) for more details on file-deletion.

--delete-after
       Request that the file-deletions on the receiving side be done after the transfer has completed. This is useful if you are sending new per-directory merge files as a part of the transfer and you want their exclusions to take effect for the delete phase of the current transfer. It also forces rsync to use the old, non-incremental recursion algorithm that requires rsync to scan all the files in the transfer into memory at once (see --recursive). See --delete (which is implied) for more details on file-deletion.

6. Regarding the crontab -e command: I tried adding the entry while logged in as su (root), as I always do when working on the system. When adding the line to the file and exiting the editor, I got this message.
Maybe it's just a confirmation that the cron entry has been activated!?

Code:
# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab

7. Scripts need "#!" to begin with:

Code:
#!/bin/sh

or

Code:
#!/bin/bash

Thanks in advance.
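One small addition that follows from the man page excerpt above: before enabling --delete (or --delete-after) for real, it can be worth previewing what would be removed with a dry run. A minimal sketch, using the mount point from earlier in the thread:

Code:
# -n / --dry-run only reports what would be transferred and deleted; nothing is changed
rsync -avn --delete-after /home/ /media/backupdisk1/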