Automate backing up multiple Linux servers to a CentOS server using rsync

Posted by paul on 2015.04.28

Created 2014.04.29 Tue

Updated 2015.04.28 Tue

Backing up data from one Linux server to another is fairly simple: tar and gzip the files, then rsync them over. Even automating it is easy, using an ssh key (for passwordless login) and a cron job (for scheduling). The difficulty rises when you need to repeat this procedure on many machines. You could use backup software such as Bacula or NetBackup, but those are often overkill, too complicated, or not feasible due to cost. Last year I posted an automated backup solution using ssh/rsync (all standard tools on Linux). However, it lacked a tool to automate adding multiple backup clients. Below is an updated tutorial with most of the steps now automated. Using this tutorial and the SaltStack solution, you will be able to add dozens of Linux hosts as backup clients with minimal manual work.

A big part of the updated tutorial covers using SaltStack to automate adding new backup clients. I will describe how to use SaltStack (aka Salt) to configure multiple backup clients with minimal work. You can run a few short commands once to remotely configure one or dozens of backup clients to back up to labfiler1.

End Product

Here's what you will see at the end of the tutorial.

  1. labfiler1: Backup server. All backup files will be sent to this CentOS server.
  2. labweb01: Backup client. Used to test initial setup on labfiler1.
  3. labweb02: Backup client. Used to test adding a second backup client manually.
  4. labweb03 - 04: The 2 backup clients will be added using Salt. Due to automation made possible by Salt, you could easily add 50 or 100 backup clients without having to log into each machine.
  5. Number of backups kept: labfiler1 keeps only latest 5 daily sets of backups in this tutorial. Obviously you could increase it if you like.
  6. At the end you should see the following backup sets from labweb01, labweb02, labweb03, etc. on labfiler1.
  7. [root@labfiler1 ~]# du -hs /backupinbox/labweb01/*
    9.0M	/backupinbox/labweb01/2015-04-03
    9.0M	/backupinbox/labweb01/2015-04-04
    9.0M	/backupinbox/labweb01/2015-04-05
    9.0M	/backupinbox/labweb01/2015-04-06
    9.0M	/backupinbox/labweb01/2015-04-07
    [root@labfiler1 ~]# du -hs /backupinbox/labweb02/*
    9.0M	/backupinbox/labweb02/2015-04-03
    9.0M	/backupinbox/labweb02/2015-04-04
    9.0M	/backupinbox/labweb02/2015-04-05
    9.0M	/backupinbox/labweb02/2015-04-06
    9.0M	/backupinbox/labweb02/2015-04-07
    [root@labfiler1 ~]# du -hs /backupinbox/labweb03/*
    95M	/backupinbox/labweb03/2015-04-03
    95M	/backupinbox/labweb03/2015-04-04
    95M	/backupinbox/labweb03/2015-04-05
    95M	/backupinbox/labweb03/2015-04-06
    95M	/backupinbox/labweb03/2015-04-07
    
  8. Each source folder will be tar-gzipped as foldername--yyyy-mm-dd--hh.tar.gz, i.e. folder name, then year-month-day, then the hour in 24-hour format. Because the file name ends in the hour, you can keep a backup from every hour, or just one a day. You can easily back up just some subfolders of a big folder if necessary, keeping the size of backup sets manageable. The naming also allows a quick visual check of backup status (see the short naming sketch after this list). Below is a sample list of source folders and the tar.gz files they will be backed up into.
    1. /etc/ --> etc--2015-04-09--16.tar.gz
    2. /var/named/ --> var-named--2015-04-09--16.tar.gz
  9. File permission: Root on each backup client copies files to backupuser@labfiler1:/backupinbox/, using a dedicated ssh key. Thus you do not need to change any permissions on the backup clients, making administration simpler. Root on each backup client pushes the files out.
  10. Security: Root (using a dedicated ssh key) on the backup clients can ONLY rsync files to labfiler1. It cannot interactively log into labfiler1. So if a backup client is compromised, it cannot cause any damage on labfiler1.
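
To make the naming scheme concrete, here is a small sketch (not one of the tutorial's scripts; backup-rsync.sh later in the tutorial builds the name the same general way) showing how a source path turns into an archive name:

    # turn a path like /var/named/ into var-named and append the date/hour stamp
    NowHour=$(date +"%Y-%m-%d--%H")
    SrcPath="var/named"
    echo "$(echo $SrcPath | tr / -)--$NowHour.tar.gz"
    # prints something like: var-named--2015-04-09--16.tar.gz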

Warning to those new to Linux/command-line

Please be careful when following this tutorial if you are not very familiar with Linux commands. With one missing / or a minor typo, you can easily destroy a Linux computer. If you are new to Linux and the command line, I strongly recommend setting up 2 or 3 CentOS test servers and testing the scripts there before using them on your production servers. If you are hesitant to set up multiple test Linux boxes due to lack of hardware, now is the time to consider setting up a KVM host. Consider checking out the short tutorial I created in 2014; it will allow you to set up multiple Linux instances quickly as long as you have just one physical computer.

Overview of major steps

This tutorial consists of 3 sections. Sections I and II are on this webpage. Section III is in a separate webpage. Use this as a map as you follow along.

I. Prep labfiler1 and set up labweb01: you will set up labfiler1 (backup storage server) and labweb01 (backup client) to test everything works.

  1. labfiler1: Prep labfiler1 to act as a central backup server with enough storage. Add the user account 'backupuser'. The backup server needs rsync and ssh installed, so you cannot use Windows or most NAS boxes as the central backup storage.
  2. labweb01: On this backup client, create ssh key pair and specify the file name as id_rsa_yourhostname_backupclient. Copy id_rsa_yourhostname_backupclient.pub over to labfiler1.
  3. labfiler1: Append id_rsa_yourhostname_backupclient.pub to /home/backupuser/.ssh/authorized_keys.
  4. labfiler1: Create /home/backupuser/.validate-rsync, to be used to lock down backupuser account on labfiler1.
  5. labfiler1: Update /home/backupuser/.ssh/authorized_keys by prepending the following to each line.
  6. command="/home/backupuser/.validate-rsync"
    
  7. labfiler1: Next you will learn about a script, /home/backupuser/bin/add-pub-auth-key.sh, which automates updating /home/backupuser/.ssh/authorized_keys.
  8. labfiler1: Create script, /home/backupuser/bin/backup-purge-old.sh, to purge old backups. Edit /etc/crontab to run the script regularly.
  9. labweb01: Create /root/bin/backup-rsync.sh and test running it manually. Files should start showing up on labfiler1:/backupinbox/labweb01/
  10. labweb01: Schedule /root/bin/backup-rsync.sh to run regularly by editing /etc/crontab.

II. Adding a second backup client manually: Setting up more backup clients (labweb02 as an example) will be far simpler now that labfiler1 is ready.

  1. labweb02: On a backup client, create ssh key pair and specify the file name as id_rsa_yourhostname_backupclient.
  2. labweb02: Copy id_rsa_yourhostname_backupclient.pub over to labfiler1.
  3. labfiler1: As backupuser, execute /home/backupuser/bin/add-pub-auth-key.sh to update /home/backupuser/.ssh/authorized_keys.
  4. labweb02: Create /root/bin/backup-rsync.sh and test running it manually once. Files should start showing up on labfiler1:/backupinbox/labweb02/
  5. labweb02: Schedule /root/bin/backup-rsync.sh to run regularly by adding the following line to /etc/crontab.
  6. 0 1 * * * root /root/bin/backup-rsync.sh
    

At this point, adding a new backup client takes only 5 steps. However, to add 10 new backup clients, you would still be doing 50 things. You could automate a bit by writing a script, copying it to each backup client (typing in a password for each one), running it, and so on. But the better option is to use Salt.

III. Setting up multiple backup clients with Salt: Because this blog post got rather long, I split the part on using Salt off to a separate webpage. Salt has features like Pillar and States, but you can accomplish a lot with just one feature of Salt: remote execution. With Salt, you run one line of command on the Salt Master and the command is executed on multiple remote Linux hosts simultaneously with root privilege. You can easily have dozens or hundreds of remote Linux servers create ssh key pairs or copy files around by issuing the command only once on the Salt Master. No more logging into each remote Linux host. Rolling out a change is easy, and maintenance is even easier with Salt. Again, note that the following steps need to be done only once, whether you are setting up 5 new backup clients or 500.
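
To give a taste of remote execution before you get to that page, here is a minimal sketch (assuming your minion IDs start with labweb; adjust the target pattern to your environment):

    # from the Salt Master: check that all labweb* minions respond
    salt 'labweb*' test.ping
    
    # run an arbitrary command on all of them at once, as root
    salt 'labweb*' cmd.run 'hostname -s'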

  1. Set up Salt Master and Salt Minions. It's very easy and explained in my blog from 2014.
  2. You just need to install a few rpms and edit 1 or 2 config files. Once done with that, return to this tutorial to continue.
  3. Salt Master: with 1 line of command, generate ssh key pairs on multiple remote Linux hosts.
  4. Salt Master: with 1 line of command, gather up id_rsa_yourhostname_backupclient.pub from all remote Linux to Salt Master. Copy gathered .pub files to labfiler1.
  5. labfiler1: As backupuser, execute /home/backupuser/bin/add-pub-auth-key.sh to add the .pub files to authorized_keys. It will also prepend command="/home/backupuser/.validate-rsync" to each new line of authorized_keys. This is for security.
  6. Salt Master: with 1 line of command, push out backup-rsync.sh to all new backup clients.
  7. Salt Master: update /root/.ssh/known_hosts on the backup clients, in order to bypass the requirement to answer 'yes' when a Linux host connects to labfiler1 for the first time.
  8. Salt Master: with 1 line of command, edit /etc/crontab on all new backup clients to schedule running backup-rsync.sh.
  9. After waiting for the cron jobs to run on the backup clients, verify backups are present in labfiler1:/backupinbox/hostname/yyyy-mm-dd/ using the du -hs command. You can always remotely initiate a run of backup-rsync.sh using Salt, which is handy. And lastly, test restoring from the backup.
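
For a flavor of what steps 3 and 6 look like, here are hedged sketches only (the separate Salt webpage walks through the exact commands used in this setup; /srv/salt/backup-rsync.sh is an assumed location on the Salt Master):

    # step 3: each targeted minion generates its own dedicated backup key pair
    salt 'labweb0*' cmd.run 'ssh-keygen -t rsa -C "root.back@`hostname`" -f /root/.ssh/id_rsa_`hostname -s`_backupclient -N ""'
    
    # step 6: push backup-rsync.sh from the Salt Master out to all new backup clients
    salt-cp 'labweb0*' /srv/salt/backup-rsync.sh /root/bin/backup-rsync.sh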

On to setting up labfiler1 and labweb01.

I. Prep labfiler1 and set up labweb01

labfiler1: Prep labfiler1.

On labfiler1, you need to create a user account, a group, and the folders that will accept incoming backup files.

  1. As root, create user and group: backupuser and backuplocalteam. Group backuplocalteam can be used to allow other non-root users on labfiler1 as backup operators.
  2. [root@labfiler1 ~]# useradd backupuser
    [root@labfiler1 ~]# passwd backupuser
    ...
    [root@labfiler1 ~]# groupadd backuplocalteam
    [root@labfiler1 ~]# usermod -a -G backuplocalteam backupuser
    
  3. As root, create 3 folders and set permission.
  4. # create a folder to keep scripts
    [root@labfiler1 ~]# mkdir ~/bin
    
    [root@labfiler1 ~]# mkdir /backupinbox /backup
    [root@labfiler1 ~]# chown -R backupuser:backuplocalteam /backupinbox /backup
    [root@labfiler1 ~]# chmod -R 750 /backupinbox /backup
    
  5. Install rsync and openssh-server, in case they are not installed.
  6. [root@labfiler1 ~]# yum install rsync openssh-server
    
  7. Become backupuser and create the folder .ssh and the file .ssh/authorized_keys. Also create the folders ~/pubfiles and ~/bin.
  8. # assume the privilege of user, backupuser
    [root@labfiler1 ~]# su backupuser
    
    # create folder and file as backupuser
    [backupuser@labfiler1 root]$ cd
    [backupuser@labfiler1 ~]$ pwd
    /home/backupuser
    [backupuser@labfiler1 ~]$ mkdir ~/.ssh
    [backupuser@labfiler1 ~]$ chmod 755 ~/.ssh/
    [backupuser@labfiler1 ~]$ touch ~/.ssh/authorized_keys
    [backupuser@labfiler1 ~]$ chmod 600 ~/.ssh/authorized_keys
    
    [backupuser@labfiler1 ~]$ mkdir ~/pubfiles ~/bin
    
    [backupuser@labfiler1 ~]$
    
  9. Note /backupinbox is for accepting incoming backup files from backup clients, and /backup is for archiving backups long term, such as weekly or monthly. If you feel even more security is needed, you can easily use a cron job to move new backup sets from /backupinbox/ to /backup/ and lock down /backup/ so it is accessible only to root (a sample cron entry is sketched right after this list).
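
If you do go that route, a minimal sketch of such an entry in /etc/crontab on labfiler1 could look like the line below (an illustration only, not part of this tutorial; the /backup/weekly/ folder and the weekly schedule are assumptions):

    # every Sunday at 06:30, root copies the current backup sets into /backup/ for long-term keeping
    30 6 * * 0 root rsync -a /backupinbox/ /backup/weekly/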

labfiler1: Script to purge old backups

At this point, we need to add a script, backup-purge-old.sh, which deletes older backup sets; this is necessary to prevent backups from filling up the storage on labfiler1. The script below should be set to run every day.

  1. Labfiler1: as backupuser create the script /home/backupuser/bin/backup-purge-old.sh with the following content.
  2. #!/bin/bash
    #This script purges old backup sets from /backupinbox/.
    #This script should be set to run every night via crontab.
    
    #####
    # Variables you MUST check/update.
    #####
    # Specify how many backup sets to keep.
    # 5 means the last 5 days. 2 means the last 2 days.
    DaysToKeep=5
    
    # You MUST change this value to match folder path.
    BackupDest=backupinbox   # means /backupinbox/
    
    #####
    # Variables you do not need to update.
    #####
    # This is necessary because of the way 'tail -n +6' works. Nothing to change here.
    DaysToKeepValue=$(($DaysToKeep + 1))
    
    Array1=(/$BackupDest/*)   # No need to change anything here.
    
    
    #####
    # Script
    #####
    # Delete older folders in /$BackupDest/ and keep only the newest $DaysToKeep folders.
    cd /$BackupDest
    
    for i in "${Array1[@]}"
    do
    	i01=`basename $i`
    	/bin/ls -dt /$BackupDest/$i01/* | /usr/bin/tail -n +$DaysToKeepValue | /usr/bin/xargs /bin/rm -rf
    	echo -e "\nOnly the newest $DaysToKeep folders in $i/ were kept. Older folders were deleted."
    done
    
  3. Double-check that the values of the 2 variables are what you want. If you do one backup a day and prefer to keep the latest 10 backup sets, change the value of DaysToKeep to 10. If you do two backups a day and prefer to keep backups from the latest 10 days, set DaysToKeep to 20.
  4. Labfiler1: Make /home/backupuser/bin/backup-purge-old.sh executable by backupuser. This script will be tested after the first backup of labweb01 is completed.
  5. [backupuser@labfiler1 ~]$ chmod -R 750 /home/backupuser/bin
    [backupuser@labfiler1 ~]$ chown -R backupuser:backupuser /home/backupuser/bin
    
  6. Labfiler1: Edit /etc/crontab and add the following 3 lines at the end to schedule backup-purge-old.sh to run daily. The chown and chmod commands run right before backup-purge-old.sh in order to prevent any permission issues for the backupuser account.
  7. 50 5 * * * root chown -R backupuser:backuplocalteam /backupinbox
    53 5 * * * root chmod -R 770 /backupinbox
    57 5 * * * backupuser /home/backupuser/bin/backup-purge-old.sh
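
If you want to preview what backup-purge-old.sh would delete without actually removing anything, you can run its core pipeline by hand (a dry-run sketch; substitute a real client folder for labweb01):

    # list backup sets newest first, then show only the ones older than the newest 5
    /bin/ls -dt /backupinbox/labweb01/* | /usr/bin/tail -n +6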
    

labweb01: create ssh key pair

Now you are ready to add the first backup client, labweb01.

  1. Before creating the ssh key pair, run the hostname command on labweb01 and check that the hostname is how you want labweb01 to be identified on labfiler1. Without a unique hostname for each backup client, this backup solution will not work.
  2. [root@labweb01 ~]# hostname
    labweb01.base.loc
    
  3. If the hostname is not set, edit /etc/sysconfig/network on labweb01 as shown below and remember to reboot to set it permanently. Remember to use your own value for HOSTNAME.
  4. NETWORKING=yes
    HOSTNAME=labweb01.base.loc
    
  5. As root, create a pair of ssh keys which will be used ONLY for rsyncing files from labweb01 to labfiler1:/backupinbox/. You will essentially run the following command (don't run it just yet). Note the backquotes (backticks) surrounding hostname and hostname -s; they are not single quotes.
  6. # Do not run this command yet.
    ssh-keygen -t rsa -C "root.back@`hostname`" -f /root/.ssh/id_rsa_`hostname -s`_backupclient -N ""
    
    1. I recommend using root.back instead of root to indicate this is for backup use only.
    2. Also, you must specify the name of the private key file by using -f /root/.ssh/id_rsa_yourhostname_backupclient (in this case it is id_rsa_labweb01_backupclient). Because this ssh key pair will be dedicated to backup, I specify the file name id_rsa_yourhostname_backupclient rather than accepting the default file name, id_rsa.
    3. Using a passphrase is normally recommended with ssh keys, but not in this instance, because the script needs to run without any user intervention. Thus I use -N "" to specify a blank passphrase. There is no space between the 2 double quotes.
  7. Run the ssh-keygen command and you should see output similar to the one below.
  8. [root@labweb01 ~]# ssh-keygen -t rsa -C "root.back@`hostname`" -f /root/.ssh/id_rsa_`hostname -s`_backupclient -N ""
    Generating public/private rsa key pair.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa_labweb01_backupclient.
    Your public key has been saved in /root/.ssh/id_rsa_labweb01_backupclient.pub.
    The key fingerprint is:
    23:be:a7:b4:dc:a4:be:98:32:fe:09:9c:19:94:de:39 root.back@labweb01.base.loc
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |   .             |
    ...
    ...
    |  o. * *.        |
    | ..+=.O+.        |
    +-----------------+
    [root@labweb01 ~]#
    
  9. Verify you have the following 2 files in /root/.ssh/. You should keep id_rsa_labweb01_backupclient (the private key) secure to prevent unauthorized access.
  10. [root@labweb01 ~]# ls /root/.ssh/i* | sort
    /root/.ssh/id_rsa_labweb01_backupclient
    /root/.ssh/id_rsa_labweb01_backupclient.pub
    
  11. Copy id_rsa_labweb01_backupclient.pub (the file with the .pub extension) from labweb01 to backupuser@labfiler1:~/pubfiles/ with the following command. Remember to copy the file to the pubfiles directory of user backupuser. (A quick fingerprint check you can use to confirm the copy follows after this list.)
  12. scp /root/.ssh/id_rsa_*.pub backupuser@labfiler1.base.loc:/home/backupuser/pubfiles/
    
  13. Here is what you will see.
  14. [root@labweb01 ~]# scp /root/.ssh/id_rsa_*.pub backupuser@labfiler1.base.loc:/home/backupuser/pubfiles/
    
    backupuser@labfiler1.base.loc's password:
    id_rsa_labweb01_backupclient.pub                               100%  401     0.4KB/s   00:00
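
If you want to confirm that the key you copied is the one that ends up on labfiler1, you can compare fingerprints (a quick optional check, not a required step of this tutorial):

    # on labweb01: show the fingerprint of the public key you just copied over
    ssh-keygen -l -f /root/.ssh/id_rsa_labweb01_backupclient.pub
    
    # on labfiler1 (as backupuser): the copy in ~/pubfiles/ should show the same fingerprint
    ssh-keygen -l -f /home/backupuser/pubfiles/id_rsa_labweb01_backupclient.pub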
    

labfiler1: accept labweb01's ssh key

  1. Labfiler1: As backupuser, run the following commands to allow passwordless login from labweb01. When running the cat command, it is very important that you use >>, NOT >. Essentially you are appending the content of id_rsa_labweb01_backupclient.pub to the file /home/backupuser/.ssh/authorized_keys.
  2. [backupuser@labfiler1 ~]$ cd
    [backupuser@labfiler1 ~]$ cat ~/pubfiles/id_rsa_labweb01_backupclient.pub >> ~/.ssh/authorized_keys
    [backupuser@labfiler1 ~]$ more ~/.ssh/authorized_keys
    ssh-rsa AAAAB3NzaC1y.........J2Ywo5RrcIV2XHe3idUQ9ldvZUioxb/wLQ5r6LySdDvLFy+/1jRKkmTCV3itrkfTAvPTPzmx5VBtD5iREYvJ2wC+sel96+tH9Q== root.back@labweb01.base.loc
    [backupuser@labfiler1 ~]$
    

labweb01: test ssh into labfiler1

  1. At this point, root on labweb01 can log into backupuser@labfiler1 using the ssh key. You have to use -i and specify id_rsa_labweb01_backupclient though.
  2. Labweb01: test ssh logging into labfiler1. Here is the command.
  3. ssh -i ~/.ssh/id_rsa_labweb01_backupclient backupuser@labfiler1.base.loc
    
        OR
    
    ssh -i ~/.ssh/id_rsa_`hostname -s`_backupclient backupuser@labfiler1.base.loc
    
  4. Here is what it should look like.
  5. [root@labweb01 ~]# ssh -i ~/.ssh/id_rsa_labweb01_backupclient backupuser@labfiler1.base.loc
    The authenticity of host 'labfiler1.base.loc (192.168.11.151)' can't be established.
    RSA key fingerprint is ec:74:e......:f2:4d:e5:72:a4:63.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'labfiler1.base.loc' (RSA) to the list of known hosts.
    [backupuser@labfiler1 ~]$
    
  6. Log out from labfiler1.
  7. One important tip: use a consistent hostname convention throughout this tutorial. If you use IPs, use IPs throughout. If you use full hostnames, use full hostnames throughout. If you do not use consistent naming, you may run into issues when using Salt later.

labfiler1: update authorized_keys to lock down access

For security reasons, you need to lock down access so that root from labweb01 (when using the backup-only ssh key) can only rsync files to labfiler1; interactive login should be disabled. To accomplish this you need 2 things done on labfiler1 as backupuser: 1) create .validate-rsync (a one-time step) and 2) edit authorized_keys (whenever a new public key is added).

  1. Create .validate-rsync.
  2. Labfiler1: As backupuser create /home/backupuser/.validate-rsync with following content:
  3. #!/bin/sh
    case "$SSH_ORIGINAL_COMMAND" in
    *\&*)
    echo "Rejected"
    ;;
    *\(*)
    echo "Rejected"
    ;;
    *\{*)
    echo "Rejected"
    ;;
    *\;*)
    echo "Rejected"
    ;;
    *\<*)
    echo "Rejected"
    ;;
    *\`*)
    echo "Rejected"
    ;;
    *\|*)
    echo "Rejected"
    ;;
    rsync\ --server*)
    $SSH_ORIGINAL_COMMAND
    ;;
    *)
    echo "Rejected"
    ;;
    esac
    
  4. Make /home/backupuser/.validate-rsync executable.
  5. [backupuser@labfiler1 ~]$ ls -la /home/backupuser/.validate-rsync
    -rw-rw-r-- 1 backupuser backupuser 318 Apr 28 10:47 .validate-rsync
    [backupuser@labfiler1 ~]$ chmod 700 /home/backupuser/.validate-rsync
    [backupuser@labfiler1 ~]$ ls -la /home/backupuser/.validate-rsync
    -rwx------ 1 backupuser backupuser 318 Apr 28 10:47 .validate-rsync
    [backupuser@labfiler1 ~]$
    
  6. Edit authorized_keys.
  7. Labfiler1: As backupuser edit /home/backupuser/.ssh/authorized_keys. Specifically, you want to add the following text to the beginning of the line that ends with root.back@labweb01.base.loc. Since virtually all backup clients will log into the backupuser account on labfiler1 in order to rsync files in, you can simply prepend the text to every line in /home/backupuser/.ssh/authorized_keys.
  8. command="/home/backupuser/.validate-rsync"
    
  9. Before editing authorized_keys
  10. ssh-rsa AAAAB3NzaC1...u3JV1N+29QQ== root.back@labweb01.base.loc
    
  11. After authorized_keys is edited
  12. command="/home/backupuser/.validate-rsync" ssh-rsa AAAAB3NzaC1...u3JV1N+29QQ== root.back@labweb01.base.loc
    
  13. Now if you try to run the same ssh command on labweb01 as root, you will not be able to log in to backupuser@labfiler1, as expected. (A quick test that rsync itself still works follows after this list.)
  14. [root@labweb01 ~]# ssh -i ~/.ssh/id_rsa_labweb01_backupclient backupuser@labfiler1.base.loc
    Rejected
    Connection to labfiler1.base.loc closed.
    [root@labweb01 ~]#
    
  15. It would be pretty tedious to update /home/backupuser/.ssh/authorized_keys by hand every time a new backup client is added, so I have automated this process with a script, explained next.
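
As promised above, here is a quick way to confirm that rsync is still allowed even though interactive login is now rejected (a hedged test, not a required step; it creates a throwaway file you can delete from /backupinbox/ afterwards):

    # on labweb01 as root: this transfer should succeed because .validate-rsync allows 'rsync --server'
    touch /tmp/rsync-lockdown-test
    rsync -ave "ssh -i /root/.ssh/id_rsa_labweb01_backupclient" /tmp/rsync-lockdown-test backupuser@labfiler1.base.loc:/backupinbox/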

labfiler1: Script to update authorized_keys automatically

Create add-pub-auth-key.sh to automate updating authorized_keys, which requires 2 steps: 1) appending the content of each .pub key to /home/backupuser/.ssh/authorized_keys and 2) prepending command="/home/backupuser/.validate-rsync" to each line in /home/backupuser/.ssh/authorized_keys. The script can handle multiple .pub files at once and will not allow duplicates in authorized_keys. You simply keep all .pub files in labfiler1:/home/backupuser/pubfiles/ and execute add-pub-auth-key.sh whenever new .pub keys are added. Running add-pub-auth-key.sh adds only new lines to /home/backupuser/.ssh/authorized_keys and prepends "command=..." to each new line added. You can run it multiple times without causing any harm; it is a fire-and-forget script.

  1. Open a new terminal and log into labfiler1 as backupuser.
  2. Labfiler1: As backupuser create /home/backupuser/bin/add-pub-auth-key.sh with the following content.
  3. #!/bin/bash
    
    #####
    # variables
    #####
    
    # If setting up backup solution using user backupuser, run this script
    # as 'backupuser', NOT as root.
    
    UserHome=$HOME
    SourceFiles="${UserHome}/pubfiles"
    TargetFolder="${UserHome}/.ssh"
    TargetFile="authorized_keys"
    
    
    #####
    # script
    #####
    
    # Check root is not running this script. If root is running, produce alert and exit script.
    if [ "root" == `whoami` ]; then
    echo "##################################"
    echo "You should NOT run this script as root."
    echo "Open a new terminal, log into labfiler1 as user "backupuser", and rerun this script."
    echo "Existing the script."
    echo "##################################"
    else
    echo "##################################"
    echo "Proceeding to update $TargetFolder/$TargetFile. Duplicates will not be added."
    echo "##################################"
    fi
    
    # Create an empty $TargetFolder/$TargetFile if it does not exist.
    cd
    mkdir $TargetFolder 2> /dev/null
    
    if [ ! -f "$TargetFolder/$TargetFile" ]; then
    	touch $TargetFolder/$TargetFile
    fi
    
    # Loop through each backupclient.pub file and add it to $TargetFile,
    # ONLY if it is not already in $TargetFile.
    # As a new key is added, 'command=...' is prepended to the new line.
    for i in $(find $SourceFiles -name "*backupclient.pub")
    do
    	CurrentHost=$(cat $i | awk '{print $(NF-1)" "$NF}')
    	if grep --quiet "$CurrentHost" $TargetFolder/$TargetFile; then
    		echo "Nothing to add. Public key, $i, was already present in $TargetFile."
    		else
    		Insert='command="/home/backupuser/.validate-rsync" '
    		InsertSum=${Insert}`cat $i`
    		echo "$InsertSum" >> $TargetFolder/$TargetFile
    		echo "+ Appended public key, $i, to $TargetFile."
    	fi
    done
    
    chmod 755 $TargetFolder
    
    chmod 600 $TargetFolder/$TargetFile
    
    exit 0
    
  4. Remember to make /home/backupuser/bin/add-pub-auth-key.sh executable by backupuser.
  5. Labfiler1: As backupuser, test running the script add-pub-auth-key.sh and you should see the following. Nothing changes in this instance because the only .pub file (labweb01's) is already in authorized_keys.
  6. [backupuser@labfiler1 ~]$ add-pub-auth-key.sh
    ##################################
    Proceeding to update /home/backupuser/.ssh/authorized_keys. Duplicates will not be added.
    ##################################
    Nothing to add. Public key, /home/backupuser/pubfiles/id_rsa_labweb01_backupclient.pub, was already present in authorized_keys.
    [backupuser@labfiler1 ~]$
    

labweb01: backup-rsync.sh script

Now you are ready to create the actual script that will push files from labweb01 to labfiler1.

  1. Labweb01: As root create directory /root/bin/.
  2. Labweb01: As root create the following script, /root/bin/backup-rsync.sh.
  3. #!/bin/bash
    
    ###
    # variables that  M U S T  be VERIFIED
    ###
    DestHost=labfiler1.base.loc
    DestDir="backupinbox"
    
    ###
    # Backup source. For folders not at root level,
    # enter the path (ex.: /var/named/ should be entered as  var/named ).
    ###
    SRC[1]="etc"
    SRC[2]="root"
    #SRC[3]="var/named"   # Translates to /var/named/.
    #SRC[4]="data"
    
    ###
    # variables
    ###
    HourStart=$(date +"%Y-%m-%d--%H")   # For naming backup .tar.gz and for dir holding them
    DayStart=$(date +"%Y-%m-%d")   # For dir to be created on backup storage accepting incoming .tar.gz
    NowHour=$HourStart
    NowDay=$DayStart
    
    Login=backupuser
    
    
    ###
    # script
    ###
    # create a time-stamped dir and rsync it to the backup server. This dir will hold all backups from one particular day.
    /bin/mkdir -p /tmp/`hostname -s`/$NowDay
    /bin/chmod -R 770 /tmp/`hostname -s`/$NowDay
    
    /usr/bin/rsync -ave "ssh -i /root/.ssh/id_rsa_`hostname -s`_backupclient" /tmp/`hostname -s`/$NowDay $Login@$DestHost:/$DestDir/`hostname -s`
    
    # tar up each source folder and rsync it over; empty source folders are skipped
    for i in "${SRC[@]}"
    do
            if [ "$(ls -A /$i)" ]; then
                    x=`echo $i | tr / -`
                    /bin/tar -czf /tmp/$x--$NowHour.tar.gz /$i 2>/dev/null
                    /usr/bin/rsync -ave "ssh -i /root/.ssh/id_rsa_`hostname -s`_backupclient" --delete /tmp/$x--$NowHour.tar.gz $Login@$DestHost:/$DestDir/`hostname -s`/$NowDay/ 2>/dev/null
                    rm /tmp/$x--$NowHour.tar.gz 2>/dev/null
            fi
    done
    
    # rsync files that do not need datestamp
    # /usr/bin/rsync -ave "ssh -i /root/.ssh/id_rsa_`hostname -s`_backupclient" --delete /data/www/yumrepo $Login@$DestHost:/$DestDir/`hostname -s`/
    
    # clean up
    rm -rf /tmp/`hostname -s`
    
    exit 0
    
  4. Make sure the values of the first 4 variables are what you want to use: DestHost, DestDir, SRC[1], SRC[2].
  5. You can add SRC[3], SRC[4], and so on to include more folders.
  6. Labweb01: As root make backup-rsync.sh executable and execute it. You should see the following.
  7. [root@labweb01 ~]# chmod 750 /root/bin/backup-rsync.sh
    [root@labweb01 ~]# /root/bin/backup-rsync.sh
    sending incremental file list
    2015-04-21/
    
    sent 46 bytes  received 16 bytes  124.00 bytes/sec
    total size is 0  speedup is 0.00
    sending incremental file list
    created directory /backupinbox/labweb01/2015-04-21
    etc--2015-04-21--14.tar.gz
    
    sent 9184299 bytes  received 31 bytes  6122886.67 bytes/sec
    total size is 9183080  speedup is 1.00
    sending incremental file list
    root--2015-04-21--14.tar.gz
    
    sent 9487 bytes  received 31 bytes  19036.00 bytes/sec
    total size is 9388  speedup is 0.99
    [root@labweb01 ~]#
    
  8. Labweb01: Edit /etc/crontab to set backup-rsync.sh to run daily. Add the following at the end of /etc/crontab to have the script run at 3:55am every day. You can have 2 or more backup jobs per day as long as they run at different hours of the day (e.g. 1, 6, 18); see the sample entries after this list.
  9. 55 3 * * * root /root/bin/backup-rsync.sh
    
  10. On labfiler1 as backupuser, you should see the backup data being copied over. Note the backquotes (backticks) around the date command.
  11. [backupuser@labfiler1 ~]$ ls /backupinbox/labweb01/`date +%Y-%m-%d` | sort
    etc--2015-04-21--14.tar.gz
    root--2015-04-21--14.tar.gz
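
As mentioned in step 8, you can run more than one backup per day as long as the jobs land on different hours. A sample pair of /etc/crontab entries for two runs a day could look like this (a sketch; pick hours that suit your environment):

    55 3 * * * root /root/bin/backup-rsync.sh
    55 15 * * * root /root/bin/backup-rsync.sh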
    

labfiler1: Test purging old backups

  1. Labfiler1: Note that /backupinbox/labweb01/ currently has only one day's worth of backups from labweb01. To simulate the multiple days of backups needed to test the script /home/backupuser/bin/backup-purge-old.sh, create six more folders.
  2. [backupuser@labfiler1 ~]$ cd /backupinbox/labweb01/
    [backupuser@labfiler1 labweb01]$ cp -r 2015-04-21 2015-04-22
    [backupuser@labfiler1 labweb01]$ cp -r 2015-04-21 2015-04-23
    [backupuser@labfiler1 labweb01]$ cp -r 2015-04-21 2015-04-24
    [backupuser@labfiler1 labweb01]$ cp -r 2015-04-21 2015-04-25
    [backupuser@labfiler1 labweb01]$ cp -r 2015-04-21 2015-04-26
    [backupuser@labfiler1 labweb01]$ cp -r 2015-04-21 2015-04-27
    
    [backupuser@labfiler1 labweb01]$ ls -ltr
    total 24
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:51 2015-04-21
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:53 2015-04-22
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:53 2015-04-23
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:53 2015-04-24
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:53 2015-04-25
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:53 2015-04-26
    drwxrwx--- 2 backupuser backupuser 4096 Apr 14 10:53 2015-04-27
    
  3. Labfiler1: As backupuser execute /home/backupuser/bin/backup-purge-old.sh.
  4. Check /backupinbox/labweb01/ again and you should see only the latest 5 folders. Folders with earlier modify time were deleted when backup-purge-old.sh was executed.
  5. [backupuser@labfiler1 labweb01]$ ls -ltr /backupinbox/labweb01/
    total 20
    drwxrwx--- 2 backupuser backupuser 4096 Apr 21 14:51 2015-04-23
    drwxrwx--- 2 backupuser backupuser 4096 Apr 21 14:51 2015-04-24
    drwxrwx--- 2 backupuser backupuser 4096 Apr 21 14:51 2015-04-25
    drwxrwx--- 2 backupuser backupuser 4096 Apr 21 14:51 2015-04-26
    drwxrwx--- 2 backupuser backupuser 4096 Apr 21 14:51 2015-04-27
    
  6. To verify a backup was successfully executed by labweb01, you can run the following command as backupuser on labfiler1; you should see something like below.
  7. [backupuser@labfiler1 labweb01]$ find /backupinbox/labweb01/ -maxdepth 2 | sort
    /backupinbox/labweb01/
    /backupinbox/labweb01/2015-04-23
    /backupinbox/labweb01/2015-04-23/etc--2015-04-21--14.tar.gz
    /backupinbox/labweb01/2015-04-23/root--2015-04-21--14.tar.gz
    /backupinbox/labweb01/2015-04-24
    ...
    ...
    [backupuser@labfiler1 labweb01]$
    
  8. Labfiler1: Clean out the folders that were created in /backupinbox/labweb01/ for simulation.
  9. [backupuser@labfiler1 labweb01]$ rm -rf /backupinbox/labweb01/*
    
  10. Labweb01: Rerun /root/bin/backup-rsync.sh once so that you have a valid backup until the next scheduled backup runs.
  11. Labweb01: Add the following line to /etc/crontab to schedule backup-rsync.sh to run regularly; in this case, at 1am every day.
  12. 0 1 * * * root /root/bin/backup-rsync.sh
    

Restoring from backup

If you are unfortunate enough to have to restore data from a backup, make sure you extract the .tar.gz file as root. This is required for the restored data to keep its original owners and permissions.

[root@labfiler1 2015-04-29_21]# ls
etc--2015-04-29_21.tar.gz
[root@labfiler1 2015-04-29_21]# tar xzf etc--2015-04-29_21.tar.gz
[root@labfiler1 2015-04-29_21]# ls -lr
total 8592
-rw-r--r--   1 backupuser backupuser 8783695 Apr 29 21:58 etc--2015-04-29_21.tar.gz
drwxr-xr-x 100 root       root         12288 Apr 29 12:10 etc
[root@labfiler1 2015-04-29_21]#
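
If you only need a single file back rather than the whole archive, you can list the archive and extract just that file (a sketch; note that tar stored the paths without the leading /, so /etc/hosts lives in the archive as etc/hosts):

    # see which paths inside the archive match, then pull out only etc/hosts
    tar tzf etc--2015-04-29_21.tar.gz | grep hosts
    tar xzf etc--2015-04-29_21.tar.gz etc/hosts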

II. Adding a second backup client manually

Now you have a complete working backup solution, verified by backing up labweb01. Next, try adding a second backup client manually; there are fewer steps involved.

Adding labweb02, a second backup client

Adding more backup clients is now much easier and takes only a few steps. Let's go through adding a second backup client, labweb02.

  1. labweb02: As root create ssh key pair.
  2. ssh-keygen -t rsa -C "root.back@`hostname`" -f /root/.ssh/id_rsa_`hostname -s`_backupclient -N ""
    
  3. labweb02: Copy id_rsa_labweb02_backupclient.pub over to labfiler1
  4. [root@labweb02 ~]# scp ~/.ssh/id_rsa_labweb02_backupclient.pub backupuser@labfiler1.base.loc:~/pubfiles/
    
        OR
    
    [root@labweb02 ~]# scp ~/.ssh/id_rsa_`hostname -s`_backupclient.pub backupuser@labfiler1.base.loc:~/pubfiles/
    
  5. On labfiler1 as backupuser: Execute /home/backupuser/bin/add-pub-auth-key.sh to update /home/backupuser/.ssh/authorized_keys with the new public key. You should see the following; one .pub file is added. (A quick check that the new key was locked down correctly follows after this list.)
  6. [backupuser@labfiler1 labweb01]$ /home/backupuser/bin/add-pub-auth-key.sh
    ##################################
    Proceeding to update /home/backupuser/.ssh/authorized_keys. Duplicates will not be added.
    ##################################
    + Appended public key, /home/backupuser/pubfiles/id_rsa_labweb02_backupclient.pub, to authorized_keys.
    Nothing to add. Public key, /home/backupuser/pubfiles/id_rsa_labweb01_backupclient.pub, was already present in authorized_keys.
    [backupuser@labfiler1 labweb01]$
    
  7. labweb02: As root, create the directory /root/bin. Next create the script /root/bin/backup-rsync.sh (or copy it from labweb01). The example below shows copying it from labweb01. Make backup-rsync.sh executable.
  8. [root@labweb02 ~]# mkdir /root/bin
    [root@labweb02 ~]# scp root@labweb01.base.loc:/root/bin/backup-rsync.sh /root/bin/
    root@labweb01.base.loc's password:
    backup-rsync.sh                                                      100% 1590     1.6KB/s   00:00
    [root@labweb02 ~]# chmod 750 /root/bin/backup-rsync.sh
    
  9. labweb02: As root run backup-rsync.sh to test it, and you should see output like below.
  10. [root@labweb02 ~]# backup-rsync.sh
    The authenticity of host 'labfiler1.home.loc (192.168.11.151)' can't be established.
    RSA key fingerprint is ec:74:e2:03:91:83:84:52:37:65:f2:4d:e5:72:a4:63.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'labfiler1.home.loc' (RSA) to the list of known hosts.
    sending incremental file list
    2015-04-22/
    
    sent 46 bytes  received 16 bytes  24.80 bytes/sec
    total size is 0  speedup is 0.00
    sending incremental file list
    etc--2015-04-22--12.tar.gz
    
    sent 9155464 bytes  received 18229 bytes  2621055.14 bytes/sec
    total size is 9154249  speedup is 1.00
    sending incremental file list
    root--2015-04-22--12.tar.gz
    
    sent 9734 bytes  received 115 bytes  19698.00 bytes/sec
    total size is 9635  speedup is 0.98
    [root@labweb02 ~]#
    
    
  11. labweb02: As root, edit /etc/crontab and schedule /root/bin/backup-rsync.sh to run every night; in this case, at 1am every day.
  12. 0 1 * * * root /root/bin/backup-rsync.sh
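
As noted in step 5, here is a quick way to confirm that labweb02's new key was added with the command= lockdown prefix (a sketch, run as backupuser on labfiler1; it should print 1 if the prefix is present):

    # count authorized_keys lines for labweb02 that carry the forced command
    grep labweb02 /home/backupuser/.ssh/authorized_keys | grep -c 'command="/home/backupuser/.validate-rsync"'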
    

labfiler1: Check /backupinbox/ on labfiler1

Check the contents of /backupinbox/ on labfiler1 with the find command and you should see backups from both labweb01 and labweb02.

[root@labfiler1 ~]# find /backupinbox/labweb* -maxdepth 2 | sort
/backupinbox/labweb01
/backupinbox/labweb01/2015-04-21
/backupinbox/labweb01/2015-04-21/etc--2015-04-21--15.tar.gz
/backupinbox/labweb01/2015-04-21/root--2015-04-21--15.tar.gz
/backupinbox/labweb02
/backupinbox/labweb02/2015-04-21
/backupinbox/labweb02/2015-04-21/etc--2015-04-21--15.tar.gz
/backupinbox/labweb02/2015-04-21/root--2015-04-21--15.tar.gz

Add more backup clients with Salt

Adding the second backup client was much easier, taking just a handful of steps. To add 10 more backup clients, you could repeat those steps by hand, or automate a bit with your own scripts; in fact you should, since one major reason for using Linux is that it lends itself to automation. The better way, however, is to use Salt, explained here.