Using Salt to install backup client on CentOS

Posted by paul on 2015.04.28

In a previous post you learned how to set up an all-Linux backup solution using only rsync and ssh keys. The basic infrastructure is ready, but adding dozens of backup clients is a challenge. In this tutorial, I will explain how to use SaltStack (aka Salt) to add multiple backup clients very quickly. Simply put, you execute about 5 lines of commands once, and a practically unlimited number of backup clients can be added. You do not have to repeat the same commands over and over to add multiple backup clients. If you are learning about Salt, this tutorial can also serve as a demonstration of how to take advantage of Salt very quickly, even before learning about States and Pillars.

Labfiler1 and first 2 backup clients

At this point, we have labfiler1 configured as a backup/storage server. Labweb01 and labweb02 are configured to back up files to labfiler1 regularly. Now we want to add 2 more backup clients (labweb03 and labweb04). I will use Salt to set up these backup clients quickly.

Set up Salt environment

Setting up SaltStack itself is really simple. If you need help installing SaltStack, check out my blog. The Salt Master does not have to be the same server as labfiler1; in this tutorial, the Salt Master (hostname saltmaster) is a separate server from labfiler1. Salt has features like States and Pillars, but for our purpose here, all you need to know is remote execution (cmd.run in Salt). Install the Salt Minion on labweb03 and labweb04.

Let's test whether saltmaster can control the Salt Minions (labweb03 and labweb04) with some commands.

  1. Test remotely executing commands on the Salt Minions.
  2. [root@saltmaster ~]# salt "labweb*" cmd.run 'date'
    labweb03.base.loc:
        Thu Apr 16 12:02:33 PDT 2015
    labweb04.base.loc:
        Thu Apr 16 12:02:33 PDT 2015
    
    [root@saltmaster ~]# salt "labweb*" cmd.run 'hostname'
    labweb04.base.loc:
        labweb04.base.loc
    labweb03.base.loc:
        labweb03.base.loc
    
    [root@saltmaster ~]# salt "labweb*" cmd.run 'echo `hostname -s`'
    labweb04.base.loc:
        labweb04
    labweb03.base.loc:
        labweb03
    
    
    [root@saltmaster ~]# salt "labweb*" cmd.run 'whoami'
    labweb03.base.loc:
        root
    labweb04.base.loc:
        root
    
  3. You can see right away that Salt will save a ton of typing, time, and energy.

Quick review of how to add backup clients

Let's review what needs to be done to add multiple backup clients.

  1. Create ssh key pair on multiple Linux hosts that you are going to add to the backup solution.
  2. Gather up all .pub files from the new backup clients.
  3. On labfiler1, run the script add-pub-auth-key.sh to 1) append the .pub files to authorized_keys and 2) insert command=... into authorized_keys.
  4. Push out script backup-rsync.sh to all new backup clients.
  5. Edit /etc/crontab on all backup clients to schedule the script to run.

If you had to do the steps on 10 Linux hosts, 1 Linux host at a time, it would be really painful. Scripting can help automate most of the steps, but using Salt makes it even quicker and easier. Using Salt, you can add 10 or 100 backup clients by completing just 1 set of 5-6 steps.
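Before diving in, here is a hedged sketch (not part of the original tutorial) that prints, rather than runs, the Salt commands behind the steps above, so you can review the whole flow in one place. The target glob and all paths are assumptions taken from this tutorial's examples.

```shell
#!/bin/sh
# Print a review "plan" of the Salt commands used in this tutorial.
# Nothing here executes on minions; it only echoes the commands.
TARGET='labweb*'

plan_backup_client_setup() {
    cat <<PLAN
salt "$TARGET" cmd.run 'ssh-keygen -t rsa -f /root/.ssh/id_rsa_\`hostname -s\`_backupclient -N ""'
salt "$TARGET" cp.push_dir /root/.ssh/ glob='*client.pub'
salt-cp "$TARGET" /root/backup-rsync.sh /root/bin/
salt "$TARGET" cmd.run 'chmod 750 -R /root/bin'
salt "$TARGET" cmd.run 'echo "0 1 * * * root /root/bin/backup-rsync.sh" >> /etc/crontab'
PLAN
}

plan_backup_client_setup
```

Reviewing the plan first is handy because cmd.run executes as root on every matched minion; a typo in the glob hits every host it matches.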

Saltmaster: Using Salt to generate ssh key pairs

  1. Here's the Salt command to generate ssh key pairs on the Salt Minions. Note running this one command will generate ssh key pairs on multiple remote Linux hosts. It's one long command, broken into shorter lines with \.
  2. salt "labweb*" cmd.run \
    'ssh-keygen -t rsa -C "backupclient@`hostname`" \
    -f /root/.ssh/id_rsa_`hostname -s`_backupclient -N ""'
    
  3. Here's what you will see when you run the command on saltmaster.
  4. [root@saltmaster ~]# salt "labweb*" cmd.run \
    'ssh-keygen -t rsa -C "backupclient@`hostname`" \
    -f /root/.ssh/id_rsa_`hostname -s`_backupclient -N ""'
    labweb03.base.loc:
        Generating public/private rsa key pair.
        Created directory '/root/.ssh'.
        ...
        +--[ RSA 2048]----+
        ...
        | o+..            |
        +-----------------+
    
    labweb04.base.loc:
        Generating public/private rsa key pair.
        Created directory '/root/.ssh'.
        ...
        +--[ RSA 2048]----+
        ...
        |             .+B=|
        +-----------------+
    [root@saltmaster ~]#
    
  5. FYI, if you need to issue the command on hosts with dissimilar names (ex.: nosql01.base.loc, mysql01.base.loc, oracle01.base.loc), you can use the -L option and list the hosts explicitly, as follows. Hosts are separated by a comma, with no blank space.
  6. [root@saltmaster ~]# salt -L "nosql01.base.loc,mysql01.base.loc,oracle01.base.loc" cmd.run 'date'
    
  7. Using Salt, check that the key pair files were created in /root/.ssh/ on both hosts, labweb03 and labweb04.
  8. [root@saltmaster ~]# salt "labweb*" cmd.run 'ls /root/.ssh/i* | sort'
    labweb03.base.loc:
        /root/.ssh/id_rsa_labweb03_backupclient
        /root/.ssh/id_rsa_labweb03_backupclient.pub
    labweb04.base.loc:
        /root/.ssh/id_rsa_labweb04_backupclient
        /root/.ssh/id_rsa_labweb04_backupclient.pub
    [root@saltmaster ~]#
    

Using Salt to pull .pub files from backup clients

Now that the backup clients have ssh key pairs ready, you need to collect the .pub files from the backup clients and copy them over to labfiler1 somehow. Instead of collecting 1 file from each Linux host at a time, let's use Salt and collect all files with 1 line of command.

You can use a Salt module called cp.push_dir to pull folders/files from the Salt Minions to the Salt Master. Before using the module for the first time, though, you need to configure the Salt Master to accept files pushed from Salt Minions.

  1. Saltmaster: open /etc/salt/master to edit.
  2. Saltmaster: /etc/salt/master before editing.
  3. ...
    #file_recv: False
    ...
    
  4. Salt Master: /etc/salt/master after editing. Removed # and changed value to True.
  5. ...
    file_recv: True
    ...
    
  6. Salt Master: restart salt master service (service salt-master restart)

Now you are ready to collect .pub files from Salt Minions to saltmaster using Salt.

  1. Saltmaster: Run the following command. Essentially, saltmaster commands each Salt Minion to look in /root/.ssh/ and push all files ending with 'client.pub' to saltmaster.
  2. [root@saltmaster ~]# salt "labweb*" cp.push_dir /root/.ssh/ glob='*client.pub'
    labweb03.base.loc:
        True
    labweb04.base.loc:
        True
    
    1. "labweb*" --> target minions
    2. /root/.ssh/ --> source folder on the Salt Minions
    3. glob='*client.pub' --> pattern of the files that will be pushed from each Salt Minion to saltmaster
  3. Saltmaster: use find command to verify the files got copied to saltmaster.
  4. [root@saltmaster ~]# find /var -name "*backupclient*"
    /var/cache/salt/master/minions/labweb03.base.loc/files/root/.ssh/id_rsa_labweb03_backupclient.pub
    /var/cache/salt/master/minions/labweb04.base.loc/files/root/.ssh/id_rsa_labweb04_backupclient.pub
    [root@saltmaster ~]#
    
  5. The .pub files are all in separate folders under /var/cache/salt/master/... Let's collect them into 1 folder to prep to copy to labfiler1.
  6. Saltmaster: Create /root/pubfiles-`date +%Y%m%d`/ (which translates to /root/pubfiles-20150416/ as of this writing), find all *backupclient.pub, and copy them into folder /root/pubfiles-`date +%Y%m%d`/. Note \; at the end of the command.
  7. [root@saltmaster ~]# mkdir /root/pubfiles-`date +%Y%m%d`/
    [root@saltmaster ~]# find /var -name \*backupclient.pub -exec cp {} /root/pubfiles-`date +%Y%m%d`/ \;
    
  8. Saltmaster: check files are in /root/pubfiles-20150416/
  9. [root@saltmaster ~]# ls /root/pubfiles-20150416/ | sort
    id_rsa_labweb03_backupclient.pub
    id_rsa_labweb04_backupclient.pub
    [root@saltmaster ~]#
    
        OR
    
    [root@saltmaster ~]# ls /root/pubfiles-`date +%Y%m%d`/ | sort
    
    
  10. Saltmaster: tar up /root/pubfiles-`date +%Y%m%d`/ and copy the .tar.gz file to labfiler1:/home/backupuser/. You could copy the .pub files directly into labfiler1:/home/backupuser/pubfiles/. I however like to tar them up whenever new backup clients are added. This helps with keeping track of backup clients and with troubleshooting.
  11. [root@saltmaster ~]# tar -czf pubfiles-`date +%Y%m%d`.tar.gz /root/pubfiles-`date +%Y%m%d`
    [root@saltmaster ~]# scp pubfiles-`date +%Y%m%d`.tar.gz backupuser@labfiler1.base.loc:~/
    
  12. labfiler1 (backup storage server): untar pubfiles-`date +%Y%m%d`.tar.gz and move the .pub files into /home/backupuser/pubfiles/
  13. [backupuser@labfiler1 ~]$ ls pubfiles-2*
    pubfiles-20150416.tar.gz
    [backupuser@labfiler1 ~]$ tar -xzf pubfiles-20150416.tar.gz -C /tmp
    [backupuser@labfiler1 ~]$ mv -f /tmp/root/pubfiles-20150416/id_rsa_labweb0* /home/backupuser/pubfiles/
    
  14. labfiler1, as backupuser: Now you have collected the *backupclient.pub files from backup clients on labfiler1. Next step is to update /home/backupuser/.ssh/authorized_keys using script, /home/backupuser/bin/add-pub-auth-key.sh. As explained earlier, it appends the text in the *backupclient.pub files and inserts "command=...".
  15. [backupuser@labfiler1 ~]$ /home/backupuser/bin/add-pub-auth-key.sh
    ##################################
    Proceeding to update /home/backupuser/.ssh/authorized_keys. Duplicates will not be added.
    ##################################
    Nothing to add. Public key, /home/backupuser/pubfiles/id_rsa_labweb01_backupclient.pub, was already present in authorized_keys.
    Nothing to add. Public key, /home/backupuser/pubfiles/id_rsa_labweb02_backupclient.pub, was already present in authorized_keys.
    + Appended public key, /home/backupuser/pubfiles/id_rsa_labweb03_backupclient.pub, to authorized_keys.
    + Appended public key, /home/backupuser/pubfiles/id_rsa_labweb04_backupclient.pub, to authorized_keys.
    [backupuser@labfiler1 ~]$
    
  16. Public keys from labweb03 and labweb04 were added to authorized_keys.
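The find/cp collection step above can also be wrapped as a small reusable function. This is a hedged sketch, not part of the tutorial's scripts: the function name is mine, and the source and destination are parameters so nothing is hard-coded to /var or /root.

```shell
#!/bin/sh
# Gather all *backupclient.pub files found under src_root into one dated
# folder (pubfiles-YYYYMMDD, as in the tutorial) and print that folder's path.
collect_pubfiles() {
    src_root=$1                           # e.g. /var/cache/salt/master/minions
    dest="$2/pubfiles-$(date +%Y%m%d)"    # dated collection folder
    mkdir -p "$dest"
    find "$src_root" -name '*backupclient.pub' -exec cp {} "$dest/" \;
    printf '%s\n' "$dest"
}
```

Usage on saltmaster might look like `collect_pubfiles /var/cache/salt/master/minions /root`, which mirrors the mkdir/find pair above in one call.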

Push backup-rsync.sh to backup clients

Now we are ready to push out backup-rsync.sh to the backup clients. All steps are done on saltmaster.

  1. Copy backup-rsync.sh from labweb01 to saltmaster's /root/. Salt will then push the file out to all Salt Minions. It looks something like this.
  2. [root@saltmaster ~]# scp root@labweb01.base.loc:/root/bin/backup-rsync.sh ~/.
    root@labweb01.base.loc's password:
    backup-rsync.sh                        100% 1590     1.6KB/s   00:00
    [root@saltmaster ~]#
    
  3. Verify /root/backup-rsync.sh on saltmaster is what you want to push out to backup clients.
  4. Create directory /root/bin/ on labweb03 and labweb04.
  5. [root@saltmaster ~]# salt 'labweb*' cmd.run 'mkdir /root/bin'
    labweb03.base.loc:
    
    labweb04.base.loc:
    
    [root@saltmaster ~]#
    
  6. Using Salt's salt-cp command, copy saltmaster:/root/backup-rsync.sh out to the Salt Minions. You should see the following.
  7. [root@saltmaster ~]# salt-cp 'labweb0*' /root/backup-rsync.sh /root/bin/
    {'labweb03.base.loc': {'/root/bin/backup-rsync.sh': True},
      'labweb04.base.loc': {'/root/bin/backup-rsync.sh': True}}
    [root@saltmaster ~]#
    
    1. salt-cp --> salt command to copy files from Salt master to Salt minions
    2. 'labweb0*' --> target minions
    3. /root/backup-rsync.sh --> source file, which is on saltmaster
    4. /root/bin/ --> target folder on all salt minions
  8. Check the minions got the files.
  9. [root@saltmaster ~]# salt 'labweb*' cmd.run 'head /root/bin/backup-rsync.sh'
    labweb03.base.loc:
        #!/bin/bash
    
        ###
        # variables that  M U S T  be VERIFIED
        ###
        DestHost=labfiler1.base.loc
        ...
    
    labweb04.base.loc:
        #!/bin/bash
    
        ###
        # variables that  M U S T  be VERIFIED
        ###
        DestHost=labfiler1.base.loc
        ...
    
  10. Set execute permission on the minions.
  11. [root@saltmaster ~]# salt 'labweb*' cmd.run 'chown root:root -R /root/bin'
    [root@saltmaster ~]# salt 'labweb*' cmd.run 'chmod 750 -R /root/bin'
    
  12. You are almost ready to execute /root/bin/backup-rsync.sh on all the Salt Minions, but not quite yet. First, you need to update /root/.ssh/known_hosts on each Salt Minion.

Updating known_hosts on backup clients

When known_hosts is updated manually

  1. When a user on one Linux host logs in via ssh to an account on a different host for the first time, you will be asked this question.
  2. [root@labweb03 ~]# ssh backupuser@labfiler1
    The authenticity of host 'labfiler1 (192.168.11.151)' can't be established.
    RSA key fingerprint is ec:74:e2:03:91:83:84:52:37:65:f2:4d:e5:72:a4:63.
    Are you sure you want to continue connecting (yes/no)?
    
  3. You need to answer yes in order to complete the login. The next time you log in, you won't be asked again, because /root/.ssh/known_hosts on labweb03 was updated with the following line when you answered yes. It is one long line of text.
  4. [root@labweb03 ~]# more .ssh/known_hosts
    labfiler1.base.loc ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAz/lv8BBRaSrUP9xceOW/MK1AQw5E0GdJ9eH4h......ILd/qKofrjRBlwFQ==
    

How to update known_hosts automatically with Salt

When you use Salt to run backup-rsync.sh on remote Linux hosts, you don't get a chance to answer yes. To work around this, you need to pre-populate /root/.ssh/known_hosts on all Salt Minions with labfiler1's host key.

  1. Saltmaster: Since labweb01 successfully connected to labfiler1 earlier, copy labweb01:/root/.ssh/known_hosts to saltmaster:/root/known_hosts_labweb01.
  2. [root@saltmaster ~]# scp root@labweb01.base.loc:/root/.ssh/known_hosts ~/known_hosts_labweb01
    
  3. Saltmaster: edit /root/known_hosts_labweb01 so that it contains only the one line of text that starts with labfiler1..., as shown below, then rename the file to /root/known_hosts_labfiler1 to match the name used in the next steps.
  4. [root@saltmaster ~]# more /root/known_hosts_labfiler1
    labfiler1.base.loc ssh-rsa AAAAB3NzaC1yc......ILd/qKofrjRBlwFQ==
    
  5. Saltmaster: Using Salt, push /root/known_hosts_labfiler1 out to the backup clients. Then use Salt to append its content to known_hosts on each minion.
  6. [root@saltmaster ~]# salt-cp 'labweb0*' /root/known_hosts_labfiler1 /root/
    {'labweb03.base.loc': {'/root/known_hosts_labfiler1': True},
     'labweb04.base.loc': {'/root/known_hosts_labfiler1': True}}
    
  7. Saltmaster: Check the files are on Salt Minions.
  8. [root@saltmaster ~]# salt 'labweb0*' cmd.run 'ls /root/known*'
    labweb03.base.loc:
        /root/known_hosts_labfiler1
    labweb04.base.loc:
        /root/known_hosts_labfiler1
    
  9. Saltmaster: Using Salt, on all Salt Minions, append content of /root/known_hosts_labfiler1 to /root/.ssh/known_hosts.
  10. [root@saltmaster ~]# salt 'labweb0*' cmd.run 'cat /root/known_hosts_labfiler1 >> /root/.ssh/known_hosts'
    labweb03.base.loc:
    
    labweb04.base.loc:
    
    
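One caveat with the plain >> append above: every re-run adds a duplicate line to known_hosts. As a hedged sketch (the function name is mine, not from the tutorial), a guard you could ship to the minions instead only appends the labfiler1 line when it is not already present:

```shell
#!/bin/sh
# Append the contents of a single-line host-key file to known_hosts only if
# that exact line is not already there, so repeated Salt runs stay harmless.
append_known_host() {
    line_file=$1     # e.g. /root/known_hosts_labfiler1
    known_hosts=$2   # e.g. /root/.ssh/known_hosts
    touch "$known_hosts"
    grep -qxF "$(cat "$line_file")" "$known_hosts" \
        || cat "$line_file" >> "$known_hosts"
}
```

You could push a small script containing this with salt-cp and invoke it via cmd.run, the same way backup-rsync.sh is distributed above.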

Test copying files after updating known_hosts

Test copying files from one of the newly added backup clients to labfiler1.

  1. Log into labweb04 as root.
  2. Use following command to test copying /root/install.log from labweb04 to labfiler1.base.loc:/backupinbox/.
  3. rsync -ave "ssh -i .ssh/id_rsa_`hostname -s`_backupclient" /root/install.log backupuser@labfiler1.base.loc:/backupinbox/
    
        OR
    
    rsync -ave "ssh -i .ssh/id_rsa_labweb04_backupclient" /root/install.log backupuser@labfiler1.base.loc:/backupinbox/
    
  4. Note that for the destination hostname we are using the fully qualified hostname. Keep using the same hostname convention throughout the tutorial.
  5. File copy should succeed, without having to answer any prompt.
  6. Now you are ready to test running backup-rsync.sh remotely using Salt.

Test running backup-rsync.sh on backup clients

  1. Saltmaster: Using Salt, execute backup-rsync.sh manually on a remote backup client. If everything is set up correctly, you should see the following. The first line may start with Warning:, which should not reappear the next time backup-rsync.sh is executed.
  2. [root@saltmaster ~]# salt 'labweb03*' cmd.run '/root/bin/backup-rsync.sh'
    labweb03.base.loc:
        Warning: Permanently added the RSA host key for IP address '192.168.1.10' to the list of known hosts.
        sending incremental file list
        2015-04-24/
    
        sent 46 bytes  received 16 bytes  124.00 bytes/sec
        total size is 0  speedup is 0.00
        sending incremental file list
        created directory /backupinbox/labweb03/2015-04-24
        etc--2015-04-24--15.tar.gz
    
        sent 9183992 bytes  received 31 bytes  6122682.00 bytes/sec
        total size is 9182773  speedup is 1.00
        sending incremental file list
        root--2015-04-24--15.tar.gz
    
        sent 9480 bytes  received 31 bytes  6340.67 bytes/sec
        total size is 9381  speedup is 0.99
    
  3. Saltmaster: Next, run backup-rsync.sh on multiple remote backup clients (in this case labweb03 and labweb04). You should see something like below.
  4. [root@saltmaster ~]# salt 'labweb0*' cmd.run '/root/bin/backup-rsync.sh'
    labweb04.base.loc:
        sending incremental file list
        2015-04-20/
    
        sent 46 bytes  received 16 bytes  41.33 bytes/sec
        total size is 0  speedup is 0.00
        sending incremental file list
        etc--2015-04-20--14.tar.gz
        ...
    
        sent 851 bytes  received 115 bytes  1932.00 bytes/sec
        total size is 9595  speedup is 9.93
    labweb03.base.loc:
        sending incremental file list
        2015-04-20/
    
        sent 46 bytes  received 16 bytes  41.33 bytes/sec
        total size is 0  speedup is 0.00
        sending incremental file list
        etc--2015-04-20--14.tar.gz
        ...
    
        sent 851 bytes  received 115 bytes  644.00 bytes/sec
        total size is 9687  speedup is 10.03
    [root@saltmaster ~]#
    
  5. Labfiler1: Check labfiler1:/backupinbox/ and you should see the folders with files.
  6. [backupuser@labfiler1 ~]$ du -hs /backupinbox/*
    8.8M	/backupinbox/labweb01
    8.8M	/backupinbox/labweb02
    27M	/backupinbox/labweb03
    8.8M	/backupinbox/labweb04
    [backupuser@labfiler1 ~]$
    

Edit cron schedule on backup clients

Now that we've verified backup-rsync.sh works, you can schedule the script to run daily as a cron job. To do so, use Salt to edit /etc/crontab on all the backup clients.

  1. Saltmaster: Let's test adding the cron job on one backup client, labweb03, by running the following command on saltmaster.
  2. [root@saltmaster ~]# salt "labweb03*" cmd.run 'echo "0 1 * * * root /root/bin/backup-rsync.sh" >> /etc/crontab'
    labweb03.base.loc:
    
  3. Labweb03: Check /etc/crontab on labweb03 with the cat command and you should see the line that you just added. Labweb03 will now run /root/bin/backup-rsync.sh every night at 1am.
  4. If you want to edit /etc/crontab on the rest of the labweb* servers, run the following. Note that "labweb*" also matches labweb03, which already has the entry, so target only the remaining hosts (for example with the -L option) to avoid appending a duplicate line there.
  5. [root@saltmaster ~]# salt "labweb*" cmd.run 'echo "0 1 * * * root /root/bin/backup-rsync.sh" >> /etc/crontab'
    
  6. All backup clients will run backup-rsync.sh at 1am every day and send backups to labfiler1.
  7. Remember to set the cron job schedule on labweb01 and labweb02 as well if this is a production network.
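The same duplicate-entry caveat from the known_hosts step applies to >> on /etc/crontab. As a hedged sketch (the function is illustrative, not from the tutorial), a guard you could run on each minion via cmd.run adds the cron line only once, no matter how many times it runs:

```shell
#!/bin/sh
# Add the backup cron line to a crontab file only if it is not already there,
# so re-running against 'labweb*' cannot give a host two identical entries.
add_backup_cron() {
    crontab_file=$1   # e.g. /etc/crontab
    line='0 1 * * * root /root/bin/backup-rsync.sh'
    grep -qxF "$line" "$crontab_file" 2>/dev/null \
        || echo "$line" >> "$crontab_file"
}
```

With a guard like this, one `salt "labweb*" ...` invocation is safe even when some minions were already scheduled during testing.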