Scheduling Individual Backups of all MySQL Databases on Linux

Expanding on an older post covering simple usage of the mysqldump program, I finally found some time to write a bash script to control this process and allow all MySQL databases to be backed up separately with one command.
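
For anyone who hasn't seen that post, a one-off dump of a single database looks something like the following (the user and database names here are just placeholders):

mysqldump -u myuser -p mydatabase > mydatabase.sql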

The script uses the MySQL client to pull in a list of all the databases on the server, then loops round and backs each one up into its own compressed file (using the gzip program). Once a backup file has been created for every database, the script bundles them all into a single tar archive, removing the original individual files. This makes it easy to keep track of all your backups, as one file contains everything you need to restore all of your DBs (or just one) to a point in time. A separate script could then be used to limit the number of backups stored at any one time and save on disk space.
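
As a rough sketch of that pruning idea (not the full script), you could delete archives older than a set number of days with something like the following, assuming the backups live in /home/myuser/db_backups and you want to keep 14 days' worth:

find /home/myuser/db_backups -maxdepth 1 -name 'db_backups_*.tar.gz' -mtime +14 -delete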

The script can either be downloaded from here, or you can simply copy the code below and paste it into a new file on your server, giving it the extension .sh

Make sure the user you plan to run it as has execute permission on the file. To give your current user permission to execute the script, simply use the command below:

chmod +x db_backups.sh

You also need to add the credentials for your DB user to the top of the script; the user needs at least the SELECT, LOCK TABLES and SHOW DATABASES privileges.
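
If you would rather not use an existing account, you could create a dedicated backup user with just those privileges, something along these lines (the user name and password are placeholders to change, and SHOW VIEW is included in case any of your databases contain views):

mysql -u root -p -e "CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'change_me';"
mysql -u root -p -e "GRANT SELECT, SHOW DATABASES, SHOW VIEW, LOCK TABLES ON *.* TO 'backup_user'@'localhost';"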

db_backups.sh:

#!/bin/bash
# Enter the details for the MYSQL user you want to use below:
PASSWD=;
USER=;
# Get the correct paths to use for the binaries we want to use:
MYSQL=`which mysql`
MYSQLDUMP=`which mysqldump`
GZIP=`which gzip`
TAR=`which tar`
LOCATION='/home/myuser/db_backups';
TEMP='/individual_backups';
# Does our backup dir exist?
if [ ! -d $LOCATION ]; then
	mkdir $LOCATION
fi
# Does our temporary backup location exist? - probably won't
if [ ! -d $LOCATION$TEMP ]; then
	mkdir $LOCATION$TEMP
fi
 
# get a list of all the databases
DBS=`$MYSQL -u$USER --password=$PASSWD -e"SHOW DATABASES;" --batch -s`
# Loop through all the databases and create a dump of each one, followed by
# piping to gzip to create a compressed version of the dump.
for db in $DBS; do
	if [ "$db" != "information_schema" ]; then
		$MYSQLDUMP -u$USER --password=$PASSWD $db | $GZIP > $LOCATION$TEMP/${db}_`date +%d%m%y-%H%M`.sql.gz
	else
		$MYSQLDUMP -u$USER --password=$PASSWD --skip-lock-tables $db | $GZIP > $LOCATION$TEMP/${db}_`date +%d%m%y-%H%M`.sql.gz
	fi
done
# Now create tar of the whole backups dir! - removing the individual gz files
cd $LOCATION$TEMP
$TAR -zc --remove-files -f ../db_backups_`date +%d%m%y-%H%M`.tar.gz *
cd $LOCATION
rm -Rf .$TEMP

Then, to run the script from the command line (assuming your working directory is the one the script lives in), just use:

 ./db_backups.sh

If all runs successfully, the script should produce no output.
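
If you want to confirm everything worked, you can check the backup directory and peek inside the archive, for example (replace the file name with whichever archive was just created):

ls -lh /home/myuser/db_backups
tar -tzf /home/myuser/db_backups/db_backups_311215-0200.tar.gz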

However, whilst running the script directly from the command line is useful for testing and creating spot backups, you will probably want to set up a cron job so the script runs automatically every day, every hour, or at whatever interval you find helpful.

You should make sure that the Linux user you use to run this task is locked down and that the script is not accessible to other users on your server, as it contains DB credentials in plain text.
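
For example, if the script lives under /root/bash_scripts (as in the cron examples further down), you could make sure only root can read or run it:

chown root:root /root/bash_scripts/db_backups.sh
chmod 700 /root/bash_scripts/db_backups.sh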

I have repeated the contents of a previous post here on setting up a cron task.

To edit the current user's crontab file, enter:

crontab -e

The first five columns (separated by spaces) specify the time and date at which the command will be run, and the sixth column is the command.

The columns:

  1. minute (0-59),
  2. hour (0-23),
  3. day of the month (1-31),
  4. month of the year (1-12),
  5. day of the week (0-6, 0=Sunday),
  6. the command (exactly the same as entered directly into the terminal)

If you want a command to run every hour or every day, you can use the * character in the relevant column to specify this. The * is effectively a wildcard and represents any possible value.

So to run this script every 2 hours we could use:

0 */2 * * * /root/bash_scripts/db_backups.sh > /root/db_backups_cron.log 2>&1

In the above cron entry I have also told it to write the output from running the script to a log file under root's home directory.

2>&1

By default Linux programs write messages to two different streams, with standard messages going to stream 1 (STDOUT) and errors to stream 2 (STDERR). The above four characters tell the shell to send messages from stream 2 to the same place as messages from stream 1 (in this case my log file).

The log file will only ever hold the output of the last run of the script (this stops the log file growing over time); if you would rather append to the file on each run, just use two chevrons (>>) instead of one.
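
For example, the two-hourly entry above would become:

0 */2 * * * /root/bash_scripts/db_backups.sh >> /root/db_backups_cron.log 2>&1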

You could also set the script to run daily by simply using:

@daily /root/bash_scripts/db_backups.sh > /var/log/db_backups_cron.log 2>&1

Once you’ve set up your crontab just use:

crontab -l

To list all the cron jobs for the current user, and you should see your new cron job.

That’s it… your DBs will be automatically backed up for you. After the script has run its first set of automatic backups you should make sure you’re able to restore your DBs from them (there’s not much point having backups if they don’t work). It would also be wise to make sure these backups are stored in more than one physical place (i.e. not just on your server), using rsync, FTP or similar to transfer them to a remote location.
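
As a rough sketch of both points, restoring a single database from one of the compressed dumps and copying the backup archives to another machine over SSH might look something like this (the file names, remote host and user are just placeholders):

# Pull the individual dump back out of the archive
tar -xzf db_backups_311215-0200.tar.gz mydatabase_311215-0200.sql.gz
# Restore one database from its compressed dump (the database must already exist)
gunzip < mydatabase_311215-0200.sql.gz | mysql -u myuser -p mydatabase
# Copy all the backup archives to a remote machine
rsync -avz /home/myuser/db_backups/ backupuser@backup.example.com:/srv/db_backups/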