Scheduling Individual Backups of all MySQL Databases on Linux

Expanding on an older post covering simple usage of the mysqldump program, I finally found some time to write a bash script to control this process and back up all MySQL databases separately with a single command.

This script uses the MySQL client to pull in a list of all the databases on the server, then loops over them and backs each one up into its own gzip-compressed file. Once a backup file has been created for every database, the script bundles them all into a single tar archive and removes the individual files. This makes it easy to keep track of your backups: one file contains all the data you need to restore every database (or just one) to a point in time. A script such as the one described below could then be used to limit the number of backups stored at any one time, to save disk space.
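The loop described above can be sketched as follows. This is a minimal illustration, not the author's actual script: the function name, credentials, paths, and the exact schemas skipped are all my assumptions.

```shell
#!/bin/bash
# Sketch of the per-database backup routine: list databases, dump each to
# its own gzip file, then bundle the dumps into one tar archive.
# All names and credentials below are illustrative assumptions.

backup_all_dbs() {
    local backup_dir=$1 user=$2 pass=$3
    local stamp db
    stamp=$(date +%Y-%m-%d_%H%M%S)
    mkdir -p "$backup_dir"

    # Ask the MySQL client for a bare list of database names
    # (-N: no column header, -B: batch mode), skipping internal schemas
    for db in $(mysql -u"$user" -p"$pass" -N -B -e 'SHOW DATABASES' \
                | grep -Ev '^(information_schema|performance_schema|sys)$'); do
        # Dump each database into its own gzip-compressed file
        mysqldump -u"$user" -p"$pass" "$db" | gzip \
            > "$backup_dir/${db}_${stamp}.sql.gz"
    done

    # Bundle the individual dumps into a single tar archive,
    # then remove the original per-database files
    (
        cd "$backup_dir" || exit 1
        tar -cf "all_databases_${stamp}.tar" ./*_"${stamp}".sql.gz
        rm -f ./*_"${stamp}".sql.gz
    )
}

# Example invocation (assumed credentials):
# backup_all_dbs /var/backups/mysql root secret
```

Restoring a single database is then just a matter of extracting one member from the tar archive and piping it through `gunzip` into the `mysql` client.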


Bash script to reduce file count to a given number, removing the oldest files first (FIFO)

I needed a simple, automated way to remove old backup tar files from a directory, leaving only the newest 30. As the directory contained nothing but compressed backup files, the script didn't need to worry about file names or file types; it could simply delete the oldest files (all the files were effectively the same, just from different points in time).

So I wrote this bash script (remove_old_files.sh) to handle it for me. It takes two arguments: the first is the number of files to keep (in my case 30), and the second is the absolute path to the directory you want to keep in check.
