NOTE: This script has been superseded by this one: Bash Script to Create new virtual hosts on Nginx each under a different user
Setting up virtual hosts on any sort of web server normally takes at least a few minutes and several commands (unless you’re running a control panel of some variety, and even then it usually takes a good number of clicks and checkboxes to get what you want). All of this becomes quite tedious when you have to set up several hosts a day.
So I put together a simple bash script to quickly provision the hosting for a new static site running on Nginx. The script was originally built for use on the Amazon AMI running on AWS. It uses the sudo command, for which no password is required by the default user on the AWS AMI; however, if you are running as a non-root user with access to the sudo command, you should be prompted for your password when running the script.
What the script does:
- Creates a new vhost entry for Nginx using a basic template
- Creates a new directory for the vhost and sets nginx as its owner
- Adds a simple index.html file to the new directory to show the site is working
- Reloads Nginx so the new vhost is picked up
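The four steps above can be sketched roughly as follows. This is a minimal illustration, not the original script: the conf.d-style config directory, the /var/www web root, the server-block template, and the `provision_vhost` function name are all assumptions. Run it as root (or via sudo), as the post describes.

```shell
#!/usr/bin/env bash
# Sketch of the provisioning steps above. Paths and template are
# assumptions based on a stock conf.d-style Nginx layout.
set -euo pipefail

provision_vhost() {
    local domain="${1:?usage: provision_vhost example.com}"
    local conf_dir="${NGINX_CONF_DIR:-/etc/nginx/conf.d}"  # overridable for testing
    local web_root="${WEB_ROOT:-/var/www}"
    local docroot="$web_root/$domain"

    # 1. Write a vhost entry for the domain from a basic template.
    cat > "$conf_dir/$domain.conf" <<EOF
server {
    listen 80;
    server_name $domain;
    root $docroot;
    index index.html;
}
EOF

    # 2. Create the document root and hand it to the nginx user (if present).
    mkdir -p "$docroot"
    if id nginx >/dev/null 2>&1; then
        chown nginx:nginx "$docroot"
    fi

    # 3. Drop in a placeholder page so the new site can be checked at once.
    printf '<h1>%s is working</h1>\n' "$domain" > "$docroot/index.html"

    # 4. Reload Nginx so the new vhost is picked up (skipped if nginx is absent).
    if command -v nginx >/dev/null 2>&1; then
        nginx -s reload || echo "warning: nginx reload failed" >&2
    fi
}
```

Calling `provision_vhost mysite.example` would then leave a working static vhost serving the placeholder page.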
Expanding on an older post covering simple usage of the mysqldump program, I finally found some time to write a bash script to control this process and allow all MySQL databases to be backed up separately with one command.
This script uses the MySQL client to pull in a list of all the databases on the server, then loops over them and backs each one up into its own compressed file (using the gzip program). Once a backup file has been created for every database, the script bundles them all into a single tar archive, removing the original individual files. This makes it easy to keep track of your backups: one file contains everything you need to restore all DBs (or just one) to a point in time. A script such as mine could then be used to limit the number of backups stored at any one time (to save on disk space).
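The flow just described could look something like the sketch below. It is an illustration under stated assumptions, not the original script: the `backup_all_dbs` function name, the backup directory, the file-naming scheme, and the decision to skip MySQL's internal schemas are all mine, and it assumes the mysql/mysqldump clients can authenticate without prompting (e.g. via ~/.my.cnf).

```shell
#!/usr/bin/env bash
# Sketch: dump every database to its own .sql.gz, then bundle into one tar.
set -euo pipefail

backup_all_dbs() {
    local backup_dir="${1:-/var/backups/mysql}"
    local stamp archive db
    stamp="$(date +%Y-%m-%d_%H%M%S)"
    mkdir -p "$backup_dir"
    cd "$backup_dir"

    # 1. Ask the MySQL client for every database on the server,
    #    skipping the internal schemas (keep them if you need them).
    for db in $(mysql -N -B -e 'SHOW DATABASES' \
                | grep -Ev '^(information_schema|performance_schema|sys)$'); do
        # 2. Dump each database into its own gzip-compressed file.
        mysqldump --single-transaction "$db" | gzip > "$db.$stamp.sql.gz"
    done

    # 3. Bundle the individual dumps into a single tar archive
    #    and remove the originals, leaving one file per backup run.
    archive="all_dbs.$stamp.tar"
    tar -cf "$archive" ./*."$stamp".sql.gz
    rm -f ./*."$stamp".sql.gz
    echo "$backup_dir/$archive"
}
```

To restore a single database from the archive, you would untar it, gunzip the one dump you need, and feed it back through the mysql client.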
I needed a simple automated way to remove any number of old backup tar files from a directory, leaving only the newest 30. As the only things in this directory were compressed backup files, the script didn’t need to worry about file names or file types; it could just delete the oldest files in the directory (all the files were effectively equivalent, just from different points in time).
So I wrote this bash script (remove_old_files.sh) to handle it for me. It takes two arguments: the first is the number of files to keep (in my case 30), and the second is the absolute path of the directory you want to keep in check.
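A minimal version of that logic might look like this. It is a sketch rather than the original remove_old_files.sh, and it shares the original's assumption that the directory holds nothing but backup files (and, here, that their names contain no newlines or spaces, which is true for dated archives).

```shell
#!/usr/bin/env bash
# Sketch: keep the newest $1 files in directory $2, delete the rest.
set -euo pipefail

remove_old_files() {
    local keep="${1:?number of files to keep}"
    local dir="${2:?absolute path of directory to prune}"
    (
        cd "$dir"
        # List files newest-first, skip the first $keep, remove the remainder.
        ls -1t | tail -n +"$((keep + 1))" | while read -r f; do
            rm -- "./$f"
        done
    )
}
```

Run from cron as e.g. `remove_old_files 30 /var/backups/mysql`, this caps disk usage while always preserving the 30 most recent archives.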