If you want a simple, lightweight tool to monitor the network traffic in and out of your server, vnstat might be just what you need. It keeps hourly, daily and monthly records and provides simple estimates of your expected usage. It is also easy to link up to a web-based frontend for fancy charts and reporting.
Installing and configuring vnstat is very simple. First, install it using your standard package manager, for example:
sudo apt-get install vnstat
Then tell vnstat to create a database for each network interface you want to listen to (e.g. eth0):
vnstat -u -i eth0
That’s it. Wait a few minutes, then run vnstat to view a simple console display of the amount of traffic that has traveled through all the interfaces you’re monitoring:
   eth0 since 01/22/12

          rx:  177.59 MiB      tx:  7.78 MiB      total:  185.37 MiB

   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Jan '12    177.59 MiB |    7.78 MiB |  185.37 MiB |    0.59 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated       183 MiB |       7 MiB |     190 MiB |

   daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
     yesterday     12.53 MiB |    1.36 MiB |   13.89 MiB |    1.32 kbit/s
         today      8.28 MiB |     127 KiB |    8.40 MiB |    0.88 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        --     |      --     |      --     |
You can also get vnstat to dump its output in a programming-friendly, semicolon-delimited format:
vnstat --dumpdb
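Such a semicolon-delimited dump is easy to post-process from a shell. As a sketch (the record below is purely illustrative, not vnstat’s documented field layout), you can split fields with cut:

```shell
# Split a semicolon-delimited record into fields with cut.
# The record and its field order are illustrative assumptions only.
record="eth0;185.37;177.59;7.78"   # assumed layout: iface;total;rx;tx

iface=$(echo "$record" | cut -d';' -f1)
rx=$(echo "$record" | cut -d';' -f3)

echo "$iface received $rx MiB"
# prints: eth0 received 177.59 MiB
```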
If you do want a nicer-looking interface, or one that doesn’t require shell access, have a look at: vnstat PHP frontend
If you need a bandwidth monitoring solution that records the utilization of individual protocols, instead of just received and transmitted traffic, then have a look at bandwidthd.
This post should help you get a basic syslog server and client(s) up and running in a virtual environment. It will take you through the implementation of a reasonably secure (using rsyslog’s TLS authentication) yet flexible setup useful to most virtualised server architectures. I will assume, if you’re reading this, that you know what syslog is and what it’s used for (if not, have a quick Google then come back).
Why Set Up a Centralized Syslog Server?
- For convenience – if, for instance, you have a large number of web servers and you need to diagnose a problem on one of them (maybe you’re not sure which one), you only have to check in one place. The same goes for compiling statistics from all of them, or checking that they have all successfully completed a software upgrade.
- For added security – if someone hacks into one of your servers they will probably try to cover their tracks by erasing any log records created by their presence; however, if your logs are also sent to another (hardened) server then they will still be available to your sysadmins.
- Another very useful reason, which really only applies to virtual servers, is to help retain the log files from a terminated server (e.g. one shut down due to decreased demand on your application).
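To give a feel for the client side of such a setup, the fragment below shows roughly what forwarding everything to a central server over TLS looks like in rsyslog. The hostname, port and certificate paths are placeholders, and it assumes rsyslog is built with the gtls netstream driver:

```
# Load the TLS netstream driver and point it at the CA certificate
# (path is an assumption; put your CA cert wherever suits you)
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog-certs/ca.pem

# Encrypt the channel and verify the server's certificate name
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer logs.example.com

# Forward all messages over TCP (@@) to the central server
*.* @@logs.example.com:6514
```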
If, when you’re trying to start Fail2ban, you just get the following response:
Starting Fail2ban: [FAILED]
And you check the Fail2ban log file (or system log) and find no errors, it is probably caused by your Fail2ban init script writing the output of fail2ban-client to /dev/null, effectively discarding it. The easy way to debug this is to call fail2ban-client directly, which will print out any syntax errors found in its config files. Use it like so:
$ fail2ban-client -x start
WARNING 'action' not defined in 'php-url-fopen'. Using default value
WARNING 'action' not defined in 'lighttpd-fastcgi'. Using default value
ERROR Error in action definition #iptables[name=SSH, port=ssh, protocol=tcp]
ERROR Errors in jail 'ssh-iptables'. Skipping...
This should highlight where the errors are in your config file(s) and allow you to resolve them. You could, of course, change the behavior of your init script to stop it discarding the output.
NOTE: This script has been superseded by this one: Bash Script to Create new virtual hosts on Nginx each under a different user
Setting up virtual hosts on any sort of web server normally takes at least a few minutes and several commands (if you’re not running a control panel of some variety, and even then it normally takes a good number of clicks and ticked boxes to get what you want). All of this can be quite annoying when you have to set up several of these a day.
So I put together a simple bash script to quickly provision the hosting for a new static site running on Nginx. The script was originally built for the Amazon Linux AMI on AWS. It uses the sudo command, as no password is required for the default user on that AMI; if you are running as a non-root user with access to sudo, you should be prompted for your password when running the script.
What does the script do?
- Creates a new vhosts entry for nginx using a basic template
- Creates a new directory for the new vhost and sets nginx as the owner
- Adds a simple index.html file to the new directory to show the site is working
- Reloads Nginx to allow the new vhost to be picked up
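The steps above can be sketched as a short shell function. This is a minimal illustration, not the real script linked above: the paths, the template and the nginx owner are all assumptions you would adjust for your distro, and it expects to be run as root (or under sudo):

```shell
#!/bin/sh
# create_vhost: hypothetical sketch of the provisioning steps above.
# WEB_ROOT and VHOST_DIR default to common locations; override as needed.
create_vhost() {
    domain="$1"
    web_root="${WEB_ROOT:-/var/www}"
    vhost_dir="${VHOST_DIR:-/etc/nginx/conf.d}"

    mkdir -p "$vhost_dir" "$web_root/$domain"

    # 1. Create a vhost entry from a basic template
    cat > "$vhost_dir/$domain.conf" <<EOF
server {
    listen 80;
    server_name $domain;
    root $web_root/$domain;
    index index.html;
}
EOF

    # 2. Hand the document root to nginx (skipped if no nginx user;
    #    the chown needs root, so failures are ignored here)
    if id nginx >/dev/null 2>&1; then
        chown nginx:nginx "$web_root/$domain" 2>/dev/null || true
    fi

    # 3. Drop in a placeholder page so the site visibly works
    echo "<html><body><h1>$domain is working</h1></body></html>" \
        > "$web_root/$domain/index.html"

    # 4. Reload nginx so the new vhost is picked up
    if command -v nginx >/dev/null 2>&1; then
        nginx -s reload 2>/dev/null || true
    fi
}

# e.g.: create_vhost example.com
```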
Expanding on an older post covering a simple usage of the mysqldump program, I finally found some time to write a bash script to control this process and allow all MySQL databases to be backed up separately with one command.
This script uses the MySQL client to pull in a list of all the databases on the server, then loops over them and backs each one up into its own compressed file (using gzip). Once a backup file has been created for every database, the script bundles them all into a single tar archive, removing the original individual files. This makes it easy to keep track of your backups: one file contains all the data you need to restore all databases (or just one) to a point in time. A script such as my one could then be used to limit the number of backups stored at any one time (to save disk space).
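A minimal sketch of that loop might look like the function below. The credentials, date stamp and output paths are placeholders, and the real script linked above handles more edge cases:

```shell
#!/bin/sh
# backup_all_databases: sketch of the per-database mysqldump loop.
# Usage: backup_all_databases <user> <password> <output-dir>
backup_all_databases() {
    user="$1"; pass="$2"; outdir="$3"
    stamp=$(date +%Y-%m-%d)
    mkdir -p "$outdir"

    # Ask the server for every database name, one per line
    # (-N suppresses the column header, -B gives batch output)
    for db in $(mysql -u "$user" -p"$pass" -N -B -e 'SHOW DATABASES'); do
        # Skip MySQL's internal schemas
        case "$db" in
            information_schema|performance_schema) continue ;;
        esac
        # Dump each database into its own gzip-compressed file
        mysqldump -u "$user" -p"$pass" "$db" \
            | gzip > "$outdir/$db-$stamp.sql.gz"
    done

    # Bundle the individual dumps into one tar archive and remove them
    ( cd "$outdir" && tar -cf "all-databases-$stamp.tar" ./*.sql.gz \
        && rm -f ./*.sql.gz )
}
```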
I needed a simple automated way to remove any number of old backup tar files from a directory leaving only the newest 30 files. As the only thing in this directory was compressed backup files, the script didn’t need to worry about file names or file types, it could just delete the oldest files in the directory (as all the files were effectively the same but just for a different period in time).
So I wrote this bash script (remove_old_files.sh) to handle it for me. It takes two arguments: the first is the number of files to keep (in my case 30), and the second is the absolute path of the directory you want to keep in check.
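The core of such a script can be sketched in a few lines. This is an illustration of the idea rather than the script itself, and it assumes plain file names without embedded newlines (true for these backup archives):

```shell
#!/bin/sh
# Sketch of remove_old_files.sh: keep only the newest N files in a directory.
# Assumes simple file names (no embedded newlines), as with dated backups.
remove_old_files() {
    keep="$1"
    dir="$2"

    # ls -t lists newest first; tail selects everything after the first $keep
    ( cd "$dir" && ls -t | tail -n +"$((keep + 1))" | while read -r f; do
          rm -- "$f"
      done )
}

# e.g. keep only the newest 30 backups:
# remove_old_files 30 /var/backups/mysql
```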
After restarting my newly upgraded Ubuntu machine (from 10.04 to 10.10) I was greeted by my computer freezing on the loading screen (whilst booting).
After rebooting and getting access to the console, I discovered that it appeared to be an issue with the Nvidia drivers, which were preventing the X server from starting:
Failed to load /usr/lib/xorg/extra-modules/nvidia_drv.so
[ 23.651] (II) UnloadModule: "nvidia"
[ 23.651] (EE) Failed to load module "nvidia" (loader failed, 7)
[ 23.651] (EE) No drivers available.
So, after trying a number of different approaches, I finally found out how to fix this issue:
Run the nvidia-xconfig program:
sudo nvidia-xconfig
Modify the xorg.conf file, and replace the driver name nvidia with nv.
sudo vi /etc/X11/xorg.conf
Then just try to start the X server again.
Hopefully this will help others fix this quicker than I managed.
There are a number of ways to automatically back up MySQL databases; the simplest is just to use the mysqldump program and schedule a cron job.
The mysqldump program typically produces an SQL file which can be used to restore anywhere between one and all of the databases on a server.
The mysqldump program is run using the following command:
mysqldump [OPTIONS] (Database name or list of databases)
NOTE: You can use --all-databases in place of a single database name or list of names to back up all the databases into one file!
Whilst there are a large number of options you can give the program, the most important are probably --user and --password.
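Putting those pieces together, a nightly backup could be as simple as the commands below. The paths, schedule and use of the root account are illustrative only, and the crontab entry assumes the credentials live in ~/.my.cnf so they stay out of the crontab:

```
# Dump every database to a single file, prompting for the password:
mysqldump --user=root --password --all-databases > /var/backups/all-databases.sql

# Or as a crontab entry, running at 02:30 every night
# (credentials supplied via ~/.my.cnf):
30 2 * * * mysqldump --all-databases > /var/backups/all-databases.sql
```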