If you want a simple, lightweight tool to monitor the network traffic in and out of your server, vnstat might be just what you need. It keeps hourly, daily and monthly records and provides simple estimates of your expected usage. It is also easy to link up to a web-based frontend for fancy charts and reporting.
Installing and configuring vnstat is very simple. First, install it using your standard package manager, for example:
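On a Debian- or Ubuntu-based system that would look something like the following (package names and managers are assumptions; adjust for your distro):

```shell
# Debian/Ubuntu
sudo apt-get install vnstat

# RHEL/CentOS (vnstat is typically in the EPEL repository)
sudo yum install vnstat
```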
Then tell vnstat to create a database for the network interfaces you want to listen to (e.g. eth0):
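With vnstat 1.x (the versions current when this was written) the database is created with the -u (update) flag; note that vnstat 2.x replaced this with --add:

```shell
# Create the vnstat database for eth0 (vnstat 1.x syntax;
# on vnstat 2.x use: vnstat --add -i eth0)
sudo vnstat -u -i eth0
```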
That’s it. Wait a few minutes, then run vnstat to view a simple console display of the amount of traffic that has traveled through all the interfaces you’re monitoring:
   eth0 since 01/22/12

          rx:  177.59 MiB      tx:  7.78 MiB      total:  185.37 MiB

   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Jan '12    177.59 MiB |    7.78 MiB |  185.37 MiB |    0.59 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated       183 MiB |       7 MiB |     190 MiB |

   daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
     yesterday     12.53 MiB |    1.36 MiB |   13.89 MiB |    1.32 kbit/s
         today      8.28 MiB |     127 KiB |    8.40 MiB |    0.88 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        --     |      --     |      --     |
You can also get vnstat to dump its output in a programming-friendly format (semicolon-delimited):
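Assuming vnstat 1.x, the semicolon-delimited output comes from the --dumpdb and --oneline flags:

```shell
# Dump the whole database as semicolon-delimited records (vnstat 1.x)
vnstat --dumpdb -i eth0

# One-line semicolon-delimited traffic summary, easy to split in scripts
vnstat --oneline
```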
If you do want a nicer-looking interface, or one that doesn’t require shell access, have a look at the vnstat PHP frontend.
If you need a bandwidth monitoring solution that records the utilization of individual protocols, instead of just received and transmitted traffic, then have a look at bandwidthd.
This post should help you get a basic syslog server and client(s) up and running in a virtual environment. It will take you through the implementation of a reasonably secure (using rsyslog’s TLS authentication) yet flexible setup useful for most virtual server architectures. I will assume, if you’re reading this, that you know what syslog is and what it’s used for (if not, have a quick Google then come back).
Why Set Up a Centralized Syslog Server
- For convenience – If, for instance, you have a large number of web servers and need to diagnose a problem on one of them (perhaps without being sure which one), you only have to check in one place. The same applies if you want to compile statistics from all of them, or check that they have all successfully completed a software upgrade.
- For added security – If someone hacks into one of your servers they will probably try to cover their tracks by erasing any log records created by their presence. However, if your logs are also sent to another (hardened) server, then the logs will still be available to sysadmins.
- For log retention – Another very useful reason, which only really applies to virtual servers, is to help retain the log files from a terminated server (e.g. one shut down due to decreased demand on your application).
As promised a few times now, I have finally got round to modifying the code slightly to remove any Symfony dependencies (although there might still be some).
This download has been created using version 0.5.0 of the Symfony plugin. An example PHP page “example.php” has been added under the web directory to show the required JS scripts and a quick example of creating a chart using it. More details can be found here: http://www.symfony-project.org/plugins/sdInteractiveChartPlugin
This script is a modified version of the original vhost-creator script for nginx posted here (which was itself a modified version of the original version of this script).
The script below automatically creates a new user on the system and adds the nginx user to the new user’s group. This allows FTP access to be given to the newly created user, who will only have access to their own site and not to all the vhosts running on your server; as long as the user’s group has permission to access the files, nginx will still be able to serve them.
NOTE: This setup only helps lock down access to the vhost directories on a web server hosting static sites. If CGI of any kind (e.g. PHP) is available, then this will also need to be locked down so each user has access to a CGI process, or set of processes, running as that user.
What the script does:
- Creates a new system user for the site
- Creates a new vhost config file for nginx using a basic template
- Creates a new directory for the site, within the new users home directory
- Adds a simple index.html file to the new directory to show the site is working
- Makes sure the new nginx config syntax is correct before trying to reload nginx
- Reloads nginx to allow the new vhost to be detected
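The steps above could be sketched roughly as follows (a minimal sketch, not the original script: the nginx user name, config path and home-directory layout are assumptions, so adapt them to your distro, and run it as root):

```shell
#!/bin/sh
# Usage: ./create-vhost.sh example.com
SITE="$1"
USER_NAME="$(echo "$SITE" | tr '.' '_')"   # derive a system user name

# 1. Create a system user for the site and add nginx to its group
#    (the nginx worker user may be "www-data" on Debian/Ubuntu)
useradd -m -s /sbin/nologin "$USER_NAME"
usermod -a -G "$USER_NAME" nginx

# 2. Create the web root inside the new user's home directory
WEBROOT="/home/$USER_NAME/public_html"
mkdir -p "$WEBROOT"

# 3. Drop in a placeholder page so the site can be tested
echo "<html><body>$SITE is working</body></html>" > "$WEBROOT/index.html"
chown -R "$USER_NAME:$USER_NAME" "/home/$USER_NAME"
chmod 750 "/home/$USER_NAME"   # group members (including nginx) may read

# 4. Write a vhost config from a basic template (path is an assumption)
cat > "/etc/nginx/conf.d/$SITE.conf" <<EOF
server {
    listen 80;
    server_name $SITE;
    root $WEBROOT;
    index index.html;
}
EOF

# 5. Check the config syntax before reloading nginx
nginx -t && service nginx reload
```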
If when you’re trying to start Fail2ban you just get the following response:
Starting Fail2ban: [FAILED]
And when you check the fail2ban log file (or system log) you find no errors, the problem is probably your Fail2ban init script writing the output of fail2ban-client to /dev/null, effectively discarding it. The easy way to debug this is to call fail2ban-client directly, which will print out any syntax errors found in its config files. Use it like so:
$ fail2ban-client -x start
WARNING 'action' not defined in 'php-url-fopen'. Using default value
WARNING 'action' not defined in 'lighttpd-fastcgi'. Using default value
ERROR Error in action definition #iptables[name=SSH, port=ssh, protocol=tcp]
ERROR Errors in jail 'ssh-iptables'. Skipping...
This should then highlight where the errors are in your config file(s) and allow you to resolve them. You could obviously change the behavior of your init script to stop it discarding the output.
Had this error appear today when trying to send an email using Swift Mailer (on an nginx-powered box using PHP FastCGI). It turned out to be caused by the CGI parameter SERVER_NAME: the site was running on a wildcard domain under nginx, so the SERVER_NAME param contained the wildcard symbol (e.g. *.test.example.com). The SMTP server didn’t like this (and quite rightly so), which is where the “Invalid domain name” bit comes from.
Adding the following line to my nginx server config for that virtual host fixed the problem.
fastcgi_param SERVER_NAME $host;
The above line just sets the SERVER_NAME parameter to the same value as the Host header of the current request. The only problem with this is that if the Host header is not set, $host can still fall back to the original $server_name value (see the nginx manual for more info). However, this fixed the problem for me, and I don’t think it should cause any issues: the site is not available directly via an IP address, so a Host header will always be required.
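For context, the directive sits alongside the other fastcgi_param settings in the vhost; a minimal sketch (server names, socket address and the fastcgi_params include are assumptions) might look like:

```nginx
server {
    listen 80;
    server_name *.test.example.com;

    location ~ \.php$ {
        include fastcgi_params;           # this normally sets SERVER_NAME to $server_name
        fastcgi_param SERVER_NAME $host;  # override it with the request's Host header
        fastcgi_pass 127.0.0.1:9000;      # assumption: PHP FastCGI listening here
    }
}
```

Because the override comes after the include, it wins over the default value.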
When creating a new Linux EC2 instance on AWS you’re able to pass extra data to the server to be used late in the boot sequence. This can be in the form of a simple bash script, or the URL(s) of a number of scripts, or the data could simply be a list of parameters you need that instance to know. Full details on the sorts of things you can pass via the “User Data” parameter can be found on the Ubuntu website, or details for the AWS-built AMI can be found here. (Not all images can be passed User Data, but the standard ones by Amazon can be, along with a range of community ones.)
Now if you pass a script, or the URL to a script, as User Data to your new instance then it will be run for you, and you don’t need to worry about it. However, if you pass it a list of parameters (for example the Endpoint URI for your RDS instance, so the DB settings for your apps can be set up correctly) you need to know how to read the User Data. Along with reading the User Data you may also want to read in Metadata about your new instance, such as its AMI ID or its public hostname. Reading the Metadata is done by querying a simple API:
The base URI of all the queries is:
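Per the AWS documentation, the metadata service answers plain HTTP requests at the link-local address http://169.254.169.254/latest/, reachable only from the instance itself. A few illustrative queries (the metadata keys shown are the standard documented ones):

```shell
# List the available metadata keys
curl http://169.254.169.254/latest/meta-data/

# Read specific metadata values, e.g. the AMI ID and public hostname
curl http://169.254.169.254/latest/meta-data/ami-id
curl http://169.254.169.254/latest/meta-data/public-hostname

# Read the User Data passed at launch
curl http://169.254.169.254/latest/user-data
```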
Recently, when setting up a Symfony project on an nginx-powered server, I found the example configuration on the Nginx wiki for Symfony didn’t fully work as expected.
- Adding a slash to the end of the URL when requesting backend.php, frontend_dev.php or similar caused the page to not be found, even though under Apache this works fine and goes to the correct place. This seemed to be caused by the SCRIPT_NAME server param.
- GET parameters were not being passed to Symfony, caused by the arguments being removed during the try_files request.
So I modified the example config to produce the following, which as far as I can tell behaves exactly as the default .htaccess file does under Apache.
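A sketch of the resulting config (the document root, controller names and fastcgi_pass address are assumptions; the key points are passing $args through try_files and splitting the path info so trailing slashes resolve correctly):

```nginx
server {
    listen 80;
    server_name example.com;            # assumption
    root /var/www/symfony/web;          # assumption: Symfony 1.x web/ directory

    location / {
        # keep the query string when falling back to the front controller
        try_files $uri /index.php?$args;
    }

    # match the front controllers, with or without trailing path info
    location ~ ^/(index|frontend_dev|backend|backend_dev)\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME     $fastcgi_script_name;
        fastcgi_param PATH_INFO       $fastcgi_path_info;
        fastcgi_pass 127.0.0.1:9000;    # assumption: PHP FastCGI listening here
    }
}
```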