Using Puppet Trusted Facts

How to improve the security around holding private information in Puppet.

What are Trusted Facts:

By default, when Puppet runs on a node, Facter discovers system information and other custom values and reports them back to the Puppet Master as facts. However, you have no guarantee the node is telling the truth: the facts are self-reported.

This may not be much of an issue with certain facts, e.g. kernelversion or swapfree; if these are reported incorrectly it will probably just cause the Puppet run to fail and not be of any real security concern.

However, if you're using the Roles/Profiles pattern and you store sensitive or private information (such as private keys) in your Puppet code or Hiera files, then a change to the role or tier facts could easily lead to data leakage, with one server receiving the private information of another.

Trusted facts, however, are extracted from the node's certificate, which proves that the CA checked and approved them and prevents them from being overridden.

Why Should I use them:

Suppose you have the following (very simple) setup, with web-accessible servers sharing a Puppet Master. Each server sits in its own subnet and is firewalled off from the other subnets. The two web servers cannot talk to each other directly over internal networks.

If the private key for the x509 certificate used for the HTTPS connection on the so-called "Secure Server" is stored within Hiera and installed by Puppet, then someone with malicious intent who compromised the "Corporate Site" server and gained root access could easily change the role fact to match the "Secure Server" and subsequently gain access to the private key.

Simple server layout topology: each server in its own subnet, with a firewall preventing the two frontend servers from communicating with each other.

If you were using trusted facts, however, this would not be possible: the role would be baked into the node's certificate, so the attacker would need the Puppet Master to sign a new certificate before it gave up any private information.

Now you may argue that this requires root access (or at least access to the Puppet user account), and that if root access has been gained it's already game over. Not entirely: the attacker only has root access to one server, and that server doesn't hold anything confidential (still bad, obviously, but it could be a lot worse), and there is no easy way to pivot off this machine to target others. Yet via Puppet they could easily pull down the private information of any other machine in other networks (sharing the same Puppet Master) without gaining any access to those machines, let alone privileged access, and without needing to find vulnerabilities in Puppet.

How to use Trusted Facts:

On the Puppet Master, if you are using open source Puppet older than 4.0, you will need to enable trusted_node_data in your puppet.conf file. PE has this enabled by default.

[master]
...
trusted_node_data = true
...
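You can quickly confirm the setting is active on the master (assuming a reasonably recent Puppet 3 install; older versions may not support the --section flag):

puppet config print trusted_node_data --section master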

Then, when bringing up a new node for the first time and before launching the first Puppet run, add a new section to the csr_attributes.yaml file setting your facts, for example:

# /etc/puppet/csr_attributes.yaml
extension_requests:
  "1.3.6.1.4.1.34380.1.1.100": "secure-site"
  "1.3.6.1.4.1.34380.1.1.101": "prod"

These facts will then be added into the certificate signed by the Puppet CA (as long as the Puppet CA approves them).
The "1.3.6.1.4.1.34380.1.1.100" part is an OID; you cannot use an arbitrary string here unless it is a registered OID, because as part of the x509 spec it will be mapped to an OID if it isn't one already. Puppet 3.4 – 3.8 registered a few basic ones within the ppRegCertExt OID range: Puppet 3.8 ppRegCertExt OIDs. Puppet 4, however, has introduced a much more comprehensive list of OIDs: Puppet 4+ ppRegCertExt OIDs.

Note: I have picked the OIDs "1.3.6.1.4.1.34380.1.1.100"
and "1.3.6.1.4.1.34380.1.1.101" arbitrarily, simply using the ppRegCertExt OID range and bumping the last number up to well beyond what Puppet is currently using.
So the example above for Puppet 4 could be simplified to:

extension_requests:
  pp_role: "secure-site"
  pp_environment: "prod"
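Once the CSR has been signed you can confirm the extensions made it into the certificate by inspecting it on the agent, for example (paths here assume open source Puppet 3's default ssldir of /var/lib/puppet/ssl; Puppet 4 and PE use /etc/puppetlabs/puppet/ssl):

openssl x509 -noout -text -in /var/lib/puppet/ssl/certs/$(puppet config print certname).pem | grep -A1 "1.3.6.1.4.1.34380"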

Within your Puppet code the trusted facts are available through the $trusted hash, but to make them friendlier, and usable by your Hiera hierarchy, you can set global variables equal to your trusted ones.

If you add the following to your initial point of entry .pp file (e.g. default.pp or entry.pp):

$role = $trusted['extensions']['1.3.6.1.4.1.34380.1.1.100']
$tier = $trusted['extensions']['1.3.6.1.4.1.34380.1.1.101']

You can then use the $role and $tier variables in your hiera hierarchy just as you would with normal facts.

:hierarchy:
  - "%{::environment}/hiera/role_%{::role}/tier_%{::tier}"
  - "%{::environment}/hiera/role_%{::role}"
  - "%{::environment}/hiera/osfamily/%{::os_family}"
  - "%{::environment}/hiera/virtual/%{::virtual}"
  - "%{::environment}/hiera/common"

Approving Puppet CSR extensions

Unfortunately Puppet's built-in cert list command does not have the ability to show CSR extension_requests, so you'll need to check these manually, which can easily be done using openssl:

openssl req -noout -text -in <your-csr-file>.pem
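For example, on the master you could list the pending requests, inspect one, and then sign it once the extensions look correct (paths assume open source Puppet 3's default CA directory of /var/lib/puppet/ssl/ca; pug-web-https is one of the demo nodes used below):

puppet cert list
openssl req -noout -text -in /var/lib/puppet/ssl/ca/requests/pug-web-https.pem | grep -A1 "1.3.6.1.4.1.34380"
puppet cert sign pug-web-https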

Seeing it all in action

To help show this in action I have created Docker images: a Puppet Master, a Corporate Site and a Secure Site. There is a Vagrant template to enable launching these quickly. You will need to have Vagrant, Docker and librarian-puppet installed.

Then simply clone the pug-puppet repo onto your machine and pull in the Puppet modules:

git clone https://github.com/sedan07/pug-puppet.git
librarian-puppet install

Then the Vagrant repo:

git clone https://github.com/sedan07/pug-vagrant.git

Copy the config.yaml.dist file to config.yaml and change the puppet_repo_dir line to point to the pug-puppet dir you created above.
Now you can launch the containers:

vagrant up pug-puppet-master
vagrant up pug-web-http
vagrant up pug-web-https

Launch a shell in the containers using the docker exec command:

docker exec -it pug-web-http /bin/bash

From within either of the web servers try launching a puppet run:

puppet agent -t

and see what happens. Then try overriding one of the facts, such as the role, by setting it as an External Fact:

echo "role=secure-site" > /etc/facter/facts.d/role.txt

The pug-puppet repo contains 3 branches:

  • master (Trusted facts enabled and enforced)
  • migration (allows nodes with no trusted data in their cert to still connect, but certs with trusted data must always use those facts)
  • not_trusted (standard no-trusted-facts way of doing things)

The migration branch mentioned above shows a simple way to migrate your servers from not using trusted facts to using them, a few at a time, without breaking the non-migrated ones.

On a side note:

You should use eyaml (or similar) for storing your private information securely at rest in Puppet, as well as making sure that only personnel who actually need day-to-day access to the Puppet/Hiera repo that holds your secrets have access to it.
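For example, with hiera-eyaml you would encrypt the private key before committing it to your Hiera repo. A rough sketch (it assumes you have already generated a keypair with eyaml createkeys, and the file path and key name are placeholders):

eyaml encrypt -f /etc/ssl/private/secure-site.key -l 'profile::secure_site::ssl_private_key'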

Securing a Private Docker Registry

When I researched this a few weeks back, most of the guidance I found suggested using Basic Auth. There is nothing wrong with this method as such; it works, after all. However, if you're running a registry for more than one user you obviously don't want a single shared username/password for access, which means you need an easy way to add new users, plus "bot" users for your servers, and so on.

However, there is actually a much better way: using x.509 certificates (the same method used to verify and authenticate access to the Docker daemon) with your own self-signed CA. Once a user has their client key and certificate set up, the authentication is transparent to them, and it provides stronger security than a simple username/password combo.

The Docker docs go through the basic process for configuring this: https://docs.docker.com/articles/certificates/. However, they don't cover how to create your own CA. There are a few important things to keep in mind first. The security of the whole setup rests on keeping your root CA private key secret; fail to do so and a miscreant with your private key could sign any number of new client certificates, gaining full access to your registry. You will also need a process for signing new users' certificate signing requests (CSRs), as you certainly don't want to be generating their private keys for them and emailing them over.

You may already have a PKI implementation within your organisation that you can reuse here (such as one used for providing access to build systems, internal wikis and so forth). If not, you have a few choices: you can either generate the root CA using OpenSSL (or another encryption toolkit) and manage it all yourself, or use a PKI system such as Cloudflare's CFSSL, which lets you specify all the config and CSRs in JSON files.

If you want to use OpenSSL, here is a quick and dirty guide on how to go about it:

Setup:
Create a new empty directory to hold the root CA, then create the basic directory structure and required files within it:

mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial

OpenSSL Configuration:
Copy your system's default openssl.cnf file into the current directory and name it something like "openssl-ca.cnf". Then edit it to set up the CA and client sections (un-commenting and editing lines where appropriate):

[ usr_cert ]
basicConstraints=CA:FALSE
nsCertType = client
nsComment = "Docker Registry CA"
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
basicConstraints = critical,CA:true
keyUsage = cRLSign, keyCertSign

And change the default directory to the current one:

[ CA_default ]
dir = . # Where everything is kept

The usr_cert section is used when creating the client certificates; the v3_ca section is used for the CA creation. They simply configure the type of certificate we want and what we want it to be usable for.

Generating the Private Key:

openssl genrsa -aes256 -out private/docker_ca.key 4096

Self-signing the certificate to be usable as a CA:

openssl req -new -x509 -days 3650 -key private/docker_ca.key -sha256 -extensions v3_ca -out certs/docker_ca.crt -config openssl-ca.cnf
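It's worth sanity-checking the new CA certificate before using it, for example confirming the CA:TRUE basic constraint was applied:

openssl x509 -noout -text -in certs/docker_ca.crt | grep -A1 "X509v3 Basic Constraints"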

Now you can go about signing CSRs for the client certificates.
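Each user should generate their own private key and CSR and send you only the CSR. A minimal sketch (the file names and subject fields are placeholders; the subject must satisfy the policy section of your openssl-ca.cnf, and the default policy_match requires the country, state and organisation to match the CA's):

openssl genrsa -out sebdangerfield.key 4096
openssl req -new -key sebdangerfield.key -out sebdangerfield.csr -subj "/C=GB/ST=London/O=Example Ltd/CN=sebdangerfield"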

Sign a signing request using your CA key:

openssl ca -keyfile private/docker_ca.key -cert certs/docker_ca.crt -extensions usr_cert -notext -md sha256 -in certs/sebdangerfield.csr -out certs/sebdangerfield.crt -config openssl-ca.cnf -days 366
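Once signed, the client certificate and key (along with the CA certificate) go into a directory named after your registry host under /etc/docker/certs.d/ on each machine that needs to push or pull, as covered in the Docker docs linked above (registry.example.com:5000 is a placeholder for your registry's address):

mkdir -p /etc/docker/certs.d/registry.example.com:5000
cp certs/docker_ca.crt /etc/docker/certs.d/registry.example.com:5000/ca.crt
cp sebdangerfield.crt /etc/docker/certs.d/registry.example.com:5000/client.cert
cp sebdangerfield.key /etc/docker/certs.d/registry.example.com:5000/client.key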

Troubleshooting:

If, when you try to authenticate with your Docker Registry, Docker thinks the private key and certificate don't match, make sure your client certificate is the first one in your client.cert file (i.e. the certificates making up the chain come after it).
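You can also confirm that a key and certificate really do belong together by comparing their moduli; the two hashes should match:

openssl x509 -noout -modulus -in client.cert | openssl md5
openssl rsa -noout -modulus -in client.key | openssl md5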

Nginx and PHP-FPM, bash script for deleting old vhosts

If you're using my bash script to create new Nginx vhosts (with PHP-FPM support) you may also want an easy way to remove old vhosts you no longer need (along with all the config associated with them). I've put together this very simple bash script to automate the process:

#!/bin/bash
# @author: Seb Dangerfield
# http://www.sebdangerfield.me.uk/ 
# Created:   02/12/2012
 
# Modify the following to match your system
NGINX_CONFIG='/etc/nginx/sites-available'
NGINX_SITES_ENABLED='/etc/nginx/sites-enabled'
PHP_INI_DIR='/etc/php5/fpm/pool.d'
NGINX_INIT='/etc/init.d/nginx'
PHP_FPM_INIT='/etc/init.d/php5-fpm'
# --------------END 
SED=`which sed`
CURRENT_DIR=`dirname $0`
 
if [ -z "$1" ]; then
	echo "No domain name given"
	exit 1
fi
DOMAIN=$1
 
# check the domain is valid!
PATTERN="^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$";
if [[ "$DOMAIN" =~ $PATTERN ]]; then
	DOMAIN=`echo $DOMAIN | tr '[A-Z]' '[a-z]'`
	echo "Removing vhost for:" $DOMAIN
else
	echo "invalid domain name"
	exit 1 
fi
 
echo "What is the username for this site?"
read USERNAME
HOME_DIR=$USERNAME
 
# Remove the user and their home directory
userdel -rf $USERNAME
# Delete the users group from the system
groupdel $USERNAME
 
# Delete the virtual host config
rm -f $NGINX_CONFIG/$DOMAIN.conf
rm -f $NGINX_SITES_ENABLED/$DOMAIN.conf
 
# Delete the php-fpm config
FPMCONF="$PHP_INI_DIR/$DOMAIN.pool.conf"
rm -f $FPMCONF
 
$NGINX_INIT reload
$PHP_FPM_INIT restart
 
echo -e "\nSite removed for $DOMAIN"

How to use it:

Please note that because the script deletes the user's home directory, all of the user's files will be deleted; if you want to keep a copy of the files as a backup, do so before running the script.

Simply copy the script above into a new file called something like remove_php_site.sh. If you're not using Debian/Ubuntu, or you have modified the default directories for PHP and Nginx, you will need to change the paths at the top of the script to match your system. You will also need to change the permissions on the remove_php_site.sh file to make it executable (if it isn't already):

chmod u+x remove_php_site.sh

Then run the remove_php_site.sh script passing to it the domain name as the only parameter.

./remove_php_site.sh example.com

It will then prompt you for the Linux username used for the site, and once that has been provided it will delete all the files related to that site, reload Nginx and restart PHP-FPM.

NSS error -8023 using AWS SDK for PHP

Please note this fix should also work on Fedora, CentOS and Red Hat Linux distros if you are seeing NSS error -8023 when using cURL with PHP.

At work we came up against this odd error message from the Amazon Web Services (AWS) SDK for PHP when using the SDK in a forked process on the AWS-built AMI:

PHP Fatal error:  Uncaught exception 'cURL_Exception' with message 'cURL resource: Resource id #50; cURL error: SSL connect error (cURL error code 35). See http://curl.haxx.se/libcurl/c/libcurl-errors.html for an explanation of error codes.' in /usr/share/pear/AWSSDKforPHP/lib/requestcore/requestcore.class.php:829

Stack trace:
#0 /usr/share/pear/AWSSDKforPHP/sdk.class.php(1035): RequestCore->send_request()
#1 /usr/share/pear/AWSSDKforPHP/services/swf.class.php(1305): CFRuntime->authenticate('TerminateWorkfl...', Array)
#2 ....php(189): AmazonSWF->terminate_workflow_execution(Array)
#3 ....php(83): daemon->checkSWFExecutions()
#4 ....php(350): daemon->run()
#5 {main}
  thrown in /usr/share/pear/AWSSDKforPHP/lib/requestcore/requestcore.class.php on line 829

Now, cURL error 35 means “A problem occurred somewhere in the SSL/TLS handshake. You really want the error buffer and read the message there as it pinpoints the problem slightly more. Could be certificates (file formats, paths, permissions), passwords, and others.” which is a bit vague and didn’t really help. After setting the CURLOPT_VERBOSE flag in the AWS SDK for PHP we were able to see the real error message:

NSS error -8023


Nginx and PHP-FPM, bash script for creating new vhosts under separate fpm pools

Using worker pools in PHP-FPM allows you to easily separate out and isolate virtual hosts that make use of PHP. PHP-FPM can run multiple pools of processes, all spawned from the master process, and each pool can run as a different user and/or group. Each pool can be further isolated by running in a chroot environment and by overriding the default php.ini values on a per-pool basis.

Running PHP for each vhost under a different user/group can help stop a vulnerability in one site from exposing another vhost; it can also stop a malicious owner of one vhost from using PHP to access the files of another site owned by someone else on the same server (in a shared hosting environment).

The process of setting up the web server config and a new PHP-FPM pool for each new vhost on a server can become a rather time consuming and boring process. However as this process follows a fairly standard set of steps it can be easily scripted.


Simple bandwidth monitoring on Linux

If you want a simple, lightweight tool to monitor the network traffic in and out of your server, vnstat might be just what you need. It keeps hourly, daily and monthly records and provides simple estimates of your expected usage; it is also easy to hook up to a web-based frontend for fancy charts and reporting.

Installing and configuring vnstat is very simple. First, install it using your standard package manager, for example:

Debian:

apt-get install vnstat

CentOS:

yum install vnstat

Then tell vnstat to create a database for the network interfaces you want to listen to (e.g. eth0):

vnstat -u -i eth0

That's it. Wait a few minutes, then run vnstat to view a simple console display of the amount of traffic that has travelled through all the interfaces you're monitoring:

vnstat
 
   eth0 since 01/22/12
 
          rx:  177.59 MiB      tx:  7.78 MiB      total:  185.37 MiB
 
   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Jan '12    177.59 MiB |    7.78 MiB |  185.37 MiB |    0.59 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated       183 MiB |       7 MiB |     190 MiB |
 
   daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
     yesterday     12.53 MiB |    1.36 MiB |   13.89 MiB |    1.32 kbit/s
         today      8.28 MiB |     127 KiB |    8.40 MiB |    0.88 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        --     |      --     |      --     |

You can also get vnstat to dump its output in a programming friendly format (semicolon delimited):

vnstat --dumpdb
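For example, to pull out just the daily records for eth0 (assuming the "d;" record prefix used by the vnstat 1.x dump format):

vnstat --dumpdb -i eth0 | grep '^d;'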

If you do want a nicer-looking interface, or one that doesn't require shell access, have a look at: vnstat PHP frontend

If you need a bandwidth monitoring solution that records the utilization of individual protocols instead of just received and transmitted traffic then have a look at bandwidthd

Setting up a centralised syslog server in the cloud

This post should help you get a basic syslog server and client(s) up and running in a virtual environment. It will take you through the implementation of a reasonably secure (using rsyslog's TLS authentication) yet flexible setup useful for most virtual server architectures. I will assume, if you're reading this, that you know what syslog is and what it's used for (if not, have a quick Google then come back).

Why Set Up a Centralised Syslog Server

  • For convenience – if, for instance, you have a large number of web servers and need to diagnose a problem on one of them (perhaps without being sure which one), you only have to check in one place; the same applies if you want to compile statistics from all of them or check that they have all successfully completed a software upgrade.
  • For added security – if someone hacks into one of your servers they will probably try to cover their tracks by erasing any log records created by their presence; however, if your logs are also sent to another (hardened) server then they will still be available to your sysadmins.
  • Another very useful reason, which only really applies to virtual servers, is to help retain the log files from a terminated server (e.g. one shut down due to decreased demand on your application).


Automatically creating new virtual hosts with Nginx (Bash script)

NOTE: This script has been superseded by this one: Bash Script to Create new virtual hosts on Nginx each under a different user

Setting up virtual hosts on any sort of web server normally takes at least a few minutes and several commands (if you’re not running a control panel of some variety, and even then it normally takes a good number of clicks and checking boxes to get what you want). All of this can be quite annoying when you have to set up several of these a day.

So I put together a simple bash script to quickly provision the hosting for a new static site running on Nginx. The script was originally built for use on the Amazon AMI running on AWS. It uses the sudo command, as no password is required for the default user on the AWS AMI; if you are running as a non-root user with access to the sudo command, you should be prompted for your password when running the script.

What does the script do:

  • Creates a new vhosts entry for nginx using a basic template
  • Creates a new directory for the new vhost and sets nginx as the owner
  • Adds a simple index.html file to the new directory to show the site is working.
  • Reloads Nginx to allow the new vhost to be picked up