Using Elasticsearch, Logstash and Kibana (ELK) to monitor server performance

There are myriad tools that claim to be able to monitor server performance for you, but when you’ve already got a sizeable bag of tools doing various automated operations, it’s always nice to be able to fulfil an operational requirement with one of those rather than having to onboard another.

I love Elasticsearch. It can be a bit of a minefield to learn, but once you get to grips with it, and bolt on Kibana, you realise that there is very little you can’t do with it.

Even better, Amazon AWS now have their own Elasticsearch Service, so you can reap all the benefits of the technology without having to worry about maintaining a cluster of Elasticsearch servers.

In this case, my challenge was to expose performance data from a large fleet of Amazon EC2 server instances. Yes, there is a certain amount of data available in AWS CloudWatch, but it lacks key metrics like memory usage and load average, which are invariably the metrics you most want to review.

One approach to this would be to put some sort of agent on the servers and have a server poll the agent, but again, that’s extra tooling. Another approach would be to put scripts on the servers that push metrics to CloudWatch, so that you can augment the existing EC2 CloudWatch data. This was something we considered, but with this method the metrics aren’t logged to the same place in CloudWatch as the EC2 data, so it all felt a bit clunky. And you only get two weeks of history.

This is where we turned to Elasticsearch. We were already using Elasticsearch to store information about access to our S3 buckets, and were happy with it. I figured there had to be a way to leverage this to monitor server performance, so I set about some testing.

Our basic setup was a Logstash server using the S3 input plugin and the Elasticsearch output plugin, which was configured to send output to our Elasticsearch domain in AWS:

output {
  if [type] == "s3-access" {
    elasticsearch {
      index => "s3-access-%{+YYYY.MM.dd}"
      hosts => ["search-*********-5isan2svbmpipm2xznyupbeabe.us-west-2.es.amazonaws.com:443"]
      ssl => true
    }
  }
}

We now wanted to create a different type of index to hold our performance metric data. This data was going to come from lots of servers, so Logstash needed a way to ingest it from lots of remote hosts. The easiest way to do this is with the Logstash syslog input plugin. We first set up Logstash to listen for syslog input.

input {
     syslog {
         type => syslog
         port => 8514
     }
}

We then get our servers to send their syslog output to our Logstash server, by giving them a universal rsyslogd configuration, where logs.mydomain.com is our Logstash server:

#Logstash Configuration
$WorkDirectory /var/lib/rsyslog # where to place spool files
$template LogFormat,"%HOSTNAME% ops %syslogtag% %msg%"
*.* @@logs.mydomain.com:8514;LogFormat
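
Restart rsyslog on each server to pick up the new configuration (a quick sketch; this assumes a SysV-style init system, as on Amazon Linux or CentOS 6 at the time):

sudo service rsyslog restart

Within a minute or so the servers’ standard syslog traffic should start arriving at the Logstash listener on port 8514.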

We now update our output plugin in Logstash to create the necessary Index in Elasticsearch:

output {
  if [type] == "syslog" {
    elasticsearch {
      index => "test-syslog-%{+YYYY.MM.dd}"
      hosts => ["search-*********-5isan2svbmpipm2xznyupbeabe.us-west-2.es.amazonaws.com:443"]
      ssl => true
    }
  } else {
    elasticsearch {
      index => "s3-access-%{+YYYY.MM.dd}"
      hosts => ["search-*********-5isan2svbmpipm2xznyupbeabe.us-west-2.es.amazonaws.com:443"]
      ssl => true
    }
  }
}

Note that I have called the syslog Index “test-syslog-…”. I will explain this in a moment, but it’s important that you do this.

Once these steps have been completed, it should be possible to see syslog data in Kibana, as indexed by Logstash and stored in our AWS Elasticsearch domain.

Building on this, all we had to do next was get our performance metric data into the syslog stream on each of our servers. This is very easy. Logger is a handy little utility that comes pre-installed on most Linux distros and allows you to send messages to syslog (/var/log/messages by default).

We trialled this with Load Average. To get the data to syslog, we set up the following cronjob on each server:

* * * * * root cat /proc/loadavg | awk '{print "LoadAverage: " $1}' | xargs logger

This writes the following line to /var/log/messages every minute:

Jun 21 17:02:01 server1 root: LoadAverage: 0.14

It should then be possible to search for this line in Kibana

message: "LoadAverage"

to verify that it is being stored in Elasticsearch. When we do find results in Kibana, we can see that the LogFormat template we used in our server rsyslog conf has converted the log line to:

server1 ops root: LoadAverage: 0.02

To really make this data useful, however, we need to be able to perform visualisation logic on it in Kibana. This means exposing the fields we require and making sure those fields have the correct data type for numerical visualisations. This involves adding some extra filters to your Logstash configuration.

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => '(%{HOSTNAME:hostname})\s+ops\s+root:\s+(%{WORD:metric-name}): (%{NUMBER:metric-value:float})' }
    }
  }
}

This filter operates on the message field after it has been converted by rsyslog, rather than on the format of the log line in /var/log/messages. The crucial part is to expose the Load Average value (metric-value) as a float, so that Kibana/Elasticsearch can treat it as a number rather than a string. If you only specify NUMBER as your grok data type, it will be indexed as a string, so you need to add “:float” to complete the data type conversion.

To check how the field is currently exposed, look in Kibana under Settings -> Indices. You should only have a single Index Pattern at this point (test-syslog-*). Refresh the field list for this and search for “metric-value”. At this point, it may indicate that the data type for this field is “String”, which we can now deal with. If it already has data type “Number”, you’re all set.

In Elasticsearch indices, you can only set the data type for a field when the index is created. If your “test-syslog-” index was created before “metric-value” was properly converted to a float, you can now create a new index and verify that metric-value is a number. To do this, update the output plugin in your Logstash configuration and restart Logstash.

output {
  if [type] == "syslog" {
    elasticsearch {
      index => "syslog-%{+YYYY.MM.dd}"
      hosts => ["search-*********-5isan2svbmpipm2xznyupbeabe.us-west-2.es.amazonaws.com:443"]
      ssl => true
    }
  }
}

A new Index (syslog-) will now be created. Delete the existing Index Pattern in Kibana and create a new one for syslog-*, using @timestamp as the default time field. Once this has been created, Kibana will obtain an updated field list (after a few seconds), in which you should see that “metric-value” now has a data type of “Number”.

(For neatness, you may want to replace the “test-syslog-” index with a properly named index even if your data type for “metric-value” is already “Number”.)

Now that you have the data you need in Elasticsearch, you can graph it with a visualisation.

First, set your time range to “Last Hour” and create/save a Search for what you want to graph, e.g.:

metric-name: "LoadAverage" AND hostname: "server1"

Now, create a Line Graph visualisation for that Search, setting the Y-Axis to the Average of the “metric-value” field and the X-Axis to a Date Histogram. Click “Apply” and you should see a line graph of the load average over the last hour.
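
Once Load Average is graphing nicely, adding further metrics is just a matter of more cron entries; the grok filter above will pick up anything logged in the same “Name: value” format. A sketch for memory usage, assuming the column layout of free(1) on a typical older Linux distro (check the field positions on yours):

* * * * * root free -m | awk '/^Mem:/ {print "MemoryUsedMB: " $3}' | xargs logger

This shows up in Kibana with a metric-name of “MemoryUsedMB”, which you can save as a separate Search and graph in exactly the same way.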


Rackspace – Engineered to Fail

I’ve read several articles over the last few years comparing the cloud infrastructure services offered by Rackspace and Amazon AWS.

Typically, these articles arrive at no firm conclusion as to which is better, referring to issues like cost, support, availability etc.

As someone who has used both services for over 5 years, I find these articles incomprehensible. From a technical viewpoint, there is no comparison between these services. It’s like comparing an iPhone 6 to a pocket calculator. Both have a screen, a battery and a digital pulse, but when it comes to sophistication and functionality, they are for all intents and purposes different services.

To put it bluntly, Rackspace is a truly awful experience. They position themselves as a “managed” cloud services provider, which should begin to give an indication of the problem. The beauty of cloud services is that they don’t need to be managed. You buy them, consume them and dispose of them.

Being a “managed” cloud services provider is like being a “managed” self-service car wash. If the car wash machine is so complex, inflexible and unreliable as to require the constant attention of a human being to ensure that users can wash their cars, then those users might as well just go home and wash their cars in the drive (i.e. have on-premises infrastructure).

From what I can see, the difference between Amazon and Rackspace in this regard stems from their inception.

Amazon’s AWS platform was a spin-off from their shopping function. They had lots of spare compute capacity outside peak periods and decided to hire it out, along with the tools they used to manage it. As such, it was a battle-hardened infrastructure that was used in real, live-fire web environments, and felt familiar and well-designed to actual system engineers.

Rackspace’s service seems to have been designed by marketing professionals. It’s ridiculously basic, doesn’t seem to accommodate any future-proofing, and is totally inflexible. Much more attention seems to have gone into the marketing strategy (check out the number of pretty people on the Rackspace website, compared to the JPG-free Amazon AWS site) than into the actual technology.

To illustrate this, I’m going to give a specific example. As well as backing up the points made above, I’m also hoping that this article will be picked up by search engines, as it highlights a major flaw in a certain Rackspace functionality, which would cause problems for Rackspace users if not addressed.

When you create a Rackspace Cloud server, you are given an option to schedule daily imaging of the server. That means you can create an offline copy of the server at a point in time, which you can restore at a later time to re-establish the functionality of that server.

To most people working in infrastructure operations, this means one thing: backup.

You think: “If I can make a daily image of my server, and hold the 7 most recent images, my backups are sorted.” Inevitably, that’s what a lot of Rackspace users are doing.

But here’s the thing (that you only find out when you ask the question): because of the way this imaging process works, the creation of the images will inevitably start to fail, and there is no mechanism in the platform to alert you when they do fail.

The explanation of the technology is given here:

https://community.rackspace.com/products/f/25/t/3778

To summarise:

A Rackspace server image is composed of 2 parts: the base image file created when the image was first taken, and the extended image file that contains all the changes made to the image since the base was created.

That means that when you restore the image, the base is restored first, and then all the changes are applied to the base from the extended image.

That means that if the data on your server is changing, but not necessarily growing (e.g. you could be writing huge logs, or a huge database, but pruning effectively), the size of your image is constantly growing. For First Generation Rackspace servers, if an image gets to greater than 160GB/250GB (which is peanuts in today’s Big Data world), the imaging will fail. For Next Generation servers, there is apparently no limit, but check out the comments of the “Racker” on this Rackspace support thread:

https://community.rackspace.com/general/f/34/t/461?pi1830=1

“Next Gen has no limits for either Windows or Linux, but as an image gets really large, there may be an increased chance of the process failing (Things sometimes go wrong when you are talking about moving hundreds of GBs of data).”

Wow! Like who would need to manage “hundreds of GBs” of data in 2016?! What is this? Star Trek!?

This is consistent with what I was told on a support thread by another Racker, namely that imaging is offered on a “Best Effort” basis. Remember, this is bits and bytes technology we’re talking about here, where stuff normally works or doesn’t. We’re not talking about nuclear fusion.

The same Racker goes on to say:

“For customers who run into these limits, there is generally a larger issue though. The truth is that you really should NOT be using imaging as a backup solution. Think about it, does it really make sense to backup tons and tons of data every day when only a few things changed on the server? Do you really want to spin up a new Cloud Server just to recover a single file?”

That’s a sort of valid point, but here’s a question: if scheduled daily imaging isn’t suitable for backup, why the hell is scheduled daily imaging made available as a feature, inviting hapless Ops Engineers to think that their servers are being reliably backed up when really they are not? What exactly is the purpose of scheduled daily imaging if not backup?

And the reason the point is only “sort of” valid is that there are times when you will need to make a full daily image of a server. Let’s say you have a MySQL server that has a 200GB data payload. You can’t run a mysqldump against that every night, because it will grind the server to a standstill. You have to do a bit-for-bit image of the system to back it up (as recognised by Amazon RDS service, where you can schedule daily snapshots of your RDS instances).

It actually gets worse.

Imaging can fail not only because of image size, but also because of “bugs” in the Rackspace platform. A few weeks back, I noticed that imaging on one of our smaller Rackspace servers had stopped working. I dialled up a support chat and asked the guy who responded what was going on.

Theodore R: Garreth! thank you for holding. We have a known bug in ORD that we've seen a few failures on scheduled images. To help with this. Go ahead and cancel the two jobs stuck at 0%. Then de-activiate the schedule then re-enable the schedule. I'm sorry about this it is a known issue we are working on resolving this.

Me: If you knew there was a bug why didn't you tell your customers?

Theodore R: I don't have that answer. As I'm front line support but I will bring that up to my manager in our team meeting today.

Theodore R: I do apologize about this

So they had a bug in their platform that has probably disabled scheduled images for hundreds of customers, which isn’t alerted on, and they haven’t told anyone!

This is just a sample of the grind I go through with Rackspace every week. While writing this, I am monitoring a ticket they’ve opened to tell me that one of my servers has failed and they are working on it. I have been instructed:

“Please do not access or modify ‘<server-name>’ during this process.”

Of course, it doesn’t seem to dawn on them that this could be a public web server, with thousands of users knocking on the HTTP door all the time, and the only way I can stop this is to log in to the server to shut down the web server, which I am apparently not supposed to do.

If you still don’t believe me, you can look at another piece of evidence. For the last year, Rackspace have been offering a service called “Fanatical Support for Amazon AWS”

https://www.rackspace.com/managed-aws (Pretty People on web page? Check.)

Yes, you can pay Rackspace to “manage” your investment in their main competitor. This is basically Rackspace saying: “Yes, we know our service is dogfood, but in order to keep the lights on, we’re going to try and squeeze a few dollars out of customers who’ve seen the light and are moving elsewhere.”

Like I said at the start, ignore the clickbait “comparison” articles. Rackspace is something you should avoid in your IT organisation, in the same way you avoid IE6 and BlackBerrys.

Never presume it’s safe to use your credit card online

In light of the recent information security breach at TalkTalk, I thought it would be a good opportunity to share my thoughts on information security and the use of credit cards to purchase goods and services online.

I am not by any means an information security expert, but I can credibly claim to have more knowledge of the subject than the average member of the public, and probably the average IT professional too.

My experience derives primarily from managing systems used to book flights and hotel rooms, in which customer credit cards are used as the method of payment. In my most recent role in this regard, over €40m of revenue per month was flowing through systems under my control.

To understand the underlying risk in passing sensitive data to a computer system you encounter online, the most important point to understand is that for every commercial organisation that ever existed, information security is a drag on profitability.

In itself, that isn’t surprising or unique. There are lots of business functions that are a drag on profitability. The difference with information security is that while its significance is generally understood by decision makers, the complexity of the risk involved is not, which means that when its drag factor is weighed against every other drag factor, it tends to get bumped down the list more easily when decisions have to be made about priorities.

For instance, if a Marketing Director says to a CEO that a product launch needs to be delayed to develop a new advertising campaign, because a focus group indicated that the original marketing campaign was not appealing, the logic is immediately accessible to the CEO, and the decision is relatively simple.

If, on the other hand, an IT Director says to a CEO that a new product launch has to be delayed because the software underpinning the payment system for the product hasn’t been penetration tested for Cross Site Scripting vulnerabilities, the logic is less accessible, and the CEO will probably consider the situation in terms of probability, not logic.

“What are the chances of our software being targeted when there are millions of online systems for hackers to choose from? We’ve released products with this software before and everything was fine, so why not this time?”

This type of thinking is pervasive in corporations that rely on information systems and ask us to trust them with our data. When it comes to information security, more often than not, risk is considered in terms of what is probable, not what is possible. The critical flaw in this is that a decision maker’s estimation of probability is always influenced both by their experience and by their wider commercial objectives. If they keep subconsciously diluting risk because it interferes with their commercial objectives, and nothing ever goes wrong, it becomes easier to dilute that risk further and further each time. More often than not they’ll get away with this. There are millions of online systems, and you need to be unlucky to be targeted, but you’re just as likely to be targeted as anyone else.

A real-life example of this is readily available. I recently had to make a trip to England, and needed to book 4 days of parking at Dublin Airport, which would cost in the region of €30 and which I knew I would have to pay for with my credit card.

Being relatively familiar with what goes on behind the scenes on such sites, I tend to rank the security they offer higher than more obvious considerations like price. The gold standard for me is the availability of PayPal as a payment option. There was a time when asking users to pay using PayPal was seen as second-rate, which resulted in many online retailers discounting it in favour of custom solutions that made their online presence seem more sophisticated.

Thankfully, those days are gone. PayPal have a proven track record when it comes to credit card security, and my advice would be to always choose PayPal as your payment option rather than entering your card details anew into an integrated payment system.

In the absence of PayPal, I would always favour payment solutions that link into payment gateways like Realex and WorldPay. These solutions require you to enter your credit card details, but these are either forwarded directly to the payment gateway provider, or entered on a page provided by the gateway provider. The key thing to understand is that the organisation you are purchasing the product or service from has very limited responsibility in terms of managing your credit card data. This is a good thing, because it means they have to deal with the issue of information security being a drag on profitability less frequently than organisations that attempt to process credit card payments themselves.

When neither of these options are obviously available, I will always look for an online statement from the product or service provider about how they deal with credit card payments. The key element to look for in such a statement is either attestation to or certification of what is known as PCI DSS compliance.

PCI DSS compliance is a set of standards agreed by the credit card industry which organisations storing or communicating credit card data are supposed to adhere to. There is no law or industry requirement that they should, although many banks will refuse to deal with organisations if they don’t.

Implementing PCI DSS compliance in an organisation is an onerous and expensive task. Not only is the baseline standard difficult to achieve, particularly if the organisation has grown rapidly and the standard has to be implemented retrospectively, but it is also evolving all the time, requiring organisations to devote dedicated resources to ensuring that processes and procedures are up to date. It’s also a standard that involves much more than software. It touches every part of the organisation, from HR to Accounting to Marketing to IT.

Any organisation that has been through this knows the pain involved, and when you achieve compliance, it’s something you want to tell people about. As such, if I were dealing with an organisation that has full PCI DSS compliance in place, I’d expect them to make that clear on their website.

Let’s take a look at the parking options available at Dublin Airport to see how they fared in relation to the payment options available to me.

The first one I tried, because it was the cheapest, was http://quickpark.ie/. I got my quote and clicked through the booking process to where my credit card details were required. There was no PayPal option, and no evidence of the site using a payment gateway, so I started searching for some evidence of PCI DSS compliance. This is what I found on their Frequently Asked Questions page:

How do I know my booking details are secure?

You can rest assured that your personal data is safe with us. Every booking is encrypted via SSL protocol. Along with encryption we take all the measures required to keep all your personal data safe.

This is an incredibly anaemic statement for a website asking you to enter your credit card details. Their reference to the “SSL protocol” means that your details are encrypted while they are in transit from your computer to theirs (which is a very basic standard), but no information is provided regarding what happens to your details once they arrive at the other end. The reference to “SSL” is also telling. The primary protocols used in the transfer of data across the web are SSL and TLS. Up until about a year ago, SSL was predominant, until a flaw was discovered in it, resulting in well-managed websites switching to TLS. The fact that QuickPark’s website continues to refer to “SSL” is not encouraging, never mind the total absence of any reference to the PCI DSS standard.

My next attempt was on the Dublin Airport Authority’s website, where you can book parking in the parking areas owned by the airport. This was at http://www.dublinairport.com/gns/to-from-the-airport/car-parking.aspx.

Again, there was no PayPal option, and no reference to a payment gateway provider, so I again went searching for a statement on information security. This is what I found on their Pre-Booking Frequently Asked Questions page:

How do I know my booking details are secure?

The details you provide are encrypted to prevent them being read over the internet. This is indicated by the GeoTrust icon on the car parking page. You can click on the icon for more detail.

This statement is similarly anaemic to the one provided by QuickPark, which surprised me, given that the Dublin Airport Authority is a long-standing semi-state body, compared to QuickPark, which is a relatively small private company.

Again there is reference to the transmission of data over the internet, but no reference to the management of data on the other side. The reference to the GeoTrust logo is meaningless, but was presumably included as it features the word “Trust”. Obtaining a GeoTrust logo for your website, or a logo from any one of hundreds of similar providers, costs about $20 per annum. All it signifies is from whom you bought your digital certificate to encrypt your data transmission. It means nothing in terms of how your data is managed by the company once it gets to its destination.

At this point, I decided to change tack, and did a Google search for “Dublin airport parking pci dss”.

The first result I got was for the Clayton Hotel, which is just off the motorway near the exit for the airport, and which offers car parking to airport users.

On their parking Frequently Asked Questions page (http://www.claytonhoteldublinairport.com/park-and-fly/parking-faqs/) they state:

How do I know my booking details are secure?

To ensure that you are trading in a secure environment, Clayton Hotel Dublin Airport has contracted the services of Advam. Advam is the leading provider of integrated global card services for private enterprise and government agencies in Australia and around the globe. Advam is a Tier 1 payment processor which adheres to the most stringent of industry accreditations including Level 1 PCI DSS compliance, EMV certification and ISO 9001 accreditation. When you enter your payment details online, you will notice that you will are using a secure site which uses 1024 bit tunnelling encryption to protect your information during transmission. Every transaction processed through Advam’s payment switch is protected by the latest in encryption technology and a combination of state of the art firewalls and intrusion detection systems guard every point of ingress and egress on the Advam network.

This was obviously cut and pasted from another document provided by the company referenced in it, Advam, but that in itself isn’t a problem. From this statement I can see that not only is the transmission of the data secure, but that responsibility for the data has been handed off to a third party who specialise in credit card security and who have achieved PCI DSS compliance. This is the type of statement I would expect to see from an organisation asking for my credit card data, and this was the option I chose.

In considering this example, I need to go back to my earlier point about information security being a drag on profitability. As noted, it’s one drag in a mix of many different drags, but it’s a drag that tends to get pushed aside because it isn’t one that decision makers can easily relate to.

When this is not the case, or in other words when a decision maker has decided to sufficiently prioritise information security, it’s a painful process for the organisation involved, and part of the payback is making sure that everyone knows the effort you’ve gone to, particularly if your competitors haven’t.

From that point of view, finding anaemic statements like those referred to above turns on a warning light for me. The absence of more comprehensive information about information security doesn’t mean that these organisations are insecure, but it does mean that they aren’t particularly bothered about promoting information security as a feature of their service offering, which suggests they haven’t invested particularly deeply in it.

At this juncture, it would be nice to be able to go back and view what TalkTalk said about their information security before they were hacked, but at the time of writing, the entire TalkTalk website is just one big blurb about how TalkTalk take information security more seriously than anything else. Presumably, it will be that way for some time.

That said, it’s unlikely that very many of TalkTalk’s customers ever bothered checking out their statements about information security.

If you don’t want to be in the same position they are in today, it’s probably a habit you should get into.

 

 

Thoughts on Ansible variables

If you want to use Ansible to really empower your configuration management function, it’s important to have a solid understanding of how variables work.

Here’s a few must-knows:

Values in ansible.cfg are environment variables, not script variables

The ansible.cfg file is provided to allow the user to set default values that are used when Ansible is executed from a local environment. This file isn’t a YAML file, which is why assignments use “=” rather than “:”.

The values in this file are set as environment variables when Ansible runs. You cannot access them directly as script variables, e.g.

remote_user = root

does not provide you with a

{{ remote_user }}

variable in your playbooks.
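
If you do need to read an environment variable from the control machine inside a play, you can do so explicitly with the env lookup. A minimal sketch (the variable chosen is purely illustrative):

- hosts: all
  tasks:
    - name: show an environment variable from the control machine
      debug:
        msg: "{{ lookup('env', 'HOME') }}"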

It’s important to create your variables in the right place: inventory or play

Generally, a variable will apply to either a host (or group of hosts), or to a task (play) within a playbook. Decide early where your variable applies and create it in the right place.

For variables that apply to hosts (eg a username to login with) create the variable in either:

Your inventory file:

[server_group_1]

server1 ansible_ssh_user=admin

Under your group_vars directory:

#file: ./group_vars/server_group_1

ansible_ssh_user: admin

Under your host_vars directory:

#file: ./host_vars/server1

ansible_ssh_user: admin

You can also create host-related variables deeper in your playbook:

- hosts: webservers
  remote_user: admin

but I don’t recommend this. Ansible provides sufficient functionality to create an abstraction layer for variables above the play/task level, and it makes sense to use it.

For variables that are specific to plays, the value can be set closer to the point of execution, for example:

After the hosts specification:

- hosts: webservers
  vars:
     app_version: 12.03

As a parameter for the role that is being applied to the hosts:

- hosts: webservers
  roles:
    - { role: app, app_version: 12.03 }

Variables in Ansible have precedence rules

Particular care needs to be paid to precedence. In some instances, you may want a variable to have an absolute value which cannot be changed by an assignment in any other part of the playbook or from the command line. In other instances you may wish to allow a variable to be changed. These behaviours are controlled by where you create the assignment of the variable.
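
As a quick illustration, extra vars passed on the command line always win, so a play-level app_version like the one above can be overridden at runtime (site.yml is a hypothetical playbook name):

ansible-playbook site.yml -e "app_version=12.04"

Conversely, role defaults sit at the bottom of the precedence order, so they are the right place for values you expect to be overridden.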

 

Installing Passenger for Puppet on Amazon Linux

Introduction

Puppet ships with a web server called WEBrick. This is fine for testing and for use with a small number of nodes, but it will cause problems with larger fleets of nodes. It is recommended to use the Ruby application server Passenger to run Puppet in production environments.

Setup

Provision a new server instance.

Install required RPMs. Use Ruby 1.8 rather than Ruby 2.0. Both are shipped with the Amazon Linux AMI at the time of writing, but you need to set up the server to use version 1.8 by default.

sudo yum install -y ruby18 httpd httpd-devel mod_ssl ruby18-devel rubygems18 gcc mlocate
sudo yum install -y gcc-c++ libcurl-devel openssl-devel zlib-devel git

Make Ruby 1.8 the default

sudo alternatives --set ruby /usr/bin/ruby1.8

Set Apache to start at boot

sudo chkconfig httpd on

Install Passenger gem

sudo gem install rack passenger

Update the locate DB (you will need this to find files later)

sudo updatedb

Find the path to the installer and add this to the path

locate passenger-install-apache2-module
sudo vi /etc/profile.d/puppet.sh

Add the following line to the file:

export PATH=$PATH:/usr/lib/ruby/gems/1.8/gems/passenger-5.0.10/bin/

Then make the file executable:

sudo chmod 755 /etc/profile.d/puppet.sh

Make some Linux swap space (the installer will fail on smaller instances if this doesn’t exist)

sudo dd if=/dev/zero of=/swap bs=1M count=1024
sudo mkswap /swap
sudo chmod 0600 /swap
sudo swapon /swap

At this point, open a separate shell to the server (you should have 2 shells). This isn’t absolutely essential, but the installer will ask you to update an Apache file mid-flow, so if you want to do things to the letter of the law, a second shell helps.

Next, run the installer, and accept the default options.

sudo /usr/lib/ruby/gems/1.8/gems/passenger-5.0.10/bin/passenger-install-apache2-module

The installer will ask you to add some Apache configuration before it completes. Do this in your second shell. Add the config to a file called /etc/httpd/conf.d/puppet.conf. You can ignore warnings about the PATH.

<IfModule mod_passenger.c>
  PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-5.0.10
  PassengerDefaultRuby /usr/bin/ruby1.8
</IfModule>

Restart Apache after you add this and then press Enter to complete the installation

Next, make the necessary directories for the Ruby application

sudo mkdir -p /usr/share/puppet/rack/puppetmasterd
sudo mkdir /usr/share/puppet/rack/puppetmasterd/public /usr/share/puppet/rack/puppetmasterd/tmp

Copy the application config file to the application directory and set the correct permissions

sudo cp /usr/share/puppet/ext/rack/files/config.ru /usr/share/puppet/rack/puppetmasterd/
sudo chown puppet:puppet /usr/share/puppet/rack/puppetmasterd/config.ru

Add the necessary SSL config for the Ruby application to Apache. You can append this to the existing puppet.conf file you created earlier. Note that you need to update this file to specify the correct file names and paths for your Puppet certs (puppet.pem in the example below). The entire file should now look like below:

LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-5.0.10/buildout/apache2/mod_passenger.so
<IfModule mod_passenger.c>
  PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-5.0.10
  PassengerDefaultRuby /usr/bin/ruby1.8
</IfModule>
# And the passenger performance tuning settings:
# Set this to about 1.5 times the number of CPU cores in your master:
PassengerMaxPoolSize 12
# Recycle master processes after they service 1000 requests
PassengerMaxRequests 1000
# Stop processes if they sit idle for 10 minutes
PassengerPoolIdleTime 600
Listen 8140
<VirtualHost *:8140>
    # Make Apache hand off HTTP requests to Puppet earlier, at the cost of
    # interfering with mod_proxy, mod_rewrite, etc. See note below.
    PassengerHighPerformance On
    SSLEngine On
    # Only allow high security cryptography. Alter if needed for compatibility.
    SSLProtocol ALL -SSLv2 -SSLv3
    SSLCipherSuite EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!IDEA:!ECDSA:kEDH:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
    SSLHonorCipherOrder     on
    SSLCertificateFile      /var/lib/puppet/ssl/certs/puppet.pem
    SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/puppet.pem
    SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem
    #SSLCARevocationCheck   chain
    SSLVerifyClient         optional
    SSLVerifyDepth          1
    SSLOptions              +StdEnvVars +ExportCertData
    # Apache 2.4 introduces the SSLCARevocationCheck directive and sets it to none
    # which effectively disables CRL checking. If you are using Apache 2.4+ you must
    # specify 'SSLCARevocationCheck chain' to actually use the CRL.
    # These request headers are used to pass the client certificate
    # authentication information on to the puppet master process
    RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public
    <Directory /usr/share/puppet/rack/puppetmasterd/>
      Options None
      AllowOverride None
      # Apply the right behavior depending on Apache version.
      <IfVersion < 2.4>
        Order allow,deny
        Allow from all
      </IfVersion>
      <IfVersion >= 2.4>
        Require all granted
      </IfVersion>
    </Directory>
    ErrorLog /var/log/httpd/puppet-server.example.com_ssl_error.log
    CustomLog /var/log/httpd/puppet-server.example.com_ssl_access.log combined
</VirtualHost>

The Ruby application is now ready. Install the puppet master application. Note: do NOT start the puppetmaster service or set it to start at boot.

sudo yum install -y puppet-server

Restart Apache and test using a new puppet agent. You can also import the SSL assets from an existing puppet master into /var/lib/puppet/ssl. This will allow your existing puppet agents to continue to work.
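
To sanity-check the setup from an agent node, point a test run at the new master (a sketch; it assumes the master resolves as puppet-server.example.com, matching the log file names in the Apache config above, and that port 8140 is reachable):

puppet agent --test --server puppet-server.example.com --waitforcert 60

If Passenger is serving Puppet correctly, the run should complete and the request should appear in puppet-server.example.com_ssl_access.log rather than in any WEBrick output.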

Allowing puppet agents to manage their own certificates

What?

Why would you want to allow a puppet agent to manage the certificates the puppet master holds for that agent? Doesn’t that defeat the whole purpose of certificate-based authentication in Puppet?

Well, yes, it does, but there are situations in which this is useful, as long as security is not a concern!!

Enter Cloud Computing.

Servers in Cloud Computing environments are like fruit flies. There are millions of them all over the world being born and dying at any given time. In an advanced Cloud configuration they can have lifespans of hours, if not minutes.

As Puppet generally relies on fully qualified domain names to match agent requests to stored certificates, this can become a bit of a problem, as server instances that come and go in something like Amazon AWS can sometimes be required to have the same hostname at each launch.

Imagine the following scenario:

You are running automated performance testing, in which you want to test the amount of time it takes to re-stage an instance with a specific hostname and run some tests against it. Your script both launches the instance and expects the instance to contact a puppet master to obtain its application.

In this case, the first time the instance launches, the puppet agent will generate a client certificate signing request, send it to the master, get it signed and pull the necessary catalog. The puppet master will then have a certificate for that agent.

Now, you terminate the instance and re-launch it. The agent presents another signing request, with the same hostname, but this time the puppet master refuses to play, telling you that it already has a certificate for that hostname, and the one you are presenting doesn’t match.

You’re snookered.

Or so you think. The puppet master has a REST API that is disabled by default, but which you can open up to receive HTTP requests to manage certificates. To enable the necessary feature, add the following to your auth.conf file:

path /certificate_status
auth any
method find, save, destroy
allow *

Restart the puppet master when you’ve done this.


sudo service puppetmaster restart

Next, when you start your server instance, include the following script at boot. It doesn’t actually matter when this is run, provided it is run after the hostname of the instance has been set.


#!/bin/bash

# Revoke the existing certificate for this hostname on the master
curl -k -X PUT -H "Content-Type: text/pson" --data '{"desired_state":"revoked"}' https://puppet:8140/production/certificate_status/$HOSTNAME

# Delete the revoked certificate from the master
curl -k -X DELETE -H "Accept: pson" https://puppet:8140/production/certificate_status/$HOSTNAME

# Remove the agent's local copy of its SSL assets
rm -Rf /var/lib/puppet/ssl/*

# Run the agent to generate a new CSR, have it signed and pull the catalog
puppet agent -t

This will revoke and delete the agent’s certificate on the master, delete the agent’s copy of the certificate and renew the signing process, giving you new certs on the agent and master and allowing the catalog to be ingested by the agent.

You can also pass a script like this as part of the Amazon EC2 process of launching an instance.

aws ec2 run-instances  --user-data file://./pclean.sh

Where pclean.sh is the name of the locally saved script file, and it is saved in the same directory as your working directory (otherwise include the absolute path).
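
In practice, run-instances needs the usual launch parameters as well; something like the following, where the AMI ID, instance type and key name are placeholders:

aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my-key --user-data file://./pclean.sh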

With this in place, each time you launch a new instance, regardless of its hostname, it will revoke any existing cert that has the same hostname, and generate a new one.

Obviously, if you are launching hundreds of instances at the same time, you may have concurrency issues, and some other solution will be required.

Again, this is only a solution for environments where security is not an issue.

Stagduction

Stagduction (noun): A web application state in which the service provided is not monitored, not redundant and has not been performance tested, but which is in use by a large community of people as a result of poor planning, poor communication and over-zealous salespeople.

Install Ruby for Rails on Amazon Linux

A quick HOWTO on installing Ruby for Rails on Amazon Linux

Check your Ruby version (bundled in Amazon Linux)


ruby -v
ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-linux]

Check your sqlite3 version (bundled with Amazon Linux)


sqlite3 --version
3.7.17 2013-05-20 00:56:22 118a3b35693b134d56ebd780123b7fd6f1497668

Check Rubygems version (bundled with Amazon Linux)


gem -v
2.0.14

Install Rails (this sticks on the command line for a while, be patient. The extra parameters exclude the documentation, which if installed, can melt the CPU on smaller instances whilst compiling)


sudo gem install rails --no-ri --no-rdoc

Check Rails installed


rails --version
Rails 4.1.6

Install gcc (always handy to have)


sudo yum install -y gcc

Install ruby and sqlite development packages


sudo yum install -y ruby-devel sqlite-devel

Install node.js (Rails wants a JS interpreter)

 sudo bash
curl -sL https://rpm.nodesource.com/setup | bash -
exit
sudo yum install -y nodejs

Install the sqlite3 and io-console gems


gem install sqlite3 io-console

Make a blank app


mkdir myapp
cd myapp
rails new .

Start it (in the background)


bin/rails s &

Hit it


wget -qO- http://localhost:3000

Debug (Rails console)


bin/rails c
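
From the console, a couple of quick sanity checks (illustrative only):

Rails.version   # should print 4.1.x
Rails.env       # "development" by default
exit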

Application monitoring with Nagios and Elasticsearch

As the applications under your control grow, both in number and complexity, it becomes increasingly difficult to rely on predictive monitoring.

Predictive monitoring is monitoring things that you know should be happening. For instance, you know your web server should be accepting HTTP connections on TCP port 80, so you use a monitor to test that HTTP connections are possible on TCP port 80.

In more complex applications, it is harder to predict what may or may not go wrong; similarly, some things can’t be monitored in a predictive way, because your monitoring system may not be able to emulate the process that you want to monitor.

For example, let’s say your application sends Push messages to a mobile phone application. To monitor this thoroughly, you would have to have a monitor that persistently sends Push messages to a mobile phone, and some way of verifying that the mobile phone received them.

At this stage, you need to invert your monitoring system, so that it stops asking if things are OK, and instead listens for applications that are telling it that they are not OK.

Using your application logs files is one way to do this.

Well-written applications are generally quite vocal when it comes to being unwell, and will always describe an ERROR in their logs if something has gone wrong. What you need to do is find a way of linking your monitoring system to that message, so that it can alert you that something needs to be checked.

This doesn’t mean you can dispense with predictive monitoring altogether; what it does mean is that you don’t need to rely on predictive monitoring entirely (or in other words, you don’t need to be able to see into the future) to keep your applications healthy.

This is how I’ve implemented log based monitoring. This was something of a nut to crack, as our logs arise from an array of technologies and adhere to very few standards in terms of layout, logging levels and storage locations.

The first thing you need is a Logstash implementation. Logstash comprises a stack of technologies: an agent to ship logs out to a Redis server; a Redis server to queue logs for indexing; a Logstash server for creating indices and storing them in Elasticsearch; and an Elasticsearch server to search those indices.

The setup of this stack is beyond the scope of this article; it’s well described over on the Logstash website, and is reasonably straightforward.

Once you have your Logstash stack set up, you can start querying the Elasticsearch search API for results. Queries are based on HTTP POST and JSON, and results are output in JSON.

Therefore, to test your logs, you need to issue an HTTP POST query from Nagios, check the results for ERROR strings, and alert accordingly.

The easiest way to have Nagios send a POST request with a JSON payload to Elasticsearch is with the Nagios JMeter plugin, which allows you to create monitors based on your JMeter scripts.

All you need then is a correctly constructed JSON query to send to elasticsearch, which is where things get a bit trickier.

Without going into this in any great detail, formulating a well-constructed JSON query that will parse just the right log indices in Elasticsearch isn’t easy. I cheated a little here. I am familiar with the Apache Lucene syntax that the Logstash JavaScript client, Kibana, uses, and was able to formulate my query based on this.

Kibana sends encrypted queries to Elasticsearch, so you can’t pick them out of the HTTP POST/GET variables. Instead, I enabled logging of slow queries on Elasticsearch (with the threshold set to 0s) so that I could see in the Elasticsearch logs exactly which queries were being run. Here’s an example:


{
  "size": 100,
  "sort": {
    "@timestamp": {
      "order": "desc"
    }
  },
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "NOT @source_host:\"uatserver\"",
          "default_field": "_all",
          "default_operator": "OR"
        }
      },
      "filter": {
        "range": {
          "@timestamp": {
            "from": "2014-10-06T11:05:25+00:00",
            "to": "2014-10-06T12:05:25+00:00"
          }
        }
      }
    }
  },
  "from": 0
}

You can test a query like this by sending it straight to your elasticsearch API:


curl -XPOST 'http://localhost:9200/_search' -d '{"size":100,"sort":{"@timestamp":{"order":"desc"}},"query":{"filtered":{"query":{"query_string":{"query":"NOT @source_host:\"uatserver\"","default_field":"_all","default_operator":"OR"}},"filter":{"range":{"@timestamp":{"from":"2014-10-06T11:05:25+00:00","to":"2014-10-06T12:05:25+00:00"}}}}},"from":0}'

This searches a batch of 100 log entries that do not have a source host of “uatserver”, from a previous one-hour period.

Now that we know what we want to send to Elasticsearch, we can construct a simple JMeter script. In this, we simply specify an HTTP POST request containing Body Data of the JSON given above, and include a Response Assertion for the strings we do not want to see in the logs.

We can then use that script in Nagios with the JMeter plugin. If the script finds the ERROR string in the logs, it will generate an alert.

2 things are important here:

The alert will only tell you that an error has appeared in the logs, not what that error was; and if the error isn’t persistent, the monitor will eventually recover.

Clearly, there is a lot of scope for false positives in this, so if your logs are full of tolerable errors (they shouldn’t be, really), you are going to have to be more specific about your search strings.

The good news is that if you get this all working, it’s very easy to create new monitors. Rather than writing bespoke scripts and working with Nagios plugins, all you need to do is change the queries and the Response Assertions in your JMeter script, and you should be able to monitor anything that is referenced in your application logs.
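
If you want to prototype a check outside of JMeter first, the same idea can be sketched as a simple curl-based Nagios plugin (the host, time window and query string below are illustrative, not a drop-in replacement for the JMeter approach):

#!/bin/bash
# Count log entries containing ERROR in the last 15 minutes and alert if any are found.
QUERY='{"size":0,"query":{"filtered":{"query":{"query_string":{"query":"message:ERROR"}},"filter":{"range":{"@timestamp":{"gte":"now-15m"}}}}}}'
HITS=$(curl -s -XPOST 'http://localhost:9200/_search' -d "$QUERY" | grep -o '"hits":{"total":[0-9]*' | grep -o '[0-9]*$')
if [ "${HITS:-0}" -gt 0 ]; then
  echo "CRITICAL: $HITS log entries containing ERROR in the last 15 minutes"
  exit 2
fi
echo "OK: no ERROR entries in the last 15 minutes"
exit 0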

To assist in some small way, here is a link to a pre-baked JMeter script that includes an Apache Lucene query, and is also set up with the necessary Javascript-based date variables to search over the previous 15 minutes.

Negative matching on multiple ip addresses in SSH

In sshd_config, you can use the

Match

directive to apply different configuration parameters to ssh connections depending on their characteristics.

In particular, you can match on ip address, both positively and negatively.

You can specify multiple conditions in the match statement. All conditions must be matched before the match configuration is applied.

To negatively match an ip address, that is, to apply configuration if the connection is not from a particular ip address, use the following syntax

Match Address *,!62.29.1.162/32
ForceCommand /sbin/sample_script

To negatively match more than one ip address, that is, to apply configuration if the connection is not from one or more ip addresses, use the following syntax

Match Address *,!62.29.1.162/32,!54.134.118.96/32
ForceCommand /sbin/sample_script
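
You can check how sshd will evaluate a Match block for a given connection without actually connecting, using the extended test mode (a sketch; the user and host values are illustrative):

sudo sshd -T -C user=deploy,host=server1,addr=62.29.1.163 | grep -i forcecommand

With an address other than the excluded ones, the forced command should appear in the output; re-run it with addr=62.29.1.162 and it should not.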