Installing a new SSL certificate in your ELB via CLI

For future me:

  1. Create the key and CSR:
    $ openssl req -out wildcard.site.com.csr -new -newkey rsa:2048 -nodes -keyout wildcard.site.com.key
    
  2. Upload the CSR to your SSL vendor (in this case, DigiCert) and obtain the signed SSL certificate.
  3. Convert the signing key to a traditional RSA PEM key, which is what AWS/IAM expects. To check what you already have, run "head -1 wildcard.site.com.key": if the first line reads "-----BEGIN PRIVATE KEY-----" the key is in PKCS#8 format and needs converting; it should read "-----BEGIN RSA PRIVATE KEY-----". (A quick check that the converted key matches the certificate is sketched just after this list.)
    $ openssl rsa -in wildcard.site.com.key -outform PEM -out wildcard.site.com.pem.key
    writing RSA key
    
  4. Upload the certificate to the IAM keystore:
    $ aws iam upload-server-certificate --server-certificate-name star_site_20141014 --certificate-body file:///Users/evan/certs_20141014/site/certs/star_site_com.crt --private-key file:///Users/evan/certs_20141014/wildcard.site.com.pem.key --certificate-chain file:///Users/evan/certs_20141014/site/certs/DigiCertCA.crt
    {
        "ServerCertificateMetadata": {
            "ServerCertificateId": "XXXXXXXXXXXXXXX",
            "ServerCertificateName": "star_site_20141014",
            "Expiration": "2017-12-18T12:00:00Z",
            "Path": "/",
            "Arn": "arn:aws:iam::9999999999:server-certificate/star_site_20141014",
            "UploadDate": "2014-10-14T15:29:28.164Z"
        }
    }
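
A sanity check that isn't in the original notes, but is worth running before (or after a rejected) upload: the certificate and the converted key should share the same RSA modulus. Filenames assume the ones used above.

    $ openssl x509 -noout -modulus -in star_site_com.crt | openssl md5
    $ openssl rsa -noout -modulus -in wildcard.site.com.pem.key | openssl md5

The two digests should be identical; if they aren't, the key doesn't belong to that certificate.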
    

Once the above steps are complete, you can go into the web console (EC2 -> Load Balancers), select the ELB whose cert you want to change, click the “Listeners” tab, click the SSL port (443) and select the new cert from the dropdown.
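
If you'd rather stay in the CLI, the listener can be switched there as well (the load balancer name below is a placeholder; the certificate ARN is the one returned by the upload above):

    $ aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-elb --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::9999999999:server-certificate/star_site_20141014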

Create CloudWatch alerts for all Elastic Load Balancers

I manage a bunch of ELBs, but we were missing an alert on a pretty basic metric: how many errors each load balancer was returning. Rather than wade through the UI to add these alerts, I figured it would be easier to do it via the CLI.

Assuming aws-cli is installed and the ARN for your SNS topic (in my case, just an email alert) is $arn:

for i in `aws elb describe-load-balancers | grep LoadBalancerName |
  perl -ne 'chomp; my @a=split(/\s+/); $a[2] =~ s/[",]//g; print "$a[2] ";'` ; \
do aws cloudwatch put-metric-alarm --alarm-name "$i ELB 5XX Errors" \
  --alarm-description "High $i ELB 5XX error count" \
  --metric-name HTTPCode_ELB_5XX --namespace AWS/ELB \
  --statistic Sum --period 300 --evaluation-periods 1 --threshold 50 \
  --comparison-operator GreaterThanThreshold --dimensions Name=LoadBalancerName,Value=$i \
  --alarm-actions $arn --ok-actions $arn ; done

That huge one-liner creates a CloudWatch alarm that fires when the ELB returns more than 50 5XX errors over a 5-minute period, and sends an “ok” message via the same SNS topic once the error count drops back down. The for loop creates (or updates) the alarm for every ELB.
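
To spot-check the result for a given load balancer (the name is a placeholder; alarm names follow the scheme from the loop):

    $ aws cloudwatch describe-alarms --alarm-name-prefix "my-elb ELB 5XX Errors"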

More info on put-metric-alarm is available in the AWS docs.

Load balancing in EC2 with Nginx and HAProxy

We wanted to set up a load-balanced web cluster in AWS for expansion. My first inclination was to use ELB for this, but I soon learned that ELB doesn’t let you allocate a static IP; you can only refer to it by DNS name. That would be fine, except that our current DNS provider, Dyn, requires IP addresses when using their GSLB (geo-based load balancer) service.

Rather than let this derail the whole project, I decided to look into the software options available for load balancing in EC2. I’ve been a fan of hardware load balancers for a while, sort of looking down at software-based solutions without any real rationale, but in this case I really had no choice, so I figured I’d give it a try.

My first stop was Nginx. I’ve used it before in a reverse-proxy scenario and like it. The problem I had with it is that it doesn’t support active polling of nodes – the ability to send requests to the web server and mark the node up or down based on the response. As far as I can tell, multiple upstream servers in Nginx let you specify max_fails and fail_timeout, but a “fail” is only registered when a real request comes in and fails. I don’t want to risk losing a real request – I like active polling.
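
For contrast, here’s a rough sketch of what I mean by active polling, independent of any particular load balancer; the backend addresses and health-check path are made up:

    #!/bin/bash
    # Probe each backend directly (e.g. from cron) and mark it up or down
    # based on the response, instead of waiting for a live request to fail.
    backends="10.0.0.10 10.0.0.11"
    for host in $backends; do
      if curl -sf --max-time 2 -o /dev/null "http://$host/healthcheck"; then
        echo "$host: up"
      else
        echo "$host: down"
      fi
    done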