The not-so-secret secret to Postgres performance

I manage a bunch of Postgres DBs, and one of the things I almost always forget to do when setting up a new one is bump the readahead up from the default of 256 sectors (128 KB). I created this script and run it out of /etc/rc.local, and sometimes cron it too. The 3 commands at the top are only really relevant on systems with “huge” memory, probably over 64 GB. We ran into some memory problems with CentOS 6 on a box with 128 GB of RAM, which we ended up working around by reinstalling CentOS 5, but the /sys/kernel/mm/redhat_transparent_hugepage/ options below should fix them in 6.x (we haven’t actually tried it on that DB, but we haven’t seen any problems on other large DBs).

#!/bin/bash

# Tell khugepaged not to defrag memory in search of huge pages, and turn off
# transparent hugepage defrag entirely (RHEL/CentOS 6 paths).
echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

# Start background writeback of dirty pages at 1% of memory so writes don't
# accumulate into huge bursts on large-memory boxes.
echo 1 > /proc/sys/vm/dirty_background_ratio

# Pull device names out of /proc/partitions, skipping partition entries like
# sda1 but keeping whole disks (sda, dm-0, etc.).
BLOCK_DEVICES=`perl -ne 'chomp; my @a=split(/\s+/); next unless $a[4]; next if ($a[4] =~ /sd[a-z]+\d+/); print "$a[4]\n";' /proc/partitions`

logger -t tune_blockdevs "Block devices matching: $BLOCK_DEVICES"

for DEV in $BLOCK_DEVICES;  do
        logger -t tune_blockdevs "Setting IO params for $DEV"
### Uncomment the below line if using SSD
#        echo "noop" > /sys/block/$DEV/queue/scheduler
        /sbin/blockdev --setra 65536 /dev/$DEV   # 65536 512-byte sectors = 32 MB readahead
done
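
If you want to sanity-check the effect, blockdev can also read the setting back. Something like this (substitute a real device for sda) shows the value in 512-byte sectors:

# Show the current readahead for a device, in 512-byte sectors.
# The kernel default of 256 sectors is 128 KB; 65536 sectors is 32 MB.
/sbin/blockdev --getra /dev/sda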

Problems with PostgreSQL 9.0 on CentOS 6.3 with a 2.6.32-series kernel

Note: this is cross-posted from the Yodle Tech Blog.

We recently migrated our Postgres database to a significantly more powerful machine. While our existing server was doing okay load-wise, we were approaching 90% full on its Postgres volume. We’d also recently acquired an awesome new PureStorage SSD-based SAN and were excited to move our database onto it. For comparison, here are the specs for the two systems:

            Old server                                 New server
Nickname    DB1                                        DB2
CPU         2x 6-core Xeon X5690 CPUs at 3.47 GHz      2x 8-core E5-2690 CPUs at 2.90 GHz
Memory      96 GB 1333 MHz                             128 GB 1600 MHz
Storage     Dell PERC H800 SAS HBA,                    QLogic QLE2562 dual-port 8 Gbit FC HBA,
            Dell MD1220 drive enclosure,               PureStorage SSD-based SAN
            24x 15k RPM SAS drives in RAID 10
OS          CentOS 5.6 x86_64                          CentOS 6.3 x86_64
Kernel      2.6.18-238.9.1.el5                         2.6.32-279.2.1.el6.x86_64

As you can probably guess, moving from spinning disk onto SSD was the centerpiece of the upgrade. We’d tested the PureStorage system pretty extensively, used it in some secondary databases, and seen some amazing improvements. We were excited to see how it would handle our production workload.

Basic Configuration

DB1 was built in early 2011 and was running CentOS 5.6. We’re generally pretty big fans of CentOS, so when building DB2 we went with CentOS 6.3, which was the latest version. We followed our standard installation procedure, which essentially means doing a minimal installation and then installing just what we needed, and of course updating all packages to the latest versions.

We provisioned a big LUN for the Postgres data and 500 GB each for Postgres “temp” and logging. While a separate partition/volume for pg_log/pg_xlog is a common practice, we also created a separate volume for temp as a safeguard against runaway queries that generate lots of temp files due to hash-joins or sorting gigantic result sets. In the past we’ve run into situations where the temporary files consume all available space, and if the sort space is on the same partition as your data you run the risk of crashing the database if Postgres attempts to allocate space and there’s none to be had. Since all the SAN volumes are thin provisioned, there’s no penalty for creating separate volumes, and we get the additional benefit of being able to specify different mount options for each volume. (It’s worth noting that PostgreSQL 9.2 allows you to specify a maximum size for temp space, but since we’re still on 9.0 that option isn’t available to us.)

We then created XFS filesystems on each volume. We didn’t bother partitioning the volumes, just created the filesystems on the raw devices. This way, in the event we need to expand a volume, we don’t have to worry about growing the partition, just the LUN (in the SAN) and the filesystem.
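
As a rough sketch, creating one of these volumes looks something like the following. The device name, mount point, and mount options here are made up for illustration, not our actual layout:

# Make an XFS filesystem directly on the raw LUN: no partition table.
mkfs.xfs /dev/mapper/pgdata

# Mount it; noatime is a typical choice for database volumes.
mkdir -p /var/lib/pgsql/9.0/data
mount -o noatime /dev/mapper/pgdata /var/lib/pgsql/9.0/data

# If the LUN is later grown on the SAN, the filesystem can be grown online
# without touching any partitions:
xfs_growfs /var/lib/pgsql/9.0/data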

The initial cutover

We configured streaming replication between DB1 and DB2 to get DB2 in sync with DB1. On the night of Sunday, August 19th, we cut over from DB1 to DB2 as the primary. All things considered, everything went well and our database was only inaccessible to our servers for a few minutes. (Incidentally, while the database was “down” all our client sites were fine since they are completely decoupled from the database.) After verifying everything was up we ran some test queries and confirmed the performance was awesome, gave ourselves some virtual high-fives and went to bed.
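
For the curious, a 9.0-era streaming replication setup looks roughly like this. Hostnames and paths below are placeholders, and this glosses over details like wal_keep_segments and the pg_hba.conf replication entry:

# On DB1 (postgresql.conf): wal_level = hot_standby, max_wal_senders > 0.
# On DB2, seed the data directory from the primary while it stays online:
psql -h db1 -U postgres -c "SELECT pg_start_backup('seed db2', true);"
rsync -a --exclude pg_xlog --exclude postmaster.pid \
      db1:/var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/data/
psql -h db1 -U postgres -c "SELECT pg_stop_backup();"

# recovery.conf on DB2 turns it into a streaming standby:
cat > /var/lib/pgsql/9.0/data/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=db1 port=5432 user=postgres'
EOF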

The next day, Monday, started out well. Everything that touched the DB was much faster. Pages that took 3-5 seconds to load were nearly instant. Jobs that took hours to run finished in under 15 minutes. It was as if someone had hit the turbo button on our whole system. Everything was wonderful. Or so we thought.

The problems begin

Around midday Monday we started seeing lots of errors thrown in our applications, caused by “too many clients” errors from the database. Our connection limit had been set at 250 forever, so it seemed strange to hit it out of the blue. We had recently done a code release, so our first line of thought was that there was a bug in the code. We use Hibernate, and it’s pretty easy to blame Hibernate. The problem was intermittent: every 20-30 minutes we’d see a bunch of SQLExceptions due to “too many connections”, which would last 1-2 minutes, and then it would clear up and everything was fine again.

We made it through the rest of the day, and in the evening the problem seemed to have cleared up. We thought maybe it was a transient problem that had resolved itself, but then we saw a bunch of the same exceptions in some jobs that ran overnight. Our system is pretty resilient though, and most connection attempts automatically retried until they succeeded. The problem was a high priority, but since it wasn’t affecting end users we were able to limp along. Tuesday was much the same as Monday and we continued investigating. Wednesday the real trouble started.

The real trouble starts

Our nightly pg_dump, which would normally finish by 8:30 AM, was only halfway complete by 8 AM on Wednesday. We had the same too-many-clients exceptions throughout the night. But around midday we started getting reports from users that our UI was really slow: some pages that had been loading nearly instantly were taking 30-60 seconds to load, or more. This quickly turned the problem into a crisis. We saw the same “too many connections” errors in the frontend logs as we had in the backend. The basic behavior was the same – we’d get a wave of “too many connections” for a couple minutes, then it would clear up for a while, and then the cycle would repeat.

We finally had to admit that maybe the problem wasn’t in the code, but in the database. To mitigate the errors, we lowered the active connection limits in our connection pools. That made the “too many connections” errors mostly go away, but it wasn’t much of an improvement for end users: rather than failing quickly, their page loads now hung while the connection pool waited for a slot to open.
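
For what it’s worth, watching how close you are to the wall on the server side is simple enough; something along these lines (run locally as the postgres user) shows the picture:

# How many backends are connected right now, versus the limit of 250?
psql -U postgres -c "SHOW max_connections;"
psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"

# Break the connections down by user and waiting status to see what's piling up:
psql -U postgres -c "SELECT usename, waiting, count(*) FROM pg_stat_activity GROUP BY 1, 2 ORDER BY 3 DESC;"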

A minor breakthrough – and a workaround

Thursday morning we were able to get much closer to the actual problem by running vmstat on the database server itself. vmstat shows the amount of memory cached vs. free, along with IO in/out. As people who’ve run Postgres databases are probably aware, it’s common to see almost 100% of available memory “used” in top even when the database doesn’t seem to be doing very much; Linux uses otherwise-idle memory for the page cache.
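
The invocation is nothing fancy; the interesting columns are the memory and IO ones:

# Print stats every 5 seconds. Watch "free" and "cache" under memory,
# "bi"/"bo" (blocks in/out) under io, and "wa" (IO wait) under cpu.
vmstat 5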

What we saw in vmstat during the “problem” periods, however, was that “cached” was decreasing, “free” was increasing, and IO in/out dropped to nearly zero. After some thought, we realized that the reason for the “too many connections” errors was that queries that normally finished in a few milliseconds were instead taking 30+ seconds because of the IO blocking we were seeing in vmstat. With queries running much longer than usual, they quickly piled up on top of each other and we hit the limit of 250.

So the root behavior we were seeing that seemed to be causing the problem was:

  • Periodically, something was aggressively attempting to reclaim cached memory.
  • While memory was being reclaimed, IO was almost completely blocked.
  • While IO was blocked, database queries naturally took longer than usual, leading to rapid consumption of available database connections.

Now that we seemed to have a root cause (or at least a more concrete symptom), we were able to formulate some relevant search queries. Simply Googling for linux memory reclaim turned up, on the first page of results, a thread from the PostgreSQL performance list that seemed to describe very similar behavior on machines with a lot of cores. DB2 has 16 physical cores, and with hyper-threading enabled we had 32 logical cores. Maybe that’s “a lot.” We found some other similar threads and even this blog post written on the very same day we started Googling (how’s that for coincidence?). We also found references to Linux & NUMA memory-reclaim problems for MySQL and MongoDB. Most posts seemed to indicate there’s a problem in the 2.6.32 kernel line with optimizations related to NUMA memory, and the suggested remedy was to set /proc/sys/vm/zone_reclaim_mode to 0 if using NUMA. Well, /proc/sys/vm/zone_reclaim_mode was already zero, and we weren’t using NUMA anyway – we were using interleaved memory. But this was enough to plant a seed of doubt in our minds about the 2.6.32 kernel. DB1 had been running for well over a year without issue on a 2.6.18 kernel.
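
Checking both of those takes a few seconds (numactl comes from the numactl package on CentOS):

# Confirm zone reclaim is off:
cat /proc/sys/vm/zone_reclaim_mode

# Show the NUMA layout. With node interleaving enabled in the BIOS this
# typically reports a single node instead of one per socket.
numactl --hardware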

On Thursday, thankfully, we discovered that “manually” clearing the cache caused the IO blockage to clear up immediately, at least until the system had time to fill the cache back up and the memory-reclaim process kicked in again. The cache is dropped by executing echo 1 > /proc/sys/vm/drop_caches. Every time queries began to block, dropping the cache immediately cleared it up. Since we now had a stopgap workaround, we cronned the cache drop to run every 30 minutes. This let us get through Friday and the weekend without having to babysit the database too much for failed connections and the like.
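
The cron job amounted to something like this (the exact path and schedule are illustrative):

# /etc/cron.d/drop_caches: flush the page cache every 30 minutes as a stopgap.
# echo 1 drops only the page cache (not dentries/inodes).
*/30 * * * * root /bin/sh -c 'echo 1 > /proc/sys/vm/drop_caches'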

On Friday we decided the best course of action was to put a QLogic fibre channel HBA in DB1 (with its presumably known-good 2.6.18-series kernel), provision some LUNs on it, set it up as a streaming replica and fail back over to it. This way we’d be on a good server but still have the IO benefits of the SSD. We ordered the card from Newegg Friday and received it Monday.

Tactical Retreat

I installed the card Monday (8/27), configured DB1 as a replica, tested it Tuesday, and Wednesday evening we cut back over to DB1 as our primary. With fingers crossed, we went into Thursday and thankfully everything was fine. While overall performance was a little slower than it had been on DB2 (before it had gotten really bad), it was still blazingly fast compared to how it had been on spinning disk. But most importantly there was no evidence of memory-reclaim problems.

We set DB2 up as a hot standby and performed some testing on it to try and recreate the problem. Unfortunately we didn’t have a lot of success (possibly because hot standbys are read-only replicas of the primary), but based on the grumbling about NUMA and memory reclaim in the 2.6.32 kernel line, we felt the best course of action was to go to an older, proven kernel. Since DB1 seemed to be fine with its 2.6.18-238.9.1.el5 kernel, we opted to do a clean install of CentOS 5.8, running the latest in the CentOS 2.6.18 line: 2.6.18-308.13.1.el5.

DB2 reborn

We installed CentOS 5.8, remounted the existing LUNs, resynced it with the primary and did some more testing. We didn’t notice any problems, but since we weren’t really able to reproduce the problem reliably in the first place that wasn’t much consolation. On Thursday, 9/6, we cut back over to DB2. As I write this, DB2 has been in service for several weeks without any sign of the memory reclamation issue, or, more importantly, the IO/query blocking.

Conclusions

While we’re really happy to have (apparently) resolved the problem, we found the resolution somewhat unsatisfying. We had tested DB2 pretty extensively internally before putting it into production the first time, literally running millions of real-world queries against it, and we never encountered anything like the blocking we saw once it took on the production workload. Prior to making it our primary DB we had used it for several days as a hot-standby read-only reporting server and still never saw an issue. After we cut back from DB2 to DB1 we did yet more testing and still weren’t able to reproduce the query-blocking behavior. While this definitely felt like a problem related to the Linux kernel, we didn’t have any hard evidence, and going back to a 2.6.18-series kernel was to some degree a leap of faith. Thankfully, so far it’s worked out for us.

Though we didn’t ultimately figure out what it was in the kernel that caused the problem, we did learn a lot from the experience. Most importantly, we ironed out our database cutover procedure to the point where a cutover isn’t really a major event anymore. Since at some point we’ll clearly want to upgrade from CentOS 5.8, it’s good to know we can always fail back to another database with minimal interruption if a problem like this arises in the future.

Graphing SSH dictionary attacks with HighCharts

After my 10-year-old basement Linux server died this week from a power outage, I took the sad step of giving up on it. It’s died before and I’ve patched it back together with a new power supply here or an add-on PCI SATA card there, but I finally decided to throw in the towel, since I had a newer old computer that had been sitting idle for several years. The one that died was an Athlon K7 750 MHz with 512 MB of RAM. The new one is a 2 GHz Athlon (3200+) with 1 GB. For my uses the specs don’t really matter much, but it’s nice to have more power for free.

I put CentOS 6 on it and configured Samba and copied all the data off the old machine and was back up and running within a few hours. Since I forward ports through my FiOS router to this box I did my standard lockdown procedure, including adding myself to the AllowUsers in sshd_config. Afterwards I took a look in /var/log/secure and saw the typical flood of dictionary attacks trying to get in as root or bob or tfeldman or jweisz. I have iptables configured to rate-limit SSH connections to 2 per 5 seconds per IP so the box doesn’t get DoSed out of existence, but some stuff does make it through to sshd.
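
The rate limiting is the standard iptables recent-module recipe; my rules are roughly equivalent to this (not necessarily my exact ruleset):

# Track new SSH connections per source IP; drop an IP's connection attempts
# once it has opened more than 2 in the last 5 seconds.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update \
    --seconds 5 --hitcount 3 --name SSH -j DROP
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT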

Looking through /var/log/secure, I got to thinking it would be interesting if there was some way to visualize the attacks in a handy graph. Then I remembered, oh, wait, I can do that.

I wrote a perl script to parse the attacks out of /var/log/secure and insert them into a Postgres DB. This turned out to be pretty easy. Then I thought it would be more interesting to tie the IP of each attack to its originating country. I’ve used MaxMind’s GeoIP DB pretty extensively before, but this time I was looking for something free. That’s when I remembered that MaxMind has a free GeoIP DB: GeoLiteCity. I grabbed it, yum-installed the Perl lib, and added the geo data to the attack DB. Rather than worry about normalizing the schema I just shoved the info into the same table. Life is easier this way, and it’s just a for-fun project.
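
The lines parse-secure.pl cares about are the standard sshd failure messages, which you can eyeball with a quick grep; they look roughly like this (the PID and IP below are made up):

# Pull out the failed login attempts sshd has logged:
grep 'Failed password for' /var/log/secure | tail -5

# A typical line:
# Nov 17 03:12:01 lunix2011 sshd[12345]: Failed password for invalid user bob from 203.0.113.7 port 51234 ssh2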

So I got that all working and ran it against the existing rotated /var/log/secure files via

[root@lunix2011 ~]# zcat /var/log/secure-20111117.gz | perl parse-secure.pl 

I wrote ssh.php to see what’s in the table:

[Screenshot: ssh.php list of hacking attempts]

So now that the data was all in place, it was time to move on to the graphs, which is what I really wanted to do. The last time I wanted to graph data programmatically I used JPGraph, which does everything in PHP and is super versatile. But I wanted something… cooler. Maybe something interactive. A little Googling turned up Highcharts, which is absolutely awesome and does everything in JavaScript. I basically modified some of their example charts, pumped my data into them, and got the charts below.

Pie chart of attacks grouped by country for the past 30 days:

[Pie chart of attacks by country]

Bar graph of attacks per day:

[Bar graph of daily attacks]

So, that’s that. The code is on GitHub if anyone wants to play around with it. I’ve cronned parse-secure.pl to run every 5 minutes so the data gets updated automatically.
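
The cron entry is about as simple as it gets; something along these lines (paths are placeholders, and it assumes the script skips lines it has already inserted):

# /etc/cron.d/parse-secure: import new attempts every 5 minutes.
*/5 * * * * root /usr/bin/perl /root/parse-secure.pl < /var/log/secure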

PostgreSQL query to determine the largest tables in the database

This is just a handy query I use from time to time to see which tables are growing madly, and how much of the growth is index bloat.

select
    schemaname,
    tablename,
    -- the table's own (heap) size, human-readable
    pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) as size_p,
    -- total size including indexes and TOAST, in raw bytes (for sorting)
    pg_total_relation_size(schemaname || '.' || tablename) as siz,
    pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) as total_size_p,
    -- everything other than the heap itself (indexes, TOAST, etc.), in bytes
    pg_total_relation_size(schemaname || '.' || tablename) - pg_relation_size(schemaname || '.' || tablename) as index_size,
    -- the same thing as a percentage of the total, guarding against divide-by-zero
    (100*(pg_total_relation_size(schemaname || '.' || tablename) - pg_relation_size(schemaname || '.' || tablename)))/case when pg_total_relation_size(schemaname || '.' || tablename) = 0 then 1 else pg_total_relation_size(schemaname || '.' || tablename) end || '%' as index_pct
from pg_tables
order by siz desc limit 20;

This returns the schema, the table, the table’s own size on disk (human-readable), the total size on disk including indexes (in raw bytes for sorting and in human-readable form), and the difference between the two, both in bytes and as a percentage of the total. That difference is mostly indexes, though it also includes TOAST data.

Sample result from our OpenFire Jabber server:

 schemaname |      tablename       |   size_p   |    siz    | total_size_p | index_size | index_pct
------------+----------------------+------------+-----------+--------------+------------+-----------
 public     | ofconversation       | 71 MB      | 159236096 | 152 MB       |   84623360 | 53%
 public     | ofconparticipant     | 38 MB      |  95395840 | 91 MB        |   55394304 | 58%
 public     | ofpresence           | 10184 kB   |  10452992 | 10208 kB     |      24576 | 0%
 public     | ofpubsubitem         | 5336 kB    |   7733248 | 7552 kB      |    2269184 | 29%
 public     | ofid                 | 4008 kB    |   4120576 | 4024 kB      |      16384 | 0%
 public     | ofpubsubsubscription | 912 kB     |   1458176 | 1424 kB      |     524288 | 35%
 public     | ofpubsubaffiliation  | 728 kB     |   1196032 | 1168 kB      |     450560 | 37%
 public     | ofoffline            | 832 kB     |    942080 | 920 kB       |      90112 | 9%
 pg_catalog | pg_depend            | 320 kB     |    794624 | 776 kB       |     466944 | 58%
 pg_catalog | pg_attribute         | 384 kB     |    761856 | 744 kB       |     368640 | 48%
 pg_catalog | pg_proc              | 376 kB     |    753664 | 736 kB       |     368640 | 48%
 pg_catalog | pg_rewrite           | 72 kB      |    270336 | 264 kB       |     196608 | 72%
 pg_catalog | pg_description       | 136 kB     |    245760 | 240 kB       |     106496 | 43%
 pg_catalog | pg_operator          | 112 kB     |    237568 | 232 kB       |     122880 | 51%
 pg_catalog | pg_class             | 88 kB      |    212992 | 208 kB       |     122880 | 57%
 pg_catalog | pg_type              | 64 kB      |    155648 | 152 kB       |      90112 | 57%
 pg_catalog | pg_statistic         | 72 kB      |     98304 | 96 kB        |      24576 | 25%
 pg_catalog | pg_amop              | 24 kB      |     90112 | 88 kB        |      65536 | 72%
 pg_catalog | pg_conversion        | 16 kB      |     90112 | 88 kB        |      73728 | 81%
 pg_catalog | pg_constraint        | 8192 bytes |     81920 | 80 kB        |      73728 | 90%
(20 rows)

Time: 8.917 ms