Goodbye, pg_dump

I’ve been a Postgres user and administrator for a while. Over the years, my views on backups have evolved.

Originally, like most people, I started out with good old pg_dump. With a reasonably small database (under 50 GB), dumping to a flat text file is a fine option. I’d generally do something like pg_dump -Upostgres dbname | gzip > dbname.sql.gz to compress it on the fly and save space. For years this seemed perfect: the entire database dumped in a single transaction to a single file that can be restored anywhere.
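For reference, the basic pattern looks something like this (database and file names are placeholders):

# compressed dump
pg_dump -U postgres dbname | gzip > dbname.sql.gz

# restore into an empty database elsewhere
createdb -U postgres dbname
gunzip -c dbname.sql.gz | psql -U postgres dbname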

But as my databases grew larger and larger, the time it took to do a pg_dump grew as well. At a previous job, the database grew to nearly 2 TB and the pg_dump took nearly 18 hours. By that point we’d already scaled the pg_dump schedule back from daily to weekly, then to three times a month, and finally to semi-monthly. Not only was it slow, but since it operated in a single transaction it wreaked havoc with normal database operation, blocking any queries that needed locks conflicting with the ones the dump was holding.

When we moved the database from a physical RAID to a volume on our SAN, that gave us the opportunity to use LUN snapshotting rather than pg_dump (I just remembered I already wrote about that here). This let us move to a monthly pg_dump and more frequent snapshot-level backups that took up very little space. This was ideal on Compellent since the snapshots would auto-expire after however long you specified.

When I started at Yodle we were doing nightly pg_dumps, and pretty soon we ran into the same problems I’d seen at Didit with the dump itself interfering with normal DB operation – when I started, the dump would begin at midnight and run until 7-8 AM, and after a few months it would still be running at noon. We discussed moving to WAL archiving and taking a base backup to NFS, but that would require a pretty massive amount of space, and as anybody who uses “enterprise storage” knows, that’s not something you want to do. We discussed building a whitebox file server for backups, but nobody was really in love with that option – we’re trying to reduce the reliance on physical machines as much as possible. We talked about pushing it all to S3, but that seemed rather difficult.

When I attended NYC PgDay earlier this year, there was lots of discussion about WAL-E. I had never heard of WAL-E, so I looked it up and was impressed. Basically, WAL-E handles archiving of WAL to S3, but first compresses and encrypts it with GPG. It also handles pushing the base backup to S3, likewise compressed and encrypted. This was just what we were looking for. We set it up and, amazingly, it worked perfectly. After a few weeks (and after confirming we could restore from the WAL-E backups) we moved our pg_dump to weekly, on the weekend when it doesn’t interfere with any user processes. We do a WAL-E base backup every 3-4 days and retain 3 of them. We retain all the WAL, so we can restore the DB to any point within the last ~10 days if needed. The best part is that it’s faster than pg_dump, and since the base backup doesn’t operate in a transaction (it’s a filesystem-level backup rather than an application-level backup) it doesn’t interfere with user queries. There’s of course elevated IO during this time, but our SAN has more than enough bandwidth.
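The moving parts are pretty small. A sketch of what the setup looks like – the envdir path, data directory, and schedule below are assumptions for illustration, not our exact configuration:

# postgresql.conf -- push each completed WAL segment to S3 via WAL-E
wal_level = archive
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'

# crontab for the postgres user: periodic base backup plus pruning
0 2 */3 * * envdir /etc/wal-e.d/env wal-e backup-push /var/lib/pgsql/9.0/data
0 6 */3 * * envdir /etc/wal-e.d/env wal-e delete --confirm retain 3

The delete/retain step keeps the most recent base backups (and the WAL needed to restore from them), which roughly lines up with the ~10-day window mentioned above.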

We set up some basic monitoring of S3 (check the age of the most recent WAL and log it in Zabbix) just to ensure the backups are actually happening, and we’re at the point where we’re discussing moving pg_dump to monthly, or simply not doing it at all. Overall, WAL-E has been a huge win for us, enabling better, faster backups that don’t interfere with the DB itself and that, while not free, aren’t ridiculously expensive. And since everything lives in its own S3 bucket, you can tweak the bucket settings (e.g. enable RRS) to save money, and Amazon tells you exactly how much your backups cost you.
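The check itself is nothing fancy – something along these lines, where the bucket, prefix, and Zabbix host/key are all placeholders:

#!/bin/bash
# Report the age, in seconds, of the newest WAL segment in the backup bucket.
BUCKET="s3://example-wal-e-bucket/db2/wal_005/"
# s3cmd ls prints "DATE TIME SIZE s3://..."; grab the newest timestamp
latest=$(s3cmd ls "$BUCKET" | awk '{print $1" "$2}' | sort | tail -n1)
age=$(( $(date +%s) - $(date -d "$latest" +%s) ))
# ship it to Zabbix and alert there if it exceeds a threshold
zabbix_sender -z zabbix.example.com -s db2 -k wal_e.newest_wal_age -o "$age"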

Problems with PostgreSQL 9.0 on CentOS 6.3 with a 2.6.32-series kernel

Note: this is cross-posted from the Yodle Tech Blog.

We recently migrated our Postgres database to a significantly more powerful machine. While our existing server was doing okay load-wise, we were approaching 90% full on its Postgres volume. We’d also recently acquired an awesome new PureStorage SSD-based SAN and were excited to move our database onto it. For comparison, here are the specs for the two systems:

Old server (DB1):
  • CPU: 2x 6-core Xeon X5690 at 3.47 GHz
  • Memory: 96 GB, 1333 MHz
  • Storage: Dell PERC H800 SAS HBA, Dell MD1220 drive enclosure, 24x 15k RPM SAS drives configured in RAID 10
  • OS: CentOS 5.6 x86_64
  • Kernel: 2.6.18-238.9.1.el5

New server (DB2):
  • CPU: 2x 8-core E5-2690 at 2.90 GHz
  • Memory: 128 GB, 1600 MHz
  • Storage: QLogic QLE2562 dual-port 8 Gbit FC HBA, PureStorage SSD-based SAN
  • OS: CentOS 6.3 x86_64
  • Kernel: 2.6.32-279.2.1.el6.x86_64

As you can probably guess, moving from spinning disk onto SSD was the centerpiece of the upgrade. We’d tested the PureStorage system pretty extensively and used it in some secondary databases and seen some amazing improvements. We were excited to see how it would handle our production workload.

Basic Configuration

DB1 was built in early 2011 and was running CentOS 5.6. We’re generally pretty big fans of CentOS, so when building DB2 we went with CentOS 6.3, which was the latest version. We followed our standard installation procedure, which essentially means doing a minimal installation and then installing just what we needed, and of course updating all packages to the latest versions.

We provisioned a big LUN for the Postgres data and 500 GB each for Postgres “temp” and logging. While a separate partition/volume for pg_log/pg_xlog is a common practice, we also created a separate volume for temp as a safeguard against runaway queries that generate lots of temp files due to hash-joins or sorting gigantic result sets. In the past we’ve run into situations where the temporary files consume all available space, and if the sort space is on the same partition as your data you run the risk of crashing the database if Postgres attempts to allocate space and there’s none to be had. Since all the SAN volumes are thin provisioned, there’s no penalty for creating separate volumes, and we get the additional benefit of being able to specify different mount options for each volume. (It’s worth noting that PostgreSQL 9.2 allows you to specify a maximum size for temp space, but since we’re still on 9.0 that option isn’t available to us.) We then created XFS filesystems on each volume. We didn’t bother partitioning the volumes, just created the filesystems on the raw devices. This way, in the event we need to expand a volume, we don’t have to worry about growing the partition, just the LUN (in the SAN) and the filesystem.
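As a rough sketch of what that layout looks like (device names, mount points, and the temp-tablespace approach here are illustrative assumptions, not our exact setup):

# XFS straight onto the raw LUNs -- no partition table to grow later
mkfs.xfs /dev/mapper/pgdata
mkfs.xfs /dev/mapper/pgtemp
mkfs.xfs /dev/mapper/pgxlog

# /etc/fstab
/dev/mapper/pgdata  /pgdata  xfs  noatime  0 0
/dev/mapper/pgtemp  /pgtemp  xfs  noatime  0 0
/dev/mapper/pgxlog  /pgxlog  xfs  noatime  0 0

# point temporary files at the dedicated volume
chown postgres:postgres /pgtemp
psql -U postgres -c "CREATE TABLESPACE pgtemp LOCATION '/pgtemp'"
# and in postgresql.conf: temp_tablespaces = 'pgtemp'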

The initial cutover

We configured streaming replication between DB1 and DB2 to get DB2 in sync with DB1. On the night of Sunday, August 19th, we cut over from DB1 to DB2 as the primary. All things considered, everything went well and our database was only inaccessible to our servers for a few minutes. (Incidentally, while the database was “down” all our client sites were fine since they are completely decoupled from the database.) After verifying everything was up we ran some test queries and confirmed the performance was awesome, gave ourselves some virtual high-fives and went to bed.

The next day, Monday, started out well. Everything that touched the DB was much faster. Pages that took 3-5 seconds to load were nearly instant. Jobs that took hours to run finished in under 15 minutes. It was as if someone had hit the turbo button on our whole system. Everything was wonderful. Or so we thought.

The problems begin

Around midday Monday we started seeing lots of errors thrown in our applications, with the cause being “too many clients” from the database. Our connection limit had been set at 250 forever, so it seemed strange to suddenly be hitting it out of the blue. We had recently done a code release, so our first line of thought was that there was a bug in the code. We use Hibernate, and it’s pretty easy to blame Hibernate. The problem was intermittent – every 20-30 minutes we’d see a bunch of SQLExceptions due to “too many connections”, which would last 1-2 minutes, and then it would clear up and everything would be fine again. We made it through the rest of the day, and in the evening the problem seemed to have cleared up. We thought maybe it was a transient problem that had resolved itself, but we saw a bunch of the same exceptions in some jobs that ran overnight. Our system is pretty resilient though, and most connection attempts automatically retried until they succeeded. The problem was a high priority, but since it wasn’t affecting end users we were able to limp along. Tuesday was much the same as Monday and we continued investigating. Wednesday the real trouble started.

The real trouble starts

Our nightly pg_dump, which would normally finish by 8:30 AM, was only halfway complete by 8 AM on Wednesday. We had the same too-many-clients exceptions throughout the night. But around midday we started getting reports from users that our UI was really slow: some pages that had been loading nearly instantly were taking 30-60 seconds to load, or more. This quickly turned the problem into a crisis. We saw the same “too many connections” errors in the frontend logs as we had in the backend. The basic behavior was the same – we’d get a wave of “too many connections” for a couple minutes, then it would clear up for a while, and then the cycle would repeat.

We finally had to admit that maybe the problem wasn’t in the code, but in the database. To mitigate the errors, we lowered the active connection limits in our connection pools. That reduced the “too many connections” errors, but page loads would now hang while the pool waited for a slot to open. This wasn’t much of an improvement for end users – rather than failing quickly, their page loads hung indefinitely.

A minor breakthrough – and a workaround

Thursday morning we were able to get much closer to the actual problem by running vmstat on the database server itself. vmstat shows the amount of memory cached vs. free, along with IO in/out. As anyone who’s run a Postgres database is probably aware, it’s common to see almost 100% of available memory “used” in top, even when the database doesn’t seem to be doing very much. This is due to the kernel’s page cache.

What we saw in vmstat, however, was that during the “problem” periods the “cached” figure was decreasing, “free” was increasing, and IO in/out dropped to nearly zero. After some thought, we realized that the reason for the “too many connections” errors was that queries that normally finished in a few milliseconds were instead taking 30+ seconds due to the IO blocking we were seeing in vmstat. With queries running much longer than usual, they quickly piled up on top of each other and we hit the connection limit of 250.
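To make the symptom concrete, this is roughly what we were watching (column layout varies a bit between procps versions):

# sample memory and IO counters every 5 seconds
vmstat 5
# during a "problem" period we'd see, roughly:
#   - the "cache" column shrinking while "free" grows (page cache being reclaimed)
#   - "bi"/"bo" (blocks in/out per second) collapsing to near zero
#   - once reclaim finished (or we dropped the cache), bi/bo recovered and
#     queries went back to finishing in milliseconds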

So the root behavior we were seeing that seemed to be causing the problem was:

  • Periodically, something was aggressively attempting to reclaim cached memory.
  • While memory was being reclaimed, IO was almost completely blocked.
  • While IO was blocked, database queries naturally took longer than usual, leading to rapid consumption of available database connections.

Now that we seemed to have a root cause (or at least a more concrete symptom) we were able to formulate some relevant search queries. Simply Googling for linux memory reclaim turned up, on the first page, a thread from the PostgreSQL performance list that seemed to describe very similar behavior on machines with a lot of cores. DB2 has 16 physical cores, and we had hyper-threading enabled, for a total of 32 logical cores. Maybe that’s “a lot.” We found some other similar threads and even this blog post, written on the very same day we started Googling (how’s that for coincidence?). We also found references to Linux NUMA memory-reclaim problems affecting MySQL and MongoDB. Most posts seemed to indicate there’s a problem in the 2.6.32 kernel line with optimizations related to NUMA memory; the suggested remedy, if you were having problems, was to set /proc/sys/vm/zone_reclaim_mode to 0. Well, /proc/sys/vm/zone_reclaim_mode was already zero, and we weren’t using NUMA anyway – we were using interleaved memory. But this was enough to plant the seed of doubt in our minds about the 2.6.32 kernel. DB1 had been running for well over a year without issue on a 2.6.18 kernel.
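Checking both of those takes a few seconds, for what it’s worth (numactl isn’t always installed by default):

cat /proc/sys/vm/zone_reclaim_mode   # already 0 in our case
numactl --hardware                   # shows the NUMA nodes and memory layout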

On Thursday, thankfully, we discovered that “manually” clearing the cache caused the IO blockage to clear up immediately, at least until the system had time to fill the cache back up and the memory-reclaim process kicked in again. The cache is dropped by executing echo 1 > /proc/sys/vm/drop_caches. Every time queries began to block, dropping the cache immediately cleared it up. Since we now had a stopgap workaround, we cronned the cache drop to run every 30 minutes. This let us get through Friday and the weekend without having to babysit the database too much for failed connections and the like.
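The stopgap cron entry looked something like this (a sketch; we ran it from root’s crontab):

# drop the page cache every 30 minutes -- a workaround, not a fix
*/30 * * * * /bin/sync && /bin/echo 1 > /proc/sys/vm/drop_caches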

On Friday we decided the best course of action was to put a QLogic fibre channel HBA in DB1 (with its presumably known-good 2.6.18-series kernel), provision some LUNs on it, set it up as a streaming replica and fail back over to it. This way we’d be on a good server but still have the IO benefits of the SSD. We ordered the card from Newegg Friday and received it Monday.

Tactical Retreat

I installed the card Monday (8/27), configured DB1 as a replica, tested it Tuesday, and Wednesday evening we cut back over to DB1 as our primary. With fingers crossed, we went into Thursday and thankfully everything was fine. While overall performance was a little slower than it had been on DB2 (before it had gotten really bad), it was still blazingly fast compared to how it had been on spinning disk. But most importantly there was no evidence of memory-reclaim problems.

We set DB2 up as a hot standby and performed some testing on it to try to recreate the problem. Unfortunately we didn’t have a lot of success (possibly due to hot standbys being read-only replicas of the primary), but based on the grumbling about NUMA and memory reclaim in the 2.6.32 kernel line, we felt the best course of action was to go to an older, proven kernel. Since DB1 seemed to be fine with its 2.6.18-238.9.1.el5 kernel, we opted to do a clean install of CentOS 5.8, running the latest in the CentOS 2.6.18 line: 2.6.18-308.13.1.el5.

DB2 reborn

We installed CentOS 5.8, remounted the existing LUNs, resynced it with the primary and did some more testing. We didn’t notice any problems, but since we weren’t really able to reproduce the problem reliably in the first place that wasn’t much consolation. On Thursday, 9/6, we cut back over to DB2. As I write this, DB2 has been in service for several weeks without any sign of the memory reclamation issue, or, more importantly, the IO/query blocking.

Conclusions

While we’re really happy to have (apparently) resolved the problem, we found the resolution somewhat unsatisfying. We had tested DB2 pretty extensively in-house before putting it into production the first time, literally running millions of real-world queries against it, and never encountered anything like the blocking we saw once it took on the production workload. Prior to making it our primary DB we had used it for several days as a hot-standby read-only reporting server and still never saw an issue. After we cut back from DB2 to DB1 we did yet more testing and still weren’t able to reproduce the query-blocking behavior. While this definitely felt like a problem related to the Linux kernel, we didn’t have any hard evidence, and going back to a 2.6.18-series kernel was to some degree a leap of faith. Thankfully, so far it’s worked out for us.

Though we never ultimately figured out what it was in the kernel that caused the problem, we did learn a lot from the experience. Most importantly, we ironed out our database cutover procedure to the point where a cutover is no longer a major event. Since at some point we’ll clearly want to upgrade from CentOS 5.8, we know we can always fail back to another database with minimal interruption if a problem like this arises in the future.

Benchmarking disk IO on ext3 vs ext4 vs xfs with fio

With the old database phased out, I figured this was a good time to benchmark IO on it before either repurposing it or mothballing it. In the past I’ve used dd for elementary sequential read/write testing, but I recently found fio, which is much more versatile. Written by Jens Axboe, fio lets you control concurrency, block size, and many other parameters, and it reports lots more data, most notably IOPS.

I created 4 fio config files, basically the same as the ones below, covering direct vs. buffered disk access and sequential vs. random writes. All use an 8k block size since that’s what Postgres uses by default.


[root@link ~]# cat random-write-test.fio
[random-write]
rw=randwrite
size=5G
direct=1
directory=/msa70/fio/data/
numjobs=5
group_reporting
name=random-write-direct
bs=8k
runtime=10
[root@link ~]# cat sequential-write-test-buf.fio
[sequential-write]
rw=write
size=5G
direct=0
directory=/msa70/fio/data/
numjobs=5
group_reporting
name=sequential-write-buffered
bs=8k
runtime=10

First I ran fio against a freshly created ext3 volume on our HP MSA70 with 24x 10k RPM SAS disks in RAID 10. The mkfs command used was mke2fs -j -m0 /dev/cciss/c0d0. The controller card is an HP P800. Here’s the output:

Full ext3 results:

[root@link ~]# time fio random-write-test.fio random-write-test-buf.fio sequential-write-test.fio sequential-write-test-buf.fio
random-write-direct: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
random-write-direct: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
random-write-buf: (g=1): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
random-write-buf: (g=1): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
sequential-write-direct: (g=2): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
sequential-write-direct: (g=2): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
sequential-write-buffered: (g=3): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
sequential-write-buffered: (g=3): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
fio 1.57
Starting 20 processes
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [________________W___] [19.8% done] [0K/0K /s] [0 /0  iops] [eta 03m:14s]
random-write-direct: (groupid=0, jobs=5): err= 0: pid=2447
  write: io=596856KB, bw=56946KB/s, iops=7118 , runt= 10481msec
    clat (usec): min=67 , max=944937 , avg=645.32, stdev=4647.69
     lat (usec): min=67 , max=944937 , avg=645.46, stdev=4647.69
    bw (KB/s) : min=  144, max=47152, per=22.71%, avg=12933.03, stdev=4985.93
  cpu          : usr=0.26%, sys=1.48%, ctx=74614, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/74607/0, short=0/0/0
     lat (usec): 100=12.22%, 250=80.24%, 500=6.59%, 750=0.20%, 1000=0.16%
     lat (msec): 2=0.07%, 4=0.01%, 10=0.10%, 20=0.05%, 50=0.08%
     lat (msec): 100=0.10%, 250=0.14%, 500=0.04%, 750=0.01%, 1000=0.01%
random-write-buf: (groupid=1, jobs=5): err= 0: pid=2452
  write: io=6377.3MB, bw=472902KB/s, iops=59112 , runt= 13809msec
    clat (usec): min=8 , max=4490.8K, avg=77.54, stdev=3751.21
     lat (usec): min=8 , max=4490.8K, avg=77.66, stdev=3751.21
    bw (KB/s) : min=    7, max=438688, per=34.13%, avg=161414.56, stdev=63682.89
  cpu          : usr=2.27%, sys=24.01%, ctx=1596, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/816288/0, short=0/0/0
     lat (usec): 10=0.06%, 20=83.59%, 50=16.19%, 100=0.04%, 250=0.04%
     lat (usec): 500=0.03%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.03%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%, >=2000=0.01%
sequential-write-direct: (groupid=2, jobs=5): err= 0: pid=2457
  write: io=2832.0KB, bw=286105 B/s, iops=34 , runt= 10136msec
    clat (usec): min=75 , max=1506.6K, avg=157500.69, stdev=96237.55
     lat (usec): min=75 , max=1506.6K, avg=157500.81, stdev=96237.54
    bw (KB/s) : min=    6, max=  444, per=19.58%, avg=54.61, stdev=13.65
  cpu          : usr=0.00%, sys=0.00%, ctx=572, majf=0, minf=140
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/354/0, short=0/0/0
     lat (usec): 100=4.80%, 250=33.05%, 500=0.56%
     lat (msec): 50=1.13%, 100=18.36%, 250=25.99%, 500=9.04%, 750=4.24%
     lat (msec): 1000=1.41%, 2000=1.41%
sequential-write-buffered: (groupid=3, jobs=5): err= 0: pid=2462
  write: io=255624KB, bw=19933KB/s, iops=2491 , runt= 12824msec
    clat (usec): min=7 , max=9411.7K, avg=1443.28, stdev=33091.95
     lat (usec): min=7 , max=9411.7K, avg=1443.38, stdev=33091.95
    bw (KB/s) : min=   31, max=43687, per=29.58%, avg=5896.30, stdev=2696.81
  cpu          : usr=0.05%, sys=0.89%, ctx=600, majf=0, minf=136
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/31953/0, short=0/0/0
     lat (usec): 10=13.51%, 20=77.52%, 50=8.68%, 100=0.07%, 250=0.08%
     lat (usec): 500=0.03%, 750=0.01%
     lat (msec): 4=0.01%, 10=0.01%, 20=0.02%, 50=0.01%, 250=0.01%
     lat (msec): 750=0.02%, 2000=0.01%, >=2000=0.03%

Run status group 0 (all jobs):
  WRITE: io=596856KB, aggrb=56946KB/s, minb=58313KB/s, maxb=58313KB/s, mint=10481msec, maxt=10481msec

Run status group 1 (all jobs):
  WRITE: io=6377.3MB, aggrb=472902KB/s, minb=484251KB/s, maxb=484251KB/s, mint=13809msec, maxt=13809msec

Run status group 2 (all jobs):
  WRITE: io=2832KB, aggrb=279KB/s, minb=286KB/s, maxb=286KB/s, mint=10136msec, maxt=10136msec

Run status group 3 (all jobs):
  WRITE: io=255624KB, aggrb=19933KB/s, minb=20411KB/s, maxb=20411KB/s, mint=12824msec, maxt=12824msec

Disk stats (read/write):
  cciss!c0d0: ios=798/24542244, merge=0/937067, ticks=15866/140262853, in_queue=140448875, util=89.71%

real    18m35.082s
user    0m4.328s
sys     5m16.479s
[root@link ~]#

That’s a lot of data; the most interesting numbers are these:

random-write-direct: (groupid=0, jobs=5): err= 0: pid=2447
  write: io=596856KB, bw=56946KB/s, iops=7118 , runt= 10481msec
random-write-buf: (groupid=1, jobs=5): err= 0: pid=2452
  write: io=6377.3MB, bw=472902KB/s, iops=59112 , runt= 13809msec
sequential-write-direct: (groupid=2, jobs=5): err= 0: pid=2457
  write: io=2832.0KB, bw=286105 B/s, iops=34 , runt= 10136msec
sequential-write-buffered: (groupid=3, jobs=5): err= 0: pid=2462
  write: io=255624KB, bw=19933KB/s, iops=2491 , runt= 12824msec

I then reformatted the RAIDed LUN with ext4 with mkfs.ext4 -m0 /dev/cciss/c0d0 and ran the same tests again:
Full ext4 results:

[root@link ~]# time fio random-write-test.fio random-write-test-buf.fio sequential-write-test.fio sequential-write-test-buf.fio
random-write-direct: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
random-write-direct: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
random-write-buf: (g=1): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
random-write-buf: (g=1): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
sequential-write-direct: (g=2): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
sequential-write-direct: (g=2): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
sequential-write-buffered: (g=3): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
sequential-write-buffered: (g=3): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
fio 1.57
Starting 20 processes
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 5 (f=5): [_______________WWWWW] [21.5% done] [0K/126.1M /s] [0 /15.9K iops] [eta 02m:30s]
random-write-direct: (groupid=0, jobs=5): err= 0: pid=2379
  write: io=589392KB, bw=58933KB/s, iops=7366 , runt= 10001msec
    clat (usec): min=77 , max=2522.4K, avg=781.82, stdev=9768.99
     lat (usec): min=77 , max=2522.4K, avg=781.95, stdev=9768.99
    bw (KB/s) : min=  515, max=44176, per=22.05%, avg=12996.62, stdev=5332.75
  cpu          : usr=0.26%, sys=2.86%, ctx=73715, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/73674/0, short=0/0/0
     lat (usec): 100=2.54%, 250=92.13%, 500=4.76%, 750=0.14%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.07%, 20=0.04%, 50=0.07%
     lat (msec): 100=0.09%, 250=0.11%, 500=0.03%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%, >=2000=0.01%
random-write-buf: (groupid=1, jobs=5): err= 0: pid=2384
  write: io=6707.1MB, bw=670727KB/s, iops=83840 , runt= 10241msec
    clat (usec): min=7 , max=1582.5K, avg=57.04, stdev=1910.33
     lat (usec): min=7 , max=1582.5K, avg=57.15, stdev=1910.33
    bw (KB/s) : min=   30, max=518784, per=25.08%, avg=168240.26, stdev=84472.16
  cpu          : usr=2.51%, sys=25.56%, ctx=35766, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/858614/0, short=0/0/0
     lat (usec): 10=2.14%, 20=91.72%, 50=5.11%, 100=0.31%, 250=0.15%
     lat (usec): 500=0.19%, 750=0.15%, 1000=0.09%
     lat (msec): 2=0.08%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%
sequential-write-direct: (groupid=2, jobs=5): err= 0: pid=2389
  write: io=839760KB, bw=83959KB/s, iops=10494 , runt= 10002msec
    clat (usec): min=70 , max=300244 , avg=473.09, stdev=3815.84
     lat (usec): min=71 , max=300244 , avg=473.22, stdev=3815.84
    bw (KB/s) : min= 8198, max=30528, per=20.27%, avg=17014.88, stdev=1677.78
  cpu          : usr=0.32%, sys=3.15%, ctx=104986, majf=0, minf=140
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/104970/0, short=0/0/0
     lat (usec): 100=73.60%, 250=26.18%, 500=0.01%, 750=0.01%
     lat (msec): 50=0.01%, 100=0.01%, 250=0.18%, 500=0.02%
sequential-write-buffered: (groupid=3, jobs=5): err= 0: pid=2394
  write: io=1730.7MB, bw=163683KB/s, iops=20460 , runt= 10827msec
    clat (usec): min=7 , max=7649.9K, avg=220.41, stdev=12535.19
     lat (usec): min=7 , max=7649.9K, avg=220.52, stdev=12535.19
    bw (KB/s) : min=   27, max=331055, per=35.92%, avg=58788.88, stdev=27863.44
  cpu          : usr=0.42%, sys=5.54%, ctx=4119, majf=0, minf=136
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/221525/0, short=0/0/0
     lat (usec): 10=50.94%, 20=47.34%, 50=1.45%, 100=0.18%, 250=0.05%
     lat (usec): 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2000=0.01%, >=2000=0.01%

Run status group 0 (all jobs):
  WRITE: io=589392KB, aggrb=58933KB/s, minb=60347KB/s, maxb=60347KB/s, mint=10001msec, maxt=10001msec

Run status group 1 (all jobs):
  WRITE: io=6707.1MB, aggrb=670726KB/s, minb=686824KB/s, maxb=686824KB/s, mint=10241msec, maxt=10241msec

Run status group 2 (all jobs):
  WRITE: io=839760KB, aggrb=83959KB/s, minb=85974KB/s, maxb=85974KB/s, mint=10002msec, maxt=10002msec

Run status group 3 (all jobs):
  WRITE: io=1730.7MB, aggrb=163683KB/s, minb=167611KB/s, maxb=167611KB/s, mint=10827msec, maxt=10827msec

Disk stats (read/write):
  cciss!c0d0: ios=52/276341, merge=0/553601, ticks=224/1731709, in_queue=1735311, util=94.47%

real    0m42.218s
user    0m1.812s
sys     0m19.028s
[root@link ~]#

Summary numbers:

random-write-direct: (groupid=0, jobs=5): err= 0: pid=2379
  write: io=589392KB, bw=58933KB/s, iops=7366 , runt= 10001msec
random-write-buf: (groupid=1, jobs=5): err= 0: pid=2384
  write: io=6707.1MB, bw=670727KB/s, iops=83840 , runt= 10241msec
sequential-write-direct: (groupid=2, jobs=5): err= 0: pid=2389
  write: io=839760KB, bw=83959KB/s, iops=10494 , runt= 10002msec
sequential-write-buffered: (groupid=3, jobs=5): err= 0: pid=2394
  write: io=1730.7MB, bw=163683KB/s, iops=20460 , runt= 10827msec

This is just one run of each test, so not very scientific, but it does look pretty clear that ext4 outperforms ext3. (I actually did run the tests several times but didn’t collect the results; they were pretty consistent.) What the summary numbers don’t show, but the time output does, is that each ext3 test took over 4 minutes to run (18.5 minutes total), while the ext4 tests each took about 10-12 seconds (42 seconds total). In the ext3 run, each “Laying out IO file(s) (1 file(s) / 5120MB)” line took nearly a minute; in the ext4 run they were essentially instant.

This is nothing groundbreaking; I was just playing around with fio and figured I’d post the results in case anyone else was curious. I did notice some oddities, though, like the fact that random buffered writes appear to be faster than even sequential buffered writes on both ext3 and ext4 – on ext4, fio shows 670 MB/s of random buffered writes at 83k IOPS. That doesn’t seem to make much sense, but I’ll leave the tea-leaf reading to someone else.

Addendum: I ran the test with XFS, just for fun. Overall, it was significantly faster than ext3, but not as fast as ext4, though sequential buffered writes were much faster than either ext3 or ext4.

Full xfs results:

[root@link ~]# mkfs.xfs -f -b size=4k /dev/cciss/c0d0
meta-data=/dev/cciss/c0d0        isize=256    agcount=4, agsize=107506378 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=430025510, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=209973, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@link ~]# mount -v -t xfs /dev/cciss/c0d0 /msa70/
/dev/cciss/c0d0 on /msa70 type xfs (rw)
[root@link ~]# mkdir -p /msa70/fio/data
[root@link ~]# time fio random-write-test.fio random-write-test-buf.fio sequential-write-test.fio sequential-write-test-buf.fio
random-write-direct: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
random-write-direct: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
random-write-buf: (g=1): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
random-write-buf: (g=1): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
sequential-write-direct: (g=2): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
sequential-write-direct: (g=2): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
sequential-write-buffered: (g=3): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
...
sequential-write-buffered: (g=3): rw=write, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
fio 1.57
Starting 20 processes
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
random-write-buf: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-direct: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
sequential-write-buffered: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 5 (f=5): [_______________WWWWW] [91.3% done] [0K/143.4M /s] [0 /17.1K iops] [eta 00m:26s]
random-write-direct: (groupid=0, jobs=5): err= 0: pid=2662
  write: io=416920KB, bw=41688KB/s, iops=5210 , runt= 10001msec
    clat (usec): min=83 , max=1592.6K, avg=1013.62, stdev=10137.19
     lat (usec): min=83 , max=1592.6K, avg=1013.77, stdev=10137.19
    bw (KB/s) : min=    7, max=23184, per=22.46%, avg=9363.32, stdev=2852.49
  cpu          : usr=0.31%, sys=1.61%, ctx=106802, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/52115/0, short=0/0/0
     lat (usec): 100=0.06%, 250=4.55%, 500=87.85%, 750=4.23%, 1000=0.70%
     lat (msec): 2=2.01%, 4=0.13%, 10=0.24%, 20=0.07%, 50=0.01%
     lat (msec): 100=0.03%, 250=0.06%, 500=0.03%, 750=0.02%, 2000=0.02%
random-write-buf: (groupid=1, jobs=5): err= 0: pid=2667
  write: io=6557.9MB, bw=27876KB/s, iops=3484 , runt=240892msec
    clat (usec): min=7 , max=237984K, avg=1631.10, stdev=257906.92
     lat (usec): min=7 , max=237984K, avg=1631.22, stdev=257906.92
    bw (KB/s) : min=    0, max=628832, per=1222.74%, avg=340851.04, stdev=70996.06
  cpu          : usr=0.13%, sys=1.33%, ctx=24945, majf=0, minf=134
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/839397/0, short=0/0/0
     lat (usec): 10=18.02%, 20=79.39%, 50=2.19%, 100=0.18%, 250=0.07%
     lat (usec): 500=0.03%, 750=0.01%, 1000=0.04%
     lat (msec): 2=0.02%, 4=0.01%, 10=0.02%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01%
     lat (msec): >=2000=0.01%
sequential-write-direct: (groupid=2, jobs=5): err= 0: pid=2677
  write: io=519056KB, bw=51895KB/s, iops=6486 , runt= 10002msec
    clat (usec): min=81 , max=242925 , avg=825.67, stdev=3980.13
     lat (usec): min=81 , max=242925 , avg=825.81, stdev=3980.13
    bw (KB/s) : min=   59, max=20029, per=19.03%, avg=9873.41, stdev=2128.50
  cpu          : usr=0.23%, sys=1.43%, ctx=129921, majf=0, minf=140
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/64882/0, short=0/0/0
     lat (usec): 100=60.60%, 250=38.63%, 500=0.01%
     lat (msec): 10=0.10%, 50=0.11%, 100=0.23%, 250=0.32%
sequential-write-buffered: (groupid=3, jobs=5): err= 0: pid=2682
  write: io=7688.1MB, bw=770776KB/s, iops=96347 , runt= 10215msec
    clat (usec): min=6 , max=1727.2K, avg=53.67, stdev=1633.96
     lat (usec): min=6 , max=1727.2K, avg=53.78, stdev=1633.96
    bw (KB/s) : min= 5340, max=704608, per=21.28%, avg=164049.10, stdev=87065.00
  cpu          : usr=2.15%, sys=19.54%, ctx=16284, majf=0, minf=136
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/984185/0, short=0/0/0
     lat (usec): 10=41.97%, 20=56.42%, 50=1.10%, 100=0.23%, 250=0.10%
     lat (usec): 500=0.05%, 750=0.02%, 1000=0.03%
     lat (msec): 2=0.01%, 4=0.01%, 10=0.02%, 20=0.01%, 50=0.01%
     lat (msec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01%

Run status group 0 (all jobs):
  WRITE: io=416920KB, aggrb=41687KB/s, minb=42688KB/s, maxb=42688KB/s, mint=10001msec, maxt=10001msec

Run status group 1 (all jobs):
  WRITE: io=6557.9MB, aggrb=27876KB/s, minb=28545KB/s, maxb=28545KB/s, mint=240892msec, maxt=240892msec

Run status group 2 (all jobs):
  WRITE: io=519056KB, aggrb=51895KB/s, minb=53140KB/s, maxb=53140KB/s, mint=10002msec, maxt=10002msec

Run status group 3 (all jobs):
  WRITE: io=7688.1MB, aggrb=770776KB/s, minb=789274KB/s, maxb=789274KB/s, mint=10215msec, maxt=10215msec

Disk stats (read/write):
  cciss!c0d0: ios=8/836060, merge=0/549215, ticks=11/36458428, in_queue=36468678, util=98.88%

real    4m31.948s
user    0m2.692s
sys     0m24.351s
[root@link ~]#


XFS abridged:

random-write-direct: (groupid=0, jobs=5): err= 0: pid=2662
  write: io=416920KB, bw=41688KB/s, iops=5210 , runt= 10001msec
random-write-buf: (groupid=1, jobs=5): err= 0: pid=2667
  write: io=6557.9MB, bw=27876KB/s, iops=3484 , runt=240892msec
sequential-write-direct: (groupid=2, jobs=5): err= 0: pid=2677
  write: io=519056KB, bw=51895KB/s, iops=6486 , runt= 10002msec
sequential-write-buffered: (groupid=3, jobs=5): err= 0: pid=2682
  write: io=7688.1MB, bw=770776KB/s, iops=96347 , runt= 10215msec


Using WAL archiving & Compellent snapshots for PostgreSQL backups

I seem to have what may be an irrational dislike for differential backups in general. No matter what system I’m backing up, I feel far more confident doing complete backups than differential ones. The significant exception to this is rsync, but even rsync does “full” backups of the files that have changed. Even so, I usually add the -W flag to rsync so it copies the entire file if it’s been modified, rather than just the changed blocks. I guess I just like knowing there’s a single file that contains the entire database, rather than having to restore a huge file and then replay a bunch of differential files in sequence.

This has worked for a long time, even with our PostgreSQL DB, which I’ve been backing up with good ol’ pg_dump, but it’s gotten to the point that the DB backup takes over 12 hours, during which performance is seriously degraded. Since the recent migration to the new server, overall performance seems significantly better, but performance during the nightly pg_dump is much worse. Rather than trying to troubleshoot that, I think it’s time to bite the bullet, move to WAL archiving to enable differential backups, and stop doing a full backup every night.

The PostgreSQL docs are pretty great at explaining how to do this. Basically:

  1. Configure WAL archiving to copy the WAL files to another server as soon as they’re complete.
  2. Issue the pg_start_backup() command.
  3. Perform a filesystem-level backup of the DB (usually /var/lib/pgsql on RedHat/CentOS distros).
  4. Issue the pg_stop_backup() command.

That’s basically it. The filesystem backup from step 3 plus the archived WAL files allow you to recover to any point in time after the pg_stop_backup() command was issued. So if you complete your backup at 2 AM on Sunday and your DB crashes on Wednesday at 6 PM, you can restore to any point between 2 AM Sunday and whatever your most recently archived WAL file is (presumably within a few minutes of the crash at 6 PM Wednesday). If you decide you want to restore to 12:01 AM on Wednesday instead, you can do that using the recovery_target_time setting. My aversion to differential backups aside, this is pretty awesome.
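The two settings doing the work look roughly like this – the backup host and paths are placeholders, and depending on your Postgres version you may also need archive_mode = on:

# postgresql.conf -- ship each completed WAL segment off the box
archive_command = 'rsync -a %p backuphost:/backups/pg_wal/%f'

# recovery.conf -- created on the restored copy when you need to recover
restore_command = 'rsync -a backuphost:/backups/pg_wal/%f %p'
recovery_target_time = '2012-11-14 00:01:00'   # optional; omit to replay all available WAL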

As I said, the Postgres docs do a good job of covering this procedure completely. The only thing I have to add is how I did the “filesystem backup” using a SAN-level snapshot. I’d never done any scripting with Compellent before, and it turned out to be pretty easy. From the Knowledge Center, under Software, download the latest version of Command Utility (I downloaded 5.4.1). The utility itself is just a JAR, so you’ll need Java installed to run it.

I considered doing a real filesystem backup of the DB (love that rsync), but the problem is that the DB is currently 1.2 TB, and gobbling that much space on the NAS wasn’t very appealing. Compellent replays (snapshots) are just deltas, and I can easily store a week’s worth of backups for much less than (7 * 1.2 =) 8.4 TB.

I wrote a crappy bash script to tie it all together.
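In outline it does something like this – a sketch of the approach rather than the script itself, with the Compellent Command Utility call that actually creates the replay left as a comment since I’m not reproducing its syntax here:

#!/bin/bash
set -e

# put the cluster into backup mode
psql -U postgres -c "SELECT pg_start_backup('san-replay-$(date +%F)')"
sleep 3

# --- create the Compellent replay of the Postgres data volume here ---
# (java -jar CompCU.jar with the appropriate replay-creation arguments)

sleep 3
# take the cluster back out of backup mode
psql -U postgres -c "SELECT pg_stop_backup()"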

I retain the WAL files for 8 days and the snapshots for 7 days, but I may adjust this since the WAL files themselves consume a lot of space – about 30-40 GB per day. Though this is still less than the gzipped pg_dump I had been doing, which was about 85 GB per day.

I’ve cronned the script to run at 2 AM and so far it appears to work. Compellent replays are created almost instantly, so the backup script completes in about 10 seconds, including 6 seconds of sleep that probably isn’t necessary. Considering that the pg_dump method took 12+ hours to complete, 10 seconds is immeasurably better. Well, I guess it is measurable – you just need to divide 12 hours by 10 seconds.

I’m pretty happy with this so far. The improved performance far outweighs any irrational philosophical objection I may have had to differential backups. Buuuut I’m still going to do pg_dumps on Saturdays.

Migrating a PostgreSQL DB to a new machine without doing a dump & restore

Long ago, before I stepped into my role as de facto DBA in my current job, dump & restore of our Postgres database (dumping the entire DB to a text file, formatting the data disk, and restoring the data) was a pretty regular event. This was needed because we didn’t regularly vacuum the DB, which in turn was because vacuums took forever (which ultimately turned out to be due to crappy underlying hardware). We started doing monthly global vacuums (they took so long that monthly was about all we could handle) and discovered we no longer needed to do dump & restores. When we finally upgraded to Postgres 8.2, which included autovacuum, things got even better, since we no longer had to manage vacuuming via cron jobs.

In the years between then and now our Postgres DB has ballooned to such a size that a D&R isn’t something we could feasibly do in a scheduled maintenance window. The last time I did one was when we bade farewell to the decrepit database in September, 2007. At that point our database consumed 730 GB on disk. Looking through my email archives, we began the dump at 6:30 PM on a Friday night and it completed at 3:48 AM Saturday. The restore started around 9 AM and ran until around 1 PM (I assume it went so much faster than the dump due to the new DB being significantly better hardware-wise). Building indices took until 9:27 PM Saturday. We then ran a global “ANALYZE” to generate DB stats; the analyze ran from 10 PM until 1 AM Sunday. We then had most of Sunday to process the backlog of data that accumulated since Friday afternoon when we took the database offline.

So, with a 730 GB DB, the entire procedure took 9 hours (dump) + 4 hours (restore) + 8 hours (index rebuild) + 3 hours (analyze), so about 24 hours.

However, as I said, in the years since then our database has grown as we house more and more data, currently at about 1220 GB. It might have been possible to do the migration via dump & restore in the scheduled window, but I wasn’t looking forward to it. Instead, I decided to try a different option: copying the data files directly from the old server to the new one. If this worked it would eliminate almost all the overhead and the move would be complete in however long it took to copy the data from one host to the other.

Reasons

We had a couple reasons for doing this upgrade. Performance wasn’t really one of them though; we were pretty confident that the performance of the DB was as good as we were likely to get with platter disks, and our SAN doesn’t currently have SSDs. The old DB has dual Xeon 5160 2-core CPUs @ 3.0 GHz, 32 GB memory, and a RAID 5 OS volume. The database resided on an HP MSA70, with 24x 10k RPM 146 GB SAS drives (+1 hot spare) in RAID 10 for a 1.6 TB logical volume. At the time I debated RAID 6 vs RAID 10 but in the end I opted for the performance of RAID 10 vs the capacity of RAID 6 and it worked out well.

But one of the reasons we decided to upgrade was that the drives in the DB were starting to die and the warranty had expired on them, and each disk cost about $300 to replace. That was a pretty big liability and I expected the disks to begin dying more frequently.

Another reason for upgrading was the benefit of having the data on the SAN, especially snapshotting. We’d been doing daily backups for a while, but doing a dump of the DB while it’s in use makes the dump take forever and degrades performance while it’s running. Snapshotting isn’t a perfect solution, but at least it’s an option.

Another reason for wanting to move the DB to the SAN was for DR purposes: if we set up SAN-to-SAN replication to another site, then with the DB on the SAN the database gets replicated for free.

And probably the biggest reason: we were up to 1.25 TB used out of 1.6, over 80% full. We’d probably be good for another few months, but for me 80% is pretty full.

Prerequisites

In order for this to work, Postgres on both machines has to be the same major release – minor releases within it can differ. In my case the source DB (server A) was running 8.2.5 (the newest at the time the box was built), so I built 8.2.18 (source RPM) on the target (server B). Server B is pretty beefy: an HP DL360 G7 with dual Xeon X5660 6-core CPUs @ 2.8 GHz, 96 GB PC3-10600 ECC memory, and a QLogic QLE-4062 iSCSI HBA connected to a 4 TB volume on our Compellent SAN (tier 1, RAID 10 across 32x 15k RPM FC disks). Both machines also need to be the same architecture, in my case x86_64. It might work across Intel/AMD, but I’m not sure; fortunately I didn’t have to worry about that.
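A quick way to sanity-check both hosts before copying anything (the data directory path assumes the stock RHEL/CentOS layout):

# the server version on each side
psql -U postgres -c "SELECT version()"
# the on-disk catalog version must match between the two data directories
pg_controldata /var/lib/pgsql/data | grep -i 'catalog version'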

Moving the data

When we did dump & restores, we dumped directly to a commonly-mounted NAS, which worked well: we wouldn’t start the restore until the dump was complete, and we didn’t want to consume disk on the target with a gigantic dump file (not to mention the contention of reading the dump file from the same disks we were trying to write to, which slows everything down). There wasn’t really any need to use a NAS as an intermediary in this case, though; it would just double the amount of time needed to get the data from A to B.

I created an NFS export on B:

/data/pgsql             10.0.0.35(rw,no_root_squash)

And mounted it on A with these options in /etc/fstab:

10.0.0.36:/data/pgsql  /mnt/gannon/nfs nfs     rw,rsize=32768,wsize=32768,nfsvers=3,noatime,udp        0     0

I tried TCP vs UDP mounts and found UDP was faster.

I then copied the data over with my favorite Unix tool, rsync:

time rsync -atp --delete --progress /var/lib/pgsql/data /mnt/gannon/nfs/ > /home/evan/rsync.log

Dry run

In January I did a dry run of the procedure. I had tried copying data over without stopping Postgres on A, so as not to cause a service interruption, but it didn’t work; the data was changing too rapidly, and by the time the rsync completed, the files it had copied earliest had already been modified again. I ended up scheduling some downtime for a weekend and did the copy. With the above NFS settings I was able to transfer data at around 50 MB/s over our gigabit switches, and the whole thing took 3-4 hours. When it came up, everything seemed to be fine. I was pretty happy, because 3-4 hours is a whole lot better than 24+.

Day of Reckoning

I finally did the real migration this past weekend. I started it at 8 PM and (after an rsync snafu) completed it around 4 AM. The snafu was caused by my use of the "--delay-updates" flag, which I later learned copies modified files to a /.~tmp~/ directory, and when all of them are copied, moves them into place in an attempt to make it a more “atomic” operation. I didn’t realize this was what was happening, and I got a little freaked out when I saw the disk usage for the target growing 100 GB larger than the source. I cancelled the rsync and ran it again, stupidly dropping the --delay-updates flag, which with the --delete flag caused it to delete all the stuff in .~tmp~ that it had already copied over. It deleted like 300 GB of stuff before I freaked out and cancelled it again. I cursed myself a few times, then manually moved the contents of .~tmp~ up to the parent to salvage what had already been transferred, and ran the rsync once again to move the remaining data. So it probably would have been done much sooner had I not cancelled it and then deleted a bunch of my progress.

You may notice that the rsync flags above don’t include -z. With huge binary files being transferred over a fast LAN, I don’t think there’s much reason to use -z; in fact, when I added -z the throughput plummeted.

After copying the data, I moved the virtual IP of the DB to the new machine, moved the cron jobs over, started postgres on B and everything worked. I finished all of my cleanup around 6 AM Saturday, though like I said, I would have been done much sooner had I not deleted a bunch of my progress. Even still, this is a lot better than the dump & restore scenario, and has the added benefit of being reversible. I’m planning to upgrade postgres on A to 8.2.18 and leave it as a standby server; if a problem arises with B, the data can be moved back relatively quickly.

Conclusion

Well, I don’t have any great insight to put here, but so far this has worked out. My next DB project is upgrading from 8.2 to either 8.4 or the 9.x series, but that’s going to require a lot more planning, since client drivers will likely need to be updated and I’m not sure whether any queries will need to be altered.

The end (I hope).

Do I still need swap space?

About three years ago I replaced our primary database server. For years we’d been plagued by awful performance in the database and were never able to diagnose the problem. The original server was a real beast at the time: 8 single-core Opterons, 32 GB of RAM, and a fibre channel RAID connected via a QLogic HBA. This was back in 2005, so those specs probably don’t sound that impressive today, but this was a crazy configuration (with a crazy price tag to match). On paper this server should have been basically invincible, but the performance was awful, slowing down every process within the company. We contacted a few different companies (including CommandPrompt, which employs several of the core Pg devs) to see if they could assist us in diagnosing the problems, but tuning only helped to a point. There was just something wrong with the box, maybe having to do with the FC HBA itself (which nobody knew much about).