The bright side of Compellent

Since I was bemoaning Compellent’s pricing recently, I figured it would be unfair of me not to highlight the upside. Their tagline is (or was when we purchased it) “The only SAN so sophisticated it’s simple.” While I can’t vouch for the “only” part, the simplicity claim is definitely true. This is the first SAN I’ve ever used, and aside from the learning curve of iSCSI itself (targets, spinup delay, etc.), it’s totally simple and intuitive. Create LUNs, map them to servers. Don’t worry about things like RAID levels or hot disks. We’re into our second year with Compellent and it has definitely lived up to its promise of simplicity.
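
The iSCSI part is where all of my learning curve actually lived, and even that is mostly a one-time thing on the host side. As a rough illustration (not Compellent-specific – the portal address and target IQN below are made up, and it assumes a Linux host with open-iscsi installed), discovering and logging in to a target boils down to something like this:

```python
#!/usr/bin/env python3
"""Sketch: discover and log in to an iSCSI target from a Linux initiator.

Assumes open-iscsi (iscsiadm) is installed on the host. The portal address
and target IQN are placeholders, not our actual SAN details.
"""
import subprocess

PORTAL = "10.0.0.50:3260"                    # hypothetical iSCSI portal on the SAN
TARGET = "iqn.2002-03.com.example:target1"   # hypothetical target IQN


def run(cmd):
    """Run a command and return its stdout, echoing the command first."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


# 1. Ask the portal which targets it offers.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# 2. Log in; the mapped LUNs then appear on the host as /dev/sd* block devices.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```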

I don’t know how much management the average SAN requires, but our sales rep recently asked me how much time per week we spend managing ours. I had to crinkle my brow, because I don’t really spend any time managing the SAN. I’ve logged in to the web interface more over the past few weeks than I normally do, because the SAN filled up quickly thanks to our experimentation with Hadoop and I wanted to make sure we didn’t hit 100% before I could order more disks. But aside from that incident, I think the only times I’ve logged in to the management console have been to add a LUN or map a datastore to a new ESX host.
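
For the record, “keeping an eye on it” mostly meant watching how full things were getting. The check amounts to something like the sketch below – it just looks at a mounted volume from the host side using the Python standard library rather than talking to the SAN itself, and the mount point and 90% threshold are made-up examples:

```python
#!/usr/bin/env python3
"""Sketch: warn when a SAN-backed volume is getting too full.

Checks filesystem usage on a mounted LUN from the host side; the mount
point and 90% threshold are arbitrary examples.
"""
import shutil

MOUNT_POINT = "/data"   # hypothetical mount point of a SAN LUN
THRESHOLD = 0.90        # warn at 90% full

usage = shutil.disk_usage(MOUNT_POINT)
fraction_used = usage.used / usage.total

print(f"{MOUNT_POINT}: {fraction_used:.1%} used "
      f"({usage.used / 1e12:.2f} TB of {usage.total / 1e12:.2f} TB)")

if fraction_used >= THRESHOLD:
    print("WARNING: time to order more disks before this hits 100%")
```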

I was reminded of this simplicity when we finally added the disks last week. We went to the datacenter Wednesday to move some servers around in the racks to ensure there would be enough power in the SAN rack for the new enclosure (16x 2TB disks). We also applied a firmware update to the SAN (required so it would recognize the new 2TB drives). We have redundant controllers, so we were told there shouldn’t be any downtime. I don’t tend to trust those types of statements – if someone says something will be down for an hour I budget for 4. If it’s 8 hours I budget for a day. If it’s zero I just think they’re lying and it’s going to explode and kill people.

So, all things considered, I was rather impressed. We have dual controllers, so the update was installed on one controller first and that controller rebooted. While it was rebooting, the iSCSI traffic did fail over properly to the secondary controller. It wasn’t completely flawless – the console on some of the machines showed iSCSI errors, but the machines seemed to be working fine (I rebooted them just to be safe). A couple of the VMs (whose data/swap drives are all on the SAN) barfed and had to be rebooted – I think our Jabber server was the only casualty, and it was back up in under a minute. When the second controller updated itself, its traffic failed over to the first, already-updated controller. When it was all done (about 30 minutes total) there was a warning about the ports being unbalanced, which was fixed by clicking the “rebalance ports” button. So all in all, I’d say there was “pretty much” no downtime. After the update, we racked the new enclosure and called it a day.
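
Those console messages were just the hosts’ iSCSI initiators complaining while their sessions moved between controllers. If you want to confirm that from a Linux host after the fact, a loose check like the sketch below is enough – it just greps the kernel log, and since the exact wording of the messages varies by initiator version, nothing here is Compellent-specific:

```python
#!/usr/bin/env python3
"""Sketch: scan the kernel log for iSCSI-related errors after a failover.

Runs dmesg and flags lines that mention iSCSI and an error; the match is
deliberately loose because message wording differs between initiator versions.
"""
import subprocess

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout

hits = [line for line in log.splitlines()
        if "iscsi" in line.lower() and "error" in line.lower()]

if hits:
    print(f"Found {len(hits)} iSCSI error line(s), most recent last:")
    for line in hits[-10:]:   # show up to the last ten
        print(" ", line)
else:
    print("No iSCSI errors in the kernel log.")
```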

This week a tech from Compellent came onsite to do the actual install of the enclosure (hooking up the fibre loops and installing the new license). This one really was zero downtime. I got some alerts that one of the loops was down, but it didn’t affect anything. Pop in the disks, wire it up, install the license, and we’ve got another 32 TB of usable space. It’s been over a day and the data is still in the process of moving from our tier 1 (32x 15krpm FC disks) down to tier 3 (SATA). All in all it was a pretty painless procedure. Sure, it would have been easier had we not had to do the firmware update, but I guess that’s to be expected when a new type of drive is introduced.

So in conclusion, I guess this just reinforces my theory that the only bad thing about Compellent is the price. And if that’s the worst thing someone can say about your product, that’s probably a pretty good place to be.

4 Replies to “The bright side of Compellent”

  1. Hadoop (if I’ve read right) doesn’t seem like an ideal fit for Compellent. On the Compellent you already have redundancy, and Hadoop tries to keep three copies of everything, multiplying your disk usage. You can’t go non-redundant either, because the LUN is striped over all the disks, so if you lose a disk you lose all three copies of your Hadoop-stored data. Of course, you can get around this by creating different disk folders/storage pools for each Hadoop node.

  2. That’s completely correct. Implementing Hadoop in VMware on a SAN (not just Compellent) basically defeats the purpose of Hadoop – distributing the query across machines with their own CPU and disk I/O. We did it this way to test the waters before spending money on dedicated hardware. I think we’re more likely to move this to Amazon Elastic MapReduce than to shell out the money for new hardware just for Hadoop.

  3. Just had a Compellent installed at our office. So far I like it. Haven’t had much time to play with it. We’re moving to Office 365, so we’ve been working on that plus getting ready to move to a Co-Lo.
