FiOS speed 10 months later, better than ever.

I switched to FiOS in December 2009, and I was pretty apprehensive, having been a Cablevision customer for many years. I really had no problem with Cablevision’s service; I just thought their pricing was much too high in the face of the new competition (and deals) Verizon was offering. I ended up going with Verizon due to their awesome deal, but now it’s almost a year later and I can’t imagine going back to Optimum. It’ll probably come down to price when the current promo pricing I have with Verizon ends, but if the prices were equal then no contest – I’d stick with FiOS.

The VMware datastore LUN 2TB (well, 1.99999 TB) size limit

I started migrating our physical machines to VMs using VMware a few years ago and the first problem I ran into is still the most annoying one: the size limit for LUNs is, per VMware’s docs, (2TB – 512B). That’s 512 bytes shy of 2TB, so basically 1.99999 TB, or 2047.999 GB. So when I create a new LUN for a datastore in the SAN, the max size is 2047 GB. Now, as the VMware KB article states, this is a limitation of SCSI, not VMware per se, but that doesn’t make it any less annoying. When I first set up ESX, I created a 5 TB LUN for the datastore. It showed up in vCenter as 1 TB. After some Googling I learned of the 2 TB limit — the usable space is basically usable space = (size of LUN) % 2TB, where % is the modulo operator — and found something suggesting using extents to expand the datastore across LUNs. I did that, but I later learned that there seems to be a consensus that extents should be avoided.
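To make the math concrete, here’s that formula applied to my case (plain shell arithmetic, nothing VMware-specific):

# usable space = (LUN size) % 2 TB. Working in GB, where 2 TB = 2048 GB:
echo $(( 5120 % 2048 ))   # a 5 TB (5120 GB) LUN -> 1024, i.e. the 1 TB vCenter showed
echo $(( 2047 % 2048 ))   # a 2047 GB LUN -> 2047, all of it usable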

There are other things I learned along the way – that you want to limit the number of VMDKs per datastore anyway, for example, due to the risk of SCSI reservation conflicts and I/O contention – but these all feel like things we shouldn’t have to worry about. I can see having separate LUNs/datastores for different logical groupings of disks, allowing you to have different snapshot schedules for each datastore, or allowing you to put an entire datastore in Tier 1 or Tier 3 (to use Compellent parlance) based on its value to you. But having to segregate stuff for technical reasons seems like a problem that should already be solved.

And maybe it is… I’ve never tried NFS datastores, but if I created an 8TB LUN, mapped it to a physical box (skirting the 2TB limit imposed by VMware), exported the volume from the host over NFS, and used that as the datastore, I guess I’d be able to do all the things I want. Hmm. I’ll have to think about that. I guess I’d still keep the ESX host local LUNs on iSCSI so they could boot from SAN, though I suppose when we move to ESXi that won’t be much of an issue anyway.
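For what it’s worth, the Linux side of that experiment would only be a couple of lines (a sketch – the mount point and subnet here are made up for illustration):

# /etc/exports on the box that has the 8TB LUN mounted at /vmstore
/vmstore    192.168.1.0/24(rw,no_root_squash,sync)

# reload the export table
exportfs -ra

Then the datastore would get added in vCenter as an NFS mount pointing at that export.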

Hmm… Well, I started writing this as a rant but I think I just morphed it into a new research project.

The bright side of Compellent

Since I was bemoaning Compellent’s pricing recently I figured it would be unfair of me not to highlight the upside. Their tagline is (or was when we purchased it) “The only SAN so sophisticated it’s simple.” While I can’t say whether they’re the ONLY one, the idea is definitely true. This is the first SAN I’ve ever used, and aside from the learning curve for iSCSI itself (targets, spinup delay, etc.) it’s totally simple and intuitive. Create LUNs, map them to servers. Don’t worry about things like RAID levels or hot disks. We’re into our second year with Compellent and it’s definitely lived up to its promise of simplicity.

I don’t know how much management the average SAN requires but our sales rep recently asked me how much time per week we spend managing the SAN. I crinkled my brow, because I don’t really spend any time managing the SAN. I’ve logged in to the web interface a lot more over the past few weeks than I normally do because the SAN filled up quickly due to our experimentation with Hadoop, and I wanted to make sure we didn’t get to 100% before I was able to order more disks. But aside from that incident I think the only times I’ve logged in to the management console have been to add a LUN or map a datastore to a new ESX host.

I was reminded of this simplicity when we finally added the disks last week. We went to the datacenter Wednesday to move some servers around in the racks to ensure there would be enough power in the SAN rack for the new enclosure (16x 2TB disks). We also applied a firmware update to the SAN (required so it would recognize the new 2TB drives). We have redundant controllers, so we were told there shouldn’t be any downtime. I don’t tend to trust those types of statements – if someone says something will be down for an hour I budget for 4. If it’s 8 hours I budget for a day. If it’s zero I just think they’re lying and it’s going to explode and kill people.

So all things considered I was rather impressed. We have dual controllers, so the update was installed on one controller first, and that controller rebooted. When it rebooted, the iSCSI traffic did actually fail over properly to the secondary controller. This wasn’t completely flawless – the console on some of the machines showed some iSCSI errors, but the machines seemed to be working fine (I rebooted them just to be safe). A couple of the VMs (whose data/swap drives are all on the SAN) barfed and had to be rebooted – I think our Jabber server was the only casualty, but that was back up in under a minute. When the second controller updated itself, its traffic failed back over to the first one. When it was all done (took about 30 mins total) there was a warning about the ports being unbalanced, which was rectified by clicking the “rebalance ports” button. So all in all, I’d say there was “pretty much” no downtime. After the update, we racked the new enclosure and called it a day.

This week a tech from Compellent came onsite to do the actual install for the enclosure (hooking up the fibre loops and installing the new license). This was really zero downtime. I got some alerts that one of the loops was down, but it didn’t affect anything. Pop the disks in, wire it up, install license, and we’ve got another 32 TB usable space. It’s been over a day and the data is in the process of moving from our tier 1 (32x 15krpm FC disks) down to tier 3 (SATA). All in all it was a pretty painless procedure. Sure, it would have been easier had we not had to do the firmware update, but I guess when a new type of drive is introduced that’s to be expected.

So in conclusion, I guess this just reinforces my theory that the only bad thing about Compellent is the price. And if that’s the worst thing someone can say about your product, that’s probably a pretty good place to be.

Forcing WordPress administration over SSL

I never like typing a password into a non-SSL site, no matter how trivial it is. In order to give my own site this ability I simply used mod_rewrite to force requests to WordPress’s admin pages to go over SSL.

The .htaccess file for the site looks like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /evan/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /evan/index.php [L]
</IfModule>
# END WordPress

To force the admin pages to SSL, just add these lines under RewriteEngine On:

RewriteCond %{HTTPS} !=on
RewriteRule ^wp-(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

Edit – The above code screws up uploads (which go into the /wp-content directory). I replaced that with the following and it Worked As Intended.

RewriteCond %{HTTPS} !=on
RewriteRule ^wp-login(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
RewriteCond %{HTTPS} !=on
RewriteRule ^wp-admin(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

That’s pretty much it. If your request starts with “wp-login” or “wp-admin”, it’ll redirect you to the same URL, but starting with https://. Problem solved. You do need to make sure you have an SSL VirtualHost pointing to your WordPress DocumentRoot so that https://yoursite.com goes to the same place as http://yoursite.com.
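In case it’s useful, a minimal SSL VirtualHost along those lines might look like this (the DocumentRoot and certificate paths are placeholders for your own setup):

<VirtualHost *:443>
    ServerName yoursite.com
    # Same DocumentRoot as the port-80 VirtualHost
    DocumentRoot /var/www/wordpress
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/yoursite.crt
    SSLCertificateKeyFile /etc/pki/tls/private/yoursite.key
</VirtualHost>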

I partitioned my laptop so stupidly.

When I first installed Linux on my laptop (on my old hard drive) I did it as dual-boot, so I resized my Windows XP partition down to 50 GB, created a 2nd partition for Linux and installed it there. I think I played around with Fedora and Ubuntu and one other distro (maybe FreeBSD?) so I had a bunch of stupid partitions. I eventually went to Linux exclusively and repurposed my XP partition to be my home directory (/docs) and moved all my documents there.

Then I moved to an SSD, which in addition to being incalculably faster than the Seagate Momentus 5400rpm drive, was also bigger – 128 GB instead of 100 GB. This was good, except that my method of moving the data from the old drive to the new one was “dd if=/dev/sda of=/dev/sdb” – a bit-for-bit copy. This worked, but it left the old partition table in place on the new drive, basically leaving the extra 28 GB invisible. I didn’t realize this until a couple of weeks ago, when I extended the “rear” partition to consume the rest of the space on the disk.
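For the record, the whole migration plus the after-the-fact fix amounts to something like this (a sketch – the device names and partition numbers are illustrative, resizepart needs a newish parted, and you’d want a backup first):

# bit-for-bit copy; brings the old 100 GB partition table along for the ride
dd if=/dev/sda of=/dev/sdb bs=4M

# the extra 28 GB stays invisible until the last partition is grown:
parted /dev/sdb resizepart 2 100%
resize2fs /dev/sdb2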

Anyway, now I have this totally stupid partition scheme on the disk:

[Screenshot: my stupid partitioning scheme]

I could fix it but it seems like a big pain in the ass. Maybe I can clear out /dev/sda8 and copy my docs to it, then copy the root partition to /dev/sda1… Oh, I don’t know. This is why my rule of thumb about partitioning is: don’t.

Using SSH tunnel & Squid to create a private encrypted proxy for true private browsing (mostly)

I once worked at this place where I got a stern talking-to for viewing non-work-related pages. It was around Christmas and I was doing my shopping online (since I left the house at 7 AM and got home at 8 PM). It’s not like I was farting around all the time. Anyway, the idea that I was being proactively watched by someone with an axe to grind pissed me off, so I decided I wouldn’t give him anything to read.

I don’t have that problem anymore, but I do frequently connect to open wifi points where my traffic can be viewed. I use SSL for things like email, but why even let them see that I’ve gone to nytimes.com?

My solution to both problems was the same: on my Linux box at home, run a proxy server, and pipe all my traffic to it via an SSH tunnel.

Step 1: Install Squid

Since I use CentOS, this was just:

yum install squid

Step 2: Configure Squid

Well, the default squid config (/etc/squid/squid.conf) was pretty much fine, although I needed to add an ACL clause so I could actually use the proxy. The LAN in my house is 192.168.1.0/24, so I put these lines in my squid.conf:

# allow requests from the home LAN; these lines need to go above the
# default “http_access deny all” rule or they’ll never be reached
acl subnet_192 src 192.168.1.0/255.255.255.0
http_access allow subnet_192

Then start Squid:

service squid start

Step 3: Create the SSH tunnel

I run Linux, so that’s the syntax I can provide (you can use PuTTY to do this from a Windows machine):

ssh -f evan@public-hostname-of-proxy-server -L 3128:private-ip-of-proxy-server:3128 -N
This opens an SSH connection from your local machine (port 3128) to the remote server’s private IP on port 3128 (3128 being the default port on which squid listens). So connections to localhost:3128 will be forwarded over the SSH tunnel to port 3128 on the other machine’s private IP.

Step 4: Set your browser to point to localhost:3128 as proxy server

Well, that’s pretty self-explanatory. In the browser’s options (lots of other apps support HTTP proxies as well – AIM, etc), find the section about proxy settings and set the HTTP and HTTPS proxies to “localhost” and port 3128.

That’s it. To test if it’s working, try going to geoiptool.com and confirm that it shows you as coming from the home machine’s IP.
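You can also sanity-check it from a terminal before touching any browser settings – curl’s -x flag pushes the request through a proxy:

# should show the home machine's IP, not the coffee shop's
curl -x http://localhost:3128 http://geoiptool.com/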

If you have a strict network admin who’s locked down outbound SSH, you can just have sshd listen on port 80 or 443, which almost everyone allows. A really nosy admin may notice encrypted traffic going to the server and kill it, but… well, I never said it was foolproof. 🙂
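For reference, making sshd answer on 443 is just an extra Port line in its config (assuming nothing else on the box is already bound to 443):

# /etc/ssh/sshd_config – listen on the usual port plus 443
Port 22
Port 443

Restart sshd, then add -p 443 to the ssh command from step 3.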

One reason I hate iTunes.

I’ve always hated iTunes. It’s a huge pile of bloatware and it’s slow as poo. It’s a 100 MB download (or more) for an MP3 player. I remember Winamp playing MP3s when it was a 500 KB download. Anyway.

I keep all my music on a Linux machine running Samba. This way it’s available to every machine in the house. When I had Winamp on all my machines this was wonderful. But now that I’m forced into iTunes (thanks to having an iPhone), it turns out to be a major pain. In iTunes I unchecked the box for “let iTunes keep my library organized” to prevent it from copying the entire library to each computer’s local disk. Initially adding my library of ~4000 tracks to iTunes took over an hour (on a 100 Mbit wire) – it would take about 5 minutes in Winamp, even reading the ID3 tags for each track as it was added (rather than lazily as the song was played).
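For reference, the Samba side of this is nothing fancy – a read-only share along these lines in smb.conf (the path and share name here are illustrative):

[music]
   # read-only so no client can mangle the library
   path = /data/music
   read only = yes
   guest ok = yes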

But the thing iTunes does that’s so annoying it prompted me to write this whiny rant is this:

[Screenshot: iTunes “Song Not Found” errors]

If, for some reason, my M: drive (where the Samba share is mapped) is not connected when iTunes starts, every song in the library gets this “!” exclamation point of doom. If I attempt to play any of these tracks, I’m given the option to locate the file. Nice in theory, but locating all 4000 tracks by hand isn’t realistic. If I quit iTunes, reconnect the M: drive, and reopen iTunes, the “!” persists. The only solution I’ve found is deleting the entire library from iTunes and re-adding it, which, as I said, takes an extremely long time.

I have other reasons for hating iTunes, but this is a blog, not a book.

Long email signatures amuse me.

Putting a glossary in your emails is a new one for me. Not really a bad idea if you deal with lots of industry-specific terminology and lots of non-industry people.


From: Clickatell SC [noreply@clickatell.com]
Sent: Sunday, September 12, 2010 1:04 AM
To: Evan D. Hoffman
Subject: Clickatell System Alert

Dear Clickatell Client,

(Blah Blah Blah)

Apologies for any inconvenience caused.

Email:
support@clickatell.com

Phone:
+27 21 910 7700 (South Africa)
+1 650 641 0011 (US)
+44 20 7060 0212 (UK)
+61 290 371 951 (Australia)

--
Clickatell

www.clickatell.com

Our Vision
Connecting the world through any message, anywhere.

-------------------------------------------------------------------------------
Terminology:

-Mobile originated (MO): A message sent (originating) from a mobile handset to an application via Clickatell.

-Mobile terminated (MT): A message sent from an application to (terminating on) a mobile handset via Clickatell.

-Premium rated message (MO): A mobile user is charged a premium for the message that they send to a particular short or long code. This service is not available in all regions; please contact an Account Manager for more information.

-Revenue share: This refers to the portion of the premium charge associated with a premium rated message, which is passed on to the content provider.

-Content provider: This is the Clickatell customer who is offering one or more services that are usually premium rated SMS system.

-Customer: A registered Clickatell customer utilising the Clickatell API for message delivery and receipt.

-Sender ID: The “from” address that appears on the user’s handset. This is also known as the message originator or source address. A Sender ID must be registered within your account and approved by us before it may be used.

-Destination address: The mobile number/MSISDN of the handset to which the message must be delivered. The number should be in international number format, e.g. country code + local mobile number, excluding the leading zero (0).

-Source address: The Sender ID or From address of the SMS.

-Short code: A short number which is common across all the operators for a specific region.

-Subscriber: The mobile network subscriber who owns the mobile number (MSISDN) which will send or receive SMSs, or be billed for premium rated services.

-Upstream gateway: A network operator, third party or our own short message service centre (SMSC).

Facebook for iPhone – “Places” hangs on “Locating you…”

I decided to see how “Places” stacked up against Foursquare. I reactivated my Facebook account and reinstalled the iPhone app. I went to “Places” and tapped “Check In,” and… nothing. It mentioned something about turning on Location Services. I knew I already had that enabled, because other apps were using it without a problem. It turns out you need to enable Location Services for Facebook explicitly:

First, go to the Settings app and select General:

[Screenshot: Settings -> General]

Select Location Services:

[Screenshot: Location Services]

Make sure Facebook is enabled (On) if you want to use Places. If you want to disable Places, make sure this is set to Off.

[Screenshot: make sure Facebook is set to “On” to use Facebook Places]