MacBook Pro locks up with SSD installed.

A few weeks ago I switched from my trusty old HP nc8430 to a MacBook Pro (MC118LL/A) that became spare when another employee left. I mostly enjoyed using Linux, but I was tired of dealing with weird quirks like having X lock up, essentially forcing me to do a hard reboot.

To transition, I copied my documents from Linux to the Mac, then turned off the Linux laptop. Surprisingly, I found I didn’t need to turn the Linux machine back on at all.

JavaScript: The Good Parts

I just finished reading JavaScript: The Good Parts, one of the best programming books I’ve read. The ending is fantastic:

We see a lot of feature-driven product design in which the cost of features is not properly accounted. Features can have a negative value to consumers because they make the products more difficult to understand and use. We are finding that people like products that just work. It turns out that designs that just work are much harder to produce than designs that assemble long lists of features.

Features have a specification cost, a design cost, and a development cost. There is a testing cost and a reliability cost. The more features there are, the more likely one will develop problems or will interact badly with another. In software systems, there is a storage cost, which was becoming negligible, but in mobile applications is becoming significant again. There are ascending performance costs because Moore’s Law doesn’t apply to batteries.

Features have a documentation cost. Every feature adds pages to the manual, increasing training costs. Features that offer value to a minority of users impose a cost on all users. So, in designing products and programming languages, we want to get the core features—the good parts—right because that is where we create most of the value.

We all find the good parts in the products that we use. We value simplicity, and when simplicity isn’t offered to us, we make it ourselves. My microwave oven has tons of features, but the only ones I use are cook and the clock. And setting the clock is a struggle. We cope with the complexity of feature-driven design by finding and sticking with the good parts.

It would be nice if products and programming languages were designed to have only good parts.


The bright side of Compellent

Since I was bemoaning Compellent’s pricing recently, I figured it would be unfair of me not to highlight the upside. Their tagline is (or was when we purchased it) “The only SAN so sophisticated it’s simple.” While I can’t say whether they’re the ONLY one, the idea is definitely true. This is the first SAN I’ve ever used, and aside from the learning curve for iSCSI itself (targets, spinup delay, etc.), it’s totally simple and intuitive. Create LUNs, map them to servers. Don’t worry about things like RAID levels or hot disks. We’re into our second year with Compellent and it’s definitely lived up to its promise of simplicity.

I don’t know how much management the average SAN requires, but our sales rep recently asked me how much time per week we spend managing ours. I crinkled my brow, because I don’t really spend any time managing it. I’ve logged in to the web interface a lot more over the past few weeks than I normally do, because the SAN filled up quickly thanks to our experimentation with Hadoop and I wanted to make sure we didn’t hit 100% before I was able to order more disks. But aside from that incident, I think the only times I’ve logged in to the management console have been to add a LUN or map a datastore to a new ESX host.

I was reminded of this simplicity when we finally added the disks last week. We went to the datacenter Wednesday to move some servers around in the racks to ensure there would be enough power in the SAN rack for the new enclosure (16x 2TB disks). We also applied a firmware update to the SAN (required so it would recognize the new 2TB drives). We have redundant controllers, so we were told there shouldn’t be any downtime. I don’t tend to trust those types of statements – if someone says something will be down for an hour, I budget for four. If it’s 8 hours, I budget for a day. If it’s zero, I just think they’re lying and it’s going to explode and kill people.

So, all things considered, I was rather impressed. We have dual controllers, so the update was installed on one controller first, and that controller rebooted. When it rebooted, the iSCSI traffic did actually fail over properly to the secondary controller. This wasn’t completely flawless – the consoles on some of the machines showed some iSCSI errors, but the machines seemed to be working fine (I rebooted them just to be safe). A couple of the VMs (whose data/swap drives are all on the SAN) barfed and had to be rebooted – I think our Jabber server was the only casualty, and it was back up in under a minute. When the second controller updated itself, its traffic failed back over to the first one. When it was all done (it took about 30 minutes total) there was a warning about the ports being unbalanced, which was rectified by clicking the “rebalance ports” button. So all in all, I’d say there was “pretty much” no downtime. After the update, we racked the new enclosure and called it a day.

This week a tech from Compellent came onsite to do the actual install for the enclosure (hooking up the fibre loops and installing the new license). This really was zero downtime. I got some alerts that one of the loops was down, but it didn’t affect anything. Pop the disks in, wire it up, install the license, and we’ve got another 32 TB of usable space. It’s been over a day and the data is still in the process of migrating from our tier 1 (32x 15K RPM FC disks) down to tier 3 (SATA). All in all it was a pretty painless procedure. Sure, it would have been easier had we not had to do the firmware update, but I guess when a new type of drive is introduced that’s to be expected.

So in conclusion, I guess this just reinforces my theory that the only bad thing about Compellent is the price. And if that’s the worst thing someone can say about your product, that’s probably a pretty good place to be.

I almost forgot that I hate computers.

I’d almost forgotten that I hate computers. Then I came home from Memorial Day weekend and woke my desktop up from sleep in order to extract the pics I’d taken. The computer looked OK for about 15 seconds, then kind of froze. The cursor would move, but it didn’t do anything. I watched it do (apparently) nothing for about 2 minutes before I hit the hard-reboot button. It hung at POST with the HDD light on solid. I couldn’t get into the BIOS, so I tried plugging into a different SATA port on the mobo, and finally got the thing to POST by … unplugging the drive completely. I got into the BIOS, did a “load failsafe defaults”, and tried plugging the drive back in and booting. It made it through POST this time but gave me the “INSERT SYSTEM DISK” error. After about an hour of messing around with it, I turned it off and came upstairs to go to bed. The drive is a Seagate Barracuda 7200.11, 1000 GB, purchased less than 18 months ago (Christmas 2008). I know things die, but this is just dumb. Maybe tomorrow I’ll see if anything shows up using a USB-to-SATA adapter. Sigh.

Generate a report of Exchange mailbox sizes broken out by department and location

I found a script a few months ago that generated a CSV report of mailbox sizes, which included the mailbox name (usually the user’s name), size in KB, number of items, which server it’s on, etc. This was very helpful, but I wanted to see which department within the company used the most space on the mail server, and department wasn’t one of the pieces of data included in the report. It took a while, but I figured out how to do LDAP lookups in VBScript and was able to add that info, so the report now has the user’s department, office location, and quota limit in it as well as the other fields. This makes it very easy to do a PivotChart in Excel and generate a pie chart of size by department. The script is attached – change the extension to .vbs to run it. You’ll need to plug in your Exchange server and domain controller where the placeholders currently are.

EmailSizeByDepartment.vbs
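If you’re curious how the LDAP lookup part works, here’s a minimal sketch of the technique using ADO with the ADsDSOObject provider. The domain controller name, search base, and display name below are placeholders for illustration, not values taken from the attached script.

' Sketch: look up a user's department and office via LDAP (ADO + ADsDSOObject).
' dc01.example.com, the search base, and the display name are placeholders.
Set conn = CreateObject("ADODB.Connection")
conn.Provider = "ADsDSOObject"
conn.Open "Active Directory Provider"

Set cmd = CreateObject("ADODB.Command")
Set cmd.ActiveConnection = conn

' Query format: <LDAP://server/searchbase>;(filter);attribute list;scope
cmd.CommandText = "<LDAP://dc01.example.com/DC=example,DC=com>;" & _
    "(&(objectCategory=person)(displayName=Jane Smith));" & _
    "department,physicalDeliveryOfficeName;subtree"

Set rs = cmd.Execute
If Not rs.EOF Then
    WScript.Echo "Department: " & rs.Fields("department").Value
    WScript.Echo "Office: " & rs.Fields("physicalDeliveryOfficeName").Value
End If

rs.Close
conn.Close

In the actual report script you’d run a lookup like this for each mailbox owner (filtering on that user rather than a hard-coded display name) and tack the returned attributes onto each row of the CSV.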

Upgraded to Fedora Core 12

I upgraded my work laptop from FC11 to FC12 yesterday using the “preupgrade” tool. It was pretty simple, though it took a lot longer than I expected. There was some funkiness with my screen going crazy after the upgrade – my external monitor and the laptop’s LCD both did this crazy wavy-line thing. I tried changing the refresh rate and running system-config-display; nothing worked. I found a post that suggested passing “nomodeset” to the kernel boot options – that solved it. Yay!
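For anyone who hasn’t done it before, “passing nomodeset to the kernel boot options” just means appending it to the kernel line in /boot/grub/grub.conf (Fedora 12 still uses GRUB legacy). The kernel version and root= argument below are only examples; keep whatever your grub.conf already has and just add nomodeset at the end:

kernel /vmlinuz-2.6.31.5-127.fc12.x86_64 ro root=/dev/mapper/vg-lv_root rhgb quiet nomodeset

You can also try it for a single boot by pressing “e” at the GRUB menu, editing the kernel line to add nomodeset, and booting with “b”, then make it permanent in grub.conf once you know it works.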

The other problem I had was reinstalling VMware Workstation – I couldn’t. I got this error: /tmp/vmware-root/modules/vmnet-only/vnetUserListener.c:240: error: ‘TASK_INTERRUPTIBLE’ … etc. I ended up having to edit the VMware module source files directly (!!!) to get them to compile – instructions found here.

So far FC12 seems exactly like FC11. But that’s fine – I only upgraded because I didn’t want to be on a dead-end version once FC13 is released.