Home NAS build, 2017 Edition

Many years ago, before WiFi or even routers were common in homes, I was faced with the problem of how to connect multiple computers in my home to the internet when our cable provider only provided a single IP address. Being the late 1990s-early 2000s, this was a problem largely faced only by super-nerds like me, and the generally accepted solution was to build your own router, usually a Linux box with two Ethernet ports running ipchains (or iptables, if you were bleeding-edge) to do NAT, DHCP and DNS. I had an old computer lying around, so that’s what I did. This box served me well for many years, ultimately being replaced by a Linksys WRT54g.

While the Linux box was in service, it took on a few other roles, the most useful of which was in-house file server. Long before services like Dropbox — or even Gmail — existed, it was a real pain to share large files with other people in your house. Streaming music wasn’t really a thing back then; the only way to share your music collection with someone was to keep your MP3s on a central server somewhere, or just re-download them from Napster.

Eventually my need for a dedicated file server mostly dried up, and the power in my basement was so flaky that the thing would shut off randomly without my even noticing, and then when I actually wanted to get something off it, it wasn’t turned on, which was annoying. That, plus the fact that the thing had a 750 MHz Athlon CPU filled with cobwebs and dust and a single 40 GB drive, meant it was more hassle than it was worth. The final nail was my house being flooded during Hurricane Sandy, at which point the server was submerged in brackish sea water.

And so, for several years, I made do without a dedicated file server in my house, backing all my important stuff up to a single USB 3.0 WD My Book. As a sysadmin, trusting my important stuff to a single spinning disk never sat well with me, but it was the cheapest option.

A problem that arose in the absence of a file server was how to share the 100,000 photos & videos I’d taken over the past 15 years with the rest of my family. I enabled file sharing on my Mac and exported the USB drive, but that required the USB drive to be always plugged in, which was annoying — a laptop tethered to a USB disk isn’t very portable.

Finally, about a year ago, I decided I needed a new file server, to hold personal items like family photos as well as to serve as a DLNA media server for my movie and music collection. Plex makes this formerly clunky task pretty smooth. The question I faced was whether to build my own home file server — a more modern version of my ancient Athlon box — or buy one of the off-the-shelf home NAS devices that have come to market over the past few years. I know several people who have Synology, QNAP or Drobo devices and they all speak highly of them. Ultimately, about a month ago, I decided to go with a Synology DS416. I already detailed my Synology experience in a separate post, but the short story is it was just too slow for my needs and didn’t really simplify anything I wanted to do, so I returned it and decided to build my own server once again.

Ultimately I settled on the parts in this list. While this came out slightly more expensive than the Synology + 4 drives, it’s far more powerful. I should note that that’s not the exact motherboard I got — for some reason PC Part Picker doesn’t list the Asus Prime B250M-C. I also went with 16 GB memory, which may be overkill for a fileserver, but I’d like this thing to last at least 10 years, so a bit of futureproofing won’t hurt — especially if I want to play around with running VMs or something.

I was surprised to see how powerful the Kaby Lake i3-7100 CPU was — two 64-bit cores (plus HT) at 3.9 GHz with a 51 W TDP. This really put the Synology’s 32-bit Annapurna Labs Alpine AL-212 dual-core 1.4GHz CPU to shame. I also threw in a Crucial 275 GB SATA M.2 SSD for the OS drive. I couldn’t find a case I really liked, but fortunately had an old Core 2 Duo desktop collecting dust, housed in a Cooler Master case, and everything fit inside it without issue (though I had to buy a bracket to mount one of the 3.5″ drives in a 5.25″ bay).

I debated running FreeNAS, but ultimately went with Ubuntu 16.04 (Xenial) since I’ve been working with it for almost 4 years and have grown to love it. Configuring the storage drives was something I’d been thinking about for a long time. My original plan was to use mdadm to create a RAID5 of the 4 spinning drives, which would give me about 11–12 TB usable — more than enough for the foreseeable future. But in reading about the new features of Xenial, ZFS was one of the big headlines, so I decided to go with a raidz — the ZFS equivalent of RAID5.
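
For comparison, the two approaches each boil down to a single command. This is just a sketch (the device names are placeholders; the pool name is the one that shows up in the snapshot listing further down):

# mdadm RAID5 across the four spinning drives:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
# ...versus the ZFS equivalent, a raidz pool:
zpool create lunix1 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde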

I ran into a couple of bumps along the way — one of the HGST drives bricked itself after a couple of days and I had to RMA it, and another reported I/O errors in syslog on every reboot, so I had to RMA that one also. I should point out that I steered away from Seagate and toward HGST on the basis of the Backblaze hard disk report. While I didn’t go with the specific model of HGST drive they used, I figured the brand might be worth something. Having had a 50% defect rate on my batch of 4 drives, it appears I was wrong.

With all the RMA’d drives replaced, I performed a 24-hour run of badblocks to make sure nothing was wrong with the new drives. Fortunately all looked good and I’m now back up and running. After having read this and this, and after my own painful experience with failed disks, I decided to rebuild my ZFS using a striped mirror set (RAID10) rather than a raidz (RAID5). While this means I lose 50% of the capacity, I gain much shorter rebuild times and better performance when a disk fails and the array is in a degraded state. I also went with lz4 compression for the zfs volume based on some of the benchmarks in those articles — in this box, CPU power far exceeds disk speed, so minimizing disk reads and writes via compression is a big win.
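
For reference, recreating the pool as striped mirrors with lz4 compression looks roughly like this (device names are placeholders; lunix1/data1 is the pool/dataset that appears in the snapshot listing further down):

# two mirrored pairs, striped together (ZFS's take on RAID10):
zpool create lunix1 mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
zfs create lunix1/data1
zfs set compression=lz4 lunix1/data1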

On the software side, I currently have the file server running Samba for general “file server” stuff, vsftpd for my Foscam dog camera to store videos, netatalk for Apple File Protocol so I can backup my Mac to it via Time Machine. I installed Plex, which — as expected — performs far better now than it did on Synology and has no problem transcoding videos on the fly. I actually wrote a Python script to transcode all my old .AVI home videos into iPhone-compatible MP4s (h.264 + AAC audio) and the i3 plowed through them pretty quickly.
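
The actual script was Python, but the conversion itself is just ffmpeg; a rough shell equivalent of the idea looks like this (quality and bitrate settings are illustrative):

for f in *.AVI; do
    # h.264 video + AAC audio in an MP4 container; -strict -2 is needed for the
    # native AAC encoder on older ffmpeg builds, and faststart moves the index
    # to the front of the file so it streams nicely
    ffmpeg -i "$f" -c:v libx264 -crf 20 -c:a aac -strict -2 -b:a 160k -movflags +faststart "${f%.AVI}.mp4"
done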

Being a sysadmin at heart, I wanted to make sure I had some decent monitoring in place. Lately I’ve rediscovered collectd, which I consider by far the simplest way to see relevant metrics for a Linux system, and which is fairly trivial to configure. The collection3 web UI provides an easy way to see everything from available disk space to current CPU speed — see below.

CPU speed for the past 24 hours. It was pegged at 3.9 GHz during the badblocks test.
The newly created ZFS volume is filling up as I copy my stuff onto it.
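
The collectd side of this is just a handful of plugins; a minimal config along these lines covers the graphs above (the exact plugin list is a guess, not my actual file):

# /etc/collectd/collectd.conf (trimmed)
Hostname "lunix"
LoadPlugin cpu
LoadPlugin cpufreq
LoadPlugin df
LoadPlugin load
LoadPlugin memory
LoadPlugin interface
LoadPlugin rrdtool          # collection3 reads the RRD files this plugin writes
<Plugin rrdtool>
    DataDir "/var/lib/collectd/rrd"
</Plugin>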

I also installed Webmin, which I hadn’t looked at in quite a while. I’m not completely sold on it, but it does give a very nice dashboard with overall system health, with metrics ranging from CPU usage to drive temperature:

Webmin dashboard
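
For anyone who wants to try it, installing Webmin from its apt repository on Xenial goes roughly like this (the repo line and key URL come from webmin.com's own install docs):

echo 'deb http://download.webmin.com/download/repository sarge contrib' | sudo tee /etc/apt/sources.list.d/webmin.list
wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -
sudo apt-get update && sudo apt-get install webmin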

I wanted to be able to access my server from the Internet via a browser, but over a TLS connection rather than plain HTTP, for reasons that would hopefully be obvious. I was initially planning to use a self-signed SSL certificate, but this seemed like a good time to try out Let’s Encrypt. Installation and setup were pretty simple and in under 5 minutes I had a trusted cert installed with Nginx fronting it — for free! Sweet! Last time I bought an SSL cert it cost $200 for one year, and involved annoying phone verification.
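
The whole thing boils down to a couple of commands. Here's a rough sketch using the webroot method, with a placeholder hostname (on Xenial the client package is still named letsencrypt):

sudo apt-get install letsencrypt
sudo letsencrypt certonly --webroot -w /var/www/html -d nas.example.com

# then point nginx at the issued cert and reload:
#   ssl_certificate     /etc/letsencrypt/live/nas.example.com/fullchain.pem;
#   ssl_certificate_key /etc/letsencrypt/live/nas.example.com/privkey.pem;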

One really neat feature this file server provides is the ability to present ZFS snapshots to Windows clients as “Previous Versions” using volume shadow copy. ZFS, as a copy-on-write filesystem, makes creating snapshots trivial, and a few Samba config lines present them to Windows under the Previous Versions tab in file info. A cron job on the server generates snapshots every 8 hours. Samba config snippet and the cron job are below.
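
Here’s roughly what that looks like (the share name and path are placeholders, but the snapshot name format matches the snap_YYYY-MM-DD-HHMM names in the listing further down):

# smb.conf share definition
[data]
    path = /lunix1/data1
    read only = no
    vfs objects = shadow_copy2
    shadow: snapdir = .zfs/snapshot
    shadow: sort = desc
    shadow: format = snap_%Y-%m-%d-%H%M

# crontab entry: snapshot every 8 hours (the % signs must be escaped in cron)
0 */8 * * * /sbin/zfs snapshot lunix1/data1@snap_$(date +\%Y-\%m-\%d-\%H\%M)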

Here’s what a file with previous versions looks like — I created it at 23:33, snapshotted it, then edited it again at 23:34:

The snapshots are visible using this command:

root@lunix:~# zfs list -t snapshot
NAME                                USED  AVAIL  REFER  MOUNTPOINT
lunix1/data1@snap_2017-02-09-0113   284K      -  3.02M  -
lunix1/data1@snap_2017-02-09-0432  4.84M      -   482G  -
lunix1/data1@snap_2017-02-09-0433  4.79M      -   489G  -

I’ve only had this thing up a couple of days but so far I’m pretty happy with how it turned out overall. If I had to do it again I’d probably opt for 2x 6TB or 8TB drives rather than 4x 4TB drives — I don’t need the striping performance, really, and two fewer spinning drives would reduce overall power consumption, failure probability, and cost, and would open up more options for smaller cases — the case I have is nice, but it’s a full desktop chassis.

Thanks for reading.

My brief love affair with Synology

For several years I’d been mourning the loss of my Linux file server and contemplating either building a new one or buying a home NAS. About 3 weeks ago I took the plunge and ordered a Synology DS416 from Newegg.

Setup was very easy, though I was surprised to learn the device doesn’t work at all without a drive in it. I expected there would be a built-in OS volume for running native services, but apparently everything gets copied onto the data volume once the drives are inserted.

As for the drives themselves, I chose 4 HGST 4TB Deskstars. I’ve been burned by all of the major disk manufacturers, but Backblaze’s data makes a good case for HGST over other vendors. In a RAID 5 this gave me about 12 TB usable space, more than enough for my needs.

With the disks in place and the OS installed I started copying my files over. After some mucking around with creating my own share, I ended up using the default photo and video shares, since it seems those are where you must put things to use the PhotoStation and VideoStation features — two of the main reasons I went with Synology in the first place. I found this pretty obnoxious, as it meant I had to split my home videos out into a separate directory.

Once I got everything copied over, which took about two days, I started noticing the real problems. The main issue was the overall performance of the system. The only thing it did reasonably well was read and write a single large file — pure sequential I/O. Indexing photos, videos or music took ages, and from what I could tell, unindexed files wouldn’t appear at all to non-admin users, meaning I’d copy over 1000 files but my wife could only see 400 of them on a CIFS mount unless I made her an admin. Plex said outright that the CPU was too slow to transcode anything, and it wouldn’t even allow me to play a 4 MB avi on my PS4 via DLNA. I knew when I bought it that the 416 wasn’t ideal for transcoding, but I didn’t realize that almost every single thing I wanted to do would require transcoding.

It wasn’t a good file server, photo server or video server, but at least it let me install Plex, right? Well, Plex on the Synology crashed every time I added or deleted a library. It got so annoying I added a “scheduled task” to restart it every minute. That was on top of the “server is too slow” error.

So it wasn’t good at any of these things, but at least it was an appliance so I didn’t have to manage it myself, right? No. I had to log in via ssh to fix file permissions with manual chown and chmod commands to get Plex and the PhotoStation apps to even see my media at all. And the Synology appeared to be running Postgres 9.3 internally (which warmed my former-Postgres-DBA heart), but the log file was filled with “slow query” alerts — queries that returned a single element took 13 seconds! Once I started mucking around with postgresql.conf I knew it was time to call it quits. The final nail came when I found myself compiling a custom build of ffmpeg so I could re-encode all my home movies into a format that Synology or Plex might actually be able to handle.

To bring this tale to an abrupt ending, I RMA’d the Synology and returned it to Newegg last night. I kept the disks and plan to build my own Ubuntu system with an Intel CPU, which surprisingly costs about the same as the DS416, and much less than any of the higher-end models. A MicroATX motherboard, Kaby Lake i3, 8 GB of RAM, a 550-watt power supply AND a 275 GB M.2 SSD come out to nearly the same price as the Synology DS416.

How (the hell) do you set up Splunk Cloud on Linux?

This took me way longer than I would’ve thought, mostly due to horrible documentation. Here’s my TL;DR version:

  1. Sign up for Splunk Cloud
  2. Download and install the forwarder binary from here.
  3. Log in here and note the URL of your Splunk instance:

    (Screenshot of the Splunk Cloud console showing the instance URL.) For this example, assume the URL is https://prd-p-jxxxxxxxx.splunk6.splunktrial.com.

  4. Make sure your instances can connect to port tcp/9997 on your input host. Your input host is the hostname from above with “input-” prepended to it, so in our example the input host is input-prd-p-jxxxxxxxx.splunk6.splunktrial.com. To ensure you can connect, try telnet input-prd-p-jxxxxxxxx.splunk6.splunktrial.com 9997. If you can’t connect, you may need to adjust your firewall rules / security groups to allow outbound tcp/9997.

Below are the actual commands I used to get data into our Splunk Cloud trial instance:

$ curl -O http://download.splunk.com/products/splunk/releases/6.2.0/universalforwarder/linux/splunkforwarder-6.2.0-237341-linux-2.6-amd64.deb
$ sudo dpkg -i splunkforwarder-6.2.0-237341-linux-2.6-amd64.deb
$ sudo /opt/splunkforwarder/bin/splunk add forward-server input-prd-p-jxxxxxxxx.splunk6.splunktrial.com:9997
This appears to be your first time running this version of Splunk.
Added forwarding to: input-prd-p-jxxxxxxxx.splunk6.splunktrial.com:9997.
$ sudo /opt/splunkforwarder/bin/splunk add monitor '/var/log/postgresql/*.log'
Added monitor of '/var/log/postgresql/*.log'.
$ sudo /opt/splunkforwarder/bin/splunk list forward-server
Splunk username: admin
Password:
Active forwards:
	input-prd-p-jxxxxxxxx.splunk6.splunktrial.com:9997
Configured but inactive forwards:
	None
$ sudo /opt/splunkforwarder/bin/splunk list monitor
Monitored Directories:
		[No directories monitored.]
Monitored Files:
	/var/log/postgresql/*.log
$ sudo /opt/splunkforwarder/bin/splunk restart

Can I create an EC2 MySQL slave to an RDS master?

No.

Here’s what happens if you try:

mysql> grant replication slave on *.* to 'ec2-slave'@'%';
ERROR 1045 (28000): Access denied for user 'rds_root'@'%' (using password: YES)
mysql> update mysql.user set Repl_slave_priv='Y' WHERE user='rds_root' AND host='%';
ERROR 1054 (42S22): Unknown column 'ERROR (RDS): REPLICA SLAVE PRIVILEGE CANNOT BE GRANTED OR MAINTAINED' in 'field list'
mysql>

Note: this is for MySQL 5.5, which is unfortunately what I’m currently stuck with.

The m3.medium is terrible

I’ve been doing some testing of various instance types in our staging environment, originally just to see if Amazon’s t2.* line of instances is usable in a real-world scenario. In the end, I found that not only are the t2.mediums viable for what I want them to do, but they’re far better suited than the m3.medium, which I wouldn’t use for anything expected to handle any real load.

Here are the conditions for my test:

  • Rails application (unicorn) fronted by nginx.
  • The number of unicorn processes is controlled by chef, currently set to (CPU count * 2), so a 2 CPU instance has 4 unicorn workers.
  • All instances are running Ubuntu 14.04 LTS (AMI ami-864d84ee for HVM, ami-018c9568 for paravirtual) with kernel 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64.
  • The test used loader.io to simulate 65 concurrent clients hitting the API (adding products to cart) as fast as possible for 600 seconds (10 minutes).
  • The instances were all behind an Elastic Load Balancer, which routes traffic based on its own algorithm (supposedly the instance with the lowest CPU load gets the next request).

The below charts summarize the findings.

average nginx $request_time

This chart shows each server’s performance as reported by nginx. The values are the average time to service each request and the standard deviation. While I expected the m3.large to outperform the m3.medium, I didn’t expect the difference to be so dramatic. The performance of the t2.medium is the real surprise, however.

#	_sourcehost	_avg	_stddev
1	m3.large	6.30324	3.84421
2	m3.medium	15.88136	9.29829
3	t2.medium	4.80078	2.71403

These charts show the CPU activity for each instance during the test (data as per CopperEgg).

CPU activity during the test: m3.large
CPU activity during the test: t2.medium
CPU activity during the test: m3.medium

The m3.medium has a huge amount of CPU steal, which I’m guessing accounts for its horrible performance. Anecdotally, in my own experience the m3.medium is far more prone to CPU steal than other instance types. Moving from m3.medium to c3.large (essentially the same instance with 2 CPUs) eliminates the CPU steal issue. However, since the t2.medium performs as well as the c3.large or m3.large and costs half as much as the c3.large (or nearly 1/3 of the m3.large), I’m going to try running most of my backend fleet on t2.medium.

I haven’t mentioned the credits system the t2.* instances use for burstable performance, and that’s because my tests didn’t make much of a dent in the credit balance for these instances. The load test was 100x what I expect to see in normal traffic patterns, so the t2.medium with burstable performance seems like an ideal candidate. I might add a couple c3.large to the mix as a backstop in case the credits were depleted, but I don’t think that’s a major risk – especially not in our staging environment.
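
If you want to keep an eye on the balance yourself, CloudWatch exposes it as the CPUCreditBalance metric; something like this (instance ID and time range are placeholders):

aws cloudwatch get-metric-statistics --namespace AWS/EC2 --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-12345678 \
  --start-time 2014-07-01T00:00:00Z --end-time 2014-07-02T00:00:00Z \
  --period 300 --statistics Average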

Edit: I didn’t include the numbers, but the performance seemed to be consistent whether on HVM or paravirtual instances.

Using OpenSWAN to connect two VPCs in different AWS regions

Amazon has a pretty decent writeup on how to do this (here), but in trying to establish Postgres replication across regions, I found some weird behavior where I could connect to the port directly (telnet to 5432) but psql (or pg_basebackup) didn’t work. tcpdump showed this:

16:11:28.419642 IP 10.121.11.47.35039 > 10.1.11.254.postgresql: Flags [P.], seq 9:234, ack 2, win 211, options [nop,nop,TS val 11065893 ecr 1811434], length 225
16:11:28.419701 IP 10.121.11.47.35039 > 10.1.11.254.postgresql: Flags [P.], seq 9:234, ack 2, win 211, options [nop,nop,TS val 11065893 ecr 1811434], length 225
16:11:28.421186 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [.], ack 234, win 219, options [nop,nop,TS val 1811520 ecr 11065893,nop,nop,sack 1 {9:234}], length 0
16:11:28.425273 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [P.], seq 2:1377, ack 234, win 219, options [nop,nop,TS val 1811522 ecr 11065893], length 1375
16:11:28.425291 IP 10.1.96.20 > 10.1.11.254: ICMP 10.121.11.47 unreachable - need to frag (mtu 1422), length 556
16:11:28.697397 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [P.], seq 2:1377, ack 234, win 219, options [nop,nop,TS val 1811590 ecr 11065893], length 1375
16:11:28.697438 IP 10.1.96.20 > 10.1.11.254: ICMP 10.121.11.47 unreachable - need to frag (mtu 1422), length 556
16:11:29.241311 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [P.], seq 2:1377, ack 234, win 219, options [nop,nop,TS val 1811726 ecr 11065893], length 1375
16:11:29.241356 IP 10.1.96.20 > 10.1.11.254: ICMP 10.121.11.47 unreachable - need to frag (mtu 1422), length 556
16:11:30.333438 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [P.], seq 2:1377, ack 234, win 219, options [nop,nop,TS val 1811999 ecr 11065893], length 1375
16:11:30.333488 IP 10.1.96.20 > 10.1.11.254: ICMP 10.121.11.47 unreachable - need to frag (mtu 1422), length 556
16:11:32.513418 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [P.], seq 2:1377, ack 234, win 219, options [nop,nop,TS val 1812544 ecr 11065893], length 1375
16:11:32.513467 IP 10.1.96.20 > 10.1.11.254: ICMP 10.121.11.47 unreachable - need to frag (mtu 1422), length 556
16:11:36.881409 IP 10.1.11.254.postgresql > 10.121.11.47.35039: Flags [P.], seq 2:1377, ack 234, win 219, options [nop,nop,TS val 1813636 ecr 11065893], length 1375
16:11:36.881460 IP 10.1.96.20 > 10.1.11.254: ICMP 10.121.11.47 unreachable - need to frag (mtu 1422), length 556

After quite a bit of Googling and mucking around in network ACLs and security groups, the fix ended up being MSS clamping. In the capture above, the Postgres server keeps retransmitting the same 1375-byte segment, whose packets are too big for the tunnel’s 1422-byte MTU (hence the ICMP “need to frag” messages), and path MTU discovery apparently never kicks in. Clamping the MSS on forwarded SYNs makes both ends negotiate segments small enough to fit through the tunnel:

iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1500

(The above two commands need to be run on both OpenSwan boxes.)
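
A quick way to confirm you’re dealing with a path-MTU problem in the first place (and that the clamp fixed it) is to probe the tunnel from one end; the IPs here are the ones from the capture above:

tracepath -n 10.1.11.254          # reports the discovered path MTU hop by hop
ping -M do -s 1400 10.1.11.254    # -M do forbids fragmentation; adjust -s to find the cutoff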

OpenVPN CLI Cheat Sheet

Add a regular user called testing

/usr/local/openvpn_as/scripts/sacli -u testing -k type -v user_connect UserPropPut

Add an autologin user called knock

/usr/local/openvpn_as/scripts/sacli -u knock -k prop_autologin -v true UserPropPut

Add an admin user called admin

/usr/local/openvpn_as/scripts/sacli -u admin -k prop_superuser -v true UserPropPut; /etc/init.d/openvpnas restart

Allow user testing to networks 192.168.0.0/24 and 10.0.0.0/16 via NAT

/usr/local/openvpn_as/scripts/sacli -u testing -k access_to.0 -v +NAT:192.168.0.0/24 UserPropPut; /usr/local/openvpn_as/scripts/sacli -u testing -k access_to.1 -v +NAT:10.0.0.0/16 UserPropPut; /usr/local/openvpn_as/scripts/sacli start

Allow user testing to networks 192.168.0.0/24 and 10.0.0.0/16 via ROUTE

/usr/local/openvpn_as/scripts/sacli -u testing -k access_to.0 -v +ROUTE:192.168.0.0/24 UserPropPut; /usr/local/openvpn_as/scripts/sacli -u testing -k access_to.1 -v +ROUTE:10.0.0.0/16 UserPropPut; /usr/local/openvpn_as/scripts/sacli start

Remove access to network entry 0 and 1 for user testing

/usr/local/openvpn_as/scripts/sacli -u testing -k access_to.0 UserPropDel; /usr/local/openvpn_as/scripts/sacli -u testing -k access_to.1 UserPropDel; /usr/local/openvpn_as/scripts/sacli start

Get installer with profile for user, in this case autologin

./sacli --user testing AutoGenerateOnBehalfOf
./sacli --user testing --key prop_autologin --value true UserPropPut
./sacli --itype msi --autologin -u testing -o installer_testing/ GetInstallerEx

Get separate certificate files for user, for open source applications

./sacli -o ./targetfolder --cn test Get5

Get unified (.ovpn file) for user, for Connect Client for example

./sacli -o ./targetfolder --cn test Get1

Show all users in user database with all their properties

./confdba -u -s

Show only a specific user in user database with all properties

./confdba -u --prof testuser -s

Remove a user from the database, revoke his/her certificates, and then kick him/her off the server

./confdba -u --prof testing --rm
./sacli --user testing RevokeUser
./sacli --user testing DisconnectUser

Set a password on a user from the command line, when using LOCAL authentication mode:

./sacli --user testing --new_pass passwordgoeshere SetLocalPassword

Enable Google Authenticator for a user:

./sacli --key vpn.server.google_auth.enable --value true ConfigPut

 

Create CloudWatch alerts for all Elastic Load Balancers

I manage a bunch of ELBs, but we were missing an alert on a pretty basic metric: how many errors the load balancer was returning. Rather than wade through the UI to add these alerts, I figured it would be easier to do it via the CLI.

Assuming aws-cli is installed and the ARN for your SNS topic (in my case, just an email alert) is $arn:

for i in `aws elb describe-load-balancers | grep LoadBalancerName |
perl -ne 'chomp; my @a=split(/\s+/); $a[2] =~ s/[",]//g ; print "$a[2] ";' ` ;
do aws cloudwatch put-metric-alarm --alarm-name "$i ELB 5XX Errors" --alarm-description \
"High $i ELB 5XX error count" --metric-name HTTPCode_ELB_5XX --namespace AWS/ELB \
--statistic Sum --period 300 --evaluation-periods 1 --threshold 50 \
--comparison-operator GreaterThanThreshold --dimensions Name=LoadBalancerName,Value=$i \
--alarm-actions $arn --ok-actions $arn ; done

That huge one-liner creates a CloudWatch notification that sends an alarm when the number of 5XX errors returned by the ELB is greater than 50 over 5 minutes, and sends an “ok” message via the same SNS topic. The for loop creates/modifies the alarm for every ELB.
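
These days you can also skip the grep/perl gymnastics and let the CLI do the filtering itself; something like this should produce the same list of names:

aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text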

More info on put-metric-alarm available in the AWS docs.

Super quick wordpress exploit stopper

I got an email yesterday from my host (DigitalOcean) saying that I was running a phishing website. Well, I’m not, but I quickly guessed what had happened: my WordPress got hacked. This is just one of the risks of running silly little PHP apps. I logged in, deleted the themes directories, reinstalled clean ones, and ensured this won’t happen again by doing the following:

  • useradd apache_ro
  • chown -R apache_ro:apache_ro $WP/wp-content/themes

Now apache can’t write to those directories. This means you can’t update WordPress via the web UI, but I’m ok with that.
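
When I do want to update a theme, I can just hand ownership back temporarily and then re-lock it. The web server user below is an assumption (www-data on Debian/Ubuntu, apache on RedHat-style systems):

chown -R www-data:www-data $WP/wp-content/themes    # let the web UI update things
# ...run the update...
chown -R apache_ro:apache_ro $WP/wp-content/themes  # lock it back down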

The not-so-secret secret to Postgres performance

I manage a bunch of Postgres DBs, and one of the things I almost always forget to do when setting up a new one is bump the readahead up from the default of 256. I created this script and run it out of /etc/rc.local (and sometimes cron it too). The 3 commands at the top are only really relevant on systems with “huge” memory – probably over 64 GB. We ran into some memory problems with CentOS 6 on a box with 128 GB of RAM, which we ended up working around by reinstalling CentOS 5, but the /sys/kernel/mm/redhat_transparent_hugepage/ options below should fix them in 6.x (we haven’t actually tried it on that DB, but we haven’t seen any problems on other large DBs).

#!/bin/bash

echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
echo never >/sys/kernel/mm/redhat_transparent_hugepage/defrag

echo 1 > /proc/sys/vm/dirty_background_ratio

BLOCK_DEVICES=`perl -ne 'chomp; my @a=split(/[\s]+/); next unless $a[4]; next if ($a[4] =~ /sd[a-z]+[\d]+/); print "$a[4]\n";' /proc/partitions `

logger -t tune_blockdevs "Block devices matching: $BLOCK_DEVICES"

for DEV in $BLOCK_DEVICES;  do
        logger -t tune_blockdevs "Setting IO params for $DEV"
### Uncomment the below line if using SSD
#        echo "noop" > /sys/block/$DEV/queue/scheduler
        /sbin/blockdev --setra 65536 /dev/$DEV
done
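
To check that the readahead actually took, blockdev can read the value back (device name is just an example):

/sbin/blockdev --getra /dev/sda    # should print 65536 after the script has run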