NILFS – A File system to make SSDs scream… in pain?

So I got a 128 GB Corsair SSD and put it in my laptop at work. After some fiddling I copied my old disk over to the new one by booting Knoppix and running dd if=/dev/sda of=/dev/sdb bs=4k conv=notrunc,noerror. Everything is a lot faster, but what’s really fast now is my Windows XP VM. Anyway, I was looking into other filesystems to try on the SSD to improve speed further, and I found an article claiming that NILFS is the best choice. So I decided to test it using the same ghetto test I always use for filesystem performance: dd!
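
Spelled out with comments, the clone boiled down to this (run from Knoppix, with the old disk on /dev/sda and the new SSD on /dev/sdb; the conv=sync note is a general caution, not something I actually added):

# Clone the old disk (sda) onto the SSD (sdb), 4 KB at a time
# conv=notrunc : don't truncate the output device
# conv=noerror : keep going past read errors instead of aborting
# (pairing noerror with conv=sync would pad short reads so offsets stay aligned)
dd if=/dev/sda of=/dev/sdb bs=4k conv=notrunc,noerror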

Drive info:

Model: ATA CORSAIR CMFSSD-1 (scsi)
Disk /dev/sda: 128GB

The nilfs version in use is whatever’s in yum:

[root@ehoffman ~]# rpm -qai nilfs-utils
Name        : nilfs-utils                  Relocations: (not relocatable)
Version     : 2.0.14                            Vendor: Fedora Project
Release     : 2.fc11                        Build Date: Thu 30 Jul 2009 07:16:08 PM EDT
Install Date: Tue 27 Oct 2009 04:18:28 PM EDT      Build Host: xenbuilder4.fedora.phx.redhat.com
Group       : System Environment/Base       Source RPM: nilfs-utils-2.0.14-2.fc11.src.rpm
Size        : 211949                           License: GPLv2+
Signature   : RSA/8, Thu 30 Jul 2009 09:25:18 PM EDT, Key ID 1dc5c758d22e77f2
Packager    : Fedora Project
URL         : http://www.nilfs.org
Summary     : Utilities for managing NILFS v2 filesystems
Description :
Userspace utilities for creating and mounting NILFS v2 filesystems.
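
(For the record, getting this onto Fedora 11 should just be a matter of something like the following; the nilfs2 module name matches the filesystem type used in the mounts below.)

# Pull in the userspace tools from the Fedora repos
yum install nilfs-utils
# Make sure the nilfs2 kernel module is available (mount will usually load it automatically)
modprobe nilfs2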

NILFS volume on /dev/sda9, ext3 on /dev/sda8

/dev/sda8 on /docs type ext3 (rw)
/dev/sda9 on /nilfs type nilfs2 (rw,gcpid=3711)
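
I didn’t capture the setup commands, but creating and mounting the two test volumes amounts to roughly this (partitions as above):

# Make the two test filesystems
mkfs.ext3 /dev/sda8
mkfs.nilfs2 /dev/sda9
# Mount them where the tests below expect them
mkdir -p /docs /nilfs
mount -t ext3 /dev/sda8 /docs
mount -t nilfs2 /dev/sda9 /nilfs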

I found this pretty unsettling when I mounted the nilfs volume:

[root@ehoffman ~]# mount -t nilfs2 /dev/sda9 /nilfs/
mount.nilfs2: WARNING! - The NILFS on-disk format may change at any time.
mount.nilfs2: WARNING! - Do not place critical data on a NILFS filesystem.

Now write a 100 MB file and a 1.0 GB file to the ext3 volume, then to the NILFS volume:

[root@ehoffman ~]# dd if=/dev/zero of=/docs/zeros.dat bs=4k count=25600
25600+0 records in
25600+0 records out
104857600 bytes (105 MB) copied, 0.434741 s, 241 MB/s
[root@ehoffman ~]# rm -f /docs/zeros.dat
[root@ehoffman ~]# dd if=/dev/zero of=/docs/zeros.dat bs=4k count=256000
256000+0 records in
256000+0 records out
1048576000 bytes (1.0 GB) copied, 19.6931 s, 53.2 MB/s
[root@ehoffman ~]# rm -f /docs/zeros.dat
[root@ehoffman ~]# dd if=/dev/zero of=/docs/zeros.dat bs=4k count=256000
256000+0 records in
256000+0 records out
1048576000 bytes (1.0 GB) copied, 12.7625 s, 82.2 MB/s
[root@ehoffman ~]# rm -f /docs/zeros.dat
[root@ehoffman ~]# dd if=/dev/zero of=/nilfs/zeros.dat bs=4k count=25600
25600+0 records in
25600+0 records out
104857600 bytes (105 MB) copied, 5.42617 s, 19.3 MB/s
[root@ehoffman ~]# dd if=/dev/zero of=/nilfs/zeros.dat bs=4k count=256000
256000+0 records in
256000+0 records out
1048576000 bytes (1.0 GB) copied, 47.4966 s, 22.1 MB/s
[root@ehoffman ~]# rm -f /nilfs/zeros.dat

With the 100 MB file, ext3 writes at 200+ MB/s, and with the 1 GB file it lands somewhere in the 50–80 MB/s range across the two runs (I think the SATA controller on my laptop is SATA I, not SATA II). NILFS seems to hover around 20–25 MB/s regardless of file size. Anyway, based on this I guess I’ll be staying with ext3/4 for the foreseeable future.
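
If I want to confirm the SATA I hunch, the negotiated link speed should show up in the kernel log with something like this:

# 1.5 Gbps means SATA I, 3.0 Gbps means SATA II
dmesg | grep -i "SATA link up"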

2 Replies to “NILFS – A File system to make SSDs scream… in pain?”

  1. While NILFS is indeed probably not ready for prime time, I don’t think your methodology is entirely valid.

    When copying files, a 105 MB write on ext3 will probably just queue dirty pages in memory, so dd returns before the data has actually been flushed to disk, whereas NILFS is probably super careful about not returning until all the data is on disk. This is borne out by the fact that both the 105 MB file and the 1 GB file on NILFS are written at roughly the same rate.

    I’d guess that as NILFS becomes more production ready, async operation will become an option, in a similar manner to ext3/ext4.

    Other things you might want to try (a rough sketch follows this list):
    – Using a larger block size. 4K is small for bulk data copying, and NILFS should excel at bulk writes since it does all its writes sequentially.
    – After copying the 1 GB file, run the sync command and see how long that takes on each file system.
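
    Something along these lines (conv=fdatasync makes dd wait until the data actually hits the disk before it reports a rate; the 1M/1000 numbers are just an example):

    # Bigger blocks, and flush to disk before dd prints its numbers
    dd if=/dev/zero of=/docs/zeros.dat bs=1M count=1000 conv=fdatasync
    dd if=/dev/zero of=/nilfs/zeros.dat bs=1M count=1000 conv=fdatasync
    # Or time how long a sync takes after a plain buffered write
    dd if=/dev/zero of=/docs/zeros.dat bs=1M count=1000; time sync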

    The article you reference uses the Postmark benchmark, which probably does a file sync for each file (email servers are quite conservative when saving user data).

    Because NILFS is a log-structured file system, files can become fragmented, which can hurt read performance on rotating hard disks but is not a problem on flash-based SSDs. Of course, on an SSD random writes are also not the problem they are on hard disks, so that benefit of NILFS is lost as well.

    If you’re not interested in the other NILFS features (the main one being continuous snapshots) then perhaps it really isn’t for you. But don’t discount its performance until you’re sure your benchmark is comparing apples to apples. Personally, I would simply use ext3 or ext4 on an SSD, if I could persuade my boss to buy me one 🙂

    (Note: I’m not affiliated in any way with the NILFS project.)

    1. Thanks for the info. I realize NILFS’s most useful feature seems to be the checkpointing stuff, but NILFS was also what everything seemed to cite when I was looking into the ideal filesystem for an SSD. I have a feeling ext4 is the best choice (my / partition on the SSD is already ext4), but my worry is that if something goes wrong, nilfs is too new for most recovery tools (e.g. older Knoppix discs) to recognize it.
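
      (From skimming the nilfs-utils man pages, the checkpoint stuff looks roughly like this; I haven’t actually tried it, and the checkpoint number is just an example:)

      # List the checkpoints NILFS creates automatically
      lscp /dev/sda9
      # Promote checkpoint 5 to a snapshot so the cleaner won't reclaim it
      chcp ss /dev/sda9 5
      # Mount that snapshot read-only somewhere else
      mkdir -p /snapshot
      mount -t nilfs2 -r -o cp=5 /dev/sda9 /snapshot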

      My little dd test is certainly not exhaustive, but I’ve found it to be a pretty quick way to compare speeds across disks and filesystems. I’ll have to try other block sizes and time the sync like you suggested.
