Btrfs: RAID Setup

I recently became very interested in LVM and its ability to have a volume that spans multiple drives. I was just about to do an LVM setup when I began researching btrfs in more depth. It is rumored to be the eventual replacement for ext4, the default Linux filesystem (in most cases). It also happens to support volumes that span multiple devices (RAID, albeit software RAID), along with a whole list of other functionality.

Being a person who really enjoys trying new, cool, and often unstable things (who doesn’t love a good learning experience?), I decided to set up a raid 5 using btrfs with three whopping one-terabyte drives. If all goes well, I should be able to lose one drive and still have 2 terabytes ( [3-1]*1000 = 2000 ) of fully functional storage.
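The usable-capacity math behind that is simple: in raid 5, one drive's worth of space goes to parity, so N equal drives leave N-1 drives' worth of usable storage. A quick sketch of the arithmetic:

```shell
# raid 5 usable capacity: one drive's worth of space is consumed by parity,
# so N equal drives yield (N - 1) * drive_size of usable storage.
drives=3
size_gb=1000            # per-drive capacity in GB
usable=$(( (drives - 1) * size_gb ))
echo "${usable} GB usable"   # → 2000 GB usable
```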

Getting Started

Creating a btrfs filesystem is as simple as creating an ext4 filesystem (or any other filesystem for that matter). You use the mkfs command. However, I created a raid setup, so I needed a few more parameters. Here’s what I used.

mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd

Well that was easy. What’d we just do?

mkfs.btrfs

Makes a btrfs filesystem. Duh.

-m raid5

Sets the metadata up to use raid 5

-d raid5

Sets the data up to use raid 5

/dev/sdb /dev/sdc /dev/sdd

Spans our volume across these devices

With that, you should now [very quickly] have a new raid 5 (or whatever raid level you selected). To mount it, run the mount command on any of the raw devices in your raid.

mount -t btrfs /dev/sdb /mnt/oh_heyyy
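To make the mount stick across reboots, you can add an fstab entry. Mounting by UUID is safer than by /dev/sdb, since device names can shuffle between boots. The mount point below matches my example above, and the UUID is a placeholder you'd fill in yourself:

```shell
# Find the filesystem UUID (any member device of the raid reports the same one).
blkid /dev/sdb

# Then add a line like this to /etc/fstab (substitute your actual UUID):
# UUID=<your-uuid-here>  /mnt/oh_heyyy  btrfs  defaults  0  0
```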

Compression

Btrfs supports various kinds of seamless compression. The default is none, since compression causes a performance hit (naturally). I thought I’d give it a try anyways. I set up lzo compression (supposedly the fastest compression, though less effective) about halfway through my sync job (forgot to do it initially). The original total size of the files in each home directory came to 386 GB (lots of users for a home system). The end result after compression was 377 GB, so I ended up saving 9 GB of space while still getting an amazing transfer rate (see the benchmarks section). Keep in mind though that I enabled compression after I had already synced a good 100 GB of files, so a good portion of that data isn’t compressed. Despite that, 9 GB of space isn’t too bad, especially given the performance.
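If, like me, you enable compression after data is already on the volume, the existing files stay uncompressed; only new writes get compressed. Btrfs can recompress existing files in place via defragment. A sketch, assuming the raid is mounted at /home:

```shell
# Enable lzo compression on an already-mounted btrfs volume.
mount -o remount,compress=lzo /home

# Recompress existing files in place (-r = recursive, -clzo = compress with lzo).
btrfs filesystem defragment -r -clzo /home
```

The defragment pass can take a while on a few hundred gigabytes, but it would have picked up that first 100 GB I synced before enabling compression.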

Handy Commands

Here are the commands I’m using most frequently so far.

  • btrfs fi[lesystem] show: Shows a list of filesystems and their corresponding devices.

  • btrfs fi[lesystem] label <dev> <label>: Changes the label of the specified raid device.

  • btrfs fi[lesystem] df /path/to/mount: Displays accurate usage data for the mounted volume (plain df can be misleading with btrfs).
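A couple more that I expect to need eventually: growing the array and rebalancing. These assume the raid is mounted at /home, and /dev/sde is a hypothetical new drive:

```shell
# Add a fourth drive to the existing (mounted) array...
btrfs device add /dev/sde /home

# ...then rebalance so data and parity are spread across all four drives.
btrfs balance start /home
```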

Benchmarks

I know there are other ways to benchmark storage I/O, but I wanted to see what the maximum write speed would be, and since I don’t have a second raid set up to feed data in at a high transfer rate, my fastest option at this point is /dev/zero. Here’s my setup (again).

  • My btrfs raid 5 is mounted at /home/. The raid is made up of three 1 TB Western Digital Green drives, each at 7200 rpm, and it is mounted with "-o compress=lzo".

  • The OS itself ( / ) is installed on a single HDD, a 7200 rpm 500 GB Maxtor (slightly old).

Btrfs Raid Performance

First, we test writing 2000 1M blocks of zeros to /home/, the raid.

[root@zion ~]# dd if=/dev/zero of=/home/bench.test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 6.24284 s, 336 MB/s

336 MB/s! Not bad for a homemade drive array using software raid and some desktop drives.
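One caveat worth noting: since the volume is mounted with lzo compression, a stream of zeros compresses down to almost nothing, which inflates the apparent write speed. For a less flattering (and more honest) number, you could write incompressible data instead, and have dd flush to disk before it reports. A sketch (bench.random is just an example filename):

```shell
# Pre-generate incompressible data so /dev/urandom's own speed isn't the bottleneck...
dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=2000

# ...then time the actual write; conv=fdatasync forces a flush to disk
# before dd prints its transfer rate.
dd if=/tmp/rand.bin of=/home/bench.random bs=1M count=2000 conv=fdatasync
```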

Non-Raid Single HDD Performance

Running the same test, but writing to /root/ on the single HDD, we get…​

[root@zion ~]# dd if=/dev/zero of=/root/bench.test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 30.5043 s, 68.7 MB/s

Conclusion

I might not be well versed enough in the area of storage, but setting up a btrfs raid was really easy. I did have to learn the different raid levels to decide which one I wanted, but I would have done that anyways. The filesystem (again, spanning 3 TB) was created ridiculously fast (as fast as I could hit the enter key). I performed an rsync from my old drive (a single 500 GB HDD, 7200 rpm, 3 Gbit/s) to the new raid volume (2 TB across 3 HDDs, 7200 rpm, 6 Gbit/s) and got about a 31 MB/s transfer rate, which is the best my single 500 GB drive has ever done anyways, so at least btrfs can keep up (not that that’s particularly amazing). I was also very impressed by the 336 MB/s write speed of the raid array. Perhaps I’m ignorant at this point in time, but that seems pretty impressive for some cheap off-the-shelf desktop drives. They’re not even 10k rpm, let alone 15k.

I would certainly say that from a performance perspective, btrfs is ready for home use. It may be a little new for enterprise use, but that’s up to the enterprise. For me though, I will keep using it until I see any problems. Even then, I’ll troubleshoot and then probably continue using it.

Finally, I have to give some serious credit to the folks who wrote the b-tree filesystem (oddly enough, Oracle sponsored it). It’s this kind of open source that drives the world of technology (not that others don’t, of course) to expand beyond "what the consumer wants". You’re innovating in the coolest ways and, best of all, you’re making it freely available. Many thanks!