Simple RAID Configuration with btrfs on openSUSE

This article is a translation of my original article:

 

 

* Translated automatically by Google.
* Please note that some links or referenced content in this article may be in Japanese.

 

by bokumin

 


Introduction

 

I will leave this here as a memo on how to build a RAID array using the btrfs file system.
This method is much easier to set up than a conventional RAID configuration with mdadm and has fewer restrictions, so I hope it will be helpful.
This time we will focus mainly on RAID1 and, at the end, also touch on RAID0 and RAID10 (1+0).
*For safety, please make a backup before proceeding.

A brief explanation of RAID can be found below.

 

What are RAID0, RAID1, and RAID10?

 

RAID0 (Striping)
Requires two or more disks. There is no redundancy, but read/write speed improves and capacity efficiency is 100% (two 500 GB drives give 1 TB). If one disk fails the whole array is lost, so this level is best suited to temporary data, such as a work area for video editing.

RAID1 (Mirroring)
Writes the same data to multiple disks. Reads can be served from multiple disks, so read speed improves slightly, but capacity efficiency is only 50% and writes are slower than on a single disk because every write goes to both disks.

RAID10 (RAID1+0)
A combination of RAID1 and RAID0. At least four disks are required, but the two levels compensate for each other's weaknesses (redundancy plus speed). Capacity efficiency is 50%, the same as RAID1, but reads and writes are fast while the data stays protected.

 

 

Execution environment

 

  • OS: openSUSE Tumbleweed (Kernel 6.17.0-2-default)
  • Storage: Seagate ST500LM000-1EJ162 x 2 (465.8GB each)

 

The two HDDs are configured as shown below. sda already holds the running system, with the root file system created on btrfs.
sdb is an empty HDD that I installed in order to build the RAID.

 

bokucipi@Tomoko:~> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 465.8G  0 disk 
├─sda1   8:1    0     8M  0 part 
└─sda2   8:2    0 465.8G  0 part /usr/local
                                 /srv
                                 /root
                                 /opt
                                 /boot/grub2/i386-pc
                                 /boot/grub2/x86_64-efi
                                 /.snapshots
                                 /
sdb      8:16   0 465.8G  0 disk 
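
As an optional extra check, btrfs filesystem show should list only sda2 as backing the root file system at this point:

sudo btrfs filesystem show /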

 

RAID1 construction procedure

 

We will configure RAID1 by adding sdb to the currently running sda.
First, check the configuration of the running system disk (sda).

 

sudo fdisk -l /dev/sda
Device     Start       End   Sectors   Size Type
/dev/sda1   2048     18431     16384     8M BIOS boot
/dev/sda2  18432 976773134 976754703 465.8G Linux filesystem

 

My environment consists of a boot partition and a btrfs root area.

 

Next, create the same partition layout on the new disk (sdb).
Creating a boot partition is not strictly required; the steps below include one so that booting is also redundant. If you do not need boot redundancy, the whole of sdb can be made into a single partition.

 

sudo fdisk /dev/sdb
# Operations inside fdisk

g          # create a GPT partition table
n          # create partition 1
[Enter]    # partition number (1)
[Enter]    # first sector (default)
+8M        # size: 8 MB
t          # change the partition type
4          # BIOS boot

n          # create partition 2
[Enter]    # partition number (2)
[Enter]    # first sector (default)
[Enter]    # last sector (whole disk)

w          # write and exit

 

After creating the partitions, check them with lsblk or similar.

 

lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 465.8G  0 disk 
├─sda1   8:1    0     8M  0 part 
└─sda2   8:2    0 465.8G  0 part /usr/local
                                 /srv
                                 /root
                                 /opt
                                 /boot/grub2/i386-pc
                                 /boot/grub2/x86_64-efi
                                 /.snapshots
                                 /
sdb      8:16   0 465.8G  0 disk 
├─sdb1   8:17   0     8M  0 part 
└─sdb2   8:18   0 465.8G  0 part 

 

The layout now matches sda, so next we add the new partition to the existing btrfs file system.

 

sudo btrfs device add /dev/sdb2 /

 

Convert the data and metadata profiles to RAID1 across both devices.

 

sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /

 

-dconvert converts the data profile and -mconvert the metadata profile, so each ends up in RAID1.
This process takes some time, but the system can still be used normally in the meantime; you can keep an eye on the load with the top command.
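
To watch the progress of the conversion itself, the balance status subcommand (the same one used later in the RAID0 section) can be run at any time:

sudo btrfs balance status /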

 

Once the command completes successfully, verify the result.

 

sudo btrfs filesystem usage /
Overall:
    Device size:                 931.51GiB
    Device allocated:            168.06GiB
    Device unallocated:          763.44GiB
    Device missing:                  0.00B
    Device slack:                  3.50KiB
    Used:                        164.31GiB
    Free (estimated):            383.38GiB      (min: 383.38GiB)
    Free (statfs, df):           383.38GiB
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              223.58MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,RAID1: Size:78.00GiB, Used:76.34GiB (97.88%)
   /dev/sda2      78.00GiB
   /dev/sdb2      78.00GiB

Metadata,RAID1: Size:6.00GiB, Used:5.81GiB (96.89%)
   /dev/sda2       6.00GiB
   /dev/sdb2       6.00GiB

System,RAID1: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/sda2      32.00MiB
   /dev/sdb2      32.00MiB

Unallocated:
   /dev/sda2     381.72GiB
   /dev/sdb2     381.72GiB

 

Make sure Data, Metadata, and System are all shown as RAID1.
Next, check the device statistics for errors.

 

sudo btrfs device stats /
[/dev/sda2].write_io_errs    0
[/dev/sda2].read_io_errs     0
[/dev/sda2].flush_io_errs    0
[/dev/sda2].corruption_errs  0
[/dev/sda2].generation_errs  0
[/dev/sdb2].write_io_errs    0
[/dev/sdb2].read_io_errs     0
[/dev/sdb2].flush_io_errs    0
[/dev/sdb2].corruption_errs  0
[/dev/sdb2].generation_errs  0

 

If all error counters are 0, it is normal. RAID1 creation is complete.
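
As an optional extra check, a scrub reads back all data and metadata and verifies the checksums on both mirrors; it is not required, but it is a common way to confirm the new RAID1 is healthy:

sudo btrfs scrub start /
sudo btrfs scrub status /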

 

Make both disks bootable

 

One of the reasons for building RAID1 is so that if one disk fails, you can boot from the other. Install GRUB on both disks so you can boot from either.

 

sudo grub2-install /dev/sda
sudo grub2-install /dev/sdb
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

 

Turn off the PC, disconnect sda, and check whether you can boot from sdb alone. You may need to adjust the boot order in the BIOS.
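
One caveat when testing this: with one device of the pair missing, btrfs may refuse a normal mount and require the degraded option. If the remaining disk does not boot on its own, appending rootflags=degraded to the linux line at the GRUB menu (press 'e' to edit it) usually allows it to come up; for a non-root file system, a manual mount along these lines should work:

# Mount the surviving half of the RAID1 pair with the degraded option (example device name)
sudo mount -o degraded /dev/sdb2 /mnt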

 

RAID0/RAID10(1+0) configuration procedure

 

btrfs also supports other RAID levels. Here are the configuration steps for RAID0 and RAID10 (1+0), the additional levels that can still be recommended as of 2025.

 

RAID0 (Striping)

 

This time we will build RAID0 by adding a new disk (sdd) to /media/movie (currently on sdc), which is used as a work area for video editing.
*With RAID0, boot redundancy is pointless (all data is lost if any one device fails), so I do not create a boot partition on sdd and use the whole disk for data.

 

# Create a partition on sdd
sudo fdisk /dev/sdd

# Inside fdisk
g          # create a GPT partition table
n          # new partition
[Enter]    # partition number (1)
[Enter]    # first sector (default)
[Enter]    # last sector (whole disk)
w          # write and exit

# Add sdd1 to the btrfs file system
sudo btrfs device add /dev/sdd1 /media/movie

# Convert to RAID0
sudo btrfs balance start -dconvert=raid0 -mconvert=raid0 /media/movie

# Check progress
sudo btrfs balance status /media/movie

 

RAID10(RAID1+0)

 

Assume that RAID1 has already been built on sda and sdb; if you have not created it yet, refer to the first part of this article.
With RAID10, if you also want boot redundancy it is better to create a boot partition on each disk, which is what I did this time. That way the system can still start even if several devices fail at once (within what RAID10 tolerates). After partitioning the new disks as described below, install GRUB on them as well:

 

sudo grub2-install /dev/sdc
sudo grub2-install /dev/sdd
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

 

Now let's actually build the RAID10.
Create the same partition layout as sda on sdc and sdd; one way to do this without repeating the interactive fdisk steps is sketched below. Then add the new partitions and convert them to RAID10.
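
A minimal sketch of copying the partition table, assuming the gptfdisk package (which provides sgdisk) is installed; the interactive fdisk procedure from the RAID1 section works just as well:

# Copy sda's GPT partition table to sdc and sdd, then give each copy new random GUIDs
sudo sgdisk --replicate=/dev/sdc /dev/sda
sudo sgdisk --randomize-guids /dev/sdc
sudo sgdisk --replicate=/dev/sdd /dev/sda
sudo sgdisk --randomize-guids /dev/sdd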

 

sudo btrfs device add /dev/sdc2 /
sudo btrfs device add /dev/sdd2 /
sudo btrfs balance start -dconvert=raid10 -mconvert=raid10 /

 

This is how to create a RAID configuration.

 

If you run into trouble

 

If the balance process stops midway, run the following commands.

 

# Resume
sudo btrfs balance resume /

# Cancel
sudo btrfs balance cancel /

 

If a disk fails, use the following commands (for RAID1).

 

# If the file system will not mount normally with a device missing, mount it with -o degraded first
# Add the replacement disk first (a two-device RAID1 cannot drop below two devices)
sudo btrfs device add /dev/sdb2 /

# Remove the failed (missing) disk
sudo btrfs device delete missing /

# Rebuild RAID1
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /

 

Conclusion

 

Since RAID support is built into btrfs itself, it can be added later to a system that is already running on btrfs, and it takes only a few commands. Snapshots keep working on top of the RAID as well, so backups stay easy and it is simple to give RAID a try.
However, btrfs RAID5/6 has a known issue (the write hole problem) with a risk of data loss; note that as of 2025 it is still not recommended for production use.

 


Personally, I recommend using different RAID levels for different purposes. For example, use RAID1 for redundancy on important data such as the system area and documents, and RAID0 to speed up temporary data such as a video-editing work area. With btrfs, multiple RAID levels can coexist within the same system, so you can use the right level in the right place.
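
For instance, with the setup built in this article (assuming both file systems exist), the different profiles can be seen side by side:

# The root file system should report RAID1 profiles, the video work area RAID0
sudo btrfs filesystem df /
sudo btrfs filesystem df /media/movie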

 

If you would like to try RAID, start with RAID0 or RAID1 and find the configuration that suits your use. As described above, setting up RAID with btrfs is quite easy. I hope this is helpful.

 

End