"Excuse me please. My ear is full of milk..."

– Oliver Hardy, Going Bye Bye (1934)


mdadm reference

I didn't make this up. I found it at http://prefetch.net/reference/mdadm.txt. I thought I'd keep a copy to make it easier to find when I need it.

# Create a RAID5 MD device with six members
$ mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

# Stop an md device so we can reuse the devices
$ mdadm -S /dev/md2

# Print the available md devices
$ mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=039a7497:72bfae8f:6ab0b026:3b063457
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=98c089ae:a88d40a7:c22d9771:da50dfe5
ARRAY /dev/md2 level=raid5 num-devices=6 spares=1 UUID=017379a7:72298d9d:5e1ae2a2:2eb50769

# Populate /etc/mdadm.conf with the devices we found during the scan
$ mdadm --detail --scan > /etc/mdadm.conf 

# Query the details of an MD device
$ /sbin/mdadm --query /dev/md2
/dev/md2: 1164.41GiB raid5 6 devices, 0 spares. Use mdadm --detail for more detail.

# Get detailed information on a MD device
$ /sbin/mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Mon Mar  5 22:02:08 2007
     Raid Level : raid5
     Array Size : 976751616 (931.50 GiB 1000.19 GB)
    Device Size : 244187904 (232.88 GiB 250.05 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Mon Mar  5 22:02:24 2007
          State : active, degraded, recovering
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : b98af6a8:21b57b11:77f6c603:283ae710
         Events : 0.3

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       5       8       97        4      spare rebuilding   /dev/sdg1

# Monitor the progress of an MD device rebuild
$ while :; do clear; mdadm --detail /dev/md2; sleep 10 ; done

# Start (assemble) the MD device md0, which has two devices associated with it
$ mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1

# Fail and remove a device from the meta device md0
$ mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

# Monitor the md device MD2 every 60 seconds
$ mdadm --monitor --mail=sysadmin --delay=60 /dev/md2
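
# To keep the monitor running unattended, mdadm can also put itself in the
# background; this is just a sketch, and root@localhost is a placeholder address
$ mdadm --monitor --scan --daemonise --delay=300 --mail=root@localhost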

# Add an internal write-intent bitmap to the MD device
$ /sbin/mdadm /dev/md0 -Gb internal
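
# To remove that bitmap again later, setting it to "none" should undo the change
$ /sbin/mdadm /dev/md0 -Gb none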

# Add a hot spare to the meta device md2
$ /sbin/mdadm --add /dev/md2 /dev/sdh1

mdadm is the modern tool most Linux distributions use to manage software RAID arrays; in the past, raidtools was used for this. This cheat sheet shows the most common uses of mdadm to manage software RAID arrays; it assumes you have a good understanding of software RAID and of Linux in general, and it only covers the command-line usage of mdadm. The examples below use RAID1, but they can be adapted for any RAID level the Linux kernel driver supports.

1. Create a new RAID array
Create (mdadm --create) is used to create a new array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
or using the compact notation:
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1
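
If only one of the disks is available so far (for example when converting an existing single-disk system to RAID1), the keyword missing can stand in for the absent member, so the array is created in degraded mode and the second disk added later; the partition name here is only an example:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing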

2. /etc/mdadm.conf
/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on Debian) is the main configuration file for mdadm. After we create our RAID arrays, we add them to this file using:
mdadm --detail --scan >> /etc/mdadm.conf
or, on Debian:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
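
The resulting file has one ARRAY line per array; a DEVICE line and a MAILADDR line (used by mdadm --monitor) are usually added by hand as well. Using the arrays scanned in the reference above, the file might look roughly like this:
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=039a7497:72bfae8f:6ab0b026:3b063457
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=98c089ae:a88d40a7:c22d9771:da50dfe5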

3. Remove a disk from an array
We can't remove a disk directly from the array unless it is marked as failed, so we first have to fail it (if the drive has actually failed, it will normally already be in the failed state and this step is not needed):
mdadm --fail /dev/md0 /dev/sda1
and now we can remove it:
mdadm --remove /dev/md0 /dev/sda1

This can be done in a single step using:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

4. Add a disk to an existing array
We can add a new disk to an array (usually to replace a failed one):
mdadm --add /dev/md0 /dev/sdb1
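
Once the disk is added, the kernel starts rebuilding onto it straight away, and a recovery line appears for that array in /proc/mdstat, roughly like the following (the percentages and speeds here are only illustrative):
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[2] sda1[0]
      104320 blocks [2/1] [U_]
      [==>..................]  recovery = 12.5% (13040/104320) finish=0.1min speed=13040K/sec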

5. Verifying the status of the RAID arrays
We can check the status of the arrays on the system with:
cat /proc/mdstat
or
mdadm --detail /dev/md0

The output of this command will look like:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]
      19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
      223504192 blocks [2/2] [UU]
Here we can see that both drives are in use and working fine, shown as U. A failed drive will show as F, and a degraded array will show the missing member as _ (for example [U_] instead of [UU]).

Note: while monitoring the status of a RAID rebuild operation, using watch can be useful:
watch cat /proc/mdstat

6. Stop and delete a RAID array
If we want to completely remove a RAID array we have to stop it first and then remove it:
mdadm --stop /dev/md0
mdadm --remove /dev/md0
and finally we can even delete the RAID superblock from the individual member partitions:
mdadm --zero-superblock /dev/sda1 /dev/sdb1

Finally, when using RAID1 arrays, where we create identical partitions on both drives, this can be useful for copying the partition table from sda to sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb

(This will dump the partition table of sda onto sdb, completely replacing sdb's existing partitions, so be sure this is what you want before running the command, as it will not warn you at all.)
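
Note that sfdisk only handles DOS/MBR partition tables; if the drives use GPT, the sgdisk tool from the gdisk package has a similar replicate option that should do the same job, again copying sda's table onto sdb and then giving sdb fresh GUIDs:
sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids /dev/sdb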

There are many other uses of mdadm, particular to each RAID level, and I would recommend using the manual page (man mdadm) or the built-in help (mdadm --help) if you need more details on its usage. Hopefully these quick examples will put you on the fast track to how mdadm works.

2009-05-15, T. Sneddon