written by Ivan Alenko
published under Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
posted in category Systems Software / RAID
posted at 06. May '19
last updated at 24. Aug '21

Howto Modify Existing Software RAID 1 in Linux (md)

The goal is to enlarge the third partition on the disk, and to remove the fourth partition and recreate it with a different size in a different place. The partition with the operating system stays the same. If your problem is with the system partition, you can instead mount a new partition over a large directory.

This article is long and there are many, many details, because the topic is a tricky one and it's really easy to make a mistake.

Also, this is not a detailed step-by-step guide: I had to try some procedures multiple times, and sometimes I didn't write down the commands because they seemed "clear and simple" at the time.

The setup: 4 partitions on RAID 1, originally on 500 GB disks, now on 1 TB ones. There is 500 GB of free space at the end of each disk which I want to use.

Both 500 GB disks partially failed around the same time, so the worse one was disconnected and the RAID was synchronized onto a new 1 TB drive. Later I ran a S.M.A.R.T. extended test on the remaining 500 GB drive, found out it was bad too, and replaced it with a second 1 TB drive (the same model - I know, that sucks; use different series or brands for RAID 1).

TLDR:

  1. DO A FULL BACKUP (or don't… like me, and hope for the best with only the data backed up)
  2. disassemble the arrays with mdadm /dev/md3 -f /dev/sdb4 -r sdb4 (and likewise for md2/sdb3)
  3. repartition and format /dev/sda3 and /dev/sda4 to the desired sizes
  4. zero the superblocks on /dev/sda3 and /dev/sda4, assign new UUIDs to /dev/md2 and /dev/md3, and update the UUIDs in /etc/fstab
  5. restart mdadm (/etc/init.d/mdadm restart), or restart the computer if the partition sizes still look wrong, and repeat step 4
  6. copy the data from /dev/sdb3 and /dev/sdb4 over to /dev/sda3 and /dev/sda4
  7. format /dev/sdb3 and /dev/sdb4
  8. assemble the arrays again on the third and fourth partitions (commands sketched below)
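
Condensed into commands, the whole procedure looks roughly like this. It's a sketch pieced together after the fact, not a copy-paste script - the zalohy label, the exact ordering and the final --grow calls are my assumptions, the rest mirrors commands shown later in the article. Adapt device names to your own layout.

$ mdadm /dev/md3 -f /dev/sdb4 -r sdb4      # kick sdb4 out of md3
$ mdadm /dev/md2 -f /dev/sdb3 -r sdb3      # kick sdb3 out of md2
$ umount /network /zalohy
$ mdadm --stop /dev/md2 && mdadm --stop /dev/md3
$ cfdisk /dev/sda                          # enlarge sda3, recreate sda4
$ mdadm --zero-superblock /dev/sda3 /dev/sda4
$ mdadm --create /dev/md2 --level=1 --raid-devices=1 --force /dev/sda3
$ mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/sda4
$ mkfs.xfs -f -L network /dev/md2          # new filesystems; put the new UUIDs into /etc/fstab
$ mkfs.xfs -f -L zalohy /dev/md3
# assemble the old sdb halves as temporary arrays, mount them and copy the data back, then:
$ mdadm --zero-superblock /dev/sdb3 /dev/sdb4
$ mdadm /dev/md2 --add /dev/sdb3
$ mdadm /dev/md3 --add /dev/sdb4
$ mdadm --grow /dev/md2 --raid-devices=2
$ mdadm --grow /dev/md3 --raid-devices=2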

Setup

RAID 1 setup:

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sdb4[3] sda4[2]
      219700032 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[3]
      229360448 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

A detail of the first array - md0 consists of sda1 and sdb1:

$ mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Wed Jun  5 18:38:46 2013
     Raid Level : raid1
     Array Size : 29279104 (27.92 GiB 29.98 GB)
  Used Dev Size : 29279104 (27.92 GiB 29.98 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Oct 26 21:45:36 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : n2orava:0  (local to host n2orava)
           UUID : a6b4536f:b26e33fb:60e4af7b:0abceabc
         Events : 40491

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       2       8        1        1      active sync   /dev/sda1

Disassemble RAID 1 array

The plan is to disassemble the array and resize sda.

First, some failed attempts with commands found somewhere on the internet. They didn't work because the numbering is shifted by one: sdb3 is part of the md2 array (not md3), and sdb2 is part of md1 (not md2).

$ mdadm /dev/md3 --remove /dev/sdb3
mdadm: hot remove failed for /dev/sdb3: No such device or address
$ mdadm -f /dev/md3 --remove /dev/sdb3
mdadm: hot remove failed for /dev/sdb3: No such device or address
$ mdadm -f /dev/md2 --remove /dev/sdb2
mdadm: hot remove failed for /dev/sdb2: No such device or address
$ mdadm --manage -f /dev/md2 --remove /dev/sdb2
mdadm: hot remove failed for /dev/sdb2: No such device or address

Just to be sure nothing was removed:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sdb4[3] sda4[2]
      219700032 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[3]
      229360448 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Looks good:

$ mdadm --examine /dev/sdb3

/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0c72d9ee:e8f09a3a:5543bc98:79231fc1
           Name : n2orava:2  (local to host n2orava)
  Creation Time : Wed Jun  5 18:40:43 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 458721280 (218.74 GiB 234.87 GB)
     Array Size : 229360448 (218.74 GiB 234.87 GB)
  Used Dev Size : 458720896 (218.74 GiB 234.87 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=384 sectors
          State : clean
    Device UUID : e30939b5:7fec66cd:ed14332e:b34c5516

    Update Time : Fri Oct 26 21:53:38 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 847321d1 - correct
         Events : 7716


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

Well, here I started to realize the numbering is off by one - sdb3 belongs to /dev/md/2:

$ mdadm --query /dev/sdb3
/dev/sdb3: is not an md array
/dev/sdb3: device 3 in 2 device active raid1 /dev/md/2.  Use mdadm --examine for more detail.

lsblk is very useful for visualizing what belongs where:

$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 931,5G  0 disk
├─sda1    8:1    0    28G  0 part
│ └─md0   9:0    0  27,9G  0 raid1 /
├─sda2    8:2    0   9,3G  0 part
│ └─md1   9:1    0   9,3G  0 raid1 [SWAP]
├─sda3    8:3    0 218,9G  0 part
│ └─md2   9:2    0 218,8G  0 raid1 /network
└─sda4    8:4    0 209,7G  0 part
  └─md3   9:3    0 209,5G  0 raid1 /zalohy
sdb       8:16   0 931,5G  0 disk
├─sdb1    8:17   0    28G  0 part
│ └─md0   9:0    0  27,9G  0 raid1 /
├─sdb2    8:18   0   9,3G  0 part
│ └─md1   9:1    0   9,3G  0 raid1 [SWAP]
├─sdb3    8:19   0 218,9G  0 part
│ └─md2   9:2    0 218,8G  0 raid1 /network
└─sdb4    8:20   0 209,7G  0 part
  └─md3   9:3    0 209,5G  0 raid1 /zalohy
sr0      11:0    1  1024M  0 rom

Well, this finally worked!

$ mdadm /dev/md3 -f /dev/sdb4 -r sdb4
mdadm: set /dev/sdb4 faulty in /dev/md3
mdadm: hot removed sdb4 from /dev/md3

Looks promising - sdb4 has been removed from the fourth partition's array:

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sda4[2]
      219700032 blocks super 1.2 [2/1] [_U]

md2 : active raid1 sda3[2] sdb3[3]
      229360448 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

I can proceed with disassembling the third partition. I won't touch the first or second.

$ mdadm /dev/md2 -f /dev/sdb3 -r sdb3
mdadm: set /dev/sdb3 faulty in /dev/md2
mdadm: hot removed sdb3 from /dev/md2

[2/1] means it is a two-drive array, but only one drive is available.

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sda4[2]
      219700032 blocks super 1.2 [2/1] [_U]

md2 : active raid1 sda3[2]
      229360448 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Resize partitions

Originally I just wanted to move the data from the fourth partition to the third, remove the fourth, run growfs on the third and create the fourth again.

But here things got very dicey and I almost lost all the data on both partitions. I had to use sdb to copy the data back over to sda, which was being modified. I also had serious problems with growfs on XFS: it didn't work as I expected, so I had to wipe and recreate the third partition too, which I originally didn't want to do - I only wanted to make it bigger.

Also, just changing things around with fdisk is not enough. You NEED to update the RAID superblock too, otherwise md will still register only the original size of the partition.
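
For example, you can check what size the superblock thinks the member has, and (if you keep the array) tell md to use the whole partition - both commands appear later in this article:

$ mdadm --examine /dev/sda3 | grep 'Dev Size'   # sizes recorded in the RAID superblock
$ mdadm --grow /dev/md2 --size=max              # grow the md device to fill the resized partition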

I have 465,8 gigabytes of free space I want to use:

$ cfdisk /dev/sda
                                                        Disk: /dev/sda
                                    Size: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
                                               Label: dos, identifier: 0x00074188

    Device           Boot                 Start              End         Sectors         Size       Id Type
    /dev/sda1        *                     2048         58593279        58591232          28G       fd Linux raid autodetect
    /dev/sda2                          58593280         78125055        19531776         9,3G       fd Linux raid autodetect
    /dev/sda3                          78125056        537108479       458983424       218,9G       fd Linux raid autodetect
    /dev/sda4                         537108480        976771071       439662592       209,7G       fd Linux raid autodetect
>>  Free space                        976771072       1953525167       976754096       465,8G

I tried various things to please xfs_growfs, but nothing worked.

$ xfs_info /dev/md2
meta-data=/dev/md2               isize=256    agcount=4, agsize=14335028 blks
         =                       sectsz=512   attr=2, projid32bit=0
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=57340112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=27998, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Some weird math, I don't remember the how and why… one XFS block is 4 KiB here (bsize=4096 in the xfs_info output above), I think:

# blocks in 1 GiB: 1024*1024*1024 / 4096 = 262144
# new size = added blocks + old size: 262144 + 57340112 = 57602256
xfs_growfs -D 57602256 /network

Sanity-checking the block counts in irb (blocks × 4096 bytes → GiB):

irb(main):008:0> (57340112 * 4096) / (1024*1024*1024.0)
=> 218.73516845703125
irb(main):014:0> (80440112 * 4096) / (1024*1024*1024.0)
=> 306.85467529296875

Fuck xfs_growfs.

So, because xfs_growfs didn't work, I decided to recreate the third partition's filesystem rather than resize it. Unmount the third partition and reformat:

$ umount /network/
$ /etc/init.d/mdadm restart
$ mkfs.xfs -f -L network /dev/md2

Formatting the partitions changed the UUIDs, so I needed to update /etc/fstab:

# update new  uuids and mount
# /network was on /dev/md2 during installation
UUID=e0dc80b9-fd7d-45fc-a644-3e463bbff804 /network        xfs     nodev,nosuid,noexec,usrquota,grpquota 0       2
# /zalohy was on /dev/md3 during installation
UUID=9eb3970d-d103-4533-8138-a28018abb6e3 /zalohy         xfs     nodev,nosuid,noexec,usrquota,grpquota 0       2
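
The new UUIDs can be read with blkid, for example:

$ blkid /dev/md2 /dev/md3      # shows LABEL, UUID and TYPE; paste the UUIDs into /etc/fstab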

At this point the data on /dev/sda3 was lost and I had to copy it back from /dev/sdb3, which was disassociated from the array at the beginning of this journey. BUT mounting it wasn't as easy as it sounds.

Copy data back from /dev/sdb3 to /dev/sda3

You can’t mount it:

$ mount /dev/sdb3 /mnt/
mount: unknown filesystem type 'linux_raid_member'

You can't even assemble a new single-drive RAID array from it, because md knows it belongs to another array:

$ mdadm --assemble /dev/md9 /dev/sdb3
mdadm: Found some drive for an array that is already active: /dev/md/2
mdadm: giving up.

Hell, this is getting frustrating. I found out on the internet that I need to generate a new UUID for the array on this partition.

$ mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0c72d9ee:e8f09a3a:5543bc98:79231fc1
           Name : n2orava:2  (local to host n2orava)
  Creation Time : Wed Jun  5 18:40:43 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 458721280 (218.74 GiB 234.87 GB)
     Array Size : 229360448 (218.74 GiB 234.87 GB)
  Used Dev Size : 458720896 (218.74 GiB 234.87 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=384 sectors
          State : clean
    Device UUID : e30939b5:7fec66cd:ed14332e:b34c5516

    Update Time : Fri Oct 26 22:02:51 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 847323fa - correct
         Events : 7716


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

Generate a new UUID somewhere, e.g. on my local computer:

$ uuidgen
5ED5821B-912B-4688-8FDB-A4912499A098

Using the fourth partition as a test subject:

$ mdadm --assemble /dev/md11 --update=uuid --uuid="5ED5821B-912B-4688-8FDB-A4912499A098" /dev/sdb4
mdadm: /dev/md11 assembled from 1 drive - need all 2 to start it (use --run to insist).

Getting closer…

$ mdadm --assemble /dev/md11 --run --update=uuid --uuid="5ED5821B-912B-4688-8FDB-A4912499A098" /dev/sdb4
mdadm: /dev/sdb4 is busy - skipping

Hm, shit. This won't be as easy as I thought. I'm not sure exactly what I did here, but I remember stopping the array /dev/md11, restarting the md service and assembling a new one. Run the assemble command with the --run parameter and you won't have to start the array separately afterwards.
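
If I had to reconstruct it, it was probably something along these lines (a guess, not my exact history; the array ended up auto-named /dev/md127):

$ mdadm --stop /dev/md11
$ /etc/init.d/mdadm restart
$ mdadm --assemble /dev/md127 --run /dev/sdb4     # UUID was already rewritten by the earlier attempt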

Yes, I can list files now!

$ mount /dev/md127 /mnt
$ ls /mnt/
baliky-201810050020.txt.....

Creating a new single drive array from /dev/sdb3:

$ mdadm --assemble /dev/md12 --run --update=uuid --uuid="99025ED1-C8C9-4AD5-BA6B-6394CA6C5040" /dev/sdb3

So this is how you can mount a linux_raid_member file system. Anyway, don't do anything with the data partitions yet. Well, I copied stuff over, but later found out that the reported size was wrong and had to do it all over again.

Unmount and disassemble the array for now.
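
Again from memory, something like:

$ umount /mnt
$ mdadm --stop /dev/md12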

Messing with new partition layout

This section mostly contains failed attempts and is here only for reference. Restart md first and then RESTART THE COMPUTER AFTER THE PARTITIONS ON /DEV/SDA CHANGE SIZE. Otherwise you will see many weird numbers in system utilities. md2 and md3 will start in degraded mode with a single drive, which is OK.

I'm not sure if you really have to restart the computer. I did it three times, because I hoped it would help me figure out why things weren't working.

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb4[3]
      219700032 blocks super 1.2 [2/1] [U_]

md3 : active raid1 sda4[2]
      219700032 blocks super 1.2 [2/1] [_U]

md2 : active raid1 sda3[2]
      229360448 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Use the fourth partition as a test subject… Stop the original array:

$ umount /zalohy/
$ mdadm --stop /dev/md3
mdadm: stopped /dev/md3

md3 is stopped:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb4[3]
      219700032 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[2]
      229360448 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Create a new md3 array and hope it will take the new partition size into account:

mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4
# ^ doesn't work: with --raid-devices=2 you have to list two devices (or "missing")
# what actually ran:
$ mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/sda4
mdadm: /dev/sda4 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Wed Jun  5 18:40:50 2013
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.

Trying to “grow” the size of md3 from 260G to 460G. I also added a bitmap, which isn't really needed. These commands probably aren't needed at all:

$ mdadm --grow /dev/md3 --bitmap none
$ mdadm --grow /dev/md3 --size=max        # <- this one works
$ mdadm --wait /dev/md3
$ mdadm --grow /dev/md3 --bitmap internal

Another attempt, this time also zeroing the superblock on sda4 so a new array can be created from it. mkfs.xfs won't help to wipe RAID metadata. I'm not sure if the grow command helped in any way; I think I just zeroed the superblock, generated a new UUID and created a new array. Things are blurry here.

mdadm --stop /dev/md3
mdadm --zero-superblock /dev/sda4
mdadm --create /dev/md3 --level=1 --raid-devices=1 --force /dev/sda4
mdadm --grow /dev/md2 --size=500170752
  • reformat the partition (filesystem) on sda3 again
  • update the UUID in fstab again
  • mount and copy the data from the old /dev/sdb4 and /dev/sdb3 - roughly as sketched below
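
A rough reconstruction of those three steps (not my exact commands; md12 and md127 are the temporary single-drive arrays assembled from sdb3 and sdb4 earlier):

$ mkfs.xfs -f -L network /dev/md2            # fresh filesystem on the resized md2 (sda3)
$ blkid /dev/md2 /dev/md3                    # new UUIDs go into /etc/fstab
$ mount /network && mount /zalohy
$ mount /dev/md12 /mnt                       # old data from sdb3
$ cp -a /mnt/. /network/ && umount /mnt
$ mount /dev/md127 /mnt                      # old data from sdb4
$ cp -a /mnt/. /zalohy/ && umount /mnt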

Recreate the md2 array after… I'm not sure what exactly happened

$ mdadm --create /dev/md2 --level=1 --raid-devices=1 --force /dev/sda3

And as you saw in the previous section, this command starts a new array from the (now disassociated) partition on the second drive of the RAID 1 array:

mdadm --assemble /dev/md12 --run --update=uuid --uuid="D944D0DC-4B01-43F1-8A87-911080405435" /dev/sdb3

Now mdstat should look like this: md0 and md1 are untouched, with sda1+sdb1 and sda2+sdb2; md2 uses sda3 and md3 uses sda4. Of the new arrays, md12 uses sdb3 and md127 uses sdb4.

md3 already has the new, correct, larger size, while md127 does not.

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md12 : active raid1 sdb3[3]
      229360448 blocks super 1.2 [2/1] [U_]

md3 : active raid1 sda4[0]
      465709760 blocks super 1.2 [1/1] [U]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md127 : active raid1 sdb4[3]
      219700032 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[2]
      471728128 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sda2[2] sdb2[3]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[2]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

We don't need md12 yet, so unmount and stop it:

umount /mnt
mdadm --stop /dev/md12

Let's focus on sdb4 in md127. The size of 234.87 GB is the old one. Assuming the data is already on sda4, we can zero the superblock on sdb4 and reuse it.

$ mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Wed Jun  5 18:40:43 2013
     Raid Level : raid1
     Array Size : 229360448 (218.74 GiB 234.87 GB)
  Used Dev Size : 229360448 (218.74 GiB 234.87 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Oct 27 01:51:28 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : n2orava:2  (local to host n2orava)
           UUID : d944d0dc:4b0143f1:8a879110:80405435
         Events : 7832

    Number   Major   Minor   RaidDevice State
       3       8       19        0      active sync   /dev/sdb3
       -       0        0        1      removed

$ mdadm --stop /dev/md127
mdadm: stopped /dev/md127

Rebuilding the new md2 and md3 (now md126)

Since /dev/sda3 already has the correct size AND the data from /dev/sdb3, we zero the superblock on /dev/sdb3 and format it. The same goes for the /dev/sda4 and /dev/sdb4 pair.

$ mdadm --zero-superblock /dev/sdb3

(the format command is missing from my notes, but it should be something like mkfs.xfs -f -L network /dev/sdb3)
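
The command that put /dev/sdb3 back into /dev/md2 isn't in my notes either; judging by the final /proc/mdstat it was an --add, something like:

$ mdadm /dev/md2 --add /dev/sdb3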

Getting the md3 array to work again.

The detail of md3 (now md126). It contains /dev/sda4 only. The name md126 is there because I made a mistake and got an automatically assigned array name.

$ mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Fri Oct 26 23:58:47 2018
     Raid Level : raid1
     Array Size : 465709760 (444.14 GiB 476.89 GB)
  Used Dev Size : 465709760 (444.14 GiB 476.89 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct 27 23:01:20 2018
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : n2orava:3  (local to host n2orava)
           UUID : bf6f0119:8d5c3260:2ed35679:ef1abb74
         Events : 5

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
$ mdadm --zero-superblock /dev/sdb4


$ mdadm /dev/md126 --add /dev/sdb4
mdadm: added /dev/sdb4

The rebuild process of the formerly md3 (now md126) array. /dev/sdb4 shows up as a spare disk for now.

$ mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Fri Oct 26 23:58:47 2018
     Raid Level : raid1
     Array Size : 465709760 (444.14 GiB 476.89 GB)
  Used Dev Size : 465709760 (444.14 GiB 476.89 GB)
   Raid Devices : 1
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct 27 23:21:14 2018
          State : clean
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : n2orava:3  (local to host n2orava)
           UUID : bf6f0119:8d5c3260:2ed35679:ef1abb74
         Events : 6

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4

       1       8       20        -      spare   /dev/sdb4

Declare that the array has two drives instead of only one, so the spare (sdb4) is used as the second drive:

$ mdadm /dev/md126 --grow --raid-devices=2
raid_disks for /dev/md126 set to 2

The formerly md3 (now md126) array is rebuilding:

$ mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Fri Oct 26 23:58:47 2018
     Raid Level : raid1
     Array Size : 465709760 (444.14 GiB 476.89 GB)
  Used Dev Size : 465709760 (444.14 GiB 476.89 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct 27 23:21:58 2018
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           Name : n2orava:3  (local to host n2orava)
           UUID : bf6f0119:8d5c3260:2ed35679:ef1abb74
         Events : 10

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      spare rebuilding   /dev/sdb4

Recovery started on md126.

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[2]
      471728128 blocks super 1.2 [2/1] [_U]

md126 : active raid1 sdb4[1] sda4[0]
      465709760 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  1.3% (6159744/465709760) finish=59.2min speed=129356K/sec
      bitmap: 0/4 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[2]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

The same was done for the third partition (sdb3 added back to md2), so both the third and fourth arrays are now rebuilding:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[3] sda3[2]
      471728128 blocks super 1.2 [2/1] [_U]
        resync=DELAYED

md126 : active raid1 sdb4[1] sda4[0]
      465709760 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  4.5% (21211712/465709760) finish=52.7min speed=140468K/sec
      bitmap: 0/4 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[2]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Final status:

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sdb4[1] sda4[0]
      465709760 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md1 : active (auto-read-only) raid1 sdb2[3] sda2[2]
      9757568 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sdb3[3] sda3[2]
      471728128 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[3]
      29279104 blocks super 1.2 [2/2] [UU]

unused devices: <none>

There is still a bitmap on md3 left over from my experiments. It's a write-intent bitmap that speeds up resyncs and doesn't really matter in my case. I was too lazy to remove it.
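
If I ever stop being lazy, removing it should just be:

$ mdadm --grow /dev/md3 --bitmap none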

So that's all. I hope this article helped, at least with troubleshooting things. +。:.゚ヽ(´∀。)ノ゚.:。+゚゚+。:.゚ヽ(*´∀)ノ゚.:。+゚.
