Recover single disk from non-mdadm fake RAID 1

I have one remaining disk¹ from a two-disk RAID-1 array, created by some "hardware" fake-RAID controller in DDF format, plugged into my laptop via a USB adapter. The situation looks as follows:

> sudo fdisk -l /dev/sdb
Disk /dev/sdb: 465,78 GiB, 500107862016 bytes, 976773168 sectors
Disk model: 2115            
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

> sudo dmraid -r
/dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 976642096 sectors, data@ 0

> sudo dmraid -s -v
ERROR: ddf1: wrong # of devices in RAID set "ddf1_RAID" [1/2] on /dev/sdb
*** Group superset .ddf1_disks
--> *Inconsistent* Subset
name   : ddf1_RAID
size   : 976609280
stride : 64
type   : mirror
status : inconsistent
subsets: 0
devs   : 1
spares : 0

So there are no partitions that mdadm can assemble.

Ideally, I'd like to mount the partitions on that disk just like a normal external drive to access the data. Restoring the RAID array is not necessary.

Now, I have read in several places that I should use dmraid -rE /dev/sdb to erase the RAID metadata, or even dd zeros over the first couple of thousand bytes. The question I have is: will that leave the underlying partitions intact? If not, how can I safely recover them?
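Whichever route one takes, it seems cheap insurance to back up the regions such a wipe would overwrite first (DDF keeps its anchor header near the end of the device, not only at the start). A minimal sketch, using a scratch image file as a stand-in for /dev/sdb so nothing real is touched; on the actual disk you would use blockdev --getsz instead of the stat call:

```shell
# Scratch stand-in for /dev/sdb (on the real disk, skip this line and set
# SECTORS=$(sudo blockdev --getsz /dev/sdb) instead).
dd if=/dev/zero of=disk.img bs=1M count=4 status=none
SECTORS=$(( $(stat -c %s disk.img) / 512 ))

# Back up the first and last MiB -- the areas a dd wipe or a metadata
# erase would plausibly modify.
dd if=disk.img of=head.bin bs=512 count=2048 status=none
dd if=disk.img of=tail.bin bs=512 skip=$(( SECTORS - 2048 )) count=2048 status=none

# Only with those backups in hand would I erase the RAID signature:
#   sudo dmraid -r -E /dev/sdb
```

With head.bin and tail.bin saved, a botched metadata erase can be rolled back by dd-ing them straight back to the same offsets.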

I have already found the underlying partitions by following this tutorial on testdisk:

Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63
     Partition               Start        End    Size in sectors
>D Linux                    2  42 41 19124 123  7  307200000 [HOME]
 D HPFS - NTFS          19124 123  8 38246 203 37  307200000
 D HPFS - NTFS          38246 203 38 59006 223 33  333510656 [DATA]
 D Linux Swap           59006 223 34 60703 234 11   27262976

If I use testdisk to update the partition table, would that be a good idea?
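As a non-destructive alternative to rewriting the table at all, the CHS values testdisk found can be converted to byte offsets and the filesystems mounted read-only via mount's loop offset option. A sketch for the [HOME] partition, assuming the standard 255-head/63-sector translation testdisk reports (the mount line is commented out since it touches the real disk):

```shell
# CHS start of [HOME] as reported by testdisk: cylinder 2, head 42, sector 41.
C=2; H=42; S=41
# LBA = (C * heads + H) * sectors_per_track + (S - 1), geometry 255/63.
LBA=$(( (C * 255 + H) * 63 + S - 1 ))
OFFSET=$(( LBA * 512 ))
echo "$LBA $OFFSET"    # prints: 34816 17825792
# Read-only mount without touching the partition table:
#   sudo mount -o ro,offset=$OFFSET,sizelimit=$((307200000 * 512)) /dev/sdb /mnt
```

If that mounts cleanly, it also confirms the testdisk geometry before committing a new partition table to disk.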

¹ In reality, I still have both disks, since it was the mainboard that failed, not the disks, but that shouldn't change the question. At least it gives me a second chance for each error.

Asked 10 July 2020 at 18:17

1 Answer

I dared to try partition recovery with testdisk, and it worked effortlessly. The four partitions I listed in the question were quickly found by the "Quick Search" function, but shown as deleted. I marked them as primary using the left/right arrow keys, then continued on to write the partition structure. That takes no time at all.

Then the program says you have to reboot for the changes to take effect, but that wasn't necessary here, since no boot or root partitions were involved anyway -- after a few seconds, all partitions were mounted automatically and intact.

Answered 30 July 2020 at 22:10
