Linode Community Forums
 Post subject: RAID Question
PostPosted: Fri Feb 20, 2009 12:48 am 
Offline
Junior Member

Joined: Sat Sep 24, 2005 9:10 am
Posts: 39
I have a non-Linode-related question that I was hoping someone who's an expert in mdadm could answer. I have/had a RAID5 array with four 250GB drives. As a result of some poking around inside the computer while it was running, the system froze and I had to hard-reset it. After booting up, it looks like the RAID array got messed up: one of the drives appears to have been removed. Here's what mdadm says:
Code:
# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Dec 27 08:47:02 2007
     Raid Level : raid5
    Device Size : 244198464 (232.89 GiB 250.06 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Feb 11 22:04:35 2009
          State : active, degraded, Not Started
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 654d2ad9:ca55af99:8b394a0b:cda00542
         Events : 0.5

    Number   Major   Minor   RaidDevice State
       0      33        0        0      active sync   /dev/hde
       1      33       64        1      active sync   /dev/hdf
       2      34        0        2      active sync   /dev/hdg
       3       0        0        3      removed

and this is what dmesg says:
Code:
# dmesg | grep md0
md: md0 stopped.
md: md0: raid array is not clean -- starting background reconstruction
raid5: cannot start dirty degraded array for md0
raid5: failed to run raid set md0

and some more info
Code:
# mdadm -E /dev/md0
mdadm: No md superblock detected on /dev/md0.
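If I understand the man page right, `-E` examines the superblock on the component drives, not on the assembled array device, which would explain the "No md superblock" message on /dev/md0. So maybe I should be running it per-drive instead, something like this (device names taken from my `-D` output above; I'm only guessing the missing one is /dev/hdh):

```shell
# Examine each component drive's md superblock (names from `mdadm -D` above;
# /dev/hdh is a guess for the "removed" slot). Comparing the Events counters
# should show which drive fell out of sync.
for d in /dev/hde /dev/hdf /dev/hdg /dev/hdh; do
    if [ -b "$d" ]; then
        echo "== $d =="
        mdadm -E "$d" | grep -E 'State|Events'
    else
        echo "$d: no such block device"
    fi
done
```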

Do I still have hope, or did I lose all my data? I stopped messing around with it before I did something bad and messed it up even more (if that's possible).

Any help would be greatly appreciated.

- George


 Post subject:
PostPosted: Fri Feb 20, 2009 1:17 am 
Offline
Junior Member

Joined: Sat Sep 24, 2005 9:10 am
Posts: 39
I found this on a forum. Would this help me? I really don't know much about mdadm and I want to make sure I don't mess it up any more than it already is. Maybe changing the state manually is not that bad though. I don't know. Anybody?
Code:
[root@ornery ~]# cat /sys/block/md0/md/array_state
inactive
[root@ornery ~]# echo "clean" > /sys/block/md0/md/array_state
[root@ornery ~]# cat /sys/block/md0/md/array_state
clean
[root@ornery ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdc1[1] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2]
      2344252416 blocks level 6, 256k chunk, algorithm 2 [8/7] [_UUUUUUU]

unused devices: <none>
[root@ornery ~]# mount -o ro /dev/md0 /data
[root@ornery ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2             226G   46G  168G  22% /
/dev/hda1             251M   52M  187M  22% /boot
/dev/shm              2.9G     0  2.9G   0% /dev/shm
/dev/sda2              65G   35G   27G  56% /var
/dev/md0              2.2T  307G  1.8T  15% /data
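Reading further, the scary part of my `mdadm -D` output seems to be the State line: "active, degraded, Not Started". A tiny sanity check I put together for spotting that combination (it just string-matches the saved output; nothing touches the array):

```shell
# Classify the State line copied from my `mdadm -D` output above.
state='active, degraded, Not Started'
case "$state" in
    *degraded*'Not Started'*) echo 'dirty+degraded: needs forced assembly' ;;
    *degraded*)               echo 'degraded but running: replace the bad disk' ;;
    *)                        echo 'state looks ok' ;;
esac
```

which prints "dirty+degraded: needs forced assembly" for my array. From what I've read, the usual fix in that state is `mdadm --stop /dev/md0` followed by `mdadm --assemble --force /dev/md0 /dev/hde /dev/hdf /dev/hdg`, then mounting read-only to check the data -- but I'd rather someone confirm that before I go echoing "clean" into sysfs.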


 Post subject:
PostPosted: Mon Feb 23, 2009 12:41 am 
Offline
Senior Member

Joined: Mon Feb 02, 2009 1:43 am
Posts: 67
Website: http://fukawi2.nl
Location: Melbourne, Australia
Is the "removed" drive still present? Is it listed when you run `fdisk -l`? If it's not, then you probably need to start looking at the hardware and finding out why Linux doesn't know it's there anymore...
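Something along these lines will show what the kernel currently knows about (I'm guessing the missing drive would be /dev/hdh, going by the others in your `-D` output):

```shell
# What block devices does the kernel see right now?
cat /proc/partitions                 # every disk/partition the kernel knows about
fdisk -l 2>/dev/null | grep '^Disk'  # disk-level summary (needs root)
dmesg | grep -i hdh                  # kernel messages about the lost drive (name is a guess)
```

If the drive doesn't show up anywhere in that output, mdadm can't help you until the hardware side is sorted out.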


