shian:~# fdisk /dev/hdb

Command (m for help): p

Disk /dev/hdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2080, default 1): <ENTER>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2080, default 2080): <ENTER>
Using default value 2080

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/hdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1               1        2080     1048288+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
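The array will be built from /dev/hdb1 and /dev/hdc1, so /dev/hdc needs an identical partition of type fd. Rather than repeating the whole fdisk dialog, the table can be cloned in one step (the same sfdisk trick used later in the replacement step):

shian:~# sfdisk -d /dev/hdb | sfdisk /dev/hdc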
shian:~# mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 /dev/hdb1 /dev/hdc1
mdadm: size set to 1048192K
mdadm: array /dev/md0 started.
NOTE: If a disk was previously used in another RAID, its superblock must be zeroed to wipe the existing metadata; otherwise the creation may fail:
shian:~# mdadm --zero-superblock /dev/sdXX
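Relatedly, if you are not sure whether a partition still carries md metadata, mdadm can inspect it before you decide to zero anything (shown here against /dev/hdb1 purely as an illustration; adjust the device to your case):

shian:~# mdadm --examine /dev/hdb1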
shian:~# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 ide/host0/bus1/target0/lun0/part1[1] ide/host0/bus0/target1/lun0/part1[0]
      1048192 blocks [2/2] [UU]
      [===========>.........]  resync = 58.2% (611196/1048192) finish=0.0min speed=101866K/sec
unused devices: <none>
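To follow the initial resync without retyping the command, something like watch can be used (an optional convenience, not part of the original session):

shian:~# watch -n 1 cat /proc/mdstat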
shian:~# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 ide/host0/bus1/target0/lun0/part1[1] ide/host0/bus0/target1/lun0/part1[0]
      1048192 blocks [2/2] [UU]
unused devices: <none>
shian:~# mkfs.ext3 /dev/md0
mke2fs 1.37 (21-Mar-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262048 blocks
13102 blocks (5.00%) reserved for the super user
First data block=0
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
shian:~# mkdir /mnt/raid
shian:~# echo "/dev/md0 /mnt/raid ext3 defaults 0 1" >> /etc/fstab
shian:~# mount /mnt/raid
shian:~# df -h /dev/md0
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0             1008M   17M  941M   2% /mnt/raid
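At this point it does no harm to double-check the array itself; mdadm can print its full state, member list included (output omitted here, since it varies per system):

shian:~# mdadm --detail /dev/md0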
From here on, what I initially did was simulate a hard disk failure: on boot, I wanted the machine to mount the RAID with only the remaining disk and use it normally. Then, after simulating the addition of a new hard disk in the virtual machine, I wanted to add it to the array to restore redundancy. After reading many tutorials and forums, there was no way to make it work. If I rebooted the machine with one of the RAID disks missing, the array would not come up and I could not access the data. What is more, the /dev/md0 device was not even recognized, so it was as if the RAID did not exist! I finally found the solution to my problems in a small tutorial.
shian:/# cd /etc/mdadm
shian:/etc/mdadm# cp mdadm.conf mdadm.conf.`date +%y%m%d`
shian:/etc/mdadm# echo "DEVICE partitions" > mdadm.conf
shian:/etc/mdadm# mdadm --detail --scan >> mdadm.conf
shian:/etc/mdadm#
shian:/etc/mdadm# cat mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a48e6816:ea6e7f37:6cc50cdb:6fead399
   devices=/dev/hdb1,/dev/hdc1
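If this file ever needs to be rebuilt from scratch, the ARRAY line can also be recovered straight from the member disks' superblocks, which makes a handy cross-check against what ended up in mdadm.conf:

shian:/etc/mdadm# mdadm --examine --scan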
shian:/etc/mdadm# umount /mnt/raid
shian:/etc/mdadm# mdadm --stop /dev/md0
shian:/etc/mdadm# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
unused devices: <none>
shian:/etc/mdadm# mdadm --assemble /dev/md0 /dev/hdb1 /dev/hdc1
mdadm: /dev/md0 has been started with 2 drives.
shian:/etc/mdadm# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 ide/host0/bus0/target1/lun0/part1[0] ide/host0/bus1/target0/lun0/part1[1]
      1048192 blocks [2/2] [UU]
unused devices: <none>
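With mdadm.conf in place, the explicit device list is no longer necessary; the array can be assembled from the configuration alone (an equivalent alternative to the command above):

shian:/etc/mdadm# mdadm --assemble --scan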
Now let's check whether we can really recover the information and whether the system keeps working correctly when a device fails. We will also see how to replace the faulty disk and rebuild the RAID 1 with both disks.
shian:/# mount /dev/md0 /mnt/raid
shian:/# dd if=/dev/urandom of=/mnt/raid/random1 count=51200
51200+0 records in
51200+0 records out
26214400 bytes transferred in 7.829523 seconds (3348148 bytes/sec)
shian:/# cksum /mnt/raid/random1
1652310020 26214400 /mnt/raid/random1
When the machine is booted with one of the disks removed (the simulated failure), the kernel log shows the array starting in degraded mode:

md: bind<ide/host0/bus1/target0/lun0/part1>
md: ide/host0/bus1/target0/lun0/part1's event counter: 00000006
md0: former device hdb1 is unavailable, removing from array!
md: raid1 personality registered as nr 3
md0: max total readahead window set to 124k
md0: 1 data-disks, max readahead per data-disk: 124k
raid1: device ide/host0/bus1/target0/lun0/part1 operational as mirror 1
raid1: md0, not all disks are operational -- trying to recover array
raid1: raid set md0 active with 1 out of 2 mirrors
md: updating md0 RAID superblock on device
md: ide/host0/bus1/target0/lun0/part1 [events: 00000007]
md: (write) ide/host0/bus1/target0/lun0/part1's sb offset: 1048192
md: recovery thread got woken up ...
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
shian:~# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 ide/host0/bus1/target0/lun0/part1[1]
      1048192 blocks [2/1] [_U]
unused devices: <none>
shian:~# df -h /dev/md0
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0             1008M   42M  916M   5% /mnt/raid
shian:~# mdadm --manage /dev/md0 --fail /dev/hdb1
mdadm: set /dev/hdb1 faulty in /dev/md0
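Once a partition has been marked faulty, it can also be pulled out of the array before the physical swap. This step is not in the original session, but it is the usual companion to --fail:

shian:~# mdadm --manage /dev/md0 --remove /dev/hdb1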
We power off the machine and replace the faulty hard disk with a new one. With VMware, it is enough to create a new hard disk device. Moreover, the new disk we are adding will be larger than the old one. Ideally both disks in a RAID 1 should be the same size, but Linux is flexible enough not to require it.
shian:~# sfdisk -d /dev/hdc | sfdisk /dev/hdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/hdb: 4161 cylinders, 16 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/hdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/hdb1            63   2096639    2096577  fd  Linux raid autodetect
/dev/hdb2             0         -          0   0  Empty
/dev/hdb3             0         -          0   0  Empty
/dev/hdb4             0         -          0   0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
shian:~# fdisk /dev/hdb

The number of cylinders for this disk is set to 4161.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (2081-4161, default 2081): <ENTER>
Using default value 2081
Last cylinder or +size or +sizeM or +sizeK (2081-4161, default 4161): <ENTER>
Using default value 4161

Command (m for help): p

Disk /dev/hdb: 2147 MB, 2147483648 bytes
16 heads, 63 sectors/track, 4161 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1               1        2080     1048288+  fd  Linux raid autodetect
/dev/hdb2            2081        4161     1048824   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

shian:~# mkfs.ext3 /dev/hdb2
mke2fs 1.37 (21-Mar-2005)
warning: 62 blocks unused.

Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131328 inodes, 262144 blocks
13110 blocks (5.00%) reserved for the super user
First data block=0
8 block groups
32768 blocks per group, 32768 fragments per group
16416 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

shian:~# mkdir /mnt/tmp
shian:~# mount /dev/hdb2 /mnt/tmp
shian:~# mdadm --manage /dev/md0 --add /dev/hdb1
mdadm: hot added /dev/hdb1
shian:~# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 ide/host0/bus0/target1/lun0/part1[2] ide/host0/bus1/target0/lun0/part1[1]
      1048192 blocks [2/1] [_U]
      [=======>.............]  recovery = 39.5% (415488/1048192) finish=0.1min speed=69248K/sec
unused devices: <none>
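On real hardware with bigger disks, the rebuild can take much longer. The kernel's rebuild throttling is tunable through /proc (values in KB/s; this is an optional tweak not used in the original session, so adjust with care):

shian:~# cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
shian:~# echo 50000 > /proc/sys/dev/raid/speed_limit_min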
shian:~# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 ide/host0/bus0/target1/lun0/part1[0] ide/host0/bus1/target0/lun0/part1[1]
      1048192 blocks [2/2] [UU]
unused devices: <none>
shian:~# umount /mnt/raid
shian:~# mount /dev/hdb1 /mnt/raid
shian:~# cksum /mnt/raid/random1
1652310020 26214400 /mnt/raid/random1
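Mounting a mirror member directly like this is handy for a quick check, but doing it read-only avoids writing to a device the array still owns (a safer variant of the mount above):

shian:~# mount -o ro /dev/hdb1 /mnt/raid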
shian:~# umount /mnt/raid/
shian:~# mount /mnt/raid/
shian:~# df -h /mnt/raid/
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0             1008M   42M  916M   5% /mnt/raid
On the next boot, the kernel log confirms that the array now comes up with both mirrors:

md: ide/host0/bus0/target1/lun0/part1's event counter: 0000000c
md: ide/host0/bus1/target0/lun0/part1's event counter: 0000000c
md: raid1 personality registered as nr 3
md0: max total readahead window set to 124k
md0: 1 data-disks, max readahead per data-disk: 124k
raid1: device ide/host0/bus0/target1/lun0/part1 operational as mirror 0
raid1: device ide/host0/bus1/target0/lun0/part1 operational as mirror 1
raid1: raid set md0 active with 2 out of 2 mirrors
md: updating md0 RAID superblock on device
md: ide/host0/bus0/target1/lun0/part1 [events: 0000000d]
md: (write) ide/host0/bus0/target1/lun0/part1's sb offset: 1048192
md: ide/host0/bus1/target0/lun0/part1 [events: 0000000d]
md: (write) ide/host0/bus1/target0/lun0/part1's sb offset: 1048192
We have seen a fairly simple and reliable way to keep our important data safe. However, this RAID setup is worthless without a good backup policy, since it does not protect against accidental file deletion.
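As a minimal illustration of such a policy (hypothetical paths and schedule; adapt to your own setup), a nightly /etc/crontab entry could archive the mounted array (note that % must be escaped in crontab):

0 3 * * * root tar czf /var/backups/raid-`date +\%Y\%m\%d`.tar.gz /mnt/raid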
After having tried both FreeNAS and software RAID on Debian, these are the conclusions I can draw:
Now the only thing left is to replace my current Duron 1200MHz (something I plan to do in the coming weeks), since it is getting old and feels slower every day, and then use it as a Debian NAS server at home.
The next chapter will cover configuring Samba to share the newly created devices with a Windows machine, as well as managing the different user permissions.