LVM & RAID

(Outdated!)

LVM - Logical Volume Manager;
RAID - Redundant Array of Independent (old: Inexpensive) Disks;

There is no strong coupling between LVM and RAID, i.e., each technology can be used independently. However, a striped LVM volume provides no redundancy by itself, so it is usually built on top of RAID1 (mirrored) arrays. In the following examples, LVM in fact means LVM2, and RAID is assumed to be Linux Software RAID. Hardware RAID is less complicated to manage, but (!) do not confuse real (heavy | expensive | server) hardware RAID with the pseudo (fake) RAID integrated on a typical PC motherboard. In the latter case, Linux Software RAID is preferred.

RAID + Striped LVM

To create LVM with striping on top of mirrored arrays, at least 4 disks are required. First, we create one partition of type Linux raid autodetect on each disk (1 disk = 1 partition); this step can be scripted, as sketched after the mdadm commands below. Then, assuming the disks are sd[c-f], two RAID1 arrays are created:

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sd[cd]1

mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sd[ef]1
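The partitioning step mentioned above can be scripted. A minimal sketch, assuming MBR partition tables and that the four disks really are sd[c-f] (verify with fdisk -l first, since this destroys any existing partition tables):

for d in sdc sdd sde sdf; do
  # one whole-disk partition, type fd (Linux raid autodetect)
  echo ',,fd' | sfdisk /dev/$d
done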

The RAID configuration is kept in /etc/mdadm.conf; however, mdadm can handle arrays without this config file (a way to record the configuration is sketched below). Now, we create physical volumes:

pvcreate /dev/md0

pvcreate /dev/md1
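As mentioned above, the configuration of the running arrays can be recorded in /etc/mdadm.conf. A common way (the exact ARRAY line format varies between mdadm versions):

mdadm --detail --scan >> /etc/mdadm.conf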

In the case of hardware RAID, mdadm is not used, and physical volumes are created as shown below:

pvcreate /dev/sdb

pvcreate /dev/sdc

After this, we create a volume group (the name can be freely chosen):

vgcreate VolGroup01 /dev/md0 /dev/md1

or (for HW RAID)

vgcreate VolGroup01 /dev/sdb /dev/sdc

The following commands create two logical volumes striped across both arrays; the stripe size is 256 KB and each volume is 136 GB:

lvcreate -i2 -I256 -L136G -nLogVol00 VolGroup01

lvcreate -i2 -I256 -L136G -nLogVol01 VolGroup01
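To check that the volumes are really striped across both physical volumes, the segment layout can be inspected, for example:

lvs --segments -o +devices VolGroup01

(lvdisplay -m shows similar mapping information.)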

And finally, file systems are created (-c checks for bad blocks, -j adds an ext3 journal, and -T news selects a usage type with one inode per 4 KB block):

mke2fs -c -j -T news /dev/VolGroup01/LogVol00

mke2fs -c -j -T news /dev/VolGroup01/LogVol01

and mounted as follows:

mount /dev/VolGroup01/LogVol00 /u02

mount /dev/VolGroup01/LogVol01 /u03
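To make these mounts persistent across reboots, matching /etc/fstab entries can be added; a sketch (-j above makes the filesystems ext3):

/dev/VolGroup01/LogVol00  /u02  ext3  defaults  1 2
/dev/VolGroup01/LogVol01  /u03  ext3  defaults  1 2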

Other useful commands

pvdisplay

displays attributes of a physical volume;

vgdisplay

displays attributes of a volume group;

lvdisplay

displays attributes of a logical volume;

pvscan

scans all disks for physical volumes;

vgscan

scans all disks for volume groups and rebuilds caches;

lvscan

scans all disks for logical volumes;

lvremove /dev/VolGroup01/LogVol01

removes a logical volume;

vgchange -a n VolGroup01

deactivates a volume group;

vgremove VolGroup01

removes a volume group;

LVM without RAID (single disk)

There is no room here to discuss all the advantages of LVM on a system with one or two disks. During Linux installation you may be prompted to use LVM, and if you agree, all configuration is done by the setup program. Problems may arise when the disk fails and you must manually recreate the whole structure (partitions, groups, volumes, etc.). Let's assume that we are going to restore a system disk (filesystem dumps are available). First we create a primary Linux partition for /boot and a Linux swap partition. The rest goes to the PV:

pvcreate /dev/sda3

After this we create volume group and logical volumes:

vgcreate VolGroup00 /dev/sda3

lvcreate -L16G -nLogVol00 VolGroup00

lvcreate -L8G -nLogVol01 VolGroup00

lvcreate -L64G -nLogVol02 VolGroup00

...

The next stage:

mke2fs -c -j -T news /dev/VolGroup00/LogVol00

...
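With the filesystems in place, the dumps mentioned above can be restored onto them. A sketch using restore(8) from the dump package (the mount point and dump file location are assumptions):

mount /dev/VolGroup00/LogVol00 /mnt
cd /mnt
restore -rf /backup/LogVol00.dump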

How to access the striped volumes when the system is broken

Let's assume that the system disk (with all the config files) is gone, but a Software RAID array with striped LVM contains some useful data. We can reassemble it in two steps: 1) RAID; 2) LVM. With hardware RAID, only the last step is needed.

To perform this task, we must boot Linux from some appropriate Restore / Repair / Live CD that supports Linux Software RAID (md devices) and LVM2. First of all, re-create mdadm.conf:

echo "DEVICE partitions" > /etc/mdadm.conf

echo "MAILADDR root" >> /etc/mdadm.conf

mdadm --examine --scan /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 >> /etc/mdadm.conf

Then, try to assemble RAID:

mdadm -A -s

cat /proc/mdstat
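If the arrays assembled correctly, /proc/mdstat reports them as active. The output looks roughly like this (device names and block counts are purely illustrative):

Personalities : [raid1]
md0 : active raid1 sdd1[1] sdc1[0]
      142253376 blocks [2/2] [UU]
md1 : active raid1 sdf1[1] sde1[0]
      142253376 blocks [2/2] [UU]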

Checking /proc/mdstat is optional (just to be sure). If the RAID is OK, the following commands should be executed in the specified sequence:

vgscan

pvscan

vgchange -a y

lvscan

The last command must show that all logical volumes are ACTIVE. Now a volume can be mounted on some empty directory:

mount -o ro /dev/VolGroup01/LogVol00 /mnt