One of the best tools for looking at LVM usage is lvmstat. It can report the amount of data read from and written to logical volumes. Using that information, you can determine which logical volumes are used the most.
Gathering LVM statistics is not enabled by default:
# lvmstat -v data2vg
0516-1309 lvmstat: Statistics collection is not enabled for
this logical device. Use -e option to enable.
As the output shows, it is not enabled, so you need to enable it for each volume group prior to running the tool:
# lvmstat -v data2vg -e
The following command takes a snapshot of LVM information every second for 10 intervals:
# lvmstat -v data2vg 1 10
This view shows the most utilized logical volumes on your system since you started the data collection. This is very helpful when drilling down to the logical volume layer when tuning your systems.
# lvmstat -v data2vg
Logical Volume iocnt Kb_read Kb_wrtn Kbps
appdatalv 306653 47493022 383822 103.2
loglv00 34 0 3340 2.8
data2lv 453 234543 234343 89.3
What are you looking at here?
- iocnt: The number of read and write requests.
- Kb_read: The total amount of data (in KB) read during the measured interval.
- Kb_wrtn: The total amount of data (in KB) written during the measured interval.
- Kbps: The amount of data transferred, in kilobytes per second.
You can use the -d option for lvmstat to disable the collection of LVM statistics.
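For example, to disable collection again for the data2vg volume group (mirroring the -e syntax shown earlier):
# lvmstat -v data2vg -d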
A common issue on AIX servers is that logical volumes are configured on only a single disk, sometimes causing high disk utilization on a small number of disks in the system and impacting the performance of the application running on the server.
If you suspect that this might be the case, first try to determine which disks are saturated on the server. Any disk that is in use more than 60% of the time should be considered. You can use commands such as iostat, sar -d, nmon, and topas to determine which disks show high utilization (see the example below). If they do, check which logical volumes are defined on that disk, for example on an IBM SAN disk:
# lspv -l vpath23
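To spot the saturated disks in the first place, you could, for example, run either of the following for a few intervals (the interval of 2 seconds and the count of 10 are just example values):
# iostat -d 2 10
# sar -d 2 10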
It is always a good idea to spread the logical volumes on a disk over multiple disks. That way, the logical volume manager will spread the disk I/O over all the disks that are part of the logical volume, utilizing the queue_depth of all those disks and greatly improving performance where disk I/O is concerned.
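For reference, you can check the queue_depth of an individual disk with lsattr (hdisk2 here is just an example device name):
# lsattr -El hdisk2 -a queue_depth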
Let's say you have a logical volume called prodlv of 128 LPs, which is sitting on one disk, vpath408. To see the allocation of the LPs of logical volume prodlv, run:
# lslv -m prodlv
Let's also assume that you have a large number of disks in the volume group, in which prodlv is configured. Disk I/O usually works best if you have a large number of disks in a volume group. For example, if you need to have 500 GB in a volume group, it is usually a far better idea to assign 10 disks of 50 GB to the volume group, instead of only one disk of 512 GB. That gives you the possibility of spreading the I/O over 10 disks instead of only one.
To spread the disk I/O of prodlv over 8 disks instead of just one, you can create an extra logical volume copy on those 8 disks and then, once the logical volume is synchronized, remove the original copy (the one on the single disk vpath408). So, divide 128 LPs by 8, which gives you 16 LPs: you can assign 16 LPs of logical volume prodlv to each of the 8 disks, for a total of 128 LPs.
First, check if the upper bound of the logical volume is set to at least 9. Check this by running:
# lslv prodlv
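To pick out just that field from the lslv output, for example:
# lslv prodlv | grep -i "upper bound"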
The upper bound limit determines on how many disks a logical volume can be created. You'll need the one disk, vpath408, on which the logical volume is already located, plus the 8 other disks that you're creating the new copy on. Never create a copy on the same disk: if that single disk fails, both copies of your logical volume fail as well. It is usually a good idea to set the upper bound of the logical volume a lot higher, for example to 32:
# chlv -u 32 prodlv
The next thing to determine is whether you actually have 8 disks with at least 16 free LPs each in the volume group. You can check this by running:
# lsvg -p prodvg | sort -nk4 | grep -v vpath408 | tail -8
vpath188 active 959 40 00..00..00..00..40
vpath163 active 959 42 00..00..00..00..42
vpath208 active 959 96 00..00..96..00..00
vpath205 active 959 192 102..00..00..90..00
vpath194 active 959 240 00..00..00..48..192
vpath24 active 959 243 00..00..00..51..192
vpath304 active 959 340 00..89..152..99..00
vpath161 active 959 413 14..00..82..125..192
Note how in the command above the original disk, vpath408, was excluded from the list.
Each of the disks listed by the command above should have at least 1/8th of the size of the logical volume free before you can make a logical volume copy of prodlv on it.
Now create the logical volume copy. The magical option you need is "-e x" for the logical volume commands, which spreads the logical volume over all available disks. If you want to make sure that the logical volume is spread over only these 8 disks, and not all the available disks in the volume group, make sure you specify the 8 disks explicitly:
# mklvcopy -e x prodlv 2 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161
Now check with "lslv -m prodlv" whether the new copy was created correctly:
# lslv -m prodlv | awk '{print $5}' | grep vpath | sort -dfu | \
while read pv ; do
result=`lspv -l $pv | grep prodlv`
echo "$pv $result"
done
The output should look similar to this:
vpath161 prodlv 16 16 00..00..16..00..00 N/A
vpath163 prodlv 16 16 00..00..00..00..16 N/A
vpath188 prodlv 16 16 00..00..00..00..16 N/A
vpath194 prodlv 16 16 00..00..00..16..00 N/A
vpath205 prodlv 16 16 16..00..00..00..00 N/A
vpath208 prodlv 16 16 00..00..16..00..00 N/A
vpath24 prodlv 16 16 00..00..00..16..00 N/A
vpath304 prodlv 16 16 00..16..00..00..00 N/A
Now synchronize the logical volume:
# syncvg -l prodlv
And remove the original logical volume copy:
# rmlvcopy prodlv 1 vpath408
Then check again:
# lslv -m prodlv
Now, what if you have to extend the logical volume prodlv later on with another 128 LPs, and you still want to maintain the spreading of the LPs over the 8 disks? Again, you can use the "-e x" option when running the logical volume commands:
# extendlv -e x prodlv 128 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161
You can also use the "-e x" option with the mklv command to create a new logical volume from the start with the correct spreading over disks.
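For example, a new logical volume spread over the same 8 disks from the start could be created like this (the name newlv, the jfs2 type, and the 128 LPs are just example values):
# mklv -e x -u 32 -t jfs2 -y newlv prodvg 128 vpath188 vpath163 vpath208 \
vpath205 vpath194 vpath24 vpath304 vpath161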
You can grow your ext3 file systems online: the functionality is included in resize2fs. To resize a logical volume, start by extending the volume:
# lvextend -L +2G /dev/systemvg/homelv
And then resize the file system:
# resize2fs /dev/systemvg/homelv
If you omit the size argument, resize2fs defaults to using all the available space in the partition/LV.
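If you want to grow to a specific size instead, resize2fs also accepts a size argument (the 10G here is just an example):
# resize2fs /dev/systemvg/homelv 10G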
To mount an ISO image on AIX:
- Build a logical volume (the size of the ISO image, or preferably a little bigger).
- Create an entry in /etc/filesystems using that logical volume (LV), but setting its Virtual File System (VFS) type to cdrfs.
- Create the mount point for this LV/ISO.
- Copy the ISO image to the LV using dd.
- Mount and work on it like a mounted CD-ROM.
The entry in /etc/filesystems should look like this:
/IsoCD:
dev = /dev/lv09
vfs = cdrfs
mount = false
options = ro
account = false
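Putting it together, a worked example could look like this (the rootvg volume group, the 6 LPs, and the ISO path are assumptions; make sure the logical volume is at least as large as the ISO image, and add the /etc/filesystems entry shown above before mounting):
# mklv -y lv09 rootvg 6
# mkdir /IsoCD
# dd if=/tmp/image.iso of=/dev/rlv09 bs=1024k
# mount /IsoCD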
To unmount:
- Unmount the file system.
- Destroy the logical volume.
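For example:
# umount /IsoCD
# rmlv lv09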
With HACMP, you can run into the following error during a verification/synchronization:
WARNING: The LVM time stamp for shared volume group: testvg is inconsistent
with the time stamp in the VGDA for the following nodes: host01
To correct the above condition, run verification & synchronization with
"Automatically correct errors found during verification?" set to either 'Yes'
or 'Interactive'. The cluster must be down for the corrective action to run.
This can happen when you've added additional space to a logical volume/file system from the command line instead of using the smitty hacmp menu. But you certainly don't want to take down the entire HACMP cluster to solve this message.
First of all, you don't have to. The cluster will fail over nicely anyway, even without these VGDAs being in sync. But it is still an annoying warning that you would like to get rid of.
Have a look at your shared logical volumes. By using the lsattr command, you can see if they are actually in sync or not:
host01 # lsattr -Z: -l testlv -a label -a copies -a size -a type -a strictness -Fvalue
/test:1:809:jfs2:y:
host02 # lsattr -Z: -l testlv -a label -a copies -a size -a type -a strictness -Fvalue
/test:1:806:jfs2:y:
Well, there you have it. One host reports testlv having a size of 806 LPs, the other says it's 809. Not good. You will run into this when you've used the extendlv and chfs commands to increase the size of a shared file system. You should have used the smitty menu.
The good thing is that HACMP will sync the VGDAs if you do some kind of logical volume operation through the smitty hacmp menu. So, either increase the size of a shared logical volume through the smitty menu by just one LP (and, of course, also increase the size of the corresponding file system accordingly), or create an additional shared logical volume of just one LP through smitty and remove it again afterwards.
When you've done that, simply re-run the verification/synchronization, and you'll notice that the warning message is gone. Make sure you run the
lsattr command again on your shared logical volumes on all the nodes in your cluster to confirm.
Sometimes a logical volume is deleted, but the ODM is not updated accordingly, e.g. when "lsvg -l" no longer shows the logical volume, but the lslv command can still show information about it. Not good.
To resolve this issue, first try:
# synclvodm -v [volume group name]
If that doesn't work, try the following (in the example below, logical volume hd7 is used). First, save the ODM information of the logical volume:
# odmget -q name=hd7 CuDv | tee -a /tmp/CuDv.hd7.out
# odmget -q name=hd7 CuAt | tee -a /tmp/CuAt.hd7.out
If you mess things up, you can always use the following command to restore the ODM information:
# odmadd /tmp/[filename]
Delete the ODM information of the logical volume:
# odmdelete -o CuDv -q name=hd7
# odmdelete -o CuAt -q name=hd7
Then, remove the device entry of the logical volume in the /dev directory (if present at all).
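For example, for logical volume hd7 (if the block and/or raw device files still exist):
# rm /dev/hd7 /dev/rhd7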