This next piece describes how to configure storage multipathing software on a Red Hat Enterprise Linux 7 system. This is required if you're using SAN storage and multiple paths to the storage are available (which is usually the case).
First, check if all required software is installed. It generally is, but it's good to check:
# yum -y install device-mapper-multipath
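If you'd rather just verify that the package is present without installing anything, an rpm query will do; it prints the installed package version, or reports that the package is not installed:
# rpm -q device-mapper-multipath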
Next, check if the multipath daemon is running:
# service multipathd status
If it is, stop it:
# service multipathd stop
Next, configure /etc/multipath.conf, the configuration file for the multipath daemon:
# mpathconf --enable --with_multipathd y
This will create a default /etc/multipath.conf file, which will often work quite well without any further configuration.
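For reference, the generated file typically starts with a small defaults section; a minimal hand-written equivalent could look like the sketch below (these option values are common RHEL 7 defaults, not something your environment necessarily requires):
defaults {
    user_friendly_names yes
    find_multipaths yes
}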
Then start the multipath daemon:
# service multipathd start
Redirecting to /bin/systemctl start multipathd.service
You can now use the lsblk command to view the disks that are configured on the system:
# lsblk
This command should show that mpathX devices have been created; these are the multipath devices managed by the multipath daemon, and you can now start using these mpathX disk devices as storage on the Red Hat system. Another way to check the mpath disk devices available on the system is by looking at the /dev/mapper directory:
# ls -als /dev/mapper/mpath*
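The multipath command itself can also report the full topology, including the state of each individual path to every LUN, which is a useful sanity check that all expected paths are up:
# multipath -ll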
If you have a clustered environment, where SAN storage devices are zoned and allocated to multiple systems, you may want to ensure that all the nodes in the cluster use the same naming for the mpathX devices. That makes it easier to recognize which disk is which on each system.
To ensure that all the nodes in the cluster use the same naming, first run a "cat /etc/multipath/bindings" command on all nodes, identify which disks are shared by all nodes, and see what the current naming of the mpathX devices on each system looks like. It may well be that the naming of the mpathX devices is already consistent on all cluster nodes.
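Each line in /etc/multipath/bindings simply maps a user-friendly alias to the WWID of a LUN, so comparing the files across nodes is straightforward. For example (the WWIDs shown here are made-up placeholders):
# cat /etc/multipath/bindings
# Format: alias wwid (the WWIDs below are placeholders)
mpatha 360050768018105234800000000000001
mpathb 360050768018105234800000000000002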
If it is not, however, then copy the /etc/multipath/bindings file from one server to all other cluster nodes. Be careful when doing this, especially when one or more servers in the cluster have more SAN storage allocated than the others: only copy over those entries in /etc/multipath/bindings that represent storage shared by all cluster nodes. Any SAN storage allocated to just one server will show up in the /etc/multipath/bindings file of that server only, and it should not be copied over to the other servers.
Once the file is correct on all cluster nodes, restart the multipath daemon on each cluster node:
# service multipathd stop
# multipath -F
# service multipathd start
If you now do a "ls" in /dev/mapper on each cluster node, you'll see the same mpath names on all cluster nodes.
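If the cluster has more than a few nodes, you can script the distribution and restart. A rough sketch, assuming passwordless root SSH and placeholder hostnames node2 and node3, and that you have already pruned any non-shared entries from the bindings file as described above:
for node in node2 node3 ; do
    # node2/node3 are placeholder hostnames; substitute your own cluster nodes
    scp /etc/multipath/bindings ${node}:/etc/multipath/bindings
    ssh ${node} "service multipathd stop && multipath -F && service multipathd start"
done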
Once this is complete, make sure that the multipath daemon is started at system boot time as well:
# systemctl enable multipathd
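To confirm the service is indeed enabled for boot, you can ask systemd directly; it will print "enabled":
# systemctl is-enabled multipathd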