Topics: Backup & restore, Spectrum Protect

Tail TSM / IBM Spectrum Protect console log

The following command can be used to tail the TSM / IBM Spectrum Protect console log:

dsmadmc -console
This will allow you to continuously follow what is happening on the TSM / IBM Spectrum Protect server.
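When following the console for a longer period, it can help to capture the stream to a log file with a timestamp on every line. Below is a minimal sketch of such a filter; the helper name "stamp" is hypothetical, and the demo feeds it a local line instead of a live console session:

```shell
# Prepend a timestamp to every line read on stdin. On a live system you
# could pipe the console stream through it, for example:
#   dsmadmc -console | stamp >> /var/log/tsm-console.log
# (assumes a valid administrative session; "stamp" is just a helper name)
stamp() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
  done
}

# demo on a local stream instead of the real TSM console:
printf 'server started\n' | stamp
```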

Topics: Backup & restore, Spectrum Protect

Show configuration of a TSM / IBM Spectrum Protect server

To save the complete configuration of a TSM server to a file, run:

dsmadmc -id=admin -password=admin show config > /tmp/config
This assumes that you have an admin account with the password admin, and it will write the output to /tmp/config.

If you wish to have comma separated output, add -comma.

To just display the status of the TSM / IBM Spectrum Protect server, run (this is included in the output of show config):
q status
Another very interesting command to run is:
q system
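The comma-separated output mentioned above lends itself well to scripting. The sketch below parses such output with awk; the sample data is illustrative only, as real "q status" output depends on your server. On a live system you would produce it with something like "dsmadmc -id=admin -password=admin -comma -dataonly=yes 'q status'" (-dataonly=yes suppresses the headers):

```shell
# Parse comma-separated administrative output with awk.
# The sample below stands in for real dsmadmc -comma output.
sample='Server Name,SERVER1
Server Version,8'
printf '%s\n' "$sample" | awk -F',' '{ printf "%s = %s\n", $1, $2 }'
```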

Topics: Backup & restore, Spectrum Protect

Register a new TSM / IBM Spectrum Protect administrator

To register a new TSM / IBM Spectrum Protect administrator, run:

register admin adminname password contact="Contact details of the new admin" emailaddress=email-address@ofthenewadmin.com
Next, grant system privilege authority to the new admin:
grant authority adminname class=sys
To remove a TSM admin, run:
remove admin adminname
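The register and grant steps above can also be batched in a macro file and run in one go with the administrative client's macro facility (dsmadmc ... macro file). A minimal sketch, in which the admin name, password, and contact details are placeholders:

```shell
# Generate a macro file that registers a new admin and grants system
# authority. All values below are placeholders. You would then run it
# with something like:  dsmadmc -id=admin -password=admin macro newadmin.mac
cat > newadmin.mac <<'EOF'
register admin newadmin Secr3tPw contact="New admin" emailaddress=new.admin@example.com
grant authority newadmin class=sys
EOF
```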

Topics: Red Hat, System Admin

Red Hat: Creating a backup to ISO images

The following procedure describes how to create a full system backup to ISO images using MondoRescue; the images can later be burned to DVD and used to recover the entire system.

First, set up the repo for MondoRescue:

# cd /etc/yum.repos.d/
# wget ftp://ftp.mondorescue.org/rhel/7/x86_64/mondorescue.repo
Install MondoRescue:
# yum install mondo
Answer "y" to everything.

You will need a destination for the ISO files. For example, a remote NFS mount on a separate server is a good choice, so the backup is not stored locally on the same system.

Edit /etc/mindi/mindi.conf to allow for a larger RAM disk (Mindi is used by Mondo; without this change, Mindi will exit saying it ran out of space). Add to mindi.conf:
EXTRA_SPACE=240000
BOOT_SIZE=240000
Now run the MondoRescue backup:
# mondoarchive -O -V -i -s 4480m -d /target -I / -T /tmp
You can also add the -E option to tell MondoRescue to exclude certain folders.

The -s option tells MondoRescue to make ISO images of DVD size (4480m).

The command says it will log to /var/log/mondoarchive.log; a /var/log/mindi.log is also written. It will also indicate the number of media images to be created. Let it run to completion, after which your backup is done.
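When the exclude list grows, it can help to build the mondoarchive command line from a few variables and review it before running. A sketch, using the paths from this article as examples (mondoarchive takes a pipe-separated path list for -E):

```shell
# Build the mondoarchive command line from variables so the exclude
# list is easy to maintain; print it for review before running it.
TARGET=/target
EXCLUDES="/tmp|/var/cache"        # example folders to skip, pipe-separated for -E
CMD="mondoarchive -O -V -i -s 4480m -d $TARGET -I / -T /tmp -E \"$EXCLUDES\""
echo "$CMD"        # review before running
# eval "$CMD"      # uncomment to actually start the backup
```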

Topics: AIX, System Admin

Configuring dsh

The dsh (distributed shell) is a very useful (and powerful) utility that can be used to run commands on multiple servers at the same time. By default it is not installed on AIX, but you can install it yourself:

First, install the dsm filesets. DSM is short for Distributed Systems Management, and these filesets include the dsh command. They can be found on the AIX installation media. Install the following 2 filesets, which you can verify afterwards with lslpp:

# lslpp -l | grep -i dsm
  dsm.core       7.1.4.0  COMMITTED  Distributed Systems Management
  dsm.dsh        7.1.4.0  COMMITTED  Distributed Systems Management
Next, we'll need to set up some environment variables that are used by dsh. The best way to do this is by putting them in the .profile of the root user (~root/.profile), so you won't have to set these environment variables manually every time you log in:
# cat .profile
alias bdf='df -k'
alias cls="tput clear"
stty erase ^?
export TERM=vt100

# For DSH
export DSH_NODE_RSH=/usr/bin/ssh
export DSH_NODE_LIST=/root/hostlist
export DSH_NODE_OPTS="-q"
export DSH_REMOTE_CMD=/usr/bin/ssh
export DCP_NODE_RCP=/usr/bin/scp
export DSH_CONTEXT=DEFAULT
In the output from .profile above, you'll notice that variable DSH_NODE_LIST is set to /root/hostlist. You can change this to any file name you like. The DSH_NODE_LIST variable points to a text file with server names in it (1 per line): every host name in that file is a target for the dsh command. So, if you put 3 host names in the file and then run a dsh command, that command will be executed on those 3 hosts in parallel.

Note: You may also use the environment variable WCOLL instead of DSH_NODE_LIST.

So, create file /root/hostlist (or any file that you've configured for environment variable DSH_NODE_LIST), and add host names in it. For example:
# cat /root/hostlist
host1
host2
host3
Next, you'll have to set up the ssh keys for every host in the hostlist file. The dsh command uses ssh to run commands, so you'll have to enable password-less ssh communication from the host where you've installed dsh on (let's call that the source host), to all the hosts where you want to run commands using dsh (and we'll call those the target hosts).

To set this up, follow these steps:
  • Run "ssh-keygen -t rsa" as user root on the source and all target hosts.
  • Next, copy the contents of ~root/.ssh/id_rsa.pub from the source host into file ~root/.ssh/authorized_keys on all the target hosts.
  • Test if you can ssh from the source host to all the target hosts by running "ssh host1 date" for each target host. If you're using DNS and have fully qualified domain names configured for your hosts, test with an ssh to the fully qualified domain name instead, for example: "ssh host1.domain.com". This is because dsh will also resolve host names through DNS, and thus use these instead of the short host names. The first time you ssh from the source host to a target host, you will be asked a question; answer "yes" to add an entry to the known_hosts file.
Now, ensure you log out from the source host and log back in again as root. Since you've set some environment variables in .profile for user root, that file must be read again, which happens at login.

At this point, you should be able to issue a command on all the target hosts, at the same time. For example, to run the "date" command on all the servers:
# dsh date
Also, you can now copy files using dcp (notice the similarity between ssh and dsh, and scp and dcp), for example to copy a file /etc/exclude.rootvg from the source host to all the target hosts:
# dcp /etc/exclude.rootvg /etc/exclude.rootvg
Note: dsh and dcp are very powerful tools to run commands on, or copy files to, multiple servers. However, keep in mind that they can be very destructive as well. A command such as "dsh halt -q" will halt all the servers at the same time. So you may want to triple-check any dsh or dcp command before actually running it. That is, if you value your job, of course.
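At its core, the dsh behavior above can be approximated with a plain shell loop over the hostlist file, using the remote shell configured in DSH_NODE_RSH. The sketch below substitutes a stub for ssh so it can run anywhere; on a real system you would leave DSH_NODE_RSH=/usr/bin/ssh:

```shell
# Run one command on every host listed in a file (one name per line),
# via the remote shell in DSH_NODE_RSH -- a rough approximation of dsh
# (sequential, not parallel). The stub below stands in for ssh.
fake_ssh() { echo "[$1] would run: $2"; }      # stand-in for /usr/bin/ssh
DSH_NODE_RSH=fake_ssh
HOSTLIST=/tmp/hostlist.demo
printf 'host1\nhost2\n' > "$HOSTLIST"          # demo hostlist

run_all() {
  while read -r host; do
    "$DSH_NODE_RSH" "$host" "$1"
  done < "$HOSTLIST"
}

run_all date
```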

Topics: Red Hat

Using Wodim to write an ISO image to DVD

Wodim is an easy tool to write an ISO image to DVD, and it's included with Red Hat.

In order to write an ISO image to DVD, first determine the device name of the DVD burner. Most often, it is /dev/sr0. To validate this, run:

# ls -als /dev/sr0
If that's the correct device, all you need is an ISO image. Let's say, your ISO image is located in /path/to/image.iso. In that case, use the following command to write the ISO image to DVD:
# wodim dev=/dev/sr0 -v -data /path/to/image.iso
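After burning, a common way to verify the disc is to read back exactly as many 2048-byte blocks as the image contains and compare checksums. A sketch of that idea is below; /dev/sr0 and the image path are just examples, and since the function works on any two files, the demo uses ordinary files standing in for the image and the disc:

```shell
# Compare the md5 checksum of an ISO image against the first N blocks
# of a device (or file), where N is the image size in 2048-byte blocks.
verify_burn() {
  iso=$1; dev=$2
  blocks=$(( $(wc -c < "$iso") / 2048 ))
  sum_iso=$(md5sum < "$iso" | awk '{print $1}')
  sum_dev=$(dd if="$dev" bs=2048 count="$blocks" 2>/dev/null | md5sum | awk '{print $1}')
  [ "$sum_iso" = "$sum_dev" ] && echo "match" || echo "MISMATCH"
}

# demo with two identical files standing in for image and disc:
dd if=/dev/zero of=/tmp/demo.iso bs=2048 count=4 2>/dev/null
cp /tmp/demo.iso /tmp/demo.dev
verify_burn /tmp/demo.iso /tmp/demo.dev
```

On a real disc you would call it as "verify_burn /path/to/image.iso /dev/sr0".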

Topics: Red Hat

Red Hat Cluster Suite commands

Red Hat cluster controls the startup and shutdown of all application components on all nodes within a cluster. To check the status of the cluster, or to start, stop, or fail over resource groups, Red Hat cluster's standard commands can be used.

Following is a list of some cluster commands.

  • To check cluster status: clustat
  • To start cluster manager: service cman start (do on both nodes right away, within 60 seconds)
  • To start cluster LVM daemon: service clvmd start (do on both nodes)
  • To start Resource group manager: service rgmanager start (do on both nodes)
  • To enable and start the user service: clusvcadm -e service_name (check with clustat for available service names in your cluster)
  • To disable and stop the user service: clusvcadm -d service_name (check with clustat for available service names in your cluster)
  • To stop Resource group manager: service rgmanager stop
  • To stop cluster LVM daemon: service clvmd stop
  • To stop cluster manager: service cman stop (Do not stop CMAN at the same time on all nodes)
  • To relocate user service: clusvcadm -r service_name (check with clustat for available service names in your cluster)
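Since a mistyped service name is easy to make, a small wrapper that checks the service against clustat output before relocating can be useful. A sketch, in which stubs stand in for the real clustat and clusvcadm commands so it can run without a cluster:

```shell
# Relocate a service only if clustat knows about it. "clustat" and
# "clusvcadm" are the real cluster commands; the stubs below merely
# let this sketch run on a machine without Red Hat Cluster Suite.
clustat()   { echo "service:web started node1"; }   # stub for the demo
clusvcadm() { echo "clusvcadm $*"; }                # stub for the demo

relocate() {
  svc=$1
  clustat | grep "$svc" || { echo "unknown service: $svc"; return 1; }
  clusvcadm -r "$svc"
}

relocate web
```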

Topics: Red Hat

How to Mount and Unmount an ISO Image in RHEL

An ISO image or .iso (International Organization for Standardization) file is an archive file that contains a disk image in the ISO 9660 file system format, the format commonly used on CD/DVD-ROMs. In simple words, an ISO file is a disk image.

Typically an ISO image contains the installation files of software, such as an operating system, games, or other applications. Sometimes we need to access files and view content from these ISO images, without wasting disk space and time burning them to CD/DVD.

This article describes how to mount and unmount an ISO image on RHEL to access and list the content of ISO images.

To mount an ISO image, you must be logged in as the root user. Run the following command from a terminal to create a mount point.

# mkdir /mnt/iso
Once you have created the mount point, use the mount command to mount an ISO file. We'll use a file called rhel-server-6.6-x86_64-dvd.iso for our example.
# mount -t iso9660 -o loop /tmp/rhel-server-6.6-x86_64-dvd.iso /mnt/iso/
After the ISO image is mounted successfully, go to the mount point at /mnt/iso and list the contents of the ISO image. It is mounted read-only, so none of the files can be modified.
# cd /mnt/iso
# ls -l
You will see the list of files in the ISO image that we mounted with the above command.

To unmount an ISO image, run the following command from the terminal as root:
# umount /mnt/iso
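The mount steps above can be wrapped with a couple of sanity checks. Below is a dry-run sketch that validates its inputs and prints the mount command it would run rather than executing it (drop the "echo" to really mount); the demo image path is a stand-in:

```shell
# Check that the image exists, create the mount point, and print the
# mount command (dry-run, so this sketch is safe to execute anywhere).
mount_iso() {
  iso=$1; mnt=${2:-/mnt/iso}
  [ -f "$iso" ] || { echo "no such image: $iso" >&2; return 1; }
  mkdir -p "$mnt"
  echo mount -t iso9660 -o loop "$iso" "$mnt"
}

touch /tmp/demo.iso        # stand-in image for the demo
mount_iso /tmp/demo.iso /tmp/isomnt
```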

Topics: AIX, System Admin

Copy printer configuration from one AIX system to another

The following procedure can be used to copy the printer configuration from one AIX system to another AIX system. This has been tested using different AIX levels, and has worked great. This is particularly useful if you have more than just a few printer queues configured, and configuring all printer queues manually would be too cumbersome.

  1. Create a full backup of your system, just in case something goes wrong.
  2. Run lssrc -g spooler and check if qdaemon is active. If not, start it with startsrc -s qdaemon.
  3. Copy /etc/qconfig from the source system to the target system.
  4. Copy /etc/hosts from the source system to the target system, but be careful to not lose important entries in /etc/hosts on the target system (e.g. the hostname and IP address of the target system should be in /etc/hosts).
  5. On the target system, refresh the qconfig file by running: enq -d
  6. On the target system, remove all files in /var/spool/lpd/pio/@local/custom, /var/spool/lpd/pio/@local/dev and /var/spool/lpd/pio/@local/ddi.
  7. Copy the contents of /var/spool/lpd/pio/@local/custom on the source system to the target system into the same folder.
  8. Copy the contents of /var/spool/lpd/pio/@local/dev on the source system to the target system into the same folder.
  9. Copy the contents of /var/spool/lpd/pio/@local/ddi on the source system to the target system into the same folder.
  10. Create the following script, called newq.sh, and run it:
    #!/bin/ksh
    
    let counter=0
    cp /usr/lpp/printers.rte/inst_root/var/spool/lpd/pio/@local/smit/* \
       /var/spool/lpd/pio/@local/smit
    cd /var/spool/lpd/pio/@local/custom
    chmod 775 /var/spool/lpd/pio/@local/custom
    for FILE in `ls` ; do
       let counter="$counter+1"
       chmod 664 $FILE
       QNAME=`echo $FILE | cut -d':' -f1`
       DEVICE=`echo $FILE | cut -d':' -f2`
       echo $counter : chvirprt -q $QNAME -d $DEVICE
       chvirprt -q $QNAME -d $DEVICE
    done
    
  11. Test and confirm printing is working.
  12. Remove file newq.sh.

Topics: HMC, System Admin

Command line upgrade of HMC

This is how you upgrade your HMC from version 7.9.0 to Service Pack 3 and all necessary fixes. At the time of writing, Service Pack 3 is the latest available service pack, and there are 2 fixes available for V7 R7.9.0 SP3, called MH01587 and MH01605. The following procedure assumes that your HMC is currently at the base level of version 7.9.0, without any additional fixes or service packs installed.

This procedure is completely command line based. For this to work, you need to be able to ssh into the HMC using the hscroot user. For example, if your HMC is called yourhmc, you should be able to do this:

# ssh -l hscroot yourhmc
We also need to make sure we have some backups. Start with saving some output:
# lshmc -v
# lshmc -V
# lshmc -n
# lshmc -r  
The information outputted by the lshmc command is useful to determine what is currently installed on the HMC.

Next, take a console data backup of the HMC:
# bkconsdata -r nfs -h 10.11.12.13 -l /mksysb/HMC -d backupfile
The bkconsdata command above will back up the console data of the HMC via NFS to host 10.11.12.13 (replace with your own server name or IP address), and will store it in /mksysb/HMC/backupfile (replace /mksysb/HMC and backupfile in the bkconsdata command above with the correct location to back up to on your NFS server).

Next, make a backup of the profiles for each managed server:
# bkprofdata -m <managed system> -f <backup file> --force
The bkprofdata command above requires the name of each managed system. A good way to know the names of the managed systems configured on the HMC, is by running the following command:
# lssysconn -r all
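If the HMC manages several systems, the profile backups can be driven from a loop. A sketch, in which a fixed list of made-up system names stands in so it can run anywhere, and the bkprofdata command is echoed rather than executed; on the HMC itself you could feed the loop from "lssyscfg -r sys -F name":

```shell
# Back up profile data for every managed system in one loop (dry-run:
# the command is echoed, not executed). The system names are stand-ins.
for sys in p750-A p770-B; do
  echo bkprofdata -m "$sys" -f "profile_$sys" --force
done
```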
Now that we have all the necessary backups, it's time to perform the actual upgrade.

Let's start with the upgrade to Service Pack 3:
# updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/updates/HMC_Update_V7R790_SP3.iso -r
This will download the service pack from the IBM site to the HMC via FTP, upgrade the HMC, and reboot it. This may take a while. The updhmc command may return a prompt after the download is completed, but that does not mean the update has completed already. Please allow it to install and reboot; a message will be shown on the screen: "The system is shutting down for reboot now". After the reboot, run the "lshmc -V" command again. It may take some time before the lshmc command responds with proper output; again, give it some time. As soon as lshmc shows that the service pack is installed, you can move on to the next step.

The next step is installing the fixes:
# updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/fixes/MH01587.iso -r
And...
# updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/fixes/MH01605.iso -r
After each fix is installed, the HMC will reboot, and you'll have to check with "lshmc -V" if the fix is installed.

And that concludes the upgrade. If any new service packs and/or fixes are released by IBM, you can install them in a similar fashion.
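Since the two fix installs differ only in the ISO path, they can be driven from a list. A dry-run sketch (the updhmc command is echoed rather than executed; remove the "echo" to actually run it on the HMC, one fix at a time, checking "lshmc -V" after each reboot):

```shell
# Print the updhmc command for each fix ISO in turn (dry-run).
for fix in MH01587 MH01605; do
  echo updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp \
       -f /software/server/hmc/fixes/$fix.iso -r
done
```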
