
PowerVC Express using local storage : Overview, tips and tricks, and lessons learned from experience


Everybody has been talking about PowerVC since the October 8th announcement, but after watching a few videos and reading a few articles about it, I didn't find anything telling what the product really has in its guts. I had the chance to deploy and test PowerVC Express edition (using local storage), faced a lot of problems, and found some interesting things to share with you. Rather than boiling the ocean :-) and asking for new features (oh, everybody wants new features !), here is a practical how-to, some tips and tricks, and the lessons I've learned along the way. After a few weeks of work I can say that PowerVC is really good and pretty simple to use and deploy. Here we go :

Preparing the PowerVC Express host

Setting SELinux from enforcing to permissive


Please refer to my previous post about installing Linux on Power if you have any doubt about this. Before trying to install PowerVC Express edition you first have to disable SELinux, or at least set the policy from enforcing to permissive. Please note that a reboot is mandatory after this modification :

# sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
# grep ^SELINUX /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted

Halt and restart the PowerVC Express host (use shutdown -r instead of -h if you prefer a direct reboot) :

[root@powervc-standard ~]# shutdown -fh now
Broadcast message from root@powervc-standard
(/dev/pts/0) at 15:43 ...
The system is going down for halt NOW!

Yum repository


Before running the installer you have to configure your yum repository, because the installer needs to install RPMs shipped with the Red Hat Enterprise Linux installation CD-ROM. I chose to use the CD-ROM as the repository, but it can also be served over HTTP without any problem :

# mkdir /mnt/cdrom ; mount -o loop /dev/cdrom /mnt/cdrom
# cat /etc/yum.repos.d/rhel-cdrom.repo
[rhel-cdrom]
name=RHEL Cdrom
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1
# yum update
# yum upgrade

If using the x86 version : noop scheduler

If you are using the x86 version of PowerVC Express you may experience some slowness while installing the product. In my case I had to change the I/O scheduler from cfq to noop. My advice is to enable it only temporarily. My installation of PowerVC Express took hours (no joke, almost 5 hours) before I changed the I/O scheduler to noop. Enabling this option reduced the time to half an hour (in my case) :

# cat /sys/block/vda/queue/scheduler
noop anticipatory deadline [cfq]
# echo "noop" > /sys/block/vda/queue/scheduler
# cat /sys/block/vda/queue/scheduler
[noop] anticipatory deadline cfq
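
This sysfs setting is not persistent across reboots, so it reverts to cfq on its own; to switch back immediately once the installation is finished :

# echo "cfq" > /sys/block/vda/queue/scheduler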

PATH modification

Add /opt/ibm/powervc/bin to your PATH to be able to run PowerVC commands such as powervc-console-term, powervc-services, powervc-get-token, and so on :

# more /root/.bash_profile
PATH=$PATH:$HOME/bin:/opt/ibm/powervc/bin

I'll not detail the installation here; just run the installer and answer the questions it asks :

# ./install
Select the offering type to install:
   1 - Express  (IVM support)
   2 - Standard (HMC support)
   9 - Exit
1
Extracting license content
International Program License Agreement
[..]

Preparing the Virtual I/O Server and the IVM

Before trying to do anything you have to configure the Virtual I/O Server and the IVM. Check that all the points below are OK before registering the host :

  • You need at least one Shared Ethernet Adapter to use PowerVC Express; on an IVM you can have up to four Shared Ethernet Adapters.
  • A virtual media repository created with at least 40 GB free :
# lsrep
Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
   40795    29317 rootvg                   419328           342272
  • A PowerVM Enterprise Edition or PowerVM for IBM PowerLinux key.
  • The maximum number of virtual adapters correctly configured (256 in my case).
  • The maximum number of SSH sessions allowed on the Virtual I/O Server has to be at least 20 :
  • # grep MaxSessions /etc/ssh/sshd_config
    MaxSessions 20
    # stopsrc -s sshd
    # startsrc -s sshd
    
  • FTP transfers are allowed and FTP ports are open between the PowerVC host and the IVM.

PowerVC usage

    Host Registering

    It's very easy, but registering the host is one of the most important steps of this configuration. Just set your IVM hostname, user and password. The tricky part is the check box to use local storage: you then have to choose the directory where images will be stored. Be careful when choosing this directory, it can't be changed on the fly; you have to remove and re-register the host if you want to change it. My advice is not to choose the default /home/padmin directory, but to create a dedicated logical volume for this (see the sketch below).
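
    A minimal sketch of what such a dedicated filesystem could look like on the Virtual I/O Server, run as root from oem_setup_env (the mount point and size are examples, adjust them to your needs; crfs creates the logical volume for you) :

    padmin@deckard$ oem_setup_env
    # crfs -v jfs2 -g rootvg -a size=100G -m /powervc_images -A yes
    # mount /powervc_images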

    [Image: deckard]

    If the host registration fails, check all the Virtual I/O Server prerequisites, then retry. If it fails again, check /var/log/nova/api.log and /var/log/nova/compute_xxx.log (a quick way to spot errors is shown below).
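
    A simple grep to spot the relevant errors in those logs (the compute log name embeds the IVM IP address, as you can see later in this post) :

    # grep -iE "error|exception" /var/log/nova/api.log /var/log/nova/compute-*.log | tail -20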

    [Image: host1]

    Manage existing Virtual Machines

    Unlike VMControl, PowerVC allows you to manage existing machines: if your IVM is correctly configured, you'll have no trouble importing existing machines and managing them with PowerVC. This is one of the strengths of PowerVC, it ensures backward compatibility for your existing Virtual I/O clients. And it's simple to use (look at the images below) :

    [Image: manage_existing]
    [Image: manage_existing1]

    Network Definition

    Create a network using one of your Shared Ethernet Adapters to be able to deploy machines :

    [Image: network_1]

    First installation with ISO image

    Importing ISO images

    For the first installation (if you do not have any systems already installed) you first need to import an ISO into PowerVC. Read the next steps carefully, because I had a lot of space problems with this. Importing images is managed by glance, so if you have any problem, checking the /var/log/glance files can be useful (as is enabling verbose mode in /etc/glance/glance.conf). Just use the powervc-iso-import command to do so :

    # powervc-iso-import --name aix-7100-02-02 --os-distro aix --location /root/AIX_7.1_Base_Operating_System_TL_7100-02-02_DVD_1_of_2_32013.iso 
    Password: 
    +----------------------------+--------------------------------------+
    | Property                   | Value                                |
    +----------------------------+--------------------------------------+
    | Property 'architecture'    | ppc64                                |
    | Property 'hypervisor_type' | powervm                              |
    | Property 'os_distro'       | aix                                  |
    | checksum                   | df548a0cc24dbec196d0d3ead92feaca     |
    | container_format           | bare                                 |
    | created_at                 | 2014-02-04T19:45:29.125109           |
    | deleted                    | False                                |
    | deleted_at                 | None                                 |
    | disk_format                | iso                                  |
    | id                         | ee0a6544-c065-4ab7-aec8-7d6ee4248672 |
    | is_public                  | True                                 |
    | min_disk                   | 0                                    |
    | min_ram                    | 0                                    |
    | name                       | aix-7100-02-02                       |
    | owner                      | 437b161186414e2bb0d4778cbd6fa14c     |
    | protected                  | False                                |
    | size                       | 3835723776                           |
    | status                     | active                               |
    | updated_at                 | 2014-02-04T19:49:29.031481           |
    +----------------------------+--------------------------------------+
    

    [Image: importing_iso]

    The output of the command above does not tell you anything about what is really done behind the scenes.

    Images are stored permanently in /var/lib/glance/images, where powervc-iso-import copies them; this is the place where you need free space. Don't forget to remove your source image from the PowerVC host, or you'll need even more space (double, in fact :-)). Watching /var/lib/glance/images while powervc-iso-import is running shows the image being copied :

    # ls -lh /var/lib/glance/images
    total 3.3G
    -rw-r-----. 1 glance glance 3.3G Feb  4 22:08 3b95401b-85b4-4682-a7a5-332ea9e48348
    # ls -lh /var/lib/glance/images
    total 3.4G
    -rw-r-----. 1 glance glance 3.4G Feb  4 22:09 3b95401b-85b4-4682-a7a5-332ea9e48348
    

    [Image: image_import2]

    Deploying a Virtual Machine with an ISO image :

    Be careful when deploying images to have enough space in the /home/padmin directory of the Virtual I/O Server: images are first copied to this directory before being made available in the Virtual I/O Server media repository in /var/vio/VMLibrary (they are, apparently, removed later). On the PowerVC host itself, be careful to have enough space in /var/lib/nova/images and /var/lib/glance/images. On the PowerVC host images are stored by glance, so DON'T DELETE IMAGES in /var/lib/glance/images ! My understanding is that images are copied on the fly from glance (/var/lib/glance/images), where they are stored by powervc-iso-import, to nova (/var/lib/nova/images), then sent to the Virtual I/O Server and added to its repository. PowerVC uses FTP to copy files to the Virtual I/O Server, so be sure to have the FTP ports open between the PowerVC host and the Virtual I/O Server.
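
    A quick way to check the free space on both sides before a deployment (a sketch; on the Virtual I/O Server the standard AIX df is available from oem_setup_env) :

    On the Virtual I/O Server (from oem_setup_env) :
    # df -g /home/padmin /var/vio/VMLibrary
    On the PowerVC host :
    # df -h /var/lib/glance/images /var/lib/nova/images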

    • Here is an example of the ISO file present in /home/padmin on the Virtual I/O Server while deploying a server from an image; below we can see that the image was copied to /var/lib/nova/images before being copied to the Virtual I/O Server :
    padmin@deckard# ls *.iso
    config                                                  rhel-server-6.4-beta-ppc64-dvd.iso                      smit.script
    89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso  rhel-server-ppc-6.4-boot.iso                            smit.transaction
    ioscli.log                                              smit.log                                                tivoli
    [root@powervc-express ~]# ls -l /var/lib/nova/images/
    total 4579236
    -rw-r--r--. 1 nova nova 4689133568 Feb  4 22:31 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso
    
  • Once images are copied from PowerVC they are imported into the Virtual I/O Server repository :
  • padmin@deckard# ps -ef | grep mkvopt
      padmin  6422716  8519802   0 05:41:42      -  0:00 ioscli mkvopt -name 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e -file /home/padmin/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso -ro
      padmin  8519802 10485798   0 05:41:42      -  0:00 rksh -c ioscli mkvopt -name 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e -file /home/padmin/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e.192_168_0_100.iso -ro;echo $?
      padmin 10158232  9699504   2 05:42:30  pts/0  0:00 grep mkvopt
    
  • The Virtual Optical Device is then used to load the CD-ROM into the partition :
  • padmin@deckard# lsmap -all
    SVSA            Physloc                                      Client Partition ID
    --------------- -------------------------------------------- ------------------
    vhost0          U8203.E4A.06E7E53-V1-C11                     0x00000002
    
    VTD                   vtopt0
    Status                Available
    LUN                   0x8200000000000000
    Backing device        /var/vio/VMLibrary/89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e
    Physloc               
    Mirrored              N/A
    
    VTD                   vtscsi0
    Status                Available
    LUN                   0x8100000000000000
    Backing device        lv00
    Physloc               
    Mirrored              N/A
    
    padmin@deckard# lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
       40795    29317 rootvg                   419328           342272
    
    Name                                                  File Size Optical         Access 
    89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e                       4472 vtopt0          ro     
    fa9b3cf0-a649-4bf0-b309-5f2bab6379ea                       3659 None            ro     
    rhel-server-ppc-6.4-boot.iso                                227 None            ro     
    rhel-server-ppc-6.4.iso                                    3120 None            ro     
    
    

    When deploying an ISO image, after all the steps above are finished, the newly created Virtual Machine will be in shutoff state :

    [Image: shutoff_before_start]

    Run the console term before starting the Virtual Machine, then start the Virtual Machine (from PowerVC) :

    # powervc-console-term tyler61
    Password: 
    Starting terminal.
    
  • When deploying multiple hosts with the same image, some virtual machines may end up with the same name; in this case powervc-console-term will warn you :
  • # powervc-console-term --f mary
    Password: 
    Multiple servers were found with the same name. Specify the server ID.
    089ecbc5-5bed-4d06-8659-bf7c57529c95 mary
    231ad074-7557-42b5-82b9-82ae2483fccd mary
    powervc-console-term --f 089ecbc5-5bed-4d06-8659-bf7c57529c95
    
    padmin@deckard# ps -ef | grep -i mkvt
      padmin  2556048  8323232   0 06:39:15  pts/1  0:00 rksh -c ioscli rmvt -id 2 && ioscli mkvt -id 2 && exit
    

    [Image: starting_first]

    Then follow the instructions on the screen to finish this first installation (as if you were installing AIX from the CD-ROM) :

    IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM 
    IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM 
    -
    Elapsed time since release of system processors: 151 mins 48 secs
    /
    Elapsed time since release of system processors: 151 mins 57 secs
    -------------------------------------------------------------------------------
                                    Welcome to AIX.
                       boot image timestamp: 18:20:10 02/04/2013
                     The current time and date: 05:00:31 02/05/2014
            processor count: 1;  memory size: 2048MB;  kernel size: 29153194
    boot device: /vdevice/v-scsi@30000002/disk@8200000000000000:\ppc\chrp\bootfile.exe
                           kernel debugger setting: enabled
    -------------------------------------------------------------------------------
    
    AIX Version 6.1
    
    

    Preparing the capture of the first installed Virtual Machine

    The Activation Engine

    Before capturing the Virtual Machine, run the Activation Engine. This script allows the machine to be captured and the captured image to be automatically reconfigured on the fly at first boot. Be careful when running the Activation Engine: the Virtual Machine will be shut off just by running this script.

    # scp 192.168.0.98:/opt/ibm/powervc/activation-engine/vmc.vsae.tar .
    # tar xvf vmc.vsae.tar
    x activation-engine-2.2-106.aix5.3.noarch.rpm, 1240014 bytes, 2422 media blocks.
    [..]
    x aix-install.sh, 2681 bytes, 6 media blocks.
    # rm /opt/ibm/ae/AP/*
    # cp /opt/ibm/ae/AS/vmc-network-restore/resetenv /opt/ibm/ae/AP/ovf-env.xml
    # export JAVA_HOME=/usr/java5/jre
    # ./aix-install.sh
    Install VSAE and VMC extensions
    package activation-engine-jython-2.2-106 is already installed
    package activation-engine-2.2-106 is already installed
    package vmc-vsae-ext-2.4.4-1 is already installed
    # /opt/ibm/ae/AE.sh --reset
    JAVA_HOME=/usr/java5/jre
    [..]
    [2014-02-04 23:49:51,980] INFO: OS: AIX Version: 6
    [..]
    [2014-02-04 23:51:20,095] INFO: Cleaning AR and AP directories
    [2014-02-04 23:51:20,125] INFO: Shutting down the system
    
    SHUTDOWN PROGRAM
    Tue Feb  4 23:51:21 CST 2014
    
    
    Broadcast message from root@tyler (tty) at 23:51:21 ... 
    
    shutdown: PLEASE LOG OFF NOW !!!
    System maintenance is in progress.
    All processes will be killed now. 
    
    Broadcast message from root@tyler (tty) at 23:51:21 ... 
    
    shutdown: THE SYSTEM IS BEING SHUT DOWN NOW
    
    [..]
    
    Wait for '....Halt completed....' before stopping. 
    Error reporting has stopped.
    

    Capturing the host

    Just select the virtual machine you want to capture, and click capture ;-) :

    [Image: powervc_capture_ted]
    [Image: snapshot1]

    Here are the steps performed by PowerVC when running a capture (so be careful to have enough space on the PowerVC host and on the Virtual I/O Server before running it) :

    • Looking at the Virtual I/O Server itself, the main capture process is a simple dd command capturing the logical volume (or physical volume) used as the rootvg backing device; once the dd is finished, the result is gzipped (in /home/padmin) :
    padmin@deckard# ps -ef | grep dd      
        root  5832754  7078058   9 06:54:09      -  0:01 dd if=/dev/lv00 bs=1024k
        root  7078058  9043976  82 06:54:09      -  0:14 dd if=/dev/lv00 bs=1024k
      padmin  8388674  9699504   2 06:59:20  pts/0  0:00 grep dd
    padmin@deckard# ls -l /home/padmin/5154d176-6c3b-4eda-aa20-998deb207ca8.gz
    -rw-r--r--    1 root     staff    6102452605 Feb 05 07:13 5154d176-6c3b-4eda-aa20-998deb207ca8.gz
    
  • Once again the captured image is transferred to nova with FTP (an ftpd process is spawned on the Virtual I/O Server) :
  • padmin@deckard# ps -ef | grep ftp
      padmin  7012516  9699504   1 07:20:47  pts/0  0:00 grep ftp
      padmin  7078072  4587660  47 07:14:18      -  0:11 ftpd
    [root@powervc-express ~]#  ls -l /var/lib/nova/images
    total 4666504
    -rw-r--r--. 1 nova nova 4778496000 Feb  5 00:22 5154d176-6c3b-4eda-aa20-998deb207ca8.gz
    
  • Then the image is unzipped into glance :
  • # ls -l /var/lib/glance/images
    total 6453096
    -rw-r-----. 1 glance glance 1918828544 Feb  5 00:30 5154d176-6c3b-4eda-aa20-998deb207ca8
    -rw-r-----. 1 glance glance 4689133568 Feb  4 22:26 89386e2e-c2f0-4f9d-b0f4-267f4a99dc1e
    
  • During all these steps you can check that the Virtual Machine is in snapshot mode :
    [Image: snapshot2]

  • After the capture completes, you can have a look at the details :
    [Image: image_3]

    Deploying

    Deploying a host is very easy, just follow the instructions :

    • Here is an example of a deploy screen (I like visual things when reading documents :-)) :
      [Image: deploy1]

    • Choose on which host you want to deploy the machine. At this step you can select the number of instances to deploy (you need a DHCP network configured for multiple instances), and select the size of the machine (a few sizes are pre-defined by default, but you can define your own templates) :
      [Image: deploy_1]
      [Image: choose]

    • PowerVC is smart enough to show you a prediction of your machine usage: it will show you in yellow the whole usage of your Power server after the machine deployment (practical, and visual, love it !) :
      [Image: deploy_3]

    • Then just wait for the deployment to finish. The steps are the same as for an ISO deployment, but the activation engine will be started at first boot to reconfigure the virtual machine :
      [Image: deploy2]

    Here is an image summing up the capture, the ISO deployment, and the deployment of a Virtual Machine; I think it'll be easier to understand with a picture :

    [Image: powervc-deploy-capture]

    Tips and tricks

    Using PowerVC Express on POWER6 machines

    There are a few things not written in the manual. By looking at the source code you can find hidden options to add to the /etc/nova/nova.conf file. There is one very interesting option for PowerVC Express that allows you to try it on a POWER6 server. If you want to do this, just add ivm_power6_enabled = true to /etc/nova/nova.conf and restart the PowerVC services (see the sketch below) before adding any POWER6 server. The piece of code can be found in the /usr/lib/python2.6/site-packages/powervc_discovery/registration/compute/ivm_powervm_registrar.py file :

    LOG.info("ivm_power6_enabled set to TRUE in nova.conf, "
             "so POWER6 will be allowed for testing")
    

    If you want to do so, just add it to the [DEFAULT] section of the /etc/nova/nova.conf file :

    # grep power6 /etc/nova/nova.conf
    ivm_power6_enabled = true
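
    Then restart the services (a sketch; I assume the powervc-services helper mentioned earlier accepts stop and start, check its help for the exact syntax) :

    # powervc-services stop
    # powervc-services start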
    

    Just for the story: I was sure this was possible because the first presentation I found on the internet about PowerVC showed PowerVC Express on an 8203-E4A machine, which is a POWER6 machine; the screenshots in that presentation were enough to tell me it was possible (don't blame anybody for this). Then grep was my best friend to find where this option was hidden. Be aware that this option is only available for test purposes, so don't open a PMR about it or it'll be directly closed by IBM. Once again, if any IBMers are reading this, tell me if it is OK to publish this option. If not, I can remove it from the post.

    Enabling verbose and debug output

    PowerVC is not verbose at all, and when something goes wrong it's sometimes difficult to check what is going on. First of all, the product is based on OpenStack, so you have access to all the OpenStack log files, located in /var/log/nova, /var/log/glance and so on. By default, debug and verbose output are disabled for each OpenStack component. Enabling them is not supported by PowerVC, but you can do it. For instance, I had a problem with nova when registering a host; enabling verbose and debug mode in /etc/nova/nova.conf helped me a lot and let me check the ssh commands run on the Virtual I/O Server (look at the example below) :

    # grep -iE "verbose|debug" /etc/nova/nova.conf
    verbose=true
    debug=true
    # vi /var/log/nova/compute-192_168_0_100.log
    2014-02-16 22:07:48.523 13090 ERROR powervc_nova.virt.ibmpowervm.ivm.exception [req-a4bee79a-5eb8-43fd-8ca6-ed75ebee880f 04c4ca89f32046ed91e0493c9e554d1d 437b161186414e2bb0d4778cbd6fa14c] Unexpected exception while running IVM command.
    Command: mksyscfg -r lpar -i "max_virtual_slots=64,max_procs=4,lpar_env=aixlinux,desired_procs=1,min_procs=1,proc_mode=shared,virtual_eth_adapters=\"36/0/1//0/0\",desired_proc_units=0.100000000000000,sharing_mode=uncap,min_mem=512,desired_mem=512,virtual_eth_mac_base_value=fa3f3d3cae,max_proc_units=4,lpar_proc_compat_mode=default,name=priss-6712136f-000000cd,max_mem=4096,min_proc_units=0.1"
    Exit code: 1
    Stdout: []
    Stderr: ['[VIOSE01040181-0025] Value for attribute desired_proc_units is not valid.', '']
    

    Using the PowerVC REST API

    Systems engineers and systems administrators like me rarely use REST APIs. If you want to automate some PowerVC actions, such as deploying virtual machines without going through the web interface, you have to use the REST API provided with PowerVC. First of all, here are the places where you'll find useful documentation for the PowerVC REST API :

    • The PowerVC infocenter, where you'll find good tips and tricks for using the REST API.
    • The PowerVC programming guide.

    PowerVC provides a script that uses the REST API. It generates an API token used for each call to the API. This script is written in Python, so I decided to take it as a reference to develop my own scripts :

    • You first have to use powervc-get-token to generate a token used to call the API. In general, GET requests are used to query PowerVC (list virtual machines, list networks), and POST requests to create things (create a network, create a virtual machine).
    • Get an API token to begin, by using powervc-get-token :
    # powervc-get-token 
    Password: 
    323806024c70455d84a7a1db900a4f89
    
  • To create a virtual machine you need to know three things: the tenant, the network on which the VM will be deployed, and the image used to deploy the server.
  • Here is the script I used to get the tenant (URL: /powervc/openstack/identity/v2.0/tenants) :
  • import httplib
    import json
    import os
    import sys
    
    def main():
        token = raw_input("Please enter PowerVC token : ")
        print "PowerVC token used = "+token
    
        conn = httplib.HTTPSConnection('localhost')
        headers = {"X-Auth-Token":token, "Content-type":"application/json"}
        body = ""
    
        conn.request("GET", "/powervc/openstack/identity/v2.0/tenants", body, headers)
        response = conn.getresponse()
        raw_response = response.read()
        conn.close()
        json_data = json.loads(raw_response)
        print json.dumps(json_data, indent=4, sort_keys=True)
    
    if __name__ == "__main__":
        main()
    
  • By running the script I get the tenant id 437b161186414e2bb0d4778cbd6fa14c :
  • # ./powervc-get-tenants
    Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
    PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
    {
        "tenants": [
            {
                "description": "IBM Default Tenant", 
                "enabled": true, 
                "id": "437b161186414e2bb0d4778cbd6fa14c", 
                "name": "ibm-default"
            }
        ], 
        "tenants_links": []
    }
    
  • Here is the script I used to get the network id (URL: /powervc/openstack/network/v2.0/networks) :
  • import httplib
    import json
    import os
    import sys
    
    def main():
        token = raw_input("Please enter PowerVC token : ")
        print "PowerVC token used = "+token
        tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
        print "Tenant ID = "+tenant_id
    
        conn = httplib.HTTPSConnection('localhost')
        headers = {"X-Auth-Token":token, "Content-type":"application/json"}
        body = ""
    
        conn.request("GET", "/powervc/openstack/network/v2.0/networks", body, headers)
        response = conn.getresponse()
        raw_response = response.read()
        conn.close()
        json_data = json.loads(raw_response)
        print json.dumps(json_data, indent=4, sort_keys=True)
    
    if __name__ == "__main__":
        main()
    
  • By running the script I get the network id 83e233a7-34ef-4bf2-ae95-958046da770f :
  • # ./powervc-list-networks
    Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
    PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
    Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
    Tenant ID = 437b161186414e2bb0d4778cbd6fa14c
    {
        "networks": [
            {
                "admin_state_up": true, 
                "id": "83e233a7-34ef-4bf2-ae95-958046da770f", 
                "name": "local_net", 
                "provider:network_type": "vlan", 
                "provider:physical_network": "default", 
                "provider:segmentation_id": 1, 
                "shared": false, 
                "status": "ACTIVE", 
                "subnets": [
                    "6b76f7e6-02fa-427f-9032-e8d28aaa6ef4"
                ], 
                "tenant_id": "437b161186414e2bb0d4778cbd6fa14c"
            }
        ]
    }
    
  • Here is the script I used to get the image id (URL: /powervc/openstack/compute/v2/<tenant_id>/images) :
  • import httplib
    import json
    import os
    import sys
    
    def main():
        token = raw_input("Please enter PowerVC token : ")
        print "PowerVC token used = "+token
        tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
        print "Tenant ID ="+tenant_id
    
        conn = httplib.HTTPSConnection('localhost')
        headers = {"X-Auth-Token":token, "Content-type":"application/json"}
        body = ""
    
        conn.request("GET", "/powervc/openstack/compute/v2/"+tenant_id+"/images", body, headers)
        response = conn.getresponse()
        raw_response = response.read()
        conn.close()
        json_data = json.loads(raw_response)
        print json.dumps(json_data, indent=4, sort_keys=True)
    
    if __name__ == "__main__":
        main()
    
  • By running the script I get the image id 0537da41-8542-41a0-b1b0-84ed75c6ed27 :
  • # ./powervc-list-images
    Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
    PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
    Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
    Tenant ID = 437b161186414e2bb0d4778cbd6fa14c
    {
        "images": [
            {
                "id": "0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                "links": [
                    {
                        "href": "http://localhost:8774/v2/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                        "rel": "self"
                    }, 
                    {
                        "href": "http://localhost:8774/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                        "rel": "bookmark"
                    }, 
                    {
                        "href": "http://192.168.0.12:9292/437b161186414e2bb0d4778cbd6fa14c/images/0537da41-8542-41a0-b1b0-84ed75c6ed27", 
                        "rel": "alternate", 
                        "type": "application/vnd.openstack.image"
                    }
                ], 
                "name": "ted_capture_201402161858"
            }
        ]
    }
    
  • With all this information, the token (a3a9904fa5a24a24aa6833358f54c7ce), the tenant id (437b161186414e2bb0d4778cbd6fa14c), the network id (83e233a7-34ef-4bf2-ae95-958046da770f) and the image id (0537da41-8542-41a0-b1b0-84ed75c6ed27), I created a script to create a virtual machine (URL: /powervc/openstack/compute/v2/<tenant_id>/servers) :
  • import httplib
    import json
    import os
    import sys
    
    def main():
        token = raw_input("Please enter PowerVC token : ")
        print "PowerVC token used = "+token
        tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
        print "Tenant ID ="+tenant_id
        headers = {"Content-Type": "application/json"}
    
        conn = httplib.HTTPSConnection('localhost')
        headers = {"X-Auth-Token":token, "Content-type":"application/json"}
    
        body = {
          "server": {
            "flavor": {
              "OS-FLV-EXT-DATA:ephemeral": 10,
              "disk": 10,
              "extra_specs": {
                "powervm:proc_units": 1
              },
              "ram": 512,
              "vcpus": 1
            },
            "imageRef": "0537da41-8542-41a0-b1b0-84ed75c6ed27",
            "max_count": 1,
            "name": "api",
            "networkRef": "83e233a7-34ef-4bf2-ae95-958046da770f",
            "networks": [
              {
              "fixed_ip": "192.168.0.21",
              "uuid": "83e233a7-34ef-4bf2-ae95-958046da770f"
              }
            ]
          }
        }
    
        conn.request("POST", "/powervc/openstack/compute/v2/"+tenant_id+"/servers",
                     json.dumps(body), headers)
        response = conn.getresponse()
        raw_response = response.read()
        conn.close()
        json_data = json.loads(raw_response)
        print json.dumps(json_data, indent=4, sort_keys=True)
    
    if __name__ == "__main__":
        main()
    
  • Running the script finally creates the virtual machine; you can check that the virtual machine is in deploying state in the PowerVC web interface :
    [Image: api]

    # ./powervc-create-vm 
    Please enter PowerVC token : a3a9904fa5a24a24aa6833358f54c7ce
    PowerVC token used = a3a9904fa5a24a24aa6833358f54c7ce
    Please enter PowerVC Tenant ID : 437b161186414e2bb0d4778cbd6fa14c
    Tenant ID =437b161186414e2bb0d4778cbd6fa14c
    {
        "server": {
            "OS-DCF:diskConfig": "MANUAL", 
            "adminPass": "LE2bqbA2y87X", 
            "id": "0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
            "links": [
                {
                    "href": "http://localhost:8774/v2/437b161186414e2bb0d4778cbd6fa14c/servers/0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
                    "rel": "self"
                }, 
                {
                    "href": "http://localhost:8774/437b161186414e2bb0d4778cbd6fa14c/servers/0c7521d1-7e09-4c07-bc19-40e9ac3b756f", 
                    "rel": "bookmark"
                }
            ], 
            "security_groups": [
                {
                    "name": "default"
                }
            ]
        }
    }
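
    The response contains the id of the new server; you can poll its status with a plain GET on the standard OpenStack compute URL used above, for example with curl (a quick sketch; -k skips verification of PowerVC's self-signed certificate, and the ids are the ones from this example) :

    # TOKEN=a3a9904fa5a24a24aa6833358f54c7ce
    # TENANT=437b161186414e2bb0d4778cbd6fa14c
    # SERVER=0c7521d1-7e09-4c07-bc19-40e9ac3b756f
    # curl -ks -H "X-Auth-Token: $TOKEN" "https://localhost/powervc/openstack/compute/v2/$TENANT/servers/$SERVER" | python -mjson.tool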
    

    Backup and restore

    PowerVC does not have any HA solution, so my advice is to run it on an IP alias and to have a second, dormant PowerVC instance ready to be set up when you need it. To do so, my advice is to regularly run powervc-backup (why not in a crontab; see the sketch below). If you need to restore PowerVC on the dormant instance, the only thing to do is to restore the backup (put it in /var/opt/ibm/powervc/backups before running powervc-restore). The backup/restore is just an export/import of each DB2 database (cinder, glance, nova, ...), so it can take space and time (in my case the backup takes 8 GB and restoring it on the dormant instance took me one hour).
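
    A hedged crontab sketch for a weekly backup; I assume powervc-backup accepts the same --noPrompt flag as powervc-restore, so check powervc-backup --help before relying on it :

    # crontab -l
    0 2 * * 0 /opt/ibm/powervc/bin/powervc-backup --noPrompt >> /var/log/powervc-backup.log 2>&1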

    • Backing up :
    # powervc-backup 
    Continuing with this operation will stop all PowerVC services.  Do you want to continue?  (y/N):y
    PowerVC services stopped.
    Database CINDER backup completed.
    Database QTM_IBM backup completed.
    Database NOSQL backup completed.
    Database NOVA backup completed.
    Database GLANCE backup completed.
    Database KEYSTONE backup completed.
    Database and file backup completed. Backup data is in archive /var/opt/ibm/powervc/backups/20142199544840294/powervc_backup.tar.gz.
    PowerVC services started.
    PowerVC backup completed successfully.
    
  • Restoring :
  • # powervc-restore --noPrompt
    

    Places to check

    Finding information about PowerVC is not so simple; the product is still young and there is not much feedback or information about it. Here are a few places to check if you have any problems. Keep in mind that the community is very active and is growing day by day :

    If I have one last word to say, it will be: future. In my opinion PowerVC is the future of deployment on Power Systems. I had the chance to use both VMControl and PowerVC: both are powerful, but the second is so simple to use that I can easily say it will be adopted by IBM customers (has anyone used VMControl in production outside of PureSystems ?). Where VMControl has failed, PowerVC can succeed... but looking at the code you'll find some parts of VMControl (in the Activation Engine). So the ghost of VMControl is not so far away, and it will surely be kicked out by PowerVC. Once again, I hope this helps; comments are welcome, I really need them to be sure my posts are useful and simple to understand.



Hardware Management Console : Be autonomous with upgrades and updates, and use the Integrated Management Module


    I'm sure that, like me, you do not have physical access to your Hardware Management Consoles, or even if you do, some of your HMCs are so far away from your working site (even in a foreign country) that you can't afford to travel there to update them. Even worse, if, like me, you work in a big place (who said too big ?), this job is often performed by IBM inspectors: you never have to worry about your Hardware Management Consoles and just ask the IBM guys for anything HMC related. For some reason I had to update an old Hardware Management Console from v7r7.3.0 to v7r7.7.0 SP3. Everybody is confused about the differences between updating an HMC and upgrading an HMC. I know really good bloggers, Anthony English and Rob McNelly, have already posted about this particular subject, but I'm writing this post as a reminder and to clarify some points that are not covered in Anthony's and Rob's posts :

    To finish this post I'll talk about a feature nobody uses: the HMC comes with an Integrated Management Module, which gives you more control and lets you be autonomous with your HMC.

    The difference between updating, upgrading and migrating

    There is a lot of confusion when people try to "update" their HMC. When do I have to update using the updhmc command, when do I have to upgrade using the saveupgdata, getupgfiles and chhmc commands, and when do I have to migrate using the HMC Recovery CDs/DVDs ? None of these three operations is well described by IBM. Here is what I do in each case, based on my own experience (don't take this as an official document). First, a little reminder that can be useful for a lot of people: an HMC version number looks like this: v7r7.7.0 SP3. v7 is the VERSION, 7.7.0 is the RELEASE and SP3 is the SERVICE PACK.

    • Updating : You have to update your HMC if you are applying a service pack or a corrective fix on the HMC; this operation can only be performed with the updhmc command. Use this method if Fix Central gives you an ISO named "HMC_Update_*.iso" or a zip file named "MHxxxxx.zip". These fixes are applied within a given release of the HMC.
    • Upgrading : You have to upgrade your HMC if you are moving from one release to another, for instance from v7r7.7.0 to v7r7.8.0. This can be done using the HMC Recovery DVDs (Fix Central will give you two ISOs named "HMC_Recovery_*.iso"), or using the images provided for a network upgrade (I'll explain this below).
    • Migrating : You have to migrate your HMC using the HMC Recovery DVDs when you are moving from one major version to another, for example from an HMC v6 to an HMC v7 (for instance from any v6 version to v7r7.8.0). In this case you have no choice but to burn DVDs and go stand in front of the HMC to perform the operation yourself.

    Upgrading

    You can upgrade your HMC from its local storage by using the network images provided by IBM on a public FTP server. Once connected to the FTP server, go to the version you want to upgrade to and download all the files, bzImage and initrd.gz included :

    # ftp://ftp.software.ibm.com/software/server/hmc/network/v7770/
    # ls
    FTP Listing of /software/server/hmc/network/v7770/ at ftp.software.ibm.com
    [..]
    Feb 26 2013 00:00      2708320 bzImage
    Feb 26 2013 00:00    808497152 disk1.img
    Feb 26 2013 00:00   1142493184 disk2.img
    Feb 26 2013 00:00   1205121024 disk3.img
    Feb 26 2013 00:00           78 hmcnetworkfiles.sum
    Feb 26 2013 00:00     34160044 initrd.gz
    # mget *.*
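
    Before pushing the files to your own FTP server, it's worth comparing them against hmcnetworkfiles.sum (a sketch; inspect the file first, I assume it holds md5 checksums) :

    # cat hmcnetworkfiles.sum
    # md5sum bzImage disk1.img disk2.img disk3.img initrd.gz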
    

    Put all the files on a server running an FTP daemon (the HMC getupgfiles command uses FTP to get the files) and download them with the getupgfiles command directly from the HMC (if your HMC has direct internet access you can point the command at the IBM FTP server) :

    hscroot@gaff:~> getupgfiles -h 192.168.0.99 -u root -d /export/HMC/network_ugrade/v7770
    Enter the current password for user root:
    

    While the images are downloading, the HMC mounts a temporary filesystem called /hmcdump and puts the images in it. Once the images are downloaded, /hmcdump is unmounted. You can check the download progress with a loop watching the /hmcdump filesystem :

    hscroot@gaff:~>  while true ; do date; ls -la /hmcdump; sleep 60; done
    [..]
    drwxr-xr-x  3 root root      4096 2013-12-24 16:26 .
    drwxr-xr-x 30 root root      4096 2013-12-19 14:52 ..
    -rw-r--r--  1 root hmc  824223312 2013-12-24 16:32 disk3.img
    -rw-r--r--  1 root hmc         78 2013-12-24 16:26 hmcnetworkfiles.sum
    drwx------  2 root root     16384 2007-12-19 03:24 lost+found
    Tue Apr  1 08:10:30 CEST 2014
    total 3121248
    drwxr-xr-x  3 root root       4096 2013-12-24 16:52 .
    drwxr-xr-x 30 root root       4096 2013-12-19 14:52 ..
    -rw-r--r--  1 root hmc     2708320 2013-12-24 16:52 bzImage
    -rw-r--r--  1 root hmc   808497152 2013-12-24 16:52 disk1.img
    -rw-r--r--  1 root hmc  1142493184 2013-12-24 16:45 disk2.img
    -rw-r--r--  1 root hmc  1205121024 2013-12-24 16:36 disk3.img
    -rw-r--r--  1 root hmc          78 2013-12-24 16:26 hmcnetworkfiles.sum
    -rw-r--r--  1 root hmc    34160044 2013-12-24 16:52 initrd.gz
    drwx------  2 root root      16384 2007-12-19 03:24 lost+found
    

    Please note that this filesystem is only mounted while the getupgfiles command is running and can't be mounted after the command finishes :

    hscroot@gaff:~> mount /hmcdump
    mount: only root can mount /dev/sda6 on /hmcdump
    

    Before launching the upgrade, save all the data needed for the upgrade to disk, close all HMC events and clean up the filesystems :

    • Save all HMC upgrade data to disk. This command is MANDATORY: it saves all the partition profile data, the user data and the whole HMC configuration. If you forget this command you will have to reconfigure the HMC by hand, so be careful with this one :-) :
    hscroot@gaff:~> saveupgdata -r disk
    
  • Close all HMC events :
  • hscroot@gaff:~> chsvcevent -o closeall
    
  • Remove all temporary HMC files from all filesystems :
  • hscroot@gaff:~> chhmcfs -o f -d 0
    

    The images are now downloaded to the HMC. To upgrade, you just have to tell the HMC to boot on its alternate disk and to use the files you've just downloaded for the upgrade :

    • To set the alternate disk partition as the startup device on the next HMC boot, and to enable the upgrade on the alternate disk, use the chhmc command :
    hscroot@gaff:~> chhmc -c altdiskboot -s enable --mode upgrade
    
  • Before rebooting, check that the altdiskboot attribute is set to enable :
  • hscroot@gaff:~> lshmc -r
    ssh=enable,sshprotocol=,remotewebui=enable,xntp=enable,"xntpserver=127.127.1.0,kronosnet1.fr.net.intra,kronosnet2.fr.net.intra,kronosnet3.fr.net.intra",syslogserver=,netboot=disable,altdiskboot=enable,ldap=enable,kerberos=disable,kerberos_default_realm=,kerberos_realm_kdc=,kerberos_clockskew=,kerberos_ticket_lifetime=,kerberos_keyfile_present=,"sol=disabled
    "
    
  • Reboot the HMC and wait :-) :
  • hscroot@gaff:~> hmcshutdown -t now -r
    

    Depending on the HMC model and version, the upgrade can take 10 to 40 minutes; you'll have to be patient, cross your fingers and pray everything goes well. But don't worry, I have never had an issue with this method. Once the HMC has rebooted and upgraded, you can check that the altdiskboot attribute is back to disable :

    hscroot@gaff:~> lshmc -r
    ssh=enable,sshprotocol=,remotewebui=enable,xntp=enable,"xntpserver=127.127.1.0,kronosnet1.fr.net.intra,kronosnet2.fr.net.intra,kronosnet3.fr.net.intra",syslogserver=,syslogtcpserver=,syslogtlsserver=,netboot=disable,altdiskboot=disable,ldap=enable,kerberos=disable,kerberos_default_realm=,kerberos_realm_kdc=,kerberos_clockskew=,kerberos_ticket_lifetime=,kpasswd_admin=,trace=,kerberos_keyfile_present=,legacyhmccomm=enable,sol=disabled
    

    Updating

    Once the HMC is upgraded you have to update it. Unfortunately the update files (often ISO files) are only available on Fix Central, not on the public FTP server. Get the ISO update files from Fix Central and put them on your FTP server (once again), then use the updhmc command to update the HMC. Repeat the operation for each update, then reboot the HMC (in the example below I'm using sftp) :

    hscroot@gaff:~> updhmc -t s -i -h 192.168.0.99 -u root -f /export/HMC/v7r770/HMC_Update_V7R770_SP1.iso
    Password:
    iptables: Chain already exists.
    ip6tables: Chain already exists.
    [..]
    The corrective service file was successfully applied. A mandatory reboot is required but was not specified on the command syntax.
    hscroot@gaff:~> updhmc -t s -i -h 192.168.0.99 -u root -f /export/HMC/v7r770/HMC_Update_V7R770_SP2.iso
    Password:
    
    ip6tables: Chain already exists.
    ACCEPT  tcp opt -- in eth3 out *  0.0.0.0/0  -> 0.0.0.0/0  tcp dpt:5989
    ACCEPT  udp opt -- in eth3 out *  0.0.0.0/0  -> 0.0.0.0/0  udp dpt:657
    [..]
    The corrective service file was successfully applied. A mandatory reboot is required but was not specified on the command syntax.
    hscroot@gaff:~> hmcshutdown -t now -r
    

    After upgrading and updating the HMC, check that the version is OK with the lshmc command :

    hscroot@gaff:~> lshmc -V
    "version= Version: 7
     Release: 7.7.0
     Service Pack: 3
    HMC Build level 20131113.1
    ","base_version=V7R7.7.0
    "
    

    Using and configuring the Integrated Management Module

    I like to be autonomous and do things on my own. Who has never been stuck on a problem with an HMC and been forced to call an IBM inspector to reboot it, or even to insert a CD in the drive ? Few people know this, but the HMC is based on an IBM System x server (who said Lenovo ?) and ships with an Integrated Management Module allowing you to boot, start, and stop the HMC without needing someone in the data center. Unfortunately this method does not seem to be supported by IBM, so do it at your own risk.

    Use the dedicated port for the Integrated Management Module (the red port) :

    [Image: hmc_imm_port_good]

    From the HMC command line, configure the Integrated Management Module IP address using the chhmc command :

    hscroot@gaff:~> chhmc -c imm -s modify -a 10.10.20.4 -nm 255.255.255.0 -g 10.10.20.254
    

    Restart the Integrated Management Module to commit the changes. The IMM will not be pingable before the restart :

    hscroot@gaff:~> chhmc -c imm -s restart
    

    The Integrated Management Module is now pingable and you can check its configuration :

    hscroot@gaff:~> lshmc -i
    ipv4addr=10.10.20.4,networkmask=255.255.255.0,gateway=10.10.20.254,username=USERID,mode=Dedicated
    

    By default the username is USERID and the password is PASSW0RD (with a zero); you can change them to fit your needs :

    hscroot@gaff:~> chhmc -c imm -s modify -u immusername --passwd "abc123"
    

    The Integrated Management Module is now configured and can be accessed from the web interface or from SSH :

    [Image: hmc_imm_login]

    I will not detail all the actions you can perform with the Integrated Management Module, but here is a screen showing the hardware health of the HMC :

    [Image: hmc_imm_hardware]

    One thing you can do for free (without an IMM license) is control the power of the HMC: stop, start, restart or reboot it. This feature can be very useful when the HMC is stuck :

    [Image: hmc_imm_actions]

    If you choose to restart the HMC the Integrated Management Module will warn you before restarting :

    [Image: hmc_imm_restart]

    You can access the HMC Integrated Management Module by using the SSH command line :

    • Use the power command to control the power of the HMC :
    system> help power
    usage:
       power on    [-options]   - power on server
       power off   [-options]   - power off server
       power cycle [-options]   - power off, then on
       power state              - display power state
       power -rp [alwayson|alwaysoff|restore]   - host power restore policy
    options:
       -s                       - shut down OS first
       -every day               - daily or weekly on,off or cycle commands
    [Sun|Mon|Tue|Wed|Thu|Fri|Sat|Day|clear]
       -t   time                - time (hh:mm)
    additional options for on.
       -d  date                 - date (mm/dd/yyyy)
       -clear                   - clear on date
    
  • Here is an example to restart the HMC :
  • system> power cycle -s
    ok
    
  • Checking the power state :
  • system> power state
    power on
    State:Booting OS or in unsupported OS
    

    The Integrated Management Module is a licensed product and unfortunately IBM does not support the Integrated Management Module on the HMC. It seems that an IMM license can't be acquired for the HMC: I checked the trial licenses page, and the HMC hardware does not even exist when you have to choose the hardware model for the trial license. This is a shame, because the licensed IMM allows you to remote control the HMC and to manage a virtual CD-ROM... useful for migrations. So if an IBMer is reading this and has an explanation, feel free to tell me what I've missed in the comments :

    [Image: hmc_imm_remote_control]

    I hope this post will let you manage your HMC alone and be autonomous :-)

    PowerVM Shared Ethernet Adapter simplification : Get rid of Control Channel Adapter


    Since I started working on Virtual I/O Servers and PowerVM I've created many Shared Ethernet Adapters in all modes (standard, failover, or sharing), and I've learned one important lesson: "be careful when creating a Shared Ethernet Adapter". A single mistake can cause a network outage, and I'm sure you've already seen someone in your team create an ARP storm by mismatching control channel adapters or by adding a vlan that is already present on another Virtual Ethernet Adapter. Because of this kind of error, I know some customers who try to avoid configuring Shared Ethernet Adapters in failover or sharing mode altogether, just to avoid any network outage. With the new versions of the Virtual I/O Server (starting with 2.2.2.2), network loops and ARP storms are, in most cases, detected and stopped at the Virtual I/O Server level or at the firmware level. I always check my configuration two or three times before creating a Shared Ethernet Adapter: these errors come, most of the time, from a lack of rigor and are in almost all cases due to the system administrator. With the new version of PowerVM you can now create all Shared Ethernet Adapters without specifying any control channel adapter (the Hardware Management Console and the Virtual I/O Server will do it for you). A new discovery protocol implemented in the Virtual I/O Server matches Shared Ethernet Adapters with each other and takes care of creating the control channel vlan for you (this vlan is not visible on the Virtual I/O Server). Much simpler = fewer errors. Here is a practical how-to :

    How does it work ?

    A new discovery protocol called SEA HA matches partners with each other using a dedicated vlan (not configurable by the user). Here are a few things to know :

    • Multiple Shared Ethernet Adapters can share the vlan 4095 for their Control Channel link.
    • The vlan 4095 is created per Virtual Switch for this Control Channel link.
    • As always, only two Shared Ethernet Adapters can be partners; the Hardware Management Console ensures that priorities 1 and 2 are used (I've seen some customers using priorities 3 and 4: don't do this).
    • Both failover and sharing mode can be used.
    • Shared Ethernet Adapters with a dedicated Control Channel Adapter can be migrated to this configuration, but with a network outage: put the SEA in defined state first.

    Here is an example of this configuration on a Shared Ethernet Adapter in sharing mode :

    [Image: sea_no_ctl_chan_fig1]

    In the image below you can follow the steps of this new discovery protocol :

    • 1/ No dedicated Control Channel Adapter is specified at Shared Ethernet Adapter creation. The discovery protocol is used if you create a SEA in failover or sharing mode without specifying the ctl_chan attribute.
    • 2/ Partners are identified by their PVID; both partners must have the same PVID.
    • 3/ This PVID has to be unique per SEA pair.
    • 4/ Additional vlan IDs are compared: partners with non-matching additional vlan IDs are still considered partners if their PVIDs match.
    • 5/ Shared Ethernet Adapters with matching additional vlan IDs but non-matching PVIDs are not considered partners.
    • 6/ If partners' additional vlan IDs do not match, they are still considered partners, but an error is logged in the errlog.

    [Image: sea_no_ctl_chan_fig2]

    Prerequisites

    Shared Ethernet Adapters without a Control Channel Adapter can't be created on all systems. At the time of writing, only a few models of POWER7 machines (and maybe POWER8) have a firmware implementing the feature. You have to check that the firmware of your machine is at least an XX780_XXX release. Be careful to check the release notes of the firmware: some of the 780 firmwares do not permit the creation of a SEA without a Control Channel Adapter (especially on 9117-MMB). For example, one release note says: "Support was added to the Management Console command line to allow configuring a shared control channel for multiple pairs of Shared Ethernet Adapters (SEAs). This simplifies the control channel configuration to reduce network errors when the SEAs are in fail-over mode. This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems." Because the Hardware Management Console uses vlan 4095 to create the control channel link between Shared Ethernet Adapters, it has to be aware of this feature and must ensure that vlan 4095 is not usable or configurable by the administrator. HMC V7R7.8.0 is aware of this; that's why the HMC must be updated to at least this level.

    • Check your machine firmware; in my case I'm working on a 9117-MMD (POWER7+ 770) with the latest firmware available (at the time of writing this post) :
    # lsattr -El sys0 -a modelname
    modelname IBM,9117-MMD Machine name False
    # lsmcode -A
    sys0!system:AM780_056 (t) AM780_056 (p) AM780_056 (t)
    
    • These prerequisites can be checked directly from the Hardware Management Console :
    hscroot@myhmc:~> lslic -t sys -m 9117-MMD-65XXXX
    lic_type=Managed System,management_status=Enabled,disabled_reason=,activated_level=56,activated_spname=FW780.10,installed_level=56,installed_spname=FW780.10,accepted_level=56,accepted_spname=FW780.10,ecnumber=01AM780,mtms=9117-MMD*658B2AD,deferred_level=None,deferred_spname=FW780.10,platform_ipl_level=56,platform_ipl_spname=FW780.10,curr_level_primary=56,curr_spname_primary=FW780.10,curr_ecnumber_primary=01AM780,curr_power_on_side_primary=temp,pend_power_on_side_primary=temp,temp_level_primary=56,temp_spname_primary=FW780.10,temp_ecnumber_primary=01AM780,perm_level_primary=56,perm_spname_primary=FW780.10,perm_ecnumber_primary=01AM780,update_control_primary=HMC,curr_level_secondary=56,curr_spname_secondary=FW780.10,curr_ecnumber_secondary=01AM780,curr_power_on_side_secondary=temp,pend_power_on_side_secondary=temp,temp_level_secondary=56,temp_spname_secondary=FW780.10,temp_ecnumber_secondary=01AM780,perm_level_secondary=56,perm_spname_secondary=FW780.10,perm_ecnumber_secondary=01AM780,update_control_secondary=HMC
    
    • Check that your Hardware Management Console release is at least V7R7.8.0 (in my case the HMC is at the latest level available at the time of writing this post) :
    hscroot@myhmc:~> lshmc -V
    "version= Version: 7
     Release: 7.9.0
     Service Pack: 0
    HMC Build level 20140409.1
    MH01406: Required fix for HMC V7R7.9.0 (04-16-2014)
    ","base_version=V7R7.9.0
    "
    

    Shared Ethernet Adapter creation in sharing mode without control channel

    The creation is simple: just identify your real adapter and your virtual adapter(s). Check on both Virtual I/O Servers that the PVIDs used on the virtual adapters are the same and that the priorities are OK (use priority 1 on the PRIMARY Virtual I/O Server and priority 2 on the BACKUP Virtual I/O Server). In this post I'm creating a Shared Ethernet Adapter in sharing mode; the steps are the same if you are creating a Shared Ethernet Adapter in auto mode.

    • Identify the Real Adapter (in my case an LACP 802.3ad adapter) :
    padmin@vios1$ lsdev -dev ent17
    name             status      description
    ent17            Available   EtherChannel / IEEE 802.3ad Link Aggregation
    padmin@vios2$ lsdev -dev ent17
    name             status      description
    ent17            Available   EtherChannel / IEEE 802.3ad Link Aggregation
    
  • Identify the virtual adapters: priority 1 on the PRIMARY Virtual I/O Server and priority 2 on the BACKUP Virtual I/O Server (my advice is to check that the additional vlan IDs are OK too) :
  • padmin@vios1$ entstat -all ent13 | grep -iE "Priority|Port VLAN ID"
      Priority: 1  Active: False
    Port VLAN ID:    15
    padmin@vios1$ entstat -all ent14 | grep -iE "Priority|Port VLAN ID"
      Priority: 1  Active: False
    Port VLAN ID:    16
    padmin@vios2$ entstat -all ent13 | grep -iE "Priority|Port VLAN ID"
      Priority: 2  Active: True
    Port VLAN ID:    15
    padmin@vios2$ entstat -all ent14 | grep -iE "Priority|Port VLAN ID"
      Priority: 2  Active: True
    Port VLAN ID:    16
    
  • Create the Shared Ethernet Adapter without specifying the ctl_chan attribute :
  • padmin@vios1$ mkvdev -sea ent17 -vadapter ent13 ent14 -default ent13 -defaultid 15 -attr ha_mode=sharing largesend=1 large_receive=yes
    ent18 Available
    padmin@vios2$ mkvdev -sea ent17 -vadapter ent13 ent14 -default ent13 -defaultid 15 -attr ha_mode=sharing largesend=1 large_receive=yes
    ent18 Available
    
  • Shared Ethernet Adapters are created! You can check that the ctl_chan attribute is empty when checking the device :
  • padmin@svios1$ lsdev -dev ent18 -attr
    attribute     value       description                                                        user_settable
    
    accounting    disabled    Enable per-client accounting of network statistics                 True
    adapter_reset yes         Reset real adapter on HA takeover                                  True
    ctl_chan                  Control Channel adapter for SEA failover                           True
    gvrp          no          Enable GARP VLAN Registration Protocol (GVRP)                      True
    ha_mode       sharing     High Availability Mode                                             True
    [..]
    pvid          15          PVID to use for the SEA device                                     True
    pvid_adapter  ent13       Default virtual adapter to use for non-VLAN-tagged packets         True
    qos_mode      disabled    N/A                                                                True
    queue_size    8192        Queue size for a SEA thread                                        True
    real_adapter  ent17       Physical adapter associated with the SEA                           True
    send_RARP     yes         Transmit Reverse ARP after HA takeover                             True
    thread        1           Thread mode enabled (1) or disabled (0)                            True
    virt_adapters ent13,ent14 List of virtual adapters associated with the SEA (comma separated) True
    
  • By using the entstat command you can check that the Control Channel exists and is using the PVID 4095 (same result on the second Virtual I/O Server) :
  • padmin@vios1$ entstat -all ent18 | grep -i "Control Channel PVID"
        Control Channel PVID: 4095
    
    • Looking at the entstat output the SEAs are partners (one PRIMARY_SH and one BACKUP_SH) :
    padmin@vios1$ entstat -all ent18 | grep -i state
        State: PRIMARY_SH
    padmin@vios2$  entstat -all ent18 | grep -i state
        State: BACKUP_SH
    

    Verbose and intelligent errlog

    While configuring a Shared Ethernet Adapter in this mode the errlog can give you a lot of information about your configuration. For instance if the additional VLAN IDs do not match between the Virtual Adapters of a Shared Ethernet Adapter you'll be warned by an error in the errlog. Here are a few examples :

    • Additional VLAN IDs do not match between Virtual Adapters :
    padmin@vios1$ errlog | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A759776F   0506205214 I H ent18          SEA HA PARTNERS VLANS MISMATCH
    
    • Looking at the detailed output you can get the missing VLAN ID :
    padmin@vios1$ 
    ---------------------------------------------------------------------------
    LABEL:          VIOS_SEAHA_DSCV_VLA
    IDENTIFIER:     A759776F
    Date/Time:       Tue May  6 20:52:59 2014
    Sequence Number: 704
    Machine Id:      00XXXXXXXX00
    Node Id:         vios1
    Class:           H
    Type:            INFO
    WPAR:            Global
    Resource Name:   ent18
    Resource Class:  adapter
    Resource Type:   sea
    Location:
    
    Description
    SEA HA PARTNERS VLANS MISMATCH
    
    Probable Causes
    VLAN MISCONFIGURATION
    
    Failure Causes
    VLAN MISCONFIGURATION
    
            Recommended Actions
            NONE
    
    Detail Data
    ERNUM
    0000 001A
    ABSTRACT
    Discovered HA partner with unmatched VLANs
    AREA
    VLAN misconfiguration
    BUILD INFO
    BLD: 1309 30-10:08:58 y2013_40A0
    LOCATION
    Filename:sea_ha.c Function:seaha_process_dscv_init Line:6156
    DATA
    VLAN = 0x03E9
    
    • The last line is the value of the missing VLAN in hexadecimal (0x03E9, 1001 in decimal). We can manually check that this VLAN is missing on vios1 :
    # echo "ibase=16; 03E9" | bc
    1001
    padmin@vios1$ entstat -all ent18 | grep -i "VLAN Tag IDs:"
    VLAN Tag IDs:  1659
    VLAN Tag IDs:  1682
    VLAN Tag IDs:  1682
    padmin@vios2$ entstat -all ent18 | grep -i "VLAN Tag IDs:"
    VLAN Tag IDs:  1659
    VLAN Tag IDs:  1001  1682
    VLAN Tag IDs:  1001  1682
    
    • A loss of communication between SEAs will also be logged in the errlog :
    padmin@vios1$ errlog | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    B8C78C08   0502231214 I H ent18          SEA HA PARTNER LOST
    padmin@vios1$ errlog -ls | more
    Location:
    
    Description
    SEA HA PARTNER LOST
    
    Probable Causes
    SEA HA PARTNER DOWN
    
    Failure Causes
    SEA HA PARTNER DOWN
    
            Recommended Actions
            INITIATE PARTNER DISCOVERY
    
    Detail Data
    ERNUM
    0000 0019
    ABSTRACT
    Initiating partner discovery due to lost partner
    AREA
    SEA HA discovery partner lost
    BUILD INFO
    BLD: 1309 30-10:08:58 y2013_40A0
    LOCATION
    Filename:sea_ha.c Function:seaha_dscv_ka_rcv_timeout Line:2977
    DATA
    Partner MAC: 0x1A:0xC4:0xFD:0x72:0x9B:0x0F
    
    • Be careful when looking at the errlog: a SEA in sharing mode will report "BECOME PRIMARY" even if it is the "backup" SEA (you have to look at the details with the errlog -ls command) :
    padmin@vios$ errlog | grep BECOME
    E48A73A4   0506205214 I H ent18          BECOME PRIMARY
    padmin@vios2$ errlog | grep BECOME
    1FE2DD91   0506205314 I H ent18          BECOME PRIMARY
    
    padmin@vios1$ errlog -ls | more
    LABEL:          VIOS_SEAHA_PRIMARY
    IDENTIFIER:     E48A73A4
    [..]
    Description
    BECOME PRIMARY
    [..]
    padmin@vios2$ errlog -ls | more
    LABEL:          VIOS_SEAHA_BACKUP
    IDENTIFIER:     1FE2DD91
    [..]
    Description
    BECOME PRIMARY
    [..]
    ABSTRACT
    Transition from INIT to BACKUP
    [..]
    seahap->state= 0x00000003
    Become the Backup SEA
    

    Removing the control channel adapter from an existing Shared Ethernet Adapter

    A "classic" Shared Ethernet Adapter can be modified to be usable without the need for a dedicated Control Channel Adapter. This modification requires a network outage and the Shared Ethernet Adapter needs to be in defined state. I DO NOT LIKE to do administration as root on Virtual I/O Servers but I'll do it here because of the use of the mkdev command :

    • On both Virtual I/O Servers put the Shared Ethernet Adapter in defined state :
    padmin@vios1$ oem_setup_env
    root@vios1# rmdev -l ent18
    ent18 Defined
    padmin@vios2$ oem_setup_env
    root@vios2# rmdev -l ent18
    ent18 Defined
    
    • On both Virtual I/O Servers remove the dedicated Control Channel Adapter for both Shared Ethernet Adapters :
    root@vios1# lsattr -El ent18 -a ctl_chan
    ctl_chan ent12 Control Channel adapter for SEA failover True
    root@vios1# chdev -l ent18 -a ctl_chan=""
    ent18 changed
    root@vios1# lsattr -El ent18 -a ctl_chan
    ctl_chan  Control Channel adapter for SEA failover True
    root@vios2# lsattr -El ent18 -a ctl_chan
    ctl_chan ent12 Control Channel adapter for SEA failover True
    root@vios2# chdev -l ent18 -a ctl_chan=""
    ent18 changed
    root@vios2# lsattr -El ent18 -a ctl_chan
    ctl_chan  Control Channel adapter for SEA failover True
    
    • Put each Shared Ethernet Adapter in available state by using the mkdev command :
    root@vios1# mkdev -l ent18
    ent18 Available
    root@vios2# mkdev -l ent18
    ent18 Available
    
    • Verify that the Shared Ethernet Adapter is now using vlan 4095 as Control Channel PVID :
    padmin@vios1$ entstat -all ent18 | grep -i "Control Channel PVID"
        Control Channel PVID: 4095
    padmin@vios2$ entstat -all ent18 | grep -i "Control Channel PVID"
        Control Channel PVID: 4095
    

    The first step to a global PowerVM simplification

    Be aware that this simplification is one of the first steps of a much larger project. With the latest version of the HMC, V8R8.1.0, a lot of new features will be available (June 2014). I can't wait to test the "single point of management" for Virtual I/O Servers. Anyway, creating a Shared Ethernet Adapter is now easier than before. Use this method to avoid human errors and misconfiguration of your Shared Ethernet Adapters. As always I hope this post will help you to understand this simplification. :-)

    Deep dive into PowerVC Standard 1.2.1.0 using Storage Volume Controller and Brocade 8510-4 FC switches in a multifabric environment


    Before reading this post I highly encourage you to read my first post about PowerVC because this one will be focused on the standard edition specifics. I had the chance to work on PowerVC express with IVM and local storage, and now on PowerVC standard with an IBM Storage Volume Controller & Brocade fibre channel switches. A few things are different between these two versions (particularly the storage management). Virtual machines created by PowerVC standard will use NPIV (virtual fibre channel adapters) instead of virtual SCSI adapters. Using local storage or using an SVC in a multi fabric environment are two different things, and the PowerVC ways to capture, deploy and manage virtual machines are totally different. The PowerVC configuration is more complex and you have to manage the fibre channel port configuration, the storage connectivity groups and the storage templates. Last but not least, the PowerVC standard edition is Live Partition Mobility aware. Let's have a look at all the standard version specifics. But before you start reading this post I have to warn you that this one is very long (it's always hard for me to write short posts :-)). Last thing, this post is the result of one month of work on PowerVC mostly on my own, but I have to thank the IBM guys for helping with a few problems (Paul, Eddy, Jay, Phil, …). Cheers guys!

    Prerequisites

    PowerVC standard needs to connect to the Hardware Management Console, to the Storage Provider, and to the Fibre Channel Switches. Be sure ports are open between PowerVC, the HMC, the Storage Array, and the Fibre Channel Switches :

    • Port TCP 12443 between PowerVC and the HMC (PowerVC is using the HMC K2 Rest API to communicate with the HMC)
    • Port TCP 22 (ssh) between PowerVC and the Storage Array.
    • Port TCP 22 (ssh) between PowerVC and the Fibre Channel Switches.
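
    A quick way to verify these flows from the PowerVC host before running the validator is to test each port (a minimal sketch relying on bash's /dev/tcp pseudo-device; hmc1, svc1, switch_fabric_a and switch_fabric_b are placeholder hostnames to adapt to your environment) :

    # timeout 5 bash -c "</dev/tcp/hmc1/12443" && echo "HMC K2 API port reachable"
    # timeout 5 bash -c "</dev/tcp/svc1/22" && echo "SVC ssh port reachable"
    # timeout 5 bash -c "</dev/tcp/switch_fabric_a/22" && echo "fabric A ssh port reachable"
    # timeout 5 bash -c "</dev/tcp/switch_fabric_b/22" && echo "fabric B ssh port reachable"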

    pvcch

    Check that your storage array is compatible with PowerVC standard (for the moment only IBM Storwize storage and the IBM Storage Volume Controller are supported). All Brocade switches with a version 7 firmware are supported. Be careful, the PowerVC Redbook is not up-to-date about this : all Brocade switches are supported (an APAR and a PMR are opened about this mistake).

    This post was written with this PowerVC configuration :

    • PowerVC 1.2.0.0 x86 version & PowerVC 1.2.1.0 PCC version.
    • Storage Volume Controller with EMC VNX2 Storage array.
    • Brocade DCX 8510-4.
    • Two Power 770+ with the latest AM780 firmware.

    PowerVC standard storage specifics and configuration

    PowerVC needs to control the storage to create or delete luns and to create hosts, and it also needs to control the fibre channel switches to create and delete zones for the virtual machines. If you are working with multiple fibre channel adapters with many ports, you also have to configure the storage connectivity groups and the fibre channel ports to tell PowerVC which ports to use and in which case (you may want to create development virtual machines on two virtual fibre channel adapters and production ones on four). Let's see how to do this :

    Adding storage and fabric

    • Add the storage provider (in my case a Storage Volume Controller but it can be any IBM Storwize family storage array) :
    • blog_add_storage

    • PowerVC will ask you a few questions while adding the storage provider (for instance which pool will be the default pool for the deployment of the virtual machines). You can then check in this view the actual size and remaining size of the used pool :
    • blog_storage_added

    • Add each fibre channel switch (in my case two switches one for fabric A and the second one for the fabric B) (be very careful with the fabric designation (A or B), it will be used later when creating storage templates and storage connectivity groups) :
    • blog_add_frabric

    • Each fabric can be viewed and modified afterwards :
    • blog_fabric_added

    Fibre Channel Port Configuration

    If you are working in a multi fabric environment you have to configure the fibre channel ports. For each port the first step is to tell PowerVC on which fabric the port is connected. In my case here is the configuration (you can refer to the colours on the image below, and to the explanations below) :

    pb connectivty_zenburn

    • Each Virtual I/O Server has 2 fibre channel adapters with four ports.
    • For the first adapter : first port is connected to Fabric A, and last port is connected to Fabric B.
    • For the second adapter : first port is connected to Fabric B, and last port is connected to Fabric A.
    • Two ports (port 1 and 2) are remaining free for future usage (future growing).
    • For each port I have to tell PowerVC where the port is connected (with PowerVC 1.2.0.0 you have to do this manually and check on the fibre channel switch where the ports are connected; with PowerVC 1.2.1.0 it is automatically detected by PowerVC :-)) :
    • 17_choose_fabric_for_each_port

      • Connected on Fabric A ? (check the image below) (use the nodefind switch command to check whether the port is logged in on this fibre channel switch)
      • blog_connected_fabric_A

    switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
    No device found
    switch_fabric_a:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
    Local:
     Type Pid    COS     PortName                NodeName                 SCR
     N    01fb40;    2,3;10:00:00:90:fa:3e:c6:ce;20:00:01:20:fa:3e:c6:ce; 0x00000003
        Fabric Port Name: 20:12:00:27:f8:79:ce:01
        Permanent Port Name: 10:00:00:90:fa:3e:c6:ce
        Device type: Physical Unknown(initiator/target)
        Port Index: 18
        Share Area: Yes
        Device Shared in Other AD: No
        Redirect: No
        Partial: No
        Aliases: XXXXX59_3ec6ce
    
  • Connected on Fabric B ? (check the image below) (use the nodefind switch command to check whether the port is logged in on this fibre channel switch)
  • blog_connected_fabric_B

    switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:D1
    Local:
     Type Pid    COS     PortName                NodeName                 SCR
     N    02fb40;    2,3;10:00:00:90:fa:3e:c6:d1;20:00:01:20:fa:3e:c6:d1; 0x00000003
        Fabric Port Name: 20:12:00:27:f8:79:d0:01
        Permanent Port Name: 10:00:00:90:fa:3e:c6:d1
        Device type: Physical Unknown(initiator/target)
        Port Index: 18
        Share Area: Yes
        Device Shared in Other AD: No
        Redirect: No
        Partial: No
        Aliases: XXXXX59_3ec6d1
    switch_fabric_b:FID1:powervcadm> nodefind 10:00:00:90:FA:3E:C6:CE
    No device found
    
  • Free, not connected ? (check the image below)
  • blog_not_connected

  • At the end each fibre channel port has to be configured with one of these three choices (connected on Fabric A, connected on Fabric B, Free/not connected).

    Port Tagging and Storage Connectivity Group

    Fibre channel ports are now configured, but we have to be sure that when deploying a new virtual machine :

    • Each virtual machine will be deployed with four fibre channel adapters (I am in a CHARM configuration).
    • Each virtual machine is connected on the first Virtual I/O Server to the Fabric A and Fabric B on different adapters (each adapter on a different CEC).
    • Each virtual machine is connected to the second Virtual I/O Server to Fabric A and Fabric B on different adapters.
    • I can choose to deploy the virtual machine using fcs0 (Fabric A) and fcs7 (Fabric B) on each Virtual I/O Server, or using fcs3 (Fabric B) and fcs4 (Fabric A). Ideally half of the machines will be created with the first configuration and the other half with the second configuration.

    To do this you have to tag each port with a tag name of your choice, and then create a storage connectivity group. A storage connectivity group is a constraint that is used for the deployment of virtual machines :

    pb_port_tag_zenburn

    • Two tags are created and set on each port, fcs0(A)_fcs7(B), and fcs3(B)_fcs4(A) :
    • blog_port_tag

    • Two connectivity groups are created to force the usage of tagged fibre channel ports when deploying a virtual machine.
      • When creating a connectivity group you have to choose the Virtual I/O Server(s) used when deploying a virtual machine with this connectivity group. It can be useful to tell PowerVC to deploy development machines on a single Virtual I/O Server, and production ones on dual Virtual I/O Servers :
      • blog_vios_connectivity_group

      • In my case connectivity groups are created to restrict the usage of fibre channel adapters. I want to deploy on fibre channel ports fcs0/fcs7 or fibre channel ports fcs3/fcs4. Here are my connectivity groups :
      • blog_connectivity_1
        blog_connectivity_2

      • You can check a summary of your connectivity group. I wanted to add this image because I think the two images (provided in PowerVC) are better than text to explain what a connectivity group is :-) :
      • 22_create_connectivity_group_3

    Storage Template

    If you are using different pools or different storage arrays (for example, in my case I can have different storage arrays behind my Storage Volume Controller) you may want to tell PowerVC to deploy virtual machines on a specific pool or with a specific type (for instance, I want my machines to be created on compressed luns, on thin provisioned luns, or on thick provisioned luns). In my case I've created two different templates to create machines on thin or compressed luns. Easy!

    • When creating a storage template you first have to choose the storage pool :
    • blog_storage_template_select_storage_pool

    • Then choose the type of lun for this storage template :
    • blog_storage_template_create

    • Here is an example with my two storage templates :
    • blog_storage_list

    A deeper look at VM capture

    If you read my last article about the PowerVC express version you know that capturing an image can take some time when using local storage: "dding" a whole disk is long, and copying the file to the PowerVC host is long too. But don't worry, PowerVC standard solves this problem easily by using all the potential of the IBM storage (in my case a Storage Volume Controller) … the solution: FlashCopies, more specifically what we call a FlashCopy-Copy (to be clear : a FlashCopy-Copy is a full copy of a lun : there is no longer any relationship between the source lun being copied and the FlashCopy lun (the FlashCopy is created with the autodelete argument)). Let me explain to you how PowerVC standard manages the virtual machine capture :

    • The activation engine has been run, the virtual machine to be captured is stopped.
    • The user launches the capture by using PowerVC.
    • A FlashCopy-Copy is created from the storage side, we can check it from the GUI interface :
    • blog_flash_copy_pixelate_1

    • Checking with the SVC command line we can see that (use catauditlog command to check this) :
      • A new volume called volume-Image-[name_of_the_image] is created (all captured images will be called volume-Image-[name]), taking into account the storage template (diskgroup/pool, grainsize, rsize ….) :
    # mkvdisk -name volume-Image_7100-03-03-1415 -iogrp 0 -mdiskgrp VNX_XXXXX_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
    
  • A FlashCopy-Copy is created with the id of the boot volume of the virtual machine being captured as source, and the id of the image's lun as target :
  • # mkfcmap -source 865 -target 880 -autodelete
    
  • We can check the vdisk 865 is the boot volume of the captured machine and has a FlashCopy running:
  • # lsvdisk -delim :
    id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC_name:RC_id:RC_name:vdisk_UID:fc_map_count:copy_count:fast_write_state:se_copy_count:RC_change:compressed_copy_count
    865:_BOOT:0:io_grp0:online:0:VNX_00086_SAS_POOL_1:60.00GB:striped:0:fcmap0:::600507680184879C2800000000000431:1:1:empty:1:no:0
    
  • The FlashCopy-Copy is prepared and started (at this step we can already use our captured image, the copy is running in the background) :
  • # prestartfcmap 0
    # startfcmap 0
    
  • While the copy of the FlashCopy is running we can check the progress (it can also be checked by logging on the GUI) :
  • IBM_2145:SVC:powervcadmin>lsfcmap
    id name   source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name            group_id group_name status  progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring start_time   rc_controlled
    0  fcmap0 865             XXXXXXXXX7_BOOT 880             volume-Image_7100-03-03-1415                     copying 54       50        100            off                                       no        140620002138 no
    
    IBM_2145:SVC:powervcadmin>lsfcmapprogress fcmap0
    id progress
    0  54
    
  • After the FlashCopy-Copy is finished, there is no longer any relationship between the source volume and the finished FlashCopy. The captured image is a vdisk :
  • IBM_2145:SVC:powervcadmin>lsvdisk 880
    id 880
    name volume-Image_7100-03-03-1415
    IO_group_id 0
    IO_group_name io_grp0
    status online
    mdisk_grp_id 0
    mdisk_grp_name VNX_XXXXX_SAS_POOL_1
    capacity 60.00GB
    type striped
    [..]
    vdisk_UID 600507680184879C280000000000044C
    [..]
    fc_map_count 0
    [..]
    
  • There is no more fcmap for the source volume :
  • IBM_2145:SVC:powervcadmin>lsvdisk 865
    [..]
    fc_map_count 0
    [..]
    

    Deployment mechanism

    blog_deploy3_pixelate

    Deploying a virtual machine with the standard version is very similar to deploying a machine with the express version. The only difference is the possibility to choose the storage template (within the constraints of the storage connectivity group).

    View from the Hardware Management Console

    PowerVC is using the Hardware Management Console's new K2 REST API to create the virtual machine. If you want to go further and check the commands run on the HMC you can do so with the lssvcevents command (a quick way to filter them is shown after the output below) :

    time=06/21/2014 17:49:12,text=HSCE2123 User name powervc: chsysstate -m XXXX58-9117-MMD-658B2AD -r lpar -o on -n deckard-e9879213-00000018 command was executed successfully.
    time=06/21/2014 17:47:29,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
    time=06/21/2014 17:46:51,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 29 --id 1 -a remote_slot_num=6,remote_lpar_id=8,adapter_type=server co
    mmand was executed successfully."
    time=06/21/2014 17:46:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""6/CLIENT/1//29//0"""""",name=l
    ast*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:46:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:46:17,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 28 --id 1 -a remote_slot_num=5,remote_lpar_id=8,adapter_type=server co
    mmand was executed successfully."
    time=06/21/2014 17:46:06,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""5/CLIENT/1//28//0"""""",name=l
    ast*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:45:57,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:45:46,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 30 --id 2 -a remote_slot_num=4,remote_lpar_id=8,adapter_type=server co
    mmand was executed successfully."
    time=06/21/2014 17:45:36,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 29 --id 1 command was executed successfully.
    time=06/21/2014 17:45:27,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""4/CLIENT/2//30//0"""""",name=l
    ast*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:45:18,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:45:08,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o r -m 9117-MMD*658B2AD -s 28 --id 1 command was executed successfully.
    time=06/21/2014 17:45:07,text=User powervc has logged off from session id 42151 for the reason:  The user ran the Disconnect task.
    time=06/21/2014 17:45:07,text=User powervc has disconnected from session id 42151 for the reason:  The user ran the Disconnect task.
    time=06/21/2014 17:44:50,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o a -m 9117-MMD*658B2AD -s 23 --id 1 -a adapter_type=server,remote_lpar_id=8,remote_slot_num=3
    command was executed successfully."
    time=06/21/2014 17:44:40,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,virtual_scsi_adapters+=3/CLIENT/1//23/0,name=last*valid*c
    onfiguration -o apply --override command was executed successfully."
    time=06/21/2014 17:44:32,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:44:22,"text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype fc -o a -m 9117-MMD*658B2AD -s 25 --id 2 -a remote_slot_num=2,remote_lpar_id=8,adapter_type=server co
    mmand was executed successfully."
    time=06/21/2014 17:44:11,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_fc_adapters+=""""2/CLIENT/2//25//0"""""",name=l
    ast*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:44:02,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:43:50,text=HSCE2123 User name powervc: chhwres -r virtualio --rsubtype scsi -o r -m 9117-MMD*658B2AD -s 23 --id 1 command was executed successfully.
    time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 1 -o off command was executed successfully.
    time=06/21/2014 17:43:31,text=HSCE2123 User name powervc: chled -r sa -t virtuallpar -m 9117-MMD*658B2AD --id 2 -o off command was executed successfully.
    time=06/21/2014 17:42:57,"text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r prof -i lpar_name=deckard-e9879213-00000018,""virtual_eth_adapters+=""""32/0/1665//0/0/zvdc4/fabbb99d
    e420/all/"""""",name=last*valid*configuration -o apply --override command was executed successfully."
    time=06/21/2014 17:42:49,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:41:53,text=HSCE2123 User name powervc: chsyscfg -m 9117-MMD*658B2AD -r lpar -p deckard-e9879213-00000018 -n default_profile -o apply command was executed successfully.
    time=06/21/2014 17:41:42,text=HSCE2245 User name powervc: Activating the partition 8 succeeded on managed system 9117-MMD*658B2AD.
    time=06/21/2014 17:41:36,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=deckard-e9879213-00000018,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,max_mem=8192,p
    rofile_name=default_profile,max_virtual_slots=64,lpar_proc_compat_mode=default,proc_mode=shared,min_procs=4,desired_procs=4,max_procs=4,min_proc_units=2,desired_proc_units=2,max_proc_units=2,s
    haring_mode=uncap,uncap_weight=128,lpar_avail_priority=127,sync_curr_profile=1 command was executed successfully."
    time=06/21/2014 17:41:01,"text=HSCE2123 User name powervc: mksyscfg -m 9117-MMD*658B2AD -r lpar -i name=FAKE_1403368861661,profile_name=default,lpar_env=aixlinux,min_mem=8192,desired_mem=8192,
    max_mem=8192,max_virtual_slots=4,virtual_eth_adapters=5/0/1//0/1/,virtual_scsi_adapters=2/client/1//2/0,""virtual_serial_adapters=0/server/1/0//0/0,1/server/1/0//0/0"",""virtual_fc_adapters=3/
    client/1//2//0,4/client/1//2//0"" -o query command was executed successfully."
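
    If you just want to extract the PowerVC activity from the HMC, filtering the console events on the user name is enough (a minimal sketch; -d 1 limits the search to the last day, adjust it to your needs) :

    hscroot@hmc1:~> lssvcevents -t console -d 1 | grep "User name powervc"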
    

    blog_deploy3_hmc1

    As you can see on the picture below, four virtual fibre channel adapters are created taking into account the constraints of the storage connectivity groups created earlier (looking at the Virtual I/O Server, the vfcmaps are ok …) :

    blog_deploy3_hmc2_pixelate

    padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost14 -npiv
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vfchost14     U9117.MMD.658B2AD-V1-C28                8 deckard-e98792 AIX
    
    Status:LOGGED_IN
    FC name:fcs3                    FC loc code:U2C4E.001.DBJN916-P2-C1-T4
    Ports logged in:2
    Flags:a
    VFC client name:fcs2            VFC client DRC:U9117.MMD.658B2AD-V8-C5
    
    padmin@XXXXX60:/home/padmin$ lsmap -vadapter vfchost15 -npiv
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vfchost15     U9117.MMD.658B2AD-V1-C29                8 deckard-e98792 AIX
    
    Status:LOGGED_IN
    FC name:fcs4                    FC loc code:U2C4E.001.DBJO029-P2-C1-T1
    Ports logged in:2
    Flags:a
    VFC client name:fcs3            VFC client DRC:U9117.MMD.658B2AD-V8-C6
    

    View from the Storage Volume Controller

    The SVC side is pretty simple, two steps: a FlashCopy-Copy of the volume-Image (the one created at the capture step; the source of the FlashCopy is the volume-Image-[name] lun) and a host creation for the new virtual machine :

    • Creation of a FlashCopy-Copy with the volume used for the capture as source :
    • blog_deploy3_flashcopy1

    # mkvdisk -name volume-boot-9117MMD_658B2AD-deckard-e9879213-00000018 -iogrp 0 -mdiskgrp VNX_00086_SAS_POOL_1 -size 64424509440 -unit b -autoexpand -grainsize 256 -rsize 2% -warning 0% -easytier on
    # mkfcmap -source 880 -target 881 -autodelete
    # prestartfcmap 0
    # startfcmap 0
    
  • The host is created using the eight WWPNs of the newly created virtual machine (I paste here the lssyscfg output so you can check the WWPNs are the same :-)) :
  • hscroot@hmc1:~> lssyscfg -r prof -m XXXXX58-9117-MMD-658B2AD --filter "lpar_names=deckard-e9879213-00000018"
    name=default_profile,lpar_name=deckard-e9879213-00000018,lpar_id=8,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/1/XXXXX60/29/0,virtual_eth_adapters=32/0/1665//0/0/zvdc4/fabbb99de420/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/2/XXXXX59/30/c050760727c5004a,c050760727c5004b/0"",""4/client/2/XXXXX59/25/c050760727c5004c,c050760727c5004d/0"",""5/client/1/XXXXX60/28/c050760727c5004e,c050760727c5004f/0"",""6/client/1/XXXXX60/23/c050760727c50050,c050760727c50051/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
    
    # mkhost -name deckard-e9879213-00000018-06976900 -hbawwpn C050760727C5004A -force
    # addhostport -hbawwpn C050760727C5004B -force 11
    # addhostport -hbawwpn C050760727C5004C -force 11
    # addhostport -hbawwpn C050760727C5004D -force 11
    # addhostport -hbawwpn C050760727C5004E -force 11
    # addhostport -hbawwpn C050760727C5004F -force 11
    # addhostport -hbawwpn C050760727C50050 -force 11
    # addhostport -hbawwpn C050760727C50051 -force 11
    # mkvdiskhostmap -host deckard-e9879213-00000018-06976900 -scsi 0 881
    

    blog_deploy3_svc_host1
    blog_deploy3_svc_host2

    View from fibre channel switches

    On the two fibre channel switches four zones are created (do not forget the zones used for Live Partition Mobility). These zones can be easily identified by their names: all PowerVC zones are prefixed by "powervc" (unfortunately names are truncated). A quick way to list them all is shown after the zone outputs below :

    • Four zones are created on the fibre channel switch of the fabric A :
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c50051500507680110f32c
                    c0:50:76:07:27:c5:00:51; 50:05:07:68:01:10:f3:2c
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004c500507680110f385
                    c0:50:76:07:27:c5:00:4c; 50:05:07:68:01:10:f3:85
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004d500507680110f385
                    c0:50:76:07:27:c5:00:4d; 50:05:07:68:01:10:f3:85
    
    switch_fabric_a:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c50050500507680110f32c
                    c0:50:76:07:27:c5:00:50; 50:05:07:68:01:10:f3:2c
    
  • Four zones are created on the fibre channel switch of the fabric B :
  • switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004e500507680120f385
                    c0:50:76:07:27:c5:00:4e; 50:05:07:68:01:20:f3:85
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c5004a500507680120f32c
                    c0:50:76:07:27:c5:00:4a; 50:05:07:68:01:20:f3:2c
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
     zone:  powervc_eckard_e9879213_00000018c050760727c5004b500507680120f32c
                    c0:50:76:07:27:c5:00:4b; 50:05:07:68:01:20:f3:2c
    
    switch_fabric_b:FID1:powervcadmin> zoneshow powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
     zone:  powervc_eckard_e9879213_00000018c050760727c5004f500507680120f385
                    c0:50:76:07:27:c5:00:4f; 50:05:07:68:01:20:f3:85
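
    To list every zone generated by PowerVC at once, a pattern search helps (a sketch; depending on your FOS level you can pass a wildcard pattern to zoneshow, or simply pipe the output through grep) :

    switch_fabric_a:FID1:powervcadmin> zoneshow "powervc_*"
    switch_fabric_b:FID1:powervcadmin> zoneshow "powervc_*"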
    

    Activation Engine and Virtual Optical Device

    All my deployed virtual machines are connected to one of the Virtual I/O Servers by a vSCSI adapter. This vSCSI adapter is used to connect the virtual machine to a virtual optical device (a virtual cdrom) needed by the activation engine to reconfigure the virtual machine. Looking at the Virtual I/O Server, the virtual media repository is filled with customized iso files needed to activate the virtual machines :

    • Here is the output of the lsrep command on one of the Virtual I/O Servers used by PowerVC :
    padmin@XXX60:/home/padmin$ lsrep 
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free 
        1017     1014 rootvg                   279552           110592 
    
    Name                                                  File Size Optical         Access 
    vopt_1c967c7b27a94464bebb6d043e6c7a6e                         1 None            ro 
    vopt_b21849cc4a32410f914a0f6372a8f679                         1 None            ro 
    vopt_e9879213dc90484bb3c5a50161456e35                         1 None            ro
    
  • At the time of writing this post the vSCSI adapter is not deleted after the virtual machine's activation, but it is only used at the first boot of the machine :
  • blog_adapter_for_ae_pixelate

  • Even better, you can mount this iso and check that it is used by the activation engine. The network configuration to be applied at reboot is written in an xml file. For those -like me- who have ever played with VMcontrol, it may remind you of the deploy command used in VMcontrol :
  • root@XXXX60:# cd /var/vio/VMLibrary
    root@XXXX60:/var/vio/VMLibrary# loopmount -i vopt_1c967c7b27a94464bebb6d043e6c7a6e -o "-V cdrfs -o ro" -m /mnt
    root@XXXX60:/var/vio/VMLibrary# cd /mnt
    root@XXXX60:/mnt# ls
    ec2          openstack    ovf-env.xml
    root@XXXX60:/mnt# cat ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.244.17.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="deckard"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/>;<Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value="10.10.20.10 10.10.20.11"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.244.17.35"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    

    Shared Ethernet Adapters auto management

    This part is not specific to the standard version of PowerVC but I wanted to talk about it here. You probably already know that PowerVC is built on top of OpenStack, and OpenStack is clever. The product doesn't want to keep unnecessary objects in your configuration. I was very impressed by the management of the networks and of the vlans: PowerVC is managing and taking care of your Shared Ethernet Adapters for you. You don't have to remove unused vlans, or to add new vlans by hand (just add the network in PowerVC). Here are a few examples :

    • If you are adding a vlan in PowerVC you have the choice to select the Shared Ethernet Adapter for this vlan. For instance you can choose not to deploy this vlan on a particular host :
    • blog_network_do_not_use_pixelate

    • If you deploy a virtual machine on this vlan, the vlan will be automatically added to the Shared Ethernet Adapter if this is the first machine using it :
    # chhwres -r virtualio --rsubtype vnetwork -o a -m 9117-MMD*658B2AD --vnetwork 1503-zvdc4 -a vlan_id=1503,vswitch=zvdc4,is_tagged=1
    
  • If you are moving a machine from one host to another and this machine is the last one to use this vlan, the vlan will be automatically cleaned up and removed from the Shared Ethernet Adapter.
  • I have in my configuration two Shared Ethernet Adapters each one on a different virtual switch. Good news : PowerVC is vswitch aware :-)
  • This link explains this in detail (not the redbook): Click here

    Mobility

    PowerVC standard is able to manage the mobility of your virtual machines. Machines can be relocated to any host in the PowerVC pool. You no longer have to remember the long and complicated migrlpar command, PowerVC takes care of this for you, just click the migrate button :

    blog_migrate_1_pixelate

    • Looking at the Hardware Management Console lssvcevents output, you can check that the migrlpar command takes care of the storage connectivity group created earlier, and is going to map the lpar on adapters fcs3 and fcs4 :
    # migrlpar -m XXX58-9117-MMD-658B2AD -t XXX55-9117-MMD-65ED82C --id 8 -i ""virtual_fc_mappings=2//1//fcs3,4//1//fcs4,5//2//fcs3,6//2//fcs4""
    
  • On the Storage Volume Controller, the Live Partition Mobility WWPNs of the host created earlier are correctly activated while the machine is moving to the other host :
  • blog_migrate_svc_lpm_wwpns_greened

    About supported fibre channel switches : all FOS >= 6.4 are ok !

    At the time of writing this post things are not very clear about this. Checking in the Redbook, the only supported models of fibre channel switches are the IBM SAN24B-5 and IBM SAN48B-5. I'm using Brocade 8510-4 fibre channel switches and they are working well with PowerVC. After a couple of calls and mails with the PowerVC development team it seems that all Fabric OS versions greater than or equal to 6.4 are ok. Don't worry if the PowerVC validator is failing, it may happen, just open a call to get the validator working with your switch model (I had problems with version 1.2.0.1 but no more problems with the latest 1.2.1.0 :-)).
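
    If you want to check the Fabric OS level of your switches before registering them, the version and firmwareshow commands give the running release (a small sketch on my fabric A switch) :

    switch_fabric_a:FID1:powervcadmin> version
    switch_fabric_a:FID1:powervcadmin> firmwareshow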

    Conclusion

    PowerVC is impressive. In my opinion PowerVC is already production ready. Building a machine with four virtual NPIV fibre channel adapters in five minutes is something every AIX system administrator has dreamed of. Tell your boss this is the right way to build machines, and invest in the future by deploying PowerVC : it's a must have :-) :-) :-) :-)! Need advice about it, need someone to deploy it ? Hire me !

    sitckers_resized

    Exploit the full potential of PowerVC by using Shared Storage Pools & Linked Clones | PowerVC secrets about Linked Clones (pooladm,mksnap,mkclone,vioservice)


    My journey into PowerVC continues :-). The blog was not updated for two months but I've been busy these days, got sick … and so on. I have another post in the pipe but that one has to be approved by IBM before posting ….. Since the latest version (at the time of writing this post 1.2.1.2) PowerVC is now capable of managing Shared Storage Pools (SSP). It's a huge announcement because a lot of customers do not have a Storage Volume Controller and supported fibre channel switches. By using PowerVC in conjunction with SSP you will reveal the true and full potential of the product. There are two major enhancements brought by SSP. The first is the deployment time of new virtual machines … by using an SSP you'll move from minutes to …. seconds. Second huge enhancement : by using SSP you'll automatically be using -without knowing it- a feature called "Linked Clones". For those who have been following my blog since the very beginning, you're probably aware that Linked Clones have been usable and available since SSPs were managed by the IBM Systems Director VMcontrol module. You can still refer to my blog posts about it … even if ISD VMcontrol is now a little bit outdated by PowerVC : here. Using PowerVC with Shared Storage Pools is easy, but how does it work behind the scenes ? After analysing the deployment process I've found some cool features: PowerVC is using secret undocumented commands, pooladm, vioservice, secret mkdev arguments … :

    Discovering Shared Storage Pool on your PowerVC environment

    The first thing to do before beginning is to discover the Shared Storage Pool in PowerVC. I'm taking the time to explain this because it's so easy that people (like me) may think there is much to do about it … but no, PowerVC is simple. You have nothing to do. I'm not going to explain here how to create a Shared Storage Pool, please refer to my previous posts about this : here and here. After the Shared Storage Pool is created it will be automatically added into PowerVC … nothing to do. Keep in mind that you will need the latest version of the Hardware Management Console (v8r8.1.0). If you have trouble discovering the Shared Storage Pool, check that the Virtual I/O Servers' RMC connections are ok (see the quick checks below). In general if you can query and perform any action on the Shared Storage Pool from the HMC there will be no problem on the PowerVC side.
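
    Here are the quick checks I use (a sketch; the cluster, managed system and Virtual I/O Server names are the ones used elsewhere in this post, adapt them to your environment) : the cluster status from one of the Virtual I/O Servers, and the RMC state of the Virtual I/O Servers from the HMC :

    padmin@vios1$ cluster -status -clustername powervc_cluster
    hscroot@hmc1:~> lssyscfg -r lpar -m XXXXX58-9117-MMD-658B2AD -F name,state,rmc_state,rmc_ipaddr | grep -i vios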

    • You don’t have to reload PowerVC after creating the Shared Storage Pool, just check you can see it from the storage tab :
    • pvc_ssp1

    • You will get more details by clicking on the Shared Storage Pool ….
    • pvc_ssp2

    • such as captured image on the Shared Storage Pool ….
    • pvc_ssp3

    • volumes created on it …
    • pvc_ssp4

    What is a linked clone ?

    Think before you start. You have to understand what a Linked Clone is before reading the rest of this post. Linked Clones are not well described in the documentation and Redbooks. Linked Clones are based on Shared Storage Pool snapshots. No Shared Storage Pool = No Linked Clones. Here is what is going on behind the scenes when you are deploying a Linked Clone :

    1. The captured rootvg underlying disk is a Shared Storage Pool Logical Unit.
    2. When the image is captured the rootvg Logical Unit is copied and is known as a “Logical (Client Image) Unit”.
    3. When deploying a new machine a snapshot is created from the Logical (Client Image) Unit.
    4. A “special Logical Unit” is created from the snapshot. This Logical Unit seems to be a pointer to the snapshot. We call it a clone.
    5. The machine is booted, the activation engine runs and reconfigures the network.
    6. When a block is modified on the new machine, the block is duplicated and the modification is written to a new block on the Shared Storage Pool.
    7. That said, if no blocks are modified, all the machines created from this capture share the same blocks on the Shared Storage Pool.
    8. Only modified blocks are not shared between Linked Clones. The more you change on your rootvg, the more space you will use on the Shared Storage Pool.
    9. That's why these machines are called Linked Clones : they are all linked to the same source Logical Unit.
    10. You will save TIME (just a snapshot creation on the storage side) and SPACE (the rootvg blocks are shared by all the deployed machines) by using Linked Clones.

    An image is sometimes better than long text, so here is a schema explaining all about Linked Clones :

    LinkedClones

    You have to capture an SSP-based VM to deploy on the SSP

    Be aware of one thing: you can't deploy a virtual machine on the SSP if you don't have a captured image on the SSP. You can't use your Storwize images to deploy on the SSP. You first have to create on your own a machine whose rootvg is running on the SSP :

    • Create an image based on an SSP virtual machine :
    • pvc_ssp_capture1

    • Shared Storage Pool Logical Unit are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
    • Shared Storage Pool Logical (Client Image) Unit are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
    • The Logical Unit of the captured virtual machine is copied with the dd command from the VOL1 (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1) directory to the IM directory (/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM) (so from volumes to images).
    • If you do this yourself by using the dd command you can see that the captured image is not shown in the output of the snapshot command (when using Linked Clones the snapshot output is separated into two categories: the actual and "real" Logical Units, and the Logical (Client Image) Units which are the PowerVC images …).
    • A secret API managed by a secret command called vioservice adds your newly created image to the Shared Storage Pool SolidDB.
    • After the “registration” the Client Image is visible with the snapshot command.

    Deployment

    After the image is captured and stored in the Shared Storage Pool images directory, you can now deploy virtual machines based on this image. Keep in mind that blocks are shared by each linked clone: you'll be surprised that deploying machines will barely use any free space on the shared storage pool (a quick way to check this is shown after the image below). But be aware that you can't deploy any machine if there is no "blank" space left in the PowerVC space bar (check the image below ….) :

    deploy
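
    To convince yourself that linked clones barely consume space, compare the pool free space before and after deploying a few machines and list the backing devices (a sketch using the cluster and pool names from this post) :

    $ lssp -clustername powervc_cluster
    $ lssp -clustername powervc_cluster -sp powervc_sp -bd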

    Step by step deployment by example

    • A snapshot of the image is created through the pooladm command. You can check the output of the snapshot command after this step; you'll see a new snapshot derived from the Logical (Client Image) Unit.
    • This snapshot is cloned (my understanding of the clone is that it is a normal logical unit sharing blocks with an image). After the snapshot is cloned a new volume is created in the shared storage pool volume directory, but at this step it is not visible with the lu command because creating a clone does not create meta-data on the shared storage pool.
    • A dummy logical unit is created. Then the clone is moved over the dummy logical unit to replace it.
    • The clone logical unit is mapped to the client.

    dummy

    You can do it yourself without PowerVC (not supported)

    Just for my understanding of what PowerVC is doing behind the scenes I decided to try to do all the steps on my own. These steps are working but are not supported at all by IBM.

    • Before starting to read this you need to know that $ prompts are for padmin commands, # prompts are for root commands. You’ll need the cluster id and the pool id to build some xml files :
    $ cluster -list -field CLUSTER_ID
    CLUSTER_ID:      c50a291c18ab11e489f46cae8b692f30
    $ lssp -clustername powervc_cluster -field POOL_ID
    POOL_ID:         000000000AFFF80C0000000053DA327C
    
  • So the cluster id will be c50a291c18ab11e489f46cae8b692f30 and the pool id will be c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C. These ids are often prefixed by two characters (I don't know what they are for but it works in all cases …).
  • Image files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM.
  • Logical units files are stored in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1.
  • Create the "envelope" of the Logical (Client Image) Unit by creating an xml file (the udids are built with the cluster udid and the pool udid) used as the standard input of the vioservice command :
  • # cat create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="8">
                            <Image label="chmod666-homemade-image"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    # /usr/ios/sbin/vioservice lib/libvio/lu < create_client_image.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="29c50a291c18ab11e489f46cae8b692f30d87113d5be9004791d28d44208150874" capacity="55296" physicalUsage="0" unusedCommitment="0" type="8" derived="" thick="0" tmoveState="0"><Image label="chmod666-homemade-image" relPath=""/></LU></Tier></Pool></Cluster></Response></VIO>
    # ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     staff    57982058496 Sep  8 19:00 chmod666-homemade-image.d87113d5be9004791d28d44208150874
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    
  • You can now see with the snapshot command that a new Logical (Client Image) Unit is here :
  • $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c055296          THIN               100% 0              ac671df86edaf07e96e399e3a2dbd425
    chmod666-homemade-image  55296          THIN                 0% 55299          d87113d5be9004791d28d44208150874
    volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be38155296          THIN               100% 55299          e525b8eb474f54e1d34d9d02cb0b49b4
                    Snapshot
                    2631012f1a558e51d1af7608f3779a1bIMSnap
                    09a6c90817d24784ece38f71051e419aIMSnap
                    e400827d363bb86db7984b1a7de08495IMSnap
                    5fcef388618c9a512c0c5848177bc134IMSnap
    
  • Copy the source image (the stopped virtual machine with the activation engine activated) to this newly created image (this one will be the new reference for all your virtual machines created with this image as source). Use the dd command to do it (and don't forget the block size). You can check while the dd is running that the used percentage is increasing :
  • # dd if=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-aaaa95f8317c666549c4809264281db536dd.a2b7ed754030ca97668b30ab6cff5c45 of=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 bs=1M
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN                23% 0              d87113d5be9004791d28d44208150874
    [..]
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN                40% 0              d87113d5be9004791d28d44208150874
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    [..]
    chmod666-homemade-image  55296          THIN               100% 0              d87113d5be9004791d28d44208150874
    
  • You now have a new reference image. This one will be used as a reference for all your linked clone deployed virtual machines. A linked clone is created from a snapshot, so you first have to create a snapshot of the newly created image by using the pooladm command (keep in mind that you can't use the snapshot command to work on a Logical (Client Image) Unit). The snapshot is identified by the logical unit name suffixed with "@". Use mksnap to create the snap, and lssnap to show it. The snapshot will be visible in the output of the snapshot command :
  • # pooladm file mksnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap
    # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    $ snapshot -clustername powervc_cluster -list -spname powervc_sp
    Lu(Client Image)Name     Size(mb)       ProvisionType     %Used Unused(mb)     Lu Udid
    chmod666-homemade-image  55296          THIN               100% 55299  d87113d5be9004791d28d44208150874
                    Snapshot
                    chmod666IMSnap
    [..]
    
  • You can now create the clone from the snap (snaps are identified by a '@' character prefixed by the image name). Name the clone the way you want because this one will be renamed and moved to replace a normal logical unit; I'm using here the PowerVC convention (IMtmp). The creation of the clone will create a new file in the VOL1 directory with no shared storage pool meta data, so this clone will not be visible in the output of the lu command :
  • $ pooladm file mkclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874@chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    $ ls -l  /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/*chmod666-IM*
    -rwx------    1 root     system   57982058496 Sep  9 16:27 /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp
    
  • By using vioservice, create a logical unit on the shared storage pool. This will create a new image with a newly generated udid. If you check in the volume directory you can notice that the clone does not have the meta-data file needed by the shared storage pool (this file is prefixed by a dot (.)). After creating this logical unit, replace it with your clone with a simple move :
  • $ cat create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="1">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C">
                    <Tier>
                        <LU capacity="55296" type="1">
                            <Disk label="volume-boot-9117MMD_658B2AD-chmod666"/>
                        </LU>
                    </Tier>
                </Pool>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < create_client_lu.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"><Pool udid="24c50a291c18ab11e489f46cae8b692f30000000000AFFF80C0000000053DA327C" name="powervc_sp" raidLevel="0" overCommitSpace="0"><Tier udid="25c50a291c18ab11e489f46cae8b692f3019f95b3ea4c4dee1" name="SYSTEM" overCommitSpace="0"><LU udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" capacity="55296" physicalUsage="0" unusedCommitment="0" type="1" derived="" thick="0" tmoveState="0"><Disk label="volume-boot-9117MMD_658B2AD-chmod666"/></LU></Tier></Pool></Cluster></Response></VIO>
    $ mv /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666-IMtmp /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • You are ready to use your linked clone, you have a source image, a snap of this one, and a clone of this snap :
  • # pooladm file lssnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874
    Primary Path         File Snapshot name
    ---------------------------------------
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/chmod666-homemade-image.d87113d5be9004791d28d44208150874 chmod666IMSnap
    # pooladm file lsclone /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    Snapshot             Clone name
    ----------------------------------
    chmod666IMSnap /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777
    
  • Then, using vioservice or the mkdev command, map the clone to your virtual scsi adapter (identified by its physloc name) (do this on both Virtual I/O Servers) :
  • $ cat map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
        <Request action="5">
            <Cluster udid="22c50a291c18ab11e489f46cae8b692f30">
                <Map label="" udid="27c50a291c18ab11e489f46cae8b692f30e4d360832b29be950824d3e5bf57d777" drcname="U9117.MMD.658B2AD-V2-C99"/>
            </Cluster>
        </Request>
    </VIO>
    $ /usr/ios/sbin/vioservice lib/libvio/lu < map_clone.xml
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <VIO xmlns="http://ausgsa.austin.ibm.com/projects/v/vios/schema/1.20" version="1.20">
    
    <Response><Cluster udid="22c50a291c18ab11e489f46cae8b692f30" name="powervc_cluster"/></Response></VIO>
    

    or

    # mkdev -t ngdisk -s vtdev -c virtual_target -aaix_tdev=volume-boot-9117MMD_658B2AD-chmod666.e4d360832b29be950824d3e5bf57d777 -audid_info=4d360832b29be950824d3e5bf57d77 -apath_name=/var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1 -p vhost5 -acluster_id=c50a291c18ab11e489f46cae8b692f30
    
  • Boot the machine ... this one is a linked clone created by yourself without PowerVC.
  • About the activation engine ?

    Your captured image has the activation engine enabled. To reconfigure the network & the hostname, PowerVC copies an iso from the PowerVC server to the Virtual I/O Server. This iso contains an ovf file needed by the activation engine to customize your virtual machine. To customize the linked clone virtual machine I created on my own, I decided to re-use an old iso file created by PowerVC for another deployment :

    • Mount the image located in /var/vio/VMLibrary, and modify the xml ovf file to fit your needs :
    # ls -l /var/vio/VMLibrary
    total 840
    drwxr-xr-x    2 root     system          256 Jul 31 20:17 lost+found
    -r--r-----    1 root     system       428032 Sep  9 18:11 vopt_c07e6e0bab6048dfb23586aa90e514e6
    # loopmount -i vopt_c07e6e0bab6048dfb23586aa90e514e6 -o "-V cdrfs -o ro" -m /mnt
    
  • Copy the content of the cd to a directory :
  • # mkdir /tmp/mycd
    # cp -r /mnt/* /tmp/mycd
    
  • Edit the ovf file to fit your needs (in my case, for instance, I'm changing the hostname of the machine and its IP address) :
  • # cat /tmp/mycd/ovf-env.xml
    <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovfenv:id="vs0">
        <PlatformSection>
        <Locale>en</Locale>
      </PlatformSection>
      <PropertySection>
      <Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.ipv4defaultgateway" ovfenv:value="10.218.238.1"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.hostname" ovfenv:value="homemadelinkedclone"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.slotnumber.1" ovfenv:value="32"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.dnsIPaddresses" ovfenv:value=""/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.usedhcpv4.1" ovfenv:value="false"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4addresses.1" ovfenv:value="10.218.238.140"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.adapter.networking.ipv4netmasks.1" ovfenv:value="255.255.255.0"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.networking.domainname" ovfenv:value="localdomain"/><Property ovfenv:key="com.ibm.ovf.vmcontrol.system.timezone" ovfenv:value=""/></PropertySection>
    </Environment>
    
  • Recreate the cd using the mkdvd command and put it in the /var/vio/VMLibrary directory :
  • # mkdvd -r /tmp/mycd -S
    Initializing mkdvd log: /var/adm/ras/mkcd.log...
    Verifying command parameters...
    Creating temporary file system: /mkcd/cd_images...
    Creating Rock Ridge format image: /mkcd/cd_images/cd_image_19267708
    Running mkisofs ...
    
    mkrr_fs was successful.
    # mv /mkcd/cd_images/cd_image_19267708 /var/vio/VMLibrary
    $ lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
        1017     1015 rootvg                   279552           171776
    
    Name                                                  File Size Optical         Access
    cd_image_19267708                                             1 None            rw
    vopt_c07e6e0bab6048dfb23586aa90e514e6                         1 vtopt1          ro
    
  • Load the cdrom and map it to the linked clone :
  • $ mkvdev -fbo -vadapter vhost11
    $ loadopt -vtd vtopt0 -disk cd_image_19267708
    
  • When the linked clone virtual machine boots, the cd will be mounted and the activation engine will take the ovf file as a parameter and reconfigure the network. For instance you can check that the hostname has changed :
  • # hostname
    homemadelinkedclone.localdomain
    

    A view on the layout ?

    I asked myself a question about linked clones : how can we check that Shared Storage Pool blocks (or PPs ?) are shared between the captured machine (the captured LU) and one of its linked clones ? To answer this question I had to play with the pooladm command (which is unsupported for customer use) to get the logical unit layout of the captured virtual machine and of the deployed linked clone, and then compare them. Please note that this is my understanding of linked clones. This is not validated by any IBM support, do this at your own risk, and feel free to correct my interpretation of what I'm seeing here :-) :

    • Get the layout of the captured VM by getting the layout of the logical unit (the captured image is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4) :
    root@vios:/home/padmin# ls -l /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM
    total 339759720
    -rwx------    1 root     system   57982058496 Aug 12 17:53 volume-Image_7100-03-03-1415-SSP3e2066b2a7a9437194f48860affd56c0.ac671df86edaf07e96e399e3a2dbd425
    -rwx------    1 root     system   57982058496 Aug 18 19:15 volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/IM/volume-Image_7100-03-03-1415-c--5bd3991bdac84c48b519e19bfb1be381.e525b8eb474f54e1d34d9d02cb0b49b4 | tee /tmp/captured_vm.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • Get the layout of the linked clone (the linked clone is in my case located in /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba)
  • # ls /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    # pooladm file layout /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba | tee /tmp/linked_clone.layout
    0x0-0x100000 shared
        LP 0xFE:0xF41000
        PP /dev/hdisk968 0x2E8:0xF41000
    0x100000-0x200000 shared
        LP 0x48:0x387F000
        PP /dev/hdisk866 0x1:0x387F000
    [..]
    
  • At this step you can first compare the two files and see some useful information, but do not misinterpret this output : you first have to sort it to draw conclusions. You can nevertheless be sure of one thing : some PPs have been modified on the linked clone and cannot be shared anymore, others are shared between the linked clone and the captured image :
  • sdiff_layout1_modifed_1

  • You can have a better view of shared and non-shared PPs by sorting the output of these files, here are the commands I used to do it :
  • # grep PP linked_clone.layout | tr -s " " | sort -k1 > /tmp/pp_linked_clone.layout
    # grep PP captured_vm.layout | tr -s " " | sort -k1 > /tmp/pp_captured_vm.layout
    
  • By sdiffing these two files I can now check which PPs are shared and which are not :
  • sdiff_layout2_modifed_1

  • The pooladm command can give you stats about a linked clone. My understanding of the owned block count tells me that 78144 SSP blocks (not PPs) (so blocks of 4k) are unique to this linked clone and not shared with the captured image (see the sketch after this list) :
  • vios1#pooladm file stat /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Path: /var/vio/SSP/powervc_cluster/D_E_F_A_U_L_T_061310/VOL1/volume-boot-9117MMD_658B2AD-layo583b3eb5e98b495b992fdc3accc39bc3.54c172062957d73ec92e90d203d23fba
    Size            57982058496
    Number Blocks   14156655
    Data Blocks     14155776
    Pool Block Size 4096
    
    Tier: SYSTEM
    Owned Blocks    78144
    Max Blocks      14156655
    Block Debt      14078511
    
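    If you prefer numbers over the visual sdiff, here is a minimal sketch using the two sorted files created above : comm -12 prints the PP lines present in both layouts (PPs still shared with the captured image) and comm -13 prints the PP lines present only in the linked clone (PPs rewritten since the clone was created). The last command simply converts the owned block count reported by pooladm file stat into megabytes (4096 bytes per block). This comm based counting is my own idea, not a pooladm or PowerVC feature, so take it as an illustration only :

    # comm -12 /tmp/pp_captured_vm.layout /tmp/pp_linked_clone.layout | wc -l
    # comm -13 /tmp/pp_captured_vm.layout /tmp/pp_linked_clone.layout | wc -l
    # echo $(( 78144 * 4096 / 1024 / 1024 ))
    305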

    Mixed NPIV & SSP deployment

    For some machines with an I/O intensive workload it can be useful to put your data luns on NPIV adapters. I'm currently working on a project involving PowerVC and the question was asked : why not mix SSP luns for rootvg and NPIV based luns for the data volume groups ? One more time it's very simple with PowerVC, just attach a volume, this time by choosing your Storage Volume Controller provider ... easy :

    mixed1_masked

    This will create the NPIV adapters and the new zoning and masking on the fibre channel switches. One more time, easy ....

    Debugging ?

    I'll not lie. I had a lot of problems with Shared Storage Pools and PowerVC, but these problems were related to my configuration moving a lot during the tests. Always remember that you'll learn from these errors, and in my case it helped me a lot to debug PowerVC :

    • From the Virtual I/O Server side, check that you have no core file in the /home/ios/logs directory. A core file in this directory indicates that one of the commands run by PowerVC just "cored" :
    root@vios1:/home/ios/logs# ls core*
    core.9371682.18085943
    
  • From the Virtual I/O Server side, check the /home/ios/logs/viosvc.log file. You can check all the xml files and all the outputs used by the vioservice command. Most PowerVC actions are performed through the vioservice command ....
  • root@vios1:/home/ios/logs# ls -l viosvc.log
    -rw-r--r--    1 root     system     10240000 Sep 11 00:28 viosvc.log
    
  • Step by step, check that all PowerVC actions are ok. For instance verify with the lsrep command that the iso has been copied from PowerVC to the Virtual I/O Server media library. Check that there is space left on the Shared Storage Pool ....
  • Sometimes the secret vioservice API is stuck and not responding. In some cases it can be useful to rebuild the SolidDB ... I'm using this script to do it (run it as root) :
  • # cat rebuilddb.sh
    #!/usr/bin/ksh
    set -x
    stopsrc -s vio_daemon
    sleep 30
    rm -rf /var/vio/CM
    startsrc -s vio_daemon
    
  • EDIT: I had more information from IBM regarding the method to rebuild the SolidDB; using my script alone won't bring the SolidDB back up properly and could leave you in a bad state. Just add this at the end of the script :
  • pid=$(lssrc -s vio_daemon | awk 'NR==2 {print $2}')
    kill -1 $pid  
    
  • On the PowerVC side, when you have a problem it is always good to increase the verbosity of the logs (located in /var/log), in this case for nova (restart the PowerVC services after setting the verbosity level, see the sketch after this list) :
  • # openstack-config --set /etc/nova/nova-9117MMD_658B2AD.conf DEFAULT default_log_levels powervc_nova=DEBUG,powervc_k2=DEBUG,nova=DEBUG
    
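    After changing the log level, restart the PowerVC services so it is taken into account, then watch the nova logs while replaying the failing action. This is only a sketch : I'm assuming here that the powervc-services wrapper accepts the stop and start arguments on your PowerVC release, so double check it on your own installation :

    # powervc-services stop
    # powervc-services start
    # ls -ltr /var/log/nova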

    Conclusion

    It took me more than two months to write this post. Why ? Just because the PowerVC design is not documented. It works like a charm, but nobody will explain to you HOW. I hope this post will help you understand how PowerVC is working. I'm a huge fan of PowerVC and SSP, try it by yourself and you'll see that it is a pleasure to use. It's simple, efficient, and powerful. Can anybody give me access to a PowerKVM host to write & prove that PowerVC is also simple and efficient with PowerKVM ... ?

    An overview of the IBM Technical Collaboration Council for PowerSystems 2014


    Eight to ten months ago I decided to change my job, for better or for worse. Talking about the better, I had the chance to be enrolled in the Technical Collaboration Council for Power Systems (I'll not talk about the worse … this could take me hours to explain..). The Technical Collaboration Council is not well known in Europe, and not well known for Power Systems, and I think writing this blog post may offer better worldwide visibility to the Technical Collaboration Council. It deserves a blog post :-).

    To be clear, and to avoid any problem : to participate in the meeting you first have to sign a Non Disclosure Agreement. A lot of presentations are still IBM confidential. That said, I have signed this NDA, so I cannot talk about the content of the meeting. Sure, there are a lot of things to say but I have to keep them to myself … :-)

    3
    But what exactly is the Technical Collaboration Council ? This annual meeting takes place in Austin, Texas, at the home of Power Systems :-). It lasts one week, from Monday to Friday. The Technical Collaboration Council invites the biggest IBM customers from all over the world. For a guy like me, so involved in this community, coming here was a great opportunity and a way to spread the word about my blog and my participation in the Power community. In fact we were just a few people coming from Europe and a lot of US guys. The TCC looks like an IBM Technical University, but better … because you can participate during the meeting and answer a lot of surveys about the shape of things to come for Power Systems :-) :-) :-) .

    Here is what you can see and do when you come to the TCC Power. And for me it's exciting !!! :

    • Meetings about trends and directions for Power Systems (overview of new products (hardware and software), new functionality and new releases coming in the next year).
    • Direct access to the IBM lab. You can go and ask the lab about a particular feature you need, or about something you didn't understand. For instance I had a quick meeting with the PowerVC guys (not only guys, sorry Christine) about my needs for the next few months. Another one : I had the chance to talk to the head manager of AIX and ask him about a few things I'd like to see in the new version of AIX (who said an installation over http ?).
    • Big “names” of Power are here, they share and talk : Doug Ballog, Satya Sharma. Seeing them is always impressive !
    • Interaction and sharing with other customers : like me, a lot of customers were at the TCC sharing about how they do things and how they use their Power Systems, which is ALWAYS useful. I had a few interesting conversations with guys from another big bank with the same constraints as me.
    • You can say what you think. IBM is waiting for your feedback .. positive or negative.
    • Demos and hands-on sessions with new products and new functionality (remember the IBM Provisioning Toolkit for PowerVM & a cool LPM scheduler presented by the STG Lab Services guys).
    • Possibility to enroll for beta programs … (in my case HMC)
    • You can finally meet in real life the guys you have had on the phone or by mail for a couple of years. It's always useful !
    • And of course lot of fun :-)

    I had the chance to talk about my experience with PowerVC in front of all the TCC members. It was very stressful for a French guy like me … and I just had a few minutes to prepare … Hope it was good, but it was a great experience. You can do things like this at the TCC … you think PowerVC is good, just go on stage and give a 15 minute talk about it … :-)

    4

    The Technical Collaboration Council is not just about technical stuff and work. You can also have a lot of fun talking to IBM guys and customers. There are a lot of moments when people can eat and drink together, and the possibility to share about everything is always there. And if I had to remember only one thing about the Technical Collaboration Council it would be that it is a great moment of sharing with others, and not just about work and Power Systems. That said, I wanted to thank IBM and a lot of people for their kindness, their availability and all the fun they gave us during this week. So thanks to : Philippe H., Patrice P., Satya S., Jay K., Carl B., Eddy S., Natalie M, Christine W, François L, Rosa D … and sorry for those I've forgotten :-). And never forget that Power is performance redefined.

    Ok, one last word. Maybe some of the customers who were there this year are going to read this post, and I encourage you to react and to post comments. The Red Hat motto is "We grow when we share", but at such events I am (and we are) growing when IBM is sharing. People may think that IBM does not share … I disagree :-). They are doing it and they are doing it well ! And never forget that the Power community is still alive and ready to rock ! So please raise your voice about it. In these times of social media we have to prove to IBM and to the world that this community is growing, is great, and is ready to share.
    One last thing : the way of working in the US seems to be very different from the way we do things in Europe … it could be cool to move to the US.

    Configuration of a Remote Restart Capable partition


    How can we move a partition to another machine if the machine or the data-center on which the partition is hosted is totally unavailable ? This question is often asked by managers and technical people. Live Partition Mobility can't answer this question because the source machine needs to be running to initiate the mobility. I'm sure that most of you are implementing a manual solution based on a bunch of scripts recreating the partition profile by hand, but this is hard to maintain, not fully automated and not supported by IBM. A solution to this problem is to set up your partitions as Remote Restart Capable partitions. This PowerVM feature has been available since the release of VMcontrol (the IBM Systems Director plugin). Unfortunately this powerful feature is not well documented, but it will probably, in the next months or in the next year, be a must have on your newly deployed AIX machines. One last word : with the new Power8 machines things are going to change about remote restart, the functionality will be easier to use and a lot of prerequisites are going to disappear. Just to be clear, this post has been written using Power7+ 9117-MMD machines; the only thing you can't do with these machines (compared to Power8 ones) is changing an existing partition to be remote restart capable without having to delete and recreate its profile.

    Prerequisites

    To create and use a remote restart partition on Power7+/Power8 machines you'll need these prerequisites (a quick way to check them from the HMC is sketched after this list) :

    • A PowerVM Enterprise Edition license (the capability "PowerVM remote restart capable" must be true; be careful, there is another capability named "Remote restart capable" which was used by VMcontrol only, so double check which capability is the right one for you).
    • A firmware level of 780 or later (all Power8 firmwares are ok, all Power7 firmwares >= 780 are ok).
    • Your source and destination machines must be connected to the same Hardware Management Console, you can't remote restart between two HMCs at the moment.
    • The minimum version of the HMC is 8R8.0.0. Check that you have the rrstartlpar command (not the rrlpar command used by VMcontrol only).
    • Better than a long post, check this video (don't laugh at me, I'm trying to do my best but this is one of my first videos …. hope it is good) :
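    A quick, hedged way to check most of these prerequisites from the HMC command line (the machine name is of course an example; the exact capability string can differ between HMC levels, which is why I grep loosely) :

    $ lssyscfg -r sys -m source-machine -F capabilities | grep -i remote_restart
    $ lshmc -V
    $ rrstartlpar --help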

    What is a remote restart capable virtual machine ?

    Better than a long text, check the picture below and follow each number from 1 to 4 to understand what a remote restart partition is :

    remote_restart_explanation

    Create the profile of your remote restart capable partition : Power7 vs Power8

    A good reason to move to Power8 faster than you planned is that you can change a virtual machine to be remote restart capable without having to recreate the whole profile. I don't know why, but at the time of writing this post changing a non remote restart capable lpar to a remote restart capable lpar is only possible on Power8 systems. If you are using a Power7 machine (like me in all the examples below) be careful to check this option while creating the machine. Keep in mind that if you forget to check the option you will not be able to enable the remote restart capability afterwards, and you will unfortunately have to remove your profile and recreate it, sad but true … :

    • Don’t forget to check the check box to allow the partition to be remote restart capable :
    • remote_restart_capable_enabled1

    • After the partition is created you can notice in the I/O tab that remote restart capable partitions are not able to own any physical I/O adapter :
    • rr2_nophys

    • You can check in the properties that the remote restart capable feature is activated :
    • remote_restart_capable_activated

    • If you try to modify an existing profile on a Power7 machine you'll get this error message. On a Power8 machine there will be no problem :
    # chsyscfg -r lpar -m XXXX-9117-MMD-658B2AD -p test_lpar -i remote_restart_capable=1
    An error occurred while changing the partition named test_lpar.
    The managed system does not support changing the remote restart capability of a partition. You must delete the partition and recreate it with the desired remote restart capability.
    
  • You can verify that some of your lpars are remote restart capable :
  • lssyscfg -r lpar -m source-machine -F name,remote_restart_capable
    [..]
    lpar1,1
    lpar2,1
    lpar3,1
    remote-restart,1
    [..]
    
  • On a Power7 machine the best way to enable remote restart on an already created machine is to delete the partition and recreate it by hand, adding the remote restart attribute :
  • Get the current partition profile :
  • $ lssyscfg -r prof -m s00ka9927558-9117-MMD-658B2AD --filter "lpar_names=temp3-b642c120-00000133"
    name=default_profile,lpar_name=temp3-b642c120-00000133,lpar_id=11,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_id=0,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/2/s00ia9927560/32/0,virtual_eth_adapters=32/0/1659//0/0/vdct/facc157c3e20/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/1/s00ia9927559/32/c050760727c5007a,c050760727c5007b/0"",""4/client/1/s00ia9927559/35/c050760727c5007c,c050760727c5007d/0"",""5/client/2/s00ia9927560/34/c050760727c5007e,c050760727c5007f/0"",""6/client/2/s00ia9927560/35/c050760727c50080,c050760727c50081/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,electronic_err_reporting=null,sriov_eth_logical_ports=none
    
  • Remove the partition :
  • $ chsysstate -r lpar -o shutdown --immed -m source-server -n temp3-b642c120-00000133
    $ rmsyscfg -r lpar -m source-server -n temp3-b642c120-00000133
    
  • Recreate the partition with the remote restart attribute enabled :
  • mksyscfg -r lpar -m s00ka9927558-9117-MMD-658B2AD -i 'name=temp3-b642c120-00000133,profile_name=default_profile,remote_restart_capable=1,lpar_id=11,lpar_env=aixlinux,all_resources=0,min_mem=8192,desired_mem=8192,max_mem=8192,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,mem_mode=ded,mem_expansion=0.0,hpt_ratio=1:128,proc_mode=shared,min_proc_units=2.0,desired_proc_units=2.0,max_proc_units=2.0,min_procs=4,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,shared_proc_pool_name=DefaultPool,affinity_group_id=none,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=64,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=3/client/2/s00ia9927560/32/0,virtual_eth_adapters=32/0/1659//0/0/vdct/facc157c3e20/all/0,virtual_eth_vsi_profiles=none,"virtual_fc_adapters=""2/client/1/s00ia9927559/32/c050760727c5007a,c050760727c5007b/0"",""4/client/1/s00ia9927559/35/c050760727c5007c,c050760727c5007d/0"",""5/client/2/s00ia9927560/34/c050760727c5007e,c050760727c5007f/0"",""6/client/2/s00ia9927560/35/c050760727c50080,c050760727c50081/0""",vtpm_adapters=none,hca_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default,sriov_eth_logical_ports=none'
    

    Creating a reserved storage device

    The reserved storage device pool is used to store the configuration data of the remote restart partitions. At the time of writing this post those devices are mandatory and, as far as I know, they are used just to store the configuration and not the state (memory state) of the virtual machines themselves (maybe in the future, who knows ?). You can't create or boot any remote restart partition if you do not have a reserved storage device pool, so do this before doing anything else :

    • You first have to find, on both Virtual I/O Servers and on both machines (the source and destination machines used for the remote restart operation), a bunch of devices. These have to be the same on all the Virtual I/O Servers used for the remote restart operation. The lsmemdev command is used to find those devices :
    vios1$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios2$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios3$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    vios4$ lspv | grep -iE "hdisk988|hdisk989|hdisk990"
    hdisk988         00ced82ce999d6f3                     None
    hdisk989         00ced82ce999d960                     None
    hdisk990         00ced82ce999dbec                     None
    
    $ lsmemdev -r avail -m source-machine -p vios1,vios2
    [..]
    device_name=hdisk988,redundant_device_name=hdisk988,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,redundant_capable=1
    device_name=hdisk989,redundant_device_name=hdisk989,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E6000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E6000000000000,redundant_capable=1
    device_name=hdisk990,redundant_device_name=hdisk990,size=61440,type=phys,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E7000000000000,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E7000000000000,redundant_capable=1
    [..]
    $ lsmemdev -r avail -m dest-machine -p vios3,vios4
    [..]
    device_name=hdisk988,redundant_device_name=hdisk988,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E5000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E5000000000000,redundant_capable=1
    device_name=hdisk989,redundant_device_name=hdisk989,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E6000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E6000000000000,redundant_capable=1
    device_name=hdisk990,redundant_device_name=hdisk990,size=61440,type=phys,phys_loc=U2C4E.001.DBJN914-P2-C2-T1-W500507680140F32C-L3E7000000000000,redundant_phys_loc=U2C4E.001.DBJN914-P2-C1-T1-W500507680140F32C-L3E7000000000000,redundant_capable=1
    [..]
    
  • Create the reserved storage device pool using the chhwres command on the Hardware Management Console (create it on all machines used by the remote restart operation) :
  • $ chhwres -r rspool -m source-machine -o a -a vios_names=\"vios1,vios2\"
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk988 --manual
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk989 --manual
    $ chhwres -r rspool -m source-machine -o a -p vios1 --rsubtype rsdev --device hdisk990 --manual
    $ lshwres -r rspool -m source-machine --rsubtype rsdev
    device_name=hdisk988,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,is_redundant=1,redundant_device_name=hdisk988,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,lpar_id=none,device_selection_type=manual
    device_name=hdisk989,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E6000000000000,is_redundant=1,redundant_device_name=hdisk989,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E6000000000000,lpar_id=none,device_selection_type=manual
    device_name=hdisk990,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Inactive,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E7000000000000,is_redundant=1,redundant_device_name=hdisk990,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E7000000000000,lpar_id=none,device_selection_type=manual
    $ lshwres -r rspool -m source-machine
    "vios_names=vios1,vios2","vios_ids=1,2"
    
  • You can also create the reserved storage device pool from the Hardware Management Console GUI :
  • After selecting the Virtual I/O Server, click select devices :
  • rr_rsd_pool_p

  • Choose the maximum and minimum size to filter the devices you can select for the creation of the reserved storage device :
  • rr_rsd_pool2_p

  • Choose the disks you want to put in your reserved storage device pool (put all the devices used by remote restart partitions in manual mode, automatic devices are used by suspend/resume operations or AMS pools; one device cannot be shared by two remote restart partitions) :
  • rr_rsd_pool_waiting_3_p
    rr_pool_create_7_p

  • You can check afterwards that your reserved storage device pool is created and is composed of three devices :
  • rr_pool_create_9
    rr_pool_create_8_p

    Select a storage device for each remote restart partition before starting it :

    After creating the reserved storage device pool you have to select a device from the pool for every partition. This device will be used to store the configuration data of the partition :

    • Note that you cannot start the partition if no device has been selected !
    • To select the correct device size you first have to calculate the needed space for every partition using the lsrsdevsize command. This size is roughly the max memory value set in the partition profile (don't ask me why) :
    $ lsrsdevsize -m source-machine -p temp3-b642c120-00000133
    size=8498
    
  • Select the device you want to assign to your machine (in my case there was already a device selected for this machine) :
  • rr_rsd_pool_assign_p

  • Then select the machine you want to assign the device to :
  • rr_rsd_pool_assign2_p

  • Or do this in command line :
  • $ chsyscfg -r lpar -m source-machine -i "name=temp3-b642c120-00000133,primary_rs_vios_name=vios1,secondary_rs_vios_name=vios2,rs_device_name=hdisk988"
    $ lssyscfg -r lpar -m source-machine --filter "lpar_names=temp3-b642c120-00000133" -F primary_rs_vios_name,secondary_rs_vios_name,curr_rs_vios_name
    vios1,vios2,vios1
    $ lshwres -r rspool -m source-machine --rsubtype rsdev
    device_name=hdisk988,vios_name=vios1,vios_id=1,size=61440,type=phys,state=Active,phys_loc=U2C4E.001.DBJN916-P2-C1-T1-W500507680140F32C-L3E5000000000000,is_redundant=1,redundant_device_name=hdisk988,redundant_vios_name=vios2,redundant_vios_id=2,redundant_state=Active,redundant_phys_loc=U2C4E.001.DBJN916-P2-C2-T1-W500507680140F32C-L3E5000000000000,lpar_name=temp3-b642c120-00000133,lpar_id=11,device_selection_type=manual
    

    Launch the remote restart operation

    All the remote restart operations are launched from the Hardware Management Console with the rrstartlpar command. At the time of writing this post there is no GUI function to remote restart a machine and you can only do it from the command line :

    Validation

    As with a Live Partition Mobility move, you can validate a remote restart operation before running it. You can only perform the remote restart operation if the machine hosting the remote restart partition is shut down or in error, so the validation is very useful and mandatory to check that your remote restart machines are well configured without having to stop the source machine (a sketch validating every remote restart capable partition in one go follows the commands below) :

    $ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar
    $ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar -d 5
    $ rrstartlpar -o validate -m source-machine -t dest-machine -p rrlpar --redundantvios 2 -d 5 -v
    
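    If you want to validate every remote restart capable partition of the source machine in one go, here is a small sketch combining the two commands shown earlier (the filter on remote_restart_capable=1 comes from the profile checks above; -d 5 is just the detail level used in the examples) :

    $ for lpar in $(lssyscfg -r lpar -m source-machine -F name,remote_restart_capable | grep ",1$" | cut -d, -f1)
    > do
    >   echo "validating ${lpar}"
    >   rrstartlpar -o validate -m source-machine -t dest-machine -p ${lpar} -d 5
    > done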

    Execution

    As I said before, the remote restart operation can only be performed if the source machine is in a particular state; the states that allow a remote restart operation are :

    • Power Off.
    • Error.
    • Error – Dump in progress state.

    So the only way to test a remote restart operation today is to shut down your source machine :

    • Shutdown the source machine :
    • step1

    $ chsysstate -m source-machine -r sys  -o off --immed
    

    rr_step2_mod

  • You can next check on the Hardware Management Console that the Virtual I/O Servers and the remote restart lpar are in the "Not available" state. You're now ready to remote restart the lpar (if the partition id is already used on the destination machine the next available one will be used) (you have to wait a little before remote restarting the partition, check below) :
  • $ rrstartlpar -o restart -m source-machine -t dest-machine -p rrlpar -d 5 -v
    HSCLA9CE The managed system is not in a valid state to support partition remote restart operations.
    $ rrstartlpar -o restart -m source-machine -t dest-machine -p rrlpar -d 5 -v
    Warnings:
    HSCLA32F The specified partition ID is no longer valid. The next available partition ID will be used.
    

    step3
    rr_step4_mod
    step5

    Cleanup

    When the source machine is ready to come back up (after an outage for instance) just boot the machine and its Virtual I/O Servers. After the machine is up you can notice that the rrlpar profile is still there, and it can be a huge problem if somebody tries to boot this partition because it is already running on the other machine after the remote restart operation. To prevent such an error you have to clean up your remote restart partition by using the rrstartlpar command again. Be careful not to check the option to boot the partitions after the machine is started :

    • Restart the source machine and its Virtual I/O Servers :
    $ chsysstate -m source-machine -r sys -o on
    $ chsysstate -r lpar -m source-machine -n vios1 -o on -f default_profile
    $ chsysstate -r lpar -m source-machine -n vios2 -o on -f default_profile
    

    rr_step6_mod

  • Perform the cleanup operation to remove the profile of the remote restart partition (if you want to LPM the machine back later you have to keep the device in the reserved storage device pool; if you do not use the --retaindev option the device will be automatically removed from the pool) :
  • $ rrstartlpar -o cleanup -m source-machine -p rrlpar --retaindev -d 5 -v --force
    

    rr_step7_mod

    Refresh the partition and profile data

    During my tests I encountered a problem : the configuration was not correctly synced between the device used in the reserved storage device pool and the current partition profile. I had to use a command named refdev (for refresh device) to synchronize the partition and profile data to the storage device.

    $ refdev -m source-machine -p temp3-b642c120-00000133 -v
    

    What’s in the reserved storage device ?

    I’m a curious guy. After playing with remote restart I asked myself a question, what is really stored in the reserved device storage device assigned to the remote restart partition. Looking in the documentation on the internet does not answer to my question so I had to look on it on my own. By ‘dding” the reserved storage device assigned to a partition I realized that the profile is stored in xml format. Maybe this format is the same format that the one used by the HMC 8 templates library. For the moment and during my tests on Power7+ machine the state of the memory of the partition is not transferred to the destination machine, maybe because I had to shutdown the whole source machine to test. Maybe the memory state of the machine is transferred to the destination machine if this one is in error state or is dumping. I had not chance to test this :

    root@vios1:/home/padmin# dd if=/dev/hdisk17 of=/tmp/hdisk17.out bs=1024 count=10
    10+0 records in
    10+0 records out
    root@vios1:/home/padmin# more hdisk17.out
    [..]
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    BwEAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACgDIAZAAAQAEAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" Profile="H4sIAAAAA
    98VjxbxEAhNaZEqpEptPS/iMJO4cTJBdHVj38zcYvu619fTGQlQVmxY0AUICSH4A5XYorJgA1I3sGMBCx5Vs4RNd2zgXI89tpNMxslIiRzPufec853zfefk/t/osMfRBYPZRbpuF9ueUTQsShxR1NSl9dvEEPPMMgnfvPnVk
    a2ixplLuOiVCHaUKn/yYMv/PY/ydTRuv016TbgOzdVv4w6+KM0vyheMX62jgq0L7hsCXtxBH6J814WoZqRh/96+4a+ff3Br8+o3uTE0pqJZA7vYoKKnOgYnNoSsoiPECp7KzHfELTQV/lnBAgt0/Fbfs4Wd1sV+ble7Lup/c
    be0LQj01FJpoVpecaNP15MhHxpcJP8al6b7fg8hxCnPY68t8LpFjn83/eKFhcffjqF8DRUshs0almioaFK0OfHaUKCue/1GcN0ndyfg9/fwsyzQ6SblellXK6RDDaIIwem6L4iXCiCfCuBZxltFz6G4eHed2EWD2sVVx6Mth
    eEOtnzSjQoVwLbo2+uEf3T/s2emPv3z4xA16eD0AC6oRN3FXNnYoA6U7y3OfFc1g5hOIiTQsVUHSusSc43QVluEX2wKdKJZq4q2YmJXEF7hhuqYJA0+inNx3YTDab2m6T7vEGpBlAaJnU0qjWofTkj+uT2Tv3Rl69prZx/9s
    thQTBMK42WK7XSzrizqFhPL5E6FeHGVhnSJQLlKKreab1l6z9MwF0C/jTi3OfmKCsoczcJGwITgy+f74Z4Lu2OU70SDyIdXg1+JAApBWZoAbLaEj4InyonZIDbjvZGwv3H5+tb7C5tPThQA9oUdsCN0HsnWoLxWLjPHAdJSp
    Ja45pBarVb3JDyUJOn3aemXcIqtUfgPi3wCuiw76tMh6mVtNVDHOB+BxqEUDWZGtPgPrFc9oBgBhhJzEdsEVI9zC1gr0JTexhwgThzIwYEG7lLbt3dcPyHQLKQqfGzVsSNzVSvenkDJU/lUoiXGRNrdxLy2soyhtcNX47INZ
    nHKOCjYfsoeR3kpm58GdYDVxipIZXDgSmhfCDCPlKZm4dZoVFORzEX0J6CLvK4py6N7Pz94yiXlPBAArd3zqIEtjXFZ4izJzQ44sCv7hh3bTnY5TbKdnOtHGtatTjrEynTuWFNXV3ouaUKIIKfDgE5XrrpWb/SHWyWCbXMM5
    DkaHNzXVJws6csK57jnpToLopiQLZdgHJJh9wm+M+wbof7GzSRJBYvAAaV0RvE8ZlA5yxSob4fAiJiNNwwQAwu2y5/O881fvvz3HxgK70ZDwc1FS8JezBgKR0e/S4XR3ta8OwmdS56akXJITAmYBpElF5lZOdlXuO+8N0opU
    m0HeJTw76oiD8PS9QfRECUYqk0B1KGkZ+pRGQPUhPFEb12XIoe7u4WXuwdVqTAnZT8gyYrvAPlL/sYG4RkDmAx5HFZpFIVnAz9Lrlyh9tFIc4nZAColOLNGdFRKmE8GJd5zZx++zMiAoTOWNrJvBjODNo1UOGuXngzcHWjrn
    LgmkxjBXLj+6Fjy1DHFF0zV6lVH/p+VYO6pbZzYD9/ORFLouy6MwvlGuRz8Qz10ugawprAdtJ4GxWAOtmQjZXJ+Lg58T/fDy4K74bYWr9CyLIVdQiplHPLbjinZRu4BZuAENE6jxTP2zNkBVgfiWiFcv7f3xYjFqxs/7vb0P
     lpar_name="rrlpar" lpar_uuid="0D80582A44F64B43B2981D632743A6C8" lpar_uuid_gen_method="0"><SourceLparConfig additional_mac_addr_bases="" ame_capability="0" auto_start_e
    rmal" conn_monitoring="0" desired_proc_compat_mode="default" effective_proc_compat_mode="POWER7" hardware_mem_encryption="10" hardware_mem_expansion="5" keylock="normal
    "4" lpar_placement="0" lpar_power_mgmt="0" lpar_rr_dev_desc="	<cpage>		<P>1</P>
    		<S>51</S>
    		<VIOS_descri
    00010E0000000000003FB04214503IBMfcp</VIOS_descriptor>
    	</cpage>
    " lpar_rr_status="6" lpar_tcc_slot_id="65535" lpar_vtpm_status="65535" mac_addres
    x_virtual_slots="10" partition_type="rpa" processor_compatibility_mode="default" processor_mode="shared" shared_pool_util_authority="0" sharing_mode="uncapped" slb_mig_
    ofile="1" time_reference="0" uncapped_weight="128"><VirtualScsiAdapter is_required="false" remote_lpar_id="2" src_vios_slot_number="4" virtual_slot_number="4"/><Virtual
    "false" remote_lpar_id="1" src_vios_slot_number="3" virtual_slot_number="3"/><Processors desired="4" max="8" min="1"/><VirtualFibreChannelAdapter/><VirtualEthernetAdapt
    " filter_mac_address="" is_ieee="0" is_required="false" mac_address="82776CE63602" mac_address_flags="0" qos_priority="0" qos_priority_control="false" virtual_slot_numb
    witch_id="1" vswitch_name="vdct"/><Memory desired="8192" hpt_ratio="7" max="16384" memory_mode="ded" min="256" mode="ded" psp_usage="3"><IoEntitledMem usage="auto"/></M
     desired="200" max="400" min="10"/></SourceLparConfig></SourceLparInfo></SourceInfo><FileInfo modification="0" version="1"/><SriovEthMappings><SriovEthVFInfo/></SriovEt
    VirtualFibreChannelAdapterInfo/></VfcMappings><ProcPools capacity="0"/><TargetInfo concurr_mig_in_prog="-1" max_msp_concur_mig_limit_dynamic="-1" max_msp_concur_mig_lim
    concur_mig_limit="-1" mpio_override="1" state="nonexitent" uuid_override="1" vlan_override="1" vsi_override="1"><ManagerInfo/><TargetMspInfo port_number="-1"/><TargetLp
    ar_name="rrlpar" processor_pool_id="-1" target_profile_name="mig3_9117_MMD_10C94CC141109224549"><SharedMemoryConfig pool_id="-1" primary_paging_vios_id="0"/></TargetLpa
    argetInfo><VlanMappings><VlanInfo description="VkVSU0lPTj0xClZJT19UWVBFPVZFVEgKVkxBTl9JRD0zMzMxClZTV0lUQ0g9dmRjdApCUklER0VEPXllcwo=" vlan_id="3331" vswitch_mode="VEB" v
    ibleTargetVios/></VlanInfo></VlanMappings><MspMappings><MspInfo/></MspMappings><VscsiMappings><VirtualScsiAdapterInfo description="PHYtc2NzaS1ob3N0PgoJPGdlbmVyYWxJbmZvP
    mVyc2lvbj4KCQk8bWF4VHJhbmZlcj4yNjIxNDQ8L21heFRyYW5mZXI+CgkJPGNsdXN0ZXJJRD4wPC9jbHVzdGVySUQ+CgkJPHNyY0RyY05hbWU+VTkxMTcuTU1ELjEwQzk0Q0MtVjItQzQ8L3NyY0RyY05hbWU+CgkJPG1pb
    U9TcGF0Y2g+CgkJPG1pblZJT1Njb21wYXRhYmlsaXR5PjE8L21pblZJT1Njb21wYXRhYmlsaXR5PgoJCTxlZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4xPC9lZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4KCTwvZ2VuZ
    TxwYXJ0aXRpb25JRD4yPC9wYXJ0aXRpb25JRD4KCTwvcmFzPgoJPHZpcnREZXY+CgkJPHZEZXZOYW1lPnJybHBhcl9yb290dmc8L3ZEZXZOYW1lPgoJCTx2TFVOPgoJCQk8TFVBPjB4ODEwMDAwMDAwMDAwMDAwMDwvTFVBP
    FVOU3RhdGU+CgkJCTxjbGllbnRSZXNlcnZlPm5vPC9jbGllbnRSZXNlcnZlPgoJCQk8QUlYPgoJCQkJPHR5cGU+dmRhc2Q8L3R5cGU+CgkJCQk8Y29ubldoZXJlPjE8L2Nvbm5XaGVyZT4KCQkJPC9BSVg+CgkJPC92TFVOP
    gkJCTxyZXNlcnZlVHlwZT5OT19SRVNFUlZFPC9yZXNlcnZlVHlwZT4KCQkJPGJkZXZUeXBlPjE8L2JkZXZUeXBlPgoJCQk8cmVzdG9yZTUyMD50cnVlPC9yZXN0b3JlNTIwPgoJCQk8QUlYPgoJCQkJPHVkaWQ+MzMyMTM2M
    DAwMDAwMDAwMDNGQTA0MjE0NTAzSUJNZmNwPC91ZGlkPgoJCQkJPHR5cGU+VURJRDwvdHlwZT4KCQkJPC9BSVg+CgkJPC9ibG9ja1N0b3JhZ2U+Cgk8L3ZpcnREZXY+Cjwvdi1zY3NpLWhvc3Q+" slot_number="4" sou
    _slot_number="4"><PossibleTargetVios/></VirtualScsiAdapterInfo><VirtualScsiAdapterInfo description="PHYtc2NzaS1ob3N0PgoJPGdlbmVyYWxJbmZvPgoJCTx2ZXJzaW9uPjIuNDwvdmVyc2lv
    NjIxNDQ8L21heFRyYW5mZXI+CgkJPGNsdXN0ZXJJRD4wPC9jbHVzdGVySUQ+CgkJPHNyY0RyY05hbWU+VTkxMTcuTU1ELjEwQzk0Q0MtVjEtQzM8L3NyY0RyY05hbWU+CgkJPG1pblZJT1NwYXRjaD4wPC9taW5WSU9TcGF0
    YXRhYmlsaXR5PjE8L21pblZJT1Njb21wYXRhYmlsaXR5PgoJCTxlZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4xPC9lZmZlY3RpdmVWSU9TY29tcGF0YWJpbGl0eT4KCTwvZ2VuZXJhbEluZm8+Cgk8cmFzPgoJCTxwYXJ0
    b25JRD4KCTwvcmFzPgoJPHZpcnREZXY+CgkJPHZEZXZOYW1lPnJybHBhcl9yb290dmc8L3ZEZXZOYW1lPgoJCTx2TFVOPgoJCQk8TFVBPjB4ODEwMDAwMDAwMDAwMDAwMDwvTFVBPgoJCQk8TFVOU3RhdGU+MDwvTFVOU3Rh
    cnZlPm5vPC9jbGllbnRSZXNlcnZlPgoJCQk8QUlYPgoJCQkJPHR5cGU+dmRhc2Q8L3R5cGU+CgkJCQk8Y29ubldoZXJlPjE8L2Nvbm5XaGVyZT4KCQkJPC9BSVg+CgkJPC92TFVOPgoJCTxibG9ja1N0b3JhZ2U+CgkJCTxy
    UlZFPC9yZXNlcnZlVHlwZT4KCQkJPGJkZXZUeXBlPjE8L2JkZXZUeXBlPgoJCQk8cmVzdG9yZTUyMD50cnVlPC9yZXN0b3JlNTIwPgoJCQk8QUlYPgoJCQkJPHVkaWQ+MzMyMTM2MDA1MDc2ODBDODAwMDEwRTAwMDAwMDAw
    ZmNwPC91ZGlkPgoJCQkJPHR5cGU+VURJRDwvdHlwZT4KCQkJPC9BSVg+CgkJPC9ibG9ja1N0b3JhZ2U+Cgk8L3ZpcnREZXY+Cjwvdi1zY3NpLWhvc3Q+" slot_number="3" source_vios_id="1" src_vios_slot_n
    tVios/></VirtualScsiAdapterInfo></VscsiMappings><SharedMemPools find_devices="false" max_mem="16384"><SharedMemPool/></SharedMemPools><MigrationSession optional_capabil
    les" recover="na" required_capabilities="veth_switch,hmc_compatibilty,proc_compat_modes,remote_restart_capability,lpar_uuid" stream_id="9988047026654530562" stream_id_p
    on>
    
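    As a side note, the description="..." attributes you can see in this XML (the vscsi and vlan descriptors) are just base64 encoded text. Here is a quick sketch of how to read one of them, using the VlanInfo descriptor visible in the dump above (I decoded it on a Linux box with the base64 command; on AIX you can use openssl base64 instead) :

    # echo "VkVSU0lPTj0xClZJT19UWVBFPVZFVEgKVkxBTl9JRD0zMzMxClZTV0lUQ0g9dmRjdApCUklER0VEPXllcwo=" | base64 -d
    VERSION=1
    VIO_TYPE=VETH
    VLAN_ID=3331
    VSWITCH=vdct
    BRIDGED=yes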

    About the state of the source machine ?

    You have to know this before using remote restart : at the time of writing this post the remote restart feature is still young and has to evolve before being usable in real life. I'm saying this because the FSP of the source machine has to be up to perform a remote restart operation. To be clear, the remote restart feature does not answer the total loss of one of your sites. It's just useful to restart partitions of a system with a problem that is not an FSP problem (a problem with memory DIMMs or CPUs for instance). It can be used in your DRP exercises, but not if your whole site is totally down, which is -in my humble opinion- one of the key scenarios that remote restart needs to address. Don't be afraid, read the conclusion ….

    Conclusion

    This post has been written using Power7+ machines; my goal was to give you an example of remote restart operations : a summary of what it is, how it works, and where and when to use it. I'm pretty sure that a lot of things are going to change about remote restart. First, on Power8 machines you don't have to recreate the partitions to make them remote restart aware. Second, I know that changes are on the way for remote restart on Power8 machines, especially about reserved storage devices and about the state of the source machine. I'm sure this feature has a bright future and, used with PowerVC, it can be a killer feature. Hope to see all these changes in the near future ;-). Once again I hope this post helps you.

    Automating systems deployment & other new features : HMC8, IBM Provisioning Toolkit for PowerVM and LPM Automation Tool


    I am involved in a project where we are going to deploy dozens of Power Systems (still Power7 for the moment, and Power8 in the near future). All the systems will be the same : same models with the same slot placements and the same Virtual I/O Server configuration. To be sure that all my machines are the same, and to allow other people (who are not aware of the design or are not skilled enough to do it by themselves) to deploy them, I had to find a solution to automate the deployment of the new machines. For the virtual machines the solution is now to use PowerVC, but what about the Virtual I/O Servers, what about the configuration of the Shared Ethernet Adapters ? In other words, what about the infrastructure deployment ? I spent a week with an IBM US STG Lab Services consultant (Bonnie Lebarron) for a PowerCare (you now have a PowerCare included with every high end machine you buy) about the IBM Provisioning Toolkit for PowerVM (which is a very powerful tool that allows you to deploy your Virtual I/O Servers and your virtual machines automatically) and the Live Partition Mobility Automation Tool. With the new Hardware Management Console (8R8.2.0) you now have the possibility to create templates not just for new virtual machine creation, but also to deploy, create and configure your Virtual I/O Servers. The goal of this post is to show that there are different ways to do that, but also to show you the new features embedded in the new Hardware Management Console and to spread the word about those two wonderful STG Lab Services tools that are well known in the US but not so much in Europe. So it's a HUGE post, just take what is useful for you in it. Here we go :

    Hardware Management Console 8 : System templates

    The goal of the system templates is to deploy a new server in minutes without having to log on to different servers to do some tasks; you now just have to connect to the HMC to do all the work. The system templates will deploy the Virtual I/O Server image by using your NIM server or by using the images stored in the Hardware Management Console media repository. Please note a few points :

    • You CAN'T deploy a "gold" mksysb of your Virtual I/O Server using the Hardware Management Console repository. I've tried this myself and it is for the moment impossible (if someone has a solution …). I've tried two different ways : creating a backupios image without the mksysb flag (it produces a tar file impossible to upload to the image repository, but usable by the installios command), and creating a backupios image with the mksysb flag and using the mkcd/mkdvd command to create iso images. Both methods failed at the installation process.
    • The current Virtual I/O Server images provided on Electronic Software Delivery (2.2.3.4 at the moment) are in the .udf format and not the .iso format. This is not a huge problem, just rename both files to .iso before uploading them to the Hardware Management Console.
    • If you want to deploy your own mksysb you can still choose to use your NIM server, but you will have to manually create the NIM objects and manually configure a bosinst installation (in my humble opinion what we are trying to do is to reduce manual intervention, but you can still do that for the moment; that's what I do because I don't have the choice). You'll have to give the IP address of the NIM server and the HMC will boot the Virtual I/O Servers with the network settings already configured.
    • The Hardware Management Console installation with the media repository is based on the old well known installios command. You still need to have the NIM port open between your HMC and the Virtual I/O Server management network (the one you will choose to install both Virtual I/O Servers) (installios is based on NIMOL). You may experience some problems if you have already installed your Virtual I/O Servers this way and you may have to reset some things. My advice is to always run these three commands before deploying a system template :
    # installios -F -e -R default1
    # installios -u 
    # installios -q
    

    Uploading an iso file on the Hardware Management Console

    Upload the images to the Hardware Management Console, I'll not explain this in detail …:

    hmc_add_virtual_io_server_to_repo
    hmc_add_virtual_io_server_to_repo2

    Creating a system template

    To create a system template you first have to copy an existing predefined template provided by the Hardware Management Console (1) and then edit this template to fit your own needs (2):

    create_template_1

    • You can't edit the physical I/O part when editing a new template; you first have to deploy a system with this template, choose the physical I/O for each Virtual I/O Server, and then capture the deployed system as an HMC template. Change the properties of your Virtual I/O Servers:
    • create_template_2

    • Create your Shared Ethernet Adapters : let’s say we want to create one Shared Ethernet Adapter in sharing mode with four virtual adapters :
    • Adapter 1 : PVID10, vlans=1024;1025
    • Adapter 2 : PVID11, vlans=1028;1029
    • Adapter 3 : PVID12, vlans=1032;1033
    • Adapter 4 : PVID13, vlans=1036;1037
    • In the new HMC8 the terminology has changed: a Virtual Network Bridge is a Shared Ethernet Adapter; a Load (Balance) Group is a pair of virtual adapters with the same PVID on both Virtual I/O Servers.
    • Create the Shared Ethernet Adapter with the first (PVID 10) and the second (PVID 11) adapters and the first vlan (vlan 1024 has to be added on the adapter with PVID 10):
    • create_sea1
      create_sea2
      create_sea3

    • Add the second vlan (vlan 1028) to our Shared Ethernet Adapter (Virtual Network Bridge) and choose to put it on the adapter with PVID 11 (Load Balance Group 11):
    • create_sea4
      create_sea5
      create_sea6

    • Repeat this operation for the next vlan (1032), but this time we have to create new virtual adapters with PVID 12 (Load Balance Group 12) :
    • create_sea7

    • Repeat this operation for the next vlan (1036), but this time we have to create new virtual adapters with PVID 13 (Load Balance Group 13).
    • You can check on this picture our 4 virtual adapters, with two vlans on each one:
    • create_sea8
      create_sea9

    • I'll not detail the other parts, which are very simple to understand. You can check at the end that our template is created with 2 Virtual I/O Servers and 8 virtual networks.

    The Shared Ethernet Adapter problem : Are you deploying a Power8/Power7 with a 780 firmware or a Power6/7 server ?

    When creating a system template you probably noticed that when you are defining your Shared Ethernet Adapters … sorry, your Virtual Network Bridges, there is no possibility to create any control channel adapter or to assign a vlan id for this control channel. If you create the system template by hand with the HMC, the template will be usable by all Power8 systems and by all Power7 systems with a firmware that allows you to create a Shared Ethernet Adapter without any control channel (780 firmwares). I've tried this myself and we will check that later. If you deploy such a system template on an older Power7 system the deployment will fail for this reason. You have two solutions to this problem: create your first system "by hand", create your Shared Ethernet Adapters with control channels on your own and then capture the system to redeploy it on other machines; or edit the XML of your current template to add the control channel adapter in it … no comment.

    failed_sea_ctl_chan

    If you choose to edit the template to add the control channel on your own, export your template as an xml file, edit it by hand (here is an example on the picture below), and then re-import the modified xml file:

    sea_control_channel_template

    Capture an already deployed system

    As you can see, creating a system template from scratch can be hard and cannot match all your needs, especially with this Shared Ethernet Adapter problem. My advice is to deploy your first system by hand or by using the toolkit, and then capture it to create a Hardware Management Console template based on this one. By doing this all the Shared Ethernet Adapters will be captured as configured, the ones with control channels and the ones without. It can match all the cases without having to edit the xml file by hand.

    • Click “Capture configuration as template with physical I/O” :
    • capture_template_with_physical_io

    • The whole system will be captured, and if you put your physical I/O in the same slots (as we do in my team), each time you deploy a new server you will not have to choose which physical I/O belongs to which Virtual I/O Server:
    • capture_template_with_physical_io_capturing

    • In the system template library you can check that the physical I/O was captured and that we do not have to define our Shared Ethernet Adapters (the screenshot below shows you 49 vlans ready to be deployed):
    • capture_template_library_with_physical_io_and_vlan

  • To do this don’t forget to edit the template and check the box “Use captured I/O information” :
  • use_captured_io_informations

    Deploying a system template

    BE VERY CAREFUL BEFORE DEPLOYING A SYSTEM TEMPLATE: ALL THE ALREADY EXISTING VIRTUAL I/O SERVERS AND PARTITIONS WILL BE REMOVED BY DOING THIS. THE HMC WILL PROMPT YOU WITH A WARNING MESSAGE. Go to the template library and right click on the template you want to deploy, then click deploy:

    reset_before_deploy1
    reset_before_deploy2

    • If you are deploying a “non captured” template, choose the physical I/O for each Virtual I/O Server:
    • choose_io1

    • If you are deploying a “captured” template, the physical I/O will be automatically chosen for each Virtual I/O Server:
    • choose_io2

    • The Virtual I/O Server profiles are carved here:
    • craving_virtual_io_servers

    • You next have the choice to use a NIM server or the HMC image repository to deploy the Virtual I/O Servers; in both cases you have to choose the adapter used to deploy the image:
    • NIM way :
    • nim_way

    • HMC way (check the tip at the beginning of the post about installios if you are choosing this method):
    • hmc_way

    • Click start when you are ready. The start button will invoke the lpar_netboot command with the settings you put in the previous screen :
    • start_dep

    • You can monitor the installation process by clicking monitoring vterm (on the images below you can check that the ping is successful, the bootp is ok, the tftp is downloading, and the mksysb is being restored):
    • monitor1
      monitor2
      monitor3

    • The RMC connection has to be up on both Virtual I/O Servers to build the Shared Ethernet Adapters, and the Virtual I/O Server license must be accepted. Check that both are ok.
    • RMCok
      licenseok

    • Choose where the Shared Ethernet Adapters will be created and create the link aggregation devices here (choose on which network adapters and ports your Shared Ethernet Adapters will be created):
    • choose_adapter

    • Click start on the next screen to create the Shared Ethernet Adapter automatically :
    • sea_creation_ok

    • After a successful deployment of a system template a summary will be displayed on the screen :
    • template_ok

    IBM Provisioning Toolkit for PowerVM : A tool created by the Admins for the Admins

    As you now know, the HMC templates are ok, but there are some drawbacks to using this method. In my humble opinion the HMC templates are good for a beginner: the user is guided step by step and it is much simpler for someone who doesn't know anything about PowerVM to build a server from scratch, without knowing and understanding all the features of PowerVM (Virtual I/O Server, Shared Ethernet Adapter). But the deployment is not fully automated: the HMC will not mirror your rootvg, will not set any attributes on your fibre channel adapters, and will never run a custom script after the installation to fit your needs. Last point: I'm sure that as a system administrator you probably prefer using command line tools to a "crappy" GUI, and a template can neither be created nor deployed from the command line (change this please). There is another way to build your servers, and it's called the IBM Provisioning Toolkit for PowerVM. This tool is developed by STG Lab Services US and is not well known in Europe, but I can assure you that a lot of US customers are using it (raise your voices in the comments, US guys). This tool can help you in many ways:

    • Carving Virtual I/O Server profiles.
    • Building and deploying Virtual I/O Servers with a NIM server without having to create anything by hand.
    • Creating your SEAs, with or without control channel, failover/sharing, tagged/non-tagged.
    • Setting attributes on your fibre channel adapters.
    • Building and deploying Virtual I/O clients in NPIV and vscsi.
    • Mirroring your rootvg.
    • Capturing a whole frame and redeploying it on another server.
    • A lot of other things.

    Just to let you understand the approach of the tool, let's begin with an example. I want to deploy a new machine with two Virtual I/O Servers:

    • 1 (white) – I write a profile file: in this one I put all the information that is the same on all the machines (virtual switches, shared processor pools, Virtual I/O Server profiles, Shared Ethernet Adapter definitions, the image chosen to deploy the Virtual I/O Servers, and the physical I/O adapters for each Virtual I/O Server).
    • 2 (white) – I write a config file: in this one I put all the information that is unique for each machine (name, ip, HMC name used to deploy, CEC serial number, and so on).
    • 3 (yellow) – I launch the provisioning toolkit to build my machine; the NIM objects are created (networks, standalone machines) and the bosinst operation is launched from the NIM server.
    • 4 (red) – The Virtual I/O Server profiles are created and the lpar_netboot command is launched (an ssh key has to be shared between the NIM server and the Hardware Management Console).
    • 5 (blue) – The Shared Ethernet Adapters are created and the post configuration is launched on the Virtual I/O Servers (mirror creation, vfc attributes …).

    toolkit

    Let me show you a detailed example of a new machine deployment :

    • On the NIM server, the toolkit is located in /export/nim/provision. You can see the main script, called buildframe.ksh.v3.24.2, and two directories: one for the profiles (build_profiles) and one for the configuration files (config_files). The work_area directory is the log directory:
    # cd /export/nim/provision
    # ls
    build_profiles          buildframe.ksh.v3.24.2  config_files       lost+found              work_area
    
  • Let's check the profile file for a new Power720 deployment:
  • # vi build_profiles/p720.conf
    
  • Some variables will be set in the configuration file; put the NA value for these ones:
  • VARIABLES      (SERVERNAME)=NA
    VARIABLES      (BUILDHMC)=NA
    [..]
    VARIABLES      (BUILDUSER)=hscroot
    VARIABLES      (VIO1_LPARNAME)=NA
    VARIABLES      (vio1_hostname)=(VIO1_LPARNAME)
    VARIABLES      (VIO1_PROFILE)=default_profile
    
    VARIABLES      (VIO2_LPARNAME)=NA
    VARIABLES      (vio2_hostname)=(VIO2_LPARNAME)
    VARIABLES      (VIO2_PROFILE)=default_profile
    
    VARIABLES      (VIO1_IP)=NA
    VARIABLES      (VIO2_IP)=NA
    
  • Choose the ports that will be used to restore the Virtual I/O Server mksysb :
  • VARIABLES      (NIMPORT_VIO1)=(CEC1)-P1-C6-T1
    VARIABLES      (NIMPORT_VIO2)=(CEC1)-P1-C7-T1
    
  • In the example I’m building the Virtual I/O Server with 3 Shared Ethernet Adapters, and I’m not creating any LACP aggregation :
  • # SEA1
    VARIABLES      (SEA1VLAN1)=401
    VARIABLES      (SEA1VLAN2)=402
    VARIABLES      (SEA1VLAN3)=403
    VARIABLES      (SEA1VLAN4)=404
    VARIABLES      (SEA1VLANS)=(SEA1VLAN1),(SEA1VLAN2),(SEA1VLAN3),(SEA1VLAN4)
    # SEA2
    VARIABLES      (SEA2VLAN1)=100,101,102
    VARIABLES      (SEA2VLAN2)=103,104,105
    VARIABLES      (SEA2VLAN3)=106,107,108
    VARIABLES      (SEA2VLAN4)=109,110
    VARIABLES      (SEA2VLANS)=(SEA2VLAN1),(SEA2VLAN2),(SEA2VLAN3),(SEA2VLAN4)
    # SEA3
    VARIABLES      (SEA3VLAN1)=200,201,202,203,204,309
    VARIABLES      (SEA3VLAN2)=205,206,207,208,209,310
    VARIABLES      (SEA3VLAN3)=210,300,301,302,303
    VARIABLES      (SEA3VLAN4)=304,305,306,307,308
    VARIABLES      (SEA3VLANS)=(SEA3VLAN1),(SEA3VLAN2),(SEA3VLAN3),(SEA3VLAN4)
    # SEA DEF (I'm putting adapter ID and PVID here)
    SEADEF         seadefid=SEA1,networkpriority=S,vswitch=vdct,seavirtid=10,10,(SEA1VLAN1):11,11,(SEA1VLAN2):12,12,(SEA1VLAN3):13,13,(SEA1VLAN4),seactlchnlid=14,99,vlans=(SEA1VLANS),netmask=(SEA1NETMASK),gateway=(SEA1GATEWAY),etherchannel=NO,lacp8023ad=NO,vlan8021q=YES,seaattrid=nojumbo
    SEADEF         seadefid=SEA2,networkpriority=S,vswitch=vdca,seavirtid=15,15,(SEA2VLAN1):16,16,(SEA2VLAN2):17,17,(SEA2VLAN3):18,18,(SEA2VLAN4),seactlchnlid=19,98,vlans=(SEA2VLANS),netmask=(SEA2NETMASK),gateway=(SEA2GATEWAY),etherchannel=NO,lacp8023ad=NO,vlan8021q=YES,seaattrid=nojumbo
    SEADEF         seadefid=SEA3,networkpriority=S,vswitch=vdcb,seavirtid=20,20,(SEA3VLAN1):21,21,(SEA3VLAN2):22,22,(SEA3VLAN3):23,23,(SEA3VLAN4),seactlchnlid=24,97,vlans=(SEA3VLANS),netmask=(SEA3NETMASK),gateway=(SEA3GATEWAY),etherchannel=NO,lacp8023ad=NO,vlan8021q=YES,seaattrid=nojumbo
    # SEA PHYSICAL PORTS 
    VARIABLES      (SEA1AGGPORTS_VIO1)=(CEC1)-P1-C6-T2
    VARIABLES      (SEA1AGGPORTS_VIO2)=(CEC1)-P1-C7-T2
    VARIABLES      (SEA2AGGPORTS_VIO1)=(CEC1)-P1-C1-C3-T1
    VARIABLES      (SEA2AGGPORTS_VIO2)=(CEC1)-P1-C1-C4-T1
    VARIABLES      (SEA3AGGPORTS_VIO1)=(CEC1)-P1-C4-T1
    VARIABLES      (SEA3AGGPORTS_VIO2)=(CEC1)-P1-C5-T1
    # SEA ATTR 
    SEAATTR        seaattrid=nojumbo,ha_mode=sharing,largesend=1,large_receive=yes
    
  • I'm defining the physical I/O adapters for each Virtual I/O Server:
  • VARIABLES      (HBASLOTS_VIO1)=(CEC1)-P1-C1-C1,(CEC1)-P1-C2
    VARIABLES      (HBASLOTS_VIO2)=(CEC1)-P1-C1-C2,(CEC1)-P1-C3
    VARIABLES      (ETHSLOTS_VIO1)=(CEC1)-P1-C6,(CEC1)-P1-C1-C3,(CEC1)-P1-C4
    VARIABLES      (ETHSLOTS_VIO2)=(CEC1)-P1-C7,(CEC1)-P1-C1-C4,(CEC1)-P1-C5
    VARIABLES      (SASSLOTS_VIO1)=(CEC1)-P1-T9
    VARIABLES      (SASSLOTS_VIO2)=(CEC1)-P1-C19-T1
    VARIABLES      (NPIVFCPORTS_VIO1)=(CEC1)-P1-C1-C1-T1,(CEC1)-P1-C1-C1-T2,(CEC1)-P1-C1-C1-T3,(CEC1)-P1-C1-C1-T4,(CEC1)-P1-C2-T1,(CEC1)-P1-C2-T2,(CEC1)-P1-C2-T3,(CEC1)-P1-C2-T4
    VARIABLES      (NPIVFCPORTS_VIO2)=(CEC1)-P1-C1-C2-T1,(CEC1)-P1-C1-C2-T2,(CEC1)-P1-C1-C2-T3,(CEC1)-P1-C1-C2-T4,(CEC1)-P1-C3-T1,(CEC1)-P1-C3-T2,(CEC1)-P1-C3-T3,(CEC1)-P1-C3-T4
    
  • I’m defining the mksysb image to use and the Virtual I/O Server profiles :
  • BOSINST        bosinstid=viogold,source=mksysb,mksysb=golden-vios-2234-29122014-mksysb,spot=golden-vios-2234-29122014-spot,bosinst_data=no_prompt_hdisk0-bosinst_data,accept_licenses=yes,boot_client=no
    
    PARTITIONDEF   partitiondefid=vioPartition,bosinstid=viogold,lpar_env=vioserver,proc_mode=shared,min_proc_units=0.4,desired_proc_units=1,max_proc_units=16,min_procs=1,desired_procs=4,max_procs=16,sharing_mode=uncap,uncap_weight=255,min_mem=1024,desired_mem=8192,max_mem=12288,mem_mode=ded,max_virtual_slots=500,all_resources=0,msp=1,allow_perf_collection=1
    PARTITION      name=(VIO1_LPARNAME),profile_name=(VIO1_PROFILE),partitiondefid=vioPartition,lpar_netboot=(NIM_IP),(vio1_hostname),(VIO1_IP),(NIMNETMASK),(NIMGATEWAY),(NIMPORT_VIO1),(NIM_SPEED),(NIM_DUPLEX),NA,YES,NO,NA,NA
    PARTITION      name=(VIO2_LPARNAME),profile_name=(VIO2_PROFILE),partitiondefid=vioPartition,lpar_netboot=(NIM_IP),(vio2_hostname),(VIO2_IP),(NIMNETMASK),(NIMGATEWAY),(NIMPORT_VIO2),(NIM_SPEED),(NIM_DUPLEX),NA,YES,NO,NA,NA
    
    • Let's now check a configuration file for a specific machine (as you can see I'm putting the Virtual I/O Server names here, the ip addresses, and everything that is specific to the new machine (CEC serial number and so on)):
    # cat P720-8202-E4D-1.conf
    (BUILDHMC)=myhmc
    (SERVERNAME)=P720-8202-E4D-1
    (CEC1)=WZSKM8U
    (VIO1_LPARNAME)=labvios1
    (VIO2_LPARNAME)=labvios2
    (VIO1_IP)=10.14.14.1
    (VIO2_IP)=10.14.14.2
    (NIMGATEWAY)=10.14.14.254
    (VIODNS)=10.10.10.1,10.10.10.2
    (VIOSEARCH)=lab.chmod66.org,prod.chmod666.org
    (VIODOMAIN)=chmod666.org
    
  • We are now ready to build the new machine. The first thing to do is to create the vswitches on the machine (you have to confirm all operations):
  • ./buildframe.ksh.v3.24.2 -p p720 -c P720-8202-E4D-1.conf -f vswitch
    150121162625 Start of buildframe DATE: (150121162625) VERSION: v3.24.2
    150121162625        profile: p720.conf
    150121162625      operation: FRAMEvswitch
    150121162625 partition list:
    150121162625   program name: buildframe.ksh.v3.24.2
    150121162625    install dir: /export/nim/provision
    150121162625    post script:
    150121162625          DEBUG: 0
    150121162625         run ID: 150121162625
    150121162625       log file: work_area/150121162625_p720.conf.log
    150121162625 loading configuration file: config_files/P720-8202-E4D-1.conf
    [..]
    Do you want to continue?
    Please enter Y or N Y
    150121162917 buildframe is done with return code 0
    
  • Let’s now build the Virtual I/O Servers, create the Shared Ethernet Adapters and let’s have a coffee ;-)
  • # ./buildframe.ksh.v3.24.2 -p p720 -c P720-8202-E4D-1.conf -f build
    [..]
    150121172320 Creating partitions
    150121172320                 --> labvios1
    150121172322                 --> labvios2
    150121172325 Updating partition profiles
    150121172325   updating VETH adapters in partition: labvios1 profile: default_profile
    150121172329   updating VETH adapters in partition: labvios1 profile: default_profile
    150121172331   updating VETH adapters in partition: labvios1 profile: default_profile
    150121172342   updating VETH adapters in partition: labvios2 profile: default_profile
    150121172343   updating VETH adapters in partition: labvios2 profile: default_profile
    150121172344   updating VETH adapters in partition: labvios2 profile: default_profile
    150121172345   updating IOSLOTS in partition: labvios1 profile: default_profile
    150121172347   updating IOSLOTS in partition: labvios2 profile: default_profile
    150121172403 Configuring NIM for partitions
    150121172459 Executing--> lpar_netboot   -K 255.255.255.0 -f -t ent -l U78AA.001.WZSKM8U-P1-C6-T1 -T off -D -s auto -d auto -S 10.20.20.1 -G 10.14.14.254 -C 10.14.14.1 labvios1 default_profile s00ka9936774-8202-E4D-845B2CV
    150121173247 Executing--> lpar_netboot   -K 255.255.255.0 -f -t ent -l U78AA.001.WZSKM8U-P1-C7-T1 -T off -D -s auto -d auto -S 10.20.20.1 -G 10.14.14.254 -C 10.14.14.2 labvios2 default_profile s00ka9936774-8202-E4D-845B2CV
    150121174028 buildframe is done with return code 0
    
  • After the mksysb is deployed you can tail the logs on each Virtual I/O Server to check what is going on :
  • [..]
    150121180520 creating SEA for virtID: ent4,ent5,ent6,ent7
    ent21 Available
    en21
    et21
    150121180521 Success: running /usr/ios/cli/ioscli mkvdev -sea ent1 -vadapter ent4,ent5,ent6,ent7 -default ent4 -defaultid 10 -attr ctl_chan=ent8  ha_mode=sharing largesend=1 large_receive=yes, rc=0
    150121180521 found SEA ent device: ent21
    150121180521 creating SEA for virtID: ent9,ent10,ent11,ent12
    [..]
    ent22 Available
    en22
    et22
    150121180523 Success: running /usr/ios/cli/ioscli mkvdev -sea ent20 -vadapter ent9,ent10,ent11,ent12 -default ent9 -defaultid 15 -attr ctl_chan=ent13  ha_mode=sharing largesend=1 large_receive=yes, rc=0
    150121180523 found SEA ent device: ent22
    150121180523 creating SEA for virtID: ent14,ent15,ent16,ent17
    [..]
    ent23 Available
    en23
    et23
    [..]
    150121180540 Success: /usr/ios/cli/ioscli cfgnamesrv -add -ipaddr 10.10.10.1, rc=0
    150121180540 adding DNS: 10.10.10.1
    150121180540 Success: /usr/ios/cli/ioscli cfgnamesrv -add -ipaddr 10.10.10.2, rc=0
    150121180540 adding DNS: 159.50.203.10
    150121180540 adding DOMAIN: lab.chmod666.org
    150121180541 Success: /usr/ios/cli/ioscli cfgnamesrv -add -dname fr.net.intra, rc=0
    150121180541 adding SEARCH: lab.chmod666.org prod.chmod666.org
    150121180541 Success: /usr/ios/cli/ioscli cfgnamesrv -add -slist lab.chmod666.org prod.chmod666.org, rc=0
    [..]
    150121180542 Success: found fcs device for physical location WZSKM8U-P1-C2-T4: fcs3
    150121180542 Processed the following FCS attributes: fcsdevice=fcs4,fcs5,fcs6,fcs7,fcs0,fcs1,fcs2,fcs3,fcsattrid=fcsAttributes,port=WZSKM8U-P1-C1-C1-T1,WZSKM8U-P1-C1-C1-T2,WZSKM8U-P1-C1-C1-T3,WZSKM8U-P1-C1-C1-T4,WZSKM8U-P1-C2-T1,WZSKM8U-P1-C2-T2,WZSKM8U-P1-C2-T3,WZSKM8U-P1-C2-T4,max_xfer_size=0x100000,num_cmd_elems=2048
    150121180544 Processed the following FSCSI attributes: fcsdevice=fcs4,fcs5,fcs6,fcs7,fcs0,fcs1,fcs2,fcs3,fscsiattrid=fscsiAttributes,port=WZSKM8U-P1-C1-C1-T1,WZSKM8U-P1-C1-C1-T2,WZSKM8U-P1-C1-C1-T3,WZSKM8U-P1-C1-C1-T4,WZSKM8U-P1-C2-T1,WZSKM8U-P1-C2-T2,WZSKM8U-P1-C2-T3,WZSKM8U-P1-C2-T4,fc_err_recov=fast_fail,dyntrk=yes
    [..]
    150121180546 Success: found device U78AA.001.WZSKM8U-P2-D4: hdisk0
    150121180546 Success: found device U78AA.001.WZSKM8U-P2-D5: hdisk1
    150121180546 Mirror hdisk0 -->  hdisk1
    150121180547 Success: extendvg -f rootvg hdisk1, rc=0
    150121181638 Success: mirrorvg rootvg hdisk1, rc=0
    150121181655 Success: bosboot -ad hdisk0, rc=0
    150121181709 Success: bosboot -ad hdisk1, rc=0
    150121181709 Success: bootlist -m normal hdisk0 hdisk1, rc=0
    150121181709 VIOmirror <- rc=0
    150121181709 VIObuild <- rc=0
    150121181709 Preparing to reboot in 10 seconds, press control-C to abort
    

    The new server was deployed with one command, and by using the toolkit you avoid any manual mistake. The example above is just one of the many ways to use the toolkit. This is a very powerful and simple tool and I really want to see other European customers using it, so ask your IBM pre-sales, ask for PowerCare, and take control of your deployments by using the toolkit. The toolkit can also be used to capture and redeploy a whole frame for a disaster recovery plan.

    Live Partition Mobility Automation Tool

    Because understanding the provisioning toolkit didn't take me a full week, we still had plenty of time with Bonnie from STG Lab Services and we decided to give a try to another tool called the Live Partition Mobility Automation Tool. I'll not talk about it in detail, but this tool allows you to automate your Live Partition Mobility moves. It's a web interface coming with a tomcat server that you can run on a Linux host or directly on your laptop. This web application takes control of your Hardware Management Console and allows you to do a lot of LPM related things:

    • You can run a validation on every partition on a system.
    • You can move your partitions by spreading or packing them on the destination server.
    • You can "record" a move to replay it later (very very very useful for my previous customer, for instance: we were making our moves client by client, all clients being hosted on two big P795s).
    • You can run a dynamic platform optimizer after the moves.
    • You have an option to move the partitions back to their original location, and this is (in my humble opinion) what makes this tool so powerful.

    lpm_toolkit

    Since I have this tool I run, on a weekly basis, a validation of all my partitions to check if there are any errors, and I use it to move the partitions and move them back when I have to. So I really recommend the Live Partition Mobility Automation tool.

    Hardware Management Console 8 : Other new features

    Adding a VLAN to an already existing Shared Ethernet Adapter

    With the new Hardware Management Console you can easily add a new vlan to an already existing Shared Ethernet Adapter (failover and shared, with and without control channel: no restriction) without having to perform a dlpar operation on each Virtual I/O Server and then modify your profiles (if you do not have profile synchronization enabled). Even better, by using this method to add your new vlans you will avoid any misconfiguration, for instance forgetting to add the vlan on one of the Virtual I/O Servers or not choosing the same adapter on both sides. The sketch just below shows the manual operations this replaces.
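
    For comparison, here is a sketch of the manual dlpar way this feature replaces; the chhwres syntax is assumed from the HMC manual pages, the slot number 10 is hypothetical, and I reuse the machine and Virtual I/O Server names from the toolkit example above (run it once per Virtual I/O Server):

    hscroot@myhmc:~> chhwres -r virtualio --rsubtype eth -m P720-8202-E4D-1 -o s -p labvios1 -s 10 -a "addl_vlan_ids+=3331"
    hscroot@myhmc:~> chhwres -r virtualio --rsubtype eth -m P720-8202-E4D-1 -o s -p labvios2 -s 10 -a "addl_vlan_ids+=3331"

    Without profile synchronization you would then also have to update both profiles (chsyscfg), which is exactly the kind of double work the new GUI avoids.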

    • Open the Virtual Network page on the HMC and click "Add a Virtual Network". Remember that a Virtual Network Bridge is a Shared Ethernet Adapter, and a Load Balance Group is a pair of virtual adapters with the same PVID on both Virtual I/O Servers:
    • add_vlan5

    • Choose the name of your vlan (in my case VLAN3331), then choose bridged network (bridged network is the new name for Shared Ethernet Adapters ...), choose "yes" for vlan tagging, and put the vlan id (in my case 3331). By choosing the virtual switch, the HMC will only let you choose a Shared Ethernet Adapter configured on this virtual switch (no mistake possible). DO NOT forget to check the box "Add new virtual network to all Virtual I/O servers" to add the vlan on both sides:
    • add_vlan

    • On the next page you have to choose the Shared Ethernet Adapter on which the vlan will be added (in my case this is super easy: I ALWAYS create one Shared Ethernet Adapter per virtual switch to avoid misconfigurations and the network loops created by adding the same vlan id on two different Shared Ethernet Adapters):
    • add_vlan2

    • At last, choose or create a new "Load Sharing Group". A load sharing group is one of the virtual adapters of your Shared Ethernet Adapter. In my case my Shared Ethernet Adapter was created with two virtual adapters, with ids 10 and 11. On this screenshot I'm telling the HMC to add the new vlan on the adapter with id 10 on both Virtual I/O Servers. You can also create a new virtual adapter to be included in the Shared Ethernet Adapter by choosing "Create a new load sharing group":
    • add_vlan3

    • Before applying the configuration a summary is displayed, so the user can check the changes:
    • add_vlan4

    Partition Templates

    You can also use templates to capture and create partitions, not just systems. I'll not give you all the details because the HMC is well documented for this part and there is nothing tricky to do, just follow the GUI. One more time, the HMC8 is for the noobs \o/. Here are a few screenshots of partition templates (capture and deploy):

    create_part2
    create_part6

    A new and nice look and feel for the new Hardware Management Console

    Everybody knows that the old HMC GUI was not very pretty, but it worked great. One of the major new things of HMC 8r8.2.0 is the new GUI. In my opinion the new GUI is awesome: the design is nice and I love it. Look at the pictures below:

    hmc8
    virtual_network_diagram

    Conclusion

    The Hardware Management Console 8 is still young but offers a lot of cool new features like system and partition templates, the performance dashboard and a new GUI. In my opinion the new GUI is still slow and there are a lot of bugs for the moment; my advice is to use it when you have the time, not in a rush. Learn the new HMC on your own by trying to do all the common tasks with the new GUI (there are still impossible things to do ;-)). I can assure you that you will need more than a few hours to get familiar with all those new features. And don't forget to call your pre-sales to have a demonstration of the STG Lab Services toolkits; both provisioning and LPM are awesome. Use them!

    What is going on in this world

    This blog is not and will never be the place for political things, but with the darkest days we had in France two weeks ago with these insane and inhuman terrorist attacks, I had to say a few words about it (because even if my whole life is about AIX for the moment, I'm also a human being .... if you had any doubt about it). Since the tragic death of 17 men and women in France everybody is raising their voice to tell us (me?) what is right and what is wrong without thinking seriously about it. Things like this terrorist attack should never happen again. I just wanted to say that I'm for liberty, not only the "liberty of expression", but just liberty. By defending this liberty we have to be very careful, because in the name of this defense, things done by our government may take away what we call liberty forever. Are the phone and the internet going to be tapped and logged in the name of liberty? Is this liberty? Think about it and resist.


    Using Chef and cloud-init with PowerVC 1.2.2.2 | What’s new in version 1.2.2.2


    I've been busy, very busy, and I apologize for that … almost two months since the last update on the blog, but I'm still alive and I love AIX more than ever ;-). There is no blog post about it, but I've developed a tool called "lsseas" which can be useful to all PowerVM administrators (you can find the script on github at this address: https://github.com/chmod666org/lsseas). I'll not talk too much about it, but I thought sharing the information with all my readers who are not following me on twitter was the best way to promote the tool. Have a look at it, submit your own changes on github, code and share!

    This said, we can talk about this new blog post. PowerVC 1.2.2.2 has been released for a few months and there are a few things I wanted to talk about. The new version includes new features making the product more powerful than ever (export/import images, activation input, vscsi lun management). PowerVC only builds "empty" machines; it's a good start but we can do better. The activation engine can customize the virtual machines but is limited and, in my humble opinion, not really usable for post-installation tasks. With the recent release of cloud-init and Chef for AIX, PowerVC can be utilized to build your machines from nothing … and finally get your applications running in minutes. Using cloud-init and Chef can help you make your infrastructure repeatable, "versionable" and testable; this is what we call infrastructure as code and it is damn powerful.

    A big thank you to Jay Kruemcke (@chromeaix), Philippe Hermes (@phhermes) and S. Tran (https://github.com/transt); they gave me very useful help about the cloud-init support on AIX. Follow them on twitter!

    PowerVC 1.2.2.1 mandatory fixes

    Before starting, please note that I strongly recommend having the latest ifixes installed on your Virtual I/O Servers. These ones are mandatory for PowerVC; install these ifixes no matter what:

    • On Virtual I/O Servers install IV66758m4c, rsctvios2:
    # emgr -X -e /mnt/VIOS_2.2.3.4_IV66758m4c.150112.epkg.Z
    # emgr -l
    [..]
    ID  STATE LABEL      INSTALL TIME      UPDATED BY ABSTRACT
    === ===== ========== ================= ========== ======================================
    1    S    rsctvios2  03/03/15 12:13:42            RSCT fixes for VIOS
    2    S    IV66758m4c 03/03/15 12:16:04            Multiple PowerVC fixes VIOS 2.2.3.4
    3    S    IV67568s4a 03/03/15 14:12:45            man fails in VIOS shell
    [..]
    
  • Check you have the latest version of the Hardware Management Console (I strongly recommend V8R8.2.0 Service Pack 1):
  • hscroot@myhmc:~> lshmc -V
    "version= Version: 8
     Release: 8.2.0
     Service Pack: 1
    HMC Build level 20150216.1
    ","base_version=V8R8.2.0
    "
    

    Exporting and importing image from another PowerVC

    The latest PowerVC version allows you to export and import images. It's a good thing! Let's say that, like me, you have a few PowerVC hosts on different SAN networks with different storage arrays; you probably do not want to create your images on each one, and you would prefer to be sure to use the same image for each PowerVC. Just create one image and use the export/import feature to copy/move this image to a different storage array or PowerVC host:

    • To do so, map your current image disk on the PowerVC host itself (in my case by using the SVC). You can't attach a volume used as an image volume directly from PowerVC, so you have to do it on the storage side by hand:
    • maptohost
      maptohost2

    • On the PowerVC host, rescan the volumes and copy the whole newly discovered lun with a dd:
    powervc_source# rescan-scsi-bus.sh
    [..]
    powervc_source# multipath -ll
    mpathe (3600507680c810010f800000000000097) dm-10 IBM,2145
    [..]
    powervc_source# dd if=/dev/mapper/mpathe of=/data/download/aix7100-03-04-cloudinit-chef-ohai bs=4M
    16384+0 records in
    16384+0 records out
    68719476736 bytes (69 GB) copied, 314.429 s, 219 MB/s                                         
    
  • Map a new volume to the new PowerVC server, upload the newly created file on the new PowerVC server, then dd the file back to the new volume:
  • mapnewlun

    powervc_dest# scp /data/download/aix7100-03-04-cloudinit-chef-ohai new_powervc:/data/download
    aix7100-03-04-cloudinit-chef-ohai          100%   64GB  25.7MB/s   42:28.
    powervc_dest# dd if=/data/download/aix7100-03-04-cloudinit-chef-ohai of=/dev/mapper/mpathc bs=4M
    16384+0 records in
    16384+0 records out
    68719476736 bytes (69 GB) copied, 159.028 s, 432 MB/s
    
  • Unmap the volume from the new PowerVC host after the dd operation (a checksum comparison, sketched after this list, can confirm the copy), and import it with the PowerVC graphical interface.
  • Manage the existing volume you just created (note that the current PowerVC code does not allow you to choose cloud-init as an activation engine, even if it is working great):
  • manage_ex1
    manage_ex2

  • Import the image:
  • import1
    import2
    import3
    import4
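
    Before importing, it is worth checking that the copy is intact. Here is a minimal sketch using the file, device and host names from the example above (16384 blocks of 4M match the 64 GB image size, so only the image part of the target device is read back); the three checksums must be identical:

    powervc_source# md5sum /data/download/aix7100-03-04-cloudinit-chef-ohai
    powervc_dest# md5sum /data/download/aix7100-03-04-cloudinit-chef-ohai
    powervc_dest# dd if=/dev/mapper/mpathc bs=4M count=16384 | md5sum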

    You can also use the command powervc-volume-image-import to import the new volume by using the command line instead of the graphical user interface. Here is an example with a Red Hat Enterprise Linux 6.4 image:

    powervc_source# dd if=/dev/hdisk4 of=/apps/images/rhel-6.4.raw bs=4M
    15360+0 records in
    15360+0 records out
    powervc_dest# scp 10.255.248.38:/apps/images/rhel-6.4.raw .
    powervc_dest# dd if=/home/rhel-6.4.raw of=/dev/mapper/mpathe
    30720+0 records in
    30720+0 records out
    64424509440 bytes (64 GB) copied, 124.799 s, 516 MB/s
    powervc_dest# powervc-volume-image-import --name rhel64 --os rhel --volume volume_capture2 --activation-type ae
    Password:
    Image creation complete for image id: e3a4ece1-c0cd-4d44-b197-4bbbc2984a34
    

    Activation input (cloud-init and ae)

    Instead of doing post-installation tasks by hand after the deployment of a machine, you can now use the activation input recently added to PowerVC. The activation input can be utilized to run any script you want, or even better things (such as cloud-config syntax) if you are using cloud-init instead of the old activation engine. Keep in mind that cloud-init is not yet officially supported by PowerVC; for this reason I think most customers will still use the old activation engine. The latest activation engine version also works with the activation input. In the examples below I'm of course using cloud-init :-). Don't worry, I'll detail later in this post how to install and use cloud-init on AIX:

    • If you are using the activation engine please be sure to use the latest version. The current version of the activation engine in PowerVC 1.2.2.* is vmc-vsae-ext-2.4.5-1; the only way to be sure you are using this version is to check the size of /opt/ibm/ae/AS/vmc-sys-net/activate.py, which is 21127 bytes for the latest version (a quick check is sketched at the end of this list). Check this before trying to do anything with the activation input. More information can be found here: Activation input documentation.
    • A simple shebang script can be used; in the example below it just writes a file, but it can be anything you want:
    • ai1

    # cat /tmp/activation_input
    Activation input was used on this server
    
  • If you are using cloud-init you can directly put a cloud-config "script" in the activation input. The first line is always mandatory: it tells cloud-init the format of the activation input. If you forget this first line, cloud-init cannot determine the format and the script will not be executed. Check the next point for more information about activation input formats:
  • ai2

    # cat /tmp/activation_input
    cloud-config activation input
    
  • There are additional fields called "server meta data key/value pairs"; just do not use them. They are used by images provided by IBM with customizations of the activation engine. Forget about this, it is useless; use this field only if IBM tells you to do so.
  • Valid cloud-init activation inputs are described here: http://cloudinit.readthedocs.org/en/latest/topics/format.html. As you can see in the two examples above, shell scripts and the cloud-config format can be utilized, but you can also upload a gzip archive or use a part handler format. Go to the url above for more information.
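
    As promised, here is the quick check of the activation engine version mentioned in the first point of this list; the path and the expected size come straight from the documentation quoted above (the command must return 21127 for the latest version):

    # wc -c /opt/ibm/ae/AS/vmc-sys-net/activate.py
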
    vscsi and mixed NPIV/vscsi machine creation

    This is one of the major enhancements: PowerVC is now able to create and map vscsi disks, and even better, you can create mixed NPIV/vscsi machines. To do so, create storage connectivity groups for each technology you want to use. You can choose a different way to create disks for boot volumes and for data volumes. Here are three examples: full NPIV, full vscsi, and a mix of vscsi (boot) and NPIV (data):

    connectivitygroup1
    connectivitygroup2
    connectivitygroup3

    What is really cool about this new feature is that PowerVC can use existing mapped luns on the Virtual I/O Server. Please note that PowerVC will only use SAN backed devices and cannot use iSCSI or local disks (local disks can be used in the express version). You obviously have to do the zoning of your Virtual I/O Servers by yourself. Here is an example where I have 69 devices mapped to my Virtual I/O Server; you can see that PowerVC is using one of the existing devices for its deployment. This can be very useful if you have different teams working on the SAN and the system side: the storage guys will not change their habits and can still map you a bunch of luns on the Virtual I/O Server. This can be used as a transition if you did not succeed in convincing the guys from your storage team:

    $ lspv | wc -l
          69
    

    connectivitygroup_deploy1

    $ lspv | wc -l
          69
    $ lsmap -all -fmt :
    vhost1:U8202.E4D.845B2DV-V2-C28:0x00000009:vtopt0:Available:0x8100000000000000:/var/vio/VMLibrary/vopt_c1309be1ed244a5c91829e1a5dfd281c: :N/A:vtscsi1:Available:0x8200000000000000:hdisk66:U78AA.001.WZSKM6P-P1-C3-T1-W500507680C11021F-L41000000000000:false
    

    Please note that you still need to add fabrics and storage in PowerVC even if you have pre-mapped luns on your Virtual I/O Servers. This is mandatory for PowerVC image management and creation.
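
    To check which luns mapped to a Virtual I/O Server are still unused (and therefore usable by PowerVC for a deployment), the padmin lspv command can list the free ones; a quick sketch:

    $ lspv -free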

    Maintenance Mode

    This last feature is probably the one I like the most. You can now put a host in maintenance mode; this means that when you put a host in maintenance mode, all the virtual machines hosted on it are migrated with live partition mobility (remember the migrlpar --all option; I'm pretty sure this option is utilized by the PowerVC maintenance mode, see the sketch below). While a host is in maintenance mode it is no longer available for new machine deployments or for mobility operations. The host can then be shut down, for instance for a firmware upgrade.
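
    For reference, evacuating a host by hand from the HMC command line would look something like the sketch below; the system names are hypothetical and I am assuming the --all option mentioned above (running a validation first is always a good idea):

    hscroot@myhmc:~> migrlpar -o v -m source-machine -t destination-machine --all
    hscroot@myhmc:~> migrlpar -o m -m source-machine -t destination-machine --all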

    • Select a host and click the “Enter maintenance mode button”:
    • maintenance1

    • Choose where you want to move the virtual machines, or let PowerVC decide for you (packing or striping placement policy):
    • maintenance2

    • The host is entering maintenance mode:
    • maintenance3

    • Once the host is in maintenance mode it is ready to be shut down:
    • maintenance4

    • Leave the maintenance mode when you are ready:
    • maintenance5

    An overview of Chef and cloud-init

    With PowerVC you are now able to deploy new AIX virtual machines in a few minutes, but there is still some work to do. What about post-installation tasks? I'm sure that most of you are using NIM post-install scripts for post-installation tasks. PowerVC does not use NIM, and even if you can run your own shell scripts after a PowerVC deployment, the goal of this tool is to automate a full installation… post-install included.

    If the activation engine does the job of changing the hostname and ip address of the machine, it is pretty hard to customize it to do other tasks. Documentation is hard to find and I can assure you that it is not easy at all to customize and maintain. PowerVC Linux users are probably already aware of cloud-init. cloud-init is a tool (like the activation engine) in charge of the reconfiguration of your machine after its deployment; as the activation engine does today, cloud-init changes the hostname and the ip address of the machine, but it can do way more than that (create users, add ssh keys, mount filesystems, …). The good news is that cloud-init has been available on AIX for a few days, and you can use it with PowerVC. Awesome \o/.

    If cloud-init can do one part of the job, it can't do everything and is not designed for that! It is not a configuration management tool: configurations are not centralized on a server, there is no way to create cookbooks or runbooks (or whatever you call them), you can't pull product sources from a git server; there are a lot of things missing. cloud-init is a light tool designed for a simple job. I recently (at work and in my spare time) played a lot with configuration management tools. I'm a huge fan of Saltstack, but unfortunately salt-minion (the Saltstack client) is not available on AIX… I had to find another tool. A few months ago Chef (by Opscode) announced the support of AIX and a release of chef-client for AIX; I decided to give it a try and I can assure you that this is damn powerful. Let me explain this further.

    Instead of creating shell scripts for your post installation, Chef allows you to create cookbooks. Cookbooks are composed of recipes, and each recipe does a task, for instance installing an Oracle client, creating the home directory of the root user and its profile file, or enabling and disabling services on the system. Recipes are coded in the Chef language, and you can directly put Ruby code inside a recipe. Chef recipes are idempotent: if something has already been done, it will not be done again. The advantage of a solution like this is that you don't have to maintain shell code and shell scripts, which are difficult to change or rewrite. Your infrastructure is repeatable and changeable in minutes (once Chef is installed you can, for instance, tell it to change /etc/resolv.conf on all your Websphere servers). This is called "infrastructure as code". Give it a try and you'll see that the first thing you'll think will be "waaaaaaaaaaaaaooooooooooo".

    Trying to explain how PowerVC, cloud-init and Chef work together is not really easy; a nice diagram is probably better than a long text:

    chef

    1. You have built an AIX virtual machine. On this machine cloud-init and Chef client 12 are installed. cloud-init is configured to register the chef-client on the chef-server and to run a cookbook for a specific role. This server has been captured with PowerVC and is now ready to be deployed.
    2. Virtual machines are created with PowerVC.
    3. When a machine is built, cloud-init runs on first boot. The ip address and the hostname of this machine are changed with the values provided in PowerVC. cloud-init creates the chef-client configuration (client.rb, validation.pem). Finally chef-client is called.
    4. chef-client registers on the chef-server. The machine is now known by the chef-server.
    5. chef-client resolves and downloads the cookbooks for its role. Cookbooks and recipes are executed on the machine. After cookbook execution the machine is ready and configured.
    6. The administrator creates and uploads cookbooks and recipes from his knife workstation (knife is the tool used to interact with the chef-server; it can be hosted anywhere you want: your laptop, a server …).

    In a few steps, here is what you need to do to use PowerVC, cloud-init, and Chef together:

    1. Create a virtual machine with PowerVC.
    2. Download cloud-init, and install cloud-init in this virtual machine.
    3. Download chef-client, and install chef-client in this virtual machine.
    4. Configure cloud-init: modify /opt/freeware/etc/cloud/cloud.cfg. In this file put the Chef configuration of the cc_chef cloud-init module.
    5. Create the mandatory files, such as the /etc/chef directory, and put your ohai plugins in the /etc/chef/ohai_plugins directory.
    6. Stop the virtual machine.
    7. Capture the virtual machine with PowerVC.
    8. Obviously, as a prerequisite, a chef-server is up and running, with cookbooks, recipes, roles and environments ready on it.

    cloud-init installation

    cloud-init is now available on AIX, but you have to build the rpm by yourself. Sources can be found on github at this address: https://github.com/transt/cloud-init-0.7.5. There are a lot of prerequisites, most of them listed on the github page, some of them on the famous perzl site. Download and install these prerequisites, it is mandatory (links to the prerequisites are on the github page; the zip file containing cloud-init can be downloaded here: https://github.com/transt/cloud-init-0.7.5/archive/master.zip).

    # rpm -ivh --nodeps gettext-0.17-8.aix6.1.ppc.rpm
    [..]
    gettext                     ##################################################
    # for rpm in bzip2-1.0.6-2.aix6.1.ppc.rpm db-4.8.24-4.aix6.1.ppc.rpm expat-2.1.0-1.aix6.1.ppc.rpm gmp-5.1.3-1.aix6.1.ppc.rpm libffi-3.0.11-1.aix6.1.ppc.rpm openssl-1.0.1g-1.aix6.1.ppc.rpm zlib-1.2.5-6.aix6.1.ppc.rpm gdbm-1.10-1.aix6.1.ppc.rpm libiconv-1.14-1.aix6.1.ppc.rpm bash-4.2-9.aix6.1.ppc.rpm info-5.0-2.aix6.1.ppc.rpm readline-6.2-3.aix6.1.ppc.rpm ncurses-5.9-3.aix6.1.ppc.rpm sqlite-3.7.15.2-2.aix6.1.ppc.rpm python-2.7.6-1.aix6.1.ppc.rpm python-2.7.6-1.aix6.1.ppc.rpm python-devel-2.7.6-1.aix6.1.ppc.rpm python-xml-0.8.4-1.aix6.1.ppc.rpm python-boto-2.34.0-1.aix6.1.noarch.rpm python-argparse-1.2.1-1.aix6.1.noarch.rpm python-cheetah-2.4.4-2.aix6.1.ppc.rpm python-configobj-5.0.5-1.aix6.1.noarch.rpm python-jsonpointer-1.0.c1ec3df-1.aix6.1.noarch.rpm python-jsonpatch-1.8-1.aix6.1.noarch.rpm python-oauth-1.0.1-1.aix6.1.noarch.rpm python-pyserial-2.7-1.aix6.1.ppc.rpm python-prettytable-0.7.2-1.aix6.1.noarch.rpm python-requests-2.4.3-1.aix6.1.noarch.rpm libyaml-0.1.4-1.aix6.1.ppc.rpm python-setuptools-0.9.8-2.aix6.1.noarch.rpm fdupes-1.51-1.aix5.1.ppc.rpm ; do rpm -ivh $rpm ;done
    [..]
    python-oauth                ##################################################
    python-pyserial             ##################################################
    python-prettytable          ##################################################
    python-requests             ##################################################
    libyaml                     ##################################################
    

    Build the rpm by following the commands below. You can reuse this rpm on every AIX host on which you want to install the cloud-init package:

    # jar -xvf cloud-init-0.7.5-master.zip
    inflated: cloud-init-0.7.5-master/upstart/cloud-log-shutdown.conf
    # mv cloud-init-0.7.5-master  cloud-init-0.7.5
    # chmod -Rf +x cloud-init-0.7.5/bin
    # chmod -Rf +x cloud-init-0.7.5/tools
    # cp cloud-init-0.7.5/packages/aix/cloud-init.spec.in /opt/freeware/src/packages/SPECS/cloud-init.spec
    # tar -cvf cloud-init-0.7.5.tar cloud-init-0.7.5
    [..]
    a cloud-init-0.7.5/upstart/cloud-init.conf 1 blocks
    a cloud-init-0.7.5/upstart/cloud-log-shutdown.conf 2 blocks
    # gzip cloud-init-0.7.5.tar
    # cp cloud-init-0.7.5.tar.gz /opt/freeware/src/packages/SOURCES/cloud-init-0.7.5.tar.gz
    # rpm -v -bb /opt/freeware/src/packages/SPECS/cloud-init.spec
    [..]
    Requires: cloud-init = 0.7.5
    Wrote: /opt/freeware/src/packages/RPMS/ppc/cloud-init-0.7.5-4.1.aix7.1.ppc.rpm
    Wrote: /opt/freeware/src/packages/RPMS/ppc/cloud-init-doc-0.7.5-4.1.aix7.1.ppc.rpm
    Wrote: /opt/freeware/src/packages/RPMS/ppc/cloud-init-test-0.7.5-4.1.aix7.1.ppc.rpm
    

    Finally install the rpm:

    # rpm -ivh /opt/freeware/src/packages/RPMS/ppc/cloud-init-0.7.5-4.1.aix7.1.ppc.rpm
    cloud-init                  ##################################################
    # rpm -qa | grep cloud-init
    cloud-init-0.7.5-4.1
    

    cloud-init configuration

    By installing the cloud-init package on AIX, some entries have been added to /etc/rc.d/rc2.d:

    # ls -l /etc/rc.d/rc2.d | grep cloud
    lrwxrwxrwx    1 root     system           33 Apr 26 15:13 S01cloud-init-local -> /etc/rc.d/init.d/cloud-init-local
    lrwxrwxrwx    1 root     system           27 Apr 26 15:13 S02cloud-init -> /etc/rc.d/init.d/cloud-init
    lrwxrwxrwx    1 root     system           29 Apr 26 15:13 S03cloud-config -> /etc/rc.d/init.d/cloud-config
    lrwxrwxrwx    1 root     system           28 Apr 26 15:13 S04cloud-final -> /etc/rc.d/init.d/cloud-final
    

    The default configuration file is located in /opt/freeware/etc/cloud/cloud.cfg. This configuration file is split in three parts.

    The first one, called cloud_init_modules, tells cloud-init to run specific modules when the cloud-init script is started at boot time, for instance setting the hostname of the machine (set_hostname), resetting the rmc (reset_rmc) and so on. In our case this part will automatically change the hostname and the ip address of the machine to the values provided by PowerVC at deployment time. This cloud_init_modules part is split in two, the local one and the normal one. The local one uses information provided by the cdrom built by PowerVC at deployment time; this cdrom provides the ip and hostname of the machine, the activation input script, and nameserver information. The datasource_list stanza tells cloud-init to use the "ConfigDrive" (in our case the virtual cdrom) to get the ip and hostname needed by some cloud_init_modules.

    The second part, called cloud_config_modules, tells cloud-init to run specific modules when the cloud-config script is called; at this stage the minimal requirements have already been configured by the previous cloud_init_modules stage (dns, ip address and hostname are ok). We will configure and set up the chef-client in this stage.

    The last part, called cloud_final_modules, tells cloud-init to run specific modules when the cloud-final script is called. At this step you can print a final message, reboot the host and so on (in my case a host reboot is needed by my install_sddpcm Chef recipe). Here is an overview of the cloud.cfg configuration file:

    cloud-init

    • The datasource_list stanza tells cloud-init to use the virtual cdrom as a source of information:
    datasource_list: ['ConfigDrive']
    
  • cloud_init_modules:
  • cloud_init_modules:
    [..]
     - set-multipath-hcheck-interval
     - update-bootlist
     - reset-rmc
     - set_hostname
     - update_hostname
     - update_etc_host
    
  • cloud_config_modules:
  • cloud_config_modules:
    [..]
      - mounts
      - chef
      - runcmd
    
  • cloud_final_modules:
  • cloud_final_modules:
      [..]
      - final-message
    

    If you do not want to use Chef at all you can modify the cloud.cfg file to fit your needs (running homemade scripts, mounting filesystems …), but my goal here is to do the job with Chef. We will try to do the minimal job with cloud-init, so the goal here is to configure cloud-init to configure the chef-client. Anyway, I also wanted to play with cloud-init and see its capabilities. The full documentation of cloud-init can be found here: https://cloudinit.readthedocs.org/en/latest/. Here are a few things I just added (the Chef part will be detailed later), but keep in mind you can use cloud-init without Chef if you want (set up your ssh keys, mount or create filesystems, create files and so on):

    write_files:
      - path: /tmp/cloud-init-started
        content: |
          cloud-init was started on this server
        permissions: '0755'
      - path: /var/log/cloud-init-sub.log
        content: |
          starting chef logging
        permissions: '0755'
    
    final_message: "The system is up, cloud-init is finished"
    

    EDIT: The IBM developer of cloud-init for AIX sent me a mail yesterday about the new support of cc_power_state. As I need to reboot my host at the end of the build, I can, with the latest version of cloud-init for AIX, use the power_state stanza. I use poweroff here as an example; use reboot … for a reboot:

    power_state:
     delay: "+5"
     mode: poweroff
     message: cloud-init mandatory reboot for sddpcm
     timeout: 5
    

    power_state1

    Rerun cloud-init for testing purposes

    You will probably want to test your cloud-init configuration before or after capturing the machine. When cloud-init is launched by the startup script, a check is performed to be sure that cloud-init has not already been run. Some "semaphore" files are created in /opt/freeware/var/lib/cloud/instance/sem to record which modules have already been executed. If you want to re-run cloud-init by hand without having to rebuild a machine, just remove the files in this directory:

    # rm -rf /opt/freeware/var/lib/cloud/instance/sem
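
    Each module leaves a semaphore file named after itself (config_chef for example, as used below); listing the directory shows what has already been run:

    # ls /opt/freeware/var/lib/cloud/instance/sem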
    

    Let’s say we just want to re-run the Chef part:

    # rm /opt/freeware/var/lib/cloud/instance/sem/config_chef
    

    To sum up here is what I want to do with cloud-init:

    1. Use the cdrom as datasource.
    2. Set the hostname and ip.
    3. Setup my chef-client.
    4. Print a final message.
    5. Do a mandatory reboot at the end of the installation.

    chef-client installation and configuration

    Before modifying the cloud.cfg file to tell cloud-init to set up the Chef client, we first have to download and install the chef-client on the AIX host we will capture later. Download the Chef client bff file at this address: https://opscode-omnibus-packages.s3.amazonaws.com/aix/6.1/powerpc/chef-12.1.2-1.powerpc.bff and install it:

    # installp -aXYgd . chef
    [..]
    +-----------------------------------------------------------------------------+
                             Installing Software...
    +-----------------------------------------------------------------------------+
    
    installp: APPLYING software for:
            chef 12.1.2.1
    [..]
    Installation Summary
    --------------------
    Name                        Level           Part        Event       Result
    -------------------------------------------------------------------------------
    chef                        12.1.2.1        USR         APPLY       SUCCESS
    chef                        12.1.2.1        ROOT        APPLY       SUCCESS
    # lslpp -l | grep -i chef
      chef                      12.1.2.1    C     F    The full stack of chef
    # which chef-client
    /usr/bin/chef-client
    

    The configuration file of the chef-client created by cloud-init will be placed in the /etc/chef directory; by default the /etc/chef directory does not exist, so you'll have to create it:

    # mkdir -p /etc/chef
    # mkdir -p /etc/chef/ohai_plugins
    

    If -like me- you are using custom ohai plugins, you have two things to do. cloud-init uses template files to build the configuration files needed by Chef. These template files are located in /opt/freeware/etc/cloud/templates. Modify the chef_client.rb.tmpl file to add a configuration line for the ohai plugin_path, and copy your ohai plugins into the /etc/chef/ohai_plugins directory:

    # tail -1 /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl
    Ohai::Config[:plugin_path] << '/etc/chef/ohai_plugins'
    # ls /etc/chef/ohai_plugins
    aixcustom.rb
    

    Add the chef stanza to /opt/freeware/etc/cloud/cloud.cfg. After this step the image is ready to be captured (check the ohai plugin configuration first if you need one); the chef-client is already installed. Put the force_install stanza to false; put the server_url and the validation_name of your Chef server and the organization; and finally put the validation RSA private key provided by your Chef server (in the example below the key has been truncated for obvious reasons; server_url and validation_name have also been replaced). As you can see below, I here tell Chef to run all the recipes defined in the aix7 role; we'll see later how to create a cookbook and recipes:

    chef:
      force_install: false
      server_url: "https://chefserver.lab.chmod666.org/organizations/chmod666"
      validation_name: "chmod666-validator"
      validation_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpQIBAAKCAQEApj/Qqb+zppWZP+G3e/OA/2FXukNXskV8Z7ygEI9027XC3Jg8
        [..]
        XCEHzpaBXQbQyLshS4wAIVGxnPtyqXkdDIN5bJwIgLaMTLRSTtjH/WY=
        -----END RSA PRIVATE KEY-----
      run_list:
        - "role[aix7]"
    
    runcmd:
      - /usr/bin/chef-client
    

    EDIT: With the latest build of cloud-init for AIX there is no need to run chef-client with the runcmd stanza. Just add exec: 1 in the chef stanza.
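
    As a sketch, and reusing the values from the stanza above, the updated configuration would then simply look like this (no runcmd anymore):

    chef:
      force_install: false
      exec: 1
      server_url: "https://chefserver.lab.chmod666.org/organizations/chmod666"
      validation_name: "chmod666-validator"
      [..]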

    To sum up: cloud-init is installed and configured to run a few actions at boot time, mainly to configure the chef-client and run it with a specific role. The chef-client is installed. The machine can now be shut down and is ready to be captured and deployed. At deployment time cloud-init will do the job of changing the ip address and hostname and configuring Chef. Chef will then retrieve the cookbooks and recipes and run them on the machine.

    If you want to use custom ohai plugins read the ohai part before capturing your machine.

    capture
    capture2

    Use chef-solo for testing

    You will have to create your own recipes. My advice is to use chef-solo to debug. The chef-solo binary file is provided with the chef-client package. It can be used without a Chef server to run and test Chef recipes:

    • Create a test recipe:
    # mkdir -p ~/chef/cookbooks/testing/recipes
    # cat  ~/chef/cookbooks/testing/recipes/test.rb
    file "/tmp/helloworld.txt" do
      owner "root"
      group "system"
      mode "0755"
      action :create
      content "Hello world !"
    end
    
  • Create a run_list with your test recipe:
  • # cat ~/chef/node.json
    {
      "run_list": [ "recipe[testing::test]" ]
    }
    
  • Create attribute file for chef-solo execution:
  • # cat  ~/chef/solo.rb
    file_cache_path "/root/chef"
    cookbook_path "/root/chef/cookbooks"
    json_attribs "/root/chef/node.json"
    
  • Run chef-solo:
  • # chef-solo -c /root/chef/solo.rb
    

    chef-solo

    cookbooks and recipes example on AIX

    Let's say you have written all your recipes using chef-solo on a test server. On the Chef server you now want to put all these recipes in a cookbook. From the workstation, create a cookbook:

    # knife cookbook create aix7
    ** Creating cookbook aix7 in /home/kadmin/.chef/cookbooks
    ** Creating README for cookbook: aix7
    ** Creating CHANGELOG for cookbook: aix7
    ** Creating metadata for cookbook: aix7
    

    In the .chef directory you can now find a directory for the aix7 cookbook. In this one you will find a directory for each type of Chef object: recipes, templates, files, and so on. This place is called the chef-repo. I strongly recommend turning this place into a git repository (by doing this you will keep track of every modification of any object in the cookbook).
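
    For example, a minimal way to turn the chef-repo into a git repository (assuming git is available on the workstation) is:

    # cd /home/kadmin/.chef/cookbooks
    # git init
    # git add aix7
    # git commit -m "initial import of the aix7 cookbook"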

    # ls /home/kadmin/.chef/cookbooks/aix7/recipes
    create_fs_rootvg.rb  create_profile_root.rb  create_user_group.rb  delete_group.rb  delete_user.rb  dns.rb  install_sddpcm.rb  install_ssh.rb  ntp.rb  ohai_custom.rb  test_ohai.rb
    # ls /home/kadmin/.chef/cookbooks/aix7/templates/default
    aixcustom.rb.erb  ntp.conf.erb  ohai_test.erb  resolv.conf.erb
    

    Recipes

    Here are a few examples of my own recipes:

    • install_ssh: the recipe mounts an nfs filesystem (nim server). The nim_server is an attribute coming from the role default attributes (we will check that later), the oslevel is an ohai attribute coming from a custom ohai plugin (we will check that later too). The openssh.license and openssh.base filesets are installed, the filesystem is unmounted, and finally the ssh service is started:
    # creating temporary directory
    directory "/var/mnttmp" do
      action :create
    end
    # mounting nim server
    mount "/var/mnttmp" do
      device "#{node[:nim_server]}:/export/nim/lppsource/#{node['aixcustom']['oslevel']}"
      fstype "nfs"
      action :mount
    end
    # installing ssh packages (openssh.license, openssh.base)
    bff_package "openssh.license" do
      source "/var/mnttmp"
      action :install
    end
    bff_package "openssh.base" do
      source "/var/mnttmp"
      action :install
    end
    # umount the /var/mnttmp directory
    mount "/var/mnttmp" do
      fstype "nfs"
      action :umount
    end
    # deleting temporary directory
    directory "/var/mnttmp" do
      action :delete
    end
    # start and enable ssh service
    service "sshd" do
      action :start
    end
    
  • install_sddpcm: the recipe mounts an nfs filesystem (nim server). The nim_server is an attribute coming from the role default attributes, the platform_version is a standard ohai attribute. The devices.fcp.disk.ibm.mpio and devices.sddpcm.71.rte filesets are installed, then the filesystem is unmounted:
  • # creating temporary directory
    directory "/var/mnttmp" do
      action :create
    end
    # mounting nim server
    mount "/var/mnttmp" do
      device "#{node[:nim_server]}:/export/nim/lpp_source/#{node['platform_version']}/sddpcm-71-2660"
      fstype "nfs"
      action :mount
    end
    # installing sddpcm packages (devices.fcp.disk.ibm.mpio, devices.sddpcm.71.rte)
    bff_package "devices.fcp.disk.ibm.mpio" do
      source "/var/mnttmp"
      action :install
    end
    bff_package "devices.sddpcm.71.rte" do
      source "/var/mnttmp"
      action :install
    end
    # umount the /var/mnttmp directory
    mount "/var/mnttmp" do
      fstype "nfs"
      action :umount
    end
    # deleting temporary directory
    directory "/var/mnttmp" do
      action :delete
    end
    
  • create_fs_rootvg: some filesystems are extended, and an /apps filesystem is created and mounted. Please note that there is no cookbook for AIX lvm for the moment, so you have to use the execute resource here, which is the only one that is not idempotent:
  • execute "hd3" do
      command "chfs -a size=1024M /tmp"
    end
    execute "hd9var" do
      command "chfs -a size=512M /var"
    end
    execute "/apps" do
      command "crfs -v jfs2 -g rootvg -m /apps -Ay -a size=1M ; chlv -n appslv fslv00"
      not_if { ::File.exists?("/dev/appslv")}
    end
    mount "/apps" do
      device "/dev/appslv"
      fstype "jfs2"
    end
    
  • ntp, ntp.conf.erb located in the template directory is copied to /etc/ntp.conf:
  • template "/etc/ntp.conf" do
      source "ntp.conf.erb"
    end
    
  • dns, resolv.conf.erb located in the template directory is copied to /etc/resolv.conf:
  • template "/etc/resolv.conf" do
      source "resolv.conf.erb"
    end
    
  • create_user_group: a user for TADDM is created:
  • user "taddmux" do
      gid 'sys'
      uid 421
      home '/home/taddmux'
      comment 'user TADDM connect SSH'
    end
    

    Templates

    In the recipes above, templates are used for ntp and dns configuration. Template files are files in which some strings are replaced by Chef attributes found in the roles, the environments, in ohai, or even directly in recipes. Here are the two files I used for dns and ntp:

    • ntp.conf.erb: the ntpserver1,2,3 attributes are found in environments (let's say I have siteA and siteB and the ntp servers are different for each site, I can define an environment for siteA and one for siteB):
    [..]
    server <%= node['ntpserver1'] %>
    server <%= node['ntpserver2'] %>
    server <%= node['ntpserver3'] %>
    driftfile /etc/ntp.drift
    tracefile /etc/ntp.trace
    
  • resolv.conf.erb, nameserver1,2,3 and namesearch are found in environments:
  • search  <%= node['namesearch'] %>
    nameserver      <%= node['nameserver1'] %>
    nameserver      <%= node['nameserver2'] %>
    nameserver      <%= node['nameserver3'] %>
    

    role assignation

    Chef roles can be used to run different Chef recipes depending on the type of server you want to post-install. You can for instance create a role for web servers in which the WebSphere recipe will be executed, and a role for database servers in which the Oracle recipe will be executed. In my case, and for the simplicity of this example, I just create one role called aix7:

    # knife role create aix7
    Created role[aix7]
    # knife role edit aix7
    {
      "name": "aix7",
      "description": "",
      "json_class": "Chef::Role",
      "default_attributes": {
        "nim_server": "nimsrv01"
      },
      "override_attributes": {
    
      },
      "chef_type": "role",
      "run_list": [
        "recipe[aix7::ohai_custom]",
        "recipe[aix7::create_fs_rootvg]",
        "recipe[aix7::create_profile_root]",
        "recipe[aix7::test_ohai]",
        "recipe[aix7::install_ssh]",
        "recipe[aix7::install_sddpcm]",
        "recipe[aix7::ntp]",
        "recipe[aix7::dns]"
      ],
      "env_run_lists": {
    
      }
    }
    

    We can see two important things here. We created an attribute specific to this role called nim_server: in all recipes and templates "node['nim_server']" will be replaced by nimsrv01 (remember the recipes above, and remember we told chef-client to run the aix7 role). We also created a run_list telling that the recipes coming from the aix7 cookbook (install_ssh, install_sddpcm, ...) should be executed on a server calling chef-client with the aix7 role.

    environments

    Chef environments can be used to separate your environments, for instance production, development, backup, or in my example, sites. In my company, depending on the site on which you are building a machine, nameservers and ntp servers will differ. Remember that we are using template files for the resolv.conf and ntp.conf files:

    knife environment show siteA
    chef_type:           environment
    cookbook_versions:
    default_attributes:
      namesearch:  lab.chmod666.org chmod666.org
      nameserver1: 10.10.10.10
      nameserver2: 10.10.10.11
      nameserver3: 10.10.10.12
      ntpserver1:  11.10.10.10
      ntpserver2:  11.10.10.11
      ntpserver3:  11.10.10.12
    description:         production site
    json_class:          Chef::Environment
    name:                siteA
    override_attributes:
    

    When chef-client is called with the -E siteA option it will replace node['namesearch'] by "lab.chmod666.org chmod666.org" in all recipes and template files.
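
    For example, to run the chef-client by hand against the siteA environment with the aix7 role (a sketch; -E and -o are standard chef-client options):

    # chef-client -E siteA -o "role[aix7]"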

    A Chef run

    When you are ok with your cookbook upload it to the Chef server:

    # knife cookbook upload aix7
    Uploading aix7           [0.1.0]
    Uploaded 1 cookbook.
    

    When chef-client is not executed by cloud-init you can run it by hand. I thought it would be interesting to put an output of a chef-client run here; you can see that files are modified, packages installed and so on ;-) :

    chef-clientrun1
    chef-clientrun2

    Ohai

    ohai is a command delivered with the chef-client. Its purpose is to gather information about the machine on which chef-client is executed. Each time chef-client runs, a call to ohai is launched. By default ohai gathers a lot of information such as the ip address of the machine, the lpar id, the lpar name, and so on. A call to ohai returns a json tree. Each element of this json tree can be accessed in Chef recipes or in Chef templates: for instance to get the lpar name, 'node['virtualization']['lpar_name']' can be used. Here is an example of a single call to ohai:

    # ohai | more
      "ipaddress": "10.244.248.56",
      "macaddress": "FA:A3:6A:5C:82:20",
      "os": "aix",
      "os_version": "1",
      "platform": "aix",
      "platform_version": "7.1",
      "platform_family": "aix",
      "uptime_seconds": 14165,
      "uptime": "3 hours 56 minutes 05 seconds",
      "virtualization": {
        "lpar_no": "7",
        "lpar_name": "s00va9940866-ada56a6e-0000004d"
      },
    

    At the time of writing this blog post there are -in my humble opinion- some attributes missing in ohai. For instance if you want to install a specific package from an lpp_source you first need to know your current oslevel (I mean the output of oslevel -s). Fortunately ohai can be extended with custom plugins and you can add whatever attributes you need.

    • In ohai 7 (the one shipped with chef-client 12) an attribute needs to be added to the Chef client.rb configuration to tell where the ohai plugins are located. Remember that the chef-client is configured by cloud-init, so you need to modify the template used by cloud-init to build the client.rb file. This one is located in /opt/freeware/etc/cloud/templates:
    # tail -1 /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl
    Ohai::Config[:plugin_path] << '/etc/chef/ohai_plugins'
    # mkdir -p /etc/chef/ohai_plugins
    
  • After this modification the machine is ready to be captured.
  • You want your custom ohai plugins to be uploaded to the chef-client machine at chef-client execution time, so here is an example of a custom ohai plugin used as a template. This one gathers the oslevel (oslevel -s), the node name, the partition name and the memory mode of the machine. These attributes are gathered with the lparstat command:
  • Ohai.plugin(:Aixcustom) do
      provides "aixcustom"
    
      collect_data(:aix) do
        aixcustom Mash.new
    
        oslevel = shell_out("oslevel -s").stdout.split($/)[0]
        nodename = shell_out("lparstat -i | awk -F ':' '$1 ~ \"Node Name\" {print $2}'").stdout.split($/)[0]
        partitionname = shell_out("lparstat -i | awk -F ':' '$1 ~ \"Partition Name\" {print $2}'").stdout.split($/)[0]
        memorymode = shell_out("lparstat -i | awk -F ':' '$1 ~ \"Memory Mode\" {print $2}'").stdout.split($/)[0]
    
        aixcustom[:oslevel] = oslevel
        aixcustom[:nodename] = nodename
        aixcustom[:partitionname] = partitionname
        aixcustom[:memorymode] = memorymode
      end
    end
    
  • The custom ohai plugin is written. Remember that you want this one to be uploaded to the machine at chef-client execution time. The new attributes created by this plugin need to be added to ohai. Here is a recipe uploading the custom ohai plugin; at the time the plugin is uploaded ohai is reloaded and the new attributes can be used in any further templates (for recipes you have no other choice than putting the custom ohai plugin in the directory before the capture):
  • cat ~/.chef/cookbooks/aix7/recipes/ohai_custom.rb
    ohai "reload" do
      action :reload
    end
    
    template "/etc/chef/ohai_plugins/aixcustom.rb" do
      notifies :reload, "ohai[reload]", :immediately
    end
    

    chef-server, chef workstation, knife

    I'll not detail here how to set up a Chef server or how to configure your Chef workstation (knife). There are plenty of good tutorials about that on the internet. Please just note that you need to use Chef server 12 if you are using Chef client 12. Here are some good links to start with.

    I had some difficulties during the configuration; here are a few tricks to know (and a minimal knife.rb sketch follows the list):

    • cacert can be found here: /opt/opscode/embedded/ssl/cert/cacert.pem
    • The Chef validation key can be found in /etc/chef/chef-validator.pem
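
    For reference, here is a minimal knife.rb sketch for the workstation; the node name, key paths and cookbook path are assumptions to adapt to your own setup (the server URL and validator name reuse the values from the chef stanza earlier in this post):

    # cat /home/kadmin/.chef/knife.rb
    node_name                "kadmin"
    client_key               "/home/kadmin/.chef/kadmin.pem"
    validation_client_name   "chmod666-validator"
    validation_key           "/home/kadmin/.chef/chmod666-validator.pem"
    chef_server_url          "https://chefserver.lab.chmod666.org/organizations/chmod666"
    cookbook_path            [ "/home/kadmin/.chef/cookbooks" ]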

    Building the machine, checking the logs

    • The write_file part was executed, the file is present in /tmp filesystem:
    # cat /tmp/cloud-init-started
    cloud-init was started on this server
    
  • The chef-client was configured, files are present in the /etc/chef directory, and looking at the log file we can see these files were created by cloud-init:
  • # ls -l /etc/chef
    total 32
    -rw-------    1 root     system         1679 Apr 26 23:46 client.pem
    -rw-r--r--    1 root     system          646 Apr 26 23:46 client.rb
    -rw-r--r--    1 root     system           38 Apr 26 23:46 firstboot.json
    -rw-r--r--    1 root     system         1679 Apr 26 23:46 validation.pem
    
    # grep chef /var/log/cloud-init-output.log
    2015-04-26 23:46:22,463 - importer.py[DEBUG]: Found cc_chef with attributes ['handle'] in ['cloudinit.config.cc_chef']
    2015-04-26 23:46:22,879 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instances/a8b8fe0d-34c1-4bdb-821c-777fca1c391f/sem/config_chef - wb: [420] 23 bytes
    2015-04-26 23:46:22,882 - helpers.py[DEBUG]: Running config-chef using lock ()
    2015-04-26 23:46:22,884 - util.py[DEBUG]: Writing to /etc/chef/validation.pem - wb: [420] 1679 bytes
    2015-04-26 23:46:22,887 - util.py[DEBUG]: Reading from /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl (quiet=False)
    2015-04-26 23:46:22,889 - util.py[DEBUG]: Read 892 bytes from /opt/freeware/etc/cloud/templates/chef_client.rb.tmpl
    2015-04-26 23:46:22,954 - util.py[DEBUG]: Writing to /etc/chef/client.rb - wb: [420] 646 bytes
    2015-04-26 23:46:22,958 - util.py[DEBUG]: Writing to /etc/chef/firstboot.json - wb: [420] 38 bytes
    
  • The runcmd part was executed:
  • # cat /opt/freeware/var/lib/cloud/instance/scripts/runcmd
    #!/bin/sh
    /usr/bin/chef-client
    
    2015-04-26 23:46:22,488 - importer.py[DEBUG]: Found cc_runcmd with attributes ['handle'] in ['cloudinit.config.cc_runcmd']
    2015-04-26 23:46:22,983 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instances/a8b8fe0d-34c1-4bdb-821c-777fca1c391f/sem/config_runcmd - wb: [420] 23 bytes
    2015-04-26 23:46:22,986 - helpers.py[DEBUG]: Running config-runcmd using lock ()
    2015-04-26 23:46:22,987 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instances/a8b8fe0d-34c1-4bdb-821c-777fca1c391f/scripts/runcmd - wb: [448] 31 bytes
    2015-04-26 23:46:25,868 - util.py[DEBUG]: Running command ['/opt/freeware/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=False, capture=False)
    
  • The final message was printed in the output of the cloud-init log file
  • 2015-04-26 23:06:01,203 - helpers.py[DEBUG]: Running config-final-message using lock ()
    The system is up, cloud-init is finished
    2015-04-26 23:06:01,240 - util.py[DEBUG]: The system is up, cloud-init is finished
    2015-04-26 23:06:01,242 - util.py[DEBUG]: Writing to /opt/freeware/var/lib/cloud/instance/boot-finished - wb: [420] 57 bytes
    

    On the Chef server you can check that the client registered itself and get details about it:

    # knife node list | grep a8b8fe0d-34c1-4bdb-821c-777fca1c391f
    a8b8fe0d-34c1-4bdb-821c-777fca1c391f
    # knife node show a8b8fe0d-34c1-4bdb-821c-777fca1c391f
    Node Name:   a8b8fe0d-34c1-4bdb-821c-777fca1c391f
    Environment: _default
    FQDN:
    IP:          10.10.208.61
    Run List:    role[aix7]
    Roles:       france_testing
    Recipes:     aix7::create_fs_rootvg, aix7::create_profile_root
    Platform:    aix 7.1
    Tags:
    

    What's next ?

    If you have a look at the Chef supermarket (the place where you can download Chef cookbooks written by the community and validated by opscode) you'll see that there are not a lot of cookbooks for AIX. I'm currently writing my own cookbook for AIX logical volume manager and filesystem creation, but there is still a lot of work to do on cookbook creation for AIX. Here is a list of cookbooks that need to be written by the community: chdev, multibos, mksysb, nim client, wpar, update_all, ldap_client .... I could continue this list but I'm sure that you have a lot of ideas. Last word: learn ruby and write cookbooks, they will be used by the community and we can finally have a good configuration management tool on AIX. With PowerVC, cloud-init and Chef support, AIX will have a full "DevOps" stack and can finally fight against Linux. As always I hope this blog post helps you to understand PowerVC, cloud-init and Chef!

    Using the Simplified Remote Restart capability on Power8 Scale Out Servers


    A few weeks ago I had to work on simplified remote restart. I'm not lucky enough yet -because of some political decisions in my company- to have access to any E880 or E870; we just have a few scale-out machines to play with (S814). For some critical applications we will need to be able to restart the virtual machine if the system hosting it has failed (hardware problem). We decided a couple of months ago not to use remote restart because it was mandatory to use a reserved storage pool device, and it was too hard to manage because of this mandatory storage. We now have enough P8 boxes to try and understand the new version of remote restart, called simplified remote restart, which does not need any reserved storage pool device. For those who want to understand what remote restart is, I strongly recommend checking my previous blog post about remote restart on two P7 boxes: Configuration of a remote restart partition. For the others, here is what I learned about the simplified version of this awesome feature.

    Please keep in mind that the FSP of the failed machine must be up to perform a simplified remote restart operation. It means that if, for instance, you lose one of your datacenters or the link between your two datacenters, you cannot use simplified remote restart to restart your partitions on the main/backup site. Simplified Remote Restart only protects you against a hardware failure of your machine. Maybe this will change in the near future, but for the moment it is the most important thing to understand about simplified remote restart.

    Updating to the latest version of firmware

    I was very surprised when I got my Power8 machines: after deploying these boxes I decided to give simplified remote restart a try, but it was just not possible. Since the Power8 Scale Out servers were released they were NOT simplified remote restart capable. The release of the SV830 firmware now enables Simplified Remote Restart on Power8 Scale Out machines. Please note that there is nothing about it in the patch note, so chmod666.org is the only place where you can get this information :-). Here is the patch note: here. Last word: you will find on the internet that you need a Power8 to use simplified remote restart. It's true, but only partially true. YOU NEED A P8 MACHINE WITH AT LEAST AN 820-LEVEL FIRMWARE.

    The first thing to do is to update your firmware to the SV830 version (on both systems participating in the simplified remote restart operation):

    # updlic -o u -t sys -l latest -m p814-1 -r mountpoint -d /home/hscroot/SV830_048 -v
    [..]
    # lslic -m p814-1 -F activated_spname,installed_level,ecnumber
    FW830.00,48,01SV830
    # lslic -m p814-2 -F activated_spname,installed_level,ecnumber
    FW830.00,48,01SV830
    

    You can check the firmware version directly from the Hardware Management Console or in the ASMI:

    fw1
    fw3

    After the firmware upgrade, verify that you now have the Simplified Remote Restart capability set to true:

    fw2

    # lssyscfg -r sys -F name,powervm_lpar_simplified_remote_restart_capable
    p720-1,0
    p814-1,1
    p720-2,0
    p814-2,1
    

    Prerequisites

    These prerequisites are true ONLY for Scale out systems:

    • To update to the firmware SV830_048 you need the latest Hardware Management Console release which is v8r8.3.0 plus MH01514 PTF.
    • Obviously on Scale out system SV830_048 is the minimum firmware requirement.
    • Minimum level of Virtual I/O Servers is 2.2.3.4 (for both source and destination systems).
    • PowerVM enterprise. (to be confirmed)

    Enabling simplified remote restart of an existing partition

    You probably want to enable simplified remote restart after an LPM migration/evacuation. After migrating your virtual machine(s) to a Power8 with the Simplified Remote Restart capability you have to enable this capability on each virtual machine. This can only be done when the machine is shut down, so you first have to stop the virtual machines (after the live partition mobility move) if you want to enable SRR; it can't be done without rebooting the virtual machine:

    • List the current partitions running on the system and check which ones are “simplified remote restart capable” (here only one is):
    # lssyscfg -r lpar -m p814-1 -F name,simplified_remote_restart_capable
    vios1,0
    vios2,0
    lpar1,1
    lpar2,0
    lpar3,0
    lpar4,0
    lpar5,0
    lpar6,0
    lpar7,0
    
  • For each lpar that is not simplified remote restart capable, change the simplified_remote_restart_capable attribute using the chsyscfg command. Please note that you can't do this using the Hardware Management Console GUI: in the latest v8r8.3.0, when enabling it through the GUI, the HMC tells you that you need a reserved storage device, which is needed by the classic Remote Restart capability and not by the simplified version. You have to use the command line! (check the screenshots below)
  • You can’t change this attribute while the machine is running:
  • gui_change_to_srr

  • You can’t do it with the GUI after the machine is shutdown:
  • gui_change_to_srr2
    gui_change_to_srr3

  • The only way to enable this attribute is to use the Hardware Management Console command line (please note in the output below that running lpars cannot be changed):
  • # for i in lpar2 lpar3 lpar4 lpar5 lpar6 lpar7 ; do chsyscfg -r lpar -m p824-2 -i "name=$i,simplified_remote_restart_capable=1" ; done
    An error occurred while changing the partition named lpar6.
    HSCLA9F8 The remote restart capability of the partition can only be changed when the partition is shutdown.
    An error occurred while changing the partition named lpar7.
    HSCLA9F8 The remote restart capability of the partition can only be changed when the partition is shutdown.
    # lssyscfg -r lpar -m p824-1 -F name,simplified_remote_restart_capable,lpar_env | grep -v vioserver
    lpar1,1,aixlinux
    lpar2,1,aixlinux
    lpar3,1,aixlinux
    lpar4,1,aixlinux
    lpar5,1,aixlinux
    lpar6,0,aixlinux
    lpar7,0,aixlinux
    

    Remote restarting

    If you try to do a live partition mobility operation back to a P7 box, or to a P8 box without the simplified remote restart capability, it will not be possible. Enabling simplified remote restart forces the virtual machine to stay on P8 boxes with the simplified remote restart capability. This is one of the reasons why most customers are not doing it:

    # migrlpar -o v -m p814-1 -t p720-1 -p lpar2
    Errors:
    HSCLB909 This operation is not allowed because managed system p720-1 does not support PowerVM Simplified Partition Remote Restart.
    

    lpm_not_capable_anymore

    On the Hardware Management Console you can see that the virtual machine is simplified remote restart capable by checking its properties:

    gui_change_to_srr4

    You can now try to remote restart your virtual machines to another server. As always, the status of the source server has to be different from Operating (Power Off, Error, Error – Dump in progress, Initializing). My advice is to validate before restarting:

    # rrstartlpar -o validate -m p824-1 -t p824-2 -p lpar1
    # echo $?
    0
    # rrstartlpar -o restart -m p824-1 -t p824-2 -p lpar1
    HSCLA9CE The managed system is not in a valid state to support partition remote restart operations.
    
    # lssyscfg -r sys -F name,state
    p824-2,Operating
    p824-1,Power Off
    # rrstartlpar -o restart -m p824-1 -t p824-2 -p lpar1
    

    By doing a remote restart operation the machine will boot automatically. You can check in the errpt that in most cases the partition ID will be changed (proving that you are on another machine):

    # errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A6DF45AA   0618170615 I O RMCdaemon      The daemon is started.
    1BA7DF4E   0618170615 P S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0618170615 I S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0618170615 I S SRC            SOFTWARE PROGRAM ERROR
    D872C399   0618170615 I O sys0           Partition ID changed and devices recreat
    

    Be very careful with the ghostdev sys0 attribute. Every remote restarted VM needs to have ghostdev set to 0 to avoid an ODM wipe (if you remote restart an lpar with ghostdev set to 1 you will lose all ODM customization):

    # lsattr -El sys0 -a ghostdev
    ghostdev 0 Recreate ODM devices on system change / modify PVID True
    
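
    If ghostdev is set to 1 on a machine you plan to protect with simplified remote restart, set it back to 0 before any remote restart operation:

    # chdev -l sys0 -a ghostdev=0
    sys0 changed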

    When the source machine is back up and running you have to clean the old definition of the remote restarted lpar by launching a cleanup operation. This will wipe the old lpar definition:

    # rrstartlpar -o cleanup -m p814-1 -p lpar1
    

    The RRmonitor (modified version)

    There is a script delivered by IBM called rrMonitor which looks at the Power System's state and, if this one is in a particular state, restarts a specific virtual machine. This script is just not usable as is because it has to be executed directly on the HMC (you need a pesh password to put the script on the HMC) and it only checks one particular virtual machine. I modified this script to ssh to the HMC and then check every lpar on the machine, not just one in particular. You can download my modified version here: rrMonitor. Here is what the script does (a rough sketch of the equivalent loop is given after the example run below):

    • Checking the state of the source machine.
    • If this one is not “Operating”, the script search for every remote restartable lpars on the machine.
    • The script is launching remote restart operations to remote restart all the partitions.
    • The script is telling the user the command to cleanup the old lpar when the source machine will be running again.
    # ./rrMonitor p814-1 p814-2 all 60 myhmc
    Getting remote restartable lpars
    lpar1 is rr simplified capable
    lpar1 rr status is Remote Restartable
    lpar2 is rr simplified capable
    lpar2 rr status is Remote Restartable
    lpar3 is rr simplified capable
    lpar3 rr status is Remote Restartable
    lpar4 is rr simplified capable
    lpar4 rr status is Remote Restartable
    Checking for source server state....
    Source server state is Operating
    Checking for source server state....
    Source server state is Operating
    Checking for source server state....
    Source server state is Power Off In Progress
    Checking for source server state....
    Source server state is Power Off
    It's time to remote restart
    Remote restarting lpar1
    Remote restarting lpar2
    Remote restarting lpar3
    Remote restarting lpar4
    Thu Jun 18 20:20:40 CEST 2015
    Source server p814-1 state is Power Off
    Source server has crashed and hence attempting a remote restart of the partition lpar1 in the destination server p814-2
    Thu Jun 18 20:23:12 CEST 2015
    The remote restart operation was successful
    The cleanup operation has to be executed on the source server once the server is back to operating state
    The following command can be used to execute the cleanup operation,
    rrstartlpar -m p814-1 -p lpar1 -o cleanup
    Thu Jun 18 20:23:12 CEST 2015
    Source server p814-1 state is Power Off
    Source server has crashed and hence attempting a remote restart of the partition lpar2 in the destination server p814-2
    Thu Jun 18 20:25:42 CEST 2015
    The remote restart operation was successful
    The cleanup operation has to be executed on the source server once the server is back to operating state
    The following command can be used to execute the cleanup operation,
    rrstartlpar -m sp814-1 -p lpar2 -o cleanup
    Thu Jun 18 20:25:42 CEST 2015
    [..]
    
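
    For those who just want the idea without downloading the script, here is a very rough sketch of the kind of loop it implements. This is not the script itself, just an illustration; it assumes passwordless ssh to the HMC as hscroot and reuses the lssyscfg and rrstartlpar commands shown above (the state check is simplified):

    #!/usr/bin/ksh
    src=p814-1 ; dst=p814-2 ; hmc=myhmc
    # get the simplified remote restart capable lpars of the source machine
    lpars=$(ssh hscroot@${hmc} "lssyscfg -r lpar -m ${src} -F name,simplified_remote_restart_capable" | awk -F, '$2 == 1 {print $1}')
    while true ; do
      # check the state of the source machine every 60 seconds
      state=$(ssh hscroot@${hmc} "lssyscfg -r sys -m ${src} -F state")
      echo "Source server state is ${state}"
      if [ "${state}" = "Power Off" -o "${state}" = "Error" ] ; then
        for lpar in ${lpars} ; do
          echo "Remote restarting ${lpar}"
          ssh hscroot@${hmc} "rrstartlpar -o restart -m ${src} -t ${dst} -p ${lpar}"
          echo "run \"rrstartlpar -m ${src} -p ${lpar} -o cleanup\" once ${src} is back in Operating state"
        done
        break
      fi
      sleep 60
    done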

    Conclusion

    As you can see, the simplified version of the remote restart feature is simpler than the normal one. My advice is to create all your lpars with the simplified remote restart attribute; it's that easy :). If you plan to LPM back to a P6 or P7 box, don't use simplified remote restart. I think this functionality will become more popular when all the old P7 and P6 boxes are replaced by P8. As always I hope it helps.

    Here are a couple of links with great documentation about Simplified Remote Restart:

    • Simplified Remote Restart Whitepaper: here
    • Original rrMonitor: here
    • Materials about the latest HMC release and a couple of videos related to Simplified Remote Restart: here

    Tips and tricks for PowerVC 1.2.3 (PVID, ghostdev, clouddev, rest API, growing volumes, deleting boot volume) | PowerVC 1.2.3 Redbook


    Writing a Redbook was one of my main goals. After working days and nights for more than 6 years on Power Systems, IBM gave me the opportunity to write a Redbook. I had been looking at the Redbook residencies page for a very long time to find the right one. As there was nothing new on AIX and PowerVM (which are my favorite topics) I decided to give the latest PowerVC Redbook a try (this Redbook is an update, but a huge one; PowerVC is moving fast). I have been a Redbook reader since I started working on AIX. Almost all Redbooks are good; most of them are the best source of information for AIX and Power administrators. I'm sure that, like me, you saw the part about becoming an author every time you read a Redbook. I can now say THAT IT IS POSSIBLE (for everyone). I'm now one of these guys and you can also become one. Just find the Redbook that fits you and apply on the Redbook webpage (http://www.redbooks.ibm.com/residents.nsf/ResIndex). I want to say a BIG thank you to all the people who gave me the opportunity to do that, especially Philippe Hermes, Jay Kruemcke, Eddie Shvartsman, Scott Vetter, Thomas R Bosthworth. In addition to these people I also want to thank my teammates on this Redbook: Guillermo Corti, Marco Barboni and Liang Xu; they are all true professionals, very skilled and open ... this was a great team! One more time, thank you guys. Last, I take the opportunity here to thank the people who believed in me since the very beginning of my AIX career: Julien Gabel, Christophe Rousseau, and JL Guyot. Thank you guys! You deserve it, stay as you are. I'm not an anonymous guy anymore.

    redbook

    You can download the Redbook at this address: http://www.redbooks.ibm.com/redpieces/pdfs/sg248199.pdf. I learned something during the writing of the Redbook and by talking to the members of the team: Redbooks are not there to tell and explain what's "behind the scenes". A Redbook cannot be too long and needs to be written in about 3 weeks, so there is no place for everything. Some topics are better suited to a blog post than to a Redbook, and Scott told me that a couple of times during the writing session. I totally agree with him. So here is this long awaited blog post. These are advanced topics about PowerVC; read the Redbook before reading this post.

    One last thanks to IBM (and just IBM) for believing in me :-). THANK YOU SO MUCH.

    ghostdev, clouddev and cloud-init (ODM wipe if using inactive live partition mobility or remote restart)

    Everybody who is using cloud-init should be aware of this. Cloud-init is only supported with AIX versions that have the clouddev attribute available on sys0. To be totally clear, at the time of writing this blog post you will be supported by IBM only if you use AIX 7.1 TL3 SP5 or AIX 6.1 TL9 SP5; all other versions are not supported by IBM. Let me explain why, and how you can still use cloud-init on older versions with a little trick. But let's first explain what the problem is:

    Let's say you have different machines, some of them using AIX 7100-03-05 and some of them using 7100-03-04, and both use cloud-init for the activation. By looking at the cloud-init code at this address here we can say that:

    • After the cloud-init installation cloud-init is:
    • Changing clouddev to 1 if sys0 has a clouddev attribute:
    # oslevel -s
    7100-03-05-1524
    # lsattr -El sys0 -a ghostdev
    ghostdev 0 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    clouddev 1 N/A True
    
  • Changing ghostdev to 1 if sys0 doesn't have a clouddev attribute:
  • # oslevel -s
    7100-03-04-1441
    # lsattr -El sys0 -a ghostdev
    ghostdev 1 Recreate ODM devices on system change / modify PVID True
    # lsattr -El sys0 -a clouddev
    lsattr: 0514-528 The "clouddev" attribute does not exist in the predefined
            device configuration database.
    

    This behavior can directly be observed in the cloud-init code:

    ghostdev_clouddev_cloudinit

    Now that we are aware of that, let's do a remote restart test between two P8 boxes. I take the opportunity here to present one of the coolest features of PowerVC 1.2.3: you can now remote restart your virtual machines directly from the PowerVC GUI if one of your hosts is in a failure state. I highly encourage you to check my previous post about this subject if you don't know how to set up remote restartable partitions: http://chmod666.org/index.php/using-the-simplified-remote-restart-capability-on-power8-scale-out-servers/:

    • Only simplified remote restart can be managed by PowerVC 1.2.3; the "normal" version of remote restart is not handled by PowerVC 1.2.3.
    • In the compute template configuration there is now a checkbox allowing you to create remote restartable partitions. Be careful: you can't go back to a P7 box without having to reboot the machine, so be sure your Virtual Machines will stay on P8 boxes if you check this option.
    • remote_restart_compute_template

    • When the machine is shutdown or there is a problem on it you can click the “Remotely Restart Virtual Machines” button:
    • rr1

    • Select the machines you want to remote restart:
    • rr2
      rr3

    • While the Virtual Machines are remote restarting, you can check the states of the VM and the state of the host:
    • rr4
      rr5

    • After the evacuation the host is in “Remote Restart Evacuated State”:

    rr6

    Let’s now check the state of our two Virtual Machines:

    • The ghostdev one (the sys0 message in the errpt indicates that the partition ID has changed AND DEVICES ARE RECREATED (ODM wipe)); there is no more ip address set on en0:
    # errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A6DF45AA   0803171115 I O RMCdaemon      The daemon is started.
    1BA7DF4E   0803171015 P S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    CB4A951F   0803171015 I S SRC            SOFTWARE PROGRAM ERROR
    D872C399   0803171015 I O sys0           Partition ID changed and devices recreat
    # ifconfig -a
    lo0: flags=e08084b,c0
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
    
  • The clouddev one (the sys0 message in the errpt indicates that the partition ID has changed); note that the errpt message does not indicate that the devices were recreated:
  • # errpt |more
    60AFC9E5   0803232015 I O sys0           Partition ID changed since last boot.
    # ifconfig -a
    en0: flags=1e084863,480
            inet 10.10.10.20 netmask 0xffffff00 broadcast 10.244.248.63
             tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
    lo0: flags=e08084b,c0
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1%1/0
             tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
    

    VSAE is designed to manage ghostdev-only OSes; cloud-init, on the other hand, is designed to manage clouddev OSes. To be perfectly clear, here is how ghostdev and clouddev work. But we first need to answer a question: why do we need to set clouddev or ghostdev to 1? The answer is pretty obvious: one of these attributes needs to be set to 1 before capturing the Virtual Machine. When you deploy a new Virtual Machine this flag is needed to wipe the ODM before reconfiguring the virtual machine with the parameters set in the PowerVC GUI (ip, hostname). In both the clouddev and ghostdev cases it is obvious that we need to wipe the ODM at machine build/deploy time; VSAE or cloud-init (using the config drive datasource) then sets the hostname and ip address previously wiped thanks to the clouddev or ghostdev attribute. This works well for a new deploy because we need to wipe the ODM in all cases, but what about an inactive live partition mobility or a remote restart operation? The Virtual Machine has moved (it is not on the same host, and not with the same lpar ID) and we need to keep the ODM as it is. Here is how it works:

    • If you are using VSAE, it manages the ghostdev attribute for you. At capture time ghostdev is set to 1 by VSAE (when you run the pre-capture script). When deploying a new VM, at activation time, VSAE sets ghostdev back to 0. Inactive live partition mobility and remote restart operations will work fine with ghostdev set to 0.
    • If you are using cloud-init on a supported system, clouddev is set to 1 at cloud-init installation time. As cloud-init does nothing with either attribute at activation time, IBM needed to find a way to avoid wiping the ODM after a remote restart operation. That's why the clouddev device was introduced: it writes a flag in the NVRAM, so when a new VM is built there is no flag in the NVRAM for this one and the ODM is wiped; when an already existing VM is remote restarted, the flag exists in the NVRAM and the ODM is not wiped. By using clouddev no post-deploy action is needed.
    • If you are using cloud-init on an unsupported system, ghostdev is set to 1 at cloud-init installation time. As cloud-init does nothing at post-deploy time, ghostdev will remain set to 1 in all cases and the ODM will always be wiped.

    cloudghost

    There is a way to use cloud-init on unsupported systems. Keep in mind that in this case you will not be supported by IBM, so do this at your own risk. To be totally honest I'm using this method in production to use the same activation engine for all my AIX versions:

    1. Pre-capture, set ghostdev to 1. What ever happens THIS IS MANDATORY.
    2. Post-capture, reboot the captured VM and set ghostdev to 0.
    3. Post-deploy, on every Virtual Machine, set ghostdev to 0. You can put this in the activation input to do the job (see also the chdev sketch after this list):
    4. #cloud-config
      runcmd:
       - chdev -l sys0 -a ghostdev=0
      
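
    Steps 1 and 2 are just a chdev on sys0 (run on the reference machine before the capture, then again on that machine after it has been rebooted post-capture):

    # before the capture
    # chdev -l sys0 -a ghostdev=1
    sys0 changed
    # after the capture, on the rebooted reference machine
    # chdev -l sys0 -a ghostdev=0
    sys0 changed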

    The PVID problem

    I realized I had this problem after using PowerVC for a while. As PowerVC images for the rootvg and other volume groups are created using Storage Volume Controller FlashCopy (in the case of an SVC configuration, but there are similar mechanisms for other storage providers), the PVIDs for both the rootvg and the additional volume groups will always be the same for each new virtual machine (all new virtual machines will have the same PVID for their rootvg, and the same PVID for each captured volume group). I contacted IBM about this and the PowerVC team told me that this behavior is totally normal and has been observed since the release of VMControl. They didn't have any issues related to this, so if you don't care about it, just do nothing and keep this behavior as it is. I recommend doing nothing about this!

    It's a shame, but most AIX administrators like to keep things as they are and don't want any changes. (In my humble opinion this is one of the reasons AIX is so outdated compared to Linux; we need a community, not narrow-minded people keeping their knowledge to themselves just to stay in their daily job routine without anything to learn.) If you are in this situation, facing angry colleagues about this particular point, you can use the solution proposed below to calm the passions of the few who do not want to change :-). This is my rant: CHANGE!

    By default, if you build two virtual machines and check the PVIDs of each one, you will notice that the PVIDs are the same:

    • Machine A:
    root@machinea:/root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00ca5fbddd55077e                    appsvg          active
    
  • Machine B:
  • root@machineb:root# lspv
    hdisk0          00c7102d2534adac                    rootvg          active
    hdisk1          00c7102d00d14660                    appsvg         active
    

    For the rootvg the PVID is always set to 00c7102d2534adac and for the appsvg the PVID is always set to 00c7102d00d14660.

    For the rootvg the solution is to change the ghostdev (only the ghostdev) to 2 and to reboot the machine. Setting ghostdev to 2 will change the PVID of the rootvg at reboot time (after the PVID is changed ghostdev is automatically set back to 0):

    # lsattr -El sys0 -a ghostdev
    ghostdev 2 Recreate ODM devices on system change / modify PVID True
    # lsattr -l sys0 -R -a ghostdev
    0...3 (+1)
    

    For the non-rootvg volume groups this is a little bit tricky but still possible: the solution is to use the recreatevg command (-d option) to change the PVID of all the physical volumes of your volume group. Do the following before rebooting the server:

    • Umount all the filesystems in the volume group on which you want to change the PVID.
    • varyoff the volume group.
    • Get the physical volumes names composing the volume group.
    • export the volume group.
    • recreate the volume group (this action will change the PVID)
    • re-import the volume group.

    Here is the shell commands doing the trick:

    # vg=appsvg
    # lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "fuser -k " $NF }' | sh
    # lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "umount " $NF }' | sh
    # varyoffvg $vg
    # pvs=$(lspv | awk -v my_vg=$vg '$3 == my_vg {print $1}')
    # exportvg $vg
    # recreatevg -y $vg -d $pvs
    # importvg -y $vg $(echo ${pvs} | awk '{print $1}')
    

    We now agree that you want to do this, but as you are a smart person you want to do it automatically using cloud-init and the activation input. There are two ways to do it: the silly way (using shell) and the noble way (using the cloud-init syntax):

    PowerVC activation engine (shell way)

    Use this short ksh script in the activation input; this is not my recommendation, but you can do it for simplicity:

    activation_input_shell
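
    I can't paste the content of the screenshot here, but as a rough sketch such a ksh activation input just reuses the commands shown above (adapt the volume group name; this is an illustration, not the exact script from the screenshot):

    #!/usr/bin/ksh
    # change the rootvg PVID at the next reboot
    chdev -l sys0 -a ghostdev=2
    # change the PVID of the appsvg physical volumes
    vg=appsvg
    lsvg -l $vg | awk '$6 == "open/syncd" && $7 != "N/A" { print "umount " $NF }' | sh
    varyoffvg $vg
    pvs=$(lspv | awk -v my_vg=$vg '$3 == my_vg {print $1}')
    exportvg $vg
    recreatevg -y $vg -d $pvs
    # reboot so that the rootvg PVID change is applied
    shutdown -Fr now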

    PowerVC activation engine (cloudinit way)

    Here is the cloud-init way. Important note: use the latest version of cloud-init, the first one I used had a problem with the cc_power_state_change.py not using the right parameters for AIX:

    activation_input_ci
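
    Here again I can only give a sketch of the idea (not the exact content of the screenshot): put the commands in a runcmd list and let the cc_power_state_change module handle the final reboot, for instance:

    #cloud-config
    runcmd:
     - chdev -l sys0 -a ghostdev=2
    power_state:
      mode: reboot
      message: rebooting to change the rootvg PVID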

    Working with REST Api

    I will not show you here how to work with the PowerVC RESTful API; I prefer to share a couple of scripts on my github account. Nice examples are often better than how-to tutorials, so check the scripts on github if you want a detailed how-to ... the scripts are well commented. Just a couple of things to say before closing this topic: the best way to work with a RESTful api is to code in python, as there are a lot of existing python libs to work with RESTful apis (httplib2, pycurl, requests). For my own understanding I prefer using the simple httplib in my scripts. I will put all my command line tools in a github repository called pvcmd (for PowerVC command line). You can download the scripts at this address, or just use git to clone the repo. One more time it is a community project, feel free to change and share anything: https://github.com/chmod666org/pvcmd:
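
    If you just want to see what a raw call looks like before diving into the scripts, here is a sketch of getting a token with curl against the standard Keystone v3 endpoint (the URL, user and project are the same as in the powervcrc example later in this post; the token comes back in the X-Subject-Token header):

    # curl -sk -D - -o /dev/null -H "Content-Type: application/json" \
        -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "root", "domain": {"name": "Default"}, "password": "mysecretpassword"}}}, "scope": {"project": {"name": "ibm-default", "domain": {"name": "Default"}}}}' \
        https://powervc.lab.chmod666.org:5000/v3/auth/tokens | grep -i x-subject-token
    X-Subject-Token: [..]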

    Growing data lun

    To be totally honest, here is what I do when I'm creating a new machine with PowerVC. My customers always need one additional volume group for applications (we will call it appsvg). I've created a multi-volume image with this volume group already in place (with a bunch of filesystems in it). As most customers ask for the volume group to be 100GB large, the capture was made with this size. Unfortunately for me we often get requests to create bigger volume groups, let's say 500 or 600GB. Instead of creating a new lun and extending the volume group, PowerVC allows you to grow the lun to the desired size. For volume groups other than the boot one you must use the RESTful API to extend the volume. To do this I've created a python script called pvcgrowlun (feel free to check the code on github): https://github.com/chmod666org/pvcmd/blob/master/pvcgrowlun. At each virtual machine creation I check if the customer needs a larger volume group and extend it using the command provided below.

    While coding this script I hit a problem using the os-extend parameter in my http request. PowerVC is not exactly using the same parameters as Openstack; if you want to code it yourself be aware of this and check the PowerVC online documentation if you are using "extended attributes" (thanks to Christine L Wang for this one):

    • In the Openstack documentation the attribute is “os-extend” link here:
    • os-extend

    • In the PowerVC documentation the attribute is “ibm-extend” link here:
    • ibm-extend

    • Identify the lun you want to grow (the script takes the name of the volume as a parameter) (I have an unpublished one to list all the volumes, tell me if you want it). In my case the volume name is multi-vol-bf697dfa-0000003a-828641A_XXXXXX-data-1, and I want to change its size from 60 to 80. This is not stated in the official PowerVC documentation, but it works for both boot and data luns.
    • Check that the current size of the lun is smaller than the desired size:
    • before_grow

    • Run the script:
    # pvcgrowlun -v multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 -s 80 -p localhost -u root -P mysecretpassword
    [info] growing volume multi-vol-bf697dfa-0000003a-828641A_XXXXX-data-1 with id 840d4a60-2117-4807-a2d8-d9d9f6c7d0bf
    JSON Body: {"ibm-extend": {"new_size": 80}}
    [OK] Call successful
    None
    
  • Check that the size has changed after the command execution:
  • aftergrow_grow

  • Don't forget to do the job in the operating system by running a "chvg -g" (check the total PPs here):
  • # lsvg vg_apps
    VOLUME GROUP:       vg_apps                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      239 (61184 megabytes)
    MAX LVs:            256                      FREE PPs:       239 (61184 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    MIRROR POOL STRICT: off
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    # chvg -g appsvg
    # lsvg appsvg
    VOLUME GROUP:       appsvg                  VG IDENTIFIER:  00f9aff800004c000000014e6ee97071
    VG STATE:           active                   PP SIZE:        256 megabyte(s)
    VG PERMISSION:      read/write               TOTAL PPs:      319 (81664 megabytes)
    MAX LVs:            256                      FREE PPs:       319 (81664 megabytes)
    LVs:                0                        USED PPs:       0 (0 megabytes)
    OPEN LVs:           0                        QUORUM:         2 (Enabled)
    TOTAL PVs:          1                        VG DESCRIPTORS: 2
    STALE PVs:          0                        STALE PPs:      0
    ACTIVE PVs:         1                        AUTO ON:        yes
    MAX PPs per VG:     32768                    MAX PVs:        1024
    LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:          no                       BB POLICY:      relocatable
    MIRROR POOL STRICT: off
    PV RESTRICTION:     none                     INFINITE RETRY: no
    DISK BLOCK SIZE:    512                      CRITICAL VG:    no
    

    My own script to create VMs

    I'm creating Virtual Machines every week, sometimes just a couple and sometimes 10 Virtual Machines in a row. We are using different storage connectivity groups, and different storage templates depending on whether the machine is in production, in development, and so on. We also have to choose the primary copy on the SVC side if the machine is in production (I am using a stretched cluster between two distant sites, so I have to choose different storage templates depending on the site where the Virtual Machine is hosted). I make mistakes almost every time I use the PowerVC GUI (sometimes I forget to put the machine name, sometimes the connectivity group). I'm a lazy guy, so I decided to code a script using the PowerVC rest api to create new machines based on a template file. We are planning to give the script to our outsourced teams to allow them to create machines without knowing what PowerVC is \o/. The script takes a file as parameter and creates the virtual machine:

    • Create a file like the one below with all the information needed for your new virtual machine creation (name, ip address, vlan, host, image, storage connectivity group, ….):
    # cat test.vm
    name:test
    ip_address:10.16.66.20
    vlan:vlan6666
    target_host:Default Group
    image:multi-vol
    storage_connectivity_group:npiv
    virtual_processor:1
    entitled_capacity:0.1
    memory:1024
    storage_template:storage1
    
  • Launch the script, the Virtual Machine will be created:
  • pvcmkvm -f test.vm -p localhost -u root -P mysecretpassword
    name: test
    ip_address: 10.16.66.20
    vlan: vlan666
    target_host: Default Group
    image: multi-vol
    storage_connectivity_group: npiv
    virtual_processor: 1
    entitled_capacity: 0.1
    memory: 1024
    storage_template: storage1
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found network vlan666 with id 5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5
    [info] found host aggregation Default Group with id 1
    [info] found storage template storage1 with id bfb4f8cc-cd68-46a2-b3a2-c715867de706
    [info] found image multi-vol with id 041d830c-8edf-448b-9892-560056c450d8
    [info] found a volume with id b3783a95-822c-4179-8c29-c7db9d060b94
    [info] found a volume with id 9f2fc777-eed3-4c1f-8a02-00c9b7c91176
    JSON Body: {"os:scheduler_hints": {"host_aggregate_id": 1}, "server": {"name": "test", "imageRef": "041d830c-8edf-448b-9892-560056c450d8", "networkRef": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5", "max_count": 1, "flavor": {"OS-FLV-EXT-DATA:ephemeral": 10, "disk": 60, "extra_specs": {"powervm:max_proc_units": 32, "powervm:min_mem": 1024, "powervm:proc_units": 0.1, "powervm:max_vcpu": 32, "powervm:image_volume_type_b3783a95-822c-4179-8c29-c7db9d060b94": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:image_volume_type_9f2fc777-eed3-4c1f-8a02-00c9b7c91176": "bfb4f8cc-cd68-46a2-b3a2-c715867de706", "powervm:min_proc_units": 0.1, "powervm:storage_connectivity_group": "npiv", "powervm:min_vcpu": 1, "powervm:max_mem": 66560}, "ram": 1024, "vcpus": 1}, "networks": [{"fixed_ip": "10.244.248.53", "uuid": "5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5"}]}}
    {u'server': {u'links': [{u'href': u'https://powervc.lab.chmod666.org:8774/v2/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'self'}, {u'href': u'https://powervc.lab.chmod666.org:8774/1471acf124a0479c8d525aa79b2582d0/servers/fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'rel': u'bookmark'}], u'OS-DCF:diskConfig': u'MANUAL', u'id': u'fc3ab837-f610-45ad-8c36-f50c04c8a7b3', u'security_groups': [{u'name': u'default'}], u'adminPass': u'u7rgHXKJXoLz'}}
    

    One of the major advantages of using this is batching Virtual Machine creation: by using the script you can create one hundred Virtual Machines in a couple of minutes. Awesome!

    Working with Openstack commands

    PowerVC is based on Openstack, so why not use the Openstack commands to work with PowerVC? It is possible, but I repeat one more time that this is not supported by IBM at all; use this trick at your own risk. I was working with IBM Cloud Manager with Openstack (ICMO), and a script setting shell variables is provided to "talk" to the ICMO Openstack. Based on the same file I created one for PowerVC. Before using any Openstack commands, create a powervcrc file that matches your PowerVC environment:

    # cat powervcrc
    export OS_USERNAME=root
    export OS_PASSWORD=mypasswd
    export OS_TENANT_NAME=ibm-default
    export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
    export OS_IDENTITY_API_VERSION=3
    export OS_CACERT=/etc/pki/tls/certs/powervc.crt
    export OS_REGION_NAME=RegionOne
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    

    Then source the powervcrc file, and you are ready to play with all Openstack commands:

    # source powervcrc
    

    You can then play with Openstack commands, here are a few nice examples:

    • List virtual machines:
    # nova list
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    | ID                                   | Name                  | Status | Task State | Power State | Networks               |
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    | dc5c9fce-c839-43af-8af7-e69f823e57ca | ghostdev0clouddev1    | ACTIVE | -          | Running     | vlan666=10.16.66.56    |
    | d7d0fd7e-a580-41c8-b3d8-d7aab180d861 | ghostdevto1cloudevto1 | ACTIVE | -          | Running     | vlan666=10.16.66.57    |
    | bf697dfa-f69a-476c-8d0f-abb2fdcb44a7 | multi-vol             | ACTIVE | -          | Running     | vlan666=10.16.66.59    |
    | 394ab4d4-729e-44c7-a4d0-57bf2c121902 | deckard               | ACTIVE | -          | Running     | vlan666=10.16.66.60    |
    | cd53fb69-0530-451b-88de-557e86a2e238 | priss                 | ACTIVE | -          | Running     | vlan666=10.16.66.61    |
    | 64a3b1f8-8120-4388-9d64-6243d237aa44 | rachael               | ACTIVE | -          | Running     |                        |
    | 2679e3bd-a2fb-4a43-b817-b56ead26852d | batty                 | ACTIVE | -          | Running     |                        |
    | 5fdfff7c-fea0-431a-b99b-fe20c49e6cfd | tyrel                 | ACTIVE | -          | Running     |                        |
    +--------------------------------------+-----------------------+--------+------------+-------------+------------------------+
    
  • Reboot a machine:
  • # nova reboot multi-vol
    
  • List the hosts:
  • # nova hypervisor-list
    +----+---------------------+-------+---------+
    | ID | Hypervisor hostname | State | Status  |
    +----+---------------------+-------+---------+
    | 21 | 828641A_XXXXXXX     | up    | enabled |
    | 23 | 828641A_YYYYYYY     | up    | enabled |
    +----+---------------------+-------+---------+
    
  • Migrate a virtual machine (run a live partition mobility operation):
  • # nova live-migration ghostdevto1cloudevto1 828641A_YYYYYYY
    
    • Set a server in maintenance mode and evacuate it, moving all its partitions to another host:
  • # nova maintenance-enable --migrate active-only --target-host 828641A_XXXXXX 828641A_YYYYYYY
    
  • Virtual Machine creation (output truncated):
  • # nova boot --image 7100-03-04-cic2-chef --flavor powervm.tiny --nic net-id=5fae84a7-b463-4a1a-b4dd-9ab24cdb66b5,v4-fixed-ip=10.16.66.51 novacreated
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    | Property                            | Value                                                                                                                                            |
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    | OS-DCF:diskConfig                   | MANUAL                                                                                                                                           |
    | OS-EXT-AZ:availability_zone         | nova                                                                                                                                             |
    | OS-EXT-SRV-ATTR:host                | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | -                                                                                                                                                |
    | OS-EXT-SRV-ATTR:instance_name       | novacreated-bf704dc6-00000040                                                                                                                    |
    | OS-EXT-STS:power_state              | 0                                                                                                                                                |
    | OS-EXT-STS:task_state               | scheduling                                                                                                                                       |
    | OS-EXT-STS:vm_state                 | building                                                                                                                                         |
    | accessIPv4                          |                                                                                                                                                  |
    | accessIPv6                          |                                                                                                                                                  |
    | adminPass                           | PDWuY2iwwqQZ                                                                                                                                     |
    | avail_priority                      | -                                                                                                                                                |
    | compliance_status                   | [{"status": "compliant", "category": "resource.allocation"}]                                                                                     |
    | cpu_utilization                     | -                                                                                                                                                |
    | cpus                                | 1                                                                                                                                                |
    | created                             | 2015-08-05T15:56:01Z                                                                                                                             |
    | current_compatibility_mode          | -                                                                                                                                                |
    | dedicated_sharing_mode              | -                                                                                                                                                |
    | desired_compatibility_mode          | -                                                                                                                                                |
    | endianness                          | big-endian                                                                                                                                       |
    | ephemeral_gb                        | 0                                                                                                                                                |
    | flavor                              | powervm.tiny (ac01ba9b-1576-450e-a093-92d53d4f5c33)                                                                                              |
    | health_status                       | {"health_value": "PENDING", "id": "bf704dc6-f255-46a6-b81b-d95bed00301e", "value_reason": "PENDING", "updated_at": "2015-08-05T15:56:02.307259"} |
    | hostId                              |                                                                                                                                                  |
    | id                                  | bf704dc6-f255-46a6-b81b-d95bed00301e                                                                                                             |
    | image                               | 7100-03-04-cic2-chef (96f86941-8480-4222-ba51-3f0c1a3b072b)                                                                                      |
    | metadata                            | {}                                                                                                                                               |
    | name                                | novacreated                                                                                                                                      |
    | operating_system                    | -                                                                                                                                                |
    | os_distro                           | aix                                                                                                                                              |
    | progress                            | 0                                                                                                                                                |
    | root_gb                             | 60                                                                                                                                               |
    | security_groups                     | default                                                                                                                                          |
    | status                              | BUILD                                                                                                                                            |
    | storage_connectivity_group_id       | -                                                                                                                                                |
    | tenant_id                           | 1471acf124a0479c8d525aa79b2582d0                                                                                                                 |
    | uncapped                            | -                                                                                                                                                |
    | updated                             | 2015-08-05T15:56:02Z                                                                                                                             |
    | user_id                             | 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                                                                 |
    +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
    
    

    LUN order, remove a boot lun

    If you are moving to PowerVC you will probably need to migrate existing machines into your PowerVC environment. One of my customers asked me to move his machines from old boxes using vscsi to new PowerVC-managed boxes using NPIV. I am doing it with the help of an SVC on the storage side. Instead of creating the Virtual Machine profile on the HMC and then doing the zoning and masking on the Storage Volume Controller and on the SAN switches, I decided to let PowerVC do the job for me. Unfortunately PowerVC can't just "carve" a Virtual Machine: if you want to do so you have to build a complete Virtual Machine (rootvg included). This is what I am doing. During the migration process I have to replace the lun created by PowerVC with the lun used for the migration …. and finally delete the PowerVC-created boot lun. There is a trick to know if you want to do this:

    • Let's say the lun created by PowerVC is the one named "volume-clouddev-test…." and the original rootvg is named "good_rootvg". The Virtual Machine is booted on the "good_rootvg" lun and I want to remove the "volume-clouddev-test…." one:
    • root1

    • You first have to click the “Edit Details” button:
    • root2

    • Then toggle the boot set to "YES" for the "good_rootvg" lun and click move up (the rootvg must be set to order 1; this is mandatory because the lun at order 1 can't be deleted):
    • root3

    • Toggle the boot set to “NO” for the PowerVC created rootvg:
    • root4

    • If you are trying to detach the volume in first position you will get an error:
    • root5

    • When the order is ok, you can detach and delete the lun created by PowerVC:
    • root6
      root7

    Conclusion

    There are always good things to learn about PowerVC and related AIX topics. Tell me if these tricks are useful for you and I will continue to write posts like this one. You don't need to understand all these details to work with PowerVC, and most customers don't, but I'm sure you prefer to understand what is going on "behind the scenes" instead of just clicking a nice GUI. I hope this helps you to better understand what PowerVC is made of. And don't be shy, share your tricks with me. Next: more to come about Chef! Up the irons!

    Updating AIX TL and SP using Chef


    Creating something to automate the update of a service pack or a technology level has always been a dream that never came true. You can trust me: almost every customer I know has tried to make that dream come true. Different customers, same story everywhere: they tried to build something and then tripped up in a miserable way. A constant in those stories is that the decision is taken by someone who does not understand that AIX cannot be managed like a workstation or any other OS (who said Windows). A good example of that is an IBM tool (and you know that I'm an IBM fan) called BigFix/TEM (Tivoli Endpoint Manager): I'm not an expert about TEM (so maybe I am wrong), but you can use it to update your Windows OS, your Linux, your AIX and even your iPhones or Android devices. LET ME LAUGH! How can anyone think that updating an iPhone works the same way as updating an AIX? A good joke! (To be clear, I always have and always will support IBM, but my role is also to say what I think.) Another good example is the use of IBM Systems Director (unfortunately … or fortunately, this one has been withdrawn a couple of days ago). I tried it myself a few years ago (you can check this post). Systems Director was, in my humble opinion, the least bad solution to update an AIX or a Virtual I/O Server in an automated way. So how are we going to do this in a world that keeps asking us to do more with fewer people? I had to find a solution a few months ago to update more than 700 hosts from AIX 6.1 to AIX 7.1; the job was to create something that anybody could launch without knowing anything about AIX (one more time, who can even think this is possible?). I tried writing scripts to automate nimadm and I'm pretty happy with this solution (almost 80% were ok without any errors, but there were tons of prerequisites before launching the scripts and we faced some inevitable problems (nimsh errors, sendmail configuration, broken filesets) forcing the AIX L3 team to fix tons of migrations by hand).

    As everybody knows, I have been working on Chef for a few months now, and this can be the answer to what our world is asking for today: replacing hundreds of people by a single man launching a magical thing that can do everything, without knowing anything about anything, and saving money! This is obviously ironic, but unfortunately it is the reality of what happens today in France. "Money" and "resources" rule everything, without any plan for the future (to be clear, I'm talking in general terms here; nothing in this reflects what's going on in my place). It is like it is, and as a good soldier I'm going to give you solutions to face the reality of this harsh world. But now it's action time! I don't want to be too pessimistic, but this is unfortunately the reality of what is happening today, and my anger about it only reflects the fact that I'm living in fear: the fear of becoming bad, or the fear of doing a job I really don't like. I think I have to find a solution to this problem. The picture below is clear enough to give you a good example of what I'm trying to do with Chef.

    CF8j9_dWgAAOuyC

    How do we update machines

    I'm not here to teach you how to update a service pack or a technology level (I'm sure everybody knows that), but to do it in an automated way we need to talk about the method and identify each step needed to perform an update. As there is always more than one way to do it, I have identified three ways to update a machine: the multibos way, the nimclient way and finally the alt_disk_copy way. To be able to update using Chef we obviously need an available provider for each method (you could do this with the execute resource, but we're here to have fun and to learn some new things). So we need one provider capable of managing multibos, one capable of managing nimclient, and one capable of managing alt_disk_copy. All three providers are available now and can be used to write recipes doing what is necessary to update a machine. Obviously there are also pre-update and post-update steps (removing efixes, checking filesets). Let's identify the required steps first (a rough sketch of the plain AIX commands behind them follows the list):

    • Verify with lppchk the consistency of all installed packages.
    • Remove any installed efixes (using emgr provider)
    • The multibos way:
      • You don’t need to create a backup of the rootvg using the multibos way.
      • Mount the SP or TL directory from the NIM server (using Chef mount resource).
      • Create the multibos instance and update using the remote mounted directory (using multibos resource).
    • The nimclient way:
      • Create a backup of your rootvg (using the altdisk resource).
      • Use nimclient to run a cust operation (using niminit,nimclient resource).
    • The alt_disk_copy way:
      • You don't need to create a backup of the rootvg when using the alt_disk_copy way.
      • Mount the SP or TL directory from the NIM server (using Chef mount).
      • Create the altinst_rootvg volume group and update it using the remote mounted directory (using altdisk provider).
    • Reboot the machine.
    • Remove any unwanted bos, old_rootvg.
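
    For reference, here is roughly what these steps look like as plain AIX commands (the disk name, fix label and lpp_source name are only examples taken from my environment); the Chef resources described below wrap them for you:

    lppchk -vm3                           # check fileset consistency
    emgr -l                               # list installed interim fixes
    emgr -r -L IV75031s5a                 # remove one of them by label
    installp -c all                       # commit applied filesets
    alt_disk_copy -d hdisk3               # clone the rootvg to a free disk
    nimclient -o cust -a lpp_source=7100-03-05-1524-lpp_source -a fixes=update_all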

    Reminder where to download the AIX Chef cookbook:

    Before trying to do all these steps in a single way let’s try to use the resources one by one to understand what each one is doing.

    Fixes installation

    This one is simple and allows you to install or remove fixes on your AIX machine. The examples below show how to do that with two Chef recipes: one for installing and the other one for removing! Super easy.

    Installing fixes

    In the recipe, provide all the fixes names in an array and specify the name of the directory in which the filesets are (this can be an NFS mount point if you want to). Please note that I'm using the cookbook_file resource to download the fixes; this resource allows you to download a file directly from the cookbook (so from the Chef server). Imagine using this single recipe to install a fix on all your machines. Quite easy ;-)

    directory "/var/tmp/fixes" do
      action :create
    end
    
    cookbook_file "/var/tmp/fixes/IV75031s5a.150716.71TL03SP05.epkg.Z" do
      source 'IV75031s5a.150716.71TL03SP05.epkg.Z'
      action :create
    end
    
    cookbook_file "/var/tmp/fixes/IV77596s5a.150930.71TL03SP05.epkg.Z" do
      source 'IV77596s5a.150930.71TL03SP05.epkg.Z'
      action :create
    end
    
    aix_fixes "installing fixes" do
      fixes ["IV75031s5a.150716.71TL03SP05.epkg.Z", "IV77596s5a.150930.71TL03SP05.epkg.Z"]
      directory "/var/tmp/fixes"
      action :install
    end
    
    directory "/var/tmp/fixes" do
      recursive true
      action :delete
    end
    

    emgr1

    Removing fixes

    The recipe is almost the same but with the remove action instead of the install action. Please note that you can specify which fixes to remove, or use the keyword all to remove all the installed fixes (in the case of our recipe to update our servers we will use "all", as we want to remove all fixes before launching the update).

    aix_fixes "remove fixes IV75031s5a and IV77596s5a" do
      fixes ["IV75031s5a", "IV77596s5a]
      action :remove
    end
    
    aix_fixes "remove all fixes" do
      fixes ["all"]
    end
    

    emgr2

    Alternate disks

    In most AIX shops I have seen, the way to back up your system before doing anything is to create an alternate disk using the alt_disk_copy command. Sometimes, in places where sysadmins love their job, this disk is updated on the go to do a TL or SP upgrade. The altdisk resource I've coded for Chef takes care of this. I'll not detail every available action with examples and will focus on create and customize:

    • create: This action creates an alternate disk; we will detail the attributes in the next section.
    • cleanup: Clean up the alternate disk (remove it).
    • rename: Rename the alternate disk.
    • sleep: Put the alternate disk to sleep (umount every /alt_inst/* filesystem and varyoff the volume group).
    • wakeup: Wake up the alternate disk (varyon the volume group and mount every filesystem).
    • customize: Run a cust operation (the current resource is coded to update the alternate disk with all the filesets present in a given directory).

    Creation

    The alternate disk create action creates an alternate disk and helps you find an available disk for this creation. In any case only free disks will be chosen (disks with no PVID and no volume group defined). Different types are available to choose the disk on which the alternate disk will be created:

    • Size: If type is size, a disk with the exact same size as the value attribute will be used.
    • Name: If type is name, the disk whose name matches the value attribute will be used.
    • Auto: In auto mode the available values for value are bigger and equals. If bigger is chosen, the first disk found with a size bigger than the current rootvg size will be used. If equals is chosen, the first disk found with a size equal to the current rootvg size is used.
    aix_altdisk "cloning rootvg by name" do
      type :name
      value "hdisk3"
      action :create
    end
    
    aix_altdisk "cloning rootvg by size 66560" do
      type :size
      value "66560"
    end
    
    aix_altdisk "removing old alternates" do
      action :cleanup
    end
    
    aix_altdisk "cloning rootvg" do
      type :auto
      value "bigger"
      action :create
    end
    

    altdisk1
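
    Since only free disks (no PVID, no volume group) are candidates, a quick way to see what the resource could pick on a node is a plain lspv check. This is just an illustrative shell one-liner, not part of the cookbook, and hdisk3 is only an example:

    # candidate disks for the alternate rootvg: no PVID and no volume group
    lspv | awk '$2 == "none" && $3 == "None"'
    # size (in MB) of one of the candidates, useful with the size type
    bootinfo -s hdisk3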

    Customization

    The customize action will update the previously created alternate disk with the filesets present in an NFS-mounted directory (from the NIM server). Please note in the recipe below that we are mounting the directory over NFS. The node[:nim_server] attribute of the node tells which nim server will be mounted; for instance you can define one nim server for the production environment and another one for the development environment.

    # mounting /mnt
    mount "/mnt" do
      device "#{node[:nim_server]}:/export/nim/lpp_source"
      fstype 'nfs'
      action :mount
    end
    
    # updating the current disk
    aix_altdisk "altdisk_update" do
      image_location "/mnt/7100-03-05-1524"
      action :customize
    end
    
    mount "/mnt" do
      action :umount
    end
    

    altdisk_cust

    niminit/nimclient

    The niminit and nimclient resources are used to register the nim client to the nim master and then to run nimclient operations from the client. In my humble opinion this is the best way to do the update at the time of writing this blog post. One cool thing is that you can specify on which adapter the nimclient will be configured by using some ohai attributes. It's an elegant way to do it, and one more time it shows you the power of Chef ;-) . Let's start with some examples:

    niminit

    aix_niminit node[:hostname] do
      action :remove
    end
    
    aix_niminit node[:hostname] do 
      master "nimcloud"
      connect "nimsh"
      pif_name node[:network][:default_interface]
      action :setup
    end
    

    nimclient1

    nimclient

    nimclient can first be used to install some filesets you may need. The provider is smart enough to choose the right lpp_source for you. Please note that you will need lpp_sources with a specific naming convention if you want to use this feature. To find the next/latest available sp/tl the provider checks the current oslevel of the machine and compares it with the lpp_sources available on your nim server. The naming convention needed is $(oslevel -s)-lpp_source (ie. 7100-03-05-1524-lpp_source); the same principle applies to the spot when you need to use one:

    $ lsnim -t lpp_source | grep 7100
    7100-03-00-0000-lpp_source             resources       lpp_source
    7100-03-01-1341-lpp_source             resources       lpp_source
    7100-03-02-1412-lpp_source             resources       lpp_source
    7100-03-03-1415-lpp_source             resources       lpp_source
    7100-03-04-1441-lpp_source             resources       lpp_source
    7100-03-05-1524-lpp_source             resources       lpp_source
    

    If your nim resources name are ok the lpp_source attribute can be:

    • latest_sp: the latest available service pack.
    • next_sp: the next available service pack.
    • latest_tl: the latest available technology level.
    • next_tl: the next available technology level.
    • If you do not want to rely on this naming convention you can still specify the name of the lpp_source by hand.
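
    As a quick illustration of the naming convention (just a shell one-liner; the provider does this matching for you behind the scenes):

    # name of the lpp_source matching the level currently running on the machine
    echo "$(oslevel -s)-lpp_source"
    7100-03-04-1441-lpp_source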

    Here are a few examples to install packages:

    aix_nimclient "installing filesets" do
      installp_flags "aXYg"
      lpp_source "7100-03-04-1441-lpp_source"
      filesets ["openssh.base.client","openssh.base.server","openssh.license"]
      action :cust
    end
    
    aix_nimclient "installing filesets" do
      installp_flags "aXYg"
      lpp_source "7100-03-04-1441-lpp_source"
      filesets ["bos.compat.cmds", "bos.compat.libs"]
      action :cust
    end
    
    aix_nimclient "installing filesets" do
      installp_flags "aXYg"
      lpp_source "7100-03-04-1441-lpp_source"
      filesets ["Java6_64.samples"]
      action :cust
    end
    

    nimclient2

    Please note that some filesets were already installed and the resource did not converge because of that ;-) . Let’s now try to update to the latest service pack:

    aix_nimclient "updating to latest sp" do
      installp_flags "aXYg"
      lpp_source "latest_sp"
      fixes "update_all"
      action :cust
    end
    

    nimclient3

    Tadam! The machine was updated from 7100-03-04-1441 to 7100-03-05-1524 using a single recipe and without even knowing which service pack was available for the update!

    multibos

    I really like the multibos way and I don't know why so few people are using it today. Anyway, I know some customers who only work this way, so I thought it was worth working on a multibos resource. Here is a nice recipe creating a bos and updating it.

    # creating dir for mount
    directory "/var/tmp/mnt" do
      action :create
    end
    
    # mounting /mnt
    mount "/var/tmp/mnt" do
      device "#{node[:nim_server]}:/export/nim/lpp_source"
      fstype 'nfs'
      action :mount
    end
    
    # removing standby multibos
    aix_multibos "removing standby bos" do
      action :remove
    end
    
    # create multibos and update it
    aix_multibos "creating bos" do
      action :create
    end
    
    aix_multibos "update bos" do
      update_device "/var/tmp/mnt/7100-03-05-1524"
      action :update
    end
    
    # unmount /mnt
    mount "/var/tmp/mnt" do
      action :umount
    end
    
    # deleting temp directory
    directory "/var/tmp/mnt" do
      action :delete
    end
    

    multibos1
    multibos2

    Full recipes for updates

    Let's now write a big recipe doing all the things we need for an update. Remember that if one resource fails, the recipe stops by itself. For instance you'll see in the recipe below that I'm running an "lppchk -vm3": if it returns something other than 0, the resource fails and the recipe fails. This is obviously the expected behavior; it seems reasonable not to continue if there is a problem. So to sum up, here are all the steps this recipe performs: check fileset consistency, remove all fixes, commit filesets, create an alternate disk, configure the nimclient, run the update, deallocate the resources.

    # if lppchk -vm return code is different
    # than zero recipe will fail
    # no guard needed here
    execute "lppchk" do
      command 'lppchk -vm3'
    end
    
    # removing any efixes
    aix_fixes "remvoving_efixes" do
      fixes ["all"]
      action :remove
    end
    
    # committing filesets
    # no guard needed here
    execute 'commit' do
      command 'installp -c all'
    end
    
    # cleaning existing altdisk
    aix_altdisk "cleanup alternate rootvg" do
      action :cleanup
    end
    
    # creating an alternate disk using the
    # first disk bigger than the actual rootvg
    # bootlist to false as this disk is just a backup copy
    aix_altdisk "altdisk_by_auto" do
      type :auto
      value "bigger"
      change_bootlist true
      action :create
    end
    
    # nimclient configuration
    aix_niminit node[:hostname] do
      master "nimcloud"
      connect "nimsh"
      pif_name "en1"
      action :setup
    end
    
    # update to latest available tl/sp
    aix_nimclient "updating to latest sp" do
      installp_flags "aXYg"
      lpp_source "latest_sp"
      fixes "update_all"
      action :cust
    end
    
    # dealloacate resource
    aix_nimclient "deallocating resources" do
      action :deallocate
    end
    

    How about a single point of management “knife ssh”, “pushjobs”

    Chef is and was designed on a pull model: the client asks the server for the recipes and cookbooks and then executes them. This is the role of the chef-client. In a Linux environment people often run the client in daemonized mode: the client wakes up at a regular interval and runs (so every change to the cookbooks is applied by the client). I'm almost sure that every AIX shop will be against this method because it is dangerous. If you are doing that, run the change first in the test environment, then in dev, and finally in production. To be honest this is not the model I want to build where I am working: for some actions (like updates) we want a push model. Chef ships with a feature called push jobs. Push jobs is a way to run jobs like "execute the chef-client" from your knife workstation; unfortunately push jobs needs a plugin on the chef-client side and this one is only available on Linux …. not yet on AIX. Anyway we have an alternative: the knife ssh plugin. This plugin, included in knife by default, allows you to run commands on the nodes over ssh. Even better, if you already have an ssh gateway with key sharing enabled, knife ssh can use this gateway to communicate with the clients. Using knife ssh you'll have the possibility to say "run the chef-client on all my AIX 6.1 nodes" or "run this recipe installing this fix on all my AIX 7.1 nodes"; the possibilities are endless. One last note about knife ssh: it creates tunnels through your ssh gateway to communicate with the nodes, so if you use a shared key you have to copy the private key to the knife workstation (it took me some time to understand that). Here are some examples:
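
    Before the screenshots, here is a minimal sketch of what such commands could look like. The node search query, the user and the ssh gateway are only placeholders and assume your ohai attributes and gateway are set up like mine:

    # check the oslevel of every AIX 7.1 node, going through the ssh gateway
    knife ssh "platform:aix AND platform_version:7.1" "oslevel -s" -x root --ssh-gateway padmin@sshgateway
    # run the chef-client (and so the update recipe) on two given nodes
    knife ssh "name:aixnode1 OR name:aixnode2" "chef-client" -x root --ssh-gateway padmin@sshgateway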

    knifessh

    • On two nodes check the current os level:
    • ssh1

    • Run the update with Chef:
    • update3

    • Alternates disk have been created:
    • update4

    • Both systems are up to date:
    • update5

    Conclusion

    I think this blog post helped you to better understand Chef and what Chef is capable of. We are still at the very beginning of the Chef cookbook and I'm sure plenty of new things (recipes, providers) will come in the next few months. Try it by yourself and I'm sure you'll like the way it works. I must admit that it is difficult to learn and to start with, but if you are doing this right you'll get the benefit of an automation tool working on AIX … and honestly AIX needs an automation tool. I'm almost sure it will be Chef (in fact we have no other choice). Help us to write post-install recipes, update recipes and any other recipes you can think about. We need your help and it is happening now! You have the opportunity to be a part of this, a part of something new that will help AIX in the future. We don't want a dying OS; Chef will give AIX the opportunity to be an OS with a fully working automation tool. Go give it a try now!

    IBM Technical University for PowerSystems 2015 – Cannes (both sessions files included)


    I have been traveling the world since my first IBM Technical University for Power Systems in Dublin (four years ago as far as I remember). I had the chance to be in Budapest last year and in Cannes this year (a little bit less fun for a French guy than Dublin or Budapest), but in a different way: this year I had the opportunity to be a speaker for two sessions (and two repeats), thanks to the kindness of Alex Abderrazag (thank you for trusting me, Alex). My first plan was to go to Tokyo for the Openstack summit to talk about PowerVC, but unfortunately I was not able to make it because of confidentiality issues with my current company (the goal was to be a customer reference for PowerVC). I didn't realize that creating two sessions from scratch on two pretty new topics would be so hard for me. I thought it would take me a couple of hours for each one, but it took me so many hours that I am now truly impressed by the people who do this as their daily job ;-) . Something that took me even more time than creating the slides was the preparation of these two sessions (speaker notes, practicing (special thanks here to the people who helped me practice the sessions, especially the fantastic Bill Miller ;-) ) and so on …). One last thing I didn't realize is that you have to manage your stress. As it was my first time at such a big event I can assure you that I was super stressed. One funny thing about the stress is that I didn't have any stress anymore just one hour before the session. Before that moment I had to find a way to deal with it … and I realized that I wasn't stressed because of the sessions but because I had to speak English in front of so many people (a super tricky thing to do for a shy French guy, trust me!). My first sessions (on both topics) were full (no more chairs available in the room) and the repeats were ok too, so I think it went well and I was not so bad at it ;-) .

    IMG_20151104_233030

    I want to thank here all the people who helped me do this. Philippe Hermes (best pre-sales in France ;-) ) for believing in me and helping me to do it (re-reading my PowerPoint, and taking care of me during the event). Alex Abderrazag for allowing me to do it. Nigel Griffiths for re-reading the PowerVC session and giving me a couple of tips and tricks about being a speaker. Bill Miller and Alain Lechevalier for the rehearsal of both sessions, and finally Rosa Davidson (she gave me the desire to do this). I'm not forgetting Jay Kruemcke who gave me some IBM shirts for these sessions (and also for a lot of other things). Sorry to those I may have forgotten.

    Many people asked me to share my PowerPoint files; you will find both of them below in this post. Here are the two presentations:

    • PowerVC for PowerVM deep dive – Tips & Tricks.
    • Using Chef Automation on AIX.

    PowerVC for PowerVM deep dive – Tips & Tricks

    This session is for advanced PowerVC users. You'll find a lot of tips and tricks allowing you to customize your PowerVC. More than just a couple of tips and tricks, you'll also find in this session how PowerVC works (images, activation, cloud-init, and so on). If you are not a PowerVC user this session can be a little bit difficult for you, but these tips and tricks are the lessons I learned from the field using PowerVC in a production environment:

    Using Chef Automation on AIX

    This session gives you all the basics to understand what Chef is and what you can do with this tool. You'll also find examples of how to update a service pack or a technology level on AIX using Chef. Good examples of using Chef for post-installation tasks and how to use it with PowerVC are also provided in this session.

    Conclusion

    I hope you enjoyed the sessions if you were at Cannes this year. On my side I really enjoyed doing this, it was a very good experience for me and I hope I'll have the opportunity to do it again. Feel free to tell me if you want to see me at future technical events like this one. The next step is now to do something at Edge … not so sure this dream will come true any time soon ;-) .

    A first look at SRIOV vNIC adapters


    I have the chance to participate in the current Early Shipment Program (ESP) for Power Systems, especially the software part. One of my tasks is to test a new feature called SRIOV vNIC. For those who do not know anything about SRIOV, this technology is comparable to LHEA except that it is based on an industry standard (and has a couple of other features). By using an SRIOV adapter you can divide a physical port into what we call Virtual Functions (or Logical Ports) and map a Virtual Function to a partition. You can also set Quality Of Service on these Virtual Functions: at creation time you set up the Virtual Function to take a certain percentage of the physical port. This can be very useful if you want to be sure that your production server will always have a guaranteed bandwidth, instead of using a Shared Ethernet Adapter where all client partitions compete for the bandwidth. Customers also use SRIOV adapters for performance purposes: as nothing goes through the Virtual I/O Server, the latency added by this extra hop is eliminated and CPU cycles are saved on the Virtual I/O Server side (a Shared Ethernet Adapter consumes a lot of CPU cycles). If you are not aware of what SRIOV is I encourage you to check the IBM Redbook about it (http://www.redbooks.ibm.com/abstracts/redp5065.html?Open). Unfortunately you can't move a partition with Live Partition Mobility if it has a Virtual Function assigned to it. Using vNICs allows you to use SRIOV through the Virtual I/O Servers and enables the possibility to move your partition even if you are using an SRIOV logical port. The best of both worlds: performance/QoS and virtualization. Is this the end of the Shared Ethernet Adapter?

    SRIOV vNIC, what’s this ?

    Before talking about the technical details it is important to understand what vNICs are. When I'm explaining this to newbies I often refer to NPIV: imagine something similar to NPIV but for the network part. By using an SRIOV vNIC:

    • A Virtual Function (SRIOV Logical Port) is created and assigned to the Virtual I/O Server.
    • A vNIC adapter is created in the client partition.
    • The Virtual Function and the vNIC adapter are linked (mapped) together.
    • This is a one to one relationship between a Virtual Function and a vNIC (like a vfcs adapter is a one to one relationship between your vfcs and the physical fiber channel adapter).

    On the image below the vNIC lpars are the "yellow" ones. You can see that the SRIOV adapter is divided into different Virtual Functions and that some of them are mapped to the Virtual I/O Servers. The relationship between the Virtual Function and the vNIC is handled by a vnicserver (a special Virtual I/O Server device).
    vNIC

    One of the major advantages of using vNIC is that you eliminate the need for the Virtual I/O Server in the data path:

    • The network data flow goes directly between the partition memory and the SRIOV adapter: there is no data copy passing through the Virtual I/O Server, which eliminates the CPU cost and the latency of that copy. This is achieved by LRDMA. Pretty cool!
    • The vNIC inherits the bandwidth allocation of the Virtual Function (QoS). If the VF is configured with a capacity of 2% the vNIC will also have this capacity.
    • vNIC2

    vNIC Configuration

    Before checking all the details of how to configure an SRIOV vNIC adapter you have to check the prerequisites. As this is a new feature you will need the latest level of …. everything. My advice is to stay as up to date as possible.

    vNIC Prerequisites

    These outputs are taken from the Early Shipment Program; all of this may change at the GA release:

    • Hardware Management Console v840:
    # lshmc -V
    lshmc -V
    "version= Version: 8
     Release: 8.4.0
     Service Pack: 0
    HMC Build level 20150803.3
    ","base_version=V8R8.4.0
    "
    
  • Power 8 only, firmware 840 at least (both enterprise and scale out systems):
  • firmware

  • AIX 7.1TL4 or AIX 7.2:
  • # oslevel -s
    7200-00-00-0000
    # cat /proc/version
    Oct 20 2015
    06:57:03
    1543A_720
    @(#) _kdb_buildinfo unix_64 Oct 20 2015 06:57:03 1543A_720
    
  • Obviously, at least one SRIOV capable adapter!

    Using the HMC GUI

    The configuration of a vNIC is done at the partition level and is only available in the enhanced version of the GUI. Select the virtual machine on which you want to add the vNIC: in the Virtual I/O tab you'll see a new Virtual NICs section. Click on "Virtual NICs" and a new panel will open with a new button called "Add Virtual NIC"; just click it to add a Virtual NIC:

    vnic_n1
    vnic_conf2

    All the SRIOV capable ports will be displayed on the next screen. Choose the SRIOV port you want: a Virtual Function will be created on it, and you don't have to do anything more, as the creation of a vNIC automatically creates the Virtual Function, assigns it to the Virtual I/O Server and does the mapping to the vNIC for you. Then choose the Virtual I/O Server that will be used for this vNIC (the vnicserver will be created on this Virtual I/O Server; don't worry, we will talk about vNIC redundancy later in this post) and the Virtual NIC Capacity, which is the percentage of the physical SRIOV port that will be dedicated to this vNIC (it has to be a multiple of 2, and be careful: it can't be changed afterwards, you'll have to delete your vNIC to redo the configuration):

    vnic_conf3

    The "Advanced Virtual NIC Settings" allow you to choose the Virtual NIC Adapter ID, choose a MAC Address, and configure the vlan restrictions and vlan tagging. In the example below I'm configuring my Virtual NIC in vlan 310:

    vnic_conf4
    vnic_conf5
    allvnic

    Using the HMC Command Line

    As always, the configuration can also be done from the HMC command line, using lshwres to list vNICs and chhwres to create a vNIC.

    List SRIOV adapters to get the adapter_id needed by the chhwres command:

    # lshwres -r sriov --rsubtype adapter -m blade-8286-41A-21AFFFF
    adapter_id=1,slot_id=21020014,adapter_max_logical_ports=48,config_state=sriov,functional_state=1,logical_ports=48,phys_loc=U78C9.001.WZS06RN-P1-C12,phys_ports=4,sriov_status=running,alternate_config=0
    # lshwres -r virtualio  -m blade-8286-41A-21AFFFF --rsubtype vnic --level lpar --filter "lpar_names=72vm1"
    lpar_name=72vm1,lpar_id=9,slot_num=7,desired_mode=ded,curr_mode=ded,port_vlan_id=310,pvid_priority=0,allowed_vlan_ids=all,mac_addr=ee3b8cd87707,allowed_os_mac_addrs=all,desired_capacity=2.0,backing_devices=sriov/vios1/2/1/1/27004008/2.0
    

    Create the vNIC:

    # chhwres -r virtualio -m blade-8286-41A-21AFFFF -o a -p 72vm1 --rsubtype vnic -v -a "port_vlan_id=310,backing_devices=sriov/vios2/1/1/1/2"
    

    List the vNIC after create:

    # lshwres -r virtualio  -m blade-8286-41A-21AFFFF --rsubtype vnic --level lpar --filter "lpar_names=72vm1"
    lpar_name=72vm1,lpar_id=9,slot_num=7,desired_mode=ded,curr_mode=ded,port_vlan_id=310,pvid_priority=0,allowed_vlan_ids=all,mac_addr=ee3b8cd87707,allowed_os_mac_addrs=all,desired_capacity=2.0,backing_devices=sriov/vios1/2/1/1/27004008/2.0
    lpar_name=72vm1,lpar_id=9,slot_num=2,desired_mode=ded,curr_mode=ded,port_vlan_id=310,pvid_priority=0,allowed_vlan_ids=all,mac_addr=ee3b8cd87702,allowed_os_mac_addrs=all,desired_capacity=2.0,backing_devices=sriov/vios2/1/1/1/2700400a/2.0
    

    System and Virtual I/O Server Side:

    • On the Virtual I/O Server you can use two commands to check your vNIC configuration. First use the lsmap command to check the one to one relationship between the VF and the vNIC (you can see in the output below that a VF and a vnicserver device are created, as well as the name of the vNIC on the client partition side):
    # lsdev | grep VF
    ent4             Available   PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
    # lsdev | grep vnicserver
    vnicserver0      Available   Virtual NIC Server Device (vnicserver)
    # lsmap -vadapter vnicserver0 -vnic
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver0   U8286.41A.21FFFFF-V2-C32897             6 72nim1         AIX
    
    Backing device:ent4
    Status:Available
    Physloc:U78C9.001.WZS06RN-P1-C12-T4-S16
    Client device name:ent1
    Client device physloc:U8286.41A.21FFFFF-V6-C3
    
  • You can get more details (QoS, vlan tagging, port states) by using the vnicstat command:
  • # vnicstat -b vnicserver0
    [..]
    --------------------------------------------------------------------------------
    VNIC Server Statistics: vnicserver0
    --------------------------------------------------------------------------------
    Device Statistics:
    ------------------
    State: active
    Backing Device Name: ent4
    
    Client Partition ID: 6
    Client Partition Name: 72nim1
    Client Operating System: AIX
    Client Device Name: ent1
    Client Device Location Code: U8286.41A.21FFFFF-V6-C3
    [..]
    Device ID: df1028e214103c04
    Version: 1
    Physical Port Link Status: Up
    Logical Port Link Status: Up
    Physical Port Speed: 1Gbps Full Duplex
    [..]
    Port VLAN (Priority:ID): 0:3331
    [..]
    VF Minimum Bandwidth: 2%
    VF Maximum Bandwidth: 100%
    
  • On the client side you can list your vNICs and, as always, get more details using the entstat command:
  • # lsdev -c adapter -s vdevice -t IBM,vnic
    ent0 Available  Virtual NIC Client Adapter (vnic)
    ent1 Available  Virtual NIC Client Adapter (vnic)
    ent3 Available  Virtual NIC Client Adapter (vnic)
    ent4 Available  Virtual NIC Client Adapter (vnic)
    # entstat -d ent0 | more
    [..]
    ETHERNET STATISTICS (ent0) :
    Device Type: Virtual NIC Client Adapter (vnic)
    [..]
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    ------------------------------------------------------
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    
    Speed Running:  1 Gbps Full Duplex
    
    Jumbo Frames: Disabled
    [..]
    Port VLAN ID Status: Enabled
            Port VLAN ID: 3331
            Port VLAN Priority: 0
    

    Redundancy

    You will certainly agree that having such a cool new feature without something fully redundant would be a shame. Fortunately we have a solution here, with the triumphant return of the Network Interface Backup (NIB). As I told you before, each time a vNIC is created a vnicserver is created on one of the Virtual I/O Servers (at the vNIC creation you have to choose on which Virtual I/O Server it will be created). So to be fully redundant and to have a failover feature, the only way is to create two vNIC adapters (one using the first Virtual I/O Server and the second one using the second Virtual I/O Server) and to create a Network Interface Backup on top of them, like in the old times :-) . Here are a couple of things and best practices to know before doing this.

    • You can't use two VFs coming from the same SRIOV adapter physical port (the NIB creation will be ok, but any configuration on top of this NIB will fail).
    • You can use two VFs coming from the same SRIOV adapter but from two different physical ports (this is the example I will show below).
    • The best practice is to use two VFs coming from two different SRIOV adapters (you can then afford to lose one of the two adapters).

    vNIC_nib

    Verify on your partition that you have two vNIC adapters and check that their status is ok using the 'entstat' command:

    • Both vNIC are available on the client partition:
    # lsdev -c adapter -s vdevice -t IBM,vnic
    ent0 Available  Virtual NIC Client Adapter (vnic)
    ent1 Available  Virtual NIC Client Adapter (vnic)
    # lsdev -c adapter -s vdevice -t IBM,vnic -F physloc
    U8286.41A.21FFFFF-V6-C2
    U8286.41A.21FFFFF-V6-C3
    
  • You can check, for the vNIC backed by the first Virtual I/O Server, that "Current Link State", "Logical Port State" and "Physical Port State" are ok (all of them need to be up):
  • # entstat -d ent0 | grep -p vnic
    -------------------------------------------------------------
    ETHERNET STATISTICS (ent0) :
    Device Type: Virtual NIC Client Adapter (vnic)
    Hardware Address: ee:3b:86:f6:45:02
    Elapsed Time: 0 days 0 hours 0 minutes 0 seconds
    
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    ------------------------------------------------------
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    
  • Same check for the vNIC backed by the second Virtual I/O Server:
  • # entstat -d ent1 | grep -p vnic
    -------------------------------------------------------------
    ETHERNET STATISTICS (ent1) :
    Device Type: Virtual NIC Client Adapter (vnic)
    Hardware Address: ee:3b:86:f6:45:03
    Elapsed Time: 0 days 0 hours 0 minutes 0 seconds
    
    Virtual NIC Client Adapter (vnic) Specific Statistics:
    ------------------------------------------------------
    Current Link State: Up
    Logical Port State: Up
    Physical Port State: Up
    

    Verify on both Virtual I/O Servers that the two vNICs are backed by two different SRIOV adapters (for the purpose of this test I'm using two different ports of the same SRIOV adapter, but it works the same with two different adapters). You can see in the outputs below that on Virtual I/O Server 1 the vNIC is backed by the physical port in position 3 (T3) and that on Virtual I/O Server 2 the vNIC is backed by the physical port in position 4 (T4):

    • Once again use the lsmap command on the first Virtual I/O Server to check that (note that you can check the client name, and the client device):
    # lsmap -vadapter vnicserver0 -vnic
    Name          Physloc                            ClntID ClntName       ClntOS
    ------------- ---------------------------------- ------ -------------- -------
    vnicserver0   U8286.41A.21AFF8V-V1-C32897             6 72nim1         AIX
    
    Backing device:ent4
    Status:Available
    Physloc:U78C9.001.WZS06RN-P1-C12-T3-S13
    Client device name:ent0
    Client device physloc:U8286.41A.21AFF8V-V6-C2
    
  • Same thing on the second Virtual I/O Server:
  • # lsmap -vadapter vnicserver0 -vnic -fmt :
    vnicserver0:U8286.41A.21AFF8V-V2-C32897:6:72nim1:AIX:ent4:Available:U78C9.001.WZS06RN-P1-C12-T4-S14:ent1:U8286.41A.21AFF8V-V6-C3
    

    Finally create the Network Interface Backup and put an IP on top of it:

    # mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1
    ent2 Available
    # mktcpip -h 72nim1 -a 10.44.33.223 -i en2 -g 10.44.33.254 -m 255.255.255.0 -s
    en2
    72nim1
    inet0 changed
    en2 changed
    inet0 changed
    [..]
    # echo "vnic" | kdb
    +-------------------------------------------------+
    |       pACS       | Device | Link |    State     |
    |------------------+--------+------+--------------|
    | F1000A0032880000 |  ent0  |  Up  |     Open     |
    |------------------+--------+------+--------------|
    | F1000A00329B0000 |  ent1  |  Up  |     Open     |
    +-------------------------------------------------+
    

    Let's now try different things to see if the redundancy is working ok. First, let's shut down one of the Virtual I/O Servers and ping our machine from another host:

    # ping 10.14.33.223
    PING 10.14.33.223 (10.14.33.223) 56(84) bytes of data.
    64 bytes from 10.14.33.223: icmp_seq=1 ttl=255 time=0.496 ms
    64 bytes from 10.14.33.223: icmp_seq=2 ttl=255 time=0.528 ms
    64 bytes from 10.14.33.223: icmp_seq=3 ttl=255 time=0.513 ms
    [..]
    64 bytes from 10.14.33.223: icmp_seq=40 ttl=255 time=0.542 ms
    64 bytes from 10.14.33.223: icmp_seq=41 ttl=255 time=0.514 ms
    64 bytes from 10.14.33.223: icmp_seq=47 ttl=255 time=0.550 ms
    64 bytes from 10.14.33.223: icmp_seq=48 ttl=255 time=0.596 ms
    [..]
    --- 10.14.33.223 ping statistics ---
    50 packets transmitted, 45 received, 10% packet loss, time 49052ms
    rtt min/avg/max/mdev = 0.457/0.525/0.596/0.043 ms
    
    # errpt | more
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    59224136   1120200815 P H ent2           ETHERCHANNEL FAILOVER
    F655DA07   1120200815 I S ent0           VNIC Link Down
    3DEA4C5F   1120200815 T S ent0           VNIC Error CRQ
    81453EE1   1120200815 T S vscsi1         Underlying transport error
    DE3B8540   1120200815 P H hdisk0         PATH HAS FAILED
    # echo "vnic" | kdb
    (0)> vnic
    +-------------------------------------------------+
    |       pACS       | Device | Link |    State     |
    |------------------+--------+------+--------------|
    | F1000A0032880000 |  ent0  | Down |   Unknown    |
    |------------------+--------+------+--------------|
    | F1000A00329B0000 |  ent1  |  Up  |     Open     |
    +-------------------------------------------------+
    

    Same test, but with an address to ping added to the NIB configuration; this time I'm only losing 4 packets:

    # ping 10.14.33.223
    [..]
    64 bytes from 10.14.33.223: icmp_seq=41 ttl=255 time=0.627 ms
    64 bytes from 10.14.33.223: icmp_seq=42 ttl=255 time=0.548 ms
    64 bytes from 10.14.33.223: icmp_seq=46 ttl=255 time=0.629 ms
    64 bytes from 10.14.33.223: icmp_seq=47 ttl=255 time=0.492 ms
    [..]
    # errpt | more
    59224136   1120203215 P H ent2           ETHERCHANNEL FAILOVER
    F655DA07   1120203215 I S ent0           VNIC Link Down
    3DEA4C5F   1120203215 T S ent0           VNIC Error CRQ
    

    vNIC Live Partition Mobility

    You can use Live Partition Mobility with SRIOV vNICs by default; it is super simple and fully supported by IBM. As always, I'll show you how to do it using both the HMC GUI and the command line:

    Using the GUI

    First validate the mobility operation, it will allow you to choose the destination SRIOV adapter/port on which to map your current vNIC. You have to choose:

    • The adapter (if you have more than one SRIOV adapter).
    • The Physical port on which the vNIC will be mapped.
    • The Virtual I/O Server on which the vnicserver will be created.

    New options are now available in the mobility validation panel:

    lpmiov1

    Modify each vNIC to match your destination SRIOV adapter and ports (choose the destination Virtual I/O Server here):

    lpmiov2
    lpmiov3

    Then migrate:

    lpmiov4

    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A5E6DB96   1120205915 I S pmig           Client Partition Migration Completed
    4FB9389C   1120205915 I S ent1           VNIC Link Up
    F655DA07   1120205915 I S ent1           VNIC Link Down
    11FDF493   1120205915 I H ent2           ETHERCHANNEL RECOVERY
    4FB9389C   1120205915 I S ent1           VNIC Link Up
    4FB9389C   1120205915 I S ent0           VNIC Link Up
    [..]
    59224136   1120205915 P H ent2           ETHERCHANNEL FAILOVER
    B50A3F81   1120205915 P H ent2           TOTAL ETHERCHANNEL FAILURE
    F655DA07   1120205915 I S ent1           VNIC Link Down
    3DEA4C5F   1120205915 T S ent1           VNIC Error CRQ
    F655DA07   1120205915 I S ent0           VNIC Link Down
    3DEA4C5F   1120205915 T S ent0           VNIC Error CRQ
    08917DC6   1120205915 I S pmig           Client Partition Migration Started
    

    The ping test during the LPM shows only 9 pings lost, due to an etherchannel failover (one of my ports was down on the destination server):

    # ping 10.14.33.223
    64 bytes from 10.14.33.223: icmp_seq=23 ttl=255 time=0.504 ms
    64 bytes from 10.14.33.223: icmp_seq=31 ttl=255 time=0.607 ms
    

    Using the command line

    I’m moving the partition back using the HMC command line interface; check the man page for all the details. Here is the format of the vnic_mappings attribute: slot_num/ded/[vios_lpar_name]/[vios_lpar_id]/[adapter_id]/[physical_port_id]/[capacity]:

    • Validate:
    # migrlpar -o v -m blade-8286-41A-21AFFFF -t  runner-8286-41A-21AEEEE  -p 72nim1 -i 'vnic_mappings="2/ded/vios1/1/1/2/2,3/ded/vios2/2/1/3/2"'
    
    Warnings:
    HSCLA291 The selected partition may have an open virtual terminal session.  The management console will force termination of the partition's open virtual terminal session when the migration has completed.
    
    • Migrate:
    # migrlpar -o m -m blade-8286-41A-21AFFFF -t  runner-8286-41A-21AEEEE  -p 72nim1 -i 'vnic_mappings="2/ded/vios1/1/1/2/2,3/ded/vios2/2/1/3/2"'
    

    Port Labelling

    One very annoying thing about using LPM with vNICs is that you have to redo the mapping of your vNICs every time you move. The default choices are never OK: the GUI will always show you the first port or the first adapter and you will have to do that job by yourself. Even worse, with the command line the vnic_mappings can give you some headaches :-) . Fortunately there is a feature called port labelling. You can put a label on each SRIOV physical port on all your machines. My advice is to tag the ports serving the same network and the same vlan with the same label on all your machines. During the mobility operation, if labels match between the two machines, the adapter/port combination matching the label will be automatically chosen for the mobility and you will have nothing to map on your own. Super useful. The outputs below show you how to label your SRIOV ports:

    label1
    label2

    # chhwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=3,phys_port_label=adapter1port3"
    # chhwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport -o s -a "adapter_id=1,phys_port_id=2,phys_port_label=adapter1port2"
    
    # lshwres -m s00ka9942077-8286-41A-21C9F5V -r sriov --rsubtype physport --level eth -F adapter_id,phys_port_label
    1,adapter1port2
    1,adapter1port3
    

    At the validation time source and destination ports will automatically be matched:

    labelautochoose

    What about performance

    One of the main reasons I’m looking at SRIOV vNIC adapters is performance. As all of our design is based on the fact that we need to move all of our virtual machines from one host to another, we need a solution allowing both mobility and performance. If you have tried to run a TSM server in a virtualized environment you’ll probably understand what I mean about performance and virtualization. In the case of TSM you need a lot of network bandwidth. My current customer and my previous one tried to do that using Shared Ethernet Adapters and of course this solution did not work, because a classic Virtual Ethernet Adapter is not able to provide enough bandwidth for a single Virtual I/O client. I’m not an expert on network performance, but the results you will see below are pretty obvious to understand and will show you the power of vNIC and SRIOV (I know some optimization can be done on the SEA side, but it’s just a super simple test).

    Methodology

    I will try here to compare a classic Virtual Ethernet Adapter with a vNIC in the same configuration; both environments are identical, using the same machines, the same switches and so on:

    • Two machines are used to do the test. In the vNIC case both use a single vNIC backed by a 10Gb adapter; in the Virtual Ethernet Adapter case both are backed by a SEA built on top of a 10Gb adapter.
    • The two machines are running on two different s814.
    • Entitlement and memory are the same for source and destination machines.
    • In the case of vNIC the capacity of the VF is set at 100% and the physical port of the SRIOV adapter is dedicated to the vNIC.
    • In the Virtual Ethernet Adapter case the SEA is dedicated to the test virtual machine.
    • In both cases an MTU of 1500 is used.
    • The tool used for the performance test is iperf (MTU 1500, window size 64K, and 10 TCP threads); a sample invocation is sketched just below.
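
    To give an idea of what such a run looks like, here is a minimal iperf sketch matching this methodology; the server address and the duration are illustrative assumptions, not the exact command lines used to produce the screenshots below:

    server# iperf -s -w 64K
    client# iperf -c 10.14.33.223 -w 64K -P 10 -t 60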

    SEA test for reference only

    • iperf server:
    • seaserver1

    • iperf client:
    • seacli1

    vNIC SRIOV test

    We are here running the exact same test:

    • iperf server:
    • iperf_vnic_client2

    • iperf client:
    • iperf_vnic_client

    By using a vNIC I get 300% of the bandwidth I get with a Virtual Ethernet Adapter. Just awesome ;-) and with no tuning (out of the box configuration). Nothing more to add about it; it’s pretty obvious that using vNICs for performance will be a must.

    Conclusion

    Are SRIOV vNICs the end of the SEAs ? Maybe, but not yet ! For some cases like performance and QoS they will be very useful and adopted (I’m pretty sure I will use them at my current customer to virtualize the TSM servers). But today, in my opinion, SRIOV lacks a real redundancy feature at the adapter level. What I want is a heartbeat communication between the two SRIOV adapters. Having such a feature on a SRIOV adapter would finally convince customers to move from SEA to SRIOV vNIC. I know nothing about the future but I hope something like that will be available in the next few years. To sum up, SRIOV vNICs are powerful, easy to use and simplify the configuration and management of your Power Servers. Please wait for the GA and try this new killer functionality. As always I hope it helps.

    What’s new in VIOS 2.2.4.10 and PowerVM : Part 1 Virtual I/O Server Rules


    I will post a series of mini blog posts about the new features of PowerVM and Virtual I/O Server that were released this month, by which I mean Hardware Management Console 840 + Power firmware 840 + Virtual I/O Server 2.2.4.10. As writing blog posts is not part of my job and I’m doing it in my spare time, some of the topics I will talk about have already been covered by other AIX bloggers, but I think the more materials we have the better it is. Other ones, like this first one, will be new to you. So please accept my apologies if topics are not what I’m calling “0 day” (the day of release). Anyway, writing things down helps me understand them better, and I add little details I have not seen in other blog posts or in the official documentation. Last point: in these mini posts I will always try to give you something new, or at least my point of view as an IBM customer. I hope it will be useful for you.

    The first topic I want to talk about is Virtual I/O Server Rules. With the latest version, three new commands called “rules”, “rulescfgset” and “rulesdeploy” are now available on the Virtual I/O Servers. These help you configure your device attributes by creating, deploying, or checking rules (against the current configuration). I’m 100% sure that every time you are installing a Virtual I/O Server you are doing the same thing over and over again: you check your buffer attributes, you check attributes on fibre channel adapters and so on. Rules are a way to be sure everything is the same on all your Virtual I/O Servers (you can create a rules file (xml format) that can be deployed on every Virtual I/O Server you install). Even better, if you are a PowerVC user like me you want to be sure that any new device created by PowerVC is created with the attributes you want (for instance buffers for Virtual Ethernet Adapters). In the “old days” you had to use the chdef command; you can now do this by using rules. Rather than giving you a list of commands I’ll show you here what I’m now doing on my Virtual I/O Servers in 2.2.4.10.

    Creating and modifying existing default rules

    Before starting, here is a (non-exhaustive) list of the attributes I’m changing on all my Virtual I/O Servers at deploy time. I now want to do that using rules (these are just examples, you can do much more using rules). For comparison, the old per-device way of doing this is sketched just after the list:

    • On fcs Adapters I’m changing the max_xfer_size attribute to 0x200000.
    • On fcs Adapters I’m changing the num_cmd_elems attribute to 2048.
    • On fscsi Devices I’m changing the dyntrk attribute to yes.
    • On fscsi Devices I’m changing the fc_err_recov to fast_fail.
    • On Virtual Ethernet Adapters I’m changing the max_buf_tiny attribute to 4096.
    • On Virtual Ethernet Adapters I’m changing the min_buf_tiny attribute to 4096.
    • On Virtual Ethernet Adapters I’m changing the max_buf_small attribute to 4096.
    • On Virtual Ethernet Adapters I’m changing the min_buf_small attribute to 4096.
    • On Virtual Ethernet Adapters I’m changing the max_buf_medium attribute to 512.
    • On Virtual Ethernet Adapters I’m changing the min_buf_medium attribute to 512.
    • On Virtual Ethernet Adapters I’m changing the max_buf_large attribute to 128.
    • On Virtual Ethernet Adapters I’m changing the min_buf_large attribute to 128.
    • On Virtual Ethernet Adapters I’m changing the max_buf_huge attribute to 128.
    • On Virtual Ethernet Adapters I’m changing the min_buf_huge attribute to 128.
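
    The sketch below shows the kind of per-device commands this replaces; it is a hedged example of the padmin chdev syntax (the device names fcs0, fscsi0 and ent8 are only examples), not a transcript from this system:

    $ chdev -dev fcs0 -attr max_xfer_size=0x200000 num_cmd_elems=2048 -perm
    $ chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm
    $ chdev -dev ent8 -attr max_buf_tiny=4096 min_buf_tiny=4096 max_buf_small=4096 min_buf_small=4096 -perm

    The -perm flag only records the change in the ODM without touching the live device, and you have to repeat this on every Virtual I/O Server and every device, which is exactly the repetitive work rules are meant to remove.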

    Modify existing attributes using rules

    By default a “factory” default rules file now exists on the Virtual I/O Server. This one is located in /home/padmin/rules/vios_current_rules.xml; you can check the content of the file (it’s an xml file) and list the rules contained in it:

    # ls -l /home/padmin/rules
    total 40
    -r--r-----    1 root     system        17810 Dec 08 18:40 vios_current_rules.xml
    $ oem_setup_env
    # head -10 /home/padmin/rules/vios_current_rules.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <Profile origin="get" version="3.0.0" date="2015-12-08T17:40:37Z">
     <Catalog id="devParam.disk.fcp.mpioosdisk" version="3.0">
      <Parameter name="reserve_policy" value="no_reserve" applyType="nextboot" reboot="true">
       <Target class="device" instance="disk/fcp/mpioosdisk"/>
      </Parameter>
     </Catalog>
     <Catalog id="devParam.disk.fcp.mpioapdisk" version="3.0">
      <Parameter name="reserve_policy" value="no_reserve" applyType="nextboot" reboot="true">
       <Target class="device" instance="disk/fcp/mpioapdisk"/>
    [..]
    
    $ rules -o list -d
    

    Let’s now say you have an existing Virtual I/O Server with an existing SEA configured on it. You want two things from the rules:

    • Applying the rules to modify to the existing devices.
    • Be sure that new devices will be created using the rules.

    For the purpose of this example we will work on the buffer attributes of a Virtual Ethernet Adapter (the same concepts apply to other device types). So we have a SEA with Virtual Ethernet Adapters and we want to change the buffer attributes. Let’s first check the current values on the virtual adapters:

    $ lsdev -type adapter | grep -i Shared
    ent13            Available   Shared Ethernet Adapter
    $ lsdev -dev ent13 -attr virt_adapters
    value
    
    ent8,ent9,ent10,ent11
    
    $ lsdev -dev ent8 -attr max_buf_huge,max_buf_large,max_buf_medium,max_buf_small,max_buf_tiny,min_buf_huge,min_buf_large,min_buf_medium,min_buf_small,min_buf_tiny
    value
    
    64
    64
    256
    2048
    2048
    24
    24
    128
    512
    512
    $ lsdev -dev ent9 -attr max_buf_huge,max_buf_large,max_buf_medium,max_buf_small,max_buf_tiny,min_buf_huge,min_buf_large,min_buf_medium,min_buf_small,min_buf_tiny
    value
    
    64
    64
    256
    2048
    2048
    24
    24
    128
    512
    512
    

    Let’s now check the values in the current Virtual I/O Server rules:

    $ rules -o list | grep buf
    adapter/vdevice/IBM,l-lan      max_buf_tiny         2048
    adapter/vdevice/IBM,l-lan      min_buf_tiny         512
    adapter/vdevice/IBM,l-lan      max_buf_small        2048
    adapter/vdevice/IBM,l-lan      min_buf_small        512
    

    For the tiny and small buffers I can change the rules easily using the rules command (with the modify operation):

    $ rules -o modify -t adapter/vdevice/IBM,l-lan -a max_buf_tiny=4096
    $ rules -o modify -t adapter/vdevice/IBM,l-lan -a min_buf_tiny=4096
    $ rules -o modify -t adapter/vdevice/IBM,l-lan -a max_buf_small=4096
    $ rules -o modify -t adapter/vdevice/IBM,l-lan -a min_buf_small=4096
    

    I’m re-running the rules command to check that the rules are now modified:

    $ rules -o list | grep buf
    adapter/vdevice/IBM,l-lan      max_buf_tiny         4096
    adapter/vdevice/IBM,l-lan      min_buf_tiny         4096
    adapter/vdevice/IBM,l-lan      max_buf_small        4096
    adapter/vdevice/IBM,l-lan      min_buf_small        4096
    

    I can check the current values of my system against the currently defined rules by using the diff operation:

    # rules -o diff -s
    devParam.adapter.vdevice.IBM,l-lan:max_buf_tiny device=adapter/vdevice/IBM,l-lan    2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_tiny device=adapter/vdevice/IBM,l-lan     512 | 4096
    devParam.adapter.vdevice.IBM,l-lan:max_buf_small device=adapter/vdevice/IBM,l-lan   2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_small device=adapter/vdevice/IBM,l-lan    512 | 4096
    

    Creating new attributes using rules

    In the rules embedded with the current Virtual I/O Server release there are no existing rules for the medium, large and huge buffers. Unfortunately for me I’m modifying these attributes by default and I want a rule capable of doing that. The goal is now to create a new set of rules for the buffers not already present in the default file … Let’s try to do that using the add operation:

    # rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_medium=512
    The rule is not supported or does not exist.
    

    Annoying: I can’t add a rule for the medium buffers (same for the large and huge ones). The available attributes for each device are based on the current AIX ARTEX catalog. You can check all the files present in the catalog to see what the available attributes are for each device type; you can see in the output below that there is nothing in the current ARTEX catalog for the medium buffers.

    $ oem_setup_env
    # cd /etc/security/artex/catalogs
    # ls -ltr | grep l-lan
    -r--r-----    1 root     security       1261 Nov 10 00:30 devParam.adapter.vdevice.IBM,l-lan.xml
    # grep medium devParam.adapter.vdevice.IBM,l-lan.xml
    # 
    

    To show that it is possible to add new rules, here is a simple example adding the ‘src_lun_val’ and ‘dest_lun_val’ attributes on the vioslpm0 device. First I check that I can add these rules by looking in the ARTEX catalog:

    $ oem_setup_env
    # cd /etc/security/artex/catalogs
    # ls -ltr | grep lpm
    -r--r-----    1 root     security       2645 Nov 10 00:30 devParam.pseudo.vios.lpm.xml
    # grep -iE "src_lun_val|dest_lun_val" devParam.pseudo.vios.lpm.xml
      <ParameterDef name="dest_lun_val" type="string" targetClass="device" cfgmethod="attr" reboot="true">
      <ParameterDef name="src_lun_val" type="string" targetClass="device" cfgmethod="attr" reboot="true">
    

    Then I’m checking the ‘range’ of authorized values for both attributes:

    # lsattr -l vioslpm0 -a src_lun_val -R
    on
    off
    # lsattr -l vioslpm0 -a dest_lun_val -R
    on
    off
    restart_off
    lpm_off
    

    I’m searching the type using the lsdev command (here pseudo/vios/lpm):

    # lsdev -P | grep lpm
    pseudo         lpm             vios           VIOS LPM Adapter
    

    I’m finally adding the rules and checking the differences:

    $ rules -o add -t pseudo/vios/lpm -a src_lun_val=on
    $ rules -o add -t pseudo/vios/lpm -a dest_lun_val=on
    $ rules -o diff -s
    devParam.adapter.vdevice.IBM,l-lan:max_buf_tiny device=adapter/vdevice/IBM,l-lan    2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_tiny device=adapter/vdevice/IBM,l-lan     512 | 4096
    devParam.adapter.vdevice.IBM,l-lan:max_buf_small device=adapter/vdevice/IBM,l-lan   2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_small device=adapter/vdevice/IBM,l-lan    512 | 4096
    devParam.pseudo.vios.lpm:src_lun_val device=pseudo/vios/lpm                          off | on
    devParam.pseudo.vios.lpm:dest_lun_val device=pseudo/vios/lpm                 restart_off | on
    

    But what about my buffers: is there any possibility to add these attributes to the current ARTEX catalog ? The answer is yes. By looking at the catalog used for Virtual Ethernet Adapters (the file named devParam.adapter.vdevice.IBM,l-lan.xml) you will see that a message catalog named ‘vioent.cat’ is used by this xml file. Check the content of this catalog by using the dspcat command and find out if there is anything related to medium, large and huge buffers (all the catalog files are located in /usr/lib/methods):

    $ oem_setup_env
    # cd /usr/lib/methods
    # dspcat vioent.cat |grep -iE "medium|large|huge"
    1 : 10 Minimum Huge Buffers
    1 : 11 Maximum Huge Buffers
    1 : 12 Minimum Large Buffers
    1 : 13 Maximum Large Buffers
    1 : 14 Minimum Medium Buffers
    1 : 15 Maximum Medium Buffers
    

    Modify the xml file located in the ARTEX catalog and add the necessary information for these three new buffer types:

    $ oem_setup_env
    # vi /etc/security/artex/catalogs/devParam.adapter.vdevice.IBM,l-lan.xml
    <?xml version="1.0" encoding="UTF-8"?>
    
    <Catalog id="devParam.adapter.vdevice.IBM,l-lan" version="3.0" inherit="devCommon">
    
      <ShortDescription><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="1">Virtual I/O Ethernet Adapter (l-lan)</NLSCatalog></ShortDescription>
    
      <ParameterDef name="min_buf_huge" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="10">Minimum Huge Buffers</NLSCatalog></Description>
      </ParameterDef>
    
      <ParameterDef name="max_buf_huge" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="11">Maximum Huge Buffers</NLSCatalog></Description>
      </ParameterDef>
    
      <ParameterDef name="min_buf_large" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="12">Minimum Large Buffers</NLSCatalog></Description>
      </ParameterDef>
    
      <ParameterDef name="max_buf_large" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="13">Maximum Large Buffers</NLSCatalog></Description>
      </ParameterDef>
    
      <ParameterDef name="min_buf_medium" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="14">Minimum Medium Buffers</NLSCatalog></Description>
      </ParameterDef>
    
      <ParameterDef name="max_buf_medium" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="15">Maximum Medium Buffers</NLSCatalog></Description>
      </ParameterDef>
    
    [..]
      <ParameterDef name="max_buf_tiny" type="integer" targetClass="device" cfgmethod="attr" reboot="true">
        <Description><NLSCatalog catalog="vioent.cat" setNum="1" msgNum="19">Maximum Tiny Buffers</NLSCatalog></Description>
      </ParameterDef>
    
    
    

    Then I retry adding the rules for the medium, large and huge buffers … and it works great:

    # rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_medium=512
    # rules -o add -t adapter/vdevice/IBM,l-lan -a min_buf_medium=512
    # rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_huge=128
    # rules -o add -t adapter/vdevice/IBM,l-lan -a min_buf_huge=128
    # rules -o add -t adapter/vdevice/IBM,l-lan -a max_buf_large=128
    # rules -o add -t adapter/vdevice/IBM,l-lan -a min_buf_large=128
    

    Deploying the rules

    Now that a couple of rules are defined, let’s apply them on the Virtual I/O Server. First check the differences you will get after applying the rules by using the diff operation of the rules command:

    $ rules -o diff -s
    devParam.adapter.vdevice.IBM,l-lan:max_buf_tiny device=adapter/vdevice/IBM,l-lan    2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_tiny device=adapter/vdevice/IBM,l-lan     512 | 4096
    devParam.adapter.vdevice.IBM,l-lan:max_buf_small device=adapter/vdevice/IBM,l-lan   2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_small device=adapter/vdevice/IBM,l-lan    512 | 4096
    devParam.adapter.vdevice.IBM,l-lan:max_buf_medium device=adapter/vdevice/IBM,l-lan   256 | 512
    devParam.adapter.vdevice.IBM,l-lan:min_buf_medium device=adapter/vdevice/IBM,l-lan   128 | 512
    devParam.adapter.vdevice.IBM,l-lan:max_buf_huge device=adapter/vdevice/IBM,l-lan      64 | 128
    devParam.adapter.vdevice.IBM,l-lan:min_buf_huge device=adapter/vdevice/IBM,l-lan      24 | 128
    devParam.adapter.vdevice.IBM,l-lan:max_buf_large device=adapter/vdevice/IBM,l-lan     64 | 128
    devParam.adapter.vdevice.IBM,l-lan:min_buf_large device=adapter/vdevice/IBM,l-lan     24 | 128
    devParam.pseudo.vios.lpm:src_lun_val device=pseudo/vios/lpm                          off | on
    devParam.pseudo.vios.lpm:dest_lun_val device=pseudo/vios/lpm                 restart_off | on
    

    Let’s now deploy the rules using the deploy operation of the rules command. You can notice that for some rules a reboot is mandatory to change the existing devices; this is the case for the buffers, but not for the vioslpm0 attributes (we can check again that we now have no differences … some attributes are applied using the -P flag of the chdev command):

    $ rules -o deploy 
    A manual post-operation is required for the changes to take effect, please reboot the system.
    $ lsdev -dev ent8 -attr min_buf_small
    value
    
    4096
    $ lsdev -dev vioslpm0 -attr src_lun_val
    value
    
    on
    $ rules -o diff -s
    

    Don’t forget to reboot the Virtual I/O Server and check that everything is OK after the reboot (check the values in use with entstat):

    $ shutdown -force -restart
    [..]
    $ for i in ent8 ent9 ent10 ent11 ; do lsdev -dev $i -attr max_buf_huge,max_buf_large,max_buf_medium,max_buf_small,max_buf_tiny,min_buf_huge,min_buf_large,min_buf_medium,min_buf_small,min_buf_tiny ; done
    [..]
    128
    128
    512
    4096
    4096
    128
    128
    512
    4096
    4096
    $ entstat -all ent13 | grep -i buf
    [..]
    No mbuf Errors: 0
      Transmit Buffers
        Buffer Size             65536
        Buffers                    32
          No Buffers                0
      Receive Buffers
        Buffer Type              Tiny    Small   Medium    Large     Huge
        Min Buffers              4096     4096      512      128      128
        Max Buffers              4096     4096      512      128      128
    

    For the fibre channel adapters I’m using these rules:

    $ rules -o modify -t driver/iocb/efscsi -a dyntrk=yes
    $ rules -o modify -t driver/qliocb/qlfscsi -a dyntrk=yes
    $ rules -o modify -t driver/qiocb/qfscsi -a dyntrk=yes
    $ rules -o modify -t driver/iocb/efscsi -a fc_err_recov=fast_fail
    $ rules -o modify -t driver/qliocb/qlfscsi -a fc_err_recov=fast_fail
    $ rules -o modify -t driver/qiocb/qfscsi -a fc_err_recov=fast_fail
    

    What about new devices ?

    Let’s now create a new SEA by adding new Virtual Ethernet Adapters using DLPAR and check that the devices are created with the right values (I’m not showing here how to create the VEAs, I’m doing it in the GUI for simplicity; ent14, ent15, ent16 and ent17 are the new ones):

    $ lsdev | grep ent
    ent12            Available   EtherChannel / IEEE 802.3ad Link Aggregation
    ent13            Available   Shared Ethernet Adapter
    ent14            Available   Virtual I/O Ethernet Adapter (l-lan)
    ent15            Available   Virtual I/O Ethernet Adapter (l-lan)
    ent16            Available   Virtual I/O Ethernet Adapter (l-lan)
    ent17            Available   Virtual I/O Ethernet Adapter (l-lan)
    $ lsdev -dev ent14 -attr
    buf_mode        min            Receive Buffer Mode                        True
    copy_buffs      32             Transmit Copy Buffers                      True
    max_buf_control 64             Maximum Control Buffers                    True
    max_buf_huge    128            Maximum Huge Buffers                       True
    max_buf_large   128            Maximum Large Buffers                      True
    max_buf_medium  512            Maximum Medium Buffers                     True
    max_buf_small   4096           Maximum Small Buffers                      True
    max_buf_tiny    4096           Maximum Tiny Buffers                       True
    min_buf_control 24             Minimum Control Buffers                    True
    min_buf_huge    128            Minimum Huge Buffers                       True
    min_buf_large   128            Minimum Large Buffers                      True
    min_buf_medium  512            Minimum Medium Buffers                     True
    min_buf_small   4096           Minimum Small Buffers                      True
    min_buf_tiny    4096           Minimum Tiny Buffers                       True
    $  mkvdev -sea ent0 -vadapter ent14 ent15 ent16 ent17 -default ent14 -defaultid 14 -attr ha_mode=sharing largesend=1 large_receive=yes
    ent18 Available
    $ entstat -all ent18 | grep -i buf
    No mbuf Errors: 0
      Transmit Buffers
        Buffer Size             65536
        Buffers                    32
          No Buffers                0
      Receive Buffers
        Buffer Type              Tiny    Small   Medium    Large     Huge
        Min Buffers              4096     4096      512      128      128
        Max Buffers              4096     4096      512      128      128
      Buffer Mode: Min
    [..]
    

    Deploying these rules to another Virtual I/O Server

    The goal is now to use this rule file and deploy it on all my Virtual I/O Servers to be sure all the attributes are the same on all the Virtual I/O Servers.

    I’m copying my rules file to another Virtual I/O Server:

    $ oem_setup_env
    # cd /home/padmin/rules
    # scp /home/padmin/rules/custom_rules.xml anothervios:/home/padmin/rules
    custom_rules.xml                   100%   19KB  18.6KB/s   00:00
    # scp /etc/security/artex/catalogs/devParam.adapter.vdevice.IBM,l-lan.xml anothervios:/etc/security/artex/catalogs/
    devParam.adapter.vdevice.IBM,l-lan.xml
    devParam.adapter.vdevice.IBM,l-lan.xml    100% 2737     2.7KB/s   00:00
    

    I’m now connecting to the new Virtual I/O Server and applying the rules:

    $ rules -o import -f /home/padmin/rules/custom_rules.xml
    $ rules -o diff -s
    devParam.adapter.vdevice.IBM,l-lan:max_buf_tiny device=adapter/vdevice/IBM,l-lan    2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_tiny device=adapter/vdevice/IBM,l-lan     512 | 4096
    devParam.adapter.vdevice.IBM,l-lan:max_buf_small device=adapter/vdevice/IBM,l-lan   2048 | 4096
    devParam.adapter.vdevice.IBM,l-lan:min_buf_small device=adapter/vdevice/IBM,l-lan    512 | 4096
    devParam.adapter.vdevice.IBM,l-lan:max_buf_medium device=adapter/vdevice/IBM,l-lan   256 | 512
    devParam.adapter.vdevice.IBM,l-lan:min_buf_medium device=adapter/vdevice/IBM,l-lan   128 | 512
    devParam.adapter.vdevice.IBM,l-lan:max_buf_huge device=adapter/vdevice/IBM,l-lan      64 | 128
    devParam.adapter.vdevice.IBM,l-lan:min_buf_huge device=adapter/vdevice/IBM,l-lan      24 | 128
    devParam.adapter.vdevice.IBM,l-lan:max_buf_large device=adapter/vdevice/IBM,l-lan     64 | 128
    devParam.adapter.vdevice.IBM,l-lan:min_buf_large device=adapter/vdevice/IBM,l-lan     24 | 128
    devParam.pseudo.vios.lpm:src_lun_val device=pseudo/vios/lpm                          off | on
    devParam.pseudo.vios.lpm:dest_lun_val device=pseudo/vios/lpm                 restart_off | on
    $ rules -o deploy
    A manual post-operation is required for the changes to take effect, please reboot the system.
    $ entstat -all ent18 | grep -i buf
    [..]
        Buffer Type              Tiny    Small   Medium    Large     Huge
        Min Buffers               512      512      128       24       24
        Max Buffers              2048     2048      256       64       64
    [..]
    $ shutdown -force -restart
    $ entstat -all ent18 | grep -i buf
    [..]
       Buffer Type              Tiny    Small   Medium    Large     Huge
        Min Buffers              4096     4096      512      128      128
        Max Buffers              4096     4096      512      128      128
    [..]
    

    rulescfgset

    If you don’t care at all about creating your own rules you can just run the rulescfgset command as padmin to apply the default Virtual I/O Server rules. My advice for newbies is to do that just after the Virtual I/O Server is installed; by doing that you will be sure to have the default IBM rules. It is good practice to do this every time you deploy a new Virtual I/O Server.

    # rulescfgset
    

    Conclusion

    Use rules ! They are a good way to be sure your Virtual I/O Server device attributes are the same everywhere. I hope my examples are good enough to convince you to use them. For PowerVC users like me, using rules is a must. As PowerVC is creating devices for you, you want to be sure all your devices are created with the exact same attributes. My example about Virtual Ethernet Adapter buffers is just a mandatory thing to do now for PowerVC users. As always I hope it helps.


    NovaLink ‘HMC Co-Management’ and PowerVC 1.3.0.1 Dynamic Resource Optimizer


    Everybody now knows that I’m using PowerVC a lot in my current company. My environment is growing bigger and bigger and we are now managing more than 600 virtual machines with PowerVC (the goal is to reach ~ 3000 this year). Some of them were built by PowerVC itself and some of them were migrated through a homemade python script calling the PowerVC REST API, moving our old vSCSI machines to the new full NPIV/Live Partition Mobility/PowerVC environment (still struggling with the “old men” to move to SSP, but I’m alone versus everybody on this one). I’m happy with that but (there is always a but) I’m facing a lot of problems. The first one is that we are doing more and more things with PowerVC (virtual machine creation, virtual machine resizing, adding additional disks, moving machines with LPM, and finally using this python script to migrate the old machines to the new environment). I realized that the machine hosting PowerVC was getting slower and slower: the more actions we did, the more “unresponsive” PowerVC became. By this I mean that the GUI was slow and creating objects took longer and longer. By looking at the CPU graphs in lpar2rrd we noticed that the CPU consumption was growing as fast as the activity on PowerVC (check the graph below). The second problem was my teams (unfortunately for me, we have different teams doing different sorts of things here and everybody is using the Hardware Management Consoles in their own way; some people are renaming machines, making them unusable with PowerVC, some people were changing the profiles and disabling the synchronization, and even worse we have some third party tools used for capacity planning making the Hardware Management Console unusable by PowerVC). The solution to all these problems is to use NovaLink and especially NovaLink Co-Management. By doing this the Hardware Management Consoles will be restricted to a read-only view and PowerVC will stop querying the HMCs and will directly query the NovaLink partitions on each host instead.

    cpu_powervc

    What is NovaLink ?

    If you are using PowerVC you know that it is based on OpenStack. Until now all the OpenStack services were running on the PowerVC host. If you check on the PowerVC host today you can see that there is one nova-compute process per managed host. In the example below I’m managing ten hosts so I have ten different nova-compute processes running:

    # ps -ef | grep [n]ova-compute
    nova       627     1 14 Jan16 ?        06:24:30 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10D6666.log
    nova       649     1 14 Jan16 ?        06:30:25 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_65E6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_65E6666.log
    nova       664     1 17 Jan16 ?        07:49:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1086666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1086666.log
    nova       675     1 19 Jan16 ?        08:40:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_06D6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_06D6666.log
    nova       687     1 18 Jan16 ?        08:15:57 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6576666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6576666.log
    nova       697     1 21 Jan16 ?        09:35:40 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6556666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6556666.log
    nova       712     1 13 Jan16 ?        06:02:23 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10A6666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10A6666.log
    nova       728     1 17 Jan16 ?        07:49:02 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1016666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1016666.log
    nova       752     1 17 Jan16 ?        07:34:45 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1036666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_1036666.log
    nova       779     1 13 Jan16 ?        05:54:52 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6596666.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9119MHE_6596666.log
    # ps -ef | grep [n]ova-compute | wc -l
    10
    

    The goal of NovaLink is to move these processes to a dedicated partition running on each managed host (each Power Systems server). This partition is called the NovaLink partition. It runs Ubuntu 15.10 (little endian) (so it is only available on POWER8 hosts) and is in charge of running the OpenStack nova processes. By doing that you distribute the load across all the NovaLink partitions instead of loading a single PowerVC host. Even better, my understanding is that the NovaLink partition is able to communicate directly with the FSP. By using NovaLink you will be able to stop using the Hardware Management Consoles and avoid their slowness. As the NovaLink partition is hosted on the host itself, the RMC connections can now use a direct link (IPv6) through the PowerVM hypervisor. No more RMC connection problems at all ;-), it’s just awesome. NovaLink allows you to choose between two modes of management:

    • Full Nova Management: you install your new host directly with NovaLink on it and you will not need a Hardware Management Console anymore (in this case the NovaLink installation is in charge of deploying the Virtual I/O Servers and the SEAs).
    • Nova Co-Management: your host is already installed and you give write access (setmaster) to the NovaLink partition; the Hardware Management Console will be limited in this mode (you will not be able to create partitions anymore or modify profiles; it’s not a “read only” mode as you will still be able to start and stop the partitions and do some things with the HMC, but you will be very limited).
    • You can still mix NovaLink and non-NovaLink managed hosts, and still have P7/P6 managed by HMCs, P8 managed by HMCs, P8 Nova co-managed and P8 full Nova managed ;-).
    • Nova1

    Prerequisites

    As always, upgrade your systems to the latest code level if you want to use NovaLink and NovaLink Co-Management:

    • POWER8 only, with firmware version 840 (or later).
    • Virtual I/O Server 2.2.4.10 or later.
    • For NovaLink co-management, HMC V8R8.4.0.
    • Obviously install NovaLink on each NovaLink managed system (install the latest patch version of NovaLink).
    • PowerVC 1.3.0.1 or later.

    NovaLink installation on an existing system

    I’ll show you here how to install a NovaLink partition on an existing, already deployed system. Installing a new system from scratch is also possible. My advice is to start by looking at this address: , and to check this youtube video showing how a system is installed from scratch :

    The goal of this post is to show you how to set up a co-managed system on an already existing system with Virtual I/O Servers already deployed on the host. My advice is to be very careful. The first thing you’ll need to do is to create a partition (2 VPs, 0.5 EC and 5GB of memory) (I’m calling it nova in the example below) and use the Virtual Optical device to load the NovaLink system on it. In the example below the machine is “SSP” backed. Be very careful when doing that: set up the profile name and all the configuration before moving to co-managed mode … after that it will be harder for you to change things, as the new pvmctl command will be very new to you:

    # mkvdev -fbo -vadapter vhost0
    vtopt0 Available
    # lsrep
    Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
        3059     1579 rootvg                   102272            73216
    
    Name                                                  File Size Optical         Access
    PowerVM_NovaLink_V1.1_122015.iso                           1479 None            rw
    vopt_a19a8fbb57184aad8103e2c9ddefe7e7                         1 None            ro
    # loadopt -disk PowerVM_NovaLink_V1.1_122015.iso -vtd vtopt0
    # lsmap -vadapter vhost0 -fmt :
    vhost0:U8286.41A.21AFF8V-V2-C40:0x00000003:nova_b1:Available:0x8100000000000000:nova_b1.7f863bacb45e3b32258864e499433b52: :N/A:vtopt0:Available:0x8200000000000000:/var/vio/VMLibrary/PowerVM_NovaLink_V1.1_122015.iso: :N/A
    
    • At the grub page select the first entry:
    • install1

    • Wait for the machine to boot:
    • install2

    • Choose to perform an installation:
    • install3

    • Accept the licenses
    • install4

    • padmin user:
      install5
    • Put in your network configuration:
    • install6

    • Accept to install the Ubuntu system:
    • install8

    • You can then modify anything you want in the configuration file (in my case the timezone):
    • install9

      By default NovaLink (I think, not 100% sure) is designed to be installed on a SAS disk, so without multipathing. If like me you decide to install the NovaLink partition in a “boot-on-san” lpar, my advice is to launch the installation without any multipathing enabled (only one vscsi adapter or one virtual fibre channel adapter). After the installation is completed, install the Ubuntu multipathd service and configure the second vscsi or virtual fibre channel adapter (a minimal sketch follows the screenshot below). If you don’t do that you may experience problems at installation time (RAID error). Please remember that you have to do that before enabling the co-management. Last thing about the installation: it may take a long time to finish, so be patient (especially the preseed step).

    install10
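
    Here is a minimal sketch of what adding multipathing after the installation can look like on the Ubuntu-based NovaLink partition. It is based on standard Ubuntu packaging (the multipath-tools package), not on an official NovaLink procedure, so treat it as an assumption to adapt to your environment:

    root@nova:~# apt-get install multipath-tools
    root@nova:~# multipath -ll     # check that both paths to the boot disk are now seen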

    Updating to the latest code level

    The iso file provided on the Entitled Software Support site is not updated to the latest available NovaLink code. Make a copy of the official repository available at this address: ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian. Serve the content of this ftp server on your own http server (use the command below to copy it):

    # wget --mirror ftp://public.dhe.ibm.com/systems/virtualization/Novalink/debian
    

    Modify the /etc/apt/sources.list (and sources.list.d) and comment out all the available deb repositories to keep only your copy:

    root@nova:~# grep -v ^# /etc/apt/sources.list
    deb http://deckard.lab.chmod666.org/nova/Novalink/debian novalink_1.0.0 non-free
    root@nova:/etc/apt/sources.list.d# apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following packages will be upgraded:
      pvm-cli pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
    6 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 165 MB of archives.
    After this operation, 53.2 kB of additional disk space will be used.
    Do you want to continue? [Y/n]
    Get:1 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pypowervm all 1.0.0.1-151203-1553 [363 kB]
    Get:2 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-cli all 1.0.0.1-151202-864 [63.4 kB]
    Get:3 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-core ppc64el 1.0.0.1-151202-1495 [2,080 kB]
    Get:4 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-server ppc64el 1.0.0.1-151203-1563 [142 MB]
    Get:5 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-rest-app ppc64el 1.0.0.1-151203-1563 [21.1 MB]
    Get:6 http://deckard.lab.chmod666.org/nova/Novalink/debian/ novalink_1.0.0/non-free pvm-novalink ppc64el 1.0.0.1-151203-408 [1,738 B]
    Fetched 165 MB in 7s (20.8 MB/s)
    (Reading database ... 72094 files and directories currently installed.)
    Preparing to unpack .../pypowervm_1.0.0.1-151203-1553_all.deb ...
    Unpacking pypowervm (1.0.0.1-151203-1553) over (1.0.0.0-151110-1481) ...
    Preparing to unpack .../pvm-cli_1.0.0.1-151202-864_all.deb ...
    Unpacking pvm-cli (1.0.0.1-151202-864) over (1.0.0.0-151110-761) ...
    Preparing to unpack .../pvm-core_1.0.0.1-151202-1495_ppc64el.deb ...
    Removed symlink /etc/systemd/system/multi-user.target.wants/pvm-core.service.
    Unpacking pvm-core (1.0.0.1-151202-1495) over (1.0.0.0-151111-1375) ...
    Preparing to unpack .../pvm-rest-server_1.0.0.1-151203-1563_ppc64el.deb ...
    Unpacking pvm-rest-server (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
    Preparing to unpack .../pvm-rest-app_1.0.0.1-151203-1563_ppc64el.deb ...
    Unpacking pvm-rest-app (1.0.0.1-151203-1563) over (1.0.0.0-151110-1480) ...
    Preparing to unpack .../pvm-novalink_1.0.0.1-151203-408_ppc64el.deb ...
    Unpacking pvm-novalink (1.0.0.1-151203-408) over (1.0.0.0-151112-304) ...
    Processing triggers for ureadahead (0.100.0-19) ...
    ureadahead will be reprofiled on next reboot
    Setting up pypowervm (1.0.0.1-151203-1553) ...
    Setting up pvm-cli (1.0.0.1-151202-864) ...
    Installing bash completion script /etc/bash_completion.d/python-argcomplete.sh
    Setting up pvm-core (1.0.0.1-151202-1495) ...
    addgroup: The group `pvm_admin' already exists.
    Created symlink from /etc/systemd/system/multi-user.target.wants/pvm-core.service to /usr/lib/systemd/system/pvm-core.service.
    0513-071 The ctrmc Subsystem has been added.
    Adding /usr/lib/systemd/system/ctrmc.service for systemctl ...
    0513-059 The ctrmc Subsystem has been started. Subsystem PID is 3096.
    Setting up pvm-rest-server (1.0.0.1-151203-1563) ...
    The user `wlp' is already a member of `pvm_admin'.
    Setting up pvm-rest-app (1.0.0.1-151203-1563) ...
    Setting up pvm-novalink (1.0.0.1-151203-408) ...
    

    NovaLink and HMC Co-Management configuration

    Before adding the hosts to PowerVC you still need to do the most important thing. After the installation is finished, enable the co-management mode to be able to have a system managed by NovaLink and still connected to a Hardware Management Console:

    • Enable the powervm_mgmt_capable attribute on the nova partition:
    # chsyscfg -r lpar -m br-8286-41A-2166666 -i "name=nova,powervm_mgmt_capable=1"
    # lssyscfg -r lpar -m br-8286-41A-2166666 -F name,powervm_mgmt_capable --filter "lpar_names=nova"
    nova,1
    
    • Enable co-management (please note here that you have to setmaster first (you’ll see that the curr_master_name is the HMC) and then relmaster (you’ll see that the curr_master_name is the NovaLink partition; this is the state where we want to be)):
    # lscomgmt -m br-8286-41A-2166666
    is_master=null
    # chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
    # lscomgmt -m br-8286-41A-2166666
    is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
    # chcomgmt -m br-8286-41A-2166666 -o relmaster
    # lscomgmt -m br-8286-41A-2166666
    is_master=0,curr_master_name=nova,curr_master_mtms=3*8286-41A*2166666,curr_master_type=norm,pend_master_mtms=none
    

    Going back to HMC managed system

    You can go back to a Hardware Management Console managed system whenever you want (set the master to the HMC, delete the nova partition and release the master from the HMC).

    # chcomgmt -m br-8286-41A-2166666 -o setmaster -t norm --terms agree
    # lscomgmt -m br-8286-41A-2166666
    is_master=1,curr_master_name=myhmc1,curr_master_mtms=7042-CR8*2166666,curr_master_type=norm,pend_master_mtms=none
    # chlparstate -o shutdown -m br-8286-41A-2166666 --id 9 --immed
    # rmsyscfg -r lpar -m br-8286-41A-2166666 --id 9
    # chcomgmt -o relmaster -m br-8286-41A-2166666
    # lscomgmt -m br-8286-41A-2166666
    is_master=0,curr_master_mtms=none,curr_master_type=none,pend_master_mtms=none
    

    Using NovaLink

    After the installation you are able to log in to the NovaLink partition (you can gain root access with the “sudo su -” command). A new command called pvmctl is available on the NovaLink partition, allowing you to perform any action (stop or start a virtual machine, list the Virtual I/O Servers, ….). Before trying to add the host, double check that the pvmctl command is working OK.

    padmin@nova:~$ pvmctl lpar list
    Logical Partitions
    +------+----+---------+-----------+---------------+------+-----+-----+
    | Name | ID |  State  |    Env    |    Ref Code   | Mem  | CPU | Ent |
    +------+----+---------+-----------+---------------+------+-----+-----+
    | nova | 3  | running | AIX/Linux | Linux ppc64le | 8192 |  2  | 0.5 |
    +------+----+---------+-----------+---------------+------+-----+-----+
    

    Adding hosts

    On the PowerVC side add the NovaLink host by choosing the NovaLink option:

    addhostnovalink

    Some deb packages (ibmpowervc-powervm) will be installed and configured on the NovaLink machine:

    addhostnovalink3
    addhostnovalink4

    By doing this, on each NovaLink machine you can check that a nova-compute process is now running (by adding the host, the deb packages were installed and configured on the NovaLink host):

    # ps -ef | grep nova
    nova      4392     1  1 10:28 ?        00:00:07 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf --log-file /var/log/nova/nova-compute.log
    root      5218  5197  0 10:39 pts/1    00:00:00 grep --color=auto nova
    # grep host_display_name /etc/nova/nova.conf
    host_display_name = XXXX-8286-41A-XXXX
    # tail -1 /var/log/apt/history.log
    Start-Date: 2016-01-18  10:27:54
    Commandline: /usr/bin/apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold -y install --force-yes --allow-unauthenticated ibmpowervc-powervm
    Install: python-keystoneclient:ppc64el (1.6.0-2.ibm.ubuntu1, automatic), python-oslo.reports:ppc64el (0.1.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm:ppc64el (1.3.0.1), python-ceilometer:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), ibmpowervc-powervm-compute:ppc64el (1.3.0.1, automatic), nova-common:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-oslo.service:ppc64el (0.11.0-2.ibm.ubuntu1, automatic), python-oslo.rootwrap:ppc64el (2.0.0-1.ibm.ubuntu1, automatic), python-pycadf:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), python-nova:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), python-keystonemiddleware:ppc64el (2.4.1-2.ibm.ubuntu1, automatic), python-kafka:ppc64el (0.9.3-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-monitor:ppc64el (1.3.0.1, automatic), ibmpowervc-powervm-oslo:ppc64el (1.3.0.1, automatic), neutron-common:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-os-brick:ppc64el (0.4.0-1.ibm.ubuntu1, automatic), python-tooz:ppc64el (1.22.0-1.ibm.ubuntu1, automatic), ibmpowervc-powervm-ras:ppc64el (1.3.0.1, automatic), networking-powervm:ppc64el (1.0.0.0-151109-25, automatic), neutron-plugin-ml2:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), python-ceilometerclient:ppc64el (1.5.0-1.ibm.ubuntu1, automatic), python-neutronclient:ppc64el (2.6.0-1.ibm.ubuntu1, automatic), python-oslo.middleware:ppc64el (2.8.0-1.ibm.ubuntu1, automatic), python-cinderclient:ppc64el (1.3.1-1.ibm.ubuntu1, automatic), python-novaclient:ppc64el (2.30.1-1.ibm.ubuntu1, automatic), python-nova-ibm-ego-resource-optimization:ppc64el (2015.1-201511110358, automatic), python-neutron:ppc64el (7.0.0-201511171221.ibm.ubuntu1.280, automatic), nova-compute:ppc64el (12.0.0-201511171221.ibm.ubuntu1.213, automatic), nova-powervm:ppc64el (1.0.0.1-151203-215, automatic), openstack-utils:ppc64el (2015.2.0-201511171223.ibm.ubuntu1.18, automatic), ibmpowervc-powervm-network:ppc64el (1.3.0.1, automatic), python-oslo.policy:ppc64el (0.5.0-1.ibm.ubuntu1, automatic), python-oslo.db:ppc64el (2.4.1-1.ibm.ubuntu1, automatic), python-oslo.versionedobjects:ppc64el (0.9.0-1.ibm.ubuntu1, automatic), python-glanceclient:ppc64el (1.1.0-1.ibm.ubuntu1, automatic), ceilometer-common:ppc64el (5.0.0-201511171217.ibm.ubuntu1.199, automatic), openstack-i18n:ppc64el (2015.2-3.ibm.ubuntu1, automatic), python-oslo.messaging:ppc64el (2.1.0-2.ibm.ubuntu1, automatic), python-swiftclient:ppc64el (2.4.0-1.ibm.ubuntu1, automatic), ceilometer-powervm:ppc64el (1.0.0.0-151119-44, automatic)
    End-Date: 2016-01-18  10:28:00
    

    The command line interface

    You can do ALL the things you were doing on the HMC using the pvmctl command. The syntax is pretty simple: pvmctl |OBJECT| |ACTION|, where the OBJECT can be vios, vm, vea (virtual ethernet adapter), vswitch, lu (logical unit), or anything you want, and the ACTION can be list, delete, create, update. Here are a few examples :

    • List the Virtual I/O Servers:
    # pvmctl vios list
    Virtual I/O Servers
    +--------------+----+---------+----------+------+-----+-----+
    |     Name     | ID |  State  | Ref Code | Mem  | CPU | Ent |
    +--------------+----+---------+----------+------+-----+-----+
    | s00ia9940825 | 1  | running |          | 8192 |  2  | 0.2 |
    | s00ia9940826 | 2  | running |          | 8192 |  2  | 0.2 |
    +--------------+----+---------+----------+------+-----+-----+
    
    • List the partitions (note the -d for display-fields, allowing me to print some attributes):
    # pvmctl vm list
    Logical Partitions
    +----------+----+----------+----------+----------+-------+-----+-----+
    |   Name   | ID |  State   |   Env    | Ref Code |  Mem  | CPU | Ent |
    +----------+----+----------+----------+----------+-------+-----+-----+
    | aix72ca> | 3  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    |   nova   | 4  | running  | AIX/Lin> | Linux p> |  8192 |  2  | 0.5 |
    | s00vl99> | 5  | running  | AIX/Lin> | Linux p> | 10240 |  2  | 0.2 |
    | test-59> | 6  | not act> | AIX/Lin> | 00000000 |  2048 |  1  | 0.1 |
    +----------+----+----------+----------+----------+-------+-----+-----+
    # pvmctl list vm -d name id 
    [..]
    # pvmctl vm list -i id=4 --display-fields LogicalPartition.name
    name=aix72-1-d3707953-00000090
    # pvmctl vm list  --display-fields LogicalPartition.name LogicalPartition.id LogicalPartition.srr_enabled SharedProcessorConfiguration.desired_virtual SharedProcessorConfiguration.uncapped_weight
    name=aix72capture,id=3,srr_enabled=False,desired_virtual=1,uncapped_weight=64
    name=nova,id=4,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=s00vl9940243,id=5,srr_enabled=False,desired_virtual=2,uncapped_weight=128
    name=test-5925058d-0000008d,id=6,srr_enabled=False,desired_virtual=1,uncapped_weight=128
    
    • Delete the virtual adapter on the partition named nova (note the --parent-id to select the partition) with a certain uuid, which was found with pvmctl vea list:
    # pvmctl vea delete --parent-id name=nova --object-id uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
    
  • Power off the lpar named aix72-2:
    # pvmctl vm power-off -i name=aix72-2-536bf0f8-00000091
    Powering off partition aix72-2-536bf0f8-00000091, this may take a few minutes.
    Partition aix72-2-536bf0f8-00000091 power-off successful.
    
  • Delete the lpar named aix72-2:
    # pvmctl vm delete -i name=aix72-2-536bf0f8-00000091
    
  • Delete the vswitch named MGMTVSWITCH:
    # pvmctl vswitch delete -i name=MGMTVSWITCH
    
  • Open a console:
    #  mkvterm --id 4
    vterm for partition 4 is active.  Press Control+] to exit.
    |
    Elapsed time since release of system processors: 57014 mins 10 secs
    [..]
    
  • Power on an lpar:
    # pvmctl vm power-on -i name=aix72capture
    Powering on partition aix72capture, this may take a few minutes.
    Partition aix72capture power-on successful.
    

    Is this a dream ? No more RMC connectivity problems anymore

    I’m 100% sure that you always have problems with RMC connectivity due to firewall issues, ports not opened, and IDS blocking RMC incoming or outgoing traffic. NovaLink is THE solution that will solve all the RMC problems forever. I’m not joking, it’s a major improvement for PowerVM. As the NovaLink partition is installed on each host, it can communicate through a dedicated IPv6 link with all the partitions hosted on the host. A dedicated virtual switch called MGMTSWITCH is used to allow the RMC flow to transit between all the lpars and the NovaLink partition. Of course this virtual switch must be created and one Virtual Ethernet Adapter must also be created on the NovaLink partition. These are the first two actions to do if you want to implement this solution. Before starting, here are a few things you need to know:

    • For security reasons the MGMTSWITCH must be created in Vepa mode. If you are not aware of what VEPA and VEB modes are, here is a reminder:
    • In VEB mode all the partitions connected to the same vlan can communicate together. We do not want that, as it is a security issue.
    • The VEPA mode gives us the ability to isolate lpars that are on the same subnet: lpar-to-lpar traffic is forced out of the machine. This is what we want.
    • The PVID for this VEPA network is 4094.
    • The adapter in the NovaLink partition must be a trunk adapter.
    • It is mandatory to name the VEPA vswitch MGMTSWITCH.
    • At lpar creation time, if the MGMTSWITCH exists, a new Virtual Ethernet Adapter will be automatically created on the deployed lpar.
    • To be correctly configured the deployed lpar needs the latest level of rsct code (3.2.1.0 for now).
    • The latest cloud-init version must be deployed on the captured lpar used to make the image.
    • You don’t need to configure any addresses on this adapter; on the deployed lpars the adapter is configured with a link-local address (the IPv6 equivalent of the 169.254.0.0/16 addresses used in IPv4). Please note that any IPv6 interface must by design have a link-local address.

    mgmtswitch2

    • Create the virtual switch called MGMTSWITCH in Vepa mode:
    # pvmctl vswitch create --name MGMTSWITCH --mode=Vepa
    # pvmctl vswitch list  --display-fields VirtualSwitch.name VirtualSwitch.mode 
    name=ETHERNET0,mode=Veb
    name=vdct,mode=Veb
    name=vdcb,mode=Veb
    name=vdca,mode=Veb
    name=MGMTSWITCH,mode=Vepa
    
    • Create a virtual ethernet adapter on the NovaLink partition with the PVID 4094 and a trunk priority set to 1 (it’s a trunk adapter). Note that we now have two adapters on the NovaLink partition (one in IPv4 (routable) and the other one in IPv6 (non-routable)):
    # pvmctl vea create --pvid 4094 --vswitch MGMTSWITCH --trunk-pri 1 --parent-id name=nova
    # pvmctl vea list --parent-id name=nova
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=False
      is_trunk=False
      loc_code=U8286.41A.216666-V3-C2
      mac=EE3B84FD1402
      pvid=666
      slot=2
      uuid=05a91ab4-9784-3551-bb4b-9d22c98934e6
      vswitch_id=1
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=True
      is_trunk=True
      loc_code=U8286.41A.216666-V3-C34
      mac=B6F837192E63
      pvid=4094
      slot=34
      trunk_pri=1
      uuid=fe7389a8-667f-38ca-b61e-84c94e5a3c97
      vswitch_id=4
    

    Configure the link-local IPv6 address on the NovaLink partition:

    # more /etc/network/interfaces
    [..]
    auto eth1
    iface eth1 inet manual
     up /sbin/ifconfig eth1 0.0.0.0
    # ifup eth1
    # ifconfig eth1
    eth1      Link encap:Ethernet  HWaddr b6:f8:37:19:2e:63
              inet6 addr: fe80::b4f8:37ff:fe19:2e63/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:1454 (1.4 KB)
              Interrupt:34
    

    Capture an AIX host with the latest version of rsct (3.2.1.0 or later) and the latest version of cloud-init installed. This version of RMC/rsct handles this new feature, so it is mandatory to have it installed on the captured host. When PowerVC deploys a Virtual Machine on a NovaLink managed host with this version of rsct installed, a new adapter with the PVID 4094 in the virtual switch MGMTSWITCH is created and all the RMC traffic uses this adapter instead of your public IP address:

    # lslpp -L rsct*
      Fileset                      Level  State  Type  Description (Uninstaller)
      ----------------------------------------------------------------------------
      rsct.core.auditrm          3.2.1.0    C     F    RSCT Audit Log Resource
                                                       Manager
      rsct.core.errm             3.2.1.0    C     F    RSCT Event Response Resource
                                                       Manager
      rsct.core.fsrm             3.2.1.0    C     F    RSCT File System Resource
                                                       Manager
      rsct.core.gui              3.2.1.0    C     F    RSCT Graphical User Interface
      rsct.core.hostrm           3.2.1.0    C     F    RSCT Host Resource Manager
      rsct.core.lprm             3.2.1.0    C     F    RSCT Least Privilege Resource
                                                       Manager
      rsct.core.microsensor      3.2.1.0    C     F    RSCT MicroSensor Resource
                                                       Manager
      rsct.core.rmc              3.2.1.1    C     F    RSCT Resource Monitoring and
                                                       Control
      rsct.core.sec              3.2.1.0    C     F    RSCT Security
      rsct.core.sensorrm         3.2.1.0    C     F    RSCT Sensor Resource Manager
      rsct.core.sr               3.2.1.0    C     F    RSCT Registry
      rsct.core.utils            3.2.1.1    C     F    RSCT Utilities
    

    When this image is deployed a new adapter is created in the MGMTSWITCH virtual switch and an IPv6 link-local address is configured on it. You can check the cloud-init activation log to see that the IPv6 address is configured at activation time:

    # pvmctl vea list --parent-id name=aix72-2-0a0de5c5-00000095
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=True
      is_trunk=False
      loc_code=U8286.41A.216666-V5-C32
      mac=FA620F66FF20
      pvid=3331
      slot=32
      uuid=7f1ec0ab-230c-38af-9325-eb16999061e2
      vswitch_id=1
    --------------------------
    | VirtualEthernetAdapter |
    --------------------------
      is_tagged_vlan_supported=True
      is_trunk=False
      loc_code=U8286.41A.216666-V5-C33
      mac=46A066611B09
      pvid=4094
      slot=33
      uuid=560c67cd-733b-3394-80f3-3f2a02d1cb9d
      vswitch_id=4
    # ifconfig -a
    en0: flags=1e084863,14c0
            inet 10.10.66.66 netmask 0xffffff00 broadcast 10.14.33.255
             tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
    en1: flags=1e084863,14c0
            inet6 fe80::c032:52ff:fe34:6e4f/64
             tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
    sit0: flags=8100041
            inet6 ::10.10.66.66/96
    [..]
    

    Note that the link-local address is configured at activation time (addresses starting with fe80):

    # more /var/log/cloud-init-output.log
    [..]
    auto eth1
    
    iface eth1 inet6 static
        address fe80::c032:52ff:fe34:6e4f
        hwaddress ether c2:32:52:34:6e:4f
        netmask 64
        pre-up [ $(ifconfig eth1 | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}') = "c2:32:52:34:6e:4f" ]
            dns-search fr.net.intra
    # entstat -d ent1 | grep -iE "switch|vlan"
    Invalid VLAN ID Packets: 0
    Port VLAN ID:  4094
    VLAN Tag IDs:  None
    Switch ID: MGMTSWITCH
    
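    Another quick check you can do from the deployed lpar is to ask RMC itself which management control points it knows. The IBM.MCP resource class is standard rsct; on a working NovaLink setup you should find an entry whose IP addresses contain the fe80:: link-local address of the NovaLink partition (this is a hedged example, attribute availability may vary with your rsct level):

    # lsrsrc IBM.MCP
    # lsrsrc IBM.MCP MNName IPAddresses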

    To be sure everything is working correctly here is a proof test. I’m taking down the en0 interface on which the IPv4 public address is configured. Then I’m launching a tcpdump on en1 (the adapter on the MGMTSWITCH). Finally I’m resizing the Virtual Machine with PowerVC. AND EVERYTHING IS WORKING GREAT !!!! AWESOME !!! :-) (note the fe80 to fe80 communication):

    # ifconfig en0 down detach ; tcpdump -i en1 port 657
    tcpdump: WARNING: en1: no IPv4 address assigned
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on en1, link-type 1, capture size 96 bytes
    22:00:43.224964 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: S 4049792650:4049792650(0) win 65535 
    22:00:43.225022 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: S 2055569200:2055569200(0) ack 4049792651 win 28560 
    22:00:43.225051 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 1 win 32844 
    22:00:43.225547 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 1:209(208) ack 1 win 32844 
    22:00:43.225593 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: . ack 209 win 232 
    22:00:43.225638 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 1:97(96) ack 209 win 232 
    22:00:43.225721 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 209:377(168) ack 97 win 32844 
    22:00:43.225835 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 97:193(96) ack 377 win 240 
    22:00:43.225910 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 377:457(80) ack 193 win 32844 
    22:00:43.226076 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 193:289(96) ack 457 win 240 
    22:00:43.226154 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 457:529(72) ack 289 win 32844 
    22:00:43.226210 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 289:385(96) ack 529 win 240 
    22:00:43.226276 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: P 529:681(152) ack 385 win 32844 
    22:00:43.226335 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.32819: P 385:481(96) ack 681 win 249 
    22:00:43.424049 IP6 fe80::9850:f6ff:fe9c:5739.32819 > fe80::d09e:aff:fecf:a868.rmc: . ack 481 win 32844 
    22:00:44.725800 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88
    22:00:44.726111 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
    22:00:50.137605 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 632
    22:00:50.137900 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 88
    22:00:50.183108 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
    22:00:51.683382 IP6 fe80::9850:f6ff:fe9c:5739.rmc > fe80::d09e:aff:fecf:a868.rmc: UDP, length 408
    22:00:51.683661 IP6 fe80::d09e:aff:fecf:a868.rmc > fe80::9850:f6ff:fe9c:5739.rmc: UDP, length 88
    

    To be sure the security requirements are met, from the lpar I’m pinging the NovaLink host (the first ping), which answers, and then I’m pinging the second lpar (the second ping), which does not answer. (And this is what we want!!!)

    # ping fe80::d09e:aff:fecf:a868
    PING fe80::d09e:aff:fecf:a868 (fe80::d09e:aff:fecf:a868): 56 data bytes
    64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=0 ttl=64 time=0.203 ms
    64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=1 ttl=64 time=0.206 ms
    64 bytes from fe80::d09e:aff:fecf:a868: icmp_seq=2 ttl=64 time=0.216 ms
    ^C
    --- fe80::d09e:aff:fecf:a868 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0/0/0 ms
    # ping fe80::44a0:66ff:fe61:1b09
    PING fe80::44a0:66ff:fe61:1b09 (fe80::44a0:66ff:fe61:1b09): 56 data bytes
    ^C
    --- fe80::44a0:66ff:fe61:1b09 ping statistics ---
    2 packets transmitted, 0 packets received, 100% packet loss
    

    PowerVC 1.3.0.1 Dynamic Resource Optimizer

    In addition to the NovaLink part of this blog post I also wanted to talk about the killer app of 2016: the Dynamic Resource Optimizer. This feature can be used on any PowerVC 1.3.0.1 managed host (you obviously need at least two hosts). DRO is in charge of re-balancing your Virtual Machines across all the available hosts (in the host group). To sum up, if a host is experiencing a heavy load and reaching a certain amount of CPU consumption over a period of time, DRO will move your virtual machines to re-balance the load across all the available hosts (this is done at a host level). Here are a few details about DRO:

    • The DRO configuration is done at a host level.
    • You set up a threshold (in the capture below) that, once reached, triggers the Live Partition Mobility moves or the mobile core movements (Power Enterprise Pool).
    • droo6
      droo3

    • To be triggered this threshold must be reached a certain number of times (stabilization) over a period you define (run interval).
    • You can choose to move virtual machines using Live Partition Mobility, or to move "cores" using Power Enterprise Pool (you can do both; moving CPU will always be preferred over moving partitions).
    • DRO can be run in advise mode (nothing is done, a warning is thrown in the new DRO events tab) or in active mode (which does the job and moves things).
      droo2
      droo1
    • Your most critical virtual machines can be excluded from DRO:
    • droo5

    How does DRO choose which machines are moved

    I have been running DRO in production for one month now and I had time to check what is going on behind the scenes. How does DRO choose which machines are moved when a Live Partition Mobility operation must be run to face a heavy load on a host? To find out I decided to launch 3 different cpuhog processes (16 forks, 4VP, SMT4), which eat CPU resources, on three different lpars with 4VP each. On PowerVC I can check that before launching these processes the CPU consumption is ok on this host (the three lpars are running on the same host):

    droo4

    # cat cpuhog.pl
    #!/usr/bin/perl
    
    print "eating the CPUs\n";
    
    foreach $i (1..16) {
          $pid = fork();
          last if $pid == 0;
          print "created PID $pid\n";
    }
    
    while (1) {
          $x++;
    }
    # perl cpuhog.pl
    eating the CPUs
    created PID 47514604
    created PID 22675712
    created PID 3015584
    created PID 21496152
    created PID 25166098
    created PID 26018068
    created PID 11796892
    created PID 33424106
    created PID 55444462
    created PID 65077976
    created PID 13369620
    created PID 10813734
    created PID 56623850
    created PID 19333542
    created PID 58393312
    created PID 3211988
    
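    Before looking at what DRO decides, I just make sure the load is really there on each of the three lpars; plain lparstat with an interval and a count (the same command used later in this post) is enough to watch the physc and %entc values climbing:

    # lparstat 30 10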

    After waiting a couple of minutes I realize that the virtual machines on which the cpuhog processes were launched are the ones that are migrated. So we can say that PowerVC moves the machines that are eating CPU (another strategy could have been to move the machines that are not eating CPU, to let the working ones do their job without having to migrate them).

    # errpt | head -3
    IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
    A5E6DB96   0118225116 I S pmig           Client Partition Migration Completed
    08917DC6   0118225116 I S pmig           Client Partition Migration Started
    

    After the moves are done I can see that the load is now ok on the host. DRO has done the job for me and moved the lpars to meet the configured threshold ;-)

    droo7
    dro_effect

    The images below show you a good example of the "power" of PowerVC and DRO. To update my Virtual I/O Servers to the latest version, PowerVC maintenance mode was used to free up the Virtual I/O Servers. After leaving maintenance mode, DRO did the job of re-balancing the Virtual Machines across all the hosts (the red arrows symbolize the maintenance mode actions and the purple ones the DRO actions). You can also see that some lpars were moved across 4 different hosts during this process. All these pictures are taken from real life experience on my production systems. This is not a lab environment, this is one part of my production. So yes, DRO and PowerVC 1.3.0.1 are production ready. Hell yes!

    real1
    real2
    real3
    real4
    real5

    Conclusion

    As my environment is growing bigger the next step for me will be to move to NovaLink on my P8 hosts. Please note that the NovaLink co-management feature is today a "TechPreview" but should go GA very soon. Talking about DRO, I had been waiting for that for years and it finally happened. I can assure you that it is production ready; to prove this I’ll just give you one number: to upgrade my Virtual I/O Servers to the 2.2.4.10 release using PowerVC maintenance mode and DRO, more than 1000 Live Partition Mobility moves were performed without any outage, on production servers and during working hours. Nobody in my company was aware of this during the operations. It was a seamless experience for everybody.

    What’s new in VIOS 2.2.4.10 and PowerVM : Part 2 Shared Processor Pool weighting


    First of all, before beginning this blog post, I owe you an explanation about these two months without new posts. These two months were very busy. On the personal side I was forced to move from my current apartment and had to find another one suitable for me (and I can assure you that this is not something easy in Paris). As I was visiting apartments almost 3 days a week, the time kept for writing blog posts (please remember that I’m doing this in my "after hours" work) was taken for something else :-(. At work things were crazy too: we had to build twelve new E870 boxes (with the provisioning toolkit and SRIOV adapters) and make them work with our current implementation of PowerVC. Then I had to do a huge vscsi to NPIV migration (more than 500 AIX machines to migrate from vscsi to NPIV and then move to P8 boxes in less than three weeks … yes, more than 500 machines in less than 3 weeks, 4000 zones created …). Thanks to the help of an STG Lab Services consultant (Bonnie LeBarron) this was achieved using a modified version of her script (adapted to fit our needs on the zoning and mapping part, and to the latest HMC releases). I’m back in business now and I have planned a couple of blog posts this month. The first of this series is about Shared Processor Pool weighting on the latest Power8 firmware versions. You’ll see that it changes a lot of things compared to P7 boxes.

    A short history of Shared Processor Pool weighting

    This long story began a few years ago for me (I’d say at least 4 years ago). I was planning to do a blog post about it back then but decided not to because the topic was considered "sensitive"; now that we have documentation and an official statement on this there is no reason to hide it anymore. I was working for a bank using two P795 with a lot of cores activated. We were using Multiple Shared Processor Pools in an unconventional way (as far as I remember two pools per customer, one for Oracle and one for WAS, and we had more than 5 or 6 customers, so each box had at least 10 MSPP). As you may already know I only believe what I can see. So I decided to run tests on my own. By reading the Redbook I realized that there was not enough information about pool and partition weighting. We were, like a lot of today’s customers, using different weights for development (32), qualification (64), pre-production (128), production (192) and finally the Virtual I/O Servers (255). As we were using Shared Processor Pools I was expecting that when a Shared Processor Pool is full (contention) the weights would kick in and prioritize the partitions with the higher weight. What was my surprise when I realized the weighting was not working inside a Shared Processor Pool but only in the DefaultPool (Pool 0). Remember this statement forever: on Power7, partition weighting only works when the default pool is full. There is no "intelligence" in a Shared Processor Pool and you have to be very careful with the size of the pool because of that. On Power7, pools are used ONLY for licensing purposes. I then decided to contact my preferred IBM pre-sales in France to tell him about this incredible discovery. I had no answer for one month, then (as always) he came back with the answer of someone who already knew the truth about this. He introduced me to a performance expert (she was a performance expert at that time and is now specialized in security) and she told me that I was absolutely right with my discovery but that only a few people were aware of it. I decided to say nothing about it … but was sure that IBM realized there was something to clarify here. Then last year at the IBM Technical Collaboration Council I saw a PowerPoint slide saying that the latest IBM Power8 firmware would add this long awaited feature: partition weighting will work inside a Shared Processor Pool. Finally, after waiting for more than four years, I have what I want. As I was working on a new project in my current job I had to create a lot of Shared Processor Pools in a mixed Power7 (P770) and Power8 (E870) environment. It was time to check if this new feature was really working and to compare the differences between a Power8 (with the latest firmware) and a Power7 machine (with the latest firmware). The way we implement and monitor Shared Processor Pools on a Power8 will now be very different from what it was on a Power7 box. I think this is really important and that everybody now needs to understand the differences for their future implementations. But let’s first have a look at the Redbooks to check the official statements:

    The Redbook talking about this is "IBM PowerVM Virtualization Introduction and Configuration"; here is the key paragraph to understand (pages 113 and 114):

    redbook_statement

    It was super hard to find but there is a place where IBM talks about this. I’m quoting this link below: https://www.ibm.com/support/knowledgecenter/9119-MME/p8hat/p8hat_sharedproc.htm

    When the firmware is at level 8.3.0, or earlier, uncapped weight is used only when more virtual processors consume unused resources than the available physical processors in the shared processor pool. If no contention exists for processor resources, the virtual processors are immediately distributed across the physical processors, independent of their uncapped weights. This can result in situations where the uncapped weights of the logical partitions do not exactly reflect the amount of unused capacity.

    For example, logical partition 2 has one virtual processor and an uncapped weight of 100. Logical partition 3 also has one virtual processor, but an uncapped weight of 200. If logical partitions 2 and 3 both require more processing capacity, and there is not enough physical processor capacity to run both logical partitions, logical partition 3 receives two more processing units for every additional processing unit that logical partition 2 receives. If logical partitions 2 and 3 both require more processing capacity, and there is enough physical processor capacity to run both logical partitions, logical partition 2 and 3 receive an equal amount of unused capacity. In this situation, their uncapped weights are ignored.

    When the firmware is at level 8.4.0, or later, if multiple partitions are assigned to a shared processor pool, the uncapped weight is used as an indicator of how the processor resources must be distributed among the partitions in the shared processor pool with respect to the maximum amount of capacity that can be used by the shared processor pool. For example, logical partition 2 has one virtual processor and an uncapped weight of 100. Logical partition 3 also has one virtual processor, but an uncapped weight of 200. If logical partitions 2 and 3 both require more processing capacity, logical partition 3 receives two additional processing units for every additional processing unit that logical partition 2 receives.

    The server distributes unused capacity among all of the uncapped shared processor partitions that are configured on the server, regardless of the shared processor pools to which they are assigned. For example, if you configure logical partition 1 to the default shared processor pool and you configure logical partitions 2 and 3 to a different shared processor pool, all three logical partitions compete for the same unused physical processor capacity in the server, even though they belong to different shared processor pools.
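    To put numbers on this, here is a small back-of-the-envelope check of the physc split to expect in the tests below (a 1.0-unit pool, two lpars with 0.1 EC each, weights 128 and 64). My reading of the paragraph above (not an official formula) is that the capacity above the entitlements is shared proportionally to the weights:

    # physc ~= EC + (pool_max - sum of EC) * weight / sum of weights
    # echo "scale=2; 0.1 + (1.0 - 0.2) * 128 / (128 + 64)" | bc
    .63
    # echo "scale=2; 0.1 + (1.0 - 0.2) * 64 / (128 + 64)" | bc
    .36

    This is close to the 0.64/0.36 physc values measured on the Power8 box in the tests below.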

    Testing methodology

    We now need to demonstrate that the behavior of the weighting is different between a Power7 and a Power8 machine; here is how we are going to proceed:

    • On a Power8 machine (E870 SC840_056) we create a Shared Processor Pool with a “Maximum Processing unit” set to 1.
    • On a Power7 machine we create a Shared Processor Pool with a “Maximum Processing unit” set to 1.
    • We create two partitions in the P8 pool (1VP, 0.1EC) called mspp1 and mspp2.
    • We create two partitions in the P7 pool (1VP, 0.1EC) called mspp3 and mspp4.
    • Using ncpu provided with the nstress tools (http://public.dhe.ibm.com/systems/power/community/wikifiles/PerfTools/nstress_AIX6_April_2014.tar) we create a heavy load on each partition. Obviously this load can’t be higher than 1 processing unit in total (the sum of each physc).
    • We then use these testing scenarios (each test has a duration of 15 minutes, we record cpu and pool stats with nmon and lpar2rrd):
    1. First partition with a weight of 128, the second partition with a weight of 128 (test with the same weight).
    2. First partition with a weight of 64, the second partition with a weight of 128 (weight ratio of 1:2).
    3. First partition with a weight of 32, the second partition with a weight of 128 (weight ratio of 1:4).
    4. First partition with a weight of 1, the second partition with a weight of 2 (we try here to prove that the ratio between the two values is more important than the values themselves; weights of 1 and 2 should give us the same result as 64 and 128).
    5. First partition with a weight of 1, the second partition with a weight of 255 (a ratio of 1:255) (you’ll see here that the result is pretty interesting :-) ).
  • You’ll see that it will not be necessary to do all these tests on the P7 box … :-)

    The Power8 case

    Prerequisites

    A P8 firmware level of SC840* or SV840* is mandatory to enable weighting inside a Shared Processor Pool when there is no contention for processor resources (no contention in the DefaultPool). This means that P6, P7 and P8 machines with a firmware level lower than 840 do not have this feature coded in the firmware. My advice is to update all your P8 machines to the latest level to enable this new behavior.
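    If you want to quickly check the firmware level of a box from one of its AIX lpars before counting on this behavior, lsmcode or prtconf will tell you (just a convenience check, nothing more):

    # lsmcode -c
    # prtconf | grep -i firmware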

    Tests

    For each test, we check the weight of each partition using the lparstat command, then we capture an nmon file every 30 seconds and we launch ncpu for a duration of 15 minutes with four CPUs (we are in SMT4) on both the P8 and the P7 box. We will show you here that weights are taken into account in a Power8 MSPP, but are not taken into account in a Power7 MSPP.

    # lparstat -i | grep -iE "Variable Capacity Weight|^Partition"
    Partition Name                             : mspp1-23bad3d7-00000898
    Partition Number                           : 3
    Partition Group-ID                         : 32771
    Variable Capacity Weight                   : 255
    Desired Variable Capacity Weight           : 255
    # /usr/bin/nmon -F /admin/nmon/$(hostname)_weight255.nmon -s30 -c30 -t ; ./ncpu -p 4 -s 900
    # lparstat 1 10
    
    • Both weights at 128: you can check in the picture below that the "physc" values are strictly equal (0.5 for both lpars); the ratio of 1 between the two weights is respected:
    • weight128

    • One partition at 64 and one partition at 128: you can check in the pictures below (lparstat output, and nmon analyser graph) that we now have different physc values (0.36 for the mspp2 lpar and 0.64 for the mspp1 lpar). We now have a ratio of roughly 2: mspp1’s physc is about twice mspp2’s physc (the weights are respected in the Shared Processor Pool):
    • weight64_128

    nmonx2

    This lpar2rrd graph shows you the weighting behavior on a Power8 machine (test one: both weights equal to 128, and test two: two different weights of 128 and 64).

    graph_p8_128128_12864

    • One partition at 32 and one partition at 128: you can check in the picture below that the higher weight partition gets roughly three times more processing units (physc values of 0.26 and 0.74, which is what you expect for a 32:128 ratio with an EC of 0.1 on each lpar).
    • weight32_128

    • One partition at 1 and one partition at 2. The results here are exactly the same as in the second test (128 and 64 weights); it proves that the important thing to configure is the ratio between the weights and not the values themselves (using weights of 1, 2, 3 will give you the exact same results as 2, 4, 6):
    • weight1_2

    • Finally one partition at 1 and one partition at 255. Be careful: here the ratio is big enough to leave the low-weight lpar unresponsive when both partitions are loaded. I do not recommend such high ratios because of this:
    • weight1_255

    graph_p8_12832_12_1255

    The Power7 case

    Let’s do one test on a Power7 machine with one lpar with a weight of 1 and the other one with a weight of 255 … you’ll see a huge difference here, and I think it is clear enough to avoid running all the test scenarios on the Power7 machine.

    Tests

    You can see here that I’m doing the exact same test, weights of 1 and 255; now both partitions have an equal physc value (0.5 for both partitions). On a Power7 box the weights are taken into account only if the DefaultPool (pool0) is full (contention). The pictures below show you the reality of Multiple Shared Processor Pools running on a Power7 box. On Power7, MSPP must be used for licensing purposes only and nothing else.

    weight1_255_power7
    graph_p7_1255

    Conclusion

    I hope you better understand the Multiple Shared Processor Pool differences between Power8 and Power7. Now that you are aware of this, my advice is to have different strategies when implementing MSPP on Power7 and on Power8. On Power7, double check and monitor your MSPP to be sure the pools are never full and that you can get enough capacity to run your load. On a Power8 box, set up your weights wisely on your different environments (backup, production, development). You can then be sure that production will be prioritized whatever happens, even if you reduce your MSPP sizes; by doing this you’ll optimize licensing costs. As always I hope it helps.

    Continuous integration for your Chef AIX cookbooks (using PowerVC, Jenkins, test-kitchen and gitlab)


    My journey to integrate Chef on AIX is still going on and I’m working more than ever on these topics. I know that using such tools is not something widely adopted by AIX customers. But what I also know is that, whatever happens, you will in a near -or distant- future use an automation tool. These tools are so widely used in the Linux world that you just can’t ignore them. The way you were managing your AIX ten years ago is not the same as what you are doing today, and what you do today will not be what you’ll do in the future. The AIX world needs a facelift to survive; a huge step has already been done (and is still ongoing) with PowerVC thanks to a fantastic team composed of very smart people at IBM (@amarteyp, @drewthorst, @jwcroppe, and all the other persons in this team!). The AIX world is now compatible with Openstack and with this other things are coming … such as automation. When all of these things are ready, AIX will be able to offer something comparable to Linux. Openstack and automation are the first bricks of what we call today "devops" (to be more specific, it’s the ops part of the devops word).

    I will today focus on how to manage your AIX machines using Chef. By using the word "how" I mean what the best practices are and what infrastructure to build to start using Chef on AIX. If you remember my session about Chef on AIX at the IBM Technical University in Cannes, I was saying that by using Chef your infrastructure will be testable, repeatable, and versionable. We will focus in this blog post on how to do that. To test your AIX Chef cookbooks you will need to understand what the test kitchen is (we will use the test kitchen to drive PowerVC to build virtual machines on the fly and run the chef recipes on them). To repeat this over and over, to be sure everything is working ok (code review, be sure that your cookbook is converging) without having to do anything, we will use Jenkins to automate these tests. Then, to version your cookbook development, we will use gitlab.

    To better understand why I’m doing such a thing there is nothing better than a concrete example. My goal is to do all my AIX post-installation tasks using Chef (motd configuration, dns, device attributes, fileset installation, enabling services … everything that you are doing today using korn shell scripts). Who has never experienced someone changing one of these scripts (most of the time without warning the other members of the team), resulting in a syntax error and then in an outage for all your new builds? Doing this is possible if you are in a little team creating one machine per month, but it is inconceivable in an environment driven by PowerVC where sysadmins are not doing anything "by hand". In such an environment, if someone makes this kind of error all the new builds fail … even worse, you’ll probably not be aware of it until someone connecting to the machine says that there is an error (most of the time the final customer). By using continuous integration your AIX build will be tested at every change, all these changes will be stored in a git repository and, even better, you will not be able to put a change in production without passing all these tests. Even if doing this is basically mandatory for people using PowerVC, people who are not using it today can still do the same thing. By doing that you’ll have a clean and proper AIX build (post-install) and no errors will be possible anymore, so I highly encourage you to do this even if you are not adopting the Openstack way or even if today you don’t see the benefits. In the future this effort will pay off. Trust me.

    The test-kitchen

    What is the kitchen

    The test-kitchen is a tool that allows you to run your AIX Chef cookbooks and recipes in a quick way without having to do manual tasks. During the development of your recipes, if you don’t use the test kitchen, you’ll have many tasks to do manually: build a virtual machine, install the chef client, copy the cookbook and the recipes, run it, check everything is in the state that you want. Imagine doing that on different AIX versions (6.1, 7.1, 7.2) every time you change something in your post-installation recipes (I was doing that before and I can assure you that creating and destroying machines over and over and over is just a waste of time). The test kitchen is here to do the job for you. It will build the machine for you (using the PowerVC kitchen driver), install the chef-client (using an omnibus server), copy the content of your cookbook (the files), run a bunch of recipes (described in what we call suites) and then test it (using bats, or serverspec). You can configure your kitchen to test different kinds of images (6.1, 7.1, 7.2) and different suites (cookbooks, recipes) depending on the environment you want to test. By default the test kitchen uses a Linux tool called Vagrant to build your VMs. Obviously Vagrant is not able to build an AIX machine; that’s why we will use a modified version of the kitchen-openstack driver (modified by myself) called kitchen-powervc to build the virtual machines:

    Installing the kitchen and the PowerVC driver

    If you have access to an enterprise proxy you can directly download and install the gem files from your host (in my case this is a Linux on Power … so Linux on Power is working great for this).

    • Install the test kitchen:
    # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 test-kitchen
    Successfully installed test-kitchen-1.7.2
    Parsing documentation for test-kitchen-1.7.2
    1 gem installed
    
  • Install kitchen-powervc:
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 kitchen-powervc
    Successfully installed kitchen-powervc-0.1.0
    Parsing documentation for kitchen-powervc-0.1.0
    1 gem installed
    
  • Install kitchen-openstack:
  • # gem install --http-proxy http://bcreau:mypasswd@proxy:8080 kitchen-openstack
    Successfully installed kitchen-openstack-3.0.0
    Fetching: fog-core-1.38.0.gem (100%)
    Successfully installed fog-core-1.38.0
    Fetching: fuzzyurl-0.8.0.gem (100%)
    Successfully installed fuzzyurl-0.8.0
    Parsing documentation for kitchen-openstack-3.0.0
    Installing ri documentation for kitchen-openstack-3.0.0
    Parsing documentation for fog-core-1.38.0
    Installing ri documentation for fog-core-1.38.0
    Parsing documentation for fuzzyurl-0.8.0
    Installing ri documentation for fuzzyurl-0.8.0
    3 gems installed
    

    If you don’t have access to an enterprise proxy you can still download the gems from home and install them on your work machine:

    # gem install test-kitchen kitchen-powervc kitchen-openstack -i repo --no-ri --no-rdoc
    # # copy the files (repo directory) on your destination machine
    # gem install *.gem
    

    Setup the kitchen (.kitchen.yml file)

    The kitchen configuration file is .kitchen.yml; when you run the kitchen command, the kitchen will look at this file. You have to put it in the chef-repo (where the cookbook directory is; the kitchen copies the files from the cookbook to the test machine, that’s why it’s important to put this file at the root of the chef-repo). This file is separated into different sections:

    • The driver section. In this section you will configure how to create the virtual machines, in our case how to connect to PowerVC (credentials, region). You’ll also tell in this section which image you want to use (PowerVC images), which flavor (PowerVC template) and which network will be used at VM creation time (please note that you can put some driver_config in the platform section, to tell which image or which ip you want to use for each specific platform):
      • name: the name of the driver (here powervc).
      • openstack*: the PowerVC url, user, password, region, domain.
      • image_ref: the name of the image (we will put this in driver_config in the platform section).
      • flavor_ref: the name of the PowerVC template used at the VM creation.
      • fixed_ip: the ip_address used for the virtual machine creation.
      • server_name_prefix: each vm created by the kitchen will be prefixed by this parameter.
      • network_ref: the name of the PowerVC vlan to be used at the machine creation.
      • public_key_path: The kitchen needs to connect to the machine with ssh, you need to provide the public key used.
      • private_key_path: Same but for the private key.
      • username: The ssh username (we will use root, but you can use another user and then tell the kitchen to use sudo)
      • user_data: The activation input used by cloud-init; in this one we put the public key to be sure we can access the machine without a password (it’s the PowerVC activation input).
    driver:
      name: powervc
      server_wait: 100
      openstack_username: "root"
      openstack_api_key: "root"
      openstack_auth_url: "https://mypowervc:5000/v3/auth/tokens"
      openstack_region: "RegionOne"
      openstack_project_domain: "Default"
      openstack_user_domain: "Default"
      openstack_project_name: "ibm-default"
      flavor_ref: "mytemplate"
      server_name_prefix: "chefkitchen"
      network_ref: "vlan666"
      public_key_path: "/home/chef/.ssh/id_dsa.pub"
      private_key_path: "/home/chef/.ssh/id_dsa"
      username: "root"
      user_data: userdata.txt
    
    #cloud-config
    ssh_authorized_keys:
      - ssh-dss AAAAB3NzaC1kc3MAAACBAIVZx6Pic+FyUisoNrm6Znxd48DQ/YGNRgsed+fc+yL1BVESyTU5kqnupS8GXG2I0VPMWN7ZiPnbT1Fe2D[..]
    
  • The provisioner section: this section can be used to specify whether you want to use chef-zero or chef-solo as a provisioner. You can also specify an omnibus url (used to download and install the chef-client at machine creation time). In my case the omnibus url is a link to an http server "serving" a script (install.sh) installing the chef client fileset for AIX (more details later in the blog post). I’m also setting "sudo" to false as I’ll connect with the root user:
  • provisioner:
      name: chef_solo
      chef_omnibus_url: "http://myomnibusserver:8080/chefclient/install.sh"
      sudo: false
    
  • The platform section: this section describes each platform that the test-kitchen can create (I’m putting here the image_ref and the fixed_ip for each platform (AIX 6.1, AIX 7.1, AIX 7.2)):
  • platforms:
      - name: aix72
        driver_config:
          image_ref: "kitchen-aix72"
          fixed_ip: "10.66.33.234"
      - name: aix71
        driver_config:
          image_ref: "kitchen-aix71"
          fixed_ip: "10.66.33.235"
      - name: aix61
        driver_config:
          image_ref: "kitchen-aix61"
          fixed_ip: "10.66.33.236"
    
  • The suite section: this section describes which cookbook and which recipes you want to run on the machines created by the test-kitchen. For the simplicity of this example I’m just running two recipes: the first one called root_authorized_keys (creating the /root directory, changing the home directory of root and putting a public key in the .ssh directory) and the second one called gem_source (we will check later in the post why I’m also calling this recipe):
  • suites:
      - name: aixcookbook
        run_list:
        - recipe[aix::root_authorized_keys]
        - recipe[aix::gem_source]
        attributes: { gem_source: { add_urls: [ "http://10.14.66.100:8808" ], delete_urls: [ "https://rubygems.org/" ] } }
    
  • The busser section: this section describes how to run your tests (more details later in the post ;-) ):
  • busser:
      sudo: false
    

    After configuring the kitchen you can check that the yml file is ok by listing what’s configured in the kitchen:

    # kitchen list
    Instance           Driver   Provisioner  Verifier  Transport  Last Action
    aixcookbook-aix72  Powervc  ChefSolo     Busser    Ssh        
    aixcookbook-aix71  Powervc  ChefSolo     Busser    Ssh        
    aixcookbook-aix61  Powervc  ChefSolo     Busser    Ssh        
    

    kitchen1
    kitchen2

    Anatomy of a kitchen run

    A kitchen run is divided into five steps. First we create a virtual machine (the create action), then we install the chef-client (using an omnibus url) and run some recipes (converge), then we install the testing tools on the virtual machine (in my case serverspec) (setup) and run the tests (verify). Finally, if everything was ok, we delete the virtual machine (destroy). Instead of running all these steps one by one you can use the "test" option; this one will do destroy, create, converge, setup, verify, destroy in one single pass. Let’s check each step in detail:
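    As a condensed cheat-sheet, here are the corresponding commands run from the chef-repo (the instance name comes from the kitchen list output above; without it the kitchen processes every instance):

    # kitchen create aixcookbook-aix72     # build the vm on PowerVC
    # kitchen converge aixcookbook-aix72   # install the chef-client (omnibus) and run the suite recipes
    # kitchen setup aixcookbook-aix72      # install the test harness (busser/serverspec)
    # kitchen verify aixcookbook-aix72     # run the serverspec tests
    # kitchen destroy aixcookbook-aix72    # delete the vm
    # kitchen test aixcookbook-aix72       # all of the above in a single pass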

    kitchen1

    • Create: This will create the virtual machine using PowerVC. If you choose to use the "fixed_ip" option in the .kitchen.yml file this ip will be chosen at machine creation time. If you prefer to pick an ip from the network (in the pool) don’t set the "fixed_ip". You’ll see the details in the picture below. At the end you can test the connectivity (ssh transport) to the machine using "kitchen login". The ssh public key was automatically added using the userdata.txt file used by cloud-init at machine creation time. After the machine is created you can use the "kitchen list" command to check the machine was successfully created:
    # kitchen create
    

    kitchencreate3
    kitchencreate1
    kitchencreate2
    kitchenlistcreate1

    • Converge: This will converge the kitchen (one more time: converge = chef-client installation and running chef-solo with the suite configuration describing which recipes will be launched). The converge action will download the chef client, install it on the machine (using the omnibus url) and run the recipes specified in the suite stanza of the .kitchen.yml file. Here is the script I use for the omnibus installation; this script is "served" by an http server:
    # cat install.sh
    #!/usr/bin/ksh
    echo "[omnibus] [start] starting omnibus install"
    echo "[omnibus] downloading chef client http://chefomnibus:8080/chefclient/lastest"
    perl -le 'use LWP::Simple;getstore("http://chefomnibus:8080/chefclient/latest", "/tmp/chef.bff")'
    echo "[omnibus] installing chef client"
    installp -aXYgd /tmp/ chef
    echo "[omnibus] [end] ending omnibus install"
    
  • The http server is serving this install.sh file. Here is the httpd.conf configuration file for the omnibus installation on AIX:
  • # ls -l /apps/chef/chefclient
    total 647896
    -rw-r--r--    1 apache   apache     87033856 Dec 16 17:15 chef-12.1.2-1.powerpc.bff
    -rwxr-xr-x    1 apache   apache     91922944 Nov 25 00:24 chef-12.5.1-1.powerpc.bff
    -rw-------    2 apache   apache     76375040 Jan  6 11:23 chef-12.6.0-1.powerpc.bff
    -rwxr-xr-x    1 apache   apache          364 Apr 15 10:23 install.sh
    -rw-------    2 apache   apache     76375040 Jan  6 11:23 latest
    # cat httpd.conf
    [..]
         Alias /chefclient/ "/apps/chef/chefclient/"
         <Directory "/apps/chef/chefclient/">
             Options Indexes FollowSymlinks MultiViews
             AllowOverride None
             Require all granted
         </Directory>
    
    # kitchen converge
    

    kitchenconverge1
    kitchenconverge2b
    kitchenlistconverge1

    • Setup and verify: these actions will run a bunch of tests to verify the machine is in the state you want. The tests I am writing check that the root home directory was created and that the key was successfully created in the .ssh directory. In a few words you need to write tests checking that your recipes are working well (in chef words: "check that the machine is in the correct state"). In my case I’m using serverspec to describe my tests (there are different tools used for testing; you can also use bats). To describe the test suite just create serverspec files (describing the tests) in the chef-repo directory (in ~/test/integration/<suite_name>/serverspec, in my case ~/test/integration/aixcookbook/serverspec). All the serverspec test files are suffixed by _spec:
    # ls test/integration/aixcookbook/serverspec/
    root_authorized_keys_spec.rb  spec_helper.rb
    
  • The "_spec" files describe the tests that will be run by the kitchen. In my very simple tests here I’m just checking that my files exist and that the content of the authorized_keys file matches my public key (the key created by cloud-init on AIX is located in ~/.ssh, and my test recipe here changes the root home directory and puts the key in the right place). By looking at the file you can see that the serverspec language is very simple to understand:
  • # ls test/integration/aixcookbook/serverspec/
    root_authorized_keys_spec.rb  spec_helper.rb
    
    # cat spec_helper.rb
    require 'serverspec'
    set :backend, :exec
    # cat root_authorized_keys_spec.rb
    require 'spec_helper'
    
    describe file('/root/.ssh') do
      it { should exist }
      it { should be_directory }
      it { should be_owned_by 'root' }
    end
    
    describe file('/root/.ssh/authorized_keys') do
      it { should exist }
      it { should be_owned_by 'root' }
      it { should contain 'from="1[..]" ssh-rsa AAAAB3NzaC1[..]' }
    end
    
  • The kitchen will try to install the ruby gems needed for serverspec (serverspec needs to be installed on the server to run the automated tests). As my server has no connectivity to the internet I need to run my own gem server. Luckily all the needed gems are installed on my chef workstation (if you have no internet access from the workstation use the tip described at the beginning of this blog post). I just need to run a local gem server by running "gem server" on the chef workstation. The server listens on port 8808 and will serve all the needed gems:
  • # gem list | grep -E "busser|serverspec"
    busser (0.7.1)
    busser-bats (0.3.0)
    busser-serverspec (0.5.9)
    serverspec (2.31.1)
    # gem server
    Server started at http://0.0.0.0:8808
    
  • If you look at the output above you can see that the gem_source recipe was executed. This recipe changes the gem sources on the virtual machine (from https://rubygems.org to my own local server). In the .kitchen.yml file the urls to add to and remove from the gem sources are specified in the suite attributes:
  • # cat gem_source.rb
    ruby_block 'Changing gem source' do
      block do
        node['gem_source']['add_urls'].each do |url|
          current_sources = Mixlib::ShellOut.new('/opt/chef/embedded/bin/gem source')
          current_sources.run_command
          next if current_sources.stdout.include?(url)
          add = Mixlib::ShellOut.new("/opt/chef/embedded/bin/gem source --add #{url}")
          add.run_command
          Chef::Application.fatal!("Adding gem source #{url} failed #{add.status}") unless add.status == 0
          Chef::Log.info("Add gem source #{url}")
        end
    
        node['gem_source']['delete_urls'].each do |url|
          current_sources = Mixlib::ShellOut.new('/opt/chef/embedded/bin/gem source')
          current_sources.run_command
          next unless current_sources.stdout.include?(url)
          del = Mixlib::ShellOut.new("/opt/chef/embedded/bin/gem source --remove #{url}")
          del.run_command
          Chef::Application.fatal!("Removing gem source #{url} failed #{del.status}") unless del.status == 0
          Chef::Log.info("Remove gem source #{url}")
        end
      end
      action :run
    end
    
    # kitchen setup
    # kitchen verify
    

    kitchensetupeverify1
    kitchenlistverfied1

    • Destroy: This will destroy the virtual machine on PowerVC.
    # kitchen destroy
    

    kitchendestroy1
    kitchendestroy2
    kitchenlistdestroy1

    Now that you understand how the kitchen works and are able to run it to create and test AIX machines, you are ready to use the kitchen to develop and create the chef cookbook that will fit your infrastructure. To run all the steps "create, converge, setup, verify, destroy", just use the "kitchen test" command:

    # kitchen test
    

    As you are going to change a lot of things in your cookbook you’ll need to version the code you are creating; for this we will use a gitlab server.

    Gitlab: version your AIX cookbook

    Unfortunately for you and for me I didn’t have the time to run gitlab on a Linux on Power machine. I’m sure it is possible (if you find a way to do this please mail me). Anyway, my version of gitlab is running on an x86 box. The goal here is to allow the chef workstation user (in my environment this user is "chef") to push all the new developments (providers, recipes) to the git development branch; for this we will:

    • Allow the chef user to push its source to the git server through ssh (we are creating a chefworkstation user and adding the key to authorize this user to push changes to the git repository with ssh).
    • gitlabchefworkst

    • Create a new repository called aix-cookbook.
    • createrepo

    • Push your current work to the master branch. The master branch will be the production branch.
    # git config --global user.name "chefworkstation"
    # git config --global user.email "chef@myworkstation.chmod666.org"
    # git init
    # git add -A .
    # git commit -m "first commit"
    # git remote add origin git@gitlabserver:chefworkstation/aix-cookbook.git
    # git push origin master
    

    masterbranch

  • Create a development branch (you’ll need to push all your new developments to this branch, and you’ll never have to do anything else on the master branch as Jenkins is going to do the job for us).
  • # git checkout -b dev
    # git commit -a
    # git push origin dev
    

    devbranch

    The git server is ready: we have a repository accessible by the chef user. Two branches are created: the dev one (the one we are working on, used for all our development) and the master branch used for production, which will never be touched by us and will only be updated (by Jenkins) if all the tests (foodcritic, rubocop and the test-kitchen) are ok.

    Automating the continous integration with Jenkins

    What is Jenkins

    The goal of Jenkins is to automate all the tests and run them over and over again every time a change is applied to the cookbook you are developing. By using Jenkins you will be sure that every change is tested and you will never push something to your production environment that is not working or not passing the tests you have defined. To be sure the cookbook is working as desired we will use three different tools. foodcritic will check your chef cookbook for common problems against rules defined within the tool (these rules check that everything is ok for the chef execution, so you will be sure that there is no syntax error and that all the coding conventions are respected), rubocop will check the ruby syntax, and then we will run a kitchen test to be sure that the development branch is working with the kitchen and that all our serverspec tests are ok. Jenkins will automate the following steps (a sketch of the corresponding shell build steps is shown after the list):

    1. Pull the dev branch from git server (gitlab) if anything has changed on this branch.
    2. Run foodcritic on the code.
    3. If foodcritic tests are ok this will trigger the next step.
    4. Pull the dev branch again
    5. Run rubocop on the code.
    6. If rubocop tests are ok this will trigger the next step.
    7. Run the test-kitchen
    8. This will build a new machine on PowerVC and test the cookbook against it (kitchen test).
    9. If the test kitchen is ok push the dev branch to the master branch.
    10. You are ready for production :-)
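    Assuming each of the three Jenkins jobs uses a plain "Execute shell" build step, the commands behind the screenshots below boil down to something like this (a sketch; the foodcritic and rubocop invocations are the same ones shown later in this post):

    # job 1: lint the cookbook with foodcritic (correctness rules)
    foodcritic -f correctness ./cookbooks/
    # job 2: check the ruby style with rubocop
    rubocop .
    # job 3: build, converge and verify the dev branch with the test-kitchen
    kitchen test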

    kitchen2

    First: Foodcritic

    The first test we are running is foodcritic. Rather than trying to come up with my own explanation with my weird english, I prefer to quote the chef website:

    Foodcritic is a static linting tool that analyzes all of the Ruby code that is authored in a cookbook against a number of rules, and then returns a list of violations. Because Foodcritic is a static linting tool, using it is fast. The code in a cookbook is read, broken down, and then compared to Foodcritic rules. The code is not run (a chef-client run does not occur). Foodcritic does not validate the intention of a recipe, rather it evaluates the structure of the code, and helps enforce specific behavior, detect portability of recipes, identify potential run-time failures, and spot common anti-patterns.

    # foodcritic -f correctness ./cookbooks/
    FC014: Consider extracting long ruby_block to library: ./cookbooks/aix/recipes/gem_source.rb:1
    

    In Jenkins here are the steps to create a foodcritic test:

    • Pull dev branch from gitlab:
    • food1

    • Check for changes (the Jenkins test will be triggered only if there was a change in the git repository):
    • food2

    • Run foodcritic
    • food3

    • After the build parse the code (to archive and record the evolution of the foodcritic errors) and run the rubocop project if the build is stable (passed without any errors):
    • food4

    • To configure the parser go in the Jenkins configuration and add the foodcritic compiler warnings:
    • food5

    Second: Rubocop

    The second test we are running is rubocop: it’s a Ruby static code analyzer, based on the community Ruby style guide. Here is an example below:

    # rubocop .
    Inspecting 71 files
    ..CCCCWWCWC.WC..CC........C.....CC.........C.C.....C..................C
    
    Offenses:
    
    cookbooks/aix/providers/fixes.rb:31:1: C: Assignment Branch Condition size for load_current_resource is too high. [20.15/15]
    def load_current_resource
    ^^^
    cookbooks/aix/providers/fixes.rb:31:1: C: Method has too many lines. [19/10]
    def load_current_resource ...
    ^^^^^^^^^^^^^^^^^^^^^^^^^
    cookbooks/aix/providers/sysdump.rb:11:1: C: Assignment Branch Condition size for load_current_resource is too high. [25.16/15]
    def load_current_resource
    

    In Jenkins here are the steps to create a rubocop test:

    • Do the same thing as foodcritic except for the build and post-build action steps:
    • Run rubocop:
    • rubo1

    • After the build parse the code and run the test-kitchen project even if the build fails (rubocop will generate tons of things to correct … once you are ok with rubocop change this to "trigger only if the build is stable"):
    • rubo2

    Third: test-kitchen

    I don’t have to explain again what the test-kitchen is ;-). It is the third test we are creating with Jenkins, and if this one is ok we push the changes to production:

    • Do the same thing as foodcritic except for the build and post-build action steps:
    • Run the test-kitchen:
    • kitchen1

    • If the test kitchen is ok, push the dev branch to the master branch (dev to production; a sketch of this promotion step is shown after the screenshot below):
    • kitchen3
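    The promotion step itself does not need anything fancy; a post-build shell step pushing the already-fetched dev branch to master does the job. A minimal sketch (adapt the remote name if yours differs):

    # push the tested dev branch to the master branch without checking master out
    git push origin dev:master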

    More about Jenkins

    The three tests are now linked together. On the Jenkins home page you can check the current state of your tests. Here are a couple of screenshots:

    meteo
    timeline

    Conclusion

    I know that for most of you working this way is something totally new. As AIX sysadmins we are used to our ksh and bash scripts and we like the way it is today. But as the world is changing and as you are going to manage more and more machines with fewer and fewer admins, you will understand how powerful it is to use automation and to work in a "continuous integration" way. Even if you don’t like this "concept" or this new work habit … give it a try and you’ll see that working this way is worth the effort. First for you … you’ll discover a lot of new interesting things; second for your boss, who will discover that working this way is safer and more productive. Trust me, AIX needs to face Linux today and we are not going anywhere without having a proper fight versus the Linux guys :-) (yep it’s a joke).

    Enhance your AIX packages management with yum and nim over http


    As AIX is getting older and older our old favorite OS is still trying to struggle versus the mighty Linux and the fantastic Solaris (no sarcasm in that sentence, I truly believe what I say). You may have noticed that -with time- IBM is slowly but surely moving from proprietary code to something more open (i.e. the PowerVC/Openstack projects, integration with Chef, Linux on Power and tons of other examples). I’m deviating a little bit from the main topic of this blog post, but speaking about open source I have many things to say. If someone from my company is reading this post please note that it is my point of view … but I’m still sure that we are going the WRONG way by not being more open, and by not publishing on github. Starting from now every AIX IT shop in the world must consider using OpenSource software (git, chef, ansible, zsh and so on) instead of maintaining homemade tools, or worse paying for tools that are 100% of the time worse than OpenSource tools. Even better, every IT admin and every team must consider sharing their sources with the rest of the world for one single good reason: "Alone we can do so little, together we can do so much". Every company not considering this today is doomed. Take the example of Bloomberg and Facebook (sharing all their Chef cookbooks with the world), or Twitter: they’re all using github to share their opensource projects. Even the military, the police and banks are doing the same. They’re still secure but they are open to the world, ready to work together to make and create things better and better. All of this to introduce you to new things coming on AIX. Instead of reinventing the wheel IBM had the great idea to use already well-established tools. It was the case for Openstack/PowerVC and it is also the case for the tools I’ll talk about in this post. It is the case for yum (yellowdog updater modified). Instead of installing rpm packages by hand you now have the possibility to use yum and to definitely end the rpm dependency nightmare that we have all had since AIX 5L was released. Next, instead of using the proprietary nimsh protocol to install filesets (bff packages) you can now tell the nim server and nimclient to do this over http/https (secure is only for the authentication as far as I know) (an open protocol :-) ). By doing this you will enhance the way you are managing packages on AIX. Do this now on every AIX system you install: yum everywhere, and stop using NFS … we’re now in an http world :-)

    yum: the yellow dog updater modified

    I'm not going to explain to you what yum is. If you don't know you're not in the right place. Just note that my advice, starting from now, is to use yum to install every software of the AIX toolbox (ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/). IBM is providing an official repository that can be mirrored on your own site to avoid having to use a proxy or having internet access from your servers (you must admit that this is almost impossible and every big company will try to avoid this). Let's start by trying to install yum:

    Installing yum

    IBM is providing an archive with all the rpms needed to install and use yum on an AIX server; you can find this archive here: ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/yum_bundle_v1.tar. Just download it and install every rpm in it and yum will be available on your system, simple as that:

    A specific version of the rpm binary is mandatory to use yum. Before doing anything update the rpm.rte fileset. As AIX is rpm "aware" it already has an rpm database, but this one is not manageable by yum. The installation of rpm in version 4.9.1.3 or greater is needed; this installation will migrate the existing rpm database to a new one usable by yum. The fileset in the right version can be found here: ftp://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/INSTALLP/ppc/

    • By default the rpm command is installed by an AIX fileset:
    # which rpm
    /usr/bin/rpm
    # lslpp -w /usr/bin/rpm
      File                                        Fileset               Type
      ----------------------------------------------------------------------------
      /usr/bin/rpm                                rpm.rte               File
    # rpm --version
    RPM version 3.0.5
    
  • The rpm database is located in /usr/opt/freeware/packages :
  • # pwd
    /usr/opt/freeware/packages
    # ls -ltr
    total 5096
    -rw-r--r--    1 root     system         4096 Jul 01 2011  triggerindex.rpm
    -rw-r--r--    1 root     system         4096 Jul 01 2011  conflictsindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 nameindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 groupindex.rpm
    -rw-r--r--    1 root     system      2009224 Jul 21 00:54 packages.rpm
    -rw-r--r--    1 root     system       647168 Jul 21 00:54 fileindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 requiredby.rpm
    -rw-r--r--    1 root     system        81920 Jul 21 00:54 providesindex.rpm
    
  • Install the rpm.rte fileset in the right version (4.9.1.3):
  • # file rpm.rte.4.9.1.3
    rpm.rte.4.9.1.3: backup/restore format file
    # installp -aXYgd . rpm.rte
    +-----------------------------------------------------------------------------+
                        Pre-installation Verification...
    +-----------------------------------------------------------------------------+
    Verifying selections...done
    Verifying requisites...done
    Results...
    
    SUCCESSES
    ---------
      Filesets listed in this section passed pre-installation verification
      and will be installed.
    
      Selected Filesets
      -----------------
      rpm.rte 4.9.1.3                             # RPM Package Manager
    [..]
    #####################################################
            Rebuilding RPM Data Base ...
            Please wait for rpm_install background job termination
            It will take a few minutes
    [..]
    Installation Summary
    --------------------
    Name                        Level           Part        Event       Result
    -------------------------------------------------------------------------------
    rpm.rte                     4.9.1.3         USR         APPLY       SUCCESS
    rpm.rte                     4.9.1.3         ROOT        APPLY       SUCCESS
    
  • After the installation check you have the correct version of rpm, you can also notice some changes in the rpm database files:
  • # rpm --version
    RPM version 4.9.1.3
    # ls -ltr /usr/opt/freeware/packages
    total 25976
    -rw-r--r--    1 root     system         4096 Jul 01 2011  triggerindex.rpm
    -rw-r--r--    1 root     system         4096 Jul 01 2011  conflictsindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 nameindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 groupindex.rpm
    -rw-r--r--    1 root     system      2009224 Jul 21 00:54 packages.rpm
    -rw-r--r--    1 root     system       647168 Jul 21 00:54 fileindex.rpm
    -rw-r--r--    1 root     system        20480 Jul 21 00:54 requiredby.rpm
    -rw-r--r--    1 root     system        81920 Jul 21 00:54 providesindex.rpm
    -rw-r--r--    1 root     system            0 Jul 21 01:08 .rpm.lock
    -rw-r--r--    1 root     system         8192 Jul 21 01:08 Triggername
    -rw-r--r--    1 root     system         8192 Jul 21 01:08 Conflictname
    -rw-r--r--    1 root     system        28672 Jul 21 01:09 Dirnames
    -rw-r--r--    1 root     system       221184 Jul 21 01:09 Basenames
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Sha1header
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Requirename
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Obsoletename
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Name
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Group
    -rw-r--r--    1 root     system       815104 Jul 21 01:09 Packages
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Sigmd5
    -rw-r--r--    1 root     system         8192 Jul 21 01:09 Installtid
    -rw-r--r--    1 root     system        86016 Jul 21 01:09 Providename
    -rw-r--r--    1 root     system       557056 Jul 21 01:09 __db.004
    -rw-r--r--    1 root     system     83894272 Jul 21 01:09 __db.003
    -rw-r--r--    1 root     system      7372800 Jul 21 01:09 __db.002
    -rw-r--r--    1 root     system        24576 Jul 21 01:09 __db.001
    

    Then install yum. Please note that I already have some rpms installed on my current system, that's why I'm not installing db or gdbm. If your system is free of any rpm, install all the rpms found in the archive:

    # tar xvf yum_bundle_v1.tar
    x curl-7.44.0-1.aix6.1.ppc.rpm, 584323 bytes, 1142 media blocks.
    x db-4.8.24-3.aix6.1.ppc.rpm, 2897799 bytes, 5660 media blocks.
    x gdbm-1.8.3-5.aix5.2.ppc.rpm, 56991 bytes, 112 media blocks.
    x gettext-0.10.40-8.aix5.2.ppc.rpm, 1074719 bytes, 2100 media blocks.
    x glib2-2.14.6-2.aix5.2.ppc.rpm, 1686134 bytes, 3294 media blocks.
    x pysqlite-1.1.7-1.aix6.1.ppc.rpm, 51602 bytes, 101 media blocks.
    x python-2.7.10-1.aix6.1.ppc.rpm, 23333701 bytes, 45574 media blocks.
    x python-devel-2.7.10-1.aix6.1.ppc.rpm, 15366474 bytes, 30013 media blocks.
    x python-iniparse-0.4-1.aix6.1.noarch.rpm, 37912 bytes, 75 media blocks.
    x python-pycurl-7.19.3-1.aix6.1.ppc.rpm, 162093 bytes, 317 media blocks.
    x python-tools-2.7.10-1.aix6.1.ppc.rpm, 830446 bytes, 1622 media blocks.
    x python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm, 158584 bytes, 310 media blocks.
    x readline-6.1-2.aix6.1.ppc.rpm, 489547 bytes, 957 media blocks.
    x sqlite-3.7.15.2-2.aix6.1.ppc.rpm, 1334918 bytes, 2608 media blocks.
    x yum-3.4.3-1.aix6.1.noarch.rpm, 1378777 bytes, 2693 media blocks.
    x yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm, 62211 bytes, 122 media blocks.
    
    # rpm -Uvh curl-7.44.0-1.aix6.1.ppc.rpm glib2-2.14.6-2.aix5.2.ppc.rpm pysqlite-1.1.7-1.aix6.1.ppc.rpm \
        python-2.7.10-1.aix6.1.ppc.rpm python-devel-2.7.10-1.aix6.1.ppc.rpm python-iniparse-0.4-1.aix6.1.noarch.rpm \
        python-pycurl-7.19.3-1.aix6.1.ppc.rpm python-tools-2.7.10-1.aix6.1.ppc.rpm python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm \
        yum-3.4.3-1.aix6.1.noarch.rpm yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm
    # Preparing...                ########################################### [100%]
       1:python                 ########################################### [  9%]
       2:pysqlite               ########################################### [ 18%]
       3:python-iniparse        ########################################### [ 27%]
       4:glib2                  ########################################### [ 36%]
       5:yum-metadata-parser    ########################################### [ 45%]
       6:curl                   ########################################### [ 55%]
       7:python-pycurl          ########################################### [ 64%]
       8:python-urlgrabber      ########################################### [ 73%]
       9:yum                    ########################################### [ 82%]
      10:python-devel           ########################################### [ 91%]
      11:python-tools           ########################################### [100%]
    

    Yum is now ready to be configured and used :-)

    # which yum
    /usr/bin/yum
    # yum --version
    3.4.3
      Installed: yum-3.4.3-1.noarch at 2016-07-20 23:24
      Built    : None at 2016-06-22 14:13
      Committed: Sangamesh Mallayya  at 2014-05-29
    

    Setting up yum and your private yum repository for AIX

    A private repository

    As nobody wants to use the official IBM repository directly over the internet, the goal here is to create your own repository. Download the whole content of the official repository and "serve" this directory (the one where you downloaded all the rpms) with a private http server (yum is using http/https obviously :-) ).

    • Using wget, download the content of the whole official repository. You can notice here that IBM is providing the needed metadata (the repodata directory); if you don't have this repodata directory yum can't work properly. It can be (re)created using the createrepo command available on all good Linux distros (see the short sketch after this list) :-) :
    # wget -r ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/RPMS/ppc/
    # ls -ltr
    [..]
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 readline
    drwxr-xr-x    2 root     system          256 Jul 11 22:08 rep-gtk
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 repodata
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 rpm
    drwxr-xr-x    2 root     system         4096 Jul 11 22:08 rsync
    drwxr-xr-x    2 root     system          256 Jul 11 22:08 ruby
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 rxvt
    drwxr-xr-x    2 root     system         4096 Jul 11 22:09 samba
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 sawfish
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 screen
    drwxr-xr-x    2 root     system          256 Jul 11 22:09 scrollkeeper
    
  • Configure your web server (here it's just an alias because I'm using my http server for other things):
  • # more httpd.conf
    [..]
    Alias /aixtoolbox/  "/apps/aixtoolbox/"
    <Directory "/apps/aixtoolbox/">
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Require all granted
    </Directory>
    
  • Restart your webserver and check your repository is accessible:
  • repo

  • That's it, the private repository is ready.
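
    Speaking of the repodata: if you later add or refresh rpms in your mirror and the metadata gets out of sync, you can rebuild it with the createrepo command from any Linux box (the command is not shipped with AIX). A minimal sketch, assuming your mirror lives in /apps/aixtoolbox as in the httpd.conf alias above:

    # createrepo --update /apps/aixtoolbox

    Run it once without --update to create the repodata directory from scratch if needed.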
    Configuring yum

    On the client just modify the /opt/freeware/etc/yum/yum.conf or add a file in /opt/freeware/etc/yum/yum.repos.d to point to your private repository:

    # cat /opt/freeware/etc/yum/yum.conf
    [main]
    cachedir=/var/cache/yum
    keepcache=1
    debuglevel=2
    logfile=/var/log/yum.log
    exactarch=1
    obsoletes=1
    
    [AIX_Toolbox]
    name=AIX ToolBox Repository
    baseurl=http://nimserver:8080/aixtoolbox/
    enabled=1
    gpgcheck=0
    
    # PUT YOUR REPOS HERE OR IN separate files named file.repo
    # in /etc/yum/repos.d
    

    That’s it the client is ready.
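
    A quick sanity check at this point (nothing mandatory, just a habit of mine) is to flush the local metadata and query a well known package to be sure the client really talks to the private repository; if the Repo field shows AIX_Toolbox you're good:

    # yum clean metadata
    # yum info bash | grep -Ei "name|repo"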

    Chef recipe to install and configure yum

    My readers all know that I'm using Chef as a configuration management tool. As you are going to do this on every single system you have, I think giving you the Chef recipe installing and configuring yum can be useful (if you don't care about it just skip it and go to the next section). If you are not using a configuration management tool maybe this simple example will help you to move on and stop doing this by hand or writing ksh scripts. I have to do that on tons of systems so for me it's just mandatory. Here is my recipe to do all the job: configuring and installing yum, and installing some RPMs:

    directory '/var/tmp/yum' do
      action :create
    end
    
    remote_file '/var/tmp/yum/rpm.rte.4.9.1.3'  do
      source "http://#{node['nimserver']}/powervc/rpm.rte.4.9.1.3"
      action :create
    end
    
    execute "Do the toc" do
      command 'inutoc /var/tmp/yum'
      not_if { File.exist?('/var/tmp/yum/.toc') }
    end
    
    bff_package 'rpm.rte' do
      source '/var/tmp/yum/rpm.rte.4.9.1.3'
      action :install
    end
    
    tar_extract "http://#{node['nimserver']/powervc/yum_bundle_v1.tar" do
      target_dir '/var/tmp/yum'
      compress_char ''
      user 'root'
      group 'system'
    end
    
    # installing some rpm needed for yum
    for rpm in [ 'curl-7.44.0-1.aix6.1.ppc.rpm', 'python-pycurl-7.19.3-1.aix6.1.ppc.rpm', 'python-urlgrabber-3.10.1-1.aix6.1.noarch.rpm', 'glib2-2.14.6-2.aix5.2.ppc.rpm', 'yum-metadata-parser-1.1.4-1.aix6.1.ppc.rpm', 'python-iniparse-0.4-1.aix6.1.noarch.rpm', 'pysqlite-1.1.7-1.aix6.1.ppc.rpm'  ]
      execute "installing yum" do
        command "rpm -Uvh /var/tmp/yum/#{rpm}"
        not_if "rpm -qa | grep $(echo #{rpm} | sed 's/.aix6.1//' | sed 's/.aix5.2//' | sed 's/.rpm//')"
      end
    end
    
    # updating python
    execute "updating python" do
      command "rpm -Uvh /var/tmp/yum/python-devel-2.7.10-1.aix6.1.ppc.rpm /var/tmp/yum/python-2.7.10-1.aix6.1.ppc.rpm"
      not_if "rpm -qa | grep python-2.7.10-1"
    end
    
    # installing yum
    execute "installing yum" do
      command "rpm -Uvh /var/tmp/yum/yum-3.4.3-1.aix6.1.noarch.rpm"
      not_if "rpm -qa | grep yum-3.4.3.1.noarch"
    end
    
    # changing yum configuration
    template '/opt/freeware/etc/yum/yum.conf' do
      source 'yum.conf.erb'
    end
    
    # installing some software with aix yum
    for soft in [ 'bash', 'bzip2', 'curl', 'emacs', 'gzip', 'screen', 'vim-enhanced', 'wget', 'zlib', 'zsh', 'patch', 'file', 'lua', 'nspr', 'git' ] do
      execute "install #{soft}" do
        command "yum -y install #{soft}"
      end
    end
    
    # removing temporary file
    execute 'removing /var/tmp/yum' do
      command 'rm -rf /var/tmp/yum'
      only_if { File.exists?('/var/tmp/yum')}
    end
    

    chef_yum1
    chef_yum2
    chef_yum3

    After running the chef recipe yum is fully usable \o/ :

    chef_yum4

    Using yum on AIX: what you need to know

    yum is usable just like it is on a Linux system. You may however hit some issues when using yum on AIX. For instance you can get this kind of error:

    # yum check
    AIX-rpm-7.2.0.1-2.ppc has missing requires of rpm
    AIX-rpm-7.2.0.1-2.ppc has missing requires of popt
    AIX-rpm-7.2.0.1-2.ppc has missing requires of file-libs
    AIX-rpm-7.2.0.1-2.ppc has missing requires of nss
    

    If you are not aware of the purpose of AIX-rpm please read this. This rpm is what I call a meta package. It does not install anything. This rpm is used because the rpm database does not know anything about things (binaries, libraries) installed by standard AIX filesets. By default rpms are not "aware" of what is installed by a fileset (bff) but most rpms depend on things installed by filesets. When you install a fileset … let's say it installs a library like libc.a, AIX runs the updtvpkg program to rebuild this AIX-rpm and says "this rpm will resolve any rpm dependency issue for libc.a". So first, never try to uninstall this rpm; second, it's not a real problem if this rpm has missing dependencies …. as it is providing nothing. If you really want to see which dependencies AIX-rpm resolves, run the following command:

    # rpm -q --provides AIX-rpm-7.2.0.1-2.ppc | grep libc.a
    libc.a(aio.o)
    # lslpp -w /usr/lib/libc.a
      File                                        Fileset               Type
      ----------------------------------------------------------------------------
      /usr/lib/libc.a                             bos.rte.libc          Symlink
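
    Note that this AIX-rpm virtual package is rebuilt automatically when you install filesets (AIX runs updtvpkg for you), but if you ever want to force the rebuild by hand here is a minimal sketch, nothing more:

    # /usr/sbin/updtvpkg
    # rpm -q AIX-rpm

    The second command just confirms the virtual package is still there after the rebuild.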
    

    If you want to get rid of these messages just install the missing rpm … using yum:

    # yum -y install popt file-libs
    

    A few examples

    Here are a few examples of software installation using yum:

    • Installing git:
    # yum install git
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.ppc 0:4.3.20-4 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================================================================================================================================
     Package                                    Arch                                       Version                                         Repository                                          Size
    ================================================================================================================================================================================================
    Installing:
     git                                        ppc                                        4.3.20-4                                        AIX_Toolbox                                        215 k
    
    Transaction Summary
    ================================================================================================================================================================================================
    Install       1 Package
    
    Total size: 215 k
    Installed size: 889 k
    Is this ok [y/N]: y
    Downloading Packages:
    Running Transaction Check
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
      Installing : git-4.3.20-4.ppc                                                                                                                                                             1/1
    
    Installed:
      git.ppc 0:4.3.20-4
    
    Complete!
    
  • Removing git :
  • # yum remove git
    Setting up Remove Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package git.ppc 0:4.3.20-4 will be erased
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================================================================================================================================
     Package                                   Arch                                      Version                                           Repository                                          Size
    ================================================================================================================================================================================================
    Removing:
     git                                       ppc                                       4.3.20-4                                          @AIX_Toolbox                                       889 k
    
    Transaction Summary
    ================================================================================================================================================================================================
    Remove        1 Package
    
    Installed size: 889 k
    Is this ok [y/N]: y
    Downloading Packages:
    Running Transaction Check
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
      Erasing    : git-4.3.20-4.ppc                                                                                                                                                             1/1
    
    Removed:
      git.ppc 0:4.3.20-4
    
    Complete!
    
  • List available repo
  • yum repolist
    repo id                                                                                repo name                                                                                          status
    AIX_Toolbox                                                                            AIX ToolBox Repository                                                                             233
    repolist: 233
    

    Getting rid of nimsh: USE HTTPS !

    A new feature that is now available on the latest version of AIX (7.2) allows you to use nim over http. It is a long awaited feature for different reasons (it's just my opinion). I personally don't like proprietary protocols such as nimsh and secure nimsh … security teams neither. Who has never experienced installation problems because of nimsh ports not opened, because of ids, because of security teams? Using http or https is the solution. There is no company not allowing http or https! This protocol is so widely used and spread across so many products that everybody trusts it. I personally prefer opening one single port than struggling to open all the nimsh ports. You'll understand that using http is far better than using nimsh. Before explaining this in detail here are a few things you need to know: nimhttp is only available on the latest version of AIX (7.2 SP0/1/2), same for the nimclient; if there is a problem using http the nimclient will automatically fall back to NFS; and only certain nim operations are available over http (see the "Allowed operations" section below).

    Configuring the nim server

    To use nim over http (nimhttp) your nim server must be deployed on at least an AIX 7.2 server (mine is updated to the latest service pack (SP2)). Start the nimhttp service on the nim server to allow nim to use http for its operations:

    # oslevel -s
    7200-00-02-1614
    # startsrc -s nimhttp
    0513-059 The nimhttp Subsystem has been started. Subsystem PID is 11665728.
    # lssrc -a | grep nimhttp
     nimhttp                           11665728     active
    

    The nimhttp service will listen on port 4901, this port is defined in the /etc/services :

    # grep nimhttp /etc/services
    nimhttp         4901/tcp
    nimhttp         4901/udp
    # netstat -an | grep 4901
    tcp4       0      0  *.4901                 *.*                    LISTEN
    # rmsock f1000e0004a483b8 tcpcb
    The socket 0xf1000e0004a48008 is being held by proccess 14811568 (nimhttpd).
    # ps -ef | grep 14811568
        root 14811568  4456760   0 04:03:22      -  0:02 /usr/sbin/nimhttpd -v
    

    If you want to enable crypto/ssl to encrypt the http authentication, just add -a "-c" to your command line. This "-c" argument tells nimhttp to start in secure mode and encrypt the authentication:

    # startsrc -s nimhttp -a "-c"
    0513-059 The nimhttp Subsystem has been started. Subsystem PID is 14811570.
    # ps -ef | grep nimhttp
        root 14811570  4456760   0 22:57:51      -  0:00 /usr/sbin/nimhttpd -v -c
    

    Starting the service for the first time will create an httpd.conf file in the root home directory :

    # grep ^document_root ~/httpd.conf
    document_root=/export/nim/
    # grep ^service.log ~/httpd.conf
    service.log=/var/adm/ras/nimhttp.log
    

    If you choose to enable secure authentication nimhttp will use the pem certificate files used by nim. If you are already using secure nimsh you don't have to run the "nimconfig -c" command. If it is the first time, this command will create the two pem files (root and server in /ssl_nimsh/certs) (check my blog post about secure nimsh for more information about that):

    # nimconfig -c
    # grep ^ssl. ~/httpd.conf
    ssl.cert_authority=/ssl_nimsh/certs/root.pem
    ssl.pemfile=/ssl_nimsh/certs/server.pem
    

    The document_root of the http server defines the resources nimhttp will "serve". The default one is /export/nim (the default nim place for all nim resources (spot, mksysb, lpp_source)) and cannot be changed today (I think it is now ok on SP2, I'll update the blog post as soon as the test is done). Unfortunately for me one of my production nim servers was created by someone not very aware of AIX and … resources are not in /export/nim (I had to recreate my own nim because of that :-( )

    On the client side ?

    On the client side you have nothing to do. If you're using AIX 7.2 and nimhttp is enabled on the nim server, the client will automatically use http for communication. Just note that if you're using nimhttp in secure mode, you must enable your nimclient in secure mode too:

    # nimclient -c
    Received 2788 Bytes in 0.0 Seconds
    0513-044 The nimsh Subsystem was requested to stop.
    0513-077 Subsystem has been changed.
    0513-059 The nimsh Subsystem has been started. Subsystem PID is 13500758.
    # stopsrc -s nimsh
    # startsrc -s nimsh
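
    If you want to double check that the client is really running in secure mode, looking at the nimsh subsystem and its flags is enough (just a sanity check, you should see the daemon running with the -s -c flags):

    # lssrc -s nimsh
    # ps -ef | grep nimsh | grep -v grep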
    

    Changing nimhttp port

    You can easily change the port on which nimhttp is listening by modifying the /etc/services file. Here is an example with port 443 (I know it is not a good idea to use this one but it's just for the example):

    #nimhttp                4901/tcp
    #nimhttp                4901/udp
    nimhttp         443/tcp
    nimhttp         443/udp
    # stopsrc -s nimhttp
    # startsrc -s nimhttp -a "-c"
    # netstat -Aan | grep 443
    f1000e00047fb3b8 tcp4       0      0  *.443                 *.*                   LISTEN
    # rmsock f1000e00047fb3b8 tcpcb
    The socket 0xf1000e00047fb008 is being held by proccess 14811574 (nimhttpd).
    

    Same on the client side, just change the /etc/services file and use your nimclient as usual

    # grep nimhttp /etc/services
    #nimhttp                4901/tcp
    #nimhttp                4901/udp
    nimhttp         443/tcp
    nimhttp         443/udp
    # nimclient -l
    

    To be sure I'm not using nfs anymore I'm removing any entries from my /etc/exports file. I know that it will only work for some cases (some types of resources) as nimesis is filling the file even if this one is empty:

    # > /etc/exports
    # exportfs -uav
    exportfs: 1831-184 unexported /export/nim/bosinst_data/golden-vios-2233-08192014-bosinst_data
    exportfs: 1831-184 unexported /export/nim/spot/golden-vios-22422-05072016-spot/usr
    exportfs: 1831-184 unexported /export/nim/spot/golden-vios-22410-22012015-spot/usr
    exportfs: 1831-184 unexported /export/nim/mksysb
    exportfs: 1831-184 unexported /export/nim/hmc
    exportfs: 1831-184 unexported /export/nim/lpp_source
    [..]
    

    Let’s do this

    Let's now try this with a simple example. I'm here installing powervp on a machine using a cust operation from the nimclient; on the client I'm doing it like I have always done, running the exact same command as before. Super simple:

    # nimclient -o cust -a lpp_source=powervp1100-lpp_source -a filesets=powervp.rte
    
    +-----------------------------------------------------------------------------+
                        Pre-installation Verification...
    +-----------------------------------------------------------------------------+
    Verifying selections...done
    Verifying requisites...done
    Results...
    
    SUCCESSES
    ---------
      Filesets listed in this section passed pre-installation verification
      and will be installed.
    
      Selected Filesets
      -----------------
      powervp.rte 1.1.0.0                         # PowerVP for AIX
    
      << End of Success Section >>
    
    +-----------------------------------------------------------------------------+
                       BUILDDATE Verification ...
    +-----------------------------------------------------------------------------+
    Verifying build dates...done
    FILESET STATISTICS
    ------------------
        1  Selected to be installed, of which:
            1  Passed pre-installation verification
      ----
        1  Total to be installed
    
    +-----------------------------------------------------------------------------+
                             Installing Software...
    +-----------------------------------------------------------------------------+
    
    installp: APPLYING software for:
            powervp.rte 1.1.0.0
    
    0513-071 The syslet Subsystem has been added.
    Finished processing all filesets.  (Total time:  4 secs).
    
    +-----------------------------------------------------------------------------+
                                    Summaries:
    +-----------------------------------------------------------------------------+
    
    Installation Summary
    --------------------
    Name                        Level           Part        Event       Result
    -------------------------------------------------------------------------------
    powervp.rte                 1.1.0.0         USR         APPLY       SUCCESS
    powervp.rte                 1.1.0.0         ROOT        APPLY       SUCCESS
    
    

    On the server side I'm checking /var/adm/ras/nimhttp.log (the log file for nimhttp) and I can see that files are transferred from the server to the client using the http protocol. So it works great.

    # Thu Jul 21 23:44:19 2016        Request Type is GET
    Thu Jul 21 23:44:19 2016        Mime not supported
    Thu Jul 21 23:44:19 2016        Sending Response Header "200 OK"
    Thu Jul 21 23:44:19 2016        Sending file over socket 6. Expected length is 600
    Thu Jul 21 23:44:19 2016        Total length sent is 600
    Thu Jul 21 23:44:19 2016        handle_httpGET: Entering cleanup statement
    Thu Jul 21 23:44:20 2016        nim_http: queue socket create product (memory *)200739e8
    Thu Jul 21 23:44:20 2016        nim_http: 200739e8 6 200947e8 20098138
    Thu Jul 21 23:44:20 2016        nim_http: file descriptor is 6
    Thu Jul 21 23:44:20 2016        nim_buffer: (resize) buffer size is 0
    Thu Jul 21 23:44:20 2016        file descriptor is : 6
    Thu Jul 21 23:44:20 2016        family is : 2 (AF_INET)
    Thu Jul 21 23:44:20 2016        source address is : 10.14.33.253
    Thu Jul 21 23:44:20 2016        socks: Removing socksObject 2ff1ec80
    Thu Jul 21 23:44:20 2016        socks: 200739e8 132 <- 87 bytes (SSL)
    Thu Jul 21 23:44:20 2016        nim_buffer: (append) len is 87, buffer length is 87
    Thu Jul 21 23:44:20 2016        nim_http: data string passed to get_http_request: "GET /export/nim/lpp_source/powervp/powervp.1.1.0.0.bff HTTP/1.1
    

    Let's do the same thing with a fileset coming from a bigger lpp_source (in fact one with the simages attribute, for the latest release of AIX 7.2):

    # nimclient -o cust -a lpp_source=7200-00-02-1614-lpp_source -a filesets=bos.loc.utf.en_KE
    [..]
    

    Looking on the nim server I notice that files are transferred from the server to the client, but NOT just my fileset and its dependencies .... the whole lpp_source is transferred (seriously ? uh ? why ?)

    # tail -f /var/adm/ras/nimhttp.log
    Thu Jul 21 23:28:39 2016        Request Type is GET
    Thu Jul 21 23:28:39 2016        Mime not supported
    Thu Jul 21 23:28:39 2016        Sending Response Header "200 OK"
    Thu Jul 21 23:28:39 2016        Sending file over socket 6. Expected length is 4482048
    Thu Jul 21 23:28:39 2016        Total length sent is 4482048
    Thu Jul 21 23:28:39 2016        handle_httpGET: Entering cleanup statement
    Thu Jul 21 23:28:39 2016        nim_http: queue socket create product (memory *)200739e8
    Thu Jul 21 23:28:39 2016        nim_http: 200739e8 6 200947e8 20098138
    Thu Jul 21 23:28:39 2016        nim_http: file descriptor is 6
    Thu Jul 21 23:28:39 2016        nim_buffer: (resize) buffer size is 0
    Thu Jul 21 23:28:39 2016        file descriptor is : 6
    Thu Jul 21 23:28:39 2016        family is : 2 (AF_INET)
    Thu Jul 21 23:28:39 2016        source address is : 10.14.33.253
    Thu Jul 21 23:28:39 2016        socks: Removing socksObject 2ff1ec80
    Thu Jul 21 23:28:39 2016        socks: 200739e8 132 <- 106 bytes (SSL)
    Thu Jul 21 23:28:39 2016        nim_buffer: (append) len is 106, buffer length is 106
    Thu Jul 21 23:28:39 2016        nim_http: data string passed to get_http_request: "GET /export/nim/lpp_source/7200-00-02-1614/installp/ppc/X11.fnt.7.2.0.0.I HTTP/1.1
    

    If you have a deeper look at what the nimclient is doing when using nimhttp .... it is just transferring the whole lpp_source from the server to the client and then installing the needed fileset from a local filesystem. Filesets are stored into /tmp so be sure you have a /tmp big enough to store your biggest lpp_source. Maybe this will be changed in the future but it is like it is for the moment :-) . The nimclient creates a temporary directory whose name is prefixed with "_nim_dir_" to store the lpp_source:

    root@nim_server:/export/nim/lpp_source/7200-00-02-1614/installp/ppc# du -sm .
    7179.57 .
    root@nim_client:/tmp/_nim_dir_5964094/export/nim/lpp_source/7200-00-02-1614/installp/ppc# du -sm .
    7179.74 .
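
    Because of this behaviour, before launching a cust operation on a big lpp_source I now check the size of /tmp on the client and extend it if needed. A small sketch (the +8G below is just an example, size it according to your biggest lpp_source):

    # df -g /tmp
    # chfs -a size=+8G /tmp
    # df -g /tmp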
    

    More details ?

    You can notice while running a cust operation from the nim client that nimhttp is also running in the background (on the client itself). The truth is that the nimhttp binary running on the client acts as an http client. In the output below the http client is getting the file Java8_64.samples.jnlp.8.0.0.120.U:

    # ps -ef |grep nim
        root  3342790 16253432   6 23:29:10  pts/0  0:00 /bin/ksh /usr/lpp/bos.sysmgt/nim/methods/c_installp -afilesets=bos.loc.utf.en_KE -alpp_source=s00va9932137:/export/nim/lpp_source/7200-00-02-1614
        root  6291880 13893926   0 23:29:10  pts/0  0:00 /bin/ksh /usr/lpp/bos.sysmgt/nim/methods/c_script -alocation=s00va9932137:/export/nim/scripts/s00va9954403.script
        root 12190194  3342790  11 23:30:06  pts/0  0:00 /usr/sbin/nimhttp -f /export/nim/lpp_source/7200-00-02-1614/installp/ppc/Java8_64.samples.jnlp.8.0.0.120.U -odest -s
        root 13500758  4325730   0 23:23:29      -  0:00 /usr/sbin/nimsh -s -c
        root 13893926 15991202   0 23:29:10  pts/0  0:00 /bin/ksh -c /var/adm/nim/15991202/nc.1469222947
        root 15991202 16974092   0 23:29:07  pts/0  0:00 nimclient -o cust -a lpp_source=7200-00-02-1614-lpp_source -a filesets=bos.loc.utf.en_KE
        root 16253432  6291880   0 23:29:10  pts/0  0:00 /bin/ksh /tmp/_nim_dir_6291880/script
    

    You can use nimhttp as a client to download files directly from the nim server. Here I'm just listing the content of /export/nim/lpp_source from the client:

    # nimhttp -f /export/nim/lpp_source -o dest=/tmp -v
    nimhttp: (source)       /export/nim/lpp_source
    nimhttp: (dest_dir)     /tmp
    nimhttp: (verbose)      debug
    nimhttp: (master_ip)    nimserver
    nimhttp: (master_port)  4901
    
    sending to master...
    size= 59
    pull_request= "GET /export/nim/lpp_source HTTP/1.1
    Connection: close
    
    "
    Writing 1697 bytes of data to /tmp/export/nim/lpp_source/.content
    Total size of datalen is 1697. Content_length size is 1697.
    # cat /tmp/export/nim/lpp_source/.content
    DIR: 71-04-02-1614 0:0 00240755 256
    DIR: 7100-03-00-0000 0:0 00240755 256
    DIR: 7100-03-01-1341 0:0 00240755 256
    DIR: 7100-03-02-1412 0:0 00240755 256
    DIR: 7100-03-03-1415 0:0 00240755 256
    DIR: 7100-03-04-1441 0:0 00240755 256
    DIR: 7100-03-05-1524 0:0 00240755 256
    DIR: 7100-04-00-1543 0:0 00240755 256
    DIR: 7100-04-01-1543 0:0 00240755 256
    DIR: 7200-00-00-0000 0:0 00240755 256
    DIR: 7200-00-01-1543 0:0 00240755 256
    DIR: 7200-00-02-1614 0:0 00240755 256
    FILE: MH01609.iso 0:0 00100644 1520027648
    FILE: aixtools.python.2.7.11.4.I 0:0 00100644 50140160
    

    Here I'm just downloading a python fileset !

    # nimhttp -f /export/nim/lpp_source/aixtools.python.2.7.11.4.I -o dest=/tmp -v
    [..]
    Writing 65536 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    Writing 69344 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    Writing 7776 bytes of data to /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    Total size of datalen is 50140160. Content_length size is 50140160.
    # ls -l /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    -rw-r--r--    1 root     system     50140160 Jul 23 01:21 /tmp/export/nim/lpp_source/aixtools.python.2.7.11.4.I
    

    Allowed operations

    All cust operations on nim objects of type lpp_source, installp_bundle, fix_bundle, script, and file_res, in push or pull mode, are working great with nimhttp. Here are a few examples (from the official doc, thanks to Paul F for that ;-) ):

    • Push:
    # nim -o cust -a file_res=
    # nim -o cust -a script=
    # nim -o cust -a lpp_source= -a filesets=
    # nim -o cust -a lpp_source= -a installp_bundle=
    # nim -o cust -a lpp_source= -a fixes=update_all
    
  • Pull:
  • # nimclient -o cust -a lpp_source= -a filesets=
    # nimclient -o cust -a file_res=
    # nimclient -o cust -a script=
    # nimclient -o cust -a lpp_source= -a installp_bundle=
    # nimclient -o cust -a lpp_source= -a fixes=update_all
    

    Proxying: use your own http server

    You can use your own webserver to host nimhttp and the nimhttp binary will just act as a proxy between your client and your http server. I have tried to do it but didn't succeed with that; I'll let you know if I find the solution:

    # grep proxy ~/httpd.conf
    service.proxy_port=80
    enable_proxy=yes
    

    Conclusion: "about administration and post-installation"

    Just a few words about best practices of post-installation and administration on AIX. One of the major purposes of this blog post is to prove to you that you need to get rid of an old way of working. The first thing to do is to always try using http or https instead of NFS. To give you an example of that, I'm always using http to transfer my files whatever they are (configuration, product installation and so on ...). With an automation tool such as Chef it is so simple to integrate the download of a file from an http server that you must now avoid using NFS ;-) . The second good practice is to never install things "by hand"; using yum is one of the reflexes you need to have instead of using the rpm command (Linux users will laugh reading that ... I'm laughing writing it, using yum is just something I've been doing for more than 10 years ... but for AIX admins it's still not the case and not so simple to understand :-) ). As always I hope it helps.

    About blogging

    I just wanted to say one word about blogging because I got a lot of questions about this (from friends, readers, managers, haters, lovers). I'm doing this for two reasons. The first one is that writing and explaining things forces me to better understand what I'm doing and forces me to always discover new features, new bugs, new everything. Second, I'm doing this for you, for my readers, because I remember how useful blogs were to me when I began AIX (Chris and Nigel are the best examples of that). I don't care about being the best or the worst. I'm just me. I'm doing this because I love it, that's all. Even if managers, recruiters or anybody else don't care about it I'll continue to do this whatever happens. I agree with them: "It does not prove anything at all". I'm just like you, a standard admin trying to do his job at his best. Sorry for the two months "break" from blogging but it was really crazy at work and in my life. Take care all. Haters gonna hate.

    Putting NovaLink in Production & more PowerVC (1.3.1.2) tips and tricks

    $
    0
    0

    I've been quite busy and writing the blog is getting more and more difficult with the amount of work I have, but I try to stick to my thing as writing these blog posts is almost the only thing I can do properly in my whole life. So why do without? As my place is one of the craziest places I have ever worked in -(for the good … and the bad (I'll not talk here about how things are organized here or how your work is recognized, but be sure it is probably one of the main reasons I'll leave this place one day or another)- the PowerSystems growth is crazy and the number of AIX partitions we are managing with PowerVC never stops increasing, and I think that we are one of the biggest PowerVC customers in the whole world (I don't know if it is a good thing or not). Just to give you a couple of examples: we have here one of the biggest Power Enterprise Pools I have ever seen (384 Power8 mobile cores), the number of partitions managed by PowerVC is around 2600 and we have a PowerVC managing almost 30 hosts. You have understood well … these numbers are huge. It seems to be very funny, but it's not; the growth is a problem, a technical problem, and we are facing problems that most of you will never hit. I'm speaking about density and scalability. Hopefully for us the "vertical" design of PowerVC can now be replaced by what I call an "horizontal" design. Instead of putting all the nova instances on one single machine, we now have the possibility to spread the load on each host by using NovaLink. As we needed to solve these density and scalability problems we decided to move all the P8 hosts to NovaLink (this process is still ongoing but most of the engineering stuff is already done). As you now know we are not deploying a host every year but generally a couple per month and that's why we needed to find a solution to automate this. So this blog post will talk about all the things and the best practices I have learned using and implementing NovaLink in a huge production environment (automated installation, tips and tricks, post-install, migration and so on). But we will not stop here: I'll also talk about the new things I have learned about PowerVC (1.3.1.2 and 1.3.0.1) and give more tips and tricks to use the product at its best. Before going any further I first want to say a big thank you to the whole PowerVC team for their kindness and the precious time they gave us to advise and educate the OpenStack noob I am. (A special thanks to Drew Thorstensen for the long discussions we had about Openstack and PowerVC. He is probably one of the most passionate guys I have ever met at IBM).

    Novalink Automated installation

    I'll not write a big introduction; let's work, and let's start with NovaLink and how to automate the NovaLink installation process. Copy the content of the installation cdrom to a directory that can be served by an http server on your NIM server (I'm using my NIM server for the bootp and tftp part). Note that I'm doing this with a tar command because there are symbolic links in the iso and a simple cp would end up with a full filesystem.

    # loopmount -i ESD_-_PowerVM_NovaLink_V1.0.0.3_062016.iso -o "-V cdrfs -o ro" -m /mnt
    # tar cvf iso.tar /mnt/*
    # tar xvf iso.tar -C /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
    # ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso
    total 320
    dr-xr-xr-x    2 root     system          256 Jul 28 17:54 .disk
    -r--r--r--    1 root     system          243 Apr 20 21:27 README.diskdefines
    -r--r--r--    1 root     system         3053 May 25 22:25 TRANS.TBL
    dr-xr-xr-x    3 root     system          256 Apr 20 11:59 boot
    dr-xr-xr-x    3 root     system          256 Apr 20 21:27 dists
    dr-xr-xr-x    3 root     system          256 Apr 20 21:27 doc
    dr-xr-xr-x    2 root     system         4096 Aug 09 15:59 install
    -r--r--r--    1 root     system       145981 Apr 20 21:34 md5sum.txt
    dr-xr-xr-x    2 root     system         4096 Apr 20 21:27 pics
    dr-xr-xr-x    3 root     system          256 Apr 20 21:27 pool
    dr-xr-xr-x    3 root     system          256 Apr 20 11:59 ppc
    dr-xr-xr-x    2 root     system          256 Apr 20 21:27 preseed
    dr-xr-xr-x    4 root     system          256 May 25 22:25 pvm
    lrwxrwxrwx    1 root     system            1 Aug 29 14:55 ubuntu -> .
    dr-xr-xr-x    3 root     system          256 May 25 22:25 vios
    

    Prepare the PowerVM NovaLink repository. The content of the repository can be found in the NovaLink iso image in pvm/repo/pvmrepo.tgz:

    # ls -l /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/
    total 720192
    -r--r--r--    1 root     system          223 May 25 22:25 TRANS.TBL
    -rw-r--r--    1 root     system         2106 Sep 05 15:56 pvm-install.cfg
    -r--r--r--    1 root     system    368722592 May 25 22:25 pvmrepo.tgz
    

    Extract the content of this tgz file in a directory that can be served by the http server:

    # mkdir /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
    # cp /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm/repo/pvmrepo.tgz /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo/
    # cd /export/nim/lpp_source/powervc/novalink/1.0.0.3/pvmrepo
    # gunzip pvmrepo.tgz
    # tar xvf pvmrepo.tar
    [..]
    x ./pool/non-free/p/pvm-core/pvm-core-dbg_1.0.0.3-160525-2192_ppc64el.deb, 54686380 bytes, 106810 media blocks.
    x ./pool/non-free/p/pvm-core/pvm-core_1.0.0.3-160525-2192_ppc64el.deb, 2244784 bytes, 4385 media blocks.
    x ./pool/non-free/p/pvm-core/pvm-core-dev_1.0.0.3-160525-2192_ppc64el.deb, 618378 bytes, 1208 media blocks.
    x ./pool/non-free/p/pvm-pkg-tools/pvm-pkg-tools_1.0.0.3-160525-492_ppc64el.deb, 170700 bytes, 334 media blocks.
    x ./pool/non-free/p/pvm-rest-server/pvm-rest-server_1.0.0.3-160524-2229_ppc64el.deb, 263084432 bytes, 513837 media blocks.
    # rm pvmrepo.tar 
    # ls -l 
    total 16
    drwxr-xr-x    2 root     system          256 Sep 11 13:26 conf
    drwxr-xr-x    2 root     system          256 Sep 11 13:26 db
    -rw-r--r--    1 root     system          203 May 26 02:19 distributions
    drwxr-xr-x    3 root     system          256 Sep 11 13:26 dists
    -rw-r--r--    1 root     system         3132 May 24 20:25 novalink-gpg-pub.key
    drwxr-xr-x    4 root     system          256 Sep 11 13:26 pool
    

    Copy the NovaLink boot files in a directory that can be served by your tftp server (I’m using /var/lib/tftpboot):

    # mkdir /var/lib/tftpboot
    # cp -r /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/pvm /var/lib/tftpboot
    # ls -l /var/lib/tftpboot
    total 1016
    -r--r--r--    1 root     system         1120 Jul 26 20:53 TRANS.TBL
    -r--r--r--    1 root     system       494072 Jul 26 20:53 core.elf
    -r--r--r--    1 root     system          856 Jul 26 21:18 grub.cfg
    -r--r--r--    1 root     system        12147 Jul 26 20:53 pvm-install-config.template
    dr-xr-xr-x    2 root     system          256 Jul 26 20:53 repo
    dr-xr-xr-x    2 root     system          256 Jul 26 20:53 rootfs
    -r--r--r--    1 root     system         2040 Jul 26 20:53 sample_grub.cfg
    

    I still don't know why this is the case on AIX but the tftp server searches for the grub.cfg file in the root directory of your AIX system. It's not the case for my RedHat Enterprise Linux installation but it is for the NovaLink/Ubuntu installation. Copy the sample_grub.cfg to /grub.cfg and modify the content of the file:

    • As the gateway, netmask and nameserver will be provided by the pvm-install.cfg file (the configuration file of the NovaLink installer, we will talk about this later), comment those three lines.
    • The hostname will still be needed.
    • Modify the linux line and point to the vmlinux file provided in the NovaLink iso image.
    • Modify the live-installer to point to the filesystem.squashfs provided in the NovaLink iso image.
    • Modify the pvm-repo line to point to the pvm-repository directory we created before.
    • Modify the pvm-installer line to point to the NovaLink install configuration file (we will modify this one after).
    • Don’t do anything with the pvm-vios line as we are installing NovaLink on a system already having Virtual I/O Servers installed (I’m not installing Scale Out system but high end models only).
    • I’ll talk later about the pvm-disk line (this line is not by default in the pvm-install-config.template provided in the NovaLink iso image).
    # cp /var/lib/tftpboot/sample_grub.cfg /grub.cfg
    # cat /grub.cfg
    # Sample GRUB configuration for NovaLink network installation
    set default=0
    set timeout=10
    
    menuentry 'PowerVM NovaLink Install/Repair' {
     insmod http
     insmod tftp
     regexp -s 1:mac_pos1 -s 2:mac_pos2 -s 3:mac_pos3 -s 4:mac_pos4 -s 5:mac_pos5 -s 6:mac_pos6 '(..):(..):(..):(..):(..):(..)' ${net_default_mac}
     set bootif=01-${mac_pos1}-${mac_pos2}-${mac_pos3}-${mac_pos4}-${mac_pos5}-${mac_pos6}
     regexp -s 1:prefix '(.*)\.(\.*)' ${net_default_ip}
    # Setup variables with values from Grub's default variables
     set ip=${net_default_ip}
     set serveraddress=${net_default_server}
     set domain=${net_ofnet_network_domain}
    # If tftp is desired, replace http with tftp in the line below
     set root=http,${serveraddress}
    # Remove comment after providing the values below for
    # GATEWAY_ADDRESS, NETWORK_MASK, NAME_SERVER_IP_ADDRESS
    # set gateway=10.10.10.1
    # set netmask=255.255.255.0
    # set namserver=10.20.2.22
      set hostname=nova0696010
    # In this sample file, the directory novalink is assumed to exist on the
    # BOOTP server and has the NovaLink ISO content
     linux /export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/vmlinux \
     live-installer/net-image=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/filesystem.squashfs \
     pkgsel/language-pack-patterns= \
     pkgsel/install-language-support=false \
     netcfg/disable_dhcp=true \
     netcfg/choose_interface=auto \
     netcfg/get_ipaddress=${ip} \
     netcfg/get_netmask=${netmask} \
     netcfg/get_gateway=${gateway} \
     netcfg/get_nameservers=${nameserver} \
     netcfg/get_hostname=${hostname} \
     netcfg/get_domain=${domain} \
     debian-installer/locale=en_US.UTF-8 \
     debian-installer/country=US \
    # The directory novalink-repo on the BOOTP server contains the content
    # of the pvmrepo.tgz file obtained from the pvm/repo directory on the
    # NovaLink ISO file.
    # The directory novalink-vios on the BOOTP server contains the files
    # needed to perform a NIM install of VIOS server(s)
    #  pvmdebug=1
     pvm-repo=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/novalink-repo/ \
     pvm-installer-config=http://${serveraddress}/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg \
     pvm-viosdir=http://${serveraddress}/novalink-vios \
     pvmdisk=/dev/mapper/mpatha \
     initrd /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/install/netboot_initrd.gz
    }
    

    Modify the pvm-install.cfg, it's the NovaLink installer configuration file. We just need to modify here the [SystemConfig], [NovaLinkGeneralSettings], [NovaLinkNetworkSettings], [NovaLinkAPTRepoConfig] and [NovaLinkAdminCredentials] sections. My advice is to configure one NovaLink by hand (by doing an installation directly with the iso image); after that installation your configuration file is saved in /var/log/pvm-install/novalink-install.cfg and you can copy this one as your template on your installation server. This file is filled with the answers you gave during the NovaLink installation.

    # more /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
    [SystemConfig]
    serialnumber = XXXXXXXX
    lmbsize = 256
    
    [NovaLinkGeneralSettings]
    ntpenabled = True
    ntpserver = timeserver1
    timezone = Europe/Paris
    
    [NovaLinkNetworkSettings]
    dhcpip = DISABLED
    ipaddress = YYYYYYYY
    gateway = ZZZZZZZZ
    netmask = 255.255.255.0
    dns1 = 8.8.8.8
    dns2 = 8.8.9.9
    hostname = WWWWWWWW
    domain = lab.chmod666.org
    
    [NovaLinkAPTRepoConfig]
    downloadprotocol = http
    mirrorhostname = nimserver
    mirrordirectory = /export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/
    mirrorproxy =
    
    [VIOSNIMServerConfig]
    novalink_private_ip = 192.168.128.1
    vios1_private_ip = 192.168.128.2
    vios2_private_ip = 192.168.128.3
    novalink_netmask = 255.255.128.0
    viosinstallprompt = False
    
    [NovaLinkAdminCredentials]
    username = padmin
    password = $6$N1hP6cJ32p17VMpQ$sdThvaGaR8Rj12SRtJsTSRyEUEhwPaVtCTvbdocW8cRzSQDglSbpS.jgKJpmz9L5SAv8qptgzUrHDCz5ureCS.
    userdescription = NovaLink System Administrator
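
    About the password field: it is a standard SHA-512 crypt hash ($6$...). The simplest way is to reuse the one written in /var/log/pvm-install/novalink-install.cfg during your manual installation, but you can also generate a fresh one yourself; here is one way of doing it (this assumes a box with OpenSSL 1.1.1 or newer at hand, it is not something provided by the NovaLink installer). Paste the resulting $6$... string into the password line of your template:

    $ openssl passwd -6 'MyPadminPassword'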
    

    Finally modify the /etc/bootptab file and add a line matching your installation:

    # tail -1 /etc/bootptab
    nova0696010:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.16:ht=ethernet:sa=10.255.228.37:gw=10.20.65.1:sm=255.255.255.0:
    

    Don't forget to set up an http server serving all the needed files. I know this configuration is super insecure. But honestly I don't care: my NIM server is in a super secured network only accessible by the VIOS and NovaLink partitions. So I'm good :-) :

    # cd /opt/freeware/etc/httpd/ 
    # grep -Ei "^Listen|^DocumentRoot" conf/httpd.conf
    Listen 80
    DocumentRoot "/"
    

    novaserved
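
    A simple way to check that the files referenced in grub.cfg and pvm-install.cfg are really reachable before booting the partition is to request a couple of them over http (replace nimserver with the name or IP of your NIM/http server; the paths are the ones used in my grub.cfg, both requests must come back with a "200 OK"):

    # curl -I http://nimserver/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/vmlinux
    # curl -I http://nimserver/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg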

    Instead of doing this over and over at every NovaLink installation I have written a custom script preparing my NovaLink installation files. What I do in this script is:

    • Preparing the pvm-install.cfg file.
    • Modifying the grub.cfg file.
    • Adding a line to the /etc/bootptab file.
    #  ./custnovainstall.ksh nova0696010 10.20.65.16 10.20.65.1 255.255.255.0
    #!/usr/bin/ksh
    
    novalinkname=$1
    novalinkip=$2
    novalinkgw=$3
    novalinknm=$4
    cfgfile=/export/nim/lpp_source/powervc/novalink/novalink-install.cfg
    desfile=/export/nim/lpp_source/powervc/novalink/1.0.0.3/mnt/pvm/repo/pvm-install.cfg
    grubcfg=/export/nim/lpp_source/powervc/novalink/grub.cfg
    grubdes=/grub.cfg
    
    echo "+--------------------------------------+"
    echo "NovaLink name: ${novalinkname}"
    echo "NovaLink IP: ${novalinkip}"
    echo "NovaLink GW: ${novalinkgw}"
    echo "NovaLink NM: ${novalinknm}"
    echo "+--------------------------------------+"
    echo "Cfg ref: ${cfgfile}"
    echo "Cfg file: ${cfgfile}.${novalinkname}"
    echo "+--------------------------------------+"
    
    typeset -u serialnumber
    serialnumber=$(echo ${novalinkname} | sed 's/nova//g')
    
    echo "SerNum: ${serialnumber}"
    
    cat ${cfgfile} | sed "s/serialnumber = XXXXXXXX/serialnumber = ${serialnumber}/g" | sed "s/ipaddress = YYYYYYYY/ipaddress = ${novalinkip}/g" | sed "s/gateway = ZZZZZZZZ/gateway = ${novalinkgw}/g" | sed "s/netmask = 255.255.255.0/netmask = ${novalinknm}/g" | sed "s/hostname = WWWWWWWW/hostname = ${novalinkname}/g" > ${cfgfile}.${novalinkname}
    cp ${cfgfile}.${novalinkname} ${desfile}
    cat ${grubcfg} | sed "s/  set hostname=WWWWWWWW/  set hostname=${novalinkname}/g" > ${grubcfg}.${novalinkname}
    cp ${grubcfg}.${novalinkname} ${grubdes}
    # nova1009425:bf=/var/lib/tftpboot/core.elf:ip=10.20.65.15:ht=ethernet:sa=10.255.248.37:gw=10.20.65.1:sm=255.255.255.0:
    echo "${novalinkname}:bf=/var/lib/tftpboot/core.elf:ip=${novalinkip}:ht=ethernet:sa=10.255.248.37:gw=${novalinkgw}:sm=${novalinknm}:" >> /etc/bootptab
    

    Novalink installation: vSCSI or NPIV ?

    NovaLink is not designed to be installed on top of NPIV, it's a fact. As it is designed to be installed on a totally new system without any Virtual I/O Servers configured, the NovaLink installation by default creates the Virtual I/O Servers and, using these VIOS, the installation process creates backing devices on top of logical volumes created in the default VIOS storage pool. Then the NovaLink installation partition is created on top of these two logical volumes and mirrored at the end. This is the way NovaLink does it for Scale Out systems.

    For High End systems NovaLink assumes you're going to install the NovaLink partition on top of vSCSI (I have personally tried with hdisk backed and SSP Logical Unit backed devices and both work ok). For those like me who want to install NovaLink on top of NPIV (I know this is not a good choice, but once again I was forced to do that) there still is a possibility to do it. (In my humble opinion the NPIV design is done for high performance and the NovaLink partition is not going to be an I/O intensive partition. Even worse, our whole new design is based on NPIV for LPARs …. it's a shame as NPIV is not a solution designed for high density and high scalability. Every PowerVM system administrator should remember this. NPIV IS NOT A GOOD CHOICE FOR DENSITY AND SCALABILITY, USE IT FOR PERFORMANCE ONLY !!!. The story behind this is funny. I'm 100% sure that SSP is ten times a better choice to achieve density and scalability. I decided to open a poll on twitter asking this question: "Will you choose SSP or NPIV to design a scalable AIX cloud based on PowerVC ?". I was 100% sure SSP would win and made a bet with a friend (I owe him beers now) that I'd be right. What was my surprise when seeing the results: 90% of people voted for NPIV. I'm sorry to say that guys but there are two possibilities: 1/ You don't really know what scalability and density mean because you never faced them, so that's why you made the wrong choice. 2/ You know it and you're just wrong :-) . This little story is another proof that IBM is not responsible for the dying of AIX and PowerVM … but unfortunately you are responsible for it, by not understanding that the only way to survive is to embrace highly scalable solutions like Linux is doing with Openstack and Ceph. It's a fact. Period.)

    This said … if you are trying to install NovaLink on top of NPIV you’ll get an error. A workaround to this problem is to add the following line to the grub.cfg file

     pvmdisk=/dev/mapper/mpatha \
    
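    In practice this means appending the argument to the kernel (linux) line of the install menu entry. Here is a stripped down sketch of what the patched entry could look like; everything except the pvmdisk= argument is illustrative, your kernel path and other boot arguments will differ:

    menuentry 'PowerVM NovaLink install' {
        linux  /vmlinux pvmdisk=/dev/mapper/mpatha \
               <your other boot arguments>
        initrd /netboot_initrd.gz
    }
    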

    If you do that you'll be able to install NovaLink on your NPIV disk, but you will still hit an error the first time you install it, at the "grub-install" step. Just re-run the installation a second time and the grub-install command will work fine :-) (I'll explain how to avoid this second issue later).

    One work-around to this second issue is to recreate the initrd by adding a line in the debian-installer config file.

    Fully automated installation by example

    • Here the core.elf file is downloaded by tftp. You can see in the capture below that the grub.cfg file is searched for in / :
    • 1m
      13m

    • The installer is starting:
    • 2

    • The vmlinux is downloaded (http):
    • 3

    • The root.squashfs is downloaded (http):
    • 4m

    • The pvm-install.cfg configuration file is downloaded (http):
    • 5

    • pvm services are started. At this time if you are running in co-management mode you’ll see the Red lock in the HMC Server status:
    • 6

    • The Linux and NovaLink installation is ongoing:
    • 7
      8
      9
      10
      11
      12

    • System is ready:
    • 14

    Novalink code auto update

    When adding a NovaLink host to PowerVC the powervc packages coming from the powervc management host will be installed on the NovaLink partition. You can check this during the installation. Here is what’s going on when adding the NovaLink host to PowerVC:

    15
    16

    # cat /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
    ################################################################################
    Starting the IBM PowerVC Novalink Installation on:
    2016-09-11T16:42:05+02:00
    ################################################################################
    
    LOG file is /opt/ibm/powervc/log/powervc_install_2016-09-11-164205.log
    
    2016-09-11T16:42:05.18+02:00 Installation directory is /opt/ibm/powervc
    2016-09-11T16:42:05.18+02:00 Installation source location is /tmp/powervc_img_temp_1473611916_1627713/powervc-1.3.1.2
    [..]
    Setting up python-neutron (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
    Setting up neutron-common (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
    Setting up neutron-plugin-ml2 (10:8.0.0-201608161728.ibm.ubuntu1.375) ...
    Setting up ibmpowervc-powervm-network (1.3.1.2) ...
    Setting up ibmpowervc-powervm-oslo (1.3.1.2) ...
    Setting up ibmpowervc-powervm-ras (1.3.1.2) ...
    Setting up ibmpowervc-powervm (1.3.1.2) ...
    W: --force-yes is deprecated, use one of the options starting with --allow instead.
    
    ***************************************************************************
    IBM PowerVC Novalink installation
     successfully completed at 2016-09-11T17:02:30+02:00.
     Refer to
     /opt/ibm/powervc/log/powervc_install_2016-09-11-165617.log
     for more details.
    ***************************************************************************
    

    17

    Installing the missing deb packages if NovaLink host was added before PowerVC upgrade

    If the NovaLink host was added in PowerVC 1.3.1.1 and you then updated to PowerVC 1.3.1.2, you have to update the packages by hand because there is a little bug during the update of some packages:

    • From the PowerVC management host copy the latest packages to the NovaLink host:
    # scp /opt/ibm/powervc/images/powervm/powervc-powervm-compute-1.3.1.2.tgz padmin@nova0696010:~
    padmin@nova0696010's password:
    powervc-powervm-compute-1.3.1.2.tgz
    
  • Update the packages on the NovaLink host
  • # tar xvzf powervc-powervm-compute-1.3.1.2.tgz
    # cd powervc-1.3.1.2/packages/powervm
    # dpkg -i nova-powervm_2.0.3-160816-48_all.deb
    # dpkg -i networking-powervm_2.0.1-160816-6_all.deb
    # dpkg -i ceilometer-powervm_2.0.1-160816-17_all.deb
    # /opt/ibm/powervc/bin/powervc-services restart
    

    rsct and pvm deb update

    Never forget to install the latest rsct and pvm packages after the installation. You can clone the official IBM repository for the pvm and rsct files (check my previous post about NovaLink for more details about cloning the repository). Then create two files in /etc/apt/sources.list.d, one for pvm, the other for rsct:

    # vi /etc/apt/sources.list.d/pvm.list
    deb http://nimserver/export/nim/lpp_source/powervc/novalink/nova/debian novalink_1.0.0 non-free
    # vi /etc/apt/sources.list.d/rsct.list
    deb http://nimserver/export/nim/lpp_source/powervc/novalink/rsct/ubuntu xenial main
    # dpkg -l | grep -i rsct
    ii  rsct.basic                                3.2.1.0-15300                           ppc64el      Reliable Scalable Cluster Technology - Basic
    ii  rsct.core                                 3.2.1.3-16106-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Core
    ii  rsct.core.utils                           3.2.1.3-16106-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Utilities
    # dpkg -l | grep -i pvm
    ii  pvm-cli                                   1.0.0.3-160516-1488                     all          Power VM Command Line Interface
    ii  pvm-core                                  1.0.0.3-160525-2192                     ppc64el      PVM core runtime package
    ii  pvm-novalink                              1.0.0.3-160525-1000                     ppc64el      Meta package for all PowerVM Novalink packages
    ii  pvm-rest-app                              1.0.0.3-160524-2229                     ppc64el      The PowerVM NovaLink REST API Application
    ii  pvm-rest-server                           1.0.0.3-160524-2229                     ppc64el      Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink 
    # apt-get install rsct.core rsct.basic
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      docutils-common libpaper-utils libpaper1 python-docutils python-roman
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
      rsct.core.utils src
    The following packages will be upgraded:
      rsct.core rsct.core.utils src
    3 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
    Need to get 9,356 kB of archives.
    After this operation, 548 kB disk space will be freed.
    [..]
    # apt-get install pvm-novalink
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      docutils-common libpaper-utils libpaper1 python-docutils python-roman
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
      pvm-core pvm-rest-app pvm-rest-server pypowervm
    The following packages will be upgraded:
      pvm-core pvm-novalink pvm-rest-app pvm-rest-server pypowervm
    5 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
    Need to get 287 MB of archives.
    After this operation, 203 kB of additional disk space will be used.
    Do you want to continue? [Y/n] Y
    [..]
    
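    Note that apt only sees the two new repositories once its package index has been refreshed; if the apt-get install commands above complain that the packages cannot be found, refresh and check the candidate versions first:

    # apt-get update
    # apt-cache policy rsct.core rsct.basic pvm-novalink
    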

    After the installation, here is what you should have if everything was updated properly:

    # dpkg -l | grep rsct
    ii  rsct.basic                                3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Basic
    ii  rsct.core                                 3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Core
    ii  rsct.core.utils                           3.2.1.4-16154-1ubuntu1                  ppc64el      Reliable Scalable Cluster Technology - Utilities
    # dpkg -l | grep pvm
    ii  pvm-cli                                   1.0.0.3-160516-1488                     all          Power VM Command Line Interface
    ii  pvm-core                                  1.0.0.3.1-160713-2441                   ppc64el      PVM core runtime package
    ii  pvm-novalink                              1.0.0.3.1-160714-1152                   ppc64el      Meta package for all PowerVM Novalink packages
    ii  pvm-rest-app                              1.0.0.3.1-160713-2417                   ppc64el      The PowerVM NovaLink REST API Application
    ii  pvm-rest-server                           1.0.0.3.1-160713-2417                   ppc64el      Holds the basic installation of the REST WebServer (Websphere Liberty Profile) for PowerVM NovaLink
    

    Novalink post-installation (my ansible way to do that)

    You all know by now that I'm not very fond of doing the same things over and over again; that's why I have created an ansible post-install playbook especially for the NovaLink post installation. You can download it here: nova_ansible. Then install ansible on a host that has ssh access to all your NovaLink partitions and run the ansible playbook:

    • Untar the ansible playbook:
    # mkdir /srv/ansible
    # cd /srv/ansible
    # tar xvf novalink_ansible.tar 
    
  • Modify the group_vars/novalink.yml to fit your environment:
  • # cat group_vars/novalink.yml
    ntpservers:
      - ntpserver1
      - ntpserver2
    dnsservers:
      - 8.8.8.8
      - 8.8.9.9
    dnssearch:
      - lab.chmod666.org
    vepa_iface: ibmveth6
    repo: nimserver
    
  • Share the root ssh key with the NovaLink hosts (be careful: by default NovaLink does not allow root login, you have to modify the sshd configuration file):
  • Put all your Novalink hosts into the inventory file:
  • #cat inventories/hosts.novalink
    [novalink]
    nova65a0cab
    nova65ff4cd
    nova10094ef
    nova06960ab
    
  • Run ansible-playbook and you’re done:
  • # ansible-playbook -i inventories/hosts.novalink site.yml
    

    ansible1
    ansible2
    ansible3
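    If you want to sanity check things before (or after) a real run, ansible's built-in options are handy. A small sketch, assuming the inventory group is called novalink as in the example above:

    # ansible novalink -i inventories/hosts.novalink -m ping
    # ansible-playbook -i inventories/hosts.novalink site.yml --check --diff
    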

    More details about NovaLink

    MGMTSWITCH vswitch automatic creation

    Do not try to create the MGMTSWITCH by yourself. The NovaLink installer is doing it for you. As my Virtual I/O Servers are installed using the IBM Provisioning Toolkit for PowerVM … I was creating the MGMTSWITCH at this time but I was wrong. You can see this in the file /var/log/pvm-install/pvminstall.log on the NovaLink partition:

    # cat /var/log/pvm-install/pvminstall.log
    Fri Aug 12 17:26:07 UTC 2016: PVMDebug = 0
    Fri Aug 12 17:26:07 UTC 2016: Running initEnv
    [..]
    Fri Aug 12 17:27:08 UTC 2016: Using user provided pvm-install configuration file
    Fri Aug 12 17:27:08 UTC 2016: Auto Install set
    [..]
    Fri Aug 12 17:27:44 UTC 2016: Auto Install = 1
    Fri Aug 12 17:27:44 UTC 2016: Validating configuration file
    Fri Aug 12 17:27:44 UTC 2016: Initializing private network configuration
    Fri Aug 12 17:27:45 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o c
    Fri Aug 12 17:27:46 UTC 2016: Running /opt/ibm/pvm-install/bin/switchnetworkcfg -o n -i 3 -n MGMTSWITCH -p 4094 -t 1
    Fri Aug 12 17:27:49 UTC 2016: Start setupinstalldisk operation for /dev/mapper/mpatha
    Fri Aug 12 17:27:49 UTC 2016: Running updatedebconf
    Fri Aug 12 17:56:06 UTC 2016: Pre-seeding disk recipe
    
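    If you want to double check that the installer (and not you) created the vswitch, you can list the virtual switches afterwards. On a co-managed system the HMC view works with lshwres (the managed system name below comes from my lab, use yours), and on the NovaLink partition I believe pvmctl can do the same (assumption: the object alias is vswitch on my pvm-cli level, double check with pvmctl --help):

    # lshwres -m 9119-MME-656C38A -r virtualio --rsubtype vswitch
    # pvmctl vswitch list
    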

    NPIV lpar creation problem !

    As you know my environment is crazy. Every lpar we create has 4 virtual fibre channel adapters. Obviously two on fabric A and two on fabric B. And obviously again each fabric must be present on each Virtual I/O Server. So to sum up: an lpar must have access to fabrics A and B using VIOS1 and to fabrics A and B using VIOS2. Unfortunately there was a little bug in the current NovaLink (1.0.0.3) code and all the lpars were created with only two adapters. The PowerVC team gave me a patch to handle this particular issue by patching the npiv.py file. This patch needs to be installed on the NovaLink partition itself:

    # cd /usr/lib/python2.7/dist-packages/powervc_nova/virt/ibmpowervm/pvm/volume
    # sdiff npiv.py.back npiv.bck
    

    npivpb

    I’m intentionally not giving you the solution here (just by copying/pasting code) because an issue is addressed and an APAR has been opened for this issue and is resolved in 1.3.1.2 version. IT16534

    From NovaLink to HMC …. and the opposite

    One of the challenges for me was to be sure everything was working ok regarding LPM and NovaLink. So I decided to test different cases:

    • From NovaLink host to NovaLink host (didn't have any trouble) :-)
    • From NovaLink host to HMC host (didn't have any trouble) :-)
    • From HMC host to NovaLink host (had trouble) :-(

    Once again this issue preventing HMC to NovaLink LPM from working correctly is related to storage. A patch is on its way, but let me explain the issue a little bit (only useful if you absolutely have to move an LPAR from HMC to NovaLink and you are in the same case as I am):

    PowerVC is not doing the mapping to the destination Virtual I/O Servers correctly and tries to map fabric A twice on VIOS1 and fabric B twice on VIOS2. Luckily for us you can do the migration by hand:

    • Do the LPM operation from PowerVC and check on the HMC side how PowerVC is doing the mapping (log on the HMC to check this):
    #  lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
    time=08/31/2016 18:53:27,"text=HSCE2124 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios1/2//fcs2,3/vios2/1//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command failed."
    
  • One interesting point you can see here is that the NovaLink user used for LPM is not padmin but wlp. Have a look on the NovaLink machine if you are a little bit curious:
  • 18

  • If you double check the mapping you'll see that PowerVC is mixing up the VIOS. Just rerun the command with the mappings in the right order and you'll be able to do HMC to NovaLink LPM (by the way, PowerVC automatically detects that the host has changed for this lpar (moved outside of PowerVC)):
  • # migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i '"virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"',shared_proc_pool_id=0 -o m
    # lssvcevents -t console -d 0 | grep powervc_admin | grep migrlpar
    time=08/31/2016 19:13:00,"text=HSCE2123 User name powervc_admin: migrlpar -m 9119-MME-656C38A -t 9119-MME-65A0C31 --id 18 --ip 10.22.33.198 -u wlp -i ""virtual_fc_mappings=6/vios2/1//fcs2,3/vios1/2//fcs2,4/vios2/1//fcs1,5/vios1/2//fcs1"",shared_proc_pool_id=0 -o m command was executed successfully."
    
    hmctonova
    

    One more time, don't worry about this issue, a patch is on the way. But I thought it was interesting to talk about it just to show you how PowerVC is handling this (user, key sharing, checks on the HMC).
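    If you ever have to rebuild the -i mapping string by hand like above, it helps to check first, on each destination Virtual I/O Server, which fcs ports sit on which fabric and what is already mapped; the usual padmin commands are enough for that:

    $ lsnports
    $ lsmap -all -npiv
    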

    Deep dive into the initrd

    I am curious and there is no way to change this. As I wanted to know how the NovaLink installer works I had to dig into the netboot_initrd.gz file. There is a lot of interesting stuff to check in this initrd. Run the commands below on a Linux partition if you also want to have a look:

    # scp nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz .
    # gunzip netboot_initrd
    # cpio -i < netboot_initrd
    185892 blocks
    

    The installer is located in opt/ibm/pvm-install:

    # ls opt/ibm/pvm-install/data/
    40mirror.pvm  debpkgs.txt  license.txt  nimclient.info  pvm-install-config.template  pvm-install-preseed.cfg  rsct-gpg-pub.key  vios_diagram.txt
    # ls opt/ibm/pvm-install/bin
    assignio.py        envsetup        installpvm                    monitor        postProcessing    pvmwizardmain.py  restore.py        switchnetworkcfg  vios
    cfgviosnetwork.py  functions       installPVMPartitionWizard.py  network        procmem           recovery          setupinstalldisk  updatedebconf     vioscfg
    chviospasswd       getnetworkinfo  ioadapter                     networkbridge  pvmconfigdata.py  removemem         setupviosinstall  updatenimsetup    welcome.py
    editpvmconfig      initEnv         mirror                        nimscript      pvmtime           resetsystem       summary.py        user              wizpkg
    

    You can for instance check what the installer is exactly doing. Let's take again the example of the MGMTSWITCH creation; you can see in the output below that I was right about it:

    initrd1

    Remember, I told you before that I had a problem with the installation on NPIV. You can avoid installing NovaLink twice by modifying the debian installer preseed file opt/ibm/pvm-install/data/pvm-install-preseed.cfg directly in the initrd (you have to rebuild the initrd after doing this):

    # grep bootdev opt/ibm/pvm-install/data/pvm-install-preseed.cfg
    d-i grub-installer/bootdev string /dev/mapper/mpatha
    # find | cpio -H newc -o > ../new_initrd_file
    # gzip -9 ../new_initrd_file
    # scp ../new_initrd_file.gz nimdy:/export/nim/lpp_source/powervc/novalink/1.0.0.3/iso/install/netboot_initrd.gz
    

    You can also find good examples of pvmctl commands here:

    # grep -R pvmctl *
    pvmctl lv create --size $LV_SIZE --name $LV_NAME -p id=$vid
    pvmctl scsi create --type lv --vg name=rootvg --lpar id=1 -p id=$vid --stor-id name=$LV_NAME
    
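    Outside of the installer context the same pvmctl syntax is what I use for quick read-only checks on the NovaLink partition; a couple of examples (object aliases can vary slightly between pvm-cli levels, check pvmctl --help):

    # pvmctl vm list
    # pvmctl vios list
    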

    Troubleshooting

    NovaLink is not PowerVC, so here is a little reminder of what I check to troubleshoot NovaLink (see the combined tail after the list):

    • Installation troubleshooting:
    # cat /var/log/pvm-install/pvminstall.log
    
  • Neutron Agent log (always double check this one):
  • # cat /var/log/neutron/neutron-powervc-pvm-sea-agent.log
    
  • Nova logs for this host are not accessible on the PowerVC management host anymore, so check it on the NovaLink partition if needed:
  • # cat /var/log/nova/nova-compute.log
    
  • pvmctl logs:
  • # cat /var/log/pvm/pvmctl.log
    
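    When I'm chasing a deployment or network issue I usually just follow all of them at once:

    # tail -f /var/log/pvm-install/pvminstall.log /var/log/neutron/neutron-powervc-pvm-sea-agent.log /var/log/nova/nova-compute.log /var/log/pvm/pvmctl.log
    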

    One last thing to add about NovaLink. One thing I like a lot is that NovaLink is doing backups of the system and of the VIOS hourly and daily. These backups are stored in /var/backups/pvm:

    # crontab -l
    # VIOS hourly backups - at 15 past every hour except for midnight
    15 1-23 * * * /usr/sbin/pvm-backup --type vios --frequency hourly
    # Hypervisor hourly backups - at 15 past every hour except for midnight
    15 1-23 * * * /usr/sbin/pvm-backup --type system --frequency hourly
    # VIOS daily backups - at 15 past midnight
    15 0    * * * /usr/sbin/pvm-backup --type vios --frequency daily
    # Hypervisor daily backups - at 15 past midnight
    15 0    * * * /usr/sbin/pvm-backup --type system --frequency daily
    # ls -l /var/backups/pvm
    total 4
    drwxr-xr-x 2 root pvm_admin 4096 Sep  9 00:15 9119-MME*0265FF47B
    

    More PowerVC tips and tricks

    Let's finish this blog post with more PowerVC tips and tricks. Before giving you the tricks I have to warn you: none of these tricks are supported by PowerVC, use them at your own risk OR contact your support before doing anything else. You may break and destroy everything if you are not aware of what you are doing. So please be very careful using all these tricks. YOU HAVE BEEN WARNED !!!!!!

    Accessing and querying the database

    This first trick is funny and will allow you to query and modify the PowerVC database. Once again, do this at your own risk. One of the issues I had was strange. I do not remember how it happened exactly, but some of my luns that were not attached to any host were still showing an attachment number equal to 1 and I didn't have the possibility to remove it. Even worse, someone had deleted these luns on the SVC side. So these luns were what I call "ghost luns": non-existing but non-deletable luns. (I also had to remove the storage provider related to these luns). The only way to fix this was to change the state to detached directly in the cinder database. Be careful, this trick only works with MariaDB.

    First get the database password. Get the encrypted password from the /opt/ibm/powervc/data/powervc-db.conf file and decode it to get the clear password:

    # grep ^db_password /opt/ibm/powervc/data/powervc-db.conf
    db_password = aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==
    # python -c "from powervc_keystone.encrypthandler import EncryptHandler; print EncryptHandler().decode('aes-ctr:NjM2ODM5MjM0NTAzMTg4MzQzNzrQZWi+mrUC+HYj9Mxi5fQp1XyCXA==')"
    OhnhBBS_gvbCcqHVfx2N
    # mysql -u root -p cinder
    Enter password:
    MariaDB [cinder]> show tables;
    +----------------------------+
    | Tables_in_cinder           |
    +----------------------------+
    | backups                    |
    | cgsnapshots                |
    | consistencygroups          |
    | driver_initiator_data      |
    | encryption                 |
    [..]
    

    Then get the lun uuid on the PowerVC gui for the lun you want to change, and follow the commands below:

    dummy

    MariaDB [cinder]> select * from volume_attachment where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
    *************************** 1. row ***************************
       created_at: 2016-05-26 08:52:51
       updated_at: 2016-05-26 08:54:23
       deleted_at: 2016-05-26 08:54:23
          deleted: 1
               id: ce4238b5-ea39-4ce1-9ae7-6e305dd506b1
        volume_id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
    attached_host: NULL
    instance_uuid: 44c7a72c-610c-4af1-a3ed-9476746841ab
       mountpoint: /dev/sdb
      attach_time: 2016-05-26 08:52:51
      detach_time: 2016-05-26 08:54:23
      attach_mode: rw
    attach_status: attached
    1 row in set (0.01 sec)
    MariaDB [cinder]> select * from volumes where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78' \G
    *************************** 1. row ***************************
                     created_at: 2016-05-26 08:51:57
                     updated_at: 2016-05-26 08:54:23
                     deleted_at: NULL
                        deleted: 0
                             id: 9cf6d85a-3edd-4ab7-b797-577ff6566f78
                         ec2_id: NULL
                        user_id: 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9
                     project_id: 1471acf124a0479c8d525aa79b2582d0
                           host: pb01_mn_svc_qual
                           size: 1
              availability_zone: nova
                         status: available
                  attach_status: attached
                   scheduled_at: 2016-05-26 08:51:57
                    launched_at: 2016-05-26 08:51:59
                  terminated_at: NULL
                   display_name: dummy
            display_description: NULL
              provider_location: NULL
                  provider_auth: NULL
                    snapshot_id: NULL
                 volume_type_id: e49e9cc3-efc3-4e7e-bcb9-0291ad28df42
                   source_volid: NULL
                       bootable: 0
              provider_geometry: NULL
                       _name_id: NULL
              encryption_key_id: NULL
               migration_status: NULL
             replication_status: disabled
    replication_extended_status: NULL
        replication_driver_data: NULL
            consistencygroup_id: NULL
                    provider_id: NULL
                    multiattach: 0
                previous_status: NULL
    1 row in set (0.00 sec)
    MariaDB [cinder]> update volume_attachment set attach_status='detached' where volume_id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    MariaDB [cinder]> update volumes set attach_status='detached' where id='9cf6d85a-3edd-4ab7-b797-577ff6566f78';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    

    The second issue I had was about having some machines in deleted state, but the reality was that the HMC had just rebooted and for an unknown reason these machines were seen as 'deleted' .. but they were not. Using this trick I was able to force a re-evaluation of each machine in this case:

    #  mysql -u root -p nova
    Enter password:
    MariaDB [nova]> select * from instance_health_status where health_state='WARNING';
    +---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
    | created_at          | updated_at          | deleted_at | deleted | id                                   | health_state | reason                                                                                                                                                                                                                | unknown_reason_details |
    +---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
    | 2016-07-11 08:58:37 | NULL                | NULL       |       0 | 1af1805c-bb59-4bc9-8b6d-adeaeb4250f3 | WARNING      | [{"resource_local": "server", "display_name": "p00ww6754398", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "1af1805c-bb59-4bc9-8b6d-adeaeb4250f3"}]                |                        |
    | 2015-07-31 16:53:50 | 2015-07-31 18:49:50 | NULL       |       0 | 2668e808-10a1-425f-a272-6b052584557d | WARNING      | [{"resource_local": "server", "display_name": "multi-vol", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "2668e808-10a1-425f-a272-6b052584557d"}]                         |                        |
    | 2015-08-03 11:22:38 | 2015-08-03 15:47:41 | NULL       |       0 | 2934fb36-5d91-48cd-96de-8c16459c50f3 | WARNING      | [{"resource_local": "server", "display_name": "clouddev-test-754df319-00000038", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "2934fb36-5d91-48cd-96de-8c16459c50f3"}] |                        |
    | 2016-07-11 09:03:59 | NULL                | NULL       |       0 | 3fc42502-856b-46a5-9c36-3d0864d6aa4c | WARNING      | [{"resource_local": "server", "display_name": "p00ww3254401", "resource_property_key": "rmc_state", "resource_property_value": "initializing", "resource_id": "3fc42502-856b-46a5-9c36-3d0864d6aa4c"}]                |                        |
    | 2015-07-08 20:11:48 | 2015-07-08 20:14:09 | NULL       |       0 | 54d02c60-bd0e-4f34-9cb6-9c0a0b366873 | WARNING      | [{"resource_local": "server", "display_name": "p00wb3740870", "resource_property_key": "rmc_state", "resource_property_value": "inactive", "resource_id": "54d02c60-bd0e-4f34-9cb6-9c0a0b366873"}]                    |                        |
    | 2015-07-31 17:44:16 | 2015-07-31 18:49:50 | NULL       |       0 | d5ec2a9c-221b-44c0-8573-d8e3695a8dd7 | WARNING      | [{"resource_local": "server", "display_name": "multi-vol-sp5", "resource_property_key": "vm_state", "resource_property_value": "deleted", "resource_id": "d5ec2a9c-221b-44c0-8573-d8e3695a8dd7"}]                     |                        |
    +---------------------+---------------------+------------+---------+--------------------------------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------+
    6 rows in set (0.00 sec)
    MariaDB [nova]> update instance_health_status set health_state='PENDING',reason='' where health_state='WARNING';
    Query OK, 6 rows affected (0.00 sec)
    Rows matched: 6  Changed: 6  Warnings: 0
    

    pending

    The ceilometer issue

    When updating from PowerVC 1.3.0.1 to 1.3.1.1 PowerVC is changing the database backend from DB2 to MariaDB. This is a good thing, but the way the update is done is by exporting all the data into flat files and then re-inserting it into the MariaDB database record per record. I had a huge problem because of this, just because my ceilo database was huge due to the number of machines I had and the number of operations we have run on PowerVC since it went into production. The DB insert took more than 3 days and never finished. If you don't need the ceilo data my advice is to change the retention from the default of 270 days to 2 hours:

    # powervc-config metering event_ttl --set 2 --unit hr 
    # ceilometer-expirer --config-file /etc/ceilometer/ceilometer.conf
    

    If this is not enough and you are still experiencing problems during the update, the best way is to flush the entire ceilometer database before the update:

    # /opt/ibm/powervc/bin/powervc-services stop
    # /opt/ibm/powervc/bin/powervc-services db2 start
    # /bin/su - pwrvcdb -c "db2 drop database ceilodb2"
    # /bin/su - pwrvcdb -c "db2 CREATE DATABASE ceilodb2 AUTOMATIC STORAGE YES ON /home/pwrvcdb DBPATH ON /home/pwrvcdb USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM PAGESIZE 16384 RESTRICTIVE"
    # /bin/su - pwrvcdb -c "db2 connect to ceilodb2 ; db2 grant dbadm on database to user ceilometer"
    # /opt/ibm/powervc/bin/powervc-dbsync ceilometer
    # /bin/su - pwrvcdb -c "db2 connect TO ceilodb2; db2 CALL GET_DBSIZE_INFO '(?, ?, ?, 0)' > /tmp/ceilodb2_db_size.out; db2 terminate" > /dev/null
    

    Multi tenancy ... how to deal with a huge environment

    As my environment is growing bigger and bigger I faced a couple of people trying to force me to multiply the number of PowerVC machines we have. As Openstack is a solution designed to handle both density and scalability I said that doing this is just a "non-sense". Seriously, people who still believe in this have not understood anything about the cloud, Openstack and PowerVC. Luckily we found a solution acceptable to everybody. As we are creating what we call "building blocks" we had to find a way to isolate one "block" from another. The solution for host isolation is called multi tenancy isolation. For the storage side we are just going to play with quotas. By doing this a user will be able to manage a couple of hosts and the associated storage (storage templates) without having the right to do anything on the others:

    multitenancyisolation

    Before doing anything create the tenant (or project) and a user associated with it:

    # cat /opt/ibm/powervc/version.properties | grep cloud_enabled
    cloud_enabled = yes
    # cat ~/powervcrc
    export OS_USERNAME=root
    export OS_PASSWORD=root
    export OS_TENANT_NAME=ibm-default
    export OS_AUTH_URL=https://powervc.lab.chmod666.org:5000/v3/
    export OS_IDENTITY_API_VERSION=3
    export OS_CACERT=/etc/pki/tls/certs/powervc.crt
    export OS_REGION_NAME=RegionOne
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_COMPUTE_API_VERSION=2.25
    export OS_NETWORK_API_VERSION=2.0
    export OS_IMAGE_API_VERSION=2
    export OS_VOLUME_API_VERSION=2
    # source powervcrc
    # openstack project create hb01
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description |                                  |
    | domain_id   | default                          |
    | enabled     | True                             |
    | id          | 90d064b4abea4339acd32a8b6a8b1fdf |
    | is_domain   | False                            |
    | name        | hb01                             |
    | parent_id   | default                          |
    +-------------+----------------------------------+
    # openstack role list
    +----------------------------------+---------------------+
    | ID                               | Name                |
    +----------------------------------+---------------------+
    | 1a76014f12594214a50c36e6a8e3722c | deployer            |
    | 54616a8b136742098dd81eede8fd5aa8 | vm_manager          |
    | 7bd6de32c14d46f2bd5300530492d4a4 | storage_manager     |
    | 8260b7c3a4c24a38ba6bee8e13ced040 | deployer_restricted |
    | 9b69a55c6b9346e2b317d0806a225621 | image_manager       |
    | bc455ed006154d56ad53cca3a50fa7bd | admin               |
    | c19a43973db148608eb71eb3d86d4735 | service             |
    | cb130e4fa4dc4f41b7bb4f1fdcf79fc2 | self_service        |
    | f1a0c1f9041d4962838ec10671befe33 | vm_user             |
    | f8cf9127468045e891d5867ce8825d30 | viewer              |
    +----------------------------------+---------------------+
    # useradd hb01_admin
    # openstack role add --project hb01 --user hb01_admin admin
    

    Then associate with the tenant each host group (aggregates in Openstack terms) that is allowed for it, using the filter_tenant_id metadata (you have to put your allowed hosts in a host group to enable this feature). For each allowed host group add this field to the metadata of the host group (first find the tenant id):

    # openstack project list
    +----------------------------------+-------------+
    | ID                               | Name        |
    +----------------------------------+-------------+
    | 1471acf124a0479c8d525aa79b2582d0 | ibm-default |
    | 90d064b4abea4339acd32a8b6a8b1fdf | hb01        |
    | b79b694c70734a80bc561e84a95b313d | powervm     |
    | c8c42d45ef9e4a97b3b55d7451d72591 | service     |
    | f371d1f29c774f2a97f4043932b94080 | project1    |
    +----------------------------------+-------------+
    # openstack aggregate list
    +----+---------------+-------------------+
    | ID | Name          | Availability Zone |
    +----+---------------+-------------------+
    |  1 | Default Group | None              |
    | 21 | aggregate2    | None              |
    | 41 | hg2           | None              |
    | 43 | hb01_mn       | None              |
    | 44 | hb01_me       | None              |
    +----+---------------+-------------------+
    # nova aggregate-set-metadata hb01_mn filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf 
    Metadata has been successfully updated for aggregate 43.
    | Id | Name    | Availability Zone | Hosts             | Metadata                                                                                                                                   
    | 43 | hb01_mn | -                 | '9119MME_1009425' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=4', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70' |
    # nova aggregate-set-metadata hb01_me filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf 
    Metadata has been successfully updated for aggregate 44.
    | Id | Name    | Availability Zone | Hosts             | Metadata                                                                                                                                   
    | 44 | hb01_me | -                 | '9119MME_0696010' | 'dro_enabled=False', 'filter_tenant_id=90d064b4abea4339acd32a8b6a8b1fdf', 'hapolicy-id=1', 'hapolicy-run_interval=1', 'hapolicy-stabilization=1', 'initialpolicy-id=2', 'runtimepolicy-action=migrate_vm_advise_only', 'runtimepolicy-id=5', 'runtimepolicy-max_parallel=10', 'runtimepolicy-run_interval=5', 'runtimepolicy-stabilization=2', 'runtimepolicy-threshold=70' |
    

    To make this work add AggregateMultiTenancyIsolation to the scheduler_default_filters entry in the nova.conf file and restart the nova services:

    # grep scheduler_default_filter /etc/nova/nova.conf
    scheduler_default_filters = RamFilter,CoreFilter,ComputeFilter,RetryFilter,AvailabilityZoneFilter,ImagePropertiesFilter,ComputeCapabilitiesFilter,MaintenanceFilter,PowerVCServerGroupAffinityFilter,PowerVCServerGroupAntiAffinityFilter,PowerVCHostAggregateFilter,PowerVMNetworkFilter,PowerVMProcCompatModeFilter,PowerLMBSizeFilter,PowerMigrationLicenseFilter,PowerVMMigrationCountFilter,PowerVMStorageFilter,PowerVMIBMiMobilityFilter,PowerVMRemoteRestartFilter,PowerVMRemoteRestartSameHMCFilter,PowerVMEndianFilter,PowerVMGuestCapableFilter,PowerVMSharedProcPoolFilter,PowerVCResizeSameHostFilter,PowerVCDROFilter,PowerVMActiveMemoryExpansionFilter,PowerVMNovaLinkMobilityFilter,AggregateMultiTenancyIsolation
    # powervc-services restart
    

    We are done regarding the hosts.

    Enabling quotas

    To allow one user/tenant to create volumes on only one storage provider we first need quotas to be enabled; the relevant rules live in the cinder policy file:

    # grep quota /opt/ibm/powervc/policy/cinder/policy.json
        "volume_extension:quotas:show": "",
        "volume_extension:quotas:update": "rule:admin_only",
        "volume_extension:quotas:delete": "rule:admin_only",
        "volume_extension:quota_classes": "rule:admin_only",
        "volume_extension:quota_classes:validate_setup_for_nested_quota_use": "rule:admin_only",
    

    Then set the volume quota to 0 for every non-allowed storage template for this tenant and leave the allowed one at its default value. Easy:

    # cinder --service-type volume type-list
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    |                  ID                  |                     Name                    | Description | Is_Public |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    | 53434872-a0d2-49ea-9683-15c7940b30e5 |               svc2 base template            |      -      |    True   |
    | e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 |               svc1 base template            |      -      |    True   |
    | f45469d5-df66-44cf-8b60-b226425eee4f |                     svc3                    |      -      |    True   |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    # cinder --service-type volume quota-update --volumes 0 --volume-type "svc2" 90d064b4abea4339acd32a8b6a8b1fdf
    # cinder --service-type volume quota-update --volumes 0 --volume-type "svc3" 90d064b4abea4339acd32a8b6a8b1fdf
    +-------------------------------------------------------+----------+
    |                        Property                       |  Value   |
    +-------------------------------------------------------+----------+
    |                    backup_gigabytes                   |   1000   |
    |                        backups                        |    10    |
    |                       gigabytes                       | 1000000  |
    |              gigabytes_svc2 base template             | 10000000 |
    |              gigabytes_svc1 base template             | 10000000 |
    |                     gigabytes_svc3                    |    -1    |
    |                  per_volume_gigabytes                 |    -1    |
    |                       snapshots                       |  100000  |
    |             snapshots_svc2 base template              |  100000  |
    |             snapshots_svc1 base template              |  100000  |
    |                     snapshots_svc3                    |    -1    |
    |                        volumes                        |  100000  |
    |            volumes_svc2 base template                 |  100000  |
    |            volumes_svc1 base template                 |    0     |
    |                      volumes_svc3                     |    0     |
    +-------------------------------------------------------+----------+
    # powervc-services stop
    # powervc-services start
    

    By doing this you have enabled the isolation between the two tenants. Then use the appropriate user to do the appropriate task.
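    For example, the hb01 administrator can work with his own copy of the powervcrc file pointing to the hb01 project. A quick sketch (the password is obviously a placeholder, adapt it to whatever authentication backend you use):

    # cp ~/powervcrc ~/powervcrc.hb01
    # sed -i 's/OS_USERNAME=root/OS_USERNAME=hb01_admin/;s/OS_PASSWORD=root/OS_PASSWORD=hb01_password/;s/OS_TENANT_NAME=ibm-default/OS_TENANT_NAME=hb01/' ~/powervcrc.hb01
    # source ~/powervcrc.hb01
    # openstack token issue
    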

    PowerVC cinder above the Petabyte

    Now that quotas are enabled, use this command if you want to be able to have more than one petabyte of data managed by PowerVC:

    # cinder --service-type volume quota-class-update --gigabytes -1 default
    # powervc-services stop
    # powervc-services start
    

    PowerVC cinder above 10000 luns

    Change the osapi_max_limit in cinder.conf if you want to go above the 10000 lun limit (check every cinder configuration file; cinder.conf is for the global number of volumes):

    # grep ^osapi_max_limit cinder.conf
    osapi_max_limit = 15000
    # powervc-services stop
    # powervc-services start
    
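    As said above, the other cinder configuration files have to be checked too; a quick grep shows which of them already carry the setting (assuming they all live under /etc/cinder):

    # grep -H osapi_max_limit /etc/cinder/*.conf
    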

    Snapshot and consistency group

    There is a new cool feature available with the latest version of PowerVC (1.3.1.2). This feature allows you to create snapshots of volumes (only on SVC and Storwize for the moment). You now have the possibility to create consistency groups (groups of volumes) and to create snapshots of these consistency groups (allowing you, for instance, to make a backup of a volume group directly from OpenStack). I'm doing the example below using the command line because I think it is easier to understand with these commands rather than showing you the same thing with the REST API:

    First create a consistency group:

    # cinder --service-type volume type-list
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    |                  ID                  |                     Name                    | Description | Is_Public |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    | 53434872-a0d2-49ea-9683-15c7940b30e5 |              svc2 base template             |      -      |    True   |
    | 862b0a8e-cab4-400c-afeb-99247838f889 |             p8_ssp base template            |      -      |    True   |
    | e49e9cc3-efc3-4e7e-bcb9-0291ad28df42 |               svc1 base template            |      -      |    True   |
    | f45469d5-df66-44cf-8b60-b226425eee4f |                     svc3                    |      -      |    True   |
    +--------------------------------------+---------------------------------------------+-------------+-----------+
    # cinder --service-type volume consisgroup-create --name foovg_cg "svc1 base template"
    +-------------------+-------------------------------------------+
    |      Property     |                   Value                   |
    +-------------------+-------------------------------------------+
    | availability_zone |                    nova                   |
    |     created_at    |         2016-09-11T21:10:58.000000        |
    |    description    |                    None                   |
    |         id        |    950a5193-827b-49ab-9511-41ba120c9ebd   |
    |        name       |                  foovg_cg                 |
    |       status      |                  creating                 |
    |    volume_types   | [u'e49e9cc3-efc3-4e7e-bcb9-0291ad28df42'] |
    +-------------------+-------------------------------------------+
    # cinder --service-type volume consisgroup-list
    +--------------------------------------+-----------+----------+
    |                  ID                  |   Status  |   Name   |
    +--------------------------------------+-----------+----------+
    | 950a5193-827b-49ab-9511-41ba120c9ebd | available | foovg_cg |
    +--------------------------------------+-----------+----------+
    

    Create volume in this consistency group:

    # cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol1 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
    # cinder --service-type volume create --volume-type "svc1 base template" --name foovg_vol2 --consisgroup-id 950a5193-827b-49ab-9511-41ba120c9ebd 200
    +------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
    |           Property           |                                                                          Value                                                                           |
    +------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
    |         attachments          |                                                                            []                                                                            |
    |      availability_zone       |                                                                           nova                                                                           |
    |           bootable           |                                                                          false                                                                           |
    |     consistencygroup_id      |                                                           950a5193-827b-49ab-9511-41ba120c9ebd                                                           |
    |          created_at          |                                                                2016-09-11T21:23:02.000000                                                                |
    |         description          |                                                                           None                                                                           |
    |          encrypted           |                                                                          False                                                                           |
    |        health_status         | {u'health_value': u'PENDING', u'id': u'8d078772-00b5-45fc-89c8-82c63e2c48ed', u'value_reason': u'PENDING', u'updated_at': u'2016-09-11T21:23:02.669372'} |
    |              id              |                                                           8d078772-00b5-45fc-89c8-82c63e2c48ed                                                           |
    |           metadata           |                                                                            {}                                                                            |
    |       migration_status       |                                                                           None                                                                           |
    |         multiattach          |                                                                          False                                                                           |
    |             name             |                                                                        foovg_vol2                                                                        |
    |    os-vol-host-attr:host     |                                                                           None                                                                           |
    | os-vol-tenant-attr:tenant_id |                                                             1471acf124a0479c8d525aa79b2582d0                                                             |
    |      replication_status      |                                                                         disabled                                                                         |
    |             size             |                                                                           200                                                                            |
    |         snapshot_id          |                                                                           None                                                                           |
    |         source_volid         |                                                                           None                                                                           |
    |            status            |                                                                         creating                                                                         |
    |          updated_at          |                                                                           None                                                                           |
    |           user_id            |                                             0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9                                             |
    |         volume_type          |                                                                   svc1 base template                                                                     |
    +------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
    

    You're now able to attach these two volumes to a machine from the PowerVC GUI:

    consist

    # lsmpio -q
    Device           Vendor Id  Product Id       Size    Volume Name
    ------------------------------------------------------------------------------
    hdisk0           IBM        2145                 64G volume-aix72-44c7a72c-000000e0-
    hdisk1           IBM        2145                100G volume-snap1-dab0e2d1-130a
    hdisk2           IBM        2145                100G volume-snap2-5e863fdb-ab8c
    hdisk3           IBM        2145                200G volume-foovg_vol1-3ba0ff59-acd8
    hdisk4           IBM        2145                200G volume-foovg_vol2-8d078772-00b5
    # cfgmgr
    # lspv
    hdisk0          00c8b2add70d7db0                    rootvg          active
    hdisk1          00f9c9f51afe960e                    None
    hdisk2          00f9c9f51afe9698                    None
    hdisk3          none                                None
    hdisk4          none                                None
    

    Then you can create a snapshot of these two volumes. It's that easy :-) :

    # cinder --service-type volume cgsnapshot-create 950a5193-827b-49ab-9511-41ba120c9ebd
    +---------------------+--------------------------------------+
    |       Property      |                Value                 |
    +---------------------+--------------------------------------+
    | consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
    |      created_at     |      2016-09-11T21:31:12.000000      |
    |     description     |                 None                 |
    |          id         | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
    |         name        |                 None                 |
    |        status       |               creating               |
    +---------------------+--------------------------------------+
    # cinder --service-type volume cgsnapshot-list
    +--------------------------------------+-----------+------+
    |                  ID                  |   Status  | Name |
    +--------------------------------------+-----------+------+
    | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f | available |  -   |
    +--------------------------------------+-----------+------+
    # cinder --service-type volume cgsnapshot-show 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f
    +---------------------+--------------------------------------+
    |       Property      |                Value                 |
    +---------------------+--------------------------------------+
    | consistencygroup_id | 950a5193-827b-49ab-9511-41ba120c9ebd |
    |      created_at     |      2016-09-11T21:31:12.000000      |
    |     description     |                 None                 |
    |          id         | 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f |
    |         name        |                 None                 |
    |        status       |              available               |
    +---------------------+--------------------------------------+
    

    cgsnap
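    If you later need to work with that snapshot, the cinder CLI can also build a brand new consistency group from it, or delete it when you are done. A sketch (I have not tested every combination through PowerVC, so double check the exact options with cinder help):

    # cinder --service-type volume consisgroup-create-from-src --cgsnapshot 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f --name foovg_cg_copy
    # cinder --service-type volume cgsnapshot-delete 20e2ce6b-9c4a-4eea-b05d-f0b0b6e4768f
    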

    Conclusion

    Please keep in mind that the content of this blog post comes from real life and production examples. I hope you will be able to better understand that scalability, density, fast deployment, snapshots and multi tenancy are features that are absolutely needed in the AIX world. As you can see the PowerVC team is moving fast. Probably faster than any customer I have ever seen. I must admit they are right. Doing this is the only way to face the Linux x86 offering. And I must confess this is damn fun to work on these things. I'm so happy to have the best of two worlds, AIX/Power Systems and Openstack. This is the only direction we have to take if we want AIX to survive. So please stop being scared or unconvinced by these solutions, they are damn good and production ready. Please face and embrace the future and stop looking at the past. As always I hope it helps.
