Task 3. Prepare the nodes for distributed deployment

Perform the following steps to prepare the nodes for the distributed deployment of Service Manager Service Portal.

Preparation

Install the packages on HA nodes as shown below.

HA nodes                               Packages
Load Balancer node                     Ansible (2.2.0.0), nginx (1.12.2 or later), gluster (3.3.8.0 or later), ntp and ntpdate (4.2.0 or later)
Service Manager Service Portal nodes   gluster (3.3.8.0 or later), ntp and ntpdate (4.2.0 or later)
DB nodes                               pgpool (pgpool-II-pg96-3.5.5-1pgdg), ntp and ntpdate (4.2.0 or later)

Note During the HA installation, a pre-check script validates these prerequisites. Error messages are displayed if the requirements are not met.

For reference, you can download these packages (except Ansible) from the URLs listed in Appendix C. SMSP installation prerequisites download links.
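
As a rough sketch only: on a yum-based distribution (such as RHEL or CentOS), and assuming the required repositories from Appendix C are already configured, the package installation might look like the following. The package names shown here (for example, glusterfs-server) are typical but are assumptions and may differ in your environment.

On the Load Balancer node (Ansible is obtained separately, as noted above):

# yum install -y nginx glusterfs-server ntp ntpdate

On the Service Manager Service Portal nodes:

# yum install -y glusterfs-server ntp ntpdate

On the DB nodes:

# yum install -y pgpool-II-pg96 ntp ntpdate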

Step 1. Set a password for "propel" on each node

In this step, you will set a password for the "propel" user and grant this user sudo privileges to run any command on any host.

Note This password will be used in later configuration steps.

On each of the nodes (the LB node, application nodes, and database nodes), do the following:

  1. Run the following command:

    # passwd propel

    When prompted, enter a password (for example, propel2015).

  2. Run the following command:

    # visudo
  3. Insert the following line below the "root ALL=(ALL) ALL" line, as shown below:

    propel  ALL=(ALL)    ALL
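
Optionally, you can verify the sudo configuration before continuing; this quick check is not part of the documented procedure:

# su - propel
$ sudo -l

Enter the propel password when prompted. The output should include a "(ALL) ALL" entry for the propel user.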

Important Before performing the next steps, log on to the load balancer node as the "propel" user.

Step 2. Check network connectivity and get the host keys on the Load Balancer node

Verify the network connectivity between the Load Balancer (LB) node (acting as the Ansible Management Node) and all the Service Manager Service Portal node servers by making an ssh connection from the Load Balancer to the Service Manager Service Portal servers (cluster and DB nodes) using the FQDN.

Caution Do not forget to make an ssh connection to the Load Balancer server as well.

Run the following command:

# su - propel
# cd /opt/hp/propel/contrib/propel-distributed*
# ssh-keygen -t rsa -f ~/.ssh/id_rsa
# ssh-copy-id propel@<LB node host FQDN>
# ssh-copy-id propel@<application node 1 FQDN>
# ssh-copy-id propel@<application node 2 FQDN>
# ssh-copy-id propel@<master DB node FQDN>
# ssh-copy-id propel@<slave DB node FQDN>

After this step is completed, ssh connections from the Load Balancer to these nodes should no longer prompt for a password.
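
As an optional check that is not part of the documented procedure, you can run a remote command against each node to confirm that no password prompt appears; replace the placeholders with your own FQDNs:

# for host in <LB node host FQDN> <application node 1 FQDN> <application node 2 FQDN> <master DB node FQDN> <slave DB node FQDN>; do ssh propel@$host hostname; done

Each node should print its host name without asking for a password.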

Step 3. Define Ansible nodes (hosts) on the Load Balancer node

  1. Navigate to the /opt/hp/propel/contrib/propel-distributed.<version> directory.

  2. Copy the inventory/hosts.example file to inventory/hosts.default:

    # cp inventory/hosts.example inventory/hosts.default
  3. In the inventory/hosts.default file, change the fully qualified host names of all cluster nodes in the [lb], [propel], [db_m], and [db_s] sections, the IP address of the load balancer node in the [lb_ip] section, and the virtual IP address of the database cluster in the [db_vip] section to values that match your actual configuration. The following table provides a description of each section.

    Section        Description
    [lb]           Front-end load balancer address
    [lb_ip]        IP address of the load balancer
    [propel]       All Service Manager Service Portal application nodes within the Service Manager Service Portal cluster
    [db_m]         Service Manager Service Portal master DB node within the Service Manager Service Portal cluster (one)
    [db_s]         Service Manager Service Portal slave DB node within the Service Manager Service Portal cluster
    [db_vip]       The VIP address for the PostgreSQL cluster. A VIP is a virtual IP address, that is, an address that uses a virtual adapter on the pgpool cluster. Pgpool has a watchdog service that floats the IP address between the cluster nodes to provide a reliable connection. This unused IP should be ping-able, reachable within the same subnet as the Service Manager Service Portal application and DB nodes, and will be linked to the primary Ethernet port (eth0) of the Service Manager Service Portal DB nodes.
    [*:children]   Support roles.
                   Note Do not change this part.

    # vi inventory/hosts.default
    [lb]
    <LB node host FQDN>
     
    [lb_ip]
    <LB node host IP address>
     
    [propel]
    <application node 1 FQDN>
    <application node 2 FQDN>
     
    [db_m]
    <master DB node FQDN>
     
    [db_s]
    <slave DB node FQDN>
     
    [db_vip]
    <database cluster virtual IP address>
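
    For illustration only, a completed inventory might look like the following. The host names and the VIP address below are hypothetical placeholders; use the values from your own environment.

    [lb]
    lb1.example.com
     
    [lb_ip]
    10.0.0.10
     
    [propel]
    app1.example.com
    app2.example.com
     
    [db_m]
    db1.example.com
     
    [db_s]
    db2.example.com
     
    [db_vip]
    10.0.0.20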
    

Step 4. Check your Ansible node hosts file on the Load Balancer node

Verify that your Ansible node hosts file is set up correctly and recognized by Ansible. Run the following commands and verify that the results are correct:

# cd /opt/hp/propel/contrib/propel-distributed*
# ansible propeld -u propel -m ping -c paramiko

Note For every host, you may be asked whether the fingerprint is correct. Type "yes" and press Enter. The next time you run this command, it should finish without user input. The command should finish in seconds; if it takes longer, it may be waiting for your input.
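
For reference, successful output for each node resembles the following (shown here for a single node; the exact formatting depends on your Ansible version):

<application node 1 FQDN> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}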

Step 5. Install the distributed Service Manager Service Portal scripts on the Load Balancer node

Copy the group_vars/propeld.yml.example file to group_vars/propeld.yml:

# cp group_vars/propeld.yml.example group_vars/propeld.yml

Step 6. Check the network interface name on the database nodes

On each of the two database nodes, run the "ifconfig -a" command to check the network interface name.

Note The two database nodes should have the same network interface name. The default interface name is ens32, which is used as an example in the following steps.
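
If ifconfig is not available on a minimal installation, the interface names can also be listed with the ip command; this is only an optional alternative to the step above:

# ip -o link show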

Step 7. Define an alternate network interface name on the Load Balancer node

  1. On the Load Balancer node, run the following command:

    # vi group_vars/propeld.yml

    Uncomment the line:

    # interface: eth0

    Change eth0 in the line above to the network interface name that you obtained in the previous step. For example:

    interface: ens32
  2. Run the following command:

    # vi postgresql_handlers/defaults/main.yml

    Change eth0 in the interface line to your network interface name (ens32 in this example):

    interface: ens32
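
As a final optional check, not part of the documented procedure, you can confirm that both files now reference the same interface name:

# grep -n "interface:" group_vars/propeld.yml postgresql_handlers/defaults/main.yml

Both matching lines should show your network interface name (for example, ens32).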