Task 4. Prepare the load balancer node

Perform the following steps to prepare the load balancer node.

Important Before performing the steps, log on to the load balancer node as the "propel" user.

Step 1. Check network connectivity and get the host keys

Verify the network connectivity between the load balancer node (which also acts as the Ansible management node) and all of the Service Manager Service Portal node servers by making an SSH connection from the load balancer to each Service Manager Service Portal server (cluster and DB nodes), using the FQDN.

Caution Do not forget to make an ssh connection to the Load Balancer server as well.

Run the following commands:

# su - propel
# cd /opt/hp/propel/contrib/propel-distributed*
# ssh-keygen -t rsa -f ~/.ssh/id_rsa
# ssh-copy-id propel@<LB node host FQDN>
# ssh-copy-id propel@<application node 1 FQDN>
# ssh-copy-id propel@<application node 2 FQDN>
# ssh-copy-id propel@<master DB node FQDN>
# ssh-copy-id propel@<slave DB node FQDN>

Note After completing this step, ssh calls should execute without a password prompt.
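
To confirm that passwordless SSH works for every node before continuing, you can run a non-interactive check from the load balancer node. This is a minimal sketch; the placeholder FQDNs are the same ones used in the commands above and must be replaced with your actual host names.

# Non-interactive check: BatchMode makes ssh fail instead of prompting for a password.
for host in <LB node host FQDN> <application node 1 FQDN> <application node 2 FQDN> <master DB node FQDN> <slave DB node FQDN>; do
  ssh -o BatchMode=yes propel@"$host" hostname || echo "Passwordless SSH to $host FAILED"
done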

Step 2. Define Ansible nodes (hosts)

  1. Navigate to the /opt/hp/propel/contrib/propel-distributed.<version> directory.

  2. Copy the inventory/hosts.example file to inventory/hosts.default:

    # cp inventory/hosts.example inventory/hosts.default
  3. In the inventory/hosts.default file, update the following values to match your actual configuration: the fully qualified host names of all cluster nodes in the [lb], [propel], [db_m], and [db_s] sections; the IP address of the load balancer node in the [lb_ip] section; and the virtual IP address of the database cluster in the [db_vip] section. The following table describes each section.

    Section Description
    [lb] Front-end load balancer address
    [lb_ip] IP address of the load balancer
    [propel] All Service Manager Service Portal application nodes within the Service Manager Service Portal cluster
    [db_m] The Service Manager Service Portal master DB node within the Service Manager Service Portal cluster (there is only one)
    [db_s] The Service Manager Service Portal slave DB node within the Service Manager Service Portal cluster
    [db_vip] The VIP (virtual IP) address for the PostgreSQL cluster. This address uses a virtual adapter on the Pgpool cluster; the Pgpool watchdog service floats the IP address between the cluster nodes to provide a reliable connection. This unused IP address should be ping-able, reachable within the same subnet as the Service Manager Service Portal application and DB nodes, and will be linked to the primary Ethernet port (eth0) of the Service Manager Service Portal DB nodes.
    [*:children] Support roles. Caution Do not change this part unless you know what you are doing.

    # vi inventory/hosts.default
    [lb]
    <LB node host FQDN>
     
    [lb_ip]
    <LB node host IP address>
     
    [propel]
    <application node 1 FQDN>
    <application node 2 FQDN>
     
    [db_m]
    <master DB node FQDN>
     
    [db_s]
    <slave DB node FQDN>
     
    [db_vip]
    <database cluster virtual IP address>

    The following is an example:

    [lb]
    vm0541.hpe.net
     
    [lb_ip]
    1x.1xx.1xx.xx
     
    [propel]
    vm0546.hpe.net
    vm0624.hpe.net
     
    [db_m]
    vm0671.hpe.net
     
    [db_s]
    vm0682.hpe.net
     
    [db_vip]
    1x.1xx.1xx.1xx
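
    Optionally, you can verify from the load balancer node that each FQDN in the inventory resolves before running Ansible against it. The following is a minimal sketch using the host names from the example above; substitute your own.

    # Name-resolution check for the cluster FQDNs (example host names)
    for host in vm0541.hpe.net vm0546.hpe.net vm0624.hpe.net vm0671.hpe.net vm0682.hpe.net; do
      getent hosts "$host" || echo "Cannot resolve $host"
    done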
    

Step 3. Check your Ansible node hosts file

Verify that your Ansible node hosts file is set up correctly and is recognized by Ansible. Run the following commands and verify that every node responds successfully:

# cd /opt/hp/propel/contrib/propel-distributed*
# ansible propeld -u propel -m ping -c paramiko

Note For every host, you may be asked whether the fingerprint is correct. Type 'yes' and press Enter. The next time you run it, this command should finish without user input. The command should finish in a matter of seconds; if the execution takes longer, it might be waiting for your input.
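
The exact formatting depends on your Ansible version, but a successful check prints a result similar to the following for each node in the inventory (host name taken from the earlier example):

vm0546.hpe.net | SUCCESS => {
    "changed": false,
    "ping": "pong"
}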

Step 4. Install the distributed Service Manager Service Portal scripts

  1. Copy the group_vars/propeld.yml.example file to group_vars/propeld.yml:

    # cp group_vars/propeld.yml.example group_vars/propeld.yml
  2. Online installation: follow this step if your cluster has access to the Internet:

    Update the proxy settings in group_vars/propeld.yml according to your corporate proxy settings:

    # vim group_vars/propeld.yml
    
    proxy_env:
      http_proxy: http://proxy.example.com:8080
      https_proxy: http://proxy.example.com:8080
  3. Offline installation: follow these steps if your cluster has no Internet access:

    1. Set propeld.packages.download to false in group_vars/propeld.yml:

      # vim group_vars/propeld.yml
    2. Copy the Service Manager Service Portal Distributed scripts to a machine that is connected to the Internet and run download_packages.sh. The script should finish without any errors; if it does not, check the script output, resolve any issues, and run it again. After a successful run, the .packages directory is populated with the RPM packages required for the installation.
    3. Copy the .packages directory to the Service Manager Service Portal Distributed scripts directory on the load balancer node and proceed with the installation (see the sketch below).
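
    The following sketch illustrates sub-steps 2 and 3. It assumes the scripts were copied to a directory named propel-distributed on the Internet-connected machine and that scp is available between the two machines; adjust the paths and host names to your environment.

    # On the Internet-connected machine, run the download script from the scripts directory:
    cd /path/to/propel-distributed
    ./download_packages.sh

    # Copy the populated .packages directory back to the load balancer node:
    scp -r .packages propel@<LB node host FQDN>:/opt/hp/propel/contrib/propel-distributed.<version>/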

Step 5. Define an alternate network interface name on all database nodes

  1. Run 'ip a' and check the network interface name (an example of identifying the interface is shown at the end of this step).

    Note The default interface name is ens32, which is used as an example in the following steps.

  2. Run the following command:

    # vi group_vars/propeld.yml

    Uncomment the line:

    # interface: eth0

    Change eth0 in the line above to your network interface name. For example:

    interface: ens32
  3. Run the following command:

    # vi postgresql_handlers/defaults/main.yml

    Change eth0 in the interface line to your network interface name (ens32 in this example):

    interface: ens32
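
To identify the network interface name used in this step, you can list only the IPv4 addresses together with their interface names instead of reading the full 'ip a' output. The interface name and address below are examples only; yours will differ.

# On each database node, list IPv4 addresses and their interface names:
ip -o -4 addr show
# Example output:
#   1: lo      inet 127.0.0.1/8 scope host lo ...
#   2: ens32   inet 1x.1xx.1xx.xx/24 ... scope global ens32 ...
# The interface name to use in both YAML files here is ens32.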