Task 3. Prepare the nodes for distributed deployment

Perform the following steps to prepare the nodes for the distributed deployment of Service Manager Service Portal.

Step 1. Set a password for "propel" on each node

In this step, you will set a password for the "propel" user and allow this user to run any commands from anywhere.

Note This password will be used in later configuration tasks.

On each of the nodes (the LB node, application nodes, and database nodes), do the following:

  1. Run the following command:

    # passwd propel

    When prompted, enter a password (for example, propel2015).

  2. Run the following command:

    # visudo
  3. Insert the following line below the “root ALL=(ALL) ALL” line, as shown below:

    propel  ALL=(ALL)    ALL
    

    After this change, the lines should look like the following:

    root ALL=(ALL)       ALL
    propel  ALL=(ALL)    ALL
    

Important Before performing the next steps, log on to the load balancer node as the "propel" user.
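
For example, to confirm that the sudoers change took effect, you can run a quick check like the following as the "propel" user (a verification sketch, not part of the original procedure):

# su - propel
# sudo -l          # should list "(ALL) ALL" for the propel user
# sudo whoami      # should print "root" after you enter the propel password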

Step 2. Check network connectivity and get the host keys on the LB node

Verify network connectivity between the Load Balancer (LB) node, which acts as the Ansible management node, and all of the Service Manager Service Portal node servers by making an ssh connection from the LB node to each Service Manager Service Portal server (cluster and DB nodes) using its FQDN.

Caution Do not forget to make an ssh connection to the Load Balancer server as well.

Run the following commands:

# su - propel
# cd /opt/hp/propel/contrib/propel-distributed*
# ssh-keygen -t rsa -f ~/.ssh/id_rsa
# ssh-copy-id propel@<LB node host FQDN>
# ssh-copy-id propel@<application node 1 FQDN>
# ssh-copy-id propel@<application node 2 FQDN>
# ssh-copy-id propel@<master DB node FQDN>
# ssh-copy-id propel@<slave DB node FQDN>

Note After completing this step, ssh calls should execute without a password prompt.
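
To confirm that passwordless access works from the LB node, you can loop over the same FQDNs (a quick verification sketch; substitute your actual host names for the placeholders):

# for host in <LB node host FQDN> <application node 1 FQDN> <application node 2 FQDN> <master DB node FQDN> <slave DB node FQDN>; do
>     ssh propel@$host hostname   # each call should print the remote host name without a password prompt
> done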

Step 3. Define Ansible nodes (hosts) on the LB node

  1. Navigate to the /opt/hp/propel/contrib/propel-distributed.<version> directory.

  2. Copy the inventory/hosts.example file to inventory/hosts.default:

    # cp inventory/hosts.example inventory/hosts.default
  3. In the inventory/hosts.default file, update the following values to match your actual configuration: the fully qualified host names of all cluster nodes in the [lb], [propel], [db_m], and [db_s] sections; the IP address of the load balancer node in the [lb_ip] section; and the virtual IP address of the database cluster in the [db_vip] section. The following table describes each section, and a quick way to list the hosts that Ansible resolves is shown after the example.

    Section       Description
    [lb]          Front-end load balancer address
    [lb_ip]       IP address of the load balancer
    [propel]      All Service Manager Service Portal application nodes within the Service Manager Service Portal cluster
    [db_m]        The Service Manager Service Portal master DB node within the Service Manager Service Portal cluster (only one)
    [db_s]        The Service Manager Service Portal slave DB node within the Service Manager Service Portal cluster
    [db_vip]      The virtual IP (VIP) address for the PostgreSQL cluster. A VIP is an address that uses a virtual adapter on the pg_pool cluster. Pgpool has a watchdog service that floats the IP address between the cluster nodes to provide a reliable connection. This unused IP should be ping-able, reachable within the same subnet as the Service Manager Service Portal application and DB nodes, and will be linked to the primary Ethernet port (eth0) of the Service Manager Service Portal DB nodes.
    [*:children]  Support roles.

    Caution Do not change the [*:children] sections unless you know what you are doing.

    # vi inventory/hosts.default
    [lb]
    <LB node host FQDN>
     
    [lb_ip]
    <LB node host IP address>
     
    [propel]
    <application node 1 FQDN>
    <application node 2 FQDN>
     
    [db_m]
    <master DB node FQDN>
     
    [db_s]
    <slave DB node FQDN>
     
    [db_vip]
    <database cluster virtual IP address>

    The following is an example:

    [lb]
    vm0541.hpe.net
     
    [lb_ip]
    1x.1xx.1xx.xx
     
    [propel]
    vm0546.hpe.net
    vm0624.hpe.net
     
    [db_m]
    vm0671.hpe.net
     
    [db_s]
    vm0682.hpe.net
     
    [db_vip]
    1x.1xx.1xx.1xx
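
Optionally, you can ask Ansible to list the hosts it resolves for the propeld pattern (the same pattern used in the next step) before moving on. Run this from the propel-distributed directory so the same inventory configuration is used:

# ansible propeld --list-hosts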
    

Step 4. Check your Ansible node hosts file on the LB node

Verify that your Ansible node hosts file is set up correctly and recognized by Ansible. Run the following commands and verify the results look correct:

# cd /opt/hp/propel/contrib/propel-distributed*
# ansible propeld -u propel -m ping -c paramiko

Note For every host, you may be asked whether the fingerprint is correct. Type 'yes' and press Enter. The next time you run this command, it should finish without user input. The script should finish in a matter of seconds; if the execution takes longer, it might be waiting for your input.
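
For reference, a successful check typically prints output similar to the following for each host (the exact formatting varies between Ansible versions):

vm0546.hpe.net | SUCCESS => {
    "changed": false,
    "ping": "pong"
}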

Step 5. Install the distributed Service Manager Service Portal scripts on the LB node

  1. Copy the group_vars/propeld.yml.example file to group_vars/propeld.yml:

    # cp group_vars/propeld.yml.example group_vars/propeld.yml
  2. Online installation: follow this step if your cluster has access to the Internet:

    Update the proxy settings in group_vars/propeld.yml according to your corporate proxy settings:

    # vim group_vars/propeld.yml

    proxy_env:
      http_proxy: http://proxy.example.com:8080
      https_proxy: http://proxy.example.com:8080
  3. Offline installation: follow these steps if your cluster has no Internet access:

    1. Copy the packages.zip file from the online host to the /opt/hp/propel/contrib/propel-distributed*/ directory on the LB node (a transfer sketch is shown after these steps).

    2. Unzip the package file:

      # cd /opt/hp/propel/contrib/propel-distributed*/
      # unzip packages.zip
      # chown -R propel:root .packages
    3. Configure the propeld.yml file:

      # cd /opt/hp/propel/contrib/propel-distributed*/group_vars
      # cp -f propeld.yml.example propeld.yml
      # vi propeld.yml
      

      Set the download parameter in this file to false:

      packages:
        enabled: true
        # disable and populate `.packages/` for no-internet installation
        download: false
        # disable for no copying of packages to remote nodes
        upload: true
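
The following is a minimal sketch of how packages.zip could be copied from an Internet-connected host to the LB node; the source path /tmp/packages.zip is an assumption, so use whatever transfer method your environment allows:

# scp /tmp/packages.zip propel@<LB node host FQDN>:/opt/hp/propel/contrib/propel-distributed*/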
      

Step 6. Check the network interface name on the DB nodes

On each of the two database nodes, run the "ifconfig -a" command to check the network interface name.

Note The two database nodes should have the same network interface name. The default interface name is ens32, which is used as an example in the following steps.
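
If the ifconfig command is not available (it is not installed by default on some minimal RHEL/CentOS 7 systems), the same information can be obtained with the ip command:

# ip addr show    # look for the interface that carries the node's IP address, for example ens32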

Step 7. Define an alternate network interface name on the LB node

  1. On the LB node, run the following command:

    # vi group_vars/propeld.yml

    Uncomment the line:

    # interface: eth0

    Change eth0 in this line to the network interface name that you obtained in the previous step. For example:

    interface: ens32
  2. Run the following command:

    # vi postgresql_handlers/defaults/main.yml

    Change eth0 in the interface line to the same network interface name, as follows (ens32 in this example; a verification sketch for both files appears after this step):

    interface: ens32
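
To double-check that both files now reference the correct interface name, you can grep for the interface lines (a quick verification sketch, run from the propel-distributed directory):

# grep -n "interface" group_vars/propeld.yml postgresql_handlers/defaults/main.yml

Both lines should show your interface name, for example ens32.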

Step 8. Prepare the LB node for NTP installation

Note The detailed steps vary depending on whether your environment already has a network time synchronization tool (such as NTP) installed.
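
To find out whether a time synchronization service is already active, you can run checks such as the following on the nodes (a sketch; chronyd is included because some environments use chrony instead of ntpd):

# systemctl is-active ntpd chronyd    # prints "active" for any service that is running
# timedatectl status                  # shows whether the system clock is synchronized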

If your environment already has Network Time Protocol (NTP) or another network time synchronization tool installed, perform the following steps on the LB node to skip the NTP installation:

  1. Open the /opt/hp/propel/contrib/propel-distributed.contrib*/propeld.yml file.
  2. Comment out the ntp.yml include line as follows:

    ---
    - include: propel-ova.yml
    - include: packages.yml
    - include: db.yml
    - include: storage.yml
    - include: propel-shared.yml
    - include: propel-security.yml
    - include: messenger.yml
    - include: monitoring.yml
    - include: lb.yml
    - include: propel.yml
    #- include: ntp.yml

If your environment does not have any network time synchronization tool installed, perform the following steps instead on the LB node:

  1. Open the /opt/hp/propel/contrib/propel-distributed.contrib*/ntp/tasks/main.yml file.

  2. Insert the "NTP uninstall" and "NTP install" tasks before the "name: NTP synchronize time" line, as shown below:

    ---
    -
      name: NTP uninstall
      package:
        name: "{{ item }}"
        state: absent
      when: propeld.modules.packages.enabled
      with_items:
        - "ntp"
        - "ntpdate"
    
    -
      name: NTP install
      shell: "yum localinstall -y {{ yum.src.propel }}/{{ item }}/*.rpm"
      when: propeld.modules.packages.enabled
      with_items:
        - "ntp"
        - "ntpdate"
      notify:
        - NTP enable
    
    -
      name: NTP synchronize time
  3. In the /opt/hp/propel/contrib/propel-distributed.contrib*/packages_download/tasks/main.yml file, add the ntp and ntpdate entries shown below (an optional syntax check follows this step):

    -
      name: PACKAGES source
      command: "yumdownloader --destdir {{ tmp.stdout }}/{{ item.m }} --resolve {{ item.p }}"
      args:
        creates: "{{ tmp.stdout }}/{{ item.m }}/{{ item.p }}.rpm"
      with_items:
        - { m: "gluster", p: "glusterfs-server" }
        - { m: "nginx",   p: "nginx" }
        - { m: "pgpool",  p: "pgpool-II-pg{{ pgpool.version.postgresql }}-{{ pgpool.version.pgpool }}-1pgdg.rhel7.{{ ansible_architecture }}" }
        - { m: "ntp",   p: "ntp" }
        - { m: "ntpdate",   p: "ntpdate" }
      environment: "{{ proxy_env }}"
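
After editing these YAML files, you can optionally confirm that the playbook and its role task files still parse before running anything (a sketch, assuming you run it from the propel-distributed directory; a syntax check does not modify any nodes):

# ansible-playbook propeld.yml --syntax-check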