Document Release Date: June 2017
Software Release Date: June 2017
The only warranties for Seattle SpinCo, Inc and its subsidiaries (“Seattle”) products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Seattle shall not be liable for technical or editorial errors or omissions contained herein.
The information contained herein is subject to change without notice.
Confidential computer software. Except as specifically indicated, valid license from Seattle required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
© 2014-2017 Hewlett Packard Enterprise Development LP
Adobe® is a trademark of Adobe Systems Incorporated.
Apple is a trademark of Apple Computer, Inc., registered in the U.S. and other countries.
AMD is a trademark of Advanced Micro Devices, Inc.
Google™ is a trademark of Google Inc.
Intel®, Intel® Itanium®, Intel® Xeon®, and Itanium® are trademarks of Intel Corporation in the U.S. and other countries.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
Internet Explorer, Lync, Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
Red Hat® Enterprise Linux Certified is a registered trademark of Red Hat, Inc. in the United States and other countries.
sFlow is a registered trademark of InMon Corp.
UNIX® is a registered trademark of The Open Group.
The Network Node Manager iSPI Performance for Traffic Software (NNM iSPI Performance for Traffic) extends the capability of HPE Network Node Manager i Software (NNMi) to monitor the performance of the network. The NNM iSPI Performance for Traffic enriches the data obtained from the IP flow records that are exported by the routers in your NNMi network.
You must install the following components in your environment:
You can build one of the following monitoring environments after installing the NNM iSPI Performance for Traffic:
You can upgrade version 10.10 or 10.20 to version 10.30.
Choose the operating system on which the NNM iSPI Performance for Traffic 10.10 or 10.20 is currently installed.
The NNM iSPI Performance for Traffic supports the following operating systems:
For more information about the list of supported operating systems, see the NNMi Ultimate Support Matrix.
Select if the Master Collector is installed on the NNMi management server.
You can install the Master Collector in an HA cluster.
Also, specify if NNMi is installed in an HA cluster or an Application Failover cluster.
A database is used for storing NNMi and NNM iSPI Performance for Traffic data. You can select one of the following database options:
You can view your customized document on the screen, or print it.
If you have a PDF print driver installed on your computer, click Print to create PDF documents that are customized according to your selections. PDF print drivers are available from several open source and third-party providers.
The following steps are customized according to your selections. Check that your selections are correct.
If any selections are not correct, click Change.
Read the Supporting Documentation
Read the following documents to better prepare for this NNM iSPI Performance for Traffic installation:
Any system that you want to include as a node in an NNM iSPI Performance for Traffic HA cluster must meet the following requirements:
Virtual IP address for the HA cluster that is DNS-resolvable
Virtual hostname for the HA cluster that is DNS-resolvable
Additionally, the HA cluster configuration must, at a minimum, include the following items:
You must always use the following sequence while installing the NNM iSPI Performance for Traffic:
Tip: Before installing the NNM iSPI Performance for Traffic, make sure NNMi and NPS are successfully installed. The Master Collector installation process requires the details of the NPS system.
Depending on the scale of your network, you can install a single Master Collector with a single Leaf Collector, or a single Master Collector with multiple instances of the Leaf Collector. The NNMi Extension for iSPI Performance for Traffic must always be installed on the NNMi management server.
The NNMi Extension for iSPI Performance for Traffic adds the NNM iSPI Performance for Traffic related links and views into the NNMi workspace.
You must always install the NNMi Extension for iSPI Performance for Traffic on the NNMi management server.
In the NNMi Application Failover environment, make sure you install the NNMi Extension for iSPI Performance for Traffic on the primary and secondary NNMi systems. Before installing on the secondary system, fail over from the primary system to the secondary system.
Note: Install on both the nodes in the cluster. Put the NNMi resource group into the HA maintenance mode by placing the maintenance file under the following directory:
On Windows: %NnmDataDir%\hacluster\<resource_group>
On Linux: /var/opt/OV/hacluster/<resource_group>
Log on to the NNMi management server with administrative privileges. Make sure that the user with administrative privileges is a member of the local Administrators group.
Log on to the NNMi management server with root privileges.
Extract the contents of the NNM iSPI Performance for Traffic installation media (or mount the media).
When NNMi is on Windows: Go to the Traffic_NNM_Extension\WinNT folder on the installation media, and then double-click the setup.exe file.
When NNMi is on Linux: Go to the Traffic_NNM_Extension/Linux directory on the installation media, and then run the setup.bin file.
The installer configures your system for the installation and initializes the installation process.
Note: The system account is a special administrator account that NNMi creates during installation (see the Installing NNMi section in the HPE Network Node Manager i Software Interactive Installation Guide).
Since you plan to install the Master Collector in a high availability (HA) cluster environment, you must specify the virtual IP address or virtual FQDN of the cluster.
The following details are automatically detected by the installer:
Restart NNMi’s processes by running the following commands:
ovstop -c ovjboss
ovstart -c
You can also select Start > All Programs > HP > Network Node Manager > ovstop / ovstart.
You can now remove the maintenance file from both nodes.
The installation log file (postInstall_traffic-nnm.log) is available in the %temp% directory on Windows, or in the /tmp directory on Linux.
You can install the NNM iSPI Performance for Traffic Master Collector on the NNMi management server or on a standalone, remote server.
You can install only one Master Collector in your environment. In a Global Network Management (GNM) environment, you must install one Master Collector for every region.
Preinstallation Tasks
Create a New User with the Web Service Client Role on the NNMi Management Server
Create a user from the NNMi console with the Web Service Client role. For more information on creating a new user with the Web Service Client role, see the Network Node Manager i Software Help for Administrators.
Create New Oracle Users
If you configured NNMi to use an Oracle database, the NNM iSPI Performance for Traffic also must be configured to use Oracle as its database. You can use the same Oracle instance that is used by NNMi, but you must use a unique Oracle user instance for the Master Collector.
Note Down the Details of the Oracle Server
Note down the following details of the Oracle database instance that you want to use with the Master Collector:
Additional Oracle Requirements
The database administrator needs to create a tablespace that will only be used by the collector user accessing the database.
Assign a tablespace size depending on the number of nodes in your installation. For example, for an 18,000-node network, set your beginning tablespace size to 12 gigabytes (GB). Set the option for unlimited tablespace extensions in increments of 12 GB.
The database requirement grows as the collector collects additional records, so watch this growth carefully and expand your configured tablespace size when necessary.
Create an Oracle user and assign the user to the newly created tablespace. The user should have the following permissions:
SELECT ANY DICTIONARY
You can opt not to grant the SELECT ANY DICTIONARY permission to the user. However, if you do not grant this permission, the NNM iSPI Performance for Traffic does not show any information in the Health tab (under the Help > System Information menu).
For Oracle 12.1.0.1.0 only: The SELECT ANY DICTIONARY permission is required for the collector installation if you use Oracle 12.1.0.1.0. You can revoke the permission after the installation is complete.
Make note of the Oracle user name and password; you will need this information during the collector installation.
Oracle provides the high-availability solution Oracle Real Application Cluster (RAC). The RAC solution uses two physical Oracle database servers. If the first server malfunctions or the administrator invokes a failover (for example, to complete maintenance on the first server), the second server automatically takes over, and the collector begins using the second server. A short window of data loss might occur after the failover; the amount of data loss increases with the size of the managed network and the rate of traps and incidents being evaluated. To configure RAC, work with your Oracle database administrator to install an Oracle database according to the instructions provided by Oracle.
Note Down the Details of the NNMi Management Server
Open the nms-local.properties file with a text editor, and note down the values of the following properties:
com.hp.ov.nms.fqdn: The FQDN of the NNMi management server.
nmsas.server.port.web.http: The HTTP port used by NNMi.
nmsas.server.port.web.https: The HTTPS port used by NNMi.
nmsas.server.port.naming.port: The JNDI port of NNMi. If this property is commented out (with the #! characters) in the file, NNMi uses the default JNDI port, 1099.
If NNMi is installed and configured in the application failover mode, you must also note down the above properties from the nms-local.properties file on the secondary NNMi management server.
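The JNDI-port defaulting behavior can be sketched as follows. This is an illustrative helper, not a product tool; the property names come from the guide, but the helper function, sample file content, and values are assumptions.

```shell
# Sketch: read a property from nms-local.properties content, falling back to a
# default when the line is commented out with the "#!" characters.
get_prop() {
  # $1 = property name, $2 = default value; file content arrives on stdin
  val=$(grep "^$1=" | head -n 1 | cut -d= -f2)
  echo "${val:-$2}"
}

# Illustrative file content (not from a real system)
sample='com.hp.ov.nms.fqdn=nnmi.example.com
nmsas.server.port.web.http=80
#!nmsas.server.port.naming.port=1099'

echo "$sample" | get_prop com.hp.ov.nms.fqdn unknown            # the FQDN
echo "$sample" | get_prop nmsas.server.port.naming.port 1099    # default 1099, since the line is commented out
```

On a real NNMi management server you would read the actual nms-local.properties file instead of the sample string.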
Ensure the Availability of Necessary Ports
On the Master Collector system, make sure the following ports are available for use:
12043, 12080, 12083, 12084, 12085, 12086, 12087, 12092, 12099, 12458, 12500, 12501, 12712, 12713, 12714, and 12873
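One way to verify the ports are free is to compare the required list against the system's listening sockets. The sketch below is an assumption, not a product utility; the port list is from the guide, while the function name and the sample listener line are illustrative.

```shell
# Required Master Collector ports (from the guide)
required_ports="12043 12080 12083 12084 12085 12086 12087 12092 12099 12458 12500 12501 12712 12713 12714 12873"

# Sketch: given "ss -lnt"-style output on stdin, print each required port
# that already has a listener. On a real system capture the input with:
#   ss -lnt    (or netstat -lnt)
ports_in_use() {
  # $1 = space-separated ports to check
  listeners=$(cat)
  for port in $1; do
    printf '%s\n' "$listeners" | grep -q ":$port " && echo "$port"
  done
  return 0
}

# Illustrative listener line: port 12080 is taken, so it is reported
printf 'LISTEN 0 128 0.0.0.0:12080 0.0.0.0:*\n' | ports_in_use "$required_ports"
```

Any port the function prints must be freed before you install the Master Collector.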
Ensure KornShell is Installed
On the Master Collector system, make sure KornShell is installed.
To check that KornShell is installed, open a command-line terminal and type the following command:
ksh
If KornShell is installed, the command prompt changes to ksh.
If KornShell is not installed, the following error message appears:
ksh is not found
If you do not have KornShell installed, you can download it from http://www.kornshell.com/software/ and install it.
Ensure the Availability of GID 26
Make sure that the Group ID (GID) 26 is not used by any existing user groups on the system.
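The GID check can be sketched as below. This is an illustrative helper under stated assumptions: the sample group data is invented, and on a real system you would simply run `getent group 26` and confirm it returns nothing.

```shell
# Sketch: check whether a GID already appears in /etc/group-style data
gid_in_use() {
  # $1 = GID to check; group-file content arrives on stdin
  awk -F: -v gid="$1" '$3 == gid { found = 1 } END { exit !found }'
}

# Illustrative group data (not from a real system)
groups='root:x:0:
daemon:x:26:
users:x:100:'

if printf '%s\n' "$groups" | gid_in_use 26; then
  echo "GID 26 is in use"
else
  echo "GID 26 is free"
fi
```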
Ensure the Availability of the Required Libraries
Make sure that both the 64-bit compat-libstdc++ and 32-bit compat-libstdc++ libraries are available before installing the Master Collector. The Master Collector requires the following exact library versions; however, the RPM versions may vary depending on the minor release of Linux.
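A quick way to confirm both architectures are present is to filter the installed-package list for compat-libstdc++. The sketch below checks a sample list; on a real system the list would come from `rpm -qa --queryformat '%{NAME}.%{ARCH}\n'`. The sample package names are illustrative, not from the guide.

```shell
# Illustrative installed-package list (on a real system, generate with rpm -qa)
pkg_list='glibc.x86_64
compat-libstdc++-33.x86_64
compat-libstdc++-33.i686'

# Verify both the 64-bit and 32-bit compat-libstdc++ packages are present
for arch in x86_64 i686; do
  if printf '%s\n' "$pkg_list" | grep -q "^compat-libstdc++.*\.$arch$"; then
    echo "compat-libstdc++ ($arch) present"
  else
    echo "compat-libstdc++ ($arch) MISSING"
  fi
done
```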
HPE Public Key
You must import the HPE public key into the Linux RPM database before installation.
To import the HPE public key, follow the instructions available on the following web page:
https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPLinuxCodeSigning
For more prerequisite information, see the NNMi Ultimate Support Matrix.
Install the collector on the active node in the HA cluster first.
Before Installing in the NNMi HA Cluster:
Put the NNMi resource group into the HA maintenance mode by placing the maintenance file under the following directory:
On Windows: %NnmDataDir%\hacluster\<resource_group>
On Linux: /var/opt/OV/hacluster/<resource_group>
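On Linux, placing the maintenance file amounts to creating an empty file in the resource group's hacluster directory. A minimal sketch follows; "traffic_rg" is an illustrative resource group name, not from the guide.

```shell
# Illustrative resource group name; substitute your actual resource group
resource_group="traffic_rg"

# Linux path from the guide: /var/opt/OV/hacluster/<resource_group>/maintenance
maint_file="/var/opt/OV/hacluster/$resource_group/maintenance"
echo "$maint_file"

# On a real cluster node you would then create the file:
#   touch "$maint_file"
```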
Install the Master Collector on the primary (active) node in the cluster, but do not start the collector.
To install the Master Collector:
On Linux, import the HPE public key as described at https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPLinuxCodeSigning.
On Windows: Go to the Traffic_Master directory on the installation media, and then double-click the setup file. The installer configures your system for the installation and initializes the installation process.
On Linux: Go to the Traffic_Master directory on the installation media, and then run the following command:
./setup
The installation initialization process prompts you to choose the language you want to use. The installer then configures your system for the installation and initializes the installation process.
Host: The FQDN of the Oracle server
If you are using Oracle RAC:
On the Pre-Install Summary page, review your installation choices and click Install. The installation process begins.
The Choose Java JDK dialog box opens.
NNM iSPI Performance for Traffic requires that Java Development Kit (JDK) 1.8 be available on the system. This version of the NNM iSPI Performance for Traffic installer contains OpenJDK 1.8. You can select the Install bundled OpenJDK option to install OpenJDK 1.8 that is embedded with the NNM iSPI Performance for Traffic media.
Alternatively, if another version of JDK 1.8 is already available on the system, you can select the Use Already-Installed JDK option, and then click Browse to select the path to the JDK.
On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
For example:
To install Red Hat OpenJDK 1.8.x on Red Hat Enterprise Linux, run the following command:
yum install java-1.8.0-openjdk-devel.x86_64
To install SUSE OpenJDK 1.8.x on SUSE Linux, run the following command:
zypper install java-1_8_0-openjdk
To find out the directory where JDK is installed, run one of the following commands:
whereis java
which java
On Windows, it is recommended that you install the Oracle JDK 1.8.x.
Tip: Click Validate to check that the specified path is valid.
After making a selection, click Continue.
Do not specify any other parameter.
Note: If the NPS server is configured in a distributed deployment, you must type the FQDN of the Database Server (DB Server) in the Performance SPI Server Configuration dialog box.
Once the installer completes installing the Master Collector, the Installation Complete page opens.
The installation log file (postInstall_traffic-master.log) is available in the %temp% directory on Windows, or in the /tmp directory on Linux.
Oracle RAC requires a secondary Oracle RAC server. To configure that information, after collector installation, do the following:
Open the following file in a text editor:
On Windows: %TrafficDataDir%\nmsas\traffic-master\server.properties
On Linux: /var/opt/OV/nmsas/traffic-master/server.properties
Add the following lines:
com.hp.ov.nms.oracle.otherHost=<second_host_in_the_cluster>
com.hp.ov.nms.oracle.serviceName=<logical_name>
In this instance, <second_host_in_the_cluster> is the FQDN of the second host in the Oracle RAC, and <logical_name> is the logical name of the Oracle RAC.
Tip: If the properties already exist in the file, make sure that they are set to the correct values.
Add the following string:
com.hp.ov.nms.oracle.connection.url=${com.hp.ov.nms.oracle.connection.cluster.url}
Save the file.
Use the following commands to restart the collector:
On Windows:
%TrafficInstallDir%\traffic-master\bin\nmstrafficmasterstop.ovpl
%TrafficInstallDir%\traffic-master\bin\nmstrafficmasterstart.ovpl
On Linux:
/opt/OV/traffic-master/bin/nmstrafficmasterstop.ovpl
/opt/OV/traffic-master/bin/nmstrafficmasterstart.ovpl
After installing on the active node:
Modify the login-config.xml file (in the %NnmInstallDir%\trafficmaster\server\conf directory on Windows, or the /opt/OV/traffic-master/server/conf directory on Linux) to reflect the virtual FQDN of the NNMi management server:
Look for the element <module-option name="nnmAuthUrl">, and then modify the string contained within the element accordingly.
In the nnm.extended.properties file, set the com.hp.ov.nms.spi.trafficmaster.Nnm.perfspidatapath property to the value that was displayed by the nnmenableperfspi.ovpl script.
The nnmenableperfspi.ovpl script records all the details in the nnmenableperfspi_log.txt file (available in the %NnmDataDir%\log directory on Windows or the /var/opt/OV/log directory on Linux) on the NNMi system, which you can use for reference.
Remove the maintenance file that you created before installing the Master Collector on the active node.
Run ovstatus -c to make sure that ovjboss is running.
Put the NNMi resource group into the HA maintenance mode by placing the maintenance file under the following directory:
On Windows: %NnmDataDir%\hacluster\<resource_group>
On Linux: /var/opt/OV/hacluster/<resource_group>
Modify the login-config.xml file (in the %NnmInstallDir%\trafficmaster\server\conf directory on Windows, or the /opt/OV/traffic-master/server/conf directory on Linux) to reflect the virtual FQDN of the NNMi management server:
Look for the element <module-option name="nnmAuthUrl">, and then modify the string contained within the element accordingly.
Remove the maintenance file that you created above.
Run the following command on the active node first, and then on the passive node:
%NnmInstallDir%\misc\nnm\ha\nnmhaconfigure.ovpl NNM -addon TRAFFIC
/opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl NNM -addon TRAFFIC
Verify that the Master Collector is successfully registered by running the following command:
%NnmInstallDir%\misc\nnm\ha\nnmhaclusterinfo.ovpl -config NNM -get NNM_ADD_ON_PRODUCTS
/opt/OV/misc/nnm/ha/nnmhaclusterinfo.ovpl -config NNM -get NNM_ADD_ON_PRODUCTS
After installing on the active node:
Run the following command after installation to create a user that can start the Master Collector processes:
%TrafficInstallDir%\traffic-master\bin\nmstrafficmastersetuser.ovpl [--domain=<DomainName>] --username=<AdministratorUsername> --password=<AdministratorPassword>
In this instance, <DomainName> is the fully qualified domain name of the Master Collector. Note that --domain is a mandatory parameter if you are using a domain account.
If NPS is installed on a Windows system, you must specify the user name and password of the user that was created by the nnmenableperfspi.ovpl
command (See the HPE Network Node Manager iSPI Performance for Metrics Interactive Installation Guide).
If you are using a non-English system, make sure the user is a member of the Administrators group on the Master Collector system.
Make sure that the username that you specified for nmstrafficmastersetuser.ovpl
is the same user that has the read/write access rights to the shared network directory.
Copy the NNM iSPI Performance for Traffic data to the shared disk:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhadisk.ovpl TRAFFIC -to <HA_mount_point>
/opt/OV/misc/nnm/ha/nnmhadisk.ovpl TRAFFIC -to <HA_mount_point>
Note: To prevent database corruption, run this command (with the -to option) only once.
Run the following command to configure the NNM iSPI Performance for Traffic HA resource group:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhaconfigure.ovpl TRAFFIC
/opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl TRAFFIC
Specify the details specific to this cluster (and not the cluster where NNMi may exist) while answering the questions asked by the script (see Table: NNMi HA Primary Node Configuration Information in the NNMi Deployment Reference).
Run the following command to start the resource group:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhastartrg.ovpl TRAFFIC <resource_group>
/opt/OV/misc/nnm/ha/nnmhastartrg.ovpl TRAFFIC <resource_group>
Run the following command after installation to create a user on the passive node that can start the Master Collector processes:
%TrafficInstallDir%\traffic-master\bin\nmstrafficmastersetuser.ovpl [--domain=<DomainName>] --username=<AdministratorUsername> --password=<AdministratorPassword>
In this instance, <DomainName> is the fully qualified domain name of the Master Collector. Note that --domain is a mandatory parameter if you are using a domain account.
If NPS is installed on a Windows system, you must specify the user name and password of the user that was created by the nnmenableperfspi.ovpl
command (See the HPE Network Node Manager iSPI Performance for Metrics Interactive Installation Guide).
Make sure that the username that you specified for nmstrafficmastersetuser.ovpl
is the same user that has the read/write access rights to the shared network directory.
If you are using a non-English system, make sure the user is a member of the Administrators group on the Master Collector system.
Run the following command to configure the NNM iSPI Performance for Traffic HA resource group:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhaconfigure.ovpl TRAFFIC
/opt/OV/misc/nnm/ha/nnmhaconfigure.ovpl TRAFFIC
Provide the same details that were provided during active node configuration.
Run the following command to verify that the configuration was successful.
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhaclusterinfo.ovpl -group <resource_group> -nodes
/opt/OV/misc/nnm/ha/nnmhaclusterinfo.ovpl -group <resource_group> -nodes
The command output lists all configured nodes for the specified HA resource group.
Alternatively, test the configuration by failing over to the server that was active when you started this procedure.
If you want to install multiple Leaf Collectors, you must install all instances of the Leaf Collector on systems where the Master Collector is not installed. The Master Collector and a Leaf Collector instance cannot coexist on the same system when multiple Leaf Collectors are installed on the network.
Preinstallation Tasks
Create New Oracle Users
If you configured NNMi to use an Oracle database, the NNM iSPI Performance for Traffic also must be configured to use Oracle as its database. You can use the same Oracle instance that is used by NNMi, but you must use a unique Oracle user instance for each Leaf Collector. For example, if you want to install five Leaf Collector Systems, create five different Oracle users.
Note Down the Details of the Oracle Server
Note down the following details of the Oracle database instance that you want to use with the NNM iSPI Performance for Traffic Leaf Collector:
Additional Oracle Requirements
The database administrator needs to create a tablespace that will only be used by the collector user accessing the database.
Assign a tablespace size depending on the number of nodes in your installation. For example, for an 18,000-node network, set your beginning tablespace size to 12 gigabytes (GB). Set the option for unlimited tablespace extensions in increments of 12 GB.
The database requirement grows as the collector collects additional records, so watch this growth carefully and expand your configured tablespace size when necessary.
Create an Oracle user and assign the user to the newly created tablespace. The user should have the following permissions:
SELECT ANY DICTIONARY
You can opt not to grant the SELECT ANY DICTIONARY permission to the user. However, if you do not grant this permission, the NNM iSPI Performance for Traffic does not show any information in the Health tab (under the Help > System Information menu).
For Oracle 12.1.0.1.0 only: The SELECT ANY DICTIONARY permission is required for the collector installation if you use Oracle 12.1.0.1.0. You can revoke the permission after the installation is complete.
Make note of the Oracle user name and password; you will need this information during the collector installation.
Oracle provides the high-availability solution Oracle Real Application Cluster (RAC). The RAC solution uses two physical Oracle database servers. If the first server malfunctions or the administrator invokes a failover (for example, to complete maintenance on the first server), the second server automatically takes over, and the collector begins using the second server. A short window of data loss might occur after the failover; the amount of data loss increases with the size of the managed network and the rate of traps and incidents being evaluated. To configure RAC, work with your Oracle database administrator to install an Oracle database according to the instructions provided by Oracle.
Ensure the Availability of Necessary Ports
On the Leaf Collector systems, make sure the following ports are available for use:
11043, 11080, 11083, 11084, 11085, 11086, 11087, 11092, 11099, 11458, 11500, 11501, 11712, 11713, 11714, and 11813
Ensure KornShell is Installed
On the Leaf Collector system, make sure that KornShell is installed.
To check that KornShell is installed, open a command-line terminal, and then type the following command:
ksh
If KornShell is installed, the command prompt changes to ksh.
If KornShell is not installed, the following error message appears:
ksh is not found
If you do not have KornShell installed, you can download it from http://www.kornshell.com/software/ and install it.
Ensure the Availability of GID 26
Make sure that the Group ID (GID) 26 is not used by any existing user groups on the system.
Ensure the Availability of the Required Libraries
Make sure that both the 64-bit compat-libstdc++ and 32-bit compat-libstdc++ libraries are available before installing the Leaf Collector. The Leaf Collector requires the following exact library versions; however, the RPM versions may vary depending on the minor release of Linux.
HPE Public Key
You must import the HPE public key into the Linux RPM database before installation.
To import the HPE public key, follow the instructions available on the following web page:
https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPLinuxCodeSigning
For more prerequisite information, see the NNMi Ultimate Support Matrix.
To install the Leaf Collector, follow these steps:
Note: Use the following procedure for all types of installation scenarios of the Leaf Collector: the Leaf Collector on the NNMi management server, the Leaf Collector on a standalone system, the Leaf Collector on the Master Collector system, and the Leaf Collector on the NPS system.
Log on to the system where you want to install the collector with administrative privileges. Make sure that the user with administrative privileges is a member of the local Administrators group.
Log on to the system where you want to install the collector with root privileges.
Extract the contents of installation media (or mount the media).
On Windows: Go to the Traffic_Leaf directory on the installation media, and then double-click the setup file. The installer configures your system for the installation and initializes the installation process.
On Linux: Use the cd command to change to the /cdrom directory, go to the Traffic_Leaf directory, and then run the following command:
./setup
The installation initialization process prompts you to choose the language you want to use. The installer then configures your system for the installation and initializes the installation process.
Host: The FQDN of the Oracle server
If you are using Oracle RAC:
On the Pre-Install Summary page, review your installation choices and click Install. The installation process begins.
The Choose Java JDK dialog box opens.
Note: The dialog box does not appear when the Leaf Collector is installed on the NNMi management server. If you are installing the Leaf Collector on the management server, skip to the next step.
NNM iSPI Performance for Traffic requires that Java Development Kit (JDK) 1.8 be available on the system. This version of the NNM iSPI Performance for Traffic installer contains OpenJDK 1.8. You can select the Install bundled OpenJDK option to install OpenJDK 1.8 that is embedded with the NNM iSPI Performance for Traffic media.
Alternatively, if another version of JDK 1.8 is already available on the system, you can select the Use Already-Installed JDK option, and then click Browse to select the path to the JDK.
On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
For example:
To install Red Hat OpenJDK 1.8.x on Red Hat Enterprise Linux, run the following command:
yum install java-1.8.0-openjdk-devel.x86_64
To install SUSE OpenJDK 1.8.x on SUSE Linux, run the following command:
zypper install java-1_8_0-openjdk
To find out the directory where JDK is installed, run one of the following commands:
whereis java
which java
On Windows, it is recommended that you install the Oracle JDK 1.8.x.
Tip: Click Validate to check that the specified path is valid.
After making a selection, click Continue.
Note down this password. You require this password to configure the Leaf Collector using the NNM iSPI Performance for Traffic Configuration form. You can specify a different password for every Leaf Collector that you install.
Retype Password: Type the password again.
The following details are automatically detected by the installer:
Traffic Leaf FQDN: The Fully Qualified Domain Name of the Leaf Collector
The installation log file (postInstall_traffic-leaf.log) is available in the %temp% directory on Windows, or in the /tmp directory on Linux.
Oracle RAC requires a secondary Oracle RAC server. To configure that information, after collector installation, do the following:
Open the following file in a text editor:
On Windows: %TrafficDataDir%\nmsas\traffic-leaf\server.properties
On Linux: /var/opt/OV/nmsas/traffic-leaf/server.properties
Add the following properties:
com.hp.ov.nms.oracle.otherHost=<second_host_in_the_cluster>
com.hp.ov.nms.oracle.serviceName=<logical_name>
In this instance, <second_host_in_the_cluster> is the FQDN of the second host in the Oracle RAC, and <logical_name> is the logical name of the Oracle RAC.
Tip: If the properties already exist in the file, make sure that they are set to the correct values.
Add the following string:
com.hp.ov.nms.oracle.connection.url=${com.hp.ov.nms.oracle.connection.cluster.url}
Save the file.
Use the following commands to restart the collector:
On Windows:
%TrafficInstallDir%\traffic-leaf\bin\nmstrafficleafstop.ovpl
%TrafficInstallDir%\traffic-leaf\bin\nmstrafficleafstart.ovpl
On Linux:
/opt/OV/traffic-leaf/bin/nmstrafficleafstop.ovpl
/opt/OV/traffic-leaf/bin/nmstrafficleafstart.ovpl
The NNM iSPI Performance for Traffic interacts frequently with NNMi and NPS. After installing the NNM iSPI Performance for Traffic, you must make sure that the product is able to interact with both NNMi and NPS.
You can directly upgrade NNM iSPI Performance for Traffic version 10.00 to version 10.30.
This procedure helps you upgrade the NNM iSPI Performance for Traffic along with NNMi to the version 10.30.
This procedure follows a multi-step method of:
Upgrading the components of the NNM iSPI Performance for Traffic to the version 10.30 on new systems
Note: After upgrading all the components of the NNM iSPI Performance for Traffic to the version 10.30, you can upgrade the operating system of each system hosting the Master or Leaf Collector to Red Hat Enterprise Linux 7.x.
Pre-upgrade checks:
Make sure the latest patch (NNM iSPI Performance for Traffic 10.10) is applied on the existing NNM iSPI Performance for Traffic.
HPE Public Key
You must import the HPE public key into the Linux RPM database before installation.
To import the HPE public key, follow the instructions available on the following web page:
https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=HPLinuxCodeSigning
Before you begin the procedure, make sure the latest patch (NNM iSPI Performance for Traffic 10.10) is applied on the existing NNM iSPI Performance for Traffic.
Perform these tasks to prepare for the upgrade:
Install NNMi 10.10 or 10.20 on a new server that is running Red Hat Enterprise Linux 6.4. While specifying the details of the Oracle database instance, make sure to type the details of the Oracle database instance that you newly created for NNMi (created here).
If NNMi originally existed in an Application Failover or HA cluster environment, plan your installation in a similar environment with new servers.
Follow the instructions in the NNMi 10.30 Interactive Installation and Upgrade Guide.
Install the NNMi Extension for iSPI Performance for Traffic (version 9.20 or 10.20) on the new NNMi management server.
Follow the instructions in the NNM iSPI Performance for Traffic 9.20 or 10.20 Installation Guide to install the NNMi Extension for iSPI Performance for Traffic.
Apply the 9.21 patch after installing NNM iSPI Performance for Traffic 9.20.
Install the Master Collector (version 10.00 or 10.10) on a new server, or on the new NNMi management server, that is running Red Hat Enterprise Linux 6.4.
Before installing the Master Collector on the Red Hat Enterprise Linux server, complete these preinstallation tasks:
Create a symbolic link to ksh. Run the following command:
ln -s /bin/ksh /usr/bin/ksh
Delete all instances of users that can conflict with the Master Collector installer. Run the following command:
userdel postgres
Create a log file used by the Master Collector installer. Run the following commands:
touch /tmp/postgres.log
chmod 0777 /tmp/postgres.log
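The three preinstallation tasks can be combined into one root shell sketch. Each step is guarded so it is skipped when it has already been done or does not apply:

```shell
# 1. The installer expects ksh at /usr/bin/ksh
[ -e /usr/bin/ksh ] || ln -s /bin/ksh /usr/bin/ksh 2>/dev/null || true
# 2. Remove a conflicting postgres user, if present (requires root)
if id postgres >/dev/null 2>&1; then userdel postgres || true; fi
# 3. Pre-create the world-writable log file used by the installer
touch /tmp/postgres.log
chmod 0777 /tmp/postgres.log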
Follow the instructions in the NNM iSPI Performance for Traffic 9.20 or 10.00 Installation Guide to install the collector.
If the Master Collector originally existed in the HA cluster environment, plan your installation in a similar environment with new servers.
When the installation wizard prompts you to specify the NPS hostname, type the hostname of the NPS system.
While specifying the details of the Oracle database instance, make sure to type the details of the Oracle database instance that you newly created for the Master Collector (created here).
Apply the 9.21 patch after installing NNM iSPI Performance for Traffic 9.20.
Note: Stop the collector after installing the patch.
Install each instance of the Leaf Collector (version 9.20 or 10.00) on a new server that is running Red Hat Enterprise Linux 6.4.
Before installing the Leaf Collector on the Red Hat Enterprise Linux server, complete these preinstallation tasks:
Create a symbolic link to ksh. Run the following command:
ln -s /bin/ksh /usr/bin/ksh
Delete all instances of users that can conflict with the Leaf Collector installer. Run the following command:
userdel postgres
Create a log file used by the Leaf Collector installer. Run the following commands:
touch /tmp/postgres.log
chmod 0777 /tmp/postgres.log
Follow the instructions in the NNM iSPI Performance for Traffic 9.20 or 10.00 Installation Guide to install the collector.
While installing a Leaf Collector on the NNMi management server and specifying the details of the Oracle database instance, make sure to type the details of the Oracle database instance that you newly created for the Leaf Collector (created here).
Apply the 9.21 patch after installing NNM iSPI Performance for Traffic 9.20.
Note: Stop the collector after installing the patch.
Note: Before you begin this step, make sure the latest patch (NNM iSPI Performance for Traffic 9.21) is applied on the newly created NNM iSPI Performance for Traffic environment.
Now you must restore all the data that was backed up on the old systems to the new systems.
Upgrade NNMi to version 10.30 on the Red Hat Enterprise Linux 6.4 system.
Upgrade NPS to the version 10.30.
See the NNM iSPI Performance for Metrics Interactive Installation Guide for more information.
After upgrading, make sure to stop the ETL process by running the following command:
stopETL.ovpl
In the NNMi Application failover environment, make sure you upgrade the NNMi Extension for iSPI Performance for Traffic on the primary and secondary NNMi systems. Before upgrading on the secondary system, fail over from the primary system to the secondary system.
To upgrade the NNMi Extension for iSPI Performance for Traffic to 10.30 on this server, log on to the system as root (on Linux) or administrator (on Windows), and then follow these steps:
Note: Upgrade on both the nodes in the cluster. Put the NNMi resource group into the HA maintenance mode by placing the maintenance file under the following directory:
On Windows: %NNMDataDir%\hacluster\<resource_group>
On Linux: /var/opt/OV/hacluster/<resource_group>
You can remove the maintenance file after upgrading the NNMi Extension for iSPI Performance for Traffic.
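HA maintenance mode is driven purely by the presence of an empty file named maintenance in the resource-group directory. A minimal sketch, using a scratch directory as a stand-in for /var/opt/OV/hacluster/<resource_group> and a hypothetical resource group name trafficRG:

```shell
RG_DIR=/tmp/hacluster/trafficRG   # stand-in; real path: /var/opt/OV/hacluster/<resource_group>
mkdir -p "$RG_DIR"                # the real directory already exists on an HA node
touch "$RG_DIR/maintenance"       # suspends HA monitoring of the resource group
# After the upgrade, resume monitoring by deleting the file:
# rm "$RG_DIR/maintenance"
```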
On the media, go to the Traffic_NNM_Extension/WinNT directory if NNMi is on Windows, or to the Traffic_NNM_Extension/Linux directory if NNMi is on Linux. Run the following command:
If NNMi is on Linux:
./setup.bin
If NNMi is on Windows:
setup.exe
The installation wizard opens.
You can delete the maintenance file now from both the nodes.
Before upgrading the Master Collector:
If not already done, put the Master Collector resource group into the HA maintenance mode by placing the maintenance file under the following directory on the active node:
On Windows: %nnmdatadir%\hacluster\<resource_group>
On Linux: /var/opt/OV/hacluster/<resource_group>
Stop the ETL process by running the following command on the NPS system:
On Windows: %NnmInstallDir%\NNMPerformanceSPI\bin\stopETL.ovpl
On Linux: /opt/OV/NNMPerformanceSPI/bin/stopETL.ovpl
Stop the Master Collector:
On Windows: %NnmInstallDir%\traffic-master\bin\nmstrafficmasterstop.ovpl --HA
On Linux: /opt/OV/traffic-master/bin/nmstrafficmasterstop.ovpl --HA
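The two stop steps above can be sketched as one guarded quiesce script. It assumes the Linux default install paths and the nmstraffic*stop.ovpl script naming used elsewhere in this guide; on a machine where the products are not installed it only prints what it would run:

```shell
QUIESCE_OUT=""
for tool in /opt/OV/NNMPerformanceSPI/bin/stopETL.ovpl \
            /opt/OV/traffic-master/bin/nmstrafficmasterstop.ovpl; do
  if [ -x "$tool" ]; then
    case "$tool" in
      *master*) "$tool" --HA ;;   # HA environments need the --HA flag
      *)        "$tool" ;;
    esac
    QUIESCE_OUT="$QUIESCE_OUT ran:$tool"
  else
    echo "would run: $tool"       # dry-run on other machines
    QUIESCE_OUT="$QUIESCE_OUT skipped:$tool"
  fi
done
```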
Now start upgrading the Master Collector on the active node.
To upgrade the Master Collector to 10.30 on this server, log on to the system as root (on Linux) or administrator (on Windows), and then follow these steps:
Go to the Traffic_Master
directory on the media.
Run the following command:
On Linux: ./setup
On Windows: setup.bat
The installation wizard opens.
On the Preinstall Summary screen, click Upgrade.
The Choose Java JDK dialog box opens.
The NNM iSPI Performance for Traffic 10.30 installer removes the JDK that was installed on the system by the previous version of the installer, and provides an option to install OpenJDK 1.8. You can select the Install bundled OpenJDK option to install OpenJDK 1.8 that is embedded with the NNM iSPI Performance for Traffic media.
Alternatively, if another version of JDK 1.8 is already available on the system, you can select the Use Already-Installed JDK option, and then click Browse to select the path to the JDK.
On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
For example:
To install Red Hat OpenJDK 1.8.x on Red Hat Enterprise Linux, run the following command:
yum install java-1.8.0-openjdk-devel.x86_64
To install SUSE OpenJDK 1.8.x on SUSE Linux, run the following command:
zypper install java-1_8_0-openjdk
To find out the directory where JDK is installed, run one of the following commands:
whereis java
which java
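Note that `which java` usually returns a symlink (often via /etc/alternatives); `readlink -f` follows the chain to the real installation. A sketch that demonstrates the resolution on a constructed symlink, so it runs even where no JDK is installed (the /tmp/jdkdemo layout is purely illustrative):

```shell
# Build a fake JDK layout and a symlink to it (illustration only)
mkdir -p /tmp/jdkdemo/jdk-1.8.0/bin
touch /tmp/jdkdemo/jdk-1.8.0/bin/java
ln -sf /tmp/jdkdemo/jdk-1.8.0/bin/java /tmp/jdkdemo/java
# Resolve the symlink chain, then strip /bin/java to get the JDK home
JAVA_REAL=$(readlink -f /tmp/jdkdemo/java)
JDK_HOME=$(dirname "$(dirname "$JAVA_REAL")")
echo "$JDK_HOME"
```

On a real system you would start from `readlink -f "$(which java)"` instead of the constructed link.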
On Windows, it is recommended that you install the Oracle JDK 1.8.x.
Tip: Click Validate to check that the specified path is valid.
After making a selection, click Continue.
After the upgrade is complete, click Done.
Note: For Red Hat Enterprise Linux: After upgrading the Master Collector to version 10.30, you can upgrade the operating system of this server to Red Hat Enterprise Linux 7.x.
After upgrading the Master Collector on the active node, follow these steps:
Start the Master Collector:
On Windows: %NnmInstallDir%\traffic-master\bin\nmstrafficmasterstart.ovpl --HA
On Linux: /opt/OV/traffic-master/bin/nmstrafficmasterstart.ovpl --HA
On the passive node, put the Master Collector resource group into the HA maintenance mode by placing the maintenance file under the following directory:
On Windows: %nnmdatadir%\hacluster\<resource_group>
On Linux: /var/opt/OV/hacluster/<resource_group>
To upgrade the Leaf Collector to 10.30 on this server, log on to the system as root (on Linux) or administrator (on Windows), and then follow these steps:
Go to the Traffic_Leaf directory on the media. Run the following command:
On Linux: ./setup
On Windows: setup.bat
The installation wizard opens.
On the Preinstall Summary screen, click Upgrade.
The Choose Java JDK dialog box opens.
Note: The dialog box does not appear when the Leaf Collector is installed on the NNMi management server. If the Leaf Collector is installed on the management server, skip to the next step.
The NNM iSPI Performance for Traffic 10.30 installer removes the JDK that was installed on the system by the previous version of the installer, and provides an option to install OpenJDK 1.8. You can select the Install bundled OpenJDK option to install OpenJDK 1.8 that is embedded with the NNM iSPI Performance for Traffic media.
Alternatively, if another version of JDK 1.8 is already available on the system, you can select the Use Already-Installed JDK option, and then click Browse to select the path to the JDK.
On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
For example:
To install Red Hat OpenJDK 1.8.x on Red Hat Enterprise Linux, run the following command:
yum install java-1.8.0-openjdk-devel.x86_64
To install SUSE OpenJDK 1.8.x on SUSE Linux, run the following command:
zypper install java-1_8_0-openjdk
To find out the directory where JDK is installed, run one of the following commands:
whereis java
which java
On Windows, it is recommended that you install the Oracle JDK 1.8.x.
Tip: Click Validate to check that the specified path is valid.
After making a selection, click Continue.
Note: For Red Hat Enterprise Linux: After upgrading the Leaf Collector to version 10.30, you can upgrade the operating system of this server to Red Hat Enterprise Linux 7.x.
On the NNMi management server, run the nnmenableperfspi.ovpl script. While running the script, make sure the specified details match the details provided during the NPS upgrade.
You can find the details specified during the last run of the nnmenableperfspi.ovpl script in the following file on the NNMi management server:
On Windows:
%nnmdatadir%\log\nnmenableperfspi.txt
On Linux:
/var/opt/OV/log/nnmenableperfspi.txt
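A sketch for reviewing those recorded details on Linux. If the real log file is absent (for example, on a machine without NNMi), a tiny sample file with a hypothetical hostname is created so the commands can be dry-run:

```shell
LOG=/var/opt/OV/log/nnmenableperfspi.txt
if [ ! -f "$LOG" ]; then
  LOG=/tmp/nnmenableperfspi.txt
  # hypothetical sample value for illustration only
  printf 'NPS hostname: nps.example.com\n' > "$LOG"
fi
cat "$LOG"   # compare these details with what you enter in nnmenableperfspi.ovpl
```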
You can now start the ETL process on the NPS system by running the following command:
startETL.ovpl
The NNM iSPI Performance for Traffic requires that you apply an NNMi Ultimate license key. If you have already enabled an NNMi Ultimate license key, no additional license keys are required for the NNM iSPI Performance for Traffic.
For more information about the NNMi Ultimate license key, see the Licensing section in the NNMi Ultimate Release Notes.
Most of the installation or upgrade errors that you encounter relate to incorrect installation configuration settings. Before you start working with the NNM iSPI Performance for Traffic, open the Installation Verification form from the NNM iSPI Performance for Traffic Configuration form, and then validate the configuration settings you set up while installing or upgrading the NNM iSPI Performance for Traffic.
For details on how to resolve the installation or upgrade errors, see the following troubleshooting scenarios and tips.
To uninstall the NNM iSPI Performance for Traffic, you must uninstall each component of the software separately.
Uninstalling NNMi Extension for iSPI Performance for Traffic
Note: If NNMi is installed in an HA cluster, make sure to uninstall the NNMi Extension for iSPI Performance for Traffic from all the nodes in the cluster.
Note: If NNMi is installed in an Application Failover cluster, make sure to uninstall the NNMi Extension for iSPI Performance for Traffic from both the nodes in the cluster.
To remove the NNMi Extension for iSPI Performance for Traffic, follow these steps:
Go to the following directory:
On Windows: %NnmInstallDir%\Uninstall\HPOvTENM
On Linux: /opt/OV/Uninstall/HPOvTENM
Run the setup file (on Linux, run ./setup).
Alternatively, you can use the Add or Remove Programs (Uninstall a program) feature of the Windows system to remove the NNMi Extension for iSPI Performance for Traffic. Choose the HPE NNMi Extension for iSPI Performance for Traffic entry in the Programs and Features window.
Uninstallation Log Files
The setup program creates the following log files in the %temp% folder on Windows and in the /tmp directory on Linux:
preRemove_traffic-nnm
postRemove_traffic-nnm
Uninstalling the Master Collector
To unconfigure the Master Collector from the HA cluster:
Determine which node in the HA cluster is active. On any node, run the following command:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhaclusterinfo.ovpl -group <resource_group> -activeNode
/opt/OV/misc/nnm/ha/nnmhaclusterinfo.ovpl -group <resource_group> -activeNode
On the passive node, unconfigure the collector from the HA cluster by running the following command:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhaunconfigure.ovpl TRAFFIC <resource_group>
/opt/OV/misc/nnm/ha/nnmhaunconfigure.ovpl TRAFFIC <resource_group>
This command removes access to the shared disk, but does not unconfigure the disk group or the volume group.
On the passive node, remove the resource group-specific files. Delete all files in the following directory:
%TrafficInstallDir%\traffic-master\hacluster\
<resource_group>
/opt/OV/traffic-master/hacluster/
<resource_group>
On the active node, disable HA resource group monitoring by creating the following maintenance file:
%TrafficInstallDir%\traffic-master\hacluster\
<resource-group>\maintenance
/opt/OV/hacluster/
<resource-group>/maintenance
Stop the Master Collector by running the following command:
nmstrafficmasterstop.ovpl --HA
To prevent data corruption, make sure no instance of the Master Collector is running and accessing the shared disk.
Run the following command on the active node:
nnmhadisk.ovpl TRAFFIC -from <mount-point>
Remove all files from the shared disk.
On the active node, stop the Master Collector HA resource group:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhastoprg.ovpl TRAFFIC <resource_group>
/opt/OV/misc/nnm/ha/nnmhastoprg.ovpl TRAFFIC <resource_group>
On the active node, unconfigure the Master Collector from the HA cluster:
%TrafficInstallDir%\traffic-master\misc\nnm\ha\nnmhaunconfigure.ovpl TRAFFIC <resource_group>
/opt/OV/misc/nnm/ha/nnmhaunconfigure.ovpl TRAFFIC <resource_group>
On the active node, remove the resource group-specific files. Delete all files in the following directory:
%TrafficInstallDir%\traffic-master\hacluster\
<resource-group>\
/opt/OV/hacluster/
<resource-group>
Unmount the shared disk.
Note: If you want to use the shared disk for another purpose, copy all data that you want to keep (as described in the next procedure), and then use the HA product commands to unconfigure the disk group and volume group.
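The "no instance running" precondition in the steps above can be checked with a small guard before touching the shared disk. A sketch; the process pattern traffic-master is an assumption, so adjust it to match how the collector runs in your environment:

```shell
# Look for any process whose command line mentions traffic-master
if pgrep -f traffic-master >/dev/null 2>&1; then
  DISK_SAFE=no
  echo "Master Collector processes still running; stop them before touching the shared disk"
else
  DISK_SAFE=yes
  echo "no Master Collector process found; safe to proceed"
fi
```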
To remove the Master Collector, follow these steps:
Note: Remove the Master Collector from both the nodes in the HA cluster.
Go to the following directory:
On Windows: %NnmInstallDir%\traffic-master\Uninstall\HPOvTRMiSPI or %TrafficInstallDir%\traffic-master\Uninstall\HPOvTRMiSPI
On Linux: /opt/OV/traffic-master/Uninstall/HPOvTRMiSPI
Run the setup file (on Linux, run ./setup).
Alternatively, you can use the Add or Remove Programs (Uninstall a program) feature of the Windows system to remove the Master Collector. Choose the Master Collector for iSPI Performance for Traffic entry in the Programs and Features window.
After uninstalling the Master Collector, you must uninstall the NNM iSPI Performance for Traffic report extension packs manually.
Uninstalling the Leaf Collector
To remove the Leaf Collector, follow these steps:
Go to the following directory:
On Windows: %NnmInstallDir%\traffic-leaf\Uninstall\HPOvTRLiSPI or %TrafficInstallDir%\traffic-leaf\Uninstall\HPOvTRLiSPI
On Linux: /opt/OV/traffic-leaf/Uninstall/HPOvTRLiSPI
Run the setup file (on Linux, run ./setup).
Alternatively, you can use the Add or Remove Programs (Uninstall a program) feature of the Windows system to remove the Leaf Collector. Choose the Leaf Collector for iSPI Performance for Traffic entry in the Programs and Features window.
© 2014-2017 Micro Focus or one of its affiliates