Support Matrix

This section provides information about the hardware and software requirements for installing NNMi and iSPIs:

Hardware Requirements

CPU

NNMi and iSPIs are supported on Intel 64-bit (x86-64) or AMD 64-bit (AMD64) processors with a minimum frequency of 2.5 GHz.

For Intel 64-bit (x86-64), the following Xeon processor families are recommended:

  • Penryn, Nehalem, Westmere, Sandy Bridge, Ivy Bridge, Haswell, or later for up to the Medium tier
  • Sandy Bridge, Ivy Bridge, Haswell, or later for the Large, Very Large, or Extra Large tier and the GNM global manager

Disk

For NNMi, RAID 1+0 (10) with a battery-backed write cache on disks of 15,000 RPM or faster is recommended.

NPS is tested with the following file systems:

  • For Windows: NTFS
  • For Linux: ext4

Network

NPS systems must be served by Gigabit Ethernet LAN interfaces.

Software Requirements

Operating Systems

NNMi and iSPIs can be installed on the following operating systems:

Table: Operating Systems
Operating System | NNMi Management Server | NPS System | Intelligent Response Agent (iRA) | NNM iSPI Performance for Traffic | NNM iSPI NET Diagnostic Server

Windows Server 2012
Windows Server 2012 Datacenter Edition (or later service pack) | YES | NO | YES | YES | YES
Windows Server 2012 Standard Edition (or later service pack) | YES | NO | YES | YES | YES
Windows Server 2012 R2 Datacenter Edition (or later service pack) | YES | YES | YES | YES | YES
Windows Server 2012 R2 Standard Edition (or later service pack) | YES | YES | YES | YES | YES

Red Hat Enterprise Linux
Red Hat Enterprise Linux Server 6.x (starting with 6.4) | YES | YES | YES | YES | YES
Red Hat Enterprise Linux Server 7.x | YES | YES | YES | YES | NO

Oracle Linux (running NNMi and iSPIs on Oracle Linux in an HA cluster is not supported)
Oracle Linux Red Hat Compatible Kernel 6.x (starting with 6.4) | YES | NO | YES | YES | YES
Oracle Linux Red Hat Compatible Kernel 7.x | YES | NO | YES | YES | NO

SUSE Linux Enterprise
SUSE Linux Enterprise Server 11 SP3 (or later service pack) | YES | YES | YES | YES | NO
SUSE Linux Enterprise Server 12 (or later service pack) | YES | NO | YES | YES | NO

The following table shows the operating system combinations that are supported when NPS is installed on a dedicated server or in a distributed environment.

NNMi Operating System | NPS on Windows | NPS on Linux
Windows | Supported | Not Supported
Linux | Supported | Supported

Virtualization Products

NNMi and iSPIs can be used with the following virtualization products:

Table: Virtualization Support
Virtualization Product | NNMi Management Server | NPS System | iRA Node | NNM iSPI Performance for Traffic Collectors | NNM iSPI NET Diagnostic Server

VMware ESXi Server
VMware ESXi Server 5.x | YES | YES | YES | YES | YES
VMware ESXi Server 6.x | YES | YES | YES | YES | YES

  • A bridged network environment is required. NAT'ed network environments are not supported.
  • (For NNMi) VMware vMotion (for DRS and DPM) of the NNMi management server is supported.
  • Supported only up to the Medium tier for the NNM iSPI Performance for Traffic.

Microsoft Hyper-V
Microsoft Hyper-V 2012 | YES | YES | YES | YES | YES
Microsoft Hyper-V 2012 R2 (or later service pack) | YES | YES | YES | YES | YES

  • Host OS: Windows Server 2012 or 2012 R2 (or later service pack)
  • Guest OS: Any of the supported Windows operating systems

Kernel-Based Virtual Machine (KVM)
KVM | YES | YES | NO | NO | NO

  • Guest OS: Any of the supported operating systems
  • Supported only up to the Medium tier
  • Supported only for NNMi Premium

Oracle VM
Oracle VM 3.x (starting at 3.2) | YES | YES | NO | NO | NO

  • Guest OS: Any of the supported operating systems
  • Supported only up to the Medium tier
  • Supported only for NNMi Premium (however, iRA is not supported)

High-Availability Products

NNMi and iSPIs can be installed on the following high-availability products:

Table: HA Products
HA Cluster | NNMi Management Server | NPS System | NNM iSPI Performance for Traffic Master Collector

Windows Server 2012
Microsoft Failover Clustering for Windows Server 2012 | YES | NO | YES
Microsoft Failover Clustering for Windows Server 2012 R2 | YES | YES | YES

Before configuring HA on Windows Server, you must install the FailoverCluster-CmdInterface component using either Server Manager or Windows PowerShell cmdlets.

Red Hat Enterprise Linux
Red Hat Enterprise Linux 6.x with Veritas Cluster Server (VCS) version 6.x | YES | YES | YES

  • Some disk types require Veritas Storage Foundation (VSF) version 6.0.
  • VCS 6.x and VSF 6.x might require operating system patches. For specific information, see the appropriate Veritas product documentation.

Red Hat Enterprise Linux 7.x with Veritas Cluster Server (VCS) version 6.x (starting with 6.2) | YES | NO | YES
Red Hat Enterprise Linux 6.x with Red Hat Cluster Suite (RHCS) 6.x | YES | NO | NO

The RHCS combination is not supported by any iSPIs.

SUSE Linux Enterprise
SUSE Linux Enterprise Server 11 SP3 with Veritas Cluster Server (VCS) version 6.x | YES | NO | YES
SUSE Linux Enterprise Server 12 with Veritas Cluster Server (VCS) version 6.x | YES | NO | YES

These SUSE combinations are not supported by any iSPIs.

Databases

NNMi is installed with an embedded database. Alternatively, you can use an external database with NNMi instead of the embedded database. NNMi supports the following databases:

  • Oracle and Oracle Real Application Clusters (RAC) 11g Release 2 (11.2.0.x starting with 11.2.0.3) Enterprise Edition
  • Oracle and Oracle Real Application Clusters (RAC) 11g Release 2 (11.2.0.x starting with 11.2.0.3) Standard Edition (only up to the medium tier)
  • Oracle and Oracle Real Application Clusters (RAC) 12c Release 1 (12.1.0.x) Enterprise Edition
  • Oracle and Oracle Real Application Clusters (RAC) 12c Release 1 (12.1.0.x) Standard Edition (only up to the medium tier)

Database Requirements for the NNM iSPI NET

NNM iSPI NET requires an external PostgreSQL database to store data. This version of the NNM iSPI NET supports PostgreSQL 9.3.x.

If you plan to use an existing OO Central Server with the NNM iSPI NET, you can choose to use any database supported by OO.

Java Development Kit with NNMi

NNMi 10.30 requires Java Development Kit (JDK) 1.8.x. The NNMi installer now ships with OpenJDK 1.8 (azul/zulu-openjdk).

The NNMi installer can install this embedded JDK. You can choose to use an already installed version of JDK 1.8.x.

During upgrade, the installer removes the JDK installed by the previous version of NNMi and allows you to install either the embedded version of JDK or an already installed version of JDK 1.8.x.

Note the following requirements:

  • In an Application Failover cluster, you must install the same version of JDK on the active and standby nodes.
  • In an HA cluster, you must install the same version of JDK on all nodes.
  • On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
  • On Windows, it is recommended that you install the Oracle JDK 1.8.x.
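Since an already-installed JDK is accepted only when it is a 1.8.x release, a quick pre-install check of the reported version string can save a failed installer run. This is an illustrative sketch only; the function name and the sample version string are ours, not part of the NNMi installer.

```shell
# Succeeds only when the supplied JDK version string is 1.8.x,
# the stream required by NNMi 10.30.
is_supported_jdk() {
  case "$1" in
    1.8.*) return 0 ;;
    *)     return 1 ;;
  esac
}

# Example: test the string reported by "java -version" (sample value).
if is_supported_jdk "1.8.0_144"; then
  echo "JDK is supported"
else
  echo "Install JDK 1.8.x before running the installer"
fi
```

The same check applies to the NPS, NNM iSPI Performance for Traffic, and iRA installers, which have identical JDK requirements.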

JDK and the NNM iSPI Performance for Metrics

NPS requires a local installation of Java Development Kit (JDK) 1.8.x when installed on a dedicated server (and not on the NNMi management server). The NNM iSPI Performance for Metrics installer now ships with OpenJDK 1.8 (azul/zulu-openjdk).

The NNM iSPI Performance for Metrics installer can install this embedded JDK. You can choose to use an already installed version of JDK 1.8.x.

During upgrade, the installer removes the JDK installed by the previous version of the NNM iSPI Performance for Metrics and allows you to install either the embedded JDK or an already installed version of JDK 1.8.x.

Note the following requirements:

  • Always choose the same edition of JDK for the NNMi management server and NPS.
  • In an HA cluster of NPS, you must install the same version of JDK on all nodes.
  • On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
  • On Windows, it is recommended that you install the Oracle JDK 1.8.x.

JDK and the NNM iSPI Performance for Traffic

Each collector of the NNM iSPI Performance for Traffic requires a local installation of Java Development Kit (JDK) 1.8.x when installed on a dedicated server (and not on the NNMi management server). The NNM iSPI Performance for Traffic installer now ships with OpenJDK 1.8 (azul/zulu-openjdk).

The NNM iSPI Performance for Traffic installer can install this embedded JDK. You can choose to use an already installed version of JDK 1.8.x.

During upgrade, the installer removes the JDK installed by the previous version of the NNM iSPI Performance for Traffic and allows you to install either the embedded JDK or an already installed version of JDK 1.8.x.

Note the following requirements:

  • Always choose the same edition of JDK for the NNMi management server and the NNM iSPI Performance for Traffic collectors.
  • In an HA cluster of the Master Collector, you must install the same version of JDK on all nodes.
  • On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
  • On Windows, it is recommended that you install the Oracle JDK 1.8.x.

JDK and the iRA

iRA requires a local installation of Java Development Kit (JDK) 1.8.x when installed on a dedicated system (and not on the NNMi management server). The iRA installer now ships with OpenJDK 1.8 (azul/zulu-openjdk).

The iRA installer can install this embedded JDK at the time of installation. You can choose to use an already installed version of JDK 1.8.x during a new installation of iRA.

During upgrade, the installer removes the JDK installed by the previous version of the iRA and allows you to install either the embedded JDK or an already installed version of JDK 1.8.x.

While choosing an already installed version of JDK for iRA, note the following requirements:

  • Always choose the same edition of JDK for the NNMi management server and all iRA nodes.
  • On Linux, it is recommended that you use the JDK 1.8.x provided by your operating system vendor (Red Hat or SUSE).
  • On Windows, it is recommended that you install the Oracle JDK 1.8.x.

Web Browsers

The following web browsers are supported on a remote client system.

  • Microsoft Internet Explorer (32-bit and 64-bit) version 11 (not running in Compatibility View mode).

  • Mozilla Firefox version 52.x ESR on a Windows or Linux client.

    • The Firefox ESR (Extended Support Release) browser is available at http://www.mozilla.org/firefox/organizations/all.html.
    • The Firefox browser works best when you open links as new windows rather than tabs. For information, see "Mozilla Firefox Known Problems" in the Release Notes.
  • Apple Safari version 10.x on an OS X client.

    • Exception: The NPS console and all other windows that are launched from the NPS console are not supported with Safari.

  • Google Chrome

    • Exception: NPS Query Studio and BI Server Administration features are not supported with Chrome.

Make sure the web browser meets the following requirements:

  • Pop-ups are enabled
  • Cookies are enabled
  • JavaScript is enabled
  • Adobe Flash Plug-in is installed (version 11.2 or later on Linux; version 21.0.0.242 or later on Windows)

Other Software

Microsoft Visio (NNM iSPI NET only)

The NNM iSPI NET feature to export map views to Visio (Tools → Visio Export) requires Microsoft Visio 2010 or Microsoft Visio 2013.

For standalone OO, NNM iSPI NET supports OO version 10.22 and OO content version 10.1.70.

Compatibility

This section provides information about software and configurations that are not required, but are compatible with Network Node Manager i Software 10.30.

NNMi, NPS, and iSPIs contain open source and third-party software components, which are listed in the NNMi Open Source and Third Party Software License Agreements document. Do not independently apply patches or updates released by these open source communities and third parties. Environments in which such components have been updated with patches that are not released and certified for these products are not supported.

Languages

NNMi and iSPIs are localized (or translated) into the following languages:

  • French
  • German
  • Japanese
  • Spanish
Localization
Product | French | German | Japanese | Spanish
NNMi | YES | YES | YES | YES
NNM iSPI Performance for Metrics | YES | YES | YES | YES
NNM iSPI Performance for QA | YES | YES | YES | YES
NNM iSPI Performance for Traffic | YES | YES | YES | NO
NNM iSPI for MPLS | YES | YES | YES | NO
NNM iSPI for IP Multicast | YES | YES | YES | NO
NNM iSPI for IP Telephony | YES | YES | YES | NO
NNM iSPI NET | YES | YES | YES | NO

When a localized package is installed, NNMi displays localized strings and accepts non-English characters as input. With all other locales, NNMi displays English strings as output but still accepts non-English characters as input.

On Windows systems, NNMi does not support installation using directory paths with localized characters; path names for %NnmInstallDir% and %NnmDataDir% can contain English characters only.
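A pre-install check for this restriction can simply reject any character outside the printable ASCII range in the proposed path. A minimal sketch follows; the function name and sample paths are ours, not an NNMi tool.

```shell
# Succeeds only when the proposed installation path contains printable
# ASCII (English) characters, as required for %NnmInstallDir% and
# %NnmDataDir% on Windows.
is_english_path() {
  ! printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]'
}

# Example with a hypothetical path:
is_english_path 'C:\NNMi\install' && echo "path OK"
```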

Software Integrations

The following products have additional functionality available through an NNMi 10.30 integration.

The most current information about software that integrates with NNMi 10.30 can be found at the Support web site. See the Software Integrations Catalog.

For information on specific features, see the appropriate integration manual.

Integrations with NNMi

  • Advanced TeMIP NNM Integration (ATNI) version 6.0 with TeMIP version 6.0, 6.2

    NNMi 10.30 on Red Hat Enterprise Linux integrates with ATNI 6.0 on Red Hat Enterprise Linux with patches TEMIPTNTLIN_00049 (runtime) and TEMIPTNTLIN_00050 (for the Customization Toolkit), or any superseding patches. NNMi 10.30 on Windows integrates with remote ATNI 6.0 on HP-UX with patches PHSS_44066 on HP-UX and TEMIPTNTWIN_00006 on Windows, or any superseding patches.

    See the TeMIP NNMi Advanced Integration Overview and other ATNI documentation for more details on the integration.

  • ArcSight Logger version 6.0, 6.1, 6.2, and 6.4

    NNMi 10.30 supports all SmartConnectors supported by ArcSight Logger versions 6.0, 6.1, 6.2, and 6.4.

  • Asset Manager version 9.41 (with Connect-It 9.53), 9.50 (with Connect-It 9.60), and 9.60 (with Connect-It 9.60)
  • Business Service Management (BSM) Real User Monitor (RUM), Run-time Service Model (RTSM), Operations Management (OMi), My BSM with BSM version 9.25, 9.26

    Integration with OMi for BSM 9.25 or 9.26 is supported only with BSM Connector 10.01. The BSM Connector must be installed on the NNMi management server.

  • Operations Manager i (OMi) 10.00, 10.01, 10.10, 10.11, and 10.61

    If you are using OMi 10.00 on Windows, apply the hotfix QCCR8D38153 on OMi. Contact Support to obtain the hotfix.

    Integration with OMi is supported with Operations Connector 10.01 and 10.11.

  • Intelligent Management Center (IMC) version 7.1, 7.2
  • Network Automation (NA) version 10.30, 10.21, 10.20

    For NNMi and NA to run correctly on the same computer, you must install NNMi before installing NA. If you install NA before installing NNMi, the NNMi installation reports a port conflict with NA and does not complete.

  • Operations Analytics Premium and Ultimate 2.31

    See the Operations Analytics Configuration Guide for more details on the integration. Operations Analytics Express is not supported.

  • Operations Manager (OM)

    • HPOM for Linux version 9.11, 9.20, 9.21

    • HPOM for UNIX version 9.11, 9.20, 9.21

    • HPOM for Windows version 9.00

    Integration with OM (agent implementation) is supported only with Operations agent 12.03. The Operations agent must be installed on the NNMi management server.

  • Operations Orchestration (OO) version 10.x.

    NNM iSPI NET provides a different integration with OO. An embedded package of the required OO version is included with the NNM iSPI NET media.

  • Route Analytics Management Software (RAMS) version 9.21 (requires a Premium, Ultimate, or NNMi Advanced license)
  • SiteScope version 11.23, 11.30, 11.31, 11.32, 11.33
  • Systems Insight Manager (SIM) version 7.4.x, 7.5.x
  • Universal CMDB (UCMDB) version 10.10, 10.11, 10.21, 10.22, 10.31, 10.32, 10.33

    The NNMi–BSM/UCMDB Topology integration, as described in the NNMi—Business Service Management/Universal CMDB Topology Integration Guide, now supports integration with either Business Service Management (BSM) Topology or UCMDB. NNMi cannot simultaneously integrate directly with both BSM Topology and UCMDB. If you want NNMi information in both databases, configure the NNMi–BSM/UCMDB Topology integration with either BSM Topology or UCMDB, and then configure the integration between BSM Topology and UCMDB as described in the UCMDB Data Flow Management Guide, which is included on the UCMDB product media.

  • IBM Tivoli Netcool/OMNIbus version 8.1
  • NetScout nGenius Performance Manager 5.2.1

Integrations with iSPIs (10.30)

  • NNM iSPI Performance for Metrics with Operations Bridge Reporter 10.00, 10.01

  • NNM iSPI for IP Telephony with SiteScope

    Supports integration with SiteScope 11.30

Software Coexistence

The following products can coexist on the same system as NNMi 10.30:

  • All NNM iSPIs except the NNM iSPI NET.

    The NNM iSPI NET Diagnostic Server and NNMi cannot coexist on the same server. For instructions to install the NNM iSPI NET Diagnostic Server, see the NNM iSPI NET Interactive Installation and Upgrade Guide.

  • ArcSight Smart Connector: Network Node Manager i SNMP version 7.1.6

  • Network Automation (NA) version 10.11, 10.20, 10.21

  • Business Service Management Connector version 10.01

  • Operations Connector version 10.11

  • Operations agent (64-bit only) version 12.00, 12.01, 12.03

  • IBM Tivoli Netcool/OMNIbus SNMP Probe: The latest version that is compatible with IBM Tivoli Netcool/OMNIbus version 8.1

Device Support for NNMi and iSPIs

This section provides a list of devices supported by NNMi and iSPIs.

For the list of supported network devices, see the NNMi Device Support Matrix at https://softwaresupport.softwaregrp.com/km/KM02795785/nnmi10.30_devicematrix.htm.

This device support information is based on the latest information available at the time of publication. Note that device vendors can alter a device's MIB usage at any time (for example, in newer IOS or system software versions) and invalidate NNMi's interpretation of that device's MIB data.

Supported Network Devices for the NNM iSPI Performance for QA

NNM iSPI Performance for QA supports the NNMi supported devices that match the following MIB specifications:

Supported Network Devices
Vendor | Feature | MIBs Supported | Recommended Image Version
Cisco | IPSLA probes | CISCO-RTTMON-MIB | 12.x or higher
Cisco | QoS | CISCO-CLASS-BASED-QOS-MIB | 12.x or higher
Juniper | RPM probes without jitter metrics | DISMAN-PING-MIB, JNX-RPM-MIB, JNX-PING-MIB | 9.x to 13.x
Juniper | RPM probes with jitter metrics1 | DISMAN-PING-MIB, JNX-RPM-MIB, JNX-PING-MIB | 10.x to 13.x
H3C | NQA probes | DISMAN-PING-MIB |
iRA | Probes | QA-PROBE-MIB (shipped with the iRA installation) |
Cisco | Ping Latency Pairs | CISCO-PING-MIB |

1 Jitter metrics for RPM probes are supported only on selected models of the MX and SRX device series.

This device support information is based on the latest information available at the time of publication. Note that device vendors can alter a device's MIB usage at any time (for example, in newer IOS or system software versions) and invalidate the NNM iSPI Performance for QA's interpretation of that device's MIB data.

Supported IP Flow Export Formats for the NNM iSPI Performance for Traffic

NNM iSPI Performance for Traffic supports the following IP Flow export formats:

  • NetFlow

    • NetFlow v5
    • NetFlow v9
    • Flexible NetFlow with v9 export format configured with normal cache. The Flexible NetFlow record must contain the Input Interface Index field.
    • NetFlow from Adaptive Security Appliances (ASA) devices with ASA version 8.2(x).
    • Random Sampled NetFlow
  • JFlow
  • sFlow v5
  • Internet Protocol Flow Information eXport (IPFIX)

Supported Network Devices for the NNM iSPI for MPLS

The NNM iSPI for MPLS supports the following:

  • Cisco routers running IOS Version 12.2(33) or above
  • Cisco XR 12000 series routers running IOS XR Version 3.4 or above
  • Cisco CRS-1 series routers running IOS XR Version 3.4 or above
  • Juniper (M/T/J/MX/EX) series routers running JUNOS Version 8.3 or above
  • Redback SmartEdge series routers running SEOS 6.5 or above
  • Alcatel 7750 and 7710 series routers
  • Huawei NE5000E, NE40E, NE20E/20 series routers

The following table illustrates the supported combinations of devices for the NNM iSPI for MPLS objects and services:

MPLS Objects and Services
Vendor/Device | L2VPNs | L3VPNs | MVPNs | TE Tunnels | PseudoWires | LSPs | SDPs
Alcatel | Yes | Yes | No | Yes | Yes | No | Yes
Cisco | Yes | Yes | Yes | Yes | Yes | Yes | Not Applicable
Huawei | Yes | Yes | No | Yes | Yes | No | Not Applicable
Juniper | Yes | Yes | No | Yes | Yes | Yes | Not Applicable
Redback | No | Yes | No | Yes | No | No | Not Applicable

Supported Network Devices for the NNM iSPI for IP Multicast

The NNM iSPI for IP Multicast supports the following routers:

  • Cisco routers running IOS Version 12.x or above with the following MIBs:

    • IPMROUTE-STD-MIB (RFC 2932)
    • PIM-MIB (RFC 2934)
    • IGMPStdMIB (RFC 2236) or IGMPExpMIB (RFC 2236)

  • Cisco routers running IOS-XR Version 3.4.1 and above with the following MIBs:

    • CISCO-IETF-IPMROUTE-MIB
    • CISCO-IETF-PIM-EXT-MIB
    • CISCO-IETF-PIM-MIB
    • MGMD-DRAFT-IETF-MIB
  • Juniper routers running JunOS 7.x and above with the following MIBs:

    • IPMROUTE-STD-MIB (RFC 2932) or IPMROUTE-MIB (RFC 2932)
    • PIM-MIB (RFC 2934)
    • IGMPStdMIB (RFC 2236) or IGMPExpMIB (RFC 2236)
  • Alcatel-Lucent Service Router 7X50 with the following MIBs:

    • TIMETRA-GLOBAL-MIB.mib
    • TIMETRA-PIM-NG-MIB.mib

    • TIMETRA-PIM-MIB.mib
    • TIMETRA-VRTR-MIB.mib
    • TIMETRA-IGMP-MIB.mib
    • TIMETRA-TC-MIB.mib

    The NNM iSPI for IP Multicast cannot discover devices that have both the TIMETRA-PIM-NG and TIMETRA-PIM MIBs.

  • Brocade NetIron MLX (System Mode: MLX), IronWare Version V5.2.0T163 with the following MIBs:

    • IPMROUTE-STD-MIB (RFC 2932)
    • IGMP MIB (RFC 2933)
    • PIM-MIB (RFC 2934)

Supported Devices for the NNM iSPI for IP Telephony

The following tables list the IP telephony devices supported by this version of the NNM iSPI for IP Telephony:

Cisco IP telephony
Devices/Entities Version/Model/Type/Supported Protocol
Cisco Unified Communications Manager  5.x, 6.x, 7.x, 8.x, 9.x, and 10.x
Voice Gateway All Cisco IOS-based gateways, Cisco VG224 Analog Gateways, and Cisco VG 248 Analog Gateways (supports Cisco Voice Gateways running MGCP and H.323 protocols; supports T1/E1 PRI, T1/E1 CAS, FXS, FXO, E&M voice interfaces).
IP Phone Supports IP phones running on the SIP and SCCP (or Skinny) protocols. Supports Cisco IP Communicator Soft Phones. Also supports all other models of Cisco Unified IP Phone appliances.
Gatekeeper All Cisco IOS routers that can run the Cisco H.323 Gatekeeper service. CISCO-GATEKEEPER-MIB must be accessible on the Cisco IOS-based device that runs the Cisco Gatekeeper Service.
Unity
  • Cisco Unity 5.x, 7.x, 8.x
  • Cisco Unity Connection 7.x, 8.x, and 10.x

SNMP MIB CISCO-UNITY-MIB must be accessible on the Unity and Unity Connection systems.

Cisco Call Manager Express (CCME) All Cisco IOS routers that can run the CCME service.
Survivable Remote Site Telephony (SRST) All Cisco IOS routers that can run the SRST service.

 

Avaya IP Telephony
Devices/Entities Version/Model/Type/Supported Protocol
Communication Manager

Supports Communication Manager software/firmware version 4.x/5.x/6.x on the following servers:

  • s88xx
  • s87xx
  • s85xx
  • s84xx
  • s83xx

 

  • If you are using version 5.x of the Avaya Communication Manager, you must install service pack 6 (Patch 18576) or later versions of the service packs on the Avaya Communication Manager.
  • If you are using version 6.x of the Avaya Communication Manager, you must install service pack 2 (Patch 18567) or later versions of the service packs on the Avaya Communication Manager.
Local Survivable Processor

Supports Communication Manager software/firmware version 4.x/5.x/6.x on the following servers:

  • s8500
  • s8300
H.248 Media Gateways

Supports Communication Manager software/firmware version 4.x/5.x/6.x on the following media gateways:

  • G250
  • G350
  • G430
  • G450
  • G700
Port Network Media Gateway

Supports Communication Manager software/firmware version 4.x/5.x/6.x on the following port network media gateway:

  • G650
IP Phones  Communication Manager software/firmware 4.x/5.x/6.x compliant IP phones. The supported protocols include SIP and H.323. On Avaya one-X soft phones, at a minimum, Avaya one-X Communicator Release 5.2 with Service Pack 4 (Product Version 5.2.0.23), or its equivalent Service Pack for Avaya one-X Communicator Release 6.x, must be installed. The Service Pack level must include the fix for the issue with malformed CNAME in RTCP packets or the enhancement titled “Updated the CNAME in the RTCP packet for more accurate monitoring traceability.”

 

Nortel IP Telephony
Devices/Entities Version/Model/Type/Supported Protocol
Call Server Models running the software version 5.x
Signaling Server Models running the software version 5.x
Media Gateway Media Gateway Controller Card (MGC) with DSP daughter boards, Media Card (MC), Voice Gateway Media Card (VGMC); MC 32 and MC 32S cards are supported in the MC or VGMC category.
IP Phone

The following Nortel IP phone models are supported:

  • NORTEL IP PHONE 2001
  • NORTEL IP PHONE 2002
  • NORTEL IP PHONE 2004
  • NORTEL IP PHONE 2007
  • NORTEL IP PHONE 2033
  • NORTEL IP PHONE 1110
  • NORTEL IP PHONE 1140E
  • NORTEL IP SOFTPHONE 2050
  • MULTIMEDIA CLIENT

 

Microsoft IP Telephony
Devices/Entities Version/Model/Type/Supported Protocol
Microsoft Lync Server 2010 Gateway
  • NET UX 1.3
  • HP Survivable Branch Module (HP SBM) 1.1.19.0

 

Acme IP Telephony
Devices/Entities Version/Model/Type/Supported Protocol
Acme Packet Enterprise Session Border Controller
  • 3820
  • 4500

Supported Microsoft Lync Server Versions

The following Microsoft Lync versions are supported:

  • Microsoft Lync 2010
  • Microsoft Lync 2013

Supported Network Devices for the NNM iSPI NET

This device support information is based on the latest information available at the time of publication. Note that device vendors can, at any time, alter a device's command syntax and displayed information (for example, in newer IOS or system software versions) and invalidate NNM iSPI NET diagnostics usage of that device. In general, the software versions listed indicate the minimum versions required for proper operation of the diagnostic flows.

Devices are supported only when the device is running in the English locale.

Cisco IOS Version 12.3 offers the best support for executing diagnostics flows on Cisco devices. Earlier versions may work as noted below but some command syntax may indicate a failure when reviewing the diagnostics flow report.

Because of their form-based logon conventions, Nortel switch devices, such as the 5510, require SSH as the transport mechanism.

Network Devices supported by NNM iSPI NET
Vendor Family Model Agent's SNMP sysObjectID Software Version Notes
Cisco 2600 Series Multiservice Platform 2621 1.3.6.1.4.1.9.1.209 IOS Version 12.3(19) All commands except for show spanning tree brief should function.
Cisco 2600 Series Multiservice Platform 2691 1.3.6.1.4.1.9.1.413 IOS Version 12.4(1)  
Cisco 2600 Series Multiservice Platform 2651 1.3.6.1.4.1.9.1.320 IOS Version 12.2(19a) Versions earlier than 12.2(19a) may have problems invoking show VLAN commands.
Cisco Cisco 2800 Integrated Services Router 2821 1.3.6.1.4.1.9.1.577 IOS Version 12.4(12)  
Cisco Cisco 2800 Integrated Services Router 2851 1.3.6.1.4.1.9.1.578 IOS Version 12.4(5a)  
Cisco 3600 Series Multiservice Platform 3620 1.3.6.1.4.1.9.1.122 IOS 12.2(15)T13  
Cisco 3600 Series Multiservice Platform 3640 1.3.6.1.4.1.9.1.110 IOS Version 12.3(19) All commands except for show spanning tree brief should function.
Cisco Cisco 3700 Multiservice Access Routers 3725 1.3.6.1.4.1.9.1.414 IOS Version 12.3(9a)  
Cisco 3700 Series Multiservice Platform 3745 1.3.6.1.4.1.9.1.436 IOS Version 12.2(13)T12 All commands except for show spanning tree brief should function.
Cisco 4000M Series Routers 4500 1.3.6.1.4.1.9.1.14 IOS Version 12.2(23)  
Cisco Catalyst 2900 Series XL Switches 2912XL 1.3.6.1.4.1.9.1.219 IOS Version 12.0(5) All commands except for show interface, show protocols, show VLAN should function.
Cisco Catalyst 2950 Series Switches 2950T-24 1.3.6.1.4.1.9.1.359 IOS Version 12.1(14)EA1a All commands except for show interface, show protocols, show VLAN should function.
Cisco Catalyst 3500 Series XL Switches 3508G-XL 1.3.6.1.4.1.9.1.246 IOS 12.0(5)WC11 All commands except for show interface, show protocols, show VLAN should function.
Cisco Catalyst 3500 Series XL Switches 3524XL 1.3.6.1.4.1.9.1.248 IOS 12.0(5) All commands except for show interface, show protocols, show VLAN should function.
Cisco Catalyst 3560 Series Switches 3560-24PS 1.3.6.1.4.1.9.1.563 IOS Version 12.2(25)SEB4 All commands except for show VLAN and show spanning tree brief.
Cisco Catalyst 3750 Series 3750 1.3.6.1.4.1.9.1.516 IOS Version 12.2(25)SEB4 All commands except for show VLAN and show spanning tree brief.
Cisco Catalyst 5000 Series Switches 5000 1.3.6.1.4.1.9.5.7 Version 3.2(8) Only Cisco Switch Spanning Tree Baseline diagnostic supported.
Cisco Catalyst 5000 Series Switches (RSM) WS-X5302 1.3.6.1.4.1.9.1.168 IOS Version 11.2(12a.P1)P1 Only Cisco Switch Baseline supported with some command failures.
Cisco Catalyst 6500 Series Switch 6503 1.3.6.1.4.1.9.1.449 IOS Version 12.2(18)SXD7  
Cisco Catalyst 6500 Series Switch 6506 1.3.6.1.4.1.9.1.282 IOS Version 12.2(18)SXF6  
Cisco Catalyst 6500 Series Switch 6509 1.3.6.1.4.1.9.1.283 IOS 12.2(18)SXD7  
Cisco Cisco 7100 Series VPN Router 7140 1.3.6.1.4.1.9.1.277 IOS Version 12.2(15)T13  
Cisco Catalyst 8500 Series Multiservice Switch Routers 8510 1.3.6.1.4.1.9.1.190 IOS 12.0(1a)W5(6f) All commands except for show interface summary and show spanning tree brief should function.
Cisco Catalyst 8500 Series Multiservice Switch Routers 8540 1.3.6.1.4.1.9.1.203 IOS Version 12.1(6)EY1 All commands except for show interface summary and show spanning tree brief should function.
Nortel BayStack Baystack 5510 1.3.6.1.4.1.45.3.53.1 v5.1.1.017 Must configure SSH to use diagnostic flows.
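When triaging whether a discovered device falls inside this matrix, the agent's SNMP sysObjectID is the value to match on. The following is a minimal lookup sketch using a few OIDs taken from the table above; the function name is ours, not an NNMi tool, and only sample rows are included.

```shell
# Map an SNMP sysObjectID from the table above to its device model.
# Extend the case list with additional rows as needed.
model_for_sysobjectid() {
  case "$1" in
    1.3.6.1.4.1.9.1.209)   echo "Cisco 2621 (2600 Series Multiservice Platform)" ;;
    1.3.6.1.4.1.9.1.577)   echo "Cisco 2821 (2800 Integrated Services Router)" ;;
    1.3.6.1.4.1.9.1.283)   echo "Cisco 6509 (Catalyst 6500 Series Switch)" ;;
    1.3.6.1.4.1.45.3.53.1) echo "Nortel BayStack 5510" ;;
    *)                     echo "not in the NNM iSPI NET support matrix" ;;
  esac
}

model_for_sysobjectid 1.3.6.1.4.1.45.3.53.1
```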

Performance and Sizing Recommendations for NNMi

The following table shows the management tiers supported by NNMi and the hardware sizing requirements for each tier. Managed environments larger than these tiers are not supported without additional approval.

Managed environment tier1 | Entry | Small | Medium | Large | Very Large

Managed environment:
Total number of discovered nodes | Up to 250 | 250 - 3k | 3k - 8k | 8k - 18k | 18k - 30k
Number of hypervisors2 | 5 | 10 | 75 | 200 | 200
Number of VMs3 | 100 | 200 | 1500 | 4000 | 4000
Number of discovered interfaces | 15k | 120k | 400k | 900k | 1mil
Number of polled addresses | 500 | 5k | 10k | 30k | 60k
Number of polled interfaces | 2500 | 10k | 50k | 70k | 200k
Number of custom-polled objects4 | 1200 | 30k | 50k | 75k | 200k
Number of polled node and physical sensors | 500 | 40k | 60k | 80k | 120k
Number of concurrent users | 5 | 10 | 25 | 40 | 40

Recommended hardware size:
CPU (64-bit x86-64 or AMD64)5 | 2 CPU cores | 4 CPU cores | 6 CPU cores | 8 CPU cores | 12 CPU cores
RAM6 | 4 GB | 8 GB | 16 GB | 24 GB | 48 GB
Recommended Java heap size7 | 2 GB | 4 GB | 8 GB | 12 GB | 16 GB
Disk space for application installation ($NnmInstallDir) | 3 GB | 3 GB | 3 GB | 3 GB | 3 GB
Disk space for database and data during execution ($NnmDataDir) | 10 GB | 30 GB | 40 GB | 60 GB | 80 GB

  • 1To view discovered object counts and polled object counts, see the Database, State Poller, and Custom Poller tabs in the Help → System Information window.
  • 2The number of hypervisors (for example, VMware ESXi hosts) managed through a Web Agent. This number is included in the total number of discovered nodes.

  • 3The number of VMs managed through a Web Agent. This number is included in the total number of discovered nodes.
  • 4 This applies to Custom Polled Instances for Custom Poller "Instance" collection.

  • 5See Hardware Requirements for processor recommendations.
  • 6If you are running additional applications, increase resources appropriately.
  • 7These recommendations are based on the environment size and polled object counts stated in this table.  Polling fewer of a given object type might use less Java heap.  Polling more of a given object type might require increased Java heap size as well as approval.
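The tier boundaries in the table above can be expressed as a small lookup, useful for sanity-checking a planned deployment. This is an illustrative sketch; the function and constant names are not part of any NNMi tooling, only the node-count thresholds come from the table.

```python
# Sketch: map a discovered-node count to the NNMi management tier
# defined in the table above. Thresholds come from the
# "Total number of discovered nodes" column; names are illustrative.
TIERS = [
    (250, "Entry"),
    (3_000, "Small"),
    (8_000, "Medium"),
    (18_000, "Large"),
    (30_000, "Very Large"),
]

def nnmi_tier(discovered_nodes: int) -> str:
    for limit, name in TIERS:
        if discovered_nodes <= limit:
            return name
    # Environments beyond the Very Large tier need additional approval.
    raise ValueError("beyond Very Large tier: requires additional approval")

print(nnmi_tier(5_000))  # Medium
```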

The following table describes the hardware recommendations for a global network management (GNM) environment.

Global Network Management Environment
Managed Environment Recommended Hardware Size
Approximate managed environment Number of regionally managed nodes1 Number of Hypervisors2 Number of VMs3 Number of regional managers Number of Custom-Polled Objects via the Regional Manager as a Regional Proxy Number of concurrent users CPU (64-bit) x86-64 or AMD644 RAM Recommended Java heap size Disk space for application installation ($NnmInstallDir) Disk space for database and data during execution ($NnmDataDir)
Medium Global Manager 25k - 40k 500 10000 Up to 30 50k 20 8 CPU cores 24 GB 12 GB 3 GB 60 GB
Large Global Manager 40k - 80k 1000 20000 Up to 30 100k 40 12 CPU cores 48 GB 16 GB 3 GB 80 GB

  • 1To view discovered object counts and polled object counts, see the Database, State Poller, and Custom Poller tabs in the Help → System Information window.
  • 2The number of hypervisors (for example, VMware ESXi hosts) managed through a Web Agent. This number is included in the total number of discovered nodes.
  • 3The number of VMs managed through a Web Agent. This number is included in the total number of discovered nodes.
  • 4See Hardware Requirements for processor recommendations.

Performance and Sizing Recommendations for the NNM iSPI Performance for Metrics

Follow the table below to determine the management tier of NPS.

 Management Tiers of NPS
Management Tier of NPS Management Tier of NNMi Management Tier of the NNM iSPI Performance for QA NNM iSPI Performance for Traffic
Small Entry None None
Small Entry Small None
Medium Entry None Entry
Medium Entry Small Entry
Medium Entry Small Small
Medium Entry Medium None
Medium Entry Medium Entry
Medium Entry Medium Small
Medium Small None None
Medium Small None Entry
Medium Small None Small
Medium Small None Medium
Medium Small Small None
Medium Small Small Entry
Medium Small Small Small
Medium Small Small Medium
Medium Small Medium None
Medium Small Medium Entry
Medium Small Medium Small
Medium Medium None None
Medium Medium None Entry
Medium Medium Small None
Large Entry None Medium
Large Entry Small Medium
Large Entry Medium Medium
Large Medium None Small
Large Medium Small Entry
Large Medium Small Small
Large Medium Medium None
Large Medium Medium Entry
Large Medium Medium Small
Large Large None None
Large Large None Entry
Large Large None Small
Large Large Small None
Large Large Small Entry
Large Large Small Small
Large Large Medium None
Large Large Medium Entry
Large Large Medium Small
Very Large Small Large None
Very Large Small Large Entry
Very Large Small Large Small
Very Large Small Large Medium
Very Large Small Medium Medium
Very Large Small None Medium
Very Large Medium Small Medium
Very Large Medium Medium Medium
Very Large Medium Large None
Very Large Medium Large Entry
Very Large Medium Large Small
Very Large Medium Large Medium
Very Large Large None Medium
Very Large Large Small Medium
Very Large Large Medium Medium
Very Large Large Large None
Very Large Large Large Entry
Very Large Large Large Small
Very Large Large Large Medium
Very Large Very Large None None
Very Large Very Large None Entry
Very Large Very Large None Small
Very Large Very Large None Medium
Very Large Very Large Small None
Very Large Very Large Small Entry
Very Large Very Large Small Small
Very Large Very Large Small Medium
Very Large Very Large Small Large
Very Large Very Large Medium None
Very Large Very Large Medium Entry
Very Large Very Large Medium Small
Very Large Very Large Medium Medium
Very Large Very Large Large None
Very Large Very Large Large Entry
Extra Large Entry None Large
Extra Large Entry Small Large
Extra Large Entry Medium Large
Extra Large Small Large Large
Extra Large Small None Large
Extra Large Small Small Large
Extra Large Small Medium Large
Extra Large Small Large Large
Extra Large Medium None Large
Extra Large Medium Medium Large
Extra Large Medium Large Large
Extra Large Large None Large
Extra Large Large Small Large
Extra Large Large Medium Large
Extra Large Large Large Large
Extra Large Very Large None Large
Extra Large Very Large Small Large
Extra Large Very Large Medium Large
Extra Large Very Large Large Small
Extra Large Very Large Large Medium
Extra Large Very Large Large Large

For the Extra Large tier, use a distributed deployment of NPS.

Follow the table below to determine the size of the system when NPS is installed on the NNMi management server.

Recommended Hardware Size for Same Server Installations
Tier Number of CPUs (cores) RAM Disk Space Disk Hardware Additional Disk Space Retention = R14/H70/D800a Additional Disk Space Retention = R70/H70/D800b Additional Disk Space Retention = R70/H400/D800c
Entry 8 CPU 16 GB 15 GB 1 SCSI or SATA disk drive 200 GB 300 GB 300 GB
Small 8 CPU 24 GB 15 GB 1 SCSI or SATA disk drive 300 GB 400 GB 1 TB
Medium 12 CPU 48 GB 15 GB RAID 1+0 or 5/6 with write cache recommended 800 GB 1.5 TB 4 TB
Large 24 CPU 96 GB 15 GB High performance SAN storage 2 TB 3 TB 10 TB

a Data retention is configured to 14 days of as polled, 70 days of hourly grain, 800 days of daily grain (R14/H70/D800)

b Data retention is configured to 70 days of as polled, 70 days of hourly grain, 800 days of daily grain (R70/H70/D800)

c Data retention is configured to 70 days of as polled, 400 days of hourly grain, 800 days of daily grain (R70/H400/D800)
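The retention shorthand used in these column headings (for example, R70/H400/D800) can be decoded mechanically. The following is an illustrative sketch; the function and field names are chosen for this example and are not part of NPS.

```python
# Sketch: decode the NPS retention shorthand used above, e.g.
# "R70/H400/D800" = 70 days as-polled, 400 days hourly grain,
# 800 days daily grain. Field names are illustrative.
import re

def parse_retention(code: str) -> dict:
    m = re.fullmatch(r"R(\d+)/H(\d+)/D(\d+)", code)
    if not m:
        raise ValueError(f"unrecognized retention code: {code}")
    as_polled, hourly, daily = map(int, m.groups())
    return {"as_polled_days": as_polled,
            "hourly_days": hourly,
            "daily_days": daily}

print(parse_retention("R70/H400/D800"))
# {'as_polled_days': 70, 'hourly_days': 400, 'daily_days': 800}
```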

Follow the table below to determine the size of the system when NPS is installed on a separate, dedicated server.

Dedicated NPS: Recommended Hardware Size
Managed Environment Recommended Hardware Size
Tier Number of concurrent users Number of CPUs (cores) RAM (in GB) Disk space in NPS installation directory (in GB) Disk hardware for the NPS data directory Additional disk space Retention = R14/H70/D800a Additional disk space Retention = R70/H70/D800b Additional disk space Retention = R70/H400/D800c
Small 10 8 16 10 1 SCSI or SATA disk drive 300 GB 400 GB 1 TB
Medium 25 8 32 10 RAID 1+0 or 5/6 with write cache recommended 800 GB 1.5 TB 4 TB
Large 40 16 64 10 RAID 1+0 or 5/6 with write cache recommended 2 TB 3 TB 10 TB
Very large 40 32 160 10 High performance SAN storage 4 TB 8 TB 20 TB

a Data retention is configured to 14 days of as polled, 70 days of hourly grain, 800 days of daily grain (R14/H70/D800)

b Data retention is configured to 70 days of as polled, 70 days of hourly grain, 800 days of daily grain (R70/H70/D800)

c Data retention is configured to 70 days of as polled, 400 days of hourly grain, 800 days of daily grain (R70/H400/D800)

Follow the table below to determine the size of the system when NPS is deployed in a distributed environment (required for the Extra Large Tier).

Recommended Hardware Size for NPS in a Distributed Deployment
Server Role Recommended Hardware Size
Number of CPUs (cores) RAM (in GB) Disk space in NPS installation directory (in GB) Disk hardware for the NPS data directory Additional disk space Retention = R14/H70/D800a Additional disk space Retention = R70/H70/D800b Additional disk space Retention = R70/H400/D800c
DB Server 32 64 10 High performance SAN storage 4 TB 8 TB 20 TB
UiBi Server 16 32 10 1 SCSI or SATA disk drive Not Applicable Not Applicable Not Applicable
ETL Server 32 48 10 1 SCSI or SATA disk drive Not Applicable Not Applicable Not Applicable

a Data retention is configured to 14 days of as polled, 70 days of hourly grain, 800 days of daily grain (R14/H70/D800)

b Data retention is configured to 70 days of as polled, 70 days of hourly grain, 800 days of daily grain (R70/H70/D800)

c Data retention is configured to 70 days of as polled, 400 days of hourly grain, 800 days of daily grain (R70/H400/D800)

Performance and Sizing Recommendations for the NNM iSPI Performance for QA

The following table shows the hardware sizing details for each management tier of the NNM iSPI Performance for QA. These requirements are in addition to the requirements for the NNMi management server.

 
Managed Environment Recommended Hardware Size
Managed environment tier Max number of probes Max number of QoS interfaces Max number of iRA probes Max number of ping latency pairs Max number of combined probes, iRA probes, and QoS interfaces CPU (64-bit) x86-64 AMD64 RAM Recommended Java heap size1 Disk space for application installation ($NnmInstallDir) Disk space for operational database (data during execution) ($NnmDataDir)
Small 5,000 2,000 1,500 1,000 5,000 2 CPU Cores 4 GB 3 GB 2 GB 20 GB
Medium 30,000 12,000 10,000 3,000 30,000 4 CPU Cores 8 GB 6 GB 2 GB 60 GB
Large 50,000 20,000 40,000 5,000 50,000 4 CPU Cores 12 GB 8 GB 2 GB 80 GB

1These recommendations are based on the environment size and object counts stated in this table. Polling fewer of a given object type might use less Java heap. Polling more of a given object type might require increased Java heap size as well as approval. For more details, see Tuning the iSPI Memory Size section.

Caution Running the NNM iSPI Performance for QA in an environment where the number of probes, QoS interfaces, or iRA probes far exceeds the supported maximums in the table above can lead to a database deadlock. If you intend to run the NNM iSPI Performance for QA in such an environment, you must configure discovery filters before the first NNM iSPI Performance for QA discovery to bring the number of discovered probes, QoS interfaces, or iRA probes down to the supported limit.

Global Network Management

The following table shows the requirements for the NNM iSPI Performance for QA in a GNM environment. These requirements are in addition to the requirements for NNMi management server.

Global Network Management Environment Size
Managed Environment Recommended Hardware Size
Managed environment tier Number of Regional Services Max number of probes Max number of QoS interfaces Max number of ping latency pairs Max number of combined probes, iRA probes, and QoS interfaces CPU (64-bit) x86-64 AMD64 RAM Recommended Java heap size 1 Disk space for application installation ($NnmInstallDir) Disk space for operational database (data during execution) ($NnmDataDir)
Medium 4 120,000 50,000 10,000 120,000 4 CPU Cores 16 GB 12 GB 2 GB 80 GB
Large 9 250,000 100,000 20,000 250,000 8 CPU Cores 24 GB 20 GB 2 GB 100 GB

1These recommendations are based on the environment size and object counts stated in this table. Polling fewer of a given object type might use less Java heap. Polling more of a given object type might require increased Java heap size as well as approval.

These recommendations only apply to the NNM iSPI Performance for QA running under the default settings. If you intend to run any of the other NNM iSPIs, you must review each iSPI support matrix before determining the hardware you need.

The following are the recommendations for the QA Probes/QoS entities:

  • No more than 5% of probes (of the maximum number of probes that can be configured for that managed environment tier) should have a polling frequency of 1 minute or less. You can associate a maximum of 500 probes with a source site and 500 probes with a destination site.
  • To implement QoS management, we recommend an average ratio of 1:5 between QoS interfaces and the combined count of QoS policies, classes, and actions.
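These two recommendations can be checked programmatically when planning a QA deployment. The sketch below is illustrative only: the function name and the issue strings are invented for this example; the 5% and 1:5 limits come from the recommendations above.

```python
# Sketch: validate a planned QA configuration against the
# recommendations above. Names and messages are illustrative.
def check_qa_plan(max_probes_for_tier, fast_polled_probes,
                  qos_interfaces, qos_objects):
    issues = []
    # No more than 5% of the tier's maximum probes should be
    # polled at a frequency of 1 minute or less.
    if fast_polled_probes > 0.05 * max_probes_for_tier:
        issues.append("too many probes polled at <= 1 minute")
    # Average of at most 5 QoS policies/classes/actions per QoS interface.
    if qos_interfaces and qos_objects / qos_interfaces > 5:
        issues.append("QoS object-to-interface ratio exceeds 1:5")
    return issues

# Medium tier (30,000 max probes), 2,000 fast-polled probes,
# 1,000 QoS interfaces carrying 6,000 QoS objects:
print(check_qa_plan(30_000, 2_000, 1_000, 6_000))
```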

Intelligent Response Agent

You can configure up to 2500 iRA probes for each iRA instance. The number of HTTP/HTTPS probes must not exceed 10% of the maximum supported iRA probes. You can see iRA-based probes in the Probes inventory in the Quality Assurance workspace.
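The per-instance limits above (2500 probes, HTTP/HTTPS probes at most 10% of that maximum) can be expressed as a simple check. This is a sketch; the constants mirror the stated limits and the function name is illustrative.

```python
# Sketch: check an iRA instance against the limits stated above:
# at most 2500 probes per instance, and HTTP/HTTPS probes no more
# than 10% of the maximum supported iRA probes (that is, 250).
MAX_IRA_PROBES = 2500
MAX_HTTP_FRACTION = 0.10

def ira_within_limits(total_probes: int, http_probes: int) -> bool:
    return (total_probes <= MAX_IRA_PROBES
            and http_probes <= MAX_HTTP_FRACTION * MAX_IRA_PROBES)

print(ira_within_limits(2000, 200))  # True
print(ira_within_limits(2000, 400))  # False
```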

The following table shows the requirements for the Intelligent Response Agent (iRA) only when it is hosted on an independent server:

Intelligent Response Agent Minimum Hardware System Requirements1
CPU (64-bit) x86-64 AMD64 RAM
2 CPU Cores 500 MB

1There is no additional hardware requirement for the iRA when it is hosted on a server along with NNMi, the NNM iSPI Performance for QA, and any other iSPI.

Performance and Sizing Recommendations for the NNM iSPI Performance for Traffic

Below are the different tiers of managed network environments and the hardware required by the NNM iSPI Performance for Traffic to support these environments. The tables below list the hardware resource requirements for the NNM iSPI Performance for Traffic collectors. The resource requirements published in this section are valid for the NNM iSPI Performance for Traffic deployed on physical servers. For the resource requirements for deploying the NNM iSPI Performance for Traffic on virtual machines, see NNM iSPI Performance for Traffic in a Virtual Environment.

Key factors considered while determining these requirements are:

  • Total number of flow records per minute in the entire managed environment (that is, the sum of the traffic flow records per minute)
  • Total number of interfaces that are being used by all the routers in the environment to export flows

Interface Traffic data is supported for up to 480k unique flow records per minute at the Master Collector. Disable the data generation for the Interface Traffic reports in medium and large tiers for optimum performance.

Size of the Master Collector System
Managed Environment Master Collector Minimum Hardware System Requirements NPS Requirements
Environment Tier Flow Records per Minute (max) Active Flow-Exporting Interfaces (max) Recommended Number of Leaf Collector Systems CPU (64-bit) x86-64/AMD64 RAM Xmx Installation Directory Space (<TrafficInstallDir>)3 Data Directory Space (<TrafficDataDir>)4 Queue Size5 CPU (64-bit) x86-64/AMD64 RAM Disk Space in the NPS Database
Entry1 60K 50 Master and Leaf Collectors are co-located on the same system 4 CPU cores 6 GB Leaf: 1.5 GB, Master: 3 GB 1.5 GB 8 GB Not Applicable 4 CPU cores 16 GB 1 TB
Small 250K 200 1 4 CPU cores 8 GB 6 GB 1.5 GB 8 GB 600000 8 CPU cores 32 GB 2 TB
Medium2 6mil 1000 4 4 CPU cores 16 GB 12 GB 1.5 GB 32 GB 3000000 16 CPU cores 48 GB 5 TB
Large 20mil 4000 8 8 CPU cores 24 GB 16 GB 1.5 GB 64 GB 5000000 24 CPU cores 64 GB 12 TB
  • 1The entry tier specifications assume that the Master and Leaf Collectors are colocated on a single system. The total number of CPUs (4) and memory requirements (6 GB) are the cumulative resource requirements for one Master and one Leaf Collector on the system.
  • 2When the raw data collection is enabled, the maximum flow rate for the Medium tier is 600K flow records per minute.
  • 3<TrafficInstallDir> is configured during installation of the NNM iSPI Performance for Traffic (Master Collector or Leaf Collector) on Windows (C:\Program Files (x86)\HP\HP BTO Software by default). You can refer to the NNM iSPI Performance for Traffic Interactive Installation Guide to configure these parameters.
  • 4<TrafficDataDir> is configured during installation of the NNM iSPI Performance for Traffic (Master Collector or Leaf Collector) on Windows (C:\ProgramData\HP\HP BTO Software by default).
  • 5Queue size is the value of the nms.traffic-master.maxflowrecord.inqueue property in the nms-traffic-master.address.properties file on Master Collector. After installing the NNM iSPI Performance for Traffic, set the Queue size to the value recommended in the above table.
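Footnote 5 describes setting a key in a Java properties file. The sketch below shows one way to make that edit safely; the file path varies by installation and is passed in as a parameter, and the helper name is illustrative.

```python
# Sketch: set nms.traffic-master.maxflowrecord.inqueue in
# nms-traffic-master.address.properties to the recommended queue size.
# The file location depends on the installation; the caller supplies it.
from pathlib import Path

def set_queue_size(properties_path: str, queue_size: int) -> None:
    path = Path(properties_path)
    key = "nms.traffic-master.maxflowrecord.inqueue"
    lines = path.read_text().splitlines() if path.exists() else []
    out, replaced = [], False
    for line in lines:
        # Replace the property if present, keep everything else as-is.
        if line.strip().startswith(key + "="):
            out.append(f"{key}={queue_size}")
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append(f"{key}={queue_size}")
    path.write_text("\n".join(out) + "\n")
```

For example, after installing for the Small tier you would call `set_queue_size(path, 600000)` with the path to the Master Collector's properties file, then restart the Master Collector so the new value takes effect.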

The following table lists the resource requirements for a single Leaf Collector. The key factor considered for sizing is the total number of flow records per minute being processed by the individual Leaf Collector instance. All the data presented in this table was derived from testing done on systems with Intel 64-bit (x86-64) processors.

Size of the Leaf Collector System
Leaf Collector Types Based on Sizing Flow Records per Minute to the Leaf Collector (Maximum) Number of Flow-Exporting Interfaces (Maximum) CPU RAM Xmx Installation Directory Space(<Traffic InstallDir>) Data Directory Space (<Traffic DataDir>) Flow Record Pool Size1 TopN Flow Record Pool Size2
Type 1: Use at least one Leaf Collector of this size for the Small tier. 250K 200 4 CPU cores 4 GB (1066 MHz or higher) 3 GB 1.5 GB 8 GB 600000 2000000
Type 2: Use at least four Leaf Collectors of this size for the Medium tier. 1.5mil 500 4 CPU cores 16 GB (1066 MHz or higher) 12 GB 1.5 GB 32 GB 3000000 5000000
Type 3: Use at least eight Leaf Collectors of this size for the Large tier. 2.5mil 800 8 CPU cores 24 GB (1066 MHz or higher) 20 GB 1.5 GB 64 GB 5000000 8000000
  • 1FlowRecord Pool Size is the value of the flowrecord.pool.size property in the nms-traffic-leaf.address.properties file on each Leaf Collector.
  • 2TopN Flow Record Pool Size is the value of the topn.flowrecord.pool.size property in the nms-traffic-leaf.address.properties file on each Leaf Collector.

While planning to install Leaf Collectors, follow these guidelines:

  • Make sure that the -Xms value is three-fourths of the -Xmx value.
  • The total number of Leaf Collectors in a deployment depends on the total number of flow records per minute in the environment, the total number of flow-exporting interfaces, and the processing capability of each Leaf Collector system.
  • It is recommended that you do not configure multiple Leaf Collector instances on a single Leaf Collector system.
  • If the incoming flow rate is higher than the supported flow rate of the managed environment tier, the Leaf Collector may drop flow records, which may lead to data loss.
  • NetFlow records from some routers contain the interface index of interfaces for which NetFlow is not configured. With NetFlow v5 exports, these interface indexes cause additional data processing on the NNM iSPI Performance for Traffic. Therefore, it is recommended that you use NetFlow v9 or Flexible NetFlow, which includes the flow direction field.
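The -Xms guideline above is a direct calculation; a minimal sketch (values in megabytes, function name illustrative):

```python
# Sketch: derive the -Xms setting from -Xmx per the guideline above
# (-Xms should be three-fourths of -Xmx). Values are in megabytes.
def xms_from_xmx(xmx_mb: int) -> int:
    return (3 * xmx_mb) // 4

# A Type 2 Leaf Collector uses an -Xmx of 12 GB (12288 MB):
print(xms_from_xmx(12288))  # 9216
```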

The NNM iSPI Performance for Traffic is tested to work optimally with the numbers of sites, TOS groups, and thresholds listed in the table below.

Maximum Sites, TOS Groups, and Thresholds
Management Environment Tier Maximum Number of Sites Maximum Number of TOS Groups Maximum Number of Thresholds
Entry 10 5 5
Small 20 5 5
Medium 30 10 10
Large 40 20 20

The test environment used to derive the above numbers had only the following non-default features enabled: sites, TOS groups, and thresholds.

The NNM iSPI Performance for Traffic is tested with:

  • Five sites for each Leaf Collector in a Large tier
  • 15 sites for each Leaf Collector in a Medium tier
  • 20 sites for each Leaf Collector in a Small tier

You may experience data processing issues if the ratio of sites to Leaf Collectors exceeds these values.

The NNM iSPI Performance for Traffic is tested in an environment where flow records do not contain VLAN IDs.

NNM iSPI Performance for Traffic in a Virtual Environment

The tables below list the hardware resource requirements for the collectors installed in a virtual environment.

Key factors considered while determining these requirements are:

  • Total number of flow records per minute in the entire managed environment (that is, the sum of the traffic flow records per minute)
  • Total number of interfaces that are being used by all the routers in the environment to export flows

These requirements are valid for all guest operating systems supported by the NNM iSPI Performance for Traffic.

Table: Master Collector
Managed Environment Size Master Collector Minimum Hardware System Requirements Queue Size
Tier Total Flow Records per Minute (Maximum) Total Active Flow-Exporting Interfaces (Maximum) Recommended Number of Leaf Collector Systems CPU RAM -Xmx Disk space for application installation (<TrafficInstallDir>) Disk space for data during execution (<TrafficDataDir>) Queue Size
Entry 60K 50 Master and Leaf Collectors are co-located on the same system 4 CPU cores 6 GB Leaf: 1.5 GB, Master: 3 GB 1.5 GB 8 GB Not Applicable
Small 250K 200 1 4 CPU cores 8 GB 6 GB 1.5 GB 8 GB 600000
Medium 5M 1000 4 8 CPU cores 16 GB 12 GB 1.5 GB 42 GB 3000000

The entry tier specifications assume that the Master and Leaf Collectors are co-located on a single system. The hardware requirements are the cumulative resource requirements for one Master and one Leaf Collector on the system.

When the data collection for the Interface Traffic reports is enabled, the maximum flow rate for the medium tier is 600K flow records per minute.

Table: Leaf Collector
Leaf Collector Types Based on Sizing Flow Records per Minute to the Leaf Collector (Maximum) Number of Flow-Exporting Interfaces (Maximum) CPU RAM -Xmx Disk space for application installation (<TrafficInstallDir>) Disk space for data during execution (<TrafficDataDir>) FlowRecord Pool Size TopN Flow Record Pool Size
Type 1 - use at least one Leaf Collector of this size for the small tier. 250k 200 4 CPU cores 4 GB (1066 MHz or higher) 3 GB 1.5 GB 8 GB 600000 2000000
Type 2 - use four Leaf Collectors of this size for the medium tier. 1.25M 500 4 CPU cores 16 GB (1066 MHz or higher) 12 GB 1.5 GB 32 GB 3000000 5000000

Performance and Sizing Recommendations for the NNM iSPI for MPLS

The tables below list the hardware resource requirements for the NNM iSPI for MPLS.

 
Managed Environment Recommended Hardware Size
Managed environment tier Number of discovered VRFs Number of discovered TE Tunnels Number of discovered PseudoWire VCs Number of Monitored LSPs Preferred NNMi Scale Additional CPUs Additional RAM Recommended Java heap size Additional Disk space for database and data during execution (<NnmDataDir>)
Small Up to 2K Up to 500 Up to 500 Up to 50 Low 1 CPU core 4 GB Default value 10 GB
Medium Up to 8K 500-1000 500-1000 Up to 100 Low or Medium 1 CPU core 8 GB 4 GB 20 GB
Large 8K-23K Up to 2K Up to 2K Up to 200 Large 2 CPU cores 16 GB 8 GB 20 GB

Physical RAM should be twice the combined Xmx values of NNMi and the available NNM iSPIs on the management server.
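That rule is easy to compute when planning the management server; a minimal sketch (function name illustrative, values in GB):

```python
# Sketch: physical RAM should be at least twice the combined -Xmx
# values of NNMi and the installed iSPIs, per the rule above.
def min_physical_ram_gb(xmx_values_gb):
    return 2 * sum(xmx_values_gb)

# Example: NNMi with an 8 GB heap plus the NNM iSPI for MPLS at 4 GB:
print(min_physical_ram_gb([8, 4]))  # 24
```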

To calculate the disk space required for MPLS reports, use the number of VRFs. Treat the VRF count as the polled object count, and apply the per-polled-object disk space sizing from Performance and Sizing Recommendations for the NNM iSPI Performance for Metrics to the number of MPLS objects (VRFs).

Global Network Management

The tables below list the hardware resource requirements for the NNM iSPI for MPLS installed in a GNM environment.

NNM iSPI for MPLS Global Network Management
Managed Environment Recommended Hardware Size
Managed environment tier Number of discovered VRFs1 Number of discovered TE Tunnels Number of PseudoWires VCs Number of Monitored LSPs Number of regional managers Additional CPUs Additional RAM Recommended Java heap size Additional Disk space for database and data during execution (<NnmDataDir>)
Medium Global Manager Up to 36K Up to 3K Up to 3600 Up to 600 Up to 3 4 cores 12 GB 6 GB 30 GB
Large Global Manager Up to 96K Up to 6K Up to 7200 Up to 1200 Up to 6 8 cores 24 GB or more 8 GB 40 GB

1Total number of discovered VRFs in the GNM topology. The count of discovered VRFs includes the number of VRFs from all the Regional Managers.

Performance and Sizing Recommendations for the NNM iSPI for IP Multicast

The tables below list the hardware resource requirements for the NNM iSPI for IP Multicast.

IP Multicast Managed Environment Size
Managed Environment Recommended Hardware Size
Managed environment tier Number of discovered IP multicast nodes Number of discovered IP multicast interfaces Number of active flows1 CPU RAM Disk space for application installation (<NnmInstallDir>) Disk space for database and data during execution (<NnmDataDir>) Disk space for the IP Multicast report data during execution2 (<NnmDataDir>)
Medium Up to 500 Up to 4K Up to 100 2 CPU 4 GB 1 GB 10 GB 30 GB
Large Up to 2000 Up to 16K Up to 200 4 CPU 8 GB 1 GB 20 GB 200 GB

1 Active Flows - Number of unique multicast groups across all the multicast-enabled nodes in the network.

2This is the additional disk space required for the NNM iSPI for IP Multicast reports.

Global Network Management

The NNM iSPI for IP Multicast hardware requirements mentioned in the following table are in addition to the NNMi hardware requirements. Make sure that the physical RAM is twice the combined Max Java Heap size (Xmx) values of the NNM iSPI for IP Multicast and NNMi.

Global Network Management Recommendations
Management Environment Size Recommended Hardware Size
Approximate managed environment tier Number of managed nodes Number of regional managers CPU (64-bit) x86-64 AMD64 RAM Recommended Java heap size (see Tuning the iSPI Memory Size) Disk space for application installation (<NnmInstallDir>) Disk space for database and data during execution (<NnmDataDir>) Disk space for the IP Multicast report data during execution1 (<NnmDataDir>)
Global Manager 6K Up to 4 4 CPU cores 12 GB 6 GB (-Xmx6g) 1 GB 30 GB 360 GB

1This is the additional disk space required for the IP Multicast reports.

Performance and Sizing Recommendations for the NNM iSPI for IP Telephony

This section provides the tier definition and hardware sizing details of the NNM iSPI for IP Telephony.

IP Telephony: Cisco, Avaya, Nortel, or Acme

The following tables contain environment sizes and minimum hardware requirements for monitoring the Cisco, Avaya, or Nortel IP Telephony, or the Acme Session Border Controller.

Management Environment Tier-Cisco, Avaya, Nortel, or Acme
Managed Environment Recommended Hardware Size
Managed environment tier Number of Directory Numbers (DNs) Number of Sessions (Through Session Border Controllers)2 Number of discovered NNMi nodes that are not hosting IP Phones CPU RAM Java heap size for the NNM iSPI for IP Telephony Java heap size for NNMi Disk space for NNM iSPI for IP Telephony Installation with an Incremental Demand on (<NnmInstallDir>) Total Disk Space for Database and Data During Execution (<NnmDataDir>)2
Entry Up to 500 Up to 500 Up to 100 2 CPU Cores 9 GB 3 GB (-Xmx3g) 3 GB (-Xmx3g) 1 GB 20 GB
Small Up to 3K Up to 3K Up to 500 4 CPU Cores 10 GB 4 GB (-Xmx4g) 4 GB (-Xmx4g) 1 GB 40 GB
Medium1 Up to 10K Up to 10K Up to 1.5K 4 CPU Cores 18 GB 8 GB (-Xmx8g) 8 GB (-Xmx8g) 1 GB 80 GB
Large1 Up to 30K Up to 30K Up to 3K 8 CPU Cores 28 GB 12 GB (-Xmx12g) 12 GB (-Xmx12g) 1 GB 120 GB
Very Large1 Up to 50K Up to 50K Up to 4K 8 CPU Cores 40 GB 16 GB (-Xmx16g) 16 GB (-Xmx16g) 1 GB 160 GB
  • 1Also assumes up to 30K, 75K, and 100K CDRs per hour collected and processed by the NNM iSPI for IP Telephony for the medium tier, large tier, and very large tier respectively.
  • 2Indicates the concurrent SIP sessions through an Acme Session Director.

IP Telephony: Microsoft or Acme

The following table contains environment sizes and minimum hardware requirements for Monitoring the Microsoft IP Telephony or the Acme Session Border Controller.

Management Environment Tier-Microsoft or Acme
Managed Environment Recommended Hardware Size
Managed environment tier Number of discovered Lync End Users Number of Sessions (Through Session Border Controllers)2 Number of discovered Gateways Number of discovered Lync Servers Number of discovered Lync Sites CPU RAM Java heap size for the NNM iSPI for IP Telephony Java heap size for NNMi Disk space for NNM iSPI for IP Telephony Installation with an Incremental Demand on (<NnmInstallDir>) Total Disk Space for Database and Data During Execution (<NnmDataDir>)2
Entry Up to 200 Up to 200 1 10 1 Central Site 2 CPU Cores 8 GB 2 GB (-Xmx2g) 2 GB (-Xmx2g) 1 GB 20 GB
Small Up to 1K Up to 1K Up to 11 Up to 50 1 Central Site and 10 Branches 4 CPU Cores 12 GB 3 GB (-Xmx3g) 4 GB (-Xmx4g) 1 GB 40 GB
Medium1 Up to 5K Up to 5K Up to 110 Up to 500 1 Central Site and 100 Branches 4 CPU Cores 16 GB 4 GB (-Xmx4g) 6 GB (-Xmx6g) 1 GB 80 GB
Large1 Up to 10K Up to 10K Up to 550 Up to 1K 5 Central Sites and 500 Branches 8 CPU Cores 24 GB 6 GB (-Xmx6g) 8 GB (-Xmx8g) 1 GB 120 GB
Very Large1 Up to 50K Up to 50K Up to 1100 Up to 3K 10 Central Sites and 1000 Branches 8 CPU Cores 32 GB 8 GB (-Xmx8g) 10 GB (-Xmx10g) 1 GB 160 GB
  • 1Also assumes up to 30K, 75K, and 100K CDRs per hour collected and processed by the NNM iSPI for IP Telephony for the medium tier, large tier, and very large tier respectively.
  • 2This does not include the disk space to retain the data for reporting.

Global Network Management

IP Telephony: Cisco, Avaya, Nortel, or Acme

The following tables contain the sizing details of the global manager instance of the NNM iSPI for IP Telephony for monitoring the Cisco, Avaya, or Nortel IP Telephony, or the Acme Session Border Controller.

Managed Environment Size-Cisco, Avaya, Nortel, or Acme
Managed Environment Recommended Hardware Size
Approximate managed environment tier Number of regionally managed Directory Numbers (DNs) Number of Sessions (Through Session Border Controllers)2 Number of regionally managed nodes that are not hosting IP Phones Number of regional managers Number of concurrent NNMi users CPU RAM Java heap size for the NNM iSPI for IP Telephony3 Java heap size for NNMi Disk space for NNM iSPI for IP Telephony Installation with an Incremental Demand on (<NnmInstallDir>) Total Disk Space for Database and Data During Execution (<NnmDataDir>)4
Medium1 Up to 150K Up to 150K Up to 15K Up to 30 20 4 CPU cores 32 GB 16 GB (-Xmx16g) 12 GB (-Xmx12g) 1 GB 160 GB
Large1 Up to 250K Up to 250K Up to 20K Up to 30 40 8 CPU cores 48 GB 24 GB (-Xmx24g) 16 GB (-Xmx16g) 1 GB 320 GB
  • 1Also assumes a total of 375K and 500K CDRs per hour collected across regional manager instances of the NNM iSPI for IP Telephony and forwarded to a single global manager instance of the NNM iSPI for IP Telephony for the medium tier and large tier respectively.
  • 2Indicates the concurrent SIP sessions through an Acme Session Director.
  • 3To tune Java heap size, see the Tuning the iSPI Memory for the NNM iSPI for IP Telephony.
  • 4This does not include the disk space to retain the data for reporting.

IP Telephony: Microsoft or Acme

The following tables contain the sizing details of the global manager instance of the NNM iSPI for IP Telephony for monitoring the Microsoft IP Telephony or the Acme Session Border Controller.

Managed Environment Size-Microsoft and Acme
Managed environment tier Number of regionally managed Lync End Users Number of Sessions (Through Session Border Controllers)2 Number of regionally managed Servers and Gateways Number of regional managers Number of concurrent NNMi users
Medium1 Up to 150K Up to 150K Up to 3K Up to 30 20
Large1 Up to 250K Up to 250K Up to 5K Up to 30 40
  • 1Also assumes a total of 375K and 500K CDRs per hour collected across regional manager instances of the NNM iSPI for IP Telephony and forwarded to a single global manager instance of the NNM iSPI for IP Telephony for the medium tier and large tier respectively.
  • 2Indicates the concurrent SIP sessions through an Acme Session Director.
IP Telephony Minimum Hardware System Requirements for the Managed Environment Size
Approximate managed environment tier CPU (64-bit) x86-64/AMD64 RAM Java heap size for the NNM iSPI for IP Telephony2 Java heap size for NNMi Disk space for NNM iSPI for IP Telephony Installation with an Incremental Demand on (<NnmInstallDir>) Total Disk Space for Database and Data During Execution (<NnmDataDir>)3
Medium1 8 CPU cores 24 GB 12 GB (-Xmx12g) 8 GB (-Xmx8g) 1 GB 160 GB
Large1 8 CPU cores 32 GB 16 GB (-Xmx16g) 12 GB (-Xmx12g) 1 GB 320 GB
  • 1Also assumes a total of 375K and 500K CDRs per hour collected across regional manager instances of the NNM iSPI for IP Telephony and forwarded to a single global manager instance of the NNM iSPI for IP Telephony for the medium tier and large tier respectively.
  • 2To tune Java heap size, see the Tuning the iSPI Memory for the NNM iSPI for IP Telephony.
  • 3This does not include the disk space to retain the data for reporting.