Check network requirements

This section discusses the network requirements within a facility, open ports required for Core Components, and name resolution requirements. These requirements must be met for Primary Core, Secondary Core, and Satellite installations.

Network requirements within a facility

Before running the Installer, your network environment must meet the following requirements:

  • It is recommended that all SA Core Servers be on the same Local Area Network (LAN or VLAN). If Core Servers are placed on different subnets, network latency between them can degrade performance.
  • There must be full network connectivity between all SA Core Servers and the servers that the SA Core will manage.
  • Core Servers expect user accounts to be managed locally and cannot use the Network Information Service (NIS) directory to retrieve password and group information. During installation of the Core Components, the installer checks for the existence of certain target accounts before creating them. If you are using NIS, this check will fail.
  • The Software Repository requires a Linux Network File System (NFS) server.
  • When using network storage for Core Components, such as the Software Repository or SA Provisioning Media Server, you must ensure that the root user has write access over NFS to the directories where the components will be installed (a sketch of such an NFS export follows this list).
  • The speed and duplex mode of the Core’s and Managed Servers’ NIC adapters must match those of the switch ports they are connected to. A mismatch causes poor network performance between the Core and Managed Servers.
  • On any given Core Server, having multiple interfaces that reside on the same subnet is an unsupported configuration. If the Slice Component bundle server has multiple interfaces, the active interfaces MUST reside on separate subnets.
  • Firewall and network settings on the SA Core host servers, such as restrictive Linux iptables rules, can affect the accessibility of the network ports used by the SA Client. Ensure that these operating system and network settings allow the required SA Client access.

  • If the net.ipv6.conf.<interface>.disable_ipv6 kernel parameter is set to 1 on an interface, IPv6 is disabled on that interface. If IPv6 is disabled on all network interfaces except the loopback interface, httpsProxy will not start.

  • The SA gateway only supports tunneling to port 443. You may need to change the gateway configuration to allow tunneling to other ports if you are:
    • Using iLO on other ports.
    • Integrating with a vCenter server that is on a port other than port 443.
    • Integrating with an OpenStack deployment. In this case, you need to allow tunneling to ports 5000, 8774, and 8776, or to the custom ports for your deployment.

      For more information, see the Virtualization Service Tasks section in Virtualization management.
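
For the NFS write-access requirement above, the following is a minimal sketch of an /etc/exports entry on the NFS server; the export path and client hostname are illustrative placeholders, not values prescribed by SA:

  # /etc/exports on the NFS server (illustrative path and hostname)
  # no_root_squash allows the root user on the Core Server to write as root over NFS
  /var/opt/opsware/word   sa-core1.example.com(rw,no_root_squash,sync)

After editing /etc/exports, run exportfs -ra on the NFS server to apply the change.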

To identify the gateway host, open the opswgw.args file on the managed server used for the iLO or virtualization service integration. The opswgw.args file is located on the managed server at:

  • UNIX/Linux: /etc/opt/opsware/agent
  • Windows: %SystemDrive%\Program Files\Common Files\Opsware\etc\agent
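
For example, you can search the file for gateway-related entries; the exact parameter names in opswgw.args vary by release, so treat this as an illustrative search rather than a definitive key name:

  grep -i gw /etc/opt/opsware/agent/opswgw.args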

In this example, your agent gateway name is opswgw-agws1-TEAL1:

  1. On the gateway host, open the opswgw.custom file.

    The opswgw.custom file is located on the gateway host at:

    • UNIX/Linux: /etc/opt/opsware/opswgw-agws1-TEAL1
    • Windows: %SystemDrive%\Program Files\Common Files\Opsware\etc\opt\opsware\opswgw-agws1-TEAL1
  2. For each port on which you want to allow tunneling (for example, port 5000), add the following new line:

    opswgw.EgressFilter=tcp:*:5000::

  3. Save and close the file.
  4. Restart the agent gateway component on the gateway host by running the following command:

    /etc/init.d/opsware-sas restart opswgw-agws
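
For example, allowing tunneling to the OpenStack ports listed earlier adds one EgressFilter line per port to the same opswgw.custom file (the gateway name opswgw-agws1-TEAL1 carries over from the example above):

  # /etc/opt/opsware/opswgw-agws1-TEAL1/opswgw.custom
  opswgw.EgressFilter=tcp:*:5000::
  opswgw.EgressFilter=tcp:*:8774::
  opswgw.EgressFilter=tcp:*:8776::

After saving the file, restart the agent gateway component as shown in step 4.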

Required open ports

You must configure any firewalls protecting your Core Servers to leave open the ports shown in the following table. Note that the port numbers listed are default values that can be changed during installation, so ensure that you leave the correct ports open.

Open ports on a firewall protecting an SA Core

| Source | Destination | Open Port(s) | Notes |
| --- | --- | --- | --- |
| Management Desktops | Slice Component bundle hosts | 80, 443, 8080 | Required |
| Direct access to Oracle database (reports, troubleshooting, management) | Model Repository (truth) host | 1521* | Strongly recommended to allow Oracle management |
| Management Desktops | Slice Component bundle hosts | 1004, 1018, 1032, 2222, 8061 | [Optional] Useful for troubleshooting; the ports represent spin (1004), way (1018), twist (1032), ogsh/ssh (2222), and tsunami (8061) |
| SA Core (Management Gateway) | SA Core (Management Gateway) | 2001 | Required |
| SA Core (Management Gateway) | SA Core in a different Multimaster Mesh (Management Gateway) | 22, 2003 | [Optional] For scp (default word replication; can be forwarded over the 2001 connection) and as a backup for 2001 if it is busy |
| Slice Component bundles | SA Agents (in same network) | 1002 | Required (only for the Agent Gateway managing the Agent) |
| SA Core (Management Gateway) | Satellite/Gateway | 3001 | Required |
| SA Core hosts | Mail server | 25 | Required for email notifications |
| SA Core hosts | LDAP server | 636 | Required for secure LDAP access; the port can change if you use unsecured LDAP |
| SA Agents | SA Core servers and Satellites managing the agent | 3001 | Required |
| SA Satellite/Gateway | SA Core | 2001 | Required |
| SA Satellite/Gateway | Managed Agents | 1002 | Required |

* Port 1521 is the default Oracle listener (listener.ora) port, but you can specify a different port in your Oracle configuration. If your installation has been modified to use a port other than 1521, verify the port number from the Oracle listener status and ensure that your firewall leaves the correct port open for the Oracle listener.
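
For example, you can confirm the listener port on the Model Repository database host with the Oracle lsnrctl utility (run as the Oracle software owner; the output format varies by Oracle version):

  # Print the listener addresses; look for the PORT= value
  lsnrctl status | grep -i port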

If you have enabled iptables, you must also add exception rules for mountd (TCP/UDP), portmapper (TCP/UDP), and port 4040.

SA’s data access layers (infrastructure) use connection pooling to the database. The connections between the database and the infrastructure layer must be maintained for as long as SA is up and running. Ensure that your firewall is configured so that it does not time out and terminate the connections between the database and the infrastructure layers.
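
A minimal sketch of those iptables exception rules; it assumes rpc.mountd has been pinned to a static port (892 is an arbitrary illustrative choice; see the rpc.mountd note later in this section):

  # portmapper (TCP and UDP)
  iptables -A INPUT -p tcp --dport 111 -j ACCEPT
  iptables -A INPUT -p udp --dport 111 -j ACCEPT
  # rpc.mountd, pinned to an illustrative static port
  iptables -A INPUT -p tcp --dport 892 -j ACCEPT
  iptables -A INPUT -p udp --dport 892 -j ACCEPT
  # SA port 4040
  iptables -A INPUT -p tcp --dport 4040 -j ACCEPT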

The following table shows the ports used by the SA Provisioning components that are accessed by servers during the provisioning process. (In SA, provisioning refers to installing an operating system on, and configuring, managed servers.)

Open Ports for the SA Provisioning components

| Port | Component | Service |
| --- | --- | --- |
| 67 (UDP) | Boot Server | DHCP |
| 69 (UDP) | Boot Server | TFTP |
| 111 (UDP, TCP) | Boot Server, Media Server | RPC (portmapper), required for NFS |
| Dynamic/Static* | Boot Server, Media Server | rpc.mountd, required for NFS |
| 2049 (UDP, TCP) | Boot Server, Media Server | NFS |
| 8017 (UDP, TCP) | Agent Gateway | Interface to the Build Manager |
| 137 (UDP) | Media Server | SMB NetBIOS Name Service |
| 138 (UDP) | Media Server | SMB NetBIOS Datagram Service |
| 139 (TCP) | Media Server | NetBIOS Session Service |
| 445 (TCP) | Media Server | MS Directory Service |

* By default, the rpc.mountd process uses a dynamic port, but it can be configured to use a static port. If you are using a dynamic port, the firewall must be an application layer firewall that can understand RPC requests that clients use to locate the port for mountd.
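
As an illustration, on Red Hat-style systems rpc.mountd can be pinned to a static port in /etc/sysconfig/nfs; the file location and variable name are distribution-specific, and the port value (892, matching the iptables sketch earlier) is an arbitrary choice:

  # /etc/sysconfig/nfs
  MOUNTD_PORT=892

Restart the NFS services after changing this setting so that rpc.mountd rebinds to the static port.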

Requirements: The SA Provisioning Boot Server and Media Server run various services (such as portmapper and rpc.mountd) that could be susceptible to network attacks. It is recommended that you segregate the SA Provisioning Boot Server and Media Server components onto their own DMZ network. When you segregate these components, open the listed ports from the installation client network to the DMZ network. Additionally, apply all vendor-recommended security patches to the Boot Server and Media Server.

The following table shows the Managed Server port that must be open for SA Core Server connections.

Open ports on managed servers

| Port | Component |
| --- | --- |
| 1002 (TCP) | SA Agent |
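
To confirm that the agent port is reachable from a Core Server, a simple connectivity probe is enough; the hostname below is a placeholder:

  # Probe TCP port 1002 on a managed server (netcat)
  nc -vz managed-host.example.com 1002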

Required reserved ports

The following ports must be reserved for use by SA because they are required by SA components (as opposed to third-party components).

Reserved ports

| SA Component | Port | Secured | Reason |
| --- | --- | --- | --- |
| Agent Gateway | 8089 | Yes | |
| | 3001 | No | Proxy port |
| | 8017 | No | Forward port |
| | 8086 | No | |
| | 8084 | No | |
| Core Gateway | 8085 | Yes | |
| | 2003 | No | |
| | 2002 | No | Localhost only |
| | 8080 | No | Proxy port |
| | 3002 | No | Proxy port |
| | 4040 | No | |
| | 443 | Yes | |
| Management Gateway | 2001 | Yes | |
| | 3003 | No | Proxy port |
| | 4434 | No | Forward port |
| | 20002 | No | Forward port |
| Multimaster component (vault) | 5678 | Yes | |
| | 7501 | No | Localhost only |
| Data Access Engine (spin) | 1004 | Yes | |
| | 1007 | No | Localhost only |
| Web Services Data Access Engine (twist) | 1032 | Yes | |
| | 1026 | No | Localhost only |
| Command Engine (way) | 1018 | Yes | |
| Software Repository (word) | 1003 | Yes | |
| | 1006 | No | Localhost only |
| Software Repository Accelerator (tsunami) | 8061 | Yes | |
| Build Manager | 1012 | Yes | |
| | 1017 | No | |
| Agent | 1002 | Yes | |
| AgentCache | 8081 | No | |
| SSHD | 2222 | Yes | |
| Command Center (occ) | 9080 | No | Localhost only |
| HTTP Proxy | 80 | No | Proxy port |
| | 4433 | Yes | |
| | 81 | No | Localhost only |
| | 82 | No | Localhost only |
| Global File System (spoke) | 8020 | No | Localhost only |
| Deployment Automation (da) | 7080 | No | |
| | 8010 | No | |
| | 7006 | No | Localhost only |
| | 1027 | No | Localhost only |
| | 1028 | Yes | |
| | 1029 | No | Localhost only |
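
Before installing, you can verify that none of these reserved ports are already bound by another service. A quick spot check with ss (substitute netstat -lntu on older systems), using a few of the ports above:

  # List any listeners already bound to a sample of SA reserved ports
  ss -lntu | grep -E ':(1002|2001|3001|8080)[[:space:]]'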

Host and service name resolution requirements

SA must be able to resolve Core Server host names and service names to IP addresses through proper configuration of DNS or the /etc/hosts file.

Previous releases

If you plan to install the Core Components on a server that had a previous SA installation, you must verify that the host names and service names resolve correctly for the new installation.

Core servers and host/service name resolution

During the installation, the /etc/hosts file on machines where the Slice Component bundle is installed will be modified to contain entries pointing to the Secondary Data Access Engine, the Command Center, the Build Manager, and the fully qualified domain name of the localhost.

All other servers hosting Core Components must be able to resolve their own valid host name and the valid host name of any other SA Core Server (if you will be using a multiple core installation or Multimaster Mesh). A fully qualified name includes the subdomain, for example, myhost.acct.buzzcorp.com. Enter the hostname -f command and verify that it displays the fully qualified name found in the local /etc/hosts file.
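
For example, run the following on each Core Server to confirm consistent resolution of the fully qualified name; the results depend on your DNS and /etc/hosts configuration:

  # Print the fully qualified hostname, then resolve it
  hostname -f
  getent hosts "$(hostname -f)"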

In a typical component layout, the Software Repository Store is installed as part of the Infrastructure Component bundle, and the Slice Component bundle must be able to map the IP address of the Infrastructure host to its hostname. In a custom component layout, the Software Repository Store may be installed separately on any host; in that case, the Slice Component bundle must be able to map the IP address of that host to its hostname. It is a common practice, but not a requirement, to host the Software Repository Store and the OGFS home/audit directories on the same server.

SA Provisioning: DHCP proxying

If you plan to install your SA Provisioning components on a separate network from the Core Components, you must set up DHCP proxying to the DHCP server (for example, using Cisco IP Helper). If you use DHCP proxying, the server/router performing the DHCP proxying must also be the network router so that PXE can function correctly.
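
As an illustration, on a Cisco router DHCP proxying is configured with an IP helper address on the interface facing the provisioning clients; the interface name and Boot Server address below are placeholders:

  ! Cisco IOS: forward DHCP broadcasts to the SA Provisioning Boot Server
  interface Vlan100
   ip helper-address 192.0.2.10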

The SA Provisioning Boot Server component provides a DHCP server, but does not include a DHCP proxy. For DHCP server configuration information, see DHCP configuration for SA Provisioning.