Sizing for on-premises deployment

From ITOM Practitioner Info


Note

  • A CPU load of more than 80% significantly reduces network transmission efficiency within the platform environment. Keep the CPU load below 80% by distributing the workload across multiple worker nodes.
  • We recommend running Docker with the devicemapper (direct-lvm) storage driver in a production environment. Make sure that a 100 GB logical device is added so that Docker can run with the devicemapper (direct-lvm) storage driver; otherwise, Docker runs with the devicemapper (loop) storage driver. For more information, see Prepare logical volumes and thin pools.
  • Avoid CPU overcommitment when creating virtual machines: make sure that the total number of CPU cores configured for your VMs does not exceed the number of your physical CPU cores.
  • You can switch to SSD storage, which provides higher I/O speed and bandwidth. This is typical for large deployments.
  • The suggested hardware resources are dedicated to the SMA suite and are not shared with other product lines.
  • For the demo environment, the master node also works as the worker node and the NFS server. In addition, you do not need to prepare an external database server, because internal databases are used.
  • For the production environment, keep additional worker nodes up and running as a buffer in case some of your worker nodes experience downtime. This ensures that when a worker node goes down, the pods that were running on it can be rescheduled to other nodes without causing an out-of-CPU or out-of-memory issue.
  • For the processor type, Intel Xeon E5, E7, or equivalent processors are suggested. For the processor speed, a demo or test environment should use a frequency higher than 1.9 GHz, while a production environment should use at least 2.3 GHz. Higher speeds bring further performance improvements.
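The thin pool preparation mentioned above can be sketched with the standard Docker direct-lvm procedure. This is a provisioning/configuration sketch, not suite-specific steps: the device name /dev/sdb and the volume group name docker are assumptions, so substitute your own; see Prepare logical volumes and thin pools for the exact suite procedure.

```shell
# Sketch: prepare a direct-lvm thin pool for Docker on a dedicated block device.
# /dev/sdb is a hypothetical 100 GB device; run these steps before starting Docker.
pvcreate /dev/sdb
vgcreate docker /dev/sdb
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# Point Docker at the thin pool in /etc/docker/daemon.json:
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true"
  ]
}
EOF
```

After restarting Docker, `docker info` should report the devicemapper storage driver with the thin pool device, rather than a loopback file.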

Suite size definitions

When running the suite installer to install the suite, you need to select a suite size: Demo, Extra Small, Small, Medium, or Large. The suite installer automatically scales the suite deployment according to the selected suite size.

See the following table for definitions for the installation sizes: Demo, Extra Small, Small, Medium, and Large.

Suite size                                                                 | Demo  | Extra Small | Small     | Medium    | Large
Maximum number of concurrent users (including both ESS and IT agent users) | 10    | 100         | 400       | 1,000     | 3,000
Maximum number of records in Smart Analytics                               | 1,000 | 1 million   | 1 million | 2 million | 2 million
Maximum number of CIs and relationships in CMS                             | 1,000 | 2 million   | 2 million | 6 million | 25 million

Notes:

  • Concurrent users: when estimating your number of concurrent users, be aware that the concurrency ratio can be affected by a number of factors: the number of modules enabled on your portal, your IT ticket submission channels, the number of time zones in your organization, and more. For Service Portal self-service users, the ratio may range from 1/200 to 1/50; for IT agent users, the ratio may range from 1/3 to 1/1. Determine your deployment size based on your total number of ESS and IT agent users and an appropriate concurrency ratio.
  • CIs and relationships in CMS: take this attribute into consideration only when UCMDB is in the container. If you are using an external UCMDB, this attribute has no impact on suite sizing and should be ignored.
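The concurrency estimate reduces to simple arithmetic. A minimal sketch, assuming a hypothetical population of 20,000 self-service users at a 1/100 ratio and 300 IT agents at a 1/2 ratio (all numbers are illustrative, not recommendations):

```shell
# Worked example of the concurrency estimate (illustrative numbers only).
ess_users=20000    # total Service Portal self-service users, assumed ratio 1/100
agent_users=300    # total IT agent users, assumed ratio 1/2
concurrent=$(( ess_users / 100 + agent_users / 2 ))
echo "$concurrent concurrent users"   # 350 -> fits the Small size (up to 400)
```

Pick the ratio for each population from the ranges above based on your own usage patterns, then choose the smallest suite size whose maximum is not exceeded.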

Hardware requirements for different suite sizes

The following table provides the minimum hardware requirements for SMAX deployed in mixed mode (that is, CMS is not included in the suite installation and an external CMS is used instead).

Note

  • The sizing recommendations should be considered the minimum requirements for running SMAX properly. The numbers provided are guidance only and do not take into account the extra load as the volume of tickets in the system increases over time.
  • When you run the suite installer, mixed mode is the default (and recommended) installation mode. Therefore, we have conducted comprehensive performance testing for mixed mode only, and no sizing recommendations are available for containerized mode.
  • In a production environment, we recommend using a physical database server for the SMAX suite to achieve optimal performance.
Suite size  | SMAX master node                                   | SMAX worker node                                  | SMAX NFS server                  | SMAX database
Demo        | 8 CPU, 32 GB memory, 120 GB + 80 GB thin pool (x1) | N/A                                               | N/A                              | N/A
Extra Small | 4 CPU, 16 GB memory, 120 GB + 80 GB thin pool (x1) | 8 CPU, 32 GB memory, 60 GB + 60 GB thin pool (x1) | 4 CPU, 16 GB memory, 200 GB (x1) | 2 CPU, 8 GB memory, 200 GB (x1)
Small       | 4 CPU, 16 GB memory, 120 GB + 80 GB thin pool (x1) | 8 CPU, 32 GB memory, 60 GB + 60 GB thin pool (x2) | 4 CPU, 16 GB memory, 200 GB (x1) | 2 CPU, 8 GB memory, 200 GB (x1)
Medium      | 4 CPU, 16 GB memory, 120 GB + 80 GB thin pool (x1) | 8 CPU, 32 GB memory, 60 GB + 60 GB thin pool (x3) | 4 CPU, 16 GB memory, 300 GB (x1) | 8 CPU, 32 GB memory, 200 GB (x1)
Large       | 8 CPU, 32 GB memory, 120 GB + 80 GB thin pool (x1) | 8 CPU, 32 GB memory, 60 GB + 60 GB thin pool (x6) | 4 CPU, 16 GB memory, 400 GB (x1) | 16 CPU, 64 GB memory, 400 GB (x1)

Partition size requirements

See the following tables for the partition size requirements for the master nodes, worker nodes, and the NFS server. Make sure that you use an absolute path when specifying an equivalent directory.

For each master or worker node

/opt/kubernetes (master and worker nodes)

  • Free space required during installation: 50 GB (150 GB without a thin pool). If you set up a thin pool, 50 GB is enough; without a thin pool, add the disk space planned for THINPOOL_DEVICE. The disk usage will grow gradually.
  • To specify your own directory, either modify the K8S_HOME parameter in install.properties, or run the following command during the installation:
    ./install.sh --<k8s-home>

/var/opt/kubernetes (master node)

  • Free space required during installation: 60 GB. This directory contains the CDF images and all the suite images. The subdirectory /offline/suite_images can be removed after the suite images are uploaded.
  • To specify your own directory, run the following command:
    ./downloadimages.sh --dir

THINPOOL_DEVICE (master and worker nodes)

  • Free space required during installation: 100 GB. Without a thin pool set up, the system consumes the space in $K8S_HOME instead. For a demo or test setup, you do not need to set up a thin pool, so this disk space is not required. The disk usage will grow gradually.
  • To add or extend logical volumes for the two direct-lvm thin pools, see Prepare logical volumes and thin pools.

/tmp (master and worker nodes)

  • Free space required during installation: 10 GB. You download CDF packages to this folder and unzip them there. You can remove the files in this folder once CDF is installed.
  • To specify your own directory, either modify the TMP_FOLDER parameter in install.properties, or run the following command during the installation:
    ./install.sh --<tmp-folder>

/var/lib (master and worker nodes)

  • Free space required during installation: 10 GB. The kubelet agent daemon is installed on all Kubernetes hosts to manage container creation and termination. The disk usage will grow gradually.
  • Equivalent directory: N/A
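Before running the installer, you can verify that each node meets the free-space figures above. A minimal sketch (check_space is a hypothetical helper; the paths and thresholds are examples, assuming /opt/kubernetes lives on the /opt filesystem):

```shell
# Pre-flight free-space check against the partition requirements above.
# check_space is a hypothetical helper; adjust paths and thresholds to your layout.
check_space() {
  local dir=$1 need_gb=$2 avail_kb avail_gb
  avail_kb=$(df -Pk "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
  [ -n "$avail_kb" ] || { echo "SKIP $dir: path not found"; return 0; }
  avail_gb=$(( avail_kb / 1024 / 1024 ))
  if [ "$avail_gb" -ge "$need_gb" ]; then
    echo "OK $dir: ${avail_gb}G free (need ${need_gb}G)"
  else
    echo "LOW $dir: ${avail_gb}G free (need ${need_gb}G)"
  fi
}
check_space /opt 50       # K8S_HOME (/opt/kubernetes), with a thin pool set up
check_space /tmp 10       # TMP_FOLDER
check_space /var/lib 10   # kubelet data
```

Run the check on every master and worker node; any LOW line means the node needs more disk before installation.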

For the NFS server

Directory Free space required during installation Description
/var/vols/itom/core 30G This is the CDF NFS root folder, which contains the CDF database and files.

The disk usage will grow gradually.

/var/vols/itom/itsma/itsma-itsma-global 100G This is the SMA global NFS folder, which contains files other than Smart Analytics and database data, such as attachments and log files.

The disk usage will grow gradually.

You need to clean up the log folder on a regular basis: /var/vols/itom/itsma/itsma-itsma-global/logs.

/var/vols/itom/itsma/itsma-itsma-smartanalytics 10G per million records This is the SMA Smart Analytics NFS folder, which contains files for Smart Analytics.

The disk usage will grow gradually.

/var/vols/itom/itsma/itsma-itsma-db 5G This is the SMA database NFS folder, which contains the suite database.

The disk usage will grow gradually.

/var/vols/itom/itsma/itsma-itsma-rabbitmq-infra-rabbitmq-0 3G These are the RabbitMQ NFS folders, which contain RabbitMQ HA configuration data.
/var/vols/itom/itsma/itsma-itsma-rabbitmq-infra-rabbitmq-1 3G
/var/vols/itom/itsma/itsma-itsma-rabbitmq-infra-rabbitmq-2 3G
For equivalent directories, see Configure NFS shares for CDF and the suite.
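The regular log cleanup mentioned above can be automated. A minimal sketch, assuming a 14-day retention policy (cleanup_logs and the retention period are examples, not part of the product):

```shell
# Remove suite log files older than a chosen retention period.
# cleanup_logs is a hypothetical helper; the 14-day retention is an example policy.
cleanup_logs() {
  local dir=$1 days=$2
  find "$dir" -type f -name '*.log' -mtime +"$days" -delete
}

# Log folder path from the table above; guard in case it is mounted elsewhere.
LOGDIR=/var/vols/itom/itsma/itsma-itsma-global/logs
if [ -d "$LOGDIR" ]; then
  cleanup_logs "$LOGDIR" 14
fi
```

A script like this can run from cron on the NFS server so the global NFS volume does not fill up over time.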