
Troubleshooting and Limitations – Data Flow Probe Setup

Data Flow Probe Setup - Troubleshooting

Problem: You cannot transfer a Data Flow Probe from one domain to another.

Reason: Once you have defined the domain of a Probe, you can change its ranges, but not the domain.

Solution: Install the Probe again:

  1. (Optional) If you are going to use the same ranges for the Probe in the new domain, export the ranges before removing the Probe. For details, see Ranges Pane.

  2. Remove the existing Probe from UCMDB. For details, see the Remove Domain or Probe button in Data Flow Probe Setup Window.

  3. Install the Probe. For details, see the section about installing the Data Flow Probe in the interactive Universal CMDB Deployment Guide.

  4. During installation, make sure to give the new Probe a name different from the old Probe's name, or make sure you delete the reference to the old Probe from the original domain.

 

Problem: Discovery shows a disconnected status for a Probe.

Solution: Check the following on the Probe machine:

  • That the Probe is running

  • That there are no network problems

Solution: If the Probe status is Disconnected or Disconnected (being restarted):

  • Search for restart messages in the wrapperProbeGw logs.
  • If the Probe did not restart, take a Probe thread dump from the time of the disconnection and search for the ProbeGW Tasks Downloader thread.

  • If there is no probe thread dump, investigate the problematic timeframe in the wrapperProbeGw log. In particular:

    • Check if the probe tasks confirmer has been running for more than 5 minutes.
    • Check if some of the resources are being downloaded for more than 5 minutes.
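The log checks above can be partly scripted. The sketch below greps a wrapper log for restart messages; the file name follows the wrapperProbeGw logs mentioned above, and the demo runs against a synthetic log line rather than a real Probe log.

```shell
# Minimal sketch: search a probe gateway wrapper log for restart messages.
check_restarts() {
  # Print any line mentioning a restart; fall back to a fixed message.
  grep -i "restart" "$1" || echo "no restart messages found"
}

# Demo on a synthetic log line (real logs live under <DataFlowProbe>/runtime/log).
demo_log=$(mktemp)
echo "2024-01-01 12:00:00 INFO  Probe is restarting after CUP deployment" > "$demo_log"
result=$(check_restarts "$demo_log")
echo "$result"
```

Run the same grep against each wrapperProbeGw log in the Probe's runtime log directory to see whether the disconnections line up with restarts.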

 

Problem: The connection between the Universal CMDB server and the Probe fails due to an HTTP exception.

Solution: Ensure that none of the Probe ports are in use by another process.

 

Problem: A Data Flow Probe node name cannot be resolved to its IP address. If this happens, the host cannot be discovered, and the Probe does not function correctly.

Solution: Add the host machine name to the Windows HOSTS file on the Data Flow Probe machine.
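As an illustration, the sketch below adds a hosts entry only if it is missing. The IP address and node name are hypothetical, and the demo writes to a temporary file instead of the real HOSTS file (on Windows: %systemroot%\system32\drivers\etc\hosts).

```shell
# Minimal sketch: add "<ip> <name>" to a hosts-style file if not already present.
add_host_entry() {  # usage: add_host_entry <file> <ip> <name>
  grep -q "[[:space:]]$3\$" "$1" || printf '%s\t%s\n' "$2" "$3" >> "$1"
}

# Demo on a temporary file with a hypothetical probe node name.
hosts_file=$(mktemp)
add_host_entry "$hosts_file" 192.0.2.10 probe-node01
add_host_entry "$hosts_file" 192.0.2.10 probe-node01   # idempotent: no duplicate line
entry_count=$(grep -c "probe-node01" "$hosts_file")
```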

 

Problem: After uninstalling the Data Flow Probe, mysqld.exe and associated files are not deleted.

Solution: To delete all files, restart the machine on which the Data Flow Probe was installed.

 

Problem: After the UCMDB Server CUP is updated, the Probe fails to start or fails to connect to the server.

Solution: The Probe's CUP version must be the same as UCMDB Server's CUP version. If the CUP versions are not aligned, you must update the Probe's CUP version. To do this, see How to Deploy a Data Flow Probe CUP.

In some cases, the CUP may need to be deployed manually on a Probe. For details, see How to Deploy a Data Flow Probe CUP Manually.

 

Problem: I want to check if my integration probe is connected, but I can't see it listed in the Data Flow Probe Setup module tree.

Reason: The Data Flow Probe Setup module displays only Data Flow Probes for discovery. Integration Probes (that is, Probes on Linux machines, and Windows Probes configured for integration only) are not displayed in the Data Flow Probe Setup module.

Workaround: To see if an integration Probe is connected, create a dummy integration point and verify that the Probe is listed among the Probes that can be selected for the integration point (in the Data Flow Probe field). For details, see How to Set Up an Integration Point.

 

Problem: Troubleshooting PostgreSQL Issues

Solution:

The Data Flow Probe database scripts are listed below. These scripts can be modified for administration purposes, both in Windows and Linux environments.

Note  

  • The scripts are located on the Data Flow Probe machine, in the following location:

  • Data Flow Probe database scripts should be changed for specific administration purposes only.
  • exportPostgresql [PostgreSQL root account password]. Exports all data from the DataFlowProbe database schema to data_flow_probe_export.bin in the current directory.

  • importPostgresql [Export file name] [PostgreSQL root account password]. Imports data from a file created by the exportPostgresql script into the DataFlowProbe schema.

  • enable_remote_user_access. Configures the PostgreSQL Data Flow Probe account to be accessible from remote machines.

  • remove_remote_user_access. Configures the PostgreSQL Data Flow Probe account to be accessible only from the local machine (default).

  • set_db_user_password [new PostgreSQL Data Flow Probe account password] [PostgreSQL root account password]. Modifies the PostgreSQL Data Flow Probe account password.

  • set_root_password [new PostgreSQL root account password] [current PostgreSQL root account password]. Modifies the PostgreSQL root account password.

 

Problem: The Data Flow Probe database service cannot start.

  • Reason: The hosts file on the machine must not contain "localhost" entries.

    Solution: On the Data Flow Probe machine, open

    • Windows: %systemroot%\system32\drivers\etc\hosts
    • Linux: /etc/hosts

    and ensure that all lines containing "localhost" are commented out.

  • Reason: Microsoft Visual C++ 2010 x64 Redistributable is installed during the installation of the Probe. If for some reason this redistributable is uninstalled, PostgreSQL stops working.

    Solution: Check if Microsoft Visual C++ 2010 x64 Redistributable is installed. If not, reinstall it.
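For the hosts file cause above, the cleanup can be scripted. The sketch below comments out every uncommented line containing "localhost"; it is demonstrated on a temporary copy, so back up the real hosts file before pointing the function at it.

```shell
# Minimal sketch: comment out uncommented "localhost" lines in a hosts-style file.
comment_localhost() {
  sed -i.bak '/^[^#].*localhost/s/^/# /' "$1"   # keeps a .bak backup of the original
}

# Demo on a temporary file.
demo=$(mktemp)
printf '127.0.0.1 localhost\n192.0.2.10 probe-node01\n' > "$demo"
comment_localhost "$demo"
first_line=$(head -n 1 "$demo")
second_line=$(sed -n '2p' "$demo")
```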

Data Flow Probe Setup - Limitations

  • Limitation: The Probe cluster range distribution considers only the potential number of IP addresses (active or inactive) in its network ranges, not the actual number of IP addresses populated or the number of devices inside those ranges. For example, you have 100 IP addresses, and the Probe cluster distributes them between two Probes: 50 IP addresses to each. If only one IP address is active on Probe A and 50 IP addresses are active on Probe B, Probe B takes a disproportionate share of the load.

    Reason: The actual number of active IP addresses is unknown before discovery runs.

    Workaround: If many IP ranges are unevenly populated and you need full control of the IP address distribution, avoid using the Probe cluster feature and use the Probes separately.

  • When the Probe is running in separate mode on a machine where the Gateway and the Manager share the same installation folder, the Data Flow Probe CUP must be installed manually. For details, see How to Deploy a Data Flow Probe CUP Manually.

  • Data Flow Probe CUPs that were deployed manually can be uninstalled using manual methods only. For details, see How to Uninstall Probe CUPs Manually.

  • The Universal Discovery Agent may fail to call home in scenarios including, but not limited to, the following:

    • The callhome IP address that is configured on the Universal Discovery Agent belongs to a client type range that is added to a cluster.

      Note The Universal Discovery Agent supports 1 primary and 1 secondary probe.

    • The range is a member of a probe cluster.
    • The cluster contains two or more probes.

    In this scenario, callhome may not work as expected. Contact Micro Focus Support for assistance in configuring callhome.

Troubleshooting Probe Auto Upgrade

General issues

  • Probe Downgrade or Rollback

    Automatic downgrade or rollback of the probe version is not supported. To downgrade or to roll back a version upgrade, uninstall the probe and then install the required version.

     

  • Probe Restart

    There are several situations where the Probe automatically restarts itself. For example, when deploying a new Content Pack or applying a CUP. In these cases, the Probe waits for 15 minutes to allow the running jobs to finish, and only then shuts down. Jobs that did not finish in that time (for example, long integrations) start running again when the Probe restarts.

     

  • How to Change the PostgreSQL Database Default Port

    To change the port for the PostgreSQL database that is defined by default in the Data Flow Probe installation:

    1. Stop the Probe (if already started).

    2. Stop the UCMDB Probe DB Service.

    3. Modify the port in the following file:

      • Windows: C:\UCMDB\DataFlowProbe\pgsql\data\postgresql.conf
      • Linux: /opt/UCMDB/DataFlowProbe/pgsql/data/postgresql.conf

      The following shows how to change the port from 5432 to 5433:

      #port = 5432 # (change requires restart) <-- old line

      port = 5433 # (change requires restart) <-- new line

      Note If two probes coexist on the same machine, plan the port usage carefully so that the ports used by the two probes do not conflict.

    4. Make the following changes in the DataFlowProbe.properties file (in C:\UCMDB\DataFlowProbe\conf on Windows, and /opt/UCMDB/DataFlowProbe/conf on Linux):

      • Change:

        jdbc:postgresql://localhost/dataflowprobe

        to

        jdbc:postgresql://localhost:5433/dataflowprobe
      • Change:

        appilog.agent.local.jdbc.uri = jdbc:postgresql://localhost/dataflowprobe

        to

        appilog.agent.local.jdbc.uri = jdbc:postgresql://localhost:5433/dataflowprobe
      • Change:

        appilog.agent.normalization.jdbc.uri = jdbc:postgresql://localhost/dataflowprobe

        to

        appilog.agent.normalization.jdbc.uri = jdbc:postgresql://localhost:5433/dataflowprobe
      • Change:

        appilog.agent.netflow.jdbc.uri = jdbc:postgresql://localhost/dataflowprobe

        to

        appilog.agent.netflow.jdbc.uri = jdbc:postgresql://localhost:5433/dataflowprobe
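The four property changes above follow one pattern, so they can be applied in a single pass. The sketch below is a sed helper demonstrated on a temporary file; 5433 is the example port from the steps above, and in a real environment the target is the DataFlowProbe.properties file in the probe's conf directory.

```shell
# Minimal sketch: point every dataflowprobe JDBC URI at a new PostgreSQL port.
update_jdbc_port() {  # usage: update_jdbc_port <properties-file> <new-port>
  sed -i.bak "s|postgresql://localhost/dataflowprobe|postgresql://localhost:$2/dataflowprobe|g" "$1"
}

# Demo on a temporary file containing one of the affected properties.
demo=$(mktemp)
echo 'appilog.agent.local.jdbc.uri = jdbc:postgresql://localhost/dataflowprobe' > "$demo"
update_jdbc_port "$demo" 5433
updated=$(cat "$demo")
```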

     

  • Probe auto upgrade limitation

    If the probe auto upgrade fails, retrying does not resolve the issue. Access the corresponding probe server and deploy the probe manually.

     

  • Probe auto upgrade known issue

    The C:\UCMDB\temp folder is created and used by the probe auto upgrader during the upgrade process. If you see this folder on your probe server, you can ignore it or safely remove it. It has no functional impact.

     

  • Problem: Sometimes, due to the environment, the probe installer may hang and be unable to finish the upgrade. If this happens, the probe upgrader aborts the probe upgrade process and restores the probe.

    Solution: Manually upgrade the probe.

     

  • Problem: The log shows "errors occurred installing probe", and the probe service, probe DB service, or XML Enricher service could not be started. This may happen when errors occur while launching the probe installer.

    Solution: You need to manually upgrade the probe.

    Most likely this is caused by some properties missing from the configuration file. If not, check the probe auto upgrade log files for further information. For details, see Probe auto upgrade log files.

 

How to check whether resources are in the right place after the UCMDB server is upgraded to version 2018.05

  1. Check whether the Data Flow Probe installer is in the right place

    Go to the <UCMDB_Server>\content\probe_installer directory. This directory should contain the probe installer UCMDB_DataFlowProbe_2018.05.exe.

  2. Check whether the probe auto upgrader package is in the right place

    Go to the <UCMDB_Server>\runtime\probe_upgrade directory. This directory should contain the probe upgrade package probe-patch-windows.zip.

    If the probe-patch-windows.zip package does not exist:

    1. Go to <UCMDB_Server>\content\probe_patch.
    2. Copy the probe-patch-2018.05-windows.zip package to the <UCMDB_Server>\runtime\probe_upgrade directory.
    3. Restart the UCMDB server. The UCMDB server then performs the probe auto upgrade.
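The check-and-copy steps above can be sketched as a small script. Paths are shown Linux-style for readability, the demo runs on a throwaway directory tree standing in for <UCMDB_Server>, and renaming the versioned package to probe-patch-windows.zip is an assumption based on the expected file name above.

```shell
# Minimal sketch: ensure the probe upgrade package is in runtime/probe_upgrade.
check_probe_patch() {  # usage: check_probe_patch <ucmdb-server-dir>
  target="$1/runtime/probe_upgrade/probe-patch-windows.zip"
  if [ ! -f "$target" ]; then
    mkdir -p "$1/runtime/probe_upgrade"
    # Copy (and rename) the versioned package shipped under content/probe_patch.
    cp "$1/content/probe_patch/probe-patch-2018.05-windows.zip" "$target"
  fi
}

# Demo on a throwaway directory tree.
root=$(mktemp -d)
mkdir -p "$root/content/probe_patch"
: > "$root/content/probe_patch/probe-patch-2018.05-windows.zip"
check_probe_patch "$root"
```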

 

Probe auto upgrade log files

The following probe auto upgrade log files (in the <DataFlowProbe>\runtime\log directory) contain details if the probe auto upgrade fails:

  • pg_upgrade.log. Shows the running details of the pg_upgrade.bat script, including the details about PostgreSQL upgrade and table splitting.
  • probe_upgrade_conf_merge.log. Shows the related information when the probe installer merges configuration files.
  • probe_auto_upgrade.log. Located in the probeUpgradeLogs subfolder; shows the related information when the probe auto upgrader upgrades a probe.

For more details about the log files, see "Data Flow Probe Log Files" in the Data Flow Management section of the UCMDB Help.

 

XML Enricher service port conflict issue

Problem: The XML Enricher service may fail to start after the probe upgrade due to a port conflict. In that case, the probe_auto_upgrade.log is placed under the failed folder, for example, <DataFlowProbe>\runtime\log\probeUpgradeLogs\10.22to2018.05\failed. You should find the following message in probe_auto_upgrade.log:

2017-07-14 11:27:11 INFO  ServiceControl:106 - Starting XML Enricher service...
2017-07-14 11:27:11 INFO  ServiceControl:328 - XML EnricherStatus status: STOPPED
2017-07-14 11:27:11 INFO  ServiceControl:381 - Waiting for execution...
2017-07-14 11:27:46 ERROR ServiceControl:394 - Problems occurred during execution.

Solution: Check <DataFlowProbe>\runtime\log\WrapperEnricher.log. If you find "Port already in use: 34545", you can change the port for the XML Enricher by editing the <DataFlowProbe>\bin\xmlenricher\WrapperEnricher.conf file, or free port 34545.
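A quick way to confirm this particular failure is to grep the enricher wrapper log for the conflict message. The sketch below demonstrates the check on a synthetic log line matching the message quoted above.

```shell
# Minimal sketch: detect the XML Enricher port conflict in a wrapper log.
has_port_conflict() {
  grep -q "Port already in use" "$1"
}

# Demo on a synthetic log line (the real log is WrapperEnricher.log).
demo_log=$(mktemp)
echo 'FATAL  | wrapper  | Port already in use: 34545' > "$demo_log"
if has_port_conflict "$demo_log"; then conflict=yes; else conflict=no; fi
```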

Check the PostgreSQL version to verify whether the upgrade succeeded

When the PostgreSQL upgrade finishes, you can check the PostgreSQL version to verify whether the probe upgrade succeeded.

Or, you can check the pg_upgrade.log in the <DataFlowProbe>\runtime\log folder for more details.

If the PostgreSQL upgrade completed successfully, you can find the "The new PostgreSQL will be used" message in the pg_upgrade.log file, and you can also see two folders: <DataFlowProbe>\pgsql and <DataFlowProbe>\pgsql.old. The <DataFlowProbe>\pgsql.new folder is removed when the upgrade completes successfully. If you manually run the script from the <DataFlowProbe>/tools/dbscripts folder to upgrade the database again, the log tells you that pgsql.new does not exist, and running the script again has no functional impact on the PostgreSQL installation.
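This check can be scripted by grepping pg_upgrade.log for the success message quoted above. The demo below runs against a synthetic log line; in a real environment, point it at <DataFlowProbe>/runtime/log/pg_upgrade.log.

```shell
# Minimal sketch: confirm the PostgreSQL upgrade succeeded from its log.
upgrade_succeeded() {
  grep -q "The new PostgreSQL will be used" "$1"
}

# Demo on a synthetic log line.
demo_log=$(mktemp)
echo "INFO  The new PostgreSQL will be used" > "$demo_log"
if upgrade_succeeded "$demo_log"; then pg_status=ok; else pg_status=failed; fi
```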

 

How to read log messages when the PostgreSQL upgrade fails

In some cases the PostgreSQL upgrade may fail. You can then find three subfolders under <DataFlowProbe>: pgsql, pgsql.old, and pgsql.new. You can also find more details in the pg_upgrade.log file, which displays messages that may indicate why the upgrade failed. Follow the solutions for the different log messages below.

  1. Log message: Folder pgsql.new doesn't exist.

    • Possible Cause: Something unexpected happened when installing the probe, and the probe failed to generate the pgsql.new folder.

      Solution: Download PostgreSQL resources for the same version from the official PostgreSQL website and extract the resources to the pgsql.new folder, then rerun the pg_upgrade.bat script.

    • Possible Cause: You have already run the script more than once, and the script already deleted the pgsql.new folder previously.

      Solution: The PostgreSQL upgrade already completed successfully. Just check the PostgreSQL version.

  2. Log message: The new PostgreSQL database initialization failed.

    Possible Cause: The conditions for initdb were not met.

    Solution: Check that the password is correct and that there is no data folder in pgsql.new.

  3. Log message: The precheck of the old and new PostgreSQL failed.

    Possible Cause: The script was not run under the local system account, or it does not have full control of the files.

    Solution: Switch to the local system account, or grant users full control of the whole folder, then rerun the script.

  4. Log message: PostgreSQL upgrade failed, the old PostgreSQL will still be used.

    Possible Cause: The conditions for pg_upgrade.exe were not met.

    Solution: Check the conditions for both the old and the new PostgreSQL, and make sure both are fine. You can manually run the following command to find more details:

    "%DB_PATH%\pg_upgrade.exe" -b "%BASE_DIR%\pgsql\bin" -B  "%BASE_DIR%\pgsql.new\bin" -d "%BASE_DIR%\pgsql\data" -D "%BASE_DIR%\pgsql.new\data" -p 5436 -P 5437 -U postgres
  5. Log message: Table splitting failed, the old PostgreSQL will still be used.

    Possible Cause: There is no ddm_discovery_results table in the database, or the upgrade failed when creating the ddm_discovery_touch_results table.

    Solution: Check the log details to find out where the problems happened, then check the script tools\dbscripts\migrateData.cmd.

After resolving issues 1 through 5 above, you can follow the steps below to upgrade PostgreSQL manually:

  1. Stop the UCMDB_Probe_DB service.
  2. Remove the content of the pgsql folder and copy the content of the pgsql.old folder into the pgsql folder.
  3. Grant the user full control of the DataFlowProbe folder, and then from the <DataFlowProbe>/tools/dbscripts folder run the following command:

    pg_upgrade.bat %DB_Password%
  4. Once the command has run successfully, revert the full control you granted to the user.

Note During the upgrade, Micro Focus does not keep the <DataFlowProbe>\pgsql\data\postgresql.conf configuration file, so make sure you reconfigure it after the upgrade (if necessary).

When the probe installer is launched, it merges the following configuration files:

  • DataFlowProbe.properties
  • DataFlowProbeOverride.properties (if it exists)

As a result, all custom configuration settings are written into the DataFlowProbeOverride.properties file.

Note The recommended value of the appilog.agent.probe.sendtouchResultsToServer.maxObjects setting in DataFlowProbe.properties for version 10.33 (and later) is 500. If your value is greater than 500, it is reset to 500.
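The cap described in the note can be checked, or applied ahead of the upgrade, with a small awk pass. The demo below runs on a temporary properties file with a value above the limit; the property name is the one from the note.

```shell
# Minimal sketch: cap sendtouchResultsToServer.maxObjects at 500 in a properties file.
cap_max_objects() {  # usage: cap_max_objects <properties-file>
  awk -F'=' '
    $1 ~ /sendtouchResultsToServer\.maxObjects/ && $2+0 > 500 { print $1 "=500"; next }
    { print }
  ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Demo on a temporary file with an over-limit value.
demo=$(mktemp)
echo 'appilog.agent.probe.sendtouchResultsToServer.maxObjects=2000' > "$demo"
cap_max_objects "$demo"
capped=$(cat "$demo")
```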

The following files will be replaced with the ones from your environment:

  • <DataFlowProbe>\conf\postgresql.conf
  • <DataFlowProbe>\conf\probeMgrList.xml
  • <DataFlowProbe>\conf\WrapperGatewayCustom.conf
  • <DataFlowProbe>\conf\WrapperManagerCustom.conf
  • <DataFlowProbe>\conf\security\ssl.properties
  • <DataFlowProbe>\conf\security\HPProbeKeyStore.jks
  • <DataFlowProbe>\conf\security\HPProbeTrustStore.jks
  • <DataFlowProbe>\conf\enricher.properties
  • <DataFlowProbe>\conf\EnricherServiceSettings.ini
  • <DataFlowProbe>\bin\WrapperEnv.conf

  • <DataFlowProbe>\bin\wrapper-platform.conf
  • <DataFlowProbe>\bin\WrapperManager.conf
  • <DataFlowProbe>\bin\WrapperGateway.conf
  • <DataFlowProbe>\bin\xmlenricher\WrapperEnricher.conf

Problem: After the probe auto upgrade finishes, the probe cannot be started, and many properties in DataFlowProbe.properties are empty. This happens when the probe failed to back up the configuration files.

Solution: You need to manually upgrade the probe. That is, uninstall the probe and then manually install the version 10.33 (or later) probe.