How to Collect Generic Output Data

This section contains an overview of the tasks you must perform when collecting generic output data.

These tasks are the following:

  1. Plan the data collection and mappings

    Define the data you want to collect and forward to the consumer application and which mappings are required to make the data usable for the application.

  2. Configure the policy sources

    Collecting generic data from sources is similar to collecting data with event and metric policies. You can gather data through the REST web service, run SQL queries against databases, apply patterns to log and XML files, or gather data by using Perl scripts.

    For details on how to configure the sources for each integration type, see the sections titled "How to Collect Generic Output Data" for the respective integrations.

  3. Map the key fields and add additional fields

    1. Unlike event and metric policies, generic output data policies have no defaults and rules. The mapping is done solely by mapping input field qualifiers to their replacement fields.

      • With structured data such as XML and structured log files, only the eligible fields (the leaf nodes) are mapped. For example, if the data you gather has the following structure:

        /data/evt/timestamp
        /data/evt/event_type
        /data/evt/counter

        only the fields timestamp, event_type, and counter are eligible for mapping.

        If you want to rename (remap) part of the path too, you need to create additional rules for that part of the path. In our example, the path is:

        /data/evt

        Here, the eligible part is now evt, so if we map it to event, the mapped structure is now:

        /data/event/timestamp
        /data/event/event_type
        /data/event/counter
        

        Note that all paths that contain the eligible field are now changed.

      • With non-structured data, such as data gathered with Perl scripts or from databases, fields are directly mapped to replacements. This means that the entire field is eligible.
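The segment remapping described above can be sketched in a few lines of Python. This is purely illustrative of the behavior, not a product API: the function renames any path segment (leaf or intermediate) found in the mapping, which is why every path containing the remapped segment changes.

```python
# Illustrative sketch only: Operations Connector performs this mapping
# internally; this function merely mimics the behavior described above.

def remap_paths(paths, mapping):
    """Rename any path segment found in `mapping`, leaving others intact."""
    remapped = []
    for path in paths:
        segments = path.strip("/").split("/")
        segments = [mapping.get(seg, seg) for seg in segments]
        remapped.append("/" + "/".join(segments))
    return remapped

paths = [
    "/data/evt/timestamp",
    "/data/evt/event_type",
    "/data/evt/counter",
]

# Map the intermediate segment "evt" -> "event" and the leaf
# "counter" -> "event_count" (both names are example values):
mapping = {"evt": "event", "counter": "event_count"}
print(remap_paths(paths, mapping))
# ['/data/event/timestamp', '/data/event/event_type', '/data/event/event_count']
```

For non-structured data the same idea degenerates to renaming top-level keys directly, since each field is a single "segment".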

    2. Decide whether to keep unmapped fields or not. By default, all fields are kept, including the unmapped ones, but you can also choose to discard the unmapped fields.

      With structured data, if unmapped fields are kept, Operations Connector also keeps all parents of a mapped key, so that the structure is maintained.
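The keep-or-discard decision can be modeled for flat (non-structured) data as follows. This is a hedged sketch; the function name and the `keep_unmapped` flag are illustrative, not product configuration options.

```python
# Sketch of the "keep unmapped fields" choice for flat data.
# Not a product API; names are illustrative only.

def apply_mapping(record, mapping, keep_unmapped=True):
    out = {}
    for key, value in record.items():
        if key in mapping:
            out[mapping[key]] = value   # mapped key gets its replacement name
        elif keep_unmapped:
            out[key] = value            # unmapped key kept under original name
    return out

record = {"timestamp": "12:00", "event_type": "alert", "debug_info": "x"}
mapping = {"event_type": "type"}

print(apply_mapping(record, mapping))
# {'timestamp': '12:00', 'type': 'alert', 'debug_info': 'x'}
print(apply_mapping(record, mapping, keep_unmapped=False))
# {'type': 'alert'}
```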

    3. Optionally, you can add additional fields to the data set that is sent to Data Forwarding targets, for example, descriptions or notes.

      Additional fields are simple key-value pairs and you need to manually add both the keys and values. If an added key field name clashes with a key in the mapped data set, the added field is discarded. The actual field value can consist of static text and data references from the input data (<$DATA:…>). If the input data is structured, additional fields are added as immediate children of the highest-level element.
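The two rules above, clashing keys are discarded and values may embed <$DATA:…> references, can be sketched like this. The <$DATA:…> syntax comes from the text; the resolver itself (and its choice to substitute an empty string for unknown references) is an assumption for illustration only.

```python
import re

# Illustrative resolver for additional key-value fields; not a product API.
def resolve_additional_fields(record, additional):
    resolved = {}
    for key, template in additional.items():
        if key in record:
            continue  # clashes with a mapped key: the added field is discarded
        resolved[key] = re.sub(
            r"<\$DATA:([^>]+)>",
            lambda m: str(record.get(m.group(1), "")),  # assumed fallback: ""
            template,
        )
    return {**record, **resolved}

record = {"type": "alert", "counter": "7"}
additional = {"note": "seen <$DATA:counter> times", "type": "ignored"}
print(resolve_additional_fields(record, additional))
# {'type': 'alert', 'counter': '7', 'note': 'seen 7 times'}
```

Note that the added "type" field is dropped because the mapped data set already contains that key.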

    Related topics

    Configuring Mappings in Database Policies (Generic Output Only)

    Configuring Mappings in Perl Policies (Generic Output Only)

    Configuring Mappings in Structured Log File Policies (Generic Output Only)

    Configuring Mappings in XML File Policies (Generic Output Only)

    Configuring Mappings in REST Web Service Listener Policies (Generic Output Only)