Configuring Data Source in Structured Log File Policies
The Source page of the structured log file policy editor enables you to specify which log file the policy reads. You can also configure how the policy extracts data from the log file by applying a log file structure to it. The policy retains that structured data for reuse in other pages of the policy editor.
To access

- In the Operations Connector user interface, click in the toolbar, and then click Event > Structured Log File.
- In the Operations Connector user interface, click in the toolbar, and then click Metrics > Structured Log File.
- In the Operations Connector user interface, click in the toolbar, and then click Generic output > Structured Log File.

Alternatively, double-click an existing policy to edit it.

Click Source to open the policy Source page.
Defining a log file structure by using OM pattern-matching language
A log file structure is defined by using the OM pattern-matching language so that the dynamic parts of text-based events can be extracted from any log file row, assigned to variables, and then used as parameters to build the event description or to set other attributes.
You can use an asterisk (<*>) wildcard in the Log File Path / Name field to match multiple file names. For example, to match the source file names events.1.log and events.2.log, use the pattern <path>/events<*>.log in the Log File Path / Name field. Note that the <*> wildcard is the only supported OM pattern in log file paths.
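As an illustration only (the matching is performed by the policy on the Operations Connector system, not by user code), the <*> wildcard behaves much like a shell-style glob. A minimal Python sketch with the standard fnmatch module, using hypothetical file names:

```python
import fnmatch

# Hypothetical file names; the glob "events*.log" plays the role of the
# OM pattern <path>/events<*>.log described above.
names = ["events.1.log", "events.2.log", "other.log"]
matched = [n for n in names if fnmatch.fnmatch(n, "events*.log")]
print(matched)  # ['events.1.log', 'events.2.log']
```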
Example 1: Use OM pattern-matching language to extract the log file structure from the following log file line:
Mon, 28 Jul 2014 23:19:29 GMT;SEVERE;frogproc;123456;ERR-123;failed connect to db ‘pond’
This is done by defining the fields of which the log file line is logically composed, and allocating the corresponding variables by which these fields can be organized within a structure. The log file line in this example is logically composed of the following fields:
timestamp;severity;processname;pid;errorcode;errortext
Allocate the appropriate variable extractions to all fields by using the OM pattern-matching language, as follows:
<*.timestamp>;<*.severity>;<*.processname>;<*.pid>;<*.errorcode>;<*.errortext>
Now each field of the log file line can be identified by its variable name, which can also be used in all subsequent policy operations, such as mappings, default attributes, and rules.

For example, to set the Title field of the event attribute to the value of the errortext field, enter <$DATA:errortext> in the Title field of the Event Attributes tab of the policy editor, or drag the errortext property from the Sample Data tab to the Title field.

In the Rules tab, the field is simply referred to as errortext in the Property field.
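The effect of this extraction can be sketched in Python (a minimal illustration of the splitting behavior, not the product's pattern engine; the field names come from the example above):

```python
# Field names from the example log file structure above
FIELDS = ["timestamp", "severity", "processname", "pid", "errorcode", "errortext"]

def parse_line(line):
    """Split one semicolon-delimited log line into the named fields."""
    # maxsplit keeps any ';' inside the trailing error text intact
    values = line.split(";", len(FIELDS) - 1)
    return dict(zip(FIELDS, values))

line = ("Mon, 28 Jul 2014 23:19:29 GMT;SEVERE;frogproc;"
        "123456;ERR-123;failed connect to db 'pond'")
record = parse_line(line)
print(record["severity"], "-", record["errortext"])
# SEVERE - failed connect to db 'pond'
```

Each key of the resulting dictionary corresponds to one variable name that the policy could later reference, for example as <$DATA:errortext>.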
Defining a log file structure by using static fields (Metric only)
In addition to defining a log file structure by using the OM pattern-matching language, you can identify a log file structure by using static fields.

Example 2: Static fields are comma-separated word lists of nonrecurring data from the log file. When only one metric per line is present, all fields can be addressed. For example, use static fields to extract the log file structure from the following log file line:

1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|% Free Space|66.379264831543|Microsoft.Windows.Server.2008.LogicalDisk

This is done by defining the fields of which the log file line is logically composed, and by using these fields as static fields together with the defined field separator character. The log file line in this example is logically composed of the following fields:

timestamp|hostname|entitytype|entityid|countername|countervalue|scomtype

The corresponding static fields should be entered as follows:

timestamp,hostname,entitytype,entityid,countername,countervalue,scomtype

The field separator character is the pipe symbol (|). Note that the static fields require a comma instead of the pipe symbol as a delimiter.

This is the recommended method, for performance reasons.
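The static-field extraction can be sketched in Python (an illustration assuming the field names and pipe separator shown above, not the actual product code):

```python
# Static fields as they would be entered in the policy (comma-delimited)
STATIC_FIELDS = "timestamp,hostname,entitytype,entityid,countername,countervalue,scomtype"

def parse_metric_line(line, separator="|"):
    """Map the separator-delimited values onto the static field names."""
    names = STATIC_FIELDS.split(",")
    values = line.split(separator)
    if len(values) != len(names):
        raise ValueError("line does not match the static field list")
    return dict(zip(names, values))

line = ("1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|"
        "% Free Space|66.379264831543|Microsoft.Windows.Server.2008.LogicalDisk")
record = parse_metric_line(line)
print(record["countername"], "=", record["countervalue"])
# % Free Space = 66.379264831543
```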
Using recurring fields in defining a log file structure (Metric only)
The "Recurring fields" configuration parameter is useful when more than one performance value is present within a single log file line. It is a word list that contains the recurring part of the log line. Each recurrence creates a record in the store.

Example 3: Extract the log file structure from the following log file lines by also using recurring fields:

1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|% Free Space|66.379264831543|Current Disk Queue Length|0|Avg. Disk sec/Transfer|0.000484383897855878

1380004748|tcpc113.RIESLING.INTERN|Network Interface|10|Bytes Total/sec|55230.0703125|Current Bandwidth|1000000000

This is done by defining the fields of which the log file line is logically composed, then identifying which of them can be addressed as static fields and which belong to a variable part that consists of an arbitrary number of countername-countervalue pairs. These are the recurring fields. The log file lines in this example are logically composed of the following fields:

timestamp|hostname|entitytype|entityid|countername_1|countervalue_1|countername_2|countervalue_2|countername_3|countervalue_3

timestamp|hostname|entitytype|entityid|countername_1|countervalue_1|countername_2|countervalue_2

The corresponding static fields should be entered as follows:

timestamp,hostname,entitytype,entityid

In addition, enter the following recurring fields:

countername,countervalue

The field separator character is the pipe symbol (|).

Static fields can also be specified by using the OM pattern-matching language. However, this is not the recommended method, for performance reasons. The syntax is as follows:
<*.timestamp>\|<*.hostname>\|<*.entitytype>\|<*.entityid>
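The combination of static and recurring fields can be sketched in Python (an illustrative reimplementation, not the product's parser): every countername-countervalue pair after the four static fields yields one record in the store.

```python
# Static and recurring field names from the example above
STATIC = ["timestamp", "hostname", "entitytype", "entityid"]
RECURRING = ["countername", "countervalue"]

def parse_recurring_line(line, separator="|"):
    """Produce one record per recurrence of the recurring field group."""
    parts = line.split(separator)
    static = dict(zip(STATIC, parts[:len(STATIC)]))
    rest = parts[len(STATIC):]
    step = len(RECURRING)
    records = []
    for i in range(0, len(rest), step):
        record = dict(static)  # each record repeats the static fields
        record.update(zip(RECURRING, rest[i:i + step]))
        records.append(record)
    return records

line = ("1380004749|tcpc113.RIESLING.INTERN|LogicalDisk|C:|% Free Space|"
        "66.379264831543|Current Disk Queue Length|0|Avg. Disk sec/Transfer|"
        "0.000484383897855878")
records = parse_recurring_line(line)
print(len(records))  # 3 records: one per countername-countervalue pair
```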
Setting the Line Start Indicator (Event only)
The line start indicator enables you to differentiate structured log file entries based on their logical relationship, regardless of their span in the log file. When a log entry that represents a single logical unit spans more than one line in the log file, you can differentiate it from other log entries by identifying a line start indicator in the log file, and then specifying the matching line start pattern by using the OM pattern-matching language.

For example, the following tomcat.log file excerpt contains four logically separate log entries that span multiple log lines; however, all of them start with a timestamp (May 19, 2015 2:39:01 PM):
May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardService initInternal
SEVERE: Failed to initialize connector [Connector[HTTP/1.1-30000]]
org.apache.catalina.LifecycleException: Failed to initialize component [Connector[HTTP/1.1-30000]]
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:106)
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:821)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
... 12 more
May 19, 2015 2:39:01 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 3622 ms
May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: OpenView TomcatB
Therefore, in this case, the line start indicator must match the timestamp pattern, as follows:
<*> <#>, <4#> <#>:<#>:<#> <2*>
In this instance, the following applies:
Log Text | May | 19 | 2015 | 2 | 39 | 01 | PM |
---|---|---|---|---|---|---|---|
OM Pattern | <*> | <#> | <4#> | <#> | <#> | <#> | <2*> |
Description | Matches string | Matches digit(s) | Matches 4 digits | Matches digit(s) | Matches digit(s) | Matches digit(s) | Matches string with length 2 |
The punctuation marks and spaces in the line start pattern represent the static strings derived from the log file text.
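A rough Python sketch can make this grouping concrete. The regex below is an assumed approximate translation of the OM pattern <*> <#>, <4#> <#>:<#>:<#> <2*>, not the OM engine itself:

```python
import re

# Assumed regex translation of the line start pattern:
# <*> -> non-space string, <#> -> digits, <4#> -> four digits, <2*> -> two characters
LINE_START = re.compile(r"\S+ \d+, \d{4} \d+:\d+:\d+ \S{2}\b")

def group_entries(lines):
    """Join physical lines into logical entries that begin at the indicator."""
    entries = []
    for line in lines:
        if LINE_START.match(line) or not entries:
            entries.append(line)
        else:
            entries[-1] += "\n" + line  # continuation of the previous entry
    return entries

lines = [
    "May 19, 2015 2:39:01 PM org.apache.catalina.core.StandardService initInternal",
    "SEVERE: Failed to initialize connector [Connector[HTTP/1.1-30000]]",
    "May 19, 2015 2:39:01 PM org.apache.catalina.startup.Catalina load",
]
print(len(group_entries(lines)))  # 2 logical entries
```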
Tasks
How to configure the structured log file source
This task describes how to configure the structured log file source and how the policy reads the file.
- Type the full path to the log file on the Operations Connector system.
- Click to load a sample log file. You can load a sample file from the Operations Connector system or from the system where the Web browser runs.

  When you load sample data, Operations Connector replaces already loaded data with the new data. This does not affect any mappings that are defined based on previously available sample data.

- For the Event integration only:
  - In the Logfile Structure field, enter an OM pattern by which the log file will be structured. You can see the newly structured data on the Structured data tab of the Structured logfile sample data window.
  - In the Line Start Indicator field, enter a line start pattern that matches the line start indicator from the log file.
- For the Metric integration only, in the Logfile Structure field:
  - Choose the data field by which the log file structure will be identified. You can identify it either by using an OM pattern or by using static fields.
  - Define the recurring fields in the log file structure.
  - Enter the field separator.
UI Descriptions
UI Element | Description |
---|---|
Structured Logfile Source | |
Log File Path / Name | Path and name of the structured log file that the policy reads. |
Polling Interval | Determines how often the policy reads the structured log file (in days, hours, minutes, and seconds). This period of time is the polling interval. The larger the polling interval, the lower the performance impact; however, more memory is used (this depends on the amount of data in the log file). Setting the polling interval below 30 seconds is not recommended; the default setting is usually appropriate. Note that a policy begins to evaluate data after the first polling interval passes. To modify the time, click the button and use the drop-down lists to specify increments of days, hours, minutes, or seconds. Default value: 5 minutes. |
Logfile Character Set | Name of the character set used by the structured log file that the policy reads. It is important to choose the correct character set: if the character set that the policy expects does not match the character set in the structured log file, pattern matching may not work, and the event details may have incorrect characters or be truncated in OMi. If you are unsure which character set the structured log file uses, consult the documentation of the program that writes the file. Default value: UTF-8. |
Send event if log file does not exist | Default value: not selected. |
Close after reading | If you select this option, the file handle of the structured log file closes and reopens after the polling interval, and the file is read from the last position. If the file had a rollover in the meantime, it is read from the beginning. If the name of the structured log file changes and a new file was started in the meantime, the policy continues with the new structured log file, and the original structured log file data is lost. If you do not select this option, the file handle remains open, unless there is a newer file with the same name (or name pattern); in that case the original structured log file is read to the end, and then the newer file is read. Therefore, no data is lost. Default value: not selected. |
Read Mode | The read mode of a structured log file policy indicates whether the policy processes the entire file or only new entries. Every policy reads the same structured log files independently of any other policy. This means, for example, that if "Policy 1" with read mode Read from beginning (first time) is activated and "Policy 2" with the same read mode already exists, "Policy 1" still reads the entire file after it has been activated. Default value: Read from last position. |
Sample Data | |
 | Loads the log file into Operations Connector. |
 | Opens the Structured logfile sample data dialog box. |
Logfile Structure | |
Logfile Pattern (for events only) | A pattern by which the log file's structure is extracted and which is used in all other policy operations. This pattern should comply with the standard pattern definition used by all Operations Manager products (OM pattern). |
Data Fields (for metrics and generic output only) | |
Recurring Fields (for metrics and generic output only) | A word list that contains the recurring part of the log line. Each recurrence creates a record in the store. |
Data Field Separator (for metrics and generic output only) | The separator that is used as a data separator in the log file. |
Line Start Indicator (for events only) | |
Line Start Pattern | This field enables you to differentiate the structured log file entries based on their logical relationship, regardless of their span in the log file. You can do this by identifying a line start indicator in the log file, and then specifying the matching line start pattern by using the OM pattern-matching language. |
Related topics
Configuring Data Source in Structured Log File Policies