Why do we discuss a tracer in the event data if all data already carry hostname and timestamp fields? We will answer this shortly, but tracers let us generate artificial markers of varying size that test the flow of data in production without affecting the existing data or requiring any user changes to it or its manipulation. In other words, we add new data with different payloads. As part of the payload, the tracer can journal (yes, I'm using that term) not only which hosts are participating at what time but also record all operations taken on that data flow, so that we can check whether the record of operations matches the final result on the data. If the tracer weren't there, we would have to identify the machines that participated and go through the logs on those machines to reconstruct the timeline, whereas these records become available automatically. This is not a replacement for the logging of existing components but an addition for each processor to indicate what it executed.
There are three components to enabling a tracer in the event data for Splunk. First is the forwarder, which creates the event. Second is the indexer, which indexes the data. Third is the search head or peer, which searches the data.
To create a tracer in the data, an admin handler is registered. It supports the following methods (a sketch follows below):
to create/update
to delete
to list
and others such as reload.
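As a rough sketch, such a handler could be written against Splunk's Python admin-handler interface. The conf file name ("tracer"), the stanza, and the settings shown here are assumptions for illustration, not the actual implementation.

# Sketch of a tracer admin handler using Splunk's Python admin interface.
# The conf file name ("tracer") and its settings are illustrative assumptions.
import splunk.admin as admin

class TracerConfigHandler(admin.MConfigHandler):
    def setup(self):
        # Declare the arguments that create/update actions accept.
        if self.requestedAction in (admin.ACTION_CREATE, admin.ACTION_EDIT):
            for arg in ['disabled', 'interval', 'index']:
                self.supportedArgs.addOptArg(arg)

    def handleList(self, confInfo):
        # List: return every stanza of the hypothetical tracer.conf.
        conf = self.readConf("tracer")
        if conf is not None:
            for stanza, settings in conf.items():
                for key, val in settings.items():
                    confInfo[stanza].append(key, val)

    def handleCreate(self, confInfo):
        # Create: write a new stanza with the caller's settings.
        self.writeConf("tracer", self.callerArgs.id, self.callerArgs.data)

    def handleEdit(self, confInfo):
        # Update: overwrite the existing stanza with the new settings.
        self.writeConf("tracer", self.callerArgs.id, self.callerArgs.data)

    def handleRemove(self, confInfo):
        # Delete: here we simply mark the stanza disabled instead of removing it.
        self.writeConf("tracer", self.callerArgs.id, {'disabled': ['1']})

    def handleReload(self, confInfo):
        # Reload: nothing is cached in this sketch, so re-reading the conf suffices.
        self.readConf("tracer")

admin.init(TracerConfigHandler, admin.CONTEXT_NONE)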
The tracer can be triggered via the UI, a conf file, and even the CLI.
The intention of the tracer is to show how the data is flowing, and it can be put in any pipeline.
Therefore it is a processor that can be invoked on different sources and streams. However, we will deal with creating a dedicated data flow for the tracer that can be started from any forwarder.
To add the processor and the admin handler, we just implement the existing interfaces.
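To give a feel for what implementing such a processor interface might look like, here is a minimal sketch. Splunk's internal pipeline API is not public, so the base class, method names, and event fields below are hypothetical.

# Hypothetical processor interface; base class, method names, and event fields
# are assumptions for illustration, not Splunk's internal API.
class PipelineProcessor:
    def process(self, event):
        raise NotImplementedError

class TracerProcessor(PipelineProcessor):
    """Stamps tracer events with this host's participation; passes other data through."""
    def __init__(self, hostname, enabled=True):
        self.hostname = hostname
        self.enabled = enabled

    def process(self, event):
        # Non-tracer data, or a disabled tracer, leaves the event untouched.
        if not self.enabled or not event.get("is_tracer"):
            return event
        # Record that this host participated in the flow.
        event.setdefault("hops", []).append(self.hostname)
        return event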
Enabling the indexer to add its own data is slightly different; we will cover that in more detail shortly.
First, the data has to be labeled. We can create a custom data record with a header that uniquely identifies this type of data.
Second, as the data makes its way through, each host adds a separate entry with its hostname and timestamp. Further details can be added later as appropriate.
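As a rough sketch of what such a record might look like (the header string and field names are assumptions, not the actual format): a header that labels the record as tracer data, plus one hop entry per participating host.

# Sketch of a possible tracer record layout; header and field names are illustrative only.
import socket
import time

TRACER_HEADER = "##SPLUNK_TRACER##"   # assumed unique label for this data type

def new_tracer_record(tracer_id):
    """Start a tracer record with the identifying header."""
    return {"header": TRACER_HEADER, "id": tracer_id, "hops": []}

def add_hop(record, operation=""):
    """Each participating host appends its own hostname/timestamp entry."""
    record["hops"].append({
        "host": socket.gethostname(),
        "time": time.time(),
        "operation": operation,   # e.g. "created", "parsed", "indexed"; optional detail
    })
    return record

# Example: a forwarder creates the record, then stamps its own hop.
record = add_hop(new_tracer_record("tracer-0001"), operation="created")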
The destination index for this kind of data should always be an internal one, since this is for diagnostics. The destination could be switched to the nullQueue if specified, but that is not relevant at this time.
Third, there needs to be a mechanism to turn the tracer on and off. This can be done via the controls that the admin handler processes.
Initially there needs to be only one tracer per Splunk instance, but this can be expanded to different inputs as desired. The config entry that enables it globally for the Splunk instance can also be specified locally for any input.
The presence of a tracer entry in the config means that Splunk will attempt to send tracer data every 15 minutes through its deployment to its indexes, where it can be viewed globally.
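A minimal sketch of that behaviour, assuming a hypothetical [tracer] stanza whose interval setting defaults to 900 seconds (15 minutes). The stanza name, settings, and emit hook are assumptions for illustration, not the actual scheduler.

# Sketch: the presence of a tracer stanza drives a periodic emission loop.
# Stanza name, settings, and the emit callback are illustrative assumptions.
import configparser
import threading

def load_tracer_settings(path="tracer.conf"):
    cp = configparser.ConfigParser()
    cp.read(path)
    if not cp.has_section("tracer"):
        return None                       # no entry => no tracer is sent
    return {
        "disabled": cp.getboolean("tracer", "disabled", fallback=False),
        "interval": cp.getint("tracer", "interval", fallback=900),   # 15 minutes
        "index": cp.get("tracer", "index", fallback="_internal"),
    }

def schedule_tracer(emit, settings):
    """Re-arm a timer so a tracer record is emitted every `interval` seconds."""
    if settings is None or settings["disabled"]:
        return
    def tick():
        emit(settings["index"])           # caller-supplied hook that sends the record
        threading.Timer(settings["interval"], tick).start()
    threading.Timer(settings["interval"], tick).start()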
The tracer data is very much like audit data, except that it is more public and user-friendly. It carries information that enables a holistic view of all actively participating Splunk instances.