Monday, June 16, 2014

In the previous two posts we discussed adding a tracer/marker event that carries tracking and operation-log information to the event data in Splunk. We mentioned that all events carry host and timestamp information, and we covered why we want this special event. The crux of the implementation is the introduction of this special event into the data pipeline. To create the event, we follow the same mechanism that we use for audit events.
Before:

              1       2       3
              __      __      __
             |__|    |__|    |__|             Events --->
--------------------------------------------

After:

              1       2    Tracker    3
              __      __      __      __
             |__|    |__|    |__|    |__|     Events --->
--------------------------------------------
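To make the picture above concrete, here is a toy sketch of splicing a tracer into an ordinary event stream. The event shapes and the interval are made up for illustration; this is not Splunk's pipeline API.

    # Toy illustration of the diagram above: splice a tracer event
    # into the ordinary event stream. Shapes and interval are
    # hypothetical, chosen only to show the interleaving.
    def with_tracers(events, every=100):
        """Yield the original events unchanged, inserting a tracer
        event after every `every` regular events."""
        for i, event in enumerate(events, start=1):
            yield event
            if i % every == 0:
                yield {"sourcetype": "tracer"}  # placeholder tracer event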

When we generate these events, we want to inject them locally into each stream. But we could start with a global tracker that traces the route the data takes between the various instances in a deployment. The events make their way to an internal index. The event is written by creating an instance of pipeline data from a pre-specified model, and fields are added to it so that it conforms to a regular event.
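As a sketch of that construction (the template, the field names, and the make_tracer_event helper here are assumptions for illustration, not Splunk's actual internals):

    import socket
    import time
    import uuid

    # Pre-specified model for the tracer event; a real implementation
    # would load this from configuration rather than hard-code it.
    TRACER_TEMPLATE = {
        "sourcetype": "tracer",
        "index": "_internal",  # tracer events land in an internal index
    }

    def make_tracer_event(stream_id):
        """Instantiate the template, then add the fields every regular
        event carries (host, timestamp) plus a tracking id."""
        event = dict(TRACER_TEMPLATE)
        event["host"] = socket.gethostname()
        event["_time"] = time.time()
        event["tracker_id"] = str(uuid.uuid4())  # correlates hops across instances
        event["stream"] = stream_id
        return event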
These events are then sent on their way to a queue. Initially we could send them directly to the queue serviced by an indexing thread, but in order not to hold up that queue or deadlock ourselves, we could create a separate queue that feeds into the other queue.
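A sketch of that two-queue handoff (the queue names and sizes are assumptions): the producer never blocks on the tracer queue, and a small forwarder thread is the only thing that ever waits on the indexing queue.

    import queue
    import threading

    indexing_queue = queue.Queue(maxsize=1000)  # serviced by the indexing thread
    tracer_queue = queue.Queue(maxsize=100)     # separate queue for tracer events

    def emit_tracer(event):
        """Producers call this; dropping on overflow is safer than
        blocking the pipeline that generated the tracer."""
        try:
            tracer_queue.put_nowait(event)
        except queue.Full:
            pass  # losing a tracer beats holding up or deadlocking the pipeline

    def forward_tracers():
        """Background thread: feed tracer events into the indexing queue."""
        while True:
            event = tracer_queue.get()
            indexing_queue.put(event)  # may block, but only this thread waits

    threading.Thread(target=forward_tracers, daemon=True).start()

The design choice here is that dropping an occasional tracer on overflow is preferable to letting the instrumentation stall or deadlock the very pipeline it is meant to observe.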

