Wednesday, August 21, 2019

Whether a service broker provisions a log index store or a native S3 bucket, the broker code that provisions the instance is largely the same: it issues web requests to an external service provider. The brokers published for the public clouds are open source, so their layout is easy to follow for a custom implementation. Network connectivity to the service provider is therefore a prerequisite for the service broker.
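As a minimal sketch of that layout, the provision handler below follows the Open Service Broker API path convention and relays the request to the provider. The Flask framing, the PROVIDER_URL endpoint, and the payload shape are assumptions for illustration, not any particular broker's implementation:

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint on the external service provider that creates the instance.
PROVIDER_URL = "https://provider.example.com/api/indexes"

@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    # The broker relays the provision call as a web request to the provider.
    params = (request.get_json(silent=True) or {}).get("parameters", {})
    resp = requests.put(PROVIDER_URL + "/" + instance_id,
                        json={"name": params.get("name", instance_id)},
                        timeout=30)  # connectivity to the provider is a prerequisite
    resp.raise_for_status()
    return jsonify({}), 201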
When the log index store is provisioned on a Splunk instance, the REST APIs to create and secure the index are straightforward and, at the same time, offer options to control every aspect of the indexing. The search operators can also make extended use of the inferred properties as well as the metadata of the logs.
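For example, Splunk's management port exposes an endpoint to create an index; a hedged sketch with the requests library (host, credentials, and index name are placeholders) might be:

import requests

SPLUNK = "https://splunk.example.com:8089"  # management port, placeholder host

# Create a new index; the same endpoint family also accepts settings such as
# maxTotalDataSizeMB and frozenTimePeriodInSecs to control retention and size.
resp = requests.post(SPLUNK + "/services/data/indexes",
                     auth=("admin", "changeme"),   # placeholder credentials
                     data={"name": "k8s_changes"},
                     verify=False)                 # dev instances often use self-signed certs
resp.raise_for_status()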
In addition to logs, all the changes from the Kubernetes API server can be sent to the log index store. This is very useful for monitoring changes to workloads or ConfigMaps. A dashboard can then be created to visualize the changes and their impact, which goes beyond the default shipping of events.
All the objects listed in the Kubernetes API reference with group core can be used for this purpose, and the technique extends to objects with group apps as well as namespaced objects, as the sketch below shows.
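As an illustrative sketch, the kubernetes Python client can watch such objects and forward each change to the Splunk HTTP Event Collector. The HEC host, token, and index name below are assumptions carried over from the earlier example:

import json
import requests
from kubernetes import client, config, watch

config.load_kube_config()   # use load_incluster_config() when running in a pod
core = client.CoreV1Api()   # core group: ConfigMaps, Pods, Services, ...

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_HEADERS = {"Authorization": "Splunk <hec-token>"}   # placeholder token

def ship(event):
    # Wrap the change as an HEC event destined for the index created earlier.
    payload = {"index": "k8s_changes", "sourcetype": "_json",
               "event": {"type": event["type"],
                         "kind": event["raw_object"].get("kind"),
                         "name": event["raw_object"]["metadata"]["name"]}}
    requests.post(HEC_URL, headers=HEC_HEADERS,
                  data=json.dumps(payload), verify=False)

# Watch ConfigMap changes; apps-group objects (e.g. Deployments via
# client.AppsV1Api().list_deployment_for_all_namespaces) can be watched the same way.
w = watch.Watch()
for event in w.stream(core.list_config_map_for_all_namespaces):
    ship(event)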
The logs are transmitted over HTTP. The event collector in the log index store has an HTTP listener that needs to be turned on; it is usually not enabled by default.
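On Splunk, the listener can be turned on over the same management REST API. The sketch below is based on the documented HEC enable and token-creation endpoints, with placeholder host and credentials:

import requests

SPLUNK = "https://splunk.example.com:8089"
AUTH = ("admin", "changeme")  # placeholder credentials

# Enable the HTTP Event Collector listener (off by default).
requests.post(SPLUNK + "/servicesNS/admin/splunk_httpinput/data/inputs/http/http/enable",
              auth=AUTH, verify=False)

# Create a token that senders present in the Authorization header.
resp = requests.post(SPLUNK + "/services/data/inputs/http",
                     auth=AUTH,
                     data={"name": "k8s-token", "index": "k8s_changes"},
                     verify=False)
resp.raise_for_status()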
On the Kubernetes side, the collector is typically one of the container runtime's logging drivers; the JSON logging driver, the journald logging driver, and fluentd are some examples. FluentD gives the ability to customize each input channel in terms of format and content, which subsequently helps with search queries.
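For instance, a fluentd input channel can be tailed and parsed as JSON, then enriched before forwarding. The snippet below is a sketch in fluentd's own configuration format; it assumes the fluent-plugin-splunk-hec output plugin is installed, and the host, token, and cluster name are placeholders:

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Enrich every record so search queries can filter on the cluster name.
<filter kubernetes.**>
  @type record_transformer
  <record>
    cluster demo
  </record>
</filter>

# Forward to the HTTP Event Collector enabled earlier.
<match kubernetes.**>
  @type splunk_hec
  hec_host splunk.example.com
  hec_port 8088
  hec_token <hec-token>
</match>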
