On the Kubernetes side, the collector is usually one of the container runtime's logging drivers; the JSON file driver, the journald driver, and fluentd are some examples. Fluentd gives the ability to customize each input channel in terms of format and content, which subsequently helps with search queries.
There are many connectors to choose from for any given log index store. In this section, we describe a connector for stream storage, because it lets the collected data be sent directly to the stream store.
The Logstash connector can send data to a stream. A plugin for Logstash is required only when the stream needs to be bound. In this case, a client configuration is instantiated and bound to the stream controller URI, along with credentials such as a username and password if the stream connections are not open to all. The stream manager is given this client configuration along with the parameters to set up the stream; for example, a routing key and a stream configuration with a scaling policy that can include the minimum number of segments. The routing key determines which segment within the stream an event is written to, so events that share a key stay ordered. The stream configuration describes the behaviour of the stream. With this configuration provided to the stream manager, a stream can be created.
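The terminology above matches the Pravega client API, so as a minimal sketch of the setup step (assuming Pravega, with a hypothetical controller URI, scope, and stream name), the stream creation might look like this:

import java.net.URI;

import io.pravega.client.ClientConfig;
import io.pravega.client.admin.StreamManager;
import io.pravega.client.stream.ScalingPolicy;
import io.pravega.client.stream.StreamConfiguration;

public class CreateLogStream {
    public static void main(String[] args) {
        // Bind the client configuration to the stream controller URI.
        // Credentials would be added here if the stream connections are not open to all.
        ClientConfig clientConfig = ClientConfig.builder()
                .controllerURI(URI.create("tcp://controller.example.com:9090")) // hypothetical endpoint
                .build();

        // The stream configuration describes the behaviour of the stream,
        // here a scaling policy fixed at a minimum of one segment.
        StreamConfiguration streamConfig = StreamConfiguration.builder()
                .scalingPolicy(ScalingPolicy.fixed(1))
                .build();

        try (StreamManager streamManager = StreamManager.create(clientConfig)) {
            streamManager.createScope("logs");                              // hypothetical scope
            streamManager.createStream("logs", "logstream", streamConfig);  // hypothetical stream name
        }
    }
}

The scaling policy here is the simplest fixed one; an automatic scaling policy based on event or byte rate could be used instead if the log volume fluctuates.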
Then a client factory can instantiate a reader and a writer on the stream. With the writer, encoded data can be serialized and written as events into the event stream. All subsequent data writes essentially go through the stream writer.
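Continuing the same sketch (again assuming the Pravega client, with hypothetical scope, stream, and routing-key values), the writer side might look like this:

import java.net.URI;

import io.pravega.client.ClientConfig;
import io.pravega.client.EventStreamClientFactory;
import io.pravega.client.stream.EventStreamWriter;
import io.pravega.client.stream.EventWriterConfig;
import io.pravega.client.stream.impl.UTF8StringSerializer;

public class WriteLogEvents {
    public static void main(String[] args) {
        ClientConfig clientConfig = ClientConfig.builder()
                .controllerURI(URI.create("tcp://controller.example.com:9090")) // hypothetical endpoint
                .build();

        try (EventStreamClientFactory clientFactory =
                     EventStreamClientFactory.withScope("logs", clientConfig);
             EventStreamWriter<String> writer = clientFactory.createEventWriter(
                     "logstream", new UTF8StringSerializer(), EventWriterConfig.builder().build())) {

            // All subsequent data writes go through the stream writer;
            // events sharing a routing key stay ordered on the same segment.
            writer.writeEvent("pod-42", "{\"level\":\"info\",\"msg\":\"container started\"}");
            writer.flush();
        }
    }
}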
This connector plugin will usually multiplex data into the event writer. As long as the plugin is bound to the connector, it acts like a funnel.
Logstash can handle all types of logging data. At some point, metrics and log entries may look similar as data, which lets Logstash extend its inputs to collect from both sources. The output of Logstash is the input to the stream; the plugin for a particular stream is therefore required on the output side of Logstash.
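As a rough illustration only (the output plugin name and its option names depend on the stream store's Logstash plugin and are assumptions here), a pipeline with the stream plugin on the output side might be configured along these lines:

input {
  file {
    path => "/var/log/containers/*.log"   # collect container log files
  }
}

output {
  # hypothetical output plugin and options for a stream store such as Pravega
  pravega {
    pravega_endpoint => "tcp://controller.example.com:9090"
    scope            => "logs"
    stream_name      => "logstream"
  }
}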
Logstash may be considered external to the Kubernetes cluster, but it can also be deployed local to the cluster.
Logstash could be considered heavyweight for the purpose of sending logs to a store. Any log forwarder can simply forward the logs to a store listening on a TCP port; this is the equivalent of the Unix command "tail -f logfile | nc host port". The store can even be the same stream storage that runs within the cluster.
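A minimal code equivalent of that one-liner (the host, port, and log file path below are placeholders) is a forwarder that tails the file and writes new lines to the store's TCP listener:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;
import java.net.Socket;

public class LogForwarder {
    public static void main(String[] args) throws Exception {
        String logFile = "/var/log/app.log";                         // placeholder log file
        try (Socket socket = new Socket("store.example.com", 5000);  // placeholder host and port
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {
                    Thread.sleep(500);   // no new data yet; wait, like tail -f
                } else {
                    out.println(line);   // forward the line to the listener
                }
            }
        }
    }
}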