Sunday, September 15, 2019

The difference between log forwarding and event forwarding becomes clear when we consider the command-line options of kube-apiserver. For example, the --audit-log-path option writes audit events to a log file that cannot be accessed from within the Kubernetes runtime environment of the cluster. This option therefore cannot be used directly with Fluentd, which runs as a containerized workload. The --audit-webhook-config-file option, on the other hand, lets a service listen for callbacks from the Kubernetes control plane on the arrival of audit events. A service such as Falco that listens on this webhook endpoint runs in its own container as a Kubernetes service. The control plane makes a web request per audit event, and since the events are forwarded over HTTP, the Falco service can efficiently handle the rate and latency of the traffic.
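As a sketch, the file mode is selected with a flag such as --audit-log-path=/var/log/kubernetes/audit.log, while the webhook mode takes a kubeconfig-format file via --audit-webhook-config-file. A minimal example of that file, assuming a placeholder Falco service address and path:

```yaml
# Hypothetical webhook configuration (kubeconfig format) passed to
# kube-apiserver via --audit-webhook-config-file. The server URL,
# cluster name, and port are placeholders for this sketch.
apiVersion: v1
kind: Config
clusters:
  - name: audit-sink
    cluster:
      server: http://falco-service.default.svc.cluster.local:8765/k8s_audit
contexts:
  - name: audit-sink
    context:
      cluster: audit-sink
      user: ""
current-context: audit-sink
```

With this in place the control plane pushes each audit event to the named server over HTTP.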
The performance trade-off between the two options is also notable. Log forwarding is the equivalent of running the tail command on the log file and forwarding its output over TCP with the netcat command. It transfers the same amount of data and uses a TCP connection, although it does not traverse as many layers as the webhook. It is also suitable for a syslog drain, which enables further performance improvements.
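The tail-and-forward model can be sketched in a few lines of Python (hypothetical helper names; a real shipper such as Fluentd's tail input also handles file rotation, checkpointing, and backpressure):

```python
# Sketch of the "tail the log, forward each line over a connection" model.
import json

def follow(fp):
    """Yield complete lines appended to an open log file.

    Stops at end-of-file for simplicity; a real tailer would sleep and
    retry, and would hold back a partial line until its newline arrives.
    """
    buf = ""
    while True:
        chunk = fp.readline()
        if not chunk:
            break
        buf += chunk
        if buf.endswith("\n"):
            yield buf.rstrip("\n")
            buf = ""

def forward(lines, send):
    """Push each audit line to a transport callback (e.g. a socket send)."""
    count = 0
    for line in lines:
        json.loads(line)  # sanity check: each line is one JSON audit event
        send(line)
        count += 1
    return count
```

In this picture the forwarder moves the raw bytes as-is, which is why the data volume matches the log file exactly.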
The webhook, by contrast, is a push model and requires packing and unpacking of data as it traverses up and down the network layers. There is no buffering on the service side, so some data may be lost if the service goes down. The connectivity is also more subject to faults than the syslog drain. However, HTTP is well suited for message-broker intake, which facilitates filtering and processing that can significantly improve performance.
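The receiving side of the push model can be sketched as a handler that unpacks one HTTP body (the audit webhook posts an audit.k8s.io EventList) and hands each event to a broker publish callback. The function and callback names here are illustrative, not Falco's actual API:

```python
# Sketch of a webhook intake handler feeding a message broker.
import json

def handle_webhook(body: bytes, publish) -> int:
    """Parse one webhook POST body and publish its events; return the count."""
    batch = json.loads(body)
    if batch.get("kind") != "EventList":
        raise ValueError("expected an audit EventList")
    items = batch.get("items", [])
    for event in items:
        # With no buffering on this side, publish() must succeed now or
        # the event is lost -- the durability gap of the push model.
        publish(event)
    return len(items)
```

Handing events straight to a broker is what makes downstream filtering and parallel processing cheap.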
The ability to transform events is not necessarily restricted to the container-based audit service or to services specific to the audit framework. The audit data is rather sensitive, which is why access to it is restricted. The transformation of events can also occur during analysis; when the events are transformed first, the queries over them become simpler. Streaming analysis enables a holistic and continuous view of the data from its origin, and with the help of windows over the data the transformations are efficient.
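A windowed transformation can be sketched as grouping the stream into fixed-size time windows and reducing each to per-verb counts, so later queries run over small summaries rather than raw events. The field names ("ts", "verb") are illustrative:

```python
# Sketch of a tumbling-window transformation over an audit event stream.
from collections import Counter, defaultdict

def tumbling_counts(events, window_seconds=60):
    """Return {window_start: Counter(verb -> count)} for a stream of events."""
    windows = defaultdict(Counter)
    for event in events:
        # Align each event's timestamp to the start of its window.
        start = (event["ts"] // window_seconds) * window_seconds
        windows[start][event["verb"]] += 1
    return dict(windows)
```

Because each window is bounded, the transformation keeps only a small running state no matter how long the stream runs.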
Transformations can also be persisted where the computations are costly, so those costs are paid once rather than every time the events are analyzed. Persisted transformations help with reuse and sharing, making it convenient and efficient to maintain a single source of truth. Transformations can also be chained between operators to form a pipeline, which makes the processing easier to diagnose and troubleshoot and improves the separation of concerns.
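Chaining and persistence can be sketched together: small operators composed into one pipeline, with a costly stage memoized as a stand-in for persisting its results. All operator names here are illustrative:

```python
# Sketch of chained operators forming a pipeline, with one costly stage
# cached so its cost is paid once rather than per analysis.
import functools

def pipeline(*stages):
    """Compose stages left to right into a single callable."""
    def run(events):
        for stage in stages:
            events = stage(events)
        return events
    return run

@functools.lru_cache(maxsize=None)
def enrich(verb):
    # Stand-in for a costly lookup; the cache plays the role of a
    # persisted transformation shared across analyses.
    return verb.upper()

def drop_reads(events):
    return [e for e in events if e["verb"] != "get"]

def add_label(events):
    return [{**e, "label": enrich(e["verb"])} for e in events]

audit_pipeline = pipeline(drop_reads, add_label)
```

Keeping each operator single-purpose is what makes the pipeline easy to diagnose: a fault can be localized to one stage.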
