Sunday, September 1, 2019

A survey of Kubernetes auditing frameworks:
1) A do-it-yourself approach: the native Kubernetes framework supports publishing all system events for core group resources such as pods, secrets, configmaps, and persistent volume claims. This can be turned on with the help of the following:
An audit policy specifies rules that match verbs such as watch against resources such as pods and configmaps. Each rule sets a level that captures an increasing amount of detail for publishing: metadata only, metadata plus the request body, or metadata plus the request and response bodies. Events are collected with the help of a collector agent such as fluentd, and a log backend or a gateway may be used to specify the destination. A minimal policy is sketched after this list.
2) Available product, Falco: this is an auditing solution that can be deployed to a Kubernetes cluster and automates audit-related activities. It also hosts a web server that can accept audit events over HTTP, enabling possibilities that would otherwise require code to be written against the Kubernetes framework.
Falco is available as a container image intended to run as a DaemonSet, so its deployment follows the same rules as any other containerized workload, and its resources are created in the same way as any other Kubernetes resource. A minimal DaemonSet sketch appears after this list.
3) Kubesec.io: this is an open-source solution that features a rich web API server. It can scan any Kubernetes resource from its declaration and provide a security analysis in the form of a risk score. The framework comes bundled with an HTTP server that supports dynamic queries for security scans on Kubernetes resources; an example of the kind of manifest it scores appears after this list.
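As a rough illustration of the first approach, a minimal audit policy might look like the following; the resource lists and levels here are assumptions to be adapted to the cluster's needs:

    # audit-policy.yaml - a minimal illustrative policy (rules are assumptions)
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      # Capture full request and response bodies for changes to sensitive resources
      - level: RequestResponse
        verbs: ["create", "update", "patch", "delete"]
        resources:
          - group: ""                 # core API group
            resources: ["secrets", "configmaps", "persistentvolumeclaims"]
      # Capture only metadata for watches on pods
      - level: Metadata
        verbs: ["watch"]
        resources:
          - group: ""
            resources: ["pods"]
      # Drop everything else
      - level: None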
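For the second approach, a Falco DaemonSet might be declared along these lines; the namespace, image tag, and host mounts are assumptions, and a real deployment would follow the project's shipped configuration:

    # A minimal sketch of a Falco DaemonSet; not the project's official manifest
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: falco
      namespace: falco               # assumed namespace
    spec:
      selector:
        matchLabels:
          app: falco
      template:
        metadata:
          labels:
            app: falco
        spec:
          serviceAccountName: falco  # see the RBAC sketch further below
          containers:
            - name: falco
              image: falcosecurity/falco:latest   # pin a specific tag in practice
              securityContext:
                privileged: true     # required for kernel-level instrumentation
              volumeMounts:
                - name: proc
                  mountPath: /host/proc
                  readOnly: true
          volumes:
            - name: proc
              hostPath:
                path: /proc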
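For the third approach, kubesec scores a declaration on fields such as the security context and resource limits. The pod below is a made-up example of the kind of manifest it would scan; the scoring weights belong to kubesec and are not shown:

    # Hypothetical pod manifest submitted to kubesec for scoring
    apiVersion: v1
    kind: Pod
    metadata:
      name: scored-example
    spec:
      containers:
        - name: app
          image: nginx:1.17              # illustrative image
          securityContext:
            runAsNonRoot: true           # settings like these raise the score
            readOnlyRootFilesystem: true
          resources:
            limits:                      # resource limits also factor into the score
              cpu: "500m"
              memory: "256Mi"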
The approach taken by each of the methods above is somewhat different. Native Kubernetes auditing is the simplest and uses very few resources, while allowing both the policy and the storage definitions to remain flexible. Fluentd is a standard choice for log collection. By default all audit events sink to log files, although they could be redirected to syslog.
Falco takes the approach of deploying the Kubernetes Response Engine (KRE), which provides a way to send Falco alerts to a message broker service such as Amazon Web Services' Simple Notification Service. It also helps deploy security playbooks that can mitigate Falco alerts when they are raised. A service account needs to be created for Falco to connect to the Kubernetes API server and fetch resource metadata, and that service account must be secured with a suitable role-based access control (RBAC) policy, as is typical for any application hosted on Kubernetes. With the image, the default YAML configuration, the startup script, and the service account and role (sketched below), Falco provides a simple DaemonSet deployment.
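A minimal sketch of the service account and RBAC objects might look as follows; the exact resource list Falco needs is an assumption here:

    # Sketch of a read-only service account for Falco (resource list assumed)
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: falco
      namespace: falco
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: falco-reader
    rules:
      - apiGroups: [""]
        resources: ["pods", "nodes", "namespaces", "services"]
        verbs: ["get", "list", "watch"]   # read-only access to metadata
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: falco-reader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: falco-reader
    subjects:
      - kind: ServiceAccount
        name: falco
        namespace: falco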
The approach taken by Kubesec.io, among other typical deployments, includes an admission controller that prevents a privileged DaemonSet, Deployment, or StatefulSet from being applied to a Kubernetes cluster: the declaration of each is scanned and scored, and a minimum score is required to gain admission. A sketch of how such a gate could be wired appears below.
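One way to wire such an admission gate is a validating webhook that forwards declarations to a kubesec-backed service; the webhook name, service name, namespace, and path below are hypothetical:

    # Hypothetical ValidatingWebhookConfiguration for score-based admission
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: kubesec-validation
    webhooks:
      - name: kubesec.example.com        # hypothetical webhook name
        admissionReviewVersions: ["v1"]
        sideEffects: None
        clientConfig:
          service:
            name: kubesec-webhook        # assumed in-cluster scanning service
            namespace: kubesec
            path: /scan
          # caBundle omitted; the webhook's TLS certificate must be trusted
        rules:
          - apiGroups: ["apps"]
            apiVersions: ["v1"]
            operations: ["CREATE", "UPDATE"]
            resources: ["daemonsets", "deployments", "statefulsets"]
        failurePolicy: Fail              # reject when the minimum score is not met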
All three approaches require the kube-apiserver to be started with specific command-line parameters to initiate auditing, locate the rules declaration, and specify the log file and its rotation (sketched below). This is done over SSH with the help of suitable credentials, the API server host address, and the path to the rules file to be uploaded.
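On clusters where the control plane runs as a static pod, these documented kube-apiserver flags can be added to its manifest; the excerpt below is a sketch, and the file paths are assumptions:

    # Excerpt from the kube-apiserver static pod manifest (paths are assumptions)
    spec:
      containers:
        - name: kube-apiserver
          command:
            - kube-apiserver
            - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
            - --audit-log-path=/var/log/kubernetes/audit.log
            - --audit-log-maxage=30       # days to retain old audit log files
            - --audit-log-maxbackup=10    # number of rotated files to keep
            - --audit-log-maxsize=100     # megabytes per file before rotation
          # The policy file must also be mounted into the apiserver pod.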
As a side note, when clusters run on PKS (Pivotal Container Service, a cloud technology for automating and hosting Kubernetes clusters), the SSH option may not be available. The PKS-native way is to log in to the PKS API server with a username and password and interact with it using the pks command-line tool. The file uploaded as the audit rules is used across clusters, so this is a one-time requirement outside of the installer for the cluster. The same can also be mentioned in a user manual for the installer or the product's security configuration guide.
Audit events thus collected can subsequently be queried for specific insights.
