Wednesday, September 25, 2019

Logs may also optionally be converted to events, which can then be forwarded to an event gateway. Products like BadgerDB can retain the events for a certain duration.
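
As an illustration of such retention, a key-value store like BadgerDB can hold serialized events with a time-to-live so that they expire after the retention window. A minimal sketch, where the key layout, payload, and retention period are assumptions:
package main

import (
        "fmt"
        "time"

        badger "github.com/dgraph-io/badger/v3"
)

// retainEvent stores a serialized event under a time-ordered key and lets
// BadgerDB expire it once the retention window has passed.
func retainEvent(db *badger.DB, payload []byte, retention time.Duration) error {
        key := []byte(fmt.Sprintf("event/%d", time.Now().UnixNano()))
        return db.Update(func(txn *badger.Txn) error {
                entry := badger.NewEntry(key, payload).WithTTL(retention)
                return txn.SetEntry(entry)
        })
}

func main() {
        db, err := badger.Open(badger.DefaultOptions("/tmp/badger-events"))
        if err != nil {
                panic(err)
        }
        defer db.Close()

        // Hypothetical payload; in practice this would be a serialized event.
        if err := retainEvent(db, []byte(`{"reason":"Scheduled"}`), 24*time.Hour); err != nil {
                panic(err)
        }
}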

Components will then be able to raise events like so:
c.recorder.Event(component, corev1.EventTypeNormal, successReason, successMessage)
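
A minimal sketch of how such a recorder can be wired up with client-go; the package name, component name, reason, and message below are placeholders:
package events

import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/tools/record"
)

// newRecorder wires an EventRecorder to the API server so that events raised
// by a component are persisted as Event objects.
func newRecorder(clientset kubernetes.Interface, componentName string) record.EventRecorder {
        broadcaster := record.NewBroadcaster()
        broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
                Interface: clientset.CoreV1().Events(""),
        })
        return broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: componentName})
}

// raiseSuccess records a Normal event against the given object; the reason and
// message stand in for a component's own successReason and successMessage.
func raiseSuccess(recorder record.EventRecorder, component runtime.Object) {
        recorder.Event(component, corev1.EventTypeNormal, "Synced", "resource synced successfully")
}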

An audit event can be similarly raised as shown below:
        ev := &auditinternal.Event{
                RequestReceivedTimestamp: metav1.NewMicroTime(time.Now()),
                Verb:                     attribs.GetVerb(),
                RequestURI:               req.URL.RequestURI(),
                UserAgent:                maybeTruncateUserAgent(req),
                Level:                    level,
        }
Additional data is attached when the request targets a resource:
        if attribs.IsResourceRequest() {
                ev.ObjectRef = &auditinternal.ObjectReference{
                        Namespace:   attribs.GetNamespace(),
                        Name:        attribs.GetName(),
                        Resource:    attribs.GetResource(),
                        Subresource: attribs.GetSubresource(),
                        APIGroup:    attribs.GetAPIGroup(),
                        APIVersion:  attribs.GetAPIVersion(),
                }
        }

Since Event is a core Kubernetes resource much like any other, it not only carries the standard metadata but can also be created, updated, and deleted via the API. The event recorder in client-go, for instance, eventually writes events through an EventSink:

// recordEvent attempts to write the event to the sink, either patching an
// existing correlated event or creating a new one, and reports whether it was recorded.
func recordEvent(sink EventSink, event *v1.Event, patch []byte, updateExistingEvent bool, eventCorrelator *EventCorrelator) bool {
        // ...
        newEvent, err = sink.Create(event)
        // ...
}
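
Beyond the recorder path, events can also be listed or created directly with the same typed client used for any other resource. A minimal sketch, assuming a local kubeconfig and the default namespace:
package main

import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        // Illustrative: build a clientset from the local kubeconfig.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
                panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        // Events go through the same list/create/update/delete machinery as any other resource.
        events, err := clientset.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
                panic(err)
        }
        for _, ev := range events.Items {
                fmt.Printf("%s\t%s\t%s\n", ev.LastTimestamp, ev.Reason, ev.Message)
        }
}
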
Events conform to append-only stream storage due to their sequential nature. They are also processed in windows, which makes a stream processor such as Flink especially suitable for them. Stream processors benefit from stream storage, and such storage can be overlaid on any Tier-2 storage. Object storage in particular, unlike file storage, can be very useful for this purpose, since the data also becomes web-accessible to other analysis stacks.
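
As a rough sketch of that overlay, serialized events could be appended as time-ordered, immutable objects in an S3-compatible store; the endpoint, credentials, bucket, and key scheme below are assumptions:
package main

import (
        "bytes"
        "context"
        "fmt"
        "time"

        "github.com/minio/minio-go/v7"
        "github.com/minio/minio-go/v7/pkg/credentials"
)

// appendEvent writes one serialized event as an immutable, time-ordered object,
// approximating an append-only stream on top of object storage.
func appendEvent(ctx context.Context, client *minio.Client, bucket string, payload []byte) error {
        key := fmt.Sprintf("events/%d.json", time.Now().UnixNano())
        _, err := client.PutObject(ctx, bucket, key,
                bytes.NewReader(payload), int64(len(payload)),
                minio.PutObjectOptions{ContentType: "application/json"})
        return err
}

func main() {
        // Placeholder endpoint and credentials for an S3-compatible store.
        client, err := minio.New("localhost:9000", &minio.Options{
                Creds: credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
        })
        if err != nil {
                panic(err)
        }
        payload := []byte(`{"reason":"Scheduled","message":"pod scheduled"}`)
        if err := appendEvent(context.TODO(), client, "event-stream", payload); err != nil {
                panic(err)
        }
}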

As compute, network, and storage overlap to expand the possibilities of each frontier at cloud scale, message passing has become a ubiquitous functionality. While libraries like Protocol Buffers and solutions like RabbitMQ are becoming popular, flows and their queues can be given native support in unstructured storage. Messages are also time-stamped and can be treated as events.
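
A minimal sketch of treating a message as a time-stamped event, here published to a RabbitMQ queue with the Go AMQP client; the broker URL, queue name, and payload are placeholders:
package main

import (
        "time"

        "github.com/streadway/amqp"
)

func main() {
        // Placeholder broker URL; a real deployment would supply its own credentials.
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
                panic(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
                panic(err)
        }
        defer ch.Close()

        // Declare a durable queue to hold the event stream.
        q, err := ch.QueueDeclare("events", true, false, false, false, nil)
        if err != nil {
                panic(err)
        }

        // Each message carries a timestamp and so can be treated as an event.
        err = ch.Publish("", q.Name, false, false, amqp.Publishing{
                ContentType: "application/json",
                Timestamp:   time.Now(),
                Body:        []byte(`{"reason":"Scheduled","message":"pod scheduled"}`),
        })
        if err != nil {
                panic(err)
        }
}
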
Although stream storage is the best fit for events, any time-series database could also work. However, such databases are not web-accessible unless their data resides in an object store. Their storage needs are not very different from those of applications that use object storage to store and access data. However, as object storage makes inroads into vectorized execution, data transfers become increasingly fragmented and continuous. At this juncture it becomes important to facilitate data transfer between objects and events, and it is in this space that events and the object store find mutual suitability. Search, browse, and query operations can then be offered in a web service backed by a web-accessible store.

