Wednesday, August 21, 2019

Whether a log index store or a native S3 bucket is provisioned with a service broker, the code for the broker provisioning that instance is more or less the same and makes use of web requests to external service providers. The public cloud examples are open source, so it is easy to follow their layout for a custom implementation. Connectivity to the external provider is a necessary requirement for the service broker.
When the log index store is provisioned on a Splunk instance, the REST APIs to create and secure the log index store are easier to use and at the same time have options to control every aspect of the log indexing. The search operators can also make extended use of the inferred properties as well as the metadata for the logs.
In addition to logs, all the changes from the Kubernetes API server can be sent to the log index store. This is very useful for monitoring all the changes to workloads or ConfigMaps. A dashboard can then be created to visualize the changes and their impact. This is helpful beyond the default shipping of events.
All the objects listed in the Kubernetes API reference with the core group can be used for this purpose. This technique can be extended to objects in the apps group and to those scoped to namespaces.
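As a minimal sketch of how such changes can be collected, the API server's list endpoints accept watch=true and stream change events over HTTP. The host, token, and TLS trust below are placeholders, and each emitted event could then be forwarded to the log index store.

```java
// Minimal sketch: stream ConfigMap change events from the Kubernetes API server.
// The API server address and token are placeholders; the cluster CA must be trusted by the JVM.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConfigMapWatch {
    public static void main(String[] args) throws Exception {
        String apiServer = "https://kubernetes.default.svc";   // assumed in-cluster address
        String token = System.getenv("K8S_TOKEN");             // assumed service account token

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/api/v1/configmaps?watch=true"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        // Each line is a JSON watch event ({"type": "ADDED"|"MODIFIED"|"DELETED", "object": {...}})
        // that can be forwarded to the provisioned log index store.
        HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofLines())
                .body()
                .forEach(System.out::println);
    }
}
```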
The logs are transmitted over HTTP. The event collector in the log index store has an HTTP listener that needs to be turned on; it is usually not on by default.
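A minimal sketch of sending an event to that collector, assuming a Splunk-style HTTP Event Collector; the host, port, token, and index name are placeholders.

```java
// Minimal sketch: post a single JSON event to an HTTP Event Collector endpoint.
// The URL, token, and index name are placeholders for the provisioned log index store.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HecSend {
    public static void main(String[] args) throws Exception {
        String hecUrl = "https://splunk.example.com:8088/services/collector/event"; // assumed endpoint
        String hecToken = System.getenv("HEC_TOKEN");                               // assumed token

        String payload = "{\"event\": {\"message\": \"pod started\", \"namespace\": \"default\"},"
                       + " \"sourcetype\": \"kubernetes\", \"index\": \"k8s_logs\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(hecUrl))
                .header("Authorization", "Splunk " + hecToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```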
On the Kubernetes side, the collector is one of the logging drivers. The JSON logging driver, the journald logging driver, and Fluentd are some examples. Fluentd gives the ability to customize each input channel in terms of format and content, which subsequently helps with search queries.

Tuesday, August 20, 2019


We were discussing the use of a log service broker in Kubernetes. Although any log index store can suffice, a specific log index store may make web registrations a breeze while providing the ability to view eye-candy charts and graphs with the analysis made over the logs. A log store may also offer full service for the logs such that the automation to send alerts and notifications based on queries over the logs may not require any code.
Some of the public clouds also facilitate the use of their services by providing their respective service brokers in the cloud. This enables the use of public cloud services that expand well beyond the cluster.

The service broker implements the Open Service Broker API specification for consistency and best practice, so that individual development teams are not left to do it on their own. A typical example of public cloud service broker usage is one where an S3 bucket is provisioned and connected to the Kubernetes application. This allows data to be stored in the public cloud while relieving the Kubernetes cluster of those storage concerns.

Whether a log index store or a native S3 bucket is provisioned with a service broker, the code for the broker provisioning that instance is more or less the same and makes use of web requests to external service providers. The public cloud examples are open source, so it is easy to follow their layout for a custom implementation. Connectivity to the external provider is a necessary requirement for the service broker.

When the log index store is provisioned on a Splunk instance, the REST APIs to create and secure the log index store are easier to use and at the same time have options to control every aspect of the log indexing. The search operators can also make extended use of the inferred properties as well as the metadata for the logs.
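To make the provisioning call concrete, here is a minimal sketch of creating an index over a Splunk management REST endpoint. The host, credentials, index name, and size setting are placeholders, and securing the index (for example, restricting it to a role) would follow with similar requests.

```java
// Minimal sketch: create a log index over the management REST API of a Splunk instance.
// Host, credentials, and index settings are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CreateIndex {
    public static void main(String[] args) throws Exception {
        String mgmtUrl = "https://splunk.example.com:8089/services/data/indexes";   // assumed management endpoint
        String auth = Base64.getEncoder()
                .encodeToString("admin:changeme".getBytes(StandardCharsets.UTF_8)); // assumed credentials

        // "name" is required; other index settings (size caps, retention) can be passed as form fields.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(mgmtUrl))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("name=k8s_logs&maxTotalDataSizeMB=51200"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```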

Monday, August 19, 2019

The implementation of a LogServiceBroker in Kubernetes:
Earlier we discussed the advantage of using a log service broker in Kubernetes (k8s) to provision a log service that sends logs to a log index store external to the cluster. This enables log index stores to run anywhere and work independently from the cluster. This also allows the index store to serve different Kubernetes clusters. Therefore, it is a helpful addition to any Kubernetes cluster.

In this article, we describe the implementation specifics of a k8s log service broker. A service broker allows the log index store to be understood by Kubernetes as a custom resource within the cluster. This registration is maintained in the cluster-wide service catalog, thereby ensuring only one active cluster-wide instance to which all logs destined for the provisioned log index store are sent.
This makes the broker registration a boilerplate declaration where we define the Kubernetes resource with “kind: ClusterServiceBroker” along with suitable labels and annotations. The specification of the resource includes an access URL at which the broker receives requests and authentication info represented by a reference to a Kubernetes Secret. Any credential used along with the access URL is traditionally kept safe as a Kubernetes Secret. The service that is hosted at the URL is also declared and allowed to run on the Kubernetes cluster with the help of a service account and secured with role-based access control. All of these definitions for the service, account, and role-based access have standard declarations represented as YAML, as in the sketch below.
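A minimal sketch of such a registration, assuming the Kubernetes Service Catalog's servicecatalog.k8s.io/v1beta1 API; the broker name, namespace, URL, and secret name are placeholders.

```yaml
# Minimal sketch of the broker registration; names and the URL are placeholders.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: log-service-broker
  labels:
    app: log-service-broker
spec:
  # URL at which the broker service accepts Open Service Broker API requests
  url: http://log-service-broker.log-broker.svc.cluster.local
  authInfo:
    basic:
      secretRef:
        name: log-service-broker-auth
        namespace: log-broker
```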
Having described the registration of the broker, we now look to implement the service accepting requests on the specified URL. This service has a model and a controller and pretty much behaves like any other web service. The controller implements the ComposableServiceInstanceService, which describes the provisioning of a service instance, and the ComposableServiceInstanceBindingService, which describes the provisioning of a service binding for that instance. Our notions of service instance and service binding translate to a service that sets up a channel for publishing logs to an external index store by making web requests to that store, while registering the callers who will supply the logs as service bindings to this instance. The models used with these services usually define the parameters and attributes accepted by requests to them. An application instance is set up with a SpringBootApplication context if the services are implemented in Java.
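Independent of the exact classes named above, the two provisioning operations map to well-known Open Service Broker API endpoints. Here is a minimal sketch of those endpoints written as a plain Spring controller running inside the Spring Boot application; the paths come from the specification, while the collector URL and token returned as binding credentials are placeholders.

```java
// Minimal sketch of the two Open Service Broker API provisioning endpoints.
// The log index store details would come from configuration; placeholders are used here.
import java.util.Map;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogServiceBrokerController {

    // Provision a service instance: set up the channel to the external log index store.
    @PutMapping("/v2/service_instances/{instanceId}")
    public ResponseEntity<Map<String, Object>> provision(@PathVariable String instanceId,
                                                         @RequestBody Map<String, Object> request) {
        // e.g. call the index store's REST API here to create and secure an index for this instance
        return new ResponseEntity<>(Map.of(), HttpStatus.CREATED);
    }

    // Bind a caller to the instance: return the connection details it needs to ship logs.
    @PutMapping("/v2/service_instances/{instanceId}/service_bindings/{bindingId}")
    public ResponseEntity<Map<String, Object>> bind(@PathVariable String instanceId,
                                                    @PathVariable String bindingId,
                                                    @RequestBody Map<String, Object> request) {
        Map<String, Object> credentials = Map.of(
                "url", "https://splunk.example.com:8088/services/collector/event", // placeholder
                "token", "<hec-token>");                                           // placeholder
        return new ResponseEntity<>(Map.of("credentials", credentials), HttpStatus.CREATED);
    }
}
```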
Although any log index store can suffice, a specific log index store may make web registrations a breeze while providing the ability to view eye-candy charts and graphs with the analysis made over the logs. A log store may also offer full service for the logs such that the automation to send alerts and notifications based on queries over the logs may not require any code.

Sunday, August 18, 2019

An essay on hosting a runtime
Many storage products offer viable alternatives to file system storage with the hope that their offerings, built on storage engineering best practice, will meet and exceed the store-and-retrieve requirements of analytical workloads. The world of analytics drives business because it translates data to information by running queries. Traditionally, analysts have looked to run queries on accumulated historical data, and the emerging trend is now to see past, present, and future data all at once and in a continuous streaming manner. This makes analytics seek out storage products more streamlined than the conventional database.
The queries for analytics have an established culture and a well-recognized database-oriented query language, and many data analysts have made the transition from using spreadsheets to writing queries to view their data. With the rapid popularity of programming languages suited to different industries, the queries became all the easier to write and grew in shape and form. This made queries very popular too.
Queries written in any language need to be compiled so that a query processing engine can execute them properly over a storage product. In the absence of such an engine, it became hard for the storage product to embrace the queries. Similarly, the expansion of queries to leverage the nuances and the growing capabilities of the storage product was limited by the gap in a suitable translator for the product.
Around the same time, remote access of any product became a requirement, which was addressed with the adoption of a standard application programming interface or APIs as they are more commonly called.  These APIs allowed the product to be viewed as resources and use a small set of verbs over the web protocol that powers the internet. These verbs were the equivalent of create, retrieve, update and delete of a resource and virtually all offerings from the storage product could be viewed as resources.
Thus, querying became a way to execute these verbs over the network on the data residing in the storage product. This convention enabled clients to write the queries in their choice of languages which could then be translated into a set of calls over the network against these APIs. This mechanism gained immense popularity to bridge the gap between analysis and storage because the network was the world wide web and queries could come from all around the world.
Queries are diverse, and storage products offer the best handling for data at rest and in transit. The bridge over the gap between storage and analysis was inefficient, delayed, chatty, and layered. There needed to be some technology that enabled queries to be translated into something more native to the product and perhaps executed as close to the data as possible.
The notion that programs could be written in a variety of languages but executed on a computer using a runtime was now being sought for closing this gap. Many databases had already become leaders in hosting business logic directly within the database so that the computations could become fast as they operated very close to the data. Even when storage products were no longer databases, quite a few of them still used algorithms and data structures that had stood the test of time with databases. The matter of hosting a runtime to understand queries written in different languages and directly return the results of the calculations now seemed to become part of a requirement for storage products. This holds a lot of promise in the industry, with surprisingly few pioneers so far.

Saturday, August 17, 2019

Today we continue reading the book "Stream Processing with Apache Flink". We have read about parallelization, time, and state in stream processing. We discussed how the jobManager, taskManager, resourceManager, and dispatcher implement distributed stream processing. The jobManager is the master process that controls the execution of a single application. Each application is controlled by a different jobManager and is usually represented by a logical dataflow graph called the jobGraph and a jar file. The jobManager converts the jobGraph into a physical dataflow graph and parallelizes as much of the execution as possible.
Each taskManager has a pool of network buffers, 32 KB in size, to send and receive data. The sender and receiver exchange data over permanent TCP connections if they run in different processes. A buffer is required for every receiver, and there should be enough buffers, proportional to the number of tasks of the involved operators. Buffers add latency, so a mechanism for flow control is required. Flink implements credit-based flow control, which reduces latency because the sender can send data as soon as the receiver has enough resources to accept it. The receiver sends a credit notification with the number of network buffers it has granted, and the sender piggybacks the size of the backlog that is ready to be sent. This ensures that there is a continuous flow.
Flink features an optimization technique called task chaining, where tasks that would otherwise exchange data remotely are co-located and executed together by linking them with local forward channels. Otherwise, each function is evaluated by an individual task running in a dedicated thread.
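A small sketch of a pipeline where chaining applies by default; the source, host, and operator logic are illustrative only.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // map and filter are connected by local forward channels with equal parallelism,
        // so Flink chains them into a single task executed in one thread by default.
        // .disableChaining() or .startNewChain() can opt a specific operator out of chaining.
        DataStream<String> errors = env.socketTextStream("localhost", 9999)
                .map(line -> line.trim())
                .filter(line -> line.contains("ERROR"));

        errors.print();
        env.execute("chaining example");
    }
}
```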
Timestamps and watermarks support event-time semantics. Timestamps are eight-byte long values that are attached to records as metadata and are used to collect events into a time window. Watermarks are used to derive the current event time at each task in an event-time application. Watermarks are special records which carry a timestamp and are placed together with the usual event records. Watermarks must be monotonically increasing, and a watermark with timestamp T asserts that all subsequent records have timestamps greater than T.
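A minimal sketch of assigning timestamps and watermarks with the DataStream API, assuming the Flink 1.x-era built-in assigner BoundedOutOfOrdernessTimestampExtractor; the element type, the sample data, and the five-second bound are illustrative.

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Events carry their own epoch-millisecond timestamp in field f1.
        DataStream<Tuple2<String, Long>> events = env.fromElements(
                Tuple2.of("sensor-1", 1_566_000_000_000L),
                Tuple2.of("sensor-2", 1_566_000_001_000L));

        // Extract the event timestamp and emit watermarks that lag behind it by 5 seconds,
        // bounding how late a record may arrive and still be assigned to its window.
        DataStream<Tuple2<String, Long>> withTimestamps = events.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Tuple2<String, Long>>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(Tuple2<String, Long> element) {
                        return element.f1;
                    }
                });

        withTimestamps.print();
        env.execute("event time example");
    }
}
```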

Friday, August 16, 2019

We were looking at storage for stream processing. We now look into the analytical side. Flink is a distributed system for stateful parallel data stream processing. It performs distributed data processing by leveraging its integration with cluster resource managers such as Mesos, YARN, and Kubernetes, but it can also be configured to run as a standalone cluster. Flink depends on ZooKeeper for coordination.

The components of a Flink setup include the jobManager, the resourceManager, the taskManager, and the dispatcher. The jobManager is the master process that controls the execution of a single application. Each application is controlled by a different jobManager and is usually represented by a logical dataflow graph called the jobGraph and a jar file. The jobManager converts the jobGraph into a physical dataflow graph and parallelizes as much of the execution as possible.

There are multiple resourceManager implementations based on the cluster of choice. The resourceManager is responsible for managing taskManager slots, the unit of processing resources. It offers idle slots to the jobManager and launches new containers when needed.

The taskManagers are the worker processes of Flink. There are multiple taskManagers running in a Flink setup. Each taskManager provides a certain number of slots. These slots limit the number of tasks that a taskManager can execute.
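A minimal sketch of how parallelism relates to slots: each parallel task of a job occupies a slot offered by some taskManager, and the slot count per worker is set by the taskmanager.numberOfTaskSlots configuration. The source, host, and parallelism values here are illustrative.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Default parallelism for all operators of this job; each parallel task
        // occupies one slot offered by a taskManager.
        env.setParallelism(4);

        env.socketTextStream("localhost", 9999)
           .map(line -> line.toUpperCase())
           .setParallelism(2)   // an individual operator can override the job default
           .print();

        env.execute("parallelism example");
    }
}
```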

The dispatcher runs across job executions; when an application is submitted via its REST interface, it starts a jobManager and hands the application over. The REST interface enables the dispatcher to serve as an HTTP entry point to clusters that are behind a firewall.

Thursday, August 15, 2019

We were discussing stream processing. This is particularly helpful when streams represent the data in terms of past, present, and future. This is a unified view of data. Windowing is a tool to help process bounded streams, which are segments of the stream. Segments build up the stream, so windowing is merely a convenience to transform only a portion of the stream; otherwise the stream as such gives a complete picture of the data from its origin.
Stream segments are made up of events. It is possible to view stream processing as a transformation on an event-by-event basis. Event processing helps applications to be data driven. In this sense, batch processing has focused on historical data, stream processing has focused on current data, and event-driven processing has focused on data-driven or future data. All of this processing may need to be stateful. Therefore, by unifying the data as a span over all ranges of time, stream processing is positioned to provide all of the benefits of batch, stream, and event-driven processing. Algorithms and data structures process finite data. The data range is chosen based on a time span which may include past, present, or future. Therefore stream processing can unify all of the above processing.
All we need is a unification of the stack in terms of storage, runtime and modeling. The stream storage and stream analytics products do just that. The APIs available from these products make programming easier.
Inside the stream storage product, the segment store, the controller, and ZooKeeper sit at the same level in the stack over the tier-1 storage. They serve independent functions and are required to persist the streams. While the segment store hosts the Stream data API, the controller hosts the Stream controller API. The client library that writes and queries the event stream uses the stream abstraction over all three components.
Windowing is essential to stream processing since it allows a sequence of events to be processed together. Stream processing can also work on one event at a time. This abstraction made it possible to unify the query language in SQL, where the predicate takes the time range as a parameter.
However, the use of SQL forced the data to be accumulated prior to the execution of the query. With the use of stream storage products and stream analytics, the data is viewed as if it were in transit on a pipeline and the querying is done on a unified view of the data as it accrues. Therefore stream processing has the ability to be near real-time.
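To make the windowing abstraction concrete, here is a minimal sketch of a tumbling-window count with the Flink DataStream API (1.x-era methods); the source, host, key, and window size are illustrative.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowCountExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Count events per key over tumbling 10-second windows; the window is the
        // bounded segment of the stream that each aggregation is computed over.
        DataStream<Tuple2<String, Integer>> counts = env.socketTextStream("localhost", 9999)
                .map(line -> Tuple2.of(line, 1))
                .returns(Types.TUPLE(Types.STRING, Types.INT))
                .keyBy(0)
                .timeWindow(Time.seconds(10))
                .sum(1);

        counts.print();
        env.execute("windowed count");
    }
}
```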