We talked about the twelve factors that enable applications to be hosted in a distributed, containerized environment so that they can scale. Let us now look at the cluster services and abstractions that serve these applications. Such a use case touches on a majority of the considerations for scheduling and workflows. Deis Workflow is a good example of these services.
Let us look at the components of Deis Workflow:
The workflow manager checks your cluster for the latest stable components and flags any that are missing or out of date. It is essentially a workflow doctor, providing first aid to a Kubernetes cluster that requires servicing.
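As a rough illustration, this doctor-style check boils down to comparing installed component versions against a published stable list. The component names and both dictionaries below are hypothetical stand-ins, not the Workflow Manager's actual data structures or API:
import sys

installed = {"deis-router": "2.12.0", "deis-controller": "2.14.0"}
stable    = {"deis-router": "2.13.0", "deis-controller": "2.14.0"}

for name, version in installed.items():
    latest = stable.get(name)
    if latest and latest != version:
        # report components whose installed version lags the stable release
        print("{}: {} installed, {} available".format(name, version, latest))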
The monitoring subsystem, which consists of three components: Telegraf, InfluxDB, and Grafana. The first is a metrics collection agent that runs using the DaemonSet API. The second is a database that stores the metrics collected by the first. The third is a graphing application that natively supports the second as a data source and provides a robust engine for creating dashboards on top of time-series data.
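As a minimal sketch, here is how one might pull a Telegraf-collected metric back out of InfluxDB over its HTTP query API; the host, the "telegraf" database name, and the "cpu" measurement are assumptions for illustration:
import requests

# Ask InfluxDB for mean CPU idle time over the last hour, in 5-minute buckets.
resp = requests.get(
    "http://localhost:8086/query",
    params={
        "db": "telegraf",
        "q": 'SELECT mean("usage_idle") FROM "cpu" '
             'WHERE time > now() - 1h GROUP BY time(5m)',
    },
)
resp.raise_for_status()
for series in resp.json().get("results", [{}])[0].get("series", []):
    print(series["name"], series["values"][:3])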
The logging subsystem, which consists of two components: one that handles log shipping and another that maintains a ring buffer of application logs.
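The ring-buffer idea is simple enough to sketch in a few lines of Python: keep only the most recent N lines and discard the oldest as new ones arrive. The capacity and file name below are assumptions:
from collections import deque

ring = deque(maxlen=1000)          # capacity is an arbitrary choice here
with open("app.log") as f:
    for line in f:
        ring.append(line.rstrip())  # oldest lines fall off automatically
print(ring[-1] if ring else "no logs yet")   # most recent line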
The router component, which is based on Nginx and routes inbound HTTP/HTTPS traffic to applications. On cloud providers, a cloud load balancer is provisioned in front of it automatically.
The registry component, which holds the application images generated by the builder component.
The object storage component, where data that needs to be persisted is stored. This is generally off-cluster object storage.
Slugrunner is the component responsible for executing buildpack-based applications. The controller sends Slugrunner the location of the application slug, which Slugrunner downloads before launching the application.
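Conceptually, the runner's job looks something like the following sketch: fetch the slug tarball, unpack it, and exec the process command from the Procfile. The SLUG_URL variable and the single-line Procfile parsing are illustrative assumptions, not the actual Slugrunner implementation:
import os, subprocess, tarfile, urllib.request

slug_url = os.environ.get("SLUG_URL", "http://storage.example/slugs/app.tgz")
urllib.request.urlretrieve(slug_url, "/tmp/slug.tgz")
with tarfile.open("/tmp/slug.tgz") as tar:
    tar.extractall("/app")
with open("/app/Procfile") as f:
    # e.g. "web: gunicorn app:server" -> keep the command after the colon
    cmd = f.readline().split(":", 1)[1].strip()
subprocess.run(cmd, shell=True, cwd="/app", check=True)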
The builder component is the workhorse that builds your code after it is pushed from source control.
The database component, which holds the majority of the platform state. It is typically a relational database. Backup files are pushed to object storage, so no data is lost between backups and database restarts.
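The backup pattern is a straightforward copy to off-cluster storage. A minimal sketch with boto3, assuming an S3-compatible store and made-up path, bucket, and key names:
import boto3

s3 = boto3.client("s3")
s3.upload_file("/backups/deis-db.sql",     # local dump (assumed path)
               "platform-backups",         # bucket name (assumed)
               "database/deis-db.sql")     # object key (assumed)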
The controller, which serves as the HTTP endpoint for the overall services, so that the CLI and SDK plugins can be used against it.
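Because the controller is just an HTTP endpoint, anything that speaks REST can drive it, which is exactly what the CLI and SDKs do. A minimal sketch with Python requests, assuming a v2-style /v2/apps/ route and token auth; treat the host and the token handling as assumptions:
import requests

token = "..."  # obtained from the platform's login flow
resp = requests.get(
    "http://deis.example.com/v2/apps/",
    headers={"Authorization": "token " + token},
)
resp.raise_for_status()
for app in resp.json().get("results", []):
    print(app["id"])    # list applications, as the CLI would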
Unlike Cloud Foundry, Deis Workflow is more than just an application deployment workflow. It performs application rollbacks, supports zero-downtime application migrations at the router level, and provides scheduler tag support that determines which nodes the workloads are scheduled on. Moreover, it runs on Kubernetes, so other workloads can run on Kubernetes alongside these workflows. Workflow components live in a dedicated "deis" namespace that tells them apart from other Kubernetes workloads, and they provide building, logging, release and rollback, authentication, and routing functionality, all exposed via a REST API. In other words, it is a layer distinct from Kubernetes: while Deis provides workflows, Kubernetes provides orchestration and scheduling.
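Since Workflow is just another set of Kubernetes workloads, the standard tooling can see its components next to everything else. A minimal sketch with the official Kubernetes Python client, assuming the components run in a "deis" namespace as described above:
from kubernetes import client, config

config.load_kube_config()                # use the local kubeconfig
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("deis").items:
    print(pod.metadata.name)             # e.g. deis-router-..., deis-controller-...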
The separation of workflows from resources, together with a design built to scale, is a pattern that will serve any automation well.
#codingexercise
Recursive determination of whether a string is a palindrome:
#include <string>
using std::string;

// Returns true if A[start..end] (inclusive) reads the same in both directions.
bool isPalin(const string &A, int start, int end)
{
    if (start >= end) return true;          // empty or single-character range
    if (A[start] != A[end]) return false;   // ends differ: not a palindrome
    return isPalin(A, start + 1, end - 1);  // recurse on the inner range
}
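For example, isPalin("racecar", 0, 6) returns true, while isPalin("ab", 0, 1) returns false because the ends are compared before recursing.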
Today I found profiling quite useful to troubleshoot an issue. First, collect a profile:
python -m cProfile -o ~/profile.log alloc.py
Then inspect it from a Python shell, run from the directory holding profile.log:
import pstats
p = pstats.Stats('profile.log')
p.strip_dirs().sort_stats(-1).print_stats()
Wed May 3 15:21:45 2017 profile.log
105732 function calls (102730 primitive calls) in 1.271 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
2 0.000 0.000 0.000 0.000 SSL.py:1438(send)
2 0.000 0.000 0.953 0.476 SSL.py:1502(recv)