Tuesday, July 25, 2017

Yesterday we were discussing cloud services and their compute and storage requirements. We briefly mentioned that services can be made more granular. Services are hosted on compute, and even when there are multiple service instances, each instance is whole. One way to make this more granular is to break down the processing with serverless computing. The notion here is that computations within a service can be packaged and executed elsewhere, with little or no coupling to compute resources. This is a major change in the design of services: from service-oriented architecture we went to microservices, and from microservices we are going to serverless computing.
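As a minimal sketch of that shift, the same piece of logic that lives as a method on a long-lived service can instead be packaged as a self-contained function that takes an event payload and returns a result, leaving the platform to decide where it runs. The names below are illustrative, not any particular platform's API.

using System.Collections.Generic;

// Microservice style: the logic is a method on a long-lived service instance.
public class ThumbnailService
{
    public byte[] Resize(byte[] image, int width, int height)
    {
        // image processing elided
        return image;
    }
}

// Serverless style: the same logic packaged as a standalone function that
// consumes an event payload; the platform provisions compute per invocation.
public static class ResizeFunction
{
    public static Dictionary<string, object> Handle(Dictionary<string, object> evt)
    {
        // image processing elided
        return new Dictionary<string, object> { ["status"] = "resized" };
    }
}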
There are a few tradeoffs in serverless computing that should be kept in perspective. First, we introduce latency into the system because the functions do not execute local to the application and require setup and teardown routines during invocations, as the timing sketch below illustrates. Moreover, debugging serverless functions is harder because a function may be responding to more than one application, and the call stack is not available or may have to be pieced together by looking at different compute resources. The same goes for monitoring, because we now rely on separate systems. We can contrast this with applications that are hosted behind load balancers to improve availability. The service registered for load balancing runs the same code on every partition, so the call stack is coherent even if it spans different servers. Moreover, these instances share the same persistence even when the database server itself is hosted on, say, Marathon with its storage on a shared volume. Marathon's ability to bring up instances as appropriate, along with its health checks, improves the availability of the application. The choice between platform as a service, a Marathon cluster-based deployment, and serverless computing depends on the application.
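To make the latency tradeoff concrete, a small timing loop like this sketch can compare the first (cold) invocation of a remote function with subsequent warm ones; the endpoint URL is hypothetical.

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

public static class InvocationTiming
{
    public static async Task Main()
    {
        var client = new HttpClient();
        var url = "https://functions.example.com/api/resize"; // hypothetical HTTP-triggered function

        for (int i = 0; i < 3; i++)
        {
            var sw = Stopwatch.StartNew();
            await client.PostAsync(url, new StringContent("{}"));
            sw.Stop();
            // The first call typically pays the setup (cold-start) cost;
            // later calls reflect steady-state network latency.
            Console.WriteLine($"call {i}: {sw.ElapsedMilliseconds} ms");
        }
    }
}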
That said, all the advantages that come with deploying code in containers on PaaS apply to serverless computing as well, only at a smaller granularity.
The serverless architecture may be standalone or distributed. In both cases, it remains an event-action platform that executes code in response to events. We can execute code written as functions in many different languages, and each function is executed in its own container. Because this execution is asynchronous to the frontend and backend, they need not perform continuous polling, which helps them be more scalable and resilient. OpenWhisk introduces an event-driven programming model where charges accrue only for what is used. Moreover, it scales on a per-request basis.
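A sketch of the event-action shape: the platform calls a single entry point with the event's parameters and collects the returned result. This mirrors how an OpenWhisk action is written, though the dictionary-based signature below is a generic assumption rather than the platform's exact contract.

using System.Collections.Generic;

public static class HelloAction
{
    // Invoked once per event, each invocation running in its own container.
    public static Dictionary<string, object> Main(Dictionary<string, object> args)
    {
        var name = args.TryGetValue("name", out var n) ? n : "world";
        return new Dictionary<string, object> { ["greeting"] = $"hello, {name}" };
    }
}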
#codingexercise
Implement a virtual pottery wheel method that conserves mass but reshapes the clay according to an external factor (the potter's touch).
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static List<int> ShapeOnPotteryWheel(List<int> diameters, List<int> touches)
{
    // diameters[i] is the diameter of the unit-height disc of clay at height i,
    // so the list length is the height of the clay.
    Debug.Assert(diameters.Count == touches.Count);
    Debug.Assert(touches.All(x => x >= 0));
    double displaced = 0; // volume of clay squeezed out by the touches
    for (int i = 0; i < touches.Count; i++)
    {
        var oldD = diameters[i];
        diameters[i] -= 2 * touches[i]; // a touch presses in from both sides
        var newD = diameters[i];
        Debug.Assert(newD > 0);
        displaced += (Math.PI / 4) * (oldD * oldD - newD * newD); // annular volume removed at this height
    }
    Debug.Assert(displaced >= 0);
    // Conserve mass: the displaced volume rises to the top as extra
    // unit-height discs with the topmost diameter.
    var last = diameters.Last();
    int count = (int)(displaced / ((Math.PI / 4) * last * last));
    if (count > 0)
        diameters.AddRange(Enumerable.Repeat(last, count));
    return diameters;
}
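A quick usage sketch, assuming the method above is in scope:

var diameters = new List<int> { 10, 10, 10, 10 }; // clay as stacked unit-height discs
var touches   = new List<int> { 0, 1, 2, 1 };     // how far the potter presses in at each height
var shaped = ShapeOnPotteryWheel(diameters, touches);
Console.WriteLine(string.Join(",", shaped)); // 10,8,6,8,8,8 (the displaced clay rises as two extra discs)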
