Thursday, April 26, 2018

The previous post discussed Events as a measure of Cloud Database performance, with a focus on cloud database pricing.
With the help of managed services in the cloud, deployments are simple and fast, and so is scaling. Moreover, patching, backups, and replication are also handled appropriately. It is compatible with almost all applications and offers fast, predictable performance. There is no cost to get started, and we pay only for what we use.
Initially, the usage costs are a fraction of the costs incurred by on-premise solutions, and for the foreseeable future the usage expenses may not exceed them. However, neither the cloud provider nor the cloud consumer controls the data, which only grows over time.
Traditionally, disks kept up with these storage requirements and the infrastructure carried a known cost, but since usage is proportional to the incoming data, there is no telling when managed database service costs will exceed those of the alternatives.
The metering of cloud database usage and the corresponding bills will eventually become significant enough for corporations to take notice as more and more data finds its way to the cloud. Storage has been touted as nearly free in the cloud, but compute is not. Any amount of incoming data is paired with compute requirements. While the cost of processing a single request may be granular, it aggregates when billions of requests are processed in smaller and smaller durations. Consequently, the cloud may no longer be the least expensive option. Heavier metering and more monitoring, along with a dramatic increase in traffic, add up to substantial costs while there is no fallback option to keep the costs under the bar.
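As a rough illustration of how per-request charges aggregate, the sketch below multiplies a request volume by a per-million-request rate; the rates and volumes are made-up placeholders for the argument, not any provider's actual pricing.

# Rough cost-aggregation sketch. The rates below are hypothetical
# placeholders, not any cloud provider's actual pricing.
REQUEST_RATE_PER_MILLION = 0.25   # assumed $ per million requests
STORAGE_RATE_PER_GB_MONTH = 0.10  # assumed $ per GB-month

def monthly_estimate(requests_per_day: float, stored_gb: float) -> float:
    """Return an estimated monthly bill for a given workload."""
    monthly_requests = requests_per_day * 30
    request_cost = (monthly_requests / 1_000_000) * REQUEST_RATE_PER_MILLION
    storage_cost = stored_gb * STORAGE_RATE_PER_GB_MONTH
    return request_cost + storage_cost

# A few billion requests a day quickly dominates the bill even at
# fractions of a cent per request.
print(monthly_estimate(requests_per_day=2_000_000_000, stored_gb=50_000))

Even with storage priced low, the request-processing term dominates once daily traffic reaches the billions, which is the point the paragraph above makes about compute.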
The fallback option to cut costs and make them more manageable is itself a very important challenge. What will money-starved public cloud tenants do when usage only accrues with the explosion of data, regardless of economic cycles? This, in fact, is the motivation to consider a sort of disaster-management plan for when the public cloud becomes unaffordable, even if that seems unthinkable today.
I'm not touting the on-premise solution or hybrid cloud technology as the fallback option. I don't consider them viable regardless of economic conditions or technological advancement. The improvements to cloud services have leaped far ahead of such competition, and cloud consumers are savvy about what they need from the cloud. Instead, I focus on easing traffic into less expensive options, primarily through the adoption of quality of service for cloud service users and their migration to more flat-rate options. Corporate spending is not affected, but there will be real money saved when traffic to the cloud is kept sustainable.
Let us take a look at this Quality of Service model. When all the data goes into the database and the data is measured only in size and number of requests, there is no distinction between Personally Identifiable Information and regular chatty traffic. In the networking world, the ability to perform QoS depends on the ability to differentiate traffic using the <src-ip, src-port, dest-ip, dest-port, protocol> tuple. With traffic enumerated and classified, it can be serviced under different QoS agreements using the IntServ or DiffServ architecture. Similar QoS managers could differentiate data so that companies not only gain granularity but also take better care of their more important categories of data.
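A minimal sketch of such a data-side classifier, assuming a flow tuple modeled on the networking five-tuple and a set of made-up service classes loosely analogous to DiffServ code points, might look like this; the PII check and thresholds are illustrative, not an existing API.

from typing import NamedTuple

# Illustrative flow key, mirroring the networking five-tuple.
class Flow(NamedTuple):
    src_ip: str
    src_port: int
    dest_ip: str
    dest_port: int
    protocol: str

# Hypothetical service classes, loosely analogous to DiffServ code points.
CLASSES = {"expedited": 46, "assured": 26, "best_effort": 0}

def classify(flow: Flow, contains_pii: bool, requests_per_sec: float) -> str:
    """Assign a service class to a data flow instead of treating all
    traffic as an undifferentiated count of bytes and requests."""
    if contains_pii:
        return "expedited"    # sensitive data gets the strongest guarantees
    if requests_per_sec > 1000:
        return "best_effort"  # chatty, low-value traffic is deprioritized
    return "assured"

flow = Flow("10.0.0.5", 443, "10.0.1.9", 5432, "tcp")
print(classify(flow, contains_pii=True, requests_per_sec=50))  # -> expedited

The design choice here is simply to make the category of the data, rather than its volume alone, the input to the service agreement, which is what lets the more important categories receive better care.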

New:
#codingexercise: https://1drv.ms/u/s!Ashlm-Nw-wnWtjSc1V5IlYwWFY88
