Saturday, August 25, 2018

We said we could combine the gateway and http proxy services within the object storage to serve the site-specific http addresses of objects. The gateway also acts as an http proxy. Any implementation of the gateway has to maintain a registry of destination addresses. As http-accessible objects proliferate along with their geo-replicated copies, this registry becomes granular at the object level while enabling rules to determine the site from which each object should be accessed. Finally, the proxy gathers statistics on accesses and metrics, which are very useful for understanding the http accesses of specific content within the object storage.
Both of the above functionalities can be made quite elaborate, allowing the gateway service to provide immense benefit per deployment.
The advantages of an http proxy include aggregation of usage. There can be detailed counts of calls broken down by success and failure. Moreover, the proxy could include all the features of a conventional http gateway service such as Mashery: client-based caller information, destination-based statistics, per-object statistics, categorization of failures by cause, and many other features, along with a RESTful API for the statistics gathered.
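As a rough illustration of the kind of aggregation such a proxy could keep, here is a minimal sketch assuming in-memory counters keyed by object, caller, and outcome; the class and method names are hypothetical and not part of any existing gateway.

from collections import defaultdict

class ProxyStats:
    """Minimal sketch of usage aggregation inside the http proxy."""
    def __init__(self):
        # (object_address, caller, outcome) -> count
        self.counts = defaultdict(int)

    def record(self, object_address, caller, status_code):
        outcome = "success" if status_code < 400 else "failure"
        self.counts[(object_address, caller, outcome)] += 1

    def per_object(self, object_address):
        # Summarize successes and failures for a single object.
        summary = {"success": 0, "failure": 0}
        for (addr, _, outcome), n in self.counts.items():
            if addr == object_address:
                summary[outcome] += n
        return summary

# Example: record two calls and read back the per-object summary.
stats = ProxyStats()
stats.record("/objects/v1/abc123", caller="client-7", status_code=200)
stats.record("/objects/v1/abc123", caller="client-9", status_code=503)
print(stats.per_object("/objects/v1/abc123"))  # {'success': 1, 'failure': 1}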

Friday, August 24, 2018

We were saying there are advantages to writing a Gateway Service within the Object Storage. These included:
First, the address mapping is not at the site level; it is at the object level.
Second, the addresses of the object, both universal and site-specific, are maintained along with the object as part of its location information.
Third, instead of internalizing a table of rules from the external gateway, a lookup service can translate a universal object address to the address of the nearest copy of the object. This service is part of the object storage as a read-only query. Since object name and address lookup is already existing functionality, we only add the ability to translate a universal address to a site-specific address at the object level.
Fourth, the gateway functionality exists as a microservice. It can do more than a static lookup of the physical location of an object given a universal address instead of the site-specific address. It has the ability to generate tiny urls for the objects based on hashing. This adds aliases to the address as opposed to the conventional domain-based address. The hashing is at the object level, and since we can store billions of objects in the object storage, a url-shortening feature is a significant offering from the gateway service within the object storage. It has the potential to morph into services other than a mere translator of object addresses. The design of a url hashing service was covered earlier; a sketch of one possible approach appears after this list.
Fifth, the conventional gateway functionality of load balancing can also be handled with an elastic scale-out of just the gateway service within the object storage.  
Sixth, this gateway can also improve access to the object by making more copies of the object elsewhere and adding the extra mappings for the duration of the heavy traffic. It need not even interpret the originating ip addresses to determine the volume as long as it can keep track of the number of read requests against the existing addresses of the same object.
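A hedged sketch of the url-shortening idea from the fourth point above: hash the universal address and base62-encode a prefix of the digest to get a short alias. The digest length, alias length, and function names are assumptions made for illustration.

import hashlib
import string

ALPHABET = string.digits + string.ascii_letters  # base62

def tiny_alias(universal_address, length=8):
    """Derive a short alias for an object from its universal address."""
    digest = hashlib.sha256(universal_address.encode("utf-8")).digest()
    value = int.from_bytes(digest[:8], "big")
    chars = []
    while value and len(chars) < length:
        value, rem = divmod(value, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

# The gateway would keep a reverse map from alias to universal address,
# falling back to a salt or longer alias on the rare collision.
print(tiny_alias("obj://replication-group-1/bucket/photos/2018/08/sunset.jpg"))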
In addition, this gateway service within the object storage may be written in a form that allows rules to be customized. Moreover, rules need not be written in the form of declarative configuration; they can be dynamic, in the form of a module. As a forwarder, a gateway may leverage rules that are determined by the deployment. Expressions for rules may include features borrowed from IPSec rules, the well-known rules that govern whether a connection over the Internet may be permitted into a domain.
With the help of a classifier, these rules may even be evaluated dynamically.
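As a sketch of what deployment-supplied, dynamically evaluated forwarding rules might look like, the snippet below models each rule as a predicate paired with a target site, so that a module rather than declarative configuration contributes the rules. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ForwardingRule:
    """A rule pairs a predicate over the request with a target site."""
    matches: Callable[[dict], bool]
    target_site: str

def choose_site(request, rules, default_site):
    # Rules are evaluated in order; the first match wins,
    # mirroring how IPSec-style permit rules are applied.
    for rule in rules:
        if rule.matches(request):
            return rule.target_site
    return default_site

# A deployment-supplied module could contribute rules like these.
rules = [
    ForwardingRule(lambda r: r["region"] == "eu", "site-frankfurt"),
    ForwardingRule(lambda r: r["object"].startswith("logs/"), "site-archive"),
]
print(choose_site({"region": "eu", "object": "photos/1.jpg"}, rules, "site-default"))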
The gateway also acts as an http proxy. Any implementation of the gateway has to maintain a registry of destination addresses. As http-accessible objects proliferate along with their geo-replicated copies, this registry becomes granular at the object level while enabling rules to determine the site from which each object should be accessed. Finally, the gateway gathers statistics on accesses and metrics, which are very useful for understanding the http accesses of specific content within the object storage.
Both of the above functionalities can be made quite elaborate, allowing the gateway service to provide immense benefit per deployment.

Thursday, August 23, 2018

We were discussing gateway-like functionality from object storage. While a gateway maintains address mappings for several servers, where routes translate to physical destinations based on, say, a regex, here we give each object the ability to record its virtual canonical address along with its physical location, so that each object and its geographically replicated copies may be addressed specifically. When an object is accessed by its address, the gateway used to forward the request to the concerned site based on a set of static rules, say at the web server and usually based on a regex. Instead, with the gateway functionality now merged into the object storage, a few advantages come our way:
First, the address mapping is not at the site level; it is at the object level.
Second, the addresses of the object, both universal and site-specific, are maintained along with the object as part of its location information.
Third, instead of internalizing a table of rules from the external gateway, a lookup service can translate a universal object address to the address of the nearest copy of the object. This service is part of the object storage as a read-only query. Since object name and address lookup is already existing functionality, we only add the ability to translate a universal address to a site-specific address at the object level.
Fourth, the gateway functionality exists as a microservice. It can do more than a static lookup of the physical location of an object given a universal address instead of the site-specific address. It has the ability to generate tiny urls for the objects based on hashing. This adds aliases to the address as opposed to the conventional domain-based address. The hashing is at the object level, and since we can store billions of objects in the object storage, a url-shortening feature is a significant offering from the gateway service within the object storage. It has the potential to morph into services other than a mere translator of object addresses. The design of a url hashing service was covered earlier.
Fifth, the conventional gateway functionality of load balancing can also be handled with an elastic scale-out of just the gateway service within the object storage.  
Sixth, this gateway can also improve access to the object by making more copies of the object elsewhere and adding the extra mappings for the duration of the heavy traffic. It need not even interpret the originating ip addresses to determine the volume as long as it can keep track of the number of read requests against the existing addresses of the same object.
These advantages can improve the usability of the objects and their copies by providing as many as needed, along with a scalable service that can translate the incoming universal address of an object to site-specific location information, as sketched below.
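A minimal sketch of that translation, assuming the object storage keeps a per-object map from the universal address to its site-specific copies; the index layout and function names are illustrative, not an existing API.

# universal address -> { site: site-specific address }
object_index = {
    "obj://bucket/photos/sunset.jpg": {
        "us-east": "http://us-east.store.example.com/bucket/photos/sunset.jpg",
        "eu-west": "http://eu-west.store.example.com/bucket/photos/sunset.jpg",
    }
}

def resolve(universal_address, caller_site, site_distance):
    """Read-only query: pick the copy closest to the caller's site."""
    copies = object_index.get(universal_address)
    if not copies:
        return None
    return min(copies.items(), key=lambda kv: site_distance(caller_site, kv[0]))[1]

# Example with a trivial distance function that prefers the caller's own site.
nearest = resolve("obj://bucket/photos/sunset.jpg", "eu-west",
                  lambda a, b: 0 if a == b else 1)
print(nearest)  # the eu-west copy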

Wednesday, August 22, 2018

The nodes in a storage pool assigned to the VDC may have fully qualified names and public ip addresses. Although these names and ip addresses are not shared with anyone, they serve to represent the physical location of the fragments of an object. Generally, an object is written across three such nodes. The storage engine gets a request to write an object. It writes the object to one chunk, but the chunk may be physically located on three separate nodes. The writes to these three nodes may even happen in parallel. The object location index for the chunk and the disk locations corresponding to the chunk are also artifacts that need to be written. For this purpose, too, three separate nodes may be chosen and the location information written to them. In other words, the storage engine records the disk locations of the chunk in a chunk location index and writes that index to three different disks/nodes; the index locations are chosen independently from the object chunk locations. Therefore, we already have a mechanism to store locations. When these locations carry representations for the node and the site, a copy of an object served over the web has a physical internal location. Even when objects are geo-replicated, the object and its location information will be updated together. Mapping a virtual address for an object to its different physical copies and their locations is therefore merely a matter of looking them up in an index, in just the same way as we look up the chunks for an object. We just need more information in the location part of the object, and the replication group automatically takes care of keeping locations and objects updated as they are copied.
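To make the bookkeeping concrete, here is a hedged sketch of the two indexes described above: an object maps to its chunks, and each chunk maps to the nodes and disks holding its fragments, with the site recorded alongside. The record layout is an assumption made for illustration, not the actual format of any storage engine.

from dataclasses import dataclass
from typing import List, Dict

@dataclass
class FragmentLocation:
    site: str        # virtual data center / site identifier
    node: str        # fully qualified node name
    disk: str        # disk on that node

@dataclass
class ChunkRecord:
    chunk_id: str
    fragments: List[FragmentLocation]  # typically three nodes

# object location index: object name -> chunk ids
object_index: Dict[str, List[str]] = {
    "bucket/photos/sunset.jpg": ["chunk-0001"],
}

# chunk location index: chunk id -> fragment locations, stored on
# nodes chosen independently from the chunk's own data nodes
chunk_index: Dict[str, ChunkRecord] = {
    "chunk-0001": ChunkRecord("chunk-0001", [
        FragmentLocation("site-a", "node1.site-a.internal", "disk3"),
        FragmentLocation("site-a", "node4.site-a.internal", "disk1"),
        FragmentLocation("site-b", "node2.site-b.internal", "disk2"),
    ]),
}

def physical_locations(object_name):
    """Resolve an object's physical fragment locations via the two indexes."""
    return [loc for cid in object_index.get(object_name, [])
            for loc in chunk_index[cid].fragments]

print(physical_locations("bucket/photos/sunset.jpg"))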


Tuesday, August 21, 2018

We were discussing the gateway and object storage. We wanted to create a content distribution network from the object storage itself, using gateway-like functionality over objects that exist as geo-redundant copies. The storage engine layer responsible for the creation of objects would automatically take care of the replication of the objects. Storage engines generally have notions of a virtual data center and a replication group. An object created within a virtual data center is owned by that virtual data center. If there is more than one virtual data center within a replication group, the owning virtual data center within the group is responsible for replicating the object to the other virtual data centers, and this is usually done after the object has been written. At the end of the replication, each virtual data center has a readable copy of the object. The location information is internal to the storage engine, unlike the internet-accessible address. The address is just another attribute of the object. Since the address has no geography-specific information as per our design of the gateway, the rules of the gateway can be used to route a read request to the relevant virtual data center, which will use the address to identify the object and use its location to read the object.
Together, the gateway and the storage engine provide addresses and copies of objects to facilitate access via a geographically close location. However, we are suggesting native gateway functionality within the object storage in a way that promotes this Content Distribution Network. Since we have copies of the object, we don't need to give an object multiple addresses for access from different geographical regions.
The object storage has an existing concept of a replication group. Its purpose is to define a logical boundary within which storage pool content is protected. These groups can be local or global. A local replication group protects objects within the same virtual data center. Global replication groups protect objects against disk, node, as well as site failures. The replication strategy is inherent to the object storage, and the copies made for an object stay within its replication group. In a multi-site content distribution network, copies may need to exist outside of the local replication group. If the copies of the objects are made outside of the replication group, they become new, isolated objects. In such cases, a replication strategy specific to the content distribution network would have to kick in to keep the contents the same. However, even in this case, we don't have to leverage external technologies to configure a replication strategy different from that of the object storage. A multi-site collection of virtual data centers may be put under the same replication group, and this should suffice to create enough copies across sites where the sites are earmarked for different geographies.
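A small sketch of that last point, assuming a declarative description of a global replication group whose member virtual data centers are earmarked for different geographies; the field names are illustrative rather than any product's configuration schema.

# A global replication group spanning VDCs earmarked for different geographies.
# The storage engine's own replication then yields one readable copy per site.
replication_group = {
    "name": "cdn-global",
    "scope": "global",
    "virtual_data_centers": [
        {"name": "vdc-us-east", "geo": "us-east"},
        {"name": "vdc-eu-west", "geo": "eu-west"},
        {"name": "vdc-ap-south", "geo": "ap-south"},
    ],
}

def copies_expected(group):
    """With all sites in one group, each site ends up holding a readable copy."""
    return [vdc["name"] for vdc in group["virtual_data_centers"]]

print(copies_expected(replication_group))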

  

Monday, August 20, 2018

We were discussing the gateway and object storage. We wanted to create a content distribution network from the object storage itself, using gateway-like functionality over objects that exist as geo-redundant copies. The storage engine layer responsible for the creation of objects would automatically take care of the replication of the objects. Storage engines generally have notions of a virtual data center and a replication group. An object created within a virtual data center is owned by that virtual data center. If there is more than one virtual data center within a replication group, the owning virtual data center within the group is responsible for replicating the object to the other virtual data centers, and this is usually done after the object has been written. At the end of the replication, each virtual data center has a readable copy of the object. Erasure codes help protect the object without additional software or services because the data and the code of the fragments of the object are so formed that the entire object can be reconstructed. Internal to each virtual data center, there may be a pool of cluster nodes, and the object may have been written across three chosen nodes. Since each virtual data center needs to know the location of the object, the location information itself may be persisted the same way as an object. The location information is internal to the storage engine, unlike the internet-accessible address. The address is just another attribute of the object. Since the address has no geography-specific information as per our design of the gateway, the rules of the gateway can be used to route a read request to the relevant virtual data center, which will use the address to identify the object and use its location to read the object. Copying and caching by a non-owner virtual data center is entirely at its discretion because those are enhancements to the existence of the object in each virtual data center within the same replication group. Traditionally, replication groups were used for outages, but the same may be leveraged for a content distribution network, and the gateway may decide to route the request to one of the virtual data centers.
Together, the gateway and the storage engine provide addresses and copies of objects to facilitate access via a geographically close location. However, we are suggesting native gateway functionality within the object storage in a way that promotes this Content Distribution Network. Since we have copies of the object, we don't need to give an object multiple addresses for access from different geographical regions.
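To illustrate how the gateway might pick which virtual data center serves a read, here is a hedged sketch that routes to the nearest copy-holding virtual data center; the distance model and all names are assumptions made for the example.

# Which VDCs in the replication group currently hold a readable copy.
copies_by_object = {
    "obj://bucket/photos/sunset.jpg": ["vdc-us-east", "vdc-eu-west"],
}

# A toy notion of closeness between caller regions and VDCs.
distance = {
    ("eu-west", "vdc-eu-west"): 0, ("eu-west", "vdc-us-east"): 5,
    ("us-east", "vdc-us-east"): 0, ("us-east", "vdc-eu-west"): 5,
}

def route_read(universal_address, caller_region):
    """Route the read to the nearest VDC that holds a copy of the object."""
    candidates = copies_by_object.get(universal_address, [])
    if not candidates:
        return None
    return min(candidates, key=lambda vdc: distance.get((caller_region, vdc), 10))

print(route_read("obj://bucket/photos/sunset.jpg", "eu-west"))  # vdc-eu-west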

Sunday, August 19, 2018


The design of a content distribution network with object storage
The primary question we answer in this article is why objects don't have multiple addresses for access from geographically closer regions. We know that there is more than one copy of an object and that the copies are geographically replicated. A content distribution network also intends to do something very similar. It has content designated to proxy servers, and the purpose of these servers is to make content available at the nearest location. The mirrored content enables faster access over the network simply by reducing the round-trip time. That is how a content distribution network positions itself.
Object storage also has geo-redundant replication, and there are secondary addresses for read access to the replicated data. This means data remains available even during a failover. The question becomes clearer when we refer to geographically close primary addresses that are served from the same object storage. As long as the user does not have to switch to a secondary address, and the primary address is already equivalent in performance to one from a distribution network, the user has no justification to use a content distribution network.
With this context, let us delve into the considerations for enabling such an address for an object exposed over the object storage. We know gateways perform the equivalent of routing to designated servers, and the object merely needs a virtual address, one that does not change in appearance to the user. Internally, the address may be interpreted and routed to designated servers based on routing rules, availability, and load. Therefore, the address can work well as a primary address for the object. Gateway-like functionality already exists for web servers, so its design is established and well known. The availability of the object storage as the unified storage for content, regardless of copies or versions, is also well established and known. The purpose of the copies of an object may merely be redundancy, but there is no restriction on keeping copies of the same object for geographical purposes. This means we can have an adequate number of copies for as many geography-based accesses as needed. We have now addressed the availability of objects and their access using a primary, distribution-network-like address.
// Divisibility by 221 follows from divisibility by its prime factors 13 and 17.
static boolean isDivisibleBy221(int n)
{
    return isDivisibleBy13(n) && isDivisibleBy17(n);
}
static boolean isDivisibleBy13(int n) { return n % 13 == 0; }
static boolean isDivisibleBy17(int n) { return n % 17 == 0; }