Wednesday, August 22, 2018

The nodes in a storage pool assigned to the VDC may have a fully qualified name and a public IP address. Although these names and addresses are not shared with anyone, they represent the physical location of the fragments of an object. Generally, an object is written across three such nodes. When the storage engine gets a request to write an object, it writes the object to one chunk, but the chunk may be physically located on three separate nodes, and the writes to those nodes may even happen in parallel. The object location index for the chunk and the disk locations corresponding to the chunk are also artifacts that need to be written; for this purpose, too, three separate nodes may be chosen. The storage engine thus records the disk locations of the chunk in a chunk location index, which is itself written to three different disks/nodes, and the index locations are chosen independently of the chunk locations.
Therefore, we already have a mechanism to store locations. When these locations carry representations for the node and the site, a copy of an object served over the web has a physical internal location. Even when objects are geo-replicated, the object and the location information are updated together. Mapping a virtual address for an object to its different physical copies and their locations is therefore merely a matter of looking them up in an index, just the same way we look up the chunks for an object. We only need richer location information for the object, and the replication group automatically keeps locations and objects updated as they are copied.
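As a rough illustration, the chunk location index can be thought of as a table keyed by chunk identifier, whose value is the site/node/disk of each replica. The structure and field names below are hypothetical, not the engine's actual schema:

#include <string.h>
#include <stddef.h>

/* Hypothetical layout: each chunk maps to the three disk locations
   it was written to, each identified by site, node and disk. */
typedef struct {
    char site[32];
    char node[64];   /* fully qualified node name */
    int  disk;
} Location;

typedef struct {
    char     chunk_id[64];
    Location replicas[3];
} ChunkIndexEntry;

/* Resolving a virtual address is then an index lookup, just like
   looking up the chunks of an object. */
const ChunkIndexEntry *lookup_chunk(const ChunkIndexEntry *index,
                                    size_t n, const char *chunk_id)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(index[i].chunk_id, chunk_id) == 0)
            return &index[i];
    return NULL;
}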


Tuesday, August 21, 2018

We were discussing the gateway and object storage. We wanted to create a content distribution network from the object storage itself, using gateway-like functionality over objects kept as geo-redundant copies. The storage engine layer responsible for the creation of objects would automatically take care of replicating them. Storage engines generally have a notion of a virtual data center and of a replication group. An object created within a virtual data center is owned by that virtual data center. If there is more than one virtual data center within a replication group, the owning virtual data center is responsible for replicating the object to the others, and this is usually done after the object has been written. At the end of the replication, each virtual data center has a readable copy of the object. The location information is internal to the storage engine, unlike the internet-accessible address. The address is just another attribute of the object. Since the address carries no geography-specific information as per our design of the gateway, the gateway's rules can route a read request to the relevant virtual data center, which uses the address to identify the object and its location to read it.
Together, the gateway and the storage engine provide the address and the copies of objects needed to serve access from a geographically close location. However, we are suggesting native gateway functionality in the object storage in a way that promotes this Content Distribution Network. Since we have copies of the object, we don't need to give an object multiple addresses for access from different geographical regions.
The object storage has an existing concept of a replication group, whose purpose is to define a logical boundary within which storage pool content is protected. These groups can be local or global. A local replication group protects objects within the same virtual data center, while a global replication group protects objects against disk, node and site failures. The replication strategy is inherent to the object storage, and the copies made for an object stay within the replication group. In a multi-site content distribution network, copies might exist outside the local replication group; copies made outside the replication group become new, isolated objects, and a separate content-distribution replication strategy would then have to kick in to keep their contents the same. However, even in this case we don't have to leverage external technologies to configure a replication strategy different from that of the object storage: a multi-site collection of virtual data centers may be put under the same replication group, and this suffices to create enough copies across sites, where the sites are earmarked for different geographies.
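As a rough sketch of how such a group might be modeled (the names and types below are illustrative, not the storage engine's actual schema), the owning VDC pushes a copy of a freshly written object to every other member of its replication group:

#include <stdio.h>

#define MAX_VDCS 8

typedef struct { char name[32]; char geography[32]; } Vdc;

typedef struct {
    char name[32];
    Vdc  members[MAX_VDCS];
    int  count;
} ReplicationGroup;

/* Stand-in for the engine's transfer of a copy to a peer VDC. */
static void replicate_to(const Vdc *target, const char *object_id)
{
    printf("replicating %s to %s (%s)\n",
           object_id, target->name, target->geography);
}

/* After a write, the owner gives each other member a readable copy. */
void replicate(const ReplicationGroup *rg, int owner, const char *object_id)
{
    for (int i = 0; i < rg->count; i++)
        if (i != owner)
            replicate_to(&rg->members[i], object_id);
}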

  

Monday, August 20, 2018

We were discussing the gateway and object storage. We wanted to create a content distribution network from the object storage itself, using gateway-like functionality over objects kept as geo-redundant copies. The storage engine layer responsible for the creation of objects would automatically take care of replicating them. Storage engines generally have a notion of a virtual data center and of a replication group. An object created within a virtual data center is owned by that virtual data center. If there is more than one virtual data center within a replication group, the owning virtual data center is responsible for replicating the object to the others, and this is usually done after the object has been written. At the end of the replication, each virtual data center has a readable copy of the object. Erasure codes help protect the object without additional software or services, because the data and code fragments of the object are formed such that the entire object can be reconstructed. Internal to each virtual data center there may be a pool of cluster nodes, and the object may have been written across three chosen nodes. Since each virtual data center needs to know the location of the object, the location information itself may be persisted the same way as an object. The location information is internal to the storage engine, unlike the internet-accessible address. The address is just another attribute of the object. Since the address carries no geography-specific information as per our design of the gateway, the gateway's rules can route a read request to the relevant virtual data center, which uses the address to identify the object and its location to read it. Copying and caching by a non-owner virtual data center is entirely at its discretion, because those are enhancements over the existence of the object in each virtual data center within the same replication group. Traditionally, replication groups were used for outages, but the same mechanism may be leveraged for a content distribution network, with the gateway deciding which virtual data center to route the request to.
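As a toy illustration of the reconstruction property, consider single-parity erasure coding: one XOR parity fragment lets any single lost fragment be rebuilt from the survivors. Production storage engines use stronger codes (Reed-Solomon style) that tolerate multiple losses; this sketch shows only the principle:

#include <stddef.h>
#include <string.h>

/* Parity is the XOR of all k data fragments. */
void compute_parity(const unsigned char *data[], size_t k,
                    size_t frag_len, unsigned char *parity)
{
    memset(parity, 0, frag_len);
    for (size_t i = 0; i < k; i++)
        for (size_t j = 0; j < frag_len; j++)
            parity[j] ^= data[i][j];
}

/* Any single lost fragment is the XOR of the parity with the
   surviving fragments. */
void reconstruct_lost(const unsigned char *surviving[], size_t count,
                      size_t frag_len, const unsigned char *parity,
                      unsigned char *lost)
{
    memcpy(lost, parity, frag_len);
    for (size_t i = 0; i < count; i++)
        for (size_t j = 0; j < frag_len; j++)
            lost[j] ^= surviving[i][j];
}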
Together, the gateway and the storage engine provide the address and the copies of objects needed to serve access from a geographically close location. However, we are suggesting native gateway functionality in the object storage in a way that promotes this Content Distribution Network. Since we have copies of the object, we don't need to give an object multiple addresses for access from different geographical regions.

Sunday, August 19, 2018


The design of content distribution network with object storage 
The primary question we answer in this article is why objects don't need multiple addresses for access from geographically closer regions. We know that there is more than one copy of an object and that the copies are geographically replicated. A content distribution network intends to do something very similar: it designates content to proxy servers whose purpose is to make the content available at the nearest location. This mirrored content enables faster access over the network simply by reducing round-trip time. That is how a content distribution network positions itself.
Object storage also has geo-redundant replication, and there are secondary addresses for read access to the replicated data, which means the data remains available even during a failover. The question becomes clearer when we refer to geographically close primary addresses served from the same object storage. As long as the user does not have to switch to a secondary address, and the primary address is already equivalent in performance to one from a distribution network, the user has no justification for using a content distribution network.
With this context, let us delve into the considerations for enabling such an address for an object exposed over the object storage. We know gateways perform the equivalent of routing to designated servers, and that the object merely needs a virtual address, one that does not change in appearance to the user. Internally, the address may be interpreted and routed to designated servers based on routing rules, availability and load. Therefore, the address can work well as the primary address for the object. Gateway-like functionality already works for web servers, so its design is established and well known. The availability of object storage as the unified storage for content, regardless of copies or versions, is also well established. The purpose of the copies of an object may merely be redundancy, but there is no restriction against keeping copies of the same object for geographical purposes. This means we can have an adequate number of copies for as many geography-based accesses as needed. We have now resolved the availability of objects and their access using a primary, distribution-network-like address.
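A minimal sketch of such rule-based interpretation follows; the region and server names are hypothetical. The user-facing virtual address never changes, while the gateway picks a backend per request:

#include <stdio.h>
#include <string.h>

typedef struct {
    char region[32];
    char server[64];
} Route;

/* Pick a backend for a request; fall back to a default server. */
const char *route(const Route *rules, size_t n,
                  const char *client_region, const char *fallback)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(rules[i].region, client_region) == 0)
            return rules[i].server;
    return fallback;
}

int main(void)
{
    Route rules[] = { {"us-east", "vdc1.example.com"},
                      {"eu-west", "vdc2.example.com"} };
    printf("%s\n", route(rules, 2, "eu-west", "vdc1.example.com"));
    return 0;
}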
#include <stdbool.h>

/* 221 = 13 * 17, so n is divisible by 221 exactly when it is
   divisible by both 13 and 17. */
bool isDivisibleBy221(unsigned int n)
{
    return (n % 13 == 0) && (n % 17 == 0);
}


Saturday, August 18, 2018

Web assets as a software update:


Introduction: 
Any application with a web interface requires resources in the form of markup, stylesheets and scripts. Although these may represent code for interaction with the end user, they don't necessarily have to be maintained on the server side or treated the same way as server-side code. This document argues for using an update service for any code that is not maintained on the server side. The update service automatically downloads and installs the latest update to the code on a device or a relay server by a pull mechanism, rather than the conventional pipeline-based push mechanism. Furthermore, the source for the update service may be an object storage, preferably via distributors such as Artifactory or a Content Distribution Network.
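A minimal sketch of the pull mechanism using libcurl, assuming a hypothetical manifest endpoint that returns the latest version string: the device polls, compares with what is installed, and downloads only on a mismatch.

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* Accumulate the response body into a fixed 256-byte buffer. */
static size_t collect(char *data, size_t size, size_t nmemb, void *out)
{
    char *buf = (char *)out;
    size_t used = strlen(buf);
    size_t len = size * nmemb;
    if (len > 255 - used)
        len = 255 - used;    /* truncate; fine for a version string */
    memcpy(buf + used, data, len);
    buf[used + len] = '\0';
    return size * nmemb;
}

/* Returns 1 when the published version differs from what is installed. */
int update_available(const char *installed_version)
{
    char latest[256] = "";
    CURL *curl = curl_easy_init();
    if (!curl)
        return 0;
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://updates.example.com/assets/latest");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, latest);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK && strcmp(latest, installed_version) != 0;
}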

Description:
Content Distribution Networks are widely used to make web application assets available to a web page, regardless of whether it is hosted on mobile, desktop or software as a service. They serve many purposes, but primarily function as a set of proxy servers distributed over geographical locations such that the web page may readily find the assets and download them at high speed regardless of when, where and how the page is displayed. An update service, on the other hand, is generally a feature of a software platform by which tenants can download the latest update from their publisher. The web server is yet another model, where there is a single source of code from a single point of origin, usually gated over a pipeline, and every consuming device or application points to this server via web redirects. These three software publishing conventions place no restrictions on the size or granularity of individual releases; generally these are determined by what can be achieved within a timeline. Since the most recent update is expected to be compatible with previous versions of the host or device ecosystem, and updates are mostly forward progressive, there is very little testing required to ensure that new releases mix and match well on a particular host. Moreover, a number of request-responses are already being made to load a web page, so there is no necessity for these downloads to be of a minimum size.
This brings us to a point where we view assets not as a bundle but as something discrete that can be versioned and made available over the web. The rules for publishing assets to a set of proxy servers are similar to the rules for releasing code to a virtual server. This works very well for assets viewed as files or objects; even archives are candidates for being versioned and uploaded via multi-part upload. Typically, proxy servers have local storage, while object storage unifies the storage and exposes a single endpoint for the object. This would mean replicating the object over multiple geographical zones from the same object storage. Regardless of the topology of the storage where the assets are made available, the update service can rotate through one or more providers for downloading them to the device. Typically, a gateway service takes care of accessing the object storage in this case.
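A sketch of rotating through providers, with placeholder URLs and a stubbed fetch(): a failed download simply falls through to the next source.

#include <stddef.h>

/* Stand-in for an HTTP GET; a real client would use libcurl.
   Returns 0 on success. */
static int fetch(const char *url) { (void)url; return 0; }

/* Placeholder provider endpoints for the same versioned asset. */
static const char *providers[] = {
    "https://cdn1.example.com/assets/app-1.2.3.js",
    "https://cdn2.example.com/assets/app-1.2.3.js",
    "https://storage.example.com/assets/app-1.2.3.js",
};

/* Try each provider in turn; a failure falls through to the next. */
int download_asset(void)
{
    size_t n = sizeof(providers) / sizeof(providers[0]);
    for (size_t i = 0; i < n; i++)
        if (fetch(providers[i]) == 0)
            return 0;   /* downloaded */
    return -1;          /* every provider failed */
}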

Conclusion:
Software may be viewed both in terms of server-side logic and client-side updated assets. The granularity of releases for both can be fine-grained and independently verified. The distribution may be finely balanced so that the physical representation of what makes an application is much more modular and automatic for every consumer.

Friday, August 17, 2018

We look at a particular usage of Object Storage as a Content Distribution Network (CDN). The latter is merely a collection of proxy servers. Typically, proxy servers have local storage, while object storage unifies the storage and exposes a single endpoint for the object. This would mean replicating the object over multiple geographical zones from the same object storage. Regardless of the topology of the storage where the assets are made available, any service that needs the content can rotate through one or more CDNs for downloading it to the device.
Typically, a CDN is enabled over object storage using a gateway. A RADOS gateway, for example, enables content to be served from distributed object storage. In order to read an object, a RADOS gateway will create a cluster handle and then connect to the cluster. It then opens an IO context and reads the data from the object, after which it closes the context and the handle. This gateway is implemented in the form of a FastCGI module and can be used with any web server that supports such a module.
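For illustration, the same sequence with the librados C API; the pool name "data" and object name "greeting" are assumptions, and error checks are omitted for brevity. Build with: cc read_obj.c -lrados

#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char buf[128];

    rados_create(&cluster, "admin");           /* cluster handle     */
    rados_conf_read_file(cluster, NULL);       /* default ceph.conf  */
    rados_connect(cluster);                    /* connect to cluster */
    rados_ioctx_create(cluster, "data", &io);  /* open IO context    */

    int n = rados_read(io, "greeting", buf, sizeof(buf) - 1, 0);
    if (n >= 0) { buf[n] = '\0'; printf("%s\n", buf); }

    rados_ioctx_destroy(io);                   /* close the context  */
    rados_shutdown(cluster);                   /* release the handle */
    return 0;
}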
The use of a gateway facilitates server load balancing, request routing and content services. It can be offloaded to hardware. Gateways may perform web switching or content switching. They need not be real web servers and can route traffic to other web servers. They may be augmented to monitor servers in order to change forwarding rules. Rules may be made simpler with good addressing instead of using lookups. Also, a gateway is generally assigned a single virtual IP address.
Moreover, not all requests need to reach the object storage. In some cases web caches may be used: a gateway can forward a request to a web cache just the same way as it forwards a request to a web server. The benefits of using a web cache include saving bandwidth, reducing server load and improving request-response time. If a dedicated content store is required, the caching and the server are typically encapsulated into a content server. This is quite the opposite of the paradigm of using object storage and replicated objects to serve the content directly from the store. The distinction here is that there are two layers of functions: the first is the gateway layer, which solves distribution using techniques such as caching, asset copying and load balancing; the second is the compute and storage bundling in the form of a server or a store, with shifting emphasis between code and storage.
The two layers need to adhere to the end-to-end principle, which is best done with a DiffServ-style paradigm.

Thursday, August 16, 2018

We were discussing the suitability of Object Storage for various workloads. Specifically, we discussed its role in Artifactory, which is used to store binary objects from CI/CD. A large number of binary objects or files gets generated with each run of the build. These files are mere build artifacts, and the most common usage of them is download. Since the hosted solution is cloud based, Artifactory users demand elasticity, durability and HTTP access. Object Storage is best suited to meet these demands. The emphasis here is the distinction over a file system exposed over the web for their primary use case. In fact, the entire DevOps process with its CI/CD servers can use a common Object Storage instance so that there is little, if any, copying of files from one staging directory to another. The Object Storage not only becomes the final single-instance destination but also avoids significant inefficiencies in the DevOps processes. Moreover, builds are repeated through development, testing and production, so the same solution works very well for those repetitions. This is not just a single use case but an indication that there are many cycles within the DevOps process that could benefit from Object Storage as a storage tier.
Static content such as binary images of executables is generally written once, with changes producing new copies rather than in-place edits. Versioning of same-named files is a feature of Object Storage: it can not only support file exports but also provide automatic versioning of content. It becomes a content library for the binary artifacts of a build, with the features demanded of a file system, such as versioning. Previous versions may be retained for as long as the life-cycle rules allow, and these rules can be specified per object. It can also provide time-limited access to content: the URI exposed for the object can be shared with anyone, and the object may be downloaded on any device, anywhere.
It also enables multi-part upload (MPU) of large objects, a significant improvement for large binaries since it enables transfer in parts. There are three steps: an MPU upload is initiated, the different parts are uploaded, and finally an MPU complete is requested. The object storage constructs the object from the parts, after which it can be accessed just the same as any other object. Each part is identified, and the parts can number in the hundreds. Each part upload request includes a part number, and the object storage returns a tag header for each part; the header and part number must be included in subsequent requests. The parts can be sent to the object storage in any order, and if a part is sent again, it updates the already uploaded part. Multi-part uploads can be concurrent. A complete request or an abort request must be sent to finalize the parts, permitting the object storage to start reconstructing the object and removing the parts. The parts uploaded so far can be listed; if there are more than 1,000 parts, a series of such list requests needs to be sent. All the parts are used for reconstruction of the original object only after the complete request is received. The three-step flow is sketched in the code below.
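In the following sketch, mpu_initiate, mpu_upload_part and mpu_complete are hypothetical stand-ins for the store's HTTP requests, not a real client library:

#include <stdio.h>
#include <string.h>

typedef struct { int part_number; char etag[64]; } PartReceipt;

/* Hypothetical stand-ins for the store's HTTP requests. */
static void mpu_initiate(const char *key, char *upload_id)
{
    strcpy(upload_id, "upload-001");
    printf("initiated upload for %s\n", key);
}

static void mpu_upload_part(const char *upload_id, int part_number,
                            const char *data, PartReceipt *r)
{
    (void)upload_id; (void)data;
    r->part_number = part_number;   /* each request carries its number */
    snprintf(r->etag, sizeof(r->etag), "etag-%d", part_number);
}

static void mpu_complete(const char *key, const char *upload_id,
                         const PartReceipt *receipts, int n)
{
    (void)receipts;
    printf("completed %s (%s) with %d parts\n", key, upload_id, n);
}

/* Step 1: initiate; step 2: upload parts in any order, keeping the
   returned tag headers; step 3: complete so the store reassembles
   the object and discards the parts. */
void upload_large_object(const char *key, const char *parts[], int nparts)
{
    char upload_id[32];
    PartReceipt receipts[100];   /* list requests page at 1,000 parts */

    mpu_initiate(key, upload_id);
    for (int i = 0; i < nparts; i++)
        mpu_upload_part(upload_id, i + 1, parts[i], &receipts[i]);
    mpu_complete(key, upload_id, receipts, nparts);
}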