Wednesday, August 15, 2018

We were discussing the suitability of Object Storage to various workloads and the programmability convenience that enables migration of old and new workloads. We also discussed the use of both a UI and an SDK for ingesting data.
Let us now consider the use of object storage for powering web applications. Static resources and files for a web application can be served directly out of object storage. Many web applications need to serve a large portion of the file system over the web because of the sheer number of artifacts; these are ideal candidates for Object Storage. Consider Artifactory, a leading hosted solution for all things binary. It is a natural match for code repositories and aids CI/CD. A large number of binary objects or files is generated with each run of the build. These files are mere build artifacts, and the most common operation on them is download. Since the hosted solution is cloud based, Artifactory demands elasticity, durability and HTTP access, all of which Object Storage provides. The emphasis here is the suitability of Object Storage over a filesystem for this primary use case. In fact, the entire DevOps process with its CI/CD servers can use a common Object Storage instance so that there is little if any copying of files from one staging directory to another. Object Storage not only becomes the final single-instance destination but also removes significant inefficiency from the DevOps process. Moreover, builds are repeated through development, testing and production, so the same solution works very well for those repetitions. This is not just a single use case but an indication that there are many cycles within the DevOps process that could benefit from a storage tier such as Object Storage.
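As a sketch of the single-instance idea above: each CI/CD run can publish its artifacts under its own key prefix, so later stages read the same objects rather than copying files between staging directories. The project name, build id and key scheme below are illustrative assumptions, not any particular product's layout:

```python
# Sketch: map build artifacts to object keys so each CI/CD run
# writes under one prefix and no stage-to-stage copying is needed.
# The naming scheme here is an assumption for illustration.

def artifact_key(project, build_id, path):
    """Compose a deterministic object key for one build artifact."""
    return f"{project}/builds/{build_id}/{path.lstrip('/')}"

# One build run publishes all of its outputs under one prefix;
# dev, test and prod stages then download the same objects.
keys = [artifact_key("webapp", "1042", p)
        for p in ["bin/app.jar", "reports/unit.xml"]]
```

Because the key is deterministic, repeated builds through development, testing and production resolve to the same addressing scheme without any directory management.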
#codingexercise
bool isDivisibleBy35(uint n)
{
// 35 = 5 * 7 and the factors are coprime
return isDivisibleBy5(n) && isDivisibleBy7(n);
}

Tuesday, August 14, 2018

We were discussing the suitability of Object Storage to various workloads and the programmability convenience that enables migration of old and new workloads. In particular, we discussed connectors for various data sources and their bidirectional data transfer. Duplicity is a command line tool that is an example of a connector tool, but we were discussing the availability of an SDK with the object storage. Writing the connectors for each data source is very much like an input-output model. The data flows either from the external source to object storage or from object storage to the external source. In each of these directions a connector changes only with the type of external source. Otherwise the object-storage-facing part of the connector is already implemented in the form of S3 APIs for read and write. The API varies only with what the data source makes available. This makes it easy to write the connector as an amalgam of a source-facing API for bidirectional data transfer and the object-storage-facing S3 APIs. A read from the external data source is written to object storage with the S3 put API, and a write to the external data destination takes data from object storage with a read using the S3 get API. Since each connector varies by the type of external data platform, connectors can be written one per data platform so that each is easier to use with its platform. Also, SDKs facilitate development by providing language-based convenience. Therefore, the same connector SDK may be offered in more than one language.
SDKs may be offered in any language for the convenience of writing data transfer in any environment. It does not stop there. A UI widens the audience for the same purposes and brings in administrators and systems engineers without the need for writing scripts or code. ETL, for example, is a very popular use of designer tools, with drag-and-drop logic facilitating the wiring and transfer of data. The SDK may power the UI as well, and both can be adapted to the data source, environment and tasks.
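The input-output model above can be sketched as a connector in which only the source-facing half varies per data platform. The `ObjectStore` class below is an in-memory stand-in for the S3-facing put/get calls, and the callables standing in for the external source and destination are illustrative assumptions; a real connector would call the S3 SDK instead:

```python
# Sketch of the connector input-output model: the object-storage
# side (put/get) is fixed, and only the source-facing half changes
# per external data platform. All storage here is in-memory.

class ObjectStore:
    """In-memory stand-in for the S3-facing put/get APIs."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):    # stands in for the S3 put API
        self._objects[key] = data
    def get(self, key):          # stands in for the S3 get API
        return self._objects[key]

class Connector:
    """Bidirectional transfer; only the source side varies."""
    def __init__(self, store):
        self.store = store
    def ingest(self, key, read_from_source):
        # read from the external source, write with the put API
        self.store.put(key, read_from_source())
    def export(self, key, write_to_destination):
        # read with the get API, write to the external destination
        write_to_destination(self.store.get(key))

store = ObjectStore()
connector = Connector(store)
connector.ingest("logs/app.log", lambda: b"line1\nline2\n")
received = []
connector.export("logs/app.log", received.append)
```

A connector for a different platform would swap in different source-facing callables while the object-storage-facing half stays unchanged, which is why one connector per data platform is cheap to write.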
#codingexercise
bool isDivisibleBy55(uint n)
{
return isDivisibleBy5(n) && isDivisibleBy11(n);
}
bool isDivisibleBy77(uint n)
{
return isDivisibleBy7(n) && isDivisibleBy11(n);
}

Monday, August 13, 2018

We were discussing the suitability of Object Storage to various workloads.
We said that the connectors for these data sources are not offered out of object storage products, but they could immensely benefit data ingestion. The S3 API deals exclusively with the namespace, buckets and objects, even when the APIs are made available as part of an SDK, but something more is needed for the connectors.
Writing the connectors for each data source is very much like an input-output model. The data flows either from the external source to object storage or from object storage to the external source. In each of these directions a connector changes only with the type of external source. Otherwise the object-storage-facing part of the connector is already implemented in the form of S3 APIs for read and write. The API varies only with what the data source makes available. This makes it easy to write the connector as an amalgam of a source-facing API for bidirectional data transfer and the object-storage-facing S3 APIs. A read from the external data source is written to object storage with the S3 put API, and a write to the external data destination takes data from object storage with a read using the S3 get API. Since each connector varies by the type of external data platform, connectors can be written one per data platform so that each is easier to use with its platform. Also, SDKs facilitate development by providing language-based convenience. Therefore, the same connector SDK may be offered in more than one language.
The connectors are just one example of the programmability convenience of data ingestion from different workloads. Specifying metadata for the objects and showing sample queries on object storage as part of the SDK is another convenience for developers using Object Storage. Well-written examples in the SDK and documentation that ease search and analytics associated with Object Storage will tremendously help the advocacy of Object Storage in different software stacks and offerings. Moreover, it will be helpful to log all activities of the SDK for data and queries so that these can make their way to a log store for convenience with audit and log analysis. Using the SDK to improve automatic tagging and logging is a powerful technique for improving usability and maintaining history.
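The automatic tagging and activity logging suggested above can be sketched as a thin wrapper around put/get. Everything below is an in-memory stand-in rather than a real SDK, and the tag names are illustrative assumptions; the point is only that every call leaves an audit record that could later be shipped to a log store:

```python
# Sketch: an SDK-side wrapper that tags objects on ingest and
# records every put/get so the activity can be sent to a log
# store for audit and log analysis. In-memory stand-in only.

import time

class LoggingStore:
    def __init__(self):
        self._objects = {}   # key -> (data, metadata tags)
        self.activity = []   # audit trail of every SDK call
    def put(self, key, data, metadata=None):
        tags = dict(metadata or {})
        tags.setdefault("ingested_at", time.time())  # automatic tagging
        self._objects[key] = (data, tags)
        self.activity.append(("put", key))
    def get(self, key):
        self.activity.append(("get", key))
        return self._objects[key][0]

s = LoggingStore()
s.put("reports/q1.csv", b"a,b\n1,2\n", metadata={"owner": "etl"})
data = s.get("reports/q1.csv")
```

Because the wrapper sits in the SDK, every caller gets tagging and history for free, without any change to the S3-facing operations themselves.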
#codingexercise
bool isDivisibleBy22(uint n) {
return isDivisibleBy2(n) && isDivisibleBy11(n);
}
bool isDivisibleBy33(uint n) {
return isDivisibleBy3(n) && isDivisibleBy11(n);
}

Sunday, August 12, 2018

We were discussing the suitability of Object Storage to various workloads after having discussed its advantages and its position as a perfect storage tier:
The data sources can include:
Backup and restore workflows
Data warehouse ETL loads
Log stores and indexes
Multimedia libraries
Other file systems
Relational database connections
NoSQL databases
Graph databases
All upstream storage appliances excluding aging tiers.
Notice that the connectors for these data sources are not offered out of object storage. In reality, the S3 API deals exclusively with the namespace, buckets and objects, even when the APIs are made available as part of an SDK.
Writing the connectors for each data source is very much like an input-output model. The data flows either from the external source to object storage or from object storage to the external source. In each of these directions a connector changes only with the type of external source. Otherwise the object-storage-facing part of the connector is already implemented in the form of S3 APIs for read and write. The API varies only with what the data source makes available. This makes it easy to write the connector as an amalgam of a source-facing API for bidirectional data transfer and the object-storage-facing S3 APIs. A read from the external data source is written to object storage with the S3 put API, and a write to the external data destination takes data from object storage with a read using the S3 get API. Since each connector varies by the type of external data platform, connectors can be written one per data platform so that each is easier to use with its platform. Also, SDKs facilitate development by providing language-based convenience. Therefore, the same connector SDK may be offered in more than one language.
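To make the one-connector-per-platform idea concrete for one of the data sources listed above, here is a sketch of a file-system connector that walks a directory tree and writes each file as one object. The key scheme is an illustrative assumption, and a plain dict stands in for the S3 put calls a real connector would issue:

```python
# Sketch: a connector for one specific data platform -- a plain
# file system. Each file becomes one object whose key mirrors its
# relative path; the dict stands in for S3 put calls.

import os
import tempfile

def ingest_directory(root, store, prefix=""):
    """Walk a directory tree and put each file as one object."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = prefix + os.path.relpath(path, root).replace(os.sep, "/")
            with open(path, "rb") as f:
                store[key] = f.read()   # stand-in for the S3 put API

# Build a tiny directory tree and ingest it.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "logs"))
    with open(os.path.join(root, "logs", "app.log"), "w") as f:
        f.write("hello")
    store = {}
    ingest_directory(root, store, prefix="fsimport/")
```

A connector for a relational database or a log store would replace only the walking-and-reading half; the object-storage-facing write stays the same.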



#codingexercise
bool isDivisibleBy14(uint n) {
return isDivisibleBy2(n) && isDivisibleBy7(n);
}

Saturday, August 11, 2018

Object Storage is very popular with certain content. Files map directly to objects. Multimedia content is also well served from object storage. Large files, such as those from Artifactory, are also suitable for Object Storage. An entire cluster-based file system may also be exported and used with an Object Store. A deduplication appliance may also provide benefits in conjunction with Object Storage.
Object Storage is usually viewed as a storage appliance in itself. Therefore it provides a form of raw storage suitable for what can be viewed as objects. However, a suite of connectors may be made available in the form of an SDK that enables data to move into object storage from well-known platforms. For example, data in a content library can be moved into object storage with the help of a connector in the SDK. This is just one example; there are several more.

The data sources can include:
Backup and restore workflows
Data warehouse ETL loads
Log stores and indexes
Multimedia libraries
Other file systems
Relational database connections
NoSQL databases
Graph databases
All upstream storage appliances excluding aging tiers

#codingexercise
bool isDivisibleBy21(uint n) {
return isDivisibleBy3(n) && isDivisibleBy7(n);
}

Friday, August 10, 2018

We were discussing application virtualization and the migration of workloads:
We brought up how both the application and the storage tier benefit from virtualization and the automation of workload migration using tools. Object storage itself may run in a container, facilitating easy migration across hosts. Since object storage virtualizes datacenters and storage arrays, it is at once a storage application and a representation of unbounded storage space. Once the workloads have been migrated to object storage, both can then be moved around the cloud much more nimbly than if they used raw storage volumes.
One of the challenges associated with migration is that the Application Server - Storage Tier model has evolved into far more complex paradigms. There is no longer just an application server and a database. In fact, servers are replaced by clusters and nodes, applications are replaced by modules, and modules run on containers. Platform as a service has evolved to using Mesos and Marathon, where even the storage volumes are moved around if they are not shared volumes. Data usually resides in the form of files, and database connectivity is re-established because the connection string does not change as the nodes are rotated. Marathon monitors the health of the nodes as the application and storage are moved around. In object storage, the location of the object is arbitrary once the underlying storage is virtualized. Object storage itself may use a container that makes it portable, but it is generally not the norm to move Object Storage around in a Marathon framework. If anything, Object Storage is akin to a five-hundred-pound gorilla in the room.
Object Storage is very popular with certain content. Files map directly to objects. Multimedia content is also well served from object storage. Large files, such as those from Artifactory, are also suitable for Object Storage. An entire cluster-based file system may also be exported and used with an Object Store. A deduplication appliance may also provide benefits in conjunction with Object Storage.
Object Storage is usually viewed as a storage appliance in itself. Therefore it provides a form of raw storage suitable for what can be viewed as objects. However, a suite of connectors may be made available in the form of an SDK that enables data to move into object storage from well-known platforms. For example, data in a content library can be moved into object storage with the help of a connector in the SDK. This is just one example; there are several more.

#codingexercise
bool isDivisibleBy12(uint n)
{
return isDivisibleBy3(n) && isDivisibleBy4(n);
}

Thursday, August 9, 2018

We were discussing application virtualization and the migration of workloads:
There are a few other caveats with application virtualization. The storage volumes usually move with the rotation of the servers, as demonstrated by Mesos. This is very different from object storage, where the storage is virtualized. When the storage volumes are moved around, the data usually resides in the form of a file, such as a database file. Database connectivity is re-established because the connection string does not change as the nodes are rotated. Furthermore, the servers rotated in and out may use the same database file. In object storage, the location of the object is arbitrary once the underlying storage is virtualized. This might explain why object storage provides a storage tier as opposed to end-to-end virtualization. There are tools that help with workload migration. These tools provide what is termed "smart availability" by enabling dynamic movement of workloads between physical, virtual and cloud infrastructure. This is an automation of all the tasks required to migrate a workload. Even the connection string can be retained when moving the workload, so long as the network name can be reassigned between servers. What this automation does not do is perform storage- and OS-level data replication, because the source and destination are something the users may want to specify themselves, and that is beyond what is needed for migrating the workloads. Containers and shared volumes come close to providing this kind of ease, but they do not automate all the tasks needed on the container to perform seamless migration regardless of the compute. Also, this automation makes no distinction between Linux containers and Docker containers. These tools are often used for high availability and for separating read-only data access to be performed from the cloud.
With the help of the above explanation of workload migration, we have brought up how both the application and the storage tier benefit from virtualization and the automation of migration using tools. Object storage itself may run in a container, facilitating easy migration across hosts. Since object storage virtualizes datacenters and storage arrays, it is at once a storage application and a representation of unbounded storage space. Once the workloads have been migrated to object storage, both can then be moved around the cloud much more nimbly than if they used raw storage volumes.
#codingexercise
bool isDivisibleBy4(uint n)
{
// 100 is a multiple of 4, so only the last two digits matter
uint m = n % 100;
return (m % 4 == 0);
}
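The snippet above relies on the fact that 100 is divisible by 4, so n and its last two digits leave the same remainder mod 4. A quick exhaustive check of that claim (translated to Python only for the sake of a runnable test):

```python
# Since 100 % 4 == 0, (n % 100) % 4 == n % 4 for every n,
# which is exactly what isDivisibleBy4 above relies on.
def is_divisible_by_4(n):
    return (n % 100) % 4 == 0

# Verify the shortcut against plain n % 4 over a range of inputs.
ok = all(is_divisible_by_4(n) == (n % 4 == 0) for n in range(1000))
```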