Friday, October 27, 2017

We have been discussing MongoDB improvements in catalog management. Let us quickly recap the initiatives taken by MongoDB and how it differs from a regular Master Data Management (MDM) system. Specifically, we can compare it with the MDM from Riversand which we discussed earlier:
Comparisons
MongoDB:
   - organizes the catalog as a one-stop shop in its store
            -- no sub-catalogs, fragmentation, ETL, or message bus
   - equally available to application servers, API data and services, and web servers.
   - also available behind the store for supply chain management and data warehouse analytics
   - catalog available for browsing as well as searching via a Lucene search index
   - geo-sharding with persisted shard ids or more granular store ids for improving high availability and horizontal scalability
   - local real-time writes and tuned for a read-dominated workload
   - bulk writes for refresh
   - the relational database stores point-in-time loads which are pushed overnight to the catalog information management store and made available for real-time views
   - NoSQL powers insight and analytics based on aggregations
   - provides a front-end data store for real-time queries and aggregations from applications.
   - comes with comprehensive monitoring and scaling

Both the MDM and the MongoDB catalog store hierarchies and facets in addition to Items and SKUs as a way of organizing items (a sample document sketch follows after the comparison).

Riversand MDM
   - rebuildable catalog via change data capture
   - .NET-powered comprehensive web services
   - data as a service model
   - relies on traditional relational databases only
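To make the shared organization concrete, here is a minimal sketch of what a single catalog document might look like when hierarchy and facets ride along with the item and its SKUs; every field name here is a hypothetical illustration rather than the actual schema of either product.

using MongoDB.Bson;

// one catalog item carrying its SKUs, hierarchy path and facets in a single document
var item = new BsonDocument
{
    { "itemId", "item-123" },
    { "title", "Example product" },
    { "hierarchy", new BsonArray { "Home", "Kitchen", "Cookware" } },
    { "facets", new BsonDocument { { "color", "red" }, { "material", "steel" } } },
    { "skus", new BsonArray
        {
            new BsonDocument { { "sku", "sku-1" }, { "size", "S" }, { "price", 19.99 } },
            new BsonDocument { { "sku", "sku-2" }, { "size", "L" }, { "price", 24.99 } }
        }
    }
};

Embedding the SKUs and facets in one document is what lets the catalog behave as the one-stop shop described above.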


#codingexercise
Find if one binary tree is a subtree of another
using System.Collections.Generic;
using System.Linq;

bool IsSubTree(Node root1, Node root2)
{
    // collect the inorder traversal of both trees
    var original = new List<Node>();
    var sequence = new List<Node>();
    InOrder(root1, original);
    InOrder(root2, sequence);
    // the candidate's values must all appear, in order, within the original's inorder
    return original.Select(n => n.data)
                   .Intersect(sequence.Select(n => n.data))
                   .SequenceEqual(sequence.Select(n => n.data));
}
        4
      /   \
     2     6
    / \   / \
   1   3 5   7
InOrder: 1 2 3 4 5 6 7
Caveat: this works for a contiguous match such as 2,3,4 but for a scattered match such as 2,4,6 we need the sequence-equals check on the intersection.
bool isEqual(Node root1, Node root2)
{
    if (root1 == null && root2 == null) return true;
    if (root1 == null || root2 == null) return false;
    return root1.data == root2.data &&
           isEqual(root1.left, root2.left) &&
           isEqual(root1.right, root2.right);
}
Note that we cannot apply the isEqual check as an alternative to isSubtree because the candidate subtree may end at nodes that are not leaves in the original.
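For completeness, here is a minimal sketch of the Node type and the InOrder helper that the snippets above assume; the member names data, left and right are taken from the recursion in isEqual.

using System.Collections.Generic;

class Node
{
    public int data;
    public Node left;
    public Node right;
}

void InOrder(Node root, List<Node> result)
{
    if (root == null) return;
    InOrder(root.left, result);   // visit the left subtree
    result.Add(root);             // then the node itself
    InOrder(root.right, result);  // then the right subtree
}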

Thursday, October 26, 2017

Yesterday we were discussing how user activities are logged for insight by MongoDB. 
All user activity is recorded using the HVDF API, staged in the User History store of MongoDB, and pulled by external analytics such as Hadoop using the MongoDB-Hadoop connector. Internal analytics for aggregation is also provided by Hadoop. Data stores for Product Map, User Preferences, Recommendations and Trends then store the aggregations and make them available for personalization that the apps can use for interacting with the customer.
Today we look at monitoring and scaling in MongoDB in detail. The user-activity-analysis-insight design works well for any kind of deployment. For example, we can have a local deployment, an AWS deployment, or a remote deployment of the database, and they can be scaled from a standalone instance to a replica set to a sharded cluster; in all these cases, querying does not suffer. A replica set is a group of MongoDB instances that host the same data set. It provides redundancy and high availability. A sharded cluster stores data across multiple machines. When the data sets are large and the throughput is high, sharding eases the load. Tools to help monitor instances and deployments include the MongoDB Monitoring Service, mongostat, mongotop, iostat, and plugins for popular frameworks. These help troubleshoot failures immediately. The observation here is that MongoDB enables comprehensive monitoring through its own services, API, and user interface. Since MongoDB is itself a database, monitoring data may also be made available for external query and for dashboards such as the Grafana stack. Some of the popular metrics involved here include data size versus disk size, active set size versus RAM size, disk I/O, and write lock. The goal for the metrics and monitoring is that they should be able to account for the highest possible traffic.
MongoDB recommends adding replicas to reduce latency to users, add read capacity, and increase data safety. If reads do not scale, the data may get stale. Adding or removing a replica can be seamless.
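As a hedged illustration of spreading reads across replicas, here is a small sketch using the MongoDB C# driver; the connection string, database and collection names are assumptions made up for the example.

using MongoDB.Bson;
using MongoDB.Driver;

// hypothetical replica set connection
var client = new MongoClient("mongodb://primary-host:27017/?replicaSet=rs0");
var catalog = client.GetDatabase("retail").GetCollection<BsonDocument>("catalog");

// route reads to secondaries when available to add read capacity
var replicaReads = catalog.WithReadPreference(ReadPreference.SecondaryPreferred);
var item = replicaReads.Find(Builders<BsonDocument>.Filter.Eq("sku", "12345")).FirstOrDefault();

SecondaryPreferred sends reads to secondaries when one is available and falls back to the primary otherwise.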
#codingexercise
Find if one binary tree is a subtree of another
bool IsSubTree(Node root1, Node root2)
{
    // collect the inorder traversal of both trees
    var original = new List<Node>();
    var sequence = new List<Node>();
    InOrder(root1, original);
    InOrder(root2, sequence);
    // the candidate's values must all appear, in order, within the original's inorder
    return original.Select(n => n.data)
                   .Intersect(sequence.Select(n => n.data))
                   .SequenceEqual(sequence.Select(n => n.data));
}

Wednesday, October 25, 2017

We were discussing how user activities are logged for insight by MongoDB. 
All user activity is recorded using the HVDF API, staged in the User History store of MongoDB, and pulled by external analytics such as Hadoop using the MongoDB-Hadoop connector. Internal analytics for aggregation is also provided by Hadoop. Data stores for Product Map, User Preferences, Recommendations and Trends then store the aggregations and make them available for personalization that the apps can use for interacting with the customer.
Thus MongoDB powers applications for products and inventory, recommended products, customer profile, and session management. Hadoop powers analysis for elastic pricing, recommendation models, predictive analytics, and clickstream history. The user activity model is a JSON document with attributes for geoCode, sessionId, device, userId, type of activity, itemId, sku, order, location, tags, and timestamp. Recent activity for a user is a simple query such as db.activity.find({userId:"123"}).sort({time: -1}).limit(1000)
Indices that can be used include userId+time, itemId+time, time. 
Aggregations are very fast and in real time. Queries like finding the recent number of views for a user, the total sales for a user, or the number of views/purchases for an item are now near real-time.
A batch query over NoSQL such as a map-reduce calculation for unique visitors is also performant.
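A hedged sketch of how the indices, the recent-activity query, and a views-per-item aggregation for a user might look with the MongoDB C# driver; the database name, collection name, and the activity type value are assumptions for illustration.

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017"); // assumed local deployment
var activity = client.GetDatabase("retail").GetCollection<BsonDocument>("activity");

// compound indexes corresponding to userId+time and itemId+time
activity.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(
    Builders<BsonDocument>.IndexKeys.Ascending("userId").Descending("time")));
activity.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(
    Builders<BsonDocument>.IndexKeys.Ascending("itemId").Descending("time")));

// recent activity for a user, equivalent to the shell query above
var recent = activity.Find(Builders<BsonDocument>.Filter.Eq("userId", "123"))
                     .Sort(Builders<BsonDocument>.Sort.Descending("time"))
                     .Limit(1000)
                     .ToList();

// near real-time aggregation: number of views per item for this user
var viewsPerItem = activity.Aggregate()
    .Match(new BsonDocument { { "userId", "123" }, { "type", "view" } })
    .Group(new BsonDocument { { "_id", "$itemId" }, { "views", new BsonDocument("$sum", 1) } })
    .ToList();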
This design works well for any kind of deployment. For example, we can have a local deployment, an AWS deployment, or a remote deployment of the database, and they can be scaled from a standalone instance to a replica set to a sharded cluster; in all these cases, querying does not suffer. A replica set is a group of MongoDB instances that host the same data set. It provides redundancy and high availability. A sharded cluster stores data across multiple machines. When the data sets are large and the throughput is high, sharding eases the load. Tools to help monitor instances and deployments include the MongoDB Monitoring Service, mongostat, mongotop, iostat, and plugins for popular frameworks. These help troubleshoot failures immediately.
#codingexercise
In the second iteration of the farthest-coprime search we can even compare distances and exit early once a coprime was located in the first iteration and the remaining candidates are closer than that coprime.
int dist = 0;
// First Iteration as described yesterday
// Second Iteration as described yesterday
    // within second iteration
    if (dist > 0 && Math.Abs(n-i) < dist) {
        break;
    }


Monday, October 23, 2017

We were discussing how user activities are logged for insight by MongoDB. These activities include search terms, items viewed or wish-listed, cart additions or removals, orders submitted, sharing on social networks, and impressions or clickstream. Instead of using a time series database, the activity store and reporting are improved out of the box by MongoDB. Moreover, the user activities are used to compute user/product history, product map, user preferences, recommendations, and trends. The challenges associated with recording user activities include the following:
1) they are too voluminous and usually available only in logs
2) when they are made available elsewhere, they hamper performance.
3) If a warehouse is provided to record user activities, it helps reporting but becomes costly to scale
4) a compromise is available in NoSQL stores
5) but it is harder to provide a NoSQL store at the front-end where real-time queries are performed.
MongoDB addresses these with 
1) providing a dedicated store for a large stream of data samples with variable schema and a controllable retention period (a retention sketch follows below)
2) computing aggregations and derivatives separately
3) providing low latency to update data.
All user activity is recorded using the HVDF API, staged in the User History store of MongoDB, and pulled by external analytics such as Hadoop using the MongoDB-Hadoop connector. Internal analytics for aggregation is also provided by Hadoop. Data stores for Product Map, User Preferences, Recommendations and Trends then store the aggregations and make them available for personalization that the apps can use for interacting with the customer.
This completes the Data->Insight->Actions translation for user activities.
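As one hedged way to realize the retention control in point 1 above, a TTL index on the activity timestamp lets MongoDB expire old raw samples automatically; the collection and field names are assumptions carried over from the activity model discussed earlier.

using System;
using MongoDB.Bson;
using MongoDB.Driver;

var activity = new MongoClient("mongodb://localhost:27017")
    .GetDatabase("retail")
    .GetCollection<BsonDocument>("activity");

// expire raw activity samples 90 days after their time field;
// aggregations and derivatives are computed separately and kept elsewhere
activity.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(
    Builders<BsonDocument>.IndexKeys.Ascending("time"),
    new CreateIndexOptions { ExpireAfter = TimeSpan.FromDays(90) }));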


#codingexercise
We were implementing the furthest coprime of a given number such that it is in the range 2 – 250.
We made a single pass over the range of 2 to 250 to determine which ones were co-prime.
However, we only need the farthest. Consequently, we could split the range into the part above the number and the part below it, so long as the input number lies between them, and bail at the first coprime encountered from either extreme. In the second iteration we can even compare distances and exit early once a coprime was located in the first iteration and the remaining candidates are closer than that coprime.
int dist = 0;
// First Iteration as described yesterday
// Second Iteration as described yesterday
    // within second iteration
    if (dist > 0 && Math.Abs(n-i) < dist) {
        break;
    }
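Putting the two iterations together, here is a minimal standalone sketch of the farthest-coprime search under the stated 2 to 250 range; the helper and variable names are illustrative.

using System;

static class CoprimeSearch
{
    static int Gcd(int a, int b) => b == 0 ? a : Gcd(b, a % b);

    // returns the coprime of n farthest from n within 2..250, or -1 if none
    public static int FarthestCoprime(int n)
    {
        int best = -1, dist = 0;
        // first iteration: scan from the upper extreme inward toward n
        for (int i = 250; i > n; i--)
        {
            if (Gcd(n, i) == 1) { best = i; dist = Math.Abs(n - i); break; }
        }
        // second iteration: scan from the lower extreme inward toward n
        for (int i = 2; i < n; i++)
        {
            // bail once the remaining candidates cannot beat the distance found so far
            if (dist > 0 && Math.Abs(n - i) < dist) break;
            if (Gcd(n, i) == 1)
            {
                if (Math.Abs(n - i) > dist) { best = i; dist = Math.Abs(n - i); }
                break;
            }
        }
        return best;
    }
}

For example, FarthestCoprime(3) would return 250 since gcd(3, 250) = 1 and 250 is the farthest candidate in range.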

Sunday, October 22, 2017

We were discussing MongoDB. Their catalog initially stored one document per SKU per store. If there are a hundred million items in a thousand stores, that results in a hundred billion entries. Instead, the documents became one each for a specific key/store grouping of SKUs. This improves geo-distribution and results in a lower number of documents. In order to keep reads/writes local with low latency, in an architecture where writes go to every shard for every region, writes were streamlined to go to their respective regions with the help of tags.
Even the shopping cart is modeled as a single document. In order to improve availability, inventory was sharded by region. The inventory was stored with a geocode, with the option to make it more granular by using store ids instead of geocodes. The shopping cart is also made more available by replication across regions so that writes stay local even as the shopping cart moves between regions. Each shard has one replica per region. The primary servers are distributed per region. The shopping carts are also sharded by geocode or optionally by store id.
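As a rough sketch of how such geo-sharding might be set up, the shardCollection admin command can take the geocode (or a more granular store id) as the leading shard-key field; the database, collection and key names below are assumptions, and the per-region tag ranges would still have to be configured separately.

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://mongos-host:27017"); // hypothetical mongos router
var admin = client.GetDatabase("admin");

// enable sharding on the database, then shard inventory by geocode + sku
admin.RunCommand<BsonDocument>(new BsonDocument("enableSharding", "retail"));
admin.RunCommand<BsonDocument>(new BsonDocument
{
    { "shardCollection", "retail.inventory" },
    { "key", new BsonDocument { { "geoCode", 1 }, { "sku", 1 } } }
});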
User activities are logged for insight. These include search terms, items viewed or wish-listed, cart additions or removals, orders submitted, sharing on social networks, and impressions or clickstream. The data will be used to compute user/product history, product map, user preferences, recommendations, or trends.
Insights from logs had proven difficult. Log files are usually forgotten. Moreover, the user activity is voluminous. When the activity is recorded in a data warehouse system, it provides good reporting but is expensive to scale. NoSQL helps with scaling and reporting, but there is still no data store for real-time queries and aggregations from applications. Moreover, there are delays in moving logs to the log processor and delays in processing data from the log processor to the analytics. From the analytics, the output is limited by schema before it is put in a relational store. From this store there is limited read capability by the application. Instead of using a time series database, the activity store and reporting are improved out of the box by MongoDB.
#codingexercise
Yesterday we talked about finding the largest number from the digits whose product equals the number.
For a single number as a standalone problem, we determined the answer using digits from 2 to 9 and their repetitions, omitting 1, by using dynamic programming.
When we have an increasing continuous sequence of inputs, we said we could re-use the results from the factors. Specifically, we order the factors by their counts and use the ones that maximize the count. The combinations of factors that result in the product need not all be enumerated; we just take as many of those with higher counts as necessary to make the product.
Another way to think about this is to use the same dynamic programming as for the digits by recursively checking whether including or excluding a factor returns the higher count of digits. We bail if the product is reached and return 1, or if it is not met and return 0. For inclusion we add the contribution from the factor to the recursion, and for exclusion we return the recursion.

#implementations of K-means: https://1drv.ms/w/s!Ashlm-Nw-wnWsWbQF0tucrJAUlGz 

Saturday, October 21, 2017

MongoDB reported that the traditional relational store for a catalog suffered from the following issues:
1) The field inventory is a local view only until it makes its way to the central store.
2) The relational store involves a once-a-day sync or some other periodic refresh
3) Stale views are served until the refresh happens which is often not fast enough for consumers.
4) The stale view interferes with analytics and aggregation reports.
5) Downstream internal and external apps have to work around the delays and stale views with sub-optimal logic.
MongoDB overcomes all these and organizes the catalog into Item, Variant, Price, Hierarchy, Facet, and Vendor documents in a document store. Then it allows the applications to search for data via pre-joined objects in a cache or via indexing through a search engine using a Lucene/Solr architecture.
Applications generally want a central service with a single view of the catalog. It is used by most services and back channels. Moreover, they want this inventory to be in sync in real time. Therefore they expect their writes to be local and real-time in a read-dominated workload. That said, refresh takes the form of bulk writes via point-in-time loads. Bulk inserts work well with a relational database, so MongoDB's document data store gets synced with the relational data store periodically, say nightly. The real-time updates and views all happen on the document store only and don't reach the relational store. Similarly, analytics and aggregations pull data from the document store.
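A hedged sketch of what such a nightly bulk refresh could look like with the MongoDB C# driver, upserting catalog documents produced by the point-in-time load; the collection and field names are illustrative rather than the actual schema.

using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

var catalog = new MongoClient("mongodb://localhost:27017")
    .GetDatabase("retail")
    .GetCollection<BsonDocument>("catalog");

// build one upsert per SKU from the nightly point-in-time load (documents are illustrative)
var refresh = new List<WriteModel<BsonDocument>>
{
    new ReplaceOneModel<BsonDocument>(
        Builders<BsonDocument>.Filter.Eq("sku", "12345"),
        new BsonDocument { { "sku", "12345" }, { "price", 19.99 }, { "inventory", 42 } })
    { IsUpsert = true }
};
catalog.BulkWrite(refresh, new BulkWriteOptions { IsOrdered = false });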
The catalog initially stored one document per SKU per store. If there are a hundred million items in a thousand stores, that results in a hundred billion entries. Instead, the documents became one each for a specific key/store grouping of SKUs. This improves geo-distribution and results in a lower number of documents. In order to keep reads/writes local with low latency, in an architecture where writes go to every shard for every region, writes were streamlined to go to their respective regions with the help of tags.
Even the shopping cart is modeled as a single document. 

#codingexercise
We talked about finding the largest number from the digits whose product equals the number.
We said we can try different digits from 2 to 9 in increasing order to find the divisors of the number and their repetitions, and we take as many of the divisors as possible (a sketch follows below). In increasing order of digits, this translates to (digit1, repetition1), (digit2, repetition2), ... pairs. The ones with the higher repetitions are included in the final solution.
Each digit, repetition pair is either included in the final solution or not. Consequently we can write a dynamic program that recursively tries both paths and returns the count of the sub-problem plus the repetitions from the current choice, or merely the count from the sub-problem, as the case may be.
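A minimal standalone sketch of this approach for the largest such number, assuming the greedy reading of the description above (smaller digits first to maximize how many digits are collected, then the digits arranged in descending order); it does not include the reuse of factor results across consecutive inputs.

using System;
using System.Collections.Generic;

static class DigitProduct
{
    // returns the largest number whose digits multiply to n, or -1 if impossible
    public static long LargestWithDigitProduct(int n)
    {
        if (n == 1) return 1;
        var digits = new List<int>();
        // smaller divisors first maximizes how many digits we collect
        for (int d = 2; d <= 9; d++)
        {
            while (n % d == 0) { digits.Add(d); n /= d; }
        }
        if (n != 1) return -1; // a leftover prime factor above 9 cannot be a digit
        digits.Sort((a, b) => b.CompareTo(a)); // larger digits to the front
        long result = 0;
        foreach (var d in digits) result = result * 10 + d;
        return result;
    }
}

For example, LargestWithDigitProduct(100) returns 5522 because 5 x 5 x 2 x 2 = 100.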

An essay on personal assistant: https://1drv.ms/w/s!Ashlm-Nw-wnWsUKrRyxwuyYPFiz5

Friday, October 20, 2017

We started looking at storing catalogs in MongoDB. The catalog often gets fragmented, processes become redundant, and an ETL pipeline and a message bus start getting added, which leads to inefficiencies. MongoDB overcomes this with a single unified view and one central service that provides not only the catalog but also a search engine over the catalog. In this way, the catalog becomes one central service with a flexible schema, high read volume, write-spike tolerance during catalog updates, and comes with advanced indexing and querying as well as geographical distribution for HA and low latency.
MongoDB reported that the traditional relational store for a catalog suffered from the following issues:
1) The field inventory is a local view only until it makes its way to the central store.
2) The relational store involves a once-a-day sync or some other periodic refresh
3) Stale views are served until the refresh happens which is often not fast enough for consumers.
4) The stale view interferes with analytics and aggregation reports.
5) Downstream internal and external apps have to work around the delays and stale views with sub-optimal logic.

In this sense, MongoDB tries to provide real-time inventory which is used directly by most services and channels. It even provides local real-time writes while supporting a read-dominated workload. Geographical distribution and horizontal scalability help improve performance.
#codingexercise

Yesterday we were discussing a method to find the smallest number k such that the product of the digits of k is equal to n. We said we could do this by finding the digits from 1 to 9 that can divide the number n and listing them, sorted in increasing order with repetitions, such that the product can be formed. By trying the digits in decreasing order from 9 to 1 we form the numbers with fewer digits first.
The same solution works similarly if we want to find the largest number k instead of the smallest number. In this case we try to maximize the number of digits in the final solution and therefore we order the digits to try from smallest to biggest.
Moreover, we can also extend this to consecutive integers or multiples by avoiding redundant calculations as before. For example, we could re-use the results from the factors computed earlier using memoization. Instead of trying all combinations of the factors to see which one gives the largest number, we can improve on that by choosing an increasing order of factors and then working with their results, so that we know we will get the largest number first. Since the count of digits matters more than their values, we can use the results from the factors. For example,
100 = 4 x 25
100 = 4 x 5 x 5
100 = 10 x 10 
100 = 2 x 5 x 2 x 5
While an individual digit can contribute at most one digit to the answer, a factor can contribute multiple digits. We can choose the factors that contribute the most digits and their repetitions in the final solution. In this case we can skip re-deriving the answer digit by digit and look up the memoization table for the factors. Here we could directly choose the result as f(100) = f(2) x f(50).
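One hedged way to realize this reuse is to memoize, per value, the best digit multiset over the quotients, so that the answer for 100 can be assembled from results already computed for its factors such as 50; the type and method names are illustrative.

using System.Collections.Generic;
using System.Linq;

static class DigitProductMemo
{
    static readonly Dictionary<int, List<int>> memo = new() { [1] = new List<int>() };

    // best (longest) digit multiset whose product is n, or null if impossible
    public static List<int> BestDigits(int n)
    {
        if (memo.TryGetValue(n, out var cached)) return cached;
        List<int> best = null;
        for (int d = 2; d <= 9; d++)
        {
            if (n % d != 0) continue;
            var rest = BestDigits(n / d);      // reuse the result for the factor n/d
            if (rest == null) continue;
            var candidate = new List<int>(rest) { d };
            if (best == null || candidate.Count > best.Count) best = candidate;
        }
        memo[n] = best;
        return best;
    }

    public static long Largest(int n)
    {
        var digits = BestDigits(n);
        if (digits == null) return -1;
        return digits.OrderByDescending(d => d).Aggregate(0L, (acc, d) => acc * 10 + d);
    }
}

With the memo warmed by earlier inputs, Largest(100) reuses the digit multisets already computed for 50, 25, and 5, and again yields 5522.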