Sunday, October 22, 2017

We were discussing MongoDB. Their catalog initially stored one document per SKU per store. If there are a hundred million items in a thousand stores, that results in a hundred billion entries. Instead, the documents were restructured to one per SKU per group of stores, identified by a grouping key. This improves geo-distribution and results in far fewer documents. In order to keep reads/writes local with low latency in an architecture where writes go to every shard for every region, writes were streamlined to go to their respective regions with the help of tags.
Even the shopping cart is modeled as a single document. In order to improve availability, inventory was sharded by region. The inventory was stored with a geocode, with the option to make it more granular by using store IDs instead of geocodes. The shopping cart is also made more available by replication across regions, so that writes stay local even as the shopping cart moves between regions. Each shard has one replica per region, and the primary servers are distributed across regions. The shopping carts are likewise sharded by geocode or, optionally, by store ID.
User activities are logged for insight. These include search terms, items viewed or wish-listed, cart additions and removals, orders submitted, sharing on social networks, and impression or clickstream events. The data is used to compute user and product history, product maps, user preferences, recommendations, and trends.
Gaining insight from logs has proven difficult. Log files are usually forgotten, and the user activity they record is voluminous. When the activity is recorded in a data warehouse, it provides good reporting but is expensive to scale. NoSQL scales and reports well, but there is still no data store for real-time queries and aggregations from applications. Moreover, there are delays in moving logs to the log processor, and further delays in processing data from the log processor into analytics. From the analytics, the output is constrained by a schema before it is put in a relational store, and from that store the application has only limited read capability. Instead of using a time-series database, MongoDB improves the activity store and reporting right out of the box.
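As a rough illustration of querying the activity store directly, here is a sketch using the MongoDB .NET driver; the connection string, collection name (userActivity), and field names (type, userId) are all hypothetical.

using System;
using MongoDB.Bson;
using MongoDB.Driver;

class ActivityReport
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var activity = client.GetDatabase("retail")
                             .GetCollection<BsonDocument>("userActivity");

        // Top ten users by number of search events, computed directly
        // on the activity store with the aggregation pipeline.
        var topSearchers = activity.Aggregate()
            .Match(Builders<BsonDocument>.Filter.Eq("type", "search"))
            .Group(new BsonDocument { { "_id", "$userId" },
                                      { "count", new BsonDocument("$sum", 1) } })
            .Sort(new BsonDocument("count", -1))
            .Limit(10)
            .ToList();

        foreach (var doc in topSearchers)
            Console.WriteLine(doc);
    }
}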
#codingexercise
Yesterday we talked about finding the largest number formed from digits whose product equals a given number.
For a single standalone number, we determined the answer using digits from 2 to 9 and their repetitions, omitting 1, with dynamic programming.
When we have an increasing contiguous sequence of inputs, we said we could reuse the results from the factors. Specifically, we order the factors by their digit counts and use the ones that maximize the count. The combinations of factors that produce the product need not all be enumerated; we just take as many of those with higher counts as necessary to make the product.
Another way to think about this is to use the same dynamic programming as for the digits, by recursively checking whether the inclusion or exclusion of a factor returns a higher count of digits. We bail when the product is reduced to 1 (success) or can no longer be divided (failure). For inclusion we add the factor's contribution to the recursion; for exclusion we return the recursion unchanged.
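Here is a minimal sketch of that memoized recursion in C#; the names are illustrative, and it computes only the maximum digit count, from which the number itself follows by writing out the matching digits in decreasing order (every digit is at least 2, so more digits always means a larger number).

using System;
using System.Collections.Generic;

class LargestDigitProduct
{
    static readonly Dictionary<int, int> memo = new Dictionary<int, int>();

    // Maximum count of digits in 2..9 whose product is n, or -1 if impossible.
    static int MaxDigits(int n)
    {
        if (n == 1) return 0;                    // product reached: success
        if (memo.TryGetValue(n, out int cached)) return cached;

        int best = -1;                           // -1 means the product cannot be met
        for (int d = 2; d <= 9; d++)
        {
            if (n % d != 0) continue;
            int sub = MaxDigits(n / d);          // include digit d and recurse on the rest
            if (sub >= 0 && sub + 1 > best)
                best = sub + 1;                  // exclusion is the fall-through case
        }
        memo[n] = best;
        return best;
    }

    static void Main() => Console.WriteLine(MaxDigits(100));   // prints 4
}

For example, MaxDigits(100) returns 4, corresponding to the digits 2, 2, 5, 5 and the answer 5522.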

#implementations of K-means: https://1drv.ms/w/s!Ashlm-Nw-wnWsWbQF0tucrJAUlGz 

Saturday, October 21, 2017

MongoDB reported that the traditional relational store for a catalog suffered from the following issues:
1) The field inventory is a local view only until it makes its way to the central store.
2) The relational store involves a once-a-day sync or something similarly periodic
3) Stale views are served until the refresh happens which is often not fast enough for consumers.
4) The stale view interferes with analytics and aggregations reports.
5) Downstream internal and external apps have to work around the delays and stale views with sub-optimal logic.
MongoDB overcomes all of these and organizes the catalog into Item, Variant, Price, Hierarchy, Facet, and Vendor documents in a document store. It then allows applications to search for data via pre-joined objects in a cache, or via indexing through a search engine built on the Lucene/Solr architecture.
Applications generally want a central service with a single view of the catalog. It is used by most services and back channels. Moreover, they want this inventory to be in sync in real time; therefore they expect their writes to be local and real-time in a read-dominated workload. That said, refreshes take the form of bulk writes via point-in-time loads, and bulk inserts work well with relational databases. Therefore MongoDB's document data store gets synced with the relational data store periodically, say nightly. The real-time updates and views all happen on the document store only and don't reach the relational store. Similarly, analytics and aggregations pull data from the document store.
The catalog initially stored one document per SKU per store. If there are a hundred million items in a thousand stores, that results in a hundred billion entries. Instead, the documents were restructured to one per SKU per group of stores, identified by a grouping key. This improves geo-distribution and results in far fewer documents. In order to keep reads/writes local with low latency in an architecture where writes go to every shard for every region, writes were streamlined to go to their respective regions with the help of tags.
Even the shopping cart is modeled as a single document. 

#codingexercise
We talked about finding the largest number formed from digits whose product equals a given number.
We said we can try different digits from 2 to 9 in increasing order to find the divisors of the number and their repetitions, and we take as many of the divisors as possible. In increasing order of digits, this translates to (digit1, repetition1), (digit2, repetition2), ... pairs. The ones with the higher repetitions are included in the final solution.
Each digit-repetition pair is either included in the final solution or not. Consequently, we can write a dynamic program that recursively tries both paths and returns either the count of the sub-problem plus the repetitions from the current choice, or merely the count from the sub-problem, as the case may be.
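For example, for n = 100 the pairs are (2, 2) and (5, 2), since 100 = 2 x 2 x 5 x 5. Including both pairs yields the four digits 2, 2, 5, 5, and arranging them in decreasing order gives 5522, the largest such number; the alternative that includes the pair (4, 1) yields only the three digits 4, 5, 5 and the smaller answer 554.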

An essay on personal assistant: https://1drv.ms/w/s!Ashlm-Nw-wnWsUKrRyxwuyYPFiz5

Friday, October 20, 2017

We started looking at storing catalogs in MongoDB. The catalog often gets fragmented, processes become redundant, and ETL jobs and message buses start getting added, which leads to inefficiencies. MongoDB overcomes this with a single unified view and one central service that provides not only the catalog but also a search engine over it. In this way, the catalog becomes one central service with a flexible schema, high read volume, write-spike tolerance during catalog updates, advanced indexing and querying, and geographical distribution for HA and low latency.
MongoDB reported that the traditional relational store for a catalog suffered from the following issues:
1) The field inventory is a local view only until it makes its way to the central store.
2) The relational store involves a once-a-day sync or something similarly periodic
3) Stale views are served until the refresh happens which is often not fast enough for consumers.
4) The stale view interferes with analytics and aggregations reports.
5) Downstream internal and external apps have to work around the delays and stale views with sub-optimal logic.

In this sense, MongoDB tries to provide a real-time inventory that is used directly by most services and channels. It even provides local real-time writes while supporting a read-dominated workload. Geographical distribution and horizontal scalability further improve performance.

#codingexercise

Yesterday we were discussing a method to find the smallest number k such that the product of the digits of k equals n. We said we could do this by finding the digits from 1 to 9 that divide n and listing them in increasing order, with repetitions, such that their product forms n. By trying the digits in decreasing order from 9 to 1, we use as few digits as possible and therefore get the smaller numbers first.
The same solution works similarly if we want to find the largest number k instead of the smallest. In that case we try to maximize the number of digits in the final solution, and therefore we order the digits to try from smallest to biggest.
Moreover, we can extend this to consecutive integers or multiples by avoiding redundant calculations, as before. For example, we could reuse the results from factors computed earlier using memoization. Instead of trying all the combinations of factors to see which one gives the largest number, we can choose an increasing order of factors and work with their results, so that we obtain the largest number first. Since the count of digits matters more than their values, we can reuse the results from the factors. For example,
100 = 4 x 25
100 = 4 x 5 x 5
100 = 10 x 10 
100 = 2 x 5 x 2 x 5
While an individual digit can contribute at most one digit, a factor can contribute several. We can choose the factors that contribute the most digits, with their repetitions, in the final solution. In this case we can skip re-deriving the answer digit by digit and look up the memoization table for the factors. Here we could directly compose the result as f(100) = f(2) x f(50).
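For instance, f(2) contributes the single digit 2, while f(50) = 552 contributes the digits 5, 5, 2 (since 50 = 2 x 5 x 5). Merging the four digits and arranging them in decreasing order gives f(100) = 5522.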

Thursday, October 19, 2017

MongoDB provides all aspects of data storage for online retail stores. It can categorize data based on merchandising, content, inventory, customer, channel, sales and fulfilment, insight, and social. Of these, merchandising, inventory, customer, and insight are the most heavily used during the peak holiday sales season. In addition, supply chain management systems and data warehouses can also integrate well with this database.
We started looking at storing catalogs in MongoDB. The problem with the catalog, as with any master data management, is that the data becomes rigid. If the catalog is accessed more than 100 times per second, it causes performance issues. When other departments need to access it, ETL, message buses, and API services are put in place around this store. This results in fragmented data, redundant processes, heterogeneous data processing, and higher costs in both time and money. Even users begin to see degradation in page-load times. To patch up local performance, more caches, more message buses, and more ETL operations are added, which only complicates things. A single view of the product and one central service become harder to achieve. The purpose of the catalog is to form a single view of the product with one central service, a flexible schema, high read volume, write-spike tolerance during catalog updates, advanced indexing and querying, and geographical distribution for HA and low latency.
MongoDB is not a traditional MDM, but it addresses all the above requirements. A traditional Master Data Management system has a well-known organization of the catalog with support for hierarchy and dimensions. MongoDB organizes the catalog in the form of Items, Pricing, Promotions, Variants, and Ratings and Reviews. In JSON these appear as nested fields, pre-joined into objects. The objects live in the cache as well. A search engine provides search over the catalog, and functional data access is provided by the Product API. The API and the engine together cover all operations on the catalog. The API can then be used by downstream units such as the Online Store, Marketing, Inventory, SCMS, and other APIs. The search engine is built on the Lucene/Solr architecture. A Lucene index keeps track of terms and their occurrence locations, but the index needs to be rebuilt each time the catalog changes. The Product API can retrieve results directly from the catalog or via the search engine. In both cases, the customer only issues a single query.
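As a sketch of what such a pre-joined object might look like on the application side, the nested types below mirror the Item/Pricing/Variants/Ratings organization; all field names are illustrative and not taken from any particular schema.

using System.Collections.Generic;

// Hypothetical shape of a pre-joined catalog document: pricing, variants,
// and reviews are nested inside the item rather than joined at query time.
class CatalogItem
{
    public string ItemId { get; set; }
    public string Name { get; set; }
    public List<string> Hierarchy { get; set; }       // e.g. department > category
    public Price Pricing { get; set; }
    public List<Variant> Variants { get; set; }
    public List<Review> RatingsAndReviews { get; set; }
}

class Price   { public decimal Amount { get; set; } public string Currency { get; set; } }
class Variant { public string Sku { get; set; } public string Color { get; set; } public string Size { get; set; } }
class Review  { public int Rating { get; set; } public string Text { get; set; } }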
#codingexercise

Yesterday we were discussing a method to find the smallest number k such that the product of the digits of k equals n. We said we could do this by finding the digits from 1 to 9 that divide n and listing them in increasing order, with repetitions, such that their product forms n. By trying the digits in decreasing order from 9 to 1, we use as few digits as possible and therefore get the smaller numbers first.
We extended this to say that if we apply the above method repeatedly for consecutive integers or multiples, we do a lot of redundant calculation. Instead we could reuse the results from factors computed earlier. However, we also said that we have to try the combinations of factors to see which one gives the smallest number. We can improve on that by choosing a decreasing order of factors and working with their results, so that we obtain the smallest number first.
For example, 100 = 20 x 5 and 100 = 10 x 10.
We get the answer from the decreasing order of factors, as the worked example below shows.
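Continuing the example, the larger factorization gives smallest(20) = 45 and smallest(5) = 5; merging their digits 4, 5, 5 in increasing order gives 455. The alternative 100 = 10 x 10 gives smallest(10) = 25 twice, i.e. the digits 2, 2, 5, 5 and the larger number 2255, so the decreasing order of factors yields the smaller answer first.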

Wednesday, October 18, 2017

MongoDB provides all aspects of data storage for online retail stores. It can categorize data based on merchandising, content, inventory, customer, channel, sales and fulfilment, insight, and social. Of these, merchandising, inventory, customer, and insight are the most heavily used during the peak holiday sales season. In addition, supply chain management systems and data warehouses can also integrate well with this database.
We now look up some of these functions in turn. For example, a catalog is important for the online retail store to allow users to browse the inventory. The problem with the catalog, as with any master data management, is that the data becomes rigid. If the catalog is accessed more than 100 times per second, it causes performance issues. When other departments need to access it, ETL, message buses, and API services are put in place around this store. This results in fragmented data, redundant processes, heterogeneous data processing, and higher costs in both time and money. Even users begin to see degradation in page-load times. Another concern with the catalog is that many more catalogs get formed as it grows. This leads to more caches, more message buses, and more ETL operations. A single view of the product and one central service become harder to achieve. The purpose of the catalog is to form a single view of the product with one central service, a flexible schema, high read volume, write-spike tolerance during catalog updates, advanced indexing and querying, and geographical distribution for HA and low latency.
#codingexercise
Find the smallest number k such that the product of digits of k is equal to n
For example, if the input is 100, then the output is 455
                      if the input is 26, then the output is -1

We know that the smallest number possible must have its digits sorted in increasing order, with the smallest digit in the most significant position. Therefore, starting from 9 and going down to 2, we divide the number repeatedly and keep track of each digit and its repetitions for those that are divisors. We stop when we have exhausted the digits we started with or the number is reduced to 1. We store the digits in a stack because the sequence of digits and their repetitions needs to be reversed to form the answer. The termination condition is that n is reduced to 1; otherwise no answer exists.
using System.Collections.Generic;

int GetDivisors(int n)
{
    if (n >= 0 && n <= 9) return n;      // a single-digit product is its own answer
    var divisors = new Stack<int>();
    for (int i = 9; i >= 2 && n > 1; i--)
    {
        while (n % i == 0)
        {
            divisors.Push(i);            // largest digits are extracted first
            n = n / i;
        }
    }
    if (n != 1) return -1;               // a prime factor greater than 9 remains
    return GetNumberFormedFromDivisors(divisors);
}

int GetNumberFormedFromDivisors(Stack<int> divisors)
{
    // Popping the stack reverses the order, yielding the digits smallest-first.
    int result = 0;
    while (divisors.Count > 0)
        result = result * 10 + divisors.Pop();
    return result;
}
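For the examples above, GetDivisors(100) returns 455, while GetDivisors(26) returns -1 because 26 = 2 x 13 and the factor 13 cannot be formed from single-digit divisors.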
This method works well for a single input, but if we have a contiguous sequence of inputs we can reuse the results of previous computations by memoizing them in a dp table. The catch, however, is to get all the factor pairs and see which combination yields the smallest number.
For example
100 is 20 x 5
20 is already computed as 4 x 5

Tuesday, October 17, 2017

We were reviewing how MongoDB enables integration with social applications while also enabling data and service integration for the user data inflow, with APIs for data and services, during the peak annual sales season. At the heart of their store, they can categorize data based on merchandising, content, inventory, customer, channel, sales and fulfilment, insight, and social. Of these, merchandising, inventory, customer, and insight are the most heavily used during the peak holiday sales season. In addition, supply chain management systems and data warehouses also integrate well with this storage.
The information layer improves the experience with look and feel, navigation, customization, personalization, branding, promotion, chats, and ads. Each of these is a separate initiative. The customer's perspective helps us understand their context. For example, a user would like to browse and search the store online; this is part of his research. The shopping cart helps him select the items, and the checkout experience is streamlined towards purchase. The tracking of the item improves the receiving experience. There is a feedback loop, along with maintenance considerations, with the use of the item. The dialog with the customer is the focus of the assistance efforts. All these customer experiences translate to software interactions in two or three major categories on the enterprise side. For example, Market/Offer will require services separate from Sell/Fulfill, and Sell/Fulfill will require data services different from Insight. Note that the services within each category are easily enumerable and distinct from each other, while they share the same information-management backend, which is facilitated by MongoDB as a one-stop shop for information management specifically in the areas of merchandising, inventory, customer, and insight. Merchandising fulfils Market/Offer services such as guide, offer, semantic search, recommend, rule-based decisions, pricing, and coupons. Sell/Fulfill provides ways for orders, payments, fraud detection, fulfilment, and business rules. Insight provides ways for session capture and activity monitoring.
#codingexercise continued
Yesterday we described finding the largest submatrix in a binary matrix. 
We said the conventional way was to find connected components. Today we explain this:
int maxSize = 0;
int count = 0;    // number of connected components found
for (int i = 0; i < ROW; i++)
    for (int j = 0; j < COL; j++)
        if (A[i, j] == 1 && !visited[i, j])
        {
            int size = 0;
            count++;
            DFS(A, ROW, COL, i, j, ref visited, ref size);
            if (size > maxSize)
                maxSize = size;
        }
This can be modified to record the cells of the submatrix contributing to the maximum size. Given a cell in a connected component, the minimum values of the x and y coordinates over the component give the top-left corner of the bounding submatrix, and the maximum values give its bottom-right corner, as sketched below.
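Here is a sketch of such a DFS in C#, extended with extra ref parameters for the bounding corners; this extended signature is an assumption, and callers would initialize minR and minC to ROW and COL and maxR and maxC to -1 before the first call.

using System;

// DFS over a connected component of 1s; size counts the cells, while
// (minR, minC) and (maxR, maxC) accumulate the bounding submatrix corners.
static void DFS(int[,] A, int ROW, int COL, int i, int j, bool[,] visited,
                ref int size, ref int minR, ref int minC, ref int maxR, ref int maxC)
{
    if (i < 0 || i >= ROW || j < 0 || j >= COL) return;
    if (A[i, j] != 1 || visited[i, j]) return;

    visited[i, j] = true;
    size++;
    minR = Math.Min(minR, i); minC = Math.Min(minC, j);   // top-left corner
    maxR = Math.Max(maxR, i); maxC = Math.Max(maxC, j);   // bottom-right corner

    DFS(A, ROW, COL, i + 1, j, visited, ref size, ref minR, ref minC, ref maxR, ref maxC);
    DFS(A, ROW, COL, i - 1, j, visited, ref size, ref minR, ref minC, ref maxR, ref maxC);
    DFS(A, ROW, COL, i, j + 1, visited, ref size, ref minR, ref minC, ref maxR, ref maxC);
    DFS(A, ROW, COL, i, j - 1, visited, ref size, ref minR, ref minC, ref maxR, ref maxC);
}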

Monday, October 16, 2017

We were reviewing how MongoDB enables integration with social applications while also enabling data and service integration for the user data inflow, with APIs for data and services, during the peak annual sales season. At the heart of their store, they can categorize data based on merchandising, content, inventory, customer, channel, sales and fulfilment, insight, and social. Of these, merchandising, inventory, customer, and insight are the most heavily used during the peak holiday sales season. In addition, supply chain management systems and data warehouses also integrate well with this storage.
The information layer improves experience with look and feel, Navigation, customization, personalization, branding, promotion, chats and ads.


#codingexercise
Maximize the count of 1s in a binary matrix by flipping any submatrix exactly once. 

The usual way of answering this is to find the largest connected component of zeroes by breadth-first traversal over adjacent cells, and then find the top-left and bottom-right inclusive boundaries from the min and max of the coordinates. 

But here is a twist, courtesy of an online solution: 
Step 1. Calculate a matrix the same size as the input whose values are counts of 1s in the submatrix starting at (0,0) and ending with that cell as bottom-right. Let us call this the 'ones' matrix. 
Step 2. Exhaustively iterate over all k x k submatrices with (i,j) as top-left and (i+k-1, j+k-1) as bottom-right, and determine the number of ones after a flip of each submatrix. Let us call the count of ones in the chosen submatrix 'cal'. The flipped submatrices are square, with i and j ranging over all positions that keep (i+k-1, j+k-1) within the input matrix. We also count the ones of the whole input matrix. For now, let us follow the steps. 
Step 3. Within each iteration, the number of ones in the whole input matrix after the flip comes out to be: 
                  ones[R,C] - ones in the chosen submatrix + zeroes in the chosen submatrix 
               = ones[R,C] - cal(i, j, i+k-1, j+k-1) + (k x k - cal(i, j, i+k-1, j+k-1)) 
               = ones[R,C] + k x k - 2 x cal(i, j, i+k-1, j+k-1) 
Step 4. From the value computed in step 3 for each iteration, we determine the max found so far and return it as the final result.
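Putting the four steps together, here is a sketch in C# for a fixed flip size k; it assumes 0-based input, k <= min(R, C), and illustrative names throughout.

// Maximum count of 1s in the whole matrix after flipping one k x k submatrix.
static int MaxOnesAfterFlip(int[,] A, int R, int C, int k)
{
    // Step 1: ones[i, j] holds the count of 1s in A[0..i-1, 0..j-1].
    var ones = new int[R + 1, C + 1];
    for (int i = 1; i <= R; i++)
        for (int j = 1; j <= C; j++)
            ones[i, j] = A[i - 1, j - 1] + ones[i - 1, j] + ones[i, j - 1] - ones[i - 1, j - 1];

    int total = ones[R, C];    // ones in the whole matrix before any flip
    int best = int.MinValue;

    // Step 2: iterate every k x k submatrix, identified by its bottom-right (i, j).
    for (int i = k; i <= R; i++)
        for (int j = k; j <= C; j++)
        {
            // cal = ones inside the chosen submatrix, by inclusion-exclusion.
            int cal = ones[i, j] - ones[i - k, j] - ones[i, j - k] + ones[i - k, j - k];
            // Step 3: ones in the whole matrix after flipping this submatrix.
            int after = total + k * k - 2 * cal;
            // Step 4: track the maximum over all flips.
            if (after > best) best = after;
        }
    return best;
}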