We started looking at storing catalogs in MongoDB. A catalog often becomes fragmented, processes turn redundant, and ETL jobs and message buses get bolted on, all of which leads to inefficiencies. MongoDB addresses this with a single unified view and one central service that provides not only the catalog but also a search engine over it. The catalog then becomes one central service with a flexible schema, tolerance for high read volumes and for write spikes during catalog updates, advanced indexing and querying, and geographical distribution for high availability and low latency.
MongoDB reported that a traditional relational store for the catalog suffered from the following issues:
1) The field inventory remains a local view until it makes its way to the central store.
2) The relational store relies on a once-a-day or otherwise periodic sync.
3) Stale views are served until the refresh happens, which is often not fast enough for consumers.
4) The stale views interfere with analytics and aggregation reports.
5) Downstream internal and external applications have to work around the delays and stale views with sub-optimal logic.
In this sense, MongoDB aims to provide real-time inventory that is consumed directly by most services and channels. It allows local real-time writes while supporting a read-dominated workload, and wider geographical distribution together with horizontal scalability further improves performance.
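As a rough illustration of the unified view, here is a minimal pymongo sketch of a catalog document with embedded inventory, an index, and a direct read. The collection, field names, and index choices are my own assumptions for illustration, not a schema from MongoDB's report.

# Minimal sketch of a unified catalog + inventory document in MongoDB.
# Collection, field names, and index choices are illustrative assumptions.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
catalog = client["retail"]["catalog"]

# One flexible-schema document per item, with store-level inventory embedded
# so reads get a single unified view instead of a delayed sync.
catalog.insert_one({
    "sku": "SKU-1001",
    "title": "Trail Running Shoe",
    "attributes": {"brand": "Acme", "color": "blue", "size": 10},
    "inventory": [
        {"store": "NYC-01", "qty": 12},
        {"store": "SFO-02", "qty": 3},
    ],
})

# Secondary indexes support the high-read, query-heavy workload.
catalog.create_index([("sku", ASCENDING)])
catalog.create_index([("inventory.store", ASCENDING)])

# A field update lands directly in the central view (no periodic batch sync).
catalog.update_one(
    {"sku": "SKU-1001", "inventory.store": "NYC-01"},
    {"$inc": {"inventory.$.qty": -1}},
)

# Downstream apps read the current state rather than a stale snapshot.
doc = catalog.find_one({"sku": "SKU-1001"}, {"_id": 0, "inventory": 1})
print(doc)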
#codingexercise
Yesterday we were discussing a method to find the smallest number k such that the product of the digits of k equals n. We said we could do this by finding the digits from 1 to 9 that divide n and listing them, sorted in increasing order and with repetitions, so that their product equals n. By trying the digits in decreasing order, from 9 down to 1, we obtain the numbers with the fewest digits first.
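A minimal Python sketch of this greedy approach (the function name and structure are mine, for illustration):

def smallest_with_digit_product(n):
    # Smallest number whose digits multiply to n, or None if impossible.
    # Greedy: peel off the largest single-digit factors first so the result
    # has as few digits as possible, then arrange them in increasing order.
    if n == 1:
        return 1
    digits = []
    for d in range(9, 1, -1):
        while n % d == 0:
            digits.append(d)
            n //= d
    if n != 1:           # leftover prime factor > 9 (e.g. 13): no answer
        return None
    digits.sort()        # increasing order gives the smallest arrangement
    return int("".join(map(str, digits)))

print(smallest_with_digit_product(100))  # 455  (4 * 5 * 5 = 100)
print(smallest_with_digit_product(26))   # None (13 is not a product of digits)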
The same approach works if we want the largest number k instead of the smallest. In that case we try to maximize the number of digits in the final answer, so we try the digits in order from smallest to largest.
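The largest-number variant changes only the order in which digits are tried and the final arrangement. A sketch, assuming digits equal to 1 are excluded since they could otherwise be appended without bound:

def largest_with_digit_product(n):
    # Largest number (using digits 2-9 only) whose digits multiply to n.
    # Greedy: peel off the smallest single-digit factors first to maximize
    # the digit count, then arrange the digits in decreasing order.
    if n == 1:
        return 1
    digits = []
    for d in range(2, 10):
        while n % d == 0:
            digits.append(d)
            n //= d
    if n != 1:
        return None
    digits.sort(reverse=True)  # decreasing order gives the largest arrangement
    return int("".join(map(str, digits)))

print(largest_with_digit_product(100))  # 5522  (2 * 2 * 5 * 5 = 100)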
Moreover, we can extend this to consecutive integers or to multiples by avoiding redundant calculations, as before. For example, we could reuse results computed earlier for the factors by memoizing them. Instead of trying every combination of factors to see which one gives the largest number, we can choose an increasing order of factors and work with their cached results, so that the largest number is found first. Since the count of digits matters more than their values, we can build on the results for the factors. For example,
100 = 4 x 25
100 = 4 x 5 x 5
100 = 10 x 10
100 = 2 x 5 x 2 x 5
While an individual digit contributes at most one digit to the answer, a factor can contribute several. We can choose the factors that contribute the most digits, with their repetitions, to the final solution. In this case we can skip re-deriving the answer digit by digit and instead look up the memoization table for the factors; here we could directly compose the result as f(100) = f(2) x f(50).
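One way to read this memoization idea is the following sketch, which caches the best digit multiset for every factor so that a result such as f(100) is assembled from cached results like f(50) and f(25). This is my interpretation of the example, not code from the original discussion.

from functools import lru_cache

@lru_cache(maxsize=None)
def largest_digits(n):
    # Digit tuple (in decreasing order) with the most digits whose product
    # is n, reusing cached results for the factors; None if no such digits.
    if n == 1:
        return ()
    best = None
    for d in range(2, 10):                  # single-digit factors of n
        if n % d == 0:
            rest = largest_digits(n // d)   # memoized result for the factor
            if rest is not None:
                candidate = tuple(sorted(rest + (d,), reverse=True))
                if (best is None or len(candidate) > len(best)
                        or (len(candidate) == len(best) and candidate > best)):
                    best = candidate
    return best

# f(100) is assembled from cached factor results such as f(50), f(25), f(5).
print(int("".join(map(str, largest_digits(100)))))  # 5522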