Wednesday, March 24, 2021

The design of a management suite over MDM   (continued ...)

 Role of MDM: 

MDM is an enhanced suite of Product Information Management (PIM) web services. Functionality is provided in layers of information management: print/translation workflows at the bottom layer; workflow and security controls for access to the assets, their editing, insertions, and bulk insertions; an integration layer with flexible capabilities for imports/exports, full and partial data exports, data pools, and platform portals; a digital asset management layer for asset on-boarding and delivery to channels; and lastly, a data management layer for searches, saved searches, channel-based or localized content, and the ability to author variants, categories, attributes, and relationships to stored assets.
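
As a rough sketch of how these layers might be separated in code, the Python outline below stacks them as thin interfaces. The class and method names are illustrative only and do not come from any particular MDM product.

from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Asset:
    """A catalog asset with flexible, schema-less attributes."""
    asset_id: str
    attributes: dict = field(default_factory=dict)


class DataManagementLayer(Protocol):
    """Top layer: searches, saved searches, variants, categories, relationships."""
    def search(self, query: str) -> list[Asset]: ...
    def author_variant(self, asset_id: str, variant_attrs: dict) -> Asset: ...


class DigitalAssetLayer(Protocol):
    """Asset on-boarding and delivery to channels."""
    def onboard(self, raw_bytes: bytes, metadata: dict) -> Asset: ...
    def deliver(self, asset_id: str, channel: str) -> None: ...


class IntegrationLayer(Protocol):
    """Imports/exports, data pools, and platform portals."""
    def export_full(self) -> list[Asset]: ...
    def import_bulk(self, assets: list[Asset]) -> None: ...


class WorkflowSecurityLayer(Protocol):
    """Access control plus editing, insertion, and bulk-insertion workflows."""
    def can_edit(self, user: str, asset_id: str) -> bool: ...


class PrintTranslationLayer(Protocol):
    """Bottom layer: print and translation workflows."""
    def translate(self, asset_id: str, locale: str) -> Asset: ...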

Some MDMs do not require the catalog to be in a single domain. They have been long-standing players, and their products have matured with more powerful collaborative and operational capabilities. The trouble with MDM users is that they still prefer to use MS Excel. This introduces ETL-based workflows and siloed views of data. Materialized views don't help because they are not refreshed in time. Moreover, any separation of stages in data manipulation introduces human errors and inconsistencies, in addition to delays in reaching the data.
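
To illustrate the staleness problem with exported snapshots, here is a minimal, purely hypothetical Python example: the live store and a nightly export used by a downstream Excel/ETL workflow drift apart between refreshes, so the downstream view is wrong until the next export runs.

from datetime import datetime

# Illustrative data only: a live record updated this morning versus a
# snapshot exported for a downstream spreadsheet the night before.
live_store = {"SKU-1": {"price": 19.99, "updated": datetime(2021, 3, 24, 9, 0)}}
snapshot   = {"SKU-1": {"price": 18.99, "exported": datetime(2021, 3, 23, 23, 0)}}

def is_stale(sku: str) -> bool:
    """A snapshot row is stale if the live record changed after the export."""
    return live_store[sku]["updated"] > snapshot[sku]["exported"]

print(is_stale("SKU-1"))  # True: downstream consumers still see yesterday's price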


The unification:  

Together, Master Data Management and the storage engine provide an out-of-the-box as well as customizable solution that targets all aspects of an online product catalog. The data is available for retrieval and iteration directly out of the store, while hierarchy and dimensions may be maintained by a web service over the Object Storage.
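
A minimal sketch of this arrangement, assuming an S3-compatible object store accessed with boto3, product records stored as JSON objects, and a bucket/key layout that is purely illustrative:

import json
import boto3  # assumes an S3-compatible object store is available

s3 = boto3.client("s3")
BUCKET = "product-catalog"  # hypothetical bucket name

def get_product(sku: str) -> dict:
    """Retrieve a single product record directly out of the object store."""
    obj = s3.get_object(Bucket=BUCKET, Key=f"products/{sku}.json")
    return json.loads(obj["Body"].read())

def list_category(category: str) -> list[str]:
    """Hierarchy is kept as key prefixes that a thin web service can expose
    without a separate database."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"categories/{category}/")
    return [item["Key"] for item in resp.get("Contents", [])]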

The problem with a catalog, as with any master data management, is that the data becomes rigid. At high read rates (say, more than a hundred requests per second), performance begins to suffer. When other departments need access, ETL jobs, message buses, and API services are put in place around the store. This results in fragmented data, redundant processes, heterogeneous data processing, and higher costs in both time and money. Even end users begin to see degradation in page load times. To fix the local performance, more caches, more message buses, and more ETL operations are added, which only complicates matters, and a single view of the product behind one central service becomes harder to achieve. The purpose of the catalog is to provide exactly that: a single view of the product with one central service, a flexible schema, high read volume, tolerance for write spikes during catalog updates, advanced indexing and querying, and geographical distribution for high availability and low latency.
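
As one way to picture the flexible-schema and indexing requirements, the sketch below uses MongoDB via pymongo as an example document store; the connection string, database name, and field names are assumptions, not a prescription for any specific product.

from pymongo import MongoClient, ASCENDING

# Illustrative only: connection string, database, and field names are assumptions.
client = MongoClient("mongodb://localhost:27017")
products = client["catalog"]["products"]

# Flexible schema: two products can carry entirely different attribute sets.
products.insert_one({"sku": "SKU-1", "title": "Lamp", "attributes": {"wattage": 60}})
products.insert_one({"sku": "SKU-2", "title": "Shirt", "attributes": {"size": "M", "color": "blue"}})

# Secondary indexes serve the high-read-volume query patterns directly,
# instead of routing reads through extra caches and ETL copies.
products.create_index([("sku", ASCENDING)], unique=True)
products.create_index([("attributes.color", ASCENDING)])

print(products.find_one({"attributes.color": "blue"}, {"_id": 0}))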
