Saturday, October 31, 2020

User Identity Integration with Organizations for reward points services

Reward points mean nothing without an owner. The owner is most likely an employee of a business partnering with the reward points accumulation and redemption service. The service delivers a comprehensive one-stop shop for reward points and recognition programs, which makes it easy and consistent for businesses to sign up for and follow the program. It also lets the reward points service be truly global and offered as software-as-a-service.
Identity and access management (IAM) is a critical requirement for this service; without it, the owner reference cannot be resolved. But this identity and access management does not need to be rewritten as part of the service's offering. An identity cloud is a good foundation for IAM integration, and existing membership providers across participating businesses can safely be resolved with this solution.
The use of a leading IAM provider for the identity cloud also helps with the integration of on-premises applications. The translation of the owner to the owner_id required by the rewards service is automatically resolved by referencing the identifier issued by the cloud IAM. Since the IAM is consistent and accurate, the mappings are direct and accurate. The user does not need to sign in to generate the owner_id; it can be resolved through the integration of the organization's on-premises membership provider, such as Active Directory, with the identity cloud. Since enterprise applications are expected to integrate with Active Directory or an IAM provider, the identity cloud can be considered global for this purpose. Identities from different organizations will need to be isolated from each other in the identity cloud, and the same convention can be leveraged by the reward points service.
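
As a sketch of how that resolution might look (the function name and the hashing scheme here are illustrative assumptions, not a prescribed mapping):

```python
import hashlib

def resolve_owner_id(iam_subject: str, org_id: str) -> str:
    """Map an IAM-issued subject to the rewards service owner_id.

    The org_id namespaces the identity so that identities from
    different organizations remain isolated, mirroring the isolation
    convention in the identity cloud.
    """
    # A stable, deterministic owner_id: no sign-in is required, since the
    # mapping is derived entirely from the IAM-issued identifiers.
    return hashlib.sha256(f"{org_id}:{iam_subject}".encode()).hexdigest()

# The same subject in two organizations resolves to two isolated owners.
assert resolve_owner_id("jdoe", "acme") != resolve_owner_id("jdoe", "globex")
```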

The difference between a B2B and a B2C reward points service stands out when the user does not have to register with yet another portal just for rewards. With the integration of enterprise-wide IAM with an identity cloud provider and the availability of attributes via SAML, the mapping of users to their reward points becomes automatic. Together with the automatic collection of user activities from a variety of enterprise-deployed applications, the reward points become increasingly significant and appealing to use.


Friday, October 30, 2020

Writing debit and credit as a service

 

Problem statement: How should a reward points redeeming service be implemented, given that debits and credits typically occur in separate services?


Solution: We assume the reward points balance is maintained accurately in the data layer. The services that accumulate to and redeem from this balance update the same source of truth. Both services recognize the same owner for the reward points because they use the same identity cloud. The redeeming service focuses on making one or more debits against the reward points balance. As long as the reward points store handles debits and credits consistently, the granularity of the debits against the balance does not matter.

This redeeming service simply creates, updates, and deletes a debit entry record in the ledger holding the reward points, as long as the rewards added to the cart by the cart service can be honored. This is a simple online transaction processing system that brings database guarantees to the ledger.
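
A minimal sketch of such a debit, using SQLite to stand in for the ledger store (the schema and function names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ledger (
        id     INTEGER PRIMARY KEY,
        owner  TEXT NOT NULL,
        amount INTEGER NOT NULL   -- credits positive, debits negative
    );
""")

def debit(owner: str, points: int) -> int:
    """Create a debit entry only if the balance can honor the cart."""
    with conn:  # one transaction: the database guarantees apply to the ledger
        (balance,) = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM ledger WHERE owner = ?",
            (owner,),
        ).fetchone()
        if balance < points:
            raise ValueError("insufficient reward points")
        cur = conn.execute(
            "INSERT INTO ledger (owner, amount) VALUES (?, ?)",
            (owner, -points),
        )
        return cur.lastrowid

with conn:  # seed a credit from the accumulating service
    conn.execute("INSERT INTO ledger (owner, amount) VALUES ('owner-1', 100)")
debit_id = debit("owner-1", 40)   # balance is now 60
```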

The database should have change data capture and some form of audit trail. This is essential to the redeeming service because it enables the detection of misuse. A debit should only be processed after all the security measures have passed.
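
Continuing the sketch above, an append-only audit table populated by triggers approximates change data capture; a real deployment would rely on the database's native CDC or audit features instead:

```python
# Every insert or delete on the ledger leaves a row in the audit trail,
# so debits can be reviewed after the fact for signs of misuse.
conn.executescript("""
    CREATE TABLE ledger_audit (
        audit_id  INTEGER PRIMARY KEY,
        ledger_id INTEGER,
        action    TEXT,
        amount    INTEGER,
        at        TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TRIGGER audit_insert AFTER INSERT ON ledger
    BEGIN
        INSERT INTO ledger_audit (ledger_id, action, amount)
        VALUES (NEW.id, 'insert', NEW.amount);
    END;
    CREATE TRIGGER audit_delete AFTER DELETE ON ledger
    BEGIN
        INSERT INTO ledger_audit (ledger_id, action, amount)
        VALUES (OLD.id, 'delete', OLD.amount);
    END;
""")
```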

The redeeming service supports use cases for reversal. In such cases, the entry may simply be removed from the ledger; the guarantees from the ledger are sufficient to ensure consistency. In the long run, with expanded use cases, reward points may enable multiple stores and their carts to use the same service. In that case, a microservice architecture for redeeming reward points will be helpful.
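
In the sketch above, a reversal is then just the removal of the debit entry within a transaction (an append-only alternative would insert a compensating credit instead):

```python
def reverse(debit_id: int) -> None:
    """Reverse a redemption by removing its debit entry from the ledger."""
    with conn:
        conn.execute(
            "DELETE FROM ledger WHERE id = ? AND amount < 0", (debit_id,)
        )

reverse(debit_id)  # the owner's balance returns to 100
```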

The redeeming service can simply be written as a model-view-controller application with the debit as the model. The controller has actions for list, create, update, delete, and get by id. The list action supports ordered and paged entries as well as query parameters. The views are for administrative purposes.
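
A minimal controller sketch with Flask, reusing the conn, debit, and reverse helpers from the ledger sketch above (routes and parameter names are illustrative; update is analogous to create, and the module-level connection is shared only for brevity — a real service would use per-request connections):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/debits", methods=["GET"])
def list_debits():
    # Ordered, paged listing with query parameters.
    page = request.args.get("page", 1, type=int)
    size = request.args.get("size", 20, type=int)
    owner = request.args.get("owner")
    sql, args = "SELECT id, owner, amount FROM ledger WHERE amount < 0", []
    if owner:
        sql += " AND owner = ?"
        args.append(owner)
    sql += " ORDER BY id LIMIT ? OFFSET ?"
    args += [size, (page - 1) * size]
    rows = conn.execute(sql, args).fetchall()
    return jsonify([{"id": r[0], "owner": r[1], "points": -r[2]} for r in rows])

@app.route("/debits/<int:debit_id>", methods=["GET"])
def get_debit(debit_id):
    row = conn.execute(
        "SELECT id, owner, amount FROM ledger WHERE id = ?", (debit_id,)
    ).fetchone()
    if row is None:
        return "not found", 404
    return jsonify({"id": row[0], "owner": row[1], "points": -row[2]})

@app.route("/debits", methods=["POST"])
def create_debit():
    body = request.get_json()
    return jsonify({"id": debit(body["owner"], body["points"])}), 201

@app.route("/debits/<int:debit_id>", methods=["DELETE"])
def delete_debit(debit_id):
    reverse(debit_id)
    return "", 204
```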

The caller's session identifier is usually obtained as a hash of the session cookie and is provided by the authentication and authorization server. The session identifier can be requested as part of the login process or by a separate API call to a session endpoint. An active session removes the need to re-authenticate and provides a familiar end-user experience and functionality. The session can also be used with user-agent features or extensions that assist with authentication, such as a password manager or a two-factor device reader.
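
One common construction is sketched below; the exact derivation is defined by the authentication and authorization server:

```python
import hashlib

def session_identifier(session_cookie: str) -> str:
    """Derive a session identifier as a hash of the session cookie, so the
    cookie value itself never needs to be stored or logged by the service."""
    return hashlib.sha256(session_cookie.encode()).hexdigest()

sid = session_identifier("signed-session-cookie-value")
```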

There are options other than a build-your-own service. These options provide the ability to distribute a debit to multiple stores. For example, the Stripe Connect API allows routing reward points, like payments, between the owner and the recipients. Since the participating stores fulfill these orders independently, the reward points redeeming service concerns itself only with owners and their reward points.
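
A sketch of that pattern with Stripe Connect, assuming reward points are denominated as a currency amount; the API key and connected-account identifiers are placeholders:

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder

# One redemption fans out to multiple participating stores: transfers in
# the same transfer_group route portions of the debit to each recipient.
for store_account, amount in [("acct_store_a", 600), ("acct_store_b", 400)]:
    stripe.Transfer.create(
        amount=amount,              # points expressed in the smallest unit
        currency="usd",
        destination=store_account,  # the store's connected account
        transfer_group="ORDER_42",
    )
```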

 

Thursday, October 29, 2020

Network Engineering continued ...

 This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html 

  1. Random writes perform just as well as sequential writes on an SSD as long as the write sizes are comparable. If the writes are small and the random writes are numerous, performance may suffer.

  2. The use of a garbage collector sometimes interferes with the performance of a networking server. The garbage collector has to be tuned for the kind of workload (see the sketch after this list).

  3. In solid-state drives, as per descriptions online, there is a garbage collection process inside the SSD controller that resets dirty pages into renewed pages that can take new writes. It is important to know under which conditions this garbage collection might degrade performance. In the case of an SSD, a continuous workload of small random writes puts a lot of work on the garbage collection.

  4. A garbage collector finds it easier to collect aged data by levels. If there are generations in the pages to be reclaimed, the work is split for the garbage collector so that application operations can continue to perform well.

  5. Similarly, aggregated and large pages are easier for the garbage collector to collect than multiple spatially and temporally spread-out pages. If the pages can be bundled or allocated in clusters, the garbage collector can free them all at once when they are marked.

  6. Among the customizations for the garbage collector, it is helpful to see which garbage collector is being worked the most. The garbage collector closer to the application has far more leverage over allocations and deallocations than something downstream.
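
As a sketch of the tuning mentioned above, here is how the generational thresholds can be inspected and adjusted with Python's gc module; the numbers are illustrative and should come from profiling the actual workload:

```python
import gc

# See which generation the collector is working the most; there is one
# dict of counters per generation.
print(gc.get_stats())

# Raise the generation-0 threshold so bursts of short-lived allocations
# (for example, per-request buffers in a networking server) are collected
# in larger, less frequent batches. Defaults are (700, 10, 10).
gc.set_threshold(50_000, 25, 25)

# Long-lived startup objects can be moved out of the collector's way so
# they are never rescanned (available since Python 3.7).
gc.freeze()
```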

Wednesday, October 28, 2020

Network engineering continued ...

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

  1. Disks compete not only with other disks but also with other forms of storage such as solid-state drives. Consequently, disks tend to become cheaper, more capable, and smarter in their operations, with added value in emerging and traditional usages. Cloud storage costs have been said to follow a trend that asymptotically approaches zero, with the price today at about 1 cent per gigabyte per month for cold storage. The average cost per gigabyte per drive came down by half from 4 cents per gigabyte between 2013 and 2018.

  2. Solid-state drives are considered replacements for memory and the L1 and L2 caches, with added benefits. This may not necessarily be true: an SSD is storage, even if it does wear out. Consequently, programming should be more mindful of reads and writes to data and, if they are random, store those data structures on the SSD.

  3. The use of sequential data structures is very common in storage engineering. While some components go to great lengths to make their read and write access sequential, other components may simplify their design by storing on SSD.

  4. Reads and writes are aligned to the page size on solid-state drives, while erasures happen at the block level. Consequently, data organized in data structures can leverage these criteria for reading some or all of it at once. If we write less than a page frequently, we are not making good use of the SSD; buffering can aggregate the writes (see the sketch after this list).

  5. The internal caching and readahead mechanisms in the SSD controller prefer long continuous reads and writes rather than multiple simultaneous ones, and perform them in one large chunk. This means we should batch reads and writes across iterations and issue them all together.
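
A sketch of the buffering mentioned above: small writes are aggregated into page-sized chunks before reaching the device. The page size here is an assumption; the real value is device-specific:

```python
PAGE_SIZE = 4096  # a typical SSD page size; assumed for illustration

class PageBuffer:
    """Aggregate small writes into page-sized chunks before flushing,
    so the device sees page-aligned writes instead of many tiny ones."""

    def __init__(self, f):
        self.f = f
        self.buf = bytearray()

    def write(self, data: bytes) -> None:
        self.buf += data
        while len(self.buf) >= PAGE_SIZE:   # flush only full pages
            self.f.write(self.buf[:PAGE_SIZE])
            del self.buf[:PAGE_SIZE]

    def flush(self) -> None:
        if self.buf:
            # Pad the tail so every device write stays page-aligned.
            self.f.write(bytes(self.buf.ljust(PAGE_SIZE, b"\0")))
            self.buf.clear()

with open("journal.bin", "wb") as f:
    w = PageBuffer(f)
    for record in (b"entry-%d;" % i for i in range(10_000)):
        w.write(record)   # many small writes, few page-sized device writes
    w.flush()
```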

Tuesday, October 27, 2020

Network engineering continued

This is a continuation of the earlier posts starting with this one:  http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

Shared-memory systems have been popular. They include SMPs, multi-core systems, and combinations of both. The simplest way to use shared memory is to create threads in the same process. Shared-memory parallelism is widely used with big data.
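
A minimal sketch of shared-memory parallelism with threads in one process (note that CPU-bound Python threads are limited by the GIL, so the pattern is shown for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

counts = {}                  # shared memory: one dict visible to all threads
lock = threading.Lock()      # the single point of contention for updates

def tally(words):
    for w in words:
        with lock:           # protect the shared structure
            counts[w] = counts.get(w, 0) + 1

shards = [["alpha", "beta"], ["beta", "gamma"], ["alpha", "alpha"]]
with ThreadPoolExecutor(max_workers=3) as pool:
    pool.map(tally, shards)

print(counts)   # {'alpha': 3, 'beta': 2, 'gamma': 1}
```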

The shared-nothing model supports shared-nothing parallelism. When each node is independent and self-sufficient, there is no single point of contention: none of the nodes share memory or disk storage. Generally, these systems compete with any model that has a single point of contention in the form of shared memory or disk.

Shared-disk: this model is suited where a large storage space is needed. Some products implement shared-disk and some implement shared-nothing, but the two do not go together in the same code base.

The implementation of a content-distribution network, such as one for images or videos, generally translates to random disk reads, which means caching may not always help. Therefore, the RAIDed disks are tuned for this access pattern. The traditional layout was a monolithic RAID 10 served from a single master with multiple slaves. Nowadays a sharded approach is taken instead, preferably served from object storage.


Image and video libraries will constantly run into cache misses, especially with slow replication. It is better to separate traffic into different cluster pools; replication and caching come into the picture to handle the load. With a distribution across different cluster pools, we can spread the load and avoid the misses.


File systems may implement byte-range locking to enable concurrent access to different regions of a file. Byte-range locks are typically not honored by file-mapping operations. Poor use of file locks can result in performance issues or deadlock.
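
A sketch of POSIX byte-range locking (the fcntl module is POSIX-only; the file name and ranges are illustrative):

```python
import fcntl
import os

# Two processes can write disjoint regions of the same file concurrently
# by locking only the byte ranges they touch.
fd = os.open("shared.dat", os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, 8192)

# Exclusively lock bytes 0..4095; another process may still lock 4096..8191.
fcntl.lockf(fd, fcntl.LOCK_EX, 4096, 0)
os.pwrite(fd, b"region-one", 0)
fcntl.lockf(fd, fcntl.LOCK_UN, 4096, 0)   # release only that range
os.close(fd)
```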


Monday, October 26, 2020

Network engineering continued ..

This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html

  1. Serializability of objects enables their reconstruction at the remote destination. It is more than a protocol for packing and unpacking data on the wire: it includes constraints that enable data validation and help prevent failures down the line. If the serialization includes encryption, it becomes tamper-resistant.

  2. Serializability is also the notion of correctness when simultaneous updates happen to a resource. When multiple transactions commit their actions, the result corresponds to that of some serial execution of those transactions. This is very helpful for eliminating inconsistencies across transactions. It differs from isolation only in that the latter tries to achieve the same from the point of view of a single transaction.

  3. Databases were veritable storage systems that guaranteed transactions. Two-phase locking was introduced with transactions, where a shared lock is acquired before a read and an exclusive lock before a write. The two phases refer to growing (acquiring locks) and shrinking (releasing them). With transactions blocking on a wait queue, this was a way to enforce serializability.

  4. Transaction locking and logging proved onerous and complicated. Multi-version concurrency control was brought in so that locks need not be acquired. With a consistent view of the data as of some point of time in the past, we no longer need to keep track of every change made since the latest such point of time.

  5. Optimistic concurrency control was introduced to allow each transaction to maintain histories of its reads and writes, so that transactions causing isolation conflicts can be rolled back (see the sketch after this list).
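
A minimal sketch of the optimistic validation mentioned in the last item, using a per-record version number; the store and function names are illustrative:

```python
class ConflictError(Exception):
    pass

# A record with a version counter; readers remember the version they saw.
store = {"balance": {"value": 100, "version": 1}}

def read(key):
    rec = store[key]
    return rec["value"], rec["version"]

def commit(key, new_value, version_seen):
    """Validate at commit time: if another transaction committed since our
    read, the histories conflict and we roll back instead of writing."""
    rec = store[key]
    if rec["version"] != version_seen:
        raise ConflictError("isolation conflict; transaction rolled back")
    rec["value"] = new_value
    rec["version"] += 1

# Transactions A and B both read version 1.
value_a, ver_a = read("balance")
value_b, ver_b = read("balance")
commit("balance", value_a - 40, ver_a)      # A commits first
try:
    commit("balance", value_b - 10, ver_b)  # B fails validation
except ConflictError:
    value_b, ver_b = read("balance")        # B retries against fresh state
    commit("balance", value_b - 10, ver_b)
```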