Monday, May 16, 2022

This is a continuation of a series of articles on the crowdsourcing application and follows the most recent article. The original problem statement is included again for context.

Social engineering applications provide a wealth of information to the end-user, but the questions and answers received on them are always limited to just that – the social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowdsourcing opinions on a personal topic is not easily available via applications. This document tries to envision an application to meet this requirement.

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It also talked about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well-defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This simplifies the write update because the data does not need to be sent out proactively. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens in both writing and loading, and it can be made selective; it can be limited during both pull and push. Disabling writes to all devices can significantly reduce the cost, and those devices can load the updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.
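As a rough illustration of the selective fan-out recapped above, the sketch below pushes an update only to clients that have checked in recently and defers it for everyone else until their next read. The class, the push gateway and the activity window are assumptions for illustration, not part of the original design.

```python
import time

ACTIVE_WINDOW_SECONDS = 15 * 60  # assumed threshold for treating a client as "recently active"

class SelectiveFanOut:
    """Push updates only to recently active clients; all others pull on their next check-in."""

    def __init__(self, push_gateway):
        self.push_gateway = push_gateway    # hypothetical push/notification client
        self.last_seen = {}                 # client_id -> last check-in timestamp
        self.pending = {}                   # client_id -> updates deferred until the next read

    def record_checkin(self, client_id):
        """Called when a client wakes up and requests its state to be refreshed."""
        self.last_seen[client_id] = time.time()
        return self.pending.pop(client_id, [])   # drain anything that was deferred (pull path)

    def publish(self, client_ids, update):
        """Fan out an update selectively instead of writing to every device."""
        now = time.time()
        for client_id in client_ids:
            if now - self.last_seen.get(client_id, 0) < ACTIVE_WINDOW_SECONDS:
                self.push_gateway.send(client_id, update)              # active client: push now
            else:
                self.pending.setdefault(client_id, []).append(update)  # inactive: load on next read
```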

In this section, we talk about the monolithic persistence antipattern, which must be avoided. This antipattern occurs when a single data store, used for all kinds of data, hurts performance due to resource contention. Conversely, the use of multiple data stores can help with virtualization of data and query.  

A specific example of this antipattern is when the crowdsourced application writes transactional records, logs, metrics and events to the same database. Online transaction processing benefits from a relational store, but logs and metrics can be moved to a log index store and a time-series database, respectively. Usually, a single data store works well for transactional data, but this does not mean documents need to be stored in the same data store. A blob store or document database can be used in addition to a regular transactional database to allow individual documents to be shared without any impact on the business operations. Each document can then have its own web-accessible address.  

This antipattern can be fixed in one of several ways. First, the data types must be listed and their corresponding data stores assigned. Many data types can be bound to the same database, but when they are different, they must be directed to the data stores that handle them best. Second, the data access patterns for each data type must be analyzed; if the data type is a document, a Cosmos DB instance is a good choice. Third, if the database instance is not suitable for all the data access patterns of a given data type, it must be scaled up. A premium SKU will likely benefit this case.  
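A minimal sketch of that split is below, assuming a simple router in the application tier; the store wrappers and method names are hypothetical placeholders rather than specific Azure SDK calls.

```python
class PersistenceRouter:
    """Route each data type to the store that handles it best (sketch only)."""

    def __init__(self, relational_db, log_index, timeseries_db, document_store):
        # Thin wrappers the application would provide around, respectively, a relational
        # database, a log index store, a time-series database and a blob/document store.
        self.relational_db = relational_db
        self.log_index = log_index
        self.timeseries_db = timeseries_db
        self.document_store = document_store

    def save_transaction(self, record):
        self.relational_db.insert("transactions", record)       # OLTP stays relational

    def save_log(self, entry):
        self.log_index.index(entry)                              # logs leave the transactional database

    def save_metric(self, name, value, timestamp):
        self.timeseries_db.write_point(name, value, timestamp)   # metrics go to a time-series store

    def save_document(self, doc_id, content):
        # Documents go to a blob/document store and come back with a web-accessible address.
        return self.document_store.upload(doc_id, content)
```

Routing at this seam keeps the transactional database free of log, metric and document traffic, which is exactly the contention this antipattern describes.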

Detection of this antipattern is easier with monitoring tools and the built-in supportability features of the database layer. If the database activity reveals significant processing and contention but a very low data rate, it is likely that this antipattern is manifesting.   

Examining the work performed by the database in terms of data types, which can be narrowed down by callers and scenarios, may reveal the culprits that are likely causing this antipattern.   

Finally, periodic assessments must be performed on the data storage tier. 

 

 


Saturday, May 14, 2022

 

This is a continuation of a series of articles on the crowdsourcing application and follows the most recent article. The original problem statement is included again for context.

Social engineering applications provide a wealth of information to the end-user, but the questions and answers received on them are always limited to just that – the social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowdsourcing opinions on a personal topic is not easily available via applications. This document tries to envision an application to meet this requirement.

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It also talked about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well-defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This simplifies the write update because the data does not need to be sent out proactively. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens in both writing and loading, and it can be made selective; it can be limited during both pull and push. Disabling writes to all devices can significantly reduce the cost, and those devices can load the updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.

In this section, we talk about the noisy neighbor antipattern. This antipattern occurs when some clients hold up a disproportionate share of critical resources from a shared pool meant for all clients and thereby starve the others; in short, one client causes problems for another. Some common examples of resource-intensive operations include retrieving or persisting data to a database, sending a request to a web service, posting a message to or retrieving a message from a queue, and writing to or reading from a file in a blocking manner. There are many advantages to running dedicated calls, especially for debugging and troubleshooting purposes, because the calls do not interfere with one another, but a shared platform enables reuse of the same components. Overuse of sharing can hurt performance when some clients consume resources in a way that starves other clients. It appears notably when there are components that require synchronous I/O, for example when the application uses a library that exposes only synchronous methods. The base tier may have finite capacity to scale up. Compute resources are better suited to scaling out rather than scaling up, and one of the primary advantages of a clean separation of layers with asynchronous processing is that the layers can be hosted independently. Container orchestration frameworks facilitate this very well. As an example, the frontend can issue a request and wait for a response without having to delay the user experience. It can use model-view-controller paradigms so that views are not only fast but can also be hosted such that clients using one view model do not affect the others.

This antipattern can be fixed in one of several ways. First, the processing can be moved out of the application tier into an Azure Function or some background API layer; clients are given promises and are actively monitored. If the application frontend is confined to data input and output display operations, using only the capabilities that the frontend is optimized for, then it will not manifest this antipattern. APIs and queries can articulate the business layer interactions so that clients find the system responsive while the system retains control over when the work is performed. Many libraries and components provide both synchronous and asynchronous interfaces; these can be used judiciously, with the asynchronous pattern working for most API calls. Finally, limits and throttling can be applied, and application gateway and firewall rules can handle restrictions for specific clients.
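As a rough sketch of the limits-and-throttling remedy, the per-client token bucket below caps how many requests any single client can issue, so one noisy client cannot starve the rest; the rate and capacity values are illustrative assumptions, and a production deployment would more likely rely on gateway-level throttling.

```python
import time

class ClientRateLimiter:
    """Per-client token bucket: throttle a noisy client before it starves the others."""

    def __init__(self, rate_per_second=5.0, capacity=10):
        self.rate = rate_per_second   # tokens replenished per second (assumed value)
        self.capacity = capacity      # maximum burst size (assumed value)
        self.buckets = {}             # client_id -> (tokens, last_refill_time)

    def allow(self, client_id):
        now = time.time()
        tokens, last = self.buckets.get(client_id, (self.capacity, now))
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_id] = (tokens - 1, now)
            return True               # admit the request
        self.buckets[client_id] = (tokens, now)
        return False                  # throttle the request (e.g. return HTTP 429)
```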

The introduction of long-running queries and stored procedures, blocking I/O and network waits often goes against the benefits of a responsive multi-client service. If the processing is already under the control of the service, then it can be optimized further.

There are several ways to address this antipattern, and they involve both detection and remedy. The remedies include capping the number of client attempts and preventing retries over a long period of time. The client calls could include an exponential backoff strategy that increases the duration between successive calls exponentially, handle errors gracefully, and use the circuit-breaker pattern, which is specifically designed to break a retry storm. Official SDKs for communicating with Azure services already include sample implementations of retry logic. When the number of I/O requests is large, they can be batched into coarser requests. The database can be read with one query substituting for many queries, which also gives the database an opportunity to execute it better and faster. Web APIs can be designed with REST best practices: instead of separate GET methods for different properties, there can be a single GET method for the resource representing the object; even if the response body is large, it will likely be a single request. File I/O can be improved with buffering and caching, so that files need not be opened and closed repeatedly, which also helps reduce fragmentation of the file on disk.
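Below is a minimal sketch of the capped exponential backoff mentioned above; the attempt count, base delay and jitter are assumptions, and services talking to Azure would normally rely on the retry policies already built into the official SDKs.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky call with capped, jittered exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                 # cap reached: surface the error rather than retry forever
            # Delay doubles on each attempt; jitter spreads out simultaneous retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

A circuit breaker complements this by stopping calls altogether for a cooldown period once failures pile up, which is what breaks a retry storm.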

 

 

 

 

 

 

 

 

 

Friday, May 13, 2022

 

This is a continuation of a series of articles on the crowdsourcing application and follows the most recent article. The original problem statement is included again for context.

Social engineering applications provide a wealth of information to the end-user, but the questions and answers received on them are always limited to just that – the social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowdsourcing opinions on a personal topic is not easily available via applications. This document tries to envision an application to meet this requirement.

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It also talked about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well-defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This simplifies the write update because the data does not need to be sent out proactively. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens in both writing and loading, and it can be made selective; it can be limited during both pull and push. Disabling writes to all devices can significantly reduce the cost, and those devices can load the updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.

In this section, we talk about the busy frontend antipattern. This condition occurs when there are many background threads that can starve foreground tasks of resources, which decreases response times to unacceptable levels. There are many advantages to running background jobs, which keep processing off the interactive path and can be scheduled asynchronously. But overuse of this feature can hurt performance when the tasks consume resources that foreground workers need for interactivity with the user, leading to a spinning wait and frustration for the user. It appears notably when the frontend is monolithic, compressing the business tier into the application frontend. Runtime costs might shoot up if this tier is metered. An application tier may have finite capacity to scale up. Compute resources are better suited to scaling out rather than scaling up, and one of the primary advantages of a clean separation of layers and components is that they can be hosted independently. Container orchestration frameworks facilitate this very well. The frontend can be as lightweight as possible and built on model-view-controller or other such paradigms, so that views are not only fast but are also hosted on separate containers that can scale out. 

This antipattern can be fixed in one of several ways. First, the processing can be moved out of the application tier into an Azure Function or some background API layer. If the application frontend is confined to data input and output display operations, using only the capabilities that the frontend is optimized for, then it will not manifest this antipattern. APIs and queries can articulate the business layer interactions. The application then uses the .NET framework APIs to run standard query operators on the data for display purposes.  
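A minimal sketch of that offloading, assuming a generic work queue rather than a specific Azure service; the queue client, job shape and report builder are placeholders. The frontend enqueues the expensive work and returns immediately, and a separate worker process drains the queue.

```python
import json
import uuid

class FrontendHandler:
    """Frontend stays lightweight: it validates input and enqueues the heavy work."""

    def __init__(self, work_queue):
        self.work_queue = work_queue   # hypothetical queue client (send/receive)

    def submit_report_request(self, user_id, params):
        job_id = str(uuid.uuid4())
        # Enqueue instead of computing inline, so the response returns immediately.
        self.work_queue.send(json.dumps({"job_id": job_id, "user_id": user_id, "params": params}))
        return {"job_id": job_id, "status": "accepted"}   # the client polls or is notified later


def background_worker(work_queue, report_builder):
    """Runs in a separate process or container, so it can scale out independently."""
    while True:
        message = work_queue.receive()   # blocking receive from the queue
        job = json.loads(message)
        report_builder.build(job["job_id"], job["user_id"], job["params"])
```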

The UI is designed for purposes specific to the application. The introduction of long-running queries and stored procedures often goes against the benefits of a responsive application. If the processing is already under the control of the application's own techniques, then it should not be moved. If the front-end activity reveals significant processing and very low data emission, it is likely that this antipattern is manifesting.  

 

 

 

 

 

 

 

Thursday, May 12, 2022

This is a continuation of a series of articles on the crowdsourcing application and follows the most recent article. The original problem statement is included again for context.

Social engineering applications provide a wealth of information to the end-user, but the questions and answers received on them are always limited to just that – the social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowdsourcing opinions on a personal topic is not easily available via applications. This document tries to envision an application to meet this requirement.

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It also talked about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well-defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This simplifies the write update because the data does not need to be sent out proactively. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens in both writing and loading, and it can be made selective; it can be limited during both pull and push. Disabling writes to all devices can significantly reduce the cost, and those devices can load the updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.

In this section, we talk about the busy database antipattern. There are many advantages to running code local to the data, since doing so avoids transmitting it to a client application for processing. But overuse of this feature can hurt performance because the server spends more time processing than accepting new client requests and fetching data. A database is also a shared resource, and it might deny resources to other requests when one of them uses a great deal of compute. Runtime costs might shoot up if the database is metered. A database may have finite capacity to scale up. Compute resources are better suited to hosting complicated logic, while storage products are more customized for large disk space. The busy database antipattern occurs when the database is used to host a service rather than a repository, or when it is used to format data, manipulate data, or perform complex calculations. Developers trying to overcompensate for the extraneous-fetching symptom often write complex queries that take significantly longer to run but produce a small amount of data.

This can be fixed in one of several ways. First, the processing can be moved out of the database into an Azure Function or some application tier. As long as the database is confined to data access operations, using only the capabilities it is optimized for, it will not manifest this antipattern. Queries can be simplified to a proper select statement that merely retrieves the data, with the help of joins where needed. The application then uses the .NET framework APIs to run standard query operators. 
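As a rough illustration in Python (the article itself refers to the .NET standard query operators), the sketch below replaces computation that might otherwise live in a stored procedure with a plain select plus in-application aggregation; the table, columns and SQLite connection are assumptions.

```python
import sqlite3
from collections import defaultdict

def top_responders(connection: sqlite3.Connection, limit=10):
    """Fetch raw rows with a simple select and aggregate in the application tier."""
    cursor = connection.execute(
        # Plain retrieval with a join; no formatting or calculation inside the database.
        "SELECT r.responder_id FROM responses AS r "
        "JOIN solicitations AS s ON s.id = r.solicitation_id"
    )
    counts = defaultdict(int)
    for (responder_id,) in cursor:
        counts[responder_id] += 1   # the aggregation runs in the app tier, which can scale out
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)[:limit]
```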

Database tuning is an important routine for many organizations. The introduction of long-running queries and stored procedures often goes against the benefits of a tuned database. If the processing is already under the control of database tuning techniques, then it should not be moved.  

Avoiding unnecessary data transfer addresses both this antipattern and the chatty I/O antipattern. When the processing is moved to the application tier, it provides the opportunity to scale out rather than requiring the database to scale up. 


 

Wednesday, May 11, 2022

 

This is a continuation of a series of articles on the crowdsourcing application and follows the most recent article. The original problem statement is included again for context.

Social engineering applications provide a wealth of information to the end-user, but the questions and answers received on them are always limited to just that – the social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowdsourcing opinions on a personal topic is not easily available via applications. This document tries to envision an application to meet this requirement.

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It also talked about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well-defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This simplifies the write update because the data does not need to be sent out proactively. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens in both writing and loading, and it can be made selective; it can be limited during both pull and push. Disabling writes to all devices can significantly reduce the cost, and those devices can load the updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.

In this section, we talk about synchronous I/O. When there are many background threads that enter a wait state, perform no work and yet hold up critical resources, they starve other threads, and that hampers the crowdsourced application, which must stay up to date despite a volume of traffic matching the levels of other social engineering applications. Some common examples of I/O include retrieving or persisting data to a database, sending a request to a web service, posting a message to or retrieving a message from a queue, and writing to or reading from a file in a blocking manner. There are many advantages to making calls synchronously, especially for debugging and troubleshooting purposes, because the call sequences are pre-established. But overuse of this approach can hurt performance because tasks consuming resources in a spinning wait can starve other threads. It appears notably when there are components that require synchronous I/O, for example when the application uses a library that exposes only synchronous methods. The base tier may have finite capacity to scale up. Compute resources are better suited to scaling out rather than scaling up, and one of the primary advantages of a clean separation of layers with asynchronous processing is that the layers can be hosted independently. Container orchestration frameworks facilitate this very well. As an example, the frontend can issue a request and await the response without having to delay the user experience. It can use model-view-controller paradigms so that views are not only fast but can also be hosted on separate containers that scale out.  

This can be fixed in one of several ways. First, the processing can be moved out of the application tier into an Azure Function or some background API layer. If the application frontend is confined to data input and output display operations, using only the capabilities that the frontend is optimized for, then it will not manifest this antipattern. APIs and queries can articulate the business layer interactions. Many libraries and components provide both synchronous and asynchronous interfaces; these can be used judiciously, with the asynchronous pattern working for most API calls.  
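A minimal sketch of the synchronous-to-asynchronous change using Python's asyncio; the fetch function simulates non-blocking I/O, and a real service would use the async client of its HTTP or database library. The point is that a waiting task yields the event loop instead of holding a thread in a blocking call.

```python
import asyncio

async def fetch_feed(client_id):
    """Simulated non-blocking I/O call; a real implementation would await an async client."""
    await asyncio.sleep(0.1)          # yields control instead of blocking a thread
    return {"client_id": client_id, "items": []}

async def refresh_clients(client_ids):
    # All requests proceed concurrently; none of them spins or blocks while waiting.
    return await asyncio.gather(*(fetch_feed(cid) for cid in client_ids))

if __name__ == "__main__":
    feeds = asyncio.run(refresh_clients(["alice", "bob", "carol"]))
    print(len(feeds), "feeds refreshed")
```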

 

 

 

 

 

 

Tuesday, May 10, 2022

This is a continuation of a series of articles on the crowdsourcing application and follows the most recent article. The original problem statement is included again for context.

Social engineering applications provide a wealth of information to the end-user, but the questions and answers received on them are always limited to just that – the social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowdsourcing opinions on a personal topic is not easily available via applications. This document tries to envision an application to meet this requirement.

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It also talked about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well-defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This simplifies the write update because the data does not need to be sent out proactively. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens in both writing and loading, and it can be made selective; it can be limited during both pull and push. Disabling writes to all devices can significantly reduce the cost, and those devices can load the updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.

In this section, we talk about caching. The no-caching antipattern occurs when the crowdsourced application handles many concurrent requests that all fetch the same data. Since there is contention for data access, it can reduce performance and scalability. When the data is not cached, several symptoms appear: degradation in response times, increased contention, and poor scalability are common examples.

Caching is sometimes left out of scope of the architecture design or listed as an option for operations to add later via standalone, independent products. At other times, the introduction of a cache might increase latency, maintenance and ownership costs, and decrease overall availability. It might also interfere with existing caching strategies and expiration policies of the underlying systems. Some might prefer not to add an external cache to a database and to use one only as a sidecar for the web services. It is true that databases can cache even materialized views for a connection, but a cache lookup is cheap in all the cases where the compute in the deeper systems would be costly and can be avoided.

There are two strategies to fix the problem. The first is the on-demand, or cache-aside, strategy: when the application tries to read the data from the cache and it isn't there, it retrieves the data from the source and puts it in the cache. When the application writes a change, it writes directly to the data source and removes the old value from the cache; the cache is refilled the next time the value is required.
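Here is a minimal cache-aside sketch, assuming a generic key-value cache client and a data-source accessor; the method names and the TTL are placeholders. On a read miss the value is loaded from the source and stored in the cache; on a write the source is updated and the cached copy is invalidated.

```python
CACHE_TTL_SECONDS = 300              # assumed expiration for dynamic data

class CacheAsideRepository:
    """Cache-aside: read through the cache, fall back to the source, invalidate on write."""

    def __init__(self, cache, source):
        self.cache = cache           # hypothetical key-value cache client (get/set/delete)
        self.source = source         # hypothetical data-source accessor (load/save)

    def get(self, key):
        value = self.cache.get(key)
        if value is not None:
            return value                              # cache hit
        value = self.source.load(key)                 # miss: fall back to the data source
        if value is not None:
            self.cache.set(key, value, ttl=CACHE_TTL_SECONDS)
        return value

    def put(self, key, value):
        self.source.save(key, value)                  # write directly to the source of truth
        self.cache.delete(key)                        # invalidate; refilled on the next read
```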

Another strategy is to always keep static resources in the cache with no expiration date. This is equivalent to CDN usage, although CDNs are meant for distribution. Applications that cache dynamic data should be designed to support eventual consistency. 

No matter how the cache is implemented, it must support falling back to the deeper data access path when the data is not available in the cache. This resembles the circuit-breaker pattern, which merely avoids overwhelming the data source.
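To accompany that fallback behavior, here is a minimal circuit-breaker sketch; the failure threshold and cooldown are illustrative assumptions, and real services often use a library or SDK-provided implementation instead.

```python
import time

class CircuitBreaker:
    """Stop calling a struggling data source for a cooldown period after repeated failures."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold   # assumed values for illustration
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, operation, fallback):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                return fallback()            # circuit open: avoid overwhelming the data source
            self.opened_at = None            # cooldown elapsed: allow a trial call (half-open)
            self.failures = 0
        try:
            result = operation()
            self.failures = 0                # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()
            return fallback()
```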