Tuesday, September 13, 2022

 

Performance for a multitenant application

This article continues the series on multitenancy; the most recent installment is linked here: https://1drv.ms/w/s!Ashlm-Nw-wnWhLZnYUBoDUNcjAHNwQ?e=NAo7vM. This one focuses on performance.

 

The multitenant application discussed so far has an application server and a database. Performance is improved by (1) writing efficient pages, (2) building efficient web services, (3) creating efficient reports, (4) applying performance patterns, (5) using efficient data access, (6) testing and validating performance, (7) tuning the development environment, and (8) using the profiler to analyze performance.

 

Efficient pages are written by using patterns that make a page load faster. These include (1) avoiding unnecessary recalculation, (2) doing less work, and (3) offloading the UI thread. Caching data and refreshing the cache on a regular schedule avoids recalculation and saves time each time the page is loaded. Query objects are notorious for recalculation because they reach the database on every call; caching the results from an API works significantly better.
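
As a minimal sketch of this caching pattern (the object name, number, and the ten-minute refresh interval are illustrative assumptions), a single-instance codeunit can hold fetched values in memory and refresh them on a schedule instead of recomputing them on every page load:

codeunit 50100 "Rate Cache"
{
    // Single-instance codeunits live for the whole session, so the
    // dictionary below acts as an in-memory cache across page loads.
    SingleInstance = true;

    var
        CachedRates: Dictionary of [Code[10], Decimal];
        LastRefresh: DateTime;

    procedure GetRate(CurrencyCode: Code[10]): Decimal
    var
        Rate: Decimal;
    begin
        // Refresh the cache if it is empty or older than ten minutes.
        if (LastRefresh = 0DT) or (CurrentDateTime - LastRefresh > 10 * 60 * 1000) then
            RefreshCache();
        if CachedRates.Get(CurrencyCode, Rate) then
            exit(Rate);
        exit(0);
    end;

    local procedure RefreshCache()
    begin
        Clear(CachedRates);
        // Hypothetical: repopulate the dictionary from the external API here.
        LastRefresh := CurrentDateTime;
    end;
}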

 

Reducing the amount of work also speeds things up. A simple page with few UI elements is also easier to use and navigate. Removing calculated fields from lists when they are not needed, and deleting the corresponding field definition or page extension definition, improves the loading of pages that list data.

 

Creating dedicated lookup pages instead of the normal pages when dropdown-like logic is involved, and removing triggers and fact boxes, also helps, because a default page renders all of its controls.
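
A sketch of such a dedicated lookup page (the object number and field selection are assumptions) keeps only the columns needed for selection and omits triggers, parts, and fact boxes:

page 50101 "Item Lookup"
{
    // A lean, read-only list used only for dropdown-style selection;
    // the default item list would render far more controls.
    PageType = List;
    SourceTable = Item;
    Editable = false;

    layout
    {
        area(Content)
        {
            repeater(Items)
            {
                field("No."; Rec."No.") { ApplicationArea = All; }
                field(Description; Rec.Description) { ApplicationArea = All; }
            }
        }
    }
}

Pointing the table's LookupPageId property (for example, from a table extension) at this page makes the lighter page the one that opens for lookups.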

Offloading the UI thread with, say, page background tasks yields a more responsive and faster UI. Custom controls that require heavy-duty logic can also be avoided.
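
As a hedged sketch of a page background task (the object names, numbers, and the choice of the Balance field are assumptions), the page enqueues a codeunit that computes a value in a child session and posts the result back without blocking the UI thread:

codeunit 50102 "Calc Balance Task"
{
    trigger OnRun()
    var
        Customer: Record Customer;
        Params: Dictionary of [Text, Text];
        Results: Dictionary of [Text, Text];
    begin
        // Runs in a child session; read the parameters passed by the page.
        Params := Page.GetBackgroundParameters();
        if Customer.Get(Params.Get('CustomerNo')) then begin
            Customer.CalcFields(Balance);
            Results.Add('Balance', Format(Customer.Balance));
        end;
        Page.SetBackgroundTaskResult(Results);
    end;
}

// On the page; BalanceTaskId (Integer) and BalanceText (Text) are page variables.
    trigger OnAfterGetCurrRecord()
    var
        Params: Dictionary of [Text, Text];
    begin
        Params.Add('CustomerNo', Rec."No.");
        CurrPage.EnqueueBackgroundTask(BalanceTaskId, Codeunit::"Calc Balance Task", Params);
    end;

    trigger OnPageBackgroundTaskCompleted(TaskId: Integer; Results: Dictionary of [Text, Text])
    begin
        // Back on the UI thread: display the result when the task finishes.
        if (TaskId = BalanceTaskId) and Results.ContainsKey('Balance') then
            BalanceText := Results.Get('Balance');
    end;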

 

Avoiding the exposure of calculated fields, avoiding heavy-duty logic in the pre- and post-handlers of record retrieval, and refactoring the page and its code so that values are persisted all reduce performance hits. Temporary tables are not recommended when there are many records: fetching each record and inserting it into a temp table without caching the data can be detrimental to performance. When the number of records exceeds a hundred, this antipattern is easy to detect.
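
The antipattern is easy to recognize in code. A sketch (table and procedure names are illustrative) of the costly shape to look for:

// Antipattern: every record is fetched from the database and copied into
// a temporary table each time this runs, with no caching in between.
local procedure LoadAllCustomers(var TempCustomer: Record Customer temporary)
var
    Customer: Record Customer;
begin
    if Customer.FindSet() then
        repeat
            TempCustomer := Customer;
            TempCustomer.Insert();
        until Customer.Next() = 0;
end;

Binding the page directly to the source table with filters, or caching the temp table in a single-instance codeunit, avoids repeating this work on every load.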

 

Parent and child records should not be inserted in parallel. Doing so causes locks on the parent and integration record tables because the parallel calls try to update the same parent record. It is best to insert them sequentially, allowing one call to finish before the next begins, or to put them in a transaction batch.
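
As a hedged illustration of the transaction-batch option, the OData $batch endpoint accepts a JSON body that groups the calls into a single round trip; the host, company ID, order ID, and payloads below are placeholders:

POST https://{host}/api/v2.0/$batch
Content-Type: application/json

{
  "requests": [
    {
      "method": "POST",
      "id": "1",
      "url": "companies({company-id})/salesOrders({order-id})/salesOrderLines",
      "headers": { "Content-Type": "application/json" },
      "body": { "lineType": "Item", "itemId": "{item-id}", "quantity": 1 }
    },
    {
      "method": "POST",
      "id": "2",
      "url": "companies({company-id})/salesOrders({order-id})/salesOrderLines",
      "headers": { "Content-Type": "application/json" },
      "body": { "lineType": "Item", "itemId": "{item-id}", "quantity": 2 }
    }
  ]
}

Here the child lines for an already-created parent are inserted in one batch rather than in competing parallel calls.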

 

Deprecated protocols are best avoided: OData version 4 and APIs give the best performance, and API queries and API pages are faster with the newer technology stacks.

 

API pages and API queries are better than exposing UI pages as web service endpoints; with the latter, triggers must run for every record returned from the server. For OData endpoints that work as data readers, API queries are the right choice. OData also has a few performance callouts, such as limiting the result set with $filter and $top when there is an expensive $expand, using a transaction batch, and requesting read-only data access intent.
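
For example (the host, company ID, and filter values are placeholders), a data-reader call that applies these callouts might look like:

GET https://{host}/api/v2.0/companies({company-id})/salesOrders?$filter=orderDate ge 2022-01-01&$top=50&$expand=salesOrderLines
Data-Access-Intent: ReadOnly

The $filter and $top keep the expensive $expand bounded, and the Data-Access-Intent header lets the request be served from a read-only database replica.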

 

Large volumes of web service calls can cause stability and performance issues. It is important to understand the operational limits and to scale such that the load always stays under the limit. External applications can handle the HTTP status codes 429 (Too Many Requests) and 504 (Gateway Timeout).

Handling status code 429 requires the client to adopt retry logic with a cool-off period. Retry strategies include regular intervals, incremental intervals, exponential backoff, and randomization. Status code 504 requires the client to refactor the long-running request so that it executes within the time limit, typically by splitting it into multiple requests; the resulting 429 codes, if any, can then be handled with a backoff strategy. A common pattern is to implement a queue in the external application to flatten spikes in the traffic: if a request gets a 429, it is put back in the queue and one of the retry strategies is applied.
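
A minimal sketch of the backoff logic, written here in AL syntax for illustration (the attempt count and delays are assumptions; the same logic applies in whatever stack the external application uses):

procedure GetWithRetry(Url: Text; var Response: HttpResponseMessage): Boolean
var
    Client: HttpClient;
    Attempt: Integer;
    DelayMs: Integer;
begin
    DelayMs := 500; // initial cool-off period
    for Attempt := 1 to 5 do begin
        if Client.Get(Url, Response) then
            if Response.HttpStatusCode <> 429 then
                exit(true); // not throttled; let the caller inspect the result
        // Throttled: wait with exponential backoff plus randomization,
        // so that competing clients do not retry in lockstep.
        Sleep(DelayMs + Random(250));
        DelayMs := DelayMs * 2;
    end;
    exit(false); // still throttled after all retries
end;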

Testing and validating the multitenant solution for performance is necessary before its planned deployment to production environments. Performance unit testing helps in this regard and can include unit tests that track the number of SQL statements executed or rows read before and after the code under test, with assert statements checking for normal behavior. Performance toolkits are available to simulate realistic tenant load so that performance can be compared between builds of the solution. Such a toolkit might not measure throughput directly, but given the time to execute key scenarios and the guidance provided by the operational limits, it is possible to determine, say, how many orders are processed per hour. If queues are involved, advanced analysis can be enabled. Performance telemetry includes database locks, long-running operations, long-running queries, page views, reports, sessions started, and web service requests.
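
A hedged sketch of such a performance unit test (the object number, the budgets of 500 statements and 10,000 rows, and the helper procedure are illustrative assumptions) reads the session's SQL counters before and after the code under test:

codeunit 50103 "Order Posting Perf. Test"
{
    Subtype = Test;

    [Test]
    procedure PostingStaysWithinQueryBudget()
    var
        StatementsBefore: Integer;
        RowsBefore: Integer;
    begin
        StatementsBefore := SessionInformation.SqlStatementsExecuted;
        RowsBefore := SessionInformation.SqlRowsRead;

        PostSampleOrder(); // hypothetical helper that runs the code under test

        // Assert statements check for normal behavior: fail the run if the
        // scenario regresses beyond the agreed budget.
        if SessionInformation.SqlStatementsExecuted - StatementsBefore > 500 then
            Error('Posting executed more than 500 SQL statements.');
        if SessionInformation.SqlRowsRead - RowsBefore > 10000 then
            Error('Posting read more than 10,000 rows.');
    end;

    local procedure PostSampleOrder()
    begin
        // Hypothetical: create and post a representative sales order here.
    end;
}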
