Performance for a multitenant application
This is a continuation of the series of articles on multitenancy; the most recent one is linked here: https://1drv.ms/w/s!Ashlm-Nw-wnWhLZnYUBoDUNcjAHNwQ?e=NAo7vM. This article focuses on performance.
The multitenant application discussed so far has an application server and a database. Performance is improved by (1) writing efficient pages, (2) writing efficient web services, (3) writing efficient reports, (4) applying AL performance patterns, (5) accessing data efficiently, (6) testing and validating performance, (7) tuning the development environment, and (8) using the AL profiler to analyze performance.
Efficient pages are written by using patterns that make a page load faster. These include (1) avoiding unnecessary recalculation, (2) doing less work, and (3) offloading the UI thread. Caching data and refreshing the cache periodically avoids recalculation and saves time every time the page is loaded. Query objects are notorious for recalculation because they reach the database on every use; caching the results, for example from an API call, works significantly better.
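To make the caching idea concrete, here is a minimal AL sketch; the object names, the aggregation, and the ten-minute refresh interval are illustrative assumptions, not anything prescribed by the platform. A single-instance codeunit keeps the result of an expensive aggregation so that a page field can display the cached value instead of recalculating it on every load.

```al
codeunit 50100 "Stats Cache"
{
    SingleInstance = true;

    var
        CachedTotal: Decimal;
        LastRefreshed: DateTime;

    procedure GetOpenOrderTotal(): Decimal
    begin
        // Recalculate at most every ten minutes (600,000 ms) instead of on every page load.
        if (LastRefreshed = 0DT) or (CurrentDateTime - LastRefreshed > 600000) then begin
            CachedTotal := CalculateOpenOrderTotal();
            LastRefreshed := CurrentDateTime;
        end;
        exit(CachedTotal);
    end;

    local procedure CalculateOpenOrderTotal(): Decimal
    var
        SalesLine: Record "Sales Line";
    begin
        // The expensive part: an aggregation that reaches the database.
        SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
        SalesLine.CalcSums("Line Amount");
        exit(SalesLine."Line Amount");
    end;
}
```

A page field can then show the value returned by GetOpenOrderTotal() without triggering the aggregation on every open.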
Reducing the amount of work also speeds things up. A simple page with few UI elements is also easier to use and navigate. If calculated fields on a list aren't needed, removing them, together with their field definitions or page extension definitions, improves the load time of pages that list data.
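As a hedged illustration of that point (the object names are hypothetical, and "Balance Due (LCY)" simply stands in for any per-row FlowField), the extension below is the kind of definition worth deleting when the column is not actually needed: it adds a calculated column to every row of the Customer List, which costs an extra calculation per record on every load.

```al
pageextension 50101 "Customer List Balance" extends "Customer List"
{
    layout
    {
        addafter(Name)
        {
            // A FlowField column: calculated for every visible row each time the list loads.
            // If nobody uses it, removing this object (or just this control) restores the lighter default page.
            field("Balance Due (LCY)"; Rec."Balance Due (LCY)")
            {
                ApplicationArea = All;
            }
        }
    }
}
```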
Creating dedicated lookup pages instead of using the normal pages wherever drop-down-style selection is involved, and removing triggers and FactBoxes, also helps, because a default page renders all of its controls.
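Here is a minimal sketch of such a dedicated lookup page, assuming a small custom table (all names are illustrative): one repeater, two fields, no FactBoxes, and no triggers, with the table's LookupPageId pointed at it so every drop-down uses the lean page instead of a full list page.

```al
table 50103 "Project Category"
{
    DataClassification = CustomerContent;
    LookupPageId = "Project Category Lookup";   // every lookup opens the lean page

    fields
    {
        field(1; "Code"; Code[20]) { }
        field(2; Description; Text[100]) { }
    }
    keys
    {
        key(PK; "Code") { Clustered = true; }
    }
}

page 50103 "Project Category Lookup"
{
    PageType = List;
    SourceTable = "Project Category";
    Editable = false;

    layout
    {
        area(Content)
        {
            repeater(Rows)
            {
                field("Code"; Rec."Code") { ApplicationArea = All; }
                field(Description; Rec.Description) { ApplicationArea = All; }
            }
        }
    }
}
```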
Offloading the UI thread, for example with page background tasks, gives a more responsive and faster UI. Custom controls that require heavy-duty logic can also be avoided.
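The following is a minimal sketch of the page background task pattern; the page, the codeunit, and the "open order total" calculation are illustrative assumptions. The expensive aggregation runs in a background session while the page stays responsive, and the result is delivered to the OnPageBackgroundTaskCompleted trigger.

```al
page 50104 "Customer Insights"
{
    PageType = Card;
    SourceTable = Customer;
    ApplicationArea = All;

    layout
    {
        area(Content)
        {
            group(Insights)
            {
                field(OpenOrderTotal; OpenOrderTotal)
                {
                    ApplicationArea = All;
                    Caption = 'Open Order Total';
                    Editable = false;
                }
            }
        }
    }

    var
        OpenOrderTotal: Decimal;
        TaskId: Integer;

    trigger OnAfterGetCurrRecord()
    var
        Args: Dictionary of [Text, Text];
    begin
        Args.Add('customerNo', Rec."No.");
        // Kick off the expensive calculation without blocking the UI thread.
        CurrPage.EnqueueBackgroundTask(TaskId, Codeunit::"Calc Open Order Total", Args);
    end;

    trigger OnPageBackgroundTaskCompleted(CompletedTaskId: Integer; Results: Dictionary of [Text, Text])
    begin
        // The result arrives asynchronously once the background session finishes.
        if CompletedTaskId = TaskId then
            Evaluate(OpenOrderTotal, Results.Get('total'), 9);
    end;
}

codeunit 50104 "Calc Open Order Total"
{
    trigger OnRun()
    var
        SalesLine: Record "Sales Line";
        Args: Dictionary of [Text, Text];
        Results: Dictionary of [Text, Text];
    begin
        Args := Page.GetBackgroundParameters();
        SalesLine.SetRange("Document Type", SalesLine."Document Type"::Order);
        SalesLine.SetRange("Sell-to Customer No.", Args.Get('customerNo'));
        SalesLine.CalcSums("Line Amount");
        Results.Add('total', Format(SalesLine."Line Amount", 0, 9));
        Page.SetBackgroundTaskResult(Results);
    end;
}
```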
Avoiding the exposure of calculated fields, avoiding heavy-duty logic in the pre- and post-handlers that run when records are fetched (such as OnAfterGetRecord and OnAfterGetCurrRecord), and refactoring the page and its code so that values are persisted all reduce performance hits. Temporary tables are not recommended when there are many records: fetching and inserting each record into a temp table without caching the data is detrimental to performance. Once the number of records exceeds about a hundred, this antipattern is easy to detect.
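For concreteness, this is roughly what the antipattern looks like in AL (the page and field choices are illustrative): a page backed by a temporary table that is refilled record by record every time it opens. With more than a hundred or so rows, the repeated fetch-and-insert shows up clearly in the profiler, and the data should be cached or read directly from a real table instead.

```al
page 50105 "Open Entries (Temp)"
{
    PageType = List;
    SourceTable = "Cust. Ledger Entry";
    SourceTableTemporary = true;
    ApplicationArea = All;

    layout
    {
        area(Content)
        {
            repeater(Rows)
            {
                field("Entry No."; Rec."Entry No.") { ApplicationArea = All; }
                field("Customer No."; Rec."Customer No.") { ApplicationArea = All; }
            }
        }
    }

    trigger OnOpenPage()
    var
        CustLedgerEntry: Record "Cust. Ledger Entry";
    begin
        // Antipattern: every page open re-reads the database and re-inserts each record
        // into the temporary source table, one row at a time.
        CustLedgerEntry.SetRange(Open, true);
        if CustLedgerEntry.FindSet() then
            repeat
                Rec := CustLedgerEntry;
                Rec.Insert();
            until CustLedgerEntry.Next() = 0;
    end;
}
```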
Parent and child records should not be inserted in parallel. Doing so causes locks on the parent and integration record tables, because the parallel calls try to update the same parent record. It is better to insert them incrementally, letting one call finish before the next starts, or to put them in a single transactional batch.
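As an illustrative sketch only (the base URL, company id, and payload values are placeholders), a single OData JSON $batch request can carry the parent insert and the child insert together, with the child declared as depending on the parent, instead of firing both calls in parallel against the same parent record:

```
POST https://{baseUrl}/api/v2.0/$batch
Content-Type: application/json

{
  "requests": [
    {
      "method": "POST",
      "id": "parent",
      "url": "companies({companyId})/salesOrders",
      "headers": { "Content-Type": "application/json" },
      "body": { "customerNumber": "10000" }
    },
    {
      "method": "POST",
      "id": "child",
      "dependsOn": [ "parent" ],
      "url": "$parent/salesOrderLines",
      "headers": { "Content-Type": "application/json" },
      "body": { "lineType": "Item", "lineObjectNumber": "1896-S", "quantity": 5 }
    }
  ]
}
```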
Deprecated protocols should be avoided; OData version 4 and the API endpoints give the best performance. API queries and API pages are faster because they run on the newer technology stack.
API pages and API queries are better than exposing UI pages as web service endpoints. If the latter must be implemented, triggers have to run for every record returned from the server. For OData endpoints that act as pure data readers, API queries are the right choice. OData also has a few performance callouts, such as limiting the result set with $filter and $top when there is an expensive $expand, using transactional batches, and requesting the read-only data access intent.
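Here is a minimal sketch of an API query used as a data reader, with the read-only data access intent set; the publisher, group, and entity names are illustrative assumptions.

```al
query 50106 "GL Entries API"
{
    QueryType = API;
    APIPublisher = 'contoso';
    APIGroup = 'reporting';
    APIVersion = 'v1.0';
    EntityName = 'generalLedgerEntry';
    EntitySetName = 'generalLedgerEntries';
    // Route the request to a read-only database replica; this query never writes.
    DataAccessIntent = ReadOnly;

    elements
    {
        dataitem(GLEntry; "G/L Entry")
        {
            column(entryNumber; "Entry No.") { }
            column(accountNumber; "G/L Account No.") { }
            column(postingDate; "Posting Date") { }
            column(amount; Amount) { }
        }
    }
}
```

A client would then read it with a bounded request along the lines of GET .../api/contoso/reporting/v1.0/companies({id})/generalLedgerEntries?$filter=postingDate ge 2024-01-01&$top=1000, keeping the result set limited as described above.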