This article continues a series on a crowdsourcing application and builds on the most recent installment. The original problem statement is included again for context.
Social engineering applications provide a wealth of information to the end user, but the questions and answers received on them are always limited to just that – the user's social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forum or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowd-sourcing opinions on a personal topic is not easily done with existing applications. This document tries to envision an application that meets this requirement.
The previous article continued the elaboration on the use of public cloud services for provisioning the queue, the document store, and the compute. It also talked a bit about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This keeps the write path cheap because the data does not need to be sent out to every device. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at select times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens on writes as well as on loads, and it can be limited during both pull and push. Skipping the writes to all devices can significantly reduce the cost; the remaining devices can load these updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference.
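A minimal sketch of this selective fan-out follows. The Client, Update, and SelectiveFanOut types and the thirty-minute activity window are assumptions made purely for illustration; in the real application the shared store and the per-client queues would be backed by the document store and the queue service discussed earlier.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shapes, for illustration only.
record Client(string Id, DateTime LastSeen);
record Update(string Payload, DateTime CreatedOn);

class SelectiveFanOut
{
    // Clients seen within this window are treated as active (assumed value).
    static readonly TimeSpan ActiveWindow = TimeSpan.FromMinutes(30);

    readonly List<Update> _allUpdates = new();                      // shared store
    readonly Dictionary<string, List<Update>> _pushQueues = new();  // per-client push
    readonly Dictionary<string, DateTime> _lastRead = new();

    // Fan-out on write, limited to recently active clients; inactive clients
    // are skipped here, which keeps the write-time cost bounded.
    public void Publish(Update update, IEnumerable<Client> subscribers)
    {
        _allUpdates.Add(update);
        foreach (var client in subscribers)
        {
            if (DateTime.UtcNow - client.LastSeen > ActiveWindow) continue;
            if (!_pushQueues.TryGetValue(client.Id, out var queue))
                _pushQueues[client.Id] = queue = new List<Update>();
            queue.Add(update);
        }
    }

    // Fan-out on read: a client that wakes up and checks in drains its push
    // queue if it has one, otherwise it loads everything published since its
    // last refresh from the shared store.
    public IReadOnlyList<Update> Refresh(string clientId)
    {
        List<Update> updates;
        if (_pushQueues.TryGetValue(clientId, out var queue) && queue.Count > 0)
        {
            updates = queue.OrderBy(u => u.CreatedOn).ToList();
            queue.Clear();
        }
        else
        {
            var since = _lastRead.TryGetValue(clientId, out var t) ? t : DateTime.MinValue;
            updates = _allUpdates.Where(u => u.CreatedOn > since).ToList();
        }
        _lastRead[clientId] = DateTime.UtcNow;
        return updates;
    }
}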
In this section, we talk about the busy database antipattern. There are many advantages to running code local to the data, because it avoids transmitting the data to a client application for processing. But overuse of this feature can hurt performance, because the server spends more time processing than accepting new client requests and fetching data. A database is also a shared resource, and it might deny resources to other requests when one of them uses a large share for computation. Runtime costs can shoot up if the database is metered, and a database has finite capacity to scale up. Compute resources are better suited to hosting complicated logic, while storage products are tailored toward providing large disk space. The busy database antipattern occurs when the database is used to host a service rather than act as a repository, or when it is used to format data, manipulate data, or perform complex calculations. Developers trying to overcompensate for the extraneous fetching antipattern often write complex queries that take significantly longer to run but produce only a small amount of data.
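As an illustration of what that shape looks like, the query below runs against hypothetical Responses and Votes tables that exist only for this sketch; the date formatting, name casing, and per-row averaging all execute inside the database rather than in the application.

// Illustrative only: hypothetical Responses and Votes tables. The formatting
// and aggregation all run inside the database, which is the busy-database
// shape described above.
static class BusyQueries
{
    public const string TopicSummaries = @"
        SELECT r.TopicId,
               CONVERT(varchar(10), r.PostedOn, 101) AS PostedDate,
               UPPER(LEFT(r.AuthorName, 1)) +
                   LOWER(SUBSTRING(r.AuthorName, 2, LEN(r.AuthorName))) AS Author,
               (SELECT AVG(CAST(v.Score AS float))
                  FROM Votes v
                 WHERE v.ResponseId = r.ResponseId) AS AverageScore
          FROM Responses r
         ORDER BY AverageScore DESC";
}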
This can be fixed in one of several ways. First, the processing can be moved out of the database into an Azure Function or some other application tier. As long as the database is confined to data access operations, using only the capabilities it is optimized for, it will not manifest this antipattern. Queries can be simplified to a proper select statement that merely retrieves the data with the help of joins. The application then uses the .NET Framework APIs to run the standard query operators over the results.
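A minimal sketch of that remediation, under the same assumed Responses and Votes tables, might look like the following. The ResponseRow shape, the column names, the connection handling, and the use of the Microsoft.Data.SqlClient package are placeholders rather than the actual design; the point is that the select stays plain and the aggregation moves into the application tier.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Data.SqlClient;

// Hypothetical row shape for the joined result; names are assumptions.
record ResponseRow(int TopicId, string AuthorName, DateTime PostedOn, int Score);

static class TopicSummaries
{
    // The database is confined to data access: a plain select with a join.
    const string Query = @"
        SELECT r.TopicId, r.AuthorName, r.PostedOn, v.Score
          FROM Responses r
          JOIN Votes v ON v.ResponseId = r.ResponseId";

    public static IReadOnlyList<(int TopicId, double AverageScore)> Load(string connectionString)
    {
        var rows = new List<ResponseRow>();
        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(Query, connection);
        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            rows.Add(new ResponseRow(
                reader.GetInt32(0), reader.GetString(1),
                reader.GetDateTime(2), reader.GetInt32(3)));
        }

        // The aggregation now runs in the application tier with the standard
        // query operators, not inside the database.
        return rows
            .GroupBy(r => r.TopicId)
            .Select(g => (TopicId: g.Key, AverageScore: g.Average(r => (double)r.Score)))
            .OrderByDescending(s => s.AverageScore)
            .ToList();
    }
}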
Database tuning is an important routine for many organizations. The introduction of long-running queries and stored procedures often goes against the benefits of a tuned database. If the processing is already under the control of the database tuning techniques in place, then it should not be moved.
Avoiding unnecessary data transfer addresses both this antipattern and the chatty I/O antipattern. Moving the processing to the application tier also provides the opportunity to scale out rather than requiring the database to scale up.