This article continues a series on a crowdsourcing application and builds on the most recent article in that series. The original problem statement is included again for context.
Social engineering applications provide a wealth of information to the end user, but the questions and answers exchanged on them are limited to just that – the user's social circle. Advice solicited for personal circumstances is never appropriate for forums that remain in public view. It is also difficult to find the right forum or audience from which responses can be obtained quickly. When we want more opinions gathered discreetly, without the knowledge of those around us, the options become fewer and fewer. In addition, few applications make it easy to crowdsource opinions on a personal topic. This document tries to envision an application that meets this requirement.
The previous article continued the discussion of using public cloud services for provisioning the queue, the document store and the compute, and it touched on the messaging platform required to support this social-engineering application. The problems encountered here are well defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When clients wake up, they can request that their state be refreshed; this avoids pushing the write update, because the data does not need to be sent out until it is asked for. If the queue sends messages back to the clients instead, it is a fan-out process. The devices can choose to check in at selected times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens on writes as well as on loads, and it can be made selective in either case; in other words, the fan-out can be limited during both pull and push. Disabling the writes to all devices can significantly reduce the cost, since those devices can load the updates only when reading. It is also helpful to keep track of which clients have been active over a period so that only those clients get preference, as in the sketch below.
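A minimal sketch of this selective fan-out might look like the following. The types and names here (Update, IFeedStore, IPushClient, the thirty-minute activity window) are illustrative assumptions, not part of the application described in this series; the point is only that writes are pushed to recently active clients while the rest load updates when they next check in.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Assumed abstractions; names are illustrative only.
public record Update(string AuthorId, string Text, DateTimeOffset At);

public interface IFeedStore
{
    Task AppendAsync(Update update);
    Task<IReadOnlyList<Update>> LoadSinceAsync(string clientId, DateTimeOffset since);
}

public interface IPushClient
{
    Task SendAsync(string clientId, Update update);
}

public class SelectiveFanOut
{
    private static readonly TimeSpan ActiveWindow = TimeSpan.FromMinutes(30);
    private readonly IFeedStore _store;
    private readonly IPushClient _push;
    private readonly Dictionary<string, DateTimeOffset> _lastSeen = new();

    public SelectiveFanOut(IFeedStore store, IPushClient push)
    {
        _store = store;
        _push = push;
    }

    // Write path: persist once, then push only to clients active recently;
    // everyone else picks the update up the next time they check in.
    public async Task PublishAsync(Update update, IEnumerable<string> followerIds)
    {
        await _store.AppendAsync(update);
        foreach (var id in followerIds)
        {
            if (_lastSeen.TryGetValue(id, out var seen) &&
                seen > DateTimeOffset.UtcNow - ActiveWindow)
            {
                await _push.SendAsync(id, update);   // selective fan-out on write
            }
        }
    }

    // Read path: a client that wakes up asks for its state to be refreshed.
    public Task<IReadOnlyList<Update>> RefreshAsync(string clientId, DateTimeOffset since)
    {
        _lastSeen[clientId] = DateTimeOffset.UtcNow;
        return _store.LoadSinceAsync(clientId, since);
    }
}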
In this section, we talk about the busy frontend antipattern.
This antipattern occurs when many background threads starve foreground tasks of resources, which degrades response times to unacceptable levels. There are many advantages to running background jobs: they keep processing out of the interactive path and can be scheduled asynchronously. But overusing this feature can hurt performance, because these tasks consume resources that foreground workers need for interactivity with the user, leading to spinning waits and frustration for the user. It appears most notably when the frontend is monolithic, combining the business tier with the crowdsourcing application frontend. Runtime costs can shoot up if this tier is metered. A crowdsourcing application tier may have finite capacity to scale up. Compute resources are better suited to scaling out rather than scaling up, and one of the primary advantages of a clean separation of layers and components is that they can be hosted independently. Container orchestration frameworks facilitate this very well. The frontend can be kept as lightweight as possible and built on model-view-controller or similar paradigms so that it is not only fast but can also be hosted in separate containers that scale out.
This antipattern can be fixed in one of several ways. First, the processing can be moved out of the application tier into an Azure Function or some background API layer. If the application frontend is confined to data input and display operations, using only the capabilities that the frontend is optimized for, then it will not manifest this antipattern. APIs and queries can articulate the business-layer interactions, and the application then uses the .NET Framework APIs to run standard query operators on the data for display purposes.
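As a rough sketch of this split, and only under the assumption that the heavy work can be queued, the frontend can enqueue the request while a queue-triggered Azure Function performs the processing in the background; the queue name "solicitations", the ProcessSolicitation function and the ResponseView helper below are illustrative, not part of the application described here.

using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessSolicitation
{
    // Heavy work runs here, off the frontend request path.
    // "solicitations" is an assumed queue name used only for illustration.
    [FunctionName("ProcessSolicitation")]
    public static void Run([QueueTrigger("solicitations")] string message, ILogger log)
    {
        log.LogInformation("Processing solicitation: {Message}", message);
        // ... resource-intensive ranking/aggregation of crowd responses would go here ...
    }
}

// The frontend stays confined to input and display, shaping data for the view
// with standard query operators only.
public static class ResponseView
{
    public static IEnumerable<string> TopResponses(IEnumerable<(string Text, int Votes)> responses) =>
        responses.OrderByDescending(r => r.Votes)
                 .Take(10)
                 .Select(r => r.Text);
}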
The user interface is designed for purposes specific to the application. Introducing long-running queries and stored procedures into it often works against the benefits of a responsive application. If the processing is already lightweight and under the control of application-side techniques, then it should not be moved.
Avoiding unnecessary data transfer addresses both this antipattern and the chatty I/O antipattern. When the processing is moved to the business tier, it provides the opportunity to scale out rather than requiring the frontend to scale up.
Detection of this antipattern is easier with monitoring tools and the built-in supportability features of the application layer. If the frontend activity reveals significant processing but very little data being returned, it is likely that this antipattern is manifesting.
Examining the work performed by the frontend in terms of latency and page load times, narrowed down by callers and scenarios, may reveal the view models that are likely to be causing this antipattern, as in the instrumentation sketch below.
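One way to make the frontend's processing share visible is lightweight per-request timing tagged by caller and path; the ASP.NET Core middleware below is an illustrative sketch of that idea, with the User-Agent header standing in for the caller and the logged field names being assumptions.

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

// Illustrative ASP.NET Core middleware: records latency and response size per
// request path so that requests doing heavy in-process work (high latency,
// little data returned) stand out in the logs.
public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestTimingMiddleware> _logger;

    public RequestTimingMiddleware(RequestDelegate next, ILogger<RequestTimingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();
        await _next(context);
        stopwatch.Stop();

        _logger.LogInformation(
            "Path={Path} Caller={Caller} ElapsedMs={Elapsed} BytesOut={Bytes}",
            context.Request.Path,
            context.Request.Headers["User-Agent"].ToString(),
            stopwatch.ElapsedMilliseconds,
            context.Response.ContentLength ?? 0);
    }
}

Registering it with app.UseMiddleware<RequestTimingMiddleware>() on the frontend pipeline would then let the latency and payload figures be grouped by path and caller during an assessment.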
Finally,
periodic assessments must be performed on the application tier.