Monday, May 30, 2022

Gut Feelings: The Microbiome and Our Health

This is a review of the book, written by Alessio Fasano and Susie Flaherty and published by the MIT Press in 2022.

The recent epidemic notwithstanding, the coming generations seem more prone to epidemics of chronic, non-infectious, inflammatory diseases. The authors suggest that a radical change in the health care model is required to counter this, centered on the emerging body of research into the human microbiome. Rather than treating the microorganisms that populate people’s guts as enemies, we must better understand the complex ecosystem each human carries within, and work to create more holistic, preventative health interventions.

Microorganisms have tremendous potential, and this has caused medicine and science to take note. When people view microorganisms through the lens of germ theory and infectious disease, they treat the relationship as a conflict with microbial ecosystems, and the treatment is usually to prescribe antibiotics without discriminating between the microorganisms. Yet even when microorganisms contribute to diseases, they live in symbiosis with their human hosts and can be instrumental in treating diseases ranging from cancer to neurological conditions. The authors contend that microorganisms live in complex, sophisticated civilizations that we have not yet come to appreciate.

The Human Genome Project made significant strides over three decades in addressing chronic inflammatory illnesses by better explaining genetics’ role in those diseases, yet only 2% of the potential therapeutic targets have been addressed. The authors call the coexistence of humans and microorganisms a “coevolutionary destiny.”

Our lifestyle and environment affect the microbiome’s composition, which plays a significant role in immune system function. The microbiome extends beyond our body and connects us to a global ecosystem of microbes that live everywhere from the soil to other creatures. The implications for our health and disease can be understood only when we consider the whole of the earth’s ecosystem as a continuous circle of life.

Environmental factors that deplete the diversity of the gut microbiota trigger poor health. When people change their dietary habits, overuse antibiotics, and overly sanitize their environments, they lose ancient microorganisms. This is the basis of Martin Blaser’s “missing microbes” hypothesis. For example, members of an East African hunter-gatherer tribe – whose diets and lifestyles resemble those of our predecessors – host several beneficial microorganisms that are absent in other populations.

This forms the basis for the suggestion that the prevalence of chronic inflammatory disease in the West is tied to a loss of microbial diversity, which could then be addressed by fortifying with probiotics. But no study has yet differentiated the probiotics and mapped the human microbiome the way scientists did with the genome.

The Human Microbiome Project has had five primary aims since its inception:

  1. Isolate and identify microbial genome sequences, much like the Human Genome Project.

  2. Establish whether a “core microbiome” exists.

  3. Find the relationship between alterations in the microbiome and disease.

  4. Develop new technologies and tools.

  5. Reflect on the legal, ethical and social dimensions of microbiome sequencing.

Good health outcomes entail balancing a complex ecosystem of microorganisms. This is evident in homeostasis, where there is a balance between states of health and disease. This approach is paralleled by holistic healing modalities such as traditional Chinese medicine.

The microbiome includes microorganisms such as bacteria, fungi, viruses, parasites, protozoa, archaea and yeast, so bacteria are not the only kind that need to be studied. Viruses – the “virome” – are also a key contributor to disease and its effects on health.

The authors suggest a framework for considering much more complex and multidimensional explanations of the drivers of human health and disease with the notion of “five pillars”: genetic predisposition; exposure to certain environmental factors such as pollution; a depleted mucosal barrier; immune system dysregulation; and a lack of balance in the microbiome. The easiest to manipulate are gut permeability, immune system function, and the microbiome.

The gut microbiome influences the development or severity of many diseases, including gut inflammatory disorders, obesity, autoimmunity, neurological and behavioral disorders, and cancer. A preventative approach to health care will employ predictive computational models.

The authors argue that we have the power to stop these epidemics of non-infectious, chronic inflammatory diseases if our scientific discoveries in the microbiome domain could be put at the service of public health policies.

The future holds promise with: 

Prebiotics – compounds that trigger the activity of beneficial bacteria in the colon.

Probiotics – ingestible bacteria, such as Lactobacillus, that people can use to balance the microbiome and boost immune function.

Correlations are being mapped between strains of probiotics and the treatment of specific diseases. 

Synbiotics – the synergistic combination of prebiotics and probiotics, which can ensure desirable strains of bacteria colonize and survive in the gut.

Postbiotics – include organic acids, peptides and enzymes. One way of consuming postbiotics is through fermented foods.

Psychobiotics – Microbiome research could result in the creation of a new class of probiotics to treat nervous system diseases such as mental health disorders and neurodegenerative diseases. 

Emerging health strategies include everything from prebiotics to psychobiotics. 

The authors conclude that these emerging fields of science demand investments of our time, talent, and resources to develop a roadmap for translating current and future scientific information into implementable clinical interventions that could change our collective destiny for the better.

 

Sunday, May 29, 2022

Giving criticism

 

This is a summary of the book The Power of Positive Criticism by Hendrie Weisinger, Ph.D.

The main takeaways:

·        Criticism must be embraced and valued as a developmental process, essential to learning, growth and success.

·        Positive criticism recognizes the merits and demerits of a situation and then evaluates them, looking for improvement.

·        What must be communicated, and why, must be made very clear.

·        Expectations and criteria for criticism must both be clearly expressed.

·        Positive criticism aims for improvement, so first positives must be recognized.

·        The recipient’s motives must be understood, and criticism must be framed so that the recipient will improve her performance.

·        Words must be chosen carefully, avoiding negative language and offering specific solutions.

·        Help with changes must surely be offered, so the recipients do not feel alone.

·        A partnership must be formed with the recipient to make change happen.

·        One’s own emotional state must be recognized so that there is calmness under stress.

Criticism is a complex, essential and powerful process. It keeps us close to the truth by evaluating our welfare. It is essential because so much depends on it. It is powerful because it can shape the future.

Criticism can assess the merits and demerits of a situation and make appropriate judgements. It also brings out the best from ourselves and those around us.

The following are some of the tips to exercise it cautiously:

1.       Positively receiving criticism is essential to good health. It is not the same as “feedback”. Appreciate it.

2.       Criticize strategically where the one giving it is but an instrument to make the recipient more productive.

3.       Be improvement oriented regardless of the comfort level and suggest curative actions.

4.       Protecting the self-esteem of the recipient goes a long way.

5.       Begin a criticism with a positive intent statement so the recipient knows where you are coming from.

6.       Criticizing your criticism helps you know it is apt.

7.       Involving the recipient in the criticism process keeps everyone open and minimizes defensiveness.

8.       Qualifying an appreciation within a criticism with a “but” is inappropriate. Offering sincere positives is good practice.

9.       Giving clear direction about what we want makes for a clearer message.

10.   As with most delivery, the timing must be right.

11.   Always minding the surroundings, the time and place to criticize someone must be observed. A rule of thumb is to never criticize when you are angry.

12.   Socrates used to ask questions that would guide the recipients to discover solutions to their problems.

13.   When giving the same criticism repeatedly, it is better to change our behavior to help drive the change in the other person.

14.   Criticism can often find its roots in unmet, uncommunicated expectations. Examining them to be realistic and adjusting them might also help.

15.   Acknowledging that criticism is subjective even when it is based on objective facts helps smoothen perception differences.

16.   Putting motivation in the criticism is a good habit. We must examine our own motivational assumptions and those of each recipient.

17.   Using stories, examples and metaphors helps convey the message.

18.   Criticism is a developmental process, so it should not be a one-time effort and must be ongoing.

19.   Knowing the criteria for criticism is important to both the giver and the receiver.

20.   Listening to one’s thoughts and processing them with rationality helps articulate criticism.

21.   Staying cool, calm and collected by practicing a relaxation response prior to conveying a criticism helps the giver.

22.   Criticism is a learning mechanism and must be adapted to stressful situations. Relying on informal relationships, timing, ambiguity and self-restraint under these circumstances help.

23.   Change becomes easier when there is a partner.

24.   Some targets of criticism are quite difficult to say anything to. Criticizing a customer, a boss or ethics can be considered out of line, but it can still be presented to help them reach their goal.

25.   Immunizing oneself against negativity by clarifying our own thoughts and feelings helps us with our work.

26.   Emotions during criticism are perceived more than the message so being mindful of how we say it is also important.

27.   Habits are infectious so positive criticism will circle back around.

Friday, May 27, 2022

 

This is a continuation of a series of articles on a crowdsourcing application, picking up from the most recent article. The original problem statement is included again for context.

 

Social engineering applications provide a wealth of information to the end user, but the questions and answers received on them are limited to just that – the social circle. Advice solicited for personal circumstances is not appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowd-sourcing opinions on a personal topic is not readily available via applications. This document tries to envision an application to meet this requirement.

 

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It talked a bit about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This lightens the write path because the data does not need to be pushed out to every device. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens during both writing and loading, and it can be made selective as well; it can be limited during both pull and push. Disabling the writes to all devices can significantly reduce the cost, and other devices can load these updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference. In this section, we talk about the retry storm antipattern.
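Below is a minimal Python sketch of the selective fan-out idea, assuming a hypothetical in-memory registry of client activity (a real deployment would back this with the queue and document store discussed earlier): updates are pushed only to recently active clients, while inactive clients pick up their pending updates the next time they check in.

```python
import time
from collections import defaultdict

class SelectiveFanOut:
    """Hypothetical in-memory fan-out registry; a real system would back this
    with the queue and document store discussed in earlier articles."""
    def __init__(self, active_window_seconds=300):
        self.active_window = active_window_seconds
        self.last_seen = {}                 # client_id -> last check-in time
        self.pending = defaultdict(list)    # client_id -> updates awaiting pull

    def check_in(self, client_id):
        """A client wakes up, records its activity, and pulls pending updates."""
        self.last_seen[client_id] = time.time()
        updates, self.pending[client_id] = self.pending[client_id], []
        return updates

    def publish(self, client_ids, update):
        """Push only to recently active clients; queue the rest for later pull."""
        now = time.time()
        pushed, deferred = [], []
        for cid in client_ids:
            if now - self.last_seen.get(cid, 0) <= self.active_window:
                pushed.append(cid)          # send over the push channel here
            else:
                self.pending[cid].append(update)
                deferred.append(cid)
        return pushed, deferred

fanout = SelectiveFanOut()
fanout.check_in("client-a")
print(fanout.publish(["client-a", "client-b"], {"feed": "new solicitation"}))
```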

 

This antipattern occurs in social engineering applications when I/O requests fail due to transient errors and services must retry their calls. Retrying helps overcome errors, throttling and rate limits, and avoids surfacing operational errors that would require user intervention. But when the number or duration of retries is not governed, the retries become frequent and numerous, which can have a significant impact on performance and responsiveness. Network calls and other I/O operations are much slower than compute tasks. Each I/O request carries significant overhead as it travels up and down the networking stack on the local and remote hosts, including the round-trip time, and the cumulative effect of numerous I/O operations can slow down the system. There are several manifestations of the retry storm.

Reading and writing individual records to a database as distinct requests – records are often fetched one at a time, with a series of queries run one after the other to get the information. This is exacerbated when an Object-Relational Mapping layer hides the behavior underneath the business logic and each entity is retrieved over several queries. The same can happen when writing an entity. When each of these queries is wrapped in its own retry, they can cause severe errors.
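A sketch of this manifestation and its batched alternative, using sqlite3 as a stand-in for the backing store and a placeholder retry wrapper (the table name and data are illustrative only): the antipattern issues one query, and one retry scope, per record, while the remedy reads all the records in a single batched query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(100)])

def with_retry(fn, attempts=3):
    """Placeholder retry wrapper; see the backoff sketch later in this article."""
    for attempt in range(attempts):
        try:
            return fn()
        except sqlite3.OperationalError:
            if attempt == attempts - 1:
                raise

# Antipattern: one query, and one retry scope, per record.
def read_one_at_a_time(ids):
    return [with_retry(lambda i=i: conn.execute(
        "SELECT body FROM posts WHERE id = ?", (i,)).fetchone()) for i in ids]

# Remedy: a single batched query wrapped in a single retry scope.
def read_batched(ids):
    placeholders = ",".join("?" for _ in ids)
    return with_retry(lambda: conn.execute(
        f"SELECT body FROM posts WHERE id IN ({placeholders})", ids).fetchall())

print(len(read_one_at_a_time(list(range(10)))), len(read_batched(list(range(10)))))
```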

Implementing a single logical operation as a series of HTTP requests – this occurs when objects residing on a remote server are represented as a proxy in the memory of the local system. The code appears as if an object is modified locally, when in fact every modification carries at least the cost of a round trip. When there are many network round trips, the cost is cumulative and can even be prohibitive. It is easily observable when a proxy object has many properties and each property get/set requires a relay to the remote object. In such a case, there is also the requirement to perform validation after every access.
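A simulated example of the proxy cost, with a fake transport that just counts round trips (all class and field names here are hypothetical): reading three properties through the proxy costs three round trips, while a single GET for the whole resource costs one.

```python
class FakeTransport:
    """Counts simulated round trips to a remote server."""
    def __init__(self):
        self.round_trips = 0
    def fetch_field(self, name):
        self.round_trips += 1
        return f"<{name}>"
    def fetch_resource(self):
        self.round_trips += 1
        return {"name": "<name>", "email": "<email>", "bio": "<bio>"}

class RemoteProfileProxy:
    """Proxy where every property access costs one round trip."""
    def __init__(self, transport):
        self._transport = transport
    def __getattr__(self, name):
        return self._transport.fetch_field(name)   # one round trip per property

transport = FakeTransport()
proxy = RemoteProfileProxy(transport)
_ = (proxy.name, proxy.email, proxy.bio)           # three round trips
print("per-property accesses:", transport.round_trips)

transport2 = FakeTransport()
profile = transport2.fetch_resource()              # one GET for the whole resource
print("single GET:", transport2.round_trips)
```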

Reading and writing to a file on disk – file I/O also hides the distributed nature of interconnected file systems. Every byte written to a file on a mount must be relayed to the original on the remote server. When the writes are numerous, the cost accumulates quickly, and it is even more noticeable when the writes are small and frequent. When individual requests are wrapped in a retry, the number of calls can rise dramatically.
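A small sketch of the same idea for file I/O, using only the Python standard library: the antipattern opens, writes a few bytes, and closes the file for every record, while the remedy buffers the records in memory and flushes them in one write.

```python
import io
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "retry_storm_demo.log")

# Antipattern: open, write a few bytes, and close for every record, with each
# small write potentially wrapped in its own retry.
def write_unbuffered(records):
    for record in records:
        with open(path, "a") as f:
            f.write(record + "\n")

# Remedy: buffer the records in memory and flush them with a single write.
def write_buffered(records):
    buffer = io.StringIO()
    for record in records:
        buffer.write(record + "\n")
    with open(path, "a") as f:
        f.write(buffer.getvalue())

write_buffered([f"event {i}" for i in range(1000)])
```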

There are several ways to fix the problem, involving both detection and remediation. The remedies include capping the number of retry attempts and preventing retrying over a long period of time. The retries can use an exponential backoff strategy that increases the duration between successive calls exponentially, handle errors gracefully, and use the circuit-breaker pattern, which is specifically designed to break the retry storm. Official SDKs for communicating with Azure services already include sample implementations of retry logic. When the I/O requests are numerous, they can be batched into coarser requests. The database can be read with one query substituting for many queries, which also gives the database an opportunity to execute it better and faster. Web APIs can be designed following REST best practices: instead of separate GET methods for different properties, there can be a single GET method for the resource representing the object. Even if the response body is large, it will likely be a single request. File I/O can be improved with buffering and caching, and files need not be opened and closed repeatedly; this also helps reduce fragmentation of the file on disk.
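A minimal sketch, not the Azure SDK implementation, of capped retries with exponential backoff and jitter, plus a simple circuit breaker that fails fast after repeated failures; parameter names and thresholds are illustrative.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a throttling or transient service error."""

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Cap the number of attempts and back off exponentially, with jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise                                  # stop retrying; surface the error
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, base_delay))

class CircuitBreaker:
    """Trip open after repeated failures so callers fail fast instead of retrying."""
    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, operation):
        if self.opened_at and time.time() - self.opened_at < self.reset_after:
            raise RuntimeError("circuit open; not calling the downstream service")
        try:
            result = operation()
        except TransientError:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()           # open the circuit
            raise
        self.failures = 0                              # success closes the circuit
        self.opened_at = None
        return result
```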

When more information is retrieved via fewer I/O calls and fewer retries, this operational necessary evil becomes less risky, but there is also a risk of falling into the extraneous-fetching antipattern. The right tradeoff depends on the usage. It is also important to read only as much as necessary to limit both the size and the frequency of calls and their retries. Sometimes data can also be partitioned into two chunks: frequently accessed data that accounts for most requests, and less frequently accessed data that is used rarely. When data is written, resources need not be locked at too large a scope or for too long a duration. Retries can also be prioritized so that only the lower-scope retries are issued for idempotent workflows.
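A toy illustration of the hot/cold partitioning mentioned above, with hypothetical stores and fields: most requests are served from the small, frequently accessed record, and the larger, rarely used record is fetched only when actually needed.

```python
# Hypothetical split of a user document: a small "hot" record serving most
# requests and a larger "cold" record read only on rare, detailed pages.
hot_store = {"user-1": {"display_name": "A.", "unread_count": 3}}
cold_store = {"user-1": {"full_history": ["..."], "preferences": {"theme": "dark"}}}

def render_feed_header(user_id):
    return hot_store[user_id]                               # most requests stop here

def render_settings_page(user_id):
    return {**hot_store[user_id], **cold_store[user_id]}    # rare, larger fetch

print(render_feed_header("user-1"))
```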

 

 

Thursday, May 26, 2022

 

This is a continuation of a recent article on multi-tenancy.

The previous article introduced multi-tenancy with an emphasis on dedicated and shared resource models. It suggested that the provisioning of resources, their lifetime, scope, and level can be isolated from the client’s perspective and run as deep into the infrastructure as necessary. The infrastructure makes choices based on costs, but the clients want separation and protection of resources, privacy and confidentiality of their data, as well as tools for control-plane and data-plane management. The benefits that the infrastructure provides with multi-tenancy include common regulatory controls, a governance and security framework, and a scheduled sweep of resources.

We took the opportunity to discuss the Azure storage service as an example of a service that implements multi-tenancy. The design choices for the components at different levels that articulate multi-tenancy were called out. Specifically, the adaptive algorithm and the movement of partitions are equally applicable to dedicated resources from a SaaS. Multi-tenant resource providers today ask their clients to choose regions because doing so provides high availability, given that resources behave with little or no difference from one region to another.

A case was made for service class differentiation from a multi-tenant service. Resources can be grouped and tiered to meet different service level classifications, but this can also go deeper into the provisioning logic of the multi-tenant service.  Service class differentiation can be achieved with quality-of-service implementation and costing strategies.

Quality-of-service guarantees are based on sharing and congestion studies. To study congestion, resource requests are marked so that the distributor can distinguish between different classes, and new distribution policies treat the requests differently. QoS guarantees provide isolation of one class from other classes. For example, when a router allocates a fixed bandwidth to each application flow, bandwidth may not be used efficiently; while providing isolation to different classes, QoS also aims to utilize the resources as efficiently as possible. Since most multi-tenancy providers have an infrastructure capacity that cannot be exceeded, there is a need for admission control. Tenants can declare their requirements, and the multi-tenant service provider can block a request if it cannot provide the resources.
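A rough sketch of such admission control, with hypothetical QoS class names and capacities: tenants declare their requirements per class, and the provider blocks a request when the declared class budget would be exceeded.

```python
class AdmissionController:
    """Tenants declare capacity needs per QoS class; requests are blocked when
    the declared class budget would be exceeded. Class names are illustrative."""
    def __init__(self, capacity_by_class):
        self.capacity = dict(capacity_by_class)      # e.g. {"gold": 100, "bronze": 50}
        self.allocated = {cls: 0 for cls in capacity_by_class}

    def admit(self, qos_class, requested_units):
        if self.allocated[qos_class] + requested_units > self.capacity[qos_class]:
            return False                             # provider blocks the request
        self.allocated[qos_class] += requested_units
        return True

    def release(self, qos_class, units):
        self.allocated[qos_class] = max(0, self.allocated[qos_class] - units)

ac = AdmissionController({"gold": 100, "bronze": 50})
print(ac.admit("gold", 80), ac.admit("gold", 30))    # second request is blocked
```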

The costing strategy has favored a pay-as-you-go basis because this improves the value for the client while giving them an incentive to reclaim resources and keep their costs down further. Costing and monitoring go hand in hand, so a certain amount of visibility into usage is demanded from the infrastructure. Continuous operation for each phase of the DevOps and IT operations lifecycles is necessary. Health, performance, and reliability of the provisioned resources play a critical role in their usage and affect billing. Continuous monitoring of an API is also possible via synthetic monitoring, which provides proactive visibility into API issues before customers find the issues themselves. This automated probing ensures end-to-end validation of specific scenarios. The steps to set up synthetic monitoring include onboarding, provisioning, and deployment.
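A minimal sketch of a synthetic probe using only the Python standard library; the endpoint URL, interval, and scenario are placeholders, and a real setup would go through the onboarding, provisioning, and deployment steps mentioned above.

```python
import time
import urllib.request

# Placeholder endpoint; a real probe would exercise a specific API scenario.
PROBE_URL = "https://example.com/api/health"

def run_probe(url, timeout=5):
    """Call the endpoint once and record its status and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        status = f"error: {exc}"
    return {"url": url, "status": status, "latency_s": round(time.monotonic() - start, 3)}

def probe_loop(interval_s=60, iterations=3):
    """Run the probe on a schedule and collect results for alerting and reporting."""
    results = []
    for _ in range(iterations):
        results.append(run_probe(PROBE_URL))
        time.sleep(interval_s)
    return results
```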

The costs can make service levels more appealing when the costing strategy involves a weighted cost analysis. By basing the costing on granular activities, the need to organize costs into broad categories, and to account separately for all the labor involved in cloud operations, is removed. A virtual data warehouse and a star schema can help capture all dimensions, including time and space, and perform aggregations for better querying, reporting and visualization.
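A toy example of a weighted cost analysis over a star-schema-like fact table; the dimensions, weights, and figures are made up purely for illustration.

```python
from collections import defaultdict

# Hypothetical fact table of granular activity costs, keyed to time, service,
# and tenant dimensions as in a star schema; the figures are illustrative.
fact_activity_cost = [
    {"date": "2022-05-01", "service": "storage", "tenant": "t1", "cost": 12.5, "weight": 1.0},
    {"date": "2022-05-01", "service": "compute", "tenant": "t1", "cost": 40.0, "weight": 1.5},
    {"date": "2022-05-02", "service": "storage", "tenant": "t2", "cost": 9.0,  "weight": 1.0},
]

def weighted_cost_by(dimension):
    """Aggregate weighted cost along one dimension of the star schema."""
    totals = defaultdict(float)
    for row in fact_activity_cost:
        totals[row[dimension]] += row["cost"] * row["weight"]
    return dict(totals)

print(weighted_cost_by("service"))   # {'storage': 21.5, 'compute': 60.0}
print(weighted_cost_by("tenant"))
```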

Wednesday, May 25, 2022

This is a continuation of a series of articles on a crowdsourcing application, picking up from the most recent article. The original problem statement is included again for context.

 

Social engineering applications provide a wealth of information to the end user, but the questions and answers received on them are limited to just that – the social circle. Advice solicited for personal circumstances is not appropriate for forums that remain in public view. It is also difficult to find the right forums or audience where responses can be obtained in a short time. When we want more opinions in a discreet manner, without the knowledge of those who surround us, the options become fewer and fewer. In addition, crowd-sourcing opinions on a personal topic is not readily available via applications. This document tries to envision an application to meet this requirement.

 

The previous article continued the elaboration on the usage of public cloud services for provisioning the queue, document store and compute. It talked a bit about the messaging platform required to support this social-engineering application. The problems encountered with social engineering are well defined and have precedents in various commercial applications. They are primarily about the feed for each user and the propagation of solicitations to the crowd. The previous article described selective fan-out. When the clients wake up, they can request their state to be refreshed. This lightens the write path because the data does not need to be pushed out to every device. If the queue sends messages back to the clients, it is a fan-out process. The devices can choose to check in at selective times, and the server can be selective about which clients to update. Both methods work well in certain situations. The fan-out happens during both writing and loading, and it can be made selective as well; it can be limited during both pull and push. Disabling the writes to all devices can significantly reduce the cost, and other devices can load these updates only when reading. It is also helpful to keep track of which clients are active over a period so that only those clients get preference. In this section, we talk about the busy frontend antipattern.

 

This antipattern occurs when many background threads starve foreground tasks of their resources, which decreases response times to unacceptable levels. There are many advantages to running background jobs: they avoid blocking interactivity for processing and can be scheduled asynchronously. But overuse of this feature can hurt performance, because the tasks consume resources that foreground workers need for interactivity with the user, leading to spinning waits and frustration for the user. It appears notably when the frontend is monolithic, compressing the business tier into the crowdsourcing application frontend. Runtime costs might shoot up if this tier is metered. A crowdsourcing application tier may have finite capacity to scale up. Compute resources are better suited to scaling out rather than scaling up, and one of the primary advantages of a clean separation of layers and components is that they can be hosted independently. Container orchestration frameworks facilitate this very well. The frontend can be as lightweight as possible and built on model-view-controller or other such paradigms, so that it is not only fast but also hosted on separate containers that can scale out.
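A small Python sketch of the starvation effect, with illustrative sleep times standing in for heavy background jobs: when interactive requests and background work share the same small pool inside the frontend process, the user request queues up behind the long-running jobs.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Interactive requests and heavy background jobs sharing one small pool
# inside the frontend process: the user request waits behind the jobs.
shared_pool = ThreadPoolExecutor(max_workers=2)

def heavy_background_job():
    time.sleep(2)            # stands in for report generation, scoring, etc.

def handle_user_request():
    return "rendered page"   # should return in milliseconds

for _ in range(4):
    shared_pool.submit(heavy_background_job)

start = time.monotonic()
page = shared_pool.submit(handle_user_request).result()
print(f"user waited {time.monotonic() - start:.1f}s for: {page}")  # roughly 4s here
```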

 

This antipattern can be fixed in one of several ways. First, the processing can be moved out of the application tier into an Azure Function or some background API layer. If the application frontend is confined to data input and output display operations, using only the capabilities that the frontend is optimized for, then it will not manifest this antipattern. APIs and queries can articulate the business-layer interactions. The application then uses the .NET framework APIs to run standard query operators on the data for display purposes.
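A minimal sketch of this remedy, with a worker thread standing in for the Azure Function or background API layer (the payload fields are hypothetical): the frontend only validates input and enqueues the work, returning immediately, while the heavy processing happens outside the request path and can scale out independently.

```python
import queue
import threading

work_queue = queue.Queue()

def frontend_submit(payload):
    """Frontend stays light: validate and enqueue, then return immediately."""
    if not payload.get("question"):
        return {"status": 400, "error": "question is required"}
    work_queue.put(payload)
    return {"status": 202, "message": "accepted for processing"}

def background_worker():
    """Stand-in for the separately hosted worker that does the heavy lifting."""
    while True:
        payload = work_queue.get()
        if payload is None:
            break
        # heavy processing (matching responders, fan-out, scoring) happens here
        work_queue.task_done()

threading.Thread(target=background_worker, daemon=True).start()
print(frontend_submit({"question": "Which city should I move to?"}))
work_queue.put(None)   # shut the worker down in this demo
```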

 

The UI is designed for purposes specific to the application. The introduction of long-running queries and stored procedures often goes against the benefits of a responsive application. If the processing is already under the control of application-side techniques that keep it responsive, then it should not be moved.

Avoiding unnecessary data transfer addresses both this antipattern and the chatty I/O antipattern. When the processing is moved to the business tier, it provides the opportunity to scale out rather than requiring the frontend to scale up.

 

Detection of this antipattern is easier with monitoring tools and the built-in supportability features of the application layer. If the frontend activity reveals significant processing and very low data emission, it is likely that this antipattern is manifesting.

 

Examining the work performed by the frontend in terms of latency and page-load times, which can be narrowed down by callers and scenarios, may reveal the view models that are likely to be causing this antipattern.

 

Finally, periodic assessments must be performed on the application tier. 

Tuesday, May 24, 2022

 

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article on Service Fabric discussed an infrastructure for hosting. In this article, we explore the Dataverse and solution layers.

Microsoft Dataverse is a data storage and management system for the various Power Applications so that they are easy to use with Power Query. The data is organized in tables, some of which are built in and standard across applications, while others can be added on a case-by-case basis for applications. These tables enable applications to focus on their business needs while providing a world-class, secure, cloud-based storage option for data that is easy to manage, easy to secure, accessible via Dynamics 365, rich in metadata, logic and validation, and comes with productivity tools. Dynamics 365 applications are well known for enabling businesses to quickly meet their business goals and customer scenarios, and Dataverse makes it easy to use the same data across different applications. It supports incremental and bulk loads of data on both a scheduled and an on-demand basis.

Logic and validation performed on the Dataverse include business rules, business process flows, workflows, and business logic with code although they are by no means limited to just these. Dataverse is another option when compared to connectors for external data sources and AI processing stacks.

Solutions are used to transport applications and components from one environment to another or to add customizations to an existing application. A solution can comprise applications, site maps, tables, processes, resources, choices, and flows. Solutions implement application lifecycle management and power Power Automate. There are two types of solutions (managed and unmanaged), and the lifecycle of a solution involves create, update, upgrade, and patch. The managed properties of a solution govern which components are customizable. A solution can be created using the navigation menu of Power Apps. Microsoft AppSource is the marketplace where solutions tailored to a business need can be found.

Managed and unmanaged solutions can coexist at different levels within a Microsoft Dataverse environment, where they form two distinct layers. What the user sees as runtime behavior comes from the active customizations of an unmanaged layer, which in turn might be supported by a stack of one or more user-defined managed solutions and system solutions in the managed layer. Managed solutions can also be merged. The solution layers feature enables one to see all the solution layers for a component.