Sunday, December 31, 2023

 

Sheena Yap Chan, a podcaster, emphasizes that Asian women face unique pressures to succeed and perform at a high level while navigating harmful stereotypes that limit their careers. The dominant culture expects Asian Americans to be "model" minorities, which can harm their mental health. The term "model minority" has oppressive roots and fails to capture the diversity of Asian people. Asian women often internalize model-minority expectations of self-sufficiency and high performance, making them less likely to ask for help.

Toxic racist stereotypes limit Asian women, as non-Asian people may view them as quiet, submissive, and obedient, hindering their leadership potential. Mainstream media perpetuates anti-Asian stereotypes in subliminal ways, such as in COVID-19 articles. Asian women may be unable to imagine themselves in positions of power because they lack leadership role models. Chan encourages readers to prioritize their health and well-being, forge new leadership pathways, and break free from the harmful effects of intergenerational trauma.

The main takeaway is that Asian American women can unlock their potential by facing their trauma and prioritizing their own needs. Chan even suggests improving health through the ancient Hindu chakra system and building self-confidence. These suggestions are especially pertinent to Asian Americans because they tend to prioritize everyone else’s needs. Instead, they must practice self-care by investing in their physical, mental, spiritual, and emotional health. Find the self-care activities that appeal to you, such as listening to music, working out, napping, meditating, going to the spa, having a girls’ night, listening to a podcast, or getting a manicure.

Working to open your chakras can help you embrace your full potential. “Chakra” means “wheel” in Sanskrit and refers to the seven primary “subtle energy” discs running through your body. Each relates to a different body part and affects a different aspect of your life, such as creativity or self-confidence. In a healthy person, energy moves freely between the chakras, but stress, a poor diet, negative thoughts, or a lack of exercise can block them, triggering emotional, mental, and physical ailments. Each of the following chakras can be worked on through modalities such as breathing exercises to improve overall well-being.

1.      Muladhara – the “root chakra” is associated with feeling secure and grounded.

2.      Swadhisthana – the “sacral chakra” sits just below the belly button and lets us tap into our creative and sexual energy.

3.      Manipura – the “solar plexus chakra” is in the abdomen; balancing it helps us express ourselves with confidence.

4.      Anahata – the “heart chakra” is in the center of the chest; opening it helps us build healthy, loving relationships.

5.      Vishuddha – the “throat chakra” is in the throat; balancing it helps us express our authentic voice.

6.      Ajna – the “third eye chakra” is on the forehead and is associated with trusting our intuition.

7.      Sahasrara – the “crown chakra” sits at the top of the head; balancing it connects us to our higher self and purpose.

Self-confidence can be built by believing in yourself and remembering that you have the power to achieve your dreams, by educating yourself on different approaches to building confidence, and by taking action so that what you do aligns with your goals.

Kamala Harris, the Vice President of the United States; Kim Ng, the first woman general manager of a major sports team; Savitri Jindal, the world’s richest Asian woman; and Sandra Oh, the first actress of Asian descent to win multiple Golden Globes, remain inspirational figures.

Sheena Yap Chan hosts the award-winning podcast “The Tao of Self-Confidence,” in which she interviews Asian women about their “inner journeys to self-confidence.”

Previous book summaries: https://1drv.ms/w/s!Ashlm-Nw-wnWhOYIIwJbPCitSu_D5A?e=bz918M

 #codingexercise 

https://1drv.ms/w/s!Ashlm-Nw-wnWhOYp_QOtRtp3TXRjxg?e=SbvjXx

Saturday, December 30, 2023

 

This is a summary of the book “Crossover Creativity,” published by Harriman House in 2023. The author, Dave Trott, is a creative director, copywriter, and author of several books.

Crossover creativity, as described by creative director Dave Trott, is the process of combining seemingly disconnected ideas: a new idea results when a reaction occurs between two existing things. Trott draws on creative people and companies such as Picasso, Banksy, and IBM. To find creative solutions, marketers should be different, respond quickly to challenges, and reduce complexity. Overcoming the fear of criticism and going against the flow can lead to moments of creative genius. The quality of your brief is crucial for campaign success, and good branding requires a human element. Delivering an entertaining, simple, and true message is essential. Accidents can inspire unexpected creativity, and mistakes can sink your campaign. Be wary of unbelievable promises and focus on selling a product, not an idea. Middle managers can slow down the creative process, so it is essential to be agile and adaptable.

 

To maximize the chances of generating new ideas, marketers should remain open to new perspectives and do things differently, even if it disrupts their current understanding and patterns. They should also be wary of unbelievable promises, as they are selling a product, not an idea.

 

Crossover creativity requires challenging traditional Western concepts of intelligence, especially those that dominate marketing and advertising. Creative ideas have little value unless they can be applied practically. Being different can give a competitive advantage; for example, individuals from working-class backgrounds may have more “street smarts” than those from middle-class backgrounds.

 

Middle managers within organizations can hinder idea generation by obsessing over irrelevant minutiae, being overly cautious, worrying too much about decision propriety, and constantly referring matters to committees. Eschewing agility for scrupulous observation of process can result in dull advertisements. When facing situations that require fast, immediate solutions, marketers should either take action or do nothing.

 

In conclusion, crossover creativity is essential for marketers who want to stand out from the competition and develop the mindset needed to seize today’s opportunities.

 

Previous Book Summaries: BookSummary30.docx

Summarizing Software: https://booksonsoftware.com/text/

CodingExercise-12-30-2023.docx


Thursday, December 28, 2023

Summarizer code snippets

 

These are some code snippets to summarize text:

1.       Using gensim

# note: gensim.summarization was removed in gensim 4.0, so this snippet requires gensim 3.x

from gensim.summarization import summarize

from django.http import HttpResponse  # assumes this runs as a Django view

def shrinktext(request):

    text = request.POST.get('text', '')

    text = text.split('.')

    text = '\n'.join(text)

    try:

       summary = summarize(text)

       summary_list = []

       # de-duplicate repeated sentences in the extractive summary

       for line in summary.splitlines():

           if line not in summary_list:

              summary_list.append(line)

       summary = '\n'.join(summary_list)

    except Exception as e:

       summary = str(e)

       if type(e).__name__ == "TypeError":

          # fall back to the first sentence when summarize() cannot handle the input

          summary = ''.join(text.splitlines()[0:1])

    return HttpResponse(summary)

2.       Using langchain

!pip install openai tiktoken chromadb langchain

 

# Set env var OPENAI_API_KEY or load from a .env file

# import dotenv

 

# dotenv.load_dotenv()

from langchain.chains.summarize import load_summarize_chain

from langchain.chat_models import ChatOpenAI

from langchain.document_loaders import WebBaseLoader

 

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")

docs = loader.load()

 

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-1106")

chain = load_summarize_chain(llm, chain_type="stuff")

 

chain.run(docs)

 

Or, with all the documents combined into a single prompt:

from langchain.chains.combine_documents.stuff import StuffDocumentsChain

from langchain.chains.llm import LLMChain

from langchain.prompts import PromptTemplate

 

# Define prompt

prompt_template = """Write a concise summary of the following:

"{text}"

CONCISE SUMMARY:"""

prompt = PromptTemplate.from_template(prompt_template)

 

# Define LLM chain

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")

llm_chain = LLMChain(llm=llm, prompt=prompt)

 

# Define StuffDocumentsChain

stuff_chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="text")

 

docs = loader.load()

print(stuff_chain.run(docs))

 

3.       Using cloud APIs

setx LANGUAGE_KEY your-key

setx LANGUAGE_ENDPOINT your-endpoint

pip install azure-ai-textanalytics==5.3.0

# This example requires environment variables named "LANGUAGE_KEY" and "LANGUAGE_ENDPOINT"

import os

key = os.environ.get('LANGUAGE_KEY')

endpoint = os.environ.get('LANGUAGE_ENDPOINT')

 

from azure.ai.textanalytics import TextAnalyticsClient

from azure.core.credentials import AzureKeyCredential

 

# Authenticate the client using your key and endpoint

def authenticate_client():

    ta_credential = AzureKeyCredential(key)

    text_analytics_client = TextAnalyticsClient(

            endpoint=endpoint,

            credential=ta_credential)

    return text_analytics_client

 

client = authenticate_client()

 

# Example method for summarizing text

def sample_extractive_summarization(client, document):

    from azure.core.credentials import AzureKeyCredential

    from azure.ai.textanalytics import (

        TextAnalyticsClient,

        ExtractiveSummaryAction

    )

 

    poller = client.begin_analyze_actions(

        document,

        actions=[

            ExtractiveSummaryAction(max_sentence_count=4)

        ],

    )

 

    document_results = poller.result()

    return document_results

 

sample_extractive_summarization(client, ["Text to summarize goes here."])  # the document argument is a list of strings to summarize

 

 

There are variations possible with the LLM context window, with keyword versus latent-semantic models, and with the pipeline, but the snippets above produce readable summaries.
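For documents that exceed the model’s context window, a map-reduce chain summarizes chunks independently and then combines the partial summaries. A minimal sketch, assuming the same WebBaseLoader document as above and an OPENAI_API_KEY environment variable; the chunk size and overlap are illustrative choices:

from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

# split the document so each chunk fits comfortably in the context window
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
split_docs = splitter.split_documents(docs)

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-1106")

# map: summarize each chunk; reduce: combine the chunk summaries into one
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(split_docs))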

Wednesday, December 27, 2023

 

Transformers work very well because of three components: 1. positional encoding, 2. attention, and 3. self-attention. Positional encoding enriches the input data with positional information rather than encoding position in the structure of the network; as the network is trained on large amounts of text, the transformer learns to interpret those positional encodings. This makes transformers much easier to train than RNNs. Attention is a concept that originated in the paper aptly titled “Attention Is All You Need.” It is a structure that allows a text model to look at every single word in the original sentence when deciding how to translate a word in the output, and a heat map of the attention weights helps with understanding each word and its grammar. While attention is about the alignment of words, self-attention is about the underlying meaning of a word, disambiguating it from other usages. This often involves an internal representation of the word, also referred to as its state. When attention is directed toward the input text, the model can tell the difference between “server, can I have the check” and “I crashed the server,” interpreting the references to a human versus a machine server. The context of the surrounding words establishes this state.
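As an illustration of the attention mechanism described above, here is a minimal sketch of scaled dot-product attention in Python with NumPy. The toy query/key/value matrices are made up for the example; a real transformer learns them as projections of the token embeddings.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # how much each query token attends to each key token
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # rows sum to 1; this is the "heat map"
    return weights @ V, weights

# toy example: 4 tokens, embedding dimension 8 (random values for illustration)
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # 4x4 matrix of attention weights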

BERT, an NLP model, makes use of attention and can be used for a variety of purposes such as text summarization, question answering, classification, and finding similar sentences. BERT also helps power Google Search and Google Cloud AutoML Natural Language. Google has made BERT available for download via the TensorFlow library, while the company Hugging Face provides transformer models through its Python library.
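A minimal sketch of summarization with the Hugging Face transformers library; the model name and the input text are just examples, not the models discussed above:

from transformers import pipeline

# downloads a pretrained summarization model on first use
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = ("Transformers use positional encoding, attention, and self-attention "
        "to model language, and pretrained models such as BERT can be reused "
        "for summarization, question answering, and classification.")

result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])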

A recent Gartner study on copilots found that the most successful pilot projects focus on demonstrating business potential, not technical feasibility. The difference between the two lies in realizing the transformative potential of this technology. Since the technology is still broad and emerging, IT leaders find it hard to prioritize generative AI use cases. Organizations with mature AI practices involve business partners and software engineers as key members of their AI projects. Generative AI allows for faster development cycles than traditional AI projects. As always, and even more so with shorter development cycles, success comes from rapid testing, refinement, and the elimination of low-priority and low-severity use cases.

Tuesday, December 26, 2023

 

Using SOM for Drone Formation network:

A self-organizing map (SOM) is a machine learning technique that reduces the dimensionality of a high-dimensional dataset. It is a type of artificial neural network (ANN) that uses unsupervised learning to produce a low-dimensional representation of the training samples’ input space, known as a map. Like k-means clustering and principal component analysis (PCA), SOMs are unsupervised algorithms. Training a SOM proceeds in two stages, ordering and convergence, and the algorithm has five steps: initialization, sampling, matching, updating, and continuation.

A regression operation is applied to the map to modify the positions of its nodes, updating them one model element (e) at a time. The update follows the standard SOM learning rule: w_v(s+1) = w_v(s) + θ(u, v, s) · α(s) · (D(t) − w_v(s)), where w_v is the weight vector of node v, α(s) is the decaying learning rate at step s, θ(u, v, s) is the neighborhood function centered on the winner u, and D(t) is the current input sample.

With any distance measure, say Euclidean, the winner for an input element is the most similar node in the map. The neighborhood is defined as a convolution-like kernel over the map around the winner. This lets us update the winner and the nearby neurons and iteratively attain an optimum fit.

The starting point for the drone formation, represented by the neurons, can be a grid or a circle. In the latter case, the SOM behaves like an elastic ring, moving closer to the stimuli while trying to minimize its perimeter.
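A minimal sketch of the SOM update loop in Python with NumPy, with the neurons initialized on a circle and drone target positions as stimuli. The sizes, learning-rate schedule, and neighborhood width are illustrative choices, not necessarily those of the implementation linked below.

import numpy as np

rng = np.random.default_rng(1)
num_neurons = 32
targets = rng.uniform(0, 100, size=(10, 2))   # hypothetical 2-D drone target positions

# initialization: place neurons on a circle (the "elastic ring")
angles = np.linspace(0, 2 * np.pi, num_neurons, endpoint=False)
neurons = 50 + 25 * np.column_stack((np.cos(angles), np.sin(angles)))

for s in range(2000):
    lr = 0.8 * np.exp(-s / 1000)                  # decaying learning rate alpha(s)
    sigma = (num_neurons / 4) * np.exp(-s / 1000) # decaying neighborhood width
    x = targets[rng.integers(len(targets))]       # sampling: pick one stimulus
    winner = np.argmin(np.linalg.norm(neurons - x, axis=1))  # matching: closest node
    # ring distance between each neuron index and the winner
    d = np.abs(np.arange(num_neurons) - winner)
    d = np.minimum(d, num_neurons - d)
    theta = np.exp(-(d ** 2) / (2 * sigma ** 2))  # neighborhood kernel
    neurons += lr * theta[:, None] * (x - neurons)  # updating: pull toward the stimulus

print(neurons.round(1))  # final formation positions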

Implementation and test: https://github.com/raja0034/som4drones

Monday, December 25, 2023

 

Problem statement: Given a wire grid of size N * N, with N-1 horizontal edges and N-1 vertical edges along the X and Y axes respectively, and a wire burning out at every instant in the order given by three arrays A, B, and C, such that the wire that burns at instant T is

(A[T], B[T] + 1), if C[T] = 0 or
(A[T] + 1, B[T]), if C[T] = 1

Determine the instant after which the circuit is broken

     public static boolean checkConnections(int[] h, int[] v, int N) {

        boolean[][] visited = new boolean[N][N];

        dfs(h, v, visited,0,0);

        return visited[N-1][N-1];

    }

    public static void dfs(int[]h, int[]v, boolean[][] visited, int i, int j) {

        int N = visited.length;

        if (i < N && j < N && i>= 0 && j >= 0 && !visited[i][j]) {

            visited[i][j] = true;

            if (v[i * (N-1) + j] == 1) {

                dfs(h, v, visited, i, j+1);

            }

            if (h[i * (N-1) + j] == 1) {

                dfs(h, v, visited, i+1, j);

            }

            if (i > 0 && h[(i-1)*(N-1) + j] == 1) {

                dfs(h,v, visited, i-1, j);

            }

            if (j > 0 && v[(i * (N-1) + (j-1))] == 1) { // the left move checks the vertical-wire array, mirroring the right move above

                dfs(h,v, visited, i, j-1);

            }

        }

    }

    public static int burnout(int N, int[] A, int[] B, int[] C) {

        int[] h = new int[N*N];

        int[] v = new int[N*N];

        for (int i = 0; i < N*N; i++) { h[i] = 1; v[i] = 1; }

        for (int i = 0; i < N; i++) {

            h[(i * (N)) + N - 1] = 0;

            v[(N-1) * (N) + i] = 0;

        }

        System.out.println(printArray(h));

        System.out.println(printArray(v));

        for (int i = 0; i < A.length; i++) {

            if (C[i] == 0) {

                v[A[i] * (N-1) + B[i]] = 0;

            } else {

                h[A[i] * (N-1) + B[i]] = 0;

            }

            if (!checkConnections(h,v, N)) {

                return i+1;

            }

        }

        return -1;

    }

        int[] A = new int[9];

        int[] B = new int[9];

        int[] C = new int[9];

        A[0] = 0;    B [0] = 0;    C[0] = 0;

        A[1] = 1;    B [1] = 1;    C[1] = 1;

        A[2] = 1;    B [2] = 1;    C[2] = 0;

        A[3] = 2;    B [3] = 1;    C[3] = 0;

        A[4] = 3;    B [4] = 2;    C[4] = 0;

        A[5] = 2;    B [5] = 2;    C[5] = 1;

        A[6] = 1;    B [6] = 3;    C[6] = 1;

        A[7] = 0;    B [7] = 1;    C[7] = 0;

        A[8] = 0;    B [8] = 0;    C[8] = 1;

        System.out.println(burnout(9, A, B, C));

1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0

8
Alternatively, a binary search over the burn sequence finds the breaking instant with fewer connectivity checks:

    public static boolean burnWiresAtT(int N, int[] A, int[] B, int[] C, int t) {

        int[] h = new int[N*N];

        int[] v = new int[N*N];

        for (int i = 0; i < N*N; i++) { h[i] = 1; v[i] = 1; }

        for (int i = 0; i < N; i++) {

            h[(i * (N)) + N - 1] = 0;

            v[(N-1) * (N) + i] = 0;

        }

        System.out.println(printArray(h));

        System.out.println(printArray(v));

        for (int i = 0; i < t; i++) {

            if (C[i] == 0) {

                v[A[i] * (N-1) + B[i]] = 0;

            } else {

                h[A[i] * (N-1) + B[i]] = 0;

            }

        }

        return checkConnections(h, v, N);

    }

    public static int binarySearch(int N, int[] A, int[] B, int[] C, int start, int end) {

        if (start == end) {

            if (!burnWiresAtT(N, A, B, C, end)){

                return end;

            }

            return  -1;

        } else {

            int mid = (start + end)/2;

            if (burnWiresAtT(N, A, B, C, mid)) {

                return binarySearch(N, A, B, C, mid + 1, end);

            } else {

                return binarySearch(N, A, B, C, start, mid);

            }

        }

    }

1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0

8

Saturday, December 23, 2023

Complex deployments using IaC:

A complex deployment is one that has multiple layers, resource groups, and resource types. Creating a complex deployment using IaC is fraught with errors at both the plan and execution stages. The IaC compiler can detect only those errors that can be statically determined from the IaC. Runtime execution errors are more common because policy violations are not known until the actual deployment and, given the diverse set of resources that must be deployed, the errors are not always well known. Name length limits, invalid security principals, locked resources, mutual incompatibility of resource pairs, and conflicting settings between resources are just a few examples.

 

A realization dawns as the size and scale of the infrastructure grow: the celebrated tenets of IaC, such as reproducibility, self-documentation, visibility, error-freeness, lower TCO, drift prevention, the joy of automation, and self-service, somewhat diminish as the time and effort to overcome its brittleness increase exponentially. Packages go out of date, features become deprecated and stop working, backward compatibility is hard to maintain, and all existing resource definitions have a shelf life. Similarly, assumptions are challenged when the cloud provider and the IaC provider describe attributes differently. The information contained in IaC can be hard to summarize in an encompassing review unless we go block by block. It is also easy to shoot oneself in the foot with a typo, or with a command that creates and destroys instead of changing, especially when the state of the infrastructure disagrees with that of the portal.

 

The TCO of IaC for a complex deployment does not include the man-hours required to keep it in working condition and to assist with redeployments and syncing. One-off investigations are too many to count when deployments are large and complex. The sheer number of resources and their tracking via names and identifiers can be exhausting. A sophisticated CI/CD system for managing accounts and deployments is good automation, but it is also likely to be run by several contributors. When edits are allowed and common automation accounts are used, it can be difficult to know who made a change and why.

 

Some flexibility is required to make judicious use of automation and manual interventions to keep the deployments robust. Continuously updating the IaC, especially by the younger members of the team, is not only a comfort but also a necessity. The more mindshare a complex IaC gets, the more likely it is to reduce the costs associated with maintaining it and dispel some of the limitations mentioned earlier.

 

As with all solutions, scope and boundaries apply. It is best not to let IaC sprawl so much that high-priority and high-severity deployments are affected. It can also be treated like code, with its own index, model, and co-pilot.

References to build the first co-pilot:  

1.      https://github.com/raja0034/azureml-examples 

2.      https://github.com/raja0034/openaidemo/blob/main/copilot.py 

References: previous articles on IaC

Friday, December 22, 2023

Automated Cloud IaC using Copilot:


A copilot is an AI companion that communicates with a user through prompts and responses. It can be used with various services such as Azure and Security, and it respects subscription filters. Copilots help users figure out workflows, queries, code, and even links to documentation. They can also obey commands, such as changing the theme to light or dark mode. Copilots are well integrated, with many connectors and types of data sources supported. They implement different natural language processing models and are available in various flagship products such as Microsoft 365 and GitHub. They can help create emails, code, and collaboration artifacts faster and better.

 

This article delves into the creation of a copilot that suggests IaC code relevant to a query. It follows the same precedent as GitHub Copilot, which helps developers write code in programming languages. GitHub Copilot is powered by the OpenAI Codex model, a modified production version of the Generative Pre-trained Transformer 3 (GPT-3). The GPT-3 model created by OpenAI features 175 billion parameters for language processing. It is a collaboration between OpenAI, Microsoft, and GitHub.

 

A copilot can be developed with no code using Azure OpenAI Studio. We just need to instantiate a studio, associate a model, add the data sources, and allow the model to train. The models differ in syntactic versus semantic search. The latter uses a concept called embeddings, which discovers the latent meaning behind the occurrences of tokens in the given data, so it is more inclusive than the former. A search for “time” will only match that keyword with a purely lexical search, but a search for “clock” will also surface references to time with a model that leverages embeddings. Either way, a search service is required to create an index over the dataset because it facilitates fast retrieval. A database such as Azure Cosmos DB can be used to assist with vector search.
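A minimal sketch of semantic matching with embeddings, using the OpenAI Python client (v1) and cosine similarity. The embedding model name and the example phrases are illustrative, and an OPENAI_API_KEY environment variable is assumed.

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

phrases = ["What time is it?", "The clock on the wall is slow", "Provision a storage account"]
query = "clock"

# embed the query and the phrases with the same embedding model
response = client.embeddings.create(model="text-embedding-ada-002", input=[query] + phrases)
vectors = np.array([item.embedding for item in response.data])
query_vec, phrase_vecs = vectors[0], vectors[1:]

# cosine similarity: higher means semantically closer, even without shared keywords
scores = phrase_vecs @ query_vec / (np.linalg.norm(phrase_vecs, axis=1) * np.linalg.norm(query_vec))
for phrase, score in sorted(zip(phrases, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {phrase}")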

 

At present, all these resources are created in the cloud, but their functionality can also be recreated on a local Windows machine with the upcoming release of the Windows AI Studio. This helps train the model on documents that are available only locally. Usually, setting up the resources takes only a couple of minutes, but training the model on all the data takes the bulk of the time, after which the model can start responding to the queries posed by the user. Once trained, the model usually responds within a couple of seconds. A cloud storage account can retain documents indefinitely and with no limit on size, but training a model on the corresponding data accrues cost that increases with the size of the data ingested to form an index.

References: previous articles on IaC

Code for web application: https://github.com/raja0034/books-app

Thursday, December 21, 2023

Creative Cloud Deployments:

The following are some novel proposals for cloud resource deployments using Infrastructure-as-code. They bring together technologies that have proven their value in other domains.

1.       Sidecar resource: The sidecar deployment is a common pattern that uses additional containers to extend the functionality of the main container. Sidecar containers run alongside the main application container, providing additional services and extending its functionality. They are active throughout the pod’s lifecycle and can be started and stopped independently of the main container. Although not quite as popular on Azure, there are a few examples in AWS that make use of this deployment pattern. For example, the Open Policy Agent runs as a sidecar deployment in Amazon Elastic Container Service (Amazon ECS), in its own process with high levels of isolation and encapsulation. The Open Policy Agent (OPA) is an open-source, general-purpose policy engine that lets us specify policy as code and provides simple APIs to offload policy decision-making from applications (a minimal query sketch follows this list). A connection classifier is a popular policy evaluation module that can receive incoming requests and perform a policy evaluation against stored data and policy documents. Logging, monitoring, and authorization are some of the other usages of sidecar deployments, but these have become first-class citizens of the Azure public cloud, enabling a consistent experience across resources that is more popular and less restrictive than the sidecar model. Sidecar models also suffer from increased resource consumption and complexity, potential performance degradation and latency, and security and reliability risks. Central infrastructure deployment teams for various business divisions or campaigns can leverage new sidecars that work with specific deployment templates, such as those for app services, their alerts, and network infrastructure, to provide analytical models using machine learning or data-mining algorithms. By virtue of deployment templates, the models target highly scoped, cross-resource activities and artifacts to make their inferences and avoid the use of large analytical public cloud resources, such as Data Explorer clusters, which can become quite expensive.

2.       Pseudo-resources: An extension of the sidecar pattern, more broadly applicable across deployments and scoped at a higher level (possibly the entire subscription rather than just the resource group), is the idea that a custom cloud resource can be deployed that effectively works as a combination of existing resource types. By giving it the designation of a pseudo-resource, the combination gets a name and visibility akin to out-of-the-box cloud resources. T-shirt sizing and deployment templates are very popular in this regard. For example, if a shopping store has dedicated microservices for catalog, cart, rewards, credit, and so on, then providing infrastructure to the shopping store can be done in the form of a custom resource that can even be sized as small, medium, or large.

3.       Functionality and analytics are different from one another, and this can be leveraged to package custom queries and data connections that provide ease of use for the owner of the functionality needed from the infrastructure. The cloud offers Graph Explorer and Data Explorer as analytical platforms that can work with many published data connections and include new custom data connections, but analytical solutions dedicated to the functionality can abstract the query, the interactivity, and the data to make it simpler and easier for owners to learn more about their pseudo-resources or deployments.
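As an illustration of the policy-offload pattern described in item 1, here is a minimal sketch of an application asking an OPA sidecar for a decision over OPA’s REST data API. The policy package name, rule, and input fields are hypothetical, and OPA is assumed to be listening on its default port 8181.

import requests

# hypothetical policy package "deployments" with an "allow" rule loaded into OPA
OPA_URL = "http://localhost:8181/v1/data/deployments/allow"

def is_allowed(resource_type: str, region: str) -> bool:
    # OPA evaluates the policy against this input document and returns the rule's value
    payload = {"input": {"resource_type": resource_type, "region": region}}
    response = requests.post(OPA_URL, json=payload, timeout=5)
    response.raise_for_status()
    return response.json().get("result", False)

if __name__ == "__main__":
    print(is_allowed("app_service", "eastus"))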

References: previous articles on IaC

Wednesday, December 20, 2023

 

This is a continuation of previous articles on IaC shortcomings and resolutions. The case study in this example refers to the use of aliases, which are different from the keys and names used in resource definitions in IaC modules. Aliases are friendly names in place of the actual names used for resources, which are usually multi-part and follow a predefined format with prefixes and/or suffixes.

While the names must be unique, conform to patterns and conventions, stay within a maximum character limit, and belong to a namespace or scope, aliases are less restrictive and are usually short forms of the names. These aliases are very helpful for repeated use in definitions, scripts, and languages, and they serve their purpose in runtimes, operating systems, and cloud resources.

An alias is only valid in the plane it is defined in, not in the layers above or below. For example, an alias used in SQL statements to refer to existing database objects is only valid in the database and is saved in the master catalog; it does not hold any meaning in the cloud resources. Cloud resources might also make use of their own aliases, and while it is convenient to use the same value across different layers, aliases hold significance only in their specific layer.

Some aliases are used as labels as well, but a label is just that and has no backing property, storage, or target resource. These labels are also interchangeable without any impact on functionality and might serve additional analytical usages, such as billing. One example is the Microsoft Entra admin alias, where the actual value used for securing the resource is passed in as an entity id to the aad_object_id attribute and is independent of the alias. Using both the id and the alias in the same resource definition is quite popular because one is used for display purposes and the other for backend processing and referencing.

Associations can also be referred to with aliases, such as key vault keys, but it is important not to over-engineer aliases with the same rigor as names. Whenever we draw up a list, whether of names or aliases, we tend to codify additional metadata into them with prefixes and suffixes, but that is incorrect in the case of aliases.

When the associations are to logins and identities, they automatically gain usage in the resources. It is not necessary to add role assignments, and all users will use this credential whenever they access resources anonymously across the wire. To pass through their respective credentials, they must ask the resource to do so by virtue of the properties allowed in its definition or settings.

Reference: previous articles

 

Tuesday, December 19, 2023

 

This is a continuation of previous articles on IaC shortcomings and resolutions. One of the common challenges faced during resource creation is cross-subscription associations. Generally, subscriptions are independently controlled, and IaC deployments are specific to a single subscription at any point in the deployment. However, resources can exist in external subscriptions and become associated with a resource in the current subscription. For example, a private endpoint can be created in the Azure management portal to access an app service in another subscription by specifying the fully qualified resource identifier of the app service, and this private endpoint will be created in a local subnet and virtual network in the current subscription.

In such a case, the association must be accepted by the owners of the destination resource. Since the service principal or enterprise application used for the deployment in the current subscription might not have access to the other subscription, the creation of the association resource in the local subscription will fail with an error message saying that approval is required, and the instance will not be created. This error message is specific to automation and not to the manual case in which a cross-subscription association is created. In the manual case, the resource is created but has a pending status until the owner of the associated resource in the other subscription accepts the connection. The separation of creation and approval stages is only available in the manual case; in the automated case, there is no way to force the creation of the resource and defer the approval process.

The resolution in this case is to grant the service principal permissions to both create and auto-approve the association with the target resource. The error message will call out the permission required to auto-approve, and this permission will be part of the permission set of one of the built-in roles. Such a role must be assigned to the service principal used for deployment in the destination subscription, resource group, or resource hierarchy. On the Azure portal, there is a permissions menu item for the service principal, but admin consent might be required, and a service ticket might need to be opened to ask the tenant admin to grant adequate permissions. The role assignment is not at the tenant level but at the subscription level, and the owners of the targeted subscription can assign the built-in role to this service principal.

It is also possible to take a hybrid approach when creating the resource: create it manually first, then import it into the IaC state, and finally run the pipeline automation to reconcile the actual resource, the IaC state, and the code. Care must be taken to include the attribute necessary to inform the compiler that the resource being created requires a manual connection. Usually, this is done with an attribute like ‘is_manual_connection’ set to true.

Finally, it is possible to assign either a static or a dynamic private IP address to a connection resource without requiring a change to its name. The difference is that with a static assignment we also get an address that, like the name, does not change, and sometimes the network resolves IP addresses better than names, given that DNS registrations might not be added until the resource is created.