Thursday, March 6, 2025

 This article answers the following questions:

1. Can multiple static websites be hosted in different containers on the same storage account behind the Front Door?

2. Can there be different domains associated with each of the static websites?

3. Will routing and caching change when navigating to a path directly versus via its domain?

4. Will there be a need to divert traffic on a path basis, or can it be kept at the apex level for each static website?

Answers:

1. With or without a Front Door (FD), a storage account can host multiple static websites, but the organization differs significantly. The most straightforward approach is to create a separate storage account for each static website, as called out by references 1 and 2.

2. A website can be reached via the $web container when the static website setting is enabled on the storage account. All paths to any assets in $web can be reached directly via the account's primary static-website endpoint. This makes it convenient to share assets across different usages, and even across websites, as long as callers make a web request to the same endpoint and path for the asset. Treat this just like a permalink for assets whose location must not change for the many callers that depend on it.

3. Dedicated websites that differ in purpose must be separated into folders, say website1 and website2. Both can be nested in the $web container, but their respective folder names become part of the path, as sketched below. This is an acceptable solution even without an FD, so long as it is tolerable for both websites to share the same primary endpoint and to carry /website1 or /website2 as a static path segment of the URL. This is the extent of the built-in support for one or more static websites with just a storage account.
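As a minimal sketch of this layout, the following uses the azure-storage-blob Python SDK to place two hypothetical sites under the $web container; the account URL, credential, and file names are placeholders rather than values from this setup:

from azure.storage.blob import BlobServiceClient, ContentSettings

# Placeholder account and credential; substitute your own values.
service = BlobServiceClient(
    account_url="https://<your-storage-account>.blob.core.windows.net",
    credential="<your-account-key>",
)

# Each site lives in its own folder under the $web container, so the
# folder name becomes a static path segment in the served URL.
for site in ["website1", "website2"]:
    blob = service.get_blob_client(container="$web", blob=f"{site}/index.html")
    with open(f"{site}/index.html", "rb") as data:
        blob.upload_blob(
            data,
            overwrite=True,
            content_settings=ContentSettings(content_type="text/html"),
        )

# The pages are then served from the primary static-website endpoint, e.g.:
#   https://<your-storage-account>.z13.web.core.windows.net/website1/index.html
#   https://<your-storage-account>.z13.web.core.windows.net/website2/index.html
# (the zone segment in the endpoint varies by account)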

4. Many applications want to avoid path qualifiers in the URL, simply because a path exposes the actual location to a caller. If the caller is instead given an alias, the actual location can be moved for planned or unplanned reasons without breaking callers.

5. It is at this point that FD becomes helpful, for two reasons. First, you have the luxury of as many aliases as needed to separate your business usages. Since $web is the default, even a single alias can hide the actual storage account while leveraging /website1 and /website2 as before, because the FD has no knowledge of those paths and does no routing other than to $web as a backend member, which it refers to as an origin. Second, you can nest your website anywhere in the storage account without regard to $web.

6. Aliases, aka subdomains, are also inexpensive, and having them reflect the environment or business purpose makes it easy to tell traffic apart without a lookup. Adding a separate origin for each alias keeps routing and caching at their default values, which prevents the unexpected behaviors that custom settings can introduce. More than routing, custom caching settings, also called CDN settings on FD routes, often produce a different response for the same request at different times unless the caller purges the cache, either on an ad-hoc basis or on the built-in schedule. Keeping everything simple therefore naturally leads to different storage accounts for different purposes and environments.

7. FD also has sophistication in routing and caching that deserves a callout because it is simply not possible with a storage account alone. For example, an FD allows a static website to be hosted in any container outside $web, because it treats that container as a path within the origin. The change to the routing is that FD has rewrite functionality, similar to an app gateway, where a path qualifier such as /site1 can be replaced with a full path such as /path/to/site1 while leaving the rest of the unmatched parts of the URL, including the endpoint and the individual asset file with query parameters, intact, as illustrated below.
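To make the rewrite semantics concrete, here is a small illustrative sketch in Python, not FD configuration, of how such a prefix match and replace transforms URLs; the /site1 and /path/to/site1 values are hypothetical:

from urllib.parse import urlsplit, urlunsplit

def rewrite(url, match_prefix="/site1", replace_with="/path/to/site1"):
    """Replace a matched path prefix, leaving the host, the rest of the path, and the query intact."""
    parts = urlsplit(url)
    path = parts.path
    # Match both "/site1" and "/site1/..."; callers may or may not send a trailing slash.
    if path == match_prefix or path.startswith(match_prefix + "/"):
        path = replace_with + path[len(match_prefix):]
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(rewrite("https://foo.example.com/site1/app.js?v=2"))
# https://foo.example.com/path/to/site1/app.js?v=2
print(rewrite("https://foo.example.com/site1"))
# https://foo.example.com/path/to/site1

The explicit trailing-slash handling above hints at why the pattern matching called out in limitation (a) below is tricky.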

8. This rewrite helps translate moniker URLs that do not reflect an actual location, such as https://foo.optum.com/site1/, into "https://foo.optum.com/path/to/index.html", where foo.optum.com automatically reaches the destination storage account without relying on that account's internal $web support. When testing this option, the following limitations were observed and appear noteworthy:

a. the pattern match, including wildcard characters, is very tricky to get right across all edge cases, because callers may or may not include a trailing slash at the end of their URL.

b. the only other option besides rewrite that helps with routing to multiple websites is redirect, and that introduces a whole new set of issues.

The conclusion from the above two, compared against the features available with an application gateway, is that FD has poor support for path-based routing but rich support for mapping one apex to one origin. In fact, the combination of these two limitations forces FD deployments to separate origins into different origin groups. While it is easy to leverage priority and weight to distribute all the traffic specified by a route across more than one origin in an origin group, that is only for load balancing, not for splitting user traffic across different websites.

9. The caching settings consist of a checkbox and a dropdown on each route. The defaults work well with the FrontDoor's scheduled purges, but customizations based on compression and query parameters make it possible to target different asset types and their associated key-values in the URLs. If there are further relays between the FD and the caller, these can introduce significant complexity.

10. TLS works for each alias because the certificate automatically renews elsewhere rather than being stored on the Front Door or in a key vault. This lets each apex level work independently and maintenance-free, which is why the FD itself shows how many days remain before the certificate expires while rolling it over automatically.

References:

1. https://learn.microsoft.com/en-us/answers/questions/254127/hosting-multiple-static-websites-on-azure-blob-sto

2. https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-static-website

3. https://1drv.ms/w/c/d609fb70e39b65c8/EboOa84huEROjJO7OPF1GwkBFo7QmjvquCoNm_YmLzzJ1w?e=ciAlDa

#codingexercise: CodingExercise-03-06-2025.docx

Wednesday, March 5, 2025

 This is a summary of the book titled “Beyond the Algorithm: AI, Security, Privacy, and Ethics” written by Omar Santos and Petar Radanliev and published by Addison-Wesley in 2024. The authors are AI security experts who explain the legal and security risks associated with AI- and ML-generated algorithms and statistical models in a variety of industries. Their treatise is both enticing, in that it is non-technical, and guiding, in that it discusses how to tackle the challenges. AI comes in many forms, such as analyzing data, making predictions, and automating tasks, and it is used in many contexts across different industries. GenAI and LLMs analyze patterns in existing data and generate new content based on those patterns. When these systems are targeted, hackers can exploit security vulnerabilities and stage their exploits in phases, leveraging system weaknesses such as improper access control. Aside from system vulnerabilities, privacy and ethics are core societal issues. Working with AI demands an understanding of legal issues and regulatory compliance.

Artificial Intelligence (AI) has been a subject of speculation since ancient times, with early advances in the field occurring during World War II. The field took formal shape at the Dartmouth Conference in 1956, attended by AI pioneers such as Marvin Minsky and Herbert A. Simon and building on Alan Turing's earlier work. AI uses Machine Learning (ML) to analyze data, make predictions, and automate tasks. Current AI systems are called "narrow" AI, performing specific tasks in a human-like way, while some researchers are working on "general" AI, which can learn, comprehend, and apply knowledge across various fields. AI comes in various forms, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). These innovations have the potential to change many businesses and human-computer interactions. For example, natural language generation (NLG) can turn raw, unstructured information into coherent sentences, while AI- and ML-powered speech recognition technologies can improve how humans communicate with computers and other technologies.

Generative AI and large language models (LLMs) analyze data patterns to create new content. Generative AI models like ChatGPT-4 can produce content similar to human creation, but training them can be challenging. LLMs are based on "transformer models," neural networks that weigh the significance of every word in a sequence using multiple attention "heads." OpenAI's GPT-4 is the most popular LLM, but it raises ethical concerns because it may reproduce biases in its training data. Hackers can exploit AI and ML security vulnerabilities, potentially leading to serious consequences. Adversarial attacks subtly manipulate input data to cause AI models to make mistakes, leading to accidents, financial losses, or the deception of national security surveillance systems. Data poisoning attacks alter an AI system's training data, allowing bad actors to infect social media networks and spread misinformation.

Malicious attacks on AI and ML systems occur in phases, starting with reconnaissance and acquiring resources. Attackers can gain initial access through infiltration, poisoning data sources, or manipulating public models. They use various methods to evade detection, particularly by fooling a system's breach-detection mechanisms. In the final stages, they seek valuable information and attempt to exfiltrate it from the ML system. Identifying system vulnerabilities and ensuring AI system infrastructure security is crucial, since network, physical, and software vulnerabilities can all be exploited. In an AI-dominated world, privacy and ethics are core societal issues. AI has transformed business, industry, and healthcare, and it is essential to ensure fairness and prevent discrimination. Developers must train AI on diverse datasets and monitor algorithm performance to avoid biases and maintain data privacy. Companies should obtain users' consent before collecting their personal data and ensure users retain access to that data to prevent misuse and maintain security.

AI developers must maintain privacy and security by obtaining user consent for data collection and transparent processing practices. Companies should prioritize data security and safe storage. As AI technologies advance, ethical issues must be considered. Legal and regulatory frameworks for AI are still developing, but they are becoming more significant. The European Union passed the Artificial Intelligence Act (AIA) in 2023, while the UK and the US are working on AI policies prioritizing data security, transparency, equity, and governance. The US is expected to adopt a regulatory framework for AI within the next few years.

Notice how many parallels we can draw between this review and the earlier discussion of AI evaluations in various industries [1], as well as vulnerability disclosure programs [2]. Also, the discussion of AI safety and security [3] appears to reflect a consensus among professionals.

References:

1. https://1drv.ms/w/c/d609fb70e39b65c8/ERsKj7_TWl9Kgay0P5WWaYMBhgb1-Ko5aaxab3PN2P229g?e=RTEiZr

2. https://1drv.ms/w/c/d609fb70e39b65c8/Edogg2nr_01IgzG768O3328BE9DQ8YP_vs9Bd7afNrz9Jw?e=ujoerM

3. https://1drv.ms/w/c/d609fb70e39b65c8/ETJ4CZIx_ONMgv1bk-qtPOsB3yGrFH1xAvqqUmGOEobWKQ?e=MoEFhX


Tuesday, March 4, 2025

 This follows up on a previous article, splitting a large text for use with the text-to-speech API:

import azure.cognitiveservices.speech as speechsdk
import io
import wave

def split_text(text, max_chunk_size=5000):
    """Split text into chunks of approximately max_chunk_size characters."""
    words = text.split()
    chunks = []
    current_chunk = []
    current_size = 0
    for word in words:
        if current_size + len(word) + 1 > max_chunk_size:
            chunks.append(' '.join(current_chunk))
            current_chunk = [word]
            current_size = len(word)
        else:
            current_chunk.append(word)
            current_size += len(word) + 1
    if current_chunk:
        chunks.append(' '.join(current_chunk))
    return chunks

def synthesize_text(speech_synthesizer, text):
    """Synthesize speech from text."""
    result = speech_synthesizer.speak_text_async(text).get()
    if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        return result.audio_data
    else:
        print(f"Speech synthesis failed: {result.reason}")
        return None

def combine_audio(audio_chunks):
    """Combine multiple WAV chunks into a single WAV file."""
    combined = io.BytesIO()
    with wave.open(combined, 'wb') as wav_file:
        for i, audio_chunk in enumerate(audio_chunks):
            # Each chunk is a complete RIFF/WAV payload, so read its frames
            # rather than appending the raw bytes (which would embed headers).
            with wave.open(io.BytesIO(audio_chunk), 'rb') as chunk_file:
                if i == 0:
                    # Set output parameters from the first chunk
                    wav_file.setparams(chunk_file.getparams())
                wav_file.writeframes(chunk_file.readframes(chunk_file.getnframes()))
    return combined.getvalue()

def process_large_text(text, speech_key, service_region):
    """Process large text by splitting, synthesizing, and combining audio."""
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    speech_config.set_speech_synthesis_output_format(speechsdk.SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm)
    speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
    chunks = split_text(text)
    audio_chunks = []
    for chunk in chunks:
        audio_data = synthesize_text(speech_synthesizer, chunk)
        if audio_data:
            audio_chunks.append(audio_data)
    if audio_chunks:
        return combine_audio(audio_chunks)
    return None

# Usage example
if __name__ == "__main__":
    speech_key = "YOUR_SPEECH_KEY"
    service_region = "YOUR_SERVICE_REGION"
    large_text = "Your very large text goes here... " * 1000  # Example of a large text
    result = process_large_text(large_text, speech_key, service_region)
    if result:
        with open("output.wav", "wb") as audio_file:
            audio_file.write(result)
        print("Audio file 'output.wav' has been created.")
    else:
        print("Failed to process the text.")

A large document can be split into text as shown:

from docx import Document

input_file = 'Document1.docx'
output_file = 'Text1.txt'

def process_large_file(input_file_path, output_file_path):
    try:
        doc = Document(input_file_path)
        print(f"Number of paragraphs: {len(doc.paragraphs)}")
        with open(output_file_path, 'a', encoding='utf-8') as output_file:
            for para in doc.paragraphs:
                chunk = para.text
                if chunk:
                    output_file.write(chunk)
                    output_file.write("\r\n")
    except Exception as e:
        print(f"An error occurred: {e}")

process_large_file(input_file, output_file)
print(f"Text has been extracted from {input_file} and written to {output_file}")

--

https://ezcloudiac.com/info/index.html


Monday, March 3, 2025

 A previous article covered only extractive summarization with the Azure REST APIs. This one is about producing summaries for entire manuscripts:

import requests
import json
import time

# Azure AI Language Service configuration
endpoint = "https://<your-azure-ai-resource-name>.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2023-04-01"
api_key = "<your-api-key>"
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": api_key
}

def summarize_text(text):
    body = {
        "displayName": "Document Summarization",
        "analysisInput": {
            "documents": [
                {
                    "id": "1",
                    "language": "en",
                    "text": text
                }
            ]
        },
        "tasks": [
            {
                "kind": "ExtractiveSummarization",
                "parameters": {
                    "sentenceCount": 5
                }
            }
        ]
    }
    response = requests.post(endpoint, headers=headers, json=body)
    if response.status_code == 202:
        # The job is asynchronous; its status URL comes back in a header.
        operation_location = response.headers["Operation-Location"]
        return operation_location
    else:
        raise Exception(f"Failed to start summarization job: {response.text}")

def get_summary_result(operation_location):
    while True:
        response = requests.get(operation_location, headers=headers)
        result = json.loads(response.text)
        if result["status"] == "succeeded":
            summary = result["tasks"]["items"][0]["results"]["documents"][0]["sentences"]
            return " ".join([sentence["text"] for sentence in summary])
        elif result["status"] == "failed":
            raise Exception(f"Summarization job failed: {result}")
        time.sleep(5)  # Wait for 5 seconds before checking again

def get_text(file_path):
    with open(file_path, 'r', encoding='utf-8') as file:
        file_contents = file.read()
    return file_contents

# Main execution
if __name__ == "__main__":
    text_file_path = "1.txt"
    # Read the pre-extracted text of the document
    document_text = get_text(text_file_path)
    # Start summarization job
    operation_location = summarize_text(document_text)
    print(operation_location)
    # Get summary result
    summary = get_summary_result(operation_location)
    print("Summary:")
    print(summary)

Sample Output:

"""

https://text-ctl-3.cognitiveservices.azure.com/language/analyze-text/jobs/9afb7002-7930-4448-8bd3-e3cb02287708?api-version=2023-04-01

Summary:

The public cloud offers capabilities to the general public in the form of services from the provider's services portfolio that can be requested as instances called resources. Both for the provider and the general public, IaC is a common paradigm for self-service templates to manage, capture and track changes to a resource during its lifecycle. Public cloud is the epitome of infrastructure both in terms of history and landscape and this book describes principles using references to public cloud. The IaC architecture is almost always dominated by the choice of technology stacks. When the code is delivered, configuration management and infrastructure management provide a live operational environment for testing.

"""

A large document can be split into text as shown:

from docx import Document

input_file = 'Document1.docx'
output_file = 'Text1.txt'

def process_large_file(input_file_path, output_file_path):
    try:
        doc = Document(input_file_path)
        print(f"Number of paragraphs: {len(doc.paragraphs)}")
        with open(output_file_path, 'a', encoding='utf-8') as output_file:
            for para in doc.paragraphs:
                chunk = para.text
                if chunk:
                    output_file.write(chunk)
                    output_file.write("\r\n")
    except Exception as e:
        print(f"An error occurred: {e}")

process_large_file(input_file, output_file)
print(f"Text has been extracted from {input_file} and written to {output_file}")

Instead of the “ExtractiveSummarization” value in the request, we can use “AbstractiveSummarization”. The parsing of the operation status will also need to change as follows in that case:

def get_summary_result(operation_location):
    while True:
        response = requests.get(operation_location, headers=headers)
        result = json.loads(response.text)
        if result["status"] == "succeeded":
            print(repr(result))
            # Abstractive results arrive under "summaries" rather than "sentences".
            summary = result["tasks"]["items"][0]["results"]["documents"][0]["summaries"]
            return " ".join([sentence["text"] for sentence in summary])
        elif result["status"] == "failed":
            raise Exception(f"Summarization job failed: {result}")
        time.sleep(5)  # Wait for 5 seconds before checking again

and a sample output follows:

https://text-ctl-3.cognitiveservices.azure.com/language/analyze-text/jobs/3f246bed-ebfb-4b2b-bcc3-e40582b800d1?api-version=2023-04-01

Summary:

The document discusses the architecture of Infrastructure-as-Code (IaC) within public clouds, highlighting its tiered implementation that includes IaaS, PaaS, and DevOps tools. It emphasizes the role of IaC in managing resources through code, facilitating quick and consistent provisioning, and addressing changes throughout a resource's lifecycle. The architecture is heavily influenced by the choice of technology stacks, with tools like Ansible, Terraform, and Pulumi being prominent choices. The document notes the benefits of IaC in reducing shadow IT, integrating with CI/CD platforms, and standardizing infrastructure across environments, whether cloud-based or on-premises. It distinguishes between configuration management tools, such as CFEngine, and infrastructure management tools, like Terraform and Pulumi, which can be mixed and matched to meet specific organizational needs. The summary encapsulates the essence of IaC's role in modern cloud environments, its impact on DevOps, and its capacity to manage complex infrastructures effectively.

Sunday, March 2, 2025

 Problem: A transformation sequence from word beginWord to word endWord using a dictionary wordList is a sequence of words beginWord -> s1 -> s2 -> ... -> sk such that:

• Every adjacent pair of words differs by a single letter.

• Every si for 1 <= i <= k is in wordList. Note that beginWord does not need to be in wordList.

• sk == endWord

Given two words, beginWord and endWord, and a dictionary wordList, return all the shortest transformation sequences from beginWord to endWord, or an empty list if no such sequence exists. Each sequence should be returned as a list of the words [beginWord, s1, s2, ..., sk].

Example 1:

Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]

Output: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]

Explanation: There are 2 shortest transformation sequences:

"hit" -> "hot" -> "dot" -> "dog" -> "cog"

"hit" -> "hot" -> "lot" -> "log" -> "cog"

Example 2:

Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log"]

Output: []

Explanation: The endWord "cog" is not in wordList, therefore there is no valid transformation sequence.

Constraints:

• 1 <= beginWord.length <= 5

• endWord.length == beginWord.length

• 1 <= wordList.length <= 500

• wordList[i].length == beginWord.length

• beginWord, endWord, and wordList[i] consist of lowercase English letters.

• beginWord != endWord

• All the words in wordList are unique.

• The sum of all shortest transformation sequences does not exceed 10^5.

import java.util.*;
import java.util.stream.Collectors;

class Solution {
    public List<List<String>> findLadders(String beginWord, String endWord, List<String> wordList) {
        List<List<String>> results = new ArrayList<List<String>>();
        var s = new HashSet<String>(wordList);
        var result = new ArrayList<String>();
        // Enumerate every simple transformation path via DFS, then keep only the shortest ones.
        combine(beginWord, endWord, s, results, result);
        var minOpt = results.stream().filter(x -> x.get(0).equals(beginWord)).mapToInt(x -> x.size()).min();
        if (minOpt.isPresent()) {
            var min = minOpt.getAsInt();
            results = results.stream().filter(x -> x.size() == min).collect(Collectors.toList());
        }
        return results;
    }

    private static void combine(String top, String endWord, HashSet<String> s, List<List<String>> results, List<String> result)
    {
        if (top.equals(endWord)) {
            return;
        }
        result.add(top);
        char[] chars = top.toCharArray();
        for (int i = 0; i < chars.length; i++)
        {
            for (char c = 'a'; c <= 'z'; c++)
            {
                char temp = chars[i];
                if (temp != c) {
                    chars[i] = c;
                }
                String candidate = new String(chars);
                if (s.contains(candidate) && !result.contains(candidate)) {
                    var clone = new ArrayList<String>(result);
                    if (candidate.equals(endWord)) {
                        clone.add(candidate);
                        results.add(clone);
                    } else {
                        combine(candidate, endWord, s, results, clone);
                    }
                }
                chars[i] = temp;
            }
        }
        result.remove(top);
    }
}

Test cases:

1.

Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]

Output: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]

Expected: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]

2.

Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log"]

Output: []

Expected: []


Saturday, March 1, 2025

 Sample Code for leveraging Azure AI Language Services to summarize text:

import requests
import json

def summarize_text(document, endpoint, api_key):
    # Define the API endpoint for the analyze-text jobs API (used for text summarization)
    url = f"{endpoint}/language/analyze-text/jobs?api-version=2023-04-01"
    # Set up headers with your API key
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/json"
    }
    # Define the input document for summarization
    body = {
        "analysisInput": {
            "documents": [
                {
                    "id": "1",
                    "language": "en",
                    "text": document
                }
            ]
        },
        "tasks": [
            {
                "kind": "ExtractiveSummarization",
                "taskName": "extractiveSummarization",
                "parameters": {
                    "modelVersion": "latest",
                    "sentenceCount": 3  # Adjust the number of sentences in the summary
                }
            }
        ]
    }
    # Send the POST request
    response = requests.post(url, headers=headers, json=body)
    # Check for response status
    if response.status_code == 200:
        result = response.json()
        # Extract summarized sentences from the response
        summary = result['tasks']['extractiveSummarizationResults'][0]['results']['documents'][0]['sentences']
        return " ".join([sentence["text"] for sentence in summary])
    elif response.status_code == 202:
        # The job was accepted; its status URL is in the operation-location header.
        print(f"Headers: {response.headers}")
    else:
        raise Exception(f"Error: {response.status_code}, Message: {response.text}, Headers: {response.headers}")

# Example usage
if __name__ == "__main__":
    # Replace with your Azure Text Analytics endpoint and API key
    AZURE_ENDPOINT = "https://<your-azure-ai-endpoint>.cognitiveservices.azure.com"
    AZURE_API_KEY = "<your-api-key>"
    # Input text document to summarize
    input_document = """
    Artificial intelligence (AI) refers to the simulation of human intelligence in machines
    that are programmed to think like humans and mimic their actions. The term may also
    be applied to any machine that exhibits traits associated with a human mind such as
    learning and problem-solving.
    """
    try:
        summary = summarize_text(input_document, AZURE_ENDPOINT, AZURE_API_KEY)
        print("Summary:")
        print(summary)
    except Exception as e:
        print(e)

Sample trial:

Response Headers:

{'Content-Length': '0', 'operation-location': 'https://<your-azure-ai-endpoint>.cognitiveservices.azure.com/language/analyze-text/jobs/7060ce5a-afb4-4a08-87a1-e456d486510f?api-version=2023-04-01', 'x-envoy-upstream-service-time': '188', 'apim-request-id': 'f44e40fe-e2ef-41f4-b46d-d36896f3f20d', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'x-ms-region': 'Central US', 'Date': 'Sat, 01 Mar 2025 02:59:41 GMT'}

curl -i -H "Ocp-Apim-Subscription-Key: <your-api-key>" "https://<your-azure-ai-endpoint>.cognitiveservices.azure.com/language/analyze-text/jobs/7060ce5a-afb4-4a08-87a1-e456d486510f?api-version=2023-04-01"

HTTP/1.1 200 OK

Content-Length: 933

Content-Type: application/json; charset=utf-8

x-envoy-upstream-service-time: 51

apim-request-id: 30208936-7f0d-49d5-9d36-8fc69ffc976e

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

x-content-type-options: nosniff

x-ms-region: Central US

Date: Sat, 01 Mar 2025 03:01:03 GMT

{"jobId":"7060ce5a-afb4-4a08-87a1-e456d486510f","lastUpdatedDateTime":"2025-03-01T02:59:42Z","createdDateTime":"2025-03-01T02:59:41Z","expirationDateTime":"2025-03-02T02:59:41Z","status":"succeeded","errors":[],"tasks":{"completed":1,"failed":0,"inProgress":0,"total":1,"items":[{"kind":"ExtractiveSummarizationLROResults","taskName":"extractiveSummarization","lastUpdateDateTime":"2025-03-01T02:59:42.8576711Z","status":"succeeded","results":{"documents":[{"id":"1","sentences":[{"text":"Artificial intelligence (AI) refers to the simulation of human intelligence in machines","rankScore":1.0,"offset":5,"length":87},{"text":"that are programmed to think like humans and mimic their actions.","rankScore":0.49,"offset":98,"length":65},{"text":"be applied to any machine that exhibits traits associated with a human mind such as","rankScore":0.3,"offset":187,"length":83}],"warnings":[]}],"errors":[],"modelVersion":"2024-11-04"}}]}}

References:

1. previous articles

2. Extractive Summarization:

a. https://learn.microsoft.com/en-us/azure/ai-services/language-service/summarization/overview?tabs=text-summarization

b. https://learn.microsoft.com/en-us/azure/ai-services/language-service/summarization/how-to/document-summarization#try-text-extractive-summarization

3. Abstractive Summarization: https://github.com/microsoft/AzureSearch-MRC/blob/main/README.md


Friday, February 28, 2025

 This is a summary of the book titled “Your Stone Age Brain in the Screen Age: Coping with digital distraction and sensory overload” written by Richard Cytowic and published by MIT Press in 2024. The author is a neurologist who explains how screens grab our attention and how we can reclaim it. His book discusses the impact of continuous alerts, notifications, and stimulation on the human brain, and why reclaiming your attention and engaging with the real world is not only pertinent and significant but a necessity. Excessive screen time is impairing brain development in children, resulting in reduced impulse control and even psychological harm. We are hardwired with sensory reflexes that make it difficult to peel our eyes away from a flickering screen. Cellphone and tablet usage has given rise to increased body dysmorphia and virtual autism. Depriving children of sufficient human contact can inhibit the development of empathy. Screen addictions can put you on a “hedonic treadmill”. Protect your sleep and make space for silence and connection to fight digital toxicity.

Excessive screen time is causing brain damage in children, resulting in reduced impulse control and a shorter attention span. Addiction, originating from the Latin word "addictum," refers to time spent serving a master. Many people don't see excessive screen time as a problem, but this blindness stems from tech giants exploiting human psychology to keep people glued to their screens. Social media addiction can trigger severe psychological harm and even be fatal: injuries caused by inattentive cellphone usage have resulted in 76,000 emergency room visits over the past two decades. The human brain's two hemispheres support different skill sets and distinct conceptualizations of identity, and the internal battle for control between them makes it difficult to stop giving attention to screens.

The brain's sensitivity to change, which helped early humans survive, has made digital distractions compelling and contributed to increased body dysmorphia. The brain's orienting reflex causes it to misperceive digital sound and visual interruptions as having life-or-death significance. This has led to new mental health conditions, such as Snapchat dysmorphia, a type of body dysmorphic disorder (BDD) in which individuals become distressed when their real-life face doesn't look like their edited digital one. Children with virtual autism show dramatic improvements once digital screens are removed. Heavy screen time can also produce autism-like behaviors in younger children, who don't learn to make eye contact or display context-appropriate facial expressions. Child psychiatrist Victoria Dunckley identifies digital devices as the primary source of these issues in children without autism spectrum disorder (ASD). Parents can limit their child's risk of virtual autism by organizing in-person play dates and limiting screen time for children 12 and younger.

Attachment theory, based on research by Harry Harlow, suggests that depriving children of human contact can inhibit the development of empathy. Harlow's experiments showed that monkeys raised without warmth and comfort were unable to comfort themselves, leading to a "pit of despair" and a lack of relational understanding. The iPhone generation may face a similar fate, escaping into digital worlds at the expense of developing empathy and healthy attention spans; empathy requires the ability to focus on another person long enough to understand a different perspective. Smartphone addictions can trap individuals on a "hedonic treadmill," chasing fleeting moments of happiness without genuine inner contentment. Because digital rewards are perpetually unpredictable, they never become less exciting, and the brain treats the cues for the addiction as more salient than the reward itself, trapping individuals in a constant state of craving.

To protect your sleep from blue screen light, follow these self-care and sleep hygiene practices:

1. Establish consistent bedtime and wake-up times, block out light sources, and choose a restorative sleep posture.

2. Keep your bedroom temperature between 65°F and 68°F, using a cooling gel pillow or mattress pad.

3. Keep the bathroom lights low, using low-wattage LED nightlights and candles.

4. Consider taking a walk outdoors before bed, limit digital device use, and get natural light.

5. Rethink digital habits, making room for silence and connection.

6. Engage in niksen, the art of doing nothing and putting life on pause for a few minutes.

7. Switch to paper media, writing by hand, and avoiding streaming while eating.

By following these practices, you can improve your sleep and reduce the risk of health consequences such as rapid cellular aging.