Monday, April 14, 2025

Comparison of CNN-LSTM and Logistic Regression

Deep learning has shown superior performance in object detection and image classification on drone imagery, while logistic regression predicts well from drone telemetry data. Although they serve different purposes, the two can be compared on a common use case: predicting deviation from a trajectory, and the compensation for it, based on past and current orientations. The CNN-LSTM uses mean squared error as its cost function and takes the vectorized output of edge-detected, Gaussian-smoothed frames captured sequentially from video to predict the drone's next steering angle. The same data can also be emitted as telemetry, along with additional telemetry from edge detections, trajectory, and squared errors, and the corresponding feature vectors can then be run through a classifier, as shown below. Note that the sample uses microsoftml's rx_fast_trees, a boosted-tree learner; a logistic regression learner could be substituted in the same workflow.

Sample usage:

#!/usr/bin/env python
import os

import pandas

here = os.path.dirname(__file__) if "__file__" in locals() else "."
data_file = os.path.join(here, "data", "flight_errors", "data.csv")
data = pandas.read_csv(data_file, sep=",")

# y is the last column and the variable we want to predict. It holds a boolean value.
data["y"] = data["y"].astype("category")
print(data.head(2))
print(data.shape)
data["y"] = data["y"].apply(lambda x: 1 if x == 1 else 0)
print(data[["y", "X1"]].groupby("y").count())

try:
    from sklearn.model_selection import train_test_split
except ImportError:
    # Older scikit-learn releases kept this helper in cross_validation.
    from sklearn.cross_validation import train_test_split

train, test = train_test_split(data)

from microsoftml import rx_fast_trees, rx_predict

features = [c for c in train.columns if c.startswith("X")]
model = rx_fast_trees("y ~ " + "+".join(features), data=train)
pred = rx_predict(model, test, extra_vars_to_write=["y"])
print(pred.head())

from sklearn.metrics import confusion_matrix

conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
print(conf)

def train_test_hyperparameter(trainA, trainB, **hyper):
    # Train a model on trainA and evaluate it on trainB.
    features = [c for c in trainA.columns if c.startswith("X")]
    model = rx_fast_trees("y ~ " + "+".join(features), data=trainA, verbose=0, **hyper)
    pred = rx_predict(model, trainB, extra_vars_to_write=["y"])
    conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
    return (conf[0, 0] + conf[1, 1]) / conf.sum()

trainA, trainB = train_test_split(train)
hyper_values = [5, 10, 15, 20, 25, 30, 35, 40, 50, 100, 200]
perfs = []
for val in hyper_values:
    acc = train_test_hyperparameter(trainA, trainB, num_leaves=val)
    perfs.append(acc)
    print("-- Training with hyper={0} performance={1}".format(val, acc))

import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
ax.plot(hyper_values, perfs, "o-")
ax.set_xlabel("num_leaves")
ax.set_ylabel("% correctly classified")

# Retrain on the full training set with the best num_leaves found above.
best = max(zip(perfs, hyper_values))
print("max={0}".format(best))
model = rx_fast_trees("y ~ " + "+".join(features), data=train, num_leaves=best[1])
pred = rx_predict(model, test, extra_vars_to_write=["y"])
conf = confusion_matrix(pred["y"], pred["PredictedLabel"])
print(conf)
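Since the entry contrasts the fast-trees sample with logistic regression, a minimal from-scratch logistic regression may make the comparison concrete. This is an illustrative sketch only: the two features and the toy labeling rule below are invented and are not the flight-error dataset.

```python
import math
import random

def predict_proba(weights, bias, x):
    # Sigmoid of the linear score gives P(y = 1 | x).
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.1, epochs=500):
    # Plain stochastic gradient descent on the log loss.
    weights = [0.0] * len(X[0])
    bias = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = predict_proba(weights, bias, xi)
            err = p - yi  # gradient of the log loss w.r.t. the score
            bias -= lr * err
            weights = [w - lr * err * f for w, f in zip(weights, xi)]
    return weights, bias

random.seed(0)
# Toy telemetry: label 1 when the summed error signals exceed a threshold.
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
y = [1 if x1 + x2 > 0 else 0 for x1, x2 in X]

weights, bias = fit(X, y)
acc = sum((predict_proba(weights, bias, xi) > 0.5) == bool(yi)
          for xi, yi in zip(X, y)) / len(X)
print("training accuracy:", acc)
```

Because the toy labels are linearly separable, the learned boundary recovers the generating rule almost exactly.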


Sunday, April 13, 2025

Emerging trends and regulations in UAV swarms

The units of a full-fledged, safe, and autonomous swarm are drones, and when the entire swarm is homogeneous, adherence to the Federal Aviation Administration (FAA) Small UAS Rule (Part 107) is sufficient. This regulation mandates the following of the drones:

  • Drone Weight: Must weigh less than 55 pounds, including payload.
  • Visual Line of Sight (VLOS): The drone must remain within the operator's unaided visual line of sight.
  • Daylight Operations: Flights are allowed during daylight or twilight (with anti-collision lighting).
  • Maximum Altitude: Cannot exceed 400 feet above ground level unless within 400 feet of a structure.
  • Maximum Speed: Limited to 100 mph (87 knots).
  • Airspace Restrictions: Operations in controlled airspace require prior FAA authorization.
  • No Flying Over People: Unless they are directly involved in the operation.
  • No Moving Vehicles: Cannot operate from a moving vehicle unless in a sparsely populated area.
  • Weather Visibility: Minimum visibility of 3 miles from the control station.
  • Pilot Certification: Operators must hold a Remote Pilot Certificate or be supervised by someone who does.
  • Registration: All drones must be registered with the FAA.

For example, Amazon's drone delivery system, known as Prime Air, is designed to deliver packages weighing up to 5 pounds within 30 minutes. The drones are fully electric and incorporate advanced aerospace standards to ensure safety and reliability, holding a Part 135 Air Carrier Certificate from the FAA in addition to operating under FAA Part 107. The drones are equipped with a sophisticated sense-and-avoid system that enables them to detect and navigate around obstacles, both static (like chimneys) and dynamic (like other aircraft). This system uses proprietary algorithms for object detection and decision-making, ensuring safe operations even in unexpected situations. The algorithms leverage a diverse suite of object detection technologies to identify obstacles and adjust flight paths accordingly. During the delivery descent, the drones can detect and avoid smaller obstacles like trampolines or clotheslines that might not be visible in satellite imagery. An automated drone-management system is being developed to plan the flight paths, ensure safe distances between the aircraft and other aircraft in the area, and confirm that all aviation regulations are complied with.

The autonomous drone delivery system features a deep learning autonomous drone model built using CNN-LSTM algorithms. It includes functionalities like online purchasing, drone delivery processing, and real-time location tracking. CNN-LSTM algorithms combine Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to handle tasks involving spatial and temporal data. CNNs are excellent at extracting spatial features from data such as images or spectrograms, using convolutional layers to identify patterns like edges, textures, or shapes. LSTMs are a type of Recurrent Neural Network (RNN) designed to capture temporal dependencies in sequential data; they excel at learning long-term relationships, making them ideal for tasks like time-series analysis or speech recognition. CNN layers process spatial data to extract features, which are then passed to LSTM layers that analyze the temporal relationships between them. This combination allows the model to understand both the spatial and temporal aspects of the data, making it highly effective for tasks like video analysis, activity recognition, and speech emotion detection. The technique can help with generating textual descriptions of video sequences, identifying actions in a sequence of images, and classifying emotions from audio spectrograms.
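To make the temporal half of that combination concrete, here is a minimal sketch of a single LSTM cell in plain Python. The scalar weights are toy values chosen for illustration, and the "CNN features" are just a hand-written list; a real model learns these weights and extracts the features with convolutional layers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # One LSTM time step over scalar features, for illustration only.
    # Each gate sees the current input x and the previous hidden state h_prev.
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])   # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])   # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])   # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"]) # candidate cell
    c = f * c_prev + i * g   # new cell state (long-term memory)
    h = o * math.tanh(c)     # new hidden state (short-term output)
    return h, c

def run_sequence(features, w):
    # Feed a sequence of per-frame CNN feature scalars through the LSTM.
    h, c = 0.0, 0.0
    for x in features:
        h, c = lstm_step(x, h, c, w)
    return h  # final hidden state, e.g. regressed to a steering angle

# Toy weights (all gates share simple values); a trained model learns these.
w = {k: 0.5 for k in ["wi", "ui", "wf", "uf", "wo", "uo", "wg", "ug"]}
w.update({k: 0.0 for k in ["bi", "bf", "bo", "bg"]})

# Pretend these are pooled CNN activations from four consecutive frames.
print(run_sequence([0.2, 0.4, 0.6, 0.8], w))
```

The forget gate is what lets the cell carry information across many frames, which is why the combination suits sequences of video features.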

Amazon's CNN-LSTM predictor makes use of Gaussian-smoothing and edge-detection preprocessing functions from image-processing libraries for steering-angle dataset exploration. The YOLOv3 architecture is used to find bounding boxes of cars, people, and trees in the image dataset. These bounding boxes feed a probability model that calculates the probability of collision, with weight-determination functions estimating the probability of colliding with any given object. A pilot script is used to fly the drone.
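The two preprocessing steps named above, Gaussian smoothing followed by edge detection, can be sketched in plain Python over a tiny grayscale frame represented as nested lists. A production pipeline would use an image library such as OpenCV; the 3x3 kernels here are the standard Gaussian and Sobel kernels.

```python
GAUSSIAN_3X3 = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # weights sum to 16
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel, norm=1):
    # Brute-force 3x3 convolution; borders are left at zero.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc / norm
    return out

def edge_magnitude(img):
    # Smooth first so the gradient responds to real edges, not pixel noise.
    smoothed = convolve3x3(img, GAUSSIAN_3X3, norm=16)
    gx = convolve3x3(smoothed, SOBEL_X)
    gy = convolve3x3(smoothed, SOBEL_Y)
    return [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5 for x in range(len(img[0]))]
            for y in range(len(img))]

# A tiny frame with a vertical brightness step down the middle.
frame = [[0, 0, 0, 0, 255, 255, 255, 255] for _ in range(8)]
edges = edge_magnitude(frame)
# Edge response is strongest near the step and zero in the flat regions.
```

Vectorizing such edge maps over consecutive frames yields exactly the kind of sequential input the CNN-LSTM predictor described above consumes.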


Saturday, April 12, 2025

Emerging Trends of AI in Autonomous Business


The digital business era is maturing, with industry-leading enterprises seeking the next technology-enabled business growth curve. Autonomous business is a style of business partly governed and majority-operated by self-learning software agents, providing smart products and services to machine-customer-prevalent markets in a programmable economy. Executive leaders should factor autonomous business concepts into their long-range business strategy cycle, identify early "land grabs" needed to secure a competitive foothold, and pay attention to the possible arrival of machine customers in their markets. The concept of autonomous business is still emerging, and its contours may assume a different market term in the future. Examples of autonomous business include fingerprint-recognition door locks, people-tracking camera drones, voice assistants, and chatbots.


Operating in a programmable economy involves organizations trading with customers and other entities via decentralized blockchain ledgers, using smart contracts and digital tokens for value exchanges. This evolution of autonomous business will not be fundamentally dehumanizing; it may lead to a four-day workweek in advanced economies, but not to mass unemployment and societal crises. The definition of autonomous business is indicative rather than absolute, and it follows from prior evolutionary stages of digital and information-technology-enabled business capability and strategic value focus. Autonomous business builds on the prior phases, which will continue to grow and add value even if their progress rate slows as autonomous business matures. It will rely heavily on golden-thread business technology capacities, such as composability, that have helped weave the previous eras and continue to evolve and advance.


The next era, "metaversal business," is expected to emerge from the integration of people into cyberspace, blurring the boundary between humans and machines. However, widespread deep immersion in cyberspace is unlikely, and direct interfacing is unlikely before the 2040s. Autonomous business will become a significant macro business technology concept in industries like mining, financial services, automotive, aerospace, defense, smart cities, medicine, research, higher education, and entertainment. It will depend on technologies that are already available and rapidly advancing, and will involve machine-controlled operations, augmented governance, and interaction with customers and other businesses through blockchain-enabled mechanisms. The programmable economy, based on distributed and decentralized digital resources, supports the production and consumption of goods and services, enabling innovation, entrepreneurship, and value exchange among humans and machines. 


#codingexercise: https://1drv.ms/w/c/d609fb70e39b65c8/EYMCYvb9NRtOtcJwdXRDUi0BVzUEyGL-Rz2NKFaKj6KLgA?e=fBM3eo

Friday, April 11, 2025

 #codingexercise

Problem: A transformation sequence from word beginWord to word endWord using a dictionary wordList is a sequence of words beginWord -> s1 -> s2 -> ... -> sk such that:

Every adjacent pair of words differs by a single letter.

Every si for 1 <= i <= k is in wordList. Note that beginWord does not need to be in wordList.

sk == endWord

Given two words, beginWord and endWord, and a dictionary wordList, return all the shortest transformation sequences from beginWord to endWord, or an empty list if no such sequence exists. Each sequence should be returned as a list of the words [beginWord, s1, s2, ..., sk].

 

Example 1:

Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]

Output: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]

Explanation: There are 2 shortest transformation sequences:

"hit" -> "hot" -> "dot" -> "dog" -> "cog"

"hit" -> "hot" -> "lot" -> "log" -> "cog"


Example 2:

Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log"]

Output: []

Explanation: The endWord "cog" is not in wordList, therefore there is no valid transformation sequence.


 

Constraints:

1 <= beginWord.length <= 5

endWord.length == beginWord.length

1 <= wordList.length <= 500

wordList[i].length == beginWord.length

beginWord, endWord, and wordList[i] consist of lowercase English letters.

beginWord != endWord

All the words in wordList are unique.

The sum of all shortest transformation sequences does not exceed 10^5.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.stream.Collectors;

class Solution {
    public List<List<String>> findLadders(String beginWord, String endWord, List<String> wordList) {
        List<List<String>> results = new ArrayList<List<String>>();
        var s = new HashSet<String>(wordList);
        var result = new ArrayList<String>();
        // Enumerate every transformation sequence, then keep only the shortest ones.
        combine(beginWord, endWord, s, results, result);

        var minOpt = results.stream().mapToInt(x -> x.size()).min();
        if (minOpt.isPresent()) {
            var min = minOpt.getAsInt();
            results = results.stream().filter(x -> x.size() == min).collect(Collectors.toList());
        }
        return results;
    }

    private static void combine(String top, String endWord, HashSet<String> s, List<List<String>> results, List<String> result) {
        if (top.equals(endWord)) {
            return;
        }
        result.add(top);
        char[] chars = top.toCharArray();
        for (int i = 0; i < chars.length; i++) {
            char temp = chars[i];
            for (char c = 'a'; c <= 'z'; c++) {
                if (c == temp) {
                    continue;
                }
                chars[i] = c;
                String candidate = new String(chars);
                if (s.contains(candidate) && !result.contains(candidate)) {
                    var clone = new ArrayList<String>(result);
                    if (candidate.equals(endWord)) {
                        clone.add(candidate);
                        results.add(clone);
                    } else {
                        combine(candidate, endWord, s, results, clone);
                    }
                }
            }
            chars[i] = temp;
        }
        result.remove(top);
    }
}

Test cases:

1.
Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log","cog"]
Output: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]
Expected: [["hit","hot","dot","dog","cog"],["hit","hot","lot","log","cog"]]

2.
Input: beginWord = "hit", endWord = "cog", wordList = ["hot","dot","dog","lot","log"]
Output: []
Expected: []
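The recursive search above enumerates every path and filters to the shortest afterwards, which can blow up on larger inputs. A common alternative is a breadth-first search that records parents level by level and then backtracks from the end word; a Python sketch of that approach (not the solution above, but equivalent on these tests):

```python
from collections import defaultdict
from string import ascii_lowercase

def find_ladders(begin_word, end_word, word_list):
    words = set(word_list)
    if end_word not in words:
        return []
    # BFS by level, recording every parent that reaches a word one level deeper.
    parents = defaultdict(set)
    level = {begin_word}
    found = False
    while level and not found:
        words -= level  # never revisit words from earlier levels
        next_level = set()
        for word in level:
            for i in range(len(word)):
                for c in ascii_lowercase:
                    cand = word[:i] + c + word[i + 1:]
                    if cand in words:
                        next_level.add(cand)
                        parents[cand].add(word)
                        if cand == end_word:
                            found = True
        level = next_level

    # Backtrack from end_word through the parent sets to enumerate all
    # shortest paths; no post-hoc length filtering is needed.
    results = []
    def backtrack(word, path):
        if word == begin_word:
            results.append([begin_word] + path[::-1])
            return
        for p in parents[word]:
            backtrack(p, path + [word])

    if found:
        backtrack(end_word, [])
    return results

print(find_ladders("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))
```

Because the BFS stops at the level where the end word first appears, every path the backtracking produces is already shortest.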


Thursday, April 10, 2025

The following script can be used to convert the manuscript of a book into its corresponding audio production.

Option 1: individual chapters

import azure.cognitiveservices.speech as speechsdk

def batch_text_to_speech(text, output_filename):
    # Azure Speech Service configuration
    speech_key = "<use-your-speech-key>"
    service_region = "eastus"

    # Configure speech synthesis
    speech_config = speechsdk.SpeechConfig(
        subscription=speech_key,
        region=service_region,
    )

    # Set output format to MP3
    speech_config.set_speech_synthesis_output_format(
        speechsdk.SpeechSynthesisOutputFormat.Audio48Khz192KBitRateMonoMp3
    )
    speech_config.speech_synthesis_voice_name = "en-US-BrianMultilingualNeural"

    # Create audio config for file output
    audio_config = speechsdk.audio.AudioOutputConfig(filename=output_filename)

    # Create speech synthesizer
    synthesizer = speechsdk.SpeechSynthesizer(
        speech_config=speech_config,
        audio_config=audio_config,
    )

    # Split text into chunks if needed (optional)
    # text_chunks = split_large_text(text)

    # Synthesize text
    result = synthesizer.speak_text_async(text).get()
    if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        print(f"Audio synthesized to {output_filename}")
    elif result.reason == speechsdk.ResultReason.Canceled:
        cancellation_details = result.cancellation_details
        print(f"Speech synthesis canceled: {cancellation_details.reason}")
        if cancellation_details.reason == speechsdk.CancellationReason.Error:
            print(f"Error details: {cancellation_details.error_details}")

def split_large_text(text, max_length=9000):
    return [text[i:i + max_length] for i in range(0, len(text), max_length)]

for i in range(1, 100):
    input_filename = f"{i}.txt"
    print(input_filename)
    with open(input_filename, "r") as fin:
        large_text = fin.read()
        output_filename = input_filename.replace("txt", "mp3")
        print(str(len(large_text)) + " " + output_filename)
        batch_text_to_speech(large_text, output_filename)

Option 2: whole manuscript

import time
import uuid

import requests

# Azure Speech Service batch synthesis configuration
endpoint = "https://eastus.api.cognitive.microsoft.com/texttospeech/batchsyntheses/JOBID?api-version=2024-04-01"
api_key = "<your_api_key>"

headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": api_key,
}

def synthesize_text(inputs):
    body = {
        "inputKind": "PlainText",  # or SSML
        "synthesisConfig": {
            "voice": "en-US-BrianMultilingualNeural",
        },
        # Replace with your custom voice name and deployment ID if you want
        # to use a custom voice. Multiple voices are supported; a mixture of
        # custom voices and platform voices is allowed. An invalid voice name
        # or deployment ID will be rejected.
        "customVoices": {
            # "YOUR_CUSTOM_VOICE_NAME": "YOUR_CUSTOM_VOICE_ID"
        },
        "inputs": inputs,
        "properties": {
            "outputFormat": "audio-48khz-192kbitrate-mono-mp3"
        },
    }
    response = requests.put(endpoint.replace("JOBID", str(uuid.uuid4())),
                            headers=headers, json=body)
    if response.status_code < 400:
        return response.json()["id"]
    raise Exception(f"Failed to start batch synthesis job: {response.text}")

def get_synthesis(job_id: str):
    url = f"https://eastus.api.cognitive.microsoft.com/texttospeech/batchsyntheses/{job_id}?api-version=2024-04-01"
    while True:
        response = requests.get(url, headers=headers)
        if response.status_code < 400:
            status = response.json()["status"]
            if "Succeeded" in status:
                return response.json()
            print(f"batch synthesis job is still running, status [{status}]")
        time.sleep(5)  # Wait for 5 seconds before checking again

def get_text(file_path):
    with open(file_path, "r") as file:
        file_contents = file.read()
    print(f"Length of text: {len(file_contents)}")
    return file_contents

if __name__ == "__main__":
    inputs = []
    for i in range(1, 100):
        input_file_name = f"{i}.txt"
        print(input_file_name)
        document_text = get_text(input_file_name)
        inputs.append({"content": document_text})

    job_id = synthesize_text(inputs)
    print(job_id)

    # Get audio result
    audio = get_synthesis(job_id)
    print("Result:")
    print(audio)

#Codingexercise: https://1drv.ms/w/c/d609fb70e39b65c8/EV8iyT_-kuVCp1f6IVela_0BRuHHSQwBqNnng7Ztz4cQaA?e=ZHpPON


Wednesday, April 9, 2025

This is a summary of the book titled "Crash Landing" by Liz Hoffman, published by Crown in 2023. When the pandemic hit, it dwarfed the 2008 recession. Leaders had to act fast, and billions of dollars changed hands; some companies made money while others barely endured. Hoffman provides intimate portraits of the leaders who navigated these times, as the inside story of how some companies survived an economy on the brink. Many, such as America's airline industry, were blindsided by the pandemic, and by the time they could grapple with the reality, the crisis was a major one. When money stopped flowing, companies borrowed. The US government threw its vast financial firepower at the crisis, but the economy that survived was no longer the one that existed before the pandemic.

In late January 2020, the financial elite at the World Economic Forum in Davos, Switzerland, were unaware of the potential impact of COVID-19 on the global economy. The US economy had experienced 10 consecutive years of growth, and corporate profits reached a record high in 2019. However, the American economy was particularly vulnerable to a health crisis due to stagnant wages, reduced workers' benefits, and a lack of surplus funds. The virus was already affecting Taiwan and Japan, and it would soon appear in Europe and North America. The airline industry, which had enjoyed a champagne decade, was also vulnerable to the virus. In 2020, the globalized world was interconnected not by land or sea but by air: direct flights reached twice as many cities as they had 20 years earlier. A 35-year-old man returning from China in mid-January became America's first reported case of COVID-19, unaware that he carried the potentially deadly virus.

In March 2020, the world faced a major crisis due to the COVID-19 pandemic. American executives and Wall Street bankers were not taking the situation as seriously as businesses in other countries did. The virus spreads through the air and resembles an ordinary flu, and its spread led to widespread social distancing. Despite rigorous lockdowns, COVID-19 quickly crossed out of China, leading to the closure of Disney's Shanghai park and of McDonald's, Starbucks, Delta, and Hilton locations in China. The world's greatest economy shut down, and Wall Street's financial markets experienced panic. Bill Ackman, founder and CEO of Pershing Square Capital Management, believed the virus might be difficult to control in the US, leading to massive unemployment and civil unrest. Investors began feeling spooked, and stock values fell. The Federal Reserve intervened with an interest rate cut, but the markets remained open. On March 11, 2020, the World Health Organization declared COVID-19 a global pandemic, leading to stock declines, sports leagues suspending play, and Disney parks closing.

The COVID-19 pandemic severely impacted the travel industry, leading to a significant drop in revenue per available room, a crucial financial metric in the hotel industry. Hilton, a major hotel chain, had barely survived the 2008 financial collapse and now feared it could not survive the 2020 crisis. The pandemic exposed the dangers of a financial playbook that had become the default in corporate boardrooms over the previous two decades. Hilton's leaders drew down its $1.75 billion line of credit, worried that the banks themselves could go under. The 2020 financial meltdown differed from the 2008 crisis, though it was not as severe. Wall Street traders were uneasy, and the sudden need to work from home worsened volatility. Bank reforms enacted after 2008 had limited Wall Street's activities and freedoms, contributing to a decline in productivity and a dramatic fall in the S&P 500.

The US government and airline executives faced financial challenges during the COVID economic crisis, aiming to avoid bankruptcy and a complete meltdown. After negotiations, Congress pursued a multibillion-dollar payroll relief package, leading to major hedge funds selling bonds and Airbnb spending billions on COVID refunds. The number of Americans with COVID increased exponentially, and banks borrowed billions. The economy that survived the pandemic is not the one that crashed headlong into it, but it did not fall into depression. The pandemic created value, such as improved telecommunications infrastructure and higher pay for essential workers. However, inflation and interest rates rose, making life difficult for ordinary people. While the pandemic wasn't a total disaster, it required a careful balance between swift action and making the right choices.

#codingexercise: https://1drv.ms/w/c/d609fb70e39b65c8/Echlm-Nw-wkggNYlIwEAAAABD8nSsN--hM7kfA-W_mzuWw?e=BQczmz 


Tuesday, April 8, 2025

 Lessons from storage engineering for Knowledge bases and RAGs.

Data at rest and in transit are chunks of binaries that make sense only when additional layers are built to process them, store them, run ETL over them, or serve them as query results. Storage engineering has a rich tradition of building datastores, as databases and data warehouses, and even making them virtual and hosted in the cloud. Vector databases, although the authority on embeddings and semantic similarity, do not operate independently; they must be part of a system that serves as a data platform and often spans multiple, hybrid data sources for best results. The old and the new worlds can enter a virtuous feedback loop that improves the use of the new datastores.

Take Facebook's Presto, for example, as a success story in bridging structured and unstructured social networking data. Developed as an open-source distributed SQL query engine, it revolutionized data analytics by enabling seamless querying across structured and unstructured data sources. Presto's ability to perform federated queries allowed users to join and analyze data from diverse sources, such as the Hadoop Distributed File System (HDFS), Apache Cassandra, and relational databases, in real time. This unified approach eliminated the need for multiple specialized tools, bridging the gap between structured and unstructured data. Presto's architecture, optimized for low query latency, employed in-memory processing and pipelined execution, significantly reducing end-to-end latency compared to traditional systems like Hive. Its scalability and flexibility made it a valuable tool for handling petabyte-scale datasets.

Drawing parallels to technologies that work with structured and vector data, vector databases emerge as a compelling counterpart. These databases are designed to store and retrieve high-dimensional vectors, which are mathematical representations of objects. By mapping structured data into vector space, vector databases facilitate similarity searches and enable AI algorithms to retrieve relevant information efficiently. For example, Milvus, a popular vector database, supports vectorizing structured data and querying it for advanced analytics. This process involves converting structured data into numerical vectors using machine learning models, allowing for nuanced analysis and pattern detection.

Both Presto and vector databases share a common goal: unifying disparate data types for seamless analysis. Some examples of vector databases include:

• Milvus: Milvus is an open-source vector database designed for managing large-scale vector data. It supports hybrid searches, combining structured metadata with vector similarity queries, making it ideal for applications like recommendation systems and AI-driven analytics.

• Weaviate: Weaviate is another open-source vector database that integrates structured data with vector embeddings. It offers semantic search capabilities and allows users to query data using natural language prompts.

• Redis (Redis-Search and Redis-VSS): Redis has extensions for vector search that enable hybrid queries, combining structured data with vector-based similarity searches. It's optimized for high-speed lookups and real-time applications.

• Qdrant: Qdrant is a vector database that supports hybrid queries, allowing structured filters alongside vector searches. It is designed for scalable and efficient AI applications.
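The hybrid-query pattern these databases share can be sketched in a few lines: filter on structured metadata first, then rank the surviving rows by vector similarity. The documents, categories, and vectors below are made up for illustration, and the brute-force cosine scan stands in for the approximate-nearest-neighbor indexes (e.g., HNSW or DiskANN) a real engine would use.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy collection: structured metadata colocated with an embedding vector.
docs = [
    {"id": 1, "category": "hotel",  "vec": [0.9, 0.1, 0.0]},
    {"id": 2, "category": "flight", "vec": [0.8, 0.2, 0.1]},
    {"id": 3, "category": "hotel",  "vec": [0.1, 0.9, 0.2]},
    {"id": 4, "category": "hotel",  "vec": [0.85, 0.15, 0.05]},
]

def hybrid_search(query_vec, category, k=2):
    # Structured filter first, then vector-similarity ranking of survivors.
    candidates = [d for d in docs if d["category"] == category]
    ranked = sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["id"] for d in ranked[:k]]

print(hybrid_search([1.0, 0.0, 0.0], "hotel"))  # → [1, 4]
```

The filter prunes the candidate set before the similarity scan, which is exactly why colocating metadata with vectors, as Azure Cosmos DB does, keeps hybrid queries efficient.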

Azure Cosmos DB stands out as a versatile database service that integrates vector search capabilities alongside its traditional NoSQL and relational database functionalities. When compared with the above list, here’s how it stands out:

• Hybrid Data Support: Like Milvus, Weaviate, Redis, and Qdrant, Azure Cosmos DB supports hybrid queries, combining structured data with vector embeddings. This makes it suitable for applications requiring both traditional database operations and vector-based similarity searches.

• Integrated Vector Store: Azure Cosmos DB allows vectors to be stored directly within documents alongside schema-free data. This colocation simplifies data management and enhances the efficiency of vector-based operations, a feature that aligns with the capabilities of vector databases.

• Scalability and Performance: Azure Cosmos DB offers automatic scalability and single-digit millisecond response times, ensuring high performance at any scale. This is comparable to the optimized performance of vector databases like Redis and Milvus.

• Vector Indexing: Azure Cosmos DB supports advanced vector indexing methods, such as DiskANN-based quantization, enabling efficient and accurate vector searches. This is comparable to the indexing techniques used in specialized vector databases.

• AI Integration: Azure Cosmos DB is designed to support AI-driven applications, including natural language processing, recommendation systems, and multi-modal searches. This aligns with the use cases of vector databases like Weaviate and Qdrant.

While Azure Cosmos DB provides robust vector search capabilities, it also offers the flexibility of a general-purpose database, making it a compelling choice for organizations looking to unify structured, unstructured, and vector data within a single platform.

#codingexercise: https://1drv.ms/w/c/d609fb70e39b65c8/Echlm-Nw-wkggNYlIwEAAAABD8nSsN--hM7kfA-W_mzuWw?e=BQczmz