Wednesday, November 20, 2024

 

Mesh networking and UAV (Unmanned Aerial Vehicle) swarm flight communication share several commonalities, particularly in how they handle connectivity and data transfer:

 

Dynamic Topology: Both systems often operate in environments where the network topology can change dynamically. In mesh networks, nodes can join or leave the network, and in UAV swarms, drones can move in and out of range.

 

Self-Healing: Mesh networks are designed to automatically reroute data if a node fails or a connection is lost. Similarly, UAV swarms use mesh networking to maintain communication even if some drones drop out or move out of range.

 

Redundancy: Both systems use redundancy to ensure reliable communication. In mesh networks, multiple paths can be used to send data, while in UAV swarms, multiple drones can relay information to ensure it reaches its destination.

 

Decentralization: Mesh networks are decentralized, meaning there is no single point of failure. UAV swarms also benefit from decentralized communication, allowing them to operate independently and collaboratively without relying on a central control point.

 

Scalability: Both mesh networks and UAV swarms can scale to accommodate more nodes or drones, respectively, without significant degradation in performance.

 

These commonalities make mesh networking an ideal solution for UAV swarm communication, ensuring robust and reliable connectivity even in challenging environments.

Similarly, distributed hash tables, cachepoints arranged in a ring, and consensus algorithms also play a part in the communications between drones.

Cachepoints are used with consistent hashing: they are arranged along a circle depicting the key range, and each caches the objects corresponding to its segment of the range. Virtual nodes can join and leave the network without impacting the operation of the ring.
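
To make the ring concrete, a minimal consistent-hash ring with virtual nodes might look like the following Python sketch; the class and method names are illustrative assumptions rather than any particular system's API:

import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, replicas=3):
        self.replicas = replicas   # virtual nodes per cachepoint
        self.ring = []             # sorted hash positions on the circle
        self.nodes = {}            # hash position -> cachepoint name

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            bisect.insort(self.ring, h)
            self.nodes[h] = node

    def remove_node(self, node):
        # Only this node's segments are affected; the rest of the ring keeps working.
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            self.ring.remove(h)
            del self.nodes[h]

    def get_node(self, key):
        if not self.ring:
            return None
        idx = bisect.bisect(self.ring, self._hash(key)) % len(self.ring)
        return self.nodes[self.ring[idx]]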

Data is partitioned and replicated using consistent hashing to achieve scale and availability. Consistency is facilitated by object versioning, and replicas are maintained during updates with a quorum-like technique.
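
The quorum idea can be shown with a toy sketch where R + W > N guarantees that a read overlaps the latest acknowledged write; the in-process "replicas" below stand in for real RPCs and are assumptions for illustration:

N, W, R = 3, 2, 2   # replicas, write quorum, read quorum
assert R + W > N    # overlap guarantees a read sees the latest write

replicas = [{"version": 0, "value": None} for _ in range(N)]

def quorum_write(value):
    # Object versioning: each write carries a higher version number.
    version = max(r["version"] for r in replicas) + 1
    acks = 0
    for r in replicas:  # in practice, parallel RPCs that may fail
        r["version"], r["value"] = version, value
        acks += 1
    return acks >= W    # succeed once W replicas acknowledge

def quorum_read():
    responses = replicas[:R]  # in practice, the first R replicas to answer
    return max(responses, key=lambda r: r["version"])["value"]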

In a distributed environment, a practical way to detect failures and determine membership is a gossip protocol. When an existing node leaves the network, it stops responding to gossip messages, so its neighbors become aware of the departure. The neighbors then update the membership and copy data asynchronously.
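
A gossip-style failure detector fits in a few lines; the heartbeat structure and timeout below are assumptions for illustration:

import random
import time

class GossipNode:
    def __init__(self, name, peers, fail_after=5.0):
        self.name = name
        self.peers = peers                           # other GossipNode objects
        self.fail_after = fail_after                 # seconds before suspicion
        self.heartbeats = {name: (0, time.time())}   # node -> (counter, last seen)

    def tick(self):
        # Bump our own heartbeat and gossip the table to one random peer.
        counter, _ = self.heartbeats[self.name]
        self.heartbeats[self.name] = (counter + 1, time.time())
        if self.peers:
            random.choice(self.peers).merge(self.heartbeats)

    def merge(self, remote):
        # Adopt any heartbeat counter newer than the one we hold.
        for node, (counter, _) in remote.items():
            local = self.heartbeats.get(node)
            if local is None or counter > local[0]:
                self.heartbeats[node] = (counter, time.time())

    def suspected_failed(self):
        # A departed node stops gossiping, so its heartbeat goes stale.
        now = time.time()
        return [n for n, (_, seen) in self.heartbeats.items()
                if n != self.name and now - seen > self.fail_after]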

Some systems utilize state machine replication with a consensus algorithm such as Paxos, combining transaction logging for consensus with write-ahead logging for data recovery. Replicated state machines of this kind tolerate crash failures; tolerating Byzantine failures requires stronger protocols such as PBFT.


 #codingexercise: CodingExercise-11-20-2024.docx

Monday, November 18, 2024

 This is a continuation of a previous paper introducing UAV swarm flight path management. 

Dynamic formation change is the technique holding the most promise for morphing from one virtual structure to another. When there is no outside influence or data-driven flight management, coming up with the next virtual structure is an easier task for the swarm pilot.

It is usually helpful to plan out up to two or three virtual structures in advance for a UAV swarm to seamlessly morph from one holding position to another. These macro and micro movements can even be delegated to humans and the UAV swarm respectively: given initial and final positions, the autonomous UAVs can make tactical moves efficiently, while the humans generate the overall workflow in the absence of a three-dimensional GPS-based map.

Virtual structure generation can even be synthesized from images with object detection and appropriate scaling. So virtual structures are not necessarily input by humans. In a perfect world, UAV swarms launch from packed formation to take positions in a matrix in the air and then morph from one position to another given the signals they receive. 

There are several morphing algorithms that reduce the distances between initial and final positions of the drones during the transition between virtual structures (a distance-minimizing assignment sketch follows this list). These include but are not limited to:

Thin-plate splines (TPS) algorithm: adapts to minimize deformation of the swarm’s formation while avoiding obstacles. It uses a non-rigid mapping function to reduce lag caused by maneuvers.

Non-rigid Mapping function: This function helps reduce the lag caused by maneuvers, making the swarm more responsive and energy efficient. 

Distributed assignment and optimization protocol: this protocol enables UAV swarms to construct and reconfigure formations dynamically as the number of UAVs changes.

Consensus-based algorithms: These algorithms allow UAVs to agree on specific parameters such as position, velocity, or direction, ensuring cohesive movement as a unit.

Leader-follower method: This method involves a designated leader UAV guiding the formation, with the other UAVs following its path.
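
As referenced above, a morphing step needs a mapping from initial to final drone positions before any trajectory is planned. Here is a hedged sketch that uses scipy's Hungarian solver to pick the assignment minimizing total transition distance; the positions are random placeholders:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Current drone positions and desired virtual-structure positions (placeholders)
initial = np.random.rand(10, 3) * 10
target = np.random.rand(10, 3) * 10

cost = cdist(initial, target)              # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)   # min-total-distance matching
print("Total transition distance:", cost[rows, cols].sum())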

The essential idea behind the transition can be listed as the following steps: 

Select random control points 

Create a grid and use TPS to interpolate values on this grid

Visualize the original control points and the interpolated surface. 

A sample Python implementation might look like this:

import numpy as np
from scipy.interpolate import Rbf
import matplotlib.pyplot as plt

# Define the control points
x = np.random.rand(10) * 10
y = np.random.rand(10) * 10
z = np.sin(x) + np.cos(y)

# Create the TPS interpolator
tps = Rbf(x, y, z, function='thin_plate')

# Define a grid for interpolation
x_grid, y_grid = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
z_grid = tps(x_grid, y_grid)

# Plot the original points and the TPS interpolation
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, color='red', label='Control Points')
ax.plot_surface(x_grid, y_grid, z_grid, cmap='viridis', alpha=0.6)
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
ax.legend()
plt.show()


Sunday, November 17, 2024

Subarray Sum equals K 

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals k.

A subarray is a contiguous non-empty sequence of elements within an array. 

Example 1: 

Input: nums = [1,1,1], k = 2 

Output: 2 

Example 2: 

Input: nums = [1,2,3], k = 3 

Output: 2 

Constraints: 

1 <= nums.length <= 2 * 10^4

-1000 <= nums[i] <= 1000

-10^7 <= k <= 10^7

 

class Solution {
    public int subarraySum(int[] numbers, int sum) {
        int result = 0;
        int current = 0;
        // Map of prefix sum -> number of times it has been seen.
        HashMap<Integer, Integer> sumMap = new HashMap<>();
        sumMap.put(0, 1); // the empty prefix
        for (int i = 0; i < numbers.length; i++) {
            current += numbers[i];
            // A prior prefix of (current - sum) closes a subarray summing to sum.
            if (sumMap.containsKey(current - sum)) {
                result += sumMap.get(current - sum);
            }
            sumMap.put(current, sumMap.getOrDefault(current, 0) + 1);
        }
        return result;
    }
}

 

[1,3], k=1 => 1 

[1,3], k=3 => 1 

[1,3], k=4 => 1 

[2,2], k=4 => 1 

[2,2], k=2 => 2 

[2,0,2], k=2 => 4 

[0,0,1], k=1=> 3 

[0,1,0], k=1=> 4

[0,1,1], k=1=> 3 

[1,0,0], k=1=> 3 

[1,0,1], k=1=> 4 

[1,1,0], k=1=> 3

[1,1,1], k=1=> 3 

[-1,0,1], k=0 => 2 

[-1,1,0], k=0 => 3 

[1,0,-1], k=0 => 2 

[1,-1,0], k=0 => 3 

[0,-1,1], k=0 => 3 

[0,1,-1], k=0 => 3 

 

 

Alternative:

class Solution {
    public int subarraySum(int[] numbers, int sum) {
        int result = 0;
        int current = 0;
        // All prefix sums seen so far, scanned linearly (O(n^2) overall).
        List<Integer> prefixSums = new ArrayList<>();
        prefixSums.add(0); // the empty prefix, so current == sum is counted
        for (int i = 0; i < numbers.length; i++) {
            current += numbers[i];
            // Count every earlier prefix that closes a subarray summing to sum.
            for (int prefix : prefixSums) {
                if (current - prefix == sum) {
                    result++;
                }
            }
            prefixSums.add(current);
        }
        return result;
    }
}


Sample: targetSum = -3; Answer: 1

Numbers: 2, 2, -4, 1, 1, 2

prefixSum:  2, 4,  0, 1, 2, 4


#drones: MeshUAV.docx

Saturday, November 16, 2024

 This is a summary of the book titled “Self Less: Lessons learned from a life devoted to Servant Leadership, in five acts” written by Len Jessup and published by ForbesBooks in 2024. The author shares his experience, insights, and advice on how to take positive action. He distinguishes between “selfless” as putting others first and “self-less” as acting to benefit others. He keeps his narrative strictly about his experiences, but he advocates for putting others first at work and at home. He borrows the “five-act structure” from playwrights to lay out his narrative and to pass on his leadership lessons. Act 1 covers his origins, where he claims your background can help or hinder you. Act 2 is about beliefs, which pave the way for your unique leadership style. Act 3 is about adversity, identifying the sources of opposition and how to overcome them. Act 4 is about impact, because we don’t have unlimited time, and Act 5 is about legacy and how to plan it.

Great leaders are selfless and self-less, focusing on the needs of their team members rather than their own. Len Jessup arrived at this insight after his divorce and his subsequent role as a peer committee chair, when he realized the importance of putting others first and engaging in actions that benefit others. Selfless leadership means putting others' needs ahead of one's own, exemplified by “level five” leadership, which emphasizes self-awareness and humility while remaining driven to succeed.

Organizations run by selfless leaders work "bottom-up," not "top-down," and are democratic, inclusive, collaborative, and open. They surround themselves with smarter team members, demonstrating their acuity as leaders who strive for the best possible results. Selfless leadership is a powerful tool for leading others through transformational organizational changes, where a team's shared vision and fulfillment count more than the leader's vision or fulfillment.

Success at a high level requires the wholehearted buy-in of those you lead, whether a small team or a full workforce. To gain the support of the people you're leading, don't just be selfless in attitude; be self-less in action.

Jessup's early life was influenced by both positive and negative factors, but he felt a strong commitment to help others succeed in higher education. To determine the impact of your origins, consider how they influenced your current situation and future direction. Beliefs play a crucial role in leadership, as you must consistently exhibit the right values and ensure your team's success. Identifying limiting beliefs and seeking ways to move beyond them can help you lead effectively. During Jessup's presidency at the University of Nevada, he faced criticism from the Board of Regents, but his wife Kristi provided perspective and encouragement. Everyone needs encouragement to stay positive and focused, especially in times of change.

Adversity can arise from various sources, including environmental factors and negative people within an organization. Leaders must learn to overcome opposition and serve and support their team to succeed. Success is hard, and leaders must consider the weight of their strength, endurance, patience, and resilience.

To make a positive impact, consider the impact on others and plan how to serve them. Ensure employees have the resources and time to perform their jobs effectively, build in fun and good times, and find small steps to increase employee happiness.

Being a true leader requires courage and the ability to serve others. Leaders should make the most of their time and be a positive influence on their family, friends, peers, subordinates, company, and the world around them. By doing what they can, leaders can make a difference in many ways and contribute to the success of their organization.

Jessup raised nearly a billion dollars in donations and in-kind gifts for his university. He initially focused on teaching and research but realized the importance of philanthropy. He identified potential donors and successfully solicited their contributions. Jessup views the money he raised as his legacy and encourages other leaders to examine their daily lessons, as they will become their legacy in time. He believes leadership is a gift and privilege, and leaders should remain "self less as a state of action" to learn and leave a worthwhile legacy.

#codingexercise: CodingExercise-11-16-2024.docx


Friday, November 15, 2024

 The previous article talked about a specific use case of coordinating a UAV swarm to transition through virtual structures, with the suggestion that the structures need not be input by humans. They can be detected as objects from images in a library, extracted, and scaled. These objects form a sequence that can be passed along to the UAV swarm. This article explains the infrastructure needed to design a pipeline for UAV swarm control in this way, so that drones form continuous and smooth transitions from one meaningful structure to another, as if enacting animation flashcards.

A cloud infrastructure architecture to handle the above use case is designed with a layered approach and dedicated components for device connectivity, data ingestion, processing, storage, and analytics, utilizing features like scalable cloud services, edge computing, data filtering, and optimized data pipelines to efficiently manage the high volume and velocity of IoT data.

Compute, networking, and storage must be set up properly. For example, gateway devices must be used for data aggregation and filtering; reliable network connectivity with robust security mechanisms must secure the data in transit; and load balancing must distribute traffic across the cloud infrastructure. Availability zones, redundancy, and multiple regions might be leveraged for availability, business continuity, and disaster recovery. High-throughput data pipelines that receive large volumes of data from devices facilitate ingestion, while scalable storage solutions (such as data lakes or databases) handle large data volumes for aging and durability. Advanced analytics tools can provide real-time insights and historical data analysis. Edge computing helps with pre-processing data closer to the source, on edge devices, to reduce bandwidth usage and improve response time; this also calls for mechanisms that filter out irrelevant data at the edge or upon ingestion to minimize transfer to the cloud. Properly partitioning data optimizes query performance over large datasets and tunes up the analytical stacks and pipelines. Cloud services for hosting code, such as function apps, app services, and Kubernetes containers, can be used with elastic scaling capabilities to handle fluctuating data volumes. Finally, a security-hardening review should implement robust security measures throughout the architecture, including device authentication, data encryption, and access control.

An Azure cloud infrastructure architecture blueprint for handling large volume IoT traffic typically includes: Azure IoT Hub as the central communication hub, Azure Event Hubs for high-throughput data ingestion, Azure Stream Analytics for real-time processing, Azure Data Explorer for large-scale data storage and analysis, and Azure IoT Edge for edge computing capabilities, all while incorporating robust security measures and proper scaling mechanisms to manage the high volume of data coming from numerous IoT devices.

A simplified organization to illustrate the flow might look like:

IoT Devices -> Azure IoT Hub -> Azure Event Hubs -> Azure Data Lake Storage -> Azure Machine Learning -> Azure Kubernetes Service (AKS) -> Azure API Management -> IoT Devices

Here, the drones act as the IoT devices and can include anything from sensors to cameras. They act as the producers of real-time data and as the consumers of predictions and recommendations. Secure communication protocols like MQTT and CoAP might be leveraged to stream the data from edge computing data senders and relayers. Device management and provisioning services are also required to maintain the inventory of IoT devices.

An Azure Device Provisioning Service (DPS) can enable zero-touch provisioning of new devices added to the IoT Hub, simplifying device onboarding.
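
A minimal zero-touch provisioning sketch with the azure-iot-device SDK might look like this; the host is the public DPS endpoint, while the IDs and key are placeholders:

from azure.iot.device import ProvisioningDeviceClient

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="<drone-id>",
    id_scope="<id-scope>",
    symmetric_key="<key>",
)
result = client.register()  # DPS assigns the device to an IoT Hub
print(result.registration_state.assigned_hub)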

The Azure IoT Hub acts as the central message hub for bi-directional communication between IoT applications and the drones it manages. It can handle millions of messages per second from multiple devices.
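
A hedged device-to-cloud telemetry sketch with the same SDK; the connection string and message fields are placeholders:

import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<drone-id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
# Send one position report; a real drone would stream these continuously.
payload = json.dumps({"droneId": "drone-001", "x": 1.0, "y": 2.0, "z": 3.0})
client.send_message(Message(payload))
client.shutdown()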

The Azure Event Hub is used for ingesting large amounts of data from IoT devices. It can process and store large streams of data, which can then be fed into Azure Machine Learning for processing.

Azure Machine Learning is where machine learning models are trained and deployed at scale.

Azure Data Lake Storage is used to store and organize large volumes of data until it is needed. The storage cost is low, but certain features, when turned on, can accrue cost on an hourly basis, such as the SFTP-enabled feature, even if they are never used. With proper care, Azure Data Lake Storage can act as a little-to-no-cost sink for all the streams of data, with convenient access for all analytical stacks and pipelines.

Azure Kubernetes Service is used to deploy and manage containerized applications, including machine learning models. It provides a scalable and flexible environment for running the models.

Azure API Management is used to expose the machine learning models as APIs, making it easy for IoT devices to interact with them.
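
A drone-side call through API Management might look like the following sketch; the URL path and payload shape are assumptions, though the subscription-key header is standard APIM practice:

import requests

url = "https://<apim-name>.azure-api.net/swarm/predict"  # assumed route
headers = {"Ocp-Apim-Subscription-Key": "<key>"}
payload = {"droneId": "drone-001", "position": [1.0, 2.0, 3.0]}

response = requests.post(url, json=payload, headers=headers, timeout=10)
print(response.json())  # e.g., the recommended next waypoint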

Azure Monitor and Azure Log Analytics are used to monitor the performance and health of the IoT devices, data pipelines, and machine learning models.

#codingexercise: Codingexercise-11-15-2024.docx

Thursday, November 14, 2024

 The drone machine learning experiments from previous articles require deployment patterns of two types – online inference and batch inference. Both demonstrate MLOps principles and best practices for developing, deploying, and monitoring machine learning models at scale. Development and deployment are distinct from one another: although the model may be containerized and retrieved for execution during deployment, it can be developed independent of how it is deployed. This separates the concerns of developing the model from the requirements of addressing the online and batch workloads. Regardless of the technology stack and the underlying resources used during these two phases (typically created in the public cloud), this distinction serves the needs of the model as well.

For example, developing and training a model might require significant computing, but not as much as executing it for predictions and outlier detection, activities that are hallmarks of production environments. The workloads that make use of the model might vary, not just between batch and online processing but even from one batch processing stack to another; yet the common operations of collecting MELT data (named after metrics, events, logs, and traces) and the associated resources stay the same. These include a GitHub repository, Azure Active Directory, cost management dashboards, Key Vaults, and, in this case, Azure Monitor. Resources and the practices associated with them for security and performance are left out of this discussion; the standard DevOps guides from the public cloud providers call them out.

Online workloads targeting the model via API calls will usually require the model to be hosted in a container and exposed via API management services. Batch workloads, on the other hand, require an orchestration tool to coordinate the jobs consuming the model. Within the deployment phase, it is usual practice to host more than one environment, such as stage and production, both served by CI/CD pipelines that flow the model from development to its usage. A manual approval is required to advance the model from the stage to the production environment. A well-developed model is usually a composite handling three distinct activities: handling the prediction, determining data drift in the features, and determining outliers in the features. Mature MLOps also includes processes for explainability, performance profiling, versioning, pipeline automation, and such others. Depending on the resources used for DevOps and the environment, typical artifacts would include dockerfiles, templates, and manifests.
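
A sketch of such a composite model in Python; the mean-shift drift check, z-score outlier test, and thresholds are illustrative assumptions, not a prescribed design:

import numpy as np

class CompositeModel:
    def __init__(self, model, train_mean, train_std, drift_tol=0.5, z_cut=3.0):
        self.model = model                      # any object with a predict() method
        self.train_mean = np.asarray(train_mean)
        self.train_std = np.asarray(train_std)
        self.drift_tol = drift_tol
        self.z_cut = z_cut

    def predict(self, X):
        return self.model.predict(X)

    def data_drift(self, X):
        # Flag features whose batch mean shifted more than drift_tol training stds.
        shift = np.abs(X.mean(axis=0) - self.train_mean) / self.train_std
        return shift > self.drift_tol

    def outliers(self, X):
        # Flag rows with any feature beyond z_cut standard deviations.
        z = np.abs((X - self.train_mean) / self.train_std)
        return np.any(z > self.z_cut, axis=1)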

While parts of the solution for this MLOps can be internalized by studios and launch platforms, organizations like to invest in specific compute, storage, and networking for their needs. Databricks/Kubernetes, Azure ML workspaces and such are used for compute, storage accounts and datastores are used for storage, and diversified subnets are used for networking. Outbound internet connectivity from the code hosted and executed in MLOps is usually not required but it can be provisioned with the addition of a NAT gateway within the subnet where it is hosted.

A Continuous Integration / Continuous Deployment (CI/CD) pipeline, ML tests, and model tuning become a responsibility for the development team, even though they are folded into the business service team for faster turnaround time to deploy artificial intelligence models in production. In-house automation and development of machine learning pipelines and monitoring systems do not compare to those from the public clouds, which make automation and programmability easier. That said, certain products remain popular despite the allure of the public cloud, for the following reasons:

First, event processing systems such as Apache Spark and Kafka find it easier to replace the Extract-Transform-Load solutions that proliferate with data warehouses. It is true that much of the training data for ML pipelines comes from a data warehouse, and ETL worsened data duplication and drift, making it necessary to add workarounds in business logic. With a cleaner event-driven system, it becomes easier to migrate to immutable data, write-once business logic, and real-time data processing systems. Event processing systems are also easier to develop on-premises, even as staging, before deployment to the cloud is attempted.
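
For instance, a minimal kafka-python consumer on such an event-driven path might look like this sketch; the topic and server are placeholders:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "drone-telemetry",                       # assumed topic name
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: b.decode("utf-8"),
)
for message in consumer:
    # Immutable, write-once events replace repeated ETL passes over a warehouse.
    print(message.offset, message.value)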

Second, machine learning models are end-products. They can be hosted in a variety of environments, not just the cloud. Some ML users would like to load the model into client applications, including those on mobile devices. The model-as-a-service option is rather narrow, and the model does not have to be made available over the internet in all cases, especially when the network hop would be costly to real-time processing systems. Many experts agree that the streaming traffic from edge devices can be heavy enough that an online on-premises system will outperform any public-cloud option: internet TCP relays are on the order of 250-300 milliseconds, whereas the ingestion rate for real-time analysis can be upwards of thousands of events per second.

A workspace is needed to develop machine learning models regardless of the storage, compute, and other accessories. Azure Machine Learning provides an environment to create and manage the end-to-end life cycle of machine learning models. Its compatibility with open-source frameworks and platforms like PyTorch and TensorFlow makes it an effective all-in-one platform for integrating and handling data and models, which tremendously relieves the onus on the business to develop new capabilities. Azure Machine Learning is designed for all skill levels, with advanced MLOps features and simple no-code model creation and deployment.


#codingexercise: CodingExercise-11-14-2024.docx


Wednesday, November 13, 2024

 The previous article talked about a specific use case of coordinating a UAV swarm to transition through virtual structures, with the suggestion that the structures need not be input by humans. They can be detected as objects from images in a library, extracted, and scaled. These objects form a sequence that can be passed along to the UAV swarm. This article explains the infrastructure needed to design a pipeline for UAV swarm control in this way, so that drones form continuous and smooth transitions from one meaningful structure to another, as if enacting animation flashcards.

The data processing begins with users uploading images to cloud storage, say a data lake, which also stores all the data from the drones as necessary. Uploads are then fed into an Event Grid so that suitably partitioned processing, say one partition per drone in the fleet, can compute the current and desired positions in each epoch, along with recommendations from a machine learning model to correct and reduce the sum of squared errors in the overall smoothness of the structure transitions. The results are vectorized, saved in a vector store, and used with a monitoring stack that tracks key performance metrics and ensures the overall system stays healthy enough to control the UAV swarm.

This makes the processing stack look something like this:

[User Uploads Image] -> [Azure Blob Storage] -> [Azure Event Grid] -> [Azure Functions] -> [Azure Machine Learning] -> [Azure Cosmos DB] -> [Monitoring]

where the infrastructure consists of:

Azure Blob Storage: Stores raw image data and processed results. When the hierarchical namespace is enabled, folders are helpful for organizing the fleet, their activities, and feedback.

Azure Functions: Serverless functions handle image processing tasks. The idea here is to define pure logic that is partitioned on the data and can scale to arbitrary loads; a sketch of such a function appears after this list.

Azure Machine Learning: Manages machine learning models and deployments. The Azure Machine Learning Studio allows us to view the pipeline graph, check its output and debug it. The logs and outputs of each component are available to study them. Optionally components can be registered to the workspace so they can be shared and reused. A pipeline draft connects the components. A pipeline run can be submitted using the resources in the workspace. The training pipelines can be converted to inference pipelines and the pipelines can be published to submit a new pipeline that can be run with different parameters and datasets. A training pipeline can be reused for different models and a batch inference pipeline can be used to make predictions on new data.

Azure Event Grid: Triggers events based on image uploads, user directives, or drone feedback.

Azure Cosmos DB: Stores metadata and processed results, making them suitable for vector search.

Azure API Gateway: Manages incoming image upload requests and outgoing processed results with OWASP protection.

Azure Monitor: Tracks performance metrics and logs events for troubleshooting.
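
Tying these together, a minimal Event Grid-triggered Azure Function for the image-upload step might look like this sketch; the event fields and the commented downstream steps are assumptions for illustration:

import logging
import azure.functions as func

def main(event: func.EventGridEvent):
    # A blob-created event carries the URL of the newly uploaded image.
    payload = event.get_json()
    blob_url = payload.get("url")
    logging.info("Processing uploaded image: %s", blob_url)
    # Downstream (omitted): extract the target structure from the image,
    # compute per-drone waypoints, and write results to Cosmos DB.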