Friday, January 17, 2025

 Infrastructure development with a collaborative design-forward culture:

There is business value in design even before there is business value in implementation. With pressure from customers for better experiences and their expectations of instant gratification, organizations know the right investment is in design, especially as a competitive differentiator. But the price for good design has always been more coordination and intention. With the ever expanding and evolving landscape of digital tools and data, divisions run deeper than before. Fortunately, newer technologies, specifically generative AI, can be brought in to transform how design is done. By implementing core practices of clear accountability, cross-functional alignment, inclusion of diverse perspectives and regularly shared work, organizations can tap into new behaviors and actions that elevate design.

Design boosts innovation, customer experience and top-line performance. Lack of clarity, collaboration and cross-team participation are the main limitations. Leading design teams emphasize clear accountability, cross-functional alignment, inclusion of diverse perspectives and regularly shared work. Design can also provide feedback to business and product strategy. More repeatable and inclusive design processes yield more thoughtful, customer-inspired work. Better creativity, innovation and top-line performance compound over time. For example, there can be up to 80% savings in the time it takes to generate reports. The saying holds: go faster, and go further together.

The limitations to better design are also clear in their negatives, which can be quantified: work is duplicated and recreated while customer input is often left unused. Systems fracture because people tend to avoid friction and save time. Ad hoc demands drift design away from solid foundations. Vicious development cycles eat time.

No one will disagree that a problem should be better understood before kicking off a project. Many will praise or appreciate those who incorporate feedback. Many meetings are more productive when there is a clear owner and driver. Being more inclusive of others has always helped gain more understanding of requirements. Defining clear outcomes and regularly updating progress is a hallmark of those who design well. Articulating a clear standard for design quality and leveraging a development process that is collaborative are some of the others. Leaders who are focused on organizational structure do face an impedance to adopting a design-first strategy, but they can give latitude to teams and individuals to find the best way to achieve goals and run priorities and initiatives. Care must be taken to avoid creating a race to satisfy business metrics without the diligence to relieve the pain points being solved.

Inclusivity is harder to notice. With newer technologies like artificial intelligence, employees are continuously upskilling themselves, so certain situations cannot be anticipated. For example, engineering leaders working with AI tend to forget that they must liaise with the legal department at design time itself. The trouble with independent research and outsourced learning is that they may never be adopted. Cross-team collaboration must be actively sought and participated in because the payoff is improved cross-functional understanding, culture-building and innovation, leading to a better end product. Some teams just use existing rituals to gather quick thoughts on design ideas. Others favor offline review and more documentation prior to meetings. Treating sharing as a value by stressing openness, as a habit by maintaining a routine, as an opportunity to see the customer come through the work, and as a risk avoided by reducing back and forth brings a culture that leads to a single source of truth. Designing must involve others but must not create diverging versions.


Thursday, January 16, 2025

 One of the fundamentals of parallel processing in computer science involves the separation of tasks per worker to reduce contention. When you treat the worker as an autonomous drone with minimal coordination with other members of its fleet, an independent task might look like installing a set of solar panels in an industry whose global solar-powered renewable energy capacity was estimated at 239 GW in 2023, a 45% increase over the previous year. As the industry expands, drones are employed for their speed. Drones aid in every stage of a plant's lifecycle, from planning to maintenance. They can assist in topographic surveys during planning, monitor construction progress, conduct commissioning inspections, and perform routine asset inspections for operations and maintenance. Drone data collection is not only comprehensive and expedited but also accurate.

During planning for solar panels, drones can conduct aerial surveys to assess topography, suitability, and potential obstacles, create accurate 3D maps to aid in designing and optimizing solar farm layouts, and analyze shading patterns to optimize panel placement and maximize energy production. During construction, drones provide visual updates on construction progress, and track and manage inventory of equipment, tools, and materials on-site. During maintenance, drones can perform close-up inspections of solar panels to identify defects, damage, or dirt buildup, monitor equipment for wear and tear, detect hot spots in panels with thermal imaging, identify and manage vegetation growth that might reduce the efficiency of solar panels and enhance security by patrolling the perimeter and alerting to unauthorized access.

When drones become autonomous, these activities go to the next level. The dependency on human pilots has always been a limitation on the frequency of flights. Autonomous drones, on the other hand, boost efficiency, shorten fault detection times, and optimize outcomes during O&M site visits. Finally, they help to increase the power output yield of solar farms. The sophistication of the drones, in terms of both hardware and software, increases from remote-controlled drones to autonomous drones. Field engineers might suggest the selection of an appropriate drone, the position of docking stations, and the payload and capabilities, such as a thermal camera. A drone data platform that seamlessly facilitates data capture, ensures safe flight operations with minimal human intervention, prioritizes data security and meets compliance requirements becomes essential at this stage. Finally, this platform must also support integration with third-party data processing and analytics applications and with reporting stacks that publish various charts and graphs. As usual, a separation between data processing and data analytics helps just as much as a unified layer for programmability and user interaction with an API, SDK, UI and CLI. While the platform can be sold separately as a product, leveraging a cloud-based SaaS service reduces the cost on the edge.

There is still another improvement possible over this with the formation of dynamic squadrons, consensus protocols and distributed processing with hash stores. While there are existing applications that improve IoT data streaming at the edge and cloud processing via stream stores and analytics, with the simplicity of SQL-based querying and programmability, a cloud service that installs and operates a deployment stamp with a solution accelerator, as a citizen resource of a public cloud, helps bring in the best practices of storage engineering and data engineering and enables businesses to stay focused.

Wednesday, January 15, 2025

 

The preceding articles on security and vulnerability management mentioned that organizations treat the defense-in-depth approach as the preferred path to stronger security. They also engage with feedback from security researchers via programs like AI Red Teaming and Bug Bounty to make a positive impact on their customers. AI safety and security are primary concerns for emerging GenAI applications. The following section outlines some best practices that are merely advisory and not a mandate in any way.

As these GenAI applications become popular as productivity tools, the speed of AI releases and the acceleration of adoption must be matched with improvements to existing SecOps techniques. The security-first processes to detect and respond to AI risks and threats effectively include visibility, zero critical risks, democratization, and prevention techniques. Among these, the risks refer to data poisoning, which alters training data to make predictions erroneous; model theft, where proprietary AI models suffer copyright infringement; adversarial attacks, where crafted inputs make the model hallucinate; model inversion attacks, where queries cause data exfiltration; and supply-chain vulnerabilities, where weaknesses in the supply chain are exploited.

The best practices leverage the new SecOps techniques and mitigate the risks with:

1.      Achieving full visibility by removing shadow AI, which refers to both unauthorized and unaccounted-for AI. An AI bill-of-materials helps here as much as setting up the relevant networking to ensure access only to allow-listed GenAI providers and software. Employees must also be trained with a security-first mindset.

2.      Protecting both the training and inference data by discovering and classifying the data according to its security criticality, encrypting data at rest and in transit, performing sanitizations or masking sensitive information, configuring data loss prevention policies, and generating a full purview of the data including origin and lineage.

3.      Securing access to GenAI models by setting up authentication and rate limiting for API usage, restricting access to model weights, and allowing only required users to kickstart model training and deployment pipelines (see the sketch after this list).

4.      Using LLM built-in guardrails such as content filtering to automatically remove or flag inappropriate or harmful content, abuse detection mechanisms to uncover and mitigate general model misuse, and temperature settings to tune the randomness of AI output to the desired predictability.

5.      Detecting and removing AI risks and attack paths by continuously scanning for and identifying vulnerabilities in AI models, verifying that all systems and components have the most recent patches to close known vulnerabilities, scanning for malicious models, assessing AI misconfigurations, effective permissions, network resources, exposed secrets, and sensitive data to detect attack paths, regularly auditing access controls to guarantee authorization and least-privilege principles, and providing context around AI risks so that attack paths to models can be proactively removed via remediation guidance.

6.      Monitoring against anomalies by using detection and analytics at both input and output, detecting suspicious behavior in pipelines, keeping track of unexpected spikes in latency and other system metrics, and supporting regular security audits and assessments.

7.      Setting up incident response by including processes for isolation, backup, traffic control, and rollback, integrating with SecOps tools, and keeping an AI-focused incident response plan available.
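As a minimal illustration of practice 3 above, the following Python sketch combines an API-key allow-list with a per-key token-bucket rate limiter in front of a GenAI inference endpoint. The key names and the model_generate placeholder are hypothetical; a production deployment would sit behind an API gateway or identity provider rather than an in-process check.

import time
from collections import defaultdict

class TokenBucket:
    # Minimal per-API-key token bucket guarding a hypothetical model endpoint.
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = defaultdict(lambda: capacity)      # tokens remaining per key
        self.last_refill = defaultdict(time.monotonic)   # last refill time per key

    def allow(self, api_key):
        now = time.monotonic()
        elapsed = now - self.last_refill[api_key]
        self.last_refill[api_key] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[api_key] = min(self.capacity,
                                   self.tokens[api_key] + elapsed * self.refill_per_sec)
        if self.tokens[api_key] >= 1:
            self.tokens[api_key] -= 1
            return True
        return False

def model_generate(prompt):
    # Placeholder for the actual GenAI inference call.
    return "response to: " + prompt

ALLOWED_KEYS = {"team-a-key", "team-b-key"}   # allow-listed callers (hypothetical)
limiter = TokenBucket(capacity=10, refill_per_sec=1.0)

def handle_inference_request(api_key, prompt):
    if api_key not in ALLOWED_KEYS:
        return "401 Unauthorized"
    if not limiter.allow(api_key):
        return "429 Too Many Requests"
    return model_generate(prompt)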

In this way, existing SecOps practices that leverage the well-known STRIDE threat modeling and the Assets, Activity Matrix and Actions chart are extended with enhancements and techniques specific to GenAI.


 

Tuesday, January 14, 2025

 This is a summary of the book titled “Your AI Survival Guide” written by Sal Rashidi and published by Wiley in 2024. Sal argues that organizations, even non-technical ones, cannot afford to be in the Laggard or Late Majority segments of AI adoption, because AI is here to stay and those who wait risk being eliminated from business. So, the only choices are the Early Majority, who adopt technology once it has demonstrated its advantages, Early Adopters, who are more on the forefront, and Innovators, who pioneer the use of AI in their respective fields. Each group plays a crucial role in the adoption lifecycle of a technology, which usually spans the duration until something better replaces it, so there is no wrong pick, but the author’s book lays out everything that helps, from uncovering your “why” to building your team and making your AI responsible. With applications already ranging from agriculture to HR, the time to be proactive is now. His playbook involves assessing which AI strategy fits you and your team, selecting relevant use cases, planning how to launch your AI project, choosing the right tools and partners to go live, ensuring the team is gritty, ambitious, and resilient, and incorporating human oversight into AI decision making.

To successfully implement AI within a company, it is essential to balance established protocols with the need to adapt to changing times. To achieve this, consider the reasons for deploying AI, develop an AI strategy, and start small and scale quickly. Choose a qualified AI consultant or development firm that fits your budget and goals. Set a realistic pace for your project. Conduct an AI readiness assessment to determine the best AI strategy for your company. Score yourself on various categories, such as market strategy, business understanding, workforce acumen, company culture, role of technology, and data availability.

Select relevant use cases that align with your chosen AI strategy and measure the criticality and complexity of each use case. For criticality, measure how the use case will affect sales, growth, operations, culture, public perception, and deployment challenges. For complexity, measure how the use case will affect resources for other projects, change management, and ownership. Plan how to launch your AI project well to ensure success and adaptability.

To launch an AI project successfully, outline your vision, business value, and key performance indicators (KPIs). Prioritize project management by defining roles, deliverables, and tracking progress. Align goals, methods, and expectations, and establish performance benchmarks. Outline a plan for post-launch support, including ongoing maintenance, enterprise integration, and security measures. Establish a risk mitigation process for handling unintended consequences. Choose the right AI tool according to your needs and expertise, ranging from low-cost to high-cost, requiring technical expertise. Research options, assess risks and rewards, and collaborate with experts to create standard operating procedures. Ensure your team is gritty, ambitious, and resilient by familiarizing yourself with AI archetypes. To integrate AI successfully, focus on change management, create a manifesto, align company leadership, plan transitions, communicate changes regularly, celebrate small wins, emphasize iteration over perfection, and monitor progress through monthly retrospectives.

AI projects require human oversight to ensure ethical, transparent, and trustworthy systems. Principles for responsible AI include transparency, accountability, fairness, privacy, inclusiveness, and diversity. AI is expected to transform various sectors, generating $9.5 to $15.4 trillion annually. Legal professionals can use AI to review contracts, HR benefits from AI-powered chatbots, and sales teams can leverage AI for automated follow-up emails and personalized pitches. AI will drive trends and raise new challenges for businesses, such as automating complex tasks, scaling personalized marketing, and disrupting management consulting. However, AI opportunities come with risks such as cyber threats, privacy and bias concerns, and a growing skills gap. To seize AI opportunities while mitigating risks, businesses must learn how AI applies to their industry, assess their capabilities, identify high-potential use cases, build a capable team, create a change management plan, and keep a human in the loop to catch errors and address ethical issues.


Monday, January 13, 2025

 
ETA at waypoints using time-series algorithms:

Problem statement: Given the NURBS method for trajectory generation for UAV swarms as described in a previous article, the UAV trajectory was independent of in-flight parameters, and both the position and velocity profile of the planned trajectory could be obtained using the global locations and the expected time of arrival at the waypoints. While a single drone can adhere to the planned trajectory, the internal dynamics of the UAV swarm and their effect on the ETA are harder to quantify. A closer tracking of the ETA at waypoints and of trajectory deviations is needed for the UAV swarm.

   

Solution:

Consider a closed-loop trajectory of a UAV swarm. The effects of swarm dynamics are easier to observe along waypoints in the loop because the NURBS trajectory assumes a constant velocity profile. Uncertainty in external variables, such as an unmodeled wind-field, or uncertainty from internal friction between the drone units, can lead to different arrival times. Uncertainty affecting cruise velocity can be modeled using independent Gaussian random variables with covariance, but a time-series algorithm does not need any attributes other than the historical collection of ETAs at the waypoints to be able to predict the next ETA. It only looks at a scalar value, regardless of the type of factors playing into the arrival time of the swarm, while weights can be used to normalize the irregularity of distances between waypoints on the trajectory from start to finish. The historical data is used to produce an estimate of the arrival time, as if the arrivals were a scatter plot along the timeline. Unlike other data mining algorithms that involve additional attributes of the event, this approach uses a single auto-regressive method on the continuous data to make a short-term prediction. The regression is automatically trained as the data accrues, so there is no need to parameterize or quantify uncertainties.

Central to the step of fitting the linear regression, is the notion of covariance stationarity which suggests: 

·        The mean is not dependent on t 

·        The standard deviation is not dependent on t 

·        The covariance Cov(Yt, Yt-j) exists, is finite, and does not depend on t 

·        This last factor is called jth order autocovariance 

·        The jth order autocorrelation is the autocovariance divided by the variance (the square of the standard deviation) 

 

The autocovariance measures the direction of the linear dependence between Yt and Yt-j, while the autocorrelation measures both the direction and the strength of the linear dependence between Yt and Yt-j.
An autoregressive process is defined as one in which the time dependence in the process decays to zero as the random variables in the process get farther and farther apart. For a first-order autoregressive process it has the following properties:
E(Yt) = mu (the mean)
Var(Yt) = sigma^2
Cov(Yt, Yt-1) = sigma^2 * phi
Corr(Yt, Yt-1) = phi
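As a small illustration of these definitions, the following Python sketch computes the jth order autocovariance and autocorrelation of a series of ETAs (the ETA values are made up for the example):

import numpy as np

def autocovariance(y, j):
    # Sample jth order autocovariance of the series y.
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    n = len(y)
    return np.sum((y[j:] - mu) * (y[:n - j] - mu)) / n

def autocorrelation(y, j):
    # jth order autocorrelation: autocovariance divided by the variance.
    return autocovariance(y, j) / autocovariance(y, 0)

# ETAs (in seconds) recorded at the same waypoint over successive loops (hypothetical).
etas = [612.0, 618.5, 607.2, 615.9, 611.4, 620.1, 609.8, 616.3]
print(autocorrelation(etas, 1))   # direction and strength of the lag-1 dependence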
 

To fit the linear regression for a restricted data set, we determine the values of the random variable from the length p transformations of the time series data set. 

For a given time-series data set, nine corresponding data sets for length-p transformations are created, with p varying from zero to eight. Each of these transformed data sets is centered and standardized before modeling; that is, for each variable we subtract the mean value and divide by the standard deviation. Then we divide the data set into a training set, used as input to the learning method, and a holdout set to evaluate the model. The holdout set contains the cases corresponding to the last five observations in the sequence. 
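A minimal sketch of this procedure in Python with numpy, assuming an ordinary least-squares fit over the lagged values (the ETA series, the lag length p = 3, and the variable names are illustrative; only the centering, standardization and five-observation holdout follow the text):

import numpy as np

def make_lagged(y, p):
    # Build rows of p consecutive lagged values and the value that follows each row.
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    t = y[p:]
    return X, t

# Hypothetical ETAs (seconds) collected at a waypoint over successive loops.
etas = np.array([612.0, 618.5, 607.2, 615.9, 611.4, 620.1, 609.8,
                 616.3, 613.7, 619.2, 608.9, 617.5, 612.8, 618.0])

# Center and standardize before modeling, as described above.
mu, sigma = etas.mean(), etas.std()
z = (etas - mu) / sigma

p = 3                                   # lag length; the text varies p from zero to eight
X, t = make_lagged(z, p)

# Hold out the last five observations to evaluate the model.
X_train, t_train = X[:-5], t[:-5]
X_hold, t_hold = X[-5:], t[-5:]

# Ordinary least-squares fit of the autoregression coefficients (with an intercept).
coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X_train)), X_train], t_train, rcond=None)

# Evaluate on the holdout set and predict the next ETA, mapped back to seconds.
pred_hold = np.c_[np.ones(len(X_hold)), X_hold] @ coef
print("holdout errors (s):", np.abs(pred_hold - t_hold) * sigma)
next_z = coef[0] + z[-p:] @ coef[1:]
print("predicted next ETA (s):", next_z * sigma + mu)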

Sunday, January 12, 2025

 Waypoint Selection

A previous article introduced waypoints and trajectory smoothing for UAV swarms. This section focuses on waypoint selection.

The flight path management we propose uses the example of flying a fleet of drones around skyscrapers. The sample space can be considered a grid that must be navigated from one end to the other; all intermediary spaces can be thought of as waypoints to occupy along the way, allowing the fleet to organize itself around these intermediary points. By treating sub grids within the grid as potential candidates to select from, a path can be forged as a sequence of sub grids leading to the other end, and the fleet organizes itself around each sub grid. The sub grids are pre-determined, invariant, and uniform in size in each epoch.

Searching for the optimum intermediary points for the flight of the drones translates to the selection of waypoints by way of the centroids of the sub grids. Each viable waypoint acts as a vector of features such as potential gain towards the eventual destination, safety, signal strength, and wind effects. All information about adjacencies of sub grids as viable paths is known beforehand. Treating sub grids as nodes in a graph and using depth-first traversal for topological sort, it is possible to discover paths from start to finish, as the sketch below illustrates. The approach outlined here uses a gradient descent method to determine the local optima given the waypoints as vectors. A quadratic form representing the waypoints as vectors is assumed to denote their initial matrix.
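A small Python sketch of the path discovery step, assuming a hypothetical adjacency map of sub grids (the node names and edges are made up; feature scoring of the waypoints is omitted here):

# Sub grids are nodes; adjacency of viable sub grids is known beforehand.
adjacency = {
    "start": ["g1", "g2"],
    "g1": ["g3"],
    "g2": ["g3", "g4"],
    "g3": ["finish"],
    "g4": ["finish"],
}

def discover_paths(node, goal, path=None):
    # Depth-first traversal that enumerates every viable sequence of sub grids.
    path = (path or []) + [node]
    if node == goal:
        return [path]
    paths = []
    for nxt in adjacency.get(node, []):
        if nxt not in path:               # avoid revisiting a sub grid
            paths.extend(discover_paths(nxt, goal, path))
    return paths

for p in discover_paths("start", "finish"):
    print(" -> ".join(p))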

The solution to the quadratic form representing the embeddings is found by arriving at the minimum, represented by Ax = b, using the conjugate gradient method.

We are given an input matrix A, a vector b, a starting value x, a maximum number of iterations i-max, and an error tolerance epsilon < 1.

This method proceeds this way:

set I to 0
set residual to b - Ax
set search-direction to residual
set delta-new to the dot product residual-transposed . residual
set delta-0 to delta-new
while I < I-max and delta-new > epsilon^2 . delta-0 do:
    q = A . search-direction
    alpha = delta-new / (search-direction-transposed . q)
    x = x + alpha . search-direction
    if I is divisible by 50:
        residual = b - Ax          (periodic exact recomputation to limit round-off drift)
    else:
        residual = residual - alpha . q
    delta-old = delta-new
    delta-new = dot product residual-transposed . residual
    beta = delta-new / delta-old
    search-direction = residual + beta . search-direction
    I = I + 1

The Jacobi iteration gives the eigenvalues and eigenvectors.
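A minimal runnable version of the same steps in Python with numpy, using a small made-up symmetric positive-definite system in place of the waypoint quadratic form:

import numpy as np

def conjugate_gradient(A, b, x, i_max=1000, eps=1e-8):
    # Solve Ax = b for a symmetric positive-definite A, following the steps above.
    residual = b - A @ x
    direction = residual.copy()
    delta_new = residual @ residual
    delta_0 = delta_new
    i = 0
    while i < i_max and delta_new > eps**2 * delta_0:
        q = A @ direction
        alpha = delta_new / (direction @ q)
        x = x + alpha * direction
        if i % 50 == 0:
            residual = b - A @ x          # periodic exact recomputation
        else:
            residual = residual - alpha * q
        delta_old = delta_new
        delta_new = residual @ residual
        beta = delta_new / delta_old
        direction = residual + beta * direction
        i += 1
    return x

# Made-up 2x2 symmetric positive-definite system for demonstration.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
print(x, np.allclose(A @ x, b))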


Saturday, January 11, 2025

 Monitoring, Telemetry and Observability are important aspects of infrastructure. The public cloud has become the gold standard in demonstrating both active and passive monitoring. With a vast landscape of platforms, products, services, solutions, frameworks and dynamic clouds, modern IT infrastructure has enormous complexity to overcome to set up monitoring. Yet these challenges are seldom explained. In this article, we list five of them.

The first is the most obvious by the very nature of a diverse landscape: complexity. Contemporary environments for many teams and organizations are dynamic, complex, ephemeral and distributed, and monitoring tools must keep up with them. To set up monitoring for a big picture that spans hybrid stacks and environments, one must grapple with disconnected data, alerts and reports and engage in continuously updating tagging schemas to maintain context. So, unified observability and security with automated contextualization is key to addressing complexity. A comprehensive solution can indeed monitor containers, hosting frameworks like Kubernetes, and cloud resources. Topology and dependency mapping enable this flexible and streamlined observability.

The second challenge is the sprawl of tools and technologies for monitoring that are often also disconnected. Do-it-yourself and open-source solutions for monitoring are partly to blame for this; leveraging built-in solutions from the cloud improves overall efficiency and effort. This challenge has often resulted in a patchwork view, blind spots, duplicated efforts and redundant monitoring. The solution, then, is a single, integrated full-stack platform that reduces licensing costs, increases visibility to support compliance, and empowers proactive issue remediation and robust security.

The third challenge is the sheer size of MELT (Metrics, Events, Logs and Traces) data. With the ever-increasing volume, variety and velocity of data generated, IT teams are tasked with finding ways to ingest, store, analyze and interpret the information, often grappling with numerous and disconnected ways to do each. This results in critical issues being buried under a ton of data or overlooked due to unavailable or inadequate context, which leads to delayed decision making and a potential for errors whose cost and impact to the business are both huge and indeterminate. The right modern monitoring tool acts as a single source of truth, enriching data with context and not shying away from using AI to reason over vast volumes of data. It would also have sufficient processing to emit only quality alerts and reduce triage efforts.

The fourth challenge is troubleshooting and time to resolution because teams suffering from glitches and outages do not have the luxury to root cause incidents as they must struggle to restore operations and business. As users struggle with frustrations, poor experiences, insufficient information, and the risks of not meeting Service Level Agreements, there is decreased productivity, low team morale and difficulty in retaining the most valuable employees in addition to fines that can be incurred from missed SLAs. A true monitoring solution will come with programmability features that can make triaging and resolving easier. AI can also be used to find patterns and anomalies so that there can be some proactive measures on approaching thresholds rather than being reactive after incidents.

The fifth challenge is the areas of the technological landscape that either do not participate in monitoring or do so insufficiently. In fact, data breaches and hacks that result from incomplete monitoring have devastating financial consequences, fines and legal fees, besides a damaged market reputation that erodes stakeholders’ and customers’ trust. A single entry point for comprehensive monitoring across the entire infrastructure is a favored solution to meet this challenge. By visualizing the dependencies and relationships among application components and providing real-time, end-to-end observability with no manual configuration, gaps, or blind spots, a monitoring solution renders a complete picture.

Reference: Previous articles.

#Codingexercise: https://1drv.ms/w/c/d609fb70e39b65c8/Echlm-Nw-wkggNYXMwEAAAABrVDdrKy8p5xOR2KWZOh3Yw?e=hNUMeP