Saturday, February 21, 2026

Most drones don’t carry radar. They simply hold positions that change either through fully autonomous decisions or through commands from a controller. In the autonomous case, the waypoints and trajectory define the flight path, and each drone independently minimizes its deviation from that path, aligning its track using the least-squares method. For each unit in a UAV swarm, the selection of waypoints and the velocity and ETA at each waypoint are determined with the ability to make up delays or adjust ETAs, using conditional probability between the past and next waypoint to choose the path of least resistance or conflict between the two. Usually a formation, say a matrix, already spreads out the units, and its center of mass is used to measure the formation’s progress along the flight path. This article discusses a novel approach to minimizing conflicts while adhering to the path of least resistance.
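
As a toy illustration of that least-squares alignment (the sample data and straight-segment setup below are mine, purely for demonstration): a drone can fit a line through its recent position samples and compare the fitted track against the planned segment between the past and next waypoint.

import numpy as np

# Planned segment between the past and next waypoint (illustrative values)
past_wp = np.array([0.0, 0.0])
next_wp = np.array([100.0, 50.0])

# Recent (x, y) position samples with some cross-track noise
track = np.array([[10.2, 4.1], [20.1, 10.9], [30.3, 14.6], [39.8, 20.4]])

# Least-squares fit of y = a*x + b through the sampled track
A = np.vstack([track[:, 0], np.ones(len(track))]).T
(a, b), residuals, _, _ = np.linalg.lstsq(A, track[:, 1], rcond=None)

# Compare the fitted slope against the planned slope between the waypoints
planned_slope = (next_wp[1] - past_wp[1]) / (next_wp[0] - past_wp[0])
print(f"fitted slope {a:.3f} vs planned {planned_slope:.3f}, residual {residuals[0]:.4f}")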

For example, to transform between an “Abreast” and a “Diamond” formation, any technique must demonstrate efficiency in minimizing transformation distance and maintaining formation coherence. Similarly, to transform a matrix formation into a single line for flying under a bridge between its piers, any technique must demonstrate consensus on a pre-determined order.

The approach included here defines a drone formation state with six parameters: time, 3D position (three coordinates), yaw angle (heading), and velocity. For a formation to be considered coherent, all drones must share the same heading and speed while maintaining relative positions—essential for realistic aerial maneuvers.

The transformation itself consists of two steps: location assignment and path programming. First, to determine which drone should move to which position in the new formation, the Hungarian algorithm, a centralized optimization method, is used. In its absence, the greatest common denominator of the volume between two waypoints determines how many simultaneous paths are available, and the matrix model assigns each drone to the nearest path. If there is only one path and no centralized controller, the units run the Paxos algorithm to reach consensus on the linear order. This first step evaluates the cost of moving each drone to each new position by considering spatial displacement, heading change, and velocity difference, ensuring that the assignment minimizes overall disruption and maneuvering effort.
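
As a rough sketch, the per-pair cost can combine those three terms with tunable weights. The weights, state encoding, and function below are assumptions for illustration, not part of the approach itself:

import numpy as np

# Illustrative weights; the approach does not specify how the terms are traded off.
W_POS, W_YAW, W_VEL = 1.0, 0.5, 0.2

def assignment_cost(drone, slot):
    """Cost of sending one drone to one slot of the new formation.

    Each argument is (position_xyz, yaw_radians, speed), mirroring the
    position, heading, and velocity parts of the formation state above.
    """
    (p0, yaw0, v0), (p1, yaw1, v1) = drone, slot
    displacement = np.linalg.norm(np.asarray(p1, float) - np.asarray(p0, float))
    heading = abs((yaw1 - yaw0 + np.pi) % (2 * np.pi) - np.pi)  # wrapped to [-pi, pi]
    return W_POS * displacement + W_YAW * heading + W_VEL * abs(v1 - v0)

# Example: a drone at the origin heading east, a slot 10 m north heading north
print(assignment_cost(((0, 0, 0), 0.0, 12.0), ((0, 10, 0), np.pi / 2, 10.0)))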

Second, each drone calculates its own flight path to its newly assigned position using a Dubins path model, which generates the shortest possible route under a minimum-turning-radius constraint, a requirement for fixed-wing drones that cannot make sharp turns or hover. Positions alone do not guarantee compliance, so velocity adjustments for each unit must also be layered over the transition. Velocity adjustment follows a Bayesian conditional probability along the unit’s assigned path. This involves computing acceleration and deceleration phases to fine-tune the duration and dynamics of the transition, with error corrections against deviations.
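
For the acceleration and deceleration phases, here is a minimal sketch assuming a simple trapezoidal speed profile. The symmetric acceleration limit and the absence of a triangular-profile fallback are simplifying assumptions; the post only states that the phases are computed:

def trapezoidal_profile(length, v0, v1, v_max, a_max):
    """Phase durations to traverse `length` starting at speed v0, ending at v1.

    Accelerate at a_max from v0 up to v_max, cruise, then decelerate at a_max
    down to v1. Assumes the path is long enough that t_cruise >= 0.
    """
    t_up = (v_max - v0) / a_max
    t_down = (v_max - v1) / a_max
    d_up = (v0 + v_max) / 2 * t_up      # distance covered while accelerating
    d_down = (v_max + v1) / 2 * t_down  # distance covered while decelerating
    t_cruise = (length - d_up - d_down) / v_max
    return t_up, t_cruise, t_down

# Example: a 150 m Dubins path, entering at 10 m/s and exiting at 12 m/s
print(trapezoidal_profile(150.0, 10.0, 12.0, 15.0, 2.0))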

Overall, this provides a cohesive framework for in-flight drone formation reconfiguration that balances centralized planning with distributed execution. By encoding the physical constraints and state of each unit and classifying adherence, outliers can be rotated with other units, keeping the formation’s overall progression smooth and countering environmental factors such as turbulence with error corrections.

Lastly, here is a simple demonstration of the Hungarian algorithm, with sample code to determine position allocation in a formation transformation (using SciPy’s linear_sum_assignment solver).

#!/usr/bin/python
# pip install scipy numpy
# SciPy's linear_sum_assignment solves the assignment problem directly on a
# cost matrix, which is exactly what this demonstration needs.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Source: drones in a 3×3 grid on the Z=0 plane
source_positions = [(x, y, 0) for y in range(3) for x in range(3)]

# Target: drones in a single horizontal line (linear flight path), spaced 10 units apart
target_positions = [(i * 10, 0, 0) for i in range(9)]

# Compute the cost matrix (Euclidean distance from each drone to each slot)
cost_matrix = np.array([
    [np.linalg.norm(np.array(src) - np.array(dst)) for dst in target_positions]
    for src in source_positions
])

# Run the Hungarian algorithm to get the minimum-cost assignment
row_ind, col_ind = linear_sum_assignment(cost_matrix)

# Report matched pairs
for src_idx, dst_idx in zip(row_ind, col_ind):
    print(f"Drone {src_idx} → Target Position {dst_idx}: {target_positions[dst_idx]}")


Friday, February 20, 2026

This is a summary of the book titled “Applying AI in Learning and Development: From Platforms to Performance” written by Josh Cavalier and published by ATD (Association for Talent Development) in 2025. This book examines how learning and development (L&D) professionals can use artificial intelligence thoughtfully to improve both learning efficiency and organizational performance. Rather than presenting AI as a replacement for human expertise, the book positions it as a partner that can handle routine, data-intensive tasks while allowing L&D professionals to focus on strategy, analysis, and design.

Cavalier begins by showing how AI can streamline common instructional design activities. Tasks such as transcribing interviews, summarizing discussions, or generating draft materials—once time-consuming—can be completed quickly with AI support. As these efficiencies increase, the role of the L&D professional evolves. The book introduces the idea of the human–machine performance analyst (HMPA), a role in which practitioners use judgment, contextual knowledge, and empathy to interpret data and shape learning interventions, while AI supports content creation and analysis. An example illustrates this shift: when compliance incidents continued despite high course completion rates, an L&D professional used AI-generated data as a starting point but relied on interviews and observation to identify the real issue—irrelevant training. Redesigning the program into role-specific scenarios led to a measurable reduction in incidents.

Throughout the book, he emphasizes that the core skills of L&D—understanding how people learn, connecting learning to performance, and aligning learning with business outcomes—remain unchanged. What has changed is the set of tools available and the scope of influence L&D can have across an organization. He encourages teams to begin experimenting with AI in small, low-risk ways, such as using meeting assistants to capture action items or deploying chatbots to answer frequently asked learner questions. Progress should be tracked, lessons documented, and experimentation treated as part of normal professional growth rather than a one-time initiative.

A significant portion of the book focuses on assessing an organization’s current relationship with AI. He outlines several common patterns, ranging from informal individual experimentation to full organizational integration. In some organizations, employees use external AI tools without guidance, increasing the risk of data exposure. Others hesitate to act at all, stalled by concerns about privacy, bias, or regulation. Still others implement AI unevenly, creating silos where some departments benefit while others are left behind. The most mature organizations, by contrast, provide approved tools, clear policies, and role-specific training that allow AI to be used consistently and responsibly. Understanding where an organization falls along this spectrum helps L&D leaders determine realistic next steps.

From there, the book argues that successful AI adoption depends less on choosing a particular tool and more on establishing a strong foundation. AI initiatives should be explicitly tied to business goals such as faster onboarding, improved compliance, or better customer service, with clear explanations of how time or costs will be saved. Small pilots and case studies can demonstrate value and reduce resistance, especially when results are communicated through concrete comparisons rather than abstract claims.

He places strong emphasis on governance. While many L&D professionals already experiment with AI, far fewer feel confident about using it ethically. An effective AI policy, he argues, must address data privacy, security, regulatory compliance, and bias. Policies should specify which tools are approved, what information can be shared with them, and where human review is required. The book uses the well-known example of Amazon’s abandoned résumé-screening system to illustrate how biased training data can produce discriminatory outcomes. To mitigate these risks, he recommends close collaboration with legal, HR, and cybersecurity teams, as well as processes that allow learners and employees to flag AI-generated content for review.

When it comes to technology selection, the book encourages L&D leaders to advocate for platforms that support both learning and broader business needs. Many organizations are moving away from standalone learning management systems toward integrated human capital management platforms that combine learning, skills tracking, performance management, and workforce planning. He suggests defining what the organization wants AI to accomplish over the next six to twelve months and evaluating vendors against practical criteria such as transparency, system integration, usability, analytics, scalability, support, security, and return on investment. The goal is not to adopt the most advanced system available, but to choose the one that fits the organization’s context and constraints.

The book also provides detailed guidance on working effectively with generative AI. Cavalier stresses that output quality depends heavily on prompt quality. Clear instructions, explicit constraints, and well-defined criteria produce more useful results than vague requests. He recommends treating prompts as reusable assets by developing templates and maintaining a shared prompt library that documents use cases, tested models, and variations. Chaining prompts within a single session (moving from objectives to outlines to scripts, for example) can also improve coherence. Despite these efficiencies, the book repeatedly underscores the importance of human oversight to ensure accuracy, relevance, and alignment with learning goals.

In its final section, the book explores the use of AI agents to personalize learning at scale. Unlike traditional automated systems, these agents can reason, adapt, and make recommendations based on learner data, such as skill gaps, goals, and performance trends. Examples show how personalized recommendations can increase engagement and motivation. However, he is careful to frame AI agents as collaborators rather than autonomous decision-makers. He advocates for models in which AI proposes learning paths or resources, while human coaches or managers remain involved in reflection and decision-making. Implementing these systems requires careful attention to data quality, accessibility, integration with existing platforms, and iterative testing with pilot groups.

Overall, Applying AI in Learning and Development presents AI not as a disruptive force to be feared or a shortcut to be exploited, but as a tool that amplifies the strategic role of L&D. By combining experimentation with governance, efficiency with human judgment, and technology with organizational context, he argues that L&D professionals can use AI to deliver learning that is both more personalized and more closely tied to real performance outcomes.


Thursday, February 19, 2026

 3756. Concatenate Non-Zero Digits and Multiply by Sum II

You are given a string s of length m consisting of digits. You are also given a 2D integer array queries, where queries[i] = [li, ri].

For each queries[i], extract the substring s[li..ri]. Then, perform the following:

Form a new integer x by concatenating all the non-zero digits from the substring in their original order. If there are no non-zero digits, x = 0.

Let sum be the sum of digits in x. The answer is x * sum.

Return an array of integers answer where answer[i] is the answer to the ith query.

Since the answers may be very large, return them modulo 10^9 + 7.

Example 1:

Input: s = "10203004", queries = [[0,7],[1,3],[4,6]]

Output: [12340, 4, 9]

Explanation:

s[0..7] = "10203004"

x = 1234

sum = 1 + 2 + 3 + 4 = 10

Therefore, answer is 1234 * 10 = 12340.

s[1..3] = "020"

x = 2

sum = 2

Therefore, the answer is 2 * 2 = 4.

s[4..6] = "300"

x = 3

sum = 3

Therefore, the answer is 3 * 3 = 9.

Example 2:

Input: s = "1000", queries = [[0,3],[1,1]]

Output: [1, 0]

Explanation:

s[0..3] = "1000"

x = 1

sum = 1

Therefore, the answer is 1 * 1 = 1.

s[1..1] = "0"

x = 0

sum = 0

Therefore, the answer is 0 * 0 = 0.

Example 3:

Input: s = "9876543210", queries = [[0,9]]

Output: [444444137]

Explanation:

s[0..9] = "9876543210"

x = 987654321

sum = 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 45

Therefore, the answer is 987654321 * 45 = 44444444445.

We return 44444444445 modulo (10^9 + 7) = 444444137.

Constraints:

1 <= m == s.length <= 10^5

s consists of digits only.

1 <= queries.length <= 10^5

queries[i] = [li, ri]

0 <= li <= ri < m

 Solution:

class Solution {
    private static final int MOD = 1_000_000_007;

    public int[] sumAndMultiply(String s, int[][] queries) {
        int[] answers = new int[queries.length];
        for (int i = 0; i < queries.length; i++) {
            long x = 0;   // concatenated non-zero digits, reduced modulo MOD
            long sum = 0; // digit sum; at most 9 * 10^5, so it fits easily
            for (int j = queries[i][0]; j <= queries[i][1]; j++) {
                int d = s.charAt(j) - '0';
                if (d != 0) {
                    // Reduce as we go: x alone can overflow a long otherwise,
                    // since the substring may hold up to 10^5 digits
                    x = (x * 10 + d) % MOD;
                    sum += d;
                }
            }
            // x < 10^9 + 7 and sum <= 9 * 10^5, so the product fits in a long
            answers[i] = (int) (x * sum % MOD);
        }
        return answers;
    }
}
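
The scan above is linear in the substring length, so the worst case is about 10^10 digit visits at the stated limits (both m and the number of queries can reach 10^5). A prefix precomputation answers each query in constant time. Below is a sketch in Python rather than Java, using the identity that the concatenated value of a range can be peeled out of running prefix values with a power of 10; the function name is illustrative:

MOD = 10**9 + 7

def sum_and_multiply(s, queries):
    m = len(s)
    cnt = [0] * (m + 1)    # prefix count of non-zero digits
    val = [0] * (m + 1)    # all non-zero digits so far, concatenated, mod MOD
    dig = [0] * (m + 1)    # prefix sum of non-zero digits
    pow10 = [1] * (m + 1)  # 10^k mod MOD
    for i, ch in enumerate(s):
        d = ord(ch) - ord('0')
        cnt[i + 1] = cnt[i] + (1 if d else 0)
        val[i + 1] = (val[i] * 10 + d) % MOD if d else val[i]
        dig[i + 1] = dig[i] + d
        pow10[i + 1] = pow10[i] * 10 % MOD
    out = []
    for l, r in queries:
        k = cnt[r + 1] - cnt[l]                     # non-zero digits inside [l, r]
        x = (val[r + 1] - val[l] * pow10[k]) % MOD  # concatenation for the range
        out.append(x * (dig[r + 1] - dig[l]) % MOD)
    return out

print(sum_and_multiply("10203004", [[0, 7], [1, 3], [4, 6]]))  # [12340, 4, 9]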


Wednesday, February 18, 2026

 This is a summary of a book titled “The Datapreneurs: The Promise of AI and the Creators Building Our Future” written by Steve Hamm and Bob Muglia and published by Peakpoint Press, 2023. This book examines how artificial intelligence and data-driven systems are reshaping the economy, technology, and society. The authors argue that the world is entering a period in which intelligence, labor, and energy—the three foundational forces of the modern economy—are all becoming cheaper due to technological advances. Artificial intelligence, particularly the development of artificial general intelligence and the possibility of artificial superintelligence, has the potential to add intelligence to nearly every device and application. At the same time, progress in renewable and advanced energy technologies may reduce the cost of electricity, while robotics could significantly lower the cost of certain kinds of labor. Together, these shifts point toward a profound economic transformation.

The authors suggest that within the next decade, many of the AI assistants people interact with daily could surpass the level of median human intelligence. As these systems evolve through successive generations, they may become capable of artificial superintelligence, potentially exceeding the combined intellectual capacity of humanity. This development could trigger what has often been described as a technological singularity, a moment when technological progress accelerates beyond human prediction or control. Such a shift could compress centuries of scientific and economic advancement into a much shorter time span, creating opportunities to address persistent global challenges such as climate change, disease, and poverty. However, the authors emphasize that these outcomes are not guaranteed and depend heavily on how humans choose to guide and govern intelligent machines.

The authors delve into the need for ethics and values to shape the relationship between humans and machines. They contrast optimistic visions of a future characterized by abundance and ease with darker, more dystopian possibilities in which powerful machines generate fear or inequality. To avoid harmful outcomes, they argue for the creation of a new social contract that defines how intelligent systems should behave once they exceed human capabilities. Because advanced machines will increasingly make decisions and take actions independently, the values embedded in their design will play a decisive role in shaping their impact. Establishing ethical frameworks is therefore not an abstract concern but a practical necessity for long-term human and machine collaboration.

The book places current developments in artificial intelligence within a longer historical context by tracing the evolution of data management technologies. Relational databases are presented as a foundational breakthrough that made modern data-driven computing possible. Earlier systems relied on rigid hierarchical or network-based structures that were difficult to update and scale. The relational model, developed by IBM researcher Ted Codd in 1970, introduced a more flexible way to organize data, allowing relationships to be defined mathematically rather than hard-coded into applications. The introduction of SQL and the commercialization of relational databases by companies such as IBM, Oracle, and Sybase helped make data more accessible and adaptable for organizations of all sizes.

Microsoft’s role in expanding access to data management is highlighted as a key moment in the democratization of computing. The company’s emphasis on making information readily available, combined with the release of more affordable and user-friendly database systems such as SQL Server 7.0, lowered barriers for smaller businesses that previously lacked access to enterprise-level data tools. By reducing costs and simplifying maintenance, Microsoft helped bring advanced data processing capabilities beyond large corporations and into the broader economy.

As data volumes grew, the book explains, new infrastructure became necessary to support machine learning and AI systems. Cloud-based data platforms and pipelines now allow organizations to store, process, and move massive amounts of structured and unstructured data. These pipelines function as connective tissue, transferring data into centralized repositories where it can be used to train AI systems. In this framework, future AI assistants will increasingly learn from data warehouses and data lakes, drawing insights from continuous streams of information rather than static datasets.

The authors also describe the emergence of data applications, which differ from traditional software by responding directly to changes in data rather than user commands. Powered by relational knowledge graphs and predictive models, these systems can automate routine decisions and actions. As a result, many administrative tasks may be handled by machines, allowing people to focus on analysis, strategy, and creative problem-solving. This shift extends to autonomous systems such as drones and self-driving vehicles, which require databases capable of synchronizing data rapidly across networks to ensure safety and coordination.

The book further explores the importance of programming languages in the evolving data ecosystem, particularly the rise of Julia. Designed to address inefficiencies in data science workflows, Julia enables high-performance computing without requiring developers to rewrite code in lower-level languages. Its support for automatic differentiation makes it well suited for building predictive models and neural networks, and it is already being used in fields ranging from finance to climate science.

Finally, the authors turn to foundation models, large-scale AI systems trained on vast datasets that exhibit emergent capabilities. These models can be adapted for a wide range of tasks, from writing text to generating images and assisting with software development. Powered by neural networks, such systems can sense, learn, reason, plan, adapt, and act with increasing autonomy. As these capabilities advance, the authors argue that computer scientists and society as a whole must prepare for a future in which machines generate long-term plans and predictions. The book concludes that while superintelligent systems hold enormous promise, their impact will ultimately depend on the values and responsibilities humans choose to embed within them.


Tuesday, February 17, 2026

While operational and analytical data gets rigorous treatment under the pillars of good architecture (purview, privacy, security, governance, encryption at rest and in transit, aging, tiering, and so on), DevOps tasks such as Extract-Transform-Load and backup/restore are often brushed aside, yet never eliminated, because of the convenience they provide. This includes the vast vector stores that have now become central to building contextual copilots in many scenarios.

One of the tools for empowering access to data for purposes other than transactions or analytics is the ability to connect with a client native to the store where the data resides. Even if the store is in the cloud, data-plane access is usually independent of the control-plane command-line interfaces. This calls for creating a custom image that can be used on any compute to spin up a container able to access the vectors. For example, this Dockerfile installs the clients:

# python:3.13-latest-dev is not a published tag; a Debian-based slim image works
FROM python:3.13-slim

USER root

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ksh \
        ldap-utils \
        default-mysql-client \
        vim \
        wget \
        curl \
        libdbd-mysql-perl \
        libcurl4-openssl-dev \
        rsync \
        libev4 \
        tzdata \
        jq \
        pigz && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# The base image already ships Python and pip, so the apt python3-minimal and
# python3-pip packages are unnecessary. apk is Alpine's package manager and
# does not exist in this Debian-based image; the MariaDB-compatible client
# comes from default-mysql-client above.
RUN pip install s3cmd azure-storage-blob requests

WORKDIR /app

COPY custom_installs.py .

# Sanity-check that the database clients are on the PATH
RUN mysqldump --version && mysql --version

ENTRYPOINT ["python", "custom_installs.py"]


Monday, February 16, 2026

This is a summary of the book titled “Technology for Good: How Nonprofit Leaders Are Using Software and Data to Solve Our Most Pressing Social Problems” written by Jim Fruchterman and published by MIT Press, 2025. This book piques my interest because bad ideas need to be abandoned fast, and both startups and non-profits struggle with that until it becomes critical. In this book, the author explores why high-growth, profit-driven start-ups can abandon failing ideas quickly while nonprofit technology ventures often cannot. While the popular imagination tends to focus on for-profit start-ups capable of viral success and massive wealth creation, Fruchterman argues that nonprofit tech start-ups play an equally important role in shaping the future, particularly when it comes to addressing entrenched social problems. Drawing on his experience as a social entrepreneur, he offers a practical guide to building social enterprises, noting that while nonprofit and for-profit start-ups face similar challenges in developing ideas and raising capital, nonprofits benefit from a crucial advantage. Because they are not beholden to investors seeking financial returns, nonprofit founders have greater freedom to prioritize impact over profit.

Nonprofit organizations are chronically behind the technology curve. Tight budgets and donor expectations often leave charities and public agencies relying on outdated hardware and software, sometimes lagging a decade or more behind current standards. Although technology is essential to modern organizational effectiveness, donors frequently view technology spending as overhead rather than as a core part of the mission. Fruchterman challenges this mindset and emphasizes that the most effective way for nonprofits to modernize is often by adapting widely used, standard platforms rather than attempting to build custom solutions from scratch. Tools such as Microsoft Office or Slack can meet many needs, and large technology companies frequently offer discounted pricing to nonprofits, often coordinated through organizations like TechSoup Global. While custom software development is sometimes necessary, it is usually more cost-effective to purchase existing solutions, provided the organization has enough technical expertise to manage vendor relationships and protect its interests. In rare cases, nonprofits even form specifically to create technology that the commercial market has failed to address.

Fruchterman is particularly critical of the nonprofit sector’s tendency to incubate ill-fated technological innovations. Unlike the for-profit world, where start-ups are encouraged to test ideas quickly, gather feedback, and abandon bad concepts early, nonprofit leaders often cling to flawed ideas for too long. One common mistake is the assumption that every organization needs a mobile app simply because apps are ubiquitous in everyday life. In reality, most users do not want more apps, and many nonprofit apps fail to gain traction. The author also cautions against rushing into experimental or heavily hyped technologies. Blockchain, for example, attracted significant attention after the success of Bitcoin, leading many donors and nonprofits to assume it could be easily repurposed for social good. In practice, blockchain initiatives have often failed to deliver meaningful benefits, as illustrated by costly implementations that outweighed their promised savings. Fruchterman urges social leaders to remain skeptical and clear-eyed, especially when technologies are promoted by those more focused on ideology than sound technical design.

Despite these pitfalls, the book makes a strong case that thoughtfully deployed technology can dramatically increase the social sector’s impact. While for-profit companies often aim to eliminate human interaction through automation, nonprofits tend to emphasize person-to-person relationships. Fruchterman argues that technology should not replace human connection in the social sector, but rather support it, particularly by improving efficiency for frontline workers. When those closest to the people being served can work more effectively, the organization’s overall impact is amplified. He also highlights the potential of delivering well-designed tools directly to communities themselves.

One illustrative example is Medic, a social organization that builds tools specifically for community health workers. By replacing paper forms with digital data and linking frontline workers to local health systems, Medic created an app that succeeded precisely because it was narrowly targeted and deeply practical. Although most nonprofit apps add little value, Medic’s tool stands out because it was designed for a clearly defined audience and addressed real operational needs. The result was improved outcomes in areas such as maternal health, disease treatment, and vaccination tracking.

Fruchterman also challenges conventional nonprofit strategic planning. He argues that long-term strategic plans are often too rigid to survive in a rapidly changing world, a lesson underscored by the COVID-19 pandemic, which rendered many carefully crafted plans irrelevant almost overnight. Instead of producing static documents, nonprofits should adopt a more agile approach to strategy that remains grounded in mission while allowing for rapid adaptation. This means focusing on the organization’s core objectives—the “what”—rather than locking into specific tactics—the “how.” By collecting real-time data and learning continuously from results, nonprofits can test assumptions, adjust programs, and respond more effectively to changing conditions.

The book devotes significant attention to artificial intelligence, emphasizing both its promise and its limitations. Fruchterman stresses that AI systems are only as good as the data used to train them, and that bias is an unavoidable risk when datasets are incomplete or unrepresentative. Because many AI tools are developed primarily in English and rely on mainstream data sources, they often overlook the poor and underserved populations that nonprofits aim to support. The author illustrates this problem with examples of biased facial recognition systems that perform poorly on women and people of color due to skewed training data. Such cases underscore the importance of diverse development teams and careful scrutiny when deploying AI in social contexts.

Another key distinction Fruchterman draws is between the goals of nonprofit and for-profit start-ups. While commercial tech ventures are often driven by the promise of wealth, nonprofit start-ups exist to serve people who cannot pay for services. As a result, financial success is defined not by profits but by impact and sustainability. Although the motivations differ, the basic phases of launching a start-up are similar, beginning with exploration and user research, followed by development, growth, and eventual maturity. Throughout these stages, nonprofit founders must be disciplined about testing ideas, releasing imperfect products, and learning from feedback.

Funding and talent emerge as persistent challenges for nonprofit tech start-ups. Fruchterman estimates that early-stage funding typically ranges from modest six-figure sums to around a million dollars for more ambitious projects, with founders often contributing unpaid labor in the beginning. Philanthropic foundations, fellowship programs, accelerators, government agencies, and corporate social good initiatives all play important roles in supporting these ventures. Unlike for-profit start-ups, nonprofits aim simply to break even while maximizing the number of people they help. Although nonprofits cannot compete with the salaries offered by commercial tech firms, they can attract professionals motivated by purpose rather than profit, particularly when expectations around compensation are addressed transparently from the outset.

Fruchterman argues that social entrepreneurs should prioritize empowering communities and individuals rather than imposing top-down solutions. Access to healthcare, education, capital, and inclusion can transform lives, and technology can be a powerful enabler when used responsibly. He encourages nonprofit leaders to embrace data collection and cloud-based tools while remaining transparent about how data is used and firmly committed to protecting it from exploitation. The book closes with a call to use AI and other emerging technologies for good, capturing efficiency gains without surrendering human judgment or ethical responsibility. Fruchterman’s long career in social entrepreneurship and open-source development lends authenticity to his message: when technology is guided by mission, humility, and respect for the people it serves, it can become a powerful force for positive social change.
