Friday, January 3, 2025

 This is a summary of the book “Your AI Survival Guide,” written by Sal Rashidi and published by Wiley in 2024. Rashidi argues that organizations cannot afford to sit among the Laggards or the Late Majority in adopting AI, even if they are non-technical, because AI is here to stay and those who wait risk being eliminated from business. The viable choices are the Early Majority, who adopt a technology once it has demonstrated its advantages; the Early Adopters, who are closer to the forefront; and the Innovators, who pioneer the use of AI in their fields. Each group plays a crucial role in the adoption lifecycle of a technology, which usually lasts until something better replaces it, so there is no wrong pick. The book lays out everything from uncovering your “why” to building your team and making your AI responsible. With applications already ranging from agriculture to HR, the time to be proactive is now. His playbook involves assessing which AI strategy fits you and your team, selecting relevant use cases, planning how to launch your AI project, choosing the right tools and partners to go live, ensuring the team is gritty, ambitious, and resilient, and incorporating human oversight into AI decision-making.

To successfully implement AI within a company, it is essential to balance established protocols with the need to adapt to changing times. To achieve this, consider the reasons for deploying AI, develop an AI strategy, and start small and scale quickly. Choose a qualified AI consultant or development firm that fits your budget and goals. Set a realistic pace for your project. Conduct an AI readiness assessment to determine the best AI strategy for your company. Score yourself on various categories, such as market strategy, business understanding, workforce acumen, company culture, role of technology, and data availability.

Select relevant use cases that align with your chosen AI strategy and measure the criticality and complexity of each use case. For criticality, measure how the use case will affect sales, growth, operations, culture, public perception, and deployment challenges. For complexity, measure how the use case will affect resources for other projects, change management, and ownership. Plan how to launch your AI project well to ensure success and adaptability.

To launch an AI project successfully, outline your vision, business value, and key performance indicators (KPIs). Prioritize project management by defining roles and deliverables and tracking progress. Align goals, methods, and expectations, and establish performance benchmarks. Outline a plan for post-launch support, including ongoing maintenance, enterprise integration, and security measures. Establish a risk mitigation process for handling unintended consequences. Choose the right AI tool according to your needs and expertise; options range from low-cost tools to high-cost ones that require technical expertise. Research options, assess risks and rewards, and collaborate with experts to create standard operating procedures. Ensure your team is gritty, ambitious, and resilient by familiarizing yourself with AI archetypes. To integrate AI successfully, focus on change management: create a manifesto, align company leadership, plan transitions, communicate changes regularly, celebrate small wins, emphasize iteration over perfection, and monitor progress through monthly retrospectives.

AI projects require human oversight to ensure ethical, transparent, and trustworthy systems. Principles for responsible AI include transparency, accountability, fairness, privacy, inclusiveness, and diversity. AI is expected to transform various sectors, generating $9.5 to $15.4 trillion annually. Legal professionals can use AI to review contracts, HR benefits from AI-powered chatbots, and sales teams can leverage AI for automated follow-up emails and personalized pitches. AI will drive trends and raise new challenges for businesses, such as automating complex tasks, scaling personalized marketing, and disrupting management consulting. However, AI opportunities come with risks such as cyber threats, privacy and bias concerns, and a growing skills gap. To seize AI opportunities while mitigating risks, businesses must learn how AI applies to their industry, assess their capabilities, identify high-potential use cases, build a capable team, create a change management plan, and keep a human in the loop to catch errors and address ethical issues.


Thursday, January 2, 2025

 Serverless SQL in Azure offers a flexible and cost-effective way to manage SQL databases and data processing without the need to manage the underlying infrastructure. Here are some key aspects:

Azure SQL Database Serverless

Autoscaling: Automatically scales compute based on workload demand. It bills for the amount of compute used per second.

Auto-Pause and Resume: Pauses databases during inactive periods, during which only storage is billed, and resumes when activity returns.

Configurable Parameters: You can configure the minimum and maximum vCores, memory, and IO limits.

Cost-Effective: Ideal for single databases with intermittent, unpredictable usage patterns.

Azure Synapse Analytics Serverless SQL Pool

Query Service: Provides a query service over data in your data lake, allowing you to query data in place without moving it.

T-SQL Support: Uses familiar T-SQL syntax for querying data.

High Reliability: Built for large-scale data processing with built-in query execution fault-tolerance.

Pay-Per-Use: You are only charged for the data processed by your queries.

Benefits

Scalability: Easily scales to accommodate varying workloads.

Cost Efficiency: Only pay for what you use, making it cost-effective for unpredictable workloads.

Ease of Use: No infrastructure setup or maintenance required.
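As a concrete sketch of configuring the parameters above, a serverless Azure SQL database can be provisioned with the Azure CLI. The resource group, server, and database names here are placeholders, and flag availability may vary by CLI version:

```shell
# Create a serverless General Purpose database (Gen5, 0.5-2 vCores)
# that auto-pauses after 60 minutes of inactivity.
az sql db create \
  --resource-group my-rg \
  --server my-sql-server \
  --name my-serverless-db \
  --edition GeneralPurpose \
  --family Gen5 \
  --compute-model Serverless \
  --min-capacity 0.5 \
  --capacity 2 \
  --auto-pause-delay 60
```

The `--min-capacity` and `--capacity` flags set the vCore range the database scales within, and `--auto-pause-delay` is in minutes.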

Neon, launched in 2021, is a serverless relational database for the cloud. Recently it became natively available on Azure, just as it has been on AWS. This deeper integration of Neon into Azure facilitates rapid app development, because PostgreSQL is a developers' favorite. Serverless reduces operational overhead and frees developers to focus on the data model, access patterns, and CI/CD integration to suit their needs. In fact, Microsoft's investments in GitHub, VSCode, TypeScript, OpenAI, and Copilot align well with the developer agenda.

Even AI's demand for a vector store can be met within a relational database, as both Azure SQL and Neon have demonstrated. Compute seamlessly scales up for expensive index builds and back down for normal or RAG queries. Since pausing during inactivity and resuming under load are automated in serverless, the cost savings are significant. In addition, both databases focus on data privacy.

The following is a way to test AI vector cosine similarity in a relational database.
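For intuition before running the SQL steps, the cosine distance that a database's vector distance function computes can be reproduced in a few lines of plain Python (the vectors here are made-up examples):

```python
import math

def cosine_distance(a, b):
    # cosine distance = 1 - cosine similarity = 1 - (a.b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Same direction -> distance 0; orthogonal -> distance 1.
print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Smaller distances mean more similar vectors, which is why the final similarity query orders by the distance ascending.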

Step 1: Upload a dataset to a storage account from where it can be accessed easily. This must be a CSV file with headers like:

id,url,title,text,title_vector,content_vector,vector_id


Step 2: Use the Azure Portal Query Editor or any other client to run the following SQL scripts:

a. 00-setup-blob-access.sql

/* Cleanup if needed */
if not exists(select * from sys.symmetric_keys where [name] = '##MS_DatabaseMasterKey##')
begin
    create master key encryption by password = 'Pa$$w0rd!'
end
go

if exists(select * from sys.[external_data_sources] where name = 'openai_playground')
begin
    drop external data source [openai_playground];
end
go

if exists(select * from sys.[database_scoped_credentials] where name = 'openai_playground')
begin
    drop database scoped credential [openai_playground];
end
go

/*
    Create database scoped credential and external data source.
    File is assumed to be in a path like:
    https://saravinoteblogs.blob.core.windows.net/playground/wikipedia/vector_database_wikipedia_articles_embedded.csv

    Please note that it is recommended to avoid using SAS tokens: the best practice is to use Managed Identity as described here:
    https://learn.microsoft.com/en-us/sql/relational-databases/import-export/import-bulk-data-by-using-bulk-insert-or-openrowset-bulk-sql-server?view=sql-server-ver16#bulk-importing-from-azure-blob-storage
*/
create database scoped credential [openai_playground]
with identity = 'SHARED ACCESS SIGNATURE',
secret = 'sp=rwdme&st=2024-11-22T03:37:08Z&se=2024-11-29T11:37:08Z&spr=https&sv=2022-11-02&sr=b&sig=EWag2qRCAY7kRsF7LtBRRRExdWgR5h4XWrU%2'; -- make sure not to include the ? at the beginning
go

create external data source [openai_playground]
with
(
    type = blob_storage,
    location = 'https://saravinoteblogs.blob.core.windows.net/playground',
    credential = [openai_playground]
);
go

b. 01-import-wikipedia.sql:

/* Create table */
drop table if exists [dbo].[wikipedia_articles_embeddings];
create table [dbo].[wikipedia_articles_embeddings]
(
    [id] [int] not null,
    [url] [varchar](1000) not null,
    [title] [varchar](1000) not null,
    [text] [varchar](max) not null,
    [title_vector] [varchar](max) not null,
    [content_vector] [varchar](max) not null,
    [vector_id] [int] not null
)
go

/* Import data */
bulk insert dbo.[wikipedia_articles_embeddings]
from 'wikipedia/vector_database_wikipedia_articles_embedded.csv'
with (
    data_source = 'openai_playground',
    format = 'csv',
    firstrow = 2,
    codepage = '65001',
    fieldterminator = ',',
    rowterminator = '0x0a',
    fieldquote = '"',
    batchsize = 1000,
    tablock
)
go

/* Add primary key */
alter table [dbo].[wikipedia_articles_embeddings]
add constraint pk__wikipedia_articles_embeddings primary key clustered (id)
go

/* Add index on title */
create index [ix_title] on [dbo].[wikipedia_articles_embeddings](title)
go

/* Verify data */
select top (10) * from [dbo].[wikipedia_articles_embeddings]
go

select * from [dbo].[wikipedia_articles_embeddings] where title = 'Alan Turing'
go

c. 02-use-native-vectors.sql:

/* Add columns to store the native vectors */
alter table wikipedia_articles_embeddings
add title_vector_ada2 vector(1536);
alter table wikipedia_articles_embeddings
add content_vector_ada2 vector(1536);
go

/* Update the native vectors */
update
    wikipedia_articles_embeddings
set
    title_vector_ada2 = cast(title_vector as vector(1536)),
    content_vector_ada2 = cast(content_vector as vector(1536))
go

/* Remove old columns */
alter table wikipedia_articles_embeddings
drop column title_vector;
go
alter table wikipedia_articles_embeddings
drop column content_vector;
go

/* Verify data */
select top (10) * from [dbo].[wikipedia_articles_embeddings]
go

select * from [dbo].[wikipedia_articles_embeddings] where title = 'Alan Turing'
go
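The cast above relies on the text columns holding a JSON-style array of 1536 floats. For intuition, the equivalent parse in Python, using a shortened, made-up value in place of a full 1536-entry embedding:

```python
import json

# The CSV stores each embedding as a JSON-style array in a varchar(max)
# column, e.g. "[0.0023, -0.0171, ...]"; here a three-entry stand-in.
raw = "[0.0023, -0.0171, 0.0094]"
vector = json.loads(raw)

assert isinstance(vector, list)
print(len(vector), vector[0])  # 3 0.0023
```

The database cast does the same conversion in place, after which the original text columns can be dropped to save space.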

d. 03-store-openai-credentials.sql

/* Create database credentials to store API key */
if exists(select * from sys.[database_scoped_credentials] where name = 'https://postssearch.openai.azure.com')
begin
    drop database scoped credential [https://postssearch.openai.azure.com];
end
create database scoped credential [https://postssearch.openai.azure.com]
with identity = 'HTTPEndpointHeaders', secret = '{"api-key": "<your-azure-openai-api-key>"}'; -- replace with your Azure OpenAI API key; never publish real keys
go

e. 04-create-get-embeddings-procedure.sql:

/* Get the embeddings for the input text by calling the OpenAI API */
create or alter procedure dbo.get_embedding
@deployedModelName nvarchar(1000),
@inputText nvarchar(max),
@embedding vector(1536) output
as
declare @retval int, @response nvarchar(max);
declare @payload nvarchar(max) = json_object('input': @inputText);
declare @url nvarchar(1000) = 'https://postssearch.openai.azure.com/openai/deployments/' + @deployedModelName + '/embeddings?api-version=2023-03-15-preview';
exec @retval = sp_invoke_external_rest_endpoint
    @url = @url,
    @method = 'POST',
    @credential = [https://postssearch.openai.azure.com],
    @payload = @payload,
    @response = @response output;
declare @re nvarchar(max) = null;
if (@retval = 0) begin
    set @re = json_query(@response, '$.result.data[0].embedding')
end else begin
    select @response as 'Error message from OpenAI API';
end
set @embedding = cast(@re as vector(1536));
return @retval
go
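Outside the database, the same Azure OpenAI embeddings request that the stored procedure issues can be sketched in Python. The endpoint, deployment name, and API version below mirror the ones used in the procedure and are placeholders to replace with your own:

```python
import json

API_VERSION = "2023-03-15-preview"  # same version the stored procedure uses

def build_embedding_request(endpoint, deployment, input_text):
    """Build the URL and JSON payload for an Azure OpenAI embeddings call."""
    url = f"{endpoint}/openai/deployments/{deployment}/embeddings?api-version={API_VERSION}"
    payload = json.dumps({"input": input_text})
    return url, payload

url, payload = build_embedding_request(
    "https://postssearch.openai.azure.com",  # placeholder endpoint
    "postssearchembedding",                  # placeholder deployment name
    "the foundation series by isaac asimov",
)
print(url)
# To actually call the API, POST `payload` to `url` with an
# "api-key" request header, e.g. via urllib.request.
```

This is only a request-building sketch; authentication inside the database is handled by the scoped credential created in the previous script.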

f. 05-find-similar-articles.sql:

/*
    Get the embeddings for the input text by calling the OpenAI API
    and then search the most similar articles (by title).
    Note: postssearchembedding needs to be replaced with the deployment name of your embedding model in Azure OpenAI.
*/
declare @inputText nvarchar(max) = 'the foundation series by isaac asimov';
declare @retval int, @embedding vector(1536);
exec @retval = dbo.get_embedding 'postssearchembedding', @inputText, @embedding output;
select top(10)
    a.id,
    a.title,
    a.url,
    vector_distance('cosine', @embedding, title_vector_ada2) cosine_distance
from
    dbo.wikipedia_articles_embeddings a
order by
    cosine_distance;
go

Step 3: Manually review the results.






Tuesday, December 31, 2024

 This is the summary of the book “Rumbles: A Curious History of the Gut,” written by Elsa Richardson and published by Pegasus Books in 2024. For anyone who has read Susie Flaherty's “Gut Feelings: Microbiome and Health,” the topic of intestinal health might already be familiar. Richardson's book takes us on a journey through the impact of intestinal health over the centuries and its surprising influence on medicine, culture, and politics. Bodily processes such as the activities of the gut and microbiome define the way people live, think, and govern, and our understanding of this phenomenon has led to medical advancements, changed cultural norms, and sparked political movements. Modern technologies inhibit naturally paced, mindful eating, and the drive to be productive has eaten away at our work-life balance. The gut can be a predictor of physical and mental health. Hunger and beliefs about digestion can drive social and political change.

The gut's influence on mental health has been a topic of debate throughout history. Initially, people viewed the gut with suspicion, with the Ancient Greeks using it to predict battle outcomes. In the Middle Ages, the gut was seen as a potential source of demonic possession, leading to mental and spiritual chaos. Medical figures like James Johnson connected patients' mental despondency to toxins in their bowels. However, some individuals, like George Cheyne, believed that diet directly influences emotions and mental states. Societal rules about diet and manners have been used to maintain social order and mental and physical well-being. The aim of regulating the gut was to maintain order, both socially and emotionally. By the late 17th century, etiquette books dictated the proper use of utensils and conversational practices, labeling those who followed these rules as "civilized" and those who did not as "savage."

Scientists have studied digestion for centuries to understand deeper aspects of human existence. The digestive process involves the coordination of various organs, enzymes, acids, and muscles. In the 19th century, French-Canadian voyageur Alexis St. Martin's accident allowed Dr. William Beaumont to observe the digestive process, leading to a new understanding of gastric juice and its role in digestion. Modern technologies have inhibited naturally paced, mindful eating, with the gut-brain connection being a significant factor. Avicenna, the founder of modern medicine, argued that the digestive system was designed to store waste and allow humans to focus on higher intellectual pursuits. However, modern distractions like smartphones have led to overeating and obesity, highlighting the need for mindfulness and for avoiding multitasking during meals. The push to be productive has eroded people's ability to establish a healthy work-life balance, with pre-packaged meals like sandwiches highlighting the pressures of modern capitalism. Lunch became the barometer for how modern working life affects human health. British unions began to demand workplace canteens to provide healthier, more structured meal breaks, and British employers introduced them during World War I to boost productivity, albeit not for worker well-being.

The history of sanitation and human excrement management reveals how societies have struggled to control the consequences of digestion. In the mid-19th century, London's waste problem led to the installation of public toilets and the construction of a sewer network. Proper disposal of waste became linked to civilization, and hygiene and cleanliness bolstered social hierarchies. The gut can be a predictor of physical and mental health, with studies showing that gut bacteria can forecast potential health outcomes. Health reformers like William Arbuthnot Lane and John Harvey Kellogg argued that modern city life damaged people's digestion, causing constipation. Today, concerns about gut health remain, with concepts like "leaky gut syndrome" and the popularity of probiotics and fermented foods reflecting both old fears and new discoveries about digestion's impact on overall health. A better understanding of the human microbiome has led to treatments like fecal microbiota transplants, which can treat conditions like Crohn's disease, multiple sclerosis, and depression.

Hunger and beliefs about digestion can drive social and political change, serving as a means of bodily control. In 18th-century France, the Digesting Duck, a mechanical creature, was hailed as proof of France's modernity and commitment to scientific progress. Hunger can drive national upheaval, as seen in post-revolutionary France. Dieting, a concept popularized by figures like William Banting, reinforces societal norms around weight control. Dieting has also played a role in gender politics, reinforcing stereotypes about women's frailty and men's strength. However, suffragettes in the early 20th century reversed harmful gender notions by using their guts as political tools.

#Codingexercise: CodingExercise-12-31-2024.docx 


Sunday, December 29, 2024

 The preceding articles on security and vulnerability management mentioned that organizations treat the defense-in-depth approach as the preferred path to stronger security. They also engage feedback from security researchers via programs like AI Red Teaming and Bug Bounty to make a positive impact on their customers. As they evaluate the ROI of these efforts, bug bounty and penetration testing have proved of exceptional value. Bug bounty is a relatively small investment, and an organization can measure its ROI in terms of (1) the absence of incidents or breaches, (2) risk assessment, (3) financial savings estimated from avoided risk or breaches, (4) the agility and responsiveness of security teams, (5) discounts on cyber insurance, and (6) estimated savings in reputational or customer-related impacts as a result of a security program. Penetration testing, on the other hand, tends to identify systemic or architectural vulnerabilities, such as cryptographic weaknesses or insecure design issues, which are essential for long-term security but may not be immediately apparent to attackers. It is a bit ironic that organizations discover critical bugs using pentests during the deployment phase. Pentest-as-a-Service (PTaaS) is gaining ground as organizations shift to community-driven, SaaS-based models that are more flexible, grant access to a more diverse pool of vetted security researchers, and offer wider coverage than traditional methods. It is common to discover a dozen vulnerabilities per engagement. Together, bug bounty programs and pentests give organizations comprehensive security coverage and greater ROI than before, although measuring ROI remains a challenge.

In contrast, a new metric called ROM, or Return on Mitigation, is fast gaining acceptance in the industry. It compares the cost of mitigating risks to the potential financial losses from cyber incidents, providing a clear metric of how security efforts protect businesses from costly breaches. This nuanced view offers both qualitative and quantitative benefits, as it articulates factors such as restoring compromised systems, lost revenue due to downtime, legal and regulatory penalties, and damage to public trust and reputation.

ROM = Anticipated Breach Cost / Mitigation Cost
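As a toy illustration of the formula with made-up figures, a breach expected to cost $2M that can be mitigated for $250K yields a ROM of 8:

```python
def return_on_mitigation(anticipated_breach_cost, mitigation_cost):
    # ROM = anticipated breach cost / mitigation cost
    return anticipated_breach_cost / mitigation_cost

# Hypothetical numbers: the anticipated breach cost should include
# restoration, downtime, legal/regulatory penalties, and reputational damage.
print(return_on_mitigation(2_000_000, 250_000))  # 8.0
```

A ROM above 1 means the mitigation spend is expected to prevent more loss than it costs.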

While ROI, like a profit percentage, aims to be a single overall outcome metric, the factors ROM represents are not covered by ROI alone; ROM highlights the importance of risk management and the overall benefits of security measures.

As with all reports, a human-powered security program is needed internally to evaluate the priority and severity of the reports' findings and to use the data to better understand and protect against malicious hackers. The program draws attention from the whole organization, not just the security team. The unique ability of skilled security professionals to mitigate complex security vulnerabilities and deliver context-driven value, coupled with ROM, makes a compelling business case.

Reference: previous articles

#codingexercise: CodingExercise-12-29-2024.docx

 Computer Software: This is one of the most impactful industry sectors. Products in the high-tech industry serve a variety of users, so a vulnerability or defect in one can impact many of them. For example, on July 19th, 2024, CrowdStrike released a faulty software update that caused a widespread outage, resulting in a five-hundred-million-dollar loss for a single airline. The use of open-source libraries and third-party dependencies only exacerbates the risks. Enforcing in-depth privilege management across Windows, Linux, and macOS, each with its own security model, only adds to the challenges. Noting that privilege escalation is slightly lower than in previous years but inconsistent security checks are pervasive in this sector, the security experts recommend ensuring access is limited to necessary resources on a least-privileged basis and granted only to specific roles. This should be paired with intrusion detection or intrusion prevention systems using alerts and actions. All components of the software products must be regularly patched.

Internet and online services: This is similar to that of the computer software sector except that the updates and releases in this sector occur at a faster rate than anywhere else. The push to scale quickly and roll out new features makes it tough to enforce strict access controls consistently. The speed and innovation allow vulnerabilities to slip through. The recommendations from the security experts call for improved authentication mechanisms such as MFA and re-authentication in addition to the least-privileged RBAC authorization methods as earlier.

Crypto and Blockchain: Organizations in this sector are outliers by nature because of their unique offerings and operations. While they build rigorous security practices from the start, they tend to overlook the business logic discrepancies that lay waste to the security mechanisms in place; their rate of business logic errors is the highest across industry sectors. When business models become complex, it becomes tough to eliminate edge cases or unintended uses. For example, smart contracts, which run on blockchains and execute automatically, are immutable once deployed, which also implies that certain errors cannot be undone. Since these errors cause financial loss, they are prime targets for bug bounty hunters. The recommendations from security experts include test-driven development of business logic, integration testing that covers various scenarios and edge cases, and authorization of business logic on a least-privilege basis.

Travel and Hospitality: This industry relies heavily on marketing and often works in partnership with other agencies that require OAuth redirects and referrals. Attackers may exploit open redirect vulnerabilities by tampering with links to lead users to malicious sites. The exploits can work their way from the least secured sites to the highly privileged ones via the referrals and integrations that are the de facto norm in this sector. The recommendations from security experts include providing clear warnings for all redirects, notifying users on exit from and entry to a site, sanitizing user inputs, and allowlisting based on client IPs or other user-side information.

Across these and the industry sectors in the earlier article, organizations spend much of their budget on known vulnerability types, including insecure direct object reference vulnerabilities that allow unauthorized access, modification, or deletion of sensitive information. The security community recommends that organizations monitor report volume, payout levels, and researcher feedback to adjust budgets over time as their security programs evolve.

Reference: previous article.

#codingexercise: CodingExercise-12-29-2024.docx


Saturday, December 28, 2024

 This is a summary of the book “Reaching for the Stars,” written by José M. Hernández and published by Center Street in 2012. It is the inspiring story of a migrant farmworker's son turned NASA astronaut. As he recounts, his hardworking family kept him focused on education and his future. He calls his parents his role models and put to best use their belief that he belonged in school, not on the farm. He earned his engineering degrees, worked at the prestigious Lawrence Livermore National Laboratory and the US Department of Energy, and then joined NASA. Along the way, he had to surmount several rejections and prejudices. His heartwarming book is an illustration of the American dream come true.

José Hernández was inspired by his immigrant parents, undocumented migrant farmworkers in California's San Joaquin Valley. His father, Salvador, had many dreams and goals at a young age, but his schooling never reached third grade. At 15, Salvador traveled to the United States with a friend, and the two worked as undocumented migrant farmworkers. Salvador's youngest child, José M., was born in August 1962.

Hernández's father insisted that everyone in the world is the same, and he focused on his studies, learning math and watching Star Trek. His family's financial struggles led him to pursue his dream of becoming an astronaut, inspired by the first moon landing and the final Apollo mission, Apollo 17. Hernández's parents' resilience and determination inspired him to pursue his dreams and make a difference in the world.

As a poor and brown student from Mexico, he was influenced by his parents' belief in the importance of education for his future. His parents, Salvador and his wife, believed that their children should be in school rather than working in the fields. Hernández's parents made hard choices without knowing if their children would seize the opportunities available. Eventually, he entered middle school and made friends in a rough neighborhood. By 1980, he was ready to graduate and move on to university. He heard about Dr. Franklin Chang Díaz, a poor boy from Costa Rica who studied engineering at MIT and became NASA's first Latino astronaut candidate. With the help of a teacher, Hernández received a scholarship to study engineering at the University of the Pacific. He worked multiple jobs throughout college, believing education was the path to his future. Hernández applied for an internship at the Lawrence Livermore National Laboratory, which offered him a job through a program for minority students funded by the Office of Equal Opportunity.

Hernández graduated from the University of the Pacific in 1985 and began his career at the Lawrence Livermore National Laboratory in Livermore, California. He worked on a nuclear X-ray laser project as part of President Ronald Reagan's Strategic Defense Initiative. After the Soviet Union's collapse in 1991, Hernández applied to become an astronaut but was initially rejected. Meanwhile, he fell in love with the woman he would marry and pursued new opportunities.

NASA turned its attention to Hernández after the Columbia tragedy in 2003, when he joined the team providing technical support for the investigation. NASA began selecting new astronaut candidates again in fall 2003, and Hernández was accepted, beginning a two-year training process. Astronaut training involves acquiring new skills, such as underwater survival, co-piloting T-34C airplanes, and studying the space shuttle's systems in classrooms and simulators.

He achieved his lifelong dream of flying on a space shuttle in 2009. Despite challenges due to weather conditions, the flight launched without incident. Hernández installed computers, helped inspect the thermal protection system on the wings, and docked with the International Space Station (ISS). He hoped his story would inspire others to leave their own footprints and reach for their own stars. After completing systems tests and preparations, Hernández's team returned to Earth, delayed an extra day by bad weather at the Kennedy Space Center. The view from space was spectacular, and on day 15, the shuttle burst through the clouds at 26,000 feet and landed to the astronauts' applause.

#Codingexercise Codingexercise-12-28-2024.docx