Sunday, January 5, 2025

 

Setting up a Databricks workspace to allow users to run their notebooks, jobs, and Delta Live Tables (DLT) pipelines on serverless compute involves several steps and considerations. Here's an overview of the process and how it differs from all-purpose compute clusters:

Enabling Serverless Compute

To set up serverless compute:

  1. An account admin must enable the feature in the account console:
    • Navigate to Settings > Feature enablement
    • Enable "Serverless compute for workflows, notebooks, and Delta Live Tables"
  2. Ensure your Databricks workspace meets the requirements:
    • Unity Catalog must be enabled
    • The workspace must be in a supported region
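Once the feature is enabled and the requirements are met, running a job on serverless compute is largely a matter of omitting any cluster specification when the job is created. The following is a minimal sketch, assuming the Jobs 2.1 REST API and a personal access token; the host, token, and notebook path are placeholders, and the payload should be checked against the Jobs API reference for your workspace.

import requests

# Placeholders: substitute your workspace URL, token, and notebook path.
DATABRICKS_HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

payload = {
    "name": "nightly-serverless-notebook",
    "tasks": [
        {
            "task_key": "run_notebook",
            "notebook_task": {"notebook_path": "/Users/<you>/my_notebook"},
            # No new_cluster or existing_cluster_id here: in a workspace with
            # serverless enabled, the task is scheduled on serverless compute.
        }
    ],
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])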

Types of Serverless Compute

Databricks offers several types of serverless compute:

  • Serverless compute for notebooks
  • Serverless compute for jobs
  • Serverless SQL warehouses
  • Serverless DLT pipelines
  • Mosaic AI Model Serving
  • Mosaic AI Model Training for forecasting

Benefits of Serverless Compute

Serverless compute offers several advantages:

  • Rapid startup and scaling times
  • Automatic resource allocation and management
  • Pay only for compute used
  • Reduced management overhead
  • Automatic security patching and upgrades

Differences from All-Purpose Compute Clusters

Serverless compute differs from all-purpose clusters in several ways:

  1. Resource Management: Serverless compute is managed by Databricks, while all-purpose clusters require manual configuration and management
  2. Scaling: Serverless includes a smarter, more responsive autoscaler compared to classic compute
  3. Version Updates: Databricks automatically and safely upgrades serverless compute to the latest versions
  4. Network Isolation: Serverless compute runs within a network boundary for the workspace, with additional security layers
  5. Compute Plane: Serverless runs in a compute layer within the Databricks account, while classic compute runs in the customer's cloud account
  6. Access Control: All workspace users can use serverless compute without needing cluster creation permissions

 

Security Considerations

When setting up serverless compute:

  • Be aware that serverless compute for notebooks and jobs has unrestricted internet access by default
  • Consider configuring network security features for more control

  • Understand that serverless workloads are executed within multiple layers of isolation for data protection

Usage and Optimization

To optimize serverless compute usage:

  • Leverage the automatic infrastructure optimization provided by Databricks
  • Monitor usage and performance with the tools built into Databricks, such as the billing system tables (a small query sketch follows this list)
  • Take advantage of the promotional discounts offered at the time of writing (50% for Workflows and DLT, 30% for Notebooks)
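For example, serverless usage can be inspected from the billing system tables. The sketch below is an illustration using the databricks-sql-connector package against a SQL warehouse; the hostname, HTTP path, and token are placeholders, and the column and SKU names should be verified against the current system-tables documentation.

from databricks import sql  # pip install databricks-sql-connector

# Placeholders: substitute your workspace hostname, warehouse HTTP path, and token.
with sql.connect(
    server_hostname="<your-workspace>.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cur:
        # Roll up recent serverless usage by SKU; adjust the filter to your SKUs.
        cur.execute(
            """
            select usage_date, sku_name, sum(usage_quantity) as dbus
            from system.billing.usage
            where sku_name like '%SERVERLESS%'
              and usage_date >= date_sub(current_date(), 30)
            group by usage_date, sku_name
            order by usage_date
            """
        )
        for row in cur.fetchall():
            print(row[0], row[1], row[2])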

By setting up serverless compute, you can provide users with a more streamlined experience for running notebooks, jobs, and DLT pipelines, while reducing management overhead and potentially lowering costs compared to traditional all-purpose compute clusters.

Reference: previous articles

Saturday, January 4, 2025

 

Some of the best practices for IaC derive from software development tenets, such as using version control to prevent manual changes and to enable managing, testing, reviewing, and promoting changes to production. Also, as in software development, a culture of collaborative development in version-controlled repositories enables individual features to be brought into the code with little disruption to others. Some practices, like structuring the project, do not apply rigorously to IaC, where small projects might keep everything in one folder and larger ones span many folders. Most of the other best practices are specific to IaC. Some of these are listed below.

1.      Using remote state: State is as important as version control and ranks among the top best practices specific to IaC because it lets the tool determine the incremental changes from what it last "applied" to the infrastructure. A backend that supports state locking and stores state outside version control helps treat the state as immutable and simplifies backups. It is even helpful to enable versioning when state is stored in public cloud storage accounts, for quick and easy state recovery.

2.      Using existing shared and community modules: Instead of writing our own modules for everything and reinventing the wheel, using existing modules saves time and harnesses the power of the community. Some are provided by the IaC provider itself.

3.      Importing existing infrastructure: When parts of the infrastructure were created manually, this practice brings them under IaC and keeps the code in sync, so that subsequent changes flow through the pipeline and update code and state together instead of requiring further manual changes.

4.      Avoiding hardcoded variables: Hardcoding a value makes the IaC brittle when it must be repurposed or deployed differently. Instead, reading the value dynamically from a data source keeps it in sync as the IaC changes.

5.      Always formatting and validating: Just as the compiler is unavoidable, consistently running the format and validate tools keeps the IaC clean and catches issues that would otherwise be missed, since the code is declarative.

6.      Using consistent naming conventions: This doesn't require being dogmatic, but settling on something comfortable does help. Consistency makes every part of the IaC easy to understand and follow from project to project.

7.      Tagging the resources: The public clouds document the merits of tagging well, and capturing tags in IaC, applied consistently, definitely helps. Access control policies and cost management features can be implemented using tags.

8.      Writing policy as code: Although policies stand apart from resources, they can and should be captured in code so that systems are operational and secure from deployment onward. Rules are also easier to verify when they are expressed in code (a small sketch follows this list).

9.      Implementing a secrets management strategy: This helps prevent disclosure of secrets in files, logs, and pipeline artifacts. If necessary, secrets can be passed in as environment variables, although storing them in a dedicated secrets store is the better option.

10.  Enabling debugging and troubleshooting: Features such as enhanced logging on demand help narrow down problematic code.

11.  Building modules wherever possible: When no community module is available, this practice encourages rapid development, since consumers only need to instantiate the module with a suitable set of parameters.

12.  Using loops and conditionals: Since there can be multiple instances of a resource to manage, built-in operators and meta-arguments such as count and for_each keep the code concise and readable.

13.  Using functions: Along with the previous practice, functions are readily available from the IaC provider and help enforce the Don't Repeat Yourself (DRY) principle. There is a large library of built-in functions to explore and use.

14.  Using dynamic blocks: Much like functions, these provide flexibility in building resources so that adding, say, a new rule does not require changing the configuration.

15.  Using workspaces: This provides a scope for all the definitions so that they can be reused in their entirety, for example across different environments.

16.  Maintaining lifecycle of resources:  Determining what changes to recognize can help with keeping the resource definitions and associated change output more manageable.

17.  Using variable validations:  This does a pretty good job of validating inputs especially when the system must fail fast and display helpful error messages before actual deployments.

18.  Leveraging helper tools: Usually, there are many more tools available for use with IaC and pipelines outside the compiler, formatter and validator. Leveraging these can save time and cost.

19.  Using IDE extensions: These help when the code is authored in an integrated development environment, so mistakes are caught as early as authoring time.

20.  Keeping up to date with the documentation from the IaC provider: Nothing is set in stone, and change is the only constant. The advisories help prevent mistakes going forward and are worth reading.
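To make items 7 and 8 above concrete, here is a minimal, hypothetical sketch of policy as code in Python: it reads the JSON form of a Terraform plan (produced with terraform show -json plan.tfplan) and fails the pipeline when resources being created are missing required tags. The required-tag set, file name, and tag attribute are illustrative assumptions; production setups would more likely use a dedicated policy engine such as Open Policy Agent or Sentinel.

import json
import sys

# Example policy: every resource being created must carry these tags.
# The tag names are illustrative, not a standard.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource_change: dict) -> set:
    """Return the required tags absent from a planned resource's 'tags' map."""
    after = resource_change.get("change", {}).get("after") or {}
    tags = after.get("tags") or {}
    return REQUIRED_TAGS - set(tags)

def main(plan_path: str = "plan.json") -> int:
    # plan.json is assumed to come from: terraform show -json plan.tfplan > plan.json
    with open(plan_path) as f:
        plan = json.load(f)
    violations = []
    for rc in plan.get("resource_changes", []):
        if "create" in rc.get("change", {}).get("actions", []):
            missing = missing_tags(rc)
            if missing:
                violations.append((rc["address"], sorted(missing)))
    for address, missing in violations:
        print(f"{address}: missing required tags {missing}")
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(main())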

#codingexercise: CodingExercise-01-04-2025.docx

Friday, January 3, 2025

 This is a summary of the book titled "Your AI Survival Guide," written by Sal Rashidi and published by Wiley in 2024. Rashidi argues that organizations, even non-technical ones, cannot afford to sit among the Laggards or the Late Majority of AI adopters, because AI is here to stay and falling behind risks being eliminated from the business. That leaves the Early Majority, who adopt technology once it has demonstrated its advantages; Early Adopters, who sit closer to the forefront; and Innovators, who pioneer the use of AI in their respective fields. Each group plays a crucial role in the technology adoption lifecycle, which usually lasts until something better comes along, so there is no wrong pick, but the author's book lays out everything from uncovering your "why" to building your team and making your AI responsible. With applications already ranging from agriculture to HR, the time to be proactive is now. His playbook involves assessing which AI strategy fits you and your team, selecting relevant use cases, planning how to launch your AI project, choosing the right tools and partners to go live, ensuring the team is gritty, ambitious, and resilient, and incorporating human oversight into AI decision-making.

To successfully implement AI within a company, it is essential to balance established protocols with the need to adapt to changing times. To achieve this, consider the reasons for deploying AI, develop an AI strategy, and start small and scale quickly. Choose a qualified AI consultant or development firm that fits your budget and goals. Set a realistic pace for your project. Conduct an AI readiness assessment to determine the best AI strategy for your company. Score yourself on various categories, such as market strategy, business understanding, workforce acumen, company culture, role of technology, and data availability.

Select relevant use cases that align with your chosen AI strategy and measure the criticality and complexity of each use case. For criticality, measure how the use case will affect sales, growth, operations, culture, public perception, and deployment challenges. For complexity, measure how the use case will affect resources for other projects, change management, and ownership. Plan how to launch your AI project well to ensure success and adaptability.

To launch an AI project successfully, outline your vision, business value, and key performance indicators (KPIs). Prioritize project management by defining roles, deliverables, and tracking progress. Align goals, methods, and expectations, and establish performance benchmarks. Outline a plan for post-launch support, including ongoing maintenance, enterprise integration, and security measures. Establish a risk mitigation process for handling unintended consequences. Choose the right AI tool according to your needs and expertise; options range from low-cost tools to high-cost tools that demand deep technical expertise. Research options, assess risks and rewards, and collaborate with experts to create standard operating procedures. Ensure your team is gritty, ambitious, and resilient by familiarizing yourself with AI archetypes. To integrate AI successfully, focus on change management, create a manifesto, align company leadership, plan transitions, communicate changes regularly, celebrate small wins, emphasize iteration over perfection, and monitor progress through monthly retrospectives.

AI projects require human oversight to ensure ethical, transparent, and trustworthy systems. Principles for responsible AI include transparency, accountability, fairness, privacy, inclusiveness, and diversity. AI is expected to transform various sectors, generating $9.5 to $15.4 trillion annually. Legal professionals can use AI to review contracts, HR benefits from AI-powered chatbots, and sales teams can leverage AI for automated follow-up emails and personalized pitches. AI will drive trends and raise new challenges for businesses, such as automating complex tasks, scaling personalized marketing, and disrupting management consulting. However, AI opportunities come with risks such as cyber threats, privacy and bias concerns, and a growing skills gap. To seize AI opportunities while mitigating risks, businesses must learn how AI applies to their industry, assess their capabilities, identify high-potential use cases, build a capable team, create a change management plan, and keep a human in the loop to catch errors and address ethical issues.


Thursday, January 2, 2025

 Serverless SQL in Azure offers a flexible and cost-effective way to manage SQL databases and data processing without the need to manage the underlying infrastructure. Here are some key aspects:

Azure SQL Database Serverless

Autoscaling: Automatically scales compute based on workload demand and bills for the amount of compute used per second (a rough cost sketch follows these bullets).

Auto-Pause and Resume: Pauses databases during inactive periods when only storage is billed and resumes when activity returns.

Configurable Parameters: You can configure the minimum and maximum vCores, memory, and IO limits.

Cost-Effective: Ideal for single databases with intermittent, unpredictable usage patterns.
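As a purely illustrative sketch of the per-second billing model (the rate and usage figures below are made-up placeholders, and real bills are based on the greater of CPU and memory used, never below the configured minimum vCores, plus storage):

# Back-of-the-envelope estimate of Azure SQL serverless compute cost.
# All numbers are assumptions for illustration; consult the Azure pricing
# page for the actual vCore-second rate in your region.
VCORE_SECOND_RATE = 0.000145   # assumed $/vCore-second (placeholder)
ACTIVE_HOURS_PER_DAY = 6       # hours per day the database is not auto-paused
AVG_VCORES_WHILE_ACTIVE = 2.0  # average billed vCores while active

active_seconds_per_month = ACTIVE_HOURS_PER_DAY * 3600 * 30
compute_cost = active_seconds_per_month * AVG_VCORES_WHILE_ACTIVE * VCORE_SECOND_RATE
print(f"Estimated monthly compute cost: ${compute_cost:,.2f}")
# While the database is auto-paused, only storage is billed.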

Azure Synapse Analytics Serverless SQL Pool

Query Service: Provides a query service over data in your data lake, allowing you to query data in place without moving it.

T-SQL Support: Uses familiar T-SQL syntax for querying data.

High Reliability: Built for large-scale data processing with built-in query execution fault-tolerance.

Pay-Per-Use: You are only charged for the data processed by your queries.

Benefits

Scalability: Easily scales to accommodate varying workloads.

Cost Efficiency: Only pay for what you use, making it cost-effective for unpredictable workloads.

Ease of Use: No infrastructure setup or maintenance required.

Neon launched in 2021 as a serverless relational database on a cloud platform. It has recently become available as a cloud-native service on Azure, just as it has been on AWS. This deeper integration of Neon into Azure facilitates rapid app development, because PostgreSQL is a popular developer choice. Serverless reduces operational overhead and frees developers to focus on the data model, access patterns, and CI/CD integration to suit their needs. In fact, Microsoft's investments in GitHub, VSCode, TypeScript, OpenAI, and Copilot align well with the developers' agenda.

Even AI's demand for a vector store can be met within a relational database, as both Azure SQL and Neon have demonstrated. Compute seamlessly scales up for expensive index builds and back down for normal or RAG queries. Since pausing during inactivity and resuming under load are automated in serverless, the cost savings are significant. In addition, both databases focus on data privacy.

The following is a way to test AI vector cosine similarity in a relational database.

1. Step 1: Upload a dataset to a storage account where it can be accessed easily. It must be a CSV file with headers like:

id,url,title,text,title_vector,content_vector,vector_id


2. Step 2: Use Azure Portal Query Editor or any client to run the following SQL:

a. 00-setup-blob-access.sql

/*
 Cleanup if needed
*/
if not exists(select * from sys.symmetric_keys where [name] = '##MS_DatabaseMasterKey##')
begin
 create master key encryption by password = 'Pa$$w0rd!'
end
go

if exists(select * from sys.[external_data_sources] where name = 'openai_playground')
begin
 drop external data source [openai_playground];
end
go

if exists(select * from sys.[database_scoped_credentials] where name = 'openai_playground')
begin
 drop database scoped credential [openai_playground];
end
go

/*
 Create database scoped credential and external data source.
 File is assumed to be in a path like:
 https://saravinoteblogs.blob.core.windows.net/playground/wikipedia/vector_database_wikipedia_articles_embedded.csv
 Please note that it is recommended to avoid using SAS tokens: the best practice is to use Managed Identity as described here:
 https://learn.microsoft.com/en-us/sql/relational-databases/import-export/import-bulk-data-by-using-bulk-insert-or-openrowset-bulk-sql-server?view=sql-server-ver16#bulk-importing-from-azure-blob-storage
*/
create database scoped credential [openai_playground]
with identity = 'SHARED ACCESS SIGNATURE',
secret = 'sp=rwdme&st=2024-11-22T03:37:08Z&se=2024-11-29T11:37:08Z&spr=https&sv=2022-11-02&sr=b&sig=EWag2qRCAY7kRsF7LtBRRRExdWgR5h4XWrU%2'; -- make sure not to include the ? at the beginning
go

create external data source [openai_playground]
with
(
 type = blob_storage,
 location = 'https://saravinoteblogs.blob.core.windows.net/playground',
 credential = [openai_playground]
);
go

b. 01-import-wikipedia.sql:

/*
Create table
*/
drop table if exists [dbo].[wikipedia_articles_embeddings];
create table [dbo].[wikipedia_articles_embeddings]
(
[id] [int] not null,
[url] [varchar](1000) not null,
[title] [varchar](1000) not null,
[text] [varchar](max) not null,
[title_vector] [varchar](max) not null,
[content_vector] [varchar](max) not null,
[vector_id] [int] not null
)
go

/*
Import data
*/
bulk insert dbo.[wikipedia_articles_embeddings]
from 'wikipedia/vector_database_wikipedia_articles_embedded.csv'
with (
    data_source = 'openai_playground',
    format = 'csv',
    firstrow = 2,
    codepage = '65001',
    fieldterminator = ',',
    rowterminator = '0x0a',
    fieldquote = '"',
    batchsize = 1000,
    tablock
)
go

/*
Add primary key
*/
alter table [dbo].[wikipedia_articles_embeddings]
add constraint pk__wikipedia_articles_embeddings primary key clustered (id)
go

/*
Add index on title
*/
create index [ix_title] on [dbo].[wikipedia_articles_embeddings](title)
go

/*
Verify data
*/
select top (10) * from [dbo].[wikipedia_articles_embeddings]
go

select * from [dbo].[wikipedia_articles_embeddings] where title = 'Alan Turing'
go

c. 02-use-native-vectors.sql:

/*
    Add columns to store the native vectors
*/
alter table wikipedia_articles_embeddings
add title_vector_ada2 vector(1536);

alter table wikipedia_articles_embeddings
add content_vector_ada2 vector(1536);
go

/*
    Update the native vectors
*/
update
    wikipedia_articles_embeddings
set
    title_vector_ada2 = cast(title_vector as vector(1536)),
    content_vector_ada2 = cast(content_vector as vector(1536))
go

/*
    Remove old columns
*/
alter table wikipedia_articles_embeddings
drop column title_vector;
go

alter table wikipedia_articles_embeddings
drop column content_vector;
go

/*
Verify data
*/
select top (10) * from [dbo].[wikipedia_articles_embeddings]
go

select * from [dbo].[wikipedia_articles_embeddings] where title = 'Alan Turing'
go

d. 03-store-openai-credentials.sql

/*
    Create database credentials to store API key
*/
if exists(select * from sys.[database_scoped_credentials] where name = 'https://postssearch.openai.azure.com')
begin
    drop database scoped credential [https://postssearch.openai.azure.com];
end

create database scoped credential [https://postssearch.openai.azure.com]
with identity = 'HTTPEndpointHeaders', secret = '{"api-key": "7cGuGvTm7FQEJtzFIrZBZpOCJxXbAsGOMDd8uG0RIBivUXIfOUJRJQQJ99AKACYeBjFXJ3w3AAABACOGAL8U"}';
go

e. 04-create-get-embeddings-procedure.sql:

/*
    Get the embeddings for the input text by calling the OpenAI API
*/
create or alter procedure dbo.get_embedding
@deployedModelName nvarchar(1000),
@inputText nvarchar(max),
@embedding vector(1536) output
as
declare @retval int, @response nvarchar(max);
declare @payload nvarchar(max) = json_object('input': @inputText);
declare @url nvarchar(1000) = 'https://postssearch.openai.azure.com/openai/deployments/' + @deployedModelName + '/embeddings?api-version=2023-03-15-preview'
exec @retval = sp_invoke_external_rest_endpoint
    @url = @url,
    @method = 'POST',
    @credential = [https://postssearch.openai.azure.com],
    @payload = @payload,
    @response = @response output;
declare @re nvarchar(max) = null;
if (@retval = 0) begin
    set @re = json_query(@response, '$.result.data[0].embedding')
end else begin
    select @response as 'Error message from OpenAI API';
end
set @embedding = cast(@re as vector(1536));
return @retval
go

f. 05-find-similar-articles.sql:

/*
    Get the embeddings for the input text by calling the OpenAI API
    and then search the most similar articles (by title)
    Note: postssearchembedding needs to be replaced with the deployment name of your embedding model in Azure OpenAI
*/
declare @inputText nvarchar(max) = 'the foundation series by isaac asimov';
declare @retval int, @embedding vector(1536);
exec @retval = dbo.get_embedding 'postssearchembedding', @inputText, @embedding output;

select top(10)
    a.id,
    a.title,
    a.url,
    vector_distance('cosine', @embedding, title_vector_ada2) cosine_distance
from
    dbo.wikipedia_articles_embeddings a
order by
    cosine_distance;
go

3. Finally, manually review the results.
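Optionally, the same similarity check can be driven from application code rather than the Query Editor. The following is a minimal sketch using pyodbc; the connection string, server, and embedding deployment name are placeholders, and it assumes the table and the dbo.get_embedding procedure created in the scripts above.

import pyodbc

# Placeholders: substitute your server, database, and authentication details.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;"
    "Authentication=ActiveDirectoryInteractive;"
)

# One T-SQL batch: embed the input text, then rank articles by cosine distance.
# SET NOCOUNT ON suppresses intermediate row counts so the final SELECT is the
# only result set returned to the client.
BATCH = """
set nocount on;
declare @inputText nvarchar(max) = ?;
declare @retval int, @embedding vector(1536);
exec @retval = dbo.get_embedding '<your-embedding-deployment>', @inputText, @embedding output;
select top(10)
    a.id, a.title, a.url,
    vector_distance('cosine', @embedding, title_vector_ada2) as cosine_distance
from dbo.wikipedia_articles_embeddings a
order by cosine_distance;
"""

with pyodbc.connect(CONN_STR) as conn:
    rows = conn.cursor().execute(BATCH, "the foundation series by isaac asimov").fetchall()
    for r in rows:
        # Columns: id, title, url, cosine_distance
        print(r[0], r[1], round(r[3], 4), r[2])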






Tuesday, December 31, 2024

 This is the summary of the book titled "Rumbles: A Curious History of the Gut," written by Elsa Richardson and published by Pegasus Books in 2024. For anyone who has read Susie Flaherty's "Gut Feelings: Microbiome and Health," the topic of intestinal health might already be familiar. Elsa's book takes us on a journey through the impact of intestinal health over the centuries, with its surprising influence on medicine, culture, and politics. Bodily processes such as the activities of the gut and microbiome define the way people live, think, and govern, and our understanding of this phenomenon has led to medical advancements, changes in cultural norms, and political movements. Modern technologies inhibit naturally paced, mindful eating, and the drive to be productive has eaten away at our work-life balance. The gut can be a predictor of physical and mental health. Hunger and beliefs about digestion can drive social and political change.

The gut's influence on mental health has been a topic of debate throughout history. Initially, people viewed the gut with suspicion, with the Ancient Greeks using it to predict battle outcomes. In the Middle Ages, the gut was seen as a potential source of demonic possession, leading to mental and spiritual chaos. Medical figures like James Johnson connected patients' mental despondency to toxins in their bowels. However, some individuals, like George Cheyne, believed that diet directly influences emotions and mental states. Societal rules about diet and manners have been used to maintain social order and mental and physical well-being. The aim of regulating the gut was to maintain order, both socially and emotionally. By the late 17th century, etiquette books dictated the proper use of utensils and conversational practices, labeling those who followed these rules as "civilized" and those who did not as "savage."

Scientists have studied digestion for centuries to understand deeper aspects of human existence. The digestive process involves the coordination of various organs, enzymes, acids, and muscles. In the 19th century, French-Canadian voyageur Alexis St. Martin's accident allowed Dr. William Beaumont to observe the digestive process, leading to a new understanding of gastric juice and its role in digestion. Modern technologies have inhibited naturally paced, mindful eating, with the gut-brain connection being a significant factor. Avicenna, the founder of modern medicine, argued that the digestive system was designed to store waste and allow humans to focus on higher intellectual pursuits. However, modern distractions like smartphones have led to overeating and obesity, highlighting the need for mindfulness and for avoiding multitasking during meals. The push to be productive has eroded people's ability to establish a healthy work-life balance, with pre-packaged meals like sandwiches highlighting the pressures of modern capitalism. Lunch became the barometer for how modern working life affects human health. British unions started to demand the introduction of workplace canteens to provide healthier, more structured meal breaks. British employers introduced canteens during World War I to boost productivity, albeit not out of concern for worker well-being.

The history of sanitation and human excrement management reveals how societies have struggled to control the consequences of digestion. In the mid-19th century, London's waste problem led to the installation of public toilets and the construction of a sewer network. Proper disposal of waste became linked to civilization, and hygiene and cleanliness bolstered social hierarchies. The gut can be a predictor of physical and mental health, with studies showing that gut bacteria can forecast potential health outcomes. Health reformers like William Arbuthnot Lane and John Harvey Kellogg argued that modern city life damaged people's digestion, causing constipation. Today, concerns about gut health remain, with concepts like "leaky gut syndrome" and the popularity of probiotics and fermented foods reflecting both old fears and new discoveries about digestion's impact on overall health. A better understanding of the human microbiome has led to treatments like fecal microbiota transplants, which can treat conditions like Crohn's disease, multiple sclerosis, and depression.

Hunger and beliefs about digestion can drive social and political change, serving as a means of bodily control. In 18th-century France, the Digesting Duck, a mechanical creature, was hailed as proof of France's modernity and commitment to scientific progress. Hunger can also drive national upheaval, as seen in post-revolutionary France. Dieting, a concept popularized by figures like William Banting, reinforces societal norms around weight control. Dieting has also played a role in gender politics, reinforcing stereotypes about women's frailty and men's strength. However, suffragettes in the early 20th century reversed harmful gender notions by using their guts as political tools.

#Codingexercise: CodingExercise-12-31-2024.docx 


Sunday, December 29, 2024

 The preceding articles on security and vulnerability management mentioned that organizations treat the defense-in-depth approach as the preferred path to stronger security. They also engage with feedback from security researchers via programs like AI red teaming and bug bounty programs to make a positive impact for their customers. As they evaluate the ROI of these efforts, bug bounty and penetration testing have proved of exceptional value. A bug bounty program is a relatively small investment, and an organization can measure its ROI in terms of: 1. the absence of incidents or breaches, 2. risk assessment, 3. financial savings estimated from avoided risk or avoided breaches, 4. the agility and speed of the security team's responsiveness, 5. discounts on cyber insurance, and 6. estimated savings from reputational or customer-related impacts avoided as a result of the security program. Penetration testing, on the other hand, tends to identify systemic or architectural vulnerabilities, such as cryptographic weaknesses or secure-design issues, which are essential for long-term security but may not be immediately apparent to attackers. It is a bit ironic that organizations discover critical bugs using pentests during the deployment phase. Pentest-as-a-Service, aka PTaaS, is gaining ground as organizations shift to community-driven, SaaS-based models that are more flexible, grant access to a more diverse pool of vetted security researchers, and provide wider coverage than traditional methods. It is common to discover a dozen vulnerabilities per engagement. Together, bug bounty programs and pentests give organizations comprehensive security coverage and greater ROI than before, although measuring ROI remains a challenge.

In contrast, there is a newer metric in the industry called ROM, or Return on Mitigation, that is fast gaining acceptance. It compares the cost of mitigating risks to the potential financial losses from cyber incidents, providing a clear measure of how security efforts protect businesses from costly breaches. This nuanced view offers both qualitative and quantitative benefits, as it articulates factors such as the cost of restoring compromised systems, lost revenue due to downtime, legal and regulatory penalties, and damage to public trust and reputation.

ROM = (Anticipated Breach Cost) / (Mitigation Cost). For example, if a likely breach is estimated to cost $2 million and the mitigation costs $250,000, the ROM is 8.

While ROI resembles a profit-percentage calculation in its intent to provide an overall outcome metric, the factors that ROM captures are not covered by ROI alone; ROM highlights the importance of risk management and the overall benefits of security measures.

As with all reports, a human powered security program is needed internally to evaluate the priority and the severity of the reports’ findings and use the data to better understand and protect against malicious hackers. The program draws attention from the whole of the organization and not just the security team. The unique ability of the skilled security professionals to mitigate complex security vulnerabilities and deliver context-driven value, coupled with ROM, makes a compelling business case.

Reference: previous articles

#codingexercise: CodingExercise-12-29-2024.docx