Wednesday, January 15, 2025

 

The preceding articles on security and vulnerability management mentioned that organizations treat defense-in-depth as the preferred path to stronger security. They also solicit feedback from security researchers through programs such as AI red teaming and bug bounties to benefit their customers. AI safety and security are primary concerns for emerging GenAI applications. The following section outlines some best practices; they are advisory and not a mandate in any way.

As these GenAI applications become popular as productivity tools, the speed of AI releases and the acceleration of adoption must be matched with improvements to existing SecOps techniques. Security-first processes for detecting and responding to AI risks and threats effectively include visibility, zero critical risks, democratization, and prevention techniques. The risks in question include data poisoning, where training data is altered to make predictions erroneous; model theft, where proprietary AI models are copied or misappropriated; adversarial attacks, where crafted inputs cause the model to hallucinate or misbehave; model inversion attacks, where carefully chosen queries cause data exfiltration; and supply chain vulnerabilities, where weaknesses in the AI supply chain are exploited.

The best practices below leverage these new SecOps techniques and mitigate the risks by:

1.      Achieving full visibility by eliminating shadow AI, which refers to AI usage that is unauthorized or unaccounted for. An AI bill-of-materials helps here, as does configuring the network so that only allow-listed GenAI providers and software are reachable. Employees must also be trained with a security-first mindset (see sketch 1 after this list).

2.      Protecting both the training and inference data by discovering and classifying the data according to its security criticality, encrypting data at rest and in transit, sanitizing or masking sensitive information, configuring data loss prevention policies, and maintaining a full view of the data including its origin and lineage (see sketch 2 after this list).

3.      Securing access to GenAI models by setting up authentication and rate limiting for API usage, restricting access to model weights, and allowing only authorized users to initiate model training and deployment pipelines (see sketch 3 after this list).

4.      Using the LLM's built-in guardrails, such as content filtering to automatically remove or flag inappropriate or harmful content, abuse detection mechanisms to uncover and mitigate general model misuse, and temperature settings to tune the randomness of AI output to the desired level of predictability (see sketch 4 after this list).

5.      Detecting and removing AI risks and attack paths by continuously scanning for and identifying vulnerabilities in AI models, verifying that all systems and components have the most recent patches to close known vulnerabilities, scanning for malicious models, assessing AI misconfigurations, effective permissions, network exposure, exposed secrets, and sensitive data to detect attack paths, regularly auditing access controls to enforce authorization and least-privilege principles, and providing context around AI risks so that attack paths to models can be proactively removed via remediation guidance (see sketch 5 after this list).

6.      Monitoring for anomalies by applying detection and analytics at both the input and the output, detecting suspicious behavior in pipelines, tracking unexpected spikes in latency and other system metrics, and supporting regular security audits and assessments (see sketch 6 after this list).

7.      Setting up incident response by defining processes for isolation, backup, traffic control, and rollback, integrating with SecOps tools, and maintaining an AI-focused incident response plan (see sketch 7 after this list).
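
Sketch 1: a minimal, illustrative way to enforce that only allow-listed GenAI providers are reachable from internal tooling. The allow-list entries and the helper name are hypothetical, and a real deployment would enforce this at the network or egress-proxy layer rather than in application code.

# sketch_allowlist.py - hypothetical egress check for GenAI providers
from urllib.parse import urlparse

# Assumption: this allow-list would normally live in central configuration.
ALLOWED_GENAI_HOSTS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

def is_allowed_genai_endpoint(url: str) -> bool:
    """Return True only when the URL targets an approved GenAI provider."""
    host = urlparse(url).hostname or ""
    return host.lower() in ALLOWED_GENAI_HOSTS

# Example usage: block a shadow-AI call before it leaves the environment.
if not is_allowed_genai_endpoint("https://unknown-llm.example.com/v1/chat"):
    print("Blocked: endpoint is not on the GenAI allow-list")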
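
Sketch 2: a small sketch of sanitizing or masking sensitive information in text before it is used for training or sent for inference. The patterns shown cover only email addresses and US-style SSNs; a real data loss prevention policy would be far broader and is typically enforced by a dedicated DLP service.

# sketch_masking.py - hypothetical masking of sensitive fields in prompts/records
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_sensitive(text: str) -> str:
    """Replace detected sensitive values with fixed placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text

print(mask_sensitive("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].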
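
Sketch 3: a minimal, in-memory illustration of authenticating callers by API key and rate limiting their requests with a fixed window. The key store, window size, and limit are placeholders; production systems would use an API gateway or a distributed store instead.

# sketch_access.py - hypothetical API-key check plus fixed-window rate limiting
import time
from collections import defaultdict

VALID_API_KEYS = {"key-alice", "key-bob"}   # assumption: keys issued out of band
MAX_REQUESTS_PER_MINUTE = 60

_request_log = defaultdict(list)            # api_key -> request timestamps

def authorize(api_key: str) -> bool:
    """Reject unknown keys and keys that exceeded the per-minute budget."""
    if api_key not in VALID_API_KEYS:
        return False
    now = time.time()
    window = [t for t in _request_log[api_key] if now - t < 60]
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    _request_log[api_key] = window
    return True

print(authorize("key-alice"))    # True
print(authorize("key-mallory"))  # False: unknown key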
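
Sketch 4: an illustrative content filter wrapped around a model call, combined with a conservative temperature setting. The call_model function and the deny-list terms are stand-ins; real guardrails would come from the provider's moderation endpoint or a dedicated safety layer.

# sketch_guardrails.py - hypothetical output filtering around a model call
DENY_TERMS = {"credit card number", "build a weapon"}   # illustrative only

def call_model(prompt: str, temperature: float = 0.2) -> str:
    """Placeholder for a real provider SDK call; a low temperature favors
    predictable output over creative variation."""
    return f"(stub answer at temperature={temperature}) " + prompt

def guarded_completion(prompt: str) -> str:
    answer = call_model(prompt, temperature=0.2)
    lowered = answer.lower()
    if any(term in lowered for term in DENY_TERMS):
        return "[response withheld by content filter]"
    return answer

print(guarded_completion("Summarize our security policy."))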
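
Sketch 5: a tiny sketch of scanning configuration text for exposed secrets and an obvious misconfiguration, two of the attack-path ingredients listed above. The patterns and the flag name are hypothetical; dedicated scanners and AI security posture tools do this comprehensively.

# sketch_scan.py - hypothetical checks for exposed secrets and weak settings
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]

def scan_config(text: str) -> list:
    findings = []
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            findings.append(f"possible exposed secret: {pattern.pattern}")
    if "public_read = true" in text:        # illustrative misconfiguration flag
        findings.append("model artifact bucket is publicly readable")
    return findings

sample = "api_key = sk-1234\npublic_read = true\n"
for finding in scan_config(sample):
    print(finding)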
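
Sketch 6: a small illustration of flagging unexpected spikes in latency against a rolling baseline; the same pattern applies to token counts or refusal rates at the input and output. The three-standard-deviation threshold is an assumption, not a recommendation.

# sketch_anomaly.py - hypothetical latency spike detection for an inference API
import statistics
from collections import deque

class LatencyMonitor:
    def __init__(self, window: int = 100, sigma: float = 3.0):
        self.samples = deque(maxlen=window)
        self.sigma = sigma

    def observe(self, latency_ms: float) -> bool:
        """Record a sample and return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:
            mean = statistics.mean(self.samples)
            stdev = statistics.pstdev(self.samples)
            anomalous = latency_ms > mean + self.sigma * max(stdev, 1.0)
        self.samples.append(latency_ms)
        return anomalous

monitor = LatencyMonitor()
for value in [120, 118, 125, 119, 122, 121, 117, 123, 120, 119, 950]:
    if monitor.observe(value):
        print(f"latency spike detected: {value} ms")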
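
Sketch 7: a skeleton of an AI-focused incident response runbook expressed as ordered steps for isolation, backup, traffic control, and rollback. The step functions are placeholders that a real plan would wire into SecOps tooling such as ticketing, paging, and deployment systems.

# sketch_incident.py - hypothetical skeleton of an AI incident response runbook
def isolate_endpoint(model_id: str):
    print(f"[1] isolating inference endpoint for {model_id}")

def snapshot_artifacts(model_id: str):
    print(f"[2] backing up model weights, prompts, and logs for {model_id}")

def divert_traffic(model_id: str):
    print(f"[3] routing traffic away from {model_id} to a fallback")

def roll_back(model_id: str, last_good_version: str):
    print(f"[4] rolling {model_id} back to version {last_good_version}")

def run_playbook(model_id: str, last_good_version: str):
    """Execute the response steps in order; each would open a ticket in practice."""
    isolate_endpoint(model_id)
    snapshot_artifacts(model_id)
    divert_traffic(model_id)
    roll_back(model_id, last_good_version)

run_playbook("fraud-detector-llm", "2024-12-01")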

In this way, existing SecOps practices that leverage well-known STRIDE threat modeling and the Assets, Activity Matrix, and Actions chart can be extended with enhancements and techniques specific to GenAI.


 
