AI for security
The ways in which AI is used for Security Research and
vulnerability management depends a lot on human expertise as much as the risk
management in AI deployments and keeping it trustworthy for everyone. As industry
struggles to catch up with AI improvements, the AI-based tools are often
pitched one against another which is showing significant number of defects and
bringing into question the quality of the tools. LLM-as-a-judge
is one such example to evaluate AI models for security. Among the risks faced
by organizations, the most notable ones are GenAI, supply chain/third parties,
phishing, exploited vulnerabilities, insider threat, and nation-state actors in
that order. While there is growing confidence in managing risks in AI
deployments, the listing of GenAI as a top concern is reflected in the
widespread use of GenAI. There are no standards but there is a growing
perception that AI legislations will help enhance safety and security. Most
organizations are already reaping benefits of GenAI in their operations, so the
ability to defend against AI threats is catching up. In the high-tech sector,
there is a deep understanding of the challenges in securing this emerging
technology while in other industry sectors, there is more concern for
reputational risks of AI.
Safety and security flaws in the AI products are being
addressed with a practice called AI Red Teaming – where organizations invite
security researchers for an external unbiased review. This is highly effective,
and the benefits of cross-company expertise are valuable. The AI assets
inventory must be actively managed to make the most use of this best practice.
Organizations that are engaged in AI Red Teaming have discovered a common
pattern in the common vulnerabilities in AI applications with a simple majority
of those falling in AI safety defects. The remainder comprises business logic
errors, prompt injection, training data poisoning and sensitive information
disclosure. Unlike traditional security vulnerabilities, it is rather difficult
to gate the reporting of defects and presents a different risk profile. This
might explain why the AI safety defect category is dominating other categories.
Companies without AI and automation face longer response
times and higher breach costs. Copilots have gained popularity among security
engineers across the board for the wealth of information at their fingertips.
Agents, monitors, alerts, and dashboards are being sought after by those savvy
to leverage automation and continuous monitoring. Budgets for AI projects
including security are blooming as specific investments become deeper while
others are being pulled back after less successful ventures or initial
experiments.
AI powered vulnerability scanners, for example, quickly
identify potential weak points in a system and AI is also helpful for
reporting. There is a lot of time saved by AI from streamlining processes and
report writing. All the details are still included, the tone is appropriate,
and the review is often easy. This allows security experts to focus on more
complex and nuanced aspects of security testing.
#codingexercise: CodingExercise-12-25-2024.docx