Sunday, December 22, 2024

 AI Safety and security are distinct concerns. AI safety focuses on preventing AI systems from generating harmful content, from instructions for creating weapons to offensive language and inappropriate images and demonstrating bias, misrepresentations and hallucinations. It aims to ensure responsible use of AI and adherence to ethical standards. AI security involves testing AI systems with the goal of preventing bad actors from abusing AI to compromise the confidentiality, integrity, or availability of the systems the AI is hosted in.

These are significant concerns that have driven organizations to seek third-party testing. More than half of the AI vulnerability reports submitted pertain to AI safety. These issues often have a lower barrier to entry for valid reporting and present a different risk profile compared to traditional security vulnerabilities. Bounties for AI Safety are lower than those for AI security. The volume of reports is higher and one of top five reported vulnerabilities with the business logic errors, prompt injection, sensitive information disclosure, and training data poisoning constituting the remainder. Prompt injection is a new threat and unlike the perceived dismissals of it as a threat to get the model to say something it shouldn’t, it can be used by attackers to disclose a victim’s entire chat history, files and objects which leads to far greater security risks.

The recommendations for AI safety and security align with the well-recognized charter of a defense-in-depth strategy with fortified security posture at every layer and continuous vulnerability testing throughout software development lifecycle. These include establishing continuous testing, evaluation, verification, and validation, throughout the AI model lifecycle, providing regular executive metrics and updates on AI model functionality, security, reliability and robustness, and regularly scanning, and updating the underlying infrastructure and software for vulnerabilities. There are also some geographical compliance requirements from the mandates issued by country, state or government-specific agencies. In these cases, the regulations might exist around specific AI features, such as facial recognition, and employment-related systems. Organizations do well when they establish an AI governance framework outlining roles, responsibilities, and ethical considerations, including incident response planning and risk management. All employees and users of AI could be trained on ethics, responsibilities, legal issues, AI security risks, best practices including warranty, license, and copyright. Ultimately, this strives to establish a culture of open and transparent communication on the organization’s use of predictive or generative AI.


No comments:

Post a Comment