This is a summary of the book "Beyond the Algorithm: AI, Security, Privacy, and Ethics," written by Omar Santos and Petar Radanliev and published by Addison-Wesley in 2024. The authors are AI security experts who explain the legal and security risks associated with AI- and ML-generated algorithms and statistical models across a variety of industries. Their treatise is both accessible, in that it is non-technical, and practical, in that it discusses how to tackle the challenges it raises. AI takes many forms, analyzing data, making predictions, and automating tasks, and is used in many contexts across different industries. Generative AI and LLMs analyze patterns in existing data and generate new content based on those patterns. When these systems are targeted, hackers exploit security vulnerabilities and stage their attacks in phases, often leveraging weaknesses such as improper access control. Beyond system vulnerabilities, privacy and ethics are core societal issues, and working with AI demands an understanding of legal issues and regulatory compliance.
Artificial intelligence (AI) has been a subject of speculation since ancient times, with early advances in the field, notably Alan Turing's work, occurring around World War II. The field was formally founded at the Dartmouth Conference in 1956, which brought together pioneers such as John McCarthy, Marvin Minsky, and Herbert A. Simon. AI uses machine learning (ML) to analyze data, make predictions, and automate tasks. Current AI systems are "narrow" AI, which performs specific tasks in a human-like way; some researchers are working toward "general" AI, which could learn, comprehend, and apply knowledge across domains. AI comes in various forms, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). These innovations have the potential to change businesses and human-computer interaction. For example, natural language generation (NLG) can turn raw, unstructured information into coherent sentences, while AI- and ML-powered speech recognition can improve how humans communicate with computers and other technologies.
Generative AI and large language models (LLMs) analyze patterns in data to create new content. Generative models such as the GPT-4 behind ChatGPT can produce content resembling human creation, but training them is challenging. LLMs are based on "transformer" models: neural networks that weigh the significance of every word in a sequence using multiple attention "heads." OpenAI's GPT-4 is among the most widely used LLMs, but it raises ethical concerns because it can reproduce biases present in its training data. Hackers can exploit security vulnerabilities in AI and ML, with potentially serious consequences. Adversarial attacks subtly manipulate input data to cause AI models to make mistakes, which can lead to accidents, financial losses, or deceived national-security surveillance systems. Data poisoning attacks alter an AI system's training data, allowing bad actors to, for example, infect social media recommendation models and spread misinformation.
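To make the adversarial-attack idea concrete, here is a minimal sketch of one classic technique, the fast gradient sign method (FGSM). The book describes the attack class only in general terms, so the PyTorch setting, the `epsilon` budget, and the function below are illustrative assumptions, not the authors' code.

```python
# Minimal FGSM sketch (illustrative assumptions, not from the book).
# The attacker nudges each input feature in the direction that most
# increases the model's loss, producing a near-identical input that
# the model may misclassify.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step each feature by +/- epsilon along the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in their valid range
```

Even a small epsilon (here 0.03 on inputs scaled to [0, 1]) is often invisible to a human reviewer yet enough to flip a classifier's prediction, which is precisely the risk the authors highlight.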
Malicious attacks on AI and ML systems unfold in phases, starting with reconnaissance and resource acquisition. Attackers can gain initial access through infiltration, poisoning data sources, or manipulating publicly shared models. They then use various methods to evade detection, particularly by fooling the system's breach-detection defenses. In the final stages, they seek valuable information and attempt to exfiltrate data from the ML system. Identifying system vulnerabilities and securing AI infrastructure is therefore crucial: network, physical, and software vulnerabilities can all be exploited. In an AI-dominated world, privacy and ethics are core societal issues. AI has transformed business, industry, and healthcare, and it is essential to ensure fairness and prevent discrimination. Developers must train AI on diverse datasets and monitor algorithm performance to avoid bias and protect data privacy. Companies should obtain users' consent before collecting personal data and give users access to their data to prevent misuse and maintain security.
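Since one of the initial-access paths above is a manipulated public model, a common mitigation is to verify a downloaded checkpoint against a digest published by its provider before loading it. The following is a minimal sketch under assumed names: the `model.ckpt` file and the `EXPECTED_SHA256` value are hypothetical placeholders.

```python
# Sketch: verify a downloaded model checkpoint against a published
# SHA-256 digest before loading it. File name and digest are hypothetical.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0123abcd..."  # placeholder: pin the provider's real digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks to avoid loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

checkpoint = Path("model.ckpt")
if sha256_of(checkpoint) != EXPECTED_SHA256:
    raise RuntimeError("Checkpoint hash mismatch: refusing to load model")
```

A hash check does not detect poisoning baked into a legitimately published model, but it does close the simpler door of a checkpoint swapped in transit or on a mirror.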
AI developers must protect privacy and security by obtaining user consent for data collection and keeping data-processing practices transparent. Companies should prioritize data security and safe storage. As AI technologies advance, ethical issues must be weighed alongside them. Legal and regulatory frameworks for AI are still developing but growing in significance. The European Union reached agreement on its Artificial Intelligence Act (AIA) in 2023, while the UK and the US are working on AI policies that prioritize data security, transparency, equity, and governance. The US is expected to adopt a regulatory framework for AI within the next few years.
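On the safe-storage point, here is a minimal sketch of encrypting a user record at rest with the `cryptography` package's Fernet recipe. The inline key generation and the sample record are demo-only assumptions; a real deployment would fetch the key from a secrets manager rather than create it in code.

```python
# Sketch: symmetric encryption of user data at rest using Fernet
# (from the `cryptography` package). Demo-only key handling.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # assumption: production keys come from a vault
cipher = Fernet(key)

record = b'{"user_id": 42, "email": "user@example.com"}'  # sample data
token = cipher.encrypt(record)        # store this ciphertext, not the record
assert cipher.decrypt(token) == record
```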
Notice how many parallels we can draw between this review and the earlier discussions of AI evaluations in various industries [1] and vulnerability disclosure programs [2]. The discussion of AI safety and security [3] also reflects a consensus among professionals.
References:
1. https://1drv.ms/w/c/d609fb70e39b65c8/ERsKj7_TWl9Kgay0P5WWaYMBhgb1-Ko5aaxab3PN2P229g?e=RTEiZr
2. https://1drv.ms/w/c/d609fb70e39b65c8/Edogg2nr_01IgzG768O3328BE9DQ8YP_vs9Bd7afNrz9Jw?e=ujoerM
3. https://1drv.ms/w/c/d609fb70e39b65c8/ETJ4CZIx_ONMgv1bk-qtPOsB3yGrFH1xAvqqUmGOEobWKQ?e=MoEFhX