Wednesday, October 16, 2024

This is a summary of the book “Cloud Ethics” written by Louise Amoore and published by Duke University Press in 2020. Most people in the cloud computing industry recognize that algorithms are now mainstream in decision making and in the governance of human activity, and those who build algorithms and models know that bias creeps in from the data. The author challenges the notion that these biases are a fixable glitch. She goes on to explore how the self-generating value judgements that develop from ongoing algorithm-human interactions form a locus point for ethicopolitics. A geographically located understanding of the cloud does not solve the problem of oversight. Algorithmic reasoning works to bring possible links to light rather than to confirm the existence of a link. Machine learning algorithms are inextricably connected to human practices. Learning algorithms become self-authoring entities, prone to hallucinations, as they interact with the world. Seeming errors in output are not deviations but are intrinsic to the algorithms’ adaptive, generative abilities. Before an algorithm makes a decision, doubt and uncertainty flourish in a liminal space in which ethical intervention is possible. Cloud ethics allows individuals to intervene in and take responsibility for an algorithm’s future.

Cloud computing has the potential to analyze complex digital data, but a geographically located understanding of it does not solve the problem of oversight. Algorithmic reasoning, which works to bring possible links to light, allows for a more comprehensive understanding of the cloud. By analyzing the threads of power in the present world, algorithms extract patterns and features from data, determining targets of opportunity and of commercial and governmental interest. These algorithms delineate between the probable and the improbable, offering clear actions in response to overwhelming data sets. Algorithmic reasoning is causal; it allows for error and for the creation of new information. For example, algorithms can scrape social media for potential threats, making future events appear more accessible to law enforcement. All conclusions are malleable and actionable, which makes oversight of privacy and of the protection of users' data all the more pressing.

Machine learning algorithms are closely linked to human practices: they learn from and with humans and other machines. The ethical issues surrounding machine learning arise from the way it shifts the concept of humanness, as it allows machines to perform feats beyond human capabilities. Learning algorithms become self-authoring entities, and while some call for the elimination of biases in algorithms, algorithms require biases to determine what is meaningful. Humans provide the initial training data sets and adjust the weighting of certain data inputs, while learning machines adjust their parameters and modify their own code in response to data inputs. The output of learning machines is creative and can lead to new inferences, associations, biases, and outcomes. The ethics of the cloud require acknowledging that the output results from infinitely changeable inputs and parameters, and that alternate futures remain possible, regardless of the output.
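The point that humans supply the training data while the machine adjusts its own parameters can be made concrete with a toy example. The following is a minimal, hypothetical sketch, not drawn from the book: a one-feature perceptron whose weight and bias shift with every labeled input it sees, so the model that exists after training is a product of the particular data it happened to encounter.

```python
# Hypothetical illustration: the humans choose the samples and the
# learning rate; the machine rewrites its own parameters as it goes.
def train(samples, lr=0.1):
    w, b = 0.0, 0.0        # the machine's adjustable parameters
    history = []           # trace of how the parameters drifted
    for x, label in samples:
        pred = 1 if w * x + b > 0 else 0
        # each input nudges the parameters; the "same" algorithm at
        # step t and step t+1 is not identical
        w += lr * (label - pred) * x
        b += lr * (label - pred)
        history.append((w, b))
    return w, b, history

# human-provided training data (invented values for illustration)
samples = [(2.0, 1), (-1.5, 0), (3.0, 1), (-0.5, 0)]
w, b, history = train(samples)
# `history` is the record of the algorithm-data interaction: a
# different ordering or different samples would leave a different model
```

A different training set, or even the same samples in a different order, would yield different final parameters, which is the sense in which the output results from "infinitely changeable inputs and parameters."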

Algorithms' seemingly erratic outputs are not deviations but are intrinsic to their adaptive, generative abilities. Algorithms constantly shift their limits over time in response to new inputs, making the incalculable future seem knowable. When an algorithmic decision causes harm, the harm results from a system premised upon making calculated decisions in "conditions of nonknowledge." Doubt and uncertainty flourish in a liminal space in which ethical intervention is possible. An algorithm's "truth claims" are based upon its "ground truth" data, the training data from which it produces its model of the world. In this sense, the algorithm removes doubt by staying true to its ground truth data. The ethicopolitical import of bringing to the surface the doubts inherent in algorithmic decision-making is highlighted by Richard Feynman's investigation of the 1986 Challenger disaster. Cloud ethics stresses the ever-incomplete nature of algorithmic decision-making, pointing to the moments in the decision-making process where a different weighting might have produced a different output. People must identify the moments where future possibilities remain open, allowing the parts that comprise the final output to show their limits.
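The observation that a different weighting might have produced a different output can be illustrated with a small, hypothetical scoring rule (the feature values, weights, and threshold below are invented for illustration, not taken from the book). The same input crosses the decision threshold under one weighting and not under another, which is exactly the contingency cloud ethics asks us to keep visible.

```python
# Hypothetical illustration: one input, two slightly different
# weightings, two different decisions.
def decide(features, weights, threshold=0.5):
    score = sum(f * w for f, w in zip(features, weights))
    return "flag" if score > threshold else "pass"

features = [0.6, 0.3]       # invented risk attributes for one subject
weights_a = [0.5, 0.9]      # one plausible weighting
weights_b = [0.5, 0.5]      # a slightly different one

print(decide(features, weights_a))  # "flag" (score is about 0.57)
print(decide(features, weights_b))  # "pass" (score is about 0.45)
```

Nothing about the subject changed between the two runs; only the weighting did. The final output conceals this fork in the road, and surfacing such forks is where ethical intervention becomes possible.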

Cloud ethics allows individuals to take responsibility for an algorithm's future, challenging social scientists and scholars to alter the weights, parameters, and assumptions of algorithms. It emphasizes the infinite, ever-shifting nature of attributes and rejects the notion that individuals, groups, or societies can be reduced to their attributes. It calls for the preservation of the irresolvable in the face of algorithmic certainty, highlighting the importance of ethics in shaping future possibilities.


#Codingexercise: https://1drv.ms/w/s!Ashlm-Nw-wnWhNNXH-U-qsNwQq3G2g?e=HQp3cA

