Saturday, September 30, 2023

 

This is a continuation of a previous article on the use of Artificial Intelligence in product development. This article discusses the concerns about bias in AI as outlined in reputed journals.

In summary, some of the concerns about AI stem from inaccurate information produced by generative AI, while others stem from the bias served up by the AI tools themselves. These can be mitigated with a wider range of datasets. AI4ALL, for instance, works to feed AI a broad range of content so that it is more inclusive of the world. Another concern has been over-reliance on AI; a straightforward way to address this is to balance the use of AI with tasks requiring skilled human supervision.

A methodical approach to managing bias involves three steps: first, the data and design must be decided; second, the outputs must be checked; and third, problems must be monitored on an ongoing basis.

Complete fairness is impossible, partly because decision-making committees are rarely diverse enough, and partly because choosing an acceptable threshold for fairness and deciding whom to prioritize are genuinely hard calls. This makes a single blueprint for fairness in AI, applicable across all companies and situations, daunting. An algorithm can check for adequate representation or apply a weighted threshold, and both checks are in common use, but unless each class appears in equal numbers in the input data, these selection methods are mutually exclusive: equalizing selection rates and equalizing accuracy across groups cannot both hold. The choice of approach is therefore critical. Along with choosing which groups to protect, a company must determine which issue is most important to mitigate; the differences could stem from the sizes of the groups or from the accuracy rates between them. These choices can be laid out as a decision tree whose branches must align with company policy.
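To make the tension concrete, here is a minimal sketch in Python (hypothetical synthetic data, not drawn from any company) that computes two common checks, equal selection rates and equal true positive rates, for two groups with different underlying qualification rates. With unequal base rates, a selector can satisfy one check only by violating the other.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic applicants: group A is qualified 60% of the time,
# group B only 30% of the time (unequal base rates).
n = 10_000
group = rng.choice(["A", "B"], size=n)
qualified = np.where(group == "A", rng.random(n) < 0.6, rng.random(n) < 0.3)

# A perfectly accurate selector that approves exactly the qualified applicants.
selected = qualified.copy()

for g in ["A", "B"]:
    sel_rate = selected[group == g].mean()              # selection rate
    tpr = selected[(group == g) & qualified].mean()     # true positive rate
    print(f"group {g}: selection rate {sel_rate:.2f}, TPR {tpr:.2f}")

# The output shows equal true positive rates (1.0 for both groups) but unequal
# selection rates (~0.60 vs ~0.30). Forcing equal selection rates would instead
# require approving unqualified applicants in one group or rejecting qualified
# applicants in the other, so the two criteria cannot both be satisfied here.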

Missteps remain common. Voice recognition, for example, can leverage AI to route sales calls but may fail more often on regional accents. In this case, fairness can be checked by assembling a more diverse test group and comparing error rates across accents, as sketched below. The final algorithm and its fairness tests need to consider the whole population, not just the callers who made it past the early hurdles. Model designers must accept that data is imperfect.
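A check of that kind reduces to computing error rates per accent group on a deliberately diverse test set. The sketch below uses hypothetical accent labels and call records, not an actual routing system.

from collections import defaultdict

# Hypothetical evaluation records: (accent_group, call_routed_correctly)
results = [
    ("accent_a", True), ("accent_a", True), ("accent_a", False),
    ("accent_b", True), ("accent_b", False), ("accent_b", False),
    # ... one record per call in the diverse test set
]

totals = defaultdict(int)
correct = defaultdict(int)
for accent, ok in results:
    totals[accent] += 1
    correct[accent] += ok

for accent in sorted(totals):
    rate = correct[accent] / totals[accent]
    print(f"{accent}: accuracy {rate:.2f} over {totals[accent]} calls")

# A large accuracy gap between accent groups is the signal to broaden the
# training data or adjust the model before rollout.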

The second step, checking outputs, involves examining fairness across intersections and overlaps of data attributes. Even when companies have good intentions, there is a danger that an ill-considered approach does more harm than good: an algorithm deemed neutral can still have a disparate impact on different groups. One effective strategy is a two-model solution such as the generative adversarial networks approach, in which the original model is balanced against a second model that checks the fairness of individual predictions. The two converge toward a more appropriate and fairer solution.
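One way to picture the two-model idea is the adversarial-debiasing pattern: a predictor is trained on the main task while a second model tries to recover the protected attribute from the predictor's output, and the predictor is penalized whenever that second model succeeds. The sketch below is a minimal PyTorch illustration with hypothetical synthetic data and an assumed binary protected attribute; it is not the exact architecture the article has in mind.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical synthetic data: 8 features, a binary label, and a binary
# protected attribute correlated with the first feature.
n, d = 2000, 8
x = torch.randn(n, d)
protected = (x[:, 0] + 0.5 * torch.randn(n) > 0).float().unsqueeze(1)
label = ((x[:, 0] + x[:, 1]) > 0).float().unsqueeze(1)

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
adv_weight = 1.0  # how strongly fairness is traded off against raw accuracy

for step in range(2000):
    # Adversary step: learn to guess the protected attribute from predictions.
    adv_loss = bce(adversary(predictor(x).detach()), protected)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Predictor step: do well on the task while fooling the adversary.
    logits = predictor(x)
    pred_loss = bce(logits, label) - adv_weight * bce(adversary(logits), protected)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()

# As the two models converge, the predictions carry less information about
# group membership, typically at some cost in raw task accuracy.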

The third step is to create a feedback loop. It is important to examine the output frequently and look for suspicious patterns on an ongoing basis, especially where the input evolves over time. Because bias usually goes unnoticed, this kind of review can catch it. A fully diverse outcome can look surprising, so people may unintentionally reinforce bias while developing AI; this shows up with rare events, where people may object when one occurs but not when it fails to happen. A set of metrics such as precision and recall can be helpful, since predictive factors and error rates are both affected. Ongoing monitoring can also be rewarding: demand forecasting, for example, can show improved accuracy by adapting to changes in the data and correcting historical bias.
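A minimal monitoring sketch along those lines might recompute precision and recall per group on each new batch of outcomes and flag any group that drifts below an agreed threshold. The threshold, group labels, and data below are hypothetical; the metrics come from scikit-learn.

from sklearn.metrics import precision_score, recall_score

ALERT_RECALL = 0.80  # hypothetical threshold agreed with the business

def check_batch(y_true, y_pred, groups):
    # Recompute per-group metrics for one monitoring window and flag drift.
    alerts = []
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        yt = [y_true[i] for i in idx]
        yp = [y_pred[i] for i in idx]
        p = precision_score(yt, yp, zero_division=0)
        r = recall_score(yt, yp, zero_division=0)
        print(f"group {g}: precision {p:.2f}, recall {r:.2f}, n={len(idx)}")
        if r < ALERT_RECALL:
            alerts.append(g)
    return alerts

# Example window: each batch is scored once its true outcomes are known.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print("groups needing review:", check_batch(y_true, y_pred, groups))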

The conclusion is that bias may never be fully eliminated, but it can be managed.

 
