This is a summary of the book titled “The Equality Machine:
harnessing digital technology for a brighter, more inclusive future” written by
Orly Lobel and published by PublicAffairs in 2022. The author proposes an
"equality machine" in her drive to use the common ground of humanity to
bridge two disparate groups, often at opposite ends of the spectrum of people
impacted by technology: 1. those who fear new technologies due to their
potential to exacerbate existing inequities and 2. those who envision a
technological utopia without anticipating risks. The goal of this proposal is
to create a better
future in which humanity uses "technology for good." Advances in technology
such as artificial intelligence and chatbots are recognized both for their
potential to empower and for their shortcomings in meeting standards of
equity and fairness. Careful auditing can keep algorithms from displaying the
same biases that humans do. Making data more transparent helps people value
the labor involved. Feminizing agents and chatbots can normalize existing
inequities. New technologies also help to uncover gaps in representation and
to protect people from crime and disease. As humans interact with these
technologies, they become cognizant of how their interactions with others and
with bots are shifting. Makers of chatbots and other new technologies can
probe assumptions and disrupt stereotypes.
The rise of intelligent machines has prompted a need for
upholding values of equity and fairness. Technological change has been
polarized, with insiders focusing on disruption and embracing new technologies,
while outsiders, such as people of color, women, and those from rural areas,
worry about exclusion and inequities. To improve machine fairness, humanity
must strike a balance between naive optimism and fearful pessimism. Machine
learning algorithms can often infer identity markers from other data, so
merely removing those markers does not address the root causes of
inequities. To prevent algorithmic
models from reflecting human biases, organizations must be proactive about
auditing the output of their AI models as well as their data inputs. Human
resources can run hypothetical job candidates through their AI models to test
for biases and choose more inclusive data sets. AI decision-making offers an
advantage: machine bias is easier to dissect and correct than flawed human
decision-making. Additionally, predictive algorithmic models can help
companies screen a larger pool of applicants for more nuanced qualities, such
as high performance and long-term retention. It would be prudent to strike a balance
between machine screening and human review.
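The auditing step described above, running paired hypothetical candidates through a model and flagging score gaps, can be sketched in a few lines. This is a minimal illustration, not a real hiring system: the scoring function, field names, and candidates below are invented assumptions, and the bias in `toy_score` is planted deliberately so the audit has something to find.

```python
def toy_score(candidate):
    """Hypothetical screening model with a deliberately planted bias."""
    score = candidate["years_experience"] * 10
    if candidate["gender"] == "female":  # the planted bias
        score -= 5
    return score

def audit_attribute(model, candidates, attribute, values, tolerance=0):
    """Flag candidates whose score changes when only `attribute` is flipped."""
    flagged = []
    for cand in candidates:
        # Score every variant of the candidate that differs only in `attribute`.
        scores = {v: model(dict(cand, **{attribute: v})) for v in values}
        if max(scores.values()) - min(scores.values()) > tolerance:
            flagged.append((cand, scores))
    return flagged

candidates = [{"years_experience": 5, "gender": "female"},
              {"years_experience": 3, "gender": "male"}]
flags = audit_attribute(toy_score, candidates, "gender", ["female", "male"])
```

Because the toy model penalizes one value of the protected attribute, every candidate pair shows a score gap and gets flagged; against an unbiased model, `flags` would be empty.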
Technology can help stakeholders work towards a future of
financial equity by enabling access to vast amounts of data, identifying and
correcting disparities, and reducing biases. Research shows that algorithms
created to reduce bias in the fintech industry were 40% less discriminatory
than humans. Research also shows that companies are more likely to penalize
women for initiating salary negotiations even though men might be praised for
assertiveness. AI and societal shifts towards greater data transparency are
empowering workers with a better understanding of their labor market value.
Some governments have passed legislation banning employers from asking
prospective employees to disclose their past salaries. New digital resources,
such as Payscale, are bringing greater transparency to the salary negotiation
process. Feminizing AI assistants and chatbots can normalize existing
inequities, so companies must reflect on the preference for depicting
subservient robots as female. That preference reinforces gender as a binary
construct and promotes
outmoded views of women's roles in society.
Researchers are using new technologies to detect patterns in
representation gaps and address systemic inequities. Natural language
processing (NLP) methods are being used to analyze large amounts of
information, revealing unequal power dynamics and opportunities. AI can be used
to assess whether people with different identity markers are getting equitable
representation in media forms. Machine learning and AI analytics can help
detect gaps in representation and biases in various media industries and inspire
more empowering narratives. Technology can also help protect people from harmful
influences by enabling organizations to share data and develop data hubs. AI
and health data can also help stakeholders accelerate drug discovery and
collaborate to prevent the global spread of viruses. However, democratizing AI
use in medical research contexts is crucial to ensure improved health outcomes
for everyone, not just the rich.
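The representation-gap analysis described above can be sketched with a simple pronoun count over a text sample. Real NLP pipelines use entity recognition and coreference resolution; the word lists and sample sentences below are illustrative assumptions only.

```python
import re
from collections import Counter

# Minimal word lists; a real analysis would use richer lexicons and NER.
FEMININE = {"she", "her", "hers"}
MASCULINE = {"he", "him", "his"}

def pronoun_counts(text):
    """Count gendered pronouns as a crude proxy for representation."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return {"feminine": sum(words[w] for w in FEMININE),
            "masculine": sum(words[w] for w in MASCULINE)}

sample = ("He led the mission and his team praised him. "
          "She reviewed the data; her findings shaped the report.")
gaps = pronoun_counts(sample)
```

Run over a large media corpus instead of the two sample sentences, a count like this surfaces the unequal coverage the book describes, which can then prompt deeper analysis of who gets portrayed and how.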
Algorithms and embodied robots are transforming human
connection and social bonds. Algorithmic biases can exacerbate existing class,
racial, and social divides, while the growing prevalence of robots with sexual
capacities is transforming intimacy and emotional connection. Some argue that
framing robots solely as AI-empowered sex dolls is an oversimplification,
while others worry about the potential for violence against women.
Roboticists can challenge stereotypes by creating robots
that defy common assumptions. Embodied robots can support humans in various
functions, such as care labor, reception work, and space exploration. However,
some critics worry about privacy risks, consent, and misuse of data.
Robots can surprise those they interact with by disrupting
expectations. NASA uses feminine-looking robots like Valkyrie to support
in-space exploration, while masculine-looking robots like Tank act as
"roboceptionists." These robots illustrate the choice roboticists face: to
design robots that cater to existing biases or to inspire imaginative new
possibilities.
#codingexercise: CodingExercise-01-07-2025.docx