Steve is the Head of Data Science and AI at the Australian Computer Society, a proactive social media contributor and LinkedIn influencer.
Artificial intelligence (AI) has evolved exponentially, from driverless vehicles to voice automation in households, and is no longer just a term from sci-fi books and movies. The future of artificial intelligence is arriving sooner than the projections depicted in the futuristic film Minority Report. AI will become an essential part of our lives in the next few years, approaching the level of super-intelligent computers that transcend human analytical abilities. Imagine unlocking your car simply by approaching it, or having products delivered to your door by drone; AI can make it all a reality.
However, recent discussions about algorithmic bias reveal the loopholes in these supposedly perfect AI systems. Algorithmic bias is the lack of fairness that emerges from the output of a computer system. It takes different forms, but it can broadly be understood as prejudice against one group of people based on a particular categorical distinction.
Human bias is an issue that has been well researched in psychology for years. It arises from implicit associations: biases we are not conscious of that can nevertheless affect an event's outcome. Over the last few years, society has begun to grapple with exactly how much these human prejudices can find their way into AI systems, with devastating consequences. Being profoundly aware of these threats and seeking to minimize them is an urgent priority as many firms look to deploy AI solutions. Algorithmic bias in AI systems can take varied forms, such as gender bias, racial prejudice and age discrimination.
The critical question to ask is: What is the root cause of bias in AI systems, and how can it be prevented? Bias can infiltrate algorithms in numerous ways. Even if sensitive variables such as gender, ethnicity or sexual identity are excluded, AI systems learn to make decisions from training data, which may contain skewed human decisions or reflect historical or social inequities.
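To see how a model can discriminate without ever seeing a protected attribute, consider a minimal synthetic sketch in Python. It assumes NumPy and scikit-learn, and every variable name and coefficient here is illustrative, not drawn from any real system: a proxy feature correlated with group membership carries the bias even though the group variable itself is excluded from training.

```python
# Minimal synthetic sketch of proxy bias: the sensitive attribute is
# excluded from the training features, yet the model's decisions still
# differ by group because a correlated proxy carries the same signal.
# Assumes numpy and scikit-learn; all names and numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # 0/1 sensitive attribute (never shown to the model)
proxy = group + rng.normal(0, 0.5, n)      # feature correlated with group (e.g., a location code)
skill = rng.normal(0, 1, n)                # legitimate predictive signal

# Historical labels that already penalize group 1 by 0.8 units.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the sensitive attribute.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The positive-prediction rate per group still diverges: the bias survived.
for g in (0, 1):
    print(f"group {g}: positive-prediction rate = {pred[group == g].mean():.2f}")
```

Running this shows a markedly lower positive-prediction rate for the disadvantaged group, even though the model never received the sensitive attribute: the historical skew in the labels is reconstructed through the proxy.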
The role of data imbalance is vital in introducing bias. For instance, in 2016, Microsoft released an AI-based conversational chatbot on Twitter that was supposed to interact with people through tweets and direct messages. Within a few hours of its release, however, it started replying with highly offensive and racist messages. The chatbot was trained on anonymous public data and had a built-in learning feature, which a coordinated group of users exploited to introduce racist bias into the system, inundating the bot with misogynistic, racist and anti-Semitic language. The incident opened a broader audience's eyes to the potential negative implications of algorithmic bias in AI systems.
Facial recognition systems are also under scrutiny, and class imbalance is a leading issue there. A dataset called "Labeled Faces in the Wild," long considered the benchmark for testing facial recognition software, was 70% male and 80% white. A model that performs well on such a benchmark can still fail on the faces it rarely saw, so whether the dataset truly represents faces "in the wild" is highly debatable.
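A simple way to surface this problem is to report accuracy per demographic group rather than in aggregate. The following hypothetical Python sketch uses entirely synthetic numbers (an 80/20 demographic split and made-up error rates, not measurements of any real system) to show how a healthy-looking overall accuracy can hide a much higher error rate for the under-represented group.

```python
# Hypothetical sketch: aggregate accuracy hides per-group failure.
# All figures are simulated, not measurements of any real system.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Simulated demographic split mirroring a skewed benchmark (80/20).
group = np.where(rng.random(n) < 0.8, "majority", "minority")

# Simulate a model that errs far more often on the under-represented group.
error_rate = np.where(group == "majority", 0.05, 0.30)
correct = rng.random(n) > error_rate

print(f"aggregate accuracy: {correct.mean():.2%}")  # looks acceptable
for g in ("majority", "minority"):
    print(f"{g:>8} accuracy: {correct[group == g].mean():.2%}")
```

Because the majority group dominates the test set, its performance dominates the headline number; disaggregating by group is what exposes the disparity.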
Concerns are also arising over how to test facial recognition technologies transparently. On June 30, 2020, the Association for Computing Machinery (ACM) in New York City called for a cessation of private and government use of facial recognition technologies due to "clear bias based on ethnic, racial, gender and other human characteristics." The ACM said this bias caused "profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups." Given how pervasive AI has become, it is crucial to address algorithmic bias to make these systems fairer and more inclusive.
Apart from algorithms and data, the researchers and engineers who develop these systems also bear responsibility for AI bias. According to VentureBeat, a Columbia University study found that "the more homogenous the [engineering] team is, the more likely it is that a given prediction error will appear." A homogeneous team can lack empathy for the people who face discrimination, leading to the unconscious introduction of bias into these AI systems.
The hidden use of AI systems in our society can be dangerous for marginalized people, who are given no option to opt out of these systems' biased surveillance. Countries such as the U.S. and China have deployed thousands of AI-enabled cameras that track people's movements without their consent. This undermines those who are discriminated against, and it can diminish individuals' willingness to participate in the economy and culture.
By promoting distrust and delivering distorted outcomes, bias lowers AI's potential for industry and society. Corporate executives need to ensure that human decision-making is strengthened by the AI technologies they use, and they are responsible for supporting the scientific advances and standards that can minimize AI bias.
Joy Buolamwini, a postgraduate researcher at the Massachusetts Institute of Technology, recognized the repercussions of algorithmic bias in our society and founded the Algorithmic Justice League to address them. The organization's primary goal is to highlight the social and cultural implications of AI bias through art and scientific research. The work of such organizations will be monumental in addressing often-overlooked issues like AI bias, and governments must join forces with scientific researchers to tackle the problem and move toward a more progressive and fair society.
In seeking to explain AI, and science in general, we must account for broader societal complexities, because the most fundamental transitions emerge at the social level.