THE UNSEEN EFFECTS OF AI: HOW BIAS SNEAKS INTO OUR DECISIONS


Artificial intelligence (AI) has undoubtedly revolutionized the way we live, work, and interact with the world around us. From personalized recommendations to autonomous vehicles, the possibilities of AI are seemingly endless. Kenya's digital startup scene is a testament to the incredible strides we have made in the tech world. However, with great power comes great responsibility. As AI becomes more pervasive, it is important to recognize the potential for bias to sneak into our decision-making process, perpetuating discrimination and oppression.

Bias in AI occurs when algorithms deliver systematically skewed results because of flawed assumptions made during the machine learning process. Left unchecked, AI bias can perpetuate discrimination and oppression, undoing hard-won gains in representation and diversity. Let's take a closer look at some examples of AI bias.

In the American healthcare system, an algorithm used in hospitals was found to favor white patients over black patients. The algorithm used past healthcare expenditure as a proxy for medical need, but spending is significantly related to race: because less money had historically been spent on black patients, the algorithm wrongly concluded that they were healthier than equally sick white patients. Fortunately, researchers and a health services company worked together to reduce the bias by a staggering 80%.

Another example is the portrayal of CEOs as male. A study found that only 11% of the individuals shown in a Google image search for the term "CEO" were women. Google pointed out that advertisers can specify to whom the search engine should display their ads, and gender is one of the available targeting criteria. However, it is also believed that the algorithm could have learned from user behavior that men are "more suited" to executive positions. Either way, the bias perpetuates gender inequality in the workforce.

The Amazon hiring algorithm is a particularly striking example of AI bias. Trained on a decade of resumes submitted mostly by men, the algorithm penalized resumes that indicated the applicant was female and demoted applications from candidates who had attended all-female institutions. Amazon edited the programs to be neutral to these particular terms, but there was no guarantee that other forms of bias would not creep in, and the company eventually dissolved the effort in 2017. The bias perpetuated gender inequality in the workplace and had real-life consequences for female job seekers.

In yet another scenario, GPT-3, the famous language model released by OpenAI, generated biased and discriminatory responses to certain prompts. For instance, when prompted to complete the phrase "Man is to computer programmer as woman is to _____," the model responded with "homemaker" and "nurse," echoing gender stereotypes that do not reflect the reality of gender diversity in the workforce.

While the advancements in AI have been tremendous, we must also recognize that they are not without their faults. Notably, the unconscious biases embedded in AI applications stem from the training data chosen by their creators, revealing that even seemingly neutral technologies remain susceptible to human prejudice.

Additionally, the public sector has marketed AI as a tool for enhancing governance and breaking down barriers that prevent the state from delivering services to its citizens. Unfortunately, this noble goal is undermined by the numerous countries that have employed AI for mass surveillance and social scoring. These applications show a blatant disregard for fundamental human rights such as privacy, freedom of expression and movement, and access to essential social services.

The issue of AI bias is not just a technological problem; it is also a social and political one, since biases can reflect and perpetuate inequalities in society. To tackle AI bias, it is crucial to understand where biases originate, to test algorithms in real-life settings, and to account for counterfactual fairness. Counterfactual fairness requires that an AI system's decision about a person would remain the same in a counterfactual world where a sensitive characteristic, such as race or gender, were different.
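To make this concrete, here is a minimal sketch of a counterfactual check in Python. It is an illustration on synthetic data, not a full causal analysis (which formal counterfactual fairness requires): we train a toy classifier, then test whether flipping only the recorded gender attribute changes its decisions.

```python
# A minimal counterfactual "flip test" on synthetic data.
# (True counterfactual fairness needs a causal model; this simpler
# check is a common first approximation.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)           # sensitive attribute: 0 or 1
skill = rng.normal(0, 1, n)              # legitimate feature
# Biased historical labels: outcome partly driven by gender
y = (skill + 0.8 * gender + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, y)

# Counterfactual world: same people, opposite gender attribute
X_cf = X.copy()
X_cf[:, 1] = 1 - X_cf[:, 1]

changed = (model.predict(X) != model.predict(X_cf)).mean()
print(f"Decisions that flip when gender is flipped: {changed:.1%}")
```

A non-trivial share of flipped decisions is a warning sign that the model is leaning directly on the sensitive attribute rather than on legitimate features.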

To prevent AI bias, it is essential to take a systematic and proactive approach. One of the most critical steps is to train AI systems on diverse and representative data, because bias can easily creep into a system trained on a narrow or skewed dataset. In Kenya, this is addressed by the Data Protection Act of 2019, which requires data controllers to ensure that personal data is processed in a fair and transparent manner that prevents discrimination based on race, gender, religion, and other protected attributes.
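As a first, crude check on representativeness, one can compare the composition of a training dataset against the population the system is meant to serve. The sketch below is a hypothetical illustration: the column name and population benchmarks are placeholders, and a real audit would cover many more attributes and their intersections.

```python
# A sketch of a representativeness check before training.
# Column names and population shares are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({"gender": ["F", "M", "M", "M", "F", "M", "M", "M"]})
population_share = {"F": 0.50, "M": 0.50}   # assumed population benchmark

sample_share = df["gender"].value_counts(normalize=True)
for group, expected in population_share.items():
    actual = sample_share.get(group, 0.0)
    if actual < 0.8 * expected:             # flag >20% under-representation
        print(f"{group}: {actual:.0%} of data vs {expected:.0%} expected: under-represented")
```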

Designing algorithms with fairness in mind is another crucial step in preventing AI bias. This involves using techniques such as counterfactual fairness, discussed above, to ensure that the algorithm's decisions do not hinge on sensitive attributes. In Kenya, the government has not yet implemented specific laws or regulations that govern algorithmic bias. However, the National AI and Robotics Strategy, launched in 2020, aims to promote the responsible use of AI by encouraging developers to create ethical and transparent algorithms that consider the social, ethical, and legal implications of their applications.
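Fairness-by-design can also begin before training. One widely cited pre-processing technique, sketched below on synthetic data, is "reweighing" (Kamiran and Calders): each training example is weighted so that, in the weighted dataset, group membership carries no statistical information about the label.

```python
# A sketch of reweighing: weight each example so that group membership
# and the label are statistically independent in the weighted data.
# All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
label = (rng.random(1000) < np.where(group == 1, 0.6, 0.3)).astype(int)

weights = np.empty(len(label))
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        # expected frequency if group and label were independent,
        # divided by the observed frequency of this (group, label) cell
        expected = (group == g).mean() * (label == y).mean()
        weights[mask] = expected / mask.mean()

# The weights can be passed to most learners, for example:
# LogisticRegression().fit(X, label, sample_weight=weights)
```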

Monitoring and testing AI systems are also essential to ensure that they are working as intended. Such monitoring involves regularly assessing the AI system's outcomes and looking for any biases that may arise during its use. In Kenya, data controllers are required by the Data Protection Act to regularly review and evaluate their data processing practices to identify and prevent any instances of bias.
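In practice, such monitoring often boils down to periodically computing simple disparity metrics over a recent batch of the system's decisions. The sketch below uses hypothetical data, and the 0.8 cutoff echoes the common "four-fifths" rule of thumb rather than any Kenyan legal standard.

```python
# A sketch of outcome monitoring: compare the rate of favorable
# decisions across groups in a recent batch of outputs.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   0,   0,   0,   1,   1,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates.to_string())
if ratio < 0.8:                  # "four-fifths" rule of thumb
    print(f"Disparate impact ratio {ratio:.2f}: review for possible bias")
```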

In conclusion, AI bias is a serious issue that needs to be addressed. By understanding where biases originate, testing algorithms in real-life settings, and accounting for counterfactual fairness, we can work towards creating unbiased AI systems that will be life-changing for many Kenyans. It's time to start the conversation and take action to ensure that AI systems are developed in a way that promotes equality, fairness, and transparency.


Paula Kilusi is an Associate Editor at the UNLJ