ASSESSING KENYA’S ARTIFICIAL INTELLIGENCE REGULATORY FRAMEWORK: NAVIGATING CHALLENGES IN ALGORITHMIC CONTENT MODERATION AND SYSTEMIC BIAS IN SOCIAL MEDIA PLATFORMS

 

Illustration by Gemini AI.[i]

 

Article by Maxwell Otieno.

1.    INTRODUCTION

 

The rapid diffusion of artificial intelligence (AI) across social media platforms has transformed how information is produced, distributed, and consumed, with profound implications for democratic discourse, human rights, and equality. As such, regulation of AI-driven algorithms has become a defining feature of contemporary governance due to the pervasiveness of data-driven analytics across social and political life.[1]

The Data Protection Act No. 24 of 2019, the Computer Misuse and Cybercrimes Act No. 5 of 2018, and the National Artificial Intelligence Strategy (2025–2030) together signal a national commitment to regulating the digital domain and promoting innovation. However, the existing regulatory and legal frameworks meant to address the unique challenges AI technologies pose (namely, algorithmic content moderation and systemic bias) are inadequate. Without coherent guidelines and frameworks, AI development might outpace the ability to govern it effectively, leading to potential misuse and harm.[2]

The deployment of recommender systems and automated moderation tools has produced a class of distributed, continuous, and opaque harms[3] that traditional actor-centered legal frameworks struggle to capture. At the same time, structural constraints hamper Kenya’s ability to oversee transnational platforms whose infrastructural capacities far outstrip those of domestic regulators. These dynamics generate legal and institutional uncertainty for platforms, users, and oversight bodies, complicating the attribution of responsibility and the design of effective accountability mechanisms.

2.    REGULATORY SCOPE AND GAPS OF KENYA’S EXISTING LEGAL AND POLICY INSTRUMENTS

Data Protection Act (2019)

The Data Protection Act’s (DPA) protective power is significant but inherently circumscribed by its focus on identifiable personal data, leaving a regulatory void where algorithmic harms intersect with collective discourse. This inadequacy is most evident in cases where AI moderation systems do not merely "process" data but characterize it.

Many algorithmic harms on social media occur through inferred data, where an algorithm assigns a "harmful" attribute to a user’s post based on aggregate trends.[4] Because these inferences often bypass the threshold of "identifiable" information[5] under Sections 2 and 35 of the DPA, the Act offers an inadequate remedial framework for algorithmic censorship. Since the DPA does not explicitly regulate the accuracy of algorithmic interpretation outside of individual profiling that produces "legal effects", it remains a blunt instrument.
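To make the mechanism concrete, consider the minimal Python sketch below. It is hypothetical: the feature names, weights, and threshold are invented for illustration and do not describe any platform’s actual system. The point is that the model labels a post using only population-level signals, never the author’s identity.

def label_post(topic_engagement_rate: float, regional_report_rate: float) -> str:
    """Label a post 'harmful' from aggregate trends alone.

    Both inputs are population-level statistics about similar posts,
    not identifiable personal data about the author.
    """
    score = 0.6 * regional_report_rate + 0.4 * topic_engagement_rate
    return "harmful" if score > 0.5 else "allowed"

# The post is suppressed because of what other users' behaviour implies
# about its topic; no "identifiable" information within the meaning of
# DPA Sections 2 and 35 is ever processed.
print(label_post(topic_engagement_rate=0.7, regional_report_rate=0.6))

Because the decisive inputs are inferences about a topic rather than data about an identifiable person, the DPA’s remedial hooks never engage.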

With regard to systemic bias, the statute regulates the data but not the model. It lacks explicit provisions requiring bias audits, representativeness of training datasets, or mandatory reporting of disparate-impact metrics. Systemic bias is therefore treated indirectly, primarily as a privacy-risk or individual-harms problem rather than a structural algorithmic-governance issue.
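By way of contrast, the kind of disparate-impact metric the Act never mandates is straightforward to compute. The sketch below uses invented figures and applies the "four-fifths" rule of thumb (a benchmark borrowed from comparative anti-discrimination practice, not from Kenyan law) to takedown rates across language groups; a statutory audit duty could require platforms to report exactly this.

takedowns = {
    "english": (120, 10_000),    # (posts removed, posts reviewed)
    "swahili": (2_500, 10_000),  # invented figures for illustration
}

def disparate_impact_ratio(groups):
    """Ratio of the lowest to the highest group-level 'kept up' rate."""
    keep_rates = [(total - removed) / total for removed, total in groups.values()]
    return min(keep_rates) / max(keep_rates)

ratio = disparate_impact_ratio(takedowns)
# Under the four-fifths rule of thumb, a ratio below 0.8 flags possible
# systemic bias against the disadvantaged group.
print(f"disparate impact ratio: {ratio:.2f}")  # ~0.76 with these figures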

Computer Misuse and Cybercrimes Act (2018)

The CMCA contains no express provisions addressing algorithmic moderation design, explainability, or bias. Sections criminalizing unauthorized interference with systems[6] or fraud[7] could, in theory, apply to malicious technical manipulation of moderation tools, but they do not govern legitimate platform design choices that produce biased outcomes.

Consequently, systemic bias arising from algorithmic ranking/moderation sits outside the CMCA’s primary scope except where bias amounts to a legally cognizable offence (e.g., fraud, discrimination under separate laws) and can be linked to culpable misconduct.

Kenya National AI Strategy (2025–2030)

The National AI Strategy is a policy roadmap; its substantive value is normative and programmatic rather than legal. The Strategy foregrounds fairness, mitigation of bias, and inclusive datasets, and it explicitly identifies bias as a priority risk.[8] It notes that there are significant concerns about the ethical use of AI, including issues of bias, discrimination, perpetuation of existing inequalities, and potential exploitation for surveillance and other invasive purposes.[9] It advocates the creation of multi-stakeholder governance bodies (AI councils and directorates)[10] and regulatory sandboxes for testing moderation tools.[11]

3.    SOCIO-ETHICAL AND LEGAL IMPACTS OF ALGORITHMIC CONTENT MODERATION AND SYSTEMIC BIAS ON SOCIAL MEDIA PLATFORMS IN KENYA

Meareg & 2 others v Meta Platforms, Inc.

Professor Meareg Amare was a well-known and widely respected Tigrayan member of staff at Bahir Dar University and had lived in the city of Bahir Dar for several years. On 9 October 2021, an anonymously run Facebook page called “BDU Staff”, with over 50,000 followers, posted his picture, announcing that he was “hiding” at Bahir Dar University, where he worked as a chemistry professor, and alleging that he had carried out “abuses”.[12]

In the comments, people called for violence against the professor, calling him a “snake” and suggesting that he posed a risk to people from the Amhara ethnic group.[13] On 3 November 2021, three weeks after the posts appeared on the “BDU Staff” page, a group of men followed Meareg home from the university where he taught and shot him in the legs and the chest outside his home.[14]

As a result, a petition was filed at the High Court of Kenya alleging that the killing occurred after Facebook posts doxed the professor, revealed his home address, and incited violence against him, posts that Meta failed to remove despite repeated reports. The petitioners argued that the respondent’s Facebook algorithm recommended content amounting to propaganda for war and incitement to violence to Facebook users in Kenya. They also accused the respondent of granting preferential treatment to users in other countries as opposed to Facebook users in Africa, which they contended was discriminatory.

The brutal murder of Professor Meareg Amare is a harrowing testament to the real-world consequences of algorithmic content moderation. When automated systems prioritize engagement over safety, and systemic bias leaves non-English-speaking populations vulnerable to automated disinformation, the limitations of self-regulation become undeniable. This makes it urgent to interrogate the efficacy of Kenya’s AI regulatory framework.

4.    CONCLUSION

To address these gaps and mitigate their impacts, Kenya should take several steps. First, it ought to adopt a risk-based, multi-tiered approach, classifying social media AI systems according to the level of risk they pose (a minimal illustrative sketch of such tiering follows below). A comprehensive list of recommendations, derived from comparative best practices and the theoretical underpinnings of this new legal phenomenon, is provided in Part 2 of this document. It is imperative that Kenya strategically position itself as a hub for AI development and adoption; this must, however, be preceded by the establishment of robust legal safeguards.
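The sketch below shows one way such tiering might be expressed. The tiers and triggering criteria are illustrative assumptions, loosely modelled on the EU AI Act’s risk-based approach; they are not settled Kenyan law or a proposal endorsed by any regulator.

from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1   # e.g. spam filtering
    LIMITED = 2   # e.g. feed ranking, subject to transparency duties
    HIGH = 3      # e.g. automated moderation of political speech

def classify(system: dict) -> RiskTier:
    """Map a social media AI system's declared features to a risk tier."""
    if system.get("moderates_political_speech") or system.get("targets_elections"):
        return RiskTier.HIGH
    if system.get("ranks_user_feeds"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"ranks_user_feeds": True}))             # RiskTier.LIMITED
print(classify({"moderates_political_speech": True}))   # RiskTier.HIGH

The design point is that obligations (audits, explainability, reporting) would scale with the tier, rather than applying uniformly to every system.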

 



[1] Andrea Mennicken and Karen Yeung, Algorithmic Regulation (CARR Discussion Paper No 68, Centre for Analysis of Risk and Regulation 2015).  

[2] Ministry of Information, Communications and the Digital Economy (MICDE), ‘Kenya National Artificial Intelligence (AI) Strategy 2025–2030’ (2025) https://www.ict.go.ke/kenyas-artificial-intelligence-ai-strategy-2025-2030-launched-kicc-nairobi accessed 8 February 2026.

[3] Gilles Deleuze, ‘Postscript on the Societies of Control’ (1992) 59 October 3.


[4] Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’ (2019) 2019(2) Columbia Business Law Review 494.

[5] ibid.

[6] Computer Misuse and Cybercrimes Act 2018, s 14.

[7] Computer Misuse and Cybercrimes Act 2018, s 16.

[8] MICDE (n 2) ch 4.

[9] MICDE (n 2) ch 4.

[10] MICDE (n 2) ch 5.

[11] MICDE (n 2) ch 3.

[12] NBC News, “Facebook hit with $2 billion lawsuit connected to political violence in Africa”, 14 December 2022, https://www.nbcnews.com/tech/misinformation/facebook-lawsuit-africa-content-moderation-violence-rcna61530

[13] Time, “New lawsuit accuses Facebook of contributing to deaths from ethnic violence in Ethiopia”, 14 December 2022, https://time.com/6240993/facebook-meta-ethiopia-lawsuit/; NBC News, “Facebook hit with $2 billion lawsuit connected to political violence in Africa”, 14 December 2022, https://www.nbcnews.com/tech/misinformation/facebook-lawsuit-africa-content-moderation-violence-rcna61530

[14] Foxglove, “Death by design: a major new case against Facebook”, 14 December 2022, https://www.foxglove.org.uk/2022/12/14/death-by-design-major-new-case-facebook/

 



[i] 'Create an image that suits this topic: ASSESSING KENYA’S ARTIFICIAL INTELLIGENCE REGULATORY FRAMEWORK: NAVIGATING CHALLENGES IN ALGORITHMIC CONTENT MODERATION AND SYSTEMIC BIAS IN SOCIAL MEDIA PLATFORMS' (Gemini, Gemini 3 Flash Image version, Google, 24 March 2026) https://gemini.google.com accessed 25 March 2026.

 
