
AI Ethics Framework

Every tool I build and every system I architect is governed by this framework — derived from IBM AI Ethics principles, POPIA legislation, and international best practice.

Established: 19 February 2026 · Aligned with IBM AI Ethics · POPIA (Act 4 of 2013) · GDPR (Regulation (EU) 2016/679)

The Five IBM AI Ethics Pillars

My consulting practice and all tools I build are anchored in the five pillars of ethical AI as established by IBM — the organisation that certified my AI expertise.

🔍
Explainability
AI systems I build can explain their reasoning in plain language. No black boxes. Every output can be understood and justified.
IBM AI Ethics · Pillar 1
⚖️
Fairness
AI systems I build are tested for bias and designed to treat all users equitably — regardless of race, gender, age, or geography.
IBM AI Ethics · Pillar 2
🛡️
Robustness
AI systems I build handle unexpected inputs safely, fail gracefully, and maintain reliable performance even in constrained environments.
IBM AI Ethics · Pillar 3
👁️
Transparency
Users always know they are interacting with AI. The source, purpose, and limitations of every AI system I deploy are clearly disclosed.
IBM AI Ethics · Pillar 4
🔒
Privacy
Data minimisation, zero retention, and privacy-by-design are built into every system from the ground up — not added as afterthoughts.
IBM AI Ethics · Pillar 5

My Specific Commitments

Data Sovereignty
Your Data Stays Yours
No data entered into my demo tools is ever stored, shared, or used for any purpose beyond generating your immediate response. Zero retention. By design.
Transparency
Always Disclosed as AI
Every AI tool I build clearly discloses that users are interacting with artificial intelligence. No deception. No impersonation of humans.
Purpose Limitation
Used Only for Stated Purpose
Data collected during consulting engagements is used only for the specific purpose agreed upon in writing. Never repurposed. Never monetised.
Bias Prevention
Actively Anti-Bias
I evaluate AI outputs for cultural, racial, and gender bias before deployment. African contexts are tested specifically — not assumed to match Western defaults.
POPIA Compliance
South African Law First
All tools and consulting services comply with the Protection of Personal Information Act 4 of 2013, including Sections 19 (security) and 22 (notification of breaches).
GDPR Alignment
International Standard
For international clients, all systems are designed to meet GDPR requirements — data minimisation, right to erasure, and explicit consent where required.


Africa-Specific Ethics Commitments

As an African AI consultant serving African businesses and communities, I hold additional ethical obligations that go beyond global frameworks:

THE MAHLO KGOTLENG ETHICS PROMISE

Every AI system I build, every strategy I recommend, and every tool I deploy will be one I would be proud to demonstrate to any regulator, any community it serves, and any future generation that inherits its impact. I build for dignity. I build for justice. I build for Africa and the world.

Reporting an Ethics Concern

If you believe any tool or service I provide violates these ethics principles, I want to know. Contact me directly at [email protected] with the subject line "Ethics Concern" — I commit to responding within 3 business days and taking appropriate corrective action.

For POPIA-related complaints, you may also contact South Africa's Information Regulator at [email protected] or visit inforegulator.org.za.
