As a Chartered Biomedical Scientist, an agile practitioner in tech, and a mentor to technology-focused students, I've seen firsthand how artificial intelligence (AI) can democratize expertise. AI is revolutionizing how we work: speeding up image analysis, flagging anomalies, and aiding decision-making in high-volume labs. It makes advanced diagnostics accessible even in under-resourced areas, supporting clinicians and empowering patients with faster insights.
But promise comes with peril. This revolution must be handled with care, or it risks embedding today’s biases into tomorrow’s standard of care.
Most AI models learn from datasets that under-represent women, ethnic minorities, and non-Western presentations of disease. In diagnostics, algorithms trained predominantly on lighter skin tones can miss subtle changes in melanin-rich samples. Facial-analysis tools used in some research contexts fail women of color at higher rates. When these systems move into clinical workflows, the cost isn’t abstract – it’s misdiagnosis, delayed care, and eroded trust.
Gendered harms add another layer of peril. Women in science, technology, engineering, and mathematics (STEM) already navigate disproportionate online abuse – from sexualized deepfakes to coordinated harassment that drives many out of the profession. AI can scale those harms: generative tools create non-consensual images, biased recruitment algorithms screen out female candidates, and social platforms amplify misogynistic content. The result? A talent pipeline that leaks at every stage, leaving pathology and lab medicine poorer.
It was that bleak vision that prompted me to take action and submit a proposal to the UK's All-Party Parliamentary Group (APPG) on Diversity and Inclusion in STEM. And that action paid off. My proposal was selected to launch the APPG's flagship project: Towards a Fairer AI Future in STEM.
The project marks the start of something vital: a structured exploration of how AI is reshaping STEM, beginning with gendered harms – where AI currently fails, and where it offers opportunities to do better.
The APPG’s approach is refreshingly grounded: it begins with listening. Regional roundtables will bring together pathologists, lab staff, patients, educators, under-represented voices – along with experts in AI, tech, and STEM – to share lived experiences. Evidence sessions will examine root causes – narrow data, non-diverse teams, governance gaps. Policy recommendations will follow, with public commitments from organizations to embed equity audits, inclusive design, and diverse datasets from the outset.
This isn’t about slowing innovation; it’s about directing it wisely. A STEM sector that reflects the whole population produces better science – safer algorithms, more robust diagnostics, greater public trust.
For me, this moment closes a circle. I arrived in the UK as a child, faced accent shaming and non-traditional routes into science, yet persisted. Awards like the UK Biomedical Scientist of the Year 2022 and The Pathologist Power List 2025: Leading Voices felt like milestones. Now, selection by the APPG validates the quiet belief that lived experience matters – that a Black British woman from the diaspora can help shape policy such that future generations face fewer barriers.
To my fellow pathologists, lab professionals, agile practitioners, STEM leaders, and beyond, my call to action is to shape our digital future with care. Let’s ensure AI arrives as an ally, not an amplifier of inequality. Let's speak up, share our stories, and demand diversity in data and design. The APPG’s project is an invitation to collaborate – and a chance for our collective voice to drive a leap towards equitable innovation.
I’m proud, humbled, and ready to contribute. Because a fairer AI future in STEM isn’t just good science – it’s good medicine, good society, and good sense.
