Challenging gender bias in artificial intelligence
With artificial intelligence (AI) rapidly transforming healthcare, ensuring that these technologies are ethical, inclusive, and free from bias has never been more important. In this blog, consultant paediatrician Dr Sharon Jheeta gives an insight into her research on inequities in AI and her involvement in our Trust's AI steering group.
What inspired your career in paediatrics?
Essentially, children are amazing and every day I learn something new from my patients. There is something unique and clear about the way children approach problems which I find completely inspiring. No doubt it can be a demanding speciality physically and emotionally, but this is what keeps me going when the District Line is delayed every morning…
Paediatrics is also an incredibly dynamic and challenging speciality. My colleagues are wonderful, and because everyone is passionate about going that extra mile for the child and their family, it is also deeply rewarding.
The needs of children are constantly evolving, with new risks from the online world and the impact of the Covid pandemic. It is a tough time to be a child or young person, and in recognising that their mental health needs have changed, we have had to adapt quickly and learn how to support them and their families.
Dr Sharon Jheeta
"Our Trust AI steering group understands that we need to make AI available, secure and accessible to all, whilst recognising that different ethnic/racial/gender groups have different ideas or preconceptions of AI."
Within paediatrics you have an interest in genomic testing for rare diseases. Tell us more about this.
I believe genomic medicine is the future and when I was a registrar here, I developed an interest in rapid exome sequencing in sick children. There is a huge cohort of children who undergo a long, painful diagnostic journey through multiple tertiary centres, endless tests, scans, opinions and eventually most of them do not get a diagnosis. I have seen how hard it is for those families and the impact it has on their wellbeing.
Genomics and the mainstreaming of genomics will give the best chance of getting those answers and for me this is a gamechanger. There is power in having a diagnosis, and even if there is no therapy available, families are able to move forward. It is also the first step towards Precision Medicine which is the future of medicine.
But this doesn't come without data risks, and for this reason I am championing the principle that patients remain at the centre of owning their genomic data. Genomics has very quickly become a huge commercial opportunity, and I worry about the impact on the reliability and security of that data. AI tools are now also being used to interpret genomic data, and that brings a whole other set of concerns.
Therefore, education is key, and as clinicians we should be empowering our patients to know and be aware of this. I have been at various conferences and meetings with the commercial sector, and one thing I have realised is that the industry wants to hear from doctors and healthcare professionals, so the more people who get involved, the better. I sometimes feel like the slightly cynical doctor in these meetings, but I do think it is important that commercial partners are reminded that at the heart of the problem they are trying to solve is a child.
I sit on the Royal College of Paediatrics and Child Health (RCPCH) Genomics Working Group, which advises on how to roll out genomic testing within the speciality, as well as promoting genomic testing within the department.
You are working closely with Dr Bob Klaber, director of strategy, research and innovation on our Trust-wide approach to AI, highlighting that in order for it to be ethical, it must be free from bias. Tell us more about this.
Bob is a fellow general paediatrician and has been keen to get as many clinicians as possible involved in the Trust's AI strategy. AI is here to stay, and I think it is important to harness all the potential benefits whilst recognising its weaknesses too.
The Trust's AI steering group is a wonderful group to be a part of. Often any AI meeting gets overrun with lots of technical jargon that people pretend to understand. But at its core our group is keen to use AI responsibly. I love the word responsible because it gives ownership and accountability to the organisation and this is so key ethically. It is important that patients and families understand this too.
In our AI steering group, this is very much at the forefront, alongside the recognition that we need to make AI available, secure and accessible to all, whilst recognising that different ethnic/racial/gender groups have different ideas or preconceptions of AI.
Can you talk more about your work on gender bias?
The more I researched the inequities in AI, the more fascinated I became with gender bias. Weirdly, I had assumed that AI would be able to eliminate any conscious bias because it isn't human, but it was quite the opposite.
Learning models rely on the data they are trained on, and currently the people building and feeding these models are predominantly men. Because these models rest on biased algorithms, the more they are used, the more they reinforce existing inequalities and gender discrimination in AI. When you asked ChatGPT to generate a picture of a doctor six months ago, for example, the first 78 images were of a white male. In the UK, the male-to-female ratio among doctors is 50/50. In the USA (where most algorithms are made) it's 60/40. This opens up a massive can of worms. There is no doubt that AI can help – particularly in healthcare – but generating awareness of its pitfalls and its bias is so important for every user.
Dr Sharon Jheeta
"I consider myself to be pretty privileged to be working somewhere which supports women. The Trust is full of living examples of women who are multi-tasking and managing many roles. I look at my colleagues and see amazing and strong role models."
What is interesting is that the more discussion you have with AI providers, the more they are willing to have a dialogue to try and remedy this. Already there are improvements – at the time of writing this blog, I asked ChatGPT to generate an image of a doctor and only the first 34 were white males.
As women we can help shape future improvements, educate everyone on gender bias, and advise industry and corporations that we are aware and therefore they need to change at a technical level. The more women who get involved, the more we can ensure that training samples are as diverse as possible in terms of gender, age, race etc. Of course, the biggest impact would be if the AI industry had more women in senior roles, but I strongly feel that women from other backgrounds such as health can act in an advisory capacity and that’s what I am trying to do.
The theme for International Women’s Day 2025 (8 March) is #AccelerateAction – calling for increased momentum and urgency in addressing the systemic barriers and biases that women face. What has been your experience of working at our Trust?
I consider myself to be pretty privileged to be working somewhere which supports women. The Trust is full of living examples of women who are multi-tasking and managing many roles. I look at my colleagues and see amazing and strong role models.
But I do recognise that not everywhere is like this, and therefore I think it is important that women stick together and continue to champion one another.