PARQAIR-MH: A participatory initiative to include LGBTQIA+ voices in AI for mental health
Contact: parqair@gmail.com
Artificial intelligence (AI) has the potential to revolutionize healthcare, for example by analyzing data routinely collected in electronic health records. However, concerns arise because AI poses potential risks to minority communities, including the LGBTQIA+ community. While comprehensive patient data is crucial for clinical decisions, balancing the collection of data with concerns about privacy, and defining how that sensitive data is re-used, is especially relevant for stigmatized LGBTQIA+ identities. It is acknowledged that collecting data to support LGBTQIA+ affirmative healthcare is not straightforward. But data that is incomplete, missing, or that inaccurately represents sensitive characteristics (such as sexual orientation and gender identity) introduces significant biases into any analysis of that data. This can lead data-driven technologies like AI systems to amplify those biases, making unintended harm difficult to detect and address. For a more equitable future, AI in healthcare must be transparent, ethical, and involve marginalized communities in its development. For these reasons, we introduce the PARQAIR-MH initiative: a multi-stage participatory initiative aiming to harmonize technology, policy, and LGBTQIA+ community needs.
AI is a powerful technology, advancing at incredible pace, and it is already being applied in healthcare settings. However, recent findings indicate that AI might pose significant risks to minority communities, including the LGBTQIA+ community. This concern is especially relevant in mental healthcare, because research has identified specific vulnerabilities for people in the LGBTQIA+ community.
For any technology that makes use of patient data, we require data about an individual to be comprehensive, granular, and representative. Many in the LGBTQIA+ community have championed the collection of high-quality data on sexual orientation and gender identity, viewing it as pivotal in advancing population health. Yet this call for inclusivity is not without its challenges. There is a diversity of proposals for how to collect data on sexual orientation and gender identity; in healthcare contexts, for example, it is clear that the traditional binary sexes of "male" and "female" are inadequate to capture the diversity of gender identity. However, some researchers propose that using categories of gender identity fundamentally misrepresents the nature of gender identity itself and, given this, question why healthcare providers would need to record such categories at all. Others have argued that collecting gender identity is irrelevant to healthcare because medically relevant and clinically useful information could instead be captured using anatomical inventories, which describe how an individual's risk of certain illnesses relates to their organ systems. Similarly, we might ask whether sexual orientation should be routinely collected in healthcare records and, if so, how it should be recorded. These are challenging questions for which we have limited data and, therefore, little guidance available to the policy makers, institutions, engineers, and scientists involved in using data to improve healthcare provision.
Consider this concrete example: hospitals use some elements of patients' data to discover how they can better provide for the needs of the community they serve. It is impractical to request written consent from every patient for each specific analysis of their data. Consequently, most nations have legal frameworks that enable these kinds of data re-use, provided the data is properly protected and used only for purposes that clearly serve the interests of the people whose data is being analysed.
All health data is sensitive and requires rigorous safeguards for re-use. Unfortunately, experiencing mental ill-health and having an LGBTQIA+ identity continue to attract stigmatising attitudes and behaviour in our societies. We may therefore require additional safeguards and extra stewardship of this data for those affected. Put bluntly: just because data can legally be analysed does not mean that the analysis is acceptable to those it affects.
It is difficult to design data stewardship principles and guidelines for every possible re-use of data collected by a healthcare system. However, it is possible to directly involve people at risk of the adverse consequences of healthcare data re-use, to understand where the boundaries lie and when special scrutiny would be required.
There is a pressing demand for AI in healthcare to embrace greater transparency, ethics, and participatory principles. Engaging the LGBTQIA+ community, along with other crucial stakeholders, is a clear imperative in moving forward.
In acknowledgment of these challenges, we have announced the PARQAIR-MH study. It seeks to address the complex network of issues around privacy, data collection, and data fidelity, alongside understanding what equitable AI in mental health should look like. With its participatory, consensus-seeking approach, the initiative aims to bridge the gap between technology, policy, and the LGBTQIA+ community, ensuring that advancements in data-driven healthcare technology, especially contemporary AI, are aligned with the needs of the communities they will affect.
The PARQAIR-MH Delphi study was granted ethical approval by the Institute of Population Health Research Ethics Committee of the University of Liverpool (Study REC Reference Number: 12413. Date: 24th July 2023).
We are keen to involve as many people as possible in PARQAIR-MH. If you are interested in:
hearing more about public events related to the study
invitations to participate in the Delphi study
learning about any outputs or progress in the PARQAIR-MH initiative
... then please consider joining our secure mailing list, operated by the University of Oxford and administered by one of the PARQAIR-MH organisers, Dr Andrey Kormilitzin. You can subscribe by clicking "Participate in the initiative" below.
Andrey Kormilitzin (University of Oxford, UK)
Krunoslav Lehman Pavasovic (Meta, ENS, France)
Julia Hamer-Hunt (University of Oxford, UK)
Kevin R. McKee (Google DeepMind, UK)
Nenad Tomasev (Google DeepMind, UK)
Dan W. Joyce (University of Liverpool, UK)
Images were generated using Craiyon with the prompt "Colorful illustration of LGBTQIA+ mental health" and edited by Krunoslav Lehman Pavasovic.