The integration of artificial intelligence (AI) into everyday life has galvanized a global conversation on the possibilities and perils of AI on human health. In particular, there is a growing need to anticipate and address the potential impact of widely accessible, enhanced, and conversational AI on mental health. We propose 3 considerations to frame how AI may influence population mental health: through the advancement of mental health care; by altering social and economic contexts; and through the policies that shape the adoption, use, and potential abuse of AI-enhanced tools.

JMIR Ment Health 2023;10:e49936
The widespread incorporation of artificial intelligence (AI) in daily use has sparked a global dialogue about the potential benefits and risks of AI on human well-being. Specifically, there is an increasing urgency to anticipate and address the potential impact of widely accessible, enhanced, and conversational AI on mental health. We propose 3 points to consider when determining how AI may influence population mental health: through the advancement of mental health care; by altering social and economic contexts; and through the policies that shape the adoption, use, and potential abuse of AI-enhanced tools.
Prevention, Screening, and Treatment of Mental Health Disorders
With over 970 million people living with a mental disorder worldwide, as well as a shortage of accessible care for many, leveraging tools such as artificial intelligence (AI) could influence mental health through prevention and treatment. AI-enabled tools can help prevent more severe mental illness from developing by identifying higher-risk populations, enabling quicker intervention. AI can detect, assess, and predict stress [ ]. For example, AI can process natural language from electronic health records to detect early cognitive impairment [ ] or child maltreatment [ ], both of which can affect mental health across the life course.
In addition to preventing mental health challenges through more effective and rapid screening, AI has the potential to improve access to mental health care. One could imagine a world where AI serves as the "front line" for mental health, providing a clearinghouse of resources and available services for individuals seeking help. In addition, targeted interventions delivered digitally can help reduce the population burden of mental illness, particularly in hard-to-reach populations and contexts, for example, through stepped care approaches that aim to help populations at the highest risk following natural disasters.
While AI has promise in terms of early identification of risk and in triaging and treating large volumes of patients, significant flaws exist in using AI for this purpose, including bias that may lead to inaccurate assessment and perpetuation of stereotypes. AI efforts to improve risk prediction have thus far met with mixed results; for example, suicide risk prediction by AI has performed no better than simpler models [ ]. Recent improvements in AI technology, however, suggest that as AI improves, it may rapidly become more useful for identifying risk for personalized interventions [ ]. While some efforts are attempting to leverage AI to deliver mental health care, such as in the form of responsive chatbots, there remains a gulf between vision and implementation, as well as in our understanding of the long-term consequences of replacing human compassion, judgment, and experience with AI-generated responses.
Social and Economic Contexts That Shape Mental Health
More foundationally, AI may shift or exacerbate differences in the distribution of assets, which serve as a buffer against mental health challenges. Mental health is sensitive to economic and social contexts. First, AI may transform or modify existing economic contexts, such as distributions of wealth and employment, both of which protect mental health. Unemployment is associated with adverse mental health outcomes long after initial job loss occurs. Potential loss of jobs that may follow AI replacement of specific tasks and industries could lead to psychological sequelae, particularly among workers more vulnerable to job loss, borne disproportionately by populations with fewer assets [ ]. In this way, AI could widen existing economic gaps between groups and exacerbate mental health inequities [ , ], consistent with cumulative inequality theory [ ]. Alternatively, AI may benefit mental health through the creation of new entrepreneurial opportunities and access to capital previously unavailable.
Second, the use of AI, and generative AI in particular with its human-like responses, may shift how people interact with one another. Meaningful social connections and social support serve as protective mechanisms against diminished health. AI may lead to greater polarization and extremism as users consume curated information, and it may further erode the social networks and ties that bond and protect mental health.
Policy, Regulation, and Guardrails
The policy environment we live in, along with the values that drive our policies, will inform how AI can influence mental health. AI may create opportunities to rapidly synthesize seemingly unlimited information about individuals; if used maliciously, these tools can harm the health of populations. Three considerations will therefore be important as we assess how AI may influence population mental health.
First, policies, standards, and regulations should consider how to safeguard sensitive patient information and individuals' privacy. Given rapidly evolving technology, services, and functions, regulation has not yet kept up with the potential use and misuse of targeted data. Particularly in the case of sharing sensitive mental health data, it will be important to ensure that patients are protected from exposure to malefactors who could exploit their mental health status. While the Health Insurance Portability and Accountability Act (HIPAA) protects digital patient health information in certain settings, it does not extend to new health ecosystems such as the medical internet of things and mobile health (mHealth) applications that collect copious data about individuals and their environments. As the landscape of mental health care and well-being evolves, policies to protect privacy will need to evolve with it. While there may be benefits to highly accurate data, such as faster arrival of support following suicide and crisis lifeline calls [ ], costs include loss of patient privacy and potential abuse by bad actors.
Second, alignment on values and implementation of policy to reduce the influence of bias in AI will be critical to ensure that existing gaps are not exacerbated and that groups are not targeted, mistreated, or maligned, whether intentionally or unintentionally. A growing awareness of the importance of algorithmic fairness has prompted discussion on the appropriate use of AI and machine learning; in the absence of thoughtful intervention, existing algorithms could perpetuate bias and heighten health disparities across groups. Given the history of stigma around mental health in particular, alignment by stakeholders across sectors on the values and sensitivities of using AI broadly will be needed to prevent the exacerbation of stigma and mental health disparities.
Third, guardrails around AI-generated responses can prevent harm. Suicide attempts are more likely to be fatal when more lethal means are used; it is possible that users could leverage AI to learn more quickly about means of self-harm or harming others. Ensuring that AI has built-in guardrails to prevent the proliferation of lethal means, and to instead leverage resources that create a pathway to treatment, may help prevent unfavorable outcomes of AI-human engagement.
While AI may pose both risks and benefits to human mental health, the mechanisms by which they occur play out in the real world: mental health and physical health are experienced in real life. Perhaps the best way to prepare for the changes that new tools will bring is to ensure that, even as we develop new digital tools, we continue to invest in the basic infrastructure, assets, and social connections that we know protect mental health—and make human life worth living.
Conflicts of Interest
- GBD 2019 Mental Disorders Collaborators. Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Psychiatry 2022 Feb;9(2):137-150 [https://linkinghub.elsevier.com/retrieve/pii/S2215-0366(21)00395-3] [CrossRef] [Medline]
- Mentis AA, Lee D, Roussos P. Applications of artificial intelligence-machine learning for detection of stress: a critical overview. Mol Psychiatry 2023 Apr 05:1-13 [CrossRef] [Medline]
- Penfold RB, Carrell DS, Cronkite DJ, Pabiniak C, Dodd T, Glass AM, et al. Development of a machine learning model to predict mild cognitive impairment using natural language processing in the absence of screening. BMC Med Inform Decis Mak 2022 May 12;22(1):129 [https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-022-01864-z] [CrossRef] [Medline]
- Negriff S, Lynch FL, Cronkite DJ, Pardee RE, Penfold RB. Using natural language processing to identify child maltreatment in health systems. Child Abuse Negl 2023 Apr;138:106090 [CrossRef] [Medline]
- Graham S, Depp C, Lee EE, Nebeker C, Tu X, Kim H, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep 2019 Nov 07;21(11):116 [https://europepmc.org/abstract/MED/31701320] [CrossRef] [Medline]
- Tornero-Costa R, Martinez-Millana A, Azzopardi-Muscat N, Lazeri L, Traver V, Novillo-Ortiz D. Methodological and quality flaws in the use of artificial intelligence in mental health research: systematic review. JMIR Ment Health 2023 Feb 02;10:e42045 [https://mental.jmir.org/2023//e42045/] [CrossRef] [Medline]
- Shortreed SM, Walker RL, Johnson E, Wellman R, Cruz M, Ziebell R, et al. Complex modeling with detailed temporal predictors does not improve health records-based suicide risk prediction. NPJ Digit Med 2023 Mar 23;6(1):47 [https://doi.org/10.1038/s41746-023-00772-4] [CrossRef] [Medline]
- Gallo WT, Bradley EH, Dubin JA, Jones RN, Falba TA, Teng H, et al. The persistence of depressive symptoms in older workers who experience involuntary job loss: results from the health and retirement survey. J Gerontol B Psychol Sci Soc Sci 2006 Jul 01;61(4):S221-S228 [https://europepmc.org/abstract/MED/16855043] [CrossRef] [Medline]
- Ettman CK, Adam GP, Clark MA, Wilson IB, Vivier PM, Galea S. Wealth and depression: a scoping review. Brain Behav 2022 Mar 08;12(3):e2486 [https://europepmc.org/abstract/MED/35134277] [CrossRef] [Medline]
- Ettman CK, Fan AY, Philips AP, Adam GP, Ringlein G, Clark MA, et al. Financial strain and depression in the U.S.: a scoping review. Transl Psychiatry 2023 May 13;13(1):168 [https://doi.org/10.1038/s41398-023-02460-z] [CrossRef] [Medline]
- Ferraro KF, Shippee TP. Aging and cumulative inequality: how does inequality get under the skin? Gerontologist 2009 Jun 17;49(3):333-343 [https://europepmc.org/abstract/MED/19377044] [CrossRef] [Medline]
- Santos FP, Lelkes Y, Levin SA. Link recommendation algorithms and dynamics of polarization in online social networks. Proc Natl Acad Sci U S A 2021 Dec 14;118(50):e2102141118 [https://europepmc.org/abstract/MED/34876508] [CrossRef] [Medline]
- Theodos K, Sittig S. Health information privacy laws in the digital age: HIPAA doesn't apply. Perspect Health Inf Manag 2021;18(Winter):1l [https://europepmc.org/abstract/MED/33633522] [Medline]
- Purtle J, Chance Ortego J, Bandara S, Goldstein A, Pantalone J, Goldman ML. Implementation of the 988 Suicide & Crisis Lifeline: estimating state-level increases in call demand costs and financing. J Ment Health Policy Econ 2023 Jun 01;26(2):85-95 [Medline]
- Agarwal R, Bjarnadottir M, Rhue L, Dugas M, Crowley K, Clark J, et al. Addressing algorithmic bias and the perpetuation of health inequities: an AI bias aware framework. Health Policy Technol 2023 Mar;12(1):100702 [CrossRef]
Abbreviations
AI: artificial intelligence
HIPAA: Health Insurance Portability and Accountability Act
mHealth: mobile health
Edited by J Torous; submitted 14.06.23; peer-reviewed by M del Pozo Banos, A Bachu; comments to author 20.08.23; revised version received 09.10.23; accepted 29.10.23; published 16.11.23.

Copyright © Catherine K Ettman, Sandro Galea. Originally published in JMIR Mental Health (https://mental.jmir.org), 16.11.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.