TY  - JOUR
AU  - Benda, Natalie
AU  - Desai, Pooja
AU  - Reza, Zayan
AU  - Zheng, Anna
AU  - Kumar, Shiveen
AU  - Harkins, Sarah
AU  - Hermann, Alison
AU  - Zhang, Yiye
AU  - Joly, Rochelle
AU  - Kim, Jessica
AU  - Pathak, Jyotishman
AU  - Reading Turchioe, Meghan
PY  - 2024
DA  - 2024/9/18
TI  - Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study
JO  - JMIR Ment Health
SP  - e58462
VL  - 11
KW  - artificial intelligence
KW  - AI
KW  - mental health
KW  - patient perspectives
KW  - patients
KW  - public survey
KW  - application
KW  - applications
KW  - health care
KW  - health professionals
KW  - somatic issues
KW  - radiology
KW  - perinatal health
KW  - Black
KW  - professional relationship
KW  - patient-health
KW  - autonomy
KW  - risk
KW  - confidentiality
KW  - machine learning
KW  - digital mental health
KW  - computing
KW  - coding
KW  - mobile phone
AB  - Background: The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored. Objective: This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care. Methods: We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values. Results: A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (402/500, 80.4%) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants found the health professional to be responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients’ information. Conclusions: Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI’s accuracy, factors that drive patients’ mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient–health professional relationship is preserved when AI is used.
SN  - 2368-7959
UR  - https://mental.jmir.org/2024/1/e58462
UR  - https://doi.org/10.2196/58462
DO  - 10.2196/58462
ID  - info:doi/10.2196/58462
ER  - 