Abstract
The emergence of generative artificial intelligence (GenAI) in clinical settings—particularly in health documentation and communication—presents a largely unexplored but potentially transformative force in shaping placebo and nocebo effects. These psychosocial phenomena are especially potent in mental health care, where outcomes are closely tied to patients’ expectations, perceived provider competence, and empathy. Drawing on conceptual understanding of placebo and nocebo effects and the latest research, this Viewpoint argues that GenAI may amplify these effects, both positive and negative. Through tone, assurance, and even the rapidity of responses, GenAI-generated text—whether co-written with clinicians or peers, or fully automated—could influence patient perceptions in ways that mental health clinicians may not yet fully anticipate. When embedded in clinician notes or patient-facing summaries, AI language may strengthen the expectancies that underlie placebo effects or, conversely, heighten nocebo effects through subtle cues, inaccuracies, or, potentially, a loss of human nuance. This article explores the implications of AI-mediated clinical communication, particularly in mental health care, emphasizing the importance of transparency, ethical oversight, and psychosocial awareness as these technologies evolve.
JMIR Ment Health 2025;12:e78663. doi: 10.2196/78663
Introduction
Unlike traditional search engines that return lists of web links, the new wave of chatbots—such as OpenAI’s GPT, Google’s Bard, and Bing AI—responds to user queries with conversational, human-like text. Powered by large language models (LLMs), these systems draw on vast datasets and probabilistic algorithms to generate fluent, contextually relevant responses. Their ability to recognize, summarize, and produce complex content has driven rapid adoption across sectors, including health care. This surge in use has sparked urgent debates about the safety, accuracy, and efficiency of generative artificial intelligence (GenAI) in clinical settings []. Despite the accelerating use of GenAI tools in health care, little attention has so far been paid to the subtle psychological mechanisms that may be engaged by AI-generated communication, especially in mental health settings. Indeed, mental health care has always been a space where language not only influences treatment but, in the case of psychotherapy, can constitute it [].
This Viewpoint article offers a conceptual exploration of one psychosocial aspect of care: how GenAI could strengthen placebo or nocebo effects. These psychological phenomena, which can exert positive or negative effects on health outcomes, are deeply rooted in patient expectations, perceptions of clinician competence and empathy, and relational trust in the clinic []. I propose that GenAI may enhance both placebo and nocebo effects. I also explore the ethical implications of this novel pathway in mental health care, examining why GenAI has the potential to shift the focus of longstanding debates on practice dilemmas surrounding placebo and nocebo effects.
How AI Could Influence Placebo and Nocebo Effects
Placebo and nocebo effects have traditionally been thought to arise in brick-and-mortar clinic environments. These effects are shaped by what clinicians say and do, that is, how they present information, express confidence, or deliver treatments []. However, patients’ own expectations—shaped by prior experiences, conditioning, and social learning—can also play a key role in their responses [,]. In the age of digital tools, placebo and nocebo effects may be increasingly shaped through virtual, asynchronous channels [-].
For example, previous research suggests that patients often interpret online clinical notes more deeply than clinicians expect []. Patient-accessible records, though remote and asynchronous, may serve as a channel through which placebo and nocebo effects are shaped [,]. A clearly worded, hopeful, and empathetic summary could enhance placebo effects by shaping more positive expectations about recovery and the therapeutic alliance. In contrast, clinical communications that subtly convey hopelessness or diagnostic pessimism, or which contain significant errors, may contribute to loss of trust and anxiety, in turn fueling nocebo effects. Relatedly, “digital placebo effects” are thought to arise from features of online or app-based health interventions, such as interface design or ease of use, that shape patient expectations about those interventions [,].
It is important to emphasize that direct empirical evidence is still lacking, and much more research is needed to examine placebo and nocebo effects in digital contexts. Nonetheless, there are plausible reasons to believe that GenAI tools might elicit these effects. One reason is the tendency of GenAI tools to evoke anthropomorphism among users [,]: people often unintentionally lapse into perceiving these systems as human-like agents, attributing to them qualities such as empathy, intent, and competence. These human-like qualities can, in turn, shape patient expectations and emotional reactions—key drivers of placebo and nocebo effects—by fostering a sense of optimistic reassurance or, conversely, anxiety.
First, consider placebo effects. One of the most consistent features of GenAI tools is the speed, fluency, and confidence with which they generate text. Whether summarizing a clinical encounter or drafting follow-up instructions, studies suggest that GenAI has the potential to appear more coherent and polished than time-pressured human clinicians []. This fluency may signal competence to patients, perhaps more so than hastily typed or fragmented notes, replete with typos and acronyms, written by overburdened clinicians. The result may be that such tools strengthen placebo effects. For example, studies show that clinical notes generated with GenAI tools—including “listening AI” that transcribes visits directly into electronic health records—are typically longer, and may do a better job of unpacking and explaining medical terms, than those written solely by clinicians [-].
Signals of empathy conveyed by GenAI tools may also augment placebo effects. A growing body of research shows that GenAI often conveys rich cues of empathy, particularly cognitive empathy, reflected in its ability to identify patients’ mental states, and compassion, reflected in expressions of concern and prosocial engagement [,,]. Notably, as researchers emphasize, these empathetic signals contrast sharply with the often detached tone of physician-authored clinical documentation [,]. For example, GenAI tools have demonstrated a surprising ability to interpret emotional states, and in some cases have even outperformed clinicians on measures of social intelligence [,]. A recent study tested whether people could distinguish between therapist- and AI-generated responses. Surprisingly, ChatGPT-4.0 not only proved difficult to distinguish from human therapists but was also rated higher on core therapeutic principles []. In one blinded study, evaluators rated ChatGPT’s written responses to patients’ queries on an online social media forum as empathetic nearly 10 times more often than physicians’ responses [].
On the flip side, GenAI could also amplify nocebo effects by augmenting negative patient expectations. Tools such as ChatGPT are well known for producing authoritative-sounding errors (so-called “hallucinations”), and a high proportion of recommendations elicited by these tools can be egregiously wrong in health contexts []. From a patient perspective, such errors—if detected, or even if feared—might trigger increased anxiety or worry.
Similarly, when AI adoption erodes trust or undermines confidence in providers, it may heighten patient anxiety, amplifying nocebo effects and increasing the risk of adverse health outcomes. For example, some patients may find AI responses, which are designed to be agreeable and user-friendly, obsequious or lacking in authenticity [,]. For these individuals, GenAI chatbots may conceivably prompt cynicism, leading to disengagement or even nocebo effects. According to the latest survey research, public skepticism toward AI in health care remains high. A study published in JAMA Network Open, based on a 2023 national survey, found that nearly two-thirds of American adults reported low trust that health care systems would use AI responsibly, with women showing even greater concern []. Crucially, this distrust was not linked to health literacy or AI knowledge. A year later, the 2024 KFF health tracking survey echoed these concerns: 56% of respondents doubted the accuracy of chatbot-generated health advice, and most remained uncertain whether AI helps or harms people seeking reliable health information; only 21% saw it as helpful, 23% believed it does more harm, and 55% were unsure [].
GenAI: A New Clinical Communicator
A growing body of survey research shows that clinicians, including psychiatrists, are incorporating GenAI tools into clinical practice, with documentation among the most rapidly adopted uses [-]. Patients are also increasingly turning to these tools to seek health information or socioemotional support []. For example, a 2024 KFF health tracking survey found that GenAI chatbots like ChatGPT had emerged as a popular source of health information, with 17% of American adults using them monthly, rising to 25% among those aged under 30 years []. Emerging studies show that patients derive benefits from using these tools to disclose and discuss socially sensitive or emotional concerns, valuing their round-the-clock availability, perceived anonymity, and freedom from human judgment [].
The adoption of GenAI tools as a new communicative “agent” in health care may meaningfully shape placebo and nocebo effects, long-recognized phenomena in clinical research. As noted, the placebo effect is a genuine biopsychosocial phenomenon wherein patient expectations, influenced by factors like provider communication, treatment rationale, and the care environment [,,], can lead to meaningful symptom alleviation, particularly for pain, depression, and anxiety [,]. Conversely, nocebo effects involve negative expectations that lead to adverse outcomes, triggered by concerns about side effects, risks, or aspects of treatment delivery or design [,]. Some researchers propose that psychotherapy may also be a setting where nocebo effects emerge, particularly when clinicians unintentionally convey subtle negative suggestions about treatments, symptoms, or prognoses, or when the therapeutic alliance is weakened by a lack of trust [].
Old Ethics, New Tech: Placebo, Nocebo, and Practice Dilemmas
A long-standing and substantial academic literature has addressed the ethics of placebo and nocebo effects. This body of work has focused almost entirely on how clinicians, primarily physicians [], communicate with patients, examining both verbal and nonverbal cues, with only limited attention given to how clinical artifacts and environmental features may also influence placebo and nocebo effects []. As a result, ethical attention in placebo studies, including in mental health care, has hitherto centered almost exclusively on provider communication. As GenAI tools become increasingly enmeshed in clinical care, especially in mental health settings, their potential to benefit patients, but also to cause harm, must not be ignored [,].
To fully evaluate the ethicality of these tools, they cannot be considered in isolation; it is also imperative to explore how they compare with traditional human-mediated care, including whether they may fare better or worse for patients across a variety of ethical dimensions []. Below, I offer exploratory—rather than exhaustive—reflections on how the advent of GenAI in health care may reframe longstanding ethical dilemmas surrounding placebo and nocebo effects.
Consider the most well-known ethical dilemma pertaining to placebo studies: the administration of placebos in clinical contexts. Here, placebos may be offered to patients with the intention of harnessing beneficial effects, even while doing so deceives the patient. Most medical ethicists strongly oppose this deception, emphasizing that it undermines trust, compromises patient autonomy, and promotes a paternalistic model of care [].
With GenAI, the deception operates at a different level. It’s not just about a physician misleading a patient about a pill; it may involve patients being misled about who they are interacting with. Placebo effects may be amplified through this illusion, but if patients believe they’re communicating with a human when it’s actually a chatbot, trust is at risk. A relevant parallel can be drawn from the well-known “open–hidden” paradigm in placebo research, where patients receiving identical treatments experience significantly stronger effects when the intervention is administered openly by a clinician rather than covertly [,]. These findings underscore how the source and visibility of a therapeutic act, not just its content, can powerfully shape outcomes. Similarly, patients’ perceptions of whether they are interacting with a human or a chatbot may critically influence both placebo and nocebo effects. Although not directly indicative of placebo or nocebo effects, in one notable case, the company Koko, which provides online mental health support, issued a public apology in January 2023 after using ChatGPT to generate emotional responses while misleading users into believing they were written by real people [].
Beyond issues of identity, LLMs themselves can be sources of deception []. Some models have been shown to produce false or misleading responses and, in certain contexts, have even reportedly admitted to doing so []. This raises further ethical concerns about trust, transparency, and oversight in patient-facing AI tools.
Complicating matters, even when patients are aware, or later discover, that they were interacting with AI, placebo effects may be dampened but not entirely lost, and meaningful psychological or therapeutic benefits may still emerge. For example, one study found that AI-generated messages made recipients feel more understood than human-written ones, and that AI was better at detecting emotions []. Once recipients learned the messages came from AI, this effect diminished; yet when AI and human responses were transparently labeled, participants rated them almost identically. This does not resolve the ethical tension between respecting patient autonomy and pursuing beneficence through placebo mechanisms. Nevertheless, the study highlights the complexity of perception, expectation, and ethical transparency in AI-mediated care, suggesting that patients may still derive placebo effects even when they know they’re “talking to” AI.
Another well-recognized ethical challenge lies in balancing the duty of honesty—such as disclosing potential side effects of medications—with the risk of inducing harm through nocebo effects, where negative expectations may contribute to self-fulfilling adverse outcomes []. Here, it has been proposed that one way to resolve the dilemma is to reframe information in ways that remain truthful but foster more positive outcomes [].
For instance, rather than stating of a treatment, “side effects of this drug are quite common—20% of people experience them,” it may be more effective—and equally truthful—to say, “80% of people do not experience side effects.” GenAI tools, with their capacity to rapidly adjust language and tone, offer a promising means of delivering such positively reframed information. Unlike human clinicians, whose communication may be affected by burnout, variability, or limited time, GenAI could help consistently implement ethically sensitive messaging, offering a potential solution to this longstanding dilemma.
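As a concrete illustration, the short sketch below shows how such positive reframing could, in principle, be automated. It is a minimal sketch only, assuming access to a general-purpose LLM API (here, the OpenAI Python client); the model name, prompt wording, and safeguards are illustrative placeholders rather than a validated or clinically approved tool.

```python
# Minimal sketch: positively reframing side-effect information with an LLM.
# Assumptions: the OpenAI Python client (openai>=1.0) is installed and an API
# key is configured in the environment; the model name and prompt wording are
# illustrative placeholders, not a validated clinical tool.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You rewrite side-effect information for patients. Keep every statistic "
    "truthful and unchanged in substance, but frame it positively where "
    "possible (e.g., '20% experience X' becomes '80% do not experience X'). "
    "Do not omit, minimize, or add any risk information."
)


def reframe(side_effect_text: str) -> str:
    """Return a positively framed, factually equivalent version of the text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": side_effect_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(reframe("Side effects of this drug are quite common - 20% of people experience them."))
```

Any output from such a pipeline would, of course, still require clinician review before reaching patients, consistent with the transparency and oversight emphasized throughout this Viewpoint.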
In addition, consider a more overshadowed ethical issue in placebo studies: justice [,]. This concern lies in the unequal distribution of harm that may arise from differences in patient–clinician communication. Research shows that patients from marginalized groups—such as those who are racially minoritized, have lower incomes, higher body weight, or psychiatric diagnoses—often receive lower-quality communication, less empathy, and more negative framing in clinical encounters []. These disparities reduce placebo effects and heighten negative expectations, increasing susceptibility to nocebo effects. Thus, nocebo-related harm may disproportionately affect already disadvantaged populations by compounding existing health inequities, thereby raising serious concerns about fairness and justice in care.
GenAI may help bridge some of the communication disparities seen in face-to-face care by offering more consistent, high-quality interactions that are not influenced by a patient’s appearance []. Unlike face-to-face encounters, GenAI chatbots do not discriminate based on race, body size, or other visible traits. They also provide scalable access to health information, allowing users to engage at their own pace and literacy level, with customizable tone and style. Notably, the 2024 KFF Health Tracking Survey found that Black (23%) and Hispanic (31%) adults were more likely than White adults (16%) to say AI chatbots help them find accurate health information []. Similarly, 51% of Black and 44% of Hispanic adults reported trusting AI-provided advice, compared with just 29% of White adults, highlighting how GenAI may be perceived as a tool for expanding access in historically underserved communities.
However, the promise of GenAI to reduce disparities, including nocebo-related harms, remains contingent on addressing persistent digital divides []. Patients most vulnerable to health inequities may also face barriers to accessing the very tools designed to support them, such as the lack of digital devices, broadband, or sufficient digital literacy to effectively use GenAI chatbots []. Without tackling these structural barriers, GenAI risks reinforcing, rather than resolving, existing inequities.
Conclusion
As GenAI begins to generate the words patients read—whether within patient-accessible records, online messaging with peer supporters or clinicians, or fully automated chatbot dialogs—it may influence psychosocial aspects of care. While the concerns of this Viewpoint are exploratory in nature, the arguments rest on emerging research, and further empirical studies are needed to investigate how GenAI may shape placebo and nocebo effects in real-world mental health care. Understanding how these tools shape patient perceptions is not a peripheral concern; it is central to the future of ethical, effective mental health care. Future research should include qualitative and co-design approaches to center patient voices and explore how patients with mental health conditions perceive, interpret, and respond to GenAI in clinical contexts []. Indeed, while this Viewpoint has focused on mental health, similar mechanisms are likely at play across many areas of medicine, and future work should explore how GenAI may amplify or mitigate placebo and nocebo effects in other clinical contexts, including chronic illness and polypharmacy. Not only what is written but how it is phrased may influence patient behavior and outcomes via placebo and nocebo effects. It is time for the fields of placebo studies and health communication to enter a new era of research.
Conflicts of Interest
None declared.
References
- Blease C, Torous J. ChatGPT and mental healthcare: balancing benefits with risks of harms. BMJ Ment Health. Nov 2023;26(1):e300884. [CrossRef] [Medline]
- Wampold BE, Minami T, Tierney SC, Baskin TW, Bhati KS. The placebo is powerful: estimating placebo effects in medicine and psychotherapy from randomized clinical trials. J Clin Psychol. Jul 2005;61(7):835-854. [CrossRef] [Medline]
- Evers AWM, Colloca L, Blease C, et al. Implications of placebo and nocebo effects for clinical practice: expert consensus. Psychother Psychosom. Aug 16, 2018;87(4):204-210. [CrossRef]
- Locher C, Frey Nascimento A, Kirsch I, Kossowsky J, Meyer A, Gaab J. Is the rationale more important than deception? A randomized controlled trial of open-label placebo analgesia. Pain. Dec 2017;158(12):2320-2328. [CrossRef] [Medline]
- Colloca L, Benedetti F. Placebo analgesia induced by social observational learning. Pain. Jul 2009;144(1-2):28-34. [CrossRef] [Medline]
- Benedetti F, Pollo A, Lopiano L, Lanotte M, Vighetti S, Rainero I. Conscious expectation and unconscious conditioning in analgesic, motor, and hormonal placebo/nocebo responses. J Neurosci. May 15, 2003;23(10):4315-4323. [CrossRef] [Medline]
- Torous J, Firth J. The digital placebo effect: mobile mental health meets clinical psychiatry. Lancet Psychiatry. Feb 2016;3(2):100-102. [CrossRef] [Medline]
- Blease CR, Delbanco T, Torous J, et al. Sharing clinical notes, and placebo and nocebo effects: Can documentation affect patient health? J Health Psychol. Jan 2022;27(1):135-146. [CrossRef] [Medline]
- Blease C, Torous J, McMillan B, Hägglund M, Mandl KD. Generative language models and open notes: exploring the promise and limitations. JMIR Med Educ. Jan 4, 2024;10:e51183. [CrossRef] [Medline]
- Blease C, Kharko A, Hägglund M, et al. The benefits and harms of open notes in mental health: A Delphi survey of international experts. PLoS ONE. 2021;16(10):e0258056. [CrossRef] [Medline]
- Blease C. Sharing online clinical notes with patients: implications for nocebo effects and health equity. J Med Ethics. Aug 2, 2022. [CrossRef] [Medline]
- Blease C. Out of control: how to design digital placebos. Curr Treat Options Psych. Jun 2, 2023;10(3):109-118. [CrossRef]
- Dennett DC. Intentional systems. J Philos. 1971;68(4):87-106. [CrossRef]
- Epley N, Waytz A, Cacioppo JT. On seeing human: a three-factor theory of anthropomorphism. Psychol Rev. Oct 2007;114(4):864-886. [CrossRef] [Medline]
- Baker HP, Dwyer E, Kalidoss S, Hynes K, Wolf J, Strelzow JA. ChatGPT’s ability to assist with clinical documentation: a randomized controlled trial. J Am Acad Orthop Surg. Feb 1, 2024;32(3):123-129. [CrossRef] [Medline]
- Rosenberg GS, Magnéli M, Barle N, et al. ChatGPT-4 generates orthopedic discharge documents faster than humans maintaining comparable quality: a pilot study of 6 cases. Acta Orthop. 2024;95:152-156. [CrossRef]
- Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. Jun 1, 2023;183(6):589-596. [CrossRef] [Medline]
- Allen JW, Earp BD, Koplin J, Wilkinson D. Consent-GPT: is it ethical to delegate procedural consent to conversational AI? J Med Ethics. Jan 23, 2024;50(2):77-83. [CrossRef] [Medline]
- Hatch SG, Goodman ZT, Vowels L, et al. When ELIZA meets therapists: a Turing test for the heart and mind. PLOS Ment Health. 2025;2(2):e0000145. [CrossRef]
- Kharko A, McMillan B, Hagström J, et al. Generative artificial intelligence writing open notes: a mixed methods assessment of the functionality of GPT 3.5 and GPT 4.0. Digit Health. 2024;10:20552076241291384. [CrossRef] [Medline]
- Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023;14:1199058. [CrossRef] [Medline]
- Sufyan NS, Fadhel FH, Alkhathami SS, Mukhadi JYA. Artificial intelligence and social intelligence: preliminary comparison study between AI models and psychologists. Front Psychol. 2024;15:1353022. [CrossRef] [Medline]
- Goddard J. Hallucinations in ChatGPT: a cautionary tale for biomedical researchers. Am J Med. Nov 2023;136(11):1059-1060. [CrossRef] [Medline]
- MacRae I. Beware the obsequious AI assistant. Psychol Today; 2025. URL: https://www.psychologytoday.com/us/blog/silicon-psyche/202504/beware-the-obsequious-ai-assistant [Accessed 2025-06-06]
- Nong P, Platt J. Patients’ trust in health systems to use artificial intelligence. JAMA Netw Open. Feb 3, 2025;8(2):e2460628. [CrossRef]
- Presiado M, Montero A, Lopez L, Hamel L. KFF health misinformation tracking poll: artificial intelligence and health information. KFF; Aug 2024. URL: https://www.kff.org/health-misinformation-and-trust/poll-finding/kff-health-misinformation-tracking-poll-artificial-intelligence-and-health-information/ [Accessed 2024-09-13]
- Blease CR, Locher C, Gaab J, Hägglund M, Mandl KD. Generative artificial intelligence in primary care: an online survey of UK general practitioners. BMJ Health Care Inform. Sep 17, 2024;31(1):e101102. [CrossRef] [Medline]
- Blease C, Worthen A, Torous J. Psychiatrists’ experiences and opinions of generative artificial intelligence in mental healthcare: an online mixed methods survey. Psychiatry Res. Mar 2024;333:115724. [CrossRef] [Medline]
- Shryock T. AI special report: what patients and doctors really think about AI in health care. Med Econ; 2023. URL: https://www.medicaleconomics.com/view/ai-special-report-what-patients-and-doctors-really-think-about-ai-in-health-care [Accessed 2023-08-22]
- Siddals S, Torous J, Coxon A. “It happened to be the perfect thing”: experiences of generative AI chatbots for mental health. npj Mental Health Res. 2024;3(1):48. [CrossRef]
- Howe LC, Goyer JP, Crum AJ. Harnessing the placebo effect: Exploring the influence of physician characteristics on placebo response. Health Psychol. Nov 2017;36(11):1074-1082. [CrossRef] [Medline]
- Fernández-López R, Riquelme-Gallego B, Bueno-Cavanillas A, Khan KS. Influence of placebo effect in mental disorders research: A systematic review and meta-analysis. Eur J Clin Invest. Jul 2022;52(7):e13762. [CrossRef] [Medline]
- Locher C, Koechlin H, Gaab J, Gerger H. The other side of the coin: nocebo effects and psychotherapy. Front Psychiatry. 2019;10:555. [CrossRef] [Medline]
- Annoni M, Buergler S, Stewart-Ferrer S, Blease C. Placebo studies and patient care: where are the nurses? Front Psychiatry. 2021;12:591913. [CrossRef] [Medline]
- Bernstein MH, Locher C, Kube T, Buergler S, Stewart-Ferrer S, Blease C. Putting the ‘art’ into the ‘art of medicine’: the under-explored role of artifacts in placebo studies. Front Psychol. 2020;11:1354. [CrossRef]
- Blease C, Rodman A. Generative artificial intelligence in mental healthcare: an ethical evaluation. Curr Treat Options Psych. Dec 9, 2024;12(1):5. [CrossRef]
- Bok S. The ethics of giving placebos. Sci Am. Nov 1974;231(5):17-23. [CrossRef] [Medline]
- Tondorf T, Kaufmann LK, Degel A, et al. Employing open/hidden administration in psychotherapy research: A randomized-controlled trial of expressive writing. PLoS One. 2017;12(11):e0187400. [CrossRef] [Medline]
- Benedetti F, Maggi G, Lopiano L, et al. Open versus hidden medical treatments: The patient’s knowledge about a therapy affects the therapy outcome. Prevention & Treatment. 2003;6(1):1-19. [CrossRef]
- Ingram D. A mental health tech company ran an AI experiment on real users. Nothing’s stopping apps from conducting more. NBC News; 2023. URL: https://www.nbcnews.com/tech/internet/chatgpt-ai-experiment-mental-health-tech-app-koko-rcna65110 [Accessed 2023-08-13]
- ChatGPT caught lying to developers: new AI model tries to save itself from being replaced and shut down. Econ Times; 2024. URL: https://economictimes.indiatimes.com/magazines/panache/chatgpt-caught-lying-to-developers-new-ai-model-tries-to-save-itself-from-being-replaced-and-shut-down/articleshow/116077288.cms?from=mdr [Accessed 2025-07-02]
- Yin Y, Jia N, Wakslak CJ. AI can help people feel heard, but an AI label diminishes this impact. Proc Natl Acad Sci U S A. Apr 2, 2024;121(14):e2319112121. [CrossRef] [Medline]
- Colloca L, Finniss D. Nocebo effects, patient-clinician communication, and therapeutic outcomes. JAMA. Feb 8, 2012;307(6):567-568. [CrossRef] [Medline]
- Leibowitz KA, Howe LC, Crum AJ. Changing mindsets about side effects. BMJ Open. Jan 2021;11(2):e040134. [CrossRef]
- Friesen P, Blease C. Placebo effects and racial and ethnic health disparities: an unjust and underexplored connection. J Med Ethics. Nov 2018;44(11):774-781. [CrossRef] [Medline]
- Blease C. Sharing online clinical notes with patients: implications for nocebo effects and health equity. J Med Ethics. Jan 2023;49(1):14-21. [CrossRef]
- Inzlicht M, Cameron D, D’Cruz J, Bloom P. In praise of empathic AI. Trends Cogn Sci. Preprint posted online in 2023. URL: https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(23)00289-9 [CrossRef]
- Kohli K, Jain B, Patel TA, Eken HN, Dee EC, Torous J. The digital divide in access to broadband internet and mental healthcare. Nat Mental Health. 2024;2(1):88-95. [CrossRef]
- Jones CMP, Lin CWC, Blease C, Lawson J, Abdel Shaheed C, Maher CG. Time to reflect on open-label placebos and their value for clinical practice. Pain. 2023;164(10):2139-2142. [CrossRef]
Abbreviations
GenAI: generative artificial intelligence
LLM: large language model
Edited by Stephen Schueller; submitted 06.06.25; peer-reviewed by Marco Annoni, Matthew Muldoon, Otse Ogorry; final revised version received 02.07.25; accepted 14.07.25; published 15.08.25.
Copyright © Charlotte Blease. Originally published in JMIR Mental Health (https://mental.jmir.org), 15.8.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.

