JMIR Mental Health
Internet interventions, technologies, and digital innovations for mental health and behavior change.
JMIR Mental Health is the official journal of the Society of Digital Psychiatry.
Editor-in-Chief:
John Torous, MD, MBI, Harvard Medical School, USA
Impact Factor: 4.8 | CiteScore: 10.8
Recent Articles
While digital therapeutics (DTx) have proliferated, there is little real-world research on the characteristics of providers who recommend DTx, their recommendation behaviors, or the characteristics of patients receiving recommendations in the clinical setting. Objective: To characterize the clinical and demographic characteristics of patients receiving DTx recommendations and to describe provider characteristics and recommendation behaviors.
Digital mental health is a rapidly growing field with an increasing evidence base, owing to its potential scalability and its impact on access to mental health care. Within underfunded service systems, leveraging personal technologies to deliver or support specialized services has garnered attention as a feasible and cost-effective means of improving access. Digital health has also become more relevant as technology ownership among individuals with schizophrenia has risen to levels comparable to the general population. However, less digital health research has been conducted in groups with schizophrenia spectrum disorders than in other mental health conditions, and overall feasibility, efficacy, and clinical integration remain largely unknown.
Depression affects 5% of adults and is a major cause of disability worldwide. Digital psychotherapies offer an accessible means of addressing this burden. This systematic review examines a spectrum of digital psychotherapies for depression, considering both their effectiveness and user perspectives.
Motivational Interviewing (MI) is a therapeutic technique that has been successful in helping smokers reduce smoking but has limited accessibility due to the high cost and low availability of clinicians. To address this, the MIBot project has sought to develop a chatbot that emulates an MI session with a client, with the specific goal of moving an ambivalent smoker toward quitting. One key element of an MI conversation is reflective listening, in which a therapist expresses their understanding of what the client has said by uttering a reflection that encourages the client to continue their thought process. Complex reflections link the client’s responses to relevant ideas and facts to enhance this contemplation. Backward-looking complex reflections (BLCRs) link the client’s most recent response to a relevant selection of the client’s previous statements. Our current chatbot can generate complex reflections, but not BLCRs, using large language models (LLMs) such as GPT-2, which allow the generation of unique, human-like messages customized to client responses. Recent advances in these models, such as the introduction of GPT-4, provide a novel way to generate complex text by feeding the models instructions and conversational history directly, making this a promising approach to generating BLCRs.
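To make the instruction-plus-history prompting approach concrete, here is a minimal sketch of BLCR generation, assuming the OpenAI chat completions API. The prompt wording, the helper name generate_blcr, and the model choice are illustrative assumptions, not the MIBot project's actual implementation.

```python
# Minimal sketch: generating a backward-looking complex reflection (BLCR)
# by prompting a chat model with instructions plus conversational history.
# Assumes the OpenAI Python SDK (>=1.0); prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a motivational interviewing counselor helping a smoker who is "
    "ambivalent about quitting. Write a single backward-looking complex "
    "reflection: link the client's most recent statement to a relevant "
    "earlier statement from the conversation, in one or two sentences."
)

def generate_blcr(history: list[str], latest: str) -> str:
    """Generate a BLCR from earlier client statements and the latest one."""
    earlier = "\n".join(f"- {utterance}" for utterance in history)
    user_prompt = (
        f"Earlier client statements:\n{earlier}\n\n"
        f"Most recent client statement:\n{latest}\n\n"
        "Reflection:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# Example usage:
# reflection = generate_blcr(
#     history=["I smoke when work gets stressful.",
#              "My kids keep asking me to stop."],
#     latest="I tried cutting back last month but it didn't stick.",
# )
```

Passing the full history in the prompt is what distinguishes this setup from single-turn reflection generation: the model can ground the reflection in an earlier statement rather than only echoing the latest one.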
Empathy is a driving force in our connection to others, our mental well-being, and our resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and what role transparency plays in users’ emotional responses.
This article contends that the responsible artificial intelligence (AI) approach—which is the dominant ethics approach ruling most regulatory and ethical guidance—falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of accountability and responsibility of companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI’s impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article delves into the emergence of the new “therapeutic” area facilitated by AI-based bots, which operate without a therapist. The article highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.
National suicide prevention strategies are general population-based approaches to preventing suicide by promoting help-seeking behaviors and implementing interventions. Crisis helplines are one suicide prevention resource available for public use, where individuals experiencing a crisis can talk to a trained volunteer. Samaritans UK operates on a national scale, with a number of branches located within each of the UK’s four countries or regions.
Adolescence and early adulthood are pivotal stages for the onset of mental health disorders and the development of health behaviors. Digital behavioral activation interventions, with or without coaching support, hold promise for addressing risk factors for both mental and physical health problems by offering scalable approaches to expand access to evidence-based mental health support.
The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored.
This paper reports on the growing issues experienced when conducting web-based research. Nongenuine participants, repeat responders, and misrepresentation are common issues in health research that pose significant challenges to data integrity. A summary of existing data on the topic and the different impacts on studies is presented. Seven case studies experienced by different teams within our institutions are then reported, primarily focused on mental health research. Finally, strategies to combat these challenges are presented, including protocol development, transparent recruitment practices, and continuous data monitoring. These strategies and challenges affect the entire research cycle and need to be considered before, during, and after data collection. In the absence of clear guidelines on this topic, this report highlights steps researchers can take to minimize the impact of such challenges on researchers, studies, and the wider research enterprise. Researchers conducting web-based research must put mitigating strategies in place, and reporting on mitigation efforts should be mandatory in grant applications and publications to uphold the credibility of web-based research.
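As an illustration of what continuous data monitoring can look like in practice, below is a hypothetical screening sketch in Python with pandas. The column names (ip_address, email, duration_seconds) and the two-minute completion threshold are assumptions for illustration, not a validated protocol from the paper.

```python
# Hypothetical data-monitoring sketch: flag repeat responders and
# implausibly fast completions in a web survey export.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

MIN_DURATION_SECONDS = 120  # assumed floor for a genuine completion

def flag_suspect_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Add boolean flags for common data-integrity issues."""
    df = df.copy()
    # Possible repeat responders: duplicate IP address or email
    df["dup_ip"] = df["ip_address"].duplicated(keep=False)
    df["dup_email"] = df["email"].str.lower().duplicated(keep=False)
    # Possible nongenuine participants: implausibly fast completion
    df["too_fast"] = df["duration_seconds"] < MIN_DURATION_SECONDS
    # Any flag raised -> route the response to manual review
    df["suspect"] = df[["dup_ip", "dup_email", "too_fast"]].any(axis=1)
    return df

# Example usage during ongoing data collection:
# responses = pd.read_csv("survey_export.csv")
# flagged = flag_suspect_responses(responses)
# print(flagged["suspect"].sum(), "responses flagged for manual review")
```

Automated flags like these are a screening aid, not a verdict; flagged responses would still need human review before exclusion, consistent with the transparent-reporting practices the paper recommends.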