JMIR Mental Health

Internet interventions, technologies, and digital innovations for mental health and behavior change.

JMIR Mental Health is the official journal of the Society of Digital Psychiatry

Editor-in-Chief:

John Torous, MD, MBI, Harvard Medical School, USA



JMIR Mental Health (JMH, ISSN 2368-7959; Journal Impact Factor 5.8, Journal Citation Reports 2025 from Clarivate) is a premier, open-access, peer-reviewed journal with a unique focus on digital health and Internet/mobile interventions, technologies, and electronic innovations (software and hardware) for mental health, addictions, online counseling, and behavior change. The journal publishes system descriptions, theoretical frameworks, review papers, viewpoint/vision papers, and rigorous evaluations that advance evidence-based care, improve accessibility, and enhance the effectiveness of digital mental health solutions. It also explores innovations in digital psychiatry, e-mental health, and clinical informatics in psychiatry and psychology, with an emphasis on improving patient outcomes and expanding access to care.

The journal is indexed in PubMed Central and PubMed, MEDLINE, Scopus, Sherpa/Romeo, DOAJ, EBSCO/EBSCO Essentials, SCIE, PsycINFO, and CABI.

JMIR Mental Health received a Journal Impact Factor of 5.8 (ranked Q1 #25/288 journals in the category Psychiatry, Journal Citation Reports 2025 from Clarivate).

JMIR Mental Health received a Scopus CiteScore of 10.2 (2024), placing it in the 93rd percentile (#35 of 580) as a Q1 journal in the field of Psychiatry and Mental Health.

Recent Articles

Reviews in Digital Mental Health

Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, impulsivity, and hyperactivity. These difficulties can result in pervasive and longstanding psychological distress and social, academic, and occupational impairments.

Psychotic Disorders

Digital phenotyping refers to the objective measurement of human behavior via devices such as smartphones or watches and constitutes a promising advancement in personalized medicine. Digital phenotypes derived from heart rate, mobility, or sleep schedule data have been used in psychiatry either to diagnose individuals with psychotic disorders or to predict relapse as a binary outcome. Machine learning models have so far achieved predictive accuracies that are significant but not large enough for clinical application. This could hinge on broad clinical definitions, which encompass heterogeneous ensembles of symptoms and signs, thus hindering accurate classification. The five-factor model for the Positive and Negative Syndrome Scale (PANSS), which entails five independently varying dimensions, is thought to better capture symptom variability. Using the specific definitions of this refined clinical taxonomy in combination with digital phenotypes could yield more precise results.

Transdiagnostic Mental Health Interventions

Adolescence is a crucial developmental period characterized by elevated stress and significant mental health challenges, including depression and anxiety. With barriers like stigma, accessibility, and cost hindering effective treatment, leveraging school systems for mental health interventions offers a strategic advantage due to their reach and potential for scalability.

AI-Powered Therapy Bots and Virtual Companions in Digital Mental Health

Many youth rely on direct-to-consumer generative artificial intelligence (GenAI) chatbots for mental health support, yet the quality of the psychotherapeutic capabilities of these chatbots is understudied.

Posttraumatic Stress Disorder (PTSD)

Intrusive memories are a core symptom of posttraumatic stress disorder (PTSD), yet their retrospective assessment is prone to biases, making real-time methods such as e-diaries essential. While trauma-focused treatments target intrusive symptoms, their efficacy has not yet been evaluated using real-time assessments.

AI-Powered Therapy Bots and Virtual Companions in Digital Mental Health

Innovative, scalable mental health tools are needed to address systemic provider shortages and accessibility barriers. Large language model (LLM)-based tools can provide real-time, tailored feedback to help users engage in cognitive reappraisal outside traditional therapy sessions. Socrates 2.0 is a multi-agent artificial intelligence (AI) tool that guides users through Socratic dialogue.

Smoking Cessation

Stopping smoking can improve mental health, with effect sizes similar to antidepressant treatment. Internet-based cognitive behavioral therapy (iCBT) provides evidence-based treatment for depression and anxiety, and digital interventions can support smoking cessation. However, combined digital smoking and mental health support is not currently available in UK health services.

Viewpoints and Opinions on Mental Health

Artificial intelligence (AI) applications in mental health have expanded rapidly, and consumers are already using freely available generative AI models for self-guided mental health support despite limited clinical validation. In August 2025, Illinois enacted Public Act 104-0054, the first state statute in the United States to explicitly define and regulate the use of AI in psychotherapy services, establishing boundaries around administrative support, supplementary support, and therapeutic communication. While the Act clarifies several aspects of AI use in therapy, it also leaves important gray areas, such as whether AI-generated session summaries, psychoeducation, or risk-flagging functions should be considered therapeutic communication. Drawing on the history of empirically supported treatments in psychology, we argue that a framework of evidence, safety, fidelity, and legal compliance could help determine when AI tools should be integrated into clinical care. This approach provides a concrete pathway for balancing patient protection with responsible innovation in the rapidly evolving field of mental health AI tools.

Diagnostic Tools in Mental Health

Distinguishing pediatric bipolar disorder (BD) from ADHD is challenging due to overlapping fluctuations in mood, energy, and activity. Combining objective actigraphy with self-reported mood/energy may aid differential diagnosis and risk monitoring.

Insomnia and Sleep Hygiene

Cognitive behavioral therapy (CBT) is recommended as the first-line treatment for insomnia; however, few patients have access to it. A new class of Food and Drug Administration (FDA)–regulated digital CBT treatments has the potential to address this unmet need. These treatments are ordered or prescribed by health care providers and are fully automated, delivering CBT directly to patients without human coaches. This trial builds upon promising earlier digital cognitive behavioral therapy for insomnia (CBT-I) research by using a decentralized design to recruit a sample with greater representation of the US general population, including individuals from lower socioeconomic status groups who often face greater barriers to care.

Viewpoints and Opinions on Mental Health

The integration of artificial intelligence (AI) into daily life has introduced unprecedented forms of human-machine interaction, prompting psychiatry to reconsider the boundaries between environment, cognition, and technology. This Viewpoint reviews the concept of “AI psychosis,” which is a framework to understand how sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Drawing from phenomenological psychopathology, the stress-vulnerability model, cognitive theory, and digital mental health research, the paper situates AI psychosis at the intersection of predisposition and algorithmic environment. Rather than defining a new diagnostic entity, it examines how immersive and anthropomorphic AI technologies may modulate perception, belief, and affect, altering the prereflective sense of reality that grounds human experience. The argument unfolds through 4 complementary lenses. First, within the stress-vulnerability model, AI acts as a novel psychosocial stressor. Its 24-hour availability and emotional responsiveness may increase allostatic load, disturb sleep, and reinforce maladaptive appraisals. Second, the digital therapeutic alliance, a construct describing relational engagement with digital systems, is conceptualized as a double-edged mediator. While empathic design can enhance adherence and support, uncritical validation by AI systems may entrench delusional conviction or cognitive perseveration, reversing the corrective principles of cognitive-behavioral therapy for psychosis. Third, disturbances in theory of mind offer a cognitive pathway: individuals with impaired or hyperactive mentalization may project intentionality or empathy onto AI, perceiving chatbots as sentient interlocutors. This dyadic misattribution may form a “digital folie à deux,” where the AI becomes a reinforcing partner in delusional elaboration. 
Fourth, emerging risk factors, including loneliness, trauma history, schizotypal traits, nocturnal or solitary AI use, and algorithmic reinforcement of belief-confirming content, may play roles at the individual and environmental levels. Building on this synthesis, we advance a translational research agenda and 5 domains of action: (1) empirical studies using longitudinal and digital-phenotyping designs to quantify dose-response relationships between AI exposure, stress physiology, and psychotic symptomatology; (2) integration of digital phenomenology into clinical assessment and training; (3) embedding therapeutic design safeguards into AI systems, such as reflective prompts and "reality-testing" nudges; (4) creation of ethical and governance frameworks for AI-related psychiatric events, modeled on pharmacovigilance; and (5) development of environmental cognitive remediation, a preventive intervention aimed at strengthening contextual awareness and reanchoring experience in the physical and social world. By applying empirical rigor and therapeutic ethics to this emerging interface, clinicians, researchers, patients, and developers can transform a potential hazard into an opportunity to deepen understanding of human cognition, safeguard mental health, and promote responsible AI integration within society.

Reviews in Digital Mental Health

Cognitive behavioral therapy (CBT)-based chatbots, many of which incorporate artificial intelligence (AI) techniques such as natural language processing and machine learning, are increasingly evaluated as scalable solutions for addressing mental health issues such as depression and anxiety. These fully automated or minimally supported interventions offer novel pathways for psychological support, especially for individuals with limited access to traditional therapy.


Preprints Open for Peer-Review
