Published on in Vol 12 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/84501, first published .
Accelerating Digital Mental Health: The Society of Digital Psychiatry’s Three-Pronged Road Map for Education, Digital Navigators, and AI

Accelerating Digital Mental Health: The Society of Digital Psychiatry’s Three-Pronged Road Map for Education, Digital Navigators, and AI

Accelerating Digital Mental Health: The Society of Digital Psychiatry’s Three-Pronged Road Map for Education, Digital Navigators, and AI

1Division of Digital Psychiatry, Beth Israel Deaconess Medical Center, 330 Brookline Ave, Boston, MA, United States

2Department of Psychiatry and Mental Health, Faculty of Medicine, Pontificia Universidad Javeriana, Bogota, Colombia

3JMIR Publications, Toronto, ON, Canada

4The Brain and Mind Centre, The University of Sydney, Camperdown, NSW, Australia

5Centre for Addiction and Mental Health, Toronto, ON, Canada

6Department of Psychiatry and Psychotherapy, Center for Mental Health, Immanuel Hospital Rüdersdorf, Brandenburg Medical School Theodor Fontane, Rüdersdorf, Germany

7Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, India

8College of Nursing, University of Nebraska Medical Center, Ohama, NE, United States

9Department of Psychiatry and Behavioral Sciences at McGovern Medical School, UTHealth Houston, Houston, TX, United States

10Department of Psychiatry and Family Medicine, School of Medicine, Centers for American Indian and Alaska Native Health, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO, United States

11Department of Global Health, Faculty of Medicine and Health Sciences, Institute for Life Course Health Research, Stellenbosch University, Stellenbosch, South Africa

Corresponding Author:

John Torous, MD


Digital mental health tools such as apps, virtual reality, and artificial intelligence (AI) hold great promise but continue to face barriers to widespread clinical adoption. The Society of Digital Psychiatry, in partnership with JMIR Mental Health, presents a 3-pronged road map to accelerate their safe, effective, and equitable implementation. First, education: integrate digital psychiatry into core training and professional development through a global webinar series, annual symposium, newsletter, and an updated open-access curriculum addressing AI and the evolving digital navigator role. Second, AI standards: develop transparent, actionable benchmarks and consensus guidance through initiatives like MindBench.ai to assess reasoning, safety, and representativeness across populations. Third, digital navigators: expand structured, train-the-trainer programs that enhance digital literacy, engagement, and workflow integration across diverse care settings, including low- and middle-income countries. Together, these pillars bridge research and practice, advancing digital psychiatry grounded in inclusivity, accountability, and measurable clinical impact.

JMIR Ment Health 2025;12:e84501

doi:10.2196/84501

Keywords



As research on digital mental health tools, such as apps, virtual reality, and artificial intelligence (AI), continues to expand and suggest benefits, clinical interest has also increased, yet the gap between research and real-world implementation remains stark [1]. Toward bridging this divide, the Society of Digital Psychiatry (SODP) and JMIR Mental Health have developed a multipronged international approach to support training, implementation, and standards for digital mental health worldwide. Education and training must be complemented by workforce support and shared development of new digital and AI tools. Toward these aims, the SODP will support three pillars: (1) education and training through accessible learning activities, (2) workforce development through advancing support and standards for Digital Health Navigators, and (3) the development of clinical AI through creating actionable benchmarks and guidance.

The potential of digital mental health tools has become a global focus, with mental health now viewed as the single greatest health concern [2] and technology supporting this growing need [3]. With smartphone ownership now exceeding 90% in many countries and internet access improving across the world [4], technical barriers to digital mental health continue to fall. Although digital literacy barriers persist and are even greater for non-English speakers, they are now well-recognized, with programs demonstrating the capacity to support even patients with serious mental illness, adolescents navigating developmental transitions, and older adults.

Notably, however, even when reimbursement is available [5], multiple reviews have confirmed that adoption of digital solutions—ranging from mobile apps to digital therapeutics and AI—remains low among both patients and clinicians [6,7]. Patient self-help tools have also faced low engagement with data, suggesting that some degree of clinical support is necessary [8,9], especially since the smartphone app market is nonhomogeneous in providing privacy and efficacy information [10]. These challenges are not new, and landmark research by Bamul et al in 2019 [11] demonstrated that user engagement with mental health apps is negligible after 1 week. In 2019, Ng et al [12] also revealed that a lack of consensus in measuring engagement with and uptake of digital mental health tools precludes progress. The literature also points to inequities between high-income countries and low- and middle-income countries [13]. Despite the abundance of apps, only 14.5% are available in Spanish and none have undergone evaluation with Spanish-speaking users [14]. Moreover, digital competency—particularly patients’ and clinicians’ ability to effectively use and evaluate mental health apps—remains difficult to assess, as only a few validated instruments are currently available [15]. In 2015, Ben-Zeev et al [16] called for the need for a technology coach to support integration and engagement. Now, in 2025, these challenges remain largely unsolved, with engagement remaining low, standards nonexistent, and the use of technology coaches, now referred to as digital navigators [17], still minimal.

Although continued research will provide new solutions, additional approaches are necessary to ensure progress in digital mental health implementation can accelerate to meet the need for clinical care. The SODP is implementing a 3-pronged approach (Figure 1). First, expand education to increase clinical uptake and engagement with digital tools. Second, develop standards for AI to catalyze progress by establishing clear benchmarks for success and guiding future innovation. Lastly, provide training and support for digital navigators to facilitate broad implementation.

Figure 1. The Society of Digital Psychiatry’s 3-pronged approach. AI: artificial intelligence.

The SODP is dedicated to promoting clinician education in digital mental health. Today, many medical and nursing schools, peer specialist training programs, psychology and therapy degree programs, and other clinical training pathways do not cover digital mental health. Clinicians are understandably hesitant to recommend digital tools without adequate training or evidence-based guidance. Even when motivated to integrate technology, they frequently experience pressure to preserve existing workflows and not address ongoing barriers such as patient engagement and digital literacy, resulting in technology being deprioritized. These challenges are compounded by the lack of clinic- and system-level infrastructures to facilitate effective implementation [18].

JMIR Mental Health serves as the official journal of the SODP, embodying its shared mission to advance and disseminate cutting-edge research in digital mental health. So, in response to these challenges, the journal and society jointly host a monthly webinar series that highlights emerging innovations and diverse perspectives from invited speakers, featuring experts in digital psychiatry, internet interventions, cutting-edge technologies, mental health policy, ethics, and more [19]. Guests are thoughtfully selected based on their real-world work and ability to present up-to-date content. This series is designed to spotlight rigorous, critical, and forward-looking research that explores how and why new digital mental health tools and technologies work, their limitations, and their place within broader care models. This collaborative effort not only extends the reach of this research to a global audience but also enhances the society’s visibility and representation at major conferences, including the American Psychiatric Association annual meeting. These events provide a crucial platform for authors, editors, reviewers, and readers to engage with both JMIR Mental Health and SODP, learn about their initiatives, and contribute to the broader movement for evidence-based, peer-reviewed research. Complementary outreach activities, such as participation in the American Psychiatric Association’s Mental Health Innovation Zone and the publication of postevent blogs, further extend the journal and society’s reach, ensuring their work bridges the gap between research and clinical implementation. The recordings from these monthly webinars are subsequently repurposed into evergreen video content and made available asynchronously on YouTube and other social media platforms, including LinkedIn, X (formerly Twitter), Facebook, and Bluesky.

A similar approach is applied to the annual symposium, with each session produced and promoted individually to transform valuable knowledge into lasting, accessible educational resources. Furthermore, a quarterly newsletter copublished by JMIR Mental Health and the SODP provides insights into emerging trends and developments in digital mental health, featuring highlighted books, recent webinar recordings, upcoming industry events, and a quarterly question-and-answer section. This newsletter serves as a free, credible resource drafted by the SODP coleaders with citations and references for the 300 to 400 interprofessional members, ensuring they remain informed about advances in the field. As an immediate next step, the SODP will formally support the training and educational goals of clinicians and researchers by collaborating on a core curriculum with major educational bodies and providing a structured training plan for researchers, which is often required when applying for career development awards. The first step will be to release a new curriculum that updates the older one [20], with a focus on the expanded role of the Digital Health Navigator today as well as AI. Training will be accessible online, and coleaders will aim to offer local treatment in their region that will expand with time.


The SODP is also committed to advancing the safe and effective application of AI to improve and expand mental health care. As AI and large language models become increasingly powerful and capable of enhancing aspects of clinical care, the need for clear benchmarks and standards is crucial [21]. Without effective ways to evaluate the risks and benefits of AI in mental health, the field will be unable to self-regulate and pursue the most promising advances. Today, there are over 50 frameworks to evaluate AI, yet none of them are actionable and able to guide care considerations or research advances [22-86]. Leveraging the global reach of the SODP, we will propose clear and actionable recommendations that will improve transparency, understanding, and collaboration in the mental health AI space. Importantly, these recommendations will emphasize the need for representativeness across populations—ensuring that datasets, evaluations, and implementations reflect the diversity of real-world users and contexts—to enhance fairness and generalizability. We will organize a Delphi consensus meeting with SODP members representing stakeholders from around the world. In addition, we will support the creation of benchmarking tools through the MindBench.ai initiative, which will enable any AI program to score against a representative, comprehensive, and rigorous battery of cases.


Ultimately, the SODP is dedicated to promoting the growth of digital navigator roles. Digital navigators are currently conceptualized as human support coaches who can assist with digital access and equity, patient engagement, and the clinical integration of any digital health technology, are increasingly in demand [17,87-93]. However, foundational and scalable training, let alone certification and support for the role, is missing. The SODP will share leading examples of digital navigator programs worldwide to offer training and support through a theme issue in JMIR Mental Health, training on the SODP website, and training for the trainers by SODP coleaders. While each team will need to customize the role for their unique clinical setting and technology use case, there are core skills and competencies that the SODP will support to ensure consistent base training, fidelity to that training, and access to new training and skills as the field evolves while also exploring how best to adapt the digital navigator role into diverse health systems across the globe.

Education, standards, and digital navigator training are not panaceas that negate the need for more research. Instead, each of these 3 efforts can support new research advances by ensuring they reach clinicians quickly through education, are transparent about risks and benefits through standards, and are ready to be implemented through digital navigators. Thus, the ongoing partnership between the SODP and JMIR Mental Health provides an ideal platform to ensure that the best research informs the SODP’s efforts and its efforts to advance that research toward its goal of supporting care.

With this renewed focus, it is essential to note that the SODP’s name refers to the goal to support digital tools, including but not limited to therapy-based ones, and all patients, including those with severe mental illness. Such a goal requires an inclusive team, and the society’s educational focus already reflects this, as well as our leadership team, which represents different roles, professions, and voices from around the world. Looking ahead, the society is also committed to ensuring these efforts are relevant and actionable across the Global South by supporting culturally adapted AI standards, supporting scalable digital navigator training tailored for low- and middle-income country contexts, and fostering equitable research and policy partnerships worldwide.

Membership today stands between 300 and 400 interprofessional members and remains free at this time, with access at [94]. We hope you will join us in supporting the acceleration of digital mental health toward improving outcomes for all patients today and preventing these illnesses for people tomorrow.

Conflicts of Interest

JT is the Editor-in-Chief of JMIR Mental Health, GS is an Associate Editor for JMIR Mental Health, JH was Digital Marketing Coordinator at JMIR Publications at the time of submission, SK is a Managing Editor of JMIR Mental Health at JMIR Publications (as of November 2025). All authors are affiliated with SODP in either a colead or JMIR staff role.

  1. Torous J, Linardon J, Goldberg SB, et al. The evolving field of digital mental health: current evidence and implementation issues for smartphone apps, generative artificial intelligence, and virtual reality. World Psychiatry. Jun 2025;24(2):156-174. [CrossRef] [Medline]
  2. Stinson J. Ipsos Health Service Report 2024: mental health seen as the biggest health issue. Ipsos; Sep 17, 2024. URL: https://www.ipsos.com/en-us/ipsos-health-service-report [Accessed 2025-11-14]
  3. Rudd BN, Beidas RS. Digital mental health: the answer to the global mental health crisis? JMIR Ment Health. Jun 2, 2020;7(6):e18472. [CrossRef] [Medline]
  4. Wike R, Silver L, Fetterolf J, Huang C, Austin S, Clancy L, et al. Internet, smartphone and social media use. Pew Research Center’s Global Attitudes Project; 2022. URL: https:/​/www.​pewresearch.org/​global/​2022/​12/​06/​internet-smartphone-and-social-media-use-in-advanced-economies-2022/​ [Accessed 2022-08-22]
  5. Wüllner S, Hecker T, Flottmann P, Hermenau K. Why do psychotherapists use so little e-mental health in psychotherapy? Insights from a sample of psychotherapists in Germany. PLOS Ment Health. Mar 19, 2025;2(3):e0000270. [CrossRef]
  6. van Kessel R, Roman-Urrestarazu A, Anderson M, et al. Mapping factors that affect the uptake of digital therapeutics within health systems: scoping review. J Med Internet Res. Jul 25, 2023;25:e48000. [CrossRef] [Medline]
  7. Ridout SJ, Ridout KK, Lin TY, Campbell CI. Clinical use of mental health digital therapeutics in a large health care delivery system: retrospective patient cohort study and provider survey. JMIR Ment Health. Oct 2, 2024;11:e56574. [CrossRef] [Medline]
  8. Nowels MA, McDarby M, Brody L, et al. Predictors of engagement in multiple modalities of digital mental health treatments: longitudinal study. J Med Internet Res. Nov 7, 2024;26:e48696. [CrossRef] [Medline]
  9. Jardine J, Nadal C, Robinson S, Enrique A, Hanratty M, Doherty G. Between rhetoric and reality: real-world barriers to uptake and early engagement in digital mental health interventions. ACM Trans Comput-Hum Interact. Apr 30, 2024;31(2):1-59. [CrossRef]
  10. Camacho E, Cohen A, Torous J. Assessment of mental health services available through smartphone apps. JAMA Netw Open. Dec 1, 2022;5(12):e2248784. [CrossRef] [Medline]
  11. Baumel A, Muench F, Edan S, Kane JM. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J Med Internet Res. Sep 25, 2019;21(9):e14567. [CrossRef] [Medline]
  12. Ng MM, Firth J, Minen M, Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv. Jul 1, 2019;70(7):538-544. [CrossRef] [Medline]
  13. Gama B, Laher S. Self-help: a systematic review of the efficacy of mental health apps for low- and middle-income communities. J Technol Behav Sci. Sep 2024;9(3):428-439. [CrossRef]
  14. Muñoz AO, Camacho E, Torous J. Marketplace and literature review of Spanish language mental health apps. Front Digit Health. 2021;3:615366. [CrossRef] [Medline]
  15. Faux-Nightingale A, Philp F, Chadwick D, Singh B, Pandyan A. Available tools to evaluate digital health literacy and engagement with eHealth resources: a scoping review. Heliyon. Aug 2022;8(8):e10380. [CrossRef] [Medline]
  16. Ben-Zeev D, Drake R, Marsch L. Clinical technology specialists. BMJ. Feb 19, 2015;350:h945. [CrossRef] [Medline]
  17. Alon N, Perret S, Cohen A, et al. Digital navigator training to increase access to mental health care in community-based organizations. Psychiatr Serv. Jun 1, 2024;75(6):608-611. [CrossRef] [Medline]
  18. Emerson MR, Johnson DJ, Dinkel D, Thomas R, Culjat C. Feasibility and implementation of depression and trauma-focused mobile apps in integrated primary care clinics: lessons learned from two pilot studies. Fam Syst Health. Jul 3, 2025. [CrossRef] [Medline]
  19. Webinars. Society of Digital Psychiatry. URL: https://www.sodpsych.com/webinars-1 [Accessed 2025-11-14]
  20. Wisniewski H, Gorrindo T, Rauseo-Ricupero N, Hilty D, Torous J. The role of digital navigators in promoting clinical care and technology integration into practice. Digit Biomark. 2020;4(Suppl 1):119-135. [CrossRef] [Medline]
  21. Malgaroli M, Schultebraucks K, Myrick KJ, et al. Large language models for the mental health community: framework for translating code to care. Lancet Digit Health. Apr 2025;7(4):e282-e285. [CrossRef] [Medline]
  22. Reddy S, Rogers W, Makinen VP, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. Oct 2021;28(1):e100444. [CrossRef] [Medline]
  23. Goktas P, Grzybowski A. Shaping the future of healthcare: ethical clinical challenges and pathways to trustworthy AI. J Clin Med. Feb 27, 2025;14(5):1605. [CrossRef] [Medline]
  24. Solanki P, Grundy J, Hussain W. Operationalising ethics in artificial intelligence for healthcare: a framework for AI developers. AI Ethics. Feb 2023;3(1):223-240. [CrossRef]
  25. Palaniappan K, Lin EYT, Vogel S. Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. Healthcare (Basel). Feb 28, 2024;12(5):562. [CrossRef] [Medline]
  26. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. Mar 1, 2020;27(3):491-497. [CrossRef] [Medline]
  27. Ullagaddi P. Cross-regional analysis of global AI healthcare regulation. JCC. 2025;13(5):66-83. [CrossRef]
  28. van de Sande D, Van Genderen ME, Smit JM, et al. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter. BMJ Health Care Inform. Feb 2022;29(1):e100495. [CrossRef] [Medline]
  29. Lund B, Orhan Z, Mannuru NR, et al. Standards, frameworks, and legislation for artificial intelligence (AI) transparency. AI Ethics. Aug 2025;5(4):3639-3655. [CrossRef]
  30. Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. Jul 2021;8(2):e188-e194. [CrossRef] [Medline]
  31. Crossnohere NL, Elsaid M, Paskett J, Bose-Brill S, Bridges JFP. Guidelines for artificial intelligence in medicine: literature review and content analysis of frameworks. J Med Internet Res. Aug 25, 2022;24(8):e36823. [CrossRef] [Medline]
  32. Scott I, Carter S, Coiera E. Clinician checklist for assessing suitability of machine learning applications in healthcare. BMJ Health Care Inform. Feb 2021;28(1):e100251. [CrossRef] [Medline]
  33. Park Y, Jackson GP, Foreman MA, Gruen D, Hu J, Das AK. Evaluating artificial intelligence in medicine: phases of clinical research. JAMIA Open. Oct 2020;3(3):326-331. [CrossRef] [Medline]
  34. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK, SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Lancet Digit Health. Oct 2020;2(10):e537-e548. [CrossRef] [Medline]
  35. Vasey B, Nagendran M, Campbell B, et al. Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ. May 18, 2022;377:e070904. [CrossRef] [Medline]
  36. Lekadir K, Frangi AF, Porras AR, et al. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. Feb 5, 2025;388:e081554. [CrossRef] [Medline]
  37. Hernandez-Boussard T, Bozkurt S, Ioannidis JPA, Shah NH. MINIMAR (MINimum Information for Medical AI Reporting): developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc. Dec 9, 2020;27(12):2011-2015. [CrossRef] [Medline]
  38. Jagtiani P, Karabacak M, Margetis K. A concise framework for fairness: navigating disparate impact in healthcare AI. J Med Artif Intell. Dec 2025;8:51-51. [CrossRef]
  39. Stade EC, Eichstaedt JC, Kim JP, Stirman SW. Readiness evaluation for AI-mental health deployment and implementation (READI): a review and proposed framework. Technol Mind Behav. 2025;6(2). [CrossRef] [Medline]
  40. Golden A, Aboujaoude E. The framework for AI tool assessment in mental health (FAITA - Mental Health): a scale for evaluating AI-powered mental health tools. World Psychiatry. Oct 2024;23(3):444-445. [CrossRef] [Medline]
  41. Sherwani SK, Khan Z, Samuel J, Kashyap R, Patel KG. ESHRO: an innovative evaluation framework for AI-driven mental health chatbots. SSRN. Preprint posted online on Jun 19, 2025. [CrossRef]
  42. Desage C, Bunge B, Bunge EL. A revised framework for evaluating the quality of mental health artificial intelligence-based chatbots. Procedia Comput Sci. 2024;248:3-7. [CrossRef]
  43. Schwabe D, Becker K, Seyferth M, Klaß A, Schaeffter T. The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review. NPJ Digit Med. Aug 3, 2024;7(1):203. [CrossRef] [Medline]
  44. van der Vegt AH, Scott IA, Dermawan K, Schnetler RJ, Kalke VR, Lane PJ. Implementation frameworks for end-to-end clinical AI: derivation of the SALIENT framework. J Am Med Inform Assoc. Aug 18, 2023;30(9):1503-1515. [CrossRef] [Medline]
  45. Ji M, Genchev GZ, Huang H, Xu T, Lu H, Yu G. Evaluation framework for successful artificial intelligence-enabled clinical decision support systems: mixed methods study. J Med Internet Res. Jun 2, 2021;23(6):e25929. [CrossRef] [Medline]
  46. Mohseni S, Zarei N, Ragan ED. A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Trans Interact Intell Syst. Dec 31, 2021;11(3-4):1-45. [CrossRef]
  47. Ray PP. Benchmarking, ethical alignment, and evaluation framework for conversational AI: advancing responsible development of ChatGPT. BenchCouncil Transactions on Benchmarks, Standards and Evaluations. Sep 2023;3(3):100136. [CrossRef]
  48. Callahan A, McElfresh D, Banda JM, et al. Standing on FURM Ground: a framework for evaluating fair, useful, and reliable AI models in health care systems. NEJM Catalyst. Sep 18, 2024;5(10):CAT-C24. [CrossRef]
  49. Muley A, Muzumdar P, Kurian G, Basyal GP. Risk of AI in healthcare: a comprehensive literature review and study framework. AJMAH. 2023;21(10):276-291. [CrossRef]
  50. Abbasian M, Khatibi E, Azimi I, et al. Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI. NPJ Digit Med. Mar 29, 2024;7(1):82. [CrossRef] [Medline]
  51. Arora RK, Wei J, Hicks RS, et al. HealthBench: evaluating large language models towards improved human health. arXiv. Preprint posted online on May 13, 2025. [CrossRef]
  52. Collins GS, Dhiman P, Andaur Navarro CL, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. Jul 9, 2021;11(7):e048008. [CrossRef] [Medline]
  53. Norgeot B, Quer G, Beaulieu-Jones BK, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med. Sep 2020;26(9):1320-1324. [CrossRef] [Medline]
  54. Sounderajah V, Ashrafian H, Golub RM, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open. Jun 28, 2021;11(6):e047709. [CrossRef] [Medline]
  55. Guni A, Sounderajah V, Whiting P, Bossuyt P, Darzi A, Ashrafian H. Revised tool for the quality assessment of diagnostic accuracy studies using AI (QUADAS-AI): protocol for a qualitative study. JMIR Res Protoc. Sep 18, 2024;13(1):e58202. [CrossRef] [Medline]
  56. Muralidharan V, Burgart A, Daneshjou R, Rose S. Recommendations for the use of pediatric data in artificial intelligence and machine learning ACCEPT-AI. NPJ Digit Med. Sep 6, 2023;6(1):166. [CrossRef] [Medline]
  57. APA’s AI tool guide for practitioners. American Psychological Association. 2024. URL: https:/​/www.​apaservices.org/​practice/​business/​technology/​tech-101/​evaluating-artificial-intelligence-tool [Accessed 2022-08-22]
  58. Augmented intelligence development, deployment, and use in health care. American Medical Association. 2024. URL: https://www.ama-assn.org/system/files/ama-ai-principles.pdf [Accessed 2025-08-22]
  59. Kwong JCC, Khondker A, Lajkosz K, et al. APPRAISE-AI tool for quantitative evaluation of AI studies for clinical decision support. JAMA Netw Open. Sep 5, 2023;6(9):e2335377. [CrossRef] [Medline]
  60. Artificial intelligence-enabled device software functions: lifecycle management and marketing submission recommendations. US Food and Drug Administration. 2025. URL: https:/​/www.​fda.gov/​regulatory-information/​search-fda-guidance-documents/​artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing [Accessed 2025-08-22]
  61. Artificial intelligence in software as a medical device. US Food and Drug Administration. Mar 25, 2025. URL: https://www.fda.gov/media/145022/download?attachment [Accessed 2025-08-22]
  62. The regulation of artificial intelligence as a medical device. Regulatory Horizons Council. 2022. URL: https:/​/assets.​publishing.service.gov.uk/​media/​6384bf98e90e0778a46ce99f/​RHC_regulation_of_AI_as_a_Medical_Device_report.​pdf [Accessed 2025-08-22]
  63. Software and AI as a medical device change programme roadmap. GOV.UK. URL: https://tinyurl.com/3mpkf7ey [Accessed 2025-08-22]
  64. The EU artificial intelligence act: up-to-date developments and analyses of the EU AI act. EU Artificial Intelligence Act. URL: https://artificialintelligenceact.eu/ [Accessed 2025-08-22]
  65. Kenny LM, Nevin M, Fitzpatrick K. Ethics and standards in the use of artificial intelligence in medicine on behalf of the Royal Australian and New Zealand College of Radiologists. J Med Imaging Radiat Oncol. Aug 2021;65(5):486-494. [CrossRef] [Medline]
  66. Han Y, Ceross A, Bergmann J. Regulatory frameworks for AI-enabled medical device software in China: comparative analysis and review of implications for global manufacturer. JMIR AI. Jul 29, 2024;3:e46871. [CrossRef] [Medline]
  67. de Freitas Júnior AR, Zapolla LF, Cunha PFN. The regulation of artificial intelligence in Brazil. ILR Review. Oct 2024;77(5):869-878. [CrossRef]
  68. Singapore national AI strategy. Smart Notion Singapore. URL: https://www.smartnation.gov.sg/files/publications/national-ai-strategy.pdf [Accessed 2025-08-22]
  69. Singapore’s approach to AI governance. Personal Data Protection Commision Singapore. URL: https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework [Accessed 2025-08-22]
  70. Artificial Intelligence in Healthcare Guidelines (AIHGIe). Ministry of Health Singapore, Health Sciences Authority, Integrated Health Information Systems; 2021. URL: https:/​/isomer-user-content.​by.gov.sg/​3/​9c0db09d-104c-48af-87c9-17e01695c67c/​1-0-artificial-in-healthcare-guidelines-(aihgle)_publishedoct21.​pdf [Accessed 2025-08-22]
  71. Neary M, Fulton E, Rogers V, et al. Think FAST: a novel framework to evaluate fidelity, accuracy, safety, and tone in conversational AI health coach dialogues. Front Digit Health. 2025;7:1460236. [CrossRef] [Medline]
  72. Pre-market guidance for machine learning-enabled medical devices. Health Canada. 2025. URL: https://tinyurl.com/5avvxpbc [Accessed 2025-08-22]
  73. Good machine learning practice for medical device development: guiding principles. Health Canada; 2021. URL: https://tinyurl.com/2tndf57u [Accessed 2025-08-22]
  74. Bedi S, Cui H, Fuentes M, et al. MedHELM: holistic evaluation of large language models for medical tasks. arXiv. Preprint posted online on May 26, 2025. [CrossRef]
  75. 7001-2021 - IEEE standard for transparency of autonomous systems. IEEE Xplore. 2022. URL: https://ieeexplore.ieee.org/document/9726144 [Accessed 2025-11-14]
  76. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing (HTI-1) final rule. Internet Archive. Assistant Secretary for Technology Policy; 2025. URL: https:/​/web.​archive.org/​web/​20250813103115/​https:/​/www.​healthit.gov/​topic/​laws-regulation-and-policy/​health-data-technology-and-interoperability-certification-program [Accessed 2025-08-22]
  77. Chmielinski K, Newman S, Kranzinger CN, et al. The CLeAR Documentation Framework for AI Transparency: Recommendations for Practitioners & Context for Policymakers. Harvard Kennedy School Shorenstein Center Discussion Paper; 2024. URL: https:/​/shorensteincenter.​org/​resource/​clear-documentation-framework-ai-transparency-recommendations-practitioners-context-policymakers/​ [Accessed 2025-11-14]
  78. Huang Y, Miao X, Zhang R, et al. Training, testing and benchmarking medical AI models using Clinical AIBench. BenchCouncil Transactions on Benchmarks, Standards and Evaluations. Mar 2022;2(1):100037. [CrossRef]
  79. Azad R, Bai F, Bassi P, et al. Touchstone benchmark: are we on the right way for evaluating AI algorithms for medical segmentation? Presented at: Advances in Neural Information Processing Systems 37; Dec 10-16, 2024:15184-15201; Vancouver, BC. URL: http://www.proceedings.com/79017.html [CrossRef]
  80. Cai J, Chen P, Deng Z, et al. GMAI-MMBench: a comprehensive multimodal evaluation benchmark towards general medical AI. Presented at: Advances in Neural Information Processing Systems 37; Dec 10-16, 2024. [CrossRef]
  81. Yuan R, Hao W, Yuan C. Benchmarking AI in mental health: a critical examination of LLMs across key performance and ethical metrics. Presented at: International Conference on Pattern Recognition; Dec 1-5, 2024:351-366; Kolkata, India. [CrossRef]
  82. Schmidgall S, Ziaei R, Harris C, Reis E, Jopling J, Moor M. AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments. arXiv. Preprint posted online on May 13, 2024. [CrossRef]
  83. Xu J, Lu L, Peng X, et al. Data set and benchmark (MedGPTEval) to evaluate responses from large language models in medicine: evaluation development and validation. JMIR Med Inform. Jun 28, 2024;12(1):e57674. [CrossRef] [Medline]
  84. Lin Z, Ouyang Y, Wang H, et al. MedJourney: benchmark and evaluation of large language models over patient clinical journey. Presented at: Advances in Neural Information Processing Systems 37; Dec 10-16, 2024:87621-87646; Vancouver, BC. [CrossRef]
  85. Bhatt D, Ayyagari S, Mishra A. A scalable approach to benchmarking the in-conversation differential diagnostic accuracy of a health AI. arXiv. Preprint posted online on Dec 17, 2024. [CrossRef]
  86. Vollmer S, Mateen BA, Bohner G, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ. Mar 20, 2020;368:l6927. [CrossRef] [Medline]
  87. Chen K, Lane E, Burns J, Macrynikola N, Chang S, Torous J. The digital navigator: standardizing human technology support in app-integrated clinical care. Telemed J E Health. Jun 2024;30(7):e1963-e1970. [CrossRef] [Medline]
  88. Pape C, Kassam I, Shin HD, et al. The emergent role of digital navigators: case examples. In: From Insights to Action: Empowering Health. IOS Press; 2025:22-26. [CrossRef]
  89. Gorban C, McKenna S, Chong MK, et al. Building mutually beneficial collaborations between digital navigators, mental health professionals, and clients: naturalistic observational case study. JMIR Ment Health. Nov 6, 2024;11:e58068. [CrossRef] [Medline]
  90. Schwarz J, Chen K, Dashti H, et al. Piloting digital navigators to promote acceptance and engagement with digital mental health apps in German outpatient care: protocol for a multicenter, single-group, observational, mixed methods interventional study (DigiNavi). JMIR Res Protoc. Sep 25, 2025;14:e67655. [CrossRef] [Medline]
  91. Emerson MR, Dinkel D, Watanabe-Galloway S, Torous J, Johnson DJ. Adaptation of digital navigation training for integrated behavioral health providers: Interview and survey study. Transl Behav Med. Aug 11, 2023;13(8):612-623. [CrossRef] [Medline]
  92. Steidtmann D, McBride S, Pew C, et al. Implementation of a computer-assisted cognitive-behavioral therapy program for adults with depression and anxiety in an outpatient specialty mental health clinic. Mhealth. 2025;11:10. [CrossRef] [Medline]
  93. Choudhary S, Mehta UM, Naslund J, Torous J. Translating digital health into the real world: the evolving role of digital navigators to enhance mental health access and outcomes. J technol behav sci. [CrossRef]
  94. Society of Digital Psychiatry. URL: https://www.sodpsych.com/ [Accessed 2025-11-14]

Edited by Tiffany Leung; This is a non–peer-reviewed article. submitted 20.Sep.2025; accepted 28.Oct.2025; published 27.Nov.2025.

Copyright

© John Torous, Kathryn Taylor Ledley, Carla Gorban, Gillian Strudwick, Julian Schwarz, Soumya Choudhary, Margaret Emerson, Michelle Patriquin, Allison Dempsey, Jason Bantjes, Laura Ospina-Pinillos, Jennie Hornick, Shruti Kochhar. Originally published in JMIR Mental Health (https://mental.jmir.org), 27.Nov.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.