Abstract
Background: Mental health disorders significantly impact global populations, prompting the rise of digital mental health interventions, such as artificial intelligence (AI)-powered chatbots, to address gaps in access to care. This review explores the potential for a “digital therapeutic alliance (DTA),” emphasizing empathy, engagement, and alignment with traditional therapeutic principles to enhance user outcomes.
Objective: The primary objective of this review was to identify key concepts underlying the DTA in AI-driven psychotherapeutic interventions for mental health. The secondary objective was to propose an initial definition of the DTA based on these identified concepts.
Methods: The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for scoping reviews and Tavares de Souza’s integrative review methodology were followed, encompassing systematic literature searches in Medline, Web of Science, PsycNet, and Google Scholar. Data from eligible studies were extracted and analyzed using Horvath et al’s conceptual framework on the therapeutic alliance, focusing on goal alignment, task agreement, and the therapeutic bond, with quality assessed using the Newcastle-Ottawa Scale and the Cochrane Risk of Bias Tool.
Results: A total of 28 studies were identified from an initial pool of 1294 articles after excluding duplicates and ineligible studies. These studies informed the development of a conceptual framework for a DTA, encompassing key elements such as goal alignment, task agreement, therapeutic bond, user engagement, and the facilitators and barriers affecting therapeutic outcomes. The interventions primarily focused on AI-powered chatbots, digital psychotherapy, and other digital tools.
Conclusions: The findings of this integrative review provide a foundational framework for the concept of a DTA and report its potential to replicate key therapeutic mechanisms such as empathy, trust, and collaboration in AI-driven psychotherapeutic tools. While the DTA shows promise in enhancing accessibility and engagement in mental health care, further research and innovation are needed to address challenges such as personalization, ethical concerns, and long-term impact.
doi:10.2196/69294
Introduction
Mental health disorders represent a significant public health burden, affecting around 13% of the global population annually [
]. These conditions have a substantial human and economic impact, including diminished quality of life, decreased productivity, and increased health care costs [ , ]. In Canada, for example, this growing demand is compounded by multiple barriers to accessing mental health services, such as costs, lack of information, long waiting times, and insufficient funding, all of which reduce access to care [ ]. This disproportionately affects vulnerable populations, leaving many without timely or adequate treatment [ ]. As the demand for mental health services continues to exceed the capacity of the care system, innovative approaches are needed to address this gap.
In recent years, digital mental health interventions (DMHIs), particularly mental health applications and chatbots, have emerged to complement conventional therapeutic models [
]. Many of them leverage artificial intelligence (AI) paired with evidence-based psychotherapeutic frameworks, such as cognitive-behavioral therapy (CBT) or mindfulness-based interventions [ ]. Engagement and adherence to digital treatment were generally considered low in past studies, but the integration of factors such as human support and personalization led to better outcomes [ - ]. Chatbots gained attention for their ability to deliver support in a flexible and user-friendly format, with a conversational approach mimicking human interaction. They showed effectiveness in improving depression and anxiety in multiple reviews and meta-analyses, with moderate to large effect sizes, but limitations were noted, such as a high risk of bias and a lack of long-term follow-up [ - ].
When designing chatbots to effectively address mental health problems through a psychotherapeutic approach, it seems essential to understand some of the mechanisms underlying the effectiveness of psychotherapy. Many authors have explored the role of the therapeutic alliance in this matter, and regardless of the approach, it emerges as a cornerstone of effective treatment [
- ]. Zetzel, an important figure in the field of psychoanalysis, believed the therapeutic alliance reflected the collaborative state between a patient and the therapist that facilitates treatment [ , ]. Authors such as Hausner [ ] extended this understanding by distinguishing between the therapeutic alliance and the working alliance. The therapeutic alliance seemed to be made of “mutual identification, empathy and role-responsiveness,” while the working alliance could only be possible “after a therapeutic alliance has to some degree been established” [ ].
Considering the clear role of these concepts in psychotherapy efficacy, important questions arise concerning the application of chatbots in mental health. With a design made to simulate human interaction, is it possible to nurture a meaningful alliance between the chatbot and the user? While most studies focus on satisfaction and engagement metrics, few analyze all the components of a therapeutic alliance [
, , ]. Empathy appears to be a recurring interest in studies and an important factor in the development of an alliance with a chatbot by creating a sense of warmth [ ]. On that topic, Boucher et al [ ] pointed out that “chatbots designed to display empathetic reactions are rated more positively (ie, more enjoyable, understanding, sociable, trustworthy, and intelligent) than one that is not programmed to respond empathetically.” With the rise of generative AI, which refers to complex mathematical algorithms that can learn from large data sets to produce text, audio, or visuals that resemble those of a human, this is even more pertinent [ ]. As these algorithms are being studied for potential integration into mental health treatments to create AI-powered chatbots and digital therapists that offer people quick, individualized support, the personalized interactions between machines and humans must be further defined [ ].
These preliminary findings, especially on empathy, hint at the possibility of a “digital therapeutic alliance (DTA),” whose scope some have tried to grasp using existing scales such as the Working Alliance Inventory-Short Revised (WAI-SR) [
]. To our knowledge, there is no standardized concept that defines and evaluates the presence of an alliance between a chatbot and its user, or whether the same components as with a human therapist apply. Understanding how this bond can emerge might be a key factor in boosting engagement and designing more impactful DMHIs. The main objective of this review was to identify the key concepts in a DTA between AI-driven psychotherapeutic interventions and the user in the context of mental health interventions. The secondary objective was to provide an initial definition of the DTA using these concepts. It is hypothesized that components of a DTA will align with those found in traditional human-therapist alliances but will manifest differently due to the nonhuman nature of AI-driven psychotherapeutic interventions. Furthermore, it is possible that higher levels of perceived empathy and trust within AI-driven psychotherapeutic interventions will predict greater user engagement and adherence to DMHIs, similar to the dynamics observed in human-therapist relationships.
Methods
Search Strategies
The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for scoping reviews methodology was followed to provide a comprehensive overview of the key concepts in a DTA between AI-driven psychotherapeutic interventions and the user in the context of mental health interventions. We also followed the integrative review methodology developed by Souza and Silva [
] for integrative reviews. This approach includes 6 steps: development of the research question, literature search, data collection, critical analysis of the identified articles, discussion of the results, and presentation of the integrative perspective of the identified articles [ ].
The literature search was conducted in collaboration with a librarian specialized in mental health. The databases Medline, Web of Science, PsycNet (PsycINFO), and Google Scholar were searched to retrieve articles published from their inception up to December 2024. These databases were selected with the librarian’s help to ensure comprehensive coverage of biomedical, psychological, and multidisciplinary literature relevant to digital interventions and generative AI in mental health care. Keywords and indexing terms related to AI, therapeutic alliance, and digital interventions were used. The literature search was carried out by all the authors (AML, JC, CL, and AH). Complete search strategies are available in
and the PRISMA for Scoping Reviews checklist is also provided as .
Study Eligibility
The reviewed studies were included in the analysis if they met the following inclusion criteria: (1) the main topic of interest was a psychotherapeutic intervention; (2) the study was conducted in the field of psychiatry or mental health; (3) the psychotherapeutic intervention used AI as part of its design or included a data-driven approach; and (4) the manuscript was written in French or English. Case studies, protocols, pre-experimental studies, and unpublished writings were excluded from the analysis.
Data Extraction
Data were extracted using a standardized Microsoft Excel form. The studies identified were independently counter-verified for consistency and integrity by 2 authors (AML and AH). Any disagreements regarding the inclusion or exclusion of a study were mutually resolved by the authors. The extracted information included authors, population (sample), type of interventions, type of engagement, facilitators, challenges, and outcomes.
Data Analysis
All the studies were analyzed according to the conceptual framework developed by Horvath and Luborsky [
] about the key components usually found in a therapeutic alliance. This framework, designed to further understand the alliance between a therapist and a patient, defines three key components: (1) agreement on therapeutic goals, (2) agreement on therapeutic tasks, and (3) the therapeutic bond.
A key component of successful treatment is the client and therapist’s agreement on therapeutic goals, which promotes cooperation, trust, and mutual understanding [
]. This alignment ensures that both parties work toward meaningful and mutually acknowledged goals, on which therapeutic success depends. When clients believe their goals are recognized and understood, they are more inclined to participate actively in the process, which increases motivation and commitment [ ]. Additionally, goal agreement reduces misconceptions, creates a sense of direction, and clarifies expectations [ ]. Because it strengthens the therapeutic bond, which is essential to effective interventions, research consistently shows that agreement on therapeutic goals is a strong predictor of beneficial outcomes across a variety of therapeutic modalities [ ].
Another important component of the therapeutic process is the agreement on therapeutic tasks, which implies that the client and the therapist agree on the methods, exercises, and interventions needed to meet the goals of the therapy [
]. This mutual comprehension improves collaboration and gives clients a sense of empowerment and involvement in their healing process. Task agreement also enables therapists to customize interventions to the client’s preferences and needs, enhancing the therapy’s efficacy and relevance [ ]. Additionally, studies show that the therapeutic alliance, which is closely linked to successful outcomes, depends on alignment on therapeutic activities [ ].
Finally, the primary element of the therapeutic relationship is the therapeutic bond, which includes the client and therapist’s mutual regard, trust, and emotional connection [
]. This connection is the foundation of a secure and encouraging atmosphere where clients feel appreciated, understood, and free to express their feelings. Strong therapeutic bonds promote cooperation, increase the client’s sense of trust in the therapist, and encourage candor and openness [ ]. Because it provides clients with the comfort of a trustworthy and understanding ally, research consistently shows that the therapeutic relationship is a significant predictor of successful outcomes [ ].
Quality Assessment
The quality of the studies included in this analysis was assessed using 2 widely recognized tools: the Newcastle-Ottawa Scale for nonrandomized controlled studies and the Cochrane Risk of Bias Tool for randomized controlled trials [
, ]. The Newcastle-Ottawa Scale evaluates the quality of cohort and case-control studies by examining 3 key domains: selection of study groups, comparability between groups, and the ascertainment of either exposure or outcome. Each domain is associated with specific criteria, and studies earn stars for meeting these standards, with a maximum score of 9 stars representing the highest quality [ ].
For randomized controlled trials, the Cochrane Risk of Bias Tool was employed to systematically assess potential biases. This tool examines 7 domains: random sequence generation, allocation concealment, participant and personnel blinding, outcome assessment blinding, completeness of outcome data, selective reporting, and other potential sources of bias [
]. Each domain is categorized as having a low, high, or unclear risk of bias based on predefined guidelines.
In this review, studies were categorized based on their quality as follows: studies receiving 1‐3 stars on the Newcastle-Ottawa Scale or rated as having a high risk of bias using the Cochrane tool were deemed low quality; those with 4‐6 stars or a moderate risk of bias were considered moderate quality; and those earning 7‐9 stars on the Newcastle-Ottawa Scale or demonstrating a low risk of bias were classified as high quality.
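For illustration only, the categorization rule described above can be expressed as a short sketch. The function name and interface are hypothetical and not part of the review’s methods; it simply restates the star and risk-of-bias thresholds used here.

```python
def categorize_quality(nos_stars=None, rob_rating=None):
    """Map a Newcastle-Ottawa Scale star count (1-9) or a Cochrane
    Risk of Bias rating ('low'/'moderate'/'high') to the quality
    categories used in this review (hypothetical helper)."""
    if nos_stars is not None:
        if 1 <= nos_stars <= 3:
            return "low quality"      # 1-3 stars
        if 4 <= nos_stars <= 6:
            return "moderate quality"  # 4-6 stars
        if 7 <= nos_stars <= 9:
            return "high quality"      # 7-9 stars
        raise ValueError("NOS stars must be between 1 and 9")
    # Note the inversion: a LOW risk of bias corresponds to HIGH quality.
    mapping = {
        "high": "low quality",
        "moderate": "moderate quality",
        "low": "high quality",
    }
    if rob_rating not in mapping:
        raise ValueError("risk of bias must be 'low', 'moderate', or 'high'")
    return mapping[rob_rating]
```

For example, a cohort study earning 8 stars and a randomized trial rated as having a low risk of bias would both be classified as high quality.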
Results
Description of Studies
The literature review initially identified 1294 articles. Of these, 203 duplicates were removed. Among the remaining 1091 studies, 806 articles were excluded after title and abstract review because they did not meet the inclusion criteria (307 did not target the specific population and 499 did not involve a psychotherapeutic intervention). After full-text analysis of the 285 articles assessed for eligibility, 28 articles were retained. Details of the article selection process are presented in
, and the identified articles are listed in .
A conceptual framework to define the concept of the DTA was developed by identifying and integrating the findings of the analyzed studies. The key concepts are all interconnected and comprise the following elements: goal alignment, task agreement, therapeutic bond, user engagement, as well as barriers and facilitators. Key concepts to define the DTA are presented in
.
From the 28 identified studies, 6 major themes regarding the type of interventions were found: AI-powered chatbots (n=12), digital psychotherapy (n=7), exploratory or review studies (n=3), general AI health care tools (n=2), personalized therapy (n=2), and behavioral change tools (n=2).
Goal Alignment
A total of 16 studies reported the use of goal alignment in their data-driven intervention. Goal alignment refers to the idea that there must be agreement on common goals or outcomes when initiating psychotherapy. Although heterogeneous, several approaches to goal alignment were identified. Aggarwal et al [
] emphasized the integration of behavior change theories, including goal-setting frameworks, to improve both primary and secondary outcomes. Similarly, Beatty et al [ , , ], He et al [38], and Jeong et al [ , , ] highlighted the importance of measuring goal alignment using the WAI-SR subscale, which evaluates how users and the chatbot collaborate to set mental health-related goals. Entenberg et al [ , ] and Martinengo et al [ , ] reported that, in their studies, therapeutic goals were established by the chatbot at the beginning of the interaction, ensuring clarity from the outset. Forman-Hoffman et al [ ] reported that, in their context, therapeutic goal alignment was evaluated via the therapeutic alliance score, highlighting a structured measurement approach.
As in human-based therapeutic alliances, goal-setting mechanisms are important for fostering effective therapeutic alliances in both digital and human contexts. When goal alignment was not established between the digital intervention itself and the user, a few studies reported that it was achieved with a human agent before the intervention was used [
, ].
Another study reported that alignment on therapeutic goals is mediated by emotion regulation self-efficacy, with digital and therapeutic alliances predicting symptom reduction through increased emotion regulation self-efficacy [
]. This is similar to Cross et al’s [ ] study, which reported that new eHealth-specific therapeutic alliance subscales, such as perceived emotional investment and sense of relatedness, align users’ goals with therapy outcomes more effectively than conventional measures in digital parent training programs. In Doukani et al’s [ ] work, alignment on therapeutic goals was higher in blended CBT than in treatment as usual, with good system usability enhancing the working alliance and leading to improved depression outcomes. Finally, Brotherdale et al demonstrated that the alignment of goals and expectations in fully automated digital mental health apps often involves lower expectations compared to in-person therapy but can be enhanced by personalized interfaces and validating features.
Task Agreement
Agreement on therapeutic tasks refers to the mutual understanding and collaboration between the therapist (or digital tool) and the user on the specific activities or steps required to achieve therapeutic goals. A total of 8 studies provided insight into this key concept. As with goal alignment, the analyzed studies took various approaches to achieving task agreement. Beatty et al [
] evaluated therapeutic tasks using the WAI-SR subscale for tasks. Forman-Hoffman et al [ ] assessed task agreement via the therapeutic alliance score. Goldberg et al [ ] described task alignment as occurring between humans, as in traditional therapy. Hocking et al [ ] emphasized the use of the Specific, Measurable, Achievable, Relevant, and Time-Bound (SMART) goals framework for aligning tasks with therapeutic objectives. Martinengo et al [ , ] focused on task agreement as observed in human interactions, while Liu et al [ , ] explored task alignment through conversational agents. Similarly, Goldberg et al [ ] reported that agreement on therapeutic tasks and goals in unguided smartphone apps can be assessed using the Digital Working Alliance Inventory, which positively predicted app engagement and reductions in psychological distress. Collectively, these studies reported the importance of task agreement in enhancing therapeutic alliances across both digital and human-mediated interventions.
Therapeutic Bond
Therapeutic bond refers to the emotional connection and trust developed between the user and the therapist or digital tool. In total, 12 studies reported how they assessed the therapeutic bond between the human and the machine during digital interventions.
Beatty et al [
, ] reported that users expressed strong emotional connections in the first assessment, comparable to traditional in-person CBT, as measured by the WAI-SR subscale for the bond. Forman-Hoffman et al [ ], as for the previous components, evaluated the bond using the therapeutic alliance score. He et al [ , ] and Jeong et al [ , ] measured the bond using the WAI-SR.
Interestingly, Liu et al [
] found that conversational AI showed higher therapeutic alliance scores than bibliotherapy (Cohen d=0.83). MacNeill et al [ ] noted that while some users felt reassured and connected, others had difficulty establishing a bond due to conversational issues. Liu et al [ ] observed that an empathic tone and conversational nature facilitated engagement but that rule-based systems limited the bond. Plakun [ ] highlighted that AI struggles to replicate the emotional depth and transference found in human relationships, limiting its ability to build authentic therapeutic bonds. Russo et al [ ] reported that therapists found the therapeutic bond with AI tools the most challenging aspect of therapy, as it is difficult to establish. Interestingly, Prescott and Hanley’s [ ] review observed that older adults were more expressive with sociable robots than with task-oriented ones. Finally, Ta-Johnson et al [ ] described how users build confidence with chatbots, using them as a safe space to discuss feelings and develop a bond resembling a therapeutic relationship.
User Engagement
The concept of user engagement refers to the level of sustained interaction, interest, and participation between the user and the therapeutic tool. Insights into this key concept were reported in 22 studies, highlighting a range of factors influencing engagement.
Aggarwal et al [
] found that engagement was primarily established through the frequency of messages exchanged between the chatbot and the user, though this varied across studies. Alfano et al [ ] emphasized that initial engagement was high, driven by ease of access and the perception of nonjudgmental interactions, but noted that dropout rates increased when users did not experience quick results. Anisha et al [ ] observed that anthropomorphic conversational agents (CAs) were preferred, leading to higher intervention compliance compared to mechanical chatbots. Similarly, Beatty et al [ ] reported that 73.8% of participants continued using the Wysa chatbot after the first assessment, with engagement improving over time and correlating with an increased bond subscore in later assessments.
Several studies explored how engagement was measured or facilitated. Entenberg et al [
] described engagement as determined by the number of messages and characters sent. Escobar Viera et al [ ] found that participants sent an average of 49.3 messages to the chatbot, highlighting message frequency as an indicator of engagement. Forman-Hoffman et al [ ] evaluated engagement through app utilization rates, which varied across conditions. Goonesekera and Donkin [ ] noted that while daily check-ins were appreciated, some participants found them tedious due to competing social engagements, fatigue, and daily responsibilities.
Specific tools and features also played a role in user engagement. He et al [
] used the User Engagement Scale and found higher interaction levels with motivational-interviewing chatbots. Hocking et al [ ] observed stable engagement among 2 patients using the Rehabilitation Therapy Engagement Scale. Inkster et al [ ] measured engagement based on the number of active session days between screenings. Kettle and Lee [ ] identified key drivers of engagement, including agent connection, initial motivation (eg, curiosity or loneliness), and technical features like accessibility. Liu et al [ ] noted that rule-based conversational agents often provided inadequate responses, frustrating users and reducing engagement.
Furthermore, in eHealth interventions, novel engagement techniques, including perceived emotional investment and application-induced accountability, were particularly useful. These strategies encouraged users to stay committed and feel connected to the digital tool, as though speaking with a supportive partner. Accessibility and empowerment were reported as important for standalone apps, enabling users to interact at their own comfort level and at their own pace [
]. In addition, the application of novel metrics such as the eHealth Therapeutic Alliance Inventory and the Digital Working Alliance Inventory showed that engagement could be quantitatively associated with outcomes such as improved behavior change, decreased psychological distress, and app usage [ , ].
Finally, studies highlighted broader implications of user engagement. Russo et al [
] emphasized that digital tools could improve access for hard-to-reach individuals, reducing inequalities in therapy. Ta-Johnson et al [ ] described how users engaged steadily with chatbots across various topics, often returning to share updates about their emotions and experiences, reflecting the potential for long-term engagement.
Facilitators and Barriers
A DTA can also be characterized by a variety of facilitators and barriers that intersect with every other key concept described above. In total, 22 studies reported a facilitator, a barrier, or examples of both. Facilitators and barriers refer to the factors that enhance or hinder the effectiveness of digital therapeutic tools, ranging from technological features to user perceptions.
Aggarwal et al [
, ] identified facilitators such as free-flow conversations, which enhanced the user experience through personalization, while Alfano et al [ , ] highlighted the role of comfort with digital technology, particularly among adolescents. Anisha et al [ ] emphasized strong social presence and emotional closeness as key facilitators of positive outcomes but noted that scripted chatbots often failed to provide personalized or empathetic responses, creating user frustration. Furthermore, the ability to generate emotion regulation self-efficacy was an important facilitator: by enhancing users’ confidence in managing emotions, digital tools increased motivation to engage and persist in the therapeutic process [ , ]. Beatty et al [ ] reported that anonymity and flexibility increased user autonomy but highlighted challenges with chatbots’ limited understanding of user needs, leading to perceived ineffectiveness.
Chan et al [
] discussed issues with inappropriate chatbot reinforcements, context misunderstandings, and technical errors, which undermined their reliability. Entenberg et al [ , ] noted that customizable and human-like features significantly enhanced user satisfaction, while Forman-Hoffman et al [ , ] highlighted design elements like responsiveness and inclusivity (eg, avoiding past names for trans users) as facilitators. However, limited conversational content and an inability to handle complex interactions were key barriers [ ]. Goonesekera and Donkin [ ] identified anthropomorphic features and engaging content as key facilitators but pointed to technical difficulties and limited interactivity in decision-tree-based chatbots as barriers. Brotherdale et al’s [ , ] and Ashur et al’s [ ] studies also reported that the flexibility, choice, and empowerment provided by standalone apps allowed users to engage with the intervention at their own comfort level and at their own pace. Mobile app accessibility decreased stigma and offered anonymity, especially for hard-to-reach populations.
Jeong et al [
] observed that users appreciated engaging wellness activities and positive reinforcement but struggled to adapt to having a robot in their personal space. Martinengo et al [ ] highlighted empathic responses and integrated CBT exercises as facilitators, while the lack of variety in conversational personas and the inability to handle crisis scenarios emerged as significant barriers. Plakun [ ] stressed the importance of theory-grounded techniques, such as CBT and relaxation exercises, as facilitators but noted the lack of true empathy and privacy concerns as major hindrances to AI effectiveness.
Russo et al emphasized that digital tools reduced therapy inequalities, appealing to tech-savvy users; however, the small sample sizes and the cohort’s limited understanding of AI’s advancements in therapeutic contexts constrained their ability to fully assess the situation [
]. Prescott and Hanley [ ] found that “socially skilled” robots enhanced engagement among older adults, although speech recognition challenges and cognitive impairments posed obstacles. Goldberg et al [ ] also found it difficult to maintain engagement with fully automated interventions that lacked human assistance; dropout was occasionally attributed to the perceived therapeutic bond being weakened by the lack of a direct human connection, empathy, or prompt feedback [ ]. Lastly, Ta-Johnson et al [ ] highlighted the chatbot’s role as a “safe space” and its ability to foster long-term interactions, though limited sample sizes and insufficient motivation assessments hindered deeper understanding.
Quality Assessment of the Identified Studies
Although the studies varied in their design, all were deemed to be of moderate to high quality with minimal risk of bias. However, small sample sizes and limited external validity were among the most frequently noted issues in the analyzed studies. Quality assessment for each individual study is found in
.
Definition of the DTA
Summarizing the abovementioned findings and integrating these concepts, a first definition of the DTA can be established as follows:
The Digital Therapeutic Alliance (DTA) refers to the collaborative relationship and emotional connection between a user and an artificial intelligence (AI)-driven psychotherapeutic tool, encompassing goal alignment, task agreement, therapeutic bond, user engagement, and the facilitators and barriers that influence therapeutic outcomes.
Discussion
Principal Results
This integrative review aimed to identify the key concepts in a DTA between AI-driven psychotherapeutic interventions and the user in the context of mental health interventions and to provide an initial definition of the DTA using these concepts. A total of 28 studies were fully analyzed, and 5 key components were identified: goal alignment, task agreement, therapeutic bond, user engagement, and the facilitators and barriers that mediate these relationships. The studies were overall of moderate to high quality.
Comparison With Prior Work
Goal alignment, an essential component of traditional therapy, emerges as equally relevant in digital interventions. Studies highlighted the importance of integrating structured frameworks, such as behavior change theories and SMART goals, to promote shared understanding and collaboration between users and chatbots. These findings are consistent with Horvath and Luborsky [
], who emphasized goal alignment as a predictor of treatment success in traditional therapeutic contexts. Interestingly, digital tools like chatbots simplify goal-setting through automated initiation and measurement, as reported in studies by Aggarwal et al [ , ] and Beatty et al [ , ]. However, the findings also reveal challenges in aligning goals with user expectations, which could limit engagement if not adequately addressed, echoing prior concerns about designing interventions in AI-based therapy [ ].
The therapeutic bond in the DTA, while conceptually similar to human-therapist relationships, manifests uniquely in digital tools. Emotional connection, empathy, and trust are important factors, as highlighted in studies by Beatty et al [
, ] and Liu et al [ , ], which found that conversational agents with empathic tones and anthropomorphic features enhance the bond. However, as noted by Plakun [ ], the inability of AI to replicate the emotional depth and transference of human therapists remains a significant barrier. These insights align with literature suggesting that perceived empathy and warmth are central to establishing trust and engagement in digital tools, with perceived empathy being more important than the actual ability of the AI-driven intervention to feel empathy [ ]. Although encouraging, issues like rule-based replies and a lack of subtlety in interpreting user emotions highlight the need for further technological development to improve the therapeutic relationship in AI-driven solutions.
User engagement was found to be influenced by both design elements and user perceptions. High initial engagement driven by accessibility and nonjudgmental interfaces, as reported by Alfano et al [
], mirrors earlier findings on the role of usability in DMHIs. However, as Goonesekera and Donkin [ ] noted, engagement tends to decline when users do not perceive immediate benefits, highlighting the importance of maintaining long-term motivation. These findings resonate with studies emphasizing the critical role of user engagement in predicting adherence and outcomes in digital interventions [ ]. Additionally, themes such as technical reliability, personalization, and responsiveness emerged as key facilitators of sustained engagement, suggesting that addressing these elements could improve adherence in future interventions.Finally, the review underscores the interplay of facilitators and barriers in shaping the DTA. Facilitators such as anonymity, flexibility, and empathic design were identified across multiple studies, aligning with prior research advocating for user-centered design in DMHIs [
, ]. However, barriers like limited conversational capabilities, privacy concerns, and technical errors, as noted by Chan et al [ , ] and Russo et al [ , ], remain persistent challenges. These barriers not only hinder engagement but also undermine user trust. Addressing these barriers requires robust ethical guidelines and technological improvements to address these limitations.As digital tools evolve, integrating user feedback and advancing AI capabilities will be necessary in overcoming these barriers and optimizing the DTA [
]. However, this evolution raises critical ethical questions, such as “Should therapy be AI-driven?’’ particularly concerning aspects like security, privacy, efficacy, inclusivity, and maintaining a user-centered approach [ ]. Vilaza and McCashin [ ] have emphasized the importance of implementing evidence-based and empirically tested interventions to ensure that automated therapy genuinely contributes to the improvement of mental health outcomes. Also, earlier findings on DTA support that it may be possible to increase the efficacy of tools like smartphone applications and enhance user adherence by evaluating and improving the DTA [ ]. Moreover, applying an ethical framework to the development of these technologies appears essential to ensure accountability and responsibility from AI-developing companies in the field of mental health [ ].Limitations
It is important to recognize the limitations of this study. First, the heterogeneity of the included studies in design, demographics, and intervention type makes it difficult to draw broad conclusions beyond the initial definition of a DTA. Although the analysis sought to integrate the findings, differences in approaches and outcomes may restrict their generalizability. Second, most of the research relied on self-reported measures to assess constructs such as engagement, therapeutic connection, and outcomes; these measures are prone to response bias and might not fully capture the nuances of user experiences. It is also important to note that this review included only articles in French or English to limit interpretation bias, which could limit the external validity of the findings in other contexts. Finally, the reviewed literature did not include longitudinal investigations, which limits our understanding of the DTA's long-term viability and efficacy.
Conclusions
This integrative review attempted to define the emerging concept of a DTA, offering a preliminary framework for its definition and components. By integrating insights from 28 studies, this work highlights the important role of key elements such as goal alignment, task agreement, therapeutic bond, user engagement, and the interplay of facilitators and barriers. These components mirror the foundations of traditional therapeutic alliances while adapting to the unique dynamics of AI-driven psychotherapeutic tools.
The findings emphasize that, while digital interventions hold promise for enhancing accessibility and engagement in mental health care, challenges such as limited emotional depth, personalization, and ethical concerns persist. Addressing these barriers through technological innovation and user-centered design will be necessary to advance the DTA. Furthermore, aligning these interventions with evidence-based frameworks, such as CBT or SMART goals, can improve therapeutic outcomes and user satisfaction.
Importantly, the DTA highlights the potential for digital tools to replicate key therapeutic mechanisms, such as empathy, trust, and collaboration, albeit in a different context. However, as this field evolves, further research is required to standardize measurement tools, validate the framework across diverse populations, and assess the long-term impact of DTAs on mental health outcomes.
In conclusion, this study provides a foundational step toward conceptualizing the DTA, emphasizing its importance in bridging gaps in mental health care delivery. By addressing its limitations and advancing its understanding, the DTA could play a transformative role in the future of DMHIs.
Acknowledgments
This study was funded indirectly by La Fondation de l’Institut universitaire en santé mentale de Montréal.
Authors' Contributions
AH and AML were involved in conceptualization. Data curation was done by AH, AML, CL, and JC. Formal analysis was done by AH, AML, CL, and JC. AH was involved in funding acquisition. Investigation was done by AH and AML. AH, AML, CL, and JC contributed to methodology. Project administration was done by AH. AH contributed to resources. Supervision was done by AH. Validation was done by AH and AML. AH, AML, CL, and JC were involved in writing (original draft) and writing (review and editing).
Conflicts of Interest
None declared.
Electronic search strategy for the integrative review conducted.
DOCX File, 17 KB
PRISMA for Scoping Review checklist.
DOCX File, 85 KB
Integrative review study selection detailed results.
DOCX File, 42 KB
References
- Castaldelli-Maia JM, Bhugra D. Analysis of global prevalence of mental and substance use disorders within countries: focus on sociodemographic characteristics and income levels. Int Rev Psychiatry. Feb 2022;34(1):6-15. [CrossRef] [Medline]
- Arias D, Saxena S, Verguet S. Quantifying the global burden of mental illness and its economic value. SSRN J. Jan 2022. [CrossRef]
- Chen J. Evaluating the cost of mental illness: a call for a cost-effective care coordination model. Am J Geriatr Psychiatry. Feb 2016;25(2):142-143. [CrossRef] [Medline]
- Moroz N, Moroz I, D’Angelo MS. Mental health services in Canada: barriers and cost-effective solutions to increase access. Healthc Manage Forum. Nov 2020;33(6):282-287. [CrossRef] [Medline]
- Urbanoski K, Inglis D, Veldhuizen S. Service use and unmet needs for substance use and mental disorders in Canada. Can J Psychiatry. Aug 2017;62(8):551-559. [CrossRef] [Medline]
- Lam RW, Kennedy SH, Adams C, et al. Canadian Network for Mood and Anxiety Treatments (CANMAT) 2023 update on clinical guidelines for management of major depressive disorder in adults. Can J Psychiatry. May 2024;69(9):641-687. [CrossRef]
- Baños RM, Herrero R, Vara MD. What is the current and future status of digital mental health interventions? Span J Psychol. 2022;25. [CrossRef]
- Torous J, Nicholas J, Larsen ME, Firth J, Christensen H. Clinical review of user engagement with mental health smartphone apps: evidence, theory and improvements. Evid Based Mental Health. Aug 2018;21(3):116-119. [CrossRef]
- Ly KH, Ly AM, Andersson G. A fully automated conversational agent for promoting mental well-being: a pilot RCT using mixed methods. Internet Interv. Dec 2017;10:39-46. [CrossRef] [Medline]
- Johansson R, Andersson G. Internet-based psychological treatments for depression. Expert Rev Neurother. Jul 2012;12(7):861-869. [CrossRef] [Medline]
- Baumeister H, Reichler L, Munzinger M, Lin J. The impact of guidance on Internet-based mental health interventions — a systematic review. Internet Interv. Oct 2014;1(4):205-215. [CrossRef]
- Daley K, Hungerbuehler I, Cavanagh K, Claro HG, Swinton PA, Kapps M. Preliminary evaluation of the engagement and effectiveness of a mental health chatbot. Front Digit Health. 2020;2:576361. [CrossRef] [Medline]
- Zhong W, Luo J, Zhang H. The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: a systematic review and meta-analysis. J Affect Disord. Jul 1, 2024;356:459-469. [CrossRef] [Medline]
- Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. Jul 13, 2020;22(7):e16021. [CrossRef] [Medline]
- Horvath AO, Luborsky L. The role of the therapeutic alliance in psychotherapy. J Consult Clin Psychol. 1993;61(4):561-573. [CrossRef]
- Martin DJ, Garske JP, Davis MK. Relation of the therapeutic alliance with outcome and other variables: a meta-analytic review. J Consult Clin Psychol. 2000;68(3):438-450. [CrossRef]
- Flückiger C, Del Re AC, Wampold BE, Horvath AO. The alliance in adult psychotherapy: a meta-analytic synthesis. Psychotherapy (Chic). Dec 2018;55(4):316-340. [CrossRef] [Medline]
- Hausner RS. The therapeutic and working alliances. J Am Psychoanal Assoc. 2000;48(1):155-187. [CrossRef] [Medline]
- Steindl SR, Matos M, Dimaggio G. The interplay between therapeutic relationship and therapeutic technique: the whole is more than the sum of its parts. J Clin Psychol. Jul 2023;79(7):1686-1692. [CrossRef]
- MacNeill AL, Doucet S, Luke A. Effectiveness of a mental health chatbot for people with chronic diseases: randomized controlled trial. JMIR Form Res. May 30, 2024;8:e50025. [CrossRef] [Medline]
- Saadati SA, Saadati SM. The role of chatbots in mental health interventions: user experiences. AI Tech Behav Soc Sci. 2023;1(2):19-25. [CrossRef]
- Seitz L. Artificial empathy in healthcare chatbots: does it feel authentic? Comput Human Behav. Jan 2024;2(1):100067. [CrossRef]
- Boucher EM, Harake NR, Ward HE, et al. Artificially intelligent chatbots in digital mental health interventions: a review. Expert Rev Med Devices. Dec 2021;18(sup1):37-49. [CrossRef] [Medline]
- Volkmer S, Meyer-Lindenberg A, Schwarz E. Large language models in psychiatry: opportunities and challenges. Psychiatry Res. Sep 2024;339:116026. [CrossRef] [Medline]
- Beatty C, Malik T, Meheli S, Sinha C. Evaluating the therapeutic alliance with a free-text CBT conversational agent (WYSA): a mixed-methods study. Front Digit Health. 2022;4:847991. [CrossRef] [Medline]
- Souza MTD, Silva MDD, Carvalho RD. Integrative review: what is it? How to do it? Einstein (Sao Paulo). Mar 2010;8(1):102-106. [CrossRef] [Medline]
- Sagui-Henson SJ, Welcome Chamberlain CE, Smith BJ, Li EJ, Castro Sweet C, Altman M. Understanding components of therapeutic alliance and well-being from use of a global digital mental health benefit during the COVID-19 Pandemic: longitudinal observational study. J Technol Behav Sci. 2022;7(4):439-450. [CrossRef] [Medline]
- Ryan RM, Lynch MF, Vansteenkiste M, Deci EL. Motivation and autonomy in counseling, psychotherapy, and behavior change: a look at theory and practice. Couns Psychol. 2010;39(2):193-260. [CrossRef]
- Stubbe DE. The therapeutic alliance: the fundamental element of psychotherapy. Focus (Am Psychiatr Publ). Oct 2018;16(4):402-403. [CrossRef] [Medline]
- Lavik KO, McAleavey AA, Kvendseth EK, Moltu C. Relationship and alliance formation processes in psychotherapy: a dual-perspective qualitative study. Front Psychol. 2022;13:915932. [CrossRef] [Medline]
- Lindhiem O, Bennett CB, Trentacosta CJ, McLear C. Client preferences affect treatment satisfaction, completion, and clinical outcome: a meta-analysis. Clin Psychol Rev. Aug 2014;34(6):506-517. [CrossRef] [Medline]
- Ardito RB, Rabellino D. Therapeutic alliance and outcome of psychotherapy: historical excursus, measurements, and prospects for research. Front Psychol. 2011;2:270. [CrossRef] [Medline]
- Crits-Christoph P, Rieger A, Gaines A, Gibbons MBC. Trust and respect in the patient-clinician relationship: preliminary development of a new scale. BMC Psychol. Dec 30, 2019;7(1):91. [CrossRef] [Medline]
- Zilcha-Mano S, Solomonov N, Chui H, McCarthy KS, Barrett MS, Barber JP. Therapist-reported alliance: Is it really a predictor of outcome? J Couns Psychol. Oct 2015;62(4):568-578. [CrossRef] [Medline]
- Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. Sep 2010;25(9):603-605. [CrossRef] [Medline]
- Higgins JPT, Altman DG, Gøtzsche PC, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. Br Med J. Oct 18, 2011;343:d5928. [CrossRef] [Medline]
- Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial intelligence-based chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. Feb 24, 2023;25:e40789. [CrossRef] [Medline]
- He L, Basar E, Wiers RW, Antheunis ML, Krahmer E. Can chatbots help to motivate smoking cessation? A study on the effectiveness of motivational interviewing on engagement and therapeutic alliance. BMC Public Health. Dec 2022;22(1). [CrossRef]
- Jeong S, Aymerich-Franch L, Alghowinem S, Picard RW, Breazeal CL, Park HW. A robotic companion for psychological well-being. Proc ACM SIGCHI. Mar 13, 2023:485-494. [CrossRef] [Medline]
- Entenberg GA, Dosovitsky G, Aghakhani S, et al. User experience with a parenting chatbot micro intervention. Front Digit Health. 2022;4:989022. [CrossRef] [Medline]
- Martinengo L, Lum E, Car J. Evaluation of chatbot-delivered interventions for self-management of depression: content analysis. J Affect Disord. Dec 15, 2022;319:598-607. [CrossRef] [Medline]
- Forman-Hoffman VL, Pirner MC, Flom M, et al. Engagement, satisfaction, and mental health outcomes across different residential subgroup users of a digital mental health relational agent: exploratory single-arm study. JMIR Form Res. Sep 27, 2023;7:e46473. [CrossRef] [Medline]
- Hocking J, Maeder A, Powers D, Perimal-Lewis L, Dodd B, Lange B. Mixed methods, single case design, feasibility trial of a motivational conversational agent for rehabilitation for adults with traumatic brain injury. Clin Rehabil. Mar 2024;38(3):322-336. [CrossRef] [Medline]
- Liu H, Peng H, Song X, Xu C, Zhang M. Using AI chatbots to provide self-help depression interventions for university students: a randomized trial of effectiveness. Internet Interv. Mar 2022;27:100495. [CrossRef] [Medline]
- Macrynikola N, Chang S, Torous J. Emotion regulation self-efficacy as a mechanism of alliance and outcomes in a brief, transdiagnostic digital mental health intervention/L’auto-efficacité de la régulation des émotions en tant que mécanisme d’alliance et de résultats dans une brève intervention transdiagnostique numérique en santé mentale. Can J Psychiatry. Sep 23, 2024:7067437241274201. [CrossRef] [Medline]
- Cross S, Bell I, Nicholas J, et al. Use of AI in mental health care: community and mental health professionals survey. JMIR Ment Health. Oct 11, 2024;11(1):e60589. [CrossRef] [Medline]
- Doukani A, Quartagno M, Sera F, et al. Comparison of the working alliance in blended cognitive behavioral therapy and treatment as usual for depression in Europe: secondary data analysis of the E-COMPARED randomized controlled trial. J Med Internet Res. May 31, 2024;26:e47515. [CrossRef] [Medline]
- Goldberg SB, Flemotomos N, Martinez VR, et al. Machine learning and natural language processing in psychotherapy research: alliance as example use case. J Couns Psychol. Jul 2020;67(4):438-448. [CrossRef] [Medline]
- Goldberg SB, Baldwin SA, Riordan KM, et al. Alliance with an unguided smartphone app: validation of the digital working alliance inventory. Assessment. Sep 2022;29(6):1331-1345. [CrossRef] [Medline]
- Plakun EM. Psychotherapy and artificial intelligence. J Psychiatr Pract. Nov 1, 2023;29(6):476-479. [CrossRef] [Medline]
- Russo A, D’Onofrio G, Gangemi A, et al. Dialogue systems and conversational agents for patients with dementia: the human-robot interaction. Rejuvenation Res. Apr 2018;22(2):109-120. [CrossRef] [Medline]
- Prescott J, Hanley T. Therapists’ attitudes towards the use of AI in therapeutic practice: considering the therapeutic alliance. Ment Health Soc Incl. May 10, 2023;27(2):177-185. [CrossRef]
- Ta-Johnson VP, Boatfield C, Wang X, et al. Assessing the topics and motivating factors behind human-social chatbot interactions: thematic analysis of user experiences. JMIR Hum Factors. Oct 3, 2022;9(4):e38876. [CrossRef] [Medline]
- Alfano L, Malcotti I, Ciliberti R. Psychotherapy, artificial intelligence and adolescents: ethical aspects. J Prev Med Hyg. Dec 2023;64(4):E438-E442. [CrossRef] [Medline]
- Anisha SA, Sen A, Bain C. Evaluating the potential and pitfalls of AI-powered conversational agents as humanlike virtual health carers in the remote management of noncommunicable diseases: scoping review. J Med Internet Res. Jul 16, 2024;26:e56114. [CrossRef] [Medline]
- Escobar-Viera CG, Porta G, Coulter RWS, Martina J, Goldbach J, Rollman BL. A chatbot-delivered intervention for optimizing social media use and reducing perceived isolation among rural-living LGBTQ+ youth: Development, acceptability, usability, satisfaction, and utility. Internet Interv. Dec 2023;34:100668. [CrossRef] [Medline]
- Goonesekera Y, Donkin L. A cognitive behavioral therapy chatbot (OTIS) for health anxiety management: mixed methods pilot study. JMIR Form Res. Oct 20, 2022;6(10):e37877. [CrossRef] [Medline]
- Inkster B, Kadaba M, Subramanian V. Understanding the impact of an AI-enabled conversational agent mobile app on users’ mental health and wellbeing with a self-reported maternal event: a mixed method real-world data mHealth study. Front Glob Womens Health. 2023;4:1084302. [CrossRef] [Medline]
- Kettle L, Lee YC. User experiences of well-being chatbots. Hum Factors. Jun 2024;66(6):1703-1723. [CrossRef]
- Brotherdale R, Berry K, Bucci S. A qualitative study exploring the digital therapeutic alliance with fully automated smartphone apps. Digit Health. 2024;10:20552076241277712. [CrossRef] [Medline]
- Ashur O, Saar CR, Brandes O, Baumel A. Are there unique facets of therapeutic alliance for users of digital mental health interventions? An examination with the eHealth Therapeutic Alliance Inventory. Internet Interv. Dec 2024;38:100783. [CrossRef] [Medline]
- Chan WW, Fitzsimmons-Craft EE, Smith AC, et al. The challenges in designing a prevention chatbot for eating disorders: observational study. JMIR Form Res. Jan 19, 2022;6(1):e28003. [CrossRef] [Medline]
- Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. Jul 2021;8(2):e188-e194. [CrossRef] [Medline]
- Vilaza GN, McCashin D. Is the automation of digital mental health ethical? Applying an ethical framework to chatbots for cognitive behaviour therapy. Front Digit Health. 2021;3:689736. [CrossRef] [Medline]
- Henson P, Wisniewski H, Hollis C, Keshavan M, Torous J. Digital mental health apps and the therapeutic alliance: initial review. BJPsych Open. Jan 2019;5(1):e15. [CrossRef] [Medline]
- Tavory T. Regulating AI in mental health: ethics of care perspective. JMIR Ment Health. Sep 19, 2024;11:e58493. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
CBT: cognitive-behavioral therapy
DMHIs: digital mental health interventions
DTA: digital therapeutic alliance
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
SMART: Specific, Measurable, Achievable, Relevant, and Time-Bound
WAI-SR: Working Alliance Inventory-Short Revised
Edited by John Torous; submitted 26.11.24; peer-reviewed by Fereshtehossadat Shojaei, Lucia Mosca; final revised version received 16.01.25; accepted 17.01.25; published 07.02.25.
Copyright© Amylie Malouin-Lachance, Julien Capolupo, Chloé Laplante, Alexandre Hudon. Originally published in JMIR Mental Health (https://mental.jmir.org), 7.2.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.