Published in Vol 9, No 7 (2022): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/34254.
The Impact of Mobile Technology-Delivered Interventions on Youth Well-being: Systematic Review and 3-Level Meta-analysis

Review

1Department of Psychology, Loyola University Chicago, Chicago, IL, United States

2Department of Psychology, Fordham University, Bronx, NY, United States

3Department of Psychology, University of Massachusetts Boston, Boston, MA, United States

4Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands

*these authors contributed equally

Corresponding Author:

Colleen S Conley, PhD

Department of Psychology

Loyola University Chicago

1032 W. Sheridan Road

Chicago, IL, 60660

United States

Phone: 1 7735083603

Email: cconley@luc.edu


Background: Rates of mental health problems among youth are high and rising, whereas treatment seeking in this population remains low. Technology-delivered interventions (TDIs) appear to be promising avenues for broadening the reach of evidence-based interventions for youth well-being. However, to date, meta-analytic reviews on youth samples have primarily been limited to computer and internet interventions, whereas meta-analytic evidence on mobile TDIs (mTDIs), largely comprising mobile apps for smartphones and tablets, have primarily focused on adult samples.

Objective: This study aimed to evaluate the effectiveness of mTDIs for a broad range of well-being outcomes in unselected, at-risk, and clinical samples of youth.

Methods: The systematic review used 5 major search strategies to identify 80 studies evaluating 83 wellness- and mental health-focused mTDIs for 19,748 youth (sample mean ages 2.93-26.25 years). We conducted a 3-level meta-analysis on the full sample and a subsample of the 38 highest-quality studies.

Results: Analyses demonstrated significant benefits of mTDIs for youth both at posttest (g=0.27) and follow-up (range 1.21-43.14 weeks; g=0.26) for a variety of psychosocial outcomes, including general well-being and distress, symptoms of diverse psychological disorders, psychosocial strategies and skills, and health-related symptoms and behaviors. Effects were significantly moderated by the type of comparison group (strongest for no intervention, followed by inert placebo or information-only, and only marginal for clinical comparison) but only among the higher-quality studies. With respect to youth characteristics, neither gender nor pre-existing mental health risk level (not selected for risk, at-risk, or clinical) moderated effect sizes; however, effects increased with the age of youth in the higher-quality studies. In terms of intervention features, mTDIs in these research studies were effective regardless of whether they included various technological features (eg, tailoring, social elements, or gamification) or support features (eg, orientation, reminders, or coaching), although the use of mTDIs in a research context likely differs in important ways from their use when taken up through self-motivation, parent direction, peer suggestion, or clinician referral. Only mTDIs with a clear prescription for frequent use (ie, at least once per week) showed significant effects, although this effect was evident only in the higher-quality subsample. Moderation analyses did not detect statistically significant differences in effect sizes based on the prescribed duration of mTDI use (weeks or sessions), and reporting issues in primary studies limited the analysis of completed duration, thereby calling for improved methodology, assessment, and reporting to clarify true effects.

Conclusions: Overall, this study’s findings demonstrate that youth can experience broad and durable benefits of mTDIs, delivered in a variety of ways, and suggest directions for future research and development of mTDIs for youth, particularly in more naturalistic and ecologically valid settings.

JMIR Ment Health 2022;9(7):e34254

doi:10.2196/34254


Youth Mental Health Needs

Rates of mental health problems among youth, including children, adolescents, and young adults, are alarmingly high and appear to have risen in recent decades [1,2]. Rates of impulse control disorders (eg, attention deficit hyperactivity disorder and conduct disorder) and some anxiety disorders begin rising as early as the age of 4 years, with sharp increases in the prevalence of anxiety, mood, and substance use disorders across adolescence and young adulthood [3]. Indeed, nationally representative samples of adolescents and young adults show that >40% of youth in this age range experience psychological disorders in a given year and lifetime prevalence rates are estimated to approach 60% [4,5]. Beyond diagnosable mental disorders, many youths struggle with a diverse array of subclinical emotional, behavioral, interpersonal, and academic challenges [6-10].

Despite the high prevalence of mental health problems among youth, only one-third to one-half of those in need receive mental health treatment [11-14], and treatment rates are even lower among low-income youth [15] and those with marginalized racial and ethnic identities [13,16,17]. This treatment gap is also evident among college students [11], which is notable given that these youth often have convenient access on campus to no- or low-cost mental health services [18,19].

Youth and families face several barriers to receiving mental health services [20-24]. Many lack knowledge and awareness of common mental health problems and may assume that certain behavioral or emotional issues are simply temporary phases or difficulties they can address on their own. These types of assumptions may be compounded by the stigma about mental illness and psychological services within the youth’s family, culture, or broader community [25,26]. Many youths and caregivers also lack knowledge of available evidence-based treatments for mental health problems. Furthermore, several structural barriers severely limit the access of many youths and families to culturally sensitive, effective mental health care. Low-cost, evidence-based treatments for youth mental health problems are not available in many underserved communities across the world, including low-income rural and urban areas and countries with limited health care infrastructure. Even when such treatments are available, families may lack the time or resources needed to travel and take advantage of these treatments [27-30].

Technology’s Role in Youth Mental Health

Although efforts should continue to address the barriers that prevent formal services by qualified mental health professionals, it is also important to consider alternative ways of fulfilling the unmet mental health needs among youth. Mobile technology–delivered interventions (mTDIs), including mental health content delivered via mobile phones, tablets, and wearable smart devices (eg, watches, glasses, and virtual reality [VR] headsets), are potential ways of meeting this need. As of 2018, youth smartphone ownership and use were remarkably high, with 95% of teenagers having access to smartphones [31]. In 2016, the average age of first owning a smartphone was 10 years in the United States [32], and younger children commonly have access to smart devices through parents, siblings, or schools that provide tablets and other mobile devices to students. These mobile devices may be overlooked conduits for mental health information. Indeed, a recent survey of teenagers and young adults [33] reported that among those with moderate to severe depressive symptoms, 90% had searched the internet for information about mental health, and 38% used a mental health app. Parents also often use the internet for resources on health-related issues among their young children [34].

Technology can offer easy ways of connecting with mental health resources, such as mood-enhancing and skill-building apps purported to improve mental health. The ubiquitous, self-guided nature of such technology-delivered tools makes them appealing alternatives for those who are limited by access to, or trust in, formal mental health services [35]. Key themes in a recent review of research on internet-based help seeking for mental health difficulties among young people (aged up to 25 years) [36] showed that youth frequently engaged in technology-based (eg, internet-based) help seeking late at night (when traditional in-person mental health services are typically not available) and that youth endorsed several specific benefits of seeking help this way, including anonymity and privacy; lower perceived stigma and judgment; accessibility, including in times of crisis; and connection to others with similar experiences, which can foster a sense of community and acceptance.

Technology-Delivered Interventions for Youth

Potential and Pitfalls

Although mTDIs have great potential to improve access to evidence-based mental health content, it is important to carefully evaluate their effects when it comes to mental health care, especially for youth. In contrast to computer- and internet-based technology-delivered interventions (TDIs), which are typically developed by clinicians and researchers to incorporate comprehensive and evidence-based treatment methods that parallel professional psychotherapy, mobile TDIs often lack such comprehensive, evidence-based principles while also introducing privacy and safety concerns [37]. The rapidly developing and competitive mobile app marketplace also poses some challenges in connecting evidence-based mental health practices with marketable and engaging mobile technology. Commercially available apps are typically designed by technology companies outside of the health care industry [38,39] with the aim of being engaging and attractive, thereby prioritizing appealing features such as design and gamification over evidence-based clinical techniques [40,41]. In contrast, research-developed apps prioritize evidence-based content and rigorous trials, which slows widespread availability as it can take 2 decades from conceptualization to public dissemination [37,42], leaving a gap between research-tested and commercially available mTDIs. Indeed, recent reviews of the content of >10,000 purported mental health apps in the commercial marketplace have noted that a vast majority have not gone through rigorous intervention development and testing [43] and are lacking or inconsistent with evidence-based psychotherapy principles [44], including apps specifically targeting youth mental health [40,45]. Although research trials have demonstrated promising findings for some mTDIs, the heterogeneity and poor quality of certain mTDIs and studies have led to inconclusive evidence across outcomes [46], which makes the role of these interventions in mental health services less clear [47]. Finally, mental health technologies generally have low rates of engagement and adherence [35,43,48-50]. Thus, a key question for this emerging area of research is how mTDIs can best be designed, prescribed, and implemented to harness their benefits for youth.

Areas for Further Research

Previous reviews of TDIs, a broader category that goes beyond mobile interventions, have demonstrated the benefits of computer- and internet-based interventions, most commonly cognitive behavioral interventions, and most commonly examining the outcomes of depression, anxiety, and stress [51-58]. Similar to findings in adult populations, computer- and internet-based TDIs in youth samples have shown benefits, mostly in reducing internalizing symptoms, behavioral concerns, and eating disorders [51,52,54-60]. The literature on the efficacy of mobile TDIs, or mTDIs, is growing, with multiple meta-analytic reviews indicating positive impacts on a range of psychological outcomes in adults [42,53,61-63]. The emerging meta-analytic literature on mTDIs for youth is encouraging but limited, including generally beneficial results across (1) reviews blending a few trials of mTDIs together with mostly nonmobile TDIs in youth [52,64,65]; (2) a review combining 4 trials of mTDIs in children and adolescents with 21 trials of mTDIs in adults (mean age up to 59 years) [66]; (3) a recent meta-analysis of 12 youth trials, both with or without comparison groups, of smartphone apps exclusively on internalizing disorders [67]; and (4) another recent meta-analysis of 11 randomized controlled trials of smartphone apps for depression, anxiety, and stress in youth (aged 10-35 years) [68]. Although these initial findings are encouraging, a fitting next step for this emerging area of research is to meta-analytically review TDIs that are exclusively mobile in youth samples while including a broad array of youth clinical presentations and outcomes. Moreover, given the diverse designs of mTDIs for distinct youth characteristics and presenting problems, exploring the moderating influence of the interventions’ mobile technologies, theoretical orientations, technological and support features, and varying dosages would advance our ability to harness the full potential of mTDIs in improving youth well-being.

The Current Meta-analysis: Goals and Hypotheses

The current 3-level meta-analysis evaluated the impact of wellness- and mental health-focused mTDIs (delivered via smartphones and tablets, other types of mobile phones, and other handheld and wearable devices, including mobile VR) for youth, broadly defined as children, adolescents, and intentional (eg, university student) young adult samples or those with a mean age of ≤26 years. Improving upon some limitations of previous reviews, we included published and unpublished reports, included only controlled (either randomized or quasi-experimental) designs, and evaluated a broad range of participant clinical presentations (eg, unselected, at-risk, or clinical samples), intervention theoretical orientations, and outcomes. Drawing on evidence from prior reviews, we predicted that these mTDIs would yield significant benefits at postintervention on diverse indicators of youth well-being relative to comparison conditions. In addition, we examined the role of several potential moderators of intervention impact within the categories of methodological, youth, and intervention characteristics.

Methodological Characteristics
Timing of Outcomes

Prior reviews note that there are a limited number of studies assessing the long-term effects of TDIs [51,54] and mTDIs [68] on youth. The reviews that compare the effects at postintervention versus later follow-up periods have been mixed, with some finding that effects are stable into follow-up periods (eg, parenting TDIs [59]) and others finding that some or all effects diminish over time (eg, adult mTDIs [69] and parenting TDIs [70]). Given that youth might have added challenges in implementing long-term gains [57], we tentatively predicted that the timing of the outcome assessment (posttest vs follow-up) would moderate the strength of the mTDIs’ effects such that the effects of mTDIs would wane over time.

Outcome Type

Prior reviews have established the benefits of mTDIs in reducing depression, anxiety, and stress, mostly in adults [42,62,69], with emerging evidence in youth [67,68]. In addition, mTDIs have been shown to be effective in improving life satisfaction, quality of life, and psychological well-being [69]. To broadly evaluate the potential impact of mTDIs on youth, we examined a broad range of youth outcomes, including those that have not yet been examined in prior reviews. Therefore, we expected mTDIs to have a beneficial impact on depression, anxiety, stress, and well-being, and explored whether mTDIs would also have beneficial effects on other outcome types, such as psychosocial strategies and skills, interpersonal relationship factors, academic functioning, health-related behaviors, or knowledge.

Comparison Group Type

Reviews of TDIs and mTDIs have generally found that effects are largest when they are compared with no-intervention or wait-list groups and smaller when compared with groups that are more active and clinically potent [42,52,58,60,62,69-71]. Thus, we predicted that the comparison group type would moderate the effects of mTDIs. Specifically, we expected that mTDIs would demonstrate the strongest benefits compared with no intervention (eg, wait-list), followed by inert interventions (including information-only and attentional or placebo controls), and demonstrate noninferiority compared with clinical comparisons, including usual clinical care and established clinical interventions.

Youth Characteristics
Age

Several previous reviews of TDIs in youth have demonstrated that older participants experience a greater reduction in symptoms than their younger counterparts [51,55,65,72]; however, others have found no effect of age [57,59], and preliminary evidence on a small sample of mTDIs in adolescents and young adults also failed to find an effect of age [68]. In an exploratory fashion, we examined whether age moderates the effects of mTDIs.

Gender

The few reviews on the mental health benefits of smartphone apps that explored the role of participants’ gender have revealed nonsignificant effects in adult [42] and youth [59] samples. Nevertheless, given the differences in the rates of various mental health problems as a function of gender across development [73,74], we tested the effects of gender in an exploratory fashion.

Risk Level

Some prior reviews of mTDIs with adult samples have indicated that higher pretreatment severity (ie, clinical diagnosis or elevated mental health symptoms) is related to a greater reduction in symptoms and, therefore, produces a larger effect size (ES) [60,63]. In contrast, Pennant et al [54] found greater effects of computerized therapies for youth with subclinical symptoms versus a clinical diagnosis of anxiety; however, this effect was not found for depression. Two more recent reviews suggest inconclusive evidence regarding whether TDIs or mTDIs are more effective for youth who present with diagnoses or severe symptomatology [47,65]. Therefore, we examined participant risk level (ie, clinical diagnosis, elevated symptoms, and nonclinical sample) as an exploratory moderator.

Intervention Characteristics
Type of Technology

A prior review on the impact of mTDIs on both youth and adults did not find significant differences in effects by type of technology (ie, smart mobile phones/tablets vs other types of mobile phones, personal digital assistants [PDAs], wearable devices, or VR headsets) but noted that smartphone apps produced (nonsignificantly) larger ESs than both PDA and SMS text messaging interventions [66]. Given the limited information, we tested the impact of technology type as an exploratory moderator.

Guiding Theoretical Framework

Cognitive behavioral–based and mindfulness- or acceptance-based interventions are the most commonly examined theoretical frameworks among TDIs for youth and adults [51,54-58] and mTDIs for adults [42,53,62,69] and have generally yielded positive effects for problems such as anxiety, depression, externalizing behaviors, and quality of life. Thus, we hypothesized that mTDIs with cognitive behavioral–based and mindfulness- or acceptance-based theoretical frameworks would produce significant effects and examined the impact of additional theoretical orientations (eg, motivational or positive psychology) in a more exploratory fashion.

Technological and Support Features

Prior reviews have suggested that specific features of mTDIs, such as tailoring (ie, content shifting based on responses), gamification, and automatic reminders, may increase engagement and yield more robust effects [53,75]; however, much of this research has been conducted with adult samples. Therefore, we explored the potential moderating impact of various technological features such as personalization, tailoring, gamification, and social elements (eg, peer mentoring).

Research also points to the possibility that support from a human or virtual (“bot”) professional, who can provide guidance, coaching, accountability, and, in some cases, supervised skills practice, may lead to increased adherence to mTDIs [47,76,77]. Indeed, several reviews have indicated that self-guided interventions are generally more effective with some access to human support or guidance, in part because of increased engagement and adherence [53,78,79]. Similarly, research on psychotherapy and other in-person youth interventions has highlighted the benefits of supervised practice in contributing to youths’ psychological skill development, especially when delivered over multiple sessions [79-84]. Nevertheless, the overall evidence on human support for mTDI use is mixed, with a handful of reviews not finding added benefits for interventions incorporating human support as compared with those that do not [42,54,55,85]. Thus, we explored the potential impact of human or bot support elements, such as coaching, supportive accountability, and supervised skills practice.

Dosage: Frequency and Duration

Previous studies have rarely investigated the dose-response relationship between TDI use and outcomes, and those that have done so tend to yield mixed results [47,51,54]. Some reviews have found that higher dosages, or longer durations, predicted a greater reduction in symptoms with particular outcomes (eg, problem behavior or depression) [59,86]. However, other studies were unable to establish such a relationship [57,85]. Therefore, we explored whether the prescribed or completed frequency (eg, weekly) or duration (eg, minutes, sessions, or weeks) of the intervention moderated the benefits of mTDIs.


Search Strategy and Study Selection

We used 5 systematic search strategies to assemble an unbiased, representative sample of published and unpublished controlled trials. First, we conducted searches for reports appearing through March 2021 in 5 academic databases: PsycINFO, ERIC, ProQuest Digital Dissertations, MEDLINE (Web of Science), and PubMed. We used a combination of several groups of search terms to find studies meeting our criteria for (1) participants (eg, child*, adolescen*, teen, youth, young adult, university student), (2) interventions (eg, mental health, psychological, intervention, cognitive behavioral, mindfulness), (3) mental health (eg, depress*, anxiety*, well-being), (4) technology (eg, smartphone app*, mobile app*, tablet-based), and (5) research design (eg, RCT, controlled trial, clinical trial, quasi, comparison group, PRISMA). Second, we also inspected the reference lists of each study meeting our criteria and of relevant previous reviews. Third, we hand-searched the contents of 16 selected journals most likely to publish studies on mobile mental health interventions involving youth. Fourth, we hand-searched the contents of proceedings for the recent years of 7 relevant academic conferences. Finally, we contacted authors of prior reviews, reports, and conference proceedings relevant to our sample to inquire about additional published or unpublished evaluations of fitting trials. Further details on these search strategies are provided in Multimedia Appendix 1.

To be included in our final sample, the studies had to meet six criteria: (1) examine an automated psychological or behavioral intervention, either selecting participants based on a diagnosis or risk factor or targeting an unselected sample to promote mental health and wellness; (2) deliver the intervention primarily via mobile (handheld or wearable) technology, including pre–cellular technology handheld computers (eg, palm pilot and PDA), mobile cellular phones or tablets (eg, iPad and iPod touch)—using SMS text messaging, instant messaging, or more current mobile mental health apps—and wearable devices (eg, smart watch, smart glasses, VR headsets that are fully self-contained or linked with a mental health app on a smartphone or tablet device that is portable and able to be used in the participant’s home); (3) contain at least one quantitatively assessed mental or behavioral health outcome measure (described in the following sections) for which ESs could be calculated; (4) target youth, broadly defined as children, adolescents, and intentional (eg, university student) young adult samples or those with a mean age ≤26 years (including interventions delivered to parents that targeted youth outcomes); (5) include a comparison group with at least 10 participants assigned to each condition; and (6) be reported in English, Spanish, Dutch, or German.

We excluded interventions with a primary focus on academics or physical health (eg, nutrition, weight loss, or diabetes management) but included studies that focused on psychobehavioral health such as smoking cessation, insomnia, and disordered eating. We did not include interventions delivered through audio or video tapes or videodiscs, a local computer program, or a website only. In addition, we did not include mobile interventions that were primarily reliant on human support (eg, therapists sending messages). Finally, we excluded interventions comprising solely medication reminders.

Figure 1 shows a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of sample searching, selection, and inclusion. The aforementioned search procedures identified 7487 potentially relevant reports, including 2353 (31.43%) duplicates that were removed. An additional 51.44% (3851/7487) of reports were eliminated as they did not meet our inclusion criteria. Among the 1283 eligible reports, some contained variants of the same intervention (eg, 2 interventions with the same active component but varying lengths), and we only included the intervention that was more comprehensive (ie, contained more elements or was longer in duration) or completely technology-based. However, if conceptually distinct interventions (eg, 2 different apps using different techniques) were evaluated in the same report, each intervention was coded separately. Data from multiple reports on the same sample and intervention were combined into a single report, reducing 16 overlapping reports to a sample size of 6.

In cases where means and SDs were not included in the original reports or effects could not be calculated because of insufficient data, we attempted to contact study authors to secure missing data. When authors did not respond, we excluded the 16 studies for which no ESs could be calculated for any relevant outcome measure. This screening process led to a final sample of 83 interventions reported in 80 studies between 2005 and 2021.

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of the study selection process.

Study Coding (Data Extraction)

Methodological Characteristics

For each report, we coded the year of the report, publication status, country in which the intervention took place, type of experimental design and comparison group, sample size, outcome types, and additional codes described in the following sections.

Timing of Outcomes

We coded the number of weeks between pre- and postintervention outcome assessments and between postintervention and each follow-up assessment period.

Outcome Type

We coded a broad array of outcomes to capture the various psychosocial and related aspects of functioning that might be affected by mTDIs. The relevant outcomes assessed in our sample of studies were classified into 14 possible categories, some of which were conceptually nested under higher-order categories, as noted in Textbox 1.

  • General psychological well-being or distress included 2 subcategories:
    • Stress (eg, perceived and physiological indices of stress)
    • General or global psychological distress and well-being (eg, distress, positive and negative affect, mood states, quality of life, happiness, or life satisfaction)
  • Psychosocial strategies or skills included 2 subcategories:
    • Social-cognitive strategies or styles (eg, different types of affective, cognitive, and social skills related to effective coping strategies, help-seeking behaviors, or mindfulness practices; overcoming dysfunctional beliefs, rumination, or hostility; resilience; or emotional self-awareness and regulation)
    • Self-perceptions (eg, self-esteem or self-efficacy)
  • Internalizing symptoms included 2 subcategories:
    • Depression
    • Anxiety
  • Other (noninternalizing) mental health problems (eg, autism, attention deficit hyperactivity disorder, or eating disorders)
  • Health and health behavior (eg, substance use, sleep, physical activity, pain, or eating behaviors)
  • Interpersonal relationships (eg, conflict, perceived social support, belongingness, loneliness, or social skills)
  • Academics (eg, academic performance or adjustment)
  • Psychology or health-related knowledge (eg, knowledge about topics such as substance use norms and consequences or sleep hygiene)
  • Psychosocial outcomes in someone other than the target youth (eg, parent stress, warmth, use of punishment)
  • Other (eg, perceptions of productivity, stigma, or close friend’s smoking behavior)
  • Intervention (ie, app) ratings (eg, intervention feasibility and social validity, acceptability of the mobile technology–delivered intervention (mTDI), and its uptake or use)
Textbox 1. Outcome types coded.
Comparison Group Type

Studies were coded as having 1 of 3 different comparison groups. The majority of studies included a no-intervention (eg, wait-list control) condition in which the comparison group only completed assessment procedures. Some studies compared the intervention of interest with an inert comparison group, whether information-only (eg, pamphlets or website links to general health-related information), attention-placebo (eg, passive SMS text messages), or minimal treatment-as-usual (eg, standard protocol before a medical procedure) conditions that did not contain the therapeutic elements of the evaluated intervention. These comparison groups generally attempted to control for nonspecific factors such as attention or social interaction. Finally, some studies included a clinical comparison group, whether a usual clinical care comparison or some other established (validated or otherwise intended to be beneficial) intervention. In some studies with a clinical comparison group, both the mTDI and comparison groups received a similar base intervention (eg, counseling vs counseling+app) and thus tested the added or incremental benefit of the mTDI of interest.

Youth Characteristics
Age, Gender, Race, and Ethnicity

When the information was available, studies were coded for the sample age (mean, SD, and range), gender, race, and ethnicity.

Risk Level and Type

We coded whether researchers selected participants based on particular symptoms or risk factors into the following categories: (1) psychological clinical sample (ie, symptoms indicative of a Diagnostic and Statistical Manual of Mental Disorders diagnosis) [87,88], (2) psychological or mental health at-risk sample (ie, subclinical symptoms of psychopathology), (3) nonmental health risk (ie, medical risk, diagnosis, or procedure), and (4) general (unselected) community sample not selected for any particular risk factor.

Intervention Characteristics
Type of Technology

We coded each intervention’s primary and secondary (if relevant) type of technology into one of the following categories: (1) smartphone or tablet (eg, iPhone, iPad, or iPod touch), (2) presmartphone mobile device (eg, presmart mobile phone, palm pilot, or PDA), (3) mobile VR (eg, headsets) or video game (ie, handheld), and (4) other wearable devices (eg, smart watch, biosensor or activity monitor, and smart glasses). VR headsets and other wearable devices were typically used in conjunction with smartphones or tablets. Finally, some interventions were also able to be accessed on a (5) computer as a secondary type of technology.

Guiding Theoretical Framework

Interventions were coded as having one of the following primary guiding theoretical frameworks: (1) cognitive behavioral, (2) mindfulness- or acceptance-based (eg, mindfulness-based stress reduction, mindfulness-based cognitive therapy, acceptance and commitment therapy, or dialectical behavior therapy), (3) blended cognitive behavioral and mindfulness, (4) other or multiple (ie, positive psychology, interpersonal, motivational or stages of change, and transtheoretical), or (5) atheoretical or not specified. When mTDIs were available in the commercial market, they were consulted directly to supplement the information obtained from the research reports.

Technological and Support Features

We coded whether the intervention included personalization (ie, the ability to alter the app environment through features such as personal preferences; personal dashboards; or use of photos, music, or contacts), tailoring (ie, the use of algorithms that alter intervention content based on contact sensing, prior responses, feedback, or other input), a social component (eg, forum or social media use or mentoring), or gamification (eg, rewards, badges, points, levels, or quests).

We also coded several intervention features designed to support participants in using the mTDI: (1) training or orientation for the participants about using the mTDI (eg, virtual training within the app or via email or video chat, in-person training, or a paper manual); (2) in-person element besides orientation or training (eg, simultaneous counseling); (3) reminders sent to encourage the use of the mTDI, either automatically through the app (eg, push or banner notifications) or outside of the app (eg, emails, texts, or calendar reminders); (4) human or bot (automatic) support (eg, supportive SMS text messages, phone calls, or personalized feedback) specifically around the mTDI; (5) targeted guidance indicative of supportive accountability, designed to increase adherence to an intervention via support and accountability from a trustworthy coach who assists with setting process-oriented expectations and goals [77]; and (6) targeted guidance in the form of supervised skills practice [79].

Dosage: Frequency and Duration

When the information was available, studies were coded for the intervention’s prescribed and completed (both objectively determined and self-reported) frequency and duration. Specifically, the frequency of use was coded as one of the following categories: at least 4 days per week or as much as feasible, 2 to 3 days per week, once per week, less than once per week, one-time session, and not stated or at user discretion. The intervention duration was coded in terms of minutes, weeks, and sessions. When data were available, we also calculated the percentage of the completed duration of the intervention by dividing the completed duration by the prescribed duration.

Risk of Bias (Quality) Assessment

For study quality, we followed the approach of an integrative study quality coding scheme [89] designed to draw upon the strengths of several previously validated quality indices, including the Cochrane Collaboration’s tool for assessing the risk of bias [90-92]. This coding scheme rates each study on 10 features: peer review and impact factor, experimental design, sample size, attrition, reliability of measures, validity of measures, adjustment for pretest differences, intent-to-treat analysis, reporting of sample characteristics, and involvement of study authors in mTDI development. Each feature is rated on a 4-point scale (from 0, indicating the lowest quality, to 3, indicating the highest quality). The 10 item scores are then summed to yield a total quality score (maximum 30), with a score of 20 representing average or normal research practices.

Reliability of Coding

A team of 5 trained postbaccalaureate and graduate students assessed the studies for eligibility and inclusion in the meta-analysis and met weekly to review any questions for consensus. A team of 6 graduate students with advanced clinical and quantitative training then reviewed and coded eligible reports for descriptive features, moderators, quality indicators, and outcome data. The coders were supervised by 3 faculty members with expertise in clinical psychology, mTDIs, and meta-analytic procedures. After the iterative training phase, coders had ongoing opportunities for consensus checks through a consultation system and weekly faculty supervision. Interrater reliability was assessed on a subsample of 18 to 31 studies (depending on the code) containing 44 interventions, 46 comparisons, and 108 ESs; any code that did not reach adequate reliability in this subsample (ie, κ>0.80, 85% agreement, or an intraclass correlation coefficient of 0.95, as appropriate) [93,94] was reviewed by at least one additional coder across the entire sample. Lead authors provided an additional review of randomly selected articles throughout the coding process. Any questions or discrepancies were resolved through discussions.
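
For reference, the reliability indices named above can be computed in R with the irr package along the following lines; the ratings_categorical and ratings_continuous objects are hypothetical data frames (one column per coder, one row per coded study), not the study's actual coding data.

```r
# One common way to compute the reliability indices named above, using the
# irr package; the rating objects are hypothetical placeholders.
library(irr)

kappa2(ratings_categorical)   # Cohen kappa for a categorical code (target >0.80)
agree(ratings_categorical)    # raw percentage agreement (target >=85%)

# Intraclass correlation for a continuous code (eg, prescribed weeks of use).
icc(ratings_continuous, model = "twoway", type = "agreement", unit = "single")
```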

Meta-analytic Strategy

ES Calculation

Cohen d was calculated for each outcome to reflect the effect of mTDIs relative to the comparison condition, with positive ESs representing outcomes in which the intervention group outperformed the comparison group. If d values could not be obtained directly from primary studies, the formulas by Borenstein et al [95] and Lipsey and Wilson [96] were used to transform the reported statistical information into Cohen d. Whenever possible, d values were calculated using means and SDs, frequencies or proportions, odds ratios, or results from F or t tests. If a primary study did not report sufficient information to extract or calculate the ES, the study authors were contacted for additional information. When the only information available indicated that an ES was nonsignificant, we conservatively set that ES to zero, following Mullen [97]. This procedure was preferred above excluding primary studies from the review, as the latter would reduce the statistical power in the analyses. To correct for pretreatment differences, we adjusted the postintervention and follow-up effects for preintervention baseline outcome levels (using subtraction, similar to procedures in other meta-analyses) [89,98,99] when pretreatment data were available. Finally, before analysis, all ESs were converted to Hedges g to account for potential bias in small sample sizes.
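
As a rough illustration of these computations (not the authors' analysis code), the following R sketch derives Cohen d from group means and SDs, applies the baseline adjustment by subtraction, and applies the small-sample correction to obtain Hedges g; all function names and numeric values are hypothetical.

```r
# Illustrative effect size computations (hypothetical helpers and values).

# Cohen d from group means and SDs, using the pooled SD.
cohens_d <- function(m_int, m_comp, sd_int, sd_comp, n_int, n_comp) {
  sd_pooled <- sqrt(((n_int - 1) * sd_int^2 + (n_comp - 1) * sd_comp^2) /
                      (n_int + n_comp - 2))
  (m_int - m_comp) / sd_pooled
}

# Small-sample correction factor J (Hedges g = J * d).
correction_j <- function(n_int, n_comp) {
  1 - 3 / (4 * (n_int + n_comp - 2) - 1)
}

# Sampling variance of g, used as the known level-1 variance in the model.
var_g <- function(d, n_int, n_comp) {
  v_d <- (n_int + n_comp) / (n_int * n_comp) + d^2 / (2 * (n_int + n_comp))
  correction_j(n_int, n_comp)^2 * v_d
}

# Example: posttest effect adjusted for the baseline effect by subtraction,
# with outcomes scored so that higher values favor the intervention group.
d_post <- cohens_d(32.5, 29.8, 8.1, 8.4, 60, 58)
d_pre  <- cohens_d(30.1, 29.9, 8.0, 8.2, 60, 58)
d_adj  <- d_post - d_pre
g_adj  <- correction_j(60, 58) * d_adj
v_adj  <- var_g(d_adj, 60, 58)
```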

The 3-Level Meta-analytic Model

Most primary studies included in this review reported on multiple intervention effects, typically because multiple outcomes were tested or multiple comparison conditions were part of the study design. The resulting dependency in ESs (ie, the fact that ESs extracted from the same study are more alike than the ESs extracted from different studies) violates the assumption of independent ESs underlying traditional meta-analytic techniques [96].

Therefore, a 3-level random-effects model was used for all analyses [100-104]. In this model, 3 sources of variance were modeled: sampling variance of the observed ESs (level 1), variance between ESs derived from the same study (ie, within-study variance; level 2), and variance between ESs derived from different studies (ie, between-study variance; level 3). The sampling variance at level 1 of the model is not estimated but considered known and is calculated using the formula given by Cheung [101].

To determine whether testing select moderators would be informative, we first examined the ES heterogeneity by testing the significance of the within-study variance (level 2) and the between-study variance (level 3). We performed 2 one-sided log-likelihood ratio tests in which the deviance of the full model was compared with the deviance of the model without one of these variance parameters. If the within-study variance or the between-study variance were significant, we proceeded with the moderator analyses. The coded variables were only tested as moderators when (categories of) these variables were based on at least three studies or three ESs. In some cases, we consolidated categories with <3 studies or ESs into another (or the other) category.
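
A minimal sketch of these one-sided log-likelihood ratio tests follows, assuming a data frame dat with one row per ES (Hedges g in yi, its sampling variance in vi, and study and ES identifiers); all object names are placeholders rather than the authors' own.

```r
# Sketch of the one-sided log-likelihood ratio tests for ES heterogeneity;
# dat, yi, vi, study_id, and es_id are placeholder names.
library(metafor)

full <- rma.mv(yi, vi, random = ~ 1 | study_id / es_id, data = dat)

# Within-study (level 2) variance fixed to zero; between-study variance free.
no_within  <- rma.mv(yi, vi, random = ~ 1 | study_id / es_id, data = dat,
                     sigma2 = c(NA, 0))
# Between-study (level 3) variance fixed to zero; within-study variance free.
no_between <- rma.mv(yi, vi, random = ~ 1 | study_id / es_id, data = dat,
                     sigma2 = c(0, NA))

# Likelihood ratio tests; halve the reported p values because the tests on
# variance components are one-sided.
anova(full, no_within)
anova(full, no_between)
```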

Software and Parameters

We used the function rma.mv of the metafor package [105] in the R statistical environment (version 3.6.1; R Foundation for Statistical Computing) [106], following the setup and R syntax by Assink and Wibbelink [100], to model the 3 sources of ES variance [103,104]. The overall effect was estimated using an intercept-only model, and potential moderators were examined by adding these variables as covariates to the intercept-only model. The t distribution was used in testing individual regression coefficients of the models and for calculating the corresponding CIs [107]. When models were extended with categorical moderators comprising >3 categories, the omnibus test followed an F distribution. The restricted maximum likelihood estimation method was used to estimate the model parameters. Before conducting moderator analyses, continuous variables were centered on their means, and dichotomous dummy variables were created for categorical variables. The log-likelihood ratio tests were conducted as 1-tailed, whereas all other significance tests were conducted as 2-tailed. The significance level was set to 0.05 in all analyses, and 95% CIs were estimated.
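
The general setup described above can be sketched as follows, again with placeholder object names; this follows standard rma.mv syntax but is not reproduced from the authors' scripts.

```r
# Sketch of the intercept-only (overall effect) model and moderator models;
# column names (yi, vi, study_id, es_id, comparison_type, mean_age) are
# placeholders, not the authors' variable names.
library(metafor)

# Overall effect: test = "t" applies the t distribution to coefficients and
# an F distribution to omnibus moderator tests, as described above.
overall <- rma.mv(yi, vi, random = ~ 1 | study_id / es_id, data = dat,
                  method = "REML", test = "t")
summary(overall)

# Categorical moderator (dummy coded automatically; the first factor level
# serves as the reference category).
mod_comp <- rma.mv(yi, vi, mods = ~ factor(comparison_type),
                   random = ~ 1 | study_id / es_id, data = dat,
                   method = "REML", test = "t")

# Continuous moderator, mean-centered before entry.
dat$age_c <- dat$mean_age - mean(dat$mean_age, na.rm = TRUE)
mod_age <- rma.mv(yi, vi, mods = ~ age_c,
                  random = ~ 1 | study_id / es_id, data = dat,
                  method = "REML", test = "t")
```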

Publication Bias

A problem that may arise in meta-analysis is the file drawer problem [108], in that studies with nonsignificant or negative results are less likely to be published than studies that produced significant and positive results. To reduce this problem, we attempted to be exhaustive in our search strategy to retrieve both published and unpublished primary studies. To further assess bias in our data set of ESs, 2 analyses were conducted. First, we performed the trim-and-fill analysis by Duval and Tweedie [109,110] to examine the symmetry of a funnel plot in which ESs were plotted against their SEs. In the case of publication bias, the plot is asymmetrical, as the ESs are missing to the left of the estimated mean. The trim-and-fill algorithm estimates these missing ESs using an iterative nonparametric method. After imputing these ESs, the symmetry of the plot is restored, and an adjusted overall effect can be estimated. We also examined bias by performing the Egger test, in which ESs are regressed on their SEs [111]. This was performed by adding the SE as a covariate to an intercept-only, 3-level meta-analytic model. In this model, a significant positive slope indicated the presence of publication bias.
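
Both bias checks can be approximated in metafor along the following lines; as before, dat, yi, vi, and the identifier columns are placeholders.

```r
# Sketch of the two bias checks described above (placeholder object names).
library(metafor)

# Egger-type test within the 3-level model: regress effect sizes on their
# standard errors; a significant positive slope suggests bias.
dat$sei <- sqrt(dat$vi)
egger_3l <- rma.mv(yi, vi, mods = ~ sei, random = ~ 1 | study_id / es_id,
                   data = dat, test = "t")
summary(egger_3l)

# Trim-and-fill does not account for dependency among effect sizes, so it is
# run on a conventional random-effects model, with the associated funnel plot.
conventional <- rma(yi, vi, data = dat)
taf <- trimfill(conventional)
funnel(taf)
```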


Study Sample and Descriptive Characteristics

Multimedia Appendix 2 [112-200] provides a table with details about each of the 80 studies eligible for this meta-analysis, 3 (4%) of which contained 2 eligible interventions and 10 (13%) of which contained 2 eligible comparison groups, yielding 83 interventions, 93 comparisons, and a combined sample size of 19,748 youth. Of these 80 studies, 76 (95%) provided estimates of 484 postintervention ESs, and 29 (36%) provided estimates of 225 follow-up ESs.

Several aspects of the 80 included studies are worthy of comment. First, most (68/80, 85%) of the studies were published in peer-reviewed scientific journals, and the remainder were unpublished dissertations or in-preparation manuscripts that extended prior peer-reviewed work published as a pilot trial or presented at an academic conference. In addition, most of the studies were published within the past decade or so, with 96% (77/80) published since 2010 and 28% (22/80) since 2020. Of the 80 studies, 33 (41%) were conducted in the United States, with 36 (45%) reports from the broader North American continent, 23 (29%) from Europe, 11 (14%) from Australia, 9 (11%) from Asia or the Middle East, and 1 (1%) from South America.

In terms of participants, across the 93% (74/80) of studies reporting relevant demographic information (and among the 67/80, 84% of studies reporting SD), the average age ranged from 2.93 to 26.25 (weighted mean 15.92; SD 2.86) years, and on average, 63.83% of study samples were female (but notably, most studies did not report on, or likely assess, gender other than female or male). Only 38% (30/80) of studies provided a full breakdown of participant race and ethnicity, and 23% (18/80) provided no information on these demographics at all. Furthermore, 63% (50/80) of studies selected participants based on one or more risk factors versus recruiting a general community sample. The most common risk factor used for participant recruitment and screening was subclinical psychological risk (eg, substance use or elevated depression; 30/80, 38% of studies), followed by some nonmental health risk (12/80, 15% of studies; in all cases within this sample, this was a medical diagnosis such as spina bifida or obesity or a medical procedure such as surgery or dental work), and, finally, participants with a clinical psychological or psychiatric diagnosis (eg, anxiety or autism; 8/80, 10% of studies).

In terms of the 83 interventions, 74 (89%) used smartphones or tablets (1 used an iPod touch); 4 (5%) used presmartphone mobile devices (all phones, including Motorola A925, Sony Ericsson, and Vodafone); 4 (5%) used VR headsets, either freestanding or in conjunction with a mobile phone app; and 1 (1%) used a handheld video game. Most (70/83, 84%) of the interventions took place in participants’ daily environments; however, several (13/83, 16%) took place in a medical setting (eg, to address anxiety or pain related to a medical procedure). The most prevalent guiding theoretical framework of the mTDIs was cognitive behavioral (36/83, 43% of the interventions), followed by other or transtheoretical frameworks (eg, positive psychology and motivational; 20/83, 24%), mindfulness- or acceptance-based (17/83, 20%), and a few atheoretical or unspecified frameworks (3/83, 4%). Furthermore, in nonexclusive categories, the interventions’ technological features included personalization (18/83, 22%), tailoring (36/83, 43%), a social component (10/83, 12%), and gamification (20/83, 24%). In terms of support features, of the 83 interventions, 30 (36%) included some sort of orientation or training (either virtual or in person), 12 (14%) contained one or more other in-person element, 40 (48%) incorporated reminders to encourage the use of the intervention, and 22 (27%) included some form of human or bot support or guidance, with 20 (24%) containing supportive accountability and 9 (11%) containing supervised skills practice.

All (80/80, 100%) studies provided some information about the prescribed or completed dosage (or both) of their interventions, whether objectively pulled from the mTDI or self-reported by the participants; however, the specific details reported were variable. Of the 83 interventions, 13 (16%) were single-session interventions and the remainder were prescribed to range from 4 to 2505 sessions (weighted mean 89.86, SD 374.29; k=42 studies reporting on 44 interventions) across a time span of 2 days to 43.45 weeks (weighted mean 7.48, SD 7.46; k=65 studies reporting on 68 interventions). Of these 70 interventions (contained in 67 studies), 43 (61%) were prescribed for daily use, 9 (13%) for 2 to 3 days per week, 7 (10%) for once a week, and the remaining 11 (16%) were either prescribed to be used as needed or at the user’s discretion or not stated in the report. In terms of duration of use, the prescribed minutes of use for interventions ranged from 5 to 3650 minutes (mean 345.25, SD 789.95; k=32 studies reporting on 32 interventions). Notably, only 47 out of 80 (59%) studies provided some sort of objective information about how much participants actually engaged in the intervention (eg, number of sessions, minutes, or weeks). Using all available information, we calculated the intervention completion percentage and found the average to be 85.05% of the researchers’ prescribed sessions (k=29 studies reporting on 30 interventions), 87.19% of the prescribed intervention minutes (k=13 studies reporting on 13 interventions), and 86.7% of the prescribed intervention weeks (k=37 studies reporting on 43 interventions).

Studies assessed a variety of psychosocial outcomes, which we originally coded in 14 categories (see the Methods section) and then consolidated into 6 categories because of conceptually similar content or small numbers of studies or effects (see the final list of consolidated categories in the note below Multimedia Appendix 2).

Notably, 25% (20/80) of studies also provided information about the intervention group’s ratings on measures of the intervention’s social validity (eg, user satisfaction, perceived usefulness, quality, usability, or acceptability). As these data were generally only available at the posttest time points and for intervention but not comparison groups, we do not report ESs on these types of outcomes. However, to analyze trends in diverse measures of mTDI social validity across all studies with such data available, we standardized all available Likert scale ratings for these constructs onto a single scale, with 0 representing the lowest and 100 the highest possible rating of social validity. On this standardized scale, the average rating (weighted by included sample size) for self-report scores of the mTDIs’ social validity was 58.24 and ranged quite widely across studies (30.20-100).
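
In other words, each rating was rescaled as (score minus scale minimum) divided by (scale maximum minus scale minimum), multiplied by 100, and then averaged across studies with sample-size weights; a minimal R sketch with hypothetical column names follows.

```r
# Rescaling a Likert-type social validity rating onto a 0-100 scale and
# averaging across studies weighted by sample size; dat_sv and its columns
# are hypothetical.
rescale_100 <- function(score, scale_min, scale_max) {
  (score - scale_min) / (scale_max - scale_min) * 100
}

dat_sv$score_100 <- rescale_100(dat_sv$mean_rating, dat_sv$scale_min, dat_sv$scale_max)
weighted.mean(dat_sv$score_100, w = dat_sv$n, na.rm = TRUE)
```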

There was variability in the types of comparison groups as well. Slightly less than half (37/80, 46%) of the comparisons involved groups such as wait-lists that contained no active intervention, whereas the remainder of the comparisons involved either passive information-only or placebo groups (28/80, 35%) or, less commonly, clinical comparisons that were intended to have therapeutic benefits (15/80, 19%). For the studies (43/80, 54%) that used an active (inert or clinical) comparison, the modality of the comparison group was distributed fairly evenly across in-person (20/43, 47%) and other technology-based interventions (18/43, 42%), with just a few (5/43, 12%) having some other modality (ie, blended interventions containing both technology and in-person elements or paper-and-pencil materials).

Average Effect of mTDIs

The average ES across all possible comparisons within the 80 studies (yielding 709 ESs across posttest and follow-up assessments) was g=0.27 (P<.001; 95% CI 0.20-0.33). There was significant heterogeneity across studies (σ2 level 3=0.06; P<.001; 51.76% of the variance among ESs), as well as between ESs extracted from the same study (σ2 level 2=0.03; P<.001; 27.50% of the variance among ESs). Random sampling error accounted for 20.74% of the variance. To explore the substantial variability between and within studies, a number of moderators were considered. These analyses are described in the following 3 sections and detailed in Table 1.

Table 1. Moderators of the effectiveness of mobile technology–delivered interventions for youth.^a

| Characteristics | k^b | Effect sizes, n | B0 (intercept), g (95% CI) | B1 (slope), g (95% CI) | F (df1, df2) | P value |
|---|---|---|---|---|---|---|
| Methodological characteristics | | | | | | |
| Study quality | 80 | 709 | 0.25 (0.19 to 0.31)*** | –0.02 (–0.04 to –0.01)** | 7.03 (1, 707) | .01^c |
| Timing of outcome | | | | | 0.07 (1, 707) | .79 |
| Posttest (RC^d) | 76 | 484 | 0.27 (0.21 to 0.33)*** | N/A^e | | |
| Follow-up | 29 | 225 | 0.26 (0.19 to 0.34)*** | –0.01 (–0.06 to 0.05) | | |
| Outcome type | | | | | 2.70 (5, 703) | .02 |
| General psychological well-being or distress (RC) | 35 | 98 | 0.28 (0.20 to 0.37)*** | N/A | | |
| Internalizing (depression, anxiety) | 44 | 145 | 0.30 (0.22 to 0.39)*** | 0.02 (–0.06 to 0.10) | | |
| Other (noninternalizing) mental health | 7 | 42 | 0.21 (0.04 to 0.38)* | –0.07 (–0.25 to 0.10) | | |
| Psychosocial strategies and skills | 26 | 161 | 0.34 (0.25 to 0.42)*** | 0.05 (–0.02 to 0.13) | | |
| Health (behavior; eg, substance use) | 35 | 190 | 0.24 (0.16 to 0.32)*** | –0.04 (–0.15 to 0.06) | | |
| Other (eg, knowledge or relationships) | 20 | 73 | 0.15 (0.04 to 0.25)** | –0.14 (–0.25 to –0.03)** | | |
| Comparison group type | | | | | 0.17 (2, 706) | .84^c |
| No intervention (RC; eg, wait-list) | 41 | 376 | 0.28 (0.20 to 0.36)*** | N/A | | |
| Inert (eg, placebo or information-only) | 30 | 186 | 0.26 (0.18 to 0.35)*** | –0.02 (–0.11 to 0.08) | | |
| Clinical (eg, established intervention) | 18 | 147 | 0.24 (0.12 to 0.36)*** | –0.04 (–0.17 to 0.09) | | |
| Youth characteristics | | | | | | |
| Mean age (years) | 74 | 669 | 0.26 (0.19 to 0.32)*** | 0.003 (–0.01 to 0.01) | 0.17 (1, 667) | .68^c |
| Gender (percentage female) | 77 | 699 | 0.26 (0.20 to 0.33)*** | 0.002 (–0.002 to 0.01) | 1.06 (1, 697) | .30 |
| Risk level and type | | | | | 3.15 (3, 705) | .02^c |
| General sample not selected for risk (RC) | 30 | 247 | 0.19 (0.09 to 0.29)*** | N/A | | |
| Nonmental health (ie, medical) risks | 12 | 46 | 0.52 (0.33 to 0.72)*** | 0.33 (0.11 to 0.55)** | | |
| Psychological or mental health at-risk sample | 30 | 332 | 0.29 (0.19 to 0.39)*** | 0.09 (–0.05 to 0.23) | | |
| Psychological clinical sample (diagnosis) | 8 | 84 | 0.20 (0.01 to 0.39)* | 0.01 (–0.21 to 0.22) | | |
| Intervention characteristics | | | | | | |
| Primary type of technology | | | | | 0.82 (2, 706) | .44 |
| Smartphone or tablet (RC) | 71 | 665 | 0.25 (0.19 to 0.32)*** | N/A | | |
| Presmartphone mobile device | 4 | 23 | 0.37 (0.09 to 0.65)* | 0.11 (–0.18 to 0.40) | | |
| Mobile VR^f headset or handheld video game | 5 | 21 | 0.41 (0.13 to 0.68)** | 0.15 (–0.13 to 0.44) | | |
| Guiding theoretical framework | | | | | 1.04 (4, 704) | .38 |
| Cognitive or behavioral (RC) | 35 | 249 | 0.31 (0.21 to 0.40)*** | N/A | | |
| Mindfulness or acceptance | 16 | 238 | 0.28 (0.15 to 0.42)*** | –0.02 (–0.19 to 0.15) | | |
| Cognitive behavioral and mindfulness | 6 | 46 | 0.35 (0.12 to 0.58)** | 0.05 (–0.02 to 0.29) | | |
| Other or multiple (eg, motivational) | 20 | 171 | 0.16 (0.04 to 0.29)** | –0.14 (–0.30 to 0.01)^g | | |
| Atheoretical or not specified | 3 | 5 | 0.41 (–0.04 to 0.85)^g | 0.10 (–0.35 to 0.56) | | |
| Intervention technological features | | | | | | |
| Personalization | | | | | 1.37 (1, 706) | .24 |
| Absent (RC) | 63 | 575 | 0.28 (0.21 to 0.35)*** | N/A | | |
| Present | 17 | 133 | 0.19 (0.07 to 0.32)** | –0.09 (–0.23 to 0.06) | | |
| Tailoring | | | | | 0.84 (1, 706) | .36 |
| Absent (RC) | 45 | 399 | 0.28 (0.20 to 0.37)*** | N/A | | |
| Present | 34 | 309 | 0.23 (0.13 to 0.32)*** | –0.06 (–0.18 to 0.07) | | |
| Social component | | | | | 0.17 (1, 706) | .68 |
| Absent (RC) | 70 | 628 | 0.26 (0.20 to 0.33)*** | N/A | | |
| Present | 9 | 80 | 0.22 (0.05 to 0.40)* | –0.04 (–0.23 to 0.15) | | |
| Gamification | | | | | 0.78 (1, 706) | .38 |
| Absent (RC) | 60 | 531 | 0.24 (0.17 to 0.31)*** | N/A | | |
| Present | 19 | 177 | 0.31 (0.18 to 0.44)*** | 0.07 (–0.08 to 0.21) | | |
| Intervention support features | | | | | | |
| Orientation to or training on mTDI^h | | | | | 0.09 (1, 678) | .77 |
| Absent (RC) | 48 | 361 | 0.27 (0.19 to 0.35)*** | N/A | | |
| Present | 29 | 319 | 0.25 (0.15 to 0.35)*** | –0.02 (–0.15 to 0.11) | | |
| Other in-person element | | | | | 0.004 (1, 707) | .95 |
| Absent (RC) | 68 | 637 | 0.27 (0.20 to 0.33)*** | N/A | | |
| Present | 12 | 72 | 0.27 (0.11 to 0.43)*** | 0.006 (–0.16 to 0.17) | | |
| Reminders | | | | | 1.91 (1, 696) | .17 |
| Absent (RC) | 40 | 233 | 0.31 (0.21 to 0.40)*** | N/A | | |
| Present | 37 | 465 | 0.22 (0.13 to 0.30)*** | –0.09 (–0.22 to 0.04) | | |
| Guidance, coaching, and feedback | | | | | 0.24 (1, 707) | .62 |
| Absent (RC) | 59 | 531 | 0.28 (0.20 to 0.35)*** | N/A | | |
| Present | 21 | 178 | 0.24 (0.12 to 0.36)*** | –0.04 (–0.18 to 0.11) | | |
| Supportive accountability | | | | | 0.14 (1, 707) | .71 |
| Absent (RC) | 62 | 538 | 0.27 (0.20 to 0.35)*** | N/A | | |
| Present | 18 | 171 | 0.25 (0.12 to 0.38)*** | –0.03 (–0.18 to 0.12) | | |
| Supervised skills practice | | | | | 0.62 (1, 707) | .43 |
| Absent (RC) | 72 | 651 | 0.26 (0.19 to 0.33)*** | N/A | | |
| Present | 8 | 58 | 0.34 (0.14 to 0.54)*** | 0.08 (–0.13 to 0.30) | | |
| Dosage: prescribed frequency of use | | | | | 4.39 (4, 704) | .002^c |
| As much as feasible; ≥4 days per week (RC) | 43 | 507 | 0.23 (0.15 to 0.30)*** | N/A | | |
| Some days or more than once a week | 9 | 53 | 0.27 (0.09 to 0.44)*** | 0.04 (–0.15 to 0.23) | | |
| About once a week | 7 | 58 | 0.53 (0.33 to 0.73)*** | 0.30 (0.09 to 0.52)** | | |
| One-time session | 13 | 45 | 0.46 (0.28 to 0.63)*** | 0.23 (0.04 to 0.42)* | | |
| Not stated, when needed, or at user discretion | 8 | 46 | 0.08 (–0.09 to 0.24) | –0.15 (–0.33 to 0.03) | | |
| Dosage: prescribed duration of intervention | | | | | | |
| Weeks | 78 | 697 | 0.27 (0.21 to 0.34)*** | –0.003 (–0.01 to 0.01) | 0.35 (1, 695) | .56 |
| Sessions | 54 | 414 | 0.33 (0.24 to 0.41)*** | –0.0001 (–0.0003 to 0.0002) | 0.44 (1, 412) | .51 |

^a The right columns list the omnibus F test and P value for each moderation test. The middle columns list the intercept (B0), or mean effect size, and slope (B1), an estimated unstandardized regression coefficient, of the relevant Hedges g statistics, with CIs around each. Effects and slopes that differ significantly from 0 are denoted with asterisks in the intercept (B0) and slope (B1) columns, respectively. For categorical moderators, each intercept represents the mean effect of a category, whereas each slope represents the difference in the mean effect between the category and reference category. Depending on its sign, the slope of a continuous moderator represents an increase or decrease in the effect size with each unit increase in the variable.

^b Number of studies with relevant effect size data for a given row. In cases of multiple interventions or comparisons, some studies were counted in multiple rows; thus, these numbers sometimes exceeded 80. Owing to missing data, some counts fall short of 80. Further details on what was included in different categories of the included moderators are provided in the Methods section.

^c Significance of moderation analysis changed when conducted on a subsample of the highest-quality studies (k=38; Table 2).

^d RC: reference category.

^e N/A: not applicable (as the slope represents a comparison with the reference category).

^f VR: virtual reality.

^g P<.10.

^h mTDI: mobile technology–delivered intervention.

*P<.05.

**P<.01.

***P<.001.
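To make the moderator parameterization described in footnote [a] concrete, the following schematic (in our own notation; a sketch rather than the exact model specification, which is given in the Methods section) shows how a dummy-coded categorical moderator enters the meta-regression: the intercept B0 is the mean effect of the reference category (RC), and the slope B1 is the contrast category's deviation from it.

```latex
% Schematic meta-regression for one dummy-coded moderator (notation ours, not the paper's):
% g_ij = effect size i from study j; D_ij = 1 for the contrast category, 0 for the RC.
\[
  g_{ij} = B_0 + B_1 D_{ij} + u_j + v_{ij} + e_{ij}
\]
% B_0: mean effect of the RC; B_1: difference between the contrast category and the RC.
% u_j and v_{ij}: between- and within-study random effects of the 3-level model;
% e_{ij}: sampling error with known variance. For a continuous moderator (eg, mean age),
% D_ij is replaced by the covariate value and B_1 is the change in effect size per unit increase.
```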

Differences in Effects of mTDIs Based on Methodological Characteristics

Study Quality and Publication Bias

Overall study quality significantly moderated the overall effect of mTDIs, such that ESs decreased as the quality index increased (Table 1). The slope indicated that for every 1-point increase in quality index score, the effect decreased by 0.02. Given this moderation effect, we also ran all analyses with only the higher-quality studies—that is, studies that achieved a total quality index >20 and thus, on average, surpassed benchmarks for average-quality research methods [89]. Unless otherwise noted in the following sections, the pattern and significance of the results in this reduced, higher-quality sample of studies were identical to those in the full sample of studies. In cases where the statistical significance of results shifted when tested with only the higher-quality studies, those results are presented separately in Table 2 (the full set of results is available from the authors upon request). The average ES across all possible comparisons within the 38 higher-quality studies (yielding 428 ESs across posttest and follow-up assessments) was g=0.20 (P<.001; 95% CI 0.13-0.27). A funnel plot analysis revealed that publication bias was unlikely, with no studies missing on the left side of the funnel plot (Figure 2). Indeed, the trim-and-fill algorithm suggested that, if anything, 78 ESs from 34 studies were missing on the right side of the funnel plot, pointing instead to a possible selection bias that excluded studies with larger ESs. After imputing these missing ESs, the adjusted average ES was g=0.40 (P<.001; 95% CI 0.33-0.46), somewhat larger than our initially estimated overall effect (Δg=0.13). Nevertheless, it is important to note that the trim-and-fill analysis does not take the dependency in ESs into account. An Egger regression test, which better models dependencies among ESs, revealed that SE was a significant and positive predictor of ESs (β1=1.65, P<.001; 95% CI 1.07-2.23), which may indicate publication bias rather than selection bias.
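As an illustration of the Egger-type test reported above, the following is a minimal sketch on hypothetical data; it uses the common standard-error-as-predictor form with inverse-variance weights and, unlike the analysis reported here, is a simple two-level version that ignores the dependency among ESs from the same study. The variable names and numbers are ours, not drawn from the primary studies.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical effect sizes (Hedges g) and their standard errors, for illustration only.
rng = np.random.default_rng(0)
g = rng.normal(loc=0.27, scale=0.20, size=80)   # simulated effect sizes
se = rng.uniform(low=0.05, high=0.40, size=80)  # simulated standard errors

# Egger-type regression: regress the effect size on its standard error, weighting by
# precision (1/variance). A positive, significant slope for SE suggests small-study
# effects (possible publication or selection bias); the slope plays the role of the
# beta_1 coefficient reported in the text.
X = sm.add_constant(se)
fit = sm.WLS(g, X, weights=1.0 / se**2).fit()
print("intercept, slope:", fit.params)
print("P values:", fit.pvalues)
```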

Table 2. Moderators of the effectiveness of mobile technology–delivered interventions for youth: higher-quality studies only [a].

Columns: characteristic; k [b]; effect sizes, n; B0 (intercept), g (95% CI); B1 (slope), g (95% CI); F test (df1, df2); P value.

Methodological characteristics

Study quality: k=38; ESs=428; B0=0.20 (0.11 to 0.29)***; B1=–0.0005 (–0.03 to 0.03); F(1, 426)=0.001, P=.97

Comparison group type: F(2, 425)=3.25, P=.04
  No intervention (RC [c]; eg, wait-list): k=21; ESs=215; B0=0.26 (0.18 to 0.35)***; B1=N/A [d]
  Inert (eg, placebo or information-only): k=15; ESs=112; B0=0.14 (0.05 to 0.23)**; B1=–0.13 (–0.23 to –0.03)*
  Clinical (eg, established intervention): k=7; ESs=101; B0=0.12 (–0.01 to 0.26) [e]; B1=–0.14 (–0.29 to 0.01) [e]

Youth characteristics

Mean age (years): k=36; ESs=407; B0=0.19 (0.11 to 0.26)***; B1=0.01 (–0.0002 to 0.03) [e]; F(1, 405)=3.75, P=.05

Risk level and type: F(3, 424)=1.18, P=.32
  General sample not selected for risk (RC): k=17; ESs=175; B0=0.13 (0.03 to 0.24)**; B1=N/A
  Nonmental health (ie, medical) risks: k=1; ESs=2; B0=0.11 (–0.50 to 0.72); B1=–0.03 (–0.64 to 0.59)
  Psychological or mental health at-risk sample: k=16; ESs=229; B0=0.25 (0.15 to 0.36)***; B1=0.12 (–0.03 to 0.27)
  Psychological clinical sample (diagnosis): k=4; ESs=22; B0=0.31 (0.07 to 0.55)*; B1=0.18 (–0.09 to 0.44)

Intervention characteristics

Prescribed frequency of use: F(4, 423)=1.79, P=.13
  As much as feasible; ≥4 days per week (RC): k=22; ESs=306; B0=0.23 (0.14 to 0.32)***; B1=N/A
  Some days, or more than once a week: k=5; ESs=27; B0=0.22 (0.02 to 0.42)*; B1=–0.01 (–0.22 to 0.21)
  About once a week: k=3; ESs=36; B0=0.40 (0.15 to 0.66)***; B1=0.18 (–0.09 to 0.44)
  One-time session: k=2; ESs=15; B0=0.13 (–0.20 to 0.46); B1=–0.10 (–0.44 to 0.25)
  Not stated, when needed, or at user discretion: k=6; ESs=44; B0=0.03 (–0.14 to 0.20); B1=–0.20 (–0.39 to –0.01)*

[a] This table presents moderation results for the higher-quality studies (k=38) only in cases where the statistical significance of the moderation effect differs from the full-sample (k=80) results presented in Table 1. The right columns list the omnibus F test and P value for each moderation test. The middle columns list the intercept (B0), or mean effect size, and the slope (B1), an estimated unstandardized regression coefficient, of the relevant Hedges g statistics, with CIs around each. Effects and slopes that differ significantly from 0 are denoted with asterisks in the intercept (B0) and slope (B1) columns, respectively. For categorical moderators, each intercept represents the mean effect of a category, whereas each slope represents the difference in the mean effect between the category and the reference category. Depending on its sign, the slope of a continuous moderator represents an increase or decrease in the effect size with each unit increase in the variable.

[b] Number of studies (k) with relevant ES data for a given row. In cases of multiple interventions or comparisons, some studies were counted in multiple rows; thus, these numbers sometimes exceeded 38. Owing to missing data, some counts fell short of 38. Further details on what was included in the different categories of the included moderators are provided in the Methods section.

[c] RC: reference category.

[d] N/A: not applicable (as the slope represents a comparison with the reference category).

[e] P<.10.

*P<.05.

**P<.01.

***P<.001.

Figure 2. Funnel plot of observed mTDI effects (solid circles) and imputed effects (open circles) plotted against their standard error. mTDI: mobile technology–delivered intervention.
Timing of Outcome Assessment

ESs did not differ significantly between assessments conducted immediately after the intervention and longer-term follow-up assessments (Table 1).

Outcome Type

The effectiveness of mTDIs varied as a function of the type of youth outcome that was targeted or assessed. The results in Table 1 indicate that there were statistically significant, positive effects of mTDIs on all of the coded outcome categories: general psychological distress or well-being, internalizing distress, noninternalizing mental health concerns, psychosocial strategies and skills, health-related outcomes, and other outcomes (see the Methods section). However, the other outcomes category showed significantly lower ESs than the reference category, on average.

Comparison Group Type

Contrary to expectations, in the full sample, the comparison group type did not moderate ESs: effects did not differ statistically across studies using no-intervention (eg, wait-list), inert (eg, placebo or information-only), or clinical treatment comparison groups (Table 1). However, among the higher-quality studies, the results were more in line with our hypotheses, in that studies using inert comparison groups produced lower ESs than studies using no-intervention control groups, and studies using a clinical comparison no longer showed statistically significant effects on youth outcomes (Table 2).

Differences in Effects of mTDIs Based on Youth Characteristics

The results showed that the mean age of the youth participants did not moderate the impact of the mTDIs (Table 1). Among the higher-quality studies, there was an effect right at the P=.05 threshold, such that the older the mean age of participants, the stronger the effect (Table 2). There were no differences in the study ESs as a function of the youth gender breakdown in the sample. Missing data on race and ethnicity limited our ability to analyze this variable as a moderator.

Youth level and type of risk significantly moderated intervention effects in the full sample (Table 1), such that samples with nonmental health (ie, medical) risks showed larger effects of mTDIs than general, unselected youth samples. However, in the subsample of higher-quality studies, this moderating effect was not found; in fact, the medical risk category dropped to one study and was no longer significantly different from zero (Table 2).

Differences in Effects of mTDIs Based on Intervention Characteristics

Primary Type of Technology

Moderation analysis did not detect statistically significant differences in the impact of mTDIs based on the primary type of technology: mTDIs were effective—and similar in their impact on youth outcomes—whether delivered on a smartphone or tablet, a presmartphone mobile device, or a mobile VR or handheld video game (Table 1).

Guiding Theoretical Framework

As hypothesized, both cognitive behavioral and mindfulness- or acceptance-based interventions (as well as interventions that blended these 2 orientations) had significant effects on youth outcomes. Interventions grounded in one or multiple other theoretical frameworks also yielded significant effects and did not appear to differ systematically in their effects from cognitive behavioral interventions. Those mTDIs that were atheoretical or did not specify a guiding theoretical framework did not significantly differ from zero in their impact on youth outcomes, and the CI around their intercept (mean effect) was quite wide, indicating considerable heterogeneity. Of note, these studies were rare (k=3), and all 3 studies were dropped from the analysis of higher-quality studies; however, the overall pattern of results remained the same.

Technological and Support Features

Exploratory analyses of the impact of intervention features and support failed to detect significant moderation of intervention effects based on the presence or absence of various technological features of the mTDI, including personalization, tailoring, social components, or gamification elements. Similarly, there were no differences in effects for mTDIs that integrated various support features, such as training or orientation to the mTDI; some other in-person element; reminders to use the app; human or bot guidance, coaching, or feedback in mTDI use; provision of supportive accountability; or supervised practice of skills taught by the mTDI. Although no significant differences were found between the absence and presence of any of these features and support types, it is notable that mTDIs both with and without each of these features had significant and positive mean effects (Table 1).

Dosage: Prescribed Frequency and Duration of Use

There was a significant moderation effect for the prescribed frequency of mTDI use (Table 1). All prescribed use frequencies, except for leaving use to user discretion (including unstated use prescriptions), had a statistically significant impact on youth outcomes. Those mTDIs that involved prescribed use about once per week or were a single session yielded higher ESs than the reference category, which involved prescriptions of more frequent mTDI use (ie, at least 4 days per week or as much as feasible). However, this effect was not retained in the sample of higher-quality studies: The 2 remaining studies that prescribed a one-time session no longer yielded ESs that differed from zero statistically, and the prescribed use frequency was no longer a significant moderator of ESs (Table 2).

Additional moderation analyses probing the number of prescribed weeks or sessions of mTDI use did not detect significant effects for the prescribed duration of the intervention, whether the number of intervention sessions or weeks (Table 1). Although we also intended to examine the moderating effect of completed dosage (frequency, weeks, and sessions), there were substantial missing data, precluding meaningful analysis of these moderators.


Discussion

Principal Findings and Comparisons With Prior Work

To our knowledge, this study represents the first review and meta-analysis of mTDIs for a wide variety of youth well-being outcomes, an area of research that has grown rapidly in the past decade. Rigorous searches of the published and unpublished literature in this area yielded 80 studies evaluating 83 mTDIs for youth. A 3-level meta-analysis revealed an overall Hedges g of 0.27 across all youth outcomes and follow-up assessments, indicating a small effect that is generally consistent with the observed impact of mTDIs in other meta-analyses [42,62,67-69]. This finding addresses a critical gap in the existing literature in that most previous meta-analyses have focused solely on the effects of mTDIs in adult populations [42,53,63], and the few studies focusing specifically on youth have aggregated across diverse types of mobile and nonmobile technologies [64,65] or limited their scope to a specific subset of youth disorders (eg, internalizing disorders [67]).
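For readers less familiar with the ES metric being pooled, the sketch below shows one standard way to compute Hedges g (the small-sample-corrected standardized mean difference) and a common approximation of its standard error for a single treatment-control contrast; the numbers are hypothetical, and the actual contrasts and pooling rules followed the Methods section.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges g for a two-group contrast (illustrative helper, not the study's code).

    Returns the small-sample-corrected standardized mean difference and one
    common large-sample approximation of its standard error.
    """
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))  # pooled SD
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)      # Hedges' small-sample correction factor
    g = j * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))  # common approximation
    return g, math.sqrt(j**2 * var_d)

# Hypothetical posttest means and SDs for an intervention vs comparison group:
g, se = hedges_g(m1=3.2, sd1=1.1, n1=60, m2=2.9, sd2=1.2, n2=58)
print(round(g, 2), round(se, 2))  # approximately 0.26 and 0.18
```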

It is worth noting that our sample included many studies of mTDIs that were still relatively early in their development and were, therefore, primarily interested in evaluating the feasibility and acceptability of the technology, although they also included measures of more distal mental health outcomes that they ultimately aimed to influence. Therefore, our analyses may underestimate, to some extent, the impact that these mTDIs would have had in larger or longer efficacy trials more specifically designed to influence youth mental health outcomes. As this literature continues to mature, it will be important to focus the inclusion criteria more specifically on studies that measure the effects of mTDIs on more distal mental health outcomes as their primary focus.

Our publication bias analyses yielded conflicting findings that were difficult to interpret, given the lack of conventions for analyzing publication bias in 3-level models. However, it is worth noting that despite rigorous screening criteria for the methodology of included studies, our coding revealed significant variability in study quality, with less than half of the studies in our sample contributing effects that surpassed our defined standards for typical research practices [89]. Study quality appeared to significantly influence the observed ESs such that as the score for study quality increased, the ESs generally decreased. Only a few prior meta-analyses of TDIs have directly assessed the influence of study quality on ES and found no impact [57,201]. However, our findings are consistent with some previous findings linking study quality to observed ESs for other psychological interventions [202,203] and suggest that attention to rigorous experimental methods, such as the reporting of intent-to-treat analyses and the use of well-validated and reliable assessment tools, is essential to accurately identify the impact of mTDIs for youth.

Methodological Characteristics Affecting Outcomes

Given the substantial heterogeneity across ESs, both within and between studies, we explored several moderators as predictors of this variability. Interestingly, the ESs were similar in the immediate posttest assessments and longer-term follow-up assessments. This was true despite the fact that follow-up assessments occurred, on average, 11.52 weeks (and up to 43.14 weeks) after the active intervention period concluded. This finding contrasts with the decrease over time in the effectiveness of some in-person mental health treatments [204-206] and suggests that the impact of mTDIs endures over time. It is possible that these effects endure because mTDIs are more easily integrated into youths’ lives, leading to greater generalizability of the intervention effects, more lasting engagement with the mTDI, or both.

Somewhat contrary to expectations, moderator analyses in the overall sample also revealed that mTDIs had a similar impact on youth outcomes regardless of whether they were compared with a no-intervention control, such as a wait-list, or a more active comparison group, such as an information-only condition or usual clinical care. This helps to rule out expectancies, demand characteristics, or other nonspecific effects as explanations for the benefits of mTDIs in youth. This finding contributes to the somewhat mixed literature on this topic, with some past reviews of TDIs with both youth and adult samples finding that, more generally, ESs tend to differ based on comparison type (eg, higher for wait-list vs more active comparisons) [45,52,58,69,70], whereas others indicate that TDIs tend to be similarly effective across various types of study designs (eg, the study by Farrer et al [85]). Indeed, even in this meta-analysis, some findings shifted when only higher-quality studies were analyzed, such that studies with information-only or placebo comparisons yielded lower ESs than studies with no-intervention control groups, and studies with clinical comparison groups no longer showed statistically significant effects. Moreover, it is worth noting that the specific nature of the comparison group varied quite widely across studies, even within a particular coded category. As such, future research should continue to explore the marginal benefits of mTDIs over other available interventions.

The positive impact of mTDIs was observed across the diverse youth outcome categories assessed by each study, with the largest ESs for psychosocial strategies and skills (eg, emotional self-awareness, self-efficacy, and coping) and internalizing symptoms such as depression and anxiety, followed by general psychological distress and well-being, health concerns and health-related behaviors, and other noninternalizing mental health concerns (eg, attention difficulties, aggression, or delinquency). The smallest ES was observed for other outcomes (eg, knowledge, peer relationship quality, and stereotype threat), which showed significantly smaller effects than the reference category of general psychological distress. However, it is difficult to interpret this finding, given that our other category contained a diverse set of outcomes, many of which were coded very infrequently. As such, the overall ES for this other category is not necessarily reflective of the lower impact of mTDIs on each of these less commonly coded outcome types, and future research should continue to explore the scope of the impact of mTDIs on diverse youth problems. Nevertheless, these findings regarding outcome types generally suggest that mTDIs can be effective in treating a wide array of problems across youth development, including diverse areas of psychopathology (ie, both internalizing and externalizing domains), in addition to a number of cognitive, behavioral, and social risk factors that are often associated with poor mental health. This is a significant contribution to the literature, which has previously focused on narrower sectors of outcomes when analyzing the effectiveness of TDIs and mTDIs for youth [51,67,68].

Youth Characteristics Affecting Outcomes

In the overall sample, there was no association between average youth age and the impact of mTDIs. However, when lower-quality studies were excluded from the analysis, the effects of youth age emerged more strongly, with ESs increasing as the mean participant age increased. This effect was right at the threshold for statistical significance (P=.05) and should, therefore, be interpreted with caution and replicated in future studies; however, this finding of stronger effects of mTDIs for older youth is consistent with some past literature on TDIs more generally [51,54,55,65,72]. As many mental health interventions, including TDIs, were originally developed with adults in mind and only later adapted for youth at various stages of development, it is perhaps not surprising that mTDIs could have a more robust impact on older adolescents and young adults. For example, these youth may have greater internal motivation to engage with the intervention and be better able to interact with and adhere to the cognitive or behavioral skills taught by the mTDIs. However, given the complex ways in which developmental stages interact with risk for diverse mental health problems, as well as the effectiveness of mental health interventions, future research should continue to probe interactions between youth age and other dimensions of mTDIs (eg, level of human support, guiding theoretical framework, and availability of a social component) in predicting the impact of mTDIs.

Youth risk characteristics significantly moderated intervention effects in the full sample, with studies in which youth were selected for indicators of medical risk (eg, youth diagnosed with spina bifida or about to undergo surgery or another medical procedure) showing an average ES more than double that of studies with general, unselected samples of youth. Samples of youth with psychological risk (either clinically significant or subclinical risk) fell somewhere in the middle. However, in the analyses that excluded the lowest-quality studies, this moderating effect was no longer observed. The lack of differential findings for the impact of mTDIs on outcomes for youth with clinically significant versus subclinical risk, even when compared with general or unselected samples, is somewhat consistent with previous findings on TDIs, which tend to be quite mixed in terms of the impact of TDIs for youth with a variety of risk profiles [47,53,60,65]. Future research should continue to explore the presenting problems and risk indicators that are the best fit for referral to mTDIs versus more or less intensive interventions.

Other youth characteristics, beyond age and risk factors, did not demonstrably predict observed ESs. Youth race and ethnicity were reported inconsistently and according to widely varying conventions and, therefore, could not be tested as moderators of ESs. Furthermore, these identities and lived experiences are intertwined with structural inequalities and systems of discrimination and oppression that may be even more important for research to assess and relate to well-being outcomes. Consistent with several previous reviews [42,59], the breakdown of youth gender in the study sample did not predict ESs, although it is worth noting that studies infrequently made a note of nonbinary gender categories. These findings point to the need for a more careful and nuanced assessment of youth identities and lived experiences, including those connected to race, cultural identity, gender, sexual identity, and socioeconomic background, in studies testing the impact of mTDIs on youth.

Intervention Characteristics Affecting Outcomes

Smartphone- or tablet-based mTDIs were by far the most commonly reported primary technology in our sample of studies relative to presmartphone mobile devices or other (mobile VR and handheld video game) technologies. The type of technology did not appear to moderate ESs, with each of these types of mTDIs yielding average ESs that were statistically significant and of a similar size. Although few such studies were available in our sample, mobile VR technologies are promising avenues for further research, especially given the effectiveness of these technologies for conditions such as posttraumatic stress disorder, depression, and pediatric pain and anxiety during medical procedures [207,208]. Although no studies in our sample used wearable devices as the primary type of technology, a handful used a smartphone along with some sort of wearable biosensor such as a sleep monitor or a physical activity wristwatch [113,183,188,195]. As these technologies are likely to become more common over time, research should continue to explore their effectiveness as a primary or supplemental feature of mTDIs for youth well-being.

Consistent with the previous literature on TDIs and mTDIs for youth and adults [51,53,54,69], both cognitive behavioral and mindfulness- or acceptance-based interventions (as well as interventions that blended these 2 orientations) had significant effects on youth outcomes. In traditional in-person treatment settings, cognitive behavioral interventions, including third-wave cognitive behavioral treatments that include components of mindfulness and acceptance, have become increasingly popular as empirical support has grown for their effectiveness in treating a wide range of childhood disorders, including anxiety, depression, conduct or aggression problems, and attention difficulties [209]. However, a growing body of literature shows that many youths and families are not able to access these gold standard evidence-based treatments, whether because of lack of availability in their community or issues with accessing mental health care in general, such as cost and stigma [26,210,211]. Therefore, it is encouraging to see that the effectiveness of these evidence-based interventions can be translated into low-cost, mobile technology–delivered formats that can reach far larger numbers of youth, and perhaps in a way that is more generalizable to the naturalistic environments of their lives. Interventions in our sample that were grounded in one or multiple other theoretical frameworks, such as positive psychology or motivational interviewing, also yielded significant effects and did not appear to differ systematically in their effects from cognitive behavioral interventions. Although few studies have examined (m)TDIs using these theoretical approaches, these findings are consistent with previous research that has evaluated the impact of these specific theoretical orientations [202,212]. Our sample also included 3 mTDIs that did not specify a guiding theoretical framework [154,166,171], and collectively, they did not significantly differ from zero in their impact on youth outcomes. These findings should be interpreted with caution, given the small number of studies, wide CIs around their intercepts (mean effects), and the fact that these studies were excluded from the analysis of higher-quality studies. Future research should continue to explore the impact of mTDIs grounded in diverse theoretical frameworks. With the vast and rapidly growing number of available mTDIs purporting to support the well-being and mental health of youth, it is critical to ascertain the theoretical frameworks that may lend themselves best to developing active interventions with strong empirical support for their effectiveness in a mobile technology–delivered format.

As mTDIs have become increasingly popular, many have begun to incorporate additional technological features intended to better leverage the technology-based format to engage and sustain users’ attention. For example, some apps may personalize features of the intervention to the user’s personal preferences or tailor the intervention based on a user’s in-the-moment responses [75]. Others may include a social component, such as integration with social media platforms or a chat forum, or incorporate aspects of gamification, such as challenges or quests associated with points or badges [213]. Our moderator analyses showed no influence of these features on the ESs. However, it should be noted that many of these features are still relatively uncommon in mTDIs tested by research. For example, only 12% (10/83) of mTDIs in our meta-analysis mentioned a social component, and only 22% (18/83) described elements of personalization. Thus, the importance of these features may become more apparent as mTDIs targeting youth well-being begin to incorporate them more regularly and with greater proficiency. It is also likely that the most impactful mTDIs use an effective combination of these features to engage youth rather than simply incorporating one or another single design feature.

Given extensive theories [77,214,215] and some prior research on the benefits of outside guidance on engagement with mTDIs [53,57,216], we were somewhat surprised to see that the incorporation of support features, such as the provision of supportive accountability for technology use or supervised practice of the skills introduced by the mTDI, did not significantly moderate the ESs for mTDIs for youth. However, research in this area has been quite mixed, with several other studies finding little or no benefit from the inclusion of coaching or human support [42,54,55]. It is possible that mTDIs that do not rely on any component of human or bot support tend to be designed in a more comprehensive and self-contained way to offset this lack [42]. Moreover, the lack of significant moderation findings for these features in our meta-analysis does not necessarily indicate that these features are unimportant to the success of mTDIs in youth. Our meta-analysis captured an unusual sample of mTDI users, given that all effects were evaluated within the context of researcher-guided studies. As such, all participants were likely exposed to greater-than-usual accountability and support for their technology use as a function of taking part in a research study. This level of baseline accountability may have made it difficult to observe the added benefits of other forms of guidance or support. In addition, there are likely several kinds of informal human support—such as guidance or support from caregivers, teachers, or peers—that influence youth but were not typically assessed or reported in our sample of studies. Future research should continue to explore the kinds of support that are needed to maximize engagement with mTDIs for youth mental health and the ways in which these supports interact with factors such as youth age or risk characteristics (eg, younger or more clinically at-risk youth may require greater support).

Finally, moderation analyses provided tentative evidence that mTDIs can be effective regardless of the prescribed frequency or duration of use. In the overall sample, the number of prescribed weeks or sessions of use did not moderate the ESs. Moreover, mTDIs yielded significant ESs across all prescribed use frequencies, except for mTDIs that did not prescribe a use frequency or left use up to the user’s discretion. In the higher-quality sample, single-session interventions no longer yielded a significant ES; however, this finding should be interpreted with caution as only 2 single-session interventions remained in the higher-quality analytic sample.

These findings add to a growing body of mixed findings regarding the impact of prescribed and actual mTDI use on intervention outcomes [57,85,86]. Reviews of technology-based mental health interventions often highlight significant problems with treatment initiation and dropout [217-219], particularly for self-guided treatments that involve lower levels of structure and prescriptive guidance [220,221]. As such, it is critical for meta-analytic research to continue to explore trends in whether and how the prescribed dosage of mTDIs influences youth outcomes, with a particular focus on how different types of prescriptive guidance fit best with users’ specific needs. For example, youth with more severe clinical diagnoses may require a different dosage of mTDI than those engaging with a prevention-oriented mTDI designed to improve general well-being. Notably, there was wide variability in how studies reported on mTDI dosage and adherence. We chose to analyze the prescribed dosage and frequency of mTDI use, given that these statistics were most consistently reported. It would be ideal to analyze the actual completed use or uptake of mTDIs among participants as well; however, this was reported inconsistently among the studies. Several studies reported participant use only among study completers or dropped unengaged or low-engaged users from the analysis [121,127,152], whereas others used financial incentives for protocol compliance [151], potentially introducing bias into the use statistics recorded in research studies that have additional levels of accountability built into the protocol.

Limitations and Future Directions

The results of this meta-analysis should be interpreted with several limitations in mind. First, the quality of any meta-analysis is limited by the quality of the available primary studies. Only 60% of our coded ESs, which came from 48% (38/80) of the included studies, met our high-quality standards. We attempted to address this issue by running all analyses on both the full sample of studies and a subsample of higher-quality studies. Nevertheless, as a substantial number of studies and ESs of lower quality were dropped from the higher-quality subsample analysis, the statistical power declined, and some moderator categories could not be examined. Future research in this area should attend to existing procedures for designing and reporting on high-quality clinical intervention research. For example, there is a need for more studies that use larger sample sizes and retain larger percentages of their participants (regardless of their mTDI engagement), include more reliable outcome measurements, and use intent-to-treat analyses, as well as studies conducted by authors who were not involved with app development and are, therefore, able to provide a more unbiased assessment of the mTDI’s impact.

Relatedly, our coding scheme yielded incomplete data for many of our hypothesized moderators, given the variability in reports on characteristics of the tested mTDI (eg, human support features, duration and frequency of use), as well as youth characteristics (eg, race and ethnicity, gender, and risk characteristics). Therefore, additional studies that carefully document these kinds of data are needed to more thoroughly test the various moderators of the overall effects of mTDIs.

Given the lack of standard approaches for assessing publication bias within a 3-level meta-analysis, we applied 2 different techniques that produced conflicting findings. The trim-and-fill analysis pointed toward a potential underestimation of the true overall effect, whereas the Egger test pointed toward a potential overestimation of the true overall effect (and thus publication bias). The Egger test—in which ES dependencies are modeled—is likely more valid than the trim-and-fill results for our multilevel study. Nevertheless, the results of both techniques should be interpreted with caution, as neither was developed for a 3-level meta-analysis, and both rely on an assumption of homogeneity in ESs, which is often not met in meta-analytic studies, including this study.

As noted previously, another limitation in interpreting the present findings is that youth mTDI use likely differs within versus outside of a research study. At the very least, research participants tend to be much more informed about intervention goals and receive more structured support in the process of using an mTDI than users outside the research context. In more naturalistic settings, such as clinical practice or completely self-guided use, mTDIs are likely to be used much more flexibly and may be adapted to the needs and circumstances of individual youths.

When considering the implications of the present findings for the use of mTDIs in clinical and other naturalistic settings, it is also essential to consider issues such as accessibility and cultural sensitivity of mTDIs, which were not examined in this meta-analysis. Some types of mTDI technologies, such as VR headsets, may be prohibitively expensive and less readily accessible to some youth and families, particularly those living or seeking treatment in low-resource settings. Moreover, wide variability in families’ digital literacy and cultural norms around using technology to improve well-being is likely to play a key role in the effectiveness of these types of interventions. The studies in this meta-analysis were largely limited to Western cultural contexts, with very little research emerging from certain parts of the world (eg, our search did not yield any research from the African continent). In addition, reporting on youth characteristics such as socioeconomic status, race, and ethnicity was quite limited and followed widely varying conventions. Taken together, these issues limited our ability to test questions related to the cultural responsivity or tailoring of particular mTDIs based on youth cultural backgrounds—important questions that future research will need to explore to fulfill the promise of mTDIs for youth living in communities traditionally underserved by available in-person prevention and intervention programs.

In addition, participant ratings of social validity (ie, acceptability of mTDIs and user satisfaction), among the 25% (20/80) of studies for which these data were available, averaged under 60%, indicating that these technologies likely have room for improvement in user interface and user experience design. Further research should more diligently assess for different aspects of social validity, including qualitative feedback, and relate these elements to outcomes and potential moderators (eg, age of users, in-app features, and clinical severity of users), with the ultimate goal of improving the engagement and uptake, and thus impact, of mTDIs for youth.

Finally, future research should continue to explore the ideal setting and level of support for various mTDIs. The optimal approaches to integrating mTDIs with other mental health tools, as well as face-to-face interaction with mental health providers, remain largely an open question at this time. Although in some cases, mTDIs may serve as low-cost and accessible substitutions or adjunctive supports for face-to-face intervention programs in areas with limited access to health professionals, mTDIs may also be valuable as a way of socializing some youth to psychosocial interventions, with the goal of eventually connecting families to more traditional face-to-face services. In other cases, mTDIs may be most useful when accompanied by the support of a clinician or paraprofessional, such as a teacher, mentor, or academic advisor who guides the youth through the technology-based intervention [118,120,158].

Study Strengths and Conclusions

This is the first comprehensive 3-level meta-analysis to evaluate the effects of mTDIs on diverse aspects of well-being in youth. We built on prior work by taking an inclusive but rigorous approach to testing the impact of interventions using various types of mobile technologies on diverse outcomes across childhood, adolescence, and young adulthood. Using a 3-level approach to meta-analysis, we were able to synthesize all relevant ESs while accounting for both within- and between-study heterogeneity in ESs and maximizing the statistical power of the analyses. Moreover, we coded a comprehensive set of more than 3 dozen potential moderators of study effects and found sufficient information to analyze the moderating role of >20 of these variables, including technological and support features (eg, human support and availability of in-app reminders) that are hypothesized to be critical to the success of these interventions but have rarely been tested as predictors of effects in previous meta-analyses.
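To sketch what the 3-level structure referenced here amounts to (in our own notation; the precise specification is given in the Methods section), each observed effect's variability is partitioned into sampling error plus within- and between-study heterogeneity, which is what allows multiple ESs from the same study to be retained without treating them as independent.

```latex
% Schematic 3-level random-effects decomposition (notation ours, not the paper's):
% g_ij = effect size i from study j; mu = overall mean effect.
\[
  g_{ij} = \mu + u_j + v_{ij} + e_{ij}, \qquad
  \operatorname{Var}(g_{ij}) =
      \underbrace{\tau^2_{\mathrm{between}}}_{\text{level 3: between studies}}
    + \underbrace{\tau^2_{\mathrm{within}}}_{\text{level 2: within studies}}
    + \underbrace{\sigma^2_{ij}}_{\text{level 1: sampling variance}} .
\]
```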

Our synthesis of primary research confirms the significant benefits of mTDIs across a variety of psychosocial outcomes, comparison types (ie, no intervention, inert, and clinical), and time points (both immediate postintervention and longer-term follow-up effects). Although additional high-quality research on which kinds of mTDIs are most effective and under what conditions is clearly needed, we conclude that mTDIs have the potential to improve multiple aspects of youth well-being, and may confer significant, durable benefits in a broad array of domains, particularly for youth who are not otherwise getting their mental health needs met.

Acknowledgments

The authors would like to thank Danielle Arntson, Jeremy Astesano, Maria Bandriwsky, Turner Block, Abigail Blum, Marie Chamberlain, Loan Ho, Cherrelle Jones, Lauren Nowakowski, Jean Rhodes, Emily Romero, Geert-Jan Stams, Allison Tetzlaff, and Charlotte Utschig. This research was supported by funding from the Center for Evidence-Based Mentoring.

Authors' Contributions

CC and ER conceptualized the study, oversaw sample selection and coding, and drafted the manuscript. KB, SB, MH, NF, and KC coded studies for analysis and contributed to the manuscript. MA ran analyses and contributed to manuscript writing.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Additional search strategy details.

PDF File (Adobe PDF File), 196 KB

Multimedia Appendix 2

Selected characteristics of 80 studies evaluating 83 mobile technology-delivered interventions for youth.

PDF File (Adobe PDF File), 402 KB

  1. Bitsko RH, Holbrook JR, Ghandour RM, Blumberg SJ, Visser SN, Perou R, et al. Epidemiology and impact of health care provider-diagnosed anxiety and depression among US children. J Dev Behav Pediatr 2018 Jun;39(5):395-403 [FREE Full text] [CrossRef] [Medline]
  2. Twenge JM, Joiner TE, Rogers ML, Martin GN. Increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. Adolescents after 2010 and links to increased new media screen time. Clin Psychol Sci 2017 Nov 14;6(1):3-17. [CrossRef]
  3. Kessler RC, Amminger GP, Aguilar-Gaxiola S, Alonso J, Lee S, Ustün TB. Age of onset of mental disorders: a review of recent literature. Curr Opin Psychiatry 2007 Jul;20(4):359-364 [FREE Full text] [CrossRef] [Medline]
  4. Kessler RC, Berglund P, Demler O, Jin R, Merikangas KR, Walters EE. Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry 2005 Jun;62(6):593-602. [CrossRef] [Medline]
  5. Merikangas KR, He J, Burstein M, Swanson SA, Avenevoli S, Cui L, et al. Lifetime prevalence of mental disorders in U.S. adolescents: results from the National Comorbidity Survey Replication--Adolescent Supplement (NCS-A). J Am Acad Child Adolesc Psychiatry 2010 Oct;49(10):980-989 [FREE Full text] [CrossRef] [Medline]
  6. American College Health Association. American College Health Association-National College Health Assessment II: Reference Group Executive Summary Spring 2019. American College Health Association. 2019.   URL: https://www.acha.org/documents/ncha/NCHA-II_SPRING_2019_US_REFERENCE_GROUP_EXECUTIVE_SUMMARY.pdf [accessed 2021-10-01]
  7. Belden AC, Thomson NR, Luby JL. Temper tantrums in healthy versus depressed and disruptive preschoolers: defining tantrum behaviors associated with clinical problems. J Pediatr 2008 Jan;152(1):117-122 [FREE Full text] [CrossRef] [Medline]
  8. Bucchianeri MM, Arikian AJ, Hannan PJ, Eisenberg ME, Neumark-Sztainer D. Body dissatisfaction from adolescence to young adulthood: findings from a 10-year longitudinal study. Body Image 2013 Jan;10(1):1-7 [FREE Full text] [CrossRef] [Medline]
  9. Feld LD, Shusterman A. Into the pressure cooker: student stress in college preparatory high schools. J Adolesc 2015 Jun 14;41(1):31-42. [CrossRef] [Medline]
  10. Leadbeater B, Thompson K, Gruppuso V. Co-occurring trajectories of symptoms of anxiety, depression, and oppositional defiance from adolescence to young adulthood. J Clin Child Adolesc Psychol 2012 Nov;41(6):719-730 [FREE Full text] [CrossRef] [Medline]
  11. Eisenberg D, Lipson SK, Ceglarek P, Zhou S, Talaski A, Patterson A. The Healthy Minds Study Fall Data Report. Healthy Minds Network.   URL: https://healthymindsnetwork.org/wp-content/uploads/2020/08/f2019_HMS_national_final.pdf [accessed 2021-10-01]
  12. Ghandour RM, Sherman LJ, Vladutiu CJ, Ali MM, Lynch SE, Bitsko RH, et al. Prevalence and treatment of depression, anxiety, and conduct problems in US children. J Pediatrics 2019 Mar;206:256-67.e3 [FREE Full text] [CrossRef] [Medline]
  13. Merikangas KR, He J, Burstein M, Swendsen J, Avenevoli S, Case B, et al. Service utilization for lifetime mental disorders in U.S. adolescents: results of the National Comorbidity Survey-Adolescent Supplement (NCS-A). J Am Acad Child Adolesc Psychiatry 2011 Jan;50(1):32-45 [FREE Full text] [CrossRef] [Medline]
  14. Whitney DG, Peterson MD. US national and state-level prevalence of mental health disorders and disparities of mental health care use in children. JAMA Pediatr 2019 Apr 01;173(4):389. [CrossRef] [Medline]
  15. Hodgkinson S, Godoy L, Beers LS, Lewin A. Improving mental health access for low-income children and families in the primary care setting. Pediatrics 2017 Jan;139(1):e20151175 [FREE Full text] [CrossRef] [Medline]
  16. Avenevoli S, Swendsen J, He J, Burstein M, Merikangas KR. Major depression in the National Comorbidity Survey-adolescent supplement: prevalence, correlates, and treatment. J Am Acad Child Adolesc Psychiatry 2015 Jan;54(1):37-44.e2 [FREE Full text] [CrossRef] [Medline]
  17. Hom MA, Stanley IH, Joiner TE. Evaluating factors and interventions that influence help-seeking and mental health service utilization among suicidal individuals: a review of the literature. Clin Psychol Rev 2015 Aug;40:28-39. [CrossRef] [Medline]
  18. Gallagher R. National survey of college counseling centers 2014. The International Association of Counseling Services (IACS). 2015.   URL: http://d-scholarship.pitt.edu/28178/ [accessed 2021-10-06]
  19. LeViness P, Bershad C, Gorman K, Braun L, Murray T. The association for university and college counseling center directors annual survey – Public version 2018. The Association for University and College Counseling Center Directors. 2018.   URL: https://www.aucccd.org/assets/documents/Survey/2018%20AUCCCD%20Survey-Public-June%2012-FINAL.pdf [accessed 2021-10-02]
  20. Martínez-Hernáez A, DiGiacomo SM, Carceller-Maicas N, Correa-Urquiza M, Martorell-Poveda MA. Non-professional-help-seeking among young people with depression: a qualitative study. BMC Psychiatry 2014 Apr 28;14:124 [FREE Full text] [CrossRef] [Medline]
  21. Radez J, Reardon T, Creswell C, Lawrence PJ, Evdoka-Burton G, Waite P. Why do children and adolescents (not) seek and access professional help for their mental health problems? A systematic review of quantitative and qualitative studies. Eur Child Adolesc Psychiatry 2021 Feb 21;30(2):183-211 [FREE Full text] [CrossRef] [Medline]
  22. Rickwood D, Deane FP, Wilson CJ, Ciarrochi J. Young people’s help-seeking for mental health problems. Aus e J Advancement Mental Health 2014 Dec 17;4(3):218-251. [CrossRef]
  23. Salaheddin K, Mason B. Identifying barriers to mental health help-seeking among young adults in the UK: a cross-sectional survey. Br J Gen Pract 2016 Sep 29;66(651):e686-e692. [CrossRef] [Medline]
  24. van Vulpen KS, Habegar A, Simmons T. Rural school-based mental health services: parent perceptions of needs and barriers. Children Schools 2018;40(2):104-111. [CrossRef]
  25. Gronholm PC, Nye E, Michelson D. Stigma related to targeted school-based mental health interventions: a systematic review of qualitative evidence. J Affect Disord 2018 Nov;240:17-26. [CrossRef] [Medline]
  26. Planey AM, Smith SM, Moore S, Walker TD. Barriers and facilitators to mental health help-seeking among African American youth and their families: a systematic review study. Child Youth Services Rev 2019 Jun;101:190-200. [CrossRef]
  27. Babatunde GB, van Rensburg AJ, Bhana A, Petersen I. Barriers and facilitators to child and adolescent mental health services in low-and-middle-income countries: a scoping review. Glob Soc Welf 2019 Jun 07;8(1):29-46. [CrossRef]
  28. Ingoldsby EM. Review of interventions to improve family engagement and retention in parent and child mental health programs. J Child Fam Stud 2010 Oct 01;19(5):629-645. [CrossRef] [Medline]
  29. Salloum A, Johnco C, Lewin AB, McBride NM, Storch EA. Barriers to access and participation in community mental health treatment for anxious children. J Affect Disord 2016 May 15;196:54-61. [CrossRef] [Medline]
  30. Sarikhani Y, Bastani P, Rafiee M, Kavosi Z, Ravangard R. Key barriers to the provision and utilization of mental health services in low-and middle-income countries: a scope study. Community Ment Health J 2021 Jul 13;57(5):836-852. [CrossRef] [Medline]
  31. Anderson M, Jiang J. Teens, social media and technology 2018. Pew Research Center. 2018.   URL: https://www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018/ [accessed 2021-10-02]
  32. Launching the new influence central trend report!. Influence Central. 2018.   URL: https://influence-central.com/trendspotting/launching-the-new-influence-central-trend-report [accessed 2021-10-08]
  33. Rideout V, Fox S, Well Being Trust. Digital health practices, social media use, and mental well-being among teens and young adults in the U.S. Articles, Abstract, and Reports. 2018.   URL: https://digitalcommons.psjhealth.org/publications/1093 [accessed 2021-10-02]
  34. Kubb C, Foran HM. Online health information seeking by parents for their children: systematic review and agenda for further research. J Med Internet Res 2020 Aug 25;22(8):e19985 [FREE Full text] [CrossRef] [Medline]
  35. Struthers A, Charette C, Bapuji SB, Winters S, Ye X, Metge C, et al. The acceptability of E-mental health services for children, adolescents, and young adults: a systematic search and review. Can J Community Mental Health 2015 Jul 01;34(2):1-21. [CrossRef]
  36. Pretorius C, Chambers D, Coyle D. Young people's online help-seeking and mental health difficulties: systematic narrative review. J Med Internet Res 2019 Nov 19;21(11):e13873 [FREE Full text] [CrossRef] [Medline]
  37. Wright JH, Mishkind M, Eells TD, Chan SR. Computer-assisted cognitive-behavior therapy and mobile apps for depression and anxiety. Curr Psychiatry Rep 2019 Jun 27;21(7):62. [CrossRef] [Medline]
  38. Ahmed I, Ahmad NS, Ali S, Ali S, George A, Saleem Danish H, et al. Medication adherence apps: review and content analysis. JMIR Mhealth Uhealth 2018 Mar 16;6(3):e62 [FREE Full text] [CrossRef] [Medline]
  39. Bauer M, Glenn T, Geddes J, Gitlin M, Grof P, Kessing LV, et al. Smartphones in mental health: a critical review of background issues, current status and future concerns. Int J Bipolar Disord 2020 Jan 10;8(1):2 [FREE Full text] [CrossRef] [Medline]
  40. Bry LJ, Chou T, Miguel E, Comer JS. Consumer smartphone apps marketed for child and adolescent anxiety: a systematic review and content analysis. Behav Ther 2018 Mar;49(2):249-261 [FREE Full text] [CrossRef] [Medline]
  41. Wu A, Scult MA, Barnes ED, Betancourt JA, Falk A, Gunning FM. Smartphone apps for depression and anxiety: a systematic review and meta-analysis of techniques to increase engagement. NPJ Digit Med 2021 Feb 11;4(1):20 [FREE Full text] [CrossRef] [Medline]
  42. Firth J, Torous J, Nicholas J, Carney R, Pratap A, Rosenbaum S, et al. The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017 Oct;16(3):287-298 [FREE Full text] [CrossRef] [Medline]
  43. Torous J, Andersson G, Bertagnoli A, Christensen H, Cuijpers P, Firth J, et al. Towards a consensus around standards for smartphone apps and digital mental health. World Psychiatry 2019 Feb;18(1):97-98 [FREE Full text] [CrossRef] [Medline]
  44. Huguet A, Rao S, McGrath PJ, Wozney L, Wheaton M, Conrod J, et al. A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLoS One 2016;11(5):e0154248 [FREE Full text] [CrossRef] [Medline]
  45. Grist R, Porter J, Stallard P. Mental health mobile apps for preadolescents and adolescents: a systematic review. J Med Internet Res 2017 May 25;19(5):e176 [FREE Full text] [CrossRef] [Medline]
  46. Colbert S, Thornton L, Richmond R. Smartphone apps for managing alcohol consumption: a literature review. Addict Sci Clin Pract 2020 May 07;15(1):17 [FREE Full text] [CrossRef] [Medline]
  47. Hollis C, Falconer CJ, Martin JL, Whittington C, Stockton S, Glazebrook C, et al. Annual research review: digital health interventions for children and young people with mental health problems - a systematic and meta-review. J Child Psychol Psychiatry 2017 Apr 10;58(4):474-503. [CrossRef] [Medline]
  48. Baumel A, Muench F, Edan S, Kane JM. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J Med Internet Res 2019 Sep 25;21(9):e14567 [FREE Full text] [CrossRef] [Medline]
  49. Lattie EG, Kashima K, Duffecy JL. An open trial of internet-based cognitive behavioral therapy for first year medical students. Internet Interv 2019 Dec;18:100279 [FREE Full text] [CrossRef] [Medline]
  50. Linardon J, Shatte A, Messer M, Firth J, Fuller-Tyszkiewicz M. E-mental health interventions for the treatment and prevention of eating disorders: an updated systematic review and meta-analysis. J Consult Clin Psychol 2020 Nov;88(11):994-1007. [CrossRef] [Medline]
  51. Ebert DD, Zarski A, Christensen H, Stikkelbroek Y, Cuijpers P, Berking M, et al. Internet and computer-based cognitive behavioral therapy for anxiety and depression in youth: a meta-analysis of randomized controlled outcome trials. PLoS One 2015;10(3):e0119895 [FREE Full text] [CrossRef] [Medline]
  52. Harrer M, Adam SH, Baumeister H, Cuijpers P, Karyotaki E, Auerbach RP, et al. Internet interventions for mental health in university students: a systematic review and meta-analysis. Int J Methods Psychiatr Res 2019 Jun 26;28(2):e1759 [FREE Full text] [CrossRef] [Medline]
  53. Linardon J, Cuijpers P, Carlbring P, Messer M, Fuller-Tyszkiewicz M. The efficacy of app-supported smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials. World Psychiatry 2019 Oct 09;18(3):325-336 [FREE Full text] [CrossRef] [Medline]
  54. Pennant ME, Loucas CE, Whittington C, Creswell C, Fonagy P, Fuggle P, Expert Advisory Group. Computerised therapies for anxiety and depression in children and young people: a systematic review and meta-analysis. Behav Res Ther 2015 Apr;67:1-18. [CrossRef] [Medline]
  55. Podina IR, Mogoase C, David D, Szentagotai A, Dobrean A. A meta-analysis on the efficacy of technology mediated CBT for anxious children and adolescents. J Rat Emo Cognitive Behav Ther 2015 Nov 17;34(1):31-50. [CrossRef]
  56. Rooksby M, Elouafkaoui P, Humphris G, Clarkson J, Freeman R. Internet-assisted delivery of cognitive behavioural therapy (CBT) for childhood anxiety: systematic review and meta-analysis. J Anxiety Disord 2015 Jan;29:83-92. [CrossRef] [Medline]
  57. Vigerland S, Lenhard F, Bonnert M, Lalouni M, Hedman E, Ahlen J, et al. Internet-delivered cognitive behavior therapy for children and adolescents: a systematic review and meta-analysis. Clin Psychol Rev 2016 Dec;50:1-10 [FREE Full text] [CrossRef] [Medline]
  58. Ye X, Bapuji SB, Winters SE, Struthers A, Raynard M, Metge C, et al. Effectiveness of internet-based interventions for children, youth, and young adults with anxiety and/or depression: a systematic review and meta-analysis. BMC Health Serv Res 2014 Jul 18;14:313 [FREE Full text] [CrossRef] [Medline]
  59. Florean IS, Dobrean A, Păsărelu CR, Georgescu RD, Milea I. The efficacy of internet-based parenting programs for children and adolescents with behavior problems: a meta-analysis of randomized clinical trials. Clin Child Fam Psychol Rev 2020 Dec 08;23(4):510-528. [CrossRef] [Medline]
  60. Grist R, Croker A, Denne M, Stallard P. Technology delivered interventions for depression and anxiety in children and adolescents: a systematic review and meta-analysis. Clin Child Fam Psychol Rev 2019 Jun 18;22(2):147-171 [FREE Full text] [CrossRef] [Medline]
  61. Donker T, Van Esveld S, Fischer N, Van Straten A. 0Phobia - towards a virtual cure for acrophobia: study protocol for a randomized controlled trial. Trials 2018 Aug 09;19(1):433 [FREE Full text] [CrossRef] [Medline]
  62. Firth J, Torous J, Nicholas J, Carney R, Rosenbaum S, Sarris J. Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. J Affect Disord 2017 Aug 15;218:15-22 [FREE Full text] [CrossRef] [Medline]
  63. Weisel KK, Fuhrmann LM, Berking M, Baumeister H, Cuijpers P, Ebert DD. Standalone smartphone apps for mental health-a systematic review and meta-analysis. NPJ Digit Med 2019 Dec 2;2(1):118 [FREE Full text] [CrossRef] [Medline]
  64. Conley CS, Durlak JA, Shapiro JB, Kirsch AC, Zahniser E. A meta-analysis of the impact of universal and indicated preventive technology-delivered interventions for higher education students. Prev Sci 2016 Aug;17(6):659-678. [CrossRef] [Medline]
  65. Domhardt M, Steubl L, Baumeister H. Internet- and mobile-based interventions for mental and somatic conditions in children and adolescents. Z Kinder Jugendpsychiatr Psychother 2020 Jan 13;48(1):33-46 [FREE Full text] [CrossRef] [Medline]
  66. Lindhiem O, Bennett CB, Rosen D, Silk J. Mobile technology boosts the effectiveness of psychotherapy and behavioral interventions: a meta-analysis. Behav Modif 2015 Nov 17;39(6):785-804 [FREE Full text] [CrossRef] [Medline]
  67. Buttazzoni A, Brar K, Minaker L. Smartphone-based interventions and internalizing disorders in youth: systematic review and meta-analysis. J Med Internet Res 2021 Jan 11;23(1):e16490 [FREE Full text] [CrossRef] [Medline]
  68. Leech T, Dorstyn D, Taylor A, Li W. Mental health apps for adolescents and young adults: a systematic review of randomised controlled trials. Child Youth Serv Rev 2021 Aug;127:106073. [CrossRef]
  69. Gál É, Ștefan S, Cristea IA. The efficacy of mindfulness meditation apps in enhancing users’ well-being and mental health related outcomes: a meta-analysis of randomized controlled trials. J Affect Disord 2021 Jan;279:131-142. [CrossRef]
  70. Flujas-Contreras JM, García-Palacios A, Gómez I. Technology-based parenting interventions for children's physical and psychological health: a systematic review and meta-analysis. Psychol Med 2019 Aug;49(11):1787-1798. [CrossRef] [Medline]
  71. Välimäki M, Anttila K, Anttila M, Lahti M. Web-based interventions supporting adolescents and young people with depressive symptoms: systematic review and meta-analysis. JMIR Mhealth Uhealth 2017 Dec 08;5(12):e180 [FREE Full text] [CrossRef] [Medline]
  72. Bennett SD, Cuijpers P, Ebert DD, McKenzie Smith M, Coughtrey AE, Heyman I, et al. Practitioner review: unguided and guided self-help interventions for common mental health disorders in children and adolescents: a systematic review and meta-analysis. J Child Psychol Psychiatry 2019 Aug 18;60(8):828-847. [CrossRef] [Medline]
  73. Auerbach RP, Mortier P, Bruffaerts R, Alonso J, Benjet C, Cuijpers P, WHO WMH-ICS Collaborators. WHO World mental health surveys international college student project: prevalence and distribution of mental disorders. J Abnorm Psychol 2018 Oct;127(7):623-638 [FREE Full text] [CrossRef] [Medline]
  74. Campbell OL, Bann D, Patalay P. The gender gap in adolescent mental health: a cross-national investigation of 566,829 adolescents across 73 countries. SSM Popul Health 2021 Mar;13:100742 [FREE Full text] [CrossRef] [Medline]
  75. Bakker D, Kazantzis N, Rickwood D, Rickard N. Mental health smartphone apps: review and evidence-based recommendations for future developments. JMIR Ment Health 2016 Mar 01;3(1):e7 [FREE Full text] [CrossRef] [Medline]
  76. Baumeister H, Reichler L, Munzinger M, Lin J. The impact of guidance on internet-based mental health interventions — a systematic review. Internet Interv 2014 Oct;1(4):205-215. [CrossRef]
  77. Mohr DC, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res 2011 Mar 10;13(1):e30 [FREE Full text] [CrossRef] [Medline]
  78. Heber E, Ebert DD, Lehr D, Cuijpers P, Berking M, Nobis S, et al. The benefit of web- and computer-based interventions for stress: a systematic review and meta-analysis. J Med Internet Res 2017 Feb 17;19(2):e32 [FREE Full text] [CrossRef] [Medline]
  79. Conley CS, Durlak JA, Kirsch AC. A meta-analysis of universal mental health prevention programs for higher education students. Prev Sci 2015 May 7;16(4):487-507. [CrossRef] [Medline]
  80. Bakırcı-Taylor AL, Reed DB, McCool B, Dawson JA. mHealth improved fruit and vegetable accessibility and intake in young children. J Nutr Educ Behav 2019 May;51(5):556-566. [CrossRef] [Medline]
  81. Elliott SN, Frey JR, Davies M. Systems for assessing and improving students' social skills to achieve academic competence. In: Durlak JA, Domitrovich CE, Weissberg RP, Gullotta TP, editors. Handbook of social and emotional learning: Research and practice. New York, United States: The Guilford Press; 2015:301-319.
  82. Gottfredson DC, Cook TD, Gardner FE, Gorman-Smith D, Howe GW, Sandler IN, et al. Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: next generation. Prev Sci 2015 Oct;16(7):893-926 [FREE Full text] [CrossRef] [Medline]
  83. Payton JW, Wardlaw DM, Graczyk PA, Bloodworth MR, Tompsett CJ, Weissberg RP. Social and emotional learning: a framework for promoting mental health and reducing risk behavior in children and youth. J Sch Health 2000 May;70(5):179-185. [CrossRef] [Medline]
  84. Salas E, Cannon-Bowers JA. The science of training: a decade of progress. Annu Rev Psychol 2001 Feb;52(1):471-499. [CrossRef] [Medline]
  85. Farrer L, Gulliver A, Chan JK, Batterham PJ, Reynolds J, Calear A, et al. Technology-based interventions for mental health in tertiary students: systematic review. J Med Internet Res 2013 May 27;15(5):e101 [FREE Full text] [CrossRef] [Medline]
  86. Linardon J, Fuller-Tyszkiewicz M. Attrition and adherence in smartphone-delivered interventions for mental health problems: a systematic and meta-analytic review. J Consult Clin Psychol 2020 Jan;88(1):1-13. [CrossRef] [Medline]
  87. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 4th Edition. Washington, DC: American Psychiatric Association; 1994.
  88. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th Edition. Washington, DC: American Psychiatric Association; 2013.
  89. van der Stouwe T, Gubbels J, Castenmiller YL, van der Zouwen M, Asscher JJ, Hoeve M, et al. The effectiveness of social skills training (SST) for juvenile delinquents: a meta-analytical review. J Exp Criminol 2020 Mar 03;17(3):369-396. [CrossRef]
  90. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998 Jun 01;52(6):377-384 [FREE Full text] [CrossRef] [Medline]
  91. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Cochrane Bias Methods Group, Cochrane Statistical Methods Group. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011 Oct 18;343:d5928 [FREE Full text] [CrossRef] [Medline]
  92. Thomas B, Ciliska D, Dobbins M, Micucci S. A process for systematically reviewing the literature: providing the research evidence for public health nursing interventions. Worldviews Evid Based Nurs 2004 Jun 23;1(3):176-184. [CrossRef] [Medline]
  93. Bartko JJ, Carpenter WT. On the methods and theory of reliability. J Nerv Ment Dis 1976 Nov;163(5):307-317. [CrossRef] [Medline]
  94. Meyer GJ. Assessing reliability: critical corrections for a critical examination of the Rorschach Comprehensive System. Psychol Assess 1997;9(4):480-489. [CrossRef]
  95. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-analysis, First edition. United Kingdom: John Wiley & Sons, Ltd; 2009.
  96. Lipsey MW, Wilson DB. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications; 2001.
  97. Mullen B. Advanced BASIC Meta-Analysis, First edition. Hillsdale, NJ: Lawrence Erlbaum Associates; 1989.
  98. de Vries SL, Hoeve M, Assink M, Stams GJ, Asscher JJ. Practitioner review: effective ingredients of prevention programs for youth at risk of persistent juvenile delinquency--recommendations for clinical practice. J Child Psychol Psychiatry 2015 Feb 21;56(2):108-121. [CrossRef] [Medline]
  99. van Loon AW, Creemers HE, Beumer WY, Okorn A, Vogelaar S, Saab N, et al. Can schools reduce adolescent psychological stress? A multilevel meta-analysis of the effectiveness of school-based intervention programs. J Youth Adolesc 2020 Jun 7;49(6):1127-1145 [FREE Full text] [CrossRef] [Medline]
  100. Assink M, Wibbelink CJ. Fitting three-level meta-analytic models in R: a step-by-step tutorial. TQMP 2016 Oct 01;12(3):154-174. [CrossRef]
  101. Cheung MW. Modeling dependent effect sizes with three-level meta-analyses: a structural equation modeling approach. Psychol Methods 2014 Jun;19(2):211-229. [CrossRef] [Medline]
  102. Hox JJ. Multilevel Analysis: Techniques and Applications, second edition. New York: Routledge; 2010.
  103. Van den Noortgate W, López-López JA, Marín-Martínez F, Sánchez-Meca J. Three-level meta-analysis of dependent effect sizes. Behav Res Methods 2013 Jun;45(2):576-594. [CrossRef] [Medline]
  104. Van den Noortgate W, López-López JA, Marín-Martínez F, Sánchez-Meca J. Meta-analysis of multiple outcomes: a multilevel approach. Behav Res Methods 2015 Dec 1;47(4):1274-1294. [CrossRef] [Medline]
  105. Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw 2010;36(3):1-48. [CrossRef]
  106. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2020.
  107. Knapp G, Hartung J. Improved tests for a random effects meta-regression with a single covariate. Stat Med 2003 Sep 15;22(17):2693-2710. [CrossRef] [Medline]
  108. Rosenthal R. Writing meta-analytic reviews. Psychol Bull 1995;118(2):183-192. [CrossRef]
  109. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics 2000;56(2):455-463. [CrossRef] [Medline]
  110. Duval S, Tweedie R. A nonparametric “trim and fill” method of accounting for publication bias in meta-analysis. J Am Stat Assoc 2000 Mar;95(449):89-98. [CrossRef]
  111. Sterne JA, Gavaghan D, Egger M. Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature. J Clin Epidemiol 2000 Nov;53(11):1119-1129. [CrossRef] [Medline]
  112. Anastasiadou D, Folkvord F, Brugnera A, Cañas Vinader L, SerranoTroncoso E, Carretero Jardí C, et al. An mHealth intervention for the treatment of patients with an eating disorder: a multicenter randomized controlled trial. Int J Eat Disord 2020 Jul 08;53(7):1120-1131. [CrossRef] [Medline]
  113. Antle AN, Chesick L, Sridharan SK, Cramer E. East meets west: a mobile brain-computer system that helps children living in poverty learn to self-regulate. Pers Ubiquit Comput 2018 Jun 12;22(4):839-866. [CrossRef]
  114. Arps ER, Friesen MD, Overall NC. Promoting youth mental health via text-messages: a New Zealand feasibility study. Appl Psychol Health Well Being 2018 Nov 19;10(3):457-480. [CrossRef] [Medline]
  115. Baskerville NB, Struik LL, Guindon GE, Norman CD, Whittaker R, Burns C, et al. Effect of a mobile phone intervention on quitting smoking in a young adult population of smokers: randomized controlled trial. JMIR Mhealth Uhealth 2018 Oct 23;6(10):e10893 [FREE Full text] [CrossRef] [Medline]
  116. Beidel DC, Tuerk PW, Spitalnick J, Bowers CA, Morrison K. Treating childhood social anxiety disorder with virtual environments and serious games: a randomized trial. Behav Ther 2021 Nov;52(6):1351-1363. [CrossRef] [Medline]
  117. Bohleber L, Crameri A, Eich-Stierli B, Telesko R, von Wyl A. Can we foster a culture of peer support and promote mental health in adolescence using a web-based app? A control group study. JMIR Ment Health 2016 Sep 23;3(3):e45 [FREE Full text] [CrossRef] [Medline]
  118. Borjalilu S, Mazaheri MA, Talebpour A. Effectiveness of mindfulness-based stress management in the mental health of Iranian university students: a comparison of blended therapy, face-to-face sessions, and mHealth app (Aramgar). Iran J Psychiatry Behav Sci 2019 May 12;13(2):e84726. [CrossRef]
  119. Breitenstein SM, Fogg L, Ocampo EV, Acosta DI, Gross D. Parent use and efficacy of a self-administered, tablet-based parent training intervention: a randomized controlled trial. JMIR Mhealth Uhealth 2016 Apr 20;4(2):e36 [FREE Full text] [CrossRef] [Medline]
  120. Broglia E, Millings A, Barkham M. Counseling with guided use of a mobile well-being app for students experiencing anxiety or depression: clinical outcomes of a feasibility trial embedded in a student counseling service. JMIR Mhealth Uhealth 2019 Aug 15;7(8):e14318 [FREE Full text] [CrossRef] [Medline]
  121. Bruehlman-Senecal E, Hook CJ, Pfeifer JH, FitzGerald C, Davis B, Delucchi KL, et al. Smartphone app to address loneliness among college students: pilot randomized controlled trial. JMIR Ment Health 2020 Oct 20;7(10):e21496 [FREE Full text] [CrossRef] [Medline]
  122. Choi EK, Jung E, Bae E, Ji Y, Lee A. Two-step integrative education program and mHealth for Korean children with spina bifida: a quasi-experimental pre-post study. J Pediatr Nurs 2020 Mar;51:e92-e99. [CrossRef] [Medline]
  123. Chow CH, Van Lieshout RJ, Schmidt LA, Buckley N. Tablet-based intervention for reducing children's preoperative anxiety: a pilot study. J Dev Behav Pediatr 2017;38(6):409-416. [CrossRef] [Medline]
  124. Clarke PJF, Bedford K, Notebaert L, Bucks RS, Rudaizky D, Milkins BC, et al. Assessing the therapeutic potential of targeted attentional bias modification for insomnia using smartphone delivery. Psychother Psychosom 2016;85(3):187-189. [CrossRef] [Medline]
  125. Conley CS, Huguenel BM, Duffecy J, Silton R. Benefits of a mobile mindfulness application for depressed college students: preliminary findings from an ongoing RCT. 2019 Presented at: Association for Behavioral and Cognitive Therapies Annual Convention; 2019; Atlanta, GA.
  126. Cumino DO, Vieira JE, Lima LC, Stievano LP, Silva RA, Mathias LA. Smartphone-based behavioural intervention alleviates children's anxiety during anaesthesia induction: a randomised controlled trial. Eur J Anaesthesiol 2017 Mar;34(3):169-175. [CrossRef] [Medline]
  127. de Niet J, Timman R, Bauer S, van den Akker E, Buijks H, de Klerk C, et al. The effect of a short message service maintenance treatment on body mass index and psychological well-being in overweight and obese children: a randomized controlled trial. Pediatr Obes 2012 Jun;7(3):205-219. [CrossRef] [Medline]
  128. Deluca P, Coulton S, Alam MF, Boniface S, Cohen D, Donoghue K, et al. Brief interventions to prevent excessive alcohol use in adolescents at low-risk presenting to Emergency Departments: three-arm, randomised trial of effectiveness and cost-effectiveness. Int J Drug Policy 2021 Jul;93:103113 [FREE Full text] [CrossRef] [Medline]
  129. Earle AM, LaBrie JW, Boyle SC, Smith D. In pursuit of a self-sustaining college alcohol intervention: deploying gamified PNF in the real world. Addict Behav 2018 May;80:71-81 [FREE Full text] [CrossRef] [Medline]
  130. Egilsson E, Bjarnason R, Njardvik U. Usage and weekly attrition in a smartphone-based health behavior intervention for adolescents: pilot randomized controlled trial. JMIR Form Res 2021 Feb 17;5(2):e21432 [FREE Full text] [CrossRef] [Medline]
  131. Elicherla SR, Bandi S, Nuvvula S, Challa RS, Saikiran KV, Priyanka VJ. Comparative evaluation of the effectiveness of a mobile app (Little Lovely Dentist) and the tell-show-do technique in the management of dental anxiety and fear: a randomized controlled trial. J Dent Anesth Pain Med 2019 Dec;19(6):369-378 [FREE Full text] [CrossRef] [Medline]
  132. Fish MT, Saul AD. The gamification of meditation: a randomized-controlled study of a prescribed mobile mindfulness meditation application in reducing college students’ depression. Simul Gaming 2019 Jun 04;50(4):419-435. [CrossRef]
  133. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 2017 Jun 06;4(2):e19 [FREE Full text] [CrossRef] [Medline]
  134. Flett JA, Conner TS, Riordan BC, Patterson T, Hayne H. App-based mindfulness meditation for psychological distress and adjustment to college in incoming university students: a pragmatic, randomised, waitlist-controlled trial. Psychol Health 2020 Sep 12;35(9):1049-1074. [CrossRef] [Medline]
  135. Gajecki M, Berman AH, Sinadinovic K, Rosendahl I, Andersson C. Mobile phone brief intervention applications for risky alcohol use among university students: a randomized controlled study. Addict Sci Clin Pract 2014 Jul 02;9:11 [FREE Full text] [CrossRef] [Medline]
  136. Gajecki M, Andersson C, Rosendahl I, Sinadinovic K, Fredriksson M, Berman AH. Skills training via smartphone app for university students with excessive alcohol consumption: a randomized controlled trial. Int J Behav Med 2017 Oct 21;24(5):778-788 [FREE Full text] [CrossRef] [Medline]
  137. Gibson KL. Efficacy of a self-monitoring intervention for college students with attention problems. University of California, Los Angeles. 2015.   URL: https://escholarship.org/uc/item/77j3q163 [accessed 2021-10-02]
  138. Gipson CS, Chilton JM, Dickerson SS, Alfred D, Haas BK. Effects of a sleep hygiene text message intervention on sleep in college students. J Am Coll Health 2019 Jan 31;67(1):32-41. [CrossRef] [Medline]
  139. Glissmann C. Calm College: testing a brief mobile app meditation intervention among stressed college students. Arizona State University. 2018.   URL: https://keep.lib.asu.edu/items/156759 [accessed 2021-10-02]
  140. Gonzales R, Ang A, Murphy DA, Glik DC, Anglin MD. Substance use recovery outcomes among a cohort of youth participating in a mobile-based texting aftercare pilot program. J Subst Abuse Treat 2014 Jul;47(1):20-26 [FREE Full text] [CrossRef] [Medline]
  141. Gonzales R, Hernandez M, Murphy DA, Ang A. Youth recovery outcomes at 6 and 9 months following participation in a mobile texting recovery support aftercare pilot study. Am J Addict 2016 Jan 21;25(1):62-68 [FREE Full text] [CrossRef] [Medline]
  142. Gonzales-Castaneda R, McKay JR, Steinberg J, Winters KC, Yu CH, Valdovinos IC, et al. Testing mediational processes of substance use relapse among youth who participated in a mobile texting aftercare project. Subst Abus 2022 Oct 22;43(1):1-12. [CrossRef] [Medline]
  143. Grassi A, Preziosa A, Villani D, Riva G. A relaxing journey: the use of mobile phones for well-being improvement. Annual Rev CyberTherapy Telemed 2007;5:123-131.
  144. Grassi A, Gaggioli A, Riva G. The green valley: the use of mobile narratives for reducing stress in commuters. Cyberpsychol Behav 2009 Apr;12(2):155-161. [CrossRef] [Medline]
  145. Grassi A, Gaggioli A, Riva G. New technologies to manage exam anxiety. Annual Review of Cybertherapy and Telemedicine 2011;167:57-62. [CrossRef]
  146. Greer S, Ramo D, Chang Y, Fu M, Moskowitz J, Haritatos J. Use of the chatbot "Vivibot" to deliver positive psychology skills and promote well-being among young people after cancer treatment: randomized controlled feasibility trial. JMIR Mhealth Uhealth 2019 Oct 31;7(10):e15018 [FREE Full text] [CrossRef] [Medline]
  147. Haug S, Schaub MP, Venzin V, Meyer C, John U. Efficacy of a text message-based smoking cessation intervention for young people: a cluster randomized controlled trial. J Med Internet Res 2013 Aug 16;15(8):e171 [FREE Full text] [CrossRef] [Medline]
  148. Haug S. Mobile phone text messaging to reduce alcohol and tobacco use in young people – a narrative review. Smart Homecare Technol TeleHealth 2013 Oct;1:11. [CrossRef]
  149. Haug S, Paz Castro R, Kowatsch T, Filler A, Dey M, Schaub MP. Efficacy of a web- and text messaging-based intervention to reduce problem drinking in adolescents: results of a cluster-randomized controlled trial. J Consult Clin Psychol 2017 Feb;85(2):147-159. [CrossRef] [Medline]
  150. Hides L, Dingle G, Quinn C, Stoyanov SR, Zelenko O, Tjondronegoro D, et al. Efficacy and outcomes of a music-based emotion regulation mobile app in distressed young people: randomized controlled trial. JMIR Mhealth Uhealth 2019 Jan 16;7(1):e11482 [FREE Full text] [CrossRef] [Medline]
  151. Hilt LM, Swords CM. Mindfulness mobile app for ruminative adolescents: a randomized controlled trial. In: Proceedings of the Anxiety and Depression Association of America Annual Conference. 2022 Presented at: Anxiety and Depression Association of America annual conference; 2022; Denver, CO.
  152. Huberty J, Green J, Glissmann C, Larkey L, Puzia M, Lee C. Efficacy of the mindfulness meditation mobile app "Calm" to reduce stress among college students: randomized controlled trial. JMIR Mhealth Uhealth 2019 Jun 25;7(6):e14273 [FREE Full text] [CrossRef] [Medline]
  153. Johnson N, Bree O, Lalley EE, Rettler K, Grande P, Gani MO, et al. Effect of a social script iPad application for children with autism going to imaging. J Pediatr Nurs 2014 Nov;29(6):651-659. [CrossRef] [Medline]
  154. Kajitani K, Higashijima I, Kaneko K, Matsushita T, Fukumori H, Kim D. Short-term effect of a smartphone application on the mental health of university students: a pilot study using a user-centered design self-monitoring application for mental health. PLoS One 2020 Sep 25;15(9):e0239592 [FREE Full text] [CrossRef] [Medline]
  155. Kauer SD, Reid SC, Crooke AH, Khor A, Hearps SJ, Jorm AF, et al. Self-monitoring using mobile phones in the early stages of adolescent depression: randomized controlled trial. J Med Internet Res 2012 Jun 25;14(3):e67 [FREE Full text] [CrossRef] [Medline]
  156. Reid SC, Kauer SD, Hearps SJ, Crooke AH, Khor AS, Sanci LA, et al. A mobile phone application for the assessment and management of youth mental health problems in primary care: a randomised controlled trial. BMC Fam Pract 2011 Nov 29;12:131 [FREE Full text] [CrossRef] [Medline]
  157. Reid SC, Kauer SD, Hearps SJ, Crooke AH, Khor AS, Sanci LA, et al. A mobile phone application for the assessment and management of youth mental health problems in primary care: health service outcomes from a randomised controlled trial of mobiletype. BMC Fam Pract 2013 Jun 19;14(1):84 [FREE Full text] [CrossRef] [Medline]
  158. Kennard BD, Goldstein T, Foxwell AA, McMakin DL, Wolfe K, Biernesser C, et al. As Safe As Possible (ASAP): a brief app-supported inpatient intervention to prevent postdischarge suicidal behavior in hospitalized, suicidal adolescents. Am J Psychiatry 2018 Sep 01;175(9):864-872 [FREE Full text] [CrossRef] [Medline]
  159. Kenny R. Is There an App for That? Development and Evaluation of a Mobile App-based Mental Health Intervention for Young People. Dublin, Ireland: University College Dublin; 2016.
  160. Kenny R, Dooley B, Fitzgerald A. Ecological momentary assessment of adolescent problems, coping efficacy, and mood states using a mobile phone app: an exploratory study. JMIR Ment Health 2016 Nov 29;3(4):e51 [FREE Full text] [CrossRef] [Medline]
  161. Kenny R, Fitzgerald A, Segurado R, Dooley B. Is there an app for that? A cluster randomised controlled trial of a mobile app-based mental health intervention. Health Informatics J 2020 Sep 08;26(3):1538-1559 [FREE Full text] [CrossRef] [Medline]
  162. Kollei I, Lukas CA, Loeber S, Berking M. An app-based blended intervention to reduce body dissatisfaction: a randomized controlled pilot study. J Consult Clin Psychol 2017 Nov;85(11):1104-1108. [CrossRef] [Medline]
  163. Lee J, Jung H, Lee G, Kim H, Park S, Woo S. Effect of behavioral intervention using smartphone application for preoperative anxiety in pediatric patients. Korean J Anesthesiol 2013 Dec;65(6):508-518 [FREE Full text] [CrossRef] [Medline]
  164. Lee RA, Jung ME. Evaluation of an mHealth app (DeStressify) on university students' mental health: pilot trial. JMIR Ment Health 2018 Jan 23;5(1):e2 [FREE Full text] [CrossRef] [Medline]
  165. Levin ME, Hicks ET, Krafft J. Pilot evaluation of the stop, breathe & think mindfulness app for student clients on a college counseling center waitlist. J Am Coll Health 2022 Jan 09;70(1):165-173 [FREE Full text] [CrossRef] [Medline]
  166. Liguori S, Stacchini M, Ciofi D, Olivini N, Bisogni S, Festini F. Effectiveness of an app for reducing preoperative anxiety in children: a randomized clinical trial. JAMA Pediatr 2016 Aug 01;170(8):e160533. [CrossRef] [Medline]
  167. Marx LS. A mindful eating "App" for non-treatment-seeking university women with eating and weight concerns. ProQuest. 2016.   URL: https://www.proquest.com/docview/1896581612/abstract/8306C60B5DDF4ADCPQ/1 [accessed 2021-10-02]
  168. Mason M, Mennis J, Way T, Zaharakis N, Campbell LF, Benotsch EG, et al. Text message delivered peer network counseling for adolescent smokers: a randomized controlled trial. J Prim Prev 2016 Oct 7;37(5):403-420. [CrossRef] [Medline]
  169. Mason MJ, Zaharakis NM, Russell M, Childress V. A pilot trial of text-delivered peer network counseling to treat young adults with cannabis use disorder. J Subst Abuse Treat 2018 Jun;89:1-10 [FREE Full text] [CrossRef] [Medline]
  170. McCloud T, Jones R, Lewis G, Bell V, Tsakanikos E. Effectiveness of a mobile app intervention for anxiety and depression symptoms in university students: randomized controlled trial. JMIR Mhealth Uhealth 2020 Jul 31;8(7):e15418 [FREE Full text] [CrossRef] [Medline]
  171. Moore SC, Crompton K, van Goozen S, van den Bree M, Bunney J, Lydall E. A feasibility study of short message service text messaging as a surveillance tool for alcohol consumption and vehicle for interventions in university students. BMC Public Health 2013 Oct 25;13(1):1011 [FREE Full text] [CrossRef] [Medline]
  172. Newman MG, Jacobson NC, Rackoff GN, Bell MJ, Taylor CB. A randomized controlled trial of a smartphone-based application for the treatment of anxiety. Psychother Res 2021 Apr 14;31(4):443-454 [FREE Full text] [CrossRef] [Medline]
  173. Nguyen-Feng V. A Randomized Controlled Trial of a Mobile Ecological Momentary Stress Management Intervention for Students With and Without a History of Emotional Abuse. University of Minnesota. 2019.   URL: http://conservancy.umn.edu/handle/11299/209119 [accessed 2021-10-02]
  174. Nolan J. Effectiveness, Feasibility, and Acceptability of a Mindfulness Based Mobile Application for Undergraduate Health Science Students. ProQuest. 2019.   URL: https://www.proquest.com/openview/80ad66e2794d4571056ed01e8e9f2902/1?pq-origsite=gscholar&cbl=51922&diss=y [accessed 2021-10-02]
  175. O'Dea B, Han J, Batterham PJ, Achilles MR, Calear AL, Werner-Seidler A, et al. A randomised controlled trial of a relationship-focussed mobile phone application for improving adolescents' mental health. J Child Psychol Psychiatry 2020 Aug;61(8):899-913 [FREE Full text] [CrossRef] [Medline]
  176. Orosa-Duarte A, Mediavilla R, Muñoz-Sanjose A, Palao A, Garde J, López-Herrero V, et al. Mindfulness-based mobile app reduces anxiety and increases self-compassion in healthcare students: a randomised controlled trial. Med Teach 2021 Jun;43(6):686-693. [CrossRef] [Medline]
  177. Parisod H, Pakarinen A, Axelin A, Löyttyniemi E, Smed J, Salanterä S. Feasibility of mobile health game "Fume" in supporting tobacco-related health literacy among early adolescents: a three-armed cluster randomized design. Int J Med Inform 2018 May;113:26-37. [CrossRef] [Medline]
  178. Patel A, Schieble T, Davidson M, Tran MC, Schoenberg C, Delphin E, et al. Distraction with a hand-held video game reduces pediatric preoperative anxiety. Paediatr Anaesth 2006 Oct;16(10):1019-1027. [CrossRef] [Medline]
  179. Pbert L, Druker S, Crawford S, Frisard C, Trivedi M, Osganian SK, et al. Feasibility of a smartphone app with mindfulness training for adolescent smoking cessation: craving to quit (C2Q)-teen. Mindfulness (N Y) 2020 Mar;11(3):720-733 [FREE Full text] [CrossRef] [Medline]
  180. Pierce B. Tacting of function in college student mental health: an online and app-based approach to psychological flexibility. Utah State University. 2019.   URL: https://digitalcommons.usu.edu/etd/7619 [accessed 2021-10-02]
  181. Piskorz J, Czub M. Effectiveness of a virtual reality intervention to minimize pediatric stress and pain intensity during venipuncture. J Spec Pediatr Nurs 2018 Jan;23(1). [CrossRef] [Medline]
  182. Piskorz JE, Czub M, Šulžickaja B, Kiliś-Pstrusińska K. Mobile virtual reality distraction reduces needle pain and stress in children? Cyberpsychol 2020 Feb 21;14(1). [CrossRef]
  183. Ponzo S, Morelli D, Kawadler JM, Hemmings NR, Bird G, Plans D. Efficacy of the digital therapeutic mobile app BioBase to reduce stress and improve mental well-being among university students: randomized controlled trial. JMIR Mhealth Uhealth 2020 Apr 06;8(4):e17767 [FREE Full text] [CrossRef] [Medline]
  184. Ranney ML, Pittman SK, Dunsiger S, Guthrie KM, Spirito A, Boyer EW, et al. Emergency department text messaging for adolescent violence and depression prevention: a pilot randomized controlled trial. Psychol Serv 2018 Nov;15(4):419-428 [FREE Full text] [CrossRef] [Medline]
  185. Rodgers A, Corbett T, Bramley D, Riddell T, Wills M, Lin R, et al. Do u smoke after txt? Results of a randomised trial of smoking cessation using mobile phone text messaging. Tob Control 2005 Aug;14(4):255-261 [FREE Full text] [CrossRef] [Medline]
  186. Rodgers RF, Donovan E, Cousineau T, Yates K, McGowan K, Cook E, et al. BodiMojo: efficacy of a mobile-based intervention in improving body image and self-compassion among adolescents. J Youth Adolesc 2018 Jul 18;47(7):1363-1372. [CrossRef] [Medline]
  187. Sandrick J, Tracy D, Eliasson A, Roth A, Bartel J, Simko M, et al. Effect of a counseling session bolstered by text messaging on self-selected health behaviors in college students: a preliminary randomized controlled trial. JMIR Mhealth Uhealth 2017 May 17;5(5):e67 [FREE Full text] [CrossRef] [Medline]
  188. Siembor B. Exploring the effectiveness of a mindfulness training app for managing stress in a university student population: a pilot study. Northeastern University. 2017.   URL: https://repository.library.northeastern.edu/files/neu:cj82rb241 [accessed 2021-10-02]
  189. Suffoletto B, Kristan J, Callaway C, Kim KH, Chung T, Monti PM, et al. A text message alcohol intervention for young adult emergency department patients: a randomized clinical trial. Ann Emerg Med 2014 Dec;64(6):664-72.e4 [FREE Full text] [CrossRef] [Medline]
  190. Suffoletto B, Kristan J, Chung T, Jeong K, Fabio A, Monti P, et al. An interactive text message intervention to reduce binge drinking in young adults: a randomized controlled trial with 9-month outcomes. PLoS One 2015;10(11):e0142877 [FREE Full text] [CrossRef] [Medline]
  191. Suffoletto B, Callaway C, Kristan J, Kraemer K, Clark DB. Text-message-based drinking assessments and brief interventions for young adults discharged from the emergency department. Alcohol Clin Exp Res 2012 Mar;36(3):552-560. [CrossRef] [Medline]
  192. Tennant M, Youssef GJ, McGillivray J, Clark T, McMillan L, McCarthy MC. Exploring the use of immersive virtual reality to enhance psychological well-being in pediatric oncology: a pilot randomized controlled trial. Eur J Oncol Nurs 2020 Oct;48:101804. [CrossRef] [Medline]
  193. Tighe J, Shand F, Ridani R, Mackinnon A, De La Mata N, Christensen H. Ibobbly mobile health intervention for suicide prevention in Australian Indigenous youth: a pilot randomised controlled trial. BMJ Open 2017 Jan 27;7(1):e013518 [FREE Full text] [CrossRef] [Medline]
  194. Vu A. Randomized controlled trial of Pacifica, a CBT and mindfulness-based app for stress, depression, and anxiety management with health monitoring. University of Minnesota. 2018.   URL: http://conservancy.umn.edu/handle/11299/216811 [accessed 2021-10-02]
  195. Wang SS, Teo WZ, Teo WZ, Chai YW. Virtual reality as a bridge in palliative care during COVID-19. J Palliat Med 2020 Jun 01;23(6):756-756. [CrossRef] [Medline]
  196. Wantanakorn P, Harintajinda S, Chuthapisith J, Anurathapan U, Rattanatamrong P. A new mobile application to reduce anxiety in pediatric patients before bone marrow aspiration procedures. Hosp Pediatr 2018 Oct;8(10):643-650. [CrossRef] [Medline]
  197. Whitehouse AJ, Granich J, Alvares G, Busacca M, Cooper MN, Dass A, et al. A randomised controlled trial of an iPad-based application to complement early behavioural intervention in Autism Spectrum Disorder. J Child Psychol Psychiatry 2017 Sep 25;58(9):1042-1052. [CrossRef] [Medline]
  198. Whittaker R, Merry S, Dorey E, Maddison R. A development and evaluation process for mHealth interventions: examples from New Zealand. J Health Commun 2012;17 Suppl 1:11-21. [CrossRef] [Medline]
  199. Whittaker R, Stasiak K, McDowell H, Doherty I, Shepherd M, Chua S, et al. MEMO: an mHealth intervention to prevent the onset of depression in adolescents: a double-blind, randomised, placebo-controlled trial. J Child Psychol Psychiatry 2017 Sep 02;58(9):1014-1022. [CrossRef] [Medline]
  200. Yap AG, Roy RE, Lasala JR, Tan D, Hechanova MR, Diy WD, et al. Evaluation of a cognitive-behavioral game design-based mobile game on alcohol use for adolescents. Games Health J 2020 Oct 01;9(5):353-357. [CrossRef] [Medline]
  201. Carlbring P, Andersson G, Cuijpers P, Riper H, Hedman-Lagerlöf E. Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: an updated systematic review and meta-analysis. Cogn Behav Ther 2018 Jan;47(1):1-18. [CrossRef] [Medline]
  202. Bolier L, Haverman M, Westerhof GJ, Riper H, Smit F, Bohlmeijer E. Positive psychology interventions: a meta-analysis of randomized controlled studies. BMC Public Health 2013 Feb 08;13:119 [FREE Full text] [CrossRef] [Medline]
  203. Cuijpers P, van Straten A, Bohlmeijer E, Hollon SD, Andersson G. The effects of psychotherapy for adult depression are overestimated: a meta-analysis of study quality and effect size. Psychol Med 2009 Jun 03;40(2):211-223. [CrossRef]
  204. Keles S, Idsoe T. A meta-analysis of group cognitive behavioral therapy (CBT) interventions for adolescents with depression. J Adolesc 2018 Aug 26;67(1):129-139. [CrossRef] [Medline]
  205. Shea MT, Elkin I, Imber SD, Sotsky SM, Watkins JT, Collins JF, et al. Course of depressive symptoms over follow-up. Findings from the national institute of mental health treatment of depression collaborative research program. Arch Gen Psychiatry 1992 Oct 01;49(10):782-787. [CrossRef] [Medline]
  206. van Aar J, Leijten P, Orobio de Castro B, Overbeek G. Sustained, fade-out or sleeper effects? A systematic review and meta-analysis of parenting interventions for disruptive child behavior. Clin Psychol Rev 2017 Feb;51:153-163. [CrossRef] [Medline]
  207. Deng W, Hu D, Xu S, Liu X, Zhao J, Chen Q, et al. The efficacy of virtual reality exposure therapy for PTSD symptoms: a systematic review and meta-analysis. J Affect Disord 2019 Oct 01;257:698-709. [CrossRef] [Medline]
  208. Eijlers R, Utens EM, Staals LM, de Nijs PFA, Berghmans JM, Wijnen RM, et al. Systematic review and meta-analysis of virtual reality in pediatrics. Anesth Analg 2019;129(5):1344-1353. [CrossRef] [Medline]
  209. Weisz JR, McCarty CA, Valeri SM. Effects of psychotherapy for depression in children and adolescents: a meta-analysis. Psychol Bull 2006 Jan;132(1):132-149 [FREE Full text] [CrossRef] [Medline]
  210. Gopalan G, Goldstein L, Klingenstein K, Sicher C, Blake C, McKay MM. Engaging families into child mental health treatment: updates and special considerations. J Can Acad Child Adolesc Psychiatry 2010 Aug;19(3):182-196 [FREE Full text] [Medline]
  211. Murry VM, Heflinger CA, Suiter SV, Brody GH. Examining perceptions about mental health care and help-seeking among rural African American families of adolescents. J Youth Adolesc 2011 Sep 23;40(9):1118-1131. [CrossRef] [Medline]
  212. Burke BL, Arkowitz H, Menchola M. The efficacy of motivational interviewing: a meta-analysis of controlled clinical trials. J Consult Clin Psychol 2003 Oct;71(5):843-861. [CrossRef] [Medline]
  213. Wasil AR, Venturo-Conerly KE, Shingleton RM, Weisz JR. A review of popular smartphone apps for depression and anxiety: assessing the inclusion of evidence-based content. Behav Res Ther 2019 Dec;123:103498. [CrossRef] [Medline]
  214. Lattie EG, Schueller SM, Sargent E, Stiles-Shields C, Tomasino KN, Corden ME, et al. Uptake and usage of Intellicare: a publicly available suite of mental health and well-being apps. Internet Interv 2016 May;4(2):152-158 [FREE Full text] [CrossRef] [Medline]
  215. Schueller SM, Tomasino KN, Mohr DC. Integrating human support into behavioral intervention technologies: the efficiency model of support. Clin Psychol Sci Pract 2017 Mar;24(1):27-45. [CrossRef]
  216. Wright JH, Owen JJ, Richards D, Eells TD, Richardson T, Brown GK, et al. Computer-assisted cognitive-behavior therapy for depression: a systematic review and meta-analysis. J Clin Psychiatry 2019 Mar 19;80(2):18r12188 [FREE Full text] [CrossRef] [Medline]
  217. Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res 2011 Aug 05;13(3):e52 [FREE Full text] [CrossRef] [Medline]
  218. Kaltenthaler E, Sutcliffe P, Parry G, Beverley C, Rees A, Ferriter M. The acceptability to patients of computerized cognitive behaviour therapy for depression: a systematic review. Psychol Med 2008 Nov;38(11):1521-1530. [CrossRef] [Medline]
  219. Waller R, Gilbody S. Barriers to the uptake of computerized cognitive behavioural therapy: a systematic review of the quantitative and qualitative evidence. Psychol Med 2009 May;39(5):705-712. [CrossRef] [Medline]
  220. Christensen H, Griffiths KM, Farrer L. Adherence in internet interventions for anxiety and depression. J Med Internet Res 2009 Apr 24;11(2):e13 [FREE Full text] [CrossRef] [Medline]
  221. Marcolino MS, Oliveira JA, D'Agostino M, Ribeiro AL, Alkmim MB, Novillo-Ortiz D. The impact of mHealth interventions: systematic review of systematic reviews. JMIR Mhealth Uhealth 2018 Jan 17;6(1):e23 [FREE Full text] [CrossRef] [Medline]


Abbreviations

ES: effect size
mTDI: mobile technology–delivered intervention
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
TDI: technology-delivered intervention
VR: virtual reality


Edited by J Torous; submitted 21.10.21; peer-reviewed by M Attridge, S Badawy; comments to author 26.12.21; revised version received 04.03.22; accepted 05.03.22; published 29.07.22

Copyright

©Colleen S Conley, Elizabeth B Raposa, Kate Bartolotta, Sarah E Broner, Maya Hareli, Nicola Forbes, Kirsten M Christensen, Mark Assink. Originally published in JMIR Mental Health (https://mental.jmir.org), 29.07.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.