Published in Vol 10 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/44658.
The Effectiveness of Fully Automated Digital Interventions in Promoting Mental Well-Being in the General Population: Systematic Review and Meta-Analysis

Review

1Department of Psychology, University of Bath, Bath, United Kingdom

2Cyberlimbic Systems Ltd, London, United Kingdom

3Department of Computer Science, University of Bath, Bath, United Kingdom

4Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, United Kingdom

5School of Psychology, Faculty of Environmental and Life Sciences, University of Southampton, Southampton, United Kingdom

Corresponding Author:

Julia Groot, MSc

Department of Psychology

University of Bath

Claverton Down

Bath, BA2 7AY

United Kingdom

Phone: 44 01225 383800

Email: jmdg20@bath.ac.uk


Background: Recent years have highlighted an increasing need to promote mental well-being in the general population. This has led to a rapidly growing market for fully automated digital mental well-being tools. Although many individuals have started using these tools in their daily lives, evidence on the overall effectiveness of digital mental well-being tools is currently lacking.

Objective: This study aims to review the evidence on the effectiveness of fully automated digital interventions in promoting mental well-being in the general population.

Methods: Following the preregistration of the systematic review protocol on PROSPERO, searches were carried out in MEDLINE, Web of Science, Cochrane, PsycINFO, PsycEXTRA, Scopus, and ACM Digital (initial searches in February 2022; updated in October 2022). Studies were included if they contained a general population sample and a fully automated digital intervention that exclusively used psychological mental well-being promotion activities. Two reviewers, blinded to each other’s decisions, conducted data selection, extraction, and quality assessment of the included studies. Narrative synthesis and a random-effects model of per-protocol data were adopted.

Results: We included 19 studies that involved 7243 participants. These studies included 24 fully automated digital mental well-being interventions, of which 15 (63%) were included in the meta-analysis. Compared with no intervention, there was a significant small effect of fully automated digital mental well-being interventions on mental well-being in the general population (standardized mean difference 0.19, 95% CI 0.04-0.33; P=.02). Specifically, mindfulness-, acceptance-, commitment-, and compassion-based interventions significantly promoted mental well-being in the general population (P=.006); insufficient evidence was available for positive psychology and cognitive behavioral therapy–based interventions; and contraindications were found for integrative approaches. Overall, there was substantial heterogeneity, which could be partially explained by the intervention duration, comparator, and study outcomes. The risk of bias was high, and confidence in the quality of the evidence was very low (Grading of Recommendations, Assessment, Development, and Evaluations), primarily because of the high rates of study dropout (average 37%; range 0%-85%) and suboptimal intervention adherence (average 40%).

Conclusions: This study provides a novel contribution to knowledge regarding the effectiveness, strengths, and weaknesses of fully automated digital mental well-being interventions in the general population. Future research and practice should consider these findings when developing fully automated digital mental well-being tools. In addition, research should aim to investigate positive psychology and cognitive behavioral therapy–based tools as well as develop further strategies to improve adherence and reduce dropout in fully automated digital mental well-being interventions. Finally, it should aim to understand when and for whom these interventions are particularly beneficial.

Trial Registration: PROSPERO CRD42022310702; https://tinyurl.com/yc7tcwy7

JMIR Ment Health 2023;10:e44658

doi:10.2196/44658


Introduction

General Background

Mental well-being is commonly defined as a complex construct that includes a subjective experience (subjective well-being, which is often referred to as “happiness”) [1] and a process of self-realization (psychological well-being) [2,3]. Traditionally, it was thought that mental well-being would arise in the absence of mental illness, as they were considered opposite ends of 1 continuum [4]. However, the absence of mental illness was found to be insufficient to produce good mental well-being [5]. The dual-continuum model has identified that mental well-being and mental illness are 2 distinct but related continua instead [6], both of which could be considered part of mental health [7]. It is important to focus exclusively on the effective promotion of mental well-being [8], as only a small proportion of the general population has optimal levels of mental well-being [7,9].

In addition, mental well-being in the general population is crucial for allowing society and the individuals within it to thrive. Improved mental well-being is connected to increased productivity, personal growth, a higher quality of life, stronger social cohesion, and more fulfilling and lasting relationships, as well as a decreased likelihood of developing diseases and mental illnesses and a longer lifespan [5,7,10,11]. Promoting mental well-being in the general population is therefore considered a fundamental goal by the World Health Organization (WHO), as described in the Mental Health Action Plan 2013-2030 [12]. Mental well-being promotion interventions provide “various activities or practices that aim to promote, build on, increase or foster primarily individuals’ strengths, resourcefulness or resiliency” [10].

Evidence suggests that a variety of psychological approaches are effective in promoting mental well-being, including acceptance and commitment therapy (ACT), compassion, cognitive behavioral therapy (CBT), mindfulness, positive psychology, and multitheoretical interventions [7]. These psychological approaches were found to have small to moderate effects on mental well-being in the general population, whereby mindfulness-based interventions (MBIs) and multicomponent positive psychology interventions were particularly efficacious [7,13]. Further meta-analyses focusing on positive psychology interventions, MBIs, and ACT-based interventions separately also found similar effects on mental well-being [14-16].

However, these systematic reviews did not focus on fully automated digital interventions. Fully automated digital interventions are interventions that are delivered entirely by the technology itself, not requiring any form of human support (by clinicians or nonclinicians) [17]. Although fully automated digital interventions might be less effective, as recent research has found that any form of human support enhances the effectiveness of interventions [18], fully automated digital interventions allow for great scalability and are highly cost-effective and accessible [19]. Therefore, fully automated digital interventions provide a particularly pertinent way to promote mental well-being in the general population.

Overall, there is a need to systematically review the evidence of the effectiveness of fully automated digital mental well-being interventions to improve mental well-being (which includes subjective and psychological well-being) in the general population. Furthermore, an understanding is needed of which psychological approaches work when delivered in a fully automated digital format and for whom (as one approach does not suit all) [20].

Main Objective

This systematic review aims to understand the effectiveness of fully automated digital interventions in promoting mental well-being in the general population.

Secondary Objectives

Furthermore, the systematic review aims to explore the effectiveness of fully automated digital mental well-being interventions across psychological approaches and population subgroups.


Methods

Study Protocol

The systematic review protocol was registered on PROSPERO (CRD42022310702). The Cochrane handbook was used when designing and conducting the systematic review [21], and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed for reporting of the systematic review [22].

Eligibility Criteria

Studies were included if they used a fully automated digital intervention that aimed to promote mental well-being in the general population.

Studies needed to include adults (aged ≥18 years). The general population was further defined as any adult population subgroup that was not a clinical population and was not specifically recruited by the researchers because of (expected) lower baseline mental well-being scores.

Digital interventions were defined according to the National Institute for Health and Care Excellence [17] as interventions that are delivered through hardware and electronic devices (eg, smartwatches and smartphones), software (eg, computer programs and apps), and websites. The intervention needed to be fully automated, meaning that it was delivered entirely by the technology itself, independent of health care professionals, and did not contain any other form of social support [17]. For example, a digital web-based intervention in which video content was delivered automatically would have been included, whereas a digital video call intervention in which a health care professional delivered content would have been excluded. Although the content had to be delivered entirely by the technology itself, other elements of the study could still have been conducted by the researchers. For example, researchers could have screened participants, obtained measures, and obtained informed consent (digitally or in person), after which they could have provided the participants with access to the intervention.

Furthermore, the intervention needed to use individual mental well-being promotion, defined by the WHO as “various activities or practices that aim to promote, build on, increase or foster primarily individuals’ strengths, resourcefulness or resiliency” [10]. This should be a psychological intervention.

Physical activity–related or lifestyle-related interventions were excluded. If an intervention contained elements other than mental well-being promotion, it was also excluded, as it would not be possible to isolate the effectiveness of the mental well-being promotion strategies. For example, an MBI would have been included; however, an MBI that included a yoga session would have been excluded.

The outcome needed to consider a validated measure of mental well-being, including psychological well-being or subjective well-being.

Finally, studies needed to investigate the effectiveness of this digital intervention on mental well-being. Therefore, quantitative randomized and nonrandomized studies of interventions, such as before-after studies, were considered appropriate, as they can provide insights into the effectiveness of interventions [23]. For further details regarding the inclusion and exclusion criteria, please refer to the protocol [24].

Searches

The initial search was conducted in February 2022 and updated using a title and keyword search in October 2022. The databases searched included MEDLINE, Web of Science, Cochrane, PsycINFO, PsycEXTRA, Scopus, and ACM Digital. Combinations of the following key search terms were used: “mental well being,” “mental wellbeing,” “psychological well being,” “psychological wellbeing,” “subjective well being,” and “subjective wellbeing,” in combination with “digital*,” “online,” “internet,” “web-based,” “app,” “apps,” “smartphone application*,” and “mobile application*.” No restrictions were applied. Refer to Multimedia Appendix 1 [25-42] for the detailed searches conducted in each database.
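For illustration, these key terms combine into a Boolean search of roughly the following form; the exact syntax, field codes, and truncation varied by database and are reported in Multimedia Appendix 1:

("mental well being" OR "mental wellbeing" OR "psychological well being" OR "psychological wellbeing" OR "subjective well being" OR "subjective wellbeing") AND (digital* OR online OR internet OR web-based OR app OR apps OR "smartphone application*" OR "mobile application*")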

Study Selection

Each record was double screened, and the reviewers were blinded to each other’s decisions throughout the process. To ensure consistency and quality of the screening process, the lead author (JG) screened all records, and double screening was conducted by MB, ET, and MZ. After screening 10.71% (776/7764) of the records, interreviewer reliability was calculated, which ranged from moderate to substantial agreement (Cohen κ=0.54-0.79) [43]. Inconsistencies in the screening process were discussed, and conflicts were resolved through discussion. If conflicts remained, an additional discussion with a third, senior reviewer (BA) was conducted. Upon completion of the screening, interreviewer reliability was recalculated (Cohen κ=0.42-0.80), and conflicts were again resolved using the same process. This process was then repeated for full-text screening.
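For reference, the agreement statistic used here takes the standard form

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where $p_o$ is the observed proportion of agreement between the 2 reviewers and $p_e$ is the proportion of agreement expected by chance; by the commonly used benchmarks, values of 0.41 to 0.60 indicate moderate agreement and values of 0.61 to 0.80 indicate substantial agreement.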

Data Extraction

Before data extraction, the Cochrane data collection form was adapted and prepiloted for this review. Data extraction included information regarding the study population, participant demographics, and setting; details of the intervention and control conditions (such as duration, frequency, timing, and activities); study methodology; recruitment and study completion rates; outcomes, outcome measures, and times of measurement; and information for the assessment of the risk of bias (RoB). Two reviewers (JG and AM) independently extracted all relevant data from the included studies and held meetings to discuss any discrepancies in data extraction. When conflicting views on the data extraction occurred, a third, senior reviewer (BA) advised on how to resolve the issue. Missing data were sought by contacting the lead author of the study via email, which was identified through the journal paper.

RoB Assessment

RoB was assessed independently by 2 reviewers (JG and AM) using the Cochrane RoB 2.0 tool for randomized controlled trials (RCTs) [44]. No standardized tools were available for noncontrolled before-after studies; therefore, the National Institutes of Health tool, “Quality Assessment Tool for Before-After (Pre-Post) Studies with No Control Group,” was used as guidance to provide an indication of the RoB in these studies [45]. However, it was considered that these studies would provide a lower quality of evidence. Following the RoB assessments, meetings were held to discuss conflicts, and any remaining disagreements were resolved through discussion with a third reviewer (BA).

Data Synthesis and Meta-Analysis

Mean, SD, and total number of participants were extracted for each postintervention mental well-being outcome in the study arms that met the inclusion criteria for the digital mental well-being intervention and control groups. Where studies included multiple outcomes, the effect estimates were averaged. This method was also adopted for multiarm studies because it was considered meaningful to combine the intervention effects, as all the included intervention arms were digital mental well-being interventions; in addition, this avoided double counting of participants in the control group. Standardized mean differences (SMDs) were used in a random-effects model.
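For clarity, the postintervention effect for each comparison is the standardized mean difference

\[ \mathrm{SMD} = \frac{\bar{x}_{I} - \bar{x}_{C}}{SD_{pooled}}, \qquad SD_{pooled} = \sqrt{\frac{(n_{I}-1)SD_{I}^{2} + (n_{C}-1)SD_{C}^{2}}{n_{I}+n_{C}-2}} \]

and the random-effects model weights each study by the inverse of its total variance, $w_i = 1/(SE_i^{2} + \tau^{2})$, where $\tau^{2}$ is the estimated between-study variance. (Whether a small-sample correction such as Hedges g was applied is not specified here, so the uncorrected form is shown.)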

Initially, both the per-protocol (PP) and intention to treat (ITT) data were extracted. However, only PP data were included in the meta-analysis, as high dropout rates (ranging up to 85%) led to ITT data being less meaningful.

Visual inspection of the forest plot, the chi-square test, and the I² statistic were used to assess heterogeneity. An I² value of >50% was considered to represent substantial heterogeneity. Heterogeneity was explored, interpreted, and contextualized.
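The I² statistic is derived from the chi-square (Cochran Q) statistic as

\[ I^{2} = \max\!\left(0,\ \frac{Q - df}{Q}\right) \times 100\%, \qquad Q = \sum_{i} w_i \left(y_i - \hat{\mu}\right)^{2} \]

where $y_i$ and $w_i$ are the study effect estimates and their inverse-variance weights, $\hat{\mu}$ is the pooled estimate, and $df$ is the number of studies minus 1.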


Results

Description of Studies

An initial search yielded 12,672 records. Following deduplication, 7764 records were screened in Covidence (Veritas Health Innovation). A total of 7526 records were excluded following title and abstract screening, and 238 records were sought for retrieval for full-text screening. A total of 230 full-text records were screened, leading to the exclusion of another 213 records. The most common reasons for exclusion were the population being a clinical population, intervention not solely using mental well-being promotion, intervention not being fully automated and digital, or that the study was still ongoing. For full details of the study selection process, refer to Figure 1.

An updated title and keyword search in October 2022 yielded another 525 records. After deduplication, 366 articles were screened in Covidence. A total of 347 articles were excluded, and full texts of 19 articles were obtained. Furthermore, 17 articles were excluded following full-text screening.

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart of the search strategy outcomes.

Narrative Summary

A total of 18 records containing 19 studies were included in this systematic review: 17 RCTs and 2 nonrandomized before-after studies.

Setting and Participants

Studies mainly occurred in Western countries; the participants were primarily female and highly educated; and the study populations were students, employees, mothers, and other general population samples (Table 1).

Table 1. Characteristics of the included studies.
Study, year | Population | Setting | Comparator | Outcome^a
Study 3 from Avey et al [25], 2022 | Employees | United States and Australia | Unknown | PWB^b
Bakker et al [26], 2018 | General population | Australia | Waitlist control | MWB^c
Brazier et al [27], 2022 | Trainees | United Kingdom | Waitlist control | MWB
Champion et al [28], 2018 | Employees | United Kingdom and United States | Waitlist control | SWB^d
Chung et al [29], 2021 | Students | Australia and United Kingdom | Waitlist control | MWB
Study 1 from Di Consiglio et al [30], 2021 | Students | Italy | Active control | PWB
Study 2 from Di Consiglio et al [30], 2021 | Students | Italy | None | PWB
Eisenstadt et al [31], 2021 | Real-world app users | United Kingdom | None | MWB
Gammer et al [32], 2020 | Mothers of infants aged <1 y | United Kingdom | Waitlist control | MWB
Liu et al [33], 2021 | Students | China | Placebo | SWB
Ly et al [34], 2017 | General population | Sweden | Waitlist control | PWB and SWB
Mak et al [35], 2018 | General population | China | Active control | MWB
Manthey et al [36], 2016 | General population | Germany | Active control | SWB
Mitchell et al [37], 2009 | Adults | Australia | Placebo | PWB
Neumeier et al [38], 2017 | Employees | Germany and Australia | Waitlist control | SWB
Pheh et al [39], 2020 | General population | Malaysia | Active control | MWB
Schulte-Frankenfeld and Trautwein [40], 2021 | Students with a part-time job | Germany | Waitlist control | SWB
Shin et al [41], 2020 | Students | United States | Placebo | SWB
Walsh et al [42], 2019 | Students | Canada | Active control | PWB

^a Mental well-being outcomes included the 5-item mental well-being index (World Health Organization-5) [46] and the Warwick-Edinburgh Mental Well-Being Scale (version 1) [47]. Subjective well-being outcomes included the Satisfaction With Life Scale [48], the Positive and Negative Affect Schedule [49], Satisfaction with Life and happiness [50], the Subjective Happiness Scale [51], and a single-item life satisfaction and affect measure [38]. Psychological well-being outcomes included psychological well-being [52], the Psychological Well-Being Scale [53], the Psychological Well-Being Index (adult) scale [54], and the Flourishing Scale [34].

^b PWB: psychological well-being.

^c MWB: mental well-being.

^d SWB: subjective well-being.

Psychological Approaches

Several different psychological approaches were used, including the following: (1) mindfulness, ACT, and self-compassion; (2) positive psychology; (3) cognitive behavioral; and (4) integrative (Table 2). The most frequently used psychological approach was mindfulness, ACT, and self-compassion. General intervention activities and behavior change techniques, such as well-being tips and behavior change techniques to form habits, were adopted across psychological approaches and in most interventions (Textbox 1).

The intervention content was primarily developed by the study researchers and clinical psychologists (15/19, 79% of studies); some studies collaborated with companies or digital laboratories to develop the intervention (2/19, 11%); and some studies tested a preexisting intervention developed by a company (2/19, 11%).

Table 2. Description of intervention characteristics^a.
Psychological approach underpinning the intervention | Activities or practices | Studies adopting the approach
Mindfulness, ACT^b, and self-compassion
  • Meditation: awareness of inner experiences, present moment, and acceptance
  • Overcoming obstacles in mindfulness meditation
  • Body scan
  • Increasing awareness through biofeedback
  • Being mindful in daily life
  • Loving-kindness meditation
  • Compassionate journaling and breaks
  • Self-kindness activities
  Studies: [28,29,32,35,39,40,42]
Positive psychology
  • Gratitude (gratitude diary and letter)
  • Positive future imagination
  • Best possible self
  • Counting blessings
  • Random acts of kindness
  • Replaying positive experiences
  • Using strengths
  • Savoring the moment
  • Wearing a smile
  • Brainstorming meaningfulness
  Studies: [33,36-38,41]
Cognitive behavioral approach
  • Mood-related activities (eg, mood tracker, mood diary, and mood improvement activities)
  • Challenging thoughts and behaviors
  • Problem-solving
  • Goal setting (SMART^c goals and planning)
  • Committed actions
  • Journaling
  Studies: [26,37]
Integrative approach
  • A combination of intervention activities or practices of these psychological approaches
  Studies: [25,27,30,31,34]

^a For more detailed intervention descriptions, refer to Multimedia Appendix 2.

^b ACT: acceptance and commitment therapy.

^c SMART: Specific, Measurable, Achievable, Relevant, and Time-bound.

Textbox 1. General psychological intervention components.

General intervention components adopted across interventions

  • Psychoeducation (eg, on emotions, needs, values, and mental illness)
  • Support-seeking information
  • Well-being tips

Behavior change techniques adopted across interventions [55]

  • Habit formation
  • Goal setting
  • Action planning (eg, implementation intentions)
  • Prompts or cues
  • Self-monitoring of behavior or outcome of behavior
  • Self-assessment of affective consequences
  • Feedback on behavior
  • Material or nonspecific reward

Intervention Delivery

A total of 24 fully automated digital mental well-being interventions were included. The interventions were app based (n=10), web based (n=11), both app and web based (n=2), and SMS text message (n=1) interventions (Table 3).

Table 3. Intervention characteristics and dropout^a.
Study, year | Participants randomized, N^b | Intervention | Duration | Frequency | Mode of delivery | Dropout, n (%)^c
Study 3 from Avey et al [25], 2022 | 102 | Resilience intervention | 10 wk | Weekly | App based | 3 (2.9)
Bakker et al [26], 2018 | 226 | Moodkit, Moodprism | 30 d | Daily | App based | 108 (47.8)
Brazier et al [27], 2022 | 279 | Dear Doctor | 10 mo | Fortnightly | SMS text message | 126 (45.2)
Champion et al [28], 2018 | 74 | Headspace | 30 d | Daily | App based | 12 (16.2)
Chung et al [29], 2021 | 427 | Brief MBI^d | 6 wk | Weekly | Web based | 280 (65.6)
Study 1 from Di Consiglio et al [30], 2021 | 24 | Noibene | 3 mo | 4 times | Web based | 0 (0)
Study 2 from Di Consiglio et al [30], 2021 | 178 | Noibene | None | None | Web based | 119 (66.9)
Eisenstadt et al [31], 2021 | 115 | Paradym | 2 wk | Daily | App based | 81 (70.4)
Gammer et al [32], 2020 | 206 | Kindness For Mums Online | 5 wk | Weekly | Web based | 80 (38.8)
Liu et al [33], 2021 | 1000 | Positive psychology intervention | 1-3 d | Twice | Web based | 132 (13.2)
Ly et al [34], 2017 | 30 | Shim | 2 wk | Daily | App based | 3 (10)
Mak et al [35], 2018 | 2282 | Mindfulness-based program and self-compassion program | 28 d | Daily | App based and web based | 1933 (84.7)
Manthey et al [36], 2016 | 666 | Best possible self and gratitude | 8 wk | Weekly | Web-based video | 112 (16.8)
Mitchell et al [37], 2009 | 160 | Strengths intervention and problem-solving intervention | 3 wk | Daily | Web based | 111 (77.6)
Neumeier et al [38], 2017 | 431 | PERMA^e program and gratitude program | 7 d | Daily | App based | 128 (29.7)
Pheh et al [39], 2020 | 206 | Brief MBI | 1 d | Once | Web based | 100 (48.5)
Schulte-Frankenfeld and Trautwein [40], 2021 | 99 | Balloon | 8 wk | Daily | App based | 35 (35.4)
Shin et al [41], 2020 | 630 | Gratitude writing | 20 min | Once | Web based | 49 (7.8)
Walsh et al [42], 2019 | 108 | Wildflowers | 3 wk | Daily | App based | 22 (20.4)

^a This table represents the general characteristics of the studies included in this systematic review. Only interventions of the studies that met the inclusion criteria are presented in this table.

^b N denotes the number of participants randomized in the study, irrespective of whether people conducted baseline and follow-up assessments.

^c Dropout rates are calculated from randomization to final assessment.

^d MBI: mindfulness-based intervention.

^e PERMA: Positive emotion, Engagement, Relationships, Meaning, Accomplishment.

Intervention Duration, Frequency, and Timing

The expected duration of intervention use varied substantially across interventions, ranging from a single session to 10 months, and there did not appear to be a clear end strategy across interventions. Most commonly, intervention use was recommended daily for up to 30 days, weekly for up to 8 weeks, or fortnightly for up to 10 months. Participants were often encouraged to use and access the intervention content for 5 to 15 minutes at a time, irrespective of the duration of the intervention.

Level of Automation of Interventions

Access was generally automated with instant, sequential, or weekly access to content (Table 4). Most digital content was delivered in a standard way, and tailoring and dynamic delivery of content occurred in only 2 mental well-being interventions [34,42].

Table 4. Level of automation and engagement of intervention.
Study, year | Intervention | Frequency of content release | How access to intervention content was provided | Tailoring of content to improve or maintain engagement | Other digital intervention strategies to improve or maintain engagement | Actual engagement with intervention content^a (%)
Study 3 from Avey et al [25], 2022 | Resilience intervention | Unknown | Unknown | None | None | Unknown
Bakker et al [26], 2018 | Moodkit | Instant access | N/A^b | N/A | None | Unknown
Bakker et al [26], 2018 | Moodprism | Instant access | N/A | Feedback on mental well-being | None | Unknown
Brazier et al [27], 2022 | Dear Doctor | Fortnightly | Automated text message | None | None | Unknown
Champion et al [28], 2018 | Headspace | Sequential access | Automated access upon completion of step in the app | None | None | 20.7
Chung et al [29], 2021 | Brief MBI^c | Fortnightly or weekly | Unknown | None | Notifying of new content | Unknown
Study 1 from Di Consiglio et al [30], 2021 | Noibene | Instant access | N/A | None | None | 100
Study 2 from Di Consiglio et al [30], 2021 | Noibene | Instant access | N/A | None | None | Unknown
Eisenstadt et al [31], 2021 | Paradym | Unknown | Unknown | None | Push notification | 32.1
Gammer et al [32], 2020 | Kindness for Mums Online | Weekly | Unknown | None | None | Unknown
Liu et al [33], 2021 | Positive psychology intervention | Sequential access | Unknown | None | None | Unknown
Ly et al [34], 2017 | Shim | Upon opening of app | Automated by digital conversational agent | On the basis of individual and external factors (eg, time of day) | None | 126.5
Mak et al [35], 2018 | Mindfulness-based program | Weekly | Unknown | None | Sticker earning and alarm feature | 29.5
Mak et al [35], 2018 | Compassion-based program | Weekly | Unknown | None | Sticker earning and alarm feature | 32.2
Manthey et al [36], 2016 | Best possible self | Weekly | Automated email | None | None | Unknown
Manthey et al [36], 2016 | Gratitude | Weekly | Automated email | None | None | Unknown
Mitchell et al [37], 2009 | Strengths intervention | Instant access | N/A | None | Interactive features and automated email reminders | Unknown
Mitchell et al [37], 2009 | Problem-solving intervention | Instant access | N/A | None | Interactive features and automated email reminders | Unknown
Neumeier et al [38], 2017 | PERMA^d program | Sequential access | Automated access upon completion of step in program | None | None | Unknown
Neumeier et al [38], 2017 | Gratitude program | Sequential access | Automated access upon completion of step in program | None | None | Unknown
Pheh et al [39], 2020 | Brief MBI | Instant access | N/A | None | None | Unknown
Schulte-Frankenfeld and Trautwein [40], 2021 | Balloon | Sequential access | Automated access upon completion of step in the app | None | A reminder was sent if a session was missed | 40.2
Shin et al [41], 2020 | Gratitude writing | Instant access | N/A | None | None | 100
Walsh et al [42], 2019 | Wildflowers | Sequential access | Automated access upon completion of step in the app | On the basis of mood and stress levels, recommendations were made for meditations | None | 77.7

^a Actual engagement with content is based on the requested frequency of engagement with the intervention (eg, daily for 2 wk=14 d=100%) compared with the actual frequency of engagement with the intervention (eg, on average, participants engaged with the intervention on 5 d=35.7%).

^b N/A: not applicable.

^c MBI: mindfulness-based intervention.

^d PERMA: Positive emotion, Engagement, Relationships, Meaning, Accomplishment.

Intervention Engagement

Overall, intervention engagement was suboptimal, falling below the required or recommended intervention engagement levels (Table 4). On average, participants engaged in 40.2% (median) of the recommended intervention sessions or days. Only a few studies (3/19, 16%) achieved optimal levels of engagement, with participants engaging in the recommended number of intervention sessions or days or more [30,34,41].
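As an illustrative cross-check (a minimal Python sketch, not part of the original analyses), the worked example from the Table 4 footnote and the median of the engagement values reported in Table 4 can be reproduced as follows:

from statistics import median

def engagement_pct(periods_engaged, periods_recommended):
    # eg, daily use recommended for 2 wk (14 d) with actual use on 5 d -> 35.7%
    return 100.0 * periods_engaged / periods_recommended

# Engagement percentages reported in Table 4, where known
reported = [20.7, 100.0, 32.1, 126.5, 29.5, 32.2, 40.2, 100.0, 77.7]

print(round(engagement_pct(5, 14), 1))  # 35.7
print(median(reported))                 # 40.2, the median engagement cited above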

Studies attempted to improve intervention engagement in a variety of different ways (Tables 2 and 4), including (1) sending automated email reminders or notifications to use the intervention, (2) increasing participant motivation (eg, increasing awareness of potential benefits and using in-app reward earning features), (3) increasing habit formation, and (4) tailoring intervention content based on external factors (such as time of day) or internal factors (such as suggestion of a specific activity based on someone’s mood).

Although caution should be used when interpreting the impact of these strategies on engagement with the intervention because of the variety and inconsistency in reporting, preliminary results imply that tailored content improves engagement more than reminders (habit formation and prompts) or sticker-earning features (nonspecific rewards) do. Furthermore, it seems that interventions that require little engagement—engaging once or 4 times in the intervention in total [30,41]—also allow for more optimal intervention engagement. This is in line with studies showing that engagement was generally highest at the start of the intervention and decreased over time.

Study Dropout and Attrition

Dropout was defined as a participant failing to complete the research protocol associated with the digital intervention at any point during the study period [56].

On average, there was a 37% dropout rate (mean), which ranged from 0% to 85% in the studies (Table 3). Strategies used to reduce study dropout included monetary incentives, the intervention being a mandatory element of university courses, and follow-up of participants by sending email reminders.

There were a range of findings across studies on the association between participants’ demographic characteristics and dropout. One study found that male participants were more likely to drop out [36], whereas others (2/19, 11%) found no difference [27,31]. Some studies (2/19, 11%) found that participants who remained in the study were older [35,38], although other studies (2/19, 11%) did not find this effect [31,36]. One study found that educational level was higher among participants who dropped out [35], whereas another study did not find this effect [38].

Several studies examined whether baseline mental well-being was associated with dropout. Most of these studies (5/19, 26% of all included studies) did not find any differences in baseline mental well-being levels between participants who did and did not drop out [27,29,32,35,36]. However, 1 study found that participants with lower mental well-being and higher levels of anxiety, depression, and distress were more likely to drop out [30], whereas another study found that participants with higher mental well-being and lower levels of anxiety, depression, and distress were more likely to drop out [31].

Few studies (2/19, 11%) excluded participants from their analysis (considered them to have dropped out) if they did not adhere to the intervention content at a minimum required level [37,42]; most studies (17/19, 89%) included participants with any level of intervention engagement.

Outcomes

A variety of validated standardized questionnaires were used to measure mental well-being across studies, including the WHO 5-item mental well-being index and the Warwick-Edinburgh Mental Well-Being Scale for mental well-being, the Psychological Well-Being Scale and the Flourishing Scale for psychological well-being, and the Satisfaction With Life Scale and the Positive and Negative Affect Schedule for subjective well-being (Table 1). Nevertheless, the authors of 1 study created and validated their own mental well-being questionnaire, which combined different measures. Although not included in this systematic review (as they are not considered the primary aim of mental well-being promotion), additional outcome measures such as distress, depression, anxiety, and stress were included in most studies (17/19, 89%).

RoB Assessments

Generally, the RoB of the included studies was considered to be high (Table 5). High levels of dropout and nonadherence led to a high RoB in domain 2 of Cochrane’s RoB 2.0 tool. This domain assesses RoB arising from deviations from the intended interventions (the effect of adhering to the intervention); the included studies were rated as high risk because they did not appropriately account for intervention nonadherence in their analyses. For example, the Cochrane RoB 2.0 tool recommends using an instrumental variable analysis or inverse probability weighting to appropriately account for nonadherence; however, none of the included studies conducted these analyses.

Table 5. Bias assessment using Cochrane’s risk of bias (RoB) 2.0 tool^a.
Study, year | Randomization process | Deviation from intended intervention | Missing outcome data | Measurement of outcome | Selection of the reported results | Overall RoB^b
Study 3 from Avey et al [25], 2022 | Some concerns^c | High^d | Low^e | Some concerns | High | High
Bakker et al [26], 2018 | Some concerns | High | Low | Some concerns | Some concerns | High
Brazier et al [27], 2022 | Low | High | Low | Some concerns | Some concerns | High
Champion et al [28], 2018 | Some concerns | High | Low | Some concerns | Low | High
Chung et al [29], 2021 | High | High | High | High | Some concerns | High
Study 1 from Di Consiglio et al [30], 2021 | Some concerns | High | Low | Some concerns | Some concerns | High
Gammer et al [32], 2020 | Low | High | Low | Some concerns | Low | High
Liu et al [33], 2021 | Some concerns | High | High | High | High | High
Ly et al [34], 2017 | Low | High | Low | Some concerns | Some concerns | High
Mak et al [35], 2018 | Low | High | Low | Some concerns | Some concerns | High
Manthey et al [36], 2016 | Low | High | Some concerns | Low | Some concerns | High
Mitchell et al [37], 2009 | Low | High | High | Low | Some concerns | High
Neumeier et al [38], 2017 | Some concerns | High | High | High | Some concerns | High
Pheh et al [39], 2020 | Some concerns | High | Some concerns | High | Some concerns | High
Schulte-Frankenfeld and Trautwein [40], 2021 | Low | High | High | Some concerns | Some concerns | High
Shin et al [41], 2020 | Low | Low | Low | High | Some concerns | High
Walsh et al [42], 2019 | Low | High | Some concerns | High | Some concerns | High

^a The National Institutes of Health bias assessment tool for before-after studies with no control group was used for study 2 from Di Consiglio et al [30], 2021 (overall RoB: high) and Eisenstadt et al [31], 2021 (overall RoB: high).

^b The overall RoB judgement for that specific study.

^c Some concerns: indicates that the authors considered there to be some concerns with the RoB for that study on that specific domain of the Cochrane RoB 2.0 tool.

^d High: indicates that the authors considered there to be a high RoB for that study on that specific domain of the Cochrane RoB 2.0 tool.

^e Low: indicates that the authors considered there to be a low RoB for that study on that specific domain of the Cochrane RoB 2.0 tool.

Furthermore, domain 4 of the RoB 2.0 tool, which assesses RoB in the measurement of the outcome, was frequently rated as high because of the fully automated and digital nature of the research. Self-report measures were used to digitally assess mental well-being; however, participants were aware of the intervention they had received when self-reporting their mental well-being scores, as most studies (11/19, 58%) included a waitlist control group. Although active controls account for this issue, these control interventions also had high levels of dropout and therefore might not be appropriate as a control group [35].

A high RoB was also detected because of a lack of general high-quality research practices. For example, several studies (7/19, 37%) did not provide any information regarding the randomization process, most studies were not preregistered (12/19, 63%), and some studies that did preregister (2/19, 11%) did not indicate their prespecified analysis plan.

Intervention Effects

All studies included fully automated digital mental well-being interventions in the general population and were therefore considered sufficiently homogeneous for a meta-analysis. Methodological homogeneity was also considered, which led to a comparison across RCTs only, as these were considered sufficiently homogeneous for a meta-analysis. Considering the very high levels of missing data, a meta-analysis based on ITT data was considered inappropriate; therefore, we conducted a meta-analysis based on PP data instead. Nevertheless, this increases the risk of underestimating or overestimating the real effect, which should be considered when interpreting the meta-analytic result. Full PP data were available for a subset of 12 studies. A random-effects model was applied, as different measures were used to assess the same multidimensional construct (mental well-being). Average effect estimates were computed for each study, with negative affect scores reversed to ensure that a higher score in each study indicated higher levels of mental well-being. SMDs, 95% CIs, and 2-sided P values were calculated.
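To make the pooling step concrete, the following minimal Python sketch implements an inverse-variance random-effects model with the DerSimonian-Laird estimator of between-study variance; the study-level SMDs and standard errors are illustrative placeholders rather than the extracted trial data, and the software used for the review may apply a different estimator.

import numpy as np

def pool_random_effects(smd, se):
    # Inverse-variance random-effects pooling (DerSimonian-Laird estimate of tau^2)
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    w = 1.0 / se**2                                            # fixed-effect (inverse-variance) weights
    q = np.sum(w * (smd - np.average(smd, weights=w))**2)      # Cochran Q
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    w_re = 1.0 / (se**2 + tau2)                                # random-effects weights
    pooled = np.average(smd, weights=w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

# Hypothetical per-study SMDs and standard errors (not the review's data)
pooled, ci, i2 = pool_random_effects([0.30, 0.10, 0.25], [0.12, 0.09, 0.15])
print(f"Pooled SMD {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2={i2:.0f}%")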

Outlier

During data extraction, the negative affect score in the intervention group of 1 study [33] was flagged by both reviewers as unexpectedly high, and further information was sought to identify what could explain this unusually large result. Normative data for negative affect are mean 14.8 (SD 5.4) [57]; however, the negative affect score in the waitlist control group in this study was mean 26.98 (SD 5.19). When exploring these data further, no methodological or clinical differences could, in our opinion, reliably explain this result. In addition, when the study was included in the meta-analysis, its CIs were entirely outside the range of any other study, and heterogeneity was extremely high (I²=92%; Multimedia Appendix 3). Removing this study from the meta-analysis reduced the overall heterogeneity from 92% to 50%. Therefore, the study was considered an outlier and was excluded from the meta-analysis.

Main Effect

The pooled SMD for the 12 trials, calculated using a random-effects model, was 0.19 (95% CI 0.04-0.33; P=.01), indicating a small clinical effect in favor of digital mental well-being interventions (Figure 2). There was substantial heterogeneity (I²=50%).

Figure 2. Per-protocol meta-analysis of fully automated digital interventions compared with control groups on mental well-being in the general population [27-30,32,34-36,38-40,42].

Sensitivity Analyses

As there was substantial heterogeneity (I²=50%), sensitivity analyses were performed to explore, interpret, and contextualize heterogeneity. First, intervention duration was explored using subgroups of interventions lasting up to 2 weeks (short), 2 to 6 weeks (medium), and >6 weeks (long).

A small significant effect was found for short interventions (SMD 0.24, 95% CI 0.04-0.45; P=.02) and medium interventions (SMD 0.29, 95% CI 0.05-0.52; P=.02); however, no effect was found for long interventions (SMD 0.02, 95% CI −0.22 to 0.26; Figure S1 in Multimedia Appendix 4). No significant levels of heterogeneity were found in any of the subgroups (all P>.05), and the subgrouping substantially reduced the overall level of heterogeneity (I²=28.6%).

Another sensitivity analysis was performed to explore methodological heterogeneity across studies based on the comparator. We argue that placebo controls are not feasible in psychological interventions, considering the difficulty of isolating intervention components [58]. Therefore, we grouped placebo controls under active controls in this review. A small significant effect was found in studies using a waitlist control as a comparator (SMD 0.28, 95% CI 0.07-0.50; P=.008), but no significant effect was found in studies using a placebo or active control as a comparator (SMD 0.05, 95% CI −0.08 to 0.18; P=.49; Figure S2 in Multimedia Appendix 4). No significant levels of heterogeneity were present in either of the 2 subgroups (all P>.05), although substantial heterogeneity remained in studies that used a waitlist control comparator (I²=53%).

Finally, a sensitivity analysis was performed based on the outcomes of mental well-being, psychological well-being, and subjective well-being. A small significant effect was found on subjective well-being (SMD 0.23, 95% CI 0.04-0.42; P=.02). However, no significant effect was found on mental well-being (SMD 0.14, 95% CI −0.12 to 0.40; P=.31) or psychological well-being (SMD 0.26, 95% CI −0.08 to 0.59; P=.14; Figure S3 in Multimedia Appendix 4). Although heterogeneity was reduced in the subjective well-being and psychological well-being subgroups, substantial heterogeneity remained in the mental well-being subgroup (I²=72%).

Reporting Bias

Visual inspection of the funnel plot, which appeared asymmetrical, indicated evidence of reporting bias (Figure 3). Few small studies were found, although larger random variation would be expected among small studies; this is potentially because of publication bias, although other aspects such as heterogeneity can also cause asymmetrical funnel plots.

Figure 3. Funnel plot. Asymmetrical plot due to the presence of publication bias or low methodological quality studies. The funnel plot only represents studies that were included in the main per-protocol meta-analysis. SMD: standardized mean difference.
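A funnel plot of this kind can be reproduced with a short Python sketch such as the following; the study-level values are hypothetical placeholders, and only the pooled SMD (0.19) is taken from the main analysis.

import numpy as np
import matplotlib.pyplot as plt

smd = np.array([0.30, 0.10, 0.25, 0.45, -0.05])    # hypothetical study effect sizes
se = np.array([0.12, 0.09, 0.15, 0.25, 0.20])      # hypothetical standard errors
pooled = 0.19                                       # pooled SMD from the main analysis

se_grid = np.linspace(0.001, se.max() * 1.1, 100)
plt.scatter(smd, se)                                # each point is one study
plt.plot(pooled - 1.96 * se_grid, se_grid, "k--")   # pseudo 95% confidence limits
plt.plot(pooled + 1.96 * se_grid, se_grid, "k--")
plt.axvline(pooled, color="k")
plt.gca().invert_yaxis()                            # more precise studies plotted at the top
plt.xlabel("Standardized mean difference (SMD)")
plt.ylabel("Standard error")
plt.show()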

Certainty of Body of Evidence (Grading of Recommendations, Assessment, Development, and Evaluations)

The certainty of the body of evidence was assessed using Grading of Recommendations, Assessment, Development, and Evaluations [59]. The evidence was downgraded because of high RoB (effect of adhering to the intervention; Table 5), inconsistency (heterogeneity was considered substantial; Figure 2), imprecision (wide CIs and insufficient sample sizes were observed; Figure 2), and publication bias (visual asymmetry in the funnel plot; Figure 3). Thus, we have very low confidence in the quality of evidence for the main PP meta-analytic effect (Figure 2), meaning that we are very uncertain about the estimate of the effect.

Subgroup Analysis

An a priori subgroup analysis was planned to detect the effects of digital mental well-being interventions across individual differences (eg, age, sex, and educational level). Nevertheless, insufficient data were available for a meaningful comparison to be made.

Another a priori subgroup analysis was planned to identify the effectiveness across psychological approaches. Mindfulness, ACT, and self-compassion interventions were the most common. A total of 7 studies were included in this subgroup. A small significant effect was found for fully automated digital mindfulness, ACT, and self-compassion interventions to promote mental well-being in the general population (SMD 0.26, 95% CI 0.08-0.44; P=.006), with moderate levels of heterogeneity (I²=44%; Figure 4). The positive psychology intervention subgroup only included 2 studies, and there were significant levels of heterogeneity (P=.03; I²=78%). Studies investigating CBT-based interventions did not contain any PP data and could therefore not be included as a subgroup in the analysis. The final subgroup included an integrative approach; 3 studies contained sufficient PP data to be included. There was no significant level of heterogeneity in this subgroup (P=.53; I²=0%); however, integrative approaches did not have a significant effect on mental well-being in the general population (P=.33).

Overall, no significant subgroup difference was found when comparing the effects of mindfulness, ACT, self-compassion, positive psychology, and integrative interventions on mental well-being (P=.06).

Figure 4. Subgroup analysis of different psychological approaches to promote mental well-being [27-30,32,34-36,38-40,42].

Discussion

Main Effect

The aim of this systematic review and meta-analysis was to understand the effectiveness of fully automated digital interventions in promoting mental well-being in the general population. We evaluated 24 fully automated digital mental well-being interventions lasting from a single session to 10 months, with daily, weekly, or fortnightly delivery. After the intervention, we found a small significant effect of fully automated digital mental well-being interventions compared with control groups on mental well-being in the general population.

The effect found in this meta-analysis of fully automated digital interventions (SMD 0.19) was smaller than the effect found in previous meta-analyses of nonautomated mental well-being interventions (effect sizes ranging between 0.26 and 0.42) [7,15,16]. This could highlight the importance of nonspecific psychological factors, such as the therapeutic relationship and social support, in the effectiveness of these psychological interventions. Alternatively, it could indicate the importance of social support for adherence to mental well-being interventions. Previous research found that improved adherence was linked to better mental well-being outcomes and that adherence tended to be higher in nonautomated interventions [18,56]. As suboptimal intervention adherence was observed in this review, with average engagement in 40% of the intervention content, the reported effectiveness is likely an underestimation of the potential effectiveness of fully automated digital interventions that could be achieved at optimal levels of engagement (the level of engagement recommended by the researchers). Nevertheless, the recommended engagement levels differed considerably between studies, and studies lacked a clear end strategy.

Exploratory Findings

We found that short (up to 2 weeks) and medium (2-6 weeks) interventions were effective in promoting mental well-being in the general population, but long (>6 weeks) interventions were not. This could be further related to intervention adherence, as (in line with previous research findings) intervention adherence reduced with time [56]. The optimal intervention duration may also depend on the outcome being targeted. Research has found that short interventions led to a greater effect on subjective well-being, whereas long interventions had a greater effect on psychological well-being [60]. As most studies (9/19, 47%) in this review included a subjective well-being outcome, this might explain why shorter interventions were found to be effective in this review.

In contrast to prior research, our exploratory analysis showed no significant effect on general mental well-being outcomes (eg, Warwick-Edinburgh Mental Well-Being Scale) [7,15]. Measures of general mental well-being might lack the sensitivity to detect subtle changes occurring within the general population. This could be attributed to the concise nature of mental well-being measures, which encompass both subjective and psychological aspects [47]. Previous research included clinical populations alongside general populations and nonautomated interventions alongside fully automated digital mental well-being interventions [7,14,15]; both factors increase the effectiveness of mental well-being interventions, which could lead to an effect sufficiently large to be detected using a general mental well-being measure.

Furthermore, we found a small significant effect when comparing a fully automated digital mental well-being intervention with a waitlist control group, although no significant effect was found when comparing it with an active or placebo control group. The effect when compared with an active and placebo control is expected to be smaller than the effect when compared with a passive control [61]. This indicates that the effects of mental well-being interventions and other psychological interventions (eg, active control) on mental well-being do not currently differ.

Subgroup Effects

It was not possible to analyze the effects of digital mental well-being promotion across population subgroups (based on age, sex, socioeconomic status, and educational level) because of a lack of studies reporting these results separately.

Nevertheless, studies did provide exploratory findings on the relationship between individual differences and dropout in fully automated digital mental well-being interventions. These exploratory findings indicated largely conflicting evidence on whether and how individual differences were related to dropout, which is in line with previous research findings [56].

A subgroup analysis comparing the psychological approaches adopted in fully automated digital mental well-being interventions indicated a small significant effect of fully automated digital mindfulness-, ACT-, and compassion-based interventions on mental well-being in the general population; this was the most frequently adopted psychological approach (7/19, 37% of studies). The effectiveness of fully automated digital positive psychology and CBT-based approaches remains largely unknown. A potential explanation for this is the large focus of CBT-based interventions on symptom reduction rather than on mental well-being improvement [62]. Furthermore, positive psychology interventions have been criticized recently because of the limited ability of studies to replicate positive psychology results [63], potentially leading to fewer studies investigating positive psychology interventions.

Finally, although several studies (6/19, 32%) adopted an integrative approach, we did not find an effect of fully automated digital integrative approaches on mental well-being in the general population. This contradicts previous meta-analytic findings of a significant effect of multitheoretical interventions on mental well-being in the general population [7]. Nevertheless, the previous meta-analysis also found a smaller effect for multitheoretical interventions than for MBIs [7], indicating that these interventions might generally be less effective. This might explain why no effect of integrative approaches was found in fully automated digital interventions.

Limitations

Several methodological limitations should be recognized, however, as they could have impacted the findings of this systematic review. First, the specific search terms adopted in this systematic review limit the findings. Although searches should aim to be as comprehensive as possible, it is necessary to balance sensitivity and specificity when conducting searches [64]. The specificity adopted in this systematic review may not have allowed the searches to be fully comprehensive, as the literature uses many different terms to describe fully automated digital mental well-being interventions. Second, the inclusion criteria in this systematic review are somewhat ambiguous and required judgement [64]. This subjectivity could lead to lower reproducibility of the findings and to random errors and biases [65]. Finally, the review adopts an exclusive focus on mental well-being (which includes both subjective and psychological well-being). Although improving mental well-being could be considered the primary aim of digital mental well-being promotion [10], this exclusive focus does not allow the review to provide insights into indirect positive or negative intervention effects.

In addition to methodological limitations, we observed several limitations of the included studies that lowered confidence in the quality of evidence (Grading of Recommendations, Assessment, Development, and Evaluations). We saw a high RoB in the included studies for the following reasons: (1) missing outcome data—although it is unknown what impact the dropout has on the overall effect (eg, underestimation or overestimation), as reasons for dropout remain largely unknown; (2) the effect of adherence—suboptimal adherence might lead to an underestimation of the effectiveness; and (3) measurement of the outcome—because of the use of self-report measures while participants are aware of their allocated intervention, potentially leading to an overestimation of the effectiveness. In addition, we found a lack of general high-quality research practice in studies. Several studies were underpowered, did not provide sufficient information regarding randomization, and did not preregister or contain a prespecified analysis plan.

Furthermore, we detected a publication bias of the studies included in the meta-analysis. This publication bias indicated that smaller studies with a larger random variation were largely missing, perhaps because they were less likely to be published.

Finally, the fully automated digital mental well-being interventions were primarily delivered in a Western context and typically included a sample of participants who were highly educated and female, which might limit the generalizability of the findings. In particular, there is evidence that females and highly educated individuals might engage with and therefore benefit from these interventions differently.

Recommendations for Future Research

The systematic review findings lead to several implications for future research. First, future research should aim to focus in more detail on supporting engagement and reducing dropout in fully automated digital mental well-being interventions—by understanding the impact of behavioral strategies, such as habit formation and nonspecific rewards [55], and also by examining what is considered effective engagement—the target level of intervention engagement needed for change [66]. This will allow for evidence-based recommendations of the level of intervention engagement in future research and practice and for studies to adopt effective end strategies.

Second, future research should look to understand how automated digital interventions can be tailored to deliver relevant content according to the preferences of the user, whether tailoring is necessary to ensure intervention effectiveness, and whether acceptability can be ensured across different populations (eg, Western vs non-Western) and intervention types (eg, positive psychology vs mindfulness and ACT).

Finally, we recommend that future research strictly follows high-quality research recommendations, such as the CONSORT (Consolidated Standards of Reporting Trials) statement [67], when investigating fully automated digital mental well-being interventions to allow for higher confidence in the quality of the evidence.

Conclusions

Overall, this review provides a novel insight into the effectiveness of fully automated digital mental well-being interventions in the general population. It shows that fully automated digital mental well-being interventions can effectively promote mental well-being in the general population (particularly when adopting a mindfulness-, ACT-, and self-compassion–based approach), despite low levels of intervention adherence and high study dropout.

Acknowledgments

The authors are grateful for the support of the librarians at the University of Bath and advice from Emma Fisher in conducting the meta-analysis. This work was supported by the Economic and Social Research Council (grant 2572559) and Cyberlimbic Systems Ltd.

Data Availability

For the purpose of open access, the author has applied a Creative Commons Attribution (CC-BY) license. Data supporting this study are openly available from UK Data Service.

Authors' Contributions

JG, BA, MB, CC, and TT designed the study, created the study protocol, and developed the study methodology. Study selection was conducted by JG, MB, ET, MZ, and BA. Data extraction and quality assessment were conducted by JG, AM, and BA. The data were analyzed by JG, BA, MB, and CC. The manuscript was written and edited by JG, BA, MB, CC, AM, TT, MB, ET, and MZ.

Conflicts of Interest

JG received partial funding for this research project from Cyberlimbic Systems Ltd. TT is the chief executive officer and cofounder of Cyberlimbic Systems Ltd. BA received funding from the National Institute for Health Research and UK Research and Innovation on the topic of digital health interventions. BA also sits on the scientific advisory boards of the Medito Foundation and earGym. The remaining authors have no conflicts of interest to declare.

Multimedia Appendix 1

Search strategy per database.

DOCX File , 21 KB

Multimedia Appendix 2

Table with detailed intervention description.

DOCX File , 28 KB

Multimedia Appendix 3

Main per-protocol analysis including outlier [27-30,32-36,38-40,42].

DOCX File , 35 KB

Multimedia Appendix 4

Exploring heterogeneity [27-30,32,34-36,38-40,42].

DOCX File , 114 KB

Multimedia Appendix 5

PRISMA Checklist.

PDF File (Adobe PDF File), 69 KB

  1. Diener E. Subjective well-being. Psychol Bull. 1984;95(3):542-575. [FREE Full text] [CrossRef]
  2. Ryff CD. Happiness is everything, or is it? Explorations on the meaning of psychological well-being. J Pers Soc Psychol. 1989;57(6):1069-1081. [FREE Full text] [CrossRef]
  3. Ryan RM, Deci EL. On happiness and human potentials: a review of research on hedonic and eudaimonic well-being. Annu Rev Psychol. 2001;52:141-166. [FREE Full text] [CrossRef] [Medline]
  4. Westerhof GJ, Keyes CL. Mental illness and mental health: the two continua model across the lifespan. J Adult Dev. Jun 2010;17(2):110-119. [FREE Full text] [CrossRef] [Medline]
  5. Promotion of mental well-being: pursuit of happiness. World Health Organization. Nov 30, 2013. URL: https://www.who.int/publications/i/item/9789290224228 [accessed 2022-09-15]
  6. Tudor K. Mental Health Promotion: Paradigms and Practice. Milton Park, UK. Routledge; 1996.
  7. van Agteren J, Iasiello M, Lo L, Bartholomaeus J, Kopsaftis Z, Carey M, et al. A systematic review and meta-analysis of psychological interventions to improve mental wellbeing. Nat Hum Behav. May 2021;5(5):631-652. [FREE Full text] [CrossRef] [Medline]
  8. Bohlmeijer ET, Westerhof GJ. A new model for sustainable mental health. In: Making an Impact on Mental Health. Milton Park, UK. Routledge; 2020.
  9. Keyes CL. Promoting and protecting mental health as flourishing: a complementary strategy for improving national mental health. Am Psychol. 2007;62(2):95-108. [FREE Full text] [CrossRef]
  10. Promoting mental health: concepts, emerging evidence, practice. World Health Organization. 2004. URL: https://apps.who.int/iris/bitstream/handle/10665/42940/9241591595.pdf [accessed 2022-09-15]
  11. Bendtsen M, Müssener U, Linderoth C, Thomas K. A mobile health intervention for mental health promotion among university students: randomized controlled trial. JMIR Mhealth Uhealth. Mar 20, 2020;8(3):e17208. [FREE Full text] [CrossRef] [Medline]
  12. Comprehensive mental health action plan 2013-2030. World Health Organization. Sep 21, 2021. URL: https://www.who.int/publications/i/item/9789240031029 [accessed 2022-09-15]
  13. Weiss LA, Westerhof GJ, Bohlmeijer ET. Can we increase psychological well-being? The effects of interventions on psychological well-being: a meta-analysis of randomized controlled trials. PLoS One. 2016;11(6):e0158092. [FREE Full text] [CrossRef] [Medline]
  14. Carr A, Cullen K, Keeney C, Canning C, Mooney O, Chinseallaigh E, et al. Effectiveness of positive psychology interventions: a systematic review and meta-analysis. J Posit Psychol. Sep 10, 2020;16(6):749-769. [FREE Full text] [CrossRef]
  15. Galante J, Friedrich C, Dawson AF, Modrego-Alarcón M, Gebbing P, Delgado-Suárez I, et al. Mindfulness-based programmes for mental health promotion in adults in nonclinical settings: a systematic review and meta-analysis of randomised controlled trials. PLoS Med. Jan 11, 2021;18(1):e1003481. [FREE Full text] [CrossRef] [Medline]
  16. Howell AJ, Passmore HA. Acceptance and Commitment Training (ACT) as a positive psychological intervention: a systematic review and initial meta-analysis regarding ACT’s role in well-being promotion among university students. J Happiness Stud. Sep 5, 2018;20(6):1995-2010. [FREE Full text] [CrossRef]
  17. Behaviour change: digital and mobile health interventions. National Institute for Health and Care Excellence. Oct 7, 2020. URL: https://www.nice.org.uk/guidance/ng183/resources/behaviour-change-digital-and-mobile-health-interventions-pdf-66142020002245#:~:text=This%20guideline%20covers%20interventions%20that,wearable%20devices%20or%20the%20internet [accessed 2022-09-15]
  18. Leung C, Pei J, Hudec K, Shams F, Munthali R, Vigo D. The effects of nonclinician guidance on effectiveness and process outcomes in digital mental health interventions: systematic review and meta-analysis. J Med Internet Res. Jun 15, 2022;24(6):e36004. [FREE Full text] [CrossRef] [Medline]
  19. Torous J, Jän Myrick K, Rauseo-Ricupero N, Firth J. Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health. Mar 26, 2020;7(3):e18848. [FREE Full text] [CrossRef] [Medline]
  20. Ainsworth B, Atkinson MJ, AlBedah E, Duncan S, Groot J, Jacobsen P, et al. Current tensions and challenges in mindfulness research and practice. J Contemp Psychother. May 20, 2023:1-6. [FREE Full text] [CrossRef]
  21. Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane Handbook for Systematic Reviews of Interventions version 6.3. London, UK. The Cochrane Collaboration; 2019.
  22. Page M, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. Apr 2021;88:105906. [FREE Full text] [CrossRef] [Medline]
  23. Reeves BC, Shea BJ, Wells GA, Sharma Waddington H. Updated guidance from the agency for healthcare research and quality effective healthcare program on including nonrandomized studies of interventions in systematic reviews: a work in progress. J Clin Epidemiol. Dec 2022;152:309-310. [FREE Full text] [CrossRef] [Medline]
  24. Groot J, Ainsworth B, Brosnan M, Clarke C. The effectiveness of digital interventions to enhance mental well-being in the general population: a systematic review. PROSPERO. 2022. URL: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022310702 [accessed 2023-09-11]
  25. Avey J, Newman A, Herbert K. Fostering employees' resilience and psychological well-being through an app-based resilience intervention. Pers Rev (forthcoming). Aug 18, 2022 [FREE Full text] [CrossRef]
  26. Bakker D, Kazantzis N, Rickwood D, Rickard N. A randomized controlled trial of three smartphone apps for enhancing public mental health. Behav Res Ther. Oct 2018;109:75-83. [FREE Full text] [CrossRef] [Medline]
  27. Brazier A, Larson E, Xu Y, Judah G, Egan M, Burd H, et al. 'Dear Doctor': a randomised controlled trial of a text message intervention to reduce burnout in trainee anaesthetists. Anaesthesia. Apr 13, 2022;77(4):405-415. [FREE Full text] [CrossRef] [Medline]
  28. Champion L, Economides M, Chandler C. The efficacy of a brief app-based mindfulness intervention on psychosocial outcomes in healthy adults: a pilot randomised controlled trial. PLoS One. 2018;13(12):e0209482. [FREE Full text] [CrossRef] [Medline]
  29. Chung J, Mundy ME, Hunt I, Coxon A, Dyer KR, McKenzie S. An evaluation of an online brief mindfulness-based intervention in higher education: a pilot conducted at an Australian university and a British university. Front Psychol. Oct 28, 2021;12:752060. [FREE Full text] [CrossRef] [Medline]
  30. Di Consiglio M, Fabrizi G, Conversi D, La Torre G, Pascucci T, Lombardo C, et al. Effectiveness of NoiBene: a web-based programme to promote psychological well-being and prevent psychological distress in university students. Appl Psychol Health Well Being. May 17, 2021;13(2):317-340. [FREE Full text] [CrossRef] [Medline]
  31. Eisenstadt A, Liverpool S, Metaxa AM, Ciuvat RM, Carlsson C. Acceptability, engagement, and exploratory outcomes of an emotional well-being app: mixed methods preliminary evaluation and descriptive analysis. JMIR Form Res. Nov 01, 2021;5(11):e31064. [FREE Full text] [CrossRef] [Medline]
  32. Gammer I, Hartley-Jones C, Jones FW. A randomized controlled trial of an online, compassion-based intervention for maternal psychological well-being in the first year postpartum. Mindfulness. Jan 17, 2020;11(4):928-939. [FREE Full text] [CrossRef]
  33. Liu K, Duan Y, Wang Y. The effectiveness of a web-based positive psychology intervention in enhancing college students' mental well-being. Soc Behav Pers Int J. Aug 04, 2021;49(8):1-13. [FREE Full text] [CrossRef]
  34. Ly KH, Ly AM, Andersson G. A fully automated conversational agent for promoting mental well-being: a pilot RCT using mixed methods. Internet Interv. Dec 2017;10:39-46. [FREE Full text] [CrossRef] [Medline]
  35. Mak WW, Tong AC, Yip SY, Lui WW, Chio FH, Chan AT, et al. Efficacy and moderation of mobile app-based programs for mindfulness-based training, self-compassion training, and cognitive behavioral psychoeducation on mental health: randomized controlled noninferiority trial. JMIR Ment Health. Oct 11, 2018;5(4):e60. [FREE Full text] [CrossRef] [Medline]
  36. Manthey L, Vehreschild V, Renner KH. Effectiveness of two cognitive interventions promoting happiness with video-based online instructions. J Happiness Stud. Feb 2016;17(1):319-339. [FREE Full text] [CrossRef]
  37. Mitchell J, Stanimirovic R, Klein B, Vella-Brodrick D. A randomised controlled trial of a self-guided internet intervention promoting well-being. Comput Hum Behav. May 2009;25(3):749-760. [FREE Full text] [CrossRef]
  38. Neumeier LM, Brook L, Ditchburn G, Sckopke P. Delivering your daily dose of well-being to the workplace: a randomized controlled trial of an online well-being programme for employees. Eur J Work Organ Psychol. May 04, 2017;26(4):555-573. [FREE Full text] [CrossRef]
  39. Pheh K, Tan H, Tan C. Ultra-brief online mindfulness-based intervention effects on mental health during the coronavirus disease outbreak in Malaysia: a randomized controlled trial. Makara Hum Behav Stud Asia. Dec 31, 2020;24(2):118-128. [FREE Full text] [CrossRef]
  40. Schulte-Frankenfeld PM, Trautwein FM. App-based mindfulness meditation reduces perceived stress and improves self-regulation in working university students: a randomised controlled trial. Appl Psychol Health Well Being. Nov 27, 2022;14(4):1151-1171. [FREE Full text] [CrossRef] [Medline]
  41. Shin M, Wong YJ, Yancura L, Hsu K. Thanks, mom and dad! An experimental study of gratitude letter writing for Asian and white American emerging adults. Couns Psychol Q. Nov 19, 2018;33(3):267-286. [FREE Full text] [CrossRef]
  42. Walsh KM, Saab BJ, Farb NA. Effects of a mindfulness meditation app on subjective well-being: active randomized controlled trial and experience sampling study. JMIR Ment Health. Jan 08, 2019;6(1):e10844. [FREE Full text] [CrossRef] [Medline]
  43. Watson PF, Petrie A. Method agreement analysis: a review of correct methodology. Theriogenology. Jun 2010;73(9):1167-1179. [FREE Full text] [CrossRef] [Medline]
  44. Sterne JA, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. Aug 28, 2019;366:l4898. [FREE Full text] [CrossRef] [Medline]
  45. Study quality assessment tools. National Institutes of Health National Heart Lung and Blood Institute. URL: https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools [accessed 2022-09-15]
  46. World Health Organization. Regional Office for Europe. Wellbeing measures in primary health care/the DepCare Project: report on a WHO meeting: Stockholm, Sweden, 12–13 February 1998. World Health Organization. 1998. URL: https://apps.who.int/iris/handle/10665/349766 [accessed 2022-09-15]
  47. Stewart-Brown S, Janmohamed K. Warwick-Edinburgh Mental Well-being Scale (WEMWBS) User Guide Version 1. National Health Service. Jun 2008. URL: http://www.mentalhealthpromotion.net/resources/user-guide.pdf [accessed 2022-09-15]
  48. Diener E, Emmons RA, Larsen RJ, Griffin S. The satisfaction with life scale. J Pers Assess. Feb 1985;49(1):71-75. [FREE Full text] [CrossRef] [Medline]
  49. Mackinnon A, Jorm AF, Christensen H, Korten AE, Jacomb PA, Rodgers B. A short form of the Positive and Negative Affect Schedule: evaluation of factorial validity and invariance across demographic variables in a community sample. Pers Individ Differ. Sep 1999;27(3):405-416. [FREE Full text] [CrossRef]
  50. Ciccarello L, Reinhard MA. LGS. Lebensglückskala [Verfahrensdokumentation und Fragebogen mit Auswertung]. Open Test Archive. 2014. URL: https://www.testarchiv.eu/de/test/9006602 [accessed 2022-09-15]
  51. Lyubomirsky S, Lepper HS. A measure of subjective happiness: preliminary reliability and construct validation. Soc Indic Res. 1999;46:137-155. [CrossRef]
  52. Berkman PL. Life stress and psychological well-being: a replication of Langner's analysis in the midtown Manhattan study. J Health Soc Behav. Mar 1971;12(1):35-45. [FREE Full text] [CrossRef]
  53. Ryff CD, Keyes CL. The structure of psychological well-being revisited. J Pers Soc Psychol. 1995;69(4):719-727. [CrossRef]
  54. International Wellbeing Group. Personal Wellbeing Index: 5th Edition. Australian Centre on Quality of Life, Deakin University. 2013. URL: https://www.acqol.com.au/uploads/pwi-a/pwi-a-english.pdf [accessed 2022-09-15]
  55. Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. Aug 2013;46(1):81-95. [FREE Full text] [CrossRef] [Medline]
  56. Linardon J, Fuller-Tyszkiewicz M. Attrition and adherence in smartphone-delivered interventions for mental health problems: a systematic and meta-analytic review. J Consult Clin Psychol. Jan 2020;88(1):1-13. [FREE Full text] [CrossRef] [Medline]
  57. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: the PANAS scales. J Pers Soc Psychol. 1988;54(6):1063-1070. [FREE Full text] [CrossRef]
  58. Ainsworth B, Hardman D, Thomas M. The importance of differentiating behavioural and psychological treatment effects from placebo in respiratory interventions. Eur Respir J. Apr 2019;53(4):1900156. [FREE Full text] [CrossRef] [Medline]
  59. Schünemann H, Brożek J, Guyatt G, Oxman A. GRADE Handbook. London, UK. The Cochrane Collaboration; 2013.
  60. Bolier L, Haverman M, Westerhof GJ, Riper H, Smit F, Bohlmeijer E. Positive psychology interventions: a meta-analysis of randomized controlled studies. BMC Public Health. Feb 08, 2013;13:119. [FREE Full text] [CrossRef] [Medline]
  61. Au J, Gibson BC, Bunarjo K, Buschkuehl M, Jaeggi SM. Quantifying the difference between active and passive control groups in cognitive interventions using two meta-analytical approaches. J Cogn Enhanc. Jun 2020;4(2):192-210. [FREE Full text] [CrossRef] [Medline]
  62. Bannink FP. Positive CBT: from reducing distress to building success. J Contemp Psychother. Jun 26, 2013;44(1):1-8. [FREE Full text] [CrossRef]
  63. White CA, Uttl B, Holder MD. Meta-analyses of positive psychology interventions: the effects are much smaller than previously reported. PLoS One. 2019;14(5):e0216588. [FREE Full text] [CrossRef] [Medline]
  64. Lefebvre C, Glanville J, Briscoe S, Littlewood A, Marshall C, Metzendorf MI, et al. Searching for and selecting studies. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editors. Cochrane Handbook for Systematic Reviews of Interventions. London, UK. The Cochrane Collaboration; 2019.
  65. Herner M. Perfect top of the evidence hierarchy pyramid, maybe not so perfect: lessons learned by a novice researcher engaging in a meta-analysis project. BMJ Evid Based Med. Aug 2019;24(4):130-132. [FREE Full text] [CrossRef] [Medline]
  66. Ainsworth B, Steele M, Stuart B, Joseph J, Miller S, Morrison L, et al. Using an analysis of behavior change to inform effective digital intervention design: how did the PRIMIT website change hand hygiene behavior across 8993 users? Ann Behav Med. Jun 2017;51(3):423-431. [FREE Full text] [CrossRef] [Medline]
  67. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. Jan 01, 2015;4(1):1-9. [FREE Full text] [CrossRef] [Medline]


ACT: acceptance and commitment therapy
CBT: cognitive behavioral therapy
CONSORT: Consolidated Standards of Reporting Trials
ITT: intention to treat
MBI: mindfulness-based intervention
PP: per-protocol
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial
RoB: risk of bias
SMD: standardized mean difference
WHO: World Health Organization


Edited by J Torous; submitted 28.11.22; peer-reviewed by S Toh, S Hermsen; comments to author 15.02.23; revised version received 30.06.23; accepted 07.07.23; published 19.10.23.

Copyright

©Julia Groot, Alexander MacLellan, Madelaine Butler, Elisa Todor, Mahnoor Zulfiqar, Timothy Thackrah, Christopher Clarke, Mark Brosnan, Ben Ainsworth. Originally published in JMIR Mental Health (https://mental.jmir.org), 19.10.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.