Published on 23.01.2018 in Vol 5, No 1 (2018): Jan-Mar

Evaluation of an mHealth App (DeStressify) on University Students’ Mental Health: Pilot Trial


Authors of this article:

Rebecca Anne Lee1; Mary Elizabeth Jung1

Original Paper

Faculty of Health and Social Development, University of British Columbia, Kelowna, BC, Canada

Corresponding Author:

Mary Elizabeth Jung, PhD

Faculty of Health and Social Development

University of British Columbia

1147 Research Road

FHSD-ART360

Kelowna, BC, V1V 1V7

Canada

Phone: 1 2508079670

Fax: 1 2508079965

Email: mary.jung@ubc.ca


Background: One in five Canadians experience mental health issues, with those in the age range of 15 to 24 years being most at risk of a mood disorder. University students have shown significantly higher rates of mental health problems than the general public. Current university support services are limited by factors such as available staff and finances, and social stigma has frequently been identified as an additional barrier that prevents students from accessing these resources. Mobile health (mHealth) apps are one form of alternative health support that is discreet and accessible to students, and although they are recognized as a promising alternative, there is limited research demonstrating their efficacy.

Objective: The aim of this study was to evaluate the efficacy of a mindfulness-based app (“DeStressify”) on stress, anxiety, depressive symptomology, sleep behavior, work or class absenteeism, work or school productivity, and quality of life (QoL) among university students.

Methods: Full-time undergraduate students at a Canadian university with smartphones and Internet access were recruited through in-class announcements and on-campus posters. Participants randomized into the experimental condition were given the DeStressify app and instructed to use it 5 days a week for 4 weeks. Control condition participants were wait-listed. All participants completed pre- and postintervention Web-based surveys to self-assess stress, anxiety, depressive symptomatology, sleep quality, and health-related QoL.

Results: A total of 206 responses were collected at baseline, with 163 participants completing the study (86 control, 77 experimental). Using DeStressify was shown to reduce trait anxiety (P=.01) and improve general health (P=.001), energy (P=.01), and emotional well-being (P=.01) in university students, and more participants in the experimental condition than expected by chance believed their productivity improved between baseline and postintervention measurements (P=.01). The app did not significantly improve stress, state anxiety, physical or social functioning, or role limitations because of physical or emotional health problems or pain (P>.05).

Conclusions: Mindfulness-based apps may provide an effective alternative support for university students’ mental health. Universities and other institutions may benefit from promoting the use of DeStressify or other mindfulness-based mHealth apps among students who are interested in methods of anxiety management or mindfulness-based self-driven health support. Future steps include examining DeStressify and similar mHealth apps over a longer period and in university staff and faculty.

JMIR Ment Health 2018;5(1):e2

doi:10.2196/mental.8324


Prevalence of Mental Health Issues

One in five Canadians will experience a mental health issue [1], with those in the age range of 15 to 24 years being the most at risk of meeting criteria for a mood and substance abuse disorder [2]. University students are of particular concern as they have shown significantly higher rates of mental health problems than the general public [3]. In recent years, mental health support services on university campuses have experienced high volumes of appointment requests but are limited in the amount of support they are able to provide. Counseling center directors have previously listed wait-list issues and funding concerns among necessary improvements for Canadian postsecondary counseling services, with 78% reporting being unable to meet the growing demand for services [4]. Today, postsecondary counseling center staff have noted that centers are still in need of additional resources to meet student requests [5]. The average wait time to receive mental health treatment services in Canada is 19.3 weeks [6], and students at some institutions may have to wait up to 6 months for individual treatment [7], meaning many students are left without support.

Impact on Students

Mental health issues have been shown to negatively impact student academic performance [8,9], with stress, anxiety, and sleep difficulties found to be the top three factors most frequently reported by students [10]. Additionally, diagnosed depression has been associated with decreased academic performance [11] and health problems such as back pain, diabetes, irritable bowel syndrome, and migraine headaches [12]. Unfortunately, when students attempt to manage their mental health issues, they do not always engage in healthy coping mechanisms. A survey of 212 American college students found that only 5% reach out to a professional for stress-related management, with more students instead turning to drinking, smoking, and using illegal drugs [13]. Dishearteningly, avoidance of professional help for mental health concerns on university campuses is a worldwide problem [3,14-16]. Reasons for not engaging with professional mental health support included perceiving stress as normal for university or graduate school, fear of judgment, shame, and uncertainty of effectiveness [15,16].

Another notable barrier to seeking mental health aid is stigma [17-19], which may be perpetuated or endorsed by others or internalized by the individual [20]. Stigma associated with mental illness precludes many people from seeking face-to-face counseling [20,21], particularly those experiencing depressive symptoms, stress, and anxiety [22,23]. Results from a study on help-seeking behaviors and access to health care within a university population showed that 20% of students who did not use support services despite reporting symptoms of anxiety and depressive disorders did so because they were worried about what people would think. Another 20% said that they did not seek help because they thought that others would not understand their problems [16]. Among Japanese university students, feeling ashamed and worrying about other people’s opinions were listed as barriers to visiting mental health professionals [15].

Issues With Current Resources

Canadian universities currently provide services and programs to support student mental health, but these services and programs have shortcomings. First, the rise in serious mental health concerns has been associated with an increase in demand for on-campus counseling; yet, university counseling centers have not been able to meet these demands with adequate staffing [5,6]. Furthermore, on-campus counselors are often advised to only take on patients short-term, with policies often limiting the number of sessions allowed per patient [4,24]. Although these policies may allow a greater number of students to access counselors, they do not necessarily ensure that clients are receiving the long-term support that they are seeking or need. Some institutions have attempted to address this concern by providing group counseling sessions. However, as many students have reported concerns of stigma as a barrier to seeking mental health support [14,25], they may avoid group sessions for fear of being recognized by group members. Additionally, lack of time is a notable barrier to mental health service use [26,27], and group therapy sessions may be too time consuming for students to commit to.

Mindfulness and Mobile Health (mHealth) Apps as Alternative Resources

Taking into account the barriers to current university mental health support services, an appeal can be made for an alternative, more accessible service. Mobile health (mHealth) phone apps may be a service that addresses these barriers. Mobile phones are integrated into the daily lives of nearly all university community members and can be discreetly used by students to participate in app-based mental health support programs. Individuals seeking help can receive online assistance in a nonthreatening manner that is also feasible and capable of reaching a wide number of people [28]. mHealth apps have high acceptability among users [29,30] and are reportedly more comfortable to use in public compared with other intervention formats such as when used to track diet for weight loss purposes [29]. They are publicly accessible, can be used at a person’s discretion, and hold promise of appealing to people who are not clinically diagnosed with a mental illness but nonetheless have concerns. mHealth apps may also cater to people’s desire to manage their problems on their own [14,28] by providing a support that can be independently used. Additionally, mHealth apps are user friendly: they are easy to use, have minimal time commitment [29], and can be used at any time that is convenient or necessary. These features can help ensure user anonymity and accessibility, making mHealth apps an appealing mental health support alternative.

Apps provide an inconspicuous and convenient mode of delivery for health interventions and may assist in improving accessibility of evidence-based monitoring and self-help [30]. A systematic review of 24 studies concerning the behavioral functionality of mHealth apps found that apps may provide a feasible delivery method for health interventions and have shown potential to bring about behavioral changes such as increasing physical activity and reducing alcohol consumption [29]. However, a systematic review of studies concerning mHealth app efficacy conducted by Donker et al [30] found that although over 3000 mental health apps were available at the time of the study, only 8 studies within the systematic review were identified as evidence-based, and only one study utilized a university sample. The latter study evaluated the effectiveness of an Italian mHealth app called Mobile Stress Management at decreasing anxiety and improving coping skills in female university students in comparison to a control condition. This study, as described in the Methods section, advances the literature by including nonfemale participants, using an English mHealth app, and evaluating additional outcomes. Donker et al [30] also found measurements regarding sleep disturbances and anxiety disorders to be particularly lacking in the literature. Similarly, Grist et al [31] argue that there is an insufficient amount of evidence to support the effectiveness of mHealth apps supporting the mental health of adolescents and youth. Both Payne et al [29] and Donker et al [30] identified small sample sizes as a limitation in the current literature on mHealth apps in their systematic reviews. A total of 227 participants were recruited across the 8 studies reviewed by Donker et al, and 17 of the 24 studies reviewed by Payne et al had sample sizes under 100.

Preliminary studies of nonapp-based interventions have shown mindfulness to be a promising tool for helping university students manage their stress and anxiety. Mindfulness is described as the focusing of attention on the present moment, including an awareness of the body and thoughts that is accomplished without judgment [32]. For example, one study involving first-year undergraduate students found that adapted mindfulness-based stress reduction (MBSR) interventions may improve students’ physiological and psychological well-being [33]. The adapted MBSR techniques included reading assignments, discussion, meditation, and yoga and were performed 2 hours a week for 8 weeks. Students demonstrated enhanced personal-emotional adjustment and reduced physiological stress. Similarly, researchers at the University of Northampton found that students who participated in an 8-week mindfulness-based program demonstrated significant decreases in perceived stress, anxiety, and depression as compared with a wait-list control group [34]. These findings were supported by a study in 2014 involving 458 university students that found mindfulness to be associated with improved mental health (eg, reduction in symptoms of depression, anxiety, hostility, and paranoia) [35]. Mindfulness-based interventions are increasing in public interest, and these preliminary findings suggest it may be a promising way of helping university students improve their mental health.

Current Gap in the Literature

Recent studies have shown app-based supports and mindfulness to be promising ways of addressing mental health concerns, yet there is a lack of research regarding mindfulness-based techniques delivered through apps. This gap is surprising considering the sheer number of mindfulness-based mHealth apps available for purchase. In a recent study evaluating the feasibility of an Internet-based mindfulness training program among university students in Sweden, researchers found that users generally enjoyed the program and its flexibility regarding time and location of use, yet found no significant intervention effect on psychological well-being or depression symptoms when compared with an Internet-based “expressive writing” program [36]. It is important to note that the mindfulness training was extensive, with participants encouraged to practice 30 to 45 min a day for 6 or 7 days a week. Evaluation of mindfulness-based apps that are less time-intensive than the typical length of in-person mindfulness-based training is needed.

Research Objective and Hypotheses

This study addressed this gap in the literature by evaluating the efficacy of a commercially available mindfulness-based app, called DeStressify by Stress Refuge Inc (hereafter “DeStressify”), on stress, anxiety, and depressive symptomology within a university population. The cofounder and chairman of Stress Refuge, Inc claimed DeStressify was initially developed for teachers, organizations, and the general public. It was launched in 2014, and a study conducted by DeStressify staff showed a 20% reduction in self-reported stress levels after 4 weeks of app use within a sample of teachers (Ulco Visser, email communication, September 14, 2017). However, no previously published research evaluating DeStressify has been found by the authors of this study.

The app contains a core plan that delivers mindfulness-based exercises through audio, video, or text files that require between 3 to 23 min to complete. Example titles of these exercises include grounding visualization, gratitude, imagining the life you want, and finding meaning. There are free and purchasable “pro” versions of the app available for download. The core plan is available on both the free and pro version of DeStressify, with the pro version offering additional features including “my friends,” “nutrition,” and “shop” options. It was hypothesized that students who use DeStressify would report significantly lower stress, anxiety, and depressive symptomatology as compared with a wait-list control sample post intervention. Secondary outcomes relevant for this specific study population included sleep behavior, work or class absenteeism, work or school productivity, and quality of life (QoL). It was hypothesized that compared with matched wait-list control participants, the participants using DeStressify would report significantly greater sleep quality, school or work productivity and QoL and significantly less class or work absenteeism at postintervention.


Design

In the systematic review of mHealth app studies by Donker et al [30], recruitment within each study ranged from 8 to 117 participants. For this pilot, exploratory study, sample size was calculated through a power analysis using G*Power statistical power analysis software [37]. Effect size estimate calculations were made with the Cohen Perceived Stress Scale (PSS) values from a study by Chang et al [38] in which university students participated in an 8-week MBSR intervention that included group meditation sessions and home practice. The alpha level was set at .05, power at 0.80, and effect size at 0.37. The power analysis indicated a sample size of 61 would be required to test the hypotheses. It is important to note that the intervention used by Chang et al [38] included 20 hours of in-class mindfulness practice and 36 hours of assigned home practice. In consideration of the power analysis, the intervention design differences between this study and that of Chang et al [38], the sample size range in studies reviewed by Donker et al [30], and an anticipated dropout of some participants from pre- to postintervention, a recruitment goal of 200 participants was used.
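
For readers without access to G*Power, the sample size calculation described above can be approximated in Python. This is a minimal sketch under the assumption that the effect size of 0.37 refers to Cohen's f (the effect size family G*Power uses for F tests); with that assumption, the estimate lands close to the reported total of 61.

```python
# Sketch of the a priori power analysis described above, assuming the effect size
# of 0.37 is Cohen's f for a two-group F test (alpha = .05, power = .80).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.37, alpha=0.05, power=0.80, k_groups=2
)
print(f"Required total sample size: {n_total:.1f}")  # approximately 60, close to the reported 61
```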

Procedure

Participants were recruited through poster advertisements, in-class announcements, and emails to administrative assistants of various faculties across the University of British Columbia (UBC) Okanagan campus. Individuals interested in participating in the study emailed the research assistant and received a link to the Web-based eligibility survey, consent form, and baseline survey. Eligibility criteria included (1) enrollment in full course load during the winter term at the UBC Okanagan campus in an undergraduate program, (2) ownership of a smartphone, (3) regular access to the Internet, and (4) fluent comprehension of the English language. Participants indicated consent by clicking an “I consent” button after reading an information page regarding the study. Following the completion of the baseline survey, participants were randomized into either an experimental or wait-list control condition using a computer-generated random numbers table. Random numbers were generated in batches of 50 with equal counts for both treatment conditions (ie, 25 total for each). Individuals in the experimental condition were provided the pro version of DeStressify and were asked to not engage with other features of the app during the course of the study. According to the app’s website, it takes about 1 month to complete the core plan if practicing 3 days a week. To help ensure that participants were completing the core plan, and in recognizing that users may not engage with the app as frequently as recommended, individuals in the experimental condition were instructed to use the app’s core plan 5 days a week for 4 weeks. Participants could set reminders to use the app through the app itself, and an email reminding participants that they were to receive a follow-up email after 4 weeks of app use was sent to participants in the experimental condition halfway through the intervention. Individuals in the control condition were given no treatment and no intervention material until after the postintervention survey was completed, at which time they were provided the app and similar guidelines for use as the experimental condition. The follow-up period was 4 weeks post baseline, at which point all participants were sent a second Web-based questionnaire. All participants who completed both surveys received an electronic Can $25 Amazon gift card. Data were collected and stored on secure systems and accessed through computers with password protection and encryption. This study was approved by the institutional review board.
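
The batch randomization described above can be illustrated with a short sketch: allocations are generated in balanced blocks of 50 (25 per condition) and handed out in order of enrollment. The function names and participant identifiers below are illustrative, not taken from the study materials.

```python
# Sketch of blocked randomization: batches of 50 random allocations with equal counts
# (25) for the experimental and wait-list control conditions. Names are illustrative.
import random

def generate_block(block_size=50, conditions=("experimental", "control")):
    """Return a shuffled block containing each condition an equal number of times."""
    per_condition = block_size // len(conditions)
    block = [c for c in conditions for _ in range(per_condition)]
    random.shuffle(block)
    return block

def allocate(participant_ids, block_size=50):
    """Assign conditions to participants in enrollment order using consecutive blocks."""
    allocations, block = {}, []
    for pid in participant_ids:
        if not block:                      # start a new batch of 50 when needed
            block = generate_block(block_size)
        allocations[pid] = block.pop()
    return allocations

print(allocate([f"participant_{i:03d}" for i in range(1, 11)]))
```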

Measures

All participants completed a baseline survey composed of questions regarding demographic characteristics and 6 validated self-reported measures of stress, anxiety, depressive symptomatology, sleep quality, QoL, and work productivity. Each measure was presented on a separate page in the survey, and participants were able to use a “back” button to review and change their answers before submitting their completed surveys. Participant responses were identified by their email addresses.

Demographic measurements of sex, age, income, ethnic origin, educational background, and university program of enrollment were included at the baseline assessment. Participants were also asked to identify any mental health disorder diagnoses they had, whether they were using mental health services, and if so, for how long.

Perceived stress was measured using the PSS, which contained 10 items requiring respondents to indicate how often they felt or thought a certain way over the past month [39]. Scores could range from 0 to 40 with a mean score of 14.2 for people in the age range of 18 to 29 years and 12.1 and 13.7 for males and females, respectively [39]. The PSS has shown validity and reliability within samples of college students [40].
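
As a rough illustration of how PSS-10 totals in the 0 to 40 range are produced, the sketch below sums ten items rated 0 to 4 and reverse-scores the positively worded items. The convention of reversing items 4, 5, 7, and 8 follows the commonly published PSS-10 key and is an assumption here rather than something stated in this article.

```python
# Sketch of PSS-10 scoring: 10 items rated 0-4, positively worded items reverse-scored,
# totals ranging 0-40. The reversed items (4, 5, 7, 8) follow the commonly published key
# and are an assumption, not taken from this article.
REVERSED_ITEMS = {4, 5, 7, 8}

def score_pss(responses):
    """responses: dict mapping item number (1-10) to a 0-4 rating."""
    total = 0
    for item, rating in responses.items():
        if not 0 <= rating <= 4:
            raise ValueError(f"Item {item} rating {rating} is outside 0-4")
        total += (4 - rating) if item in REVERSED_ITEMS else rating
    return total  # 0 (lowest perceived stress) to 40 (highest)

print(score_pss({i: 2 for i in range(1, 11)}))  # neutral responses -> 20
```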

Anxiety was measured using the State-Trait Anxiety Inventory for adults, which contained 40 items in two subscales: state anxiety and trait anxiety [41]. Each subscale included 20 statements that people may use to describe how they feel. Participants were asked to indicate how accurately each statement described them presently for state anxiety and in general for trait anxiety. Responses were scored to yield a collective score that could vary between 20 and 80 for each subscale. Both subscales have shown reliability, validity, and internal consistency within samples of high school and college students [41].

Symptoms of depression were measured using The Quick Inventory of Depressive Symptomatology Self-Report (QIDS-SR), with ratings made in consideration of the past 7 days [42]. It contained 16 items that map onto the nine symptom criterion domains associated with the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition for major depressive disorder (MDD): low mood, concentration, self-criticism, suicidal ideation, loss of interest in activities, energy or fatigue, sleep disturbance, changes in appetite or weight, and psychomotor agitation or retardation. Scores, which can range from 0 to 27, can be divided into 5 categories associated with different classifications of depression severity: none, mild, moderate, severe, and very severe, with a five-point change in score associated with a change in classification [43]. The QIDS-SR has shown validity and reliability within a sample of adults with chronic, nonpsychotic MDD [42].
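
The severity bands mentioned above (five-point steps from none to very severe) can be written as a small lookup. The cut points shown follow the commonly cited QIDS-SR scoring guide [43] and are included here only as an illustration.

```python
# Sketch of the QIDS-SR severity classification described above: total scores 0-27,
# divided into 5 bands in five-point steps (cut points per the commonly cited guide [43]).
def qids_severity(total_score):
    if not 0 <= total_score <= 27:
        raise ValueError("QIDS-SR total must be between 0 and 27")
    if total_score <= 5:
        return "none"
    if total_score <= 10:
        return "mild"
    if total_score <= 15:
        return "moderate"
    if total_score <= 20:
        return "severe"
    return "very severe"

print(qids_severity(8))  # -> "mild", consistent with the baseline means near 8 in Table 2
```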

Sleep quality was measured using the Pittsburgh Sleep Quality Index (PSQI), which asked participants to complete the measure in consideration of their usual sleep habits over the past month [44]. The measure contained 19 questions that were scored and combined to form 7 component scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. These component scores were added to form a global PSQI score. A global score greater than 5 suggests that the respondent may have difficulties in 2 or more components [44]. An additional 5 questions were included for respondents with bed partners or roommates, although these questions did not contribute to score calculations. The PSQI has shown reliability and validity among “good” and “poor” sleepers with and without sleep-related disorders [44].

Health-related QoL was measured using the RAND 36-Item Health Survey, which included 36 items to address 8 health concepts: physical functioning (eg, ability to perform any physical activity such as bathing and eating), bodily pain, physical health problems that limit ability to perform a specific role (eg, work and daily activities), personal or emotional problems that limit ability to perform a specific role, emotional well-being, social functioning, energy or fatigue, and general health perceptions (ie, beliefs regarding overall health) [45,46]. Responses to all items were scored out of 100, and a score for each of the 8 health concepts was calculated by averaging a collection of item scores. These scores represent percentages, where “a higher score defines a more favorable health state” [47]. The RAND 36-Item Health Survey is a popular measure of QoL and has shown acceptable levels of reliability, validity, and internal consistency [48].
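
The scoring logic described above (recode every item to a 0 to 100 scale, then average the items belonging to each health concept) can be sketched generically. The item ranges and groupings in the example are placeholders, not the actual RAND-36 scoring key.

```python
# Generic sketch of RAND 36-Item Health Survey scoring as described above: each item is
# recoded to 0-100 (higher = more favorable health state), and each subscale score is the
# average of its items. The example items below are placeholders, NOT the RAND-36 key.
def recode_to_0_100(raw, min_raw, max_raw, higher_is_better=True):
    """Linearly rescale a raw item response to the 0-100 range, flipping direction if needed."""
    pct = (raw - min_raw) / (max_raw - min_raw) * 100
    return pct if higher_is_better else 100 - pct

def subscale_score(recoded_items):
    """Average the already-recoded (0-100) item scores for one health concept."""
    return sum(recoded_items) / len(recoded_items)

# Hypothetical subscale made of three 1-5 items.
items = [recode_to_0_100(r, 1, 5) for r in (4, 5, 3)]
print(round(subscale_score(items), 1))  # -> 75.0
```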

Work productivity was measured using the Work Productivity and Activity Impairment Questionnaire: General Health V2.0 (WPAI) [49]. Respondents completed 2 to 6 items in consideration of “the effect of (their) health problems on (their) ability to work and perform regular activities” [49], where health problems were defined as “any physical or emotional problem or symptom” [49]. Responses were scored and 4 subscales, expressed in percentages, were calculated: absenteeism, “presenteeism” [49], work productivity loss, and activity impairment. Higher scores indicated greater impairment and productivity loss. The WPAI has shown reliability and validity within a sample of working individuals [49].
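
For orientation, the four WPAI:GH subscores mentioned above are percentages derived from hours of work missed, hours worked, and two 0 to 10 impairment ratings. The formulas in the sketch follow the commonly published WPAI:GH scoring rules [49] and are included for illustration; they should be checked against the instrument documentation before reuse.

```python
# Sketch of WPAI:GH subscore computation (commonly published rules [49]; illustration only).
# Inputs: hours of work missed because of health, hours actually worked, and two 0-10
# ratings of impairment while working and impairment of regular daily activities.
def wpai_scores(hours_missed, hours_worked, work_impairment_0_10, activity_impairment_0_10):
    scores = {"activity_impairment": activity_impairment_0_10 / 10 * 100}
    total_work_hours = hours_missed + hours_worked
    if total_work_hours > 0:  # work-related subscores apply only to respondents with work
        absenteeism = hours_missed / total_work_hours
        presenteeism = work_impairment_0_10 / 10
        scores["absenteeism"] = absenteeism * 100
        scores["presenteeism"] = presenteeism * 100
        scores["overall_work_impairment"] = (absenteeism + (1 - absenteeism) * presenteeism) * 100
    return scores  # all values are percentages; higher means greater impairment

print(wpai_scores(hours_missed=4, hours_worked=36, work_impairment_0_10=3, activity_impairment_0_10=2))
```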

Two additional questions were included in the follow-up surveys for both the experimental and control conditions. For these questions, participants identified whether they believed their sleeping and work or school productivity improved, worsened, or stayed the same since baseline measurements were taken.

App use questions were included in the follow-up survey for the experimental condition. Participants were asked how frequently they used the app in comparison with what was requested at the beginning of the study and their pattern of app use over the 4 weeks. App use frequency was measured using a 10-point scale ranging from “did not use at all” to “used as often as requested.” Response options to describe patterns of app use were increased; increased then decreased; consistent; decreased then increased; and decreased, each accompanied by a graphic representation (see Figure 1). All app use data were self-reported.

Analytic Plan

Data were analyzed using Statistical Package for the Social Sciences (SPSS) version 23 (IBM Corp). Age distribution of the treatment conditions was compared using the Mann-Whitney U test. Distributions of sex, mental health disorder diagnoses, and mental health service use within treatment conditions were compared using chi-square tests. Distribution of program enrollment within treatment conditions was compared using the Fisher exact test. Ethnicity distributions were compared using either the chi-square test or the Fisher exact test, depending on whether or not the assumptions of the chi-square test were met within each ethnicity category. Changes in pre- and postintervention scores between treatment conditions for measurements of stress, depression, state and trait anxiety, sleep quality, QoL subscales, and work productivity subscales were assessed using analysis of covariance (ANCOVA). For all ANCOVAs, postintervention scores were treated as the dependent variable and baseline scores as the covariate. A multivariate analysis of covariance (MANCOVA) was also conducted in which the baseline scores for both state and trait anxiety were assigned as covariates, and the dependent variables were the postintervention state and trait anxiety scores. A chi-square test was conducted to identify differences in perceived work productivity. An alpha level of .05 was used in all statistical tests of significance, and effect size was determined using partial eta squared values. In alignment with suggestions by Cohen, partial eta squared values of .0099, .0588, and .1379 were used to correspond to small, medium, and large effect sizes, respectively [50]. Normality was tested using the Kolmogorov-Smirnov test. Raw scores that were not normally distributed were transformed through square root calculations to produce normality [51]. Univariate outliers were identified as having z-scores with magnitude greater than 3.29 (P<.001). Multivariate outliers were identified as having Mahalanobis distance values greater than χ²(2)=13.8 when analyzing anxiety scores and χ²(8)=26.1 when analyzing QoL scores, P<.001 [51]. When outliers were present, analyses were run with and without outliers.
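
A minimal sketch of the core ANCOVA and univariate outlier screening described above is given below, using pandas, SciPy, and statsmodels in place of SPSS. The column names ("pre", "post", "condition") and the simulated data are illustrative only.

```python
# Sketch of the analytic plan above in Python rather than SPSS. For each outcome, the
# postintervention score is the dependent variable, the baseline score the covariate, and
# condition the factor; univariate outliers are flagged at |z| > 3.29 (P < .001).
# Multivariate screening with Mahalanobis distance is omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": np.repeat(["control", "experimental"], 80),
    "pre": rng.normal(47, 11, 160),                     # simulated baseline scores
})
df["post"] = 0.7 * df["pre"] + np.where(df["condition"] == "experimental", -3, 0) + rng.normal(0, 5, 160)

# Flag and drop univariate outliers on the outcome.
df_clean = df[np.abs(stats.zscore(df["post"])) <= 3.29]

# ANCOVA: postintervention score adjusted for baseline score, compared between conditions.
model = smf.ols("post ~ pre + C(condition)", data=df_clean).fit()
print(anova_lm(model, typ=2))  # F and P value for the condition effect, controlling for baseline
```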

Figure 1. Graphic representations of app use trends provided in experimental condition postintervention survey.

Demographics

Responses from 206 students at UBC Okanagan were collected at baseline, with 104 randomized into the control condition and 102 into the experimental condition (see Figure 2). Of the 206 participants, 43 were excluded from analysis because of failure to complete postsurveys (n=41), having a phone that did not support the app (n=1), or a family emergency (n=1). This resulted in 163 responses being used in analysis (86 control, 77 experimental). There were no differences in age, sex, ethnicity, program enrollment, mental health diagnosis percentage, or mental health service use between conditions, P>.05 (see Table 1). The percentage of participants in both conditions self-reporting mental health diagnoses is noteworthy, although a chi-square test confirmed that the number of people reporting such diagnoses was not statistically different between conditions, P=.18. There was also no difference in the percentage of participants in each condition who utilized health care services, P=.80. Human kinetics, general arts and sciences, and nursing were the three most common programs of enrollment in both treatment conditions.

Figure 2. Consolidated Standards of Reporting Trials (CONSORT) flow diagram (template obtained from consort-statement.org).
Table 1. Demographics of treatment conditions.
Characteristic | Control (n=86) | Experimental (n=77)
Age (years), range | 16-47 | 18-27
Age (years), average | 20.9 | 20.3
Sex, female, n (%) | 58 (67) | 45 (58)
Ethnicity (3 most predominant listed), n (%)
    White | 61 (71) | 50 (65)
    Chinese | 9 (11) | 12 (16)
    South Asian | 5 (6) | 9 (12)
Program enrollment, n (%)
    Human kinetics | 22 (26) | 12 (16)
    First year arts and sciences | 12 (14) | 14 (18)
    Nursing | 11 (13) | 11 (14)
Mental health diagnosisa, yes, n (%) | 12 (14) | 17 (22)
    Bipolar, n | 1 | 0
    Depression, n | 4 | 5
    Anxiety, n | 4 | 6
    Other, n | 3 (obsessive-compulsive disorder and dermotilamania; depression, anxiety, and mania; attention-deficit hyperactivity disorder [ADHD]) | 11 (adjustment disorder, anxiety, and depression; 3 ADHD; attention-deficit disorder; depression, anxiety, and ADHD; post-traumatic stress disorder [PTSD]; binge eating; depression and anxiety; bipolar, depression, and anxiety; depression, anxiety, and PTSD)
Mental health service use, yes, n (%) | 10 (12) | 10 (13)

aDiagnoses taken verbatim from participant responses.

Stress

Perceived stress scores within the experimental condition decreased in value, whereas control condition scores slightly increased in value from baseline to postintervention (see Table 2). However, differences in scores between conditions at postintervention did not reach statistical significance, F(1,160)=3.54, P=.06, ηp²=.02. Reliability of the PSS and other validated surveys, as determined by Cronbach alpha, is provided in Table 3.

Depression

One participant was excluded from analysis of the QIDS-SR questionnaire as baseline responses were not provided for this measure by the participant. Raw scores were transformed for normality. Mean values for the QIDS-SR scores in Table 2 were calculated using raw data. Postintervention transformed QIDS-SR scores for the experimental condition showed no significant difference from the control condition, F(1,159)=3.01, P=.09, ηp²=.02, when controlling for baseline scores. Tests were reconducted using nontransformed data, and results were similar, F(1,159)=3.54, P=.06, ηp²=.02.

Anxiety

One multivariate outlier was identified in the control condition. No univariate outliers were observed in either trait or state anxiety scores. Overall, omnibus F tests of the MANCOVA were significant when the outlier was included, F(1,160)=4.25, P=.02, ηp²=.05, and excluded, F(1,159)=4.13, P=.02, ηp²=.05. ANCOVA results demonstrate that individuals in the experimental condition reported less trait anxiety, F(1,160)=8.23, P=.01, ηp²=.049, than individuals in the control condition after 4 weeks of using the DeStressify app. State anxiety scores did not significantly differ between conditions at postintervention, F(1,160)=1.93, P=.17, ηp²=.01.

Table 2. Mean values of measures for control (n=86) and experimental (n=77) treatment conditions at baseline and 4 weeks post intervention, excluding outliers. Standard deviations are included in parentheses.
Dependent variable | Pre: Control, mean (SDa) | Pre: Experimental, mean (SD) | Post: Control, mean (SD) | Post: Experimental, mean (SD)
State-Trait Anxiety Inventory (STAI) stateb | 44.7 (13.0) | 43.0 (12.0) | 43.4 (13.2) | 40.1 (12.1)
STAI traitc | 47.6 (11.1) | 47.4 (10.6) | 47.5 (10.8) | 44.5 (9.4)
QIDS-SRd | 8.1 (4.5) | 8.4 (4.3) | 7.4 (4.7) | 6.4 (3.9)
Physfuncte,f | 93.8 (7.5) | 92.1 (9.8) | 93.3 (10.1) | 92.1 (10.6)
Physlime | 78.2 (32.3) | 79.2 (33.3) | 77.0 (33.7) | 83.8 (30.6)
Emolime | 50.4 (42.4) | 52.4 (40.3) | 49.2 (41.8) | 58.4 (42.6)
Energye | 45.1 (17.8) | 46.2 (20.1) | 41.1 (18.9) | 48.9 (19.4)
Emowelle | 60.9 (18.6) | 61.1 (20.0) | 58.7 (20.1) | 66.0 (17.2)
Socialfuncte | 72.2 (22.4) | 74.2 (24.4) | 71.4 (24.5) | 77.0 (20.2)
Paine,g | 78.3 (19.2) | 83.5 (19.0) | 78.0 (18.6) | 84.3 (18.0)
Genhealthe | 64.7 (20.5) | 63.3 (19.5) | 61.7 (20.4) | 67.5 (17.4)
PSSh | 19.6 (7.7) | 18.6 (6.8) | 19.8 (6.7) | 17.8 (6.2)
WPAIi missedtimej,k | 4.2 (13.6) | 2.1 (3.5) | 0.1 (0.2) | 0.1 (0.1)
WPAI impairedtimej,k | 16.6 (21.9) | 11.0 (21.7) | 20.7 (26.0) | 14.3 (23.6)
WPAI overallworkimpairj,k | 18.1 (25.4) | 11.6 (23.4) | 28.1 (32.1) | 16.6 (27.4)
WPAI activimpairj | 25.7 (26.5) | 22.9 (25.0) | 24.0 (26.7) | 18.1 (23.5)
PSQIl | 7.0 (2.8) | 6.9 (3.1) | 7.0 (3.7) | 6.2 (3.1)

aSD: standard deviation.

bSTAI state: State-Trait Anxiety Inventory for Adults—state anxiety.

cSTAI trait: State-Trait Anxiety Inventory for Adults—trait anxiety.

dQIDS-SR: Quick Inventory of Depressive Symptomatology Self-Report.

ePhysfunct, Physlim, Emolim, Energy, Emowell, Socialfunct, Pain, Genhealth: RAND 36-Item Health Survey—Physical functioning subscale, role limitations because of physical health subscale, role limitations because of emotional health subscale, energy or fatigue subscale, emotional well-being subscale, social functioning subscale, pain subscale, and general health subscale.

fPhysfunct scores were calculated using n=156 (nexp=75, ncon=81).

gPain scores were calculated using n=162 (nexp=76, ncon=86).

hPSS: Perceived Stress Scale.

iWPAI: Work Productivity and Activity Impairment Questionnaire: General Health V2.0.

jWPAI missedtime, impairedtime, overallworkimpair, activimpair: Percent work missed because of health, percent impairment while working because of health, percent overall work impairment because of health, and percent activity impairment because of health.

kWPAI impairedtime and overallworkimpair scores were calculated using n=50 (nexp=21, ncon=29).

lPSQI: Pittsburgh Sleep Quality Index.

Table 3. Cronbach alpha values for all validated measures using postintervention data (ncon=86, nexp=77).
Survey component | Cronbach alpha
PSSa | .86
QIDS-SRb | .80
State-Trait Anxiety Inventory (STAI) state | .95
STAI trait | .90
PSQIc | .72
Physfunct | .88
Physlim | .83
Emolim | .83
Energy | .77
Emowell | .82
Socialfunct | .83
Pain | .83
Genhealth | .78

aPSS: Perceived Stress Scale.

bQIDS-SR: Quick Inventory of Depressive Symptomatology Self-Report.

cPSQI: Pittsburgh Sleep Quality Index.

Sleep Quality

With regard to the baseline scores, there was 1 outlier in each treatment condition, and both were removed from the sleep quality analysis. Raw scores were transformed for normality. Transformed values were more normally distributed, although both the raw and the transformed scores were significant when tested for normality. Nonetheless, analyses of variance (ANOVAs) were conducted as they are robust given the data’s traits [51]. There were no significant differences between treatment conditions in the postintervention scores for either raw, F(1,158)=2.51, P=.12, ηp²=.02, or transformed scores, F(1,158)=1.89, P=.17, ηp²=.01, when outliers were excluded. Results were similar when outliers were included (raw: F(1,160)=2.58, P=.11, ηp²=.016; transformed: F(1,160)=1.91, P=.17, ηp²=.01).

Quality of Life

Distributions of all subscores for the RAND 36-Item Health Survey were non-normally distributed with the exception of energy or fatigue at baseline for the experimental condition, energy or fatigue at follow-up for both treatment conditions, and general health at follow-up for the control condition. Additionally, the assumption of homogeneity of covariance matrices was not met. However, ANOVA is robust to violations of normality for this data when outliers are excluded [51], and MANOVAs are robust to heterogeneity of covariance matrices when sample sizes are equal [50,51].

Researchers identified one univariate outlier from the pain subscale, seven univariate outliers from the physical functioning subscale, and six multivariate outliers. MANCOVA was conducted with outliers, and a significant interaction effect was found between condition assignment and time, F(1,160)=2.06, P=.04, ηp²=.10, warranting examination of individual QoL subscales, as discussed below. Eight outliers were removed from analysis for a second MANCOVA, resulting in the test being conducted on a sample size of 81 control condition participants and 74 experimental condition participants. Trends were similar in that mean subscale values decreased within the control condition from baseline to postintervention and increased within the experimental condition (see Table 2), although these trends were not found to be significant, F(1,152)=1.86, P=.07, ηp²=.10.

Vincent [52] warns that significance of certain dependent variables may be masked by nonsignificant variables in MANOVAs and therefore recommends ANOVA with the Bonferroni adjustment for assessing specific variables of interest. Thus, ANCOVAs were conducted on all subscales of the RAND 36-Item Health Survey. The general health subscale was shown to significantly differ in postintervention scores between treatment conditions, F(1,160)=12.44, P=.001, ηp²=.07, such that scores decreased in the control condition and increased in the experimental condition as illustrated in Table 2. A significant difference was also found between treatment conditions with regard to postintervention energy or fatigue subscale scores, F(1,160)=8.19, P=.01, ηp²=.05, with similar trends as the general health subscale. The results of the emotional well-being subscale did not meet the assumption of homogeneity of regression slopes for ANCOVA and were therefore analyzed using repeated measures ANOVA.

Table 4. Participant count for responses regarding changes in perceived work or school productivity over 4 weeks between baseline and postintervention measurements. Expected count is provided in parentheses.
Treatment | Control | Experimental
I think I was MORE productive | 14 (22.2) | 28 (19.8)
I think I was LESS productive | 32 (26.4) | 18 (23.6)
I think my productivity stayed about the same | 40 (37.5) | 31 (33.5)
Figure 3. Response counts from participants in the experimental condition when asked to identify how often they used the app in comparison with the frequency of use requested.

There was no main effect of time, F(1,161)=1.21, P=.27, ηp²=.01, or condition assignment, F(1,161)=1.89, P=.17, ηp²=.01; however, there was an interaction effect, F(1,161)=8.13, P=.01, ηp²=.05, such that scores decreased over time for the control condition and increased over time for the experimental condition (see Table 2). All other tests were nonsignificant.

Work Productivity

Of the 163 participants who completed the follow-up survey, 29 people from the control condition and 21 from the experimental condition (n=50) reported having work at both baseline and postintervention and therefore completed all components of the WPAI. A subscore was calculated using these work-related data, labeled “percent overall work impairment due to health” [49]. No significant difference was found between treatment conditions, F(1,47)=1.10, P=.30, ηp²=.02.

All 163 participants completed the question, “During the past seven days, how much did your health problems affect your ability to do your regular daily activities, other than work at a job?” [49] This question was used to calculate percent activity impairment because of health. No significant difference was found between treatment conditions, F(1,160)=1.72, P=.19, ηp²=.01.

When participants were asked to choose a description that best described how their work or school productivity had changed over the past 4 weeks, there was an association between treatment condition and responses, P=.01. More participants than expected by chance in the experimental condition reported an improvement in their productivity. Conversely, fewer participants than expected by chance in the control condition reported improved productivity. These results (Table 4) suggest that those in the experimental condition reported being more productive than those in the control condition.
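
Using the observed counts in Table 4, the association between condition and perceived productivity change can be reproduced with a standard chi-square test of independence. The sketch below uses SciPy rather than the SPSS procedure used in the study.

```python
# Sketch reproducing the chi-square test on the Table 4 counts (SPSS was used in the study).
# Rows: control, experimental; columns: more productive, less productive, about the same.
from scipy.stats import chi2_contingency

observed = [
    [14, 32, 40],   # control
    [28, 18, 31],   # experimental
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2(df={dof}) = {chi2:.2f}, P = {p:.3f}")  # P is approximately .01, as reported
print(expected)                                      # matches the expected counts in Table 4
```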

Patterns of App Use and Self-Reported Adherence

One participant did not report app use trends and was thus excluded from the data regarding response frequencies of participants in the experimental condition (n=76) for self-reported trends in app use. The most frequently reported patterns of app use were consistent use (n=23) and decrease in use (n=23). The most infrequently reported app use trend was increase in use (n=4). The remaining responses of increase then decrease in use and decrease then increase in use had 14 and 11 responses, respectively.

When adherence was self-reported using a scale from 0 to 10 by participants in the experimental condition, the mean rating was 6.36 (standard deviation 2.79) with a median of 7. The most frequently reported adherence rating was 8. Counts for each response option are provided in Figure 3.


Principal Findings

It was hypothesized that students who use DeStressify would report significantly lower stress, anxiety, depressive symptomatology, and class or work absenteeism and significantly greater sleep quality, school or work productivity, and QoL as compared with a wait-list control sample post intervention. The results support hypotheses that short-term use of DeStressify can reduce trait anxiety and improve general health, energy, emotional well-being, and work or school productivity in university students but do not support the other hypotheses. These findings are somewhat consistent with related literature, which has shown that MBSR techniques delivered in person and through one- to two-hour-long sessions improve physiological and psychological well-being in university students [33-35]. Inconsistencies may be attributed to the difference in intervention delivery methods (ie, through a mobile phone rather than in person), shorter sessions, and lack of experimental control over app use frequency. Considering the study’s design, these findings are encouraging as universities are in need of accessible alternative mental health management tools and services for students but should be considered cautiously given the small effect sizes.

The sample population of this study is representative of a large number of university students in Canada. Many universities are primarily composed of full-time undergraduate students, with a greater proportion of females attending in comparison with males. Specifically, 95% of first-year undergraduate students at Canadian universities are attending full-time, with 66% being female [53]; participants in this study were of similar demographics (enrolled full-time, 63.1% [103/163] female). Additionally, UBC is a Western Canadian university with approximately 8000 students registered in undergraduate programs on the Okanagan campus [54]. This size is comparable to that of small- to medium-sized Canadian universities.

The participant dropout rate from pre- to postintervention was 19.9% (41/206). This rate is comparable with the dropout rates of other studies using Web-based programs to support mental health [55]. One healthy lifestyle Web-based intervention with an adult population had an attrition rate over 75% at 1 month post intervention [56]. Apps, in particular, lose over 75% of daily active users 3 days after download [57]. The majority of dropouts were considered lost to follow-up. Two participants were an exception to this: one participant dropped out at the beginning of the study when they identified that the app did not work on their mobile device. A second participant identified their desire to drop out for personal reasons after they were invited to complete the follow-up survey.

Previous studies have found that students reportedly avoid mental health services such as counseling and medication because of fear of stigmatization, lack of time, and cost. Apps are an appealing platform for mental health support for university students because they avoid many of the barriers associated with other forms of support, including those mentioned above, although until recently they have lacked evidence regarding effectiveness. App-based supports such as DeStressify can help university students avoid the stigma associated with mental health, as a large majority of students possess mobile phones and frequently engage with them, making app use a more discreet form of mental health maintenance. Additionally, mHealth apps do not generally require a large amount of time to use; the practices provided in DeStressify are approximately 10 min in length, much shorter than a standard counseling session. Participants in the experimental condition of this study were instructed to use the app 5 days a week, with no specifications regarding when it should be used. This allowed for greater flexibility in scheduling and thus, greater convenience, while still resulting in changes to trait anxiety, certain QoL components, and work or school productivity. In addition, apps are often inexpensive. The DeStressify app that was provided to participants is publicly available for Can $8.49 at the Apple iTunes store [58] and Can $8.23 at the Google Play store [59].

The changes in stress, anxiety, and related traits after short-term use of DeStressify are encouraging, yet, the effects of long-term use remain unknown. The most frequently reported patterns of app use for participants in the experimental condition were “consistent” and “decrease in use.” Additionally, the average self-reported adherence rate in comparison to the requested amount was 64% (6.4/10), with 80% (8/10) being the most commonly reported adherence rate. Considering these self-reported adherence patterns, rates, and the magnitude of the changes in measured traits among DeStressify users, it would be interesting to determine whether the improvements observed in the experimental group of this study would persist with prolonged use of DeStressify.

Limitations

This study is among the first to provide empirical evidence regarding the effectiveness of a mindfulness-based mHealth app on stress, anxiety, depression, and related symptomatology within university students. In recognizing the novelty of this study, areas for future development should also be addressed. Although this study’s objectives did not necessitate the use of a mindfulness measure, future studies should include one as it would provide greater insight into the mechanism of action of DeStressify and would be useful in the design of future mental health apps and support services. Obtaining data directly from the app regarding participant use would also be more accurate than obtaining data from self-reported measures and should be considered in future studies. As some participants provided feedback regarding user satisfaction, future studies may also wish to include a measure of user satisfaction to enrich discussion regarding a mental health app’s effectiveness and acceptance within a university population. Rickard et al [60] recommend providing opportunities for feedback directly in the app, particularly using established measures to allow for comparison between apps. Additionally, participant recruitment was dependent on self-selection, and thus, may not represent a random sample of the university population. However, individuals who would be inclined to use a mental health app may also be more likely to respond to this study’s call for participants.

There is also the possibility that some participants in the control condition downloaded DeStressify, as it is a commercially available app. Therefore, we cannot discount the possibility that scores within the control condition may have been altered because of app use. If this were to have occurred, it is suspected that control condition scores would have been closer to experimental condition scores than what they would have been if the app had not been used.

What’s more, some participants may have previously received mindfulness training and possibly interacted with DeStressify differently than participants who had not previously received mindfulness training. However, it is unknown how previous training would impact results. For example, if participants found the app’s exercises to be similar to their current mindfulness exercises, then they may have incorporated the app more easily into their daily schedule and used it more consistently. This consistent use could have yielded greater changes in their scores. Conversely, the similarities in the exercises could have yielded smaller changes in their scores. Including a measure of previous mindfulness training would be beneficial in future studies to control for its possible effects.

Finally, future directions may include comparing the effectiveness of a mental health app on different subpopulations such as university staff or high school students and gathering data beyond 4 weeks of app use to better understand long-term effectiveness of the app. Additional apps may also be considered, so as to provide a more generalized understanding of mindfulness-based mHealth apps.

Conclusions

Universities and other similar institutions may benefit from supporting the use of DeStressify or other mindfulness-based mHealth apps. Such an app is a resource that can be easily incorporated into existing support services and used alongside other forms of mental health support. Mindfulness-based mHealth apps such as DeStressify may be of interest to university students who are comfortable with apps and seek to manage their anxiety and mental health in an accessible, inexpensive, and discreet manner. Students interested in methods of anxiety management or mindfulness-based self-driven health support may be encouraged to try using the DeStressify app. As app use is self-directed, institutions that provide students with DeStressify may choose to conduct their own follow-up with students so as to track mental health progress. Regardless, an effective mHealth app would provide another means of addressing stress, anxiety, and related mental health concerns, allowing more students to receive the help they are seeking. This study has demonstrated how DeStressify can assist in improving some of these mental health traits in a short time frame and therefore, may be of interest to universities aiming to diversify their student mental health supports.

Conflicts of Interest

None declared.

Notice of editorial concern: This randomized study was not registered, in possible violation of ICMJE rules for prospective registration of randomized trials. The editor granted an exception because the risk of bias appears low and the study was considered formative, guiding the development of the application. However, readers are warned to carefully assess the validity of any potential explicit or implicit claims related to primary outcomes or effectiveness, as failure to register does not prevent authors from changing their outcome measures retrospectively.

  1. Smetanin P, Stiff D, Briante C, Adair CE, Ahmad S, Khan M. The Mental Health Commission of Canada. 2011. URL: https://www.mentalhealthcommission.ca/sites/default/files/MHCC_Report_Base_Case_FINAL_ENG_0_0.pdf [WebCite Cache]
  2. Statistics Canada. The Daily. 2013. URL: http://www.statcan.gc.ca/daily-quotidien/130918/dq130918a-eng.pdf [accessed 2017-06-20] [WebCite Cache]
  3. Stallman HM. Psychological distress in university students: a comparison with general population data. Aust Psychol. 2010;45(4):249-257. [CrossRef]
  4. Crozier S, Willihnganz N. Docplayer. 2005. URL: http://docplayer.net/3230999-Canadian-counselling-centre-survey.html [accessed 2017-06-20] [WebCite Cache]
  5. Thielking M. Statnews. Feb 06, 2017. URL: https://www.statnews.com/2017/02/06/mental-health-college-students/ [WebCite Cache]
  6. Barua B. Fraser Institute. 2015. URL: https://www.fraserinstitute.org/sites/default/files/waiting-your-turn-2015.pdf [WebCite Cache]
  7. Ryerson University Centre for Student Development and Counselling. Ryerson University. URL: http:/​/www.​ryerson.ca/​healthandwellness/​counselling/​students/​Career_and_Educational_Concerns/​Booking_an_Appointment/​ [accessed 2017-06-01] [WebCite Cache]
  8. Kitzrow MA. The mental health needs of today's college students: challenges and recommendations. NASPA J. 2003;41(1):167-181. [CrossRef]
  9. Richardson M, Abraham C, Bond R. Psychological correlates of university students' academic performance: a systematic review and meta-analysis. Psychol Bull. Mar 2012;138(2):353-387. [CrossRef] [Medline]
  10. Ualberta. Nov 2011. URL: http:/​/www.​su.ualberta.ca/​media/​uploads/​assets/​CouncilPresentations/​NCHA%20report%20January%2026:2012.​pdf [accessed 2017-05-20] [WebCite Cache]
  11. Hysenbegasi A, Hass SL, Rowland CR. The impact of depression on the academic productivity of university students. J Ment Health Policy Econ. Sep 2005;8(3):145-151. [Medline]
  12. Doom JR, Haeffel GJ. Teasing apart the effects of cognition, stress, and depression on health. Am J Health Behav. Sep 2013;37(5):610-619. [CrossRef] [Medline]
  13. Pierceall EA, Keim MC. Stress and coping strategies among community college students. Community Coll J Res Pract. Oct 08, 2007;31(9):703-712. [CrossRef]
  14. Mackenzie CS, Erickson J, Deane FP, Wright M. Changes in attitudes toward seeking mental health services: a 40-year cross-temporal meta-analysis. Clin Psychol Rev. Mar 2014;34(2):99-106. [CrossRef] [Medline]
  15. Sasaki M. EPA-0231 - Barriers and facilitators to the use of mental health services by Japanese university students. Eur Psychiatry. 2014;29(S1):1. [CrossRef]
  16. Eisenberg D, Golberstein E, Gollust SE. Help-seeking and access to mental health care in a university student population. Med Care. Jul 2007;45(7):594-601. [CrossRef] [Medline]
  17. Gulliver A, Griffiths KM, Christensen H. Perceived barriers and facilitators to mental health help-seeking in young people: a systematic review. BMC Psychiatry. 2010;10:113. [FREE Full text] [CrossRef] [Medline]
  18. Jorm AF, Wright A, Morgan AJ. Where to seek help for a mental disorder? National survey of the beliefs of Australian youth and their parents. Med J Aust. Nov 19, 2007;187(10):556-560. [Medline]
  19. Martin JM. Stigma and student mental health in higher education. Higher Education Research & Development. Jun 2010;29(3):259-274. [CrossRef]
  20. Talebi M, Matheson K, Anisman H. Support, depressive symptoms, and the stigma towards seeking mental health help. Int J Soc Sci Stud. 2013;1(1):133-144. [CrossRef]
  21. Andrews G, Issakidis C, Carter G. Shortfall in mental health service utilisation. Br J Psychiatry. Nov 2001;179:417-425. [FREE Full text] [Medline]
  22. Barney LJ, Griffiths KM, Jorm AF, Christensen H. Stigma about depression and its impact on help-seeking intentions. Aust N Z J Psychiatry. Jan 2006;40(1):51-54. [CrossRef] [Medline]
  23. Griffiths KM, Crisp DA, Jorm AF, Christensen H. Does stigma predict a belief in dealing with depression alone? J Affect Disord. Aug 2011;132(3):413-417. [CrossRef] [Medline]
  24. Stallman HM. University counselling services in Australia and New Zealand: activities, changes, and challenges. Aust Psychol. 2012;47:249-253. [CrossRef]
  25. Bigam S. The Ubyssey. Feb 10, 2014. URL: http://old.ubyssey.ca/news/state-of-ubc-counselling-services001/ [accessed 2017-06-10] [WebCite Cache]
  26. Yorgason JB, Linville D, Zitzman B. Mental health among college students: do those who need services know about and use them? J Am Coll Health. 2008;57(2):173-182. [CrossRef] [Medline]
  27. Robinson AM, Jubenville TM, Renny K, Cairns SL. Academic and mental health needs of students on a Canadian campus. Can J Counselling Psychother. 2016;50(2):108-123. [FREE Full text]
  28. Crisp DA, Griffiths KM. Participating in online mental health interventions: who is most likely to sign up and why? Depress Res Treat. 2014;2014:790457. [FREE Full text] [CrossRef] [Medline]
  29. Payne HE, Lister C, West JH, Bernhardt JM. Behavioral functionality of mobile apps in health interventions: a systematic review of the literature. JMIR mHealth uHealth. 2015;3(1):e20. [FREE Full text] [CrossRef] [Medline]
  30. Donker T, Petrie K, Proudfoot J, Clarke J, Birch M, Christensen H. Smartphones for smarter delivery of mental health programs: a systematic review. J Med Internet Res. 2013;15(11):e247. [FREE Full text] [CrossRef] [Medline]
  31. Grist R, Porter J, Stallard P. Mental health mobile apps for preadolescents and adolescents: a systematic review. J Med Internet Res. May 25, 2017;19(5):e176. [FREE Full text] [CrossRef] [Medline]
  32. Van Gordon W, Shonin E, Griffiths MD. Mindfulness in mental health: a critical reflection. J Psy Neuro Dis Brain Stim. Jul 20, 2015;1(1):102. [CrossRef]
  33. Ramler TR, Tennison LR, Lynch J, Murphy P. Mindfulness and the college transition: the efficacy of an adapted mindfulness-based stress reduction intervention in fostering adjustment among first-year students. Mindfulness. Feb 2016;7(1):179-188. [CrossRef]
  34. Lynch S, Gander M, Kohls N, Kudielka B, Walach H. Mindfulness-based coping with university life: a non-randomized wait-list-controlled pilot evaluation. Stress Health. Mar 21, 2011;27(5):365-375. [CrossRef]
  35. Jang H, Jeon M. Relationship between self-esteem and mental health according to mindfulness of university students. Indian J Sci Technol. 2015;8(21):1-5. [CrossRef]
  36. Kvillemo P, Brandberg Y, Bränström R. Feasibility and outcomes of an internet-based mindfulness training program: a pilot randomized controlled trial. JMIR Ment Health. Jul 22, 2016;3(3):e33. [FREE Full text] [CrossRef] [Medline]
  37. Faul F, Erdfelder E, Buchner A, Lang AG. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods. 2009;41(4):1149-1160. [CrossRef] [Medline]
  38. Chang VY, Palesh O, Caldwell R, Glasgow N, Abramson M, Luskin F, et al. The effects of a mindfulness-based stress reduction program on stress, mindfulness self-efficacy, and positive states of mind. Stress and Health. Aug 2004;20(3):141-147. [CrossRef]
  39. Cohen S. Mind Garden. 1994. URL: https://www.mindgarden.com/documents/PerceivedStressScale.pdf [WebCite Cache]
  40. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav. Dec 1983;24(4):385-396. [Medline]
  41. Spielberger CD, Gorsuch RL, Lushene R, Vagg PR, Jacobs GA. State-Trait Anxiety Inventory for Adults Manual. Menlo Park, CA. Mind Garden, Inc; 2015.
  42. Rush AJ, Trivedi MH, Ibrahim HM, Carmody TJ, Arnow B, Klein DN, et al. The 16-Item quick inventory of depressive symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): a psychometric evaluation in patients with chronic major depression. Biol Psychiatry. Sep 01, 2003;54(5):573-583. [Medline]
  43. Ids-qids. URL: http://www.ids-qids.org/interpretation.html [accessed 2017-12-09] [WebCite Cache]
  44. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh sleep quality index: a new instrument for psychiatric practice and research. Psychiatry Res. May 1989;28(2):193-213. [Medline]
  45. Ware JE, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). Conceptual framework and item selection. Med Care. Jun 1992;30(6):473-483. [Medline]
  46. Hays RD, Sherbourne CD, Mazel RM. The RAND 36-Item health survey 1.0. Health Econ. Oct 1993;2(3):217-227. [Medline]
  47. RAND Health. URL: http://www.rand.org/health/surveys_tools/mos/36-item-short-form/scoring.html [WebCite Cache]
  48. VanderZee KI, Sanderman R, Heyink JW, de Haes H. Psychometric qualities of the RAND 36-Item Health Survey 1.0: a multidimensional measure of general health status. Int J Behav Med. Jun 1996;3(2):104-122. [CrossRef] [Medline]
  49. Reilly MC, Zbrozek AS, Dukes EM. The validity and reproducibility of a work productivity and activity impairment instrument. Pharmacoeconomics. Nov 1993;4(5):353-365. [Medline]
  50. Field A. Discovering Statistics Using SPSS, 3rd edition. London, England. SAGE Publications Ltd; 2009.
  51. Tabachnick BG, Fidell LS. Using Multivariate Statistics, 4th edition. Boston, MA. Allyn and Bacon; 2001.
  52. Vincent WJ. Statistics in Kinesiology, 2nd edition. Champaign, IL. Human Kinetics; 1999.
  53. Cusc-ccreu. URL: http://www.cusc-ccreu.ca/publications/CUSC_2016-First-Year-Report-EN.pdf [accessed 2017-08-11] [WebCite Cache]
  54. UBC. URL: https:/​/senate.​ubc.ca/​sites/​senate.ubc.ca/​files/​downloads/​UBC%20Enrolment%20Report%202016-17_Final%20-%209%20Jan%202017_0.​pdf [accessed 2017-08-10] [WebCite Cache]
  55. Doherty G, Coyle D, Sharry J. Engagement with online mental health interventions: an exploratory clinical study of a treatment for depression. 2012. Presented at: Proc SIGCHI Conf Hum Factor Comput Syst; May 5-10, 2012:1421-1430; Austin, TX. [CrossRef]
  56. Van der Mispel C, Poppe L, Crombez G, Verloigne M, De Bourdeaudhuij I. A self-regulation-based eHealth intervention to promote a healthy lifestyle: investigating user and website characteristics related to attrition. J Med Internet Res. Jul 11, 2017;19(7):e241. [FREE Full text] [CrossRef] [Medline]
  57. Chen A. Andrewchen. URL: http:/​/andrewchen.​co/​new-data-shows-why-losing-80-of-your-mobile-users-is-normal-and-that-the-best-apps-do-much-better/​ [accessed 2017-09-11] [WebCite Cache]
  58. iTunes Preview. URL: https://itunes.apple.com/ca/app/destressify-pro/id751845893?mt=8 [WebCite Cache]
  59. Google Play. URL: https://play.google.com/store/apps/details?id=com.stressrefuge.destressify.pro [WebCite Cache]
  60. Rickard N, Arjmand HA, Bakker D, Seabrook E. Development of a mobile phone app to support self-monitoring of emotional well-being: a mental health digital innovation. JMIR Ment Health. Nov 23, 2016;3(4):e49. [FREE Full text] [CrossRef] [Medline]


ANOVA: analysis of variance
ANCOVA: analysis of covariance
MANCOVA: multivariate analysis of covariance
MBSR: mindfulness-based stress reduction
MDD: major depressive disorder
mHealth: mobile health
PSQI: Pittsburgh Sleep Quality Index
PSS: Perceived Stress Scale
QIDS-SR: Quick Inventory of Depressive Symptomatology Self-Report
QoL: quality of life
STAI: State-Trait Anxiety Inventory for Adults
WPAI: Work Productivity and Activity Impairment Questionnaire: General Health V2.0


Edited by J Prescott; submitted 30.06.17; peer-reviewed by K Reynolds, D Crisp, B Loo Gee; comments to author 21.08.17; revised version received 25.10.17; accepted 14.12.17; published 23.01.18.

Copyright

©Rebecca Anne Lee, Mary Elizabeth Jung. Originally published in JMIR Mental Health (http://mental.jmir.org), 23.01.2018.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on http://mental.jmir.org/, as well as this copyright and license information must be included.