Published in Vol 9, No 11 (2022): November

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/41482.
Development of the First Episode Digital Monitoring mHealth Intervention for People With Early Psychosis: Qualitative Interview Study With Clinicians


Original Paper

1Department of Psychiatry, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, United States

2Division of Behavioral Health Services and Policy Research, New York State Psychiatric Institute, New York, NY, United States

3Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, NY, United States

4Brown School of Social Work, Washington University in St Louis, St Louis, MO, United States

5Department of Psychiatry, Icahn School of Medicine, New York, NY, United States

6New York Mental Illness Research Education and Clinical Center, The James J Peters Veterans Affairs Medical Center, Bronx, NY, United States

Corresponding Author:

David Kimhy, PhD

Department of Psychiatry

Icahn School of Medicine

One Gustave L Levy Place

Box 1230

New York, NY, 10029

United States

Phone: 1 212 659 8752

Email: david.kimhy@mssm.edu


Background: Mobile health (mHealth) technologies have been used extensively in psychosis research. In contrast, their integration into real-world clinical care has been limited despite the broad availability of smartphone-based apps targeting mental health care. Most apps developed for treatment of individuals with psychosis have focused primarily on encouraging self-management skills of patients via practicing cognitive behavioral techniques learned during face-to-face clinical sessions (eg, challenging dysfunctional thoughts and relaxation exercises), reminders to engage in health-promoting activities (eg, exercising, sleeping, and socializing), or symptom monitoring. In contrast, few apps have sought to enhance the clinical encounter itself to improve shared decision-making (SDM) and therapeutic relationships with clinicians, which have been linked to positive clinical outcomes.

Objective: This qualitative study sought clinicians’ input to develop First Episode Digital Monitoring (FREEDoM), an app-based mHealth intervention. FREEDoM was designed to improve the quality, quantity, and timeliness of clinical and functional data available to clinicians treating patients experiencing first-episode psychosis (FEP) to enhance their therapeutic relationship and increase SDM.

Methods: Following the app’s initial development, semistructured qualitative interviews were conducted with 11 FEP treatment providers at 3 coordinated specialty care clinics to elicit input on the app’s design, the data report for clinicians, and planned usage procedures. We then generated a summary template and conducted matrix analysis to systematically categorize suggested adaptations to the evidence-based intervention using dimensions of the Framework for Reporting Adaptations and Modifications‐Enhanced (FRAME) and documented the rationale for adopting or rejecting suggestions.

Results: The clinicians provided 31 suggestions (18 adopted and 13 rejected). Suggestions to add or refine the content were most common (eg, adding questions in the app). Adaptations to context were most often related to plans for implementing the intervention, how the reported data were displayed to clinicians, and with whom the reports were shared. Reasons for suggestions primarily included factors related to health narratives and priorities of the patients (eg, focus on the functional impact of symptoms vs their severity), providers’ clinical judgment (eg, need for clinically relevant information), and organizations’ mission and culture. Reasons for rejecting suggestions included requests for data and procedures beyond the intervention’s scope, concerns regarding dilution of the intervention’s core components, and concerns about increasing patient burden while using the app.

Conclusions: FREEDoM focuses on a novel target for the deployment of mHealth technologies in the treatment of FEP patients—the enhancement of SDM and improvement of therapeutic relationships. This study illustrates the use of the FRAME, along with methods and tools for rapid qualitative analysis, to systematically track adaptations to the app as part of its development process. Such adaptations may contribute to enhanced acceptance of the intervention by clinicians and a higher likelihood of integration into clinical care.

Trial Registration: ClinicalTrials.gov NCT04248517; https://tinyurl.com/tjuyxvv6

JMIR Ment Health 2022;9(11):e41482

doi:10.2196/41482


Early Intervention for Psychosis and Measurement-Based Care

Early treatment experiences of individuals diagnosed with schizophrenia can have enduring effects on their attitudes toward treatment, potentially altering the course of illness and affecting long-term outcomes [1,2]. Consequently, first-episode psychosis (FEP) is a critical period for optimizing treatment to enhance treatment satisfaction and adherence [3]. Specifically, psychotropic medications are critical core components of early intervention strategies. However, evidence suggests that a significant gap exists between the optimal use of medications and how they are used in real-world practice [4], with many patients receiving higher than recommended dosages of antipsychotic medications, as well as additional psychotropic medications. These practices often result in troubling symptoms and side effects, lower satisfaction with treatment, poorer therapeutic relationship and treatment engagement, and increased rates of discontinuation of treatment [1,4].

One widely promoted approach to improve treatment outcomes is measurement-based care (MBC), which is defined as the systematic evaluation of patient conditions before or during an encounter to inform treatment [5,6]. Typically, MBC relies on the patients’ recollection of their clinical status over several days or weeks. However, such retrospective assessments are problematic because they are vulnerable to the influence of memory difficulties, cognitive biases, and reframing [7-9]. These issues are particularly salient among individuals with schizophrenia, given the substantial episodic memory deficits documented in this population [10,11]. In addition, medication management sessions by psychiatrists and other prescribing clinicians typically last <30 minutes, making it difficult for providers to obtain a comprehensive view of the clinical status of patients and develop rapport. The latter is particularly pertinent for patients with FEP, about whom psychiatric care providers may have shorter treatment histories, resulting in less familiarity. Overall, these limitations may contribute to lower treatment satisfaction and adherence, poorer therapeutic relationship, and poorer clinical outcomes.

A promising strategy to overcome many of these challenges is the use of mobile health (mHealth) technologies. Extensive evidence from psychosis research studies using smartphones indicates high feasibility and validity of real-time collection of clinical information on daily experiences among individuals with psychosis, including symptoms, side effects, mood and affective processing, social activities and context, sleep, and functioning [9,12-15]. Using apps and methodologies such as the experience sampling method (ESM), which presents patients with brief assessments that are more frequent, richer in detail, and completed during the course of “real-world” functioning, mHealth technologies can provide a more granular and complete picture of patients’ clinical status and functioning, upon which more effective clinical decisions and pharmacological management can be based [9,13,16,17]. Specifically, mHealth technologies can capture changes in clinical variables across time and social contexts, potentially allowing providers to better tailor interventions. Furthermore, the “real-world” characterization of patients’ experiences via mHealth technologies may also enhance shared decision-making (SDM) and the therapeutic relationship by providing both clinicians and patients with more accurate clinical data that are more directly tied to patients’ experiences, potentially allowing for more informed joint treatment decisions.

mHealth Applications in Psychosis Treatment

To date, most apps developed for and used in the treatment of individuals with psychosis, including FOCUS [18-20], CORE [21], Actissist [22], and Acceptance and Commitment Therapy in Daily Life [23], have focused largely on supplementing face-to-face clinical encounters [16,17,24-28]. Most apps have been designed primarily to facilitate patients’ self-management of symptoms and recovery by offering psychoeducation, guidelines for practicing cognitive and behavioral coping strategies (eg, reassessment of dysfunctional beliefs and relaxation exercises), or other skills taught during clinical sessions. Other apps have focused on monitoring symptoms and signs of clinical deterioration or enhancing social functioning [29-32]. In contrast, few apps have sought to enhance the clinical encounter itself. Specifically, to date, no app has aimed to enhance SDM and therapeutic relationships within FEP treatment. SDM has been shown to be a key element contributing to positive clinical outcomes [33,34]. Previous reports have demonstrated that SDM has a positive impact on patient satisfaction, adherence to treatment, quality of life, and empowerment, including among patients with serious mental illness [35-38]. Consistent with this view, Zielesak et al [39] pointed out that there remains a significant gap in understanding clinician needs for information in mental health care decision-making, as well as ways to better integrate apps into routine clinical care and provider workflow. Furthermore, providers’ lack of engagement with, or buy-in for, patient-reported health data has been noted as a critical barrier to the use of such data in health care generally, making it a priority to elicit adaptations that may facilitate uptake from the provider perspective [40].

To address these gaps in the literature, we sought to develop an mHealth intervention that provides psychiatric care providers with clinically relevant and time-sensitive information that would enhance MBC, better inform decisions regarding treatment and medication management, and improve therapeutic relationships and SDM. Prior research has demonstrated the benefits of soliciting stakeholder input when developing and refining mHealth apps, including for individuals with schizophrenia [19] and early psychosis [22,31,41]. For example, Ben-Zeev et al [19] used a multistage, multistakeholder input and feedback approach combining survey and qualitative methods to develop the FOCUS app that supports self-management for people with schizophrenia. Similarly, within an early intervention service for psychosis, McClelland and Fitzgerald [41] conducted a staged series of focus groups with patients and clinicians to develop an app that helped patients track their mood and activities, receive reminders and messages, and seek external support.

In this study, we describe the systematic process of soliciting input from clinician stakeholders to develop and adapt an app as part of a pilot study examining the implementation of a community-based FEP mHealth intervention for adolescents and young adults. As our app focuses on a novel clinical target, the information available to clinicians, and its use in SDM, the views and input of clinicians were critical for elucidating this target. Adaptations may entail changes to interventions, or to implementation strategies, that produce better alignment with factors such as the needs, resources, and cultures of target settings and populations [42]. Specifically, such input may lead to adaptations in multiple aspects of an intervention, including content, frequency, and timing, which may then improve intervention fit (eg, appropriateness), feasibility (eg, successful delivery), acceptability (eg, satisfactoriness), and effectiveness, given a particular practice setting and population served or higher-level contextual factors such as local policies [43]. Changes to implementation strategies can include adding intervention training or modifying workflows, as these focus on methods and activities that seek to maximize the extent to which an intervention is adopted, used, and sustained within routine practice [44]. Overall, adaptations may address several considerations, including clinical judgment, stakeholder preferences, and perception of the intervention, as well as factors associated with the entity or setting (eg, clinic) within which the intervention is embedded, such as an organization’s access to resources, social context, or mission. Finally, adaptations can also be responsive to the wider sociopolitical context, such as social norms or mores, and funding policies.

In addition to the practical value of obtaining stakeholder input for intervention design, there are increasing calls for the development, tracking, and reporting of processes and findings regarding adaptations to interventions and implementation strategies as part of efforts to disseminate methods, tools, and resources that promote rapid and iterative applications of implementation science and translational research [45]. One such tool is the Framework for Reporting Adaptations and Modifications‐Enhanced (FRAME) [46,47]. It facilitates the ability of researchers and providers to capture a range of information relevant to adaptation decision-making processes and to catalog ways in which a practice has changed from a previously established iteration or protocol. The FRAME allows for systematic classification of intervention adaptations by guiding researchers and providers to address key questions such as (1) when adaptations are made; (2) who participated in the decision-making process; (3) specifically, what was modified or adapted and to which aspect of the intervention does it relate (eg, context and content); (4) the reasons why an adaptation was made; (5) the goal of the adaptation (eg, increase reach or engagement); and (6) whether the adaptation is consistent with intervention fidelity or an intervention’s core principles.


Context and Setting

This qualitative study was conducted as part of a pilot randomized controlled trial (RCT) to evaluate the feasibility and acceptability of using First Episode Digital Monitoring (FREEDoM), a novel mHealth app designed to enhance MBC and SDM, as well as improve patient satisfaction with pharmacotherapy regimens at 3 clinics delivering coordinated specialty care (CSC) [48,49] for patients with FEP (ClinicalTrials.gov NCT04248517). The clinics, all affiliated with OnTrackNY, provide treatment to adolescents and young adults (aged 16-30 years) experiencing nonaffective FEP [44]. OnTrackNY originated as part of the National Institute of Mental Health Recovery After an Initial Schizophrenia Episode Implementation and Evaluation Study. The CSC programs use an evidence-based, multidisciplinary, and team-based approach that offers pharmacotherapy, psychotherapy, supported employment and education, and peer support and emphasizes an SDM approach to treatment [50]. Semistructured qualitative interviews were conducted with CSC program staff (eg, psychiatric care providers and primary therapists) before initiating the RCT at each site to elicit provider perspectives on the proposed intervention and plans for implementation. Researchers used provider feedback from structured interviews and the FRAME to identify, catalog, track, and implement adaptations to increase the potential feasibility and acceptability of the RCT intervention and protocol.

FREEDoM—a Novel mHealth Intervention

The FREEDoM mHealth intervention project involved patients completing 3-day ESM-based assessments once per month immediately before their appointment with their psychiatric care provider of the CSC program. The goal of the intervention was to provide timely, accurate, and granular information about clinical status; improve communication about pharmacotherapy between patients and clinicians; enhance SDM; and improve patient treatment satisfaction.

During the 3-day assessment, the mHealth app delivered notifications to the participants’ smartphones 10 times a day at random times between 10 AM and 10 PM to complete brief questionnaires. Participants had 15 minutes to begin responding to questions presented on the smartphone’s screen. The questions asked during each sampling assessment varied based on the time of day and a system of branching logic within each set of questions. The first daily questionnaire included questions about sleep and medications taken the previous day. The middle 8 questionnaires asked about psychiatric symptoms, medication side effects, mood, substance use, social activities, and context, as well as activities and difficulties functioning. The final questionnaire each day asked about side effects that are less transient (eg, constipation and sexual side effects), as well as global functioning. Each questionnaire took 3 to 5 minutes to complete. Following the 3-day ESM assessment, the clinician received a 1-page succinct report summarizing key clinical variables characterizing the current status and functioning of the patient, along with changes from the previous month and the start of the study that could be reviewed and discussed with the patient in the upcoming session. Clinicians were encouraged to share the reports with their patients during clinical sessions and use them as a basis for discussions on clinical status, treatment goals, clinical progress, and SDM.
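The sampling schedule described above (10 prompts delivered at random times between 10 AM and 10 PM, each with a 15-minute response window) can be sketched as follows. This is a minimal illustration of the protocol, not the FREEDoM implementation; production ESM platforms typically also enforce a minimum gap between prompts and handle branching logic, which this sketch omits.

```python
import random
from datetime import date, datetime, timedelta

def daily_ping_schedule(day, n_pings=10, start_hour=10, end_hour=22,
                        response_window_min=15, seed=None):
    """Draw n_pings prompt times uniformly at random within the daily
    sampling window; each prompt carries a fixed response deadline."""
    rng = random.Random(seed)
    window_start = datetime(day.year, day.month, day.day, start_hour)
    window_seconds = (end_hour - start_hour) * 3600
    # Distinct random second offsets within the 10 AM-10 PM window, sorted
    offsets = sorted(rng.sample(range(window_seconds), n_pings))
    return [(window_start + timedelta(seconds=s),
             window_start + timedelta(seconds=s, minutes=response_window_min))
            for s in offsets]

# One day's schedule, seeded here only for reproducibility
schedule = daily_ping_schedule(date(2022, 11, 1), seed=7)
```

Each pair gives the notification time and the moment the 15-minute window to begin responding expires; question content would then be selected based on whether the prompt is the first, a middle, or the final one of the day.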

Sample

A purposive sampling approach was used to identify staff members at each CSC site whose primary role was to provide clinical care to patients. Team leaders served as initial key informants at each site and nominated a psychiatric care provider (either a physician or nurse practitioner) and other clinical staff members, whom they believed would contribute feedback relevant to the proposed intervention and implementation plan, for study participation. All staff members identified for the interviews provided informed consent and participated in the study.

Data Collection

The initial development of the questions and inquiry items included in the FREEDoM app was completed by DK and TSS, with the team members providing additional edits. Next, the CSC providers completed individual semistructured interviews lasting approximately 1 hour each. Interviews were conducted by 2 senior MD or PhD clinician researchers (TSS and DK) who were trained and supported by 2 experts in qualitative methods and implementation science (LJC and AS). The first 2 interviews were conducted in person before the COVID-19 pandemic restrictions, and subsequent interviews were conducted via videoconferencing (eg, via Zoom) owing to social distancing mandates. Interview guides (Multimedia Appendix 1), which were developed collaboratively by the research team, were framed to inquire about providers’ perspectives on study procedures related to implementation, recruitment, and retention, as well as feedback on the content and structure of both the FREEDoM mHealth app used to deliver the proposed intervention and the report delivered to clinicians. During the interviews, providers were shown screenshots of the mHealth app and a draft of the 1-page clinical report for feedback. The interviews were audio-recorded, transcribed verbatim, reviewed for accuracy, and deidentified.

Pragmatic Data Analysis Procedures

Data analysis and deliberation of adaptations were performed in tandem with data collection (Figure 1). Data were analyzed using a summary template and matrix analysis approach to categorize suggested adaptations using key dimensions of the FRAME. Matrix analysis is a rigorous but pragmatic method for rapidly extracting and reducing qualitative data, allowing researchers to systematically synthesize and catalog content into a template of key topics [51-53].

Following the semistructured interviews, one author (RTR) developed draft interview summaries of each transcript, extracting interview content based on key interview topics. These summaries were then edited by a senior author (AS) with expertise in qualitative analysis to ensure that all information pertinent to potential adaptations from each transcript was captured in the summary. Summaries included providers’ assessment of procedures or content (eg, endorsed or had concerns) and systematically outlined each suggested adaptation along with illustrative quotes.

Next, brief descriptions of the suggested adaptations and relevant contextual information from the summaries were entered into a descriptive adaptation matrix (Table 1). The adaptation matrix was a Microsoft Excel table template with column headings representing information that would be needed to classify adaptations along FRAME domains (the adaptation suggested, supporting rationale or contextual information, whether adaptations would vary by study site, and key quotes) and rows outlining potential adaptations organized by project components (eg, “project implementation issues,” “app-related,” and “report-related”) with specific subtopics (eg, “mobile phone and data plan reimbursement” was a subtopic of “project implementation issues”). During this charting process, the authors met every other week to discuss the suggested adaptations and deliberate making changes. Decisions on whether to implement a suggested adaptation were documented by 1 author (RTR) in the adaptation matrix along with a brief description of why the adaptation was incorporated.
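As a sketch of how such a matrix can be represented programmatically, the following snippet defines rows whose column names are paraphrased from the description above; the field names are illustrative, not the study's actual spreadsheet template.

```python
import csv
import io

# Columns paraphrased from the adaptation matrix described above (illustrative)
FRAME_COLUMNS = [
    "component",             # eg, "project implementation issues", "app-related"
    "subtopic",              # eg, "mobile phone and data plan reimbursement"
    "suggested_adaptation",
    "rationale",
    "varies_by_site",
    "key_quote",
    "implemented",           # decision recorded during biweekly deliberations
    "decision_rationale",
]

def new_matrix_row(**fields):
    """Return a blank adaptation-matrix row, filled with any known values."""
    unknown = set(fields) - set(FRAME_COLUMNS)
    if unknown:
        raise ValueError(f"unknown columns: {unknown}")
    row = dict.fromkeys(FRAME_COLUMNS, "")
    row.update(fields)
    return row

def matrix_to_csv(rows):
    """Serialize matrix rows to CSV text, mirroring a spreadsheet template."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FRAME_COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Fixing the column set up front is what makes a matrix approach systematic: every suggestion, whatever its source interview, is forced into the same FRAME-aligned fields before deliberation.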

Figure 1. Pragmatic analysis for rapid qualitative research. FRAME: Framework for Reporting Adaptations and Modifications-Enhanced.
Table 1. A sample descriptive adaptation matrix for mobile health First Episode Digital Monitoring.

| Domain | Stakeholder input and feedback | Suggestion for adapted practice | Varies by site | Relevant stakeholder quote | Adaptation implemented |
| --- | --- | --- | --- | --- | --- |
| App-related | | | | | |
| Questions or content | Current app content is fine, but weight gain should be added as a side effect | Add weight gain to the app questions | No | “Would you consider adding weight gain to the list of side-effects? Because that’s been something that has been brought up by some participants in the past and the prescriber really tries to, to work with them on that.” | Yes |
| Pinging and frequency | 10 pings a day may be too much | Reduce pings per day | N/A | “I’m just curious if like if you’re in college, you’re in high school; like how realistic is it that you’re going to be...? I don’t know what’s the frequency...” | No |
| Report | | | | | |
| Layout or design | Inclusion of daily averages is useful | Include daily averages in granular graph | No | “I like having the average for the day.” | Yes |
| Access to report | It would be helpful for participants to receive a report of their answers within the app itself | Consider providing report directly to participant | No | “Just wondering, like are, are participants able to get like information like this?... like maybe like since it’s an app like they’re able to like see what they said and like past months.” | No, consider for future iteration |
| Content | Caffeine should be included owing to its effects on sleep | Include caffeine as a substance | No | “[Caffeine and other substances] are relevant for sleep.” | Yes |

N/A: not applicable.

After all the suggested adaptations were entered into the descriptive adaptation matrix, each adaptation was further classified along FRAME domains that were applicable to tracking planned adaptations before implementation for interventions without established fidelity standards (what was modified at what level of delivery, type of contextual adaptation, nature of content modification, reason for adaptation and goal). The FRAME organizes the reasons why an adaptation is made into 4 overarching categories: recipient, provider, organization, or sociopolitical context, with specific subcategories. An additional subcategory was developed and added under participant-level reasons for adaptation that emerged from the data—“Health narratives and priorities”—to reflect the adaptations that sought to be more responsive to participants’ understanding and perspectives on their needs and mental health. Descriptive reasons for not implementing the suggested adaptations were further classified into categories inductively developed by the researchers. To organize and streamline findings, the adaptations were clustered by the reason for suggesting them and by whether or not the suggestion was adopted. Strategies for maximizing rigor included progressively reducing the data using a series of defined steps (eg, transcribing, summarizing, charting, and categorizing); using multiple researchers at each step to extract, reduce, and categorize the data; conducting frequent debriefing meetings throughout data collection and analysis; and keeping an audit trail [54,55].
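The final clustering step, grouping adaptations by the reason they were suggested and by whether they were adopted, amounts to a simple cross-tabulation. A minimal sketch with hypothetical field names (not the study's actual records):

```python
from collections import Counter

def cluster_adaptations(adaptations):
    """Cross-tabulate adaptations by FRAME reason level and adoption decision."""
    return Counter((a["reason_level"], a["adopted"]) for a in adaptations)

# Illustrative records only; the study's data had 31 such suggestions
suggestions = [
    {"reason_level": "recipient", "adopted": True},
    {"reason_level": "recipient", "adopted": False},
    {"reason_level": "provider", "adopted": True},
]
counts = cluster_adaptations(suggestions)
```

The resulting counts correspond to the groupings reported in Tables 2 (suggested and accepted) and 3 (suggested and rejected).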

Ethics Approval

All the procedures were approved by the Institutional Review Boards of the New York State Psychiatric Institute (#7900) and Northwell Health (#20-0429).


Overview

A total of 11 CSC clinical providers completed the semistructured interviews: 4 staff members at each of 2 sites and 3 staff members at a third site. The interviewed staff members represented different disciplines and clinical roles on the treatment team, including team leaders (3/11, 27%), psychiatric care providers (eg, psychiatrist and nurse practitioner; 5/11, 45%), and primary therapists (eg, clinical social workers; 3/11, 27%). In total, the staff members suggested 31 adaptations (Tables 2 and 3): 24 concerned the intervention itself (eg, app questionnaire and data reports), and the remaining 7 concerned implementation strategies (eg, reiterating instructions and checking smartphone compatibility). Suggestions to modify content were the most frequent (18/31, 58%) and focused on adding or refining content, such as including new survey questions or displaying additional data in the report. This was followed by suggestions for context modification (13/31, 42%), including aspects of format, such as the design of the report, and aspects of the population, such as which staff should have access to the report. Reasons for suggesting modifications included responsiveness to factors at the participant (15/31, 48%), provider (11/31, 35%), organization or setting (3/31, 10%), and sociopolitical (2/31, 6%) levels. Overall, the goals of the suggested adaptations were to improve the fit with recipients or to increase satisfaction, effectiveness, feasibility, reach, and engagement. Ultimately, 58% (18/31) of suggestions were implemented within the study and applied across all sites (ie, adaptations were not specific to or varied by site), whereas 42% (13/31) were not adopted.

Table 2. Summary of adaptations suggested and accepted for mobile health First Episode Digital Monitoring.

| Reason for suggested adaptation | Goal was to increase or improve | What was suggested and adapted | Type of adaptation made |
| --- | --- | --- | --- |
| Recipient level: health narratives and priorities | | | |
| Time burden | Fit with recipients, feasibility | Repeat or reassure that skipping some questionnaires is OK | Implementation strategy: content, repeating |
| Privacy or confidentiality | Reach and engagement | Repeat information regarding confidentiality | Implementation strategy: content, repeating |
| Person-centered care | Fit with recipients, satisfaction | Report: use person-centered, experience-based language vs medicalized language | Content: tailoring or tweaking or refining |
| Recovery-oriented approach | Fit with recipients, satisfaction | App: ask how bothersome symptom is and impact on functioning | Content: adding elements |
| Recipient level: access to resources | | | |
| Technology | Reach and engagement, feasibility | Check smartphone compatibility before enrollment | Implementation strategy: context, format |
| Recipient level: crisis or emergent circumstance | | | |
| Participant safety | Fit with recipients | Include suicidal ideation as exclusionary criteria | Context: population |
| Recipient level: comorbidities | | | |
| Multiple mental health symptoms or conditions | Effectiveness | Clarify instructions to include multiple psychiatric medications | Implementation strategy: content-tailoring or tweaking or refining |
| Physical health side effects | Fit with recipients, satisfaction | App: ask about weight gain as potential side effect | Content: adding elements |
| Provider level: clinical judgment | | | |
| Clinically meaningful information | Satisfaction, effectiveness | App: ask about timing of medication use and factor in for adherence | Content: adding elements |
| Clinically meaningful information | Satisfaction, effectiveness | Report: include substance use and caffeine use on report | Content: adding elements |
| Clinically meaningful information | Satisfaction, effectiveness | Report: include lines for daily averages on report’s granular graphs | Content: adding elements |
| Previous training or skills | Feasibility | Train providers to read report and include legend | Training content: adding elements |
| Provider level: preferences | | | |
| Data visualization | Satisfaction | Report: reduce report or graph density (eg, focus on subset of symptoms or side effects) | Context: format |
| Data visualization | Satisfaction | Report: use dots on report’s granular graphs | Context: format |
| Organization level: service structure | | | |
| Team-based care | Feasibility, effectiveness | Option to share report with multiple staff | Context: personnel |
| Organization level: mission or culture | | | |
| Shared decision-making | Satisfaction, effectiveness | Option for clinician to show report to the participant | Context: format |
| Sociopolitical level: existing policies | | | |
| COVID-19 pandemic social distancing mandates | Reach and engagement | Attend web-based program meeting for introduction or warm handoff to client for recruitment | Implementation strategy: context, format |
| COVID-19 pandemic social distancing mandates | Reach and engagement | Option to receive an e-gift card as participant reimbursement | Implementation strategy: context, format |
Table 3. Summary of adaptations suggested and rejected for mobile health First Episode Digital Monitoring.

| Reason for suggested adaptation | Goal was to increase or improve | What was suggested, but not adapted | Type of adaptation not made | Reason why adaptation not made |
| --- | --- | --- | --- | --- |
| Recipient level: health narratives and priorities | | | | |
| Time burden | Fit with recipients, feasibility | Reduce ping frequency | Content: shortening or condensing (pacing or timing), tailoring | Compromises core components |
| Time burden | Fit with recipients, feasibility | Tailor ping timing around participant work or school hours | Content: shortening or condensing (pacing or timing), tailoring | Increases complexity |
| Privacy or confidentiality | Reach and engagement | Offer non–app-based means of collecting information | Context: format | Beyond intervention scope |
| Person-centered care | Fit with recipients, satisfaction | App: ask more positively worded questions | Content: adding elements | Increases recipient time burden |
| Person-centered care | Fit with recipients, satisfaction | Ask more open-ended questions | Content: adding elements | Increases complexity of data or report |
| Recipient level: access to resources | | | | |
| Technology | Reach and engagement, feasibility | Provide phones to participants | Implementation strategy: content, adding | Additional resources required (as well as in-person meeting during COVID-19 pandemic) |
| Recipient level: literacy or education level | | | | |
| Data visualization | Fit with recipients | Report: simplify report so participants can understand it more easily | Content: tailoring or tweaking or refining | Compromises core components (may reduce usefulness to providers as primary targets) |
| Provider level: clinical judgment | | | | |
| Clinically meaningful information | Satisfaction, effectiveness | App: ask more about negative symptoms | Content: adding elements | Beyond intervention scope and increases time burden |
| Clinically meaningful information | Satisfaction, effectiveness | App: ask about suicidal ideation | Content: adding elements | Additional resources required |
| Clinically meaningful information | Satisfaction, effectiveness | Allow providers to access more information than what is on the report | Content: adding elements | Additional resources required |
| Clinically meaningful information | Satisfaction, effectiveness | Collect data on days more removed from clinical session | Content: lengthening or extending (pacing or timing) | Beyond scope |
| Provider level: preferences | | | | |
| Data visualization | Satisfaction | Report: use bars on granular graphs | Context: format | Not consistent with most clinicians’ preferences |
| Organization level: mission or culture | | | | |
| Shared decision-making | Satisfaction, effectiveness | Send report or information directly to participant | Context: format | Additional resources required and beyond intervention scope |

Adaptations Suggested and Adopted

Adaptations for Participant-Level Reasons

Adaptations that were ultimately adopted were most commonly driven by reasons at the participant level and included the need to address factors such as participants’ health narratives and priorities, comorbidities, access to resources, and safety. With respect to health narratives and priorities, the most substantive changes were to refine or add intervention content. Staff members emphasized the need to use person-centered and experience-based language, instead of medical language, throughout the intervention, including changing data labels on the report (eg, changing “symptoms” to “experiences” and “hallucinations” to “seeing things”; Multimedia Appendix 2):

[It’s important that the] language be recovery-oriented...[many participants] don’t agree with our diagnosis. So that’s why it’s important for us to be able to engage them. It can’t always be- reflect the language of sort of traditional medical model.
[P6]

Beyond refining the wording, staff members highlighted the need for additional app questions that would incorporate participants’ own perceptions of their mental health in a more person-centered and recovery-oriented way, potentially making the questionnaire more engaging and relevant to the participants. Suggestions that were adopted included adding questions that not only assess the frequency of symptoms or side effects but also inquire about the degree to which participants perceived these experiences to be bothersome or to interfere with their functioning (ie, how much “[This Experience] gets in the way of what I’m doing”):

Particularly for our population, it’s really not about whether or not they have a symptom... It’s really about if that symptom is getting in the way of something...our young people don’t in general tend to like apps that remind them or conceptualize them as being sick... asking things in a way that might be a little more recovery-oriented might be helpful...“If you do experience this, can you tell us...how much is this thing particularly bothersome or impacting your ability to do the things that you want to do whether you work or school”...That way the person could experience it as, “yes I have voices, but no, it’s actually not impacting me” or if something is interrupting your life, it might help you to remind yourself, “okay, this actually is a problem.”
[P6]
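The adopted question design, pairing how often an experience occurs with how much it interferes with functioning, can be sketched as a small data structure. This is an illustrative sketch only; the item labels, 0 to 6 scales, and flagging threshold are assumptions, not the actual FREEDoM questionnaire.

```python
from dataclasses import dataclass

@dataclass
class ExperienceItem:
    """One app question pairing how often an experience occurs with how
    much it "gets in the way" of functioning (scales are illustrative)."""
    label: str          # person-centered wording, eg, "Seeing things"
    frequency: int      # 0-6: how often the experience occurred
    interference: int   # 0-6: how much it got in the way of daily life

    def is_clinically_notable(self) -> bool:
        # An experience can be frequent yet non-interfering ("yes I have
        # voices, but it's not impacting me"); flag only experiences that
        # both occur and interfere (threshold is a hypothetical choice).
        return self.frequency >= 1 and self.interference >= 3

voices = ExperienceItem("Hearing voices", frequency=5, interference=1)
weight = ExperienceItem("Weight gain", frequency=3, interference=4)
print(voices.is_clinically_notable(), weight.is_clinically_notable())
# prints: False True
```

This mirrors the recovery-oriented framing quoted above: a frequent experience with low interference is not flagged, whereas a less frequent but disruptive one is.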

Staff members also noted the need to add questions that would further reflect priorities of the recipients; for example, asking about side effects that were of known concern to them, as subsequently included in the app:

Would you consider adding weight gain to the list of side-effects? Because that’s been something that has been brought up by some participants in the past and the prescriber really tries to... work with them on that.
[P3]

Staff members also identified the need to reassure participants of confidentiality and voluntariness by repeating content, such as reiterating instructions regarding confidentiality and the ability to skip app questions. Finally, to address concerns regarding participant safety, study exclusionary criteria were modified to include suicidal ideation, whereas concerns regarding the participants’ access to technology were addressed by adding a step to check participants’ smartphone compatibility with the app before enrollment.

Adaptations for Provider-Level Reasons

Reasons at the provider level included factors such as clinical judgment, previous training or skills, and provider preferences. Most commonly, this entailed suggestions for adding questions to the app or presenting additional data in the report to maximize access to information that providers believed to be clinically meaningful. This included requests for the report to display specific substances beyond illicit drugs (eg, caffeine) that can impact participants’ functioning and for the app questionnaire to account for different factors that have a role in medication adherence (eg, route of administration and timing):

It doesn’t capture what substance was used. You would have to ask...maybe code each [substance in the report]...show [the participant]...had a cup of coffee...[Caffeine and other substances] are relevant for sleep.
[P1]
What if the patient’s on a (Long Acting Injectable), like an antipsychotic, how would you capture that?...What if they [were supposed to take] the medication in the morning, but took it in the afternoon...
[P11]

Suggestions for how best to depict data in the report generally reflected provider preferences for visualizing data in a certain format to enhance readability or to reduce the density of graphs (eg, display only the subset of symptoms and side effects with highest impact or severity) owing to concerns that the report was “a little overwhelming...lots of bars...the page is completely full.” In addition, enhancing training for providers in interpreting reports was identified and incorporated as a key implementation strategy:

At first when I saw the report I’m like, “oh, my gosh, all these dots, all these numbers,” but...you guys [actually] explaining it to me...I feel like it’s really simple...
[P3]
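The preference for less dense graphs amounts to a simple selection rule: display only the experiences with the highest impact. A minimal sketch of such a rule follows; the scoring, cutoff, and function name are assumptions for illustration, not the actual FREEDoM report logic.

```python
def top_items_for_report(items, n=5):
    """Keep the n highest-impact experiences so a 1-page report stays
    readable (the cutoff n is an illustrative parameter)."""
    # items: list of (label, impact_score) pairs; higher means more severe
    return sorted(items, key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical weekly summary scores for one participant
week = [("Hearing voices", 4.2), ("Trouble sleeping", 3.1),
        ("Feeling suspicious", 1.0), ("Restlessness", 2.5),
        ("Weight gain", 3.8), ("Dry mouth", 0.4)]
for label, score in top_items_for_report(week, n=3):
    print(label, score)
```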
Adaptations for Organization-Level Reasons

Regarding reasons associated with the organization or setting, adaptations were suggested to better align the project with key aspects of the mission or culture of the CSC programs and team service structure, specifically SDM and the use of a team-based approach. For example, staff members suggested that they could show the report to participants during sessions, using it as a “visual” tool for promoting participant engagement and informing SDM processes (eg, discussing options, tailoring pros and cons, and exploring patient fears or expectations):

I could totally see using it. I’m all about transparency. So I would show [the report] to them, and I would try to explain it and everything. “And this is what the data says...” in terms of engaging them into their treatment, it’ll help with that...this is...shared decision making. And this gives them more of a connection and participation in their treatment.
[P1]

Furthermore, given the multidisciplinary and team-based approach of the CSC programs, providers emphasized that team members other than the psychiatric care provider should have access to the report, which was integrated as an option:

Since we are a team and we talk very openly about each participant...I think all of our team members should get [the report]...it would be like a comprehensive way to say...this person is...experiencing this and this, experiencing this kind of side-effects, and then we can get together as a team about it during our meeting.
[P8]
Adaptations for Sociopolitical Context–Level Reasons

Finally, to respond to the sociopolitical context, adaptations to implementation strategies were suggested to address some of the barriers related to COVID-19 pandemic social distancing mandates. Given the limited in-person services, staff members noted the need to expand options for reimbursing participants (eg, offering electronic gift cards) and for preserving aspects of a warm handoff when linking participants to researchers by adding the option of a web-based handoff:

To introduce the [participant]...we are able to do groups via the [virtual] platform. So if the participant is able to go onto the platform and do our video session...if they agree, [the research assistant] can join and it will be the three of us.
[P8]

Adaptations Suggested but Not Adopted

Of the 31 suggestions, 13 (48%) adaptations were ultimately not implemented, which generally reflected suggestions to add content by collecting additional information through the app questionnaire, to adapt aspects of context to facilitate participants’ direct access to and understanding of their own data, and to change the pacing or timing of the intervention components. Overall, the reasons for suggesting these adaptations reflected rationales similar to those behind the adaptations that were made, with responsiveness to health narratives and priorities of the participants and clinical judgment of the providers once again being the most frequent. The reasons that researchers did not incorporate these suggested adaptations included additional study resources being required, modifications being beyond the scope of the intervention, concerns regarding compromising core components or mechanisms, managing intervention complexity, managing participant time burden, and adaptations not being consistent with the preferences of most providers. The staff suggested additional questionnaire content, such as asking more about negative symptoms and positive experiences or adding open-ended questions, primarily as a potential way to make the app more engaging for participants:

But it will also be nice to, towards the end, to say oh, “but you did report this other positive thing that happened to you.” Or so it’s just not about medication.
[P10]

Although researchers acknowledged the potential value of collecting this additional data, these additions were ultimately not made owing to concerns that they were outside the primary scope of the pilot trial, would pose an increased time burden for participants, or would unacceptably increase the complexity of the data presented in the report.

The staff also expressed concerns about different aspects of intervention timing, inquiring “how realistic” it was for participants to respond to 10 questionnaires a day, with suggestions to reduce or tailor questionnaire frequency. There was also provider uncertainty about the timing of data collection, with suggestions to space out participant completion of questionnaires and to include time points further removed from upcoming appointments to potentially capture experiences that may also be relevant but more challenging to remember:

Is there an opportunity to have flexibility with what three days are selected...As opposed to the last three days before they’re seeing me...answering those questions [at different points] in real time further away from my appointment...I could see sometimes where [the past three days] might matter, if there’s something they want to talk about in their experience more recently. I can see sometimes where it’s not as relevant.
[P6]

These changes to intervention timing were not adopted: researchers sought to preserve the core component of 10 ESM questionnaires, based on their prior experience that this frequency yields adequate response rates [13], and judged that tailoring questionnaire frequency to participants’ changing schedules would be too complex to implement reliably over time.
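The preserved core component of 10 daily ESM questionnaires implies some ping schedule. A common ESM approach is stratified random sampling: one ping per equal-width block of the waking day. The sketch below assumes a 9 AM to 9 PM window; the window, block scheme, and function name are illustrative, not the FREEDoM protocol.

```python
import random

def daily_ping_times(n_pings=10, start_hour=9, end_hour=21, seed=None):
    """Draw one random ping per equal-width block of the waking window,
    so pings span the day without being exactly predictable."""
    rng = random.Random(seed)
    block = (end_hour - start_hour) * 60 / n_pings  # block width in minutes
    times = []
    for i in range(n_pings):
        offset = rng.uniform(i * block, (i + 1) * block)
        minutes = start_hour * 60 + int(offset)
        times.append(f"{minutes // 60:02d}:{minutes % 60:02d}")
    return times

print(daily_ping_times(seed=7))  # 10 times in chronological order
```

Stratified sampling keeps the fixed frequency of 10 pings (the core component) while spreading them across waking hours; per-participant tailoring, by contrast, would require tracking each individual's changing schedule.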

Providers also suggested offering alternative means for participants to complete the questionnaire, providing smartphones to participants lacking the technology, giving participants direct access to their own data, and further simplifying the report to make it easier for participants to understand:

Is there an option if participants are hesitant about downloading an app, like a way to do it by email...
[P2]
It would be nice if when you’re with a particular client to simplify these graphs. Because if you are going to use it as a tool, like this most people would not understand.
[P10]

Although these suggestions had the potential to expand intervention reach and enhance participant engagement with the intervention and their own data, they were ultimately not adopted. Researchers determined that offering a non–app-based means of collecting data was beyond the scope of the mHealth intervention and that tailoring the report to participants versus providers could result in a loss of information that potentially compromised core components. Moreover, purchasing smartphones would require additional funding.


Principal Findings

This study presents our process and findings from using rapid, pragmatic qualitative methods along with the FRAME to systematically solicit, document, deliberate on, and report provider-suggested adaptations to FREEDoM, an mHealth app aimed at enhancing treatment for individuals with FEP. This study is one of only a handful of published reports characterizing efforts to incorporate direct stakeholder input (eg, from clinicians) into the development of an app targeting treatment of psychosis and the first to focus on enhancing the therapeutic relationship and improving SDM between patients with FEP and their treatment teams.

With overarching research questions guided by the FRAME, we conducted focused semistructured interviews while concurrently extracting data from transcripts to interview summaries and then to a descriptive matrix, further condensing the data at each step until we categorized each adaptation along the FRAME domains. This study demonstrates how these methods can facilitate rapid analysis of qualitative research data for intervention adaptation and yield timely findings with high clinical relevance to inform the delivery of care.
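The condensation from transcripts to a descriptive matrix can be pictured as grouping condensed suggestions by FRAME level. The sketch below uses a handful of entries drawn from the findings reported here; the tuple layout and field names are assumptions for illustration, not the study's actual coding scheme.

```python
from collections import defaultdict

# Each suggestion condensed to (level, reason, what_was_suggested, adopted);
# entries are examples from this study's findings, not the full dataset.
suggestions = [
    ("recipient", "health narratives and priorities",
     "use person-centered labels on the report", True),
    ("recipient", "time burden", "reduce ping frequency", False),
    ("provider", "clinical judgment", "display caffeine use in the report", True),
    ("organization", "mission or culture",
     "share the report across team members", True),
]

# Descriptive matrix: FRAME level -> condensed suggestions at that level
matrix = defaultdict(list)
for level, reason, what, adopted in suggestions:
    matrix[level].append((reason, what, adopted))

adopted_rate = sum(adopted for *_, adopted in suggestions) / len(suggestions)
print(sorted(matrix), adopted_rate)  # levels covered and share adopted
```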

Reasons for suggesting adaptations most commonly included responsiveness to health narratives and priorities of patients, clinical judgment of providers, and mission or culture of organizations. Suggestions to add or refine content were most common, including asking participants to rate how bothersome symptoms or side effects were, rewording the report to be person centered and experience based in lieu of medical language, and presenting additional data in the report. Adaptations to context were most often related to an implementation strategy (eg, web-based handoffs during recruitment), the format of the provider report, and with whom the report was shared.

Overall, the adaptations that were suggested and adopted were driven by key aspects of the CSC context to shift the intervention to better reflect the needs and preferences of the population served and the CSC’s emphasis on SDM, recovery-oriented practice, and team-based approach to care. In particular, asking additional questions and changing the phrasing of report labels sought to address factors such as patients’ perceptions of their mental health conditions, priorities, and existing comorbidities. The inclusion of additional questions also addressed providers’ need for more comprehensive and clinically relevant information, as did changes to which data were displayed and how the report was designed. Adaptations implemented also responded to key aspects of the structure, mission, and culture of the CSC programs. For example, the CSC team-based approach to care necessitated the option of sharing the report across providers, whereas the option to review the report collaboratively with participants during a session aligned with SDM. This adaptation to share the report with other providers and patients, as well as the inclusion of patients’ perceptions of the impact of symptoms on functioning, may be particularly important to counteract the potential tendency of any one provider to narrowly interpret or selectively focus on certain data, given their particular role, background, or training. Although not fully eliminating factors such as providers’ information selection bias, incorporating patients’ ratings of functioning and having multiple individuals review and discuss the report, including the patients themselves, may help bridge the gap between what patients and any one provider might perceive as important, relevant, or possible, potentially enhancing SDM.

Adaptations that were suggested but not incorporated most frequently reflected suggestions to collect additional patient information, facilitate patients’ access to their own data, or change the timing of the intervention components. The fact that the rationales for suggesting these adaptations, which were ultimately not made, were generally similar to those for implemented adaptations indicates that the adaptation decision-making process—whether to adapt or not—did not appear to exhibit a systematic bias (eg, consistently rejecting adaptations reflecting participant-level rather than provider-level factors). Suggested adaptations were not incorporated into the intervention when the research team deemed that they were outside the current aims or scope of the trial, potentially compromised core components or mechanisms, or presented a feasibility challenge, such as insufficient resources to implement an adaptation in the context of a pilot trial or increased complexity.

The tracking of adaptations not made further helps to highlight key dilemmas that may frequently emerge when deliberating mHealth adaptations within clinical care. For example, in this study, researchers had to weigh the potential benefit of the providers’ suggestion that participant engagement could be encouraged by including more positively worded statements or open-ended questions in the app against the potential drawback of the increased time required to complete the questionnaires, which might itself discourage participant engagement. Ultimately, the decision was made not to include these extra questions, given that the potential net impact on engagement was unclear. In addition, including them would have hindered the study’s ability to expeditiously produce short 1-page clinician reports, as processing additional items and free-text entries would also increase the time clinicians would need to review a more complex report. Such deliberations illustrate how decision-makers may have to balance factors such as the desire to create a more engaging app against feasibility, avoiding an excessive time burden for patients or clinicians. Future studies can further identify the information that decision-makers consider when weighing these factors and explore the feasibility of empirically pretesting different iterations of an intervention when the evidence to support an adaptation decision is unclear. For example, with adequate time and resources, 2 versions of an app could be tested—one with and one without the positive and open-ended questions—providing an empirical basis upon which to accept or reject this suggestion, depending on the respective rates of participant engagement.
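Such an empirical pretest could be analyzed with a standard two-proportion z-test comparing questionnaire completion rates between the two app versions. The counts below are hypothetical; only the test itself is standard.

```python
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two engagement rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # p value from the normal approximation via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: questionnaires completed out of questionnaires sent,
# for the version without (A) vs with (B) positive and open-ended items.
z, p = two_proportion_z(412, 600, 455, 600)
print(round(z, 2), round(p, 3))
```

In practice, the direction and size of the difference in engagement, not just its statistical significance, would drive the adaptation decision.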

Overall, tracking adaptations not made provides greater insight into the dilemmas and decision-making processes of intervention adaptation while also offering concrete suggestions that can be considered for future refinement of similar mHealth interventions. Proposing preliminary categories for the reasons why adaptations are not made represents a first step toward guidelines that standardize this process.

Overall, health care systems and workflows often vary dramatically, necessitating consideration of whether to deliver uniform, standardized interventions or to shape interventions around specific aspects of local contexts, for example, the needs, preferences, and training backgrounds of providers in a given setting. Adapting interventions to particular contexts and providers may yield several benefits, such as increased intervention uptake, satisfaction, and effectiveness. However, engaging in intervention adaptation also brings challenges, including the extra time, resources, and expertise required to solicit stakeholder input and make adaptations. By illustrating some of the tools and rapid approaches used in this study, we seek to help minimize some of these challenges.

Limitations

This study has several limitations. Although CSC providers with different clinical roles were interviewed, the inclusion of other provider roles representing nonclinical staff (eg, peer specialist and supported employment specialist) could have yielded additional information relevant to adaptation, particularly given the team-based approach of CSC programs. The inclusion of CSC patients was originally planned as part of stakeholder interviews (to be published in a separate manuscript); however, the onset of the COVID-19 pandemic and the enactment of social distancing mandates coincided with the start of the study and interfered with patient data collection. Given that implementation barriers identified by patients and providers can be different, the inclusion of CSC patients would likely have identified additional suggestions that either expanded upon or potentially conflicted with the feedback offered by providers. Future studies, including our pilot trial of the developed FREEDoM app, which includes both stakeholder perspectives, can also offer insights into how best to balance or reconcile suggested adaptations that differ or conflict between patients and providers. Nevertheless, by soliciting CSC clinicians’ perspectives, this study addressed a key gap in the literature regarding providers’ information needs and strategies that may promote mHealth integration into early psychosis treatment. This gap is particularly important to address given the overarching concerns regarding providers’ buy-in for, and use of, patient-generated data in health care more broadly [40]. In addition, although our study contributes to the current understanding of provider preferences regarding MBC within early psychosis treatment and how to deploy mHealth technologies, it represents only an initial step, with much work remaining to identify the factors that influence long-term implementation, acceptability, and sustainability.

By virtue of the research objective, identified adaptations reflect the context of participating CSC programs and the scope of a subsequent clinical trial seeking to provide clinicians with patient information that may impact pharmacological treatment decisions. However, CSC is an established evidence-based practice with well-articulated core components that may support broader applicability of our findings, including a team-based approach, a wide range of multidisciplinary services (eg, psychotherapy, pharmacotherapy, and primary care coordination; supported employment and education; family education and support; and case management), and person-centered, recovery-oriented treatment that emphasizes SDM. In addition, although all 3 CSC study sites were in urban areas and had their fidelity to the model monitored, adaptations were uniform across sites despite variability along other key dimensions, such as the type of organization operating the program (eg, affiliated with a community-based nonprofit organization vs a hospital), aspects of population served (eg, ratio of more newly enrolled CSC patients to more established patients), and psychiatric or medical staffing (eg, nurse practitioner or psychiatrist, one or multiple psychiatric providers on team). Although this suggests the potential for broader generalizability of findings across CSCs, the adaptations may not be applicable for settings using mHealth data for a different purpose or to CSC programs that substantially depart from the model’s core functions and components, particularly those that may not adopt the recovery-oriented, person-centered, and SDM approaches that drove many of the adaptations suggested in this study. Finally, the study focused on adaptations suggested before intervention implementation; therefore, results from ongoing clinical trials are needed to evaluate the implementation and effectiveness of the developed mHealth intervention.

Conclusions

This study illustrates a pragmatic and rapid application of the FRAME to track provider-suggested adaptations to FREEDoM, a novel mHealth intervention app, and its implementation within “real-world” FEP treatment programs. The methodology used in this study offers a rigorous, iterative, and rapid approach to solicit, analyze, and incorporate qualitative stakeholder inputs for the development and adaptation of clinical interventions. Systematic tracking of suggested adaptations, including which adaptations were ultimately not implemented (and why), is essential to understanding and enhancing key implementation indicators such as intervention fit, feasibility, and acceptability while also increasing transparency and accountability in the adaptation decision-making processes. The FREEDoM app seeks to enhance the therapeutic relationship and improve SDM between patients with FEP and their treatment teams. Future studies should characterize relevant clinical findings, including measures of therapeutic relationships and SDM.

Acknowledgments

This study was supported by the National Institute of Mental Health (grant P50MH115843). The authors would like to thank the individuals and coordinated specialty care sites that participated in this study.

Authors' Contributions

DK, TSS, LJC, and AS conceptualized and designed the study. RTR, XX, and RB assisted with data collection, visualization, and coordination. AS, RTR, LJC, TSS, and DK analyzed and interpreted the qualitative data. AS and RTR drafted the initial manuscript, and SS, IL, LJC, TSS, and DK provided the edits. All authors have read and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Clinician qualitative interview.

PDF File (Adobe PDF File), 38 KB

Multimedia Appendix 2

Sample participant mobile health reports.

DOCX File , 255 KB

  1. Kane JM, Robinson DG, Schooler NR, Mueser KT, Penn DL, Rosenheck RA, et al. Comprehensive versus usual community care for first-episode psychosis: 2-year outcomes from the NIMH raise early treatment program. Am J Psychiatry 2016 Apr 01;173(4):362-372 [FREE Full text] [CrossRef] [Medline]
  2. McGorry PD. Early intervention in psychosis: obvious, effective, overdue. J Nerv Ment Dis 2015 May;203(5):310-318 [FREE Full text] [CrossRef] [Medline]
  3. Dixon LB, Stroup TS. Medications for first-episode psychosis: making a good start. Am J Psychiatry 2015 Mar 01;172(3):209-211. [CrossRef] [Medline]
  4. Robinson DG, Schooler NR, John M, Correll CU, Marcy P, Addington J, et al. Prescription practices in the treatment of first-episode schizophrenia spectrum disorders: data from the national RAISE-ETP study. Am J Psychiatry 2015 Mar 01;172(3):237-248 [FREE Full text] [CrossRef] [Medline]
  5. Aboraya A, Nasrallah HA, Elswick DE, Ahmed E, Estephan N, Aboraya D, et al. Measurement-based care in psychiatry-past, present, and future. Innov Clin Neurosci 2018 Nov 01;15(11-12):13-26 [FREE Full text] [Medline]
  6. Lewis CC, Boyd M, Puspitasari A, Navarro E, Howard J, Kassab H, et al. Implementing measurement-based care in behavioral health: a review. JAMA Psychiatry 2019 Mar 01;76(3):324-335 [FREE Full text] [CrossRef] [Medline]
  7. Blum LH, Vakhrusheva J, Saperstein A, Khan S, Chang RW, Hansen MC, et al. Depressed mood in individuals with schizophrenia: a comparison of retrospective and real-time measures. Psychiatry Res 2015 Jun 30;227(2-3):318-323 [FREE Full text] [CrossRef] [Medline]
  8. Kimhy D, Delespaul P, Ahn H, Cai S, Shikhman M, Lieberman JA, et al. Concurrent measurement of "real-world" stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology. Schizophr Bull 2010 Nov;36(6):1131-1139 [FREE Full text] [CrossRef] [Medline]
  9. Kimhy D, Delespaul P, Corcoran C, Ahn H, Yale S, Malaspina D. Computerized experience sampling method (ESMc): assessing feasibility and validity among individuals with schizophrenia. J Psychiatr Res 2006 Apr;40(3):221-230 [FREE Full text] [CrossRef] [Medline]
  10. Abdel-Baki A, Lal S, D-Charron O, Stip E, Kara N. Understanding access and use of technology among youth with first-episode psychosis to inform the development of technology-enabled therapeutic interventions. Early Interv Psychiatry 2017 Feb;11(1):72-76. [CrossRef] [Medline]
  11. Greenland-White SE, Ragland JD, Niendam TA, Ferrer E, Carter CS. Episodic memory functions in first episode psychosis and clinical high risk individuals. Schizophr Res 2017 Oct;188:151-157 [FREE Full text] [CrossRef] [Medline]
  12. Kimhy D, Lister A, Liu Y, Vakhrusheva J, Delespaul P, Malaspina D, et al. The impact of emotion awareness and regulation on psychotic symptoms during daily functioning. NPJ Schizophr 2020 Mar 24;6(1):7 [FREE Full text] [CrossRef] [Medline]
  13. Kimhy D, Myin-Germeys I, Palmier-Claus J, Swendsen J. Mobile assessment guide for research in schizophrenia and severe mental disorders. Schizophr Bull 2012 May;38(3):386-395 [FREE Full text] [CrossRef] [Medline]
  14. Kimhy D, Vakhrusheva J, Khan S, Chang RW, Hansen MC, Ballon JS, et al. Emotional granularity and social functioning in individuals with schizophrenia: an experience sampling study. J Psychiatr Res 2014 Jun;53:141-148 [FREE Full text] [CrossRef] [Medline]
  15. Vakhrusheva J, Khan S, Chang R, Hansen M, Ayanruoh L, Gross JJ, et al. Lexical analysis of emotional responses to "real-world" experiences in individuals with schizophrenia. Schizophr Res 2020 Feb;216:272-278 [FREE Full text] [CrossRef] [Medline]
  16. Torous J, Andersson G, Bertagnoli A, Christensen H, Cuijpers P, Firth J, et al. Towards a consensus around standards for smartphone apps and digital mental health. World Psychiatry 2019 Feb;18(1):97-98 [FREE Full text] [CrossRef] [Medline]
  17. Torous J, Wisniewski H, Liu G, Keshavan M. Mental health mobile phone app usage, concerns, and benefits among psychiatric outpatients: comparative survey study. JMIR Ment Health 2018 Nov 16;5(4):e11715 [FREE Full text] [CrossRef] [Medline]
  18. Ben-Zeev D, Brian RM, Jonathan G, Razzano L, Pashka N, Carpenter-Song E, et al. Mobile health (mHealth) versus clinic-based group intervention for people with serious mental illness: a randomized controlled trial. Psychiatr Serv 2018 Sep 01;69(9):978-985. [CrossRef] [Medline]
  19. Ben-Zeev D, Kaiser SM, Brenner CJ, Begale M, Duffecy J, Mohr DC. Development and usability testing of FOCUS: a smartphone system for self-management of schizophrenia. Psychiatr Rehabil J 2013 Dec;36(4):289-296 [FREE Full text] [CrossRef] [Medline]
  20. Ben-Zeev D, Scherer EA, Gottlieb JD, Rotondi AJ, Brunette MF, Achtyes ED, et al. mHealth for schizophrenia: patient engagement with a mobile phone intervention following hospital discharge. JMIR Ment Health 2016 Jul 27;3(3):e34 [FREE Full text] [CrossRef] [Medline]
  21. Ben-Zeev D, Chander A, Tauscher J, Buck B, Nepal S, Campbell A, et al. A smartphone intervention for people with serious mental illness: fully remote randomized controlled trial of CORE. J Med Internet Res 2021 Nov 12;23(11):e29201 [FREE Full text] [CrossRef] [Medline]
  22. Berry N, Machin M, Ainsworth J, Berry K, Edge D, Haddock G, et al. Developing a theory-informed smartphone app for early psychosis: learning points from a multidisciplinary collaboration. Front Psychiatry 2020;11:602861 [FREE Full text] [CrossRef] [Medline]
  23. Vaessen T, Steinhart H, Batink T, Klippel A, Van Nierop M, Reininghaus U, et al. ACT in daily life in early psychosis: an ecological momentary intervention approach. Psychosis 2019 Mar 19;11(2):93-104. [CrossRef]
  24. Ahmed A, Ali N, Giannicchi A, Abd-alrazaq A, Ahmed M, Aziz S, et al. Mobile applications for mental health self-care: a scoping review. Comput Methods Programs Biomed Update 2021;1:100041 [FREE Full text] [CrossRef]
  25. Batra S, Baker R, Wang T, Forma F, DiBiasi F, Peters-Strickland T. Digital health technology for use in patients with serious mental illness: a systematic review of the literature. Med Devices (Auckl) 2017 Oct;10:237-251 [FREE Full text] [CrossRef] [Medline]
  26. Bell IH, Alvarez-Jimenez M. Digital technology to enhance clinical care of early psychosis. Curr Treat Options Psych 2019 Jul 11;6(3):256-270. [CrossRef]
  27. Jameel L, Valmaggia L, Barnes G, Cella M. mHealth technology to assess, monitor and treat daily functioning difficulties in people with severe mental illness: a systematic review. J Psychiatr Res 2021 Nov 24;145:35-49. [CrossRef] [Medline]
  28. Larsen ME, Huckvale K, Nicholas J, Torous J, Birrell L, Li E, et al. Using science to sell apps: evaluation of mental health app store quality claims. NPJ Digit Med 2019;2:18 [FREE Full text] [CrossRef] [Medline]
  29. Alvarez-Jimenez M, Bendall S, Koval P, Rice S, Cagliarini D, Valentine L, et al. HORYZONS trial: protocol for a randomised controlled trial of a moderated online social therapy to maintain treatment effects from first-episode psychosis services. BMJ Open 2019 Feb 19;9(2):e024104 [FREE Full text] [CrossRef] [Medline]
  30. Buck B, Hallgren KA, Campbell AT, Choudhury T, Kane JM, Ben-Zeev D. mHealth-assisted detection of precursors to relapse in schizophrenia. Front Psychiatry 2021;12:642200 [FREE Full text] [CrossRef] [Medline]
  31. Eisner E, Drake RJ, Berry N, Barrowclough C, Emsley R, Machin M, et al. Development and long-term acceptability of ExPRESS, a mobile phone app to monitor basic symptoms and early signs of psychosis relapse. JMIR mHealth uHealth 2019 Mar 29;7(3):e11568 [FREE Full text] [CrossRef] [Medline]
  32. He-Yueya J, Buck B, Campbell A, Choudhury T, Kane JM, Ben-Zeev D, et al. Assessing the relationship between routine and schizophrenia symptoms with passively sensed measures of behavioral stability. NPJ Schizophr 2020 Nov 23;6(1):35 [FREE Full text] [CrossRef] [Medline]
  33. Fiorillo A, Barlati S, Bellomo A, Corrivetti G, Nicolò G, Sampogna G, et al. The role of shared decision-making in improving adherence to pharmacological treatments in patients with schizophrenia: a clinical review. Ann Gen Psychiatry 2020 Aug 05;19(1):43 [FREE Full text] [CrossRef] [Medline]
  34. Stovell D, Morrison AP, Panayiotou M, Hutton P. Shared treatment decision-making and empowerment-related outcomes in psychosis: systematic review and meta-analysis. Br J Psychiatry 2016 Jul;209(1):23-28. [CrossRef] [Medline]
  35. Bär Deucher A, Hengartner MP, Kawohl W, Konrad J, Puschner B, Clarke E, CEDAR study group. Participation in medical decision-making across Europe: an international longitudinal multicenter study. Eur Psychiatry 2016 May;35:39-46. [CrossRef] [Medline]
  36. Loos S, Arnold K, Slade M, Jordan H, Del Vecchio V, Sampogna G, CEDAR study group. Courses of helping alliance in the treatment of people with severe mental illness in Europe: a latent class analytic approach. Soc Psychiatry Psychiatr Epidemiol 2015 Mar;50(3):363-370. [CrossRef] [Medline]
  37. Matthias MS, Salyers MP, Rollins AL, Frankel RM. Decision making in recovery-oriented mental health care. Psychiatr Rehabil J 2012;35(4):305-314 [FREE Full text] [CrossRef] [Medline]
  38. O’Sullivan MJ, Rae S. Shared decision making in psychiatric medicines management. Mental Health Pract 2014 May 09;17(8):16-22. [CrossRef]
  39. Zielasek J, Reinhardt I, Schmidt L, Gouzoulis-Mayfrank E. Adapting and implementing apps for mental healthcare. Curr Psychiatry Rep 2022 Sep;24(9):407-417 [FREE Full text] [CrossRef] [Medline]
  40. Lavallee DC, Lee JR, Austin E, Bloch R, Lawrence SO, McCall D, et al. mHealth and patient generated health data: stakeholder perspectives on opportunities and barriers for transforming healthcare. mHealth 2020;6:8 [FREE Full text] [CrossRef] [Medline]
  41. McClelland GT, Fitzgerald M. A participatory mobile application (app) development project with mental health service users and clinicians. Health Educ J 2018 Jun 05;77(7):815-827. [CrossRef]
  42. Ferrer-Wreder L, Sundell K, Mansoory S. Tinkering with perfection: theory development in the intervention cultural adaptation field. Child Youth Care Forum 2011 Dec 25;41(2):149-171. [CrossRef]
  43. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011 Mar;38(2):65-76 [FREE Full text] [CrossRef] [Medline]
  44. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci 2015 Feb 12;10:21 [FREE Full text] [CrossRef] [Medline]
  45. Chambers DA, Norton WE. The Adaptome: advancing the science of intervention adaptation. Am J Prev Med 2016 Oct;51(4 Suppl 2):S124-S131 [FREE Full text] [CrossRef] [Medline]
  46. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci 2019 Jun 06;14(1):58 [FREE Full text] [CrossRef] [Medline]
  47. Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci 2013 Jun 10;8:65 [FREE Full text] [CrossRef] [Medline]
  48. Mascayano F, Nossel I, Bello I, Smith T, Ngo H, Piscitelli S, et al. Understanding the implementation of coordinated specialty care for early psychosis in New York state: a guide using the RE-AIM framework. Early Interv Psychiatry 2019 Jun;13(3):715-719. [CrossRef] [Medline]
  49. Read H, Kohrt BA. The history of coordinated specialty care for early intervention in psychosis in the United States: a review of effectiveness, implementation, and fidelity. Community Ment Health J 2022 Jul;58(5):835-846. [CrossRef] [Medline]
  50. Bello I, Lee R, Malinovsky I, Watkins L, Nossel I, Smith T, et al. OnTrackNY: the development of a coordinated specialty care program for individuals experiencing early psychosis. Psychiatr Serv 2017 Apr 01;68(4):318-320 [FREE Full text] [CrossRef] [Medline]
  51. Abraham T, Van Tiem J. Using qualitative summary templates and matrix displays to assess factors that impact the pace of implementation. U.S. Department of Veterans Affairs. 2021 Oct 6.   URL: https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/video_archive.cfm?SessionID=3996 [accessed 2022-05-02]
  52. Averill JB. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qual Health Res 2002 Jul;12(6):855-866. [CrossRef] [Medline]
  53. Gale RC, Wu J, Erhardt T, Bounthavong M, Reardon CM, Damschroder LJ, et al. Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration. Implement Sci 2019 Feb 01;14(1):11 [FREE Full text] [CrossRef] [Medline]
  54. Creswell JW, Creswell JD. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 5th edition. Thousand Oaks, CA: Sage Publications; 2018.
  55. Padgett D. Qualitative and Mixed Methods in Public Health. Thousand Oaks, CA: Sage Publications; 2011.


CSC: coordinated specialty care
ESM: experience sampling method
FEP: first-episode psychosis
FRAME: Framework for Reporting Adaptations and Modifications-Enhanced
FREEDoM: First Episode Digital Monitoring
MBC: measurement-based care
mHealth: mobile health
RCT: randomized controlled trial
SDM: shared decision-making


Edited by J Torous; submitted 25.08.22; peer-reviewed by T Wykes, D Ben-Zeev; comments to author 31.08.22; revised version received 16.09.22; accepted 19.09.22; published 06.11.22

Copyright

©Ana Stefancic, R Tyler Rogers, Sarah Styke, Xiaoyan Xu, Richard Buchsbaum, Ilana Nossel, Leopoldo J Cabassa, T Scott Stroup, David Kimhy. Originally published in JMIR Mental Health (https://mental.jmir.org), 31.10.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.