Published in Vol 11 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/58409.
Automated Real-Time Tool for Promoting Crisis Resource Use for Suicide Risk (ResourceBot): Development and Usability Study


Original Paper

1Department of Psychology, Harvard University, Cambridge, MA, United States

2Department of Psychiatry, Massachusetts General Hospital/Harvard Medical School, Boston, MA, United States

3Department of Psychology, Rutgers, The State University of New Jersey, Piscataway, NJ, United States

4Franciscan Children's Hospital, Brighton, MA, United States

Corresponding Author:

Daniel DL Coppersmith, AM

Department of Psychology

Harvard University

33 Kirkland St

Cambridge, MA, 02138

United States

Phone: 1 617 495 3800

Email: dcoppersmith@g.harvard.edu


Background: Real-time monitoring captures information about suicidal thoughts and behaviors (STBs) as they occur and offers great promise to learn about STBs. However, this approach also introduces questions about how to monitor and respond to real-time information about STBs. Given the increasing use of real-time monitoring, there is a need for novel, effective, and scalable tools for responding to suicide risk in real time.

Objective: The goal of this study was to develop and test an automated tool (ResourceBot) that promotes the use of crisis services (eg, 988) in real time through a rule-based (ie, if-then) brief barrier reduction intervention.

Methods: ResourceBot was tested in a 2-week real-time monitoring study of 74 adults with recent suicidal thoughts.

Results: ResourceBot was deployed 221 times to 36 participants. There was high engagement with ResourceBot (ie, 87% of the time ResourceBot was deployed, a participant opened the tool and submitted a response to it), but zero participants reported using crisis services after engaging with ResourceBot. The most reported reasons for not using crisis services were beliefs that the resources would not help, wanting to handle things on one’s own, and the resources requiring too much time or effort. At the end of the study, participants rated ResourceBot with good usability (mean of 75.6 out of 100) and satisfaction (mean of 20.8 out of 32).

Conclusions: This study highlights both the possibilities and challenges of developing effective real-time interventions for suicide risk and areas for refinement in future work.

JMIR Ment Health 2024;11:e58409

doi:10.2196/58409

Introduction

Overview

Real-time monitoring methods—such as ecological momentary assessment (EMA)—capture fine-grained, “real-world” information about suicidal thoughts and behaviors (STBs) as they occur and thus have immense potential to advance our understanding of suicide [1,2]. The promise of real-time monitoring methods for STBs has been widely recognized, as evidenced in part by the recent proliferation of published studies using EMA to study STBs. A recent systematic review identified 45 articles that have used real-time monitoring methods to study STBs [3].

Collecting information about STBs in real time, however, poses important safety, ethical, and methodological concerns [4]. One complex ethical challenge concerns how to monitor and respond to incoming information about STBs from suicidal or self-injuring individuals. For example, when participants submit a survey response indicating current suicidal intent that researchers can access in real time, should the study team intervene? How should the study team determine when an intervention is needed? What should the intervention involve?

A consensus statement (generated from a panel of 24 experts) on the ethical and safety practices for conducting real-time monitoring studies of individuals at risk for suicide and related behaviors was recently released [4]. There was strong agreement (about 94%) that when participants provide a “high-risk” response, the study team should reach out to them directly to conduct a suicide risk assessment as soon as possible (within 12-24 hours for responses indicating “imminent” risk). An exception the panel noted, however, was anonymous studies in which contact information for participants is not known. A systematic review of practices in 59 previous or ongoing digital monitoring studies of STBs [5], however, indicates a notable gap between this consensus and real-world practice: just over half (58%) of studies reported monitoring and intervening upon incoming responses during the study. The other most common safety practice identified in this review was automated notifications (eg, pop-up messages with crisis resources) triggered by specific survey responses, which were used in roughly half of the included studies.

Both common approaches, researcher-initiated outreach and static pop-up messages, have significant limitations. Static messages are easy for participants to habituate to or ignore, especially during periods of high distress. Active interventions by the research team (eg, a clinician calling participants) are resource-intensive and have the potential to cause undesired reactivity. If participants are aware the researchers will act if they provide a “high-risk” response, participants may underreport STBs (or stop responding to study surveys entirely) to avoid an unwanted intervention. A recent empirical investigation of this issue found mixed support for reactivity to real-time interventions (in this case, phone outreach by the study team) [6]. If responding to incoming data on STBs does influence individuals’ responding behavior, this could undermine the validity of the resultant study data. Another key limitation is the feasibility of monitoring and responding to incoming data, as this approach tends to require considerable staff, technology systems, and funding. The resources required for these safety protocols may partially explain the gap between expert consensus and real-world practices.

Given the increasing use of real-time monitoring methods [7], there is a need for novel, effective, and scalable tools for responding to suicide risk in real time. Recent advances in mobile technologies have the potential to facilitate automated, potentially highly efficient risk assessment strategies (ie, that do not require a clinician calling and may be less subject to reactivity) and the deployment of specific types of notifications or alerts delivered directly to participants. Automated assessments and interventions could be faster and less burdensome for both participants and researchers. Automated tools (here, referring to systems that use rule-based [eg, if-then] logic, not those that use generative artificial intelligence; Blease and Torous [8]) might be more effective than human interventions because they can reach the participant faster than study staff. Recently developed rule-based automated assessments and interventions for STBs have shown promise. One such automated intervention consists of a brief, automated risk assessment and barrier reduction intervention (BRI) designed to increase the use of crisis resources [9]. The BRI component includes psychoeducation designed to reduce perceived barriers to using crisis resources by clearing up misconceptions on which these barriers may be based. A large-scale clinical trial found that this intervention was associated with a 23% increase in the use of crisis services [9]. A similar trial also found that a brief automated intervention could increase the reported use of crisis resources [10]. The promise of this type of intervention for real-time monitoring is that it could guide participants to resources during high-risk situations. This type of intervention is consistent with recent calls for just-in-time adaptive interventions for suicide prevention, which are intended to provide the right type of support at the right time [11,12], and could therefore provide ethical, scalable, and fast risk management during research studies.

This automated intervention, however, has only been tested at a single time point [9,10] and has never been tested in the context of real-time, longitudinal monitoring of those at risk for suicide. Understanding how this tool translates to this context is crucial given that one cannot assume engagement and efficacy generalize across contexts in digital health [13]. Engagement is a crucial first step given that digital interventions often suffer from low engagement and a quick drop-off in use over time. In short, to realize the potential of an automated digital intervention, participants must engage with the intervention. Therefore, it is crucial to first adapt this intervention for real-time monitoring. Second, it is important to understand the feasibility, acceptability, and utility of the intervention for real-time monitoring. The development of such a tool has the potential to improve the safety, scale, and breadth of real-time monitoring studies of STBs.

Study Aims and Outcomes

The overall project aims were (1) to adapt an evidence-based BRI that aims to increase the use of crisis resources for deployment in real-time monitoring research and (2) to test the feasibility, acceptability, and utility of this tool in a real-time monitoring study of suicidal thoughts. For the latter objective, the key outcomes of interest were as follows: (1) Do people engage with the BRI? (2) Do people contact crisis resources after the BRI? (3) What do people report about the acceptability and usability of the BRI?

Methods

Adapting BRI

To adapt the BRI [9], members of the author team first met to develop the workflow of the intervention as well as the text to be deployed. Over multiple meetings, the author team iterated on the workflow to make it appropriate for the context of a real-time monitoring study. The main changes were reframing the beginning of the workflow around the recent submission of a survey with self-reported suicidal intent and shortening the text so that it could be viewed easily on mobile phones. The authors then worked with a graphic designer (MD) to name the BRI and develop images to pair with each text screen of the intervention. Images were added to promote engagement and to help differentiate the BRI from the base real-time survey questions. The BRI was named ResourceBot, and images of the ResourceBot were generated for each screen. In total, 23 unique images or text slides were generated. ResourceBot was designed to be triggered after a participant submits a survey with elevated suicidal intent. The workflow of ResourceBot is as follows: (1) confirm current distress (to ensure the participant-reported distress was not made in error), (2) offer resources, (3) identify barriers to using resources, and (4) provide psychoeducation to promote resource use. A general overview of the ResourceBot workflow is provided in Figure 1, and example slides of the tool are provided in Figures 2 and 3. All images of the ResourceBot and the decision logic are provided in Multimedia Appendix 1. ResourceBot was built and deployed directly in the Metricwire app (Metricwire Inc), which was also used for the real-time monitoring surveys.

Figure 1. General overview of ResourceBot flow.
Figure 2. Example of ResourceBot offering resources. (A) Confirming current distress. (B) Asking about the type of resources to view. (C) Providing resources to call if participants selected that they wanted resources to call.
Figure 3. Barrier reduction intervention in ResourceBot. (A) Asking participants to select barriers to using resources. (B) The first psychoeducation slide shown if “I can handle it on my own” is selected. (C) The second psychoeducation slide shown if “I can handle it on my own” is selected.
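To make the branching concrete, the sketch below (in Python) walks through the 4-step, rule-based workflow described above. It is a simplified illustration only: the screen names, response keys, and psychoeducation snippets are hypothetical placeholders, and the actual ResourceBot logic was configured within the Metricwire platform (see Multimedia Appendix 1 for the full decision logic).

```python
# Minimal sketch of the rule-based ResourceBot branching described above.
# Screen names, response keys, and messages are illustrative placeholders,
# not the study's actual implementation (which lived inside Metricwire).

BARRIER_PSYCHOEDUCATION = {
    "I can handle it on my own": "Reaching out is one way of handling it; crisis counselors support your own coping.",
    "It won't help": "Many people report feeling at least somewhat better after contacting a crisis line.",
    "Too much time/effort": "Text lines usually respond within minutes, and you can stop at any time.",
}

def resourcebot_flow(responses: dict) -> list[str]:
    """Walk a participant's answers through the 4-step workflow and
    return the sequence of screens that would be shown."""
    screens = ["confirm_distress"]                  # step 1
    if responses.get("in_distress") is False:
        screens.append("why_not_distressed")        # branch: clarify elevated intent
        return screens
    screens.append("offer_resources")               # step 2: call vs text resources
    screens.append("likelihood_of_use")
    if responses.get("likelihood") in ("not likely", "somewhat likely", "very likely"):
        screens.append("identify_barriers")         # step 3: select barriers
        for barrier in responses.get("barriers", []):
            if barrier in BARRIER_PSYCHOEDUCATION:  # step 4: targeted psychoeducation
                screens.append(f"psychoeducation: {BARRIER_PSYCHOEDUCATION[barrier]}")
    return screens

if __name__ == "__main__":
    example = {"in_distress": True, "likelihood": "not likely",
               "barriers": ["It won't help", "I can handle it on my own"]}
    print(resourcebot_flow(example))
```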

As part of the development of the ResourceBot tool, we ran a pilot study with 8 participants, primarily to determine when to trigger ResourceBot, among other topics. In this pilot study, across 369 total completed surveys, the ResourceBot tool was deployed only twice, to one participant. This participant engaged with the ResourceBot and the crisis resources it provided. Based on the results of this pilot study, we lowered the threshold for triggering the ResourceBot from a suicidal intent rating on the EMA or daily survey of greater than 3 (out of 10) to greater than 1 (out of 10) for the main wave of data collection. The threshold was lowered so that a greater number of participants would be provided with the tool and able to provide feedback on it. Given that ResourceBot was successfully deployed in the pilot, we proceeded to the main wave of data collection.

Participants

Participants were 74 adults recruited through the Prolific research platform. Prolific was selected for recruitment because it has been associated with high-quality data [14,15]. The demographics and clinical characteristics of the participants are provided in Table 1. The specific inclusion criteria for the study were suicidal thoughts in the past week, the ability to speak and write English fluently, access to an internet-capable smartphone, and living in the United States. To identify participants eligible for the study, a screening survey was sent to participants on the platform who lived in the United States, were fluent in English, had at least a 90% approval rating of past studies on Prolific, and endorsed a lifetime history of mental illness. We used the filter of lifetime history of mental illness to increase the prevalence of suicidal thinking in the population initially screened for recent suicidal thoughts. Consistent with recommendations for web-based data collection [16] and to help ensure quality data and attentive responding, suicidal thoughts were asked about in multiple ways on the screening survey (eg, binary lifetime presence of thoughts, ordinal recency of thoughts, and text entry of the number of days with thoughts). To be included, participants needed to provide a consistent response pattern on the screening survey by endorsing the lifetime presence of suicidal thoughts, reporting that the most recent time they had suicidal thoughts was in the past week, and writing a number greater than zero for lifetime days with suicidal thoughts. Participants were compensated US $0.60 for completing the brief screening survey. Participants who met eligibility criteria based on the screening survey were then invited to the main study. To be included in the current analysis, participants had to complete at least 1 real-time survey.
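As an illustration of the consistency check described above, the following Python sketch flags a screener as eligible only when all 3 suicidal-thought items agree. The field names and response labels are hypothetical; the actual screening was administered as a Prolific survey.

```python
# Sketch of the cross-item consistency check described above.
# Field names and response labels are hypothetical.

def eligible(screener: dict) -> bool:
    """Require a consistent pattern across the three suicidal-thought items:
    lifetime presence endorsed, most recent thoughts within the past week,
    and a lifetime day count greater than zero."""
    try:
        lifetime_days = float(screener["lifetime_days_with_thoughts"])
    except (KeyError, ValueError):
        return False
    return (
        screener.get("lifetime_suicidal_thoughts") is True
        and screener.get("recency") == "past week"
        and lifetime_days > 0
    )

print(eligible({"lifetime_suicidal_thoughts": True, "recency": "past week",
                "lifetime_days_with_thoughts": "550"}))  # True
```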

Table 1. Participant demographics and clinical variables (n=74).

Age (years), mean (range): 34.7 (20 to 62)
Sex assigned at birth, n
  Female: 37
  Male: 37
Gender identity, n
  Female: 31
  Male: 36
  Genderqueer, non-binary, gender fluid: 6
  Other: 1
Race, n
  White: 59
  Black: 6
  Asian: 3
  Multiracial: 5
  Other: 1
Ethnicity, n
  Hispanic/Latino: 7
Highest education level, n
  Less than high school: 1
  High school/GEDa: 11
  Some college: 22
  2-year college degree: 6
  4-year college degree: 30
  Master's degree: 3
  Professional degree: 1
Lifetime suicide attempt, n
  Yes: 46
Lifetime days with suicidal thoughts, median (SD): 550 (2451)
Patient Health Questionnaire-9, mean (SD): 18.0 (6.6)
Generalized Anxiety Disorder-7, mean (SD): 14.2 (5.6)
Psychotherapy history, n
  Lifetime use: 59
  Current use: 22
Medication for mental health history, n
  Lifetime use: 59
  Current use: 29

aGED: General Educational Development.

Ethical Considerations

All study procedures were approved by the Harvard University Area institutional review board (protocol IRB22-0012; “Automated Real Time Safety and Monitoring Study”). All participants provided informed consent. Following the screening survey, eligible participants were sent the consent form and baseline survey. The informed consent form contained specific language stating that real-time responses were not being monitored by the study team and that automated messages would encourage the use of crisis resources.

Procedure

In the baseline survey, participants completed questionnaires assessing STBs, psychiatric symptoms, and mental health care history. At the end of the survey, participants were provided with instructions for downloading the Metricwire smartphone app and then confirmed that they had downloaded the app before submitting the baseline survey for approval. Participants were compensated US $10 for completing the baseline survey. Participants were then sent, via the Prolific platform, an anonymous login for the Metricwire smartphone app.

The real-time monitoring period was 2 weeks long and began the day after participants logged into their anonymous accounts. Six surveys were sent each day. Five momentary surveys were sent semirandomly between 9 AM and 9 PM and spaced at least 90 minutes apart. One daily survey was sent at 8 PM each day. The momentary survey stayed open to complete for 1 hour and the daily survey for 2 hours. The last momentary survey of the day and the daily survey could overlap; ultimately, 11% of momentary surveys were submitted between 8 PM and 10 PM (after the daily survey was prompted). Participants were paid US $0.25 for each survey they completed. If participants completed 5 or more surveys in a day, they received a US $1.00 bonus for that day. All payments for the real-time surveys were sent the day after the real-time monitoring period was complete.
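The sketch below illustrates one way the semirandom momentary schedule described above could be generated (5 prompts between 9 AM and 9 PM, at least 90 minutes apart, plus a fixed 8 PM daily survey). It is an illustration under stated assumptions; the actual prompt scheduling was handled by the Metricwire platform.

```python
# Illustrative sketch of the prompt schedule described above: 5 semirandom
# momentary prompts between 9 AM and 9 PM, at least 90 minutes apart, plus a
# fixed daily survey at 8 PM. Actual scheduling was handled by Metricwire.
import random

def momentary_schedule(n_prompts: int = 5, start_min: int = 9 * 60,
                       end_min: int = 21 * 60, gap_min: int = 90) -> list[str]:
    while True:  # rejection sampling until the spacing constraint is met
        times = sorted(random.randint(start_min, end_min) for _ in range(n_prompts))
        if all(b - a >= gap_min for a, b in zip(times, times[1:])):
            return [f"{t // 60:02d}:{t % 60:02d}" for t in times]

print("Momentary prompts:", momentary_schedule())
print("Daily survey: 20:00")
```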

On the day after the last day of the real-time monitoring period, participants were sent an exit survey via MetricWire asking them about their experiences in the study. If a participant reported that they received the ResourceBot, they were then asked questions about the acceptability and usability of it. The exit survey stayed open to complete for 8 hours. For completing the exit survey, participants were paid US $3. With this payment structure, participants could earn up to US $48 in the study. All payments were sent through Prolific.

Baseline Assessment Surveys

In the baseline assessment, participants completed a self-report version of the Self-Injurious Thoughts and Behaviors Interview—Revised (SITBI-R; Fox et al [17]). The SITBI-R measures the presence, frequency, recency, and other aspects of self-injurious thoughts and behaviors. The SITBI-R has shown excellent reliability and validity [17]. Participants also completed the Patient Health Questionnaire-9 (PHQ-9; Kroenke et al [18]). The PHQ-9 is a widely used brief measure of the severity of symptoms of depression in the past 2 weeks. PHQ-9 scores range from 0 to 27. In this study, the PHQ-9 had excellent internal consistency (Cronbach α=0.89). The Generalized Anxiety Disorder 7-item (GAD-7; Spitzer et al [19]) was also administered at baseline. The GAD-7 is a brief measure of the severity of symptoms of anxiety in the past 2 weeks. GAD-7 scores range from 0 to 21. In this study, the GAD-7 had excellent internal consistency (Cronbach α=0.90).
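For reference, Cronbach α, the internal consistency statistic reported above for the PHQ-9 and GAD-7, can be computed from item-level data as in the sketch below. The data here are randomly generated (so α will be near zero) and are for illustration only; they are not the study data.

```python
# Sketch of the Cronbach alpha statistic reported for the PHQ-9 and GAD-7,
# computed here on made-up item-level data for illustration only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = participants, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
fake_phq9 = rng.integers(0, 4, size=(74, 9))  # 74 participants x 9 items, scores 0-3
print(round(cronbach_alpha(fake_phq9), 2))    # near zero because the data are random
```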

We also measured participants’ mental health treatment history and crisis service use with measures created for this study. For mental health history, participants were asked if they had ever received any form of mental health treatment from a professional. If so, they were asked if they had ever received talk therapy for mental health and if they were currently receiving talk therapy. Participants were also asked if they had ever been prescribed medications for mental health reasons and if they were currently being prescribed medications for mental health reasons. For crisis resource use, participants were asked if they had “ever called a suicide crisis lifeline (eg, 988 Suicide & Crisis Lifeline)” and if they had ever “texted a suicide crisis service (eg, Crisis Text Line).” If a participant endorsed using a crisis resource, they were asked how helpful it was on a 0 (not at all helpful) to 5 (very helpful) scale and how likely they were to use the resource in the future on a 0 (not at all likely) to 5 (very likely) scale. All participants were also asked, “Below are a few reasons that people may not call or text suicide crisis services. Do any of these speak to you as reasons why you wouldn't call or text a suicide crisis service in the future?” The reasons listed were: “I can handle things on my own,” “Too much time/effort,” “No professionals,” “No police,” “They won’t help,” and “None of these reasons.”

EMA Items

The momentary and daily surveys contained multiple items on suicidal thinking, affective states, and cognitive processes. For the present analyses, the most relevant item is the suicidal intent item, which was used to trigger the ResourceBot. The exact item wording was “How strong is your intent to kill yourself right now? Intent = to what extent are you actually going to kill yourself.” Participants rated this item on a 0 (“not at all”) to 10 (“very strong”) scale. This item has been used in previous real-time studies [20,21] and a similar item has shown predictive validity for suicidal behavior [22]. A daily version of the item was also included, “Today how strong was your intent to kill yourself? Intent = to what extent are you actually going to kill yourself.” The daily item had the same scale and anchors as the momentary item. Suicidal intent was selected to trigger the ResourceBot because, in a consensus statement, it was identified as a key piece of information for determining real-time risk [4]. Furthermore, the level of suicidal intent has been used to determine interventions in other real-time risk protocols [5,6].

Another relevant item in the daily survey asked about crisis resource use. Participants were specifically asked, “Today, did you use any crisis resources? For example, did you call 988?” with the response options of yes or no. If a participant selected yes, they were then asked, “What crisis resource did you use? For example, calling 988, texting crisis text line.” Participants who selected yes were also asked how helpful the resource was. These items were included to capture additional data on crisis resource use in case participants did not complete the ResourceBot follow-up survey.

Crisis Resources

Throughout the real-time monitoring period, participants could open an always-available list of resources (ie, the “resources survey”) in the MetricWire app. The resources survey contained a list of the following resources: National Suicide Prevention Lifeline, Crisis Text Line, Trans Lifeline, Veterans Crisis Line Chat, Veterans Crisis Line, and Trevor Project Chat. A participant could select a resource from the list, which would take them to a page with more information on the resource and how to contact it. The page included a direct link to the webpage of the resource. Participants were told that “the research team may be able to see if you select a resource, but the team cannot see what you communicate or share with the support lines.” These resources were selected because they offered support through different means of communication (eg, calling or texting) and support for different populations. Additionally, at the end of all real-time surveys, participants were reminded that the National Suicide Prevention Lifeline, Crisis Text Line, and their local emergency department were helpful resources.

ResourceBot Tool

The ResourceBot tool was built directly in MetricWire. When a participant submitted a survey with a suicidal intent rating greater than 1, it would trigger the deployment of the ResourceBot tool. This threshold of suicidal intent was lower than the thresholds used in previous studies [5] because the intervention is lower intensity compared with other interventions (eg, clinician contact). As a stop rule, the ResourceBot tool was deployed at most once a day to limit burden and increase engagement. The ResourceBot tool was sent immediately after the survey submission. If a participant did not open the ResourceBot survey, a reminder notification was sent 5 minutes after the initial deployment. The ResourceBot survey stayed open for 4 hours. Once a participant opened the ResourceBot survey, they were guided through a protocol that (1) confirmed current distress, (2) offered crisis resources, (3) identified barriers to crisis resource use, and (4) provided psychoeducation on resources. An example interaction is provided in Figures 2 and 3. The tool is designed to overcome common concerns and misconceptions (ie, barriers) related to using crisis services, thereby increasing the use of these services.
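A minimal sketch of the deployment rules described in this subsection is shown below: a suicidal intent rating greater than 1 triggers ResourceBot, the tool is deployed at most once per day, a reminder is sent after 5 minutes if the survey is unopened, and the survey closes after 4 hours. The function and variable names are illustrative; the actual logic was configured within MetricWire.

```python
# Minimal sketch of the deployment rules described above. Names are
# illustrative; the study configured this logic inside MetricWire.
from datetime import datetime, timedelta

INTENT_THRESHOLD = 1          # deploy when the intent rating is greater than this
REMINDER_DELAY = timedelta(minutes=5)
OPEN_WINDOW = timedelta(hours=4)

def should_deploy(intent: int, submitted_at: datetime, last_deploy_date) -> bool:
    """Apply the trigger threshold and the once-per-day stop rule."""
    return intent > INTENT_THRESHOLD and submitted_at.date() != last_deploy_date

now = datetime(2024, 1, 5, 14, 30)
if should_deploy(intent=4, submitted_at=now, last_deploy_date=None):
    deploy_time = now                      # sent immediately after submission
    reminder_time = now + REMINDER_DELAY   # reminder if the survey is not opened
    close_time = now + OPEN_WINDOW         # survey closes after 4 hours
    print(deploy_time, reminder_time, close_time)
```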

ResourceBot Follow-Up Survey

One hour after the ResourceBot survey was submitted, a brief follow-up survey was sent. The survey asked if a participant used a resource since they were sent the ResourceBot. If yes, they were asked what resource they used and how helpful the resource was. If not, participants were asked why they did not use a resource. The response options for why they did not use a resource included: “Too much time/effort,” “Didn’t think it would help,” “Felt better without them,” “I handled it on my own,” and “Other.” Participants could enter more information into an open-ended text field if they selected “Other.”

Exit Survey

At the end of the EMA period, participants were sent a survey asking them about their experiences in the study. All participants were sent questions about the overall study, and participants who were sent the ResourceBot were sent additional questions about the ResourceBot. All participants were asked, “Did you feel comfortable answering the cellphone questions honestly?” and rated it on a scale of 0 (not at all) to 5 (very much). All participants were asked, “Did you receive the ResourceBot, which directed you to crisis services, in the study?” and shown a picture of the ResourceBot as a reminder. If a participant endorsed receiving the ResourceBot, they were sent the Client Satisfaction Questionnaire (CSQ; Larsen et al [23]) and the System Usability Scale (SUS; Lewis and Sauro [24]). The CSQ is an 8-item scale that produces a score from 8 to 32, with higher scores indicating greater satisfaction. In this study, the CSQ was used as a measure of acceptability and had excellent internal consistency (Cronbach α=0.96). The SUS is a 10-item scale that produces an original score of 0 to 40, which is then multiplied by 2.5 to create scaled scores of 0 to 100, with higher scores indicating that the user rates the system as more usable. In this study, the SUS had excellent internal consistency (Cronbach α=0.91).
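The scoring described above can be summarized in the short sketch below: the CSQ-8 is a simple sum (8 to 32), and the SUS raw score (0 to 40) is multiplied by 2.5 to yield a 0 to 100 score. The SUS item recoding shown follows the standard published scoring, and the example item responses are made up.

```python
# Sketch of the scale scoring described above: CSQ-8 summed to 8-32; SUS raw
# score (0-40) multiplied by 2.5 to give 0-100. Example responses are made up.

def score_csq(items: list[int]) -> int:
    """CSQ-8: eight items each scored 1-4; higher = more satisfied."""
    assert len(items) == 8 and all(1 <= i <= 4 for i in items)
    return sum(items)

def score_sus(items: list[int]) -> float:
    """SUS: ten items each scored 1-5; odd items contribute (score - 1) and
    even items contribute (5 - score), giving a raw 0-40 score scaled by 2.5."""
    assert len(items) == 10 and all(1 <= i <= 5 for i in items)
    raw = sum((s - 1) if idx % 2 == 0 else (5 - s) for idx, s in enumerate(items))
    return raw * 2.5

print(score_csq([3, 3, 2, 3, 2, 3, 2, 3]))         # 21, close to the study mean
print(score_sus([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))   # 75.0
```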

Statistical Analysis

For all analyses, we focus on descriptive statistics. For the first aim of whether people engage with the ResourceBot, we focus on how often participants open and submit the ResourceBot survey. Although there are multiple ways to operationalize engagement [25], we highlight this simple definition of engagement for this first examination of ResourceBot. We also report on data provided within the ResourceBot survey, including the endorsed barriers to using crisis resources.

One factor that could have affected participants’ engagement with the ResourceBot is the current level of suicidal intent. For example, a participant with higher levels of current intent (eg, 9 out of 10) and possibly greater risk may engage with the tool in a different way than a participant with lower levels of current intent (eg, 2 out of 10). To understand the relationship between the level of intent prior to the ResourceBot and engagement with the ResourceBot, we ran additional analyses. We identified the momentary survey submitted closest in time (ie, the trigger survey) to the submitted ResourceBot survey. This resulted in momentary intent ratings for 181 of the 192 ResourceBot engagements; the 11 other engagements were triggered by a daily survey report. We focused on the 181 engagements for the subsequent descriptive analyses. The average time difference between the submitted momentary survey and the submitted ResourceBot survey was 7.6 minutes. We then categorized the momentary intent levels into low and high levels. We operationalized low as a score of 2, 3, or 4 and high as a score of 5 or greater. This resulted in 92 low-intent ResourceBot engagements and 89 high-intent ResourceBot engagements. We present descriptive statistics on data within the ResourceBot encounter by momentary intent level.
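The sketch below illustrates the matching and categorization steps described above: each submitted ResourceBot survey is paired with the closest-in-time momentary survey for the same participant, and the triggering intent rating is labeled low (2-4) or high (5 or greater). The column names and example rows are hypothetical; the study's actual analysis code is available on the Open Science Framework [26].

```python
# Illustrative sketch of the descriptive analysis described above: match each
# submitted ResourceBot survey to the closest-in-time momentary (EMA) survey
# and label the triggering intent as low (2-4) or high (5 or greater).
# Column names and example data are hypothetical.
import pandas as pd

ema = pd.DataFrame({
    "pid": [1, 1, 2],
    "submitted": pd.to_datetime(["2024-01-05 14:22", "2024-01-05 18:40", "2024-01-06 10:05"]),
    "intent": [3, 7, 2],
})
bot = pd.DataFrame({
    "pid": [1, 2],
    "submitted": pd.to_datetime(["2024-01-05 14:30", "2024-01-06 10:12"]),
})

matched = pd.merge_asof(
    bot.sort_values("submitted"), ema.sort_values("submitted"),
    on="submitted", by="pid", direction="nearest",
)
# Ratings below 2 would not have triggered ResourceBot, so anything under 5 is "low".
matched["intent_level"] = matched["intent"].apply(lambda x: "high" if x >= 5 else "low")
print(matched[["pid", "intent", "intent_level"]])
```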

Due to the potential for habituation to ResourceBot content with multiple deployments over time, we also isolated each participant’s first submitted engagement with the ResourceBot and presented descriptive statistics on data within this first encounter with the ResourceBot.

For the second aim, whether people contact crisis resources after engaging with ResourceBot, we focus on how often people report using crisis resources in the ResourceBot follow-up survey. We also report on crisis resource use reported in the daily survey as well as the frequency of viewing the crisis resources. For the third aim, we report descriptive statistics on exit survey scores for acceptability and usability. We also report additional exit survey data on the honesty of responding. Together, these analyses use multiple sources to comprehensively describe the feasibility, acceptability, and utility of a real-time crisis resource tool. All data analysis code and results can be viewed on the Open Science Framework [26].

Results

Descriptive Statistics

Baseline data on lifetime use of crisis hotlines, experiences with crisis hotlines, and barriers to future use of crisis hotlines are provided in Table 2. Most participants (49/74, 66% for calling and 58/74, 78% for texting) had not used crisis hotlines in their life. Participants who had previously used crisis hotlines reported that, in general, the hotlines were not helpful (calling mean helpfulness=1.12 out of 5; texting mean helpfulness=1.19 out of 5). Participants reported that, on average, they were not very likely to use hotlines in the future. The most frequently endorsed reason for not using crisis hotlines in the future was the belief that they would not help.

Table 2. Baseline crisis hotline lifetime histories.

Lifetime called crisis line, n (%)
  Yes: 25 (34)
Lifetime texted crisis line, n (%)
  Yes: 16 (22)
Helpfulness of calling crisis linea, mean (SD): 1.12 (1.54)
Helpfulness of texting crisis lineb, mean (SD): 1.19 (1.33)
How likely to call crisis line in the futurec, mean (SD): 1.14 (1.47)
How likely to text crisis line in the futured, mean (SD): 1.35 (1.62)
Reasons for not using crisis lines in the futuree, n
  I can handle things on my own: 26
  Too much time or effort: 12
  No professionals: 12
  No police: 17
  They would not help: 49
  None of these reasons: 6

aOnly answered by participants who reported they had called a crisis line (n=25).

bOnly answered by participants who reported they had texted a crisis line (n=16).

cAnswered by all participants (n=74).

dAnswered by all participants (n=74).

eParticipants could select multiple reasons.

Participants completed 2909 momentary surveys and 679 daily surveys. A total of 74 participants completed at least 1 momentary survey and 72 participants completed at least 1 daily survey. The average number of momentary surveys submitted was 39.3 (range 2 to 70), and the average number of daily surveys submitted was 9.4 (range 1 to 14). The average compliance rate was 56% (range 3% to 100%) for the momentary surveys and 67% (range 7% to 100%) for the daily surveys. The daily averages of the EMA and daily survey scores on the intent to kill oneself item are shown in Figure 4.

Figure 4. Mean suicidal intent scores over time in the study. (A) Mean EMA intent scores by day in study. (B) Mean daily intent scores by day in study. Blue line is the linear trend of the mean intent score by day in the study. EMA: ecological momentary assessment.

The ResourceBot was deployed 221 times to 36 participants. A total of 35 participants engaged with the ResourceBot at least once. Of these 35 participants, 28 (80%) were sent and engaged with the ResourceBot multiple times, and the other 7 (20%) engaged with it only once.

The exit survey was completed by 44 participants. We compared participants who completed the exit survey (n=44) to those who did not (n=30) on EMA compliance percentage, daily survey compliance percentage, mean EMA suicidal intent severity, and mean daily suicidal intent severity. We conducted this descriptive retention analysis to understand whether participants who completed the exit survey differed systematically from those who did not. We found that participants who completed the exit survey had higher EMA (72% vs 34%) and daily survey (81% vs 46%) compliance rates than those who did not complete the exit survey. We found similar mean EMA (1.31 completers vs 1.15 noncompleters) and daily (1.48 completers vs 1.43 noncompleters) suicidal intent ratings by exit survey status.

Among those who completed the exit survey, 19 reported receiving the ResourceBot. We cross-checked participants’ self-reports of receiving the ResourceBot with the ResourceBot deployment data. Eighteen of the 19 who reported receiving the ResourceBot in the exit survey matched with the ResourceBot deployment data. The one participant who reported receiving the ResourceBot but did not actually receive it was excluded from the ResourceBot exit survey analysis. One of the 18 participants did not complete all items of the CSQ; therefore, we report on 17 participants for the CSQ.

Do People Engage With the ResourceBot?

There was 87% (192 out of 221) overall engagement (defined as opening and submitting) with the ResourceBot tool and 86% (165 out of 192) compliance with the ResourceBot follow-up survey. The different components of the ResourceBot and the frequency of responses are provided in Table 3. In the majority of deployments (122 out of 192), participants confirmed that they were in distress. For participants who reported that they were not in distress, the most commonly reported reason was being used to these thoughts or feelings. Text resources (n=81) were more frequently selected than call resources (n=19). For the likelihood of using resources, the most common response was not likely (n=48). For barriers to using resources, the most common responses were that “it won’t help” and “I can handle it on my own.” As shown in Table 3, participants often skipped out of the ResourceBot at various stages of the tool.

Table 3. ResourceBot responses for all ResourceBot engagements.
Question, response (n)
It sounds like you’re feeling pretty down or having a hard time. Is that right? (n=192)
  • Yes (122)
  • No (14)
  • Skipped (56)
Why are you having high intent but not feeling down/having a hard time? (n=8)
  • I am used to these thoughts/feelings (7)
  • I don’t need help for these thoughts/feelings (1)
Prefer places you could call or places you could text? (n=130)
  • Call (19)
  • Text (81)
  • Skipped (30)
How likely to use the resources shared? (n=100)
  • Not Likely (48)
  • Somewhat Likely (34)
  • Very Likely (6)
  • Skipped (12)
Reasons people may not use these resources (n=82; shown if not likely or somewhat likely to use resources)
  • It won’t help (37)
  • I can handle it on my own (26)
  • Too much time/effort (5)
  • No police (5)
  • I may not use these resources for a reason not otherwise listed (4)
  • No professionals (3)
  • Skipped (2)
Reasons people may not use these resources (n=6; shown if very likely to use resources)
  • It won’t help (1)
  • I can handle it on my own (2)
  • Too much time/effort (0)
  • No police (0)
  • Not really - another reason (3)
  • No professionals (0)
  • Skipped (0)

Responses to the ResourceBot by level of momentary intent are presented in Table 4. The patterns of responding were similar across low and high levels of intent. For example, across both low and high levels of intent, participants most commonly reported being not likely to use the resources presented and most frequently endorsed the barrier of believing that the resources would not help. Results from the first encounter with ResourceBot only are presented in Table 5. Results from the first encounters showed similar trends to data from all encounters.

Table 4. ResourceBot responses by ecological momentary assessment suicidal intent severity (n=182).
It sounds like you’re feeling pretty down or having a hard time. Is that right?
  Low intent: Yes (53), No (8), Skipped (32)
  High intent: Yes (64), No (6), Skipped (19)
Why are you having high intent but not feeling down/having a hard time?
  Low intent: I am used to these thoughts/feelings (5), I don’t need help for these thoughts/feelings (0)
  High intent: I am used to these thoughts/feelings (2), I don’t need help for these thoughts/feelings (1)
Prefer places you could call or places you could text?
  Low intent: Call (7), Text (41), Skipped (10)
  High intent: Call (12), Text (36), Skipped (17)
How likely to use the resources shared?
  Low intent: Not Likely (21), Somewhat Likely (18), Very Likely (2), Skipped (7)
  High intent: Not Likely (27), Somewhat Likely (12), Very Likely (4), Skipped (5)
Reasons people may not use these resources (shown if not likely or somewhat likely to use resources)
  Low intent: It won’t help (15), I can handle it on my own (15), Too much time/effort (3), No police (3), I may not use these resources for a reason not otherwise listed (2), No professionals (0), Skipped (1)
  High intent: It won’t help (21), I can handle it on my own (9), Too much time/effort (2), No police (2), I may not use these resources for a reason not otherwise listed (2), No professionals (3), Skipped (0)
Reasons people may not use these resources (shown if very likely to use resources)
  Low intent: It won’t help (0), I can handle it on my own (2), Too much time/effort (0), No police (0), Not really - another reason (0), No professionals (0), Skipped (0)
  High intent: It won’t help (1), I can handle it on my own (0), Too much time/effort (0), No police (0), Not really - another reason (3), No professionals (0), Skipped (0)
Table 5. ResourceBot responses (first engagement only).
Question, response (n)
It sounds like you’re feeling pretty down or having a hard time. Is that right? (n=35)
  • Yes (16)
  • No (0)
  • Skipped (19)
Prefer places you could call or places you could text? (n=16)
  • Call (1)
  • Text (12)
  • Skipped (3)
How likely to use the resources shared? (n=13)
  • Not Likely (4)
  • Somewhat Likely (5)
  • Very Likely (0)
  • Skipped (4)
Reasons people may not use these resources (n=9)
  • It won’t help (5)
  • I can handle it on my own (2)
  • Too much time/effort (0)
  • No police (1)
  • I may not use these resources for a reason not otherwise listed (1)
  • No professionals (0)
  • Skipped (0)

Do People Contact Crisis Resources After Engaging With ResourceBot?

In the ResourceBot follow-up survey, 0 participants reported using crisis resources. Participants could endorse multiple reasons for not using resources in the follow-up survey. The frequencies of the reasons for not using the resources were as follows: did not think it would help (76/165, 46%), handled it on my own (70/165, 42%), too much time and effort (33/165, 20%), felt better without them (24/165, 15%), and other (5/165, 3%). This question was not answered on 2 occasions.

In the daily survey, participants reported using crisis resources that day a total of 3 times, from 3 separate participants. One participant reported calling a crisis line, one reported attending group therapy, and one did not remember the exact resource they used. The helpfulness ratings for the 3 instances were 0, 4, and 5 (out of 5).

The crisis resources survey (ie, the constantly available list of resources that participants could open) was opened and submitted a total of 312 times across 59 participants. On 113 occasions, participants opened the survey and exited without selecting a resource to view; we therefore report on the remaining 199 instances in which participants selected a resource to view. The frequency of viewing by time in the study is shown in Figure 5, where day 1 refers to the day a participant first logged into the Metricwire smartphone app and day 2 refers to the first day of smartphone surveys. The frequency of viewing for each type of resource was as follows: Crisis Text Line (69/197, 35%), National Suicide Prevention Lifeline (58/197, 29%), Trevor Project Chat (29/197, 15%), Trans Lifeline (17/197, 9%), Veteran’s Crisis Line Chat (13/197, 6%), and Veteran’s Crisis Line (13/197, 6%).

Figure 5. Number of views of resources by day in study. Study day 1 refers to the day a participant first logged into the Metricwire smartphone app and day 2 refers to the first day of smartphone surveys.

What Do People Report About the Acceptability and Usability of ResourceBot?

The mean for the CSQ was 21 (out of 32) and the SD was 5.96. The mean for the SUS was 76.7 (out of 100) and the SD was 17.06. The mean and SD for all items on both scales are provided in Multimedia Appendix 1. For the comfort with answering questions honestly item, the average rating was 4.30 (out of 5).

Discussion

The aims of the current project were to adapt an evidence-based BRI into a new tool for smartphone-based delivery (ResourceBot) that aims to increase the use of crisis resources and to test the feasibility, acceptability, and utility of this tool in real-time monitoring. There were 3 key findings from this study. First, it is feasible to develop and deploy a real-time resource tool. Second, there was low use of crisis resources overall (including those specifically associated with ResourceBot) in the study. Third, participants rated the ResourceBot with moderate satisfaction and good usability. Each of these findings warrants further comment.

We found that it is possible to build and deploy a real-time crisis resource tool and to achieve high engagement with it. Much has been written about the promise of smartphone technologies for suicide research and intervention [11,27,28]. Although a plethora of mobile apps exist for suicide prevention [29], little systematic research to date has examined mobile interventions for suicide prevention [30]. This study found that in a severe sample (ie, recent suicidal thoughts, elevated symptoms of depression and anxiety), participants engaged with the ResourceBot most of the time and provided helpful data on their use of the tool. In this paper, we intentionally used a simple decision rule (eg, if suicidal intent is greater than 1, then send ResourceBot) to increase the feasibility and interpretability of findings. To promote greater engagement, future research could increase the complexity in 2 ways. First, the decision rule for the tool could be more adaptive and tailored to the individual, for example, deploying the tool based on participants’ deviations from their own within-person average level of suicidal intent or using a rule that incorporates additional variables beyond suicidal intent. Second, a greater number of messages with distinct content could be used in the tool. The barriers and psychoeducation messages were static in this study, which could have resulted in habituation to the ResourceBot and low use of resources. Therefore, a broader, more dynamic message base to draw on may promote greater engagement over time in future work. In short, this study found that it is feasible to deliver tools for participant safety immediately after participants complete real-time monitoring surveys. Future work can expand upon what type of tools are offered in that immediate moment after a participant has reported suicidal thoughts.
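As a purely hypothetical illustration of the more adaptive decision rule suggested above, the sketch below deploys the tool when a rating exceeds a participant's own running average by a margin, rather than using a fixed cutoff. This is a future-direction sketch, not the rule used in this study.

```python
# Hypothetical sketch of a person-specific decision rule: deploy when the
# current intent rating exceeds the participant's own running mean by a margin.
# Future-direction illustration only; this study used a fixed intent > 1 rule.
from statistics import mean

def within_person_trigger(history: list[int], current: int, margin: float = 2.0) -> bool:
    """Deploy if the current rating is at least `margin` points above the
    participant's average of prior ratings (fall back to the fixed rule of
    intent > 1 until enough history has accumulated)."""
    if len(history) < 5:
        return current > 1
    return current >= mean(history) + margin

print(within_person_trigger([0, 1, 1, 2, 1, 0], current=4))  # True
```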

The second finding of this work is that no participants reported using crisis hotlines in the follow-up survey after the ResourceBot. This finding was counter to our expectations given prior work finding that a BRI can increase the self-reported use of crisis resources [9]. Our work highlights how, in digital health research, one cannot assume findings from one intervention context necessarily translate to another context. There are numerous reasons why there may be differences between past work and this study, including differences in the trigger for the BRI and in the participants in the studies. For example, prior work was done with naturalistic users of social media platforms, whereas this work was done with EMA study participants. Most participants in this study had a lifetime history of mental health care and many had previously used crisis hotlines. At baseline, participants reported on average feeling unlikely to use crisis hotlines in the future and therefore may have been more resistant to the BRI. Past work has also been conducted with larger samples (ie, hundreds of participants), and we might have seen more participants use hotlines with a larger sample size.

More broadly, given the increasing role of crisis hotlines in national suicide prevention efforts [31] and suicide research safety protocols [5], this study highlights the need to continue to understand participants’ concerns about and experiences with these hotlines. A recent nationally representative survey found that about 5% (23/388) of participants with serious distress had used the 988 Lifeline and only 29% (7/23) of those participants reported being very likely to use it in the future [32]. This work suggests that the skepticism of crisis hotlines (eg, beliefs that they won’t help) observed in this study is not unique and that there is perhaps a need to offer a broader range of resources in future work. For example, providing suggestions of coping skills (eg, distraction and relaxation) from interventions such as the safety planning intervention may be incrementally useful [33,34].

Finally, participants rated the ResourceBot with good satisfaction and usability. According to one normative rating scale for the SUS [35,36], the average score for ResourceBot in our study would receive a grade of “B.” According to another rating system [37], it would be considered “good.” These findings provide further support for the feasibility of the real-time deployment of suicide prevention tools. To our knowledge, publicly available norms for the CSQ are not available. Using a transformation suggested by the scale developers, in which the original total score is multiplied by 3.125 to put it on a 25 to 100 scale, the average CSQ score for ResourceBot would be 65.6. This suggests significant room for improvement with the ResourceBot tool. It is possible that participants’ skepticism of crisis lines influenced their satisfaction with ResourceBot given that the tool promoted the use of these crisis lines. In the future, offering a broader range of messages, resources, and skills may increase satisfaction with the tool. Another important finding from the exit survey is that participants reported, on average, being very comfortable answering questions honestly in the study. Examination of temporal trends in the intent to kill oneself scores also showed no changes in severity by day in the study. If participants were trying to avoid triggering the ResourceBot tool, one might expect to see lower intent scores toward the end of the study period, and we did not observe this. Both the exit survey honesty ratings and the lack of temporal trends in intent scores suggest a lack of reactivity to the ResourceBot tool, which is a concern with real-time interventions for suicide prevention [6]. The lack of reactivity could also be due, at least in part, to the anonymous nature of the study [38] and the clear language in the consent form stating that responses were not being actively monitored. The structure of the study and the ResourceBot could have contributed to participants feeling more comfortable disclosing suicidal thoughts [39].

This study provides new information on real-time risk management and crisis resource use but still has important limitations that warrant discussion. First, the current sample was a convenience sample recruited through a web-based research platform. It is unclear how the results would generalize to a clinical sample. Second, the threshold used to deploy the ResourceBot tool was relatively low compared with thresholds used in past research [6]. It is possible that participants did not use the resources because they did not consider their own current suicidal thoughts severe enough to warrant reaching out to a crisis hotline. To avoid overwhelming participants with the ResourceBot, we implemented a stop rule so that the ResourceBot was deployed at most once a day. Without this stop rule, the number of deployments would have been 939 rather than 221. Nevertheless, most participants who were sent the ResourceBot in the current study were sent it multiple times. As shown in Figure 4, the average levels of suicidal intent were relatively low, which is one reason why the lower threshold was used. These issues of severity and frequency highlight the challenge of selecting an appropriate threshold of suicidal intent. Engagement with the ResourceBot tool and use of the crisis resources may have been different if a higher threshold had been used. Third, the ResourceBot deployment was contingent on compliance with the real-time surveys, and it is possible that participants were less likely to fill out a survey when they were experiencing higher levels of distress. More work is needed to better understand compliance in real-time monitoring studies and the best way to incentivize compliance [40]. Fourth, the compliance rate for the exit survey (59.4%) was relatively low, which could have biased the exit survey results. Finally, this study can provide information on the feasibility and usability of automated tools but cannot fully speak to the ethics of automated interventions versus clinician outreach interventions [4,41]. In this study, the Prolific platform required that participants maintain anonymity; therefore, clinician contact in this setting was not possible. Decisions related to the type and timing of real-time interventions depend upon the context of the study as well as discussions with ethics boards and regulatory bodies [4,42].

Future studies could build upon this study in multiple ways. First, this study only offered crisis lines as resources. Future work could offer more types of resources in this automated tool format, such as reminders of or skills for coping with suicidal thoughts. Second, this study focused on feasibility and acceptability, and no randomization was used. Future work could consider a between-participants randomized controlled trial in which different groups are provided with an automated interactive intervention or pop-up reminders. Future work could also use a within-person micro-randomized trial design [43] in which participants are randomized to different types of automated tools at different levels of suicidal thinking [11]. This type of trial design could allow future studies to empirically test the effect of presenting different kinds of resources on future resource use or self-reported momentary suicidal thinking. Finally, this work highlights the immense complexity and challenge of building ethical and effective real-time interventions for suicide prevention. Future work could use focus groups and other qualitative methods with individuals with lived experience to better understand and develop tools that would be the most helpful to people during moments of elevated suicide risk [44].

Mobile technologies have the potential to advance the understanding of suicide and contribute to new suicide prevention approaches. These technologies, however, present immense ethical challenges as researchers grapple with both collecting helpful data and preserving participant safety. This study highlights the nuance of this issue and the need for the rigorous development of real-time safety tools.

Acknowledgments

This study was funded by the APF/The Trust Grant in Honor of Eric A. Harris, EdD, JD. DDLC is funded by the National Science Foundation Graduate Research Fellowship (grant DGE-1745303) and the National Institute of Mental Health (grant F31MH130055). KHB is supported by the National Institute of Mental Health (grant K23MH120436). MKN is funded by grants from the National Institute of Mental Health (P50MH129699), the Chet and Will Griswold Fund, and a gift from the Fuss Family Research Fund. This content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation or the National Institutes of Health. The authors wish to thank all participants for their participation and valuable feedback. The authors also wish to thank Kayla Edwards for her help with the study and Robert Morris for feedback on the ResourceBot.

Conflicts of Interest

MKN receives publication royalties from Macmillan, Pearson, and UpToDate. He has been a paid consultant in the past 3 years for Microsoft Corporation, the Veterans Health Administration, and COMPASS Pathways, and for legal cases regarding a death by suicide. He has stock options in Cerebral Inc. He is an unpaid scientific advisor for Empatica, Koko, and TalkLife. EMK has been a paid consultant in the past 3 years for Boehringer Ingelheim Pharmaceuticals.

Multimedia Appendix 1

Images of the ResourceBot, the decision logic, and the mean (SD) for all items on both scales.

DOCX File , 3398 KB

  1. Kleiman EM, Nock MK. Real-time assessment of suicidal thoughts and behaviors. Curr Opin Psychol. 2018;22:33-37. [CrossRef] [Medline]
  2. Millner AJ, Robinaugh DJ, Nock MK. Advancing the understanding of suicide: the need for formal theory and rigorous descriptive research. Trends Cogn Sci. 2020;24(9):704-716. [FREE Full text] [CrossRef] [Medline]
  3. Kivelä L, van der Does WAJ, Riese H, Antypa N. Don't miss the moment: a systematic review of ecological momentary assessment in suicide research. Front Digit Health. 2022:4. [FREE Full text] [CrossRef]
  4. Nock MK, Kleiman EM, Abraham M, Bentley KH, Brent DA, Buonopane RJ, et al. PRCP. 2021;3(2):57-66. [FREE Full text] [CrossRef] [Medline]
  5. Bentley KH, Maimone JS, Kilbury EN, Tate MS, Wisniewski H, Levine MT, et al. Practices for monitoring and responding to incoming data on self-injurious thoughts and behaviors in intensive longitudinal studies: a systematic review. Clin Psychol Rev. 2021;90:102098. [FREE Full text] [CrossRef] [Medline]
  6. Bentley KH, Millner AJ, Bear A, Follet L, Fortgang RG, Zuromski KL, et al. Intervening on high-risk responses during ecological momentary assessment of suicidal thoughts: is there an effect on study data? Psychol Assess. 2024;36(1):66-80. [CrossRef] [Medline]
  7. Kleiman EM, Glenn CR, Liu RT. The use of advanced technology and statistical methods to predict and prevent suicide. Nat Rev Psychol. 2023;2(6):347-359. [FREE Full text] [CrossRef] [Medline]
  8. Blease C, Torous J. ChatGPT and mental healthcare: balancing benefits with risks of harms. BMJ Ment Health. 2023;26(1):e300884. [FREE Full text] [CrossRef] [Medline]
  9. Jaroszewski AC, Morris RR, Nock MK. Randomized controlled trial of an online machine learning-driven risk assessment and intervention platform for increasing the use of crisis services. J Consult Clin Psychol. 2019;87(4):370-379. [CrossRef] [Medline]
  10. Cohen K, Dobias M, Morris R, Schleider J. Improving uptake of mental health crisis resources: randomized test of a single-session intervention embedded in social media. J Behav Cogn Ther. 2023;33(1):24-34. [CrossRef]
  11. Coppersmith DDL, Dempsey W, Kleiman EM, Bentley KH, Murphy SA, Nock MK. Just-in-time adaptive interventions for suicide prevention: promise, challenges, and future directions. Psychiatry. 2022;85(4):317-333. [FREE Full text] [CrossRef] [Medline]
  12. Nahum-Shani I, Smith SN, Spring BJ, Collins LM, Witkiewitz K, Tewari A, et al. Just-in-time adaptive interventions (JITAIs) in mobile health: key components and design principles for ongoing health behavior support. Ann Behav Med. 2018;52(6):446-462. [FREE Full text] [CrossRef] [Medline]
  13. Nahum-Shani I, Shaw SD, Carpenter SM, Murphy SA, Yoon C. Engagement in digital interventions. Am Psychol. 2022;77(7):836-852. [FREE Full text] [CrossRef] [Medline]
  14. Douglas BD, Ewell PJ, Brauer M. Data quality in online human-subjects research: comparisons between MTurk, prolific, CloudResearch, qualtrics, and SONA. PLoS One. 2023;18(3):e0279720. [FREE Full text] [CrossRef] [Medline]
  15. Peer E, Rothschild D, Gordon A, Evernden Z, Damer E. Data quality of platforms and panels for online behavioral research. Behav Res Methods. 2022;54(4):1643-1662. [FREE Full text] [CrossRef] [Medline]
  16. Smith DMY, Lipson SM, Wang SB, Fox KR. Online methods in adolescent self-injury research: challenges and recommendations. J Clin Child Adolesc Psychol. 2022;51(2):143-154. [CrossRef] [Medline]
  17. Fox KR, Harris JA, Wang SB, Millner AJ, Deming CA, Nock MK. Self-injurious thoughts and behaviors interview-revised: development, reliability, and validity. Psychol Assess. 2020;32(7):677-689. [CrossRef] [Medline]
  18. Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613. [FREE Full text] [CrossRef] [Medline]
  19. Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092-1097. [CrossRef] [Medline]
  20. Bentley KH, Coppersmith DL, Kleiman EM, Nook EC, Mair P, Millner AJ, et al. Do patterns and types of negative affect during hospitalization predict short-term post-discharge suicidal thoughts and behaviors? Affect Sci. 2021;2(4):484-494. [FREE Full text] [CrossRef] [Medline]
  21. Coppersmith DDL, Ryan O, Fortgang RG, Millner AJ, Kleiman EM, Nock MK. Mapping the timescale of suicidal thinking. Proc Natl Acad Sci U S A. 2023;120(17):e2215434120. [FREE Full text] [CrossRef] [Medline]
  22. Wang SB, Coppersmith DDL, Kleiman EM, Bentley KH, Millner AJ, Fortgang R, et al. A pilot study using frequent inpatient assessments of suicidal thinking to predict short-term postdischarge suicidal behavior. JAMA Netw Open. 2021;4(3):e210591. [FREE Full text] [CrossRef] [Medline]
  23. Larsen DL, Attkisson CC, Hargreaves WA, Nguyen TD. Assessment of client/patient satisfaction: development of a general scale. Eval Program Plann. 1979;2(3):197-207. [CrossRef] [Medline]
  24. Lewis JR, Sauro J. The factor structure of the system usability scale. In: Kurosu M, editor. Human Centered Design. Lecture Notes in Computer Science. San Diego, CA, USA. Springer; 2009:94-103.
  25. Short CE, DeSmet A, Woods C, Williams SL, Maher C, Middelweerd A, et al. Measuring engagement in eHealth and mHealth behavior change interventions: viewpoint of methodologies. J Med Internet Res. 2018;20(11):e292. [FREE Full text] [CrossRef] [Medline]
  26. ResourceBot: developing and testing an automated real-time tool for promoting crisis resource use for suicide risk. OSF HOME. URL: https://osf.io/yxmdg/?view_only=adb42a1907eb46638fba1ff9032a0ac4 [accessed 2024-09-26]
  27. Allen NB, Nelson BW, Brent D, Auerbach RP. Short-term prediction of suicidal thoughts and behaviors in adolescents: can recent developments in technology and computational science provide a breakthrough? J Affect Disord. 2019;250:163-169. [FREE Full text] [CrossRef] [Medline]
  28. Torous J, Larsen ME, Depp C, Cosco TD, Barnett I, Nock MK, et al. Smartphones, sensors, and machine learning to advance real-time prediction and interventions for suicide prevention: a review of current progress and next steps. Curr Psychiatry Rep. 2018;20(7):51. [CrossRef] [Medline]
  29. Wilks CR, Chu C, Sim D, Lovell J, Gutierrez P, Joiner T, et al. User engagement and usability of suicide prevention apps: systematic search in app stores and content analysis. JMIR Form Res. 2021;5(7):e27018. [FREE Full text] [CrossRef] [Medline]
  30. Jiménez-Muñoz L, Peñuelas-Calvo I, Díaz-Oliván I, Gutiérrez-Rojas L, Baca-García E, Porras-Segovia A. Suicide prevention in your pocket: a systematic review of ecological momentary interventions for the management of suicidal thoughts and behaviors. Harv Rev Psychiatry. 2022;30(2):85-99. [CrossRef] [Medline]
  31. Miller AB, Oppenheimer CW, Glenn CR, Yaros AC. Preliminary research priorities for factors influencing individual outcomes for users of the US national suicide prevention Lifeline. JAMA Psychiatry. 2022;79(12):1225-1231. [CrossRef] [Medline]
  32. Purtle J, McSorley AMM, Adera AL, Lindsey MA. Use, potential use, and awareness of the 988 suicide and crisis lifeline by level of psychological distress. JAMA Netw Open. 2023;6(10):e2341383. [FREE Full text] [CrossRef] [Medline]
  33. Stanley B, Martínez-Alés G, Gratch I, Rizk M, Galfalvy H, Choo TH, et al. Coping strategies that reduce suicidal ideation: an ecological momentary assessment study. J Psychiatr Res. 2021;133:32-37. [FREE Full text] [CrossRef] [Medline]
  34. Stanley B, Brown GK. Safety planning intervention: a brief intervention to mitigate suicide risk. Cogn Behav Pract. 2012;19(2):256-264. [CrossRef]
  35. Lewis JR. The system usability scale: past, present, and future. Int J Hum Comput Interact. 2018;34(7):577-590. [CrossRef]
  36. Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. United States. Morgan Kaufmann; 2016.
  37. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114-123.
  38. Deming CA, Harris JA, Castro-Ramirez F, Glenn JJ, Cha CB, Millner AJ, et al. Inconsistencies in self-reports of suicidal ideation and attempts across assessment methods. Psychol Assess. 2021;33(3):218-229. [CrossRef] [Medline]
  39. Hallford DJ, Rusanov D, Winestone B, Kaplan R, Fuller-Tyszkiewicz M, Melvin G. Disclosure of suicidal ideation and behaviours: a systematic review and meta-analysis of prevalence. Clin Psychol Rev. 2023;101:102272. [CrossRef] [Medline]
  40. Wrzus C, Neubauer AB. Ecological momentary assessment: a meta-analysis on designs, samples, and compliance across research fields. Assessment. 2023;30(3):825-846. [FREE Full text] [CrossRef] [Medline]
  41. Pearson JL, Stanley B, King CA, Fisher CB. Intervention research with persons at high risk for suicidality: safety and ethical considerations. J Clin Psychiatry. 2001;62 Suppl 25:17-26. [Medline]
  42. Hom MA, Podlogar MC, Stanley IH, Joiner TE. Ethical issues and practical challenges in suicide research. Crisis. 2017;38(2):107-114. [CrossRef] [Medline]
  43. Klasnja P, Hekler EB, Shiffman S, Boruvka A, Almirall D, Tewari A, et al. Microrandomized trials: an experimental design for developing just-in-time adaptive interventions. Health Psychol. 2015;34S:1220-1228. [FREE Full text] [CrossRef] [Medline]
  44. Hom MA, Bauer BW, Stanley IH, Boffa JW, Stage DL, Capron DW, et al. Suicide attempt survivors' recommendations for improving mental health treatment for attempt survivors. Psychol Serv. 2021;18(3):365-376. [CrossRef] [Medline]


BRI: barrier reduction intervention
CSQ: Client Satisfaction Questionnaire
EMA: ecological momentary assessment
GAD-7: Generalized Anxiety Disorder 7-item
PHQ-9: Patient Health Questionnaire-9
SITBI-R: Self-Injurious Thoughts and Behaviors Interview—Revised
STB: suicidal thoughts and behaviors
SUS: System Usability Scale


Edited by J Torous; submitted 14.03.24; peer-reviewed by A Horwitz, U Leong; comments to author 21.03.24; revised version received 05.08.24; accepted 16.08.24; published 31.10.24.

Copyright

©Daniel DL Coppersmith, Kate H Bentley, Evan M Kleiman, Adam C Jaroszewski, Merryn Daniel, Matthew K Nock. Originally published in JMIR Mental Health (https://mental.jmir.org), 31.10.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.