Published on 27.04.2023 in Vol 10 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/43929.
Validation of the Attitudes Towards Psychological Online Interventions Questionnaire Among Black Americans: Cross-cultural Confirmatory Factor Analysis

Authors of this article:

Donovan Michael Ellis1; Page Lyn Anderson1

Original Paper

Department of Psychology, Georgia State University, Atlanta, GA, United States

Corresponding Author:

Page Lyn Anderson, PhD

Department of Psychology

Georgia State University

Urban Life Bldg, 11th Floor

140 Decatur Street

Atlanta, GA, 30303

United States

Phone: 1 404 413 6258

Email: panderson@gsu.edu


Background: Acceptability of digital mental health interventions is a significant predictor of treatment-seeking behavior and engagement. However, acceptability has been conceptualized and operationalized in various ways, which decreases measurement precision and leads to heterogeneous conclusions about acceptability. Standardized self-report measures of acceptability have been developed, which have the potential to ameliorate these problems, but none have demonstrated evidence for validation among Black communities, which limits our understanding of attitudes toward these interventions among racially minoritized groups with well-documented barriers to mental health treatment.

Objective: This study aims to examine the psychometric validity and reliability of one of the first and most widely used measures of acceptability, the Attitudes Towards Psychological Online Interventions Questionnaire, among a Black American sample.

Methods: Participants (N=254) were recruited from a large southeastern university and the surrounding metropolitan area and completed the self-report measure via a web-based survey. A confirmatory factor analysis using mean and variance adjusted weighted least squares estimation was conducted to examine the validity of the underlying hierarchical 4-factor structure proposed by the original authors of the scale. An alternative, hierarchical 2-factor structure model and bifactor model were examined for comparative fit.

Results: The findings indicated that the bifactor model demonstrated a superior fit (comparative fit index=0.96, Tucker-Lewis index=0.94, standardized root mean squared residual=0.03, and root mean square error of approximation=0.09) compared with both 2- and 4-factor hierarchical structure models.

Conclusions: The findings suggest that, within a Black American sample, there may be greater utility in interpreting the Attitudes Towards Psychological Online Interventions Questionnaire subscales as attitudinal constructs that are distinct from the global acceptability factor. The theoretical and practical implications for culturally responsive measurements were explored.

JMIR Ment Health 2023;10:e43929

doi:10.2196/43929




Introduction

Background

Black communities face persistent barriers to mental health treatment, including cost, accessibility, and stigma [1-3]. Internet-based psychological interventions that implement evidence-based techniques, including psychoeducation, behavioral activation, mindfulness strategies, and symptom tracking [4], may prove useful for improving equitable access to mental health treatment as they are often more cost-effective [5,6], private [7], and readily accessible [8]. Digital interventions that are empirically driven and incorporate elements of cognitive behavioral therapy are typically referred to as internet-based cognitive behavioral therapy (iCBT) [9]. People benefit from iCBT whether it is paired with therapist support or used alone, although the magnitude of the effect is often higher for programs with therapist assistance [10,11] (for more conservative findings on the comparative benefit of therapist support with iCBT, see the study by Bernstein et al [12]). Although iCBT programs are effective for a variety of anxiety, mood, and substance use disorders [13,14], studies have consistently reported their underutilization by the public [15,16].

Acceptability of iCBT

Studies examining this research-to-practice gap have revealed a complex picture of user acceptance of digital mental health interventions. Although therapist-supported iCBT is generally rated as more acceptable than self-guided programs [17,18], the overall willingness to use iCBT is low. In one study, 16% of non–treatment-seeking adults reported a willingness to consider using a digital mental health intervention to address a mental health concern [19], and another study reported that only 12% of participants were “definitely interested” in internet-based treatment [20]. Overall, people reported that they significantly preferred face-to-face therapy over iCBT and other digital mental health interventions [20,21].

A problem in this budding literature is that the construct of acceptability has been defined in a variety of ways, which may contribute to heterogeneous results regarding consumer attitudes toward iCBT [22]. Retrospective study outcomes, such as treatment satisfaction, engagement, usability, and feasibility, are often used interchangeably with acceptability [23]. Other researchers propose more prospective metrics, conceptualizing acceptability as “cognitively based, positive attitudes towards such interventions” that aim to predict treatment seeking [24]. Acceptability has sometimes been operationalized with measures of similar constructs, such as outcome expectancy—the expectation that one will benefit from treatment [25]. In some studies, acceptability was operationalized using single Likert scale items measuring willingness to use an intervention [20,26,27], and in other studies, researchers developed their own measure of acceptability [19,28]. The lack of precision in conceptualization and measurement may explain why conclusions about the acceptability of iCBT vary widely across studies.

A total of 6 self-report measures of consumer acceptability of digital mental health interventions now exist, with evidence of their psychometric properties and factor structure [24,29-33]. However, reflecting existing heterogeneity in the literature, these measures operationalize acceptability in various ways. The Attitudes Towards Psychological Online Interventions (APOI) questionnaire conceptualizes acceptability as a set of positive and negative appraisals and is designed to be used with various forms of digital mental health interventions [24]. The e-Therapy Attitudes and Process Questionnaire [29] includes items specifically related to users’ anticipated engagement with and short-term adherence to digital interventions. The Online Psychoeducational Intervention–Brief Attitudes Scale [32] is an abbreviated measure of attitudes (5 items) that makes the conceptual distinction that attitudes toward web-based psychoeducational interventions should incorporate elements of both psychotherapy and learning methods. In addition, 3 measures have been developed to assess working alliances in different digital contexts, akin to the therapeutic alliance fostered in face-to-face therapy [34]. The Working Alliance Inventory for guided internet interventions [30] measures the perception of an emotional attachment or collaborative bond with a digital mental health intervention, and the Working Alliance Inventory applied to virtual and augmented reality [33] measures participant comfort and trust in a virtual reality environment. Similarly, the Virtual Therapist Alliance Scale [31] measures perceptions of the therapeutic alliance with digital therapist avatars common to automated virtual reality exposure therapies. Table 1 shows the characteristics of the acceptability measures.

Table 1. Measures of acceptability toward digital mental health interventions.
Study | Title | Abbreviation | Intervention modality
Clough et al [29], 2019 | e-Therapy Attitudes and Process Questionnaire | eTAP | All
Gómez Penedo et al [30], 2020 | Working Alliance Inventory for Guided Internet Interventions | WAI-I | Guided interventions
Miloff et al [31], 2020 | Virtual Therapist Alliance Scale | VTAS | Augmented and virtual reality
Miragall et al [33], 2015 | Working Alliance Inventory Applied to Virtual and Augmented Reality | WAI-VAR | Augmented and virtual reality
Schröder et al [24], 2015 | Attitudes Towards Psychological Online Interventions Questionnaire | APOI | All
Teles et al [32], 2021 | Online Psychoeducational Intervention—Brief Attitudes Scale | OPI-BAS | Psychoeducation

Racially Minoritized Communities Are Underrepresented in Acceptability Research

Further complicating matters is the dearth of acceptability research that is inclusive of ethnically or racially minoritized communities. In 1 meta-analysis, 62 of 64 randomized controlled trials examining the efficacy and acceptability of iCBT did not include (or did not report) racial minorities in their studies [13]. All but one [33] of the existing measures of consumer attitudes toward digital mental health interventions have collected data from White majority (and predominantly European language) samples [24,29-32], including the first and most highly cited measure of acceptability toward digital mental health interventions, the APOI questionnaire [24]. The APOI was developed with German-speaking participants who reported mild to moderate depression (N=1013) and were recruited from outpatient clinics, web-based health forums, and health insurance referrals.

No research to date has evaluated the reliability or validity of the APOI scale among racially or ethnically minoritized communities, including Black Americans. This is highly problematic: Black communities may disproportionately benefit from the advantages afforded by iCBT and related digital mental health interventions, yet it is unknown whether the APOI demonstrates good psychometric properties in this population.

This Study

This study addresses this problem by assessing the psychometric properties of the APOI questionnaire in a sample of Black Americans, using confirmatory factor analyses to examine whether the APOI demonstrates reliability and construct validity within a Black population. Two measurement models were examined using the 16 ordered categorical (ordinal) response items retained from the exploratory factor analysis of the APOI. The first model presents a 2-factor, hierarchical measurement model (positive and negative subfactors) distinct from the 4-factor hierarchical model proposed by Schröder et al [24]. Given considerations for equivalent models [35,36], modification indexes will be reviewed to examine new and replicative factor structures and to illuminate the underlying construct of acceptability.


Methods

Recruitment

Participants were self-identified Black or African American adults (N=254). The participants ranged in age from 18 to 85 (mean 27.11, SD 13.40) years and were predominantly women (172/254, 67.7%), single (166/252, 65.9%), and highly educated (at least 70% had some college education; see Table 2 for more demographic and clinical characteristics of the sample). Participants were recruited from 2 primary sources: students recruited from the participant pool of a southeastern university in an urban setting who received course credit for their participation and community participants who were solicited in public places throughout the metropolitan area (eg, parks) and had the opportunity to enter a raffle for a US $25 Amazon gift card.

Table 2. Demographics and clinical characteristics of participants.
Variable | Value
Age (years; n=254), mean (SD) | 27.11 (13.40)
Sex (n=254), n (%)
  Male | 82 (32.3)
  Female | 172 (67.7)
Sexual identity (n=252), n (%)
  Heterosexual | 210 (83.3)
  Lesbian, gay, and bisexual | 36 (14.3)
  Self-identify | 6 (2.4)
Current education status (n=253), n (%)
  High school | 1 (0.4)
  Some college or currently in college | 173 (68.1)
  Graduate or professional degree | 5 (2.0)
  Nondegree student or other | 3 (1.2)
  Nonstudenta | 71 (28.0)
Relationship status (n=252), n (%)
  Single | 166 (65.9)
  Serious dating or committed relationship | 55 (21.8)
  Married or civil union | 16 (6.4)
  Separated, divorced, or widowed | 15 (6.0)
Symptom severity, mean (SD)
  DASSb—total (n=243) | 29.58 (20.84)
  DASS—depression (n=250) | 8.99 (8.49)
  DASS—anxiety (n=249) | 8.35 (7.10)
  DASS—stress (n=250) | 11.96 (7.88)

aReflects current noneducational status but does not indicate the highest level of education completed (ie, may include college graduates).

bDASS: Depression Anxiety Stress Scale.

Procedure

Participants completed a survey developed via the Qualtrics web-based platform as part of an experimental study assessing the impact of treatment rationale on the acceptability of iCBT. Participants were randomly assigned via Qualtrics (1:1 allocation) to read either a treatment rationale or a definition of iCBT (see the study by Ellis and Anderson [37] for full details). The APOI questionnaire was administered as a primary measure of acceptability. The Depression, Anxiety, and Stress Scale-21 items (DASS-21) was used to characterize the sample, as experiences of depression and anxiety have been linked to mental health treatment–seeking attitudes [38], and to provide comparative evidence to Schröder et al [24], who recruited participants with mild to moderate depression.

All the data were collected on the web and will be made available upon request.

Measures

The APOI questionnaire [24] is a measure of attitudes toward digital mental health interventions that, for the purposes of this project, was modified to reference therapist-assisted iCBT. The development of the APOI included both exploratory and confirmatory factor analyses to identify clustering of latent constructs, resulting in 16 items comprising four subscales measuring attitudes toward psychological web-based interventions, which are as follows: (1) skepticism and perception of risk (SKE), which measures negative attitudes concerning the efficacy and security of a psychological web-based intervention; (2) confidence in effectiveness (CON), which measures positive attitudes concerning the utility and credibility of a psychological web-based intervention; (3) technologization threat (TET), which measures negative attitudes toward the lack of personal contact and the remote nature of the intervention; and (4) anonymity benefits (ABE), which measures positive attitudes related to increased privacy. Participants rate their agreement with each item (eg, “I have the feeling that iCBT can help me.”) on a 5-point Likert scale (1=totally agree to 5=totally disagree). Positively valenced items were reverse coded. The total scores ranged from 16 to 80, with higher scores indicating more positive attitudes toward iCBT. The APOI demonstrated strong overall internal consistency (Cronbach α=.77) and showed evidence of construct validity in a sample of 1013 participants [24].
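
To make the scoring rule concrete, the following Mplus DEFINE sketch reverse codes the positively valenced items and computes the total score. The data file name (apoi.dat) and variable names (con1-con4, abe1-abe4, ske1-ske4, tet1-tet4) are hypothetical placeholders rather than the study's actual file or labels, and the reverse-coding direction simply follows the description given above.

    DATA:     FILE IS apoi.dat;                ! hypothetical file name
    VARIABLE: NAMES ARE con1-con4 abe1-abe4 ske1-ske4 tet1-tet4;
              USEVARIABLES ARE apoitot;        ! DEFINEd variables go last
    DEFINE:   ! reverse code the positively valenced CON and ABE items (1-5)
              con1 = 6 - con1;  con2 = 6 - con2;
              con3 = 6 - con3;  con4 = 6 - con4;
              abe1 = 6 - abe1;  abe2 = 6 - abe2;
              abe3 = 6 - abe3;  abe4 = 6 - abe4;
              ! total score ranges from 16 to 80; higher = more positive attitudes
              apoitot = con1 + con2 + con3 + con4 + abe1 + abe2 + abe3 + abe4
                      + ske1 + ske2 + ske3 + ske4 + tet1 + tet2 + tet3 + tet4;
    ANALYSIS: TYPE = BASIC;                    ! descriptive statistics only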

The DASS-21 [39] is a measure of mental illness comprising 3 subscales: depression, anxiety, and stress. Participants rated each item on a 4-point Likert scale (0=never to 3=always). Sum scores were computed by adding the scores across items and multiplying by 2. Scores on the total DASS-21 scale ranged from 0 to 126, with higher scores indicating more distress or impairment. Scores for each subscale were determined by summing the scores for the relevant 7 items and multiplying by 2 (range 0-42). The DASS-21 demonstrates strong convergent validity with both the Beck Anxiety Inventory (r=0.81) and Beck Depression Inventory (r=0.74), indicating a satisfactory ability to discriminate between anxiety and depressive symptoms [40]. The DASS-21 was normed on a nonclinical sample (N=717), and subsequent research has supported the validity and reliability of the DASS-21 across racial groups, including Black Americans (subscales: Cronbach α=.81−.88 [41]).
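
In the same spirit, the DASS-21 scoring rule (sum the 7 subscale items and double the result) can be sketched as follows; dep1-dep7, anx1-anx7, and str1-str7 are placeholder names grouped by subscale and do not reflect the actual item order on the questionnaire.

    DATA:     FILE IS dass21.dat;              ! hypothetical file name
    VARIABLE: NAMES ARE dep1-dep7 anx1-anx7 str1-str7;
              USEVARIABLES ARE depscore;
    DEFINE:   ! sum the 7 depression items (0-3 each) and multiply by 2 (0-42)
              depscore = 2*(dep1 + dep2 + dep3 + dep4 + dep5 + dep6 + dep7);
    ANALYSIS: TYPE = BASIC;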

Statistical Analysis

The variables used for the factor analysis are listed in Table 3. See Tables 4 and 5 for the interitem correlation matrix and descriptive statistics.

Confirmatory factor analyses were performed using Mplus (version 8.4; Muthén & Muthén) with a sample of Black American adults (N=254) to examine the cross-cultural equivalence of the factor structure derived from the final set of 16 items indicated in the study by Schröder et al [24]. The weighted least squares means and variance adjusted (WLSMV) estimation method was used to analyze the covariance matrix structure of the ordinal items. Several indices were used to evaluate the model fit: the discrepancy chi-square statistic (χ2:df ratio ≤5), standardized root mean squared residual (SRMR; SRMR≤0.08), root mean square error of approximation (RMSEA; RMSEA≤0.08), comparative fit index (CFI; CFI≥0.90), and Tucker-Lewis index (TLI; TLI≥0.90), which are commonly recommended at the indicated thresholds [42-44]. Latent variables were scaled by fixing the latent variances to 1, which allowed all indicator factor loadings to be estimated. Finally, reliability analyses of the APOI were conducted by calculating the internal consistency (Cronbach α) and corrected item-total correlations (discrimination) to facilitate comparisons with the reliability metrics reported in the original publication.
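
A minimal Mplus setup along these lines reproduces the estimation choices described here (ordinal items, WLSMV estimation, latent variances fixed to 1). It is an illustrative skeleton with hypothetical file and variable names and a single-factor placeholder model, not the authors' exact syntax, which is provided in Multimedia Appendix 1.

    TITLE:    Illustrative WLSMV CFA setup for the 16 APOI items
    DATA:     FILE IS apoi.dat;                ! hypothetical file name
    VARIABLE: NAMES ARE con1-con4 abe1-abe4 ske1-ske4 tet1-tet4;
              CATEGORICAL ARE con1-tet4;       ! treat items as ordinal
    ANALYSIS: ESTIMATOR = WLSMV;               ! mean- and variance-adjusted WLS
    MODEL:    ! single-factor placeholder; swap in the model statements
              ! sketched below for models 1-3
              accept BY con1* con2 con3 con4 abe1 abe2 abe3 abe4
                        ske1 ske2 ske3 ske4 tet1 tet2 tet3 tet4;
              accept@1;                        ! fix the latent variance to 1 so
                                               ! all 16 loadings are estimated
    OUTPUT:   STDYX MODINDICES;                ! standardized solution and
                                               ! modification indices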

In model 1, we examined a 2-factor, hierarchical confirmatory measurement model (2 first-order factors loading on 1 second-order global factor). We posited that the set of attitudes endorsed on the APOI would indicate a “positive attitudes towards internet-based treatments” latent factor as well as a “negative attitudes towards internet-based treatments” latent factor. Indicators drawn from the confidence in effectiveness (CON) and anonymity benefits (ABE) subscales comprise positive attitudes toward iCBT and were tested to examine statistically significant loading onto the “positive” latent factor. Indicators derived from the skepticism and perception of risk (SKE) and technologization threat (TET) subscales of the APOI comprise negative attitudes and were tested for statistically significant loading onto the “negative” latent factor. Both “positive” and “negative” first-order factors loaded onto the second-order global factor (termed Acceptability for the purposes of this study; Figure 1).
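
Dropped into the skeleton above (same hypothetical variable names), a MODEL block for this hierarchical 2-factor structure might look as follows. This is a sketch, not the authors' syntax: it uses the Mplus default of fixing the first loading of each first-order factor to 1, and it fixes both higher-order loadings to 1 because a second-order factor with only 2 first-order indicators is not identified without an added constraint; the authors' exact scaling and identification choices may differ.

    MODEL:    ! first-order factors (first loading fixed at 1 by default)
              pos BY con1 con2 con3 con4 abe1 abe2 abe3 abe4;
              neg BY ske1 ske2 ske3 ske4 tet1 tet2 tet3 tet4;
              ! second-order global factor; both loadings fixed to 1 here
              ! as one possible identification choice
              accept BY pos@1 neg@1;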

In model 2, we attempted a replication of the 4-factor, hierarchical confirmatory measurement model (4 first-order factors loading on 1 second-order global factor) proposed in the study by Schröder et al [24]. Indicators drawn from the 4 subscales were modeled per the provided confirmatory factor analysis specifications [24]. All 4 first-order factors (CON, ABE, SKE, and TET) were loaded onto the second-order global factor acceptability (Figure 2).
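
Similarly, the 4-factor hierarchical specification can be sketched as the following MODEL block, again with hypothetical variable names and Mplus default scaling for brevity (the authors scaled by fixing latent variances to 1; their full syntax is in Multimedia Appendix 1).

    MODEL:    con BY con1 con2 con3 con4;
              abe BY abe1 abe2 abe3 abe4;
              ske BY ske1 ske2 ske3 ske4;
              tet BY tet1 tet2 tet3 tet4;
              ! second-order global acceptability factor
              accept BY con abe ske tet;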

If neither hypothesized model 1 nor model 2 demonstrates adequate model fit, the modification fit indexes provided by the WLSMV estimation will be reviewed, and the comparative fit of a third alternative model (model 3) will be examined.
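
In Mplus, the modification indices referred to here are requested through the OUTPUT command, as in this sketch; the 3.84 cutoff (the critical chi-square value for 1 df at α=.05) is an illustrative choice rather than the authors' reported setting.

    OUTPUT:   STDYX MODINDICES (3.84);   ! report modification indices whose
                                         ! expected chi-square drop exceeds 3.84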

Table 3. Attitudes Towards Psychological Online Interventions Questionnaire: subscale and item descriptionsa.
Measure name and scale or item label | Description
Confidence in effectiveness subscaleb | Measures positive attitudes concerning the efficacy and credibility of therapist-assisted iCBTc
  CON1 | A therapist-assisted iCBT program can help me to recognize the issues that I have to challenge.
  CON2 | I have the feeling that a therapist-assisted iCBT can help me.
  CON3 | A therapist-assisted iCBT program can inspire me to better approach my problems.
  CON4 | I believe that the concept of therapist-assisted iCBT programs makes sense.
Anonymity benefits subscaleb | Measures positive attitudes related to the privacy and confidentiality of using a therapist-assisted iCBT
  ABE1 | A therapist-assisted iCBT program is more confidential and discreet than visiting a therapist.
  ABE2 | By using a therapist-assisted iCBT program, I can reveal my feelings more easily than with a therapist.
  ABE3 | I would be more likely to tell my friends that I use a therapist-assisted iCBT program than that I visit a therapist.
  ABE4 | By using a therapist-assisted iCBT program, I do not have to fear that someone will find out that I have psychological problems.
Skepticism and perception of risk subscaled | Measures negative attitudes concerning the efficacy and security of a therapist-assisted iCBT
  SKE1 | Using therapist-assisted iCBT programs, I do not expect long-term effectiveness.
  SKE2 | Using therapist-assisted iCBT programs, I do not receive professional support.
  SKE3 | It is difficult to implement the suggestions of a therapist-assisted iCBT effectively in everyday life.
  SKE4 | Therapist-assisted iCBT programs could increase isolation and loneliness.
Technologization threat subscaled | Measures negative attitudes related to the independent and remote nature of therapist-assisted iCBT
  TET1 | In crisis situations, a therapist can help me better than a therapist-assisted iCBT program.
  TET2 | I learn skills to better manage my everyday life from a therapist rather than from a therapist-assisted iCBT program.
  TET3 | I am more likely to stay motivated with a therapist than when using a therapist-assisted iCBT program.
  TET4 | I do not understand therapeutic concepts as well with a therapist-assisted iCBT.

aResponse scale (1=totally disagree to 5=totally agree).

bHigher scores represent greater acceptability.

ciCBT: internet-based cognitive behavioral therapy.

dHigher scores indicate lower acceptability.

Table 4. Bivariate correlations between the 16 Attitudes Towards Psychological Online Interventions items.
Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16
CONa1 | 1b
CON2 | 0.74 | 1
CON3 | 0.76 | 0.79 | 1
CON4 | 0.71 | 0.65 | 0.75 | 1
ABEc1 | 0.38 | 0.46 | 0.47 | 0.41 | 1
ABE2 | 0.37 | 0.42 | 0.43 | 0.44 | 0.72 | 1
ABE3 | 0.20 | 0.34 | 0.26 | 0.25 | 0.53 | 0.56 | 1
ABE4 | 0.38 | 0.41 | 0.40 | 0.45 | 0.61 | 0.58 | 0.66 | 1
SKEd1 | −0.05 | −0.10 | −0.07 | 0.01 | −0.27 | −0.31 | −0.15 | −0.17 | 1
SKE2 | −0.01 | −0.10 | −0.02 | 0.02 | −0.12 | −0.30 | −0.19 | −0.18 | 0.63 | 1
SKE3 | −0.15 | −0.21 | −0.15 | 0.03 | −0.19 | −0.26 | −0.22 | −0.15 | 0.71 | 0.72 | 1
SKE4 | −0.09 | −0.18 | −0.07 | 0.04 | −0.22 | −0.28 | −0.28 | −0.25 | 0.63 | 0.69 | 0.75 | 1
TETe1 | −0.44 | −0.42 | −0.50 | 0.58 | −0.42 | −0.41 | −0.28 | −0.33 | 0.24 | 0.21 | 0.24 | 0.22 | 1
TET2 | −0.36 | −0.39 | −0.42 | 0.33 | −0.43 | −0.45 | −0.39 | −0.43 | 0.41 | 0.34 | 0.41 | 0.45 | 0.63 | 1
TET3 | −0.39 | −0.34 | −0.41 | 0.36 | −0.47 | −0.38 | −0.34 | −0.41 | 0.38 | 0.25 | 0.30 | 0.38 | 0.66 | 0.72 | 1
TET4 | −0.22 | −0.22 | −0.29 | 0.18 | −0.45 | −0.50 | −0.33 | −0.40 | 0.54 | 0.41 | 0.48 | 0.51 | 0.39 | 0.68 | 0.62 | 1

aCON: confidence in effectiveness.

bNot applicable.

cABE: anonymity benefits.

dSKE: skepticism and perception of risk.

eTET: technologization threat.

Table 5. Descriptive statistics of the 16 Attitudes Towards Psychological Online Interventions items.
Item | Mean (SD) | Skewness | Kurtosis
CONa1 | 3.6 (1.0) | −0.41 | 0.07
CON2 | 3.4 (1.0) | −0.15 | 0.24
CON3 | 3.6 (1.0) | −0.51 | 0.34
CON4 | 3.7 (1.0) | −0.50 | 0.16
ABEb1 | 3.3 (1.0) | −0.03 | −0.02
ABE2 | 3.2 (0.09) | 0.04 | 0.09
ABE3 | 3.0 (1.0) | 0.01 | −0.12
ABE4 | 3.2 (1.1) | −0.08 | −0.14
SKEc1 | 3.1 (1.2) | −0.09 | −0.50
SKE2 | 3.3 (1.1) | −0.19 | −0.34
SKE3 | 3.1 (1.1) | −0.07 | −0.18
SKE4 | 3.2 (1.1) | −0.13 | −0.33
TETd1 | 2.5 (1.0) | 0.26 | 0.30
TET2 | 2.7 (1.0) | 0.03 | 0.16
TET3 | 2.6 (1.0) | 0.18 | 0.07
TET4 | 2.9 (1.1) | 0.11 | −0.06

aCON: confidence in effectiveness.

bABE: anonymity benefits.

cSKE: skepticism and perception of risk.

dTET: technologization threat.

Figure 1. Higher-order, 2-factor model depicting hierarchical relationship among indicators of 2 latent factors: positive and negative attitudes toward treatment loading on a global acceptability factor. ABE: anonymity benefits; CON: confidence in effectiveness; SKE: skepticism and perception of risk; TET: technologization threat. Note: threshold structure not shown.
Figure 2. Higher-order, 4-factor model depicting hierarchical relationship among indicators of 4 latent factors: confidence, anonymity benefits, skepticism, and technologization threat loading on a global acceptability factor. ABE: anonymity benefits; CON: confidence in effectiveness; SKE: skepticism and perception of risk; TET: technologization threat. Note: threshold structure not shown.

Ethics Approval

This study was conducted in compliance with The Georgia State University institutional review board protocol #H18341 and preregistered with the Open Science Framework [45].


Results

Sample Characteristics

A total of 268 participants were enrolled in the study and completed the survey. Of these, 14 participants were excluded because they did not complete the APOI questionnaire, thus yielding a sample of 254 participants. Participant ratings suggested mild symptoms of anxiety (mean 8.35, SD 7.10) and stress (mean 11.96, SD 7.88) and normal levels of depressive symptoms (mean 9.00, SD 8.49) according to standard thresholds of the DASS-21 [39].

Construct Validity

The 2 proposed models explored the construct of acceptability as a hierarchical, 2-factor model comprising “positive attitudes” and “negative attitudes” toward therapist-assisted iCBT, or as a hierarchical, 4-factor model comprising 4 distinct domains of attitudes toward therapist-assisted iCBT (confidence in effectiveness, anonymity benefits, skepticism and perception of risk, and technologization threat). See Table 6 for a full description of the model’s fit indices.

Neither model demonstrated exact fit according to the chi-square test (model 1: χ2(103)=1579.8, P<.001; model 2: χ2(101)=595.3, P<.001). There was variation in the absolute values of correlation residuals, as residuals frequently exceeded 0.10 in model 1 (mean 0.14, SD 0.01), contrary to recommendations for ordered categorical variables [36]. Correlation residuals were largely below 0.10 in model 2 (mean 0.07, SD 0.01). Model 1 indicated poor fit according to CFI (0.65), TLI (0.59), SRMR (0.12), and RMSEA (0.24, 90% CI 0.23-0.25). Model 2 demonstrated better fit estimates with CFI (0.88), TLI (0.86), SRMR (0.08), and marginally improved RMSEA (0.14, 90% CI 0.13-0.15). As neither model 1 nor model 2 demonstrated adequate fit indices, an alternative bifactor model 3 (shown in Table 6) was examined because it retains theoretical similarity to the structure proposed by Schröder et al [24], and hierarchical models (ie, model 2) have more parameter constraints and are nested within less constrained bifactor models (ie, model 3) [46-48]. In model 3, the 4 factors (CON, ABE, SKE, and TET) were specified as orthogonal (instead of hierarchical) to the global factor of acceptability (Figure 3). Chi-square tests did not indicate an absolute model fit: χ2(82)=248.7, P<.001, although the chi-square:df ratio was 3.03, which is within the recommended range between 2 and 5 [44]. Furthermore, model 3 indicated better estimates with CFI=0.96, TLI=0.94, SRMR=0.03, and RMSEA=0.09, 90% CI 0.08-0.10. Overall, model 3 demonstrated adequate to good fit according to accepted thresholds [42-44], and the absolute values of correlation residuals did not exceed 0.10 (mean 0.03, SD 0.002). Other equivalent models were investigated (informed by statistically significant modification indices and theoretical rationale), but none demonstrated both structural fit and conceptual interpretability or parsimony (see Multimedia Appendix 1 for all tested confirmatory factor analysis models).
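
As a concrete illustration of the bifactor specification described above, the MODEL block below defines a global acceptability factor from all 16 items and 4 specific factors that are constrained to be uncorrelated with the global factor but free to covary with one another. Variable names are the same hypothetical placeholders used earlier, and this is a sketch rather than the authors' exact syntax (see Multimedia Appendix 1).

    MODEL:    ! global factor defined by all 16 items
              accept BY con1* con2 con3 con4 abe1 abe2 abe3 abe4
                        ske1 ske2 ske3 ske4 tet1 tet2 tet3 tet4;
              ! specific factors defined by their own items
              con BY con1* con2 con3 con4;
              abe BY abe1* abe2 abe3 abe4;
              ske BY ske1* ske2 ske3 ske4;
              tet BY tet1* tet2 tet3 tet4;
              ! scale by fixing all latent variances to 1
              accept@1; con@1; abe@1; ske@1; tet@1;
              ! specific factors orthogonal to the global factor;
              ! covariances among the specific factors remain free by default
              accept WITH con@0 abe@0 ske@0 tet@0;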

As models 1, 2, and 3 were nested, comparisons were conducted to verify the statistically improved model fit by examining the change in the chi-square statistic. As the scaled chi-square value for WLSMV cannot be used for traditional chi-square difference testing, the DIFFTEST option in Mplus (version 8.4) was used [49]. As shown in Table 6, comparisons indicated a significant chi-square change, Δχ2(2)=327.7, P<.001, suggesting that model 2 was significantly better than model 1. Similarly, there was a significant chi-square change, Δχ2(19)=231.9, P<.001, suggesting that model 3 was significantly better than model 2. Model 3 was the best fitting model and is described in more detail below (see Table 7 for full factor loadings and Figure 4 for the model with parameter estimates).
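
The DIFFTEST procedure runs in two steps, sketched below with a placeholder derivatives file name (deriv.dat): the less restrictive model is estimated first and its derivatives are saved, and the more restrictive nested model is then estimated with a reference to that file.

    ! Run 1 (separate input file): the LESS restrictive model, eg, the bifactor
    ! model 3; save the derivatives needed for the adjusted difference test.
    ANALYSIS: ESTIMATOR = WLSMV;
    SAVEDATA: DIFFTEST = deriv.dat;

    ! Run 2 (separate input file): the MORE restrictive nested model, eg, the
    ! hierarchical model 2; request the adjusted chi-square difference test.
    ANALYSIS: ESTIMATOR = WLSMV;
              DIFFTEST = deriv.dat;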

When examining the standardized factor loadings of the bifactor model, the absolute value of loadings for the categorical indicators ranged from 0.52 to 0.87 on their original 4 factors. Consistent with the findings of Schröder et al [24], all indicators significantly loaded onto their respective latent factors (CON, ABE, SKE, and TET), supporting the theory that these 4 domains are valid indicators of attitudes toward internet-delivered treatment. Furthermore, the 2 positively valenced latent factors (CON and ABE) significantly covaried as similar yet distinct factors (ψ=0.54; P<.001) as did the 2 negatively valenced latent factors (SKE, TET; ψ=0.70; P<.001).

The relationship between the 16 ordinal indicators and the global acceptability factor was more complex, as the absolute value of the loadings ranged from 0.004 to 0.70. Although the factor loadings for both CON and ABE indicators were positively correlated with the global acceptability factor, only CON indicators demonstrated adequate strength (0.35-0.70), whereas loadings for ABE items ranged from 0.02 to 0.28, suggesting a relatively weak relationship with the global factor. One item of the ABE subscale (ABE3) “I would be more likely to tell my friends that I use a therapist-assisted iCBT program than that I visit a therapist” did not load significantly on the global factor (λ=0.016; P=.83). Furthermore, there was significant heterogeneity in the factor loadings for both the SKE and TET indicators on the global factor. Despite its conceptualization as “negative attitudes,” factor loadings of indicators of SKE ranged from 0.15 to 0.20 and were positively correlated with the global acceptability factor. Conversely, factor loadings of indicators of TET ranged from 0.39 to 0.64 and were negatively correlated with the global acceptability factor. One item of the TET subscale (TET4) “I do not understand therapeutic concepts as well with a therapist-assisted iCBT as I do with a live therapist” did not load significantly on the global factor (λ=0.004; P=.95).

Overall, the results from the bifactor model structure of the APOI provide evidence that the 4 factors proposed by Schröder et al [24] exhibit an orthogonal relationship with the global factor of acceptability. As expected, positively valenced factors were positively related to one another, negatively valenced factors were positively related to one another, and each item was a significant indicator of the 4 distinct subscales when controlling for the common variance shared by the global factor. The bifactor model shows that most (but not all) of the 16 APOI items are significant indicators of the global factor, although all SKE items were related in the opposite direction.

Table 6. Goodness-of-fit indexes of models tested in confirmatory factor analysis.
Model name | Chi-square (df) | P value | CFIa | TLIb | SRMRc | RMSEAd (95% CI) | ΔChi-square (df) | P value | Note
2 factor | 1579.8 (103) | <.001 | 0.65 | 0.59 | 0.12 | 0.24 (0.23-0.25) | —e
4 factorf | 595.3 (101) | <.001 | 0.88 | 0.86 | 0.08 | 0.14 (0.13-0.15) | 984.45 (2) | <.001 | Versus model 1
Bifactorf | 248.7 (82) | <.001 | 0.96 | 0.94 | 0.03 | 0.09 (0.08-0.10) | 346.57 (19) | <.001 | Versus model 2

aCFI: comparative fit index.

bTLI: Tucker-Lewis index.

cSRMR: standardized root mean squared residual.

dRMSEA: root mean square error of approximation.

eNot available.

fDIFFTEST command used for weighted least squares means and variance adjusted estimators to test differences in model fit.

Figure 3. Bifactor model depicting orthogonal relationship among indicators of 4 latent factors: confidence, anonymity benefits, skepticism, and technologization threat loading alongside a global acceptability factor. ABE: anonymity benefits; CON: confidence in effectiveness; SKE: skepticism and perception of risk; TET: technologization threat. Note: threshold structure not shown.
Table 7. Model 3 (bifactor) standardized factor loadings with SEs.
Relation or variable | Estimate (SE) | P value
Loadings
  Confidence in effectiveness (CON) BY
    CON1 | 0.66 (0.06) | <.001
    CON2 | 0.83 (0.04) | <.001
    CON3 | 0.72 (0.06) | <.001
    CON4 | 0.52 (0.07) | <.001
  Anonymity benefits (ABE) BY
    ABE1 | 0.77 (0.03) | <.001
    ABE2 | 0.83 (0.03) | <.001
    ABE3 | 0.75 (0.03) | <.001
    ABE4 | 0.75 (0.03) | <.001
  Skepticism and perception of risk (SKE) BY
    SKE1 | 0.79 (0.02) | <.001
    SKE2 | 0.75 (0.03) | <.001
    SKE3 | 0.87 (0.02) | <.001
    SKE4 | 0.81 (0.02) | <.001
  Technologization threat (TET) BY
    TET1 | 0.54 (0.06) | <.001
    TET2 | 0.81 (0.03) | <.001
    TET3 | 0.72 (0.04) | <.001
    TET4 | 0.86 (0.03) | <.001
  Acceptability BY
    CON1 | 0.51 (0.07) | <.001
    CON2 | 0.35 (0.08) | <.001
    CON3 | 0.54 (0.08) | <.001
    CON4 | 0.70 (0.07) | <.001
    ABE1 | 0.28 (0.07) | <.001
    ABE2 | 0.18 (0.08) | .01
    ABE3 | 0.02 (0.08) | .83
    ABE4 | 0.22 (0.07) | .001
    SKE1 | 0.16 (0.06) | .01
    SKE2 | 0.20 (0.06) | .001
    SKE3 | 0.15 (0.06) | .02
    SKE4 | 0.15 (0.06) | .008
    TET1 | −0.64 (0.05) | <.001
    TET2 | −0.31 (0.07) | <.001
    TET3 | −0.39 (0.07) | <.001
    TET4 | <.01 (0.08) | .95
Factor covariances
  Confidence in effectiveness WITH
    Anonymity benefits | 0.54 (0.06) | <.001
    Skepticism and perception of risks | −0.30 (0.05) | <.001
    Technologization threat | −0.38 (0.06) | <.001
    Acceptability | 0.00 (—a)
  Anonymity benefits WITH
    Skepticism and perception of risks | −0.41 (0.06) | <.001
    Technologization threat | −0.61 (0.05) | <.001
    Acceptability | 0.00 (—)
  Skepticism and perception of risk WITH
    Technologization threat | 0.70 (0.05) | <.001
    Acceptability | 0.00 (—)
  Technologization threat WITH
    Acceptability | 0.00 (—)

aNot available.

Figure 4. Bifactor model depicting orthogonal relationship among indicators of 4 latent factors: confidence, anonymity benefits, skepticism, and technologization threat loading alongside a global acceptability factor. Standardized parameter estimates shown. ABE: anonymity benefits; CON: confidence in effectiveness; SKE: skepticism and perception of risk; TET: technologization threat. Note: threshold structure not shown.

Reliability

The APOI demonstrated excellent internal consistency for the total scale (Cronbach α=.89) and retained good-to-excellent reliability across subscales (Cronbach α=.84 for ABE, .85 for TET, .87 for SKE, and .90 for CON). Across subscales, the corrected item-total correlations ranged from 0.59 to 0.83, with a mean adjusted correlation of 0.71 indicating good item discrimination within subscales. The corrected item‐total correlations for the APOI total scale ranged from 0.45 to 0.68, with a mean adjusted correlation of 0.55, indicating good item discrimination within the total scale.


Discussion

Principal Findings

This study evaluated the psychometric properties of the APOI questionnaire [24], the most robust and widely used measure of acceptability for digital mental health interventions, within a sample of Black Americans. The APOI demonstrated good-to-excellent internal consistency in the current sample, both as a total score and across subscales (Cronbach α=.84−.90), which is stronger than the internal consistency reported in the original publication (Cronbach α=.62−.77).

However, the original hierarchical, 4-factor model proposed by Schröder et al [24] exhibited relatively poor goodness-of-fit indices. Instead, the APOI showed the strongest evidence for construct validity of a bifactor model in which each of the indicators loaded on a global factor of acceptability and the global factor of acceptability was orthogonally related to the 4 subscales. Although this unexpected finding is inconsistent with the hierarchical model proposed by Schröder et al [24], it is consistent with the literature showing that bifactor models fit better than their equivalent higher-order model in more than 90% of comparisons for mental abilities test batteries [50] and can be particularly valuable in evaluating the plausibility of subscales [51,52]. The strong, positive correlations between positively valenced subscales (confidence in effectiveness and anonymity benefits) and negatively valenced subscales (skepticism and perception of risk and technologization threat), and the negative correlations across oppositely valenced subscales are compelling evidence that the subscales have meaningful discriminant validity and can be interpreted in their own right.

The heterogeneity of findings regarding model fit may be explained by the pattern of the factor loadings and the overall structure. Modeling both positively and negatively valenced factors onto a unitary, higher-order construct (ie, acceptability) can prove difficult, especially when variance exists among indicators of lower-order constructs. The factor loadings between the 16 indicators and the global acceptability factor varied substantially. Several indicators loading on the ABE, SKE, and TET subscales exhibited relatively weak or null relations with acceptability or were in the opposite direction than expected. Items loading on the ABE subscale, in particular, may indicate both facilitators and barriers to engagement with digital interventions, given users' conflicting perceptions of digital privacy and confidentiality [8]. Items loading on the SKE subscale were positively correlated with acceptability, which is contrary to the conceptualization of this subscale as a construct reflecting negative attitudes, although this finding should be interpreted with caution, given the weak correlations.

Scholars have called for better conceptualizations of acceptability [15,23], which have the potential to produce even more parsimonious measures by exploring new factors or consolidating indicators to reduce conceptual overlap. In particular, there is a growing need for evidence of the dimensions of acceptability that are demonstrably correlated with uptake, engagement, and adherence to digital mental health interventions. As discussed in prior research, this apparent discrepancy between consumer attitudes and behaviors may, in fact, be a consequence of the heterogeneous nature and definition of acceptability toward digital mental health interventions [22,24]. A considerable amount of research uses a single item to assess acceptability, and results from this study and others [29,30,32] demonstrate that single-item measures are inadequate for operationalizing this heterogeneous construct.

Furthermore, these data suggest that within a Black American population, there is greater utility in interpreting the APOI subscales as attitudinal constructs distinct from a global acceptability factor. However, given that the higher-order model is nested within the bifactor model [46-48], these models are not necessarily at odds with one another. Ultimately, these results support the underlying validity of the 4 factors proposed for the APOI but caution against the traditional practice of prioritizing the calculation of a single acceptability score at the expense of adequately measuring each relevant dimension of acceptability and reporting it in tandem with the global score for contextualization.

Strengths and Limitations

This is the first study to investigate the psychometric properties of the APOI questionnaire among a racially minoritized population and the first to provide evidence for the cross-cultural equivalence of the APOI among Black Americans. This is a notable contribution to the literature, as the vast majority of randomized controlled trials examining the efficacy and acceptability of iCBT do not include (or do not report) racial minorities in their studies [13], and existing measures of consumer attitudes toward digital mental health interventions [24,29-33] have predominantly been developed and validated within White majority (and predominantly European) samples. Furthermore, by modifying the target treatment from “psychological online interventions” to “therapist-assisted iCBT,” this study provides preliminary evidence for the utility of the APOI for diverse digital interventions with varying degrees of specificity. Overall, the results suggest that the APOI is a robust measure.

Despite the strengths of this study, some limitations warrant attention. The study sample consisted of participants with minimal symptoms of depression, anxiety, or stress, which is distinct from the participants who reported moderate levels of depression in the study by Schröder et al [24]. Future research should evaluate the measure among those with greater depression severity or other diagnoses. The participants in this study were predominantly young adult women. These demographic groups are more likely to use digital mental health interventions, and the relative impact of their positive and negative attitudes toward digital mental health interventions is likely to differ across diverse populations [8]. Relatedly, measurement invariance was not formally assessed across different subgroups within the sample (eg, male vs female) because of significant imbalances in sample size, which minimized the power to detect potential differences between these groups. Finally, the convergent validity of the APOI with other measures of acceptability within a Black American sample could not be determined because no other relevant measures of acceptability existed at the time of data collection for this study.

Future Directions

Future research should modify the APOI to apply it to other digital mental health interventions (eg, virtual reality exposure therapies and massively open web-based interventions) and translate the measure into additional languages (eg, Spanish) to further examine cross-intervention and cross-cultural equivalency. Although the APOI demonstrated good internal consistency reliability within the present sample, test-retest reliability was not examined. Indeed, with the exception of the study by Clough et al [29], there is a notable lack of investigation of the test-retest reliability of acceptability measures, which deserves further evaluation. Moreover, it would be compelling to investigate the criterion validity of the APOI to examine whether positive attitudes toward digital mental health interventions predict the willingness to use or actual use of digital mental health interventions among racially and ethnically minoritized participants. Consistent with the Theory of Planned Behavior [53], which emphasizes the relationship among beliefs, attitudes, and behavioral intentions, positive attitudes toward acceptability would be expected to be the strongest predictor of behavioral intention, which in turn is the immediate determinant of actual treatment-seeking behavior. Investigations of the relationship between attitudes toward iCBT and the effectiveness of such interventions should be conducted, as those with more positive attitudes might derive greater clinical benefits. Finally, although studies examining the convergent validity of the APOI with related measures of acceptability toward digital mental health interventions have been recently conducted [29,30], these studies did not expressly recruit participants from racially and ethnically minoritized communities, and their results are predominantly based on White or European samples. This is concerning, as racially and ethnically minoritized communities may be positioned to benefit the most from the treatment accessibility advantages afforded by digital mental health interventions [54]. Understanding these communities’ attitudes toward these treatments is paramount.

Conclusions

The APOI questionnaire is a valid and reliable measure of attitudes toward therapist-assisted iCBT among Black Americans. However, some of the indicators were only weakly associated with the global factor of acceptability, and a bifactor model demonstrated better goodness-of-fit than the hierarchical, 4-factor structure proposed by the original authors. This provides strong evidence that the APOI demonstrates multidimensionality and that there is greater utility in interpreting APOI subscales as attitudinal constructs distinct from a global acceptability factor. Indeed, attitudes of acceptability comprise both positive and negative attitudes toward the uptake of digital mental health interventions and must be evaluated in tandem to effectively understand the nuanced attitudes consumers may hold toward these interventions. This is the first study to examine the psychometric properties of any measure of consumer attitudes toward digital mental health interventions among Black participants. Demonstrating the reliability, validity, and cultural equivalency of existing measures of attitudes toward these interventions is needed to improve our understanding of the drivers of and barriers to using digital treatments among minoritized communities. For the full potential of digital mental health interventions to improve equitable access to treatment to be realized, more adequate representation of minoritized communities in research on these interventions must be achieved.

Acknowledgments

The authors would like to thank Lee Branum-Martin, PhD, for consultation on structural equation modeling and confirmatory factor analyses. The data in this study were originally collected by Ellis and Anderson [37]; this paper presents original secondary analyses of that previously published experimental survey study.

Authors' Contributions

DME devised the project, main conceptual ideas, and protocol outline and conducted all the statistical analyses; designed the figures and tables; and wrote the manuscript. Both DME and PLA contributed to the final version of this manuscript. PLA supervised the project.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Materials depict fit indices for all examined confirmatory factor analyses. Mplus (version 8.4) syntax is provided for all analyses.

DOCX File , 32 KB

  1. Alegría M, Canino G, Ríos R, Vera M, Calderón J, Rusch D, et al. Inequalities in use of specialty mental health services among Latinos, African Americans, and non-Latino whites. Psychiatr Serv 2002 Dec;53(12):1547-1555. [CrossRef] [Medline]
  2. Ayalon L, Alvidrez J. The experience of Black consumers in the mental health system--identifying barriers to and facilitators of mental health treatment using the consumers' perspective. Issues Ment Health Nurs 2007 Dec 09;28(12):1323-1340. [CrossRef] [Medline]
  3. Gaston GB, Earl TR, Nisanci A, Glomb B. Perception of mental health services among Black Americans. Social Work in Mental Health 2016 Feb 16;14(6):676-695. [CrossRef]
  4. Andersson G, Titov N, Dear BF, Rozental A, Carlbring P. Internet-delivered psychological treatments: from innovation to implementation. World Psychiatry 2019 Feb;18(1):20-28 [FREE Full text] [CrossRef] [Medline]
  5. Gerhards SA, de Graaf LE, Jacobs LE, Severens JL, Huibers MJ, Arntz A, et al. Economic evaluation of online computerised cognitive-behavioural therapy without support for depression in primary care: randomised trial. Br J Psychiatry 2010 Apr;196(4):310-318. [CrossRef] [Medline]
  6. Hedman E, Andersson E, Ljótsson B, Andersson G, Rück C, Lindefors N. Cost-effectiveness of internet-based cognitive behavior therapy vs. cognitive behavioral group therapy for social anxiety disorder: results from a randomized controlled trial. Behav Res Ther 2011 Nov;49(11):729-736. [CrossRef] [Medline]
  7. Carolan S, de Visser RO. Employees' perspectives on the facilitators and barriers to engaging with digital mental health interventions in the workplace: qualitative study. JMIR Ment Health 2018 Jan 19;5(1):e8 [FREE Full text] [CrossRef] [Medline]
  8. Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res 2021 Mar 24;23(3):e24387 [FREE Full text] [CrossRef] [Medline]
  9. Himle JA, Weaver A, Zhang A, Xiang X. Digital mental health interventions for depression. Cognit Behavioral Pract 2022 Feb;29(1):50-59. [CrossRef]
  10. Johansson R, Andersson G. Internet-based psychological treatments for depression. Expert Rev Neurother 2012 Jul;12(7):861-9; quiz 870. [CrossRef] [Medline]
  11. Linardon J, Cuijpers P, Carlbring P, Messer M, Fuller-Tyszkiewicz M. The efficacy of app-supported smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials. World Psychiatry 2019 Oct 09;18(3):325-336 [FREE Full text] [CrossRef] [Medline]
  12. Bernstein EE, Weingarden H, Wolfe EC, Hall MD, Snorrason I, Wilhelm S. Human support in app-based cognitive behavioral therapies for emotional disorders: scoping review. J Med Internet Res 2022 Apr 08;24(4):e33307 [FREE Full text] [CrossRef] [Medline]
  13. Andrews G, Basu A, Cuijpers P, Craske M, McEvoy P, English C, et al. Computer therapy for the anxiety and depression disorders is effective, acceptable and practical health care: an updated meta-analysis. J Anxiety Disord 2018 Apr;55:70-78 [FREE Full text] [CrossRef] [Medline]
  14. Barak A, Hen L, Boniel-Nissim M, Shapira N. A comprehensive review and a meta-analysis of the effectiveness of internet-based psychotherapeutic interventions. J Technol Human Serv 2008 Jul 03;26(2-4):109-160. [CrossRef]
  15. Apolinário-Hagen J, Kemper J, Stürmer C. Public acceptability of e-mental health treatment services for psychological problems: a scoping review. JMIR Ment Health 2017 Apr 03;4(2):e10 [FREE Full text] [CrossRef] [Medline]
  16. Waller R, Gilbody S. Barriers to the uptake of computerized cognitive behavioural therapy: a systematic review of the quantitative and qualitative evidence. Psychol Med 2009 May;39(5):705-712. [CrossRef] [Medline]
  17. Casey LM, Joy A, Clough BA. The impact of information on attitudes toward e-mental health services. Cyberpsychol Behav Soc Netw 2013 Aug;16(8):593-598. [CrossRef] [Medline]
  18. Mitchell N, Gordon PK. Attitudes towards computerized CBT for depression amongst a student population. Behav Cognit Psychother 2007 May 14;35(4):421-430. [CrossRef]
  19. Travers MF, Benton SA. The acceptability of therapist-assisted, internet-delivered treatment for college students. J College Student Psychother 2014 Jan 14;28(1):35-46. [CrossRef]
  20. Mohr DC, Siddique J, Ho J, Duffecy J, Jin L, Fokuo JK. Interest in behavioral and psychological treatments delivered face-to-face, by telephone, and by internet. Ann Behav Med 2010 Aug;40(1):89-98 [FREE Full text] [CrossRef] [Medline]
  21. Choi I, Sharpe L, Li S, Hunt C. Acceptability of psychological treatment to Chinese- and Caucasian-Australians: internet treatment reduces barriers but face-to-face care is preferred. Soc Psychiatry Psychiatr Epidemiol 2015 Jan;50(1):77-87. [CrossRef] [Medline]
  22. Molloy A, Ellis DM, Su L, Anderson PL. Improving acceptability and uptake behavior for internet-based cognitive-behavioral therapy. Front Digit Health 2021;3:653686 [FREE Full text] [CrossRef] [Medline]
  23. Ng MM, Firth J, Minen M, Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv 2019 Jul 01;70(7):538-544 [FREE Full text] [CrossRef] [Medline]
  24. Schröder J, Sautier L, Kriston L, Berger T, Meyer B, Späth C, et al. Development of a questionnaire measuring attitudes towards Psychological Online Interventions-the APOI. J Affect Disord 2015 Nov 15;187:136-141. [CrossRef] [Medline]
  25. Devilly GJ, Borkovec TD. Psychometric properties of the credibility/expectancy questionnaire. J Behav Ther Exp Psychiatry 2000 Jun;31(2):73-86. [CrossRef] [Medline]
  26. Handley T, Perkins D, Kay-Lambkin F, Lewin T, Kelly B. Familiarity with and intentions to use internet-delivered mental health treatments among older rural adults. Aging Ment Health 2015;19(11):989-996. [CrossRef] [Medline]
  27. Wootton BM, Titov N, Dear BF, Spence J, Kemp A. The acceptability of internet-based treatment and characteristics of an adult sample with obsessive compulsive disorder: an internet survey. PLoS One 2011;6(6):e20548 [FREE Full text] [CrossRef] [Medline]
  28. Apolinário-Hagen J, Vehreschild V, Alkoudmani RM. Current views and perspectives on e-mental health: an exploratory survey study for understanding public attitudes toward internet-based psychotherapy in Germany. JMIR Ment Health 2017 Feb 23;4(1):e8 [FREE Full text] [CrossRef] [Medline]
  29. Clough B, Eigeland J, Madden I, Rowland D, Casey L. Development of the eTAP: a brief measure of attitudes and process in e-interventions for mental health. Internet Interv 2019 Dec;18:100256 [FREE Full text] [CrossRef] [Medline]
  30. Gómez Penedo JM, Berger T, Grosse Holtforth M, Krieger T, Schröder J, Hohagen F, et al. The Working Alliance Inventory for guided Internet interventions (WAI-I). J Clin Psychol 2020 Jun;76(6):973-986. [CrossRef] [Medline]
  31. Miloff A, Carlbring P, Hamilton W, Andersson G, Reuterskiöld L, Lindner P. Measuring alliance toward embodied virtual therapists in the era of automated treatments with the Virtual Therapist Alliance Scale (VTAS): development and psychometric evaluation. J Med Internet Res 2020 Mar 24;22(3):e16660 [FREE Full text] [CrossRef] [Medline]
  32. Teles S, Ferreira A, Paúl C. Assessing attitudes towards online psychoeducational interventions: psychometric properties of a Brief Attitudes Scale. Health Soc Care Community 2021 Sep;29(5):e1-10. [CrossRef] [Medline]
  33. Miragall M, Baños RM, Cebolla A, Botella C. Working alliance inventory applied to virtual and augmented reality (WAI-VAR): psychometrics and therapeutic outcomes. Front Psychol 2015 Oct 08;6:1531 [FREE Full text] [CrossRef] [Medline]
  34. Horvath AO, Bedi RP. The alliance. In: Psychotherapy Relationships That Work: Therapist Contributions and Responsiveness to Patients. Oxford, United Kingdom: Oxford University Press; 2002.
  35. McDonald RP, Ho MR. Principles and practice in reporting structural equation analyses. Psychol Method 2002 Mar;7(1):64-82 [FREE Full text] [CrossRef] [Medline]
  36. Kline R. Principles and Practice of Structural Equation Modeling. New York City, NY: Guilford Publications; 2015.
  37. Ellis DM, Anderson PL. Improving the acceptability of internet-based cognitive-behavioral therapy among Black Americans. Technol Mind Behav 2021 Oct 18;2(3). [CrossRef]
  38. Magaard JL, Seeralan T, Schulz H, Brütt AL. Factors associated with help-seeking behaviour among individuals with major depression: a systematic review. PLoS One 2017;12(5):e0176730 [FREE Full text] [CrossRef] [Medline]
  39. Lovibond S, Lovibond P, Psychology Foundation of Australia. Manual for the Depression Anxiety Stress Scales. Sydney, N.S.W: Psychology Foundation of Australia; 1995.
  40. Lovibond PF, Lovibond SH. The structure of negative emotional states: comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behav Res Ther 1995 Mar;33(3):335-343. [CrossRef] [Medline]
  41. Norton PJ. Depression Anxiety and Stress Scales (DASS-21): psychometric analysis across four racial groups. Anxiety Stress Coping 2007 Sep;20(3):253-265. [CrossRef] [Medline]
  42. Fan X, Thompson B, Wang L. Effects of sample size, estimation methods, and model specification on structural equation modeling fit indexes. Structural Equat Model Multidisciplinary J 1999 Jan;6(1):56-83. [CrossRef]
  43. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Structural Equat Model Multidisciplinary J 1999 Jan;6(1):1-55. [CrossRef]
  44. Marsh HW, Hocevar D. Application of confirmatory factor analysis to the study of self-concept: first- and higher order factor models and their invariance across groups. Psychol Bull 1985 May;97(3):562-582. [CrossRef]
  45. Ellis D, Anderson P. Cross-cultural validation of the attitudes towards psychological online interventions questionnaire among Black Americans. OSF Registries. 2022.   URL: https://osf.io/y3r2p [accessed 2022-02-16]
  46. Markon KE. Bifactor and hierarchical models: specification, inference, and interpretation. Annu Rev Clin Psychol 2019 May 07;15(1):51-69. [CrossRef] [Medline]
  47. Rijmen F. Formal relations and an empirical comparison among the bi-factor, the Testlet, and a second-order multidimensional IRT model. J Educ Measurement 2010;47(3):361-372. [CrossRef]
  48. Yung Y, Thissen D, McLeod LD. On the relationship between the higher-order factor model and the hierarchical factor model. Psychometrika 1999 Jun;64(2):113-128. [CrossRef]
  49. Satorra A, Bentler PM. Ensuring positiveness of the scaled difference chi-square test statistic. Psychometrika 2010 Jun 20;75(2):243-248 [FREE Full text] [CrossRef] [Medline]
  50. Cucina J, Byle K. The bifactor model fits better than the higher-order model in more than 90% of comparisons for mental abilities test batteries. J Intell 2017 Jul 11;5(3):27 [FREE Full text] [CrossRef] [Medline]
  51. Reise SP, Moore TM, Haviland MG. Bifactor models and rotations: exploring the extent to which multidimensional data yield univocal scale scores. J Pers Assess 2010 Nov;92(6):544-559 [FREE Full text] [CrossRef] [Medline]
  52. Reise S, Bonifay W, Haviland M. Bifactor modelling and the evaluation of scale scores. In: The Wiley Handbook of Psychometric Testing, 2 Volume Set A Multidisciplinary Reference on Survey, Scale and Test Development · Volume 1. Hoboken, New Jersey: Wiley; 2018.
  53. Ajzen I. The theory of planned behavior. Organizational Behav Human Decis Process 1991 Dec;50(2):179-211. [CrossRef]
  54. Schueller SM, Hunter JF, Figueroa C, Aguilera A. Use of digital mental health for marginalized and underserved populations. Curr Treat Options Psych 2019 Jul 5;6(3):243-255. [CrossRef]


APOI: Attitudes Towards Psychological Online Interventions
CFI: comparative fit index
DASS-21: Depression Anxiety Stress Scale-21 items
iCBT: internet-based cognitive behavioral therapy
RMSEA: root mean square error of approximation
SRMR: standardized root mean squared residual
TLI: Tucker-Lewis index
WLSMV: weighted least squares means and variance adjusted


Edited by J Torous; submitted 01.11.22; peer-reviewed by S Pardini, P Chow; comments to author 23.02.23; revised version received 05.03.23; accepted 06.03.23; published 27.04.23

Copyright

©Donovan Michael Ellis, Page Lyn Anderson. Originally published in JMIR Mental Health (https://mental.jmir.org), 27.04.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.