Published on 09.08.2022 in Vol 9, No 8 (2022): August

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/38428.
Predicting Patient Wait Times by Using Highly Deidentified Data in Mental Health Care: Enhanced Machine Learning Approach

Authors of this article:

Amir Rastpour1; Carolyn McGregor1,2

Original Paper

1Faculty of Business and Information Technology, Ontario Tech University, Oshawa, ON, Canada

2Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, Australia

Corresponding Author:

Amir Rastpour, BSc, MSc, PhD

Faculty of Business and Information Technology

Ontario Tech University

2000 Simcoe St N

Oshawa, ON, L1G 0C5

Canada

Phone: 1 905 721 8668 ext 2830

Fax: 1 905 721 3167

Email: amir.rastpour@ontariotechu.ca


Background: Wait times impact patient satisfaction, treatment effectiveness, and the efficiency of the care that patients receive. Wait time prediction in mental health is a complex task and is affected by the difficulty of predicting the required number of treatment sessions for outpatients, high no-show rates, and the possibility of using group treatment sessions. The task of wait time analysis becomes even more challenging if the input data have low utility, which happens when the data are highly deidentified by removing both direct and quasi identifiers.

Objective: The first aim of this study was to develop machine learning models to predict the wait time from referral to the first appointment for psychiatric outpatients by using real-time data. The second aim was to enhance the performance of these predictive models by utilizing the system’s knowledge even though the input data were highly deidentified. The third aim was to identify the factors that drove long wait times, and the fourth aim was to build these models such that they were practical and easy to implement (and therefore attractive to care providers).

Methods: We analyzed retrospective highly deidentified administrative data from 8 outpatient clinics at Ontario Shores Centre for Mental Health Sciences in Canada by using 6 machine learning methods to predict the first appointment wait time for new outpatients. We used the system’s knowledge to mitigate the low utility of our data. The data included 4187 patients who received care through 30,342 appointments.

Results: The average wait time varied widely between different types of mental health clinics. For more than half of the clinics, the average wait time was longer than 3 months. The number of scheduled appointments and the rate of no-shows varied widely among clinics. Despite these variations, the random forest method provided the minimum root mean square error values for 4 of the 8 clinics, and the second minimum root mean square error for the other 4 clinics. Utilizing the system’s knowledge increased the utility of our highly deidentified data and improved the predictive power of the models.

Conclusions: The random forest method, enhanced with the system’s knowledge, provided reliable wait time predictions for new outpatients, regardless of low utility of the highly deidentified input data and the high variation in wait times across different clinics and patient types. The priority system was identified as a factor that contributed to long wait times, and a fast-track system was suggested as a potential solution.

JMIR Ment Health 2022;9(8):e38428

doi:10.2196/38428

Introduction

The length and predictability of wait times are important factors that impact patient satisfaction, treatment effectiveness, and the efficiency of care that the patients receive. Providing patients with accurate wait time predictions and informing them about potential appointment delays increase the patients’ satisfaction level and enable care providers and staff members to manage the patient flow more effectively and efficiently [1-3]. Lengthy wait times are significantly associated with prognosis deterioration in mental health care [4] and are associated with higher rates of no-shows that adversely impact wait time management. The issue of long wait times is worse for children and youth with mental health problems, with some waiting as long as 2.5 years [5].

A great deal of research has been conducted on wait time prediction and on identifying the factors that drive lengthy wait times in physical health care sectors, including emergency departments [6], maternity emergency rooms [7], radiology departments [3], and oncology departments [8]. These wait time prediction models are usually developed for systems in which care is provided during a single visit and is provided to patients individually. Mental health care is offered in a different context: care is usually provided through multiple consecutive visits, the number of which is not necessarily known at the beginning of treatment, and care can be provided to a group of patients, as in group consultation sessions. In addition, psychiatric clinics face high no-show rates that make the task of wait time prediction even more difficult. Because of these intrinsic differences between the care provided for patients with physical problems and the care provided for psychiatric patients, the wait time models developed for physical care cannot be readily used in the context of psychiatric care.

The task of predicting wait times becomes even more challenging when the available data are highly deidentified, that is, when all direct identifiers (such as name, address, and license plate) and quasi identifiers (such as gender, date of birth, and zip code) are removed. Although it is common practice to deidentify research data by removing direct identifiers, removing the quasi identifiers as well makes highly deidentified data more attractive from a privacy point of view but compromises the utility of information that could otherwise be used to analyze and improve the system [9,10].

We use state-of-the-art machine learning (ML) methods to predict outpatient wait times at a tertiary mental health hospital by using real-time highly deidentified data. In addition, we use these models to identify the key factors that drive wait times. ML methods are sophisticated tools that can capture hidden patterns in large and imperfect data more effectively than conventional linear regression methods. ML methods are resistant to noise in the data, and they adapt quickly to operational changes in wait time management processes without human supervision [11-14]. ML methods have been widely used in the mental health care sector for diagnosis [15-18], prognosis [19-22], treatment [23-25], and other medical purposes; several literature reviews provide systematic surveys of this work [26-29]. However, to the best of our knowledge, ML methods have not yet been applied to wait time prediction in the mental health care sector, although other health care sectors have benefited from these sophisticated methods in their waiting list management.

A system’s knowledge is obtained through systems thinking, which is defined as seeing the relationship among components (rather than seeing the components individually) and observing the patterns of change (rather than static “snapshots”) [30]. In the context of an emergency department, it has been shown that ML models provide more accurate wait time predictions when they are enhanced with the system’s knowledge in the presence of quasi identifiers such as age and gender [31]. We obtain and use the system’s knowledge to enhance the predictive power of our ML models in the absence of quasi identifiers.

The first objective of this study was to develop 6 ML methods (namely, linear regression, random forest, weighted k-nearest neighbors, support vector machine, neural network, and decision tree) for real-time prediction of wait time for new outpatients in 8 outpatient clinics in Ontario Shores Centre for Mental Health Sciences (Ontario Shores) in Ontario, Canada. The second objective was to enhance the predictive power of ML models by using the system’s knowledge while having highly deidentified input data. The third objective was to assess variable importance to understand what factors drove long wait times. The fourth objective was to develop models such that care providers could understand and use them relatively easily without the need for background knowledge on ML models and their implementation.


Methods

Data Source and Data Preparation

In this research, we used highly deidentified retrospective administrative data from Ontario Shores to build ML models for predicting new outpatients’ wait times. Our focus was on 8 outpatient clinics, namely, the Anxiety and Mood Disorders (AMD) Clinic, Traumatic Stress Clinic, Borderline Personality Self-Regulation Clinic, Women’s Clinic, Prompt Care Clinic, Prompt AMD Consultation, Prompt Transitional Aged Youth Consultation, and Prompt Adolescent Consultation. Our data included 4998 patients whose first appointment was between April 1, 2017 and September 30, 2019 (both days inclusive). We excluded 30 (0.6%) patients because of missing referral dates. Selecting patients based on the date of their first appointment, rather than their referral date, introduced a selection bias in favor of 2 groups of patients: (1) patients with longer wait times among those whose referral date was just before April 1, 2017 and (2) patients with shorter wait times among those whose referral date was just before September 30, 2019. To address this selection bias, we removed all patients whose referral date was before April 1, 2017, and, for each clinic, we removed all patients whose referral date was after September 30, 2019 minus the 80th percentile of the wait times in that specific clinic. After removing the biased data, we were left with 4187 referral entries. Table 1 shows the breakdown of patient count by clinic.
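To make the trimming rule concrete, the following is a minimal sketch in R (the language used for all our analyses); the data frame `referrals` and the columns `clinic`, `referral_date`, and `wait_days` are illustrative names, not the actual schema.

```r
# Minimal sketch of the selection-bias trimming rule; `referrals`,
# `clinic`, `referral_date`, and `wait_days` are illustrative names.
library(dplyr)

study_start <- as.Date("2017-04-01")
study_end   <- as.Date("2019-09-30")

referrals_trimmed <- referrals %>%
  # Drop left-censored entries: patients referred before the study window.
  filter(referral_date >= study_start) %>%
  group_by(clinic) %>%
  # Keep only referrals early enough that 80% of the waits observed in this
  # clinic would have led to a first appointment before the window closed.
  filter(referral_date <= study_end - quantile(wait_days, 0.8)) %>%
  ungroup()
```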

Table 1. Summary statistics of patients’ wait time across different clinics (N=4187).

Clinic | Patients, n | Wait time, mean (SD), days | Wait time, median, days
Anxiety and Mood Disorders Clinic | 298 | 98.08 (105.71) | 64
Traumatic Stress Clinic | 203 | 173.85 (113.58) | 165
Borderline Personality Self-Regulation Clinic | 181 | 107.42 (60.33) | 112
Women’s Clinic | 155 | 80.19 (47.08) | 75
Prompt Care Clinic | 2338 | 29.05 (23.90) | 21
Prompt Anxiety and Mood Disorders Consultation | 436 | 186.42 (138.16) | 205.5
Prompt Transitional Aged Youth Consultation | 402 | 54.5 (45.77) | 37
Prompt Adolescent Consultation | 174 | 97.14 (57.17) | 90.5

Variables

Outcome Variable

We aimed to predict the wait time, defined as the time from referral to the first appointment.

Predictor Variables Included in Our Data Set

Relevant variables for each patient were selected from the electronic health record. The medical variables included the referral date, triage date, priority level (low, medium, or high) designated at triage, all appointment dates, the status of each appointment (attended, no-show, or cancelled), and possible status changes while waiting or receiving care.

Engineered Predictor Variables

To better understand how our 8 clinics provided care to their patients, we had monthly feedback sessions with the care providers. In these sessions, the components of the care system were identified and their interactions were outlined. Figure 1 presents a schematic view of the care processes at our clinics of interest. At first, care providers (usually family doctors) sent referrals to Ontario Shores. Then, a triage clinician assessed the patients and made intake decisions (accept or decline). Accepted patients entered the wait list associated with each clinic and remained there until the first appointment with their clinician. The outcome variable, wait time, was the wait time of the primary queue, as presented in Figure 1. Primary queue patients who were ahead of a new arrival directly impacted the new patient’s wait time. Understanding how the system worked led us to realize that although the follow-up queues were downstream of the primary queue, they also indirectly impacted the wait time of new patients. This indirect impact arose because the follow-up queues utilized the same resources (clinicians) that the primary queue utilized; therefore, a new patient’s wait time depended on how much of the resource capacity was utilized by the follow-up queues. Another important output from our systems analysis discussions was that the clinics had utilizations close to 100%, which meant that all of the care capacity offered by the clinics was assigned to patients. This allowed us to approximate the offered care capacity by adding up the provided care. Obtaining the system’s knowledge led us to define and measure the following predictor variables:

  1. The number of patients from each priority level in the primary and follow-up queues.
  2. The cumulative wait time of patients from each priority level who were in follow-up queues.
  3. The cumulative amount of service (treatment) that patients from each priority level in follow-up queues had already received (the amount of service received in the primary queue is zero).
  4. The cumulative amount of time that patients from each priority level had already spent in follow-up queues.
  5. The total care capacity during the 30-, 60-, and 90-day time windows just before the referral date.
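As an illustration of how such systems-informed predictors can be computed from appointment logs, the sketch below derives predictor 1 (primary queue sizes by priority level) at each referral time; the table and column names (`referrals`, `referral_date`, `first_appt_date`, `priority`) are assumptions for illustration, not the hospital’s schema.

```r
# Illustrative computation of predictor 1: the number of patients of each
# priority level still waiting in the primary queue when a new referral
# arrives. Table and column names are assumed for illustration.
library(dplyr)
library(purrr)

queue_size <- function(t, refs, level) {
  # Referred before time t but first appointment not yet held: still waiting.
  sum(refs$referral_date < t & refs$first_appt_date >= t & refs$priority == level,
      na.rm = TRUE)
}

referrals <- referrals %>%
  mutate(
    queueSize_low    = map_int(referral_date, queue_size, refs = referrals, level = "low"),
    queueSize_medium = map_int(referral_date, queue_size, refs = referrals, level = "medium"),
    queueSize_high   = map_int(referral_date, queue_size, refs = referrals, level = "high")
  )
```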

Figure 1. A schematic view of an outpatient receiving mental health care.

Missing Values

Some patients were missing their designated priority level at triage. However, these patients had a priority designated to them at a later date (possibly owing to re-evaluations while waiting). If the priority level at triage was missing, we replaced it with the priority level designated at the closest date after triage.
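A sketch of this imputation rule, assuming a long-format table `priority_records` (one row per patient per priority designation) and a `patients` table holding the triage date and the possibly missing triage priority; all names are illustrative.

```r
# Replace a missing triage priority with the priority designated at the
# closest date after triage. `patients` (patient_id, triage_date,
# triage_priority) and `priority_records` (patient_id, record_date,
# priority) are illustrative tables.
library(dplyr)

next_priority <- priority_records %>%
  inner_join(select(patients, patient_id, triage_date), by = "patient_id") %>%
  filter(record_date > triage_date) %>%
  group_by(patient_id) %>%
  slice_min(record_date, n = 1, with_ties = FALSE) %>%  # closest post-triage record
  ungroup() %>%
  select(patient_id, later_priority = priority)

patients_imputed <- patients %>%
  left_join(next_priority, by = "patient_id") %>%
  mutate(triage_priority = coalesce(triage_priority, later_priority)) %>%
  select(-later_priority)
```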

Dimensionality Reduction

High dimensionality, that is, having too many variables in a model, may cause many complications, including overfitting and a higher sampling variance (ie, sensitivity to small fluctuations in the training set) [32-34]. We reduced the dimensionality of our data by selecting a subset of variables (and discarding the rest) while retaining as much information as possible from all variables. We kept all of the medical variables, and among the variables obtained from the systems analysis, we calculated the pairwise Pearson correlations and removed one variable from each pair whose correlation was larger than 0.9, as in [35].
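This correlation filter maps directly onto the tidymodels ecosystem that we used for modeling; the following is a sketch, assuming a training data frame `train` with the outcome `wait_days` (illustrative names).

```r
# Correlation-based dimensionality reduction: remove predictors until all
# pairwise Pearson correlations fall below the 0.9 threshold.
library(recipes)

rec <- recipe(wait_days ~ ., data = train) %>%
  step_corr(all_numeric_predictors(), threshold = 0.9)

# Inspect which predictors the filter drops on the training set.
train_reduced <- bake(prep(rec), new_data = NULL)
setdiff(names(train), names(train_reduced))
```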

Outliers

We used the generalized extreme studentized deviate method to identify the outliers [36]. This method iteratively applies the generalized extreme studentized deviate test and progressively evaluates anomalies by removing potential outliers and recalculating the test statistic and the associated critical value. The procedure continues until all outliers are identified.
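For reference, the procedure can be written compactly in base R; the sketch below follows Rosner’s formulation [36] (the rosnerTest function in the EnvStats package implements the same test), with `k` as the prespecified maximum number of outliers.

```r
# Generalized extreme studentized deviate (ESD) test: returns the indices
# of the detected outliers in x, testing for up to k outliers at level alpha.
gesd_outliers <- function(x, k = 10, alpha = 0.05) {
  n <- length(x)
  y <- x
  removed <- integer(k)
  R <- lambda <- numeric(k)
  for (i in seq_len(k)) {
    dev <- abs(y - mean(y))
    worst <- which.max(dev)
    R[i] <- dev[worst] / sd(y)                 # test statistic R_i
    p <- 1 - alpha / (2 * (n - i + 1))
    t_crit <- qt(p, df = n - i - 1)
    lambda[i] <- (n - i) * t_crit /            # critical value lambda_i
      sqrt((n - i - 1 + t_crit^2) * (n - i + 1))
    removed[i] <- which(x == y[worst])[1]      # position in the original vector
    y <- y[-worst]                             # remove and recalculate
  }
  # The number of outliers is the largest i for which R_i > lambda_i.
  n_out <- if (any(R > lambda)) max(which(R > lambda)) else 0
  removed[seq_len(n_out)]
}
```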

ML Methods

Implementation

We examined 6 different ML methods, namely, linear regression, random forest, weighted k-nearest neighbors, support vector machine, neural network, and decision tree [11]. We used R version 4.0.2 (2020-06-22) and developed all our predictive models in the tidymodels ecosystem of packages [37]. The tidymodels ecosystem was used to streamline the modeling procedure and to avoid coding variations caused by using separate packages for each ML tool. Streamlining the modeling procedure simplified the implementation and debugging steps and therefore made the models more likely to be used by Ontario Shores.
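As a minimal sketch of this streamlined setup, the random forest specification below shows the pattern; the other 5 methods reuse the same recipe and workflow skeleton with a different parsnip model specification (object names are illustrative).

```r
# One of the 6 model specifications (random forest via the ranger engine)
# plugged into a tidymodels workflow; trees is fixed at 1000 (see Table 5)
# while mtry and min_n are tuned.
library(tidymodels)

rf_spec <- rand_forest(mtry = tune(), trees = 1000, min_n = tune()) %>%
  set_engine("ranger") %>%
  set_mode("regression")

rf_wf <- workflow() %>%
  add_recipe(rec) %>%   # the preprocessing recipe from the earlier sections
  add_model(rf_spec)
```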

Tuning and Evaluation

For each of the ML modeling approaches, there were multiple hyperparameters that we needed to tune to make sure that the obtained output was the best (or close to the best) possible from that model. To obtain good models and to avoid overfitting, the data for each clinic were randomly divided into a training set (75% of the data) and a testing set (the remaining 25%). First, we applied the Latin hypercube sampling method [38] to create the search grid within the range of values of each hyperparameter. Then, we selected the best value for each hyperparameter by conducting an exhaustive grid search using 10-fold cross-validation. As our outcome variable, wait time, was continuous, we used the root mean square error (RMSE) to compare the performance of different models.
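A sketch of this tuning loop for the random forest workflow above; the seed and the grid size are illustrative choices, and `clinic_data` stands for one clinic’s prepared data.

```r
# 75/25 split, Latin hypercube search grid, 10-fold cross-validation,
# and RMSE as the selection metric.
library(tidymodels)
set.seed(2022)                          # illustrative seed

split <- initial_split(clinic_data, prop = 0.75)
folds <- vfold_cv(training(split), v = 10)

grid <- grid_latin_hypercube(
  finalize(mtry(), training(split)),    # mtry's upper bound depends on the data
  min_n(),
  size = 30                             # illustrative grid size
)

tuned <- tune_grid(rf_wf, resamples = folds, grid = grid,
                   metrics = metric_set(rmse))

best_rf   <- select_best(tuned, metric = "rmse")
final_fit <- finalize_workflow(rf_wf, best_rf) %>% last_fit(split)
collect_metrics(final_fit)              # RMSE on the held-out 25%
```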

Ethics Approval

This study was approved by the Research Ethics Boards at Ontario Tech University (15596) and Ontario Shores (19-009-D).


Results

Summary Statistics

The summary statistics of our data for the different clinics are shown in Table 1. The average wait time varied widely between the clinics, from 29 days for the Prompt Care Clinic to 186 days for the Prompt AMD Consultation. For more than half of the clinics, the average wait time was longer than 3 months. The median wait times, which varied between 21 days for the Prompt Care Clinic and 205.5 days for the Prompt AMD Consultation, confirmed the long wait times. Our data also showed large standard deviations, which varied between 23.9 days for the Prompt Care Clinic and 138.16 days for the Prompt AMD Consultation. This high variation in wait times made it difficult for both care providers and patients to plan ahead.

The summary statistics of the appointments across the different clinics are shown in Table 2. In total, the data set included 30,342 appointments, of which 4862 (16%) were no-shows. The number of scheduled appointments varied widely among clinics, from 307 appointments for the Prompt Adolescent Consultation to 10,506 appointments for the Borderline Personality Self-Regulation Clinic. The proportion of no-shows also varied widely across clinics, from 2.6% (14/527) for the Prompt Transitional Aged Youth Consultation to 22.2% (623/2804) for the Women’s Clinic. Figure 2 shows the average wait time and 95% CI of all patients stratified by priority level and clinic. This figure illustrates that low-priority patients had the longest wait time in all clinics, except for the Women’s Clinic, where the medium-priority patients had the longest average wait time.

Table 2. Summary statistics of all the appointments and no-show appointments per patient across clinics.

Clinic | Appointments, n (total 30,342) | Per patient, mean (SD) | Per patient, median | No-shows, n (%) (total 4862) | Per patient, mean (SD) | Per patient, median
Anxiety and Mood Disorders Clinic | 6830 | 21.68 (23.03) | 17 | 1431 (20.9) | 4.54 (6.84) | 2
Traumatic Stress Clinic | 5617 | 27.27 (16.08) | 26 | 1148 (20.4) | 5.57 (5.30) | 4
Borderline Personality Self-Regulation Clinic | 10,506 | 57.73 (50.18) | 42.5 | 1453 (13.8) | 7.98 (7.71) | 5
Women’s Clinic | 2804 | 17.97 (14.53) | 17.5 | 623 (22.2) | 3.99 (5.16) | 2
Prompt Care Clinic | 3167 | 1.34 (0.8) | 1 | 158 (4.9) | 0.07 (0.32) | 0
Prompt Anxiety and Mood Disorders Consultation | 584 | 1.08 (0.28) | 1 | 17 (2.9) | 0.03 (0.19) | 0
Prompt Transitional Aged Youth Consultation | 527 | 1.05 (0.22) | 1 | 14 (2.6) | 0.03 (0.18) | 0
Prompt Adolescent Consultation | 307 | 1.74 (2.3) | 1 | 18 (5.8) | 0.10 (0.37) | 0
Figure 2. The mean wait time with 95% confidence interval by priority level and clinic. AMD: Anxiety and Mood Disorders; BP: Borderline Personality; Consult.: Consultation; TAY: Transitional Aged Youth.

Model Performance

We first applied our dimensionality reduction approach to the training set. The variables introduced in the section “Engineered Predictor Variables” appeared to contain similar information; we dropped all of them except for the number of patients from each priority level in the primary and follow-up queues. We used these variables, along with the variables introduced in the section “Predictor Variables Included in Our Data Set,” to build our ML models. Table 3 displays the best RMSE values obtained from each of the models for each of the clinics when we included the engineered predictors. For each clinic (ie, each row), the best performing method is marked with an asterisk. As different clinics followed different operational schemes and their wait times had different profiles, no single ML model outperformed all of the rest across all clinics. However, the random forest method appeared to be the most promising method, as it provided the minimum RMSE values for 4 of the 8 clinics and the second minimum RMSE for the other 4 clinics. It is notable that the linear regression method, despite its simplicity, outperformed the other ML methods at some clinics, such as the Women’s Clinic. This can be attributed to the existence of linear patterns in the data [39], small sample sizes [40], and the fact that the grid search method provides an optimal combination of the selected subset of hyperparameter values but cannot guarantee the global optimality of the output [13].

Table 3. Comparison of the root mean square error of different machine learning methodsa.

Clinic | Linear regression | Random forest | K-nearest neighbors | Support vector machine | Neural network | Decision tree
Anxiety and Mood Disorders Clinic | 66.88 | 49.89* | 70.37 | 52.64 | 83.44 | 50.65
Traumatic Stress Clinic | 94.02 | 93.54 | 98.95 | 86.6* | 108.36 | 102.71
Borderline Personality Self-Regulation Clinic | 50 | 49.47* | 51.94 | 50.61 | 61.98 | 56.16
Women’s Clinic | 33.45* | 36.16 | 42 | 39.63 | 46.93 | 56.73
Prompt Care Clinic | 19.04 | 16.49* | 16.83 | 17.13 | 16.87 | 17.13
Prompt Anxiety and Mood Disorders Consultation | 121.25 | 119.6 | 125.64 | 117.83* | 142.53 | 131.28
Prompt Transitional Aged Youth Consultation | 29.82 | 26.19 | 26.3 | 23.56* | 28.92 | 28.34
Prompt Adolescent Consultation | 20.4 | 18.04* | 31.8 | 19.13 | 26.6 | 20.84

aFor each clinic (ie, each row), the best performing method is marked with an asterisk.

Hyperparameter Tuning

Table 4 displays the list of hyperparameters that we used to tune each of the ML methods, the range of values for each hyperparameter, and the selected values for the AMD clinic. The selected values varied across clinics.

Table 5 displays the selected values of the random forest hyperparameters across the different clinics. For some settings, the neural network method with 1 hidden layer can be equivalent to the linear regression method [12]; to avoid duplication, we did not consider such settings for the neural network method.

Table 4. Hyperparameters used for tuning the machine learning methods and their selected values for the Anxiety and Mood Disorders Clinic.

Machine learning method, parameter | Range | Selected value | Explanation
Linear regression | N/Aa | N/A | N/A
Random forest: mtry | 1 to 20 | 16 | Number of predictors at each split
Random forest: min_n | 2 to 40 | 14 | Minimum node size
K-nearest neighbors: neighbors | 1 to 15 | 13 | Number of neighbors to consider
K-nearest neighbors: dist_power | 0.1 to 2 | 0.21 | Minkowski distance parameter
K-nearest neighbors: weight_func | b | Rectangular | Kernel function for weighting sample distribution
Support vector machine: cost | 2^-10 to 2^5 | 2^2.31 | The cost of wrong predictions
Support vector machine: rbf_sigma | 10^-10 to 10^0 | 10^-1.76 | Radial basis function parameter
Support vector machine: margin | 0 to 0.2 | 0.11 | Epsilon in the support vector machine insensitive loss function
Neural network: hidden_units | 1 to 10 | 9 | Number of units in the hidden layer
Neural network: penalty | 10^-10 to 10^0 | 10^-0.39 | Amount of weight decay
Neural network: epochs | 10^1 to 10^3 | 993 | Number of training iterations
Decision tree: cost_complexity | 10^-10 to 10^-1 | 10^-8.02 | Cost-complexity parameter
Decision tree: tree_depth | 1 to 15 | 3 | Maximum depth of the tree
Decision tree: min_n | 2 to 40 | 17 | Minimum node size

aN/A: not applicable.

bOne of: triweight, triangular, rectangular, rank, optimal, inv, gaussian, epanechnikov, cos, biweight.

Table 5. Hyperparameters of the random forest method across different clinics.

Clinic | Count of splitting variables | Count of trees | Minimal node size
Anxiety and Mood Disorders Clinic | 16 | 1000 | 14
Traumatic Stress Clinic | 17 | 1000 | 29
Borderline Personality Self-Regulation Clinic | 2 | 1000 | 31
Women’s Clinic | 17 | 1000 | 39
Prompt Care Clinic | 17 | 1000 | 39
Prompt Anxiety and Mood Disorders Consultation | 19 | 1000 | 30
Prompt Transitional Aged Youth Consultation | 11 | 1000 | 10
Prompt Adolescent Consultation | 11 | 1000 | 10

Variable Importance

The random forest method provides measures of importance for the predictor variables. These measures help the user identify the variables that have the greatest and the least impact on the outcome variable. Figure 3 displays the importance of the predictor variables, measured by impurity (variance of the responses), at the AMD Clinic. The importance rankings at the other clinics were similar to those shown in Figure 3. According to Figure 3, priorityUpdate, countCurrentlyInService, and queueSize were the most influential variables. In our models, the priorityUpdate variables denoted the last priority assigned to each patient; the countCurrentlyInService variables denoted how many patients of each priority level were currently receiving service (ie, were in the follow-up queues) at the referral time; and the queueSize variables denoted how many patients of each priority level were waiting in the primary queue at the referral time. The seasonality variables did not play important roles in wait time prediction.
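An importance ranking like the one in Figure 3 can be reproduced by refitting the tuned forest with impurity importance enabled; the sketch below uses the AMD Clinic hyperparameter values from Table 4 and the vip package for plotting (one convenient choice, not the only one).

```r
# Refit the tuned random forest with impurity-based importance and plot
# the top predictors. mtry and min_n are the AMD Clinic values (Table 4).
library(tidymodels)
library(vip)

rf_imp_spec <- rand_forest(mtry = 16, trees = 1000, min_n = 14) %>%
  set_engine("ranger", importance = "impurity") %>%
  set_mode("regression")

workflow() %>%
  add_recipe(rec) %>%
  add_model(rf_imp_spec) %>%
  fit(data = training(split)) %>%
  extract_fit_parsnip() %>%
  vip(num_features = 10)   # bar chart of the 10 most influential predictors
```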

Figure 3. Importance of the predictor variables, measured by impurity (variance of the responses), at the Anxiety and Mood Disorders Clinic.

Discussion

Current State

Although operations management and ML tools have been widely used in different sectors of physical health care to improve waiting list management, there have been no such studies in the mental health care sector. Previous research has demonstrated the positive effects of operational and policy improvements on wait list management in physical health care, for example, in cancer care [41]. In 2015, the Canadian Wait Time Alliance [42] reported that although there had been significant progress in wait time management in the 5 areas of focus of the 2004 Health Accord (hip and knee replacement, cataract surgery, bypass surgery, radiation therapy, and diagnostic imaging), mental health care struggled with long wait times and required immediate attention nationwide. That report also noted that universal measures did not even exist to track access to psychiatric care across the country. Loebach and Ayoubzadeh [43] compared the wait times of psychiatric patients and patients with physical problems in the province of Ontario, Canada, and concluded that while the former group often ended up waiting beyond the target wait times specified by the province, the latter group often received their treatments within the target time window.

No-show Rates

In addition to the adverse impacts that long wait times have on patients’ health, long waits also lead to higher no-show rates, which create operational complications for health care managers [44,45]. The no-show rate may depend on factors such as wait time and quality of care and may vary between 5% and 80% across health care sectors [44,45]. Figure 4 illustrates the positive correlation between longer wait times and higher no-show rates in our data, which indicates that shortening wait times may decrease no-show rates as well.

Figure 4. Correlation between no-show appointments and wait times. AMD: Anxiety and Mood Disorders; BP: Borderline Personality; Consult.: Consultation; MH: Mental Health; TAY: Transitional Aged Youth.

Random Forest Method

In this research, we applied 6 different ML methods to predict wait times in mental health care and to identify the factors that drive long wait times. The input data were highly deidentified, which limited the data utility. The random forest method, enhanced with the system’s knowledge, turned out to be the most promising method. The good performance of this model can be attributed to some appealing computational features of the random forest method, including its low sensitivity to outliers and its ability to capture complex interactions between predictor variables [46]. From a practical point of view, another appealing feature of the random forest method is its relatively low sensitivity to parameter tuning [46]. Identifying the random forest method as a superior ML method for predicting wait times is consistent with the findings of Kong et al [47], who identified this method as the most accurate for calculating the probability of waiting more than 1 day before receiving treatment for patients with opioid use disorder.

Managerial Insights

The impurity measure of variable importance, which was the basis for ranking the predictor variables in Figure 3, reflects the total reduction in the variance of the responses (ie, in the mean square error of the model output) achieved by splits on each variable. According to Figure 3, the long wait times can be attributed to the use of the priority system in assigning care resources to patients. In priority systems, low-priority patients are preceded by patients from higher priority levels and may end up waiting an extended period of time for a relatively simple treatment. It is likely that, during their long waits, low-priority patients are reclassified to higher priority levels owing to deterioration of their condition. This phenomenon has been observed in other health care sectors, and a “fast-track” system for low-priority patients has been suggested as a potential solution [48-50]. In a fast-track system, the waiting line is split into 2 separate lines: one for low-priority patients and one for patients with higher priorities. One potential advantage of this approach is that, because of the simpler nature of the care required by low-priority patients, they can be attended by less-trained clinicians, freeing up the more-trained clinicians for patients with more complex needs. One potential disadvantage is that the improvement in the wait times of low-priority patients may come at the cost of longer wait times for patients with higher priorities.

Limitations

Small sample sizes, coupled with very large variation in the wait times within each clinic, were the main limitation of this study. There was also a significant difference between the wait time profiles of the clinics, such that generalized models trained on all clinics performed poorly in comparison with models for individual clinics. In addition, the following approximations also impacted the accuracy of the model predictions.

  1. Care resource capacity limitation: We had no access to the real capacity offered to patients at each clinic on a given day; therefore, we created proxy variables to approximate the capacity.
  2. Group meeting limitation: Of the clinics that we reviewed within Ontario Shores, the AMD, Traumatic Stress, Borderline Personality Self-Regulation, and Women’s clinics provided care through group meetings in which multiple patients attended at the same time. The dynamics of group visits in these clinics were not clear, and therefore, we could not explicitly capture the potential impacts of these treatments on wait times.

Conclusion

In this study, we used retrospective highly deidentified administrative data from 8 clinics at Ontario Shores to build 6 different ML models to predict wait times. We enhanced our models with the system’s knowledge to mitigate the limiting impact of deidentification on data utility. The data included 4187 patients who received care through 30,342 appointments. The random forest method provided the minimum RMSE values for 4 of the 8 clinics and the second minimum RMSE for the other 4 clinics. The priority system was identified as a factor that contributed to long wait times, and a fast-track system was suggested as a potential solution. Despite the challenges with the wait time source data, this research provided Ontario Shores with a deeper understanding of the extent of, and contributors to, their wait times on a clinic-by-clinic basis, as well as the information and knowledge to pursue quality improvement initiatives to reduce wait times.

Acknowledgments

We are grateful to Ontario Shores and its members for providing the data and partial funding and for helping us understand the data set. Ontario Shores had no role in the study design, analysis, interpretation of the findings, writing of the manuscript, or the decision to submit the paper for publication. The authors thank the editorial team for their thoughtful comments, which led to considerable improvements in this paper.

Authors' Contributions

AR was involved in the conception and design of the study and conducted all the statistical and predictive analyses. He jointly drafted the initial and revised documents with CMG and created intellectual content. CMG provided the clinical context for mental health and Ontario Shores. She assisted with the design of the analysis with regard to the inclusion and exclusion criteria for clinics and patient types, based on the various challenges with using the data in the machine learning techniques. Both authors approved the final version to be submitted for peer review.

Conflicts of Interest

None declared.

References

  1. Camacho F, Anderson R, Safrit A, Jones A, Hoffmann P. The Relationship between Patient’s Perceived Waiting Time and Office-Based Practice Satisfaction. North Carolina Medical Journal 2006 Nov 01;67(6):409-413 [FREE Full text] [CrossRef]
  2. Jaworsky C, Pianykh O, Oglevee C. Patient Feedback on Waiting Time Displays. Am J Med Qual 2017;32(1):108. [CrossRef] [Medline]
  3. Curtis C, Liu C, Bollerman T, Pianykh O. Machine Learning for Predicting Patient Wait Times and Appointment Delays. J Am Coll Radiol 2018 Sep;15(9):1310-1316. [CrossRef] [Medline]
  4. Reichert A, Jacobs R. The impact of waiting time on patient outcomes: Evidence from early intervention in psychosis services in England. Health Econ 2018 Nov;27(11):1772-1787 [FREE Full text] [CrossRef] [Medline]
  5. Kids can't wait: 2020 provincial budget recommendations for high-quality, accessible child and youth mental health and addictions care for all Ontario families. Children's Mental Health Ontario.   URL: https://cmho.org/wp-content/uploads/CMHO-Report-WaitTimes-2020.pdf [accessed 2022-07-30]
  6. Ang E, Kwasnick S, Bayati M, Plambeck EL, Aratow M. Accurate Emergency Department Wait Time Prediction. MSOM 2016 Feb;18(1):141-156 [FREE Full text] [CrossRef]
  7. Pereira S, Portela F, Santos M, Machado J, Abelha A. Predicting Pre-triage Waiting Time in a Maternity Emergency Room Through Data Mining. Smart Health 2016:105 [FREE Full text] [CrossRef]
  8. Joseph A, Hijal T, Kildea J, Hendren L, Herrera D. Predicting waiting times in radiation oncology using machine learning. 2018 Presented at: 16th IEEE International Conference on Machine Learning and Applications; January; Cancun, Mexico. [CrossRef]
  9. Deidentification guidelines for structured data. Information and Privacy Commissioner of Ontario. 2016.   URL: https://www.ipc.on.ca/wp-content/uploads/2016/08/Deidentification-Guidelines-for-Structured-Data.pdf [accessed 2022-07-30]
  10. Zaman ANK, Obimbo C, Dara R. An improved differential privacy algorithm to protect re-identification of data. 2017 Presented at: IEEE Canada International Humanitarian Technology Conference (IHTC); July 21-22; Toronto, Canada. [CrossRef]
  11. Murphy K. Machine Learning: A Probabilistic Perspective. Cambridge, MA, USA: MIT Press; 2012.
  12. James G, Witten D, Hastie T, Tibshirani R. Chapter 10. In: An Introduction to Statistical Learning: With Applications in R. New York, NY, USA: Springer; 2013.
  13. Kuhn M, Johnson K. Applied Predictive Modeling. NY, USA: Springer; 2013.
  14. Machine learning: what it is and why it matters. SAS Institute.   URL: https://www.sas.com/en_us/insights/analytics/machine-learning.html [accessed 2022-07-30]
  15. Hueniken K, Somé NH, Abdelhack M, Taylor G, Elton Marshall T, Wickens C, et al. Machine Learning-Based Predictive Modeling of Anxiety and Depressive Symptoms During 8 Months of the COVID-19 Global Pandemic: Repeated Cross-sectional Survey Study. JMIR Ment Health 2021 Nov 17;8(11):e32876 [FREE Full text] [CrossRef] [Medline]
  16. Sonkurt H, Altınöz AE, Çimen E, Köşger F, Öztürk G. The role of cognitive functions in the diagnosis of bipolar disorder: A machine learning model. Int J Med Inform 2021 Jan;145:104311. [CrossRef] [Medline]
  17. Mumtaz W, Qayyum A. A deep learning framework for automatic diagnosis of unipolar depression. Int J Med Inform 2019 Dec;132:103983. [CrossRef] [Medline]
  18. Wshah S, Skalka C, Price M. Predicting Posttraumatic Stress Disorder Risk: A Machine Learning Approach. JMIR Ment Health 2019 Jul 22;6(7):e13946 [FREE Full text] [CrossRef] [Medline]
  19. Cos H, Li D, Williams G, Chininis J, Dai R, Zhang J, et al. Predicting Outcomes in Patients Undergoing Pancreatectomy Using Wearable Technology and Machine Learning: Prospective Cohort Study. J Med Internet Res 2021 Mar 18;23(3):e23595 [FREE Full text] [CrossRef] [Medline]
  20. Wang M, Ge W, Apthorp D, Suominen H. Robust Feature Engineering for Parkinson Disease Diagnosis: New Machine Learning Techniques. JMIR Biomed Eng 2020 Jul 27;5(1):e13611 [FREE Full text] [CrossRef]
  21. Morel D, Yu K, Liu-Ferrara A, Caceres-Suriel A, Kurtz S, Tabak Y. Predicting hospital readmission in patients with mental or substance use disorders: A machine learning approach. Int J Med Inform 2020 Jul;139:104136 [FREE Full text] [CrossRef] [Medline]
  22. Hatton C, Paton L, McMillan D, Cussens J, Gilbody S, Tiffin P. Predicting persistent depressive symptoms in older adults: A machine learning approach to personalised mental healthcare. J Affect Disord 2019 Mar 01;246:857-860. [CrossRef] [Medline]
  23. Crane N, Jenkins L, Bhaumik R, Dion C, Gowins J, Mickey B, et al. Multidimensional prediction of treatment response to antidepressants with cognitive control and functional MRI. Brain 2017 Feb;140(2):472-486 [FREE Full text] [CrossRef] [Medline]
  24. Hahn T, Kircher T, Straube B, Wittchen H, Konrad C, Ströhle A, et al. Predicting treatment response to cognitive behavioral therapy in panic disorder with agoraphobia by integrating local neural information. JAMA Psychiatry 2015 Jan;72(1):68-74. [CrossRef] [Medline]
  25. Whitfield-Gabrieli S, Ghosh S, Nieto-Castanon A, Saygin Z, Doehrmann O, Chai X, et al. Brain connectomics predict response to treatment in social anxiety disorder. Mol Psychiatry 2016 May;21(5):680-685. [CrossRef] [Medline]
  26. Shatte A, Hutchinson D, Teague S. Machine learning in mental health: a scoping review of methods and applications. Psychol. Med 2019 Feb 12;49(09):1426-1448 [FREE Full text] [CrossRef]
  27. Thieme A, Belgrave D, Doherty G. Machine Learning in Mental Health. ACM Trans. Comput.-Hum. Interact 2020 Oct 31;27(5):1-53 [FREE Full text] [CrossRef]
  28. Cecula P, Yu J, Dawoodbhoy F, Delaney J, Tan J, Peacock I, et al. Applications of artificial intelligence to improve patient flow on mental health inpatient units - Narrative literature review. Heliyon 2021 Apr;7(4):e06626 [FREE Full text] [CrossRef] [Medline]
  29. Chancellor S, De Choudhury M. Methods in predictive techniques for mental health status on social media: a critical review. NPJ Digit Med 2020;3:43 [FREE Full text] [CrossRef] [Medline]
  30. Senge P. Chapter 5. In: The Fifth Discipline: The Art & Practice of the Learning Organization. Redfern, Sydney, Australia: Currency; 2006.
  31. Kuo Y, Chan N, Leung J, Meng H, So A, Tsoi K, et al. An Integrated Approach of Machine Learning and Systems Thinking for Waiting Time Prediction in an Emergency Department. Int J Med Inform 2020 Jul;139:104143. [CrossRef] [Medline]
  32. Greene W. Econometric Analysis (6th Edition). USA: Pearson; 2008.
  33. Witten IH, Frank E, Hall M. Data Mining: Practical Machine Learning Tools and Techniques (Morgan Kaufmann Series in Data Management Systems). San Francisco, USA: Morgan Kaufmann; 4th edition; Sep 30, 2011.
  34. Shmueli G. To Explain or to Predict? Statist Sci 2010 Aug 1;25(3):289 [FREE Full text] [CrossRef]
  35. Mowbray F, Zargoush M, Jones A, de Wit K, Costa A. Predicting hospital admission for older emergency department patients: Insights from machine learning. Int J Med Inform 2020 Aug;140:104163. [CrossRef] [Medline]
  36. Rosner B. Percentage Points for a Generalized ESD Many-Outlier Procedure. Technometrics 1983 May;25(2):165-172 [FREE Full text] [CrossRef]
  37. Kuhn M, Julia S. Tidy Modeling With R: A Framework for Modeling in the Tidyverse. 2021.   URL: https://www.tmwr.org/ [accessed 2021-10-22]
  38. McKay MD, Beckman RJ, Conover WJ. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 1979 May;21(2):239 [FREE Full text] [CrossRef]
  39. Kim Y. Comparison of the decision tree, artificial neural network, and linear regression methods based on the number and types of independent variables and sample size. Expert Systems with Applications 2008 Feb;34(2):1227-1234 [FREE Full text] [CrossRef]
  40. Jiao S, Gao Y, Feng J, Lei T, Yuan X. Does deep learning always outperform simple linear regression in optical imaging? Opt. Express 2020 Jan 27;28(3):3717 [FREE Full text] [CrossRef]
  41. Rastpour A, Begen M, Louie A, Zaric G. Variability of waiting times for the 4 most prevalent cancer types in Ontario: a retrospective population-based analysis. CMAJ Open 2018;6(2):E227-E234 [FREE Full text] [CrossRef] [Medline]
  42. Eliminating code gridlock in Canada’s health care system: 2015 wait time alliance report card. Wait Time Alliance.   URL: https://www.waittimealliance.ca/wta-reports/2015-wta-report-card/ [accessed 2022-07-30]
  43. Loebach R, Ayoubzadeh S. Wait times for psychiatric care in Ontario. UWOMJ 2017 Dec 03;86(2):48-50 [FREE Full text] [CrossRef]
  44. Kheirkhah P, Feng Q, Travis L, Tavakoli-Tabasi S, Sharafkhaneh A. Prevalence, predictors and economic consequences of no-shows. BMC Health Serv Res 2016 Jan 14;16:13 [FREE Full text] [CrossRef] [Medline]
  45. Marbouh D, Khaleel I, Al Shanqiti K, Al Tamimi M, Simsekler M, Ellahham S, et al. Evaluating the Impact of Patient No-Shows on Service Quality. RMHP 2020 Jun;13:509-517 [FREE Full text] [CrossRef]
  46. Zhang C, Ma Y. Ensemble Machine Learning: Methods and Applications. Boston, MA, USA: Springer; 2012.
  47. Kong Y, Zhou J, Zheng Z, Amaro H, Guerrero E. Using machine learning to advance disparities research: Subgroup analyses of access to opioid treatment. Health Serv Res 2022 Apr;57(2):411-421 [FREE Full text] [CrossRef] [Medline]
  48. Cooke M, Wilson S, Pearson S. The effect of a separate stream for minor injuries on accident and emergency department waiting times. Emerg Med J 2002 Jan;19(1):28-30 [FREE Full text] [CrossRef] [Medline]
  49. Sanchez M, Smally A, Grant R, Jacobs L. Effects of a fast-track area on emergency department performance. J Emerg Med 2006 Jul;31(1):117-120. [CrossRef] [Medline]
  50. Lin D, Patrick J, Labeau F. Estimating the waiting time of multi-priority emergency patients with downstream blocking. Health Care Manag Sci 2014 Mar;17(1):88-99 [FREE Full text] [CrossRef] [Medline]


Abbreviations

AMD: Anxiety and Mood Disorders
ML: machine learning
RMSE: root mean square error


Edited by J Torous; submitted 31.03.22; peer-reviewed by M Zargoush, D Carvalho, T Sagi; comments to author 03.05.22; revised version received 18.06.22; accepted 18.07.22; published 09.08.22

Copyright

©Amir Rastpour, Carolyn McGregor. Originally published in JMIR Mental Health (https://mental.jmir.org), 09.08.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.