Review
Abstract
Background: Uncertainty surrounds the ethical and legal implications of algorithmic and data-driven technologies in the mental health context, including technologies characterized as artificial intelligence, machine learning, deep learning, and other forms of automation.
Objective: This study aims to survey empirical scholarly literature on the application of algorithmic and data-driven technologies in mental health initiatives to identify the legal and ethical issues that have been raised.
Methods: We searched for peer-reviewed empirical studies on the application of algorithmic technologies in mental health care in the Scopus, Embase, and Association for Computing Machinery databases. A total of 1078 relevant peer-reviewed applied studies were identified, which were narrowed to 132 empirical research papers for review based on selection criteria. Conventional content analysis was undertaken to address our aims, and this was supplemented by a keyword-in-context analysis.
Results: We grouped the findings into the following five categories of technology: social media (53/132, 40.2%), smartphones (37/132, 28%), sensing technology (20/132, 15.2%), chatbots (5/132, 3.8%), and miscellaneous (17/132, 12.9%). Most initiatives were directed toward detection and diagnosis. Most papers discussed privacy, mainly in terms of respecting the privacy of research participants; there was relatively little discussion of privacy in terms of the real-world implementation of the technology. A small number of studies discussed ethics directly (10/132, 7.6%) and indirectly (10/132, 7.6%). Legal issues were not substantively discussed in any studies, although some legal issues were discussed in passing (7/132, 5.3%), such as the rights of user subjects and privacy law compliance.
Conclusions: Ethical and legal issues tend to not be explicitly addressed in empirical studies on algorithmic and data-driven technologies in mental health initiatives. Scholars may have considered ethical or legal matters at the ethics committee or institutional review board stage. If so, this consideration seldom appears in published materials in applied research in any detail. The form itself of peer-reviewed papers that detail applied research in this field may well preclude a substantial focus on ethics and law. Regardless, we identified several concerns, including the near-complete lack of involvement of mental health service users, the scant consideration of algorithmic accountability, and the potential for overmedicalization and techno-solutionism. Most papers were published in the computer science field at the pilot or exploratory stages. Thus, these technologies could be appropriated into practice in rarely acknowledged ways, with serious legal and ethical implications.
doi:10.2196/24668
Keywords
Introduction
Background
Data-driven technologies for mental health have expanded in recent years [
, ]. The COVID-19 pandemic has accelerated this shift, with physical distancing measures fast-tracking the digitization and virtualization of health and social services [ , ]. These initiatives extend from hospital- to community-based services for people with mental health conditions and psychosocial disabilities (the term mental health conditions and psychosocial disabilities is used to refer to the broad range of mental health conditions and the associated disability; the term is used by the World Health Organization [ ]). Government agencies, private technology firms, service user groups, service providers, pharmaceutical companies, professional associations, corporate services, and academic researchers are among the actors involved [ - ]. The technologies they create serve various functions, including information sharing, communication, clinical decision support, digital therapies, patient or service user and population monitoring, bioinformatics and personalized medicine, and service user health informatics [ ]. Only some of these broader digital technologies use the algorithmic technologies to which this paper now turns.
Throughout this paper, we use the term algorithmic and data-driven technologies to describe various technologies that rely on complex information processing to analyze large amounts of personal data and other information deemed useful to making decisions [
]. The term is used here to encompass technologies variously referred to as artificial intelligence, machine learning, deep learning, natural language processing, robotics, speech processing, and similar automation technologies. The paper is premised on the view that the term algorithmic and data-driven technologies offers a useful category for the purposes of this review, although important conceptual and practical differences exist between technologies within this broad category (eg, between artificial intelligence and machine learning).
In the mental health context, algorithmic and data-driven technologies are generally used to make inferences, predictions, recommendations, or decisions about individuals and populations. Predictive analysis is largely aimed at assessing a person’s health conditions. Data collection may occur in a range of settings, including services concerned with mental health, suicide prevention, or addiction support. Collection may also occur beyond these typical domains. For example, web-based platforms can draw on users’ posts or purchasing habits to flag their potential risk of suicide [
]. CCTV systems with machine learning sensors in suicide hotspots can be programmed to assess bodily movements that may precipitate a person’s suicide attempt [ ]. Education institutions may flag students who appear to be in distress based on attendance records, social media use, and physiometric monitoring [ ]. There are also examples of algorithmic technologies being used in forensic mental health settings [ ] and other criminal justice settings [ ], including databases that combine noncriminal mental health data with user-generated social media content for the apparent purpose of preventive policing [ ].
Some prominent mental health professionals have argued that digital technologies, including algorithmic and data-driven technologies, hold the potential to bridge the “global mental health treatment gap” [
] by “reach[ing] billions of people” worldwide [ ]. A 2019 Lancet Psychiatry editorial describes a “general agreement that big data and algorithms will help optimize performance in psychiatry” [ ]. Others have described “widespread agreement by health care providers, medical associations, industry, and governments that automation using digital technology could improve the delivery and quality of care in psychiatry, and reduce costs” [ ]. Indeed, governments and some private sector actors appear enthusiastic [ ]. For people who use mental health services and their representative organizations, views on algorithmic technology in mental health care appear more ambivalent, although research by service user researchers, advocates, and their representative organizations comprises only a very small part of scholarship and commentary in the field [ - ].
This study set out to identify the extent to which, and on what matters, legal and ethical issues were considered in the empirical research literature on algorithmic and data-driven technologies in mental health care. Empirical research refers simply to scholarship that seeks to use algorithmic and data-driven technology in an applied way in the mental health context.
Ethics and Law
Ethics refer to guiding principles, whereas laws, which may be based on ethical or moral principles, are enforceable rules and regulations with penalties for those who violate them. Scholarship on the ethical and legal dimensions of algorithmic and data-driven technologies in mental health care is relatively scant but growing [
- ]. Existing research generally draws together two strands: first, the ethicolegal issues involved in algorithmic and data-driven technological mental health initiatives [ - ] and, second, a broader scholarship concerning algorithmic and data-driven technologies [ - ]. We briefly discuss each of these strands.
According to Lederman et al [
], most web-based mental health interventions have not been subject to ethical scrutiny, particularly those that go beyond one-to-one web-based or phone-based counseling, such as mental health apps and moderated web-based forums. Lederman et al [ ] suggest using the classic health ethics framework, with its four principles of nonmaleficence, beneficence, respect for autonomy, and justice, particularly given its widespread use and acceptance among the health professions [ ]. However, given the emergence of digital mental health initiatives in nonclinical settings (eg, in education, work settings, social media, and financial services), other ethical frameworks and practices may be required [ , ]. Nonhealth settings are not governed by the same entrenched bioethical principles, norms of conduct, or regulatory frameworks as formal health care systems [ ]. Burr et al [ ] pointed out that the transfer of responsibility from traditional health care providers to institutions, organizations (both private and public), and individuals who are creating web-based mental health initiatives gives rise to new ethical considerations. These include the duty to intervene in emergencies, competency to address people’s support needs, and ensuring the decisional capacity and health literacy of consumers of commercialized products [ ]. This expanded scope is a sign that the ethical literature concerning digital technology in the mental health context is growing [ , , - ], even if ethical analyses may not occur in most applied initiatives, as suggested by Lederman et al [ ]. Legal scholarship on digital technology in mental health care is sparse [ ] but tends to focus on the regulatory frameworks applicable to digital health, privacy, confidentiality, cybersecurity, and software as medical devices [ - ].
The broader ethical and legal dimensions of algorithmic technologies have been the subject of a much larger scholarship [
- , , ]. Scholars in this field are typically concerned with issues of fairness, accountability, transparency, privacy, security, reliability, inclusivity, and safety, which are examined in contexts as diverse as criminal law, consumer transactions, health, public administration, migration, and employment. Legal scholars have tended to call for technological due process (involving fair, accountable, and transparent adjudications and rulemaking), net neutrality (broadly, equal treatment of web-based content by providers of internet access services) [ ], and nondiscrimination principles [ ]. Early legal and ethical scholarship focused on efforts to ensure basic standards of algorithmic transparency and auditing, but a more recent movement of scholars, regulators, and activists has begun to ask more fundamental questions, including whether algorithmic systems should be used at all in certain circumstances, and if so, who gets to govern them [ ].
Methods
Design
This study adapted a scoping review methodology to undertake a broad exploration of the literature. Scoping reviews are particularly useful for surveying a potentially large and interdisciplinary field that has not yet been comprehensively reviewed and for which clarification of concepts is required [
], a characterization that appears apt for the use of algorithmic and data-driven technologies in mental health care. The scoping review method was also considered the most appropriate approach because it could capture the literature from several sources and disciplines with varying terminology and conceptual boundaries.
We adapted the Arksey and O’Malley framework for scoping reviews [
]. The framework involves the following five steps: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies; (4) charting results; and (5) collating, summarizing, and reporting results. A description of each step is outlined below.
We drew on elements of the Joanna Briggs Institute scoping review methodology [
] and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) extension for scoping reviews [ ] to support the rigor of our methods. Study selection included all study types, and the overall aim was to chart data according to key issues, themes, and gaps [ ]. Materials were analyzed using conventional content analysis supplemented with keyword-in-context analysis (discussed below).
Identifying the Research Question (Step 1)
We sought to identify all studies within a selective sampling frame [
] that answered the following research questions:
- In what ways are algorithmic and data-driven technologies being used in the mental health context?
- How and to what extent are issues of law and ethics being addressed in these studies?
These questions were chosen to maintain a wide approach and generate breadth of coverage [
].
Identifying Relevant Studies (Step 2)
A rapid or streamlined literature search was conducted. We started with a search string that emerged from our initial literature review (noted in the Background section). However, the search string was updated as we surveyed the literature, and new terms and ideas from other disciplines and practices were considered. We also undertook a hand search of relevant reference lists of included papers to identify other papers for inclusion. The search was not exhaustive because of the breadth of the topic area, but it aimed to be inclusive of diverse disciplines and varying conceptualizations of the topic.
The following search strings emerged through an iterative process (
). They were applied in keyword fields or abstract and title fields (where available in each database).
Iteratively developed search string.
Scopus
- (TITLE-ABS-KEY ('mental (health OR ill* OR disability OR impair*)' OR 'psychiatr*' OR 'psycholog*' OR 'beahvioral health') AND TITLE-ABS-KEY ('algorithm*' OR 'artificial intelligence' OR 'machine learning') AND TITLE-ABS-KEY ('internet' OR 'social media' OR 'chatbot' OR 'smartphone' OR 'tracking'))
Embase Ovid
- ('mental (health OR ill* OR disability OR impair*)' or 'psychiatr*' or 'beahvioral health').mp. [mp=title, abstract, heading word, drug trade name, original title, device manufacturer, drug manufacturer, device trade name, keyword, floating subheading word, candidate term word]
- “mental illness”.mp. or mental disease/
- algorithm/ or machine learning/ or artificial intelligence/
- ('algorithm*' or 'artificial intelligence' or 'machine learning').mp. [mp=title, abstract, heading word, drug trade name, original title, device manufacturer, drug manufacturer, device trade name, keyword, floating subheading word, candidate term word]
- Internet/ or “web-based”.mp.
- ('internet' or 'social media' or 'chatbot' or 'smartphone' or 'tracking').mp. [mp=title, abstract, heading word, drug trade name, original title, device manufacturer, drug manufacturer, device trade name, keyword, floating subheading word, candidate term word]
- The above search strings were applied in various combinations.
Association for Computing Machinery
- ('mental health' OR 'mental ill*' OR 'psychiatr*' OR 'behavio* health') AND ('algorithm*' OR 'artificial intelligence' OR 'machine learning') AND ('internet' OR 'social media' OR 'chatbot' OR 'smartphone' OR 'tracking')
No date limit was placed, although the search was conducted iteratively between August 2019 and February 2020. A language filter was applied to focus on English-language results for pragmatic reasons, to reduce the search scope and complexity (for more on limitations, including terms we appear to have overlooked, see the Discussion section).
After an extensive search, 1078 relevant peer-reviewed research studies were identified in the study selection stage. From these, papers that were not available in English, duplicates, and papers not available in the full text were excluded.
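To make this streamlined search and screening step more concrete, the following is a minimal illustrative sketch, not the procedure actually used in this review, of how exported database records could be deduplicated, language-filtered, and matched against a simplified version of the search strings above. The CSV column names (title, abstract, language) and regular expressions are assumptions for illustration only.

```python
import csv
import re

# Simplified stand-in for the boolean search strings: each inner list is an
# OR-group; a record must match at least one term from every group (AND of ORs).
GROUPS = [
    [r"mental (health|ill\w*|disability|impair\w*)", r"psychiatr\w*",
     r"psycholog\w*", r"behaviou?ral health"],
    [r"algorithm\w*", r"artificial intelligence", r"machine learning"],
    [r"internet", r"social media", r"chatbot", r"smartphone", r"tracking"],
]

def matches(text: str) -> bool:
    """True if a title/abstract string satisfies every OR-group."""
    text = text.lower()
    return all(any(re.search(term, text) for term in group) for group in GROUPS)

def screen(path: str) -> list:
    """Deduplicate, language-filter, and keyword-filter exported records."""
    seen_titles = set()
    included = []
    with open(path, newline="", encoding="utf-8") as f:
        for record in csv.DictReader(f):  # assumed columns: title, abstract, language
            title = record["title"].strip().lower()
            if title in seen_titles:  # crude duplicate removal on exact titles
                continue
            seen_titles.add(title)
            if record.get("language", "English") != "English":
                continue
            if matches(record["title"] + " " + record.get("abstract", "")):
                included.append(record)
    return included
```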
Study Selection (Step 3)
The process of identifying relevant studies among the 1078 papers was iterative, involving several discussions between coauthors. Unlike systematic reviews, where inclusion and exclusion criteria for studies are established at the outset, this study developed these criteria during the search process (
) [ ]. The purpose of deciding on criteria post hoc is to avoid barring studies that might not align with current understandings of the issue or topic [ ]. This was especially important when including computer science databases in the search strategy because of the heterogeneity of studies broadly related to mental health.
Inclusion and exclusion criteria.
Inclusion criteria
- Study undertaken in a mental health context or with application to a mental health context
- Text available in English
- Study related broadly to the use of big data, internet technology, artificial intelligence, sensors, smart technology, and other contemporary algorithmic technologies
Exclusion criteria
- Commentary pieces
- Studies focused on other health conditions
- Application of data science methods to clinical data collected via clinical technologies (eg, application of data science methods to magnetic resonance imaging data)
- Data science methods paper with no specific real-world application or objective
- Application of data science methods to psychiatric research in general
- Studies applied to animals or animal models
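Criteria such as these lend themselves to being recorded as explicit reason codes during the title and abstract screening described in the next step. The following is a hypothetical sketch of such a decision log; the reason codes and function names are illustrative assumptions, not the record-keeping actually used in this review.

```python
from collections import Counter

# Hypothetical reason codes mirroring the exclusion criteria above.
EXCLUSION_REASONS = {
    "commentary piece",
    "other health condition",
    "clinical imaging or clinical-technology data only",
    "methods paper without a real-world application",
    "general psychiatric research",
    "animal study",
}

def log_decision(log, record_id, include, reason=""):
    """Record one title and abstract screening decision; a reason is required for exclusions."""
    if not include and reason not in EXCLUSION_REASONS:
        raise ValueError(f"Unknown exclusion reason: {reason!r}")
    log.append({"id": record_id, "include": include, "reason": reason})

def summarize(log):
    """Tally decisions for reporting in a PRISMA-style flow diagram."""
    excluded = Counter(d["reason"] for d in log if not d["include"])
    return {
        "screened": len(log),
        "included": sum(d["include"] for d in log),
        "excluded_by_reason": dict(excluded),
    }
```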
Owing to the large number of studies identified at step 2, we did not undertake a full-text review. Instead, we reviewed only the abstract and title according to our inclusion criteria. According to the PRISMA criteria described by Moher et al [
], systematic reviews would include a full-text review after duplicates were removed to assess all articles for eligibility. We did not take this step, as the screening and eligibility phases of the review could take place by reviewing the abstracts or titles (after all, we were simply looking for applied mental health research that used algorithmic and data-driven technologies— ). This adaptation enabled us to review a large body of work in a rapidly expanding field. Our broad inclusion approach was also chosen to prevent the exclusion of studies from disciplines that do not conform to traditionally appropriate research designs, which might preclude them from reviews with stricter inclusion and exclusion criteria (eg, using an insufficient study design description as an exclusion criterion). For example, we found that many computer science papers were published in conference journals [
] and did not always include in-depth methods or an explicit statement of the research aim or objectives.
This process resulted in 132 empirical research papers included in the review.
provides a PRISMA diagram that sets out the process of exclusion for our adapted study.
Charting Results (Step 4)
Through initial deductive analysis of the abstracts and discussions between the researchers, we identified several key issues and themes through which to consider the broad research field. We settled on a typology that considered both the form of technology used in the study (eg, social media, sensors, or smartphones) and the stated purpose for the mental health initiative (eg, detection and diagnosis, prognosis, treatment, and support).
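As an illustrative sketch only, a charting table of this kind can be represented as one row per paper tagged on both axes of the typology, from which the counts and percentages reported in the Results section can be cross-tabulated. The column names below are assumptions, not the charting template actually used.

```python
import pandas as pd

# Hypothetical charting table: one row per included paper, tagged on both axes
# of the typology (form of technology and stated purpose).
papers = pd.DataFrame([
    {"id": "p001", "form": "social media", "purpose": "detection and diagnosis"},
    {"id": "p002", "form": "smartphone", "purpose": "prognosis, treatment, and support"},
    # ...one row for each of the 132 included papers
])

# Counts per cell of the typology, plus percentages of the whole corpus
# (the basis for figures such as 53/132, 40.2% in the Results section).
counts = pd.crosstab(papers["form"], papers["purpose"])
percentages = (counts / len(papers) * 100).round(1)
print(counts)
print(percentages)
```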
The second step involved analyzing the data to determine how legal and ethical issues were discussed. The material was analyzed using the computer software package NVivo 12 (QSR International) [
]. Conventional content analysis was undertaken, supplemented by keyword-in-context analysis [ ]. We used the following terms, drawn from themes and keywords arising in the literature noted in the Ethics and Law section, which are typically associated with legal and ethical matters arising in the use of digital technologies in mental health: law* or legal*; ethic*; human rights; transparen*; oversight; accountab*; bias; fairness; privacy; trust; regulat*.
We sought a uniform approach to the 132 studies included in this review. However, in practice, it was often impossible to extract all the information required where research reports used varying terminology and concepts and potentially failed to include relevant material.
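A minimal sketch of the keyword-in-context step, assuming the full text of each included paper is available as a plain string, is given below. The character window and the handling of the truncation wildcards are illustrative choices rather than the exact procedure used alongside NVivo.

```python
import re

# The keyword stems listed above, written as regular expressions; the trailing
# \w* stands in for the truncation wildcard (law*, ethic*, regulat*, and so on).
KEYWORDS = [
    r"law\w*", r"legal\w*", r"ethic\w*", r"human rights", r"transparen\w*",
    r"oversight", r"accountab\w*", r"bias", r"fairness", r"privacy",
    r"trust", r"regulat\w*",
]
PATTERN = re.compile("|".join(KEYWORDS), re.IGNORECASE)

def keyword_in_context(text, window=60):
    """Return (matched keyword, surrounding characters) pairs for manual review."""
    hits = []
    for match in PATTERN.finditer(text):
        start, end = match.span()
        context = text[max(0, start - window): end + window].replace("\n", " ")
        hits.append((match.group(0), context))
    return hits
```

Each hit would then be read in context to judge whether a legal or ethical issue was substantively discussed or merely mentioned in passing.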
Collating, Summarizing, and Reporting Results (Step 5)
Several typologies can be used to categorize the algorithmic and data-driven technologies identified in these studies. As noted, we integrate two here: (1) the primary forms of technology and (2) their stated purpose. Such distinctions can help to highlight the predominant areas of technological inquiry and differentiate relevant ethical and legal concerns for the various categories.
Results
Typology: Form and Stated Purpose
Overview
We derived five major categories of technology (
): (1) social media (53/132, 40.2%), (2) smartphones (37/132, 28%), (3) sensing technology (20/132, 15.2%), (4) chatbots (5/132, 3.8%), and (5) miscellaneous (17/132, 12.9%). We have discussed these categories in detail in the following sections. We further evaluated the papers according to the stated purpose of the technology using a typology created by Shatte et al [ ]. They categorized papers into the following four categories: (1) detection and diagnosis; (2) prognosis, treatment, and support; (3) public health; and (4) research and clinical administration.
Categorization of articles by the form and stated purpose of technology.
Social media
- Detection and diagnosis (26/132, 19.7%) [ - ]
- Prognosis, treatment, and support (4/132, 3%) [ - ]
- Public health (22/132, 16.7%) [ - ]
- Research and clinical administration (1/132, 0.8%) [ ]
Smartphones
- Detection and diagnosis (17/132, 12.9%) [ - ]
- Prognosis, treatment, and support (20/132, 15.2%) [ - ]
- Public health (0/132, 0%)
- Research and clinical administration (0/132, 0%)
Sensing technology
- Detection and diagnosis (6/132, 4.5%) [ - ]
- Prognosis, treatment, and support (12/132, 9.1%) [ - ]
- Public health (2/132, 1.5%) [ , ]
- Research and clinical administration (0/132, 0%)
Chatbots
- Detection and diagnosis (0/132, 0%)
- Prognosis, treatment, and support (5/132, 3.8%) [ - ]
- Public health (0/132, 0%)
- Research and clinical administration (0/132, 0%)
Miscellaneous
- Detection and diagnosis (8/132, 6.1%) [ - ]
- Prognosis, treatment, and support (8/132, 6.1%) [ - ]
- Public health (1/132, 0.8%) [ ]
- Research and clinical administration (0/132, 0%)
Neat distinctions were not always possible. For example, Nambisan et al [
] sought to validate a method of detecting depression among social media users in a large-scale data set (a common aim in the social media category). At first glance, their study might appear to fall within the detection and diagnosis category. However, the ultimate aim of the study was to improve public health informatics by improving the accuracy of population-wide prevalence analysis. Hence, we placed this study in the public health category (defined in the following sections).
The four categories by Shatte et al [
] offer a clinical or medical framing, which broadly matches the clinical orientation of the scholarship (as a counterview, some researchers have called for the demedicalization of digital platforms designed to help people in mental distress [ ], a point to which we will return in our discussion). Shatte et al [ ] found that most studies in their scoping review on machine learning in mental health research focused on detection and diagnosis—this is indeed reflected in our own findings. We found that 43.2% (57/132) of the studies broadly concerned detection and diagnosis.
We determined that 37.1% (49/132) of the studies broadly concerned technology aimed primarily at prognosis, treatment, and support, which includes initiatives for personalized or tailored treatment and technologies used in services where treatment is provided. Examples include the use of smartphone apps to provide personalized education to someone based on psychometric data generated by the app.
A total of 18.9% (25/132) of studies were on public health. Public health papers used large epidemiological or public data sets (eg, social media data and usage data from Wi-Fi infrastructure) to monitor or respond to persons who appear to be experiencing or self-disclosing an experience of distress, mental health crisis, or treatment. However, we struggled in applying this category, as many were borderline cases in the detection and diagnosis category. This ambiguity may be because many studies were based in the field of computer science and were contemplated at a higher level of generality, with limited discussion of the specific setting in which they might be used (eg, a social media analytical tool could be used in population-wide prevalence studies or to identify and direct support to specific users of a particular web-based platform).
Our search uncovered only 1 study related to research and clinical administration; this particular study focused on the triage of patients in health care settings.
Finally, it is noteworthy that despite the reasonably large volume of studies, all but a few were at an exploratory and piloting stage. This is not surprising given the predominance in our survey of scholarship from computer science journals in databases such as ACM. A key issue in this area of inquiry is the large context gap between the design of these technological innovations and the context of implementation. In many papers from the computer science discipline, the authors made assumptions or guesses as to how their innovations could be implemented, with seemingly little input from end users. This is not a critique of individual researchers; instead, as we shall discuss later, it reflects the need for interdisciplinary and consultative forms of research at the early stages of ideation and piloting. This matter also raises questions as to whether there is a strong enough signal or feedback loop from practice settings back to designers and computer scientists in terms of what needs they should be responding to and why.
Social Media
We found 53 studies concerning social media, in which data were collected through social media platforms. Two major platform types were identified: mass social media, including mainstream platforms such as Facebook, Twitter, and Reddit; and specialized social media, comprising platforms focused on documenting health or mental health experiences. Both forms of social media involve the collection of textual data shared by users, self-reported mental health conditions, and sometimes expert opinion on diagnoses attributable to users (identified through information shared on the web). For example, researchers may examine whether the content of posts shared correlates with, and can therefore help predict, people’s self-reported mental health diagnosis. Most studies concerned mass social media platforms (Twitter: 17/53, 32%; Reddit: 14/53, 26%; Facebook: 6/53, 11%), with a small number concerning specialized social media (PatientsLikeMe: 1/53, 2%; Psycho-babble: 1/53, 2%; Reachout: 1/53, 2%).
The largest subcategory in the social media group (26/53, 49%) focused on predicting or detecting depression, with some studies concerning other diagnostic categories. Some studies attempted to capture multiple diagnostic categories or aimed to detect broad signs of mental ill-health.
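To illustrate the typical pipeline in this subcategory (and not to reproduce any specific reviewed study), a detection model of this kind can be sketched as a supervised text classifier trained on posts paired with users' self-reported diagnoses. The feature representation, model choice, and labeling scheme below are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_depression_detector(posts, labels):
    """posts: user-generated text; labels: 1 if the author self-reported a
    depression diagnosis, 0 otherwise (the labeling strategy common to this group)."""
    x_train, x_test, y_train, y_test = train_test_split(
        posts, labels, test_size=0.2, stratify=labels, random_state=0
    )
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=5),  # simple lexical features
        LogisticRegression(max_iter=1000),
    )
    model.fit(x_train, y_train)
    print(classification_report(y_test, model.predict(x_test)))
    return model
```

Even a toy pipeline such as this makes the source of the ethical questions plain: the labels amount to inferred health data about identifiable platform users.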
Mobile Apps
In total, 38 studies concerned mobile apps used to collect and process data from participants, of which two main subcategories emerged. The first included apps that required active data input by participants (27/38, 71%), which either took the form of validated surveys (eg, Patient Health Questionnaire-9) or an experience sampling method; the second included those that passively collected data from inbuilt smartphone sensors (15/38, 39%). Some papers were counted twice as they had methods that covered both subcategories. Contemporary smartphones include a range of sensors related to sleep patterns, activity (movement), location data (GPS, Wi-Fi, and Bluetooth), communication or in-person human interaction (microphones), web-based activity (phone or text logs and app usage), and psychomotor data (typing and screen taps).
Apps that draw on these data sources can be considered passive sensing because the individual generally does not have to input data actively. Data collection generally requires participants to install an app that collects data from smartphone sensors and sends it to the researchers.
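As a minimal sketch of the passive-sensing approach, assuming a simple log of periodic GPS samples uploaded by such an app, a daily mobility feature might be derived as follows. The log format and the feature definition are illustrative assumptions rather than any particular study's method.

```python
from collections import defaultdict

def daily_location_variance(gps_log):
    """gps_log: iterable of (day, latitude, longitude) samples collected passively.
    Returns a simple per-day mobility feature (variance of visited coordinates),
    of the kind used as a proxy for activity or withdrawal in passive-sensing studies."""
    by_day = defaultdict(list)
    for day, lat, lon in gps_log:
        by_day[day].append((lat, lon))
    features = {}
    for day, points in by_day.items():
        mean_lat = sum(p[0] for p in points) / len(points)
        mean_lon = sum(p[1] for p in points) / len(points)
        features[day] = sum(
            (lat - mean_lat) ** 2 + (lon - mean_lon) ** 2 for lat, lon in points
        ) / len(points)
    return features
```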
Sensing Technology
In total, 20 studies focused on broader sensor technology designed to continuously collect data on a person’s activity or environment. We differentiated this category from smartphone passive sensing, although there is a clear crossover with some personal wearables that fall under the sensing technology category. As we use it here, sensing technology includes a range of both wearables and environmental sensing technology (our search strings included variations on this theme, including tracking, biometric monitoring, and behavioral sensing). Many wearables were off-the-shelf personal wearables such as Fitbits, although there were several others, such as radio-frequency identification tags. Environmental sensors refer to technologies that collect data within the environment or about the environment but are not personal wearables, such as smart-home devices.
The list of sensing technologies includes personal wearables (9/20, 45%), smart-home sensors, automated home devices, internet of things (3/20, 15%), Microsoft Kinect (a software developer kit that includes computer vision, speech models, and algorithmic sensors; 1/20, 5%), skin conductance technology (1/20, 5%), portable electroencephalogram (1/20, 5%), radio-frequency identification tags (2/20, 10%), the use of Wi-Fi metadata (2/20, 10%), and data collected via care robots (eg, Paro Robot; 1/20, 5%).
Some studies have examined sensor systems for use in psychiatric settings. For example, Cheng et al [
] used a wireless monitoring system to track the location and heart rate of psychiatric inpatients. Other studies have used sensors in everyday settings. For example, Dickerson et al [ ] sought to create a "real-time depression monitoring system for the home" for which the data collected are "multi-modal, spanning a number of different behavioral domains including sleep, weight, activities of daily living, and speech prosody."
Chatbots
The fourth group, comprising 5 studies, explored the use of chatbots and conversational agents in web-based mental health contexts. This group includes studies focused on chatbots being used both by people experiencing mental health conditions or psychosocial disabilities and by those who provide them with care or support. For example, D’Alfonso et al [
] studied the development of a "moderated online social therapy" web application, which provides an interactive social media-based platform for youth recovering from psychosis.
Miscellaneous
This final group (17/132, 12.9%) included a range of studies that did not fit the previous categories. This category included the collection of data from video games and data sources where there was no explicit outline of how such data would be collected in practice (eg, facial expression data). We included the video game data in this miscellaneous category, although it could also possibly sit in the social media category.
Law and Ethics
Law
As noted, we conducted a conventional content analysis supplemented by keyword-in-context analysis to identify themes related to law and ethics, as discussed in the Background section of this paper. There was little explicit discussion of legal issues, although issues such as privacy, which have precise legal dimensions, were discussed. However, privacy was rarely discussed in terms of the law in the literature surveyed. We will return to the issue of privacy shortly. The term law appeared in just 1 study with reference to the legal implications of the particular algorithmic and data-driven technology being considered [
]. The term legal appeared in passing in three papers [ , ], among which the most substantial statement, by Faurholt-Jepsen et al [ ], referred to legal concerns as one of several considerations in different national contexts:
Using smartphones to collect large amounts of data on personal behavioral aspects leads to possible issues on privacy, security, storage of data, safety, legal and cultural differences between nations that all should be considered, addressed and reported accordingly.
[133]
A passing reference was made to the rights of user subjects in some studies (eg, Manikonda and De Choudhury [
] asked, “[h]ow...automated approaches, that are themselves prone to errors, [could] be made to act fairly, as well as secure one’s privacy, their rights on the platforms, and their freedom of speech?”). Other studies referred very briefly to compliance with the relevant regulatory or legislative frameworks under which the algorithmic and data-driven technologies were tested, such as the Health Insurance Portability and Accountability Act of 1996 (United States) [ , , ].
Ethics
In terms of explicit reference to ethics, 10 studies included a specific section on the ethical issues raised by their work [
, , , , , , , , , ] and 10 others included a broad reference to key ethical issues [ , , , , , , , , ]. The latter material varied from one or two sentences to a paragraph or more. Although we searched for several ethical and legal themes (eg, privacy, security, safety, transparency, autonomy, and justice), the theme of privacy was dominant.
Privacy
Privacy was discussed in several ways across all the included studies but was primarily addressed as part of the research method rather than in the real-world implementation of the technology. Approximately 19.7% (26/132) of papers sought to address user privacy through anonymization, deidentification, or paraphrasing of personal information. For example, Li et al [
] stated, “to protect Weibo users’ privacy, personally identifiable information (eg, names, usernames) were excluded from any research outputs.”
The second major approach concerns what we have referred to as privacy protocols. This included aligning processes to legal requirements [
, , , ] but for the most part concerned some kind of process for data management, such as ensuring the consent of and providing notice to user subjects. For example, Manikonda and De Choudhury [ ] proposed “guidelines to be incorporated in the design and deployment of [their] interventions and tools,” which sought “voluntary consent from the population being studied and those likely to benefit from the technologies.” However, unlike Manikonda and De Choudhury [ ], very few studies have discussed the issue of consent at the implementation stage of their proposed technology. Instead, most authors discussed consent in terms of how their research was conducted. In some cases, mainly regarding social media, the authors argued that consent was not needed given the public nature of user-generated content on social media (a topic to which we will return).
A collection of privacy engineering approaches was taken, including hashing and encryption and managing the data processing location. There were a variety of approaches to when and where data were processed and how this aligned with ideas about privacy. Some studies used encryption before sending data to servers for processing, whereas others analyzed data locally on the smartphone or did not store specific data after processing. Wang et al [
], for example, noted, “we do not record any speech on the phone or upload to the cloud and all audio signal processing and feature extraction is based on privacy preserving algorithms.”
Some authors referred to the tension between privacy and data quality—framed, for example, as “[p]rivacy versus lives saved” [
]. This framing was seen within specific methods and technological approaches, such as the relative accuracy and privacy-preserving qualities of sensor data compared with Wi-Fi infrastructure data. For example, Ware et al [ ] argued that whereas Wi-Fi infrastructure data could be considered more privacy-preserving than collecting data from smartphones, it may also be less accurate. Ji et al [ ] discussed how a method they tested “has an advantage over data protection methods because it trains on the entire dataset, but it also violates user privacy and breaks the data protection setting,” and ultimately argued that their chosen “method achieves a balance between preserving privacy and accurate detection.”
The final point in the privacy theme was expectations. Very few studies have considered the expectations people may have about how their data are used. This led to an acknowledgment that the use of data from sources such as social media or video games to make predictions about people’s mental health changes the meaning of these data and could have unintended consequences. Eichstaedt et al [
], for example, noted that social media data used for health reasons might change how people perceive that data and thus the type of data they report. De Choudhury et al [ ], in their analysis of mental health content in social media, warned that an unhelpful outcome could include “chilling effects in participation in the community, or suicide ideation moving on to fringe or peripheral platforms where such populations might be difficult to extend help to.”
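The deidentification and privacy engineering measures reported across these studies can be illustrated with a brief sketch. The regular expressions and the salted hashing step below are generic examples of these techniques, not the procedures of any reviewed study.

```python
import hashlib
import re

USERNAME = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def deidentify(post: str) -> str:
    """Strip obvious direct identifiers before a post is analyzed or quoted."""
    post = USERNAME.sub("[USER]", post)
    post = URL.sub("[LINK]", post)
    return post

def pseudonymize(user_id: str, salt: bytes) -> str:
    """Replace a platform user ID with a salted one-way hash so records can be
    linked within a dataset without retaining the identifier itself."""
    return hashlib.sha256(salt + user_id.encode("utf-8")).hexdigest()
```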
Discussion
Principal Findings
Overview
To summarize, we identified five major types of technology—social media, mobile apps, sensing technology, chatbots, and others—in which algorithmic and data-driven technologies were applied in the mental health context. The primary stated purpose of these technologies was broadly to detect and diagnose mental health conditions (approximately 57/132, 43.2% of studies). Only 15.2% (20/132) of papers discussed ethical implications, with a primary focus on the individual privacy of research participants.
Privacy
As noted, the privacy of participants was addressed in the studies primarily with reference to engineering methods and, in some instances, concerning regulatory compliance. In the smartphone group, notice and consent combined with engineering methods were used to address user-subject privacy concerns. In the social media group, privacy was discussed in terms of how data were managed and the technical elements of the algorithms used, including privacy-preserving algorithms [
] and limiting the use of identifiable information [ ]. In the sensor group, privacy was addressed in several ways, particularly by collecting low-fidelity data [ ] and anonymization [ ].
Questions may be raised about how privacy is (or should be) conceptualized and how the technologies will fare in real-world settings. Taking a strictly legal approach to privacy, for example, may not necessarily confer a social license to operate. An example of a failure to align law and social license is the United Kingdom’s proposed care.data scheme, where secondary data from general practitioners were to be collected for research purposes [
]. Although this scheme aligned with, and in some cases went further than, legal requirements, it still faced a public backlash and was ultimately shut down.
Privacy as a concept exists as an expression of claims to dignity and self-determination. These more expansive concerns of dignity and autonomy were not the subject of explicit consideration in the studies examined in this review. This point raises the issue of possible gaps in the literature.
Gaps
Overview
It is difficult to discuss what did not appear in the literature, as such observations are necessarily subjective and will differ based on a person’s disciplinary background, interests, and priorities. For our part, we noted four interconnected matters that we believe are important and which arise in the literature noted in the Background section. They are the paucity of ethical inquiry, the scant consideration of algorithmic accountability, the near-complete lack of service user or subject input, and concerns with a medico-technological framing.
Gaps in Ethical Enquiry
Notwithstanding the common interest in matters of privacy across almost all papers, there was a relatively low engagement with broader ethical dimensions of the algorithmic and data-driven technology in question—a finding that appears to support the view of some scholars in the field [
]. However, an important distinction should be made between empirical studies designed to validate or explore a particular technology and (as we discussed in the Background section) the literature concerned specifically with ethical and legal issues arising from algorithmic technology. The very form of journal articles that examine applied research concerning algorithmic and data-driven technologies in mental health care may tend to preclude a focus on the ethical and legal issues that arise (although some authors clearly felt it worth noting pressing issues in their papers). Some disciplines, including computer science, appear to have traditionally separated ethics or legal articles from publications concerned with findings or validation regarding emerging technologies, although this tradition is somewhat challenged in the literature on ethics in design [
, ]. Furthermore, the gap between applied research, on the one hand, and research that is specifically focused on ethics, on the other hand, does not appear to be unique to the mental health context. For example, Hübner et al [
] point out that “ethical values have not yet found their firm place in empirically rigorous health technology evaluation studies” more generally. This dynamic “sets the stage for further research at the junction of clinical information systems and ethics” [ ]. Indeed, others have sought to create frameworks to meet the new ethical and regulatory challenges of health care in the digital age [ ].
A minority of the studies in our review discussed these challenges. Birnbaum et al [
], for example, discussed the limits of contemporary ethical standards for research on social media in the mental health context, noting that “[e]xisting ethical principles do not sufficiently guide researchers” and new technological approaches to “illness identification and symptom tracking will likely result in a redefinition of existing clinical rules and regulations.” However, many other studies have not discussed or alluded to these challenges. In one study, web-based videogame players were recruited to complete a web-based survey asking for "sociodemographic and gaming information" and feedback concerning psychometric indicators to develop machine learning models to predict psychological disorders. The researchers requested electronic consent from participants to take part in the study but “did not apply for, or receive, any approval from any board or committee for this research as this was a techno-behavioral general study which was non-medicinal, non-intrusive, and non-clinical in nature” [ ]. Furthermore, the authors noted, “[we] are affiliated to a technology university which has no internal committee related to research on human subjects” [ ].
New critical questions are required. For example, several studies in the social media category, the largest group of studies, eschewed institutional review board approval based on claims that their data sets were publicly available, raising ethical and legal concerns surrounding emergent, inferred, or indirect data concerning mental health and the potential appropriation of detection and screening tools in unethical (and potentially even illegal) ways. Such claims are being increasingly challenged, particularly following concerns about the creation of inferred data about unsuspecting and nonconsenting users in the health context generally [
] and the mental health context in particular [ ]. Arguably, the likelihood of these risks being overlooked in research is exacerbated by the near-complete exclusion of persons with experience of mental health service use as active contributors to knowledge production in this field, whether as co- or lead investigators or even as advisors.
Lack of Service User Involvement
Very few studies (4/132, 3%) in this survey appear to have included people who have used mental health services, those who have experienced mental health conditions or psychosocial disability, or even those who were envisaged as end-beneficiaries of the particular algorithmic and data-driven technology, in the design, evaluation, or implementation of the proposals in any substantive way (except as research participants). In studies where service users were involved, this tended to comprise research participants being involved in the co-design of content or codeveloping user interfaces. D’Alfonso et al [
], for example, noted, “[t]he creation of therapy content [in their web-based platform]...was driven by feedback from users and expert youth mental health clinicians through iterative prototyping and participatory design.”
With very few exceptions, however, the survey indicated a near-complete exclusion of service users in the conceptualization or development of algorithmic and data-driven technologies and their application to mental health initiatives. It is also noteworthy that even mental health practitioners, who may well be end users envisaged by technologists, were involved in relatively few studies.
The active involvement of mental health service users and representative groups for persons with psychosocial disabilities has become a prominent ethos in mental health and disability policies worldwide [
] and is imperative in international human rights law [ ]. A study in our survey included an acknowledgment of the limitations of not working with affected populations [ ]. However, the authors referred to study populations as research subjects rather than active contributors to technological development. Manikonda and De Choudhury [ ] did recommend the “[a]doption of user centered design approaches in intervention and technology development, to investigate specific needs and constraints of the target users, as well as their acceptability, utility, and interpretability.” Similarly, Ernala et al [ ] noted that the field could benefit extensively from cross-disciplinary partnerships and partnerships between “computational and clinical researchers, and patients.” They also recommended “[p]articipatory research efforts such as the Connected and Open Research Ethics (CORE) initiative [for use] to develop dynamic and relevant ethical practices to guide and navigate the social and ethical complexities of patient data collection” [ , ].
From a pragmatic perspective alone, the involvement of service users and others with psychosocial disabilities is generally agreed to increase the likelihood of “viable and effective—rather than disruptive and short-lived—advances” in digital technologies in the mental health context [
]. Of the scant commentary and research in the field by persons with psychosocial disabilities and service users, commentators have raised concerns about: the potential need for a right to explanation concerning algorithmic decision making for individuals (not only the right of an individual to understand how a decision about them was made but also to query the values that go into a particular algorithmic decision system) [
]; the risk of discrimination or harm where sensitive personal information is leaked, stolen, sold, or scraped from social media [ ]; and the deployment of data-driven technologies in coercive psychiatric interventions and policing [ , ]. Keyword searches along these lines did not yield any relevant results. Emerencia et al [ ] prioritized the ethical imperative of shared decision making (“an approach in which patient and clinician are equal participants in deciding the treatment plan”) in their study on algorithmic technologies that might generate "personalized advice for schizophrenia patients", and Saha et al [ ] highlighted the potential harms caused by the use of social media data to examine “psychopathological effects subject to self-reported usage of psychiatric medication.” However, these were unusual considerations among the studies reviewed and were noted in passing.
Concerns With Algorithmic Accountability
As discussed in the Background section, ethical and legal scholars on algorithmic and data-driven technologies have begun to raise fundamental concerns about whether algorithmic systems should be used at all for certain purposes and, if so, who should govern them [
]. Pasquale [ ] illustrates the evolution of these concerns with reference to mental health apps:
For some researchers who are developing mental health apps, the first-wave algorithmic accountability concerns will focus on whether a linguistic corpus of stimuli and responses adequately covers diverse communities with distinct accents and modes of self-presentation. Second-wave critics...may bring in a more law and political economy approach, questioning whether the apps are prematurely disrupting markets for (and the profession of) mental health care in order to accelerate the substitution of cheap (if limited) software for more expensive, expert, and empathetic professionals.
Second-wave concerns give rise to questions as to who is benefiting from (and burdened by) data collection, analysis, and use [
]. Such concerns are spurred by questions about which systems deserve to be built, which problems most need to be addressed, and who is best placed to build and monitor them [ ]. Scholarship on algorithmic and data-driven technologies in mental health services appears to have seldom asked such questions, at least explicitly ([ ]; notable exceptions include [ ] and [ ]). The debate about algorithmic accountability in mental health care is likely to accelerate in the coming years amid broader calls for algorithmic decision systems to be subject to contest, account, and redress to citizens and representatives of the public interest.
Overmedicalization and Concerns of Techno-Solutionism
The issues the studies aimed to address were presented in medical terms and framed as problems that are amenable to digital technological solutions. This is not surprising. However, some scholars have raised concerns regarding this framing. In their survey of the messaging of mental health apps, Parker et al [
] argued that prominent apps tend to overmedicalize states of distress and may overemphasize “individual responsibility for mental well-being.” There may be legitimate reasons to demedicalize some approaches to supporting people in distress via digital initiatives and remain cautious about framing the matters as medical problems amenable to digital technological solutions [ , ]. Rose [ ] argues that:
most forms of mental distress are inextricably linked to problems of poverty, precarity, violence, exclusion, and other forms of adversity in people’s personal and social experiences, and are best addressed not by medicalization, but by low intensity but committed and durable social interventions guided by outcomes that are not measured in terms of symptom reduction, but by the capacities that people themselves desire in their everyday lives.
This argument raises broader questions about the politics of mental health, which it would be unrealistic to expect empirical studies of algorithmic and data-driven technologies in mental health care to resolve. Nevertheless, there is an argument that such political considerations and value choices are currently overlooked, with an overwhelming emphasis on scientific methods and measurements of risk and benefit.
Comparison With Previous Work
Reviews such as those conducted by Shatte et al [
] and Tai et al [ ] applied systematic literature search methods to identify the use of machine learning and artificial intelligence in modifying therapeutics and prevention strategies in psychiatry, and Doorn et al [ ] performed a scoping review on its role in psychotherapy. However, to the best of our knowledge, no studies have surveyed the field to identify how ethical and legal issues are incorporated into applied research.
Limitations
A disadvantage of using a rapid scoping review method is the difficulty in reproducing the results, given the use of numerous search strings in multiple combinations. This is exacerbated by our aim to cover multiple technology types across several cross-disciplinary databases (resulting in 1078 potential studies reduced manually to 132). There are trade-offs in this broad, exploratory approach. In addition to the challenges of replicability, we cannot claim to have achieved an exhaustive review, as may be possible in systematic reviews of specific technologies or subtypes (such as machine learning). Furthermore, the wide range of new and emerging technologies in our scope poses terminological challenges; hence, we undoubtedly missed studies that used terms overlooked in our search strings (as a peer reviewer pointed out, we did not use the term recommender system). This is exacerbated by the intrinsic challenge of pinning down terms and concepts in any area of rapid technological change [
]. Despite these limitations, a survey of empirical studies offers valuable information. The principal strength of a scoping review is its breadth. Our broad and cross-disciplinary approach enabled us to identify cross-cutting trends in the literature as a whole, and the trends we identified are striking: only roughly 15.2% (20/132) of the studies in the survey contained even a brief consideration of ethical issues, and only 3% (4/132) of studies appeared to involve mental health service users or affected populations. We argue that this is a significant finding that supports our chosen method and research design.
Conclusions
Our findings suggest that the disciplines undertaking applied research in this field do not generally prioritize explicit consideration of ethical and legal issues in their studies—and, perhaps more broadly, “the moral, political, social and policy issues at stake” [
]. Research institutions tend to focus strongly on protecting human participants involved in research, as they should, which is generally reflected in the studies in our survey (although not always). However, other important considerations appear to be overlooked, such as participatory and community-engaged research, which is an increasingly accepted requirement of mental health research, policy, and practice, as well as broader ethicolegal issues in the field. This situation may have several explanations warranting further investigation, including editorial requirements for scholarly papers, the workings of institutional review mechanisms, funding arrangements, and prevailing evidentiary and epistemological cultures. However, with an increase in adverse effects involving flows of data concerning mental health [ ], the situation must surely change.
Acknowledgments
Funding for this research was obtained from the Mozilla Foundation and the Australian Research Council (Project ID: DE200100483).
Conflicts of Interest
None declared.
References
- Gooding P. Mapping the rise of digital mental health technologies: emerging issues for law and society. Int J Law Psychiatry 2019 Nov;67:101498. [CrossRef] [Medline]
- Mohr DC, Lyon AR, Lattie EG, Reddy M, Schueller SM. Accelerating digital mental health research from early design and creation to successful implementation and sustainment. J Med Internet Res 2017 May 10;19(5):e153 [FREE Full text] [CrossRef] [Medline]
- Torous J, Myrick KJ, Rauseo-Ricupero N, Firth J. Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health 2020 Mar 26;7(3):e18848 [FREE Full text] [CrossRef] [Medline]
- Heibron A. United for Global Mental Health. 2020. URL: https://www.unitedgmh.org/news/covid19seminar6 [accessed 2020-07-08]
- WHO QualityRights initiative - improving quality, promoting human rights. World Health Organization. 2020. URL: http://www.who.int/mental_health/policy/quality_rights/en/ [accessed 2020-09-29]
- Castelluccia C, Le Métayer D. Understanding algorithmic decision-making: opportunities and challenges. In: STUDY Panel for the Future of Science and Technology - European Parliament. Brussels, Belgium: European Parliamentary Research Service, Scientific Foresight Unit; 2019.
- Marks M. Artificial intelligence based suicide prediction. 2019. URL: https://papers.ssrn.com/abstract=3324874 [accessed 2019-08-20]
- Mishara BL, Bardon C, Dupont S. Can CCTV identify people in public transit stations who are at risk of attempting suicide? An analysis of CCTV video recordings of attempters and a comparative investigation. BMC Public Health 2016 Dec 15;16(1):1245 [FREE Full text] [CrossRef] [Medline]
- Harwell R. Colleges are turning students' phones into surveillance machines. The Washington Post. 2019. URL: https://www.washingtonpost.com/technology/2019/12/24/colleges-are-turning-students-phones-into-surveillance-machines-tracking-locations-hundreds-thousands/ [accessed 2020-03-27]
- Tortora L, Meynen G, Bijlsma J, Tronci E, Ferracuti S. Neuroprediction and A.I. in forensic psychiatry and criminal justice: a neurolaw perspective. Front Psychol 2020 Mar 17;11:220 [FREE Full text] [CrossRef] [Medline]
- Stevenson M, Doleac JL. Algorithmic risk assessment in the hands of humans. Soc Sci Res Network J 2019 Dec 05:3489440. [CrossRef]
- Rado D. Stigmatizing kids? New law forces families to disclose student's mental health treatment. Florida Phoenix. 2018. URL: https://www.floridaphoenix.com/2018/07/11/stigmatizing-kids-new-law-forces-families-to-disclose-students-mental-health-treatment/ [accessed 2019-07-18]
- Patel V, Saxena S, Lund C, Thornicroft G, Baingana F, Bolton P, et al. The Lancet Commission on global mental health and sustainable development. Lancet 2018 Oct 27;392(10157):1553-1598. [CrossRef] [Medline]
- Bhugra D, Tasman A, Pathare S, Priebe S, Smith S, Torous J, et al. The WPA-Lancet Psychiatry Commission on the future of psychiatry. Lancet Psychiatry 2017 Oct;4(10):775-818. [CrossRef] [Medline]
- Lancet Psychiatry. Digital health: the good, the bad, and the abandoned. Lancet Psychiatry 2019 Apr;6(4):273. [CrossRef]
- Bauer M, Monteith S, Geddes J, Gitlin MJ, Grof P, Whybrow PC, et al. Automation to optimise physician treatment of individual patients: examples in psychiatry. Lancet Psychiatry 2019 Apr;6(4):338-349. [CrossRef]
- Hollis C, Sampson S, Simons L, Davies EB, Churchill R, Betton V, et al. Identifying research priorities for digital technology in mental health care: results of the James Lind Alliance Priority Setting Partnership. Lancet Psychiatry 2018 Oct;5(10):845-854. [CrossRef]
- Carr S. Renegotiating the contract. Lancet Psychiatry 2017 Oct;4(10):740-741. [CrossRef]
- Harris L. The rise of the digital asylum. Mad In America. 2019. URL: https://www.madinamerica.com/2019/09/the-rise-of-the-digital-asylum/ [accessed 2019-10-11]
- Martinez-Martin N. Chapter Three - Trusting the bot: addressing the ethical challenges of consumer digital mental health therapy. In: Bárd I, Hildt E, editors. Ethical Dimensions of Commercial and DIY Neurotechnologies (Developments in Neuroethics and Bioethics, Volume 3). 1st Edition. Cambridge, Massachusetts, United States: Academic Press; 2020:63-91.
- Gooding P, Resnick K. Psychiatry and law in the digital age: untangling the hype, risk and promise. Int J Law Psychiatry 2020 May;70:101553. [CrossRef] [Medline]
- Capon H, Hall W, Fry C, Carter A. Realising the technological promise of smartphones in addiction research and treatment: an ethical review. Int J Drug Policy 2016 Oct;36:47-57. [CrossRef] [Medline]
- Torous J, Andersson G, Bertagnoli A, Christensen H, Cuijpers P, Firth J, et al. Towards a consensus around standards for smartphone apps and digital mental health. World Psychiatry 2019 Feb;18(1):97-98 [FREE Full text] [CrossRef] [Medline]
- Martinez-Martin N, Insel TR, Dagum P, Greely HT, Cho MK. Data mining for health: staking out the ethical territory of digital phenotyping. NPJ Digit Med 2018 Dec 19;1(1):68 [FREE Full text] [CrossRef] [Medline]
- Lederman R, D'Alfonso S, Rice S, Coghlan S, Wadley G, Alvarez-Jimenez M. Ethical issues in online mental health interventions. ECIS 2020 Research Papers. 2020. URL: https://aisel.aisnet.org/ecis2020_rp/66/ [accessed 2021-05-19]
- Luxton D, Anderson S, Anderson M. Chapter 11 - Ethical issues and artificial intelligence technologies in behavioral and mental health care. In: Luxton DD, editor. Artificial Intelligence in Behavioral and Mental Health Care. Cambridge, Massachusetts, United States: Academic Press; 2015:255-276.
- D'Alfonso S. AI in mental health. Curr Opin Psychol 2020 Dec;36:112-117. [CrossRef] [Medline]
- Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell 2019 Sep 02;1(9):389-399. [CrossRef]
- Frankish K, Ramsey W. The Cambridge Handbook of Artificial Intelligence. Cambridge: Cambridge University Press; 2014.
- Pasquale F. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, Massachusetts, United States: Harvard University Press; 2017:1-320.
- Burr C, Morley J, Taddeo M, Floridi L. Digital psychiatry: risks and opportunities for public health and wellbeing. IEEE Trans Technol Soc 2020 Mar;1(1):21-33. [CrossRef]
- Torous J, Nebeker C. Navigating ethics in the digital age: introducing connected and open research ethics (CORE), a tool for researchers and institutional review boards. J Med Internet Res 2017 Feb 08;19(2):e38 [FREE Full text] [CrossRef] [Medline]
- Barnett I, Torous J. Ethics, transparency, and public health at the intersection of innovation and Facebook's suicide prevention efforts. Ann Intern Med 2019 Feb 12;170(8):565. [CrossRef]
- Swirsky ES, Boyd AD. Adherence, surveillance, and technological hubris. Am J Bioeth 2018 Sep 20;18(9):61-62. [CrossRef] [Medline]
- Armontrout J, Torous J, Cohen M, McNiel D, Binder R. Current regulation of mobile mental health applications. J Am Acad Psychiatry Law 2018 Jun;46(2):204-211. [CrossRef] [Medline]
- Berman AL, Carter G. Technological advances and the future of suicide prevention: ethical, legal, and empirical challenges. Suicide Life Threat Behav 2020 Jun 05;50(3):643-651. [CrossRef] [Medline]
- Edwards-Stewart A, Alexander C, Armstrong CM, Hoyt T, O'Donohue W. Mobile applications for client use: ethical and legal considerations. Psychol Serv 2019 May;16(2):281-285. [CrossRef] [Medline]
- Kramer GM, Kinn JT, Mishkind MC. Legal, regulatory, and risk management issues in the use of technology to deliver mental health care. Cogn Behav Pract 2015 Aug;22(3):258-268. [CrossRef]
- Sherman J. Double secret protection: bridging federal and state law to protect privacy rights for telemental and mobile health users. Duke L J 2018;67(5):1115-1153.
- Floridi L. The Ethics of Information. Oxford, United Kingdom: Oxford University Press; 2015:1-380.
- Eubanks V. Automating Inequality. New York, United States: Macmillan; 2018:1-272.
- Maniadaki K. Net neutrality regulation in the EU: competition and beyond. J Eur Compet Law Pract 2019;10(7):479-488. [CrossRef]
- Pasquale F. Internet nondiscrimination principles: commercial ethics for carriers and search engines. University of Chicago Legal Forum. 2008. URL: https://chicagounbound.uchicago.edu/uclf/vol2008/iss1/6 [accessed 2021-05-25]
- Pasquale F. The second wave of algorithmic accountability. Law and Political Economy Project. 2019. URL: https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/ [accessed 2020-07-02]
- Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc 2015 Sep;13(3):141-146. [CrossRef] [Medline]
- Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005 Feb;8(1):19-32. [CrossRef]
- Peters M, Godfrey C, McInerney P, Munn Z, Trico A, Khalil H. Chapter 11: Scoping reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. Adelaide: JBI; 2020.
- Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473. [CrossRef] [Medline]
- Cooper H. Research Synthesis and Meta-Analysis: A Step-by-Step Approach, 4th Ed. Thousand Oaks, California, United States: Sage Publications, Inc; 2009:1-280.
- Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
- Andonie R, Dzitac I. How to write a good paper in computer science and how will it be measured by ISI Web of Knowledge. Int J Comput Commun 2010 Nov 01;5(4):432. [CrossRef]
- Leech NL, Onwuegbuzie AJ. Beyond constant comparison qualitative data analysis: using NVivo. Sch Psychol Q 2011 Mar;26(1):70-84. [CrossRef]
- Shatte AB, Hutchinson DM, Teague SJ. Machine learning in mental health: a scoping review of methods and applications. Psychol Med 2019 Jul;49(9):1426-1448. [CrossRef] [Medline]
- Birnbaum ML, Ernala SK, Rizvi AF, De Choudhury M, Kane JM. A collaborative approach to identifying social media markers of schizophrenia by employing machine learning and clinical appraisals. J Med Internet Res 2017 Aug 14;19(8):e289 [FREE Full text] [CrossRef] [Medline]
- Fatima I, Mukhtar H, Ahmad HF, Rajpoot K. Analysis of user-generated content from online social communities to characterise and predict depression degree. J Inf Sci 2017 Nov 14;44(5):683-695. [CrossRef]
- Cheng Q, Li TM, Kwok C, Zhu T, Yip PS. Assessing suicide risk and emotional distress in Chinese social media: a text mining and machine learning study. J Med Internet Res 2017 Jul 10;19(7):e243 [FREE Full text] [CrossRef] [Medline]
- Yan H, Fitzsimmons-Craft EE, Goodman M, Krauss M, Das S, Cavazos-Rehg P. Automatic detection of eating disorder-related social media posts that could benefit from a mental health intervention. Int J Eat Disord 2019 Oct 05;52(10):1150-1156 [FREE Full text] [CrossRef] [Medline]
- Cacheda F, Fernandez D, Novoa FJ, Carneiro V. Early detection of depression: social network analysis and random forest techniques. J Med Internet Res 2019 Jun 10;21(6):e12554 [FREE Full text] [CrossRef] [Medline]
- Hussain J, Satti FA, Afzal M, Khan WA, Bilal HS, Ansaar MZ, et al. Exploring the dominant features of social media for depression detection. J Inf Sci 2019 Aug 12;46(6):739-759. [CrossRef]
- Ranganathan A, Haritha A, Thenmozhi D, Aravindan C. Early detection of anorexia using RNN-LSTMSVM classifiers. In: Proceedings of the 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019. 2019 Sep 09 Presented at: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019; September 9-12, 2019; Lugano, Switzerland URL: https://www.scopus.com/record/display.uri?eid=2-s2.0-85070531421&origin=inward&txGid=9fa6a2994e821361ea895a74004966eb
- Nguyen H, Nguyen V, Nguyen T, Larsen M, O’Dea B, Nguyen D, et al. Jointly predicting affective and mental health scores using deep neural networks of visual cues on the web. In: Proceedings of the 19th International Conference on Web Information Systems Engineering – WISE 2018. 2018 Presented at: 19th International Conference on Web Information Systems Engineering – WISE 2018; November 12-15, 2018; Dubai, United Arab Emirates p. 100-110. [CrossRef]
- Krishnamurthy M, Mahmood K, Marcinek P. A hybrid statistical and semantic model for identification of mental health and behavioral disorders using social network analysis. In: Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). 2016 Presented at: IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM); Aug. 18-21, 2016; San Francisco, CA, USA p. 1019-1026. [CrossRef]
- Karmen C, Hsiung RC, Wetter T. Screening internet forum participants for depression symptoms by assembling and enhancing multiple NLP methods. Comput Methods Programs Biomed 2015 Jun;120(1):27-36. [CrossRef] [Medline]
- Reece AG, Reagan AJ, Lix KL, Dodds PS, Danforth CM, Langer EJ. Forecasting the onset and course of mental illness with Twitter data. Sci Rep 2017 Oct 11;7(1):13006 [FREE Full text] [CrossRef] [Medline]
- Tariq S, Akhtar N, Afzal H, Khalid S, Mufti MR, Hussain S, et al. A novel co-training-based approach for the classification of mental illnesses using social media posts. IEEE Access 2019;7:166165-166172. [CrossRef]
- Ji S, Long G, Pan S, Zhu T, Jiang J, Wang S. Detecting suicidal ideation with data protection in online communities. In: Li G, Yang J, Gama J, Natwishai J, Tong Y, editors. Database Systems for Advanced Applications. Switzerland: Springer; 2019:225-229.
- Alambo A, Gaur M, Lokala U, Kursuncu U, Thirunarayan K, Gyrard A, et al. Question answering for suicide risk assessment using Reddit. In: Proceedings of the IEEE 13th International Conference on Semantic Computing (ICSC). 2019 Presented at: IEEE 13th International Conference on Semantic Computing (ICSC); Jan. 30 - Feb. 1, 2019; Newport Beach, CA, USA. [CrossRef]
- Eichstaedt JC, Smith RJ, Merchant RM, Ungar LH, Crutchley P, Preoţiuc-Pietro D, et al. Facebook language predicts depression in medical records. Proc Natl Acad Sci U S A 2018 Oct 30;115(44):11203-11208 [FREE Full text] [CrossRef] [Medline]
- Katchapakirin K, Wongpatikaseree K, Yomaboot P, Kaewpitakkun Y. Facebook social media for depression detection in the Thai community. In: Proceedings of the 15th International Joint Conference on Computer Science and Software Engineering (JCSSE). 2018 Presented at: 15th International Joint Conference on Computer Science and Software Engineering (JCSSE); July 11-13, 2018; Nakhon Pathom, Thailand p. 227-231. [CrossRef]
- Reece AG, Danforth CM. Instagram photos reveal predictive markers of depression. EPJ Data Sci 2017 Aug 8;6(1). [CrossRef]
- Leiva V, Freire A. Towards suicide prevention: early detection of depression on social media. In: Kompatsiaris I, Cave J, Satsiou A, editors. Internet Sci. Switzerland: Springer; 2017:428-436.
- Deshpande M, Rao V. Depression detection using emotion artificial intelligence. In: Proceedings of the International Conference on Intelligent Sustainable Systems (ICISS). 2017 Presented at: International Conference on Intelligent Sustainable Systems (ICISS); December 7-8, 2017; Palladam, India. [CrossRef]
- Ernala S, Birnbaum M, Candan K, Rizvi A, Sterling W, Kane J, et al. Methodological gaps in predicting mental health states from social media: triangulating diagnostic signals. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019 Presented at: CHI '19: CHI Conference on Human Factors in Computing Systems; May 2019; Glasgow, Scotland UK p. 1-16. [CrossRef]
- Joshi D, Makhija M, Nabar Y, Nehete N, Patwardhan M. Mental health analysis using deep learning for feature extraction. In: Proceedings of the ACM India Joint International Conference on Data Science and Management of Data. 2018 Presented at: CoDS-COMAD '18: The ACM India Joint International Conference on Data Science & Management of Data; January, 2018; Goa, India p. 356-359. [CrossRef]
- Wang Y, Tang J, Li J, Li B, Wan Y, Mellina C, et al. Understanding and discovering deliberate self-harm content in social media. In: Proceedings of the 26th International Conference on World Wide Web. 2017 Presented at: WWW '17: Proceedings of the 26th International Conference on World Wide Web; April, 2017; Perth, Australia p. 93-102. [CrossRef]
- Vedula N, Parthasarathy S. Emotional and linguistic cues of depression from social media. In: Proceedings of the 2017 International Conference on Digital Health. 2017 Presented at: DH '17: International Conference on Digital Health; July 2017; London, UK p. 127-136. [CrossRef]
- Chen X, Sykora M, Jackson T, Elayan S. What about mood swings: identifying depression on twitter with temporal measures of emotions. In: Proceedings of the Companion Proceedings of the The Web Conference 2018. 2018 Presented at: WWW '18: The Web Conference 2018; April 2018; Lyon, France p. 1653-1660. [CrossRef]
- Ziwei B, Chua H. An application for classifying depression in tweets. In: Proceedings of the 2nd International Conference on Computing and Big Data. Association for Computing Machinery; 2019 Presented at: ICCBD 2019: 2nd International Conference on Computing and Big Data; October, 2019; Taichung, Taiwan p. 37-41. [CrossRef]
- Shuai H, Shen C, Yang D, Lan Y, Lee W, Yu P, et al. Mining online social data for detecting social network mental disorders. In: Proceedings of the 25th International Conference on World Wide Web. 2016 Presented at: WWW '16: 25th International World Wide Web Conference; April, 2016; Montréal Québec Canada p. 275-285. [CrossRef]
- Chancellor S, Lin Z, Goodman E, Zerwas S, De Choudhury M. Quantifying and predicting mental illness severity in online pro-eating disorder communities. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. 2016 Presented at: CSCW '16: Computer Supported Cooperative Work and Social Computing; February, 2016; San Francisco, CA, USA p. 1171-1184. [CrossRef]
- Wu P, Koh J, Chen A. Event detection for exploring emotional upheavals of depressive people. In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. 2019 Presented at: SAC '19: The 34th ACM/SIGAPP Symposium on Applied Computing; April 2019; Limassol Cyprus p. 2086-2095. [CrossRef]
- Lu J, Sridhar S, Pandey R, Hasan M, Mohler G. Investigate transitions into drug addiction through text mining of Reddit data. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019 Presented at: KDD '19: The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining; August, 2019; Anchorage AK USA p. 2367-2375. [CrossRef]
- D'Alfonso S, Santesteban-Echarri O, Rice S, Wadley G, Lederman R, Miles C, et al. Artificial intelligence-assisted online social therapy for youth mental health. Front Psychol 2017 Jun 02;8:796 [FREE Full text] [CrossRef] [Medline]
- Rezaii N, Walker E, Wolff P. A machine learning approach to predicting psychosis using semantic density and latent content analysis. NPJ Schizophr 2019 Jun 13;5(1):9 [FREE Full text] [CrossRef] [Medline]
- Ricard BJ, Marsch LA, Crosier B, Hassanpour S. Exploring the utility of community-generated social media content for detecting depression: an analytical study on Instagram. J Med Internet Res 2018 Dec 06;20(12):e11817 [FREE Full text] [CrossRef] [Medline]
- Ernala SK, Rizvi AF, Birnbaum ML, Kane JM, De Choudhury M. Linguistic markers indicating therapeutic outcomes of social media disclosures of schizophrenia. Proc ACM Hum-Comput Interact 2017 Dec 06;1(CSCW):1-27. [CrossRef]
- Saha K, Sugar B, Torous J, Abrahao B, Kıcıman E, De Choudhury M. A social media study on the effects of psychiatric medication use. In: Proceedings of the 13th International Conference on Web and Social Media, ICWSM 2019. 2019 Presented at: 13th International Conference on Web and Social Media, ICWSM 201; June 11-14, 2019; Münich, Germany p. 440-451 URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85069434115&partnerID=40&md5=6eb1f1d6c8684a897f76cb9725aa6c7c
- Du J, Zhang Y, Luo J, Jia Y, Wei Q, Tao C, et al. Extracting psychiatric stressors for suicide from social media using deep learning. BMC Med Inform Decis Mak 2018 Jul 23;18(Suppl 2):43 [FREE Full text] [CrossRef] [Medline]
- Thorstad R, Wolff P. Predicting future mental illness from social media: a big-data approach. Behav Res Methods 2019 Aug 29;51(4):1586-1600. [CrossRef] [Medline]
- Fatima I, Abbasi BU, Khan S, Al-Saeed M, Ahmad HF, Mumtaz R. Prediction of postpartum depression using machine learning techniques from social media text. Expert Systems 2019 Apr 26;36(4):12409. [CrossRef]
- Zhou T, Hu G, Wang L. Psychological disorder identifying method based on emotion perception over social networks. Int J Environ Res Public Health 2019 Mar 16;16(6):953 [FREE Full text] [CrossRef] [Medline]
- Coppersmith G, Leary R, Crutchley P, Fine A. Natural language processing of social media as screening for suicide risk. Biomed Inform Insights 2018 Aug 27;10:A [FREE Full text] [CrossRef] [Medline]
- Li A, Jiao D, Zhu T. Detecting depression stigma on social media: a linguistic analysis. J Affect Disord 2018 May;232:358-362. [CrossRef] [Medline]
- Dao B, Nguyen T, Venkatesh S, Phung D. Effect of social capital on emotion, language style and latent topics in online depression community. In: Proceedings of the IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future (RIVF). 2016 Presented at: IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future (RIVF); Nov. 7-9, 2016; Hanoi, Vietnam p. 61-66. [CrossRef]
- Nambisan P, Luo Z, Kapoor A, Patrick T, Cisler R. Social media, big data, and public health informatics: ruminating behavior of depression revealed through Twitter. In: Proceedings of the 48th Hawaii International Conference on System Sciences. 2015 Presented at: 48th Hawaii International Conference on System Sciences; Jan. 5-8, 2015; Kauai, HI, USA p. 2906-2913. [CrossRef]
- Nguyen T, Phung D, Dao B, Venkatesh S, Berk M. Affective and content analysis of online depression communities. IEEE Trans Affective Comput 2014 Jul 1;5(3):217-226. [CrossRef]
- Dao B, Nguyen T, Phung D, Venkatesh S. Effect of mood, social connectivity and age in online depression community via topic and linguistic analysis. In: Benatallah B, Bestavros A, Manolopoulos Y, Vakali A, Zhang Y, editors. Web Information Systems Engineering – WISE 2014. Switzerland: Springer; 2014:398-407.
- Manikonda L, De Choudhury M. Modeling and understanding visual attributes of mental health disclosures in social media. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017 Presented at: CHI '17: CHI Conference on Human Factors in Computing Systems; May 2017; Denver Colorado USA p. 170-181. [CrossRef]
- De Choudhury M, Kiciman E, Dredze M, Coppersmith G, Kumar M. Discovering shifts to suicidal ideation from mental health content in social media. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 2016 May Presented at: CHI'16: CHI Conference on Human Factors in Computing Systems; May 2016; San Jose California USA p. 2098-2110 URL: http://europepmc.org/abstract/MED/29082385 [CrossRef]
- Gaur M, Kursuncu U, Alambo A, Sheth A, Daniulaityte R, Thirunarayan K, et al. "Let Me Tell You About Your Mental Health!": contextualized classification of Reddit posts to DSM-5 for web-based intervention. In: Proceedings of the CIKM '18: The 27th ACM International Conference on Information and Knowledge Management. 2018 Presented at: CIKM '18: The 27th ACM International Conference on Information and Knowledge Management; October 2018; Torino Italy p. 753-762. [CrossRef]
- Bagroy S, Kumaraguru P, De Choudhury M. A social media based index of mental well-being in college campuses. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017 Presented at: CHI '17: CHI Conference on Human Factors in Computing Systems; May, 2017; Denver Colorado USA p. 1634-1646. [CrossRef]
- De Choudhury M, Counts S, Horvitz E. Social media as a measurement tool of depression in populations. In: Proceedings of the 5th Annual ACM Web Science Conference. 2013 May Presented at: WebSci '13: Web Science 2013; May 2013; Paris France p. 47-56. [CrossRef]
- Sadeque F, Xu D, Bethard S. Measuring the latency of depression detection in social media. In: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. 2018 Presented at: WSDM 2018: The Eleventh ACM International Conference on Web Search and Data Mining; February, 2018; Marina Del Rey CA USA p. 495-503. [CrossRef]
- Burnap P, Colombo W, Scourfield J. Machine classification and analysis of suicide-related communication on Twitter. In: Proceedings of the 26th ACM Conference on Hypertext & Social Media. 2015 Presented at: HT '15: 26th ACM Conference on Hypertext and Social Media; September, 2015; Guzelyurt Northern Cyprus p. 75-84. [CrossRef]
- Sinha P, Mishra R, Sawhney R, Mahata D, Shah R, Liu H. #suicidal - a multipronged approach to identify and explore suicidal ideation in Twitter. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2019 Presented at: CIKM '19: The 28th ACM International Conference on Information and Knowledge Management; November, 2019; Beijing China p. 941-950. [CrossRef]
- Milne DN, McCabe KL, Calvo RA. Improving moderator responsiveness in online peer support through automated triage. J Med Internet Res 2019 Apr 26;21(4):e11410 [FREE Full text] [CrossRef] [Medline]
- Perez Arribas I, Goodwin GM, Geddes JR, Lyons T, Saunders KEA. A signature-based machine learning model for distinguishing bipolar disorder and borderline personality disorder. Transl Psychiatry 2018 Dec 13;8(1):274 [FREE Full text] [CrossRef] [Medline]
- Berrouiguet S, Barrigón ML, Castroman JL, Courtet P, Artés-Rodríguez A, Baca-García E. Combining mobile-health (mHealth) and artificial intelligence (AI) methods to avoid suicide attempts: the Smartcrises study protocol. BMC Psychiatry 2019 Sep 07;19(1):277 [FREE Full text] [CrossRef] [Medline]
- Stamate D, Katrinecz A, Stahl D, Verhagen SJ, Delespaul PA, van Os J, et al. Identifying psychosis spectrum disorder from experience sampling data using machine learning approaches. Schizophr Res 2019 Jul;209:156-163. [CrossRef] [Medline]
- Stamate D, Katrinecz A, Alghamdi W, Stahl D, Delespaul P, van Os J, et al. Predicting psychosis using the experience sampling method with mobile apps. In: Proceedings of the 16th IEEE International Conference on Machine Learning and Applications (ICMLA). 2017 Presented at: 16th IEEE International Conference on Machine Learning and Applications (ICMLA); Dec. 18-21, 2017; Cancun, Mexico p. 667-673. [CrossRef]
- Ben-Zeev D, Scherer EA, Wang R, Xie H, Campbell AT. Next-generation psychiatric assessment: using smartphone sensors to monitor behavior and mental health. Psychiatr Rehabil J 2015 Sep;38(3):218-226 [FREE Full text] [CrossRef] [Medline]
- Ware S, Yue C, Morillo R, Lu J, Shang C, Bi J, et al. Predicting depressive symptoms using smartphone data. Smart Health 2020 Mar;15:100093. [CrossRef]
- Mastoras R, Iakovakis D, Hadjidimitriou S, Charisis V, Kassie S, Alsaadi T, et al. Touchscreen typing pattern analysis for remote detection of the depressive tendency. Sci Rep 2019 Sep 16;9(1):13414 [FREE Full text] [CrossRef] [Medline]
- Wshah S, Skalka C, Price M. Predicting posttraumatic stress disorder risk: a machine learning approach. JMIR Ment Health 2019 Jul 22;6(7):e13946 [FREE Full text] [CrossRef] [Medline]
- Gerych W, Agu E, Rundensteiner E. Classifying depression in imbalanced datasets using an autoencoder-based anomaly detection approach. In: Proceedings of the IEEE 13th International Conference on Semantic Computing (ICSC). 2019 Presented at: IEEE 13th International Conference on Semantic Computing (ICSC); Jan. 30 - Feb. 1, 2019; Newport Beach, CA, USA. [CrossRef]
- Kim J, Lim S, Min YH, Shin Y, Lee B, Sohn G, et al. Depression screening using daily mental-health ratings from a smartphone application for breast cancer patients. J Med Internet Res 2016 Aug 04;18(8):e216 [FREE Full text] [CrossRef] [Medline]
- Farhan A, Lu J, Bi J, Russell A, Wang B, Bamis A. Multi-view bi-clustering to identify smartphone sensing features indicative of depression. In: Proceedings of the IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE). 2016 Presented at: IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE); June 27-29, 2016; Washington, DC, USA. [CrossRef]
- Wang R, Campbell A, Zhou X. Using opportunistic face logging from smartphone to infer mental health: challenges and future directions. In: Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers. 2015 Presented at: UbiComp '15: The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 2015; Osaka Japan p. 683-692. [CrossRef]
- Opoku AK, Visuri A, Ferreira D. Towards early detection of depression through smartphone sensing. In: Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers. 2019 Presented at: UbiComp '19: The 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 2019; London United Kingdom p. 1158-1161. [CrossRef]
- Huang Y, Gong J, Rucker M, Chow P, Fua K, Gerber M, et al. Discovery of behavioral markers of social anxiety from smartphone sensor data. In: Proceedings of the 1st Workshop on Digital Biomarkers. 2017 Presented at: MobiSys'17: The 15th Annual International Conference on Mobile Systems, Applications, and Services; June 2017; Niagara Falls New York USA p. 9-14. [CrossRef]
- Xu X, Chikersal P, Doryab A, Villalba DK, Dutcher JM, Tumminia MJ, et al. Leveraging routine behavior and contextually-filtered features for depression detection among college students. Proc ACM Interact Mob Wearable Ubiquitous Technol 2019 Sep 09;3(3):1-33. [CrossRef]
- Gruenerbl A, Osmani V, Bahle G, Carrasco J, Oehler S, Mayora O, et al. Using smart phone mobility traces for the diagnosis of depressive and manic episodes in bipolar patients. In: Proceedings of the 5th Augmented Human International Conference. 2014 Presented at: AH '14: 5th Augmented Human International Conference; March 2014; Kobe Japan p. 1-8. [CrossRef]
- Nobles A, Glenn J, Kowsari K, Teachman B, Barnes L. Identification of imminent suicide risk among young adults using text messages. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 2018 Presented at: CHI '18: CHI Conference on Human Factors in Computing Systems; April, 2018; Montreal QC Canada p. 1-11. [CrossRef]
- Bardram J, Frost M, Szántó K, Faurholt-Jepsen M, Vinberg M, Kessing L. Designing mobile health technology for bipolar disorder: a field trial of the monarca system. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2013 Presented at: CHI '13: CHI Conference on Human Factors in Computing Systems; April, 2013; Paris France p. 2627-2636 URL: https://dl.acm.org/doi/abs/10.1145/2470654.2481364 [CrossRef]
- Sarda A, Munuswamy S, Sarda S, Subramanian V. Using passive smartphone sensing for improved risk stratification of patients with depression and diabetes: cross-sectional observational study. JMIR Mhealth Uhealth 2019 Jan 29;7(1):e11041 [FREE Full text] [CrossRef] [Medline]
- Berrouiguet S, Ramírez D, Barrigón ML, Moreno-Muñoz P, Camacho RC, Baca-García E, et al. Combining continuous smartphone native sensors data capture and unsupervised data mining techniques for behavioral changes detection: a case series of the evidence-based behavior (eB2) study. JMIR Mhealth Uhealth 2018 Dec 10;6(12):e197 [FREE Full text] [CrossRef] [Medline]
- Grünerbl A, Muaremi A, Osmani V, Bahle G, Ohler S, Tröster G, et al. Smartphone-based recognition of states and state changes in bipolar disorder patients. IEEE J Biomed Health Inform 2015 Jan;19(1):140-148. [CrossRef] [Medline]
- Meinlschmidt G, Tegethoff M, Belardi A, Stalujanis E, Oh M, Jung EK, et al. Personalized prediction of smartphone-based psychotherapeutic micro-intervention success using machine learning. J Affect Disord 2020 Mar 01;264:430-437. [CrossRef] [Medline]
- Whelan P, Machin M, Lewis S, Buchan I, Sanders C, Applegate E, et al. Mobile early detection and connected intervention to coproduce better care in severe mental illness. Stud Health Technol Inform 2015;216:123-126. [Medline]
- Hidalgo-Mazzei D, Reinares M, Murru A, Bonnin C, Vieta E, Colom F. Signs and symptoms self-monitoring and psychoeducation in bipolar patients with a smart-phone application (SIMPLe) project. Eur Psychiatry 2015 Mar;30:320. [CrossRef]
- Francillette Y, Bouchard B, Boucher E, Gaboury S, Bernard P, Romain AJ, et al. Development of an exergame on mobile phones to increase physical activity for adults with severe mental illness. In: Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference. 2018 Presented at: PETRA '18: The 11th PErvasive Technologies Related to Assistive Environments Conference; June, 2018; Corfu Greece p. 241-248. [CrossRef]
- Hidalgo-Mazzei D, Mateu A, Reinares M, Murru A, Bonnín CD, Varo C, et al. Psychoeducation in bipolar disorder with a SIMPLe smartphone application: feasibility, acceptability and satisfaction. J Affect Disord 2016 Aug;200:58-66. [CrossRef] [Medline]
- Faurholt-Jepsen M, Busk J, Frost M, Vinberg M, Christensen EM, Winther O, et al. Voice analysis as an objective state marker in bipolar disorder. Transl Psychiatry 2016 Jul 19;6(7):e856 [FREE Full text] [CrossRef] [Medline]
- Wahle F, Kowatsch T, Fleisch E, Rufer M, Weidt S. Mobile sensing and support for people with depression: a pilot trial in the wild. JMIR Mhealth Uhealth 2016 Sep 21;4(3):e111 [FREE Full text] [CrossRef] [Medline]
- Guidi A, Salvi S, Ottaviano M, Gentili C, Bertschy G, de Rossi D, et al. Smartphone application for the analysis of prosodic features in running speech with a focus on bipolar disorders: system performance evaluation and case study. Sensors (Basel) 2015 Nov 06;15(11):28070-28087 [FREE Full text] [CrossRef] [Medline]
- Burns MN, Begale M, Duffecy J, Gergle D, Karr CJ, Giangrande E, et al. Harnessing context sensing to develop a mobile intervention for depression. J Med Internet Res 2011 Aug 12;13(3):e55 [FREE Full text] [CrossRef] [Medline]
- Barish G, Aralis H, Elbogen E, Lester P. A mobile app for patients and those who care about them: a case study for veterans with PTSD + anger. In: Proceedings of the 13th EAI International Conference on Pervasive Computing Technologies for Healthcare. 2019 Presented at: PervasiveHealth'19: The 13th International Conference on Pervasive Computing Technologies for Healthcare; May 2019; Trento Italy p. 1-10. [CrossRef]
- Grünerbl A, Oleksy P, Bahle G, Haring C, Weppner J, Lukowicz P. Towards smart phone based monitoring of bipolar disorder. In: Proceedings of the Second ACM Workshop on Mobile Systems, Applications, and Services for HealthCare. 2012 Presented at: SenSys '12: The 10th ACM Conference on Embedded Network Sensor Systems; November 2012; Toronto Ontario Canada p. 1-6. [CrossRef]
- Canzian L, Musolesi M. Trajectories of depression: unobtrusive monitoring of depressive states by means of smartphone mobility traces analysis. In: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 2015 Presented at: UbiComp '15: The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 2015; Osaka Japan p. 1293-1304. [CrossRef]
- Mehrotra A, Hendley R, Musolesi M. Towards multi-modal anticipatory monitoring of depressive states through the analysis of human-smartphone interaction. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. 2016 Presented at: UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September, 2016; Heidelberg Germany p. 1132-1138. [CrossRef]
- Wang R, Wang W, Aung MH, Ben-Zeev D, Brian R, Campbell AT, et al. Predicting symptom trajectories of schizophrenia using mobile sensing. GetMobile: Mobile Comp Comm 2018 Sep 05;22(2):32-37. [CrossRef]
- Carriço L, de Sa M, Duarte L, Antunes T. Therapy: location-aware assessment and tasks. In: Proceedings of the 3rd Augmented Human International Conference. 2012 Presented at: AH '12: Augmented Human International Conference; March 2012; Megève France p. 1-5. [CrossRef]
- Saeb S, Zhang M, Kwasny M, Karr C, Kording K, Mohr D. The relationship between clinical, momentary, and sensor-based assessment of depression. In: Proceedings of the 9th International Conference on Pervasive Computing Technologies for Healthcare. 2015 Presented at: 9th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth); May 20, 2015; Istanbul Turkey p. 229-232. [CrossRef]
- Moreira MW, Rodrigues JJ, Kumar N, Saleem K, Illin IV. Postpartum depression prediction through pregnancy data analysis for emotion-aware smart systems. Inf Fusion 2019 May;47:23-31. [CrossRef]
- Ghandeharioun A, Fedor S, Sangermano L, Ionescu D, Alpert J, Dale C, et al. Objective assessment of depressive symptoms with machine learning and wearable sensors data. In: Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). 2017 Presented at: Seventh International Conference on Affective Computing and Intelligent Interaction (ACII); Oct. 23-26, 2017; San Antonio, TX, USA. [CrossRef]
- Sivalingam R, Cherian A, Fasching J, Walczak N, Bird N, Morellas V, et al. A multi-sensor visual tracking system for behavior monitoring of at-risk children. In: Proceedings of the IEEE International Conference on Robotics and Automation. 2012 Presented at: IEEE International Conference on Robotics and Automation; May 14-18, 2012; Saint Paul, MN, USA. [CrossRef]
- Qian K, Kuromiya H, Ren Z, Schmitt M, Zhang Z, Nakamura T, et al. Automatic detection of major depressive disorder via a bag-of-behaviour-words approach. In: Proceedings of the Third International Symposium on Image Computing and Digital Medicine. 2019 Presented at: ISICDM 2019: The Third International Symposium on Image Computing and Digital Medicine; August 2019; Xi'an China p. 71-75. [CrossRef]
- Lu J, Shang C, Yue C, Morillo R, Ware S, Kamath J, et al. Joint modeling of heterogeneous sensing data for depression assessment via multi-task learning. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Mar 26;2(1):1-21. [CrossRef]
- Frogner J, Noori F, Halvorsen P, Hicks S, Garcia-Ceja E, Torresen J, et al. One-dimensional convolutional neural networks on motor activity measurements in detection of depression. In: Proceedings of the 4th International Workshop on Multimedia for Personal Health & Health Care. 2019 Presented at: MM '19: The 27th ACM International Conference on Multimedia; October, 2019; Nice France p. 9-15. [CrossRef]
- Bennett C, Sabanovic S, Piatt J, Nagata S, Eldridge L, Randall N. A robot a day keeps the blues away. In: Proceedings of the IEEE International Conference on Healthcare Informatics (ICHI). 2017 Presented at: IEEE International Conference on Healthcare Informatics (ICHI); Aug. 23-26, 2017; Park City, UT, USA p. 536-540. [CrossRef]
- Alam M, Abedin S, Al Ameen M, Hong C. Web of objects based ambient assisted living framework for emergency psychiatric state prediction. Sensors (Basel) 2016 Sep 06;16(9):1431 [FREE Full text] [CrossRef] [Medline]
- Antle A, McLaren E, Fiedler H, Johnson N. Design for mental health: how socio-technological processes mediate outcome measures in a field study of a wearable anxiety app. In: Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction. 2019 Presented at: TEI '19: Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction; March, 2019; Tempe Arizona USA p. 87-96. [CrossRef]
- Wang R, Wang W, daSilva A, Huckins JF, Kelley WM, Heatherton TF, et al. Tracking depression dynamics in college students using mobile phone and wearable sensing. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Mar 26;2(1):1-26. [CrossRef]
- Mallol-Ragolta A, Dhamija S, Boult T. A multimodal approach for predicting changes in PTSD symptom severity. In: Proceedings of the 20th ACM International Conference on Multimodal Interaction. 2018 Presented at: ICMI '18: International Conference on Multimodal Interaction; October, 2018; Boulder CO USA p. 324-333. [CrossRef]
- Alam M, Cho E, Huh E, Hong C. Cloud based mental state monitoring system for suicide risk reconnaissance using wearable bio-sensors. In: Proceedings of the 8th International Conference on Ubiquitous Information Management and Communication. 2014 Presented at: ICUIMC '14: The 8th International Conference on Ubiquitous Information Management and Communication; January, 2014; Siem Reap Cambodia p. 1-6. [CrossRef]
- Wan Z, Zhong N, Chen J, Zhou H, Yang J, Yan J. A depressive mood status quantitative reasoning method based on portable EEG and self-rating scale. In: Proceedings of the International Conference on Web Intelligence. 2017 Presented at: WI '17: International Conference on Web Intelligence 2017; August, 2017; Leipzig Germany p. 389-395. [CrossRef]
- Rubin J, Eldardiry H, Abreu R, Ahern S, Du H, Pattekar A, et al. Towards a mobile and wearable system for predicting panic attacks. In: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 2015 Presented at: UbiComp '15: The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September, 2015; Osaka Japan p. 529-533. [CrossRef]
- Dickerson R, Gorlin E, Stankovic J. Empath: a continuous remote emotional health monitoring system for depressive illness. In: Proceedings of the 2nd Conference on Wireless Health. 2011 Presented at: WH '11: Wireless Health 2011; October, 2011; San Diego California p. 1-10. [CrossRef]
- Cheng C, Lan T, Chan C. An improved localization algorithm with wireless heartbeat monitoring system for patient safety in psychiatric wards. Eng Appl Artif Intell 2013 Feb;26(2):905-912. [CrossRef]
- Huang C, Hsiao S, Hsu Y, Chung P, Tsai M, Yang Y. Reliability improvement for an RFID-based psychiatric patient tracking system. In: IFMBE Proceedings. 2007 Presented at: 10th World Congress on Medical Physics and Biomedical Engineering, WC 2006; August 27 - Sept 1, 2006; Seoul, South Korea p. 737-741 URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84958239233&partnerID=40&md5=eda993e9db7ec67afd867cad85dca4c2 [CrossRef]
- Tsai M, Huang C, Chung P, Yang Y, Hsu Y, Hsiao S. A psychiatric patients tracking system. In: Proceedings of the IEEE International Symposium on Circuits and Systems. 2006 Presented at: ISCAS 2006: 2006 IEEE International Symposium on Circuits and Systems; May 21-24, 2006; Kos, Greece p. 4050-4053 URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-34547365849&partnerID=40&md5=b3f8c95d31fc5b17e48ed59c3cf9f151
- Ware S, Yue C, Morillo R, Lu J, Shang C, Kamath J, et al. Large-scale automatic depression screening using meta-data from WiFi infrastructure. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Dec 27;2(4):1-27. [CrossRef]
- Zakaria C, Balan R, Lee Y. StressMon: scalable detection of perceived stress and depression using passive sensing of changes in work routines and group interactions. Proc ACM Hum-Comput Interact 2019 Nov 07;3(CSCW):1-29. [CrossRef]
- Joerin A, Rauws M, Ackerman ML. Psychological artificial intelligence service, Tess: delivering on-demand support to patients and their caregivers: technical report. Cureus 2019 Jan 28;11(1):e3972 [FREE Full text] [CrossRef] [Medline]
- Househ M, Schneider J, Ahmad K, Alam T, Al-Thani D, Siddig MA, et al. An evolutionary bootstrapping development approach for a mental health conversational agent. Stud Health Technol Inform 2019 Jul 04;262:228-231. [CrossRef] [Medline]
- Rinaldi A, Oseguera O, Tuazon J, Cruz A. End-to-end dialogue with sentiment analysis features. In: Stephanidis C, editor. HCI International 2017 – Posters' Extended Abstracts. HCI 2017. Communications in Computer and Information Science, vol 713. Switzerland: Springer; 2017:480-487.
- Suganuma S, Sakamoto D, Shimoyama H. An embodied conversational agent for unguided internet-based cognitive behavior therapy in preventative mental health: feasibility and acceptability pilot trial. JMIR Ment Health 2018 Jul 31;5(3):e10454 [FREE Full text] [CrossRef] [Medline]
- Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health 2018 Dec 13;5(4):e64 [FREE Full text] [CrossRef] [Medline]
- Aggarwal S, Saluja S, Gambhir V, Gupta S, Satia SPS. Predicting likelihood of psychological disorders in PlayerUnknown's Battlegrounds (PUBG) players from Asian countries using supervised machine learning. Addict Behav 2020 Feb;101:106132. [CrossRef] [Medline]
- Mandryk RL, Birk MV. The potential of game-based digital biomarkers for modeling mental health. JMIR Ment Health 2019 Apr 23;6(4):e13485 [FREE Full text] [CrossRef] [Medline]
- Ray A, Kumar S, Reddy R, Mukherjee P, Garg R. Multi-level attention network using text, audio and video for depression prediction. In: Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop. 2019 Presented at: MM '19: The 27th ACM International Conference on Multimedia; October, 2019; Nice France p. 81-88. [CrossRef]
- Salekin A, Eberle JW, Glenn JJ, Teachman BA, Stankovic JA. A weakly supervised learning framework for detecting social anxiety and depression. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Jun 05;2(2):1-26 [FREE Full text] [CrossRef] [Medline]
- Suzuki E, Deguchi Y, Matsukawa T, Ando S, Ogata H, Sugimoto M. Toward a platform for collecting, mining, and utilizing behavior data for detecting students with depression risks. In: Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments. 2015 Presented at: PETRA '15: 8th PErvasive Technologies Related to Assistive Environments; July, 2015; Corfu Greece p. 1-8. [CrossRef]
- Meng H, Huang D, Wang H, Yang H, AI-Shuraifi M, Wang Y. Depression recognition based on dynamic facial and vocal expression features using partial least square regression. In: Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge. 2013 Presented at: MM '13: ACM Multimedia Conference; October, 2013; Barcelona Spain p. 21-30. [CrossRef]
- Gupta R, Malandrakis N, Xiao B, Guha T, Van Segbroeck M, Black M, et al. Multimodal prediction of affective dimensions and depression in human-computer interactions. In: Proceedings of the 4th International Workshop on Audio/Visual Emotion Challenge. 2014 Presented at: MM '14: 2014 ACM Multimedia Conference; November, 2014; Orlando Florida USA p. 33-40. [CrossRef]
- Zhou D, Luo J, Silenzio V, Zhou Y, Hu J, Currier G, et al. Tackling mental health by integrating unobtrusive multimodal sensing. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015 Presented at: AAAI'15: Twenty-Ninth AAAI Conference on Artificial Intelligence; January 2015; Austin Texas, USA p. 1401-1408. [CrossRef]
- Provoost S, Ruwaard J, van Breda W, Riper H, Bosse T. Validating automated sentiment analysis of online cognitive behavioral therapy patient texts: an exploratory study. Front Psychol 2019 May 14;10:1065 [FREE Full text] [CrossRef] [Medline]
- Dias L, Barbosa J. Towards a ubiquitous care model for patients with anxiety disorders. In: Proceedings of the 25th Brazillian Symposium on Multimedia and the Web. 2019 Presented at: WebMedia '19: Brazilian Symposium on Multimedia and the Web; October, 2019; Rio de Janeiro Brazil p. 141-144. [CrossRef]
- Wade J, Nichols HS, Ichinose M, Bian D, Bekele E, Snodgress M, et al. Extraction of emotional information via visual scanning patterns: a feasibility study of participants with schizophrenia and neurotypical individuals. ACM Trans Access Comput 2018 Nov 22;11(4):1-20 [FREE Full text] [CrossRef] [Medline]
- Tuli A, Singh P, Sood M, Deb KS, Jain S, Jain A, et al. Harmony: close knitted mHealth assistance for patients, caregivers and doctors for managing SMIs. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. 2016 Presented at: UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September, 2016; Heidelberg Germany p. 1144-1152. [CrossRef]
- Hirsch T, Soma C, Merced K, Kuo P, Dembe A, Caperton DD, et al. "It's hard to argue with a computer": investigating psychotherapists' attitudes towards automated evaluation. In: Proceedings of the 2018 Designing Interactive Systems Conference. 2018 Presented at: DIS '18: Designing Interactive Systems Conference 2018; June, 2018; Hong Kong China p. 559-571. [CrossRef]
- Ferrario M, Simm W, Gradinar A, Forshaw S, Smith MT, Lee T, et al. Computing and mental health: intentionality and reflection at the click of a button. In: Proceedings of the 11th EAI International Conference on Pervasive Computing Technologies for Healthcare. 2017 Presented at: PervasiveHealth '17: 11th EAI International Conference on Pervasive Computing Technologies for Healthcare; May, 2017; Barcelona Spain p. 1-10. [CrossRef]
- Lenhard F, Sauer S, Andersson E, Månsson KN, Mataix-Cols D, Rück C, et al. Prediction of outcome in internet-delivered cognitive behaviour therapy for paediatric obsessive-compulsive disorder: a machine learning approach. Int J Methods Psychiatr Res 2018 Mar 28;27(1):e1576 [FREE Full text] [CrossRef] [Medline]
- Emerencia A, van der Krieke L, Sytema S, Petkov N, Aiello M. Generating personalized advice for schizophrenia patients. Artif Intell Med 2013 May;58(1):23-36. [CrossRef] [Medline]
- Ma-Kellams C, Or F, Baek JH, Kawachi I. Rethinking suicide surveillance. Clin Psychol Sci 2015 Aug 11;4(3):480-484. [CrossRef]
- Parker L, Bero L, Gillies D, Raven M, Mintzes B, Jureidini J, et al. Mental health messages in prominent mental health apps. Ann Fam Med 2018 Jul 09;16(4):338-342 [FREE Full text] [CrossRef] [Medline]
- Helbich M. Dynamic urban environmental exposures on depression and suicide (NEEDS) in the Netherlands: a protocol for a cross-sectional smartphone tracking study and a longitudinal population register study. BMJ Open 2019 Aug 10;9(8):e030075 [FREE Full text] [CrossRef] [Medline]
- Carter P, Laurie GT, Dixon-Woods M. The social licence for research: why care.data ran into trouble. J Med Ethics 2015 May;41(5):404-409 [FREE Full text] [CrossRef] [Medline]
- Friedman B, Kahn P. Human values, ethics, and design. In: Sears A, Jacko JA, editors. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. Boca Raton, Florida, United States: CRC Press; 2002:1177-1201.
- Shilton K. Values levers: building ethics into design. Sci Technol Human Values 2012 Apr 23;38(3):374-397. [CrossRef]
- Hübner UH, Egbert N, Schulte G. Clinical information systems - seen through the ethics lens. Yearb Med Inform 2020 Aug 21;29(1):104-114 [FREE Full text] [CrossRef] [Medline]
- Marks M. Emergent medical data: health information inferred by artificial intelligence. 11 U.C. Irvine Law Review 995 (2021). 2020. URL: https://papers.ssrn.com/abstract=3554118 [accessed 2020-05-11]
- Friesen P. Digital psychiatry: promises and perils. Association for the Advancement of Philosophy and Psychiatry. 2020. URL: https://aapp.press.jhu.edu/sites/default/files/Bulletins/aapp-bulletin-volume-27-1-july-2020.pdf [accessed 2021-05-20]
- Rose N. Our Psychiatric Future. Hoboken, New Jersey, United States: John Wiley & Sons Inc; 2018:1-248.
- Gooding P. A New Era for Mental Health Law and Policy: Supported Decision-Making and the UN Convention on the Rights of Persons with Disabilities. Cambridge, United Kingdom: Cambridge University Press; 2018:1-295.
- Carr S. 'AI gone mental': engagement and ethics in data-driven technology for mental health. J Ment Health 2020 Apr 30;29(2):125-130. [CrossRef] [Medline]
- Ralston W. They told their therapists everything. Hackers leaked it all. WIRED.com. 2021 Apr 05. URL: https://www.wired.com/story/vastaamo-psychotherapy-patients-hack-data-breach/ [accessed 2021-06-01]
- Powles J, Nissenbaum H. The seductive diversion of 'solving' bias in artificial intelligence. Medium. 2018. URL: https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53 [accessed 2020-07-02]
- Mills C, Hilberg E. The construction of mental health as a technological problem in India. Crit Public Health 2018 Aug 13;30(1):41-52. [CrossRef]
- Tai AM, Albuquerque A, Carmona NE, Subramanieapillai M, Cha DS, Sheko M, et al. Machine learning and big data: implications for disease modeling and therapeutic discovery in psychiatry. Artif Intell Med 2019 Aug;99:101704. [CrossRef] [Medline]
- Aafjes-van DK, Kamsteeg C, Bate J, Aafjes M. A scoping review of machine learning in psychotherapy research. Psychother Res 2021 Jan 29;31(1):92-116. [CrossRef] [Medline]
- Mohr DC, Shilton K, Hotopf M. Digital phenotyping, behavioral sensing, or personal sensing: names and transparency in the digital age. NPJ Digit Med 2020 Mar 25;3(1):45 [FREE Full text] [CrossRef] [Medline]
- Resnik DB, Elliott KC. The ethical challenges of socially responsible science. Account Res 2016 Jul 20;23(1):31-46 [FREE Full text] [CrossRef] [Medline]
Abbreviations
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-analyses
Edited by J Torous; submitted 29.09.20; peer-reviewed by N Martinez-Martin, MDG Pimentel, I Shubina; comments to author 25.11.20; revised version received 11.03.21; accepted 15.04.21; published 10.06.21
Copyright © Piers Gooding, Timothy Kariotis. Originally published in JMIR Mental Health (https://mental.jmir.org), 10.06.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.