Published in Vol 12 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/71304.
Video Games and Gamification for Assessing Mild Cognitive Impairment: Scoping Review


Review

1e-Media Research Lab, Faculty of Engineering Technology, KU Leuven, Leuven, Belgium

2Augment Group, Department of Computer Sciences, KU Leuven, Leuven, Belgium

3Karlsruhe Institute of Technology, Karlsruhe, Germany

Corresponding Author:

Yu Chen, MPhil

e-Media Research Lab

Faculty of Engineering Technology

KU Leuven

Andreas Vesaliusstraat 13

Leuven, 3000

Belgium

Phone: 32 0456210123

Email: yu.chen@kuleuven.be


Background: Early assessment of mild cognitive impairment (MCI) in older adults is crucial, as it enables timely interventions and decision-making. In recent years, researchers have been exploring the potential of gamified interactive systems (GISs) to assess pathological cognitive decline. However, effective methods for integrating these systems and designing GISs that are both engaging and accurate in assessing cognitive decline are still under investigation.

Objective: We aimed to comprehensively investigate GISs used to assess MCI. Specifically, we reviewed the existing systems to understand the different game types (including genres and interaction paradigms) used for assessment. In addition, we examined the cognitive functions targeted. Finally, we investigated the evidence for the performance of assessing MCI through GISs by looking at the quality of validation for these systems in assessing MCI and the diagnostic performance reported.

Methods: We conducted a scoping search in IEEE Xplore, ACM Digital Library, and Scopus databases to identify interactive gamified systems developed for assessing MCI. Game types were categorized according to genres and interaction paradigms. The cognitive functions targeted by the systems were compared with those assessed in the Montreal Cognitive Assessment (MoCA). Finally, we examined the quality of validation against the reference standard (ground truth), relevance of controls, and sample size. Where provided, the diagnostic performance on sensitivity, specificity, and area under the curve was reported.

Results: A total of 81 articles covering 49 GISs were included in this review. The primary game types used for MCI assessment were classified as casual games (30/49, 61%), simulation games (17/49, 35%), full-body movement games (4/49, 8%), and dedicated interactive games (3/49, 6%). Of the 49 systems, 6 (12%) assessed cognitive functions comprehensively, compared to those functions assessed via the MoCA. Of the 49 systems, 14 (29%) had validation studies, with sensitivities ranging from 70.7% to 100% and specificities ranging from 56.5% to 100%. The reported diagnostic performances of GISs were comparable to those of common screening instruments, such as Mini-Mental State Examination and MoCA, with some systems reporting near-perfect performance (area under the curve>0.98). However, these findings often stemmed from small samples and retrospective designs. Moreover, some of these systems’ model training and validation exhibited substantial deficiencies.

Conclusions: This review provides a comprehensive summary of GISs for assessing MCI, exploring the cognitive functions assessed by these systems and evaluating their diagnostic performance. The results indicate that current GISs hold promise for the assessment of MCI, with several systems demonstrating diagnostic performance comparable to established screening tools. Nevertheless, despite some systems reporting impressive performance, there is a need for improvement in validation, particularly concerning sample size and methodological rigor. Future work should prioritize prospective validation and present greater methodological consistency.

JMIR Ment Health 2025;12:e71304

doi:10.2196/71304


Background

Cognitive impairment refers to the decline of one or more cognitive functions, such as attention, executive function, and memory [1]. This decline may be a natural part of aging [2]. However, with the increase in average lifespan, the prevalence of pathological cognitive decline also rises due to excessive neural damage [3]. This pathological decline is known as dementia, a general term for a variety of neurodegenerative diseases with different causes characterized by “an impaired ability to remember, think, or make decisions interfering with daily activities” [4]. Most types of dementia develop gradually over time [5]. The prestage of dementia is called mild cognitive impairment (MCI). More particularly, MCI is defined as “a syndrome of self-reported cognitive complaint with one or more objective cognitive impairments but preserved independence in functional abilities” [6]. Hence, individuals with MCI, despite experiencing cognitive impairment, still successfully perform activities of daily living (cooking, toileting, walking, visiting friends, etc), and not all individuals with MCI go on to develop dementia [6]. However, they are at a higher risk, with a reported annual conversion rate from MCI to dementia varying from 2% to 31% [7].

Currently, pharmaceutical interventions to cure MCI or dementia are still under research [8]. Nevertheless, early assessment of cognitive decline, along with early intervention methods, such as cognitive training and medication, can effectively slow down the progression of the condition [9-12]. The current gold standard for diagnosing MCI involves a detailed anamnesis, followed by a neuropsychological examination, and is often complemented by brain imaging to detect structural changes, along with blood and cerebrospinal fluid analyses [13,14]. This comprehensive assessment necessitates a team of medical specialists, consuming a significant amount of medical resources [15]. Therefore, before a detailed examination is performed, it is advised to administer a quicker screening test and to continue with a full examination only if necessary [16]. The Mini-Mental State Examination (MMSE) [17] is currently the most widely used instrument for rapid screening of Alzheimer disease (AD) and MCI [18], assessing cognitive functions such as attention, registration and recall, orientation, and language. It is fast to administer, taking only 10 minutes, but it has also shown a lack of sensitivity, misclassifying older adults with MCI as healthy [19]. The Montreal Cognitive Assessment (MoCA) [20] is another popular instrument that evaluates similar cognitive functions in up to 20 minutes. Several studies have shown MoCA to be superior to the MMSE in terms of sensitivity for detecting MCI [20-22]. However, the MoCA has shown lower specificity, misclassifying healthy older adults (with natural cognitive decline) as at risk for MCI.

The suboptimal accuracy of these quick screening instruments may be partly due to the limited assessment time inherent in their design. However, another factor negatively impacting the accuracy of neuropsychological tests may be anxiety. Patients who are required to perform abstract tasks at a clinic in the presence of medical experts may experience heightened stress levels and perform suboptimally [23]. This is known as the white coat effect [24] and is known to disturb neuropsychological test performance. Moreover, neuropsychological tests are designed as one-time assessments and can be disturbed by circumstances such as lack of sleep or traumatic events (eg, loss of a spouse). However, the already long waiting lists make frequent and repeated neuropsychological tests infeasible.

Therefore, to complement screening instruments for MCI and alleviate the resources needed for a full examination, researchers are exploring alternative sources of information on cognitive functioning. Such solutions may come in the form of dedicated sensors installed in the home or wearables to detect movements [25,26], embedded software trackers that capture mouse movements [27,28], keyboard loggers installed on computers [29], conversational humanoids [30], or the use of interactive virtual simulations and games. Particularly, the latter has stirred the interest of researchers; games and simulations have long been used in understanding, measuring, and training cognition [31-33]. In addition to providing interactive, immersive environments to engage participants, they provide challenges bounded by a fixed set of rules, which can be highly indicative of targeted cognitive functioning. Research has indicated that games are more engaging than classical neuropsychological tests [32-34] and can be used to assess cognitive skills [35]. Moreover, the entertaining nature of gamified interactive systems (GISs) might reduce stress. They can be played in the comfort of the patient’s home, making the presence of a trained administrator redundant [36]. Finally, GISs also encourage more frequent interaction, providing additional testing data. Thus, they have the potential to complement traditional screening instruments by offering longitudinal assessment, which provides health care professionals with a new perspective for their case findings and diagnosis.

Despite their potential, it remains unclear to what extent GISs can be effectively used for the assessment of MCI, particularly as an early indicator of dementia. Researchers are still investigating the effective methods for system integration [37] and the accuracy of GISs in assessing cognitive decline [31,37]. In summary, while the use of GISs for assessing MCI holds potential clinical significance, it remains in its early stages of development.

Objectives

Therefore, to provide a more comprehensive understanding of the potential of GISs, we aimed to present a scoping review of their applications in assessing MCI, addressing the following research questions (RQs).

  1. What are the different game types (genres and interaction paradigms) of GISs used to assess MCI?
  2. Which cognitive functions are assessed by GISs?
  3. How are GISs evaluated (eg, longitudinal or cross-sectional), and how reliable are these studies in reporting diagnostic performance?

Overview

The review followed the latest PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines (checklist 1) [38], focusing on GISs for assessing MCI. The protocol is provided in Multimedia Appendix 1 [9-12,20-22,38,39] and can also be accessed online [40]. The databases queried include Scopus (which covers biomedical sources such as PubMed, MEDLINE, and Embase) as well as more technical libraries, specifically IEEE Xplore and ACM Digital Library. The screening process for queried articles consisted of 3 phases: title screening, abstract screening, and full-text eligibility. Ambiguous articles at each phase were advanced to the next stage for further review. In the full-text eligibility phase, articles were reassessed to ensure that all 3 inclusion criteria were met and that none of the 4 exclusion criteria applied.

Search Queries Used

The query terms used in the search strategy are presented in Textbox 1. For the population, the search focused on patients with either “Mild Cognitive Impairment” or “MCI,” deliberately excluding dementia to maintain an emphasis on the prestage of dementia. Regarding the instrument used for assessment, the search included the terms “Gamif*,” “Game,” “Video game,” or “Videogame.” In addition, the purpose of the instrument was specified as “assessment,” with variations such as “Evaluation,” “Screen*,” and “Diagnos*” included as synonyms in the queries. These terms were searched across metadata, titles, abstracts, keywords, and full text. Because the syntax of each database varied slightly, the queries were adapted to match the specific requirements of each library, with the exact queries provided in Multimedia Appendix 2.

Textbox 1. Summary of search terms.

Disease

  • Mild Cognitive Impairment
  • MCI

Instrument

  • Gamif*
  • Game
  • Video game
  • Videogame

Purpose

  • Assess*
  • Evaluat*
  • Measure*
  • Screen*
  • Diagnos*
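To illustrate how these term blocks combine, the sketch below assembles a Scopus-style query from the Textbox 1 terms. This is an illustration only; the exact, database-specific queries are those in Multimedia Appendix 2, and the field code and term grouping here are our assumptions.

```python
# Illustrative sketch only: the exact, database-specific queries are listed in
# Multimedia Appendix 2. This shows how the three term blocks from Textbox 1
# could be combined into one boolean query (Scopus-style TITLE-ABS-KEY).
disease = ['"Mild Cognitive Impairment"', '"MCI"']
instrument = ['Gamif*', 'Game', '"Video game"', 'Videogame']
purpose = ['Assess*', 'Evaluat*', 'Measure*', 'Screen*', 'Diagnos*']

def block(terms):
    """Join the synonyms of one concept with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# The three concept blocks (population, instrument, purpose) are intersected with AND.
query = " AND ".join(block(t) for t in (disease, instrument, purpose))
print(f"TITLE-ABS-KEY({query})")
```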

Eligibility Criteria

This review focused on GISs designed to assess MCI and on their potential for such assessment. As a result, only articles meeting the predefined inclusion and exclusion criteria, detailed in Textbox 2, were included.

For systems described in multiple articles, such as separate publications for system design and diagnostic performance, all related articles were included and considered collectively as the entirety of a system. For systems that the authors named, we use the name in the remainder of the paper. For those systems that were not named by the authors, we refer to the system number given in this review.

Textbox 2. Inclusion and exclusion criteria.

Inclusion criteria

  • Full-text, peer-reviewed articles published in English
  • Systems assessing mild cognitive impairment (MCI) as part or all of their function
  • Explicit inclusion of game or gamified elements in the article

Exclusion criteria

  • Studies featuring games or gamified elements in nonelectronic forms, such as pen-and-paper games
  • Game or gamified systems designed only for rehabilitation or training of specific cognitive functions, or targeting dementia instead of MCI
  • Articles that did not claim implementation
  • Book chapters that are inaccessible or retracted articles

Data Extraction and Synthesis

The following subsections describe how data were extracted and analyzed for each RQ, in alignment with the Results section by RQs.

RQ1: Identifying Game Types and Interaction Paradigms

We first extracted the information needed to address RQ1. We described the GISs used in articles based on genre (eg, casual games or simulation games [41]) and interaction paradigm [42].

RQ2: Mapping Cognitive Functions to the MoCA Domains

Second, we extracted the information needed to address RQ2. We drew on the descriptions provided in each paper and mapped the mentioned cognitive functions to those assessed by the MoCA [20], as outlined in Textbox 3.

We chose MoCA over the MMSE because it provides a more sensitive and comprehensive assessment of MCI [43], particularly in domains relevant to many GISs [44]. MoCA includes dedicated tasks for executive functions (eg, trail making, verbal fluency, and abstraction) and more detailed visuospatial assessments (eg, cube copying and clock drawing) than MMSE. These domains are frequently targeted by GISs, making MoCA a more suitable framework for mapping cognitive coverage.

Textbox 3. Mapping of cognitive functions to the Montreal Cognitive Assessment domains.

Test

  • Short-term memory: assesses the ability to recall information after a short delay.
  • Visuospatial abilities: evaluates skills such as drawing and understanding spatial relationships.
  • Executive functions: tests planning, problem-solving, and organizing abilities.
  • Attention, concentration, and working memory: measures the ability to focus, maintain attention, and manipulate information.
  • Language: assesses naming, fluency, and comprehension.
  • Orientation to time and place: evaluates awareness of current date, location, and situation.
RQ3: Evaluation Design, Quality Assessment, and Diagnostic Performance Reporting

Finally, we extracted the information to address RQ3. Here, only systems that included evaluations for assessing MCI were coded. Multiple studies with different research designs may have been published for a single system. If multiple studies of 1 system were included in the review, we chose the articles focusing on reporting diagnostic performance. Usability studies, or studies focused on collecting other data (eg, physiological signals), were not considered to answer RQ3.

We first investigated how GISs are evaluated. Therefore, we coded the different research study designs used to assess accuracy, for example, cross-sectional studies or longitudinal studies. Second, we investigated how reliable these studies are. The articles were evaluated across 3 dimensions, each scored on a scale from 1 to 3, with higher scores indicating higher quality, based on a quality assessment method adapted from the studies by Bezabih et al [45], Johnson et al [46], and Boyle et al [47].

The detailed dimensions and scoring methods are provided in Textbox 4.

Hence, the score for each article can range from 3 to 9. We consider scores from 3 to 5 as “weak evidence,” 6 to 7 as “moderate evidence,” and 8 to 9 as “strong evidence.”

Finally, we investigated the diagnostic performances that were reported. The actual outcomes of the studies were coded, including sensitivity, specificity, and area under the curve (AUC).
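As a minimal illustration of these outcome measures (not code from any reviewed study; the labels and scores below are placeholder data), sensitivity, specificity, and AUC can be computed from a binary MCI-versus-healthy classification as follows:

```python
# Minimal sketch with made-up placeholder data, not taken from any reviewed study.
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = MCI, 0 = healthy control (reference standard)
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]   # binary classifier decisions
y_score = [0.9, 0.8, 0.7, 0.4, 0.2, 0.3, 0.6, 0.1]  # continuous risk scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of MCI cases correctly detected
specificity = tn / (tn + fp)   # proportion of healthy controls correctly ruled out
auc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, AUC={auc:.2f}")
```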

Textbox 4. Dimensions and scoring methods.

How appropriate is the reference standard? (1 to 3 out of 3 points)

  1. Reference standard is based on fast screening instruments (eg, Mini-Mental State Examination or Montreal Cognitive Assessment)
  2. Reference standard is based on a comprehensive neuropsychological test
  3. Reference standard is based on full diagnosis by clinicians

How relevant are the comparative groups? (1 to 3 out of 3 points)

  1. No controls, or it is unspecified who or how many participants have mild cognitive impairment (MCI)
  2. MCI versus healthy controls, MCI versus older adults with subjective cognitive decline, or MCI versus older adults with dementia
  3. MCI versus healthy controls, with additional subdistinctions, for example, additional persons with subjective cognitive decline or persons with Alzheimer disease, or intraperson measurement (in case of long-term measurement in persons with MCI)

How generalizable are the findings of this study to the target population, considering the method of analysis and sample size of persons with MCI? (1 to 3 out of 3 points; for studies other than cross-sectional studies, the score was given case by case)

  1. Cross-sectional study with <10 participants with MCI
  2. Cross-sectional study with 10 to 30 participants with MCI
  3. Cross-sectional study with >30 participants with MCI
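For concreteness, the sketch below encodes our reading of this scoring scheme: 1 to 3 points per dimension, totals ranging from 3 to 9, mapped onto the three evidence bands. The function name and interface are ours, not part of the adapted assessment method.

```python
# A small sketch of the Textbox 4 scoring scheme (our reading of it, not code
# from the review): each study gets 1-3 points per dimension, and the total
# maps onto the three evidence bands defined above.
def quality_score(reference_standard: int, controls: int, sample_size: int) -> tuple[int, str]:
    assert all(1 <= d <= 3 for d in (reference_standard, controls, sample_size))
    total = reference_standard + controls + sample_size  # ranges from 3 to 9
    if total <= 5:
        band = "weak evidence"
    elif total <= 7:
        band = "moderate evidence"
    else:
        band = "strong evidence"
    return total, band

# Example: full clinical diagnosis (3), MCI vs healthy controls (2),
# 10-30 participants with MCI (2) -> (7, "moderate evidence").
print(quality_score(3, 2, 2))
```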

Overview

The search was executed in July 2024. A total of 3856 articles were retrieved based on the queries listed in Textbox 1. The specific numbers from each database are detailed in Figure 1. Before further screening, 203 (5.26%) duplicate articles were removed. After the title and abstract screening, 96.5% (3525/3653) of the articles were excluded, leaving 3.5% (128/3653) of the articles for full-text evaluation. As a result, 47 (36.7%) of the 128 articles were excluded for reasons outlined in Figure 1. Ultimately, this scoping review included 81 (63.3%) of the 128 articles describing 49 GISs, as shown in Table 1 [48-128].

Figure 1. Screening and inclusion process. DL: Digital Library; MCI: mild cognitive impairment.
Table 1. Summary of included studies.
System number | Name of the gamified interactive system | Reference
Sys1 | —c | [48-50]
Sys2 | Fun Cube | [51,52]
Sys3 | VAP-Sa | [53,54]
Sys4 | VREAD | [55,56]
Sys5 | CWG and CSG | [57,58]
Sys6 | SmartAgeing | [59-64]
Sys7 | FitForAll | [65-67]
Sys8 | Kitchen and Cooking | [68]
Sys9 | Find the Pair | [69]
Sys10 | VAP-Mb | [70]
Sys11 | Whack-a-Mole | [71-73]
Sys12 | —c | [74]
Sys13 | Smartkuber | [75]
Sys14 | —c | [76,77]
Sys15 | —c | [78]
Sys16 | —c | [79]
Sys17 | Dr. Solitaire | [80-83]
Sys18 | —c | [84]
Sys19 | Counting sheep | [85,86]
Sys20 | Panoramix | [87-92]
Sys21 | Neuro-World | [93,94]
Sys22 | —c | [95]
Sys23 | Virtual ADL+House | [96]
Sys24 | —c | [97]
Sys25 | RE@CH | [98]
Sys26 | WarCAT | [99]
Sys27 | —c | [100]
Sys28 | Holey Moley | [101]
Sys29 | Hit-the-ball | [102]
Sys30 | Virtual Supermarket | [103-106]
Sys31 | —c | [107]
Sys32 | VSIDCS | [108]
Sys33 | Lucy | [109,110]
Sys34 | Quick, Draw! | [111]
Sys35 | —c | [112]
Sys36 | COGNIPLAT | [113,114]
Sys37 | Pac-man | [115]
Sys38 | CogWorldTravel | [116,117]
Sys39 | —c | [118]
Sys40 | Neurocity | [119]
Sys41 | Minecraft | [120]
Sys42 | Seas the Day | [121]
Sys43 | —c | [122]
Sys44 | —c | [123]
Sys45 | —c | [124]
Sys46 | BrightArm | [125]
Sys47 | RehabCity | [126]
Sys48 | The Ryokansan | [127]
Sys49 | X-Torp | [128]

aVAP-S: virtual action planning supermarket.

bVAP-M: virtual action planning museum.

cNot available.

The results of the articles included are presented in the following sections, organized according to the RQs.

RQ1: What Are the Different Game Types (Genres and Interaction Paradigms) of GISs Used to Assess MCI?

When reviewing GISs according to genre and interaction paradigm, we identified GISs to be casual games, simulation games, full-body movement games, or dedicated interactive games, as illustrated in Multimedia Appendix 3.

Casual Games
Overview

Casual games (30/49, 61%) are a category of video games known for their accessibility and ease of play. These games are designed to be simple to learn, offering quick rewards and a forgiving gameplay experience, ultimately creating a fun and enjoyable entertainment experience for players of all skill levels [129]. As for the interaction paradigm, casual games run on general digital devices, such as smartphones, tablets, and PCs, without a need for specific controllers. A keyboard or a computer mouse, or simply a touch screen, is sufficient. We further categorized casual games into the following subcategories.

Card Games

In total, 12% (6/49) of the systems use classical card games. In Blackjack (Sys1) [50], players strive to assemble a hand, achieving a value as close to 21 as feasible without exceeding this pivotal threshold. This game demands decision-making, blending risk assessment and strategic calculation elements. Free Cell (Sys1) [48] and Solitaire (Sys17) [80-83] epitomize the genre of sorting card games, necessitating the arrangement of a standard 52-card deck based on criteria of suits and ranks. These games not only require sequencing and strategic planning but also underscore the virtues of patience and foresight. The Find the Pair (Sys9, Sys15, and Sys48) [69,78,127] game challenges participants to unveil matching pairs from an array of face-down cards. Participation in this game demands memory and concentration. Finally, War (Sys26) [99] is a card-based contest where players select cards to show a higher rank against a computer-simulated adversary.

Digital Versions of Analog Games

Similar to card games, digitized versions of traditional board games remain popular and are used in GISs. Scrabble (Sys1 and Sys5) [49,57,58] is a popular word game where players use letter tiles to create words on a game board. The goal is to score the most points by forming words on a 15×15 grid board. Sudoku (Sys5) [57,58] is a logic-based number puzzle game. The objective is to fill a 9×9 grid with numbers so that each column, row, and each of the nine 3×3 subgrids (also known as “regions” or “boxes”) contains all the digits from 1 to 9 without repeating any numbers. Quick, Draw! (Sys34) [111] is a fun and fast-paced drawing game where players try to quickly draw an object within 20 seconds for artificial intelligence (AI) to guess.

Commercial Video Games

Besides traditional card and analog games, adaptations of classic video games were also used to assess cognitive functions. Pac-Man (Sys37) [115] is a game where the player controls Pac-Man, a yellow, circular character, navigating a maze while eating pellets and avoiding ghosts. In Tetris (Sys39) [118], the player controls falling tetrominoes, geometric shapes made up of 4 squares each, and aims to complete horizontal lines without gaps. Fruit Ninja (Sys39) [118] is a mobile game where the player slices various fruits that appear on the screen while avoiding bombs. Candy Crush Saga (Sys39) [118] is a popular match-3 puzzle game where players swap adjacent candies to form lines of ≥3 matching candies. Minecraft (Sys41) [120] is a sandbox video game that allows players to explore, build, and create in a blocky, procedurally generated 3D world.

Minigames

Several systems (21/49, 43%) were designed with simple gameplay to provide a fun game experience to older adults. Whack-a-Mole (Sys11, Sys28, and Sys29) [72,101,102] is originally an action-reaction game in which players hit moles that randomly pop up from holes on a game surface. The narrative of the game can also involve taking photos of animals (Sys38) [116,117] or beating devils (Sys48) [127]. Spot the Difference (Sys15) [78] is a puzzle game in which 2 images that appear similar are presented side by side but with subtle differences between them. The player’s objective is to carefully examine both images and identify discrepancies, such as missing objects, changed colors, or altered shapes, within a set time limit. Bounce Balls (Sys46) [125] is a game to bounce balls to break or eliminate objects on top of the screen. Counting animals (Sys19 and Sys21) [85,86,93,94] is a game where sheep move from one side of the screen to another within a certain time. Occasionally, wolves accompany the sheep. Players must answer questions related to the number of sheep or wolves. Lucy (Sys33) [109,110] gamified the cognitive tests with dots on the screen. Participants use a controller to select the right dots based on task requirements. Similarly, X-Torp (Sys49) [128] gamified the trail making test (TMT) in a sailing environment. Other games without a specific name were also identified in the studies, such as filling in the missing part of words (Sys38) [116,117], pairing words with similar or opposite meanings (Sys14), memorizing the location of mines in a grid (Sys14) [76,77], jigsaw puzzles (Sys14 and Sys45) [76,77,124], quizzes (Sys16, Sys31, Sys32, and Sys36) [79,107,108,113,114], and moving the avatar to receive falling apples (Sys35) [112].

Simulation Games
Overview

Daily living activities are typically not significantly impaired in older adults with MCI. Nonetheless, GISs have the potential to identify the deterioration of cognitive abilities in a controlled setting. Simulation games (17/49, 35%) replicate virtual 3D environments for older adults, usually their regular living environment. Here, as for the interaction paradigm, extended reality technology is often adopted for a more immersive experience for older adults (eg, virtual reality headsets), but some systems still run on more traditional platforms that can be operated with a touch screen or a keyboard and a computer mouse. In these games, older adults are required to complete tasks, and the GISs can store their behavior and performance to track their progress. We break simulation games further down based on the virtual environment provided.

Shopping

Shopping is a crucial daily activity for older adults as it necessitates the use of multiple cognitive functions to maintain independence. Our research identified 7 different GISs that contain shopping experiences [53,54,74,96-98,103-106,126]. Shopping simulation has 2 main tasks: finding items and making payments. Some systems show the shopping list on the screen (Sys3, Sys24, Sys25, Sys30, and Sys47) [53,54,97,98,103,126], and some systems require players to memorize the shopping list beforehand (Sys12 and Sys23) [74,96].

Wayfinding

Wayfinding is another important daily activity for older adults. We found 6 GISs that simulate wayfinding. These systems simulate various environments for older adults, including a park (Sys4) [55,56], a museum (Sys10) [70], and driving (Sys40) [119] or walking (Sys47) [126] on city roads. Furthermore, there are also 2 systems that offer gamified tests to find the proper bus (Sys22) [95] or metro line (Sys38) [116,117].

Other “Activities of Daily Life”

Four systems simulate cooking (Sys8, Sys12, Sys23, and Sys44) [68,74,96,123]. Older adults find food and ingredients in the kitchen and then plan the steps of cooking. Two systems simulated banking at an ATM (Sys22 and Sys25) [95,98]: older adults must insert a card, choose the amount and type of notes, enter the password, print the receipt, and then retrieve the card. Two systems require older adults to remember a phone number and then dial it (Sys6 and Sys25) [59-64,98]. Six systems require older adults to identify objects or people. In SmartAging (Sys6) [59-64], older adults are instructed to locate a list of items in a kitchen and then recall them later during another task. Virtual Action Planning Museum (Sys10) [70] presents objects visually and audibly in a virtual tour and prompts older adults to recall them. Sys23 [96] requires players to remember the medications they need to take and then pick the right pills and amounts from the bottles. RE@CH (Sys25) [98] has a task that requires players to recognize famous people, numbers, and advertisements. Researchers at Microsoft used the HoloLens in Sys27 [100] to create an augmented reality environment for older adults. The older adults are then asked to identify “unnaturally placed objects” and correctly position them. CogWorldTravel (Sys38) [116,117] tests whether older adults can differentiate between faces that have and have not previously appeared in the game. Panoramix (Sys20) [87-92] evaluates memorizing objects during a walk in the community, using the same mechanism as the California Verbal Learning Test.

Full-Body Movement Games

Four systems use novel full-body movement interaction paradigms to assess cognitive functions. FitForAll (Sys7) [65-67] assesses the cognitive functions of older adults through movement and performance in an exergame originally designed for rehabilitation. Sys18 [84] and Sys45 [124] developed games that combine body movement and quizzes using Kinect. Seas the Day (Sys42) [121] allows older adults to engage in enjoyable activities such as fishing, practicing tai chi, and rowing within a virtual reality environment.

Dedicated Interactive Games

Three systems created dedicated hardware to provide more interactive and tangible game experiences. Fun Cube (Sys2) [51,52] comprises dedicated tangible devices: 6 cubes with mini screens, which are used to gamify the MoCA. SmartKuber (Sys13) [75] follows a similar idea of using tangible cubes in the gameplay. Unlike Fun Cube, the researchers behind SmartKuber integrated augmented reality technology with physical cubes. Older adults use a tablet’s camera to scan the cubes and play the game. Researchers involved in the development of Sys43 [122] explored the use of haptic and olfactory sensors to help older adults identify objects.

In conclusion to RQ1, a diverse array of game genres and interaction types have been used for the assessment of MCI. The breakdown of game types includes casual games (30/49, 61%), simulations (17/49, 35%), full-body movement games (4/49, 8%), and those with dedicated interaction (3/49, 6%). A tabular summary of GISs mapped to the game types is provided in Multimedia Appendix 3.

RQ2: Which Cognitive Functions Are Assessed by GISs?

We examined the cognitive functions assessed by GISs based on the 6 cognitive functions evaluated in MoCA (short-term memory; visuospatial abilities; executive functions; attention, concentration, and working memory; language; and orientation to time and place).

Short-Term Memory

A total of 28 systems assessed older adults’ short-term memory in a manner similar to the MoCA. These assessments typically involve presenting information and then asking the older adults to recall it. Often, additional context is provided, such as memorizing a shopping list (Sys12 and Sys23) [74,96], recalling objects encountered during a walk (Sys20 and Sys47) [91,126], arranging dishes on a table (Sys12) [74], or locating a list of items in a kitchen (Sys44) [123]. Other activities may include memorizing a 2D “mines field” (Sys14) [76,77] or playing Find the Pair card games (Sys9) [69]. A time limit is often involved in assessing memory, such as when participants are asked to remember the number of sheep that come and go on a farm (Sys19 and Sys21) [85,86,93,94].

Visuospatial Ability

In total, 21 systems evaluated visuospatial ability. The MoCA assesses visuospatial ability through tasks such as drawing a clock and a cube. GISs can make drawing enjoyable, for example, by letting AI guess what the player is drawing (Sys34) [111] or by having players sketch museum artifacts (Sys32) [108]. The gameplay of Tetris (Sys39) [118] and of a Tetris-like shape-filling task (Sys38) [116,117] also helps evaluate visuospatial ability. GISs also enable a realistic assessment of visuospatial ability through their graphics. Older adults can navigate a virtual city or park and complete tasks that are often combined with memory, such as finding their way back (Sys40) [119].

Executive Functions

A total of 23 systems evaluated executive functions. The TMT part B is used in the MoCA to evaluate executive functions. Some systems have implemented minigames or tests similar to the TMT (Sys2) [51,52] or directly gamified the TMT (Sys49) [128]. Card games such as FreeCell (Sys1) [48] and Solitaire (Sys17) [82] also require executive functions in planning the strategy for sorting suits. More realistic tasks, such as cooking with a recipe (Sys8, Sys12, Sys23, and Sys44) [68,74,96,123] and sorting objects by type and then placing them in the proper places in simulation games (Sys27) [100], can also evaluate executive functions.

Attention, Concentration, and Working Memory

In total, 25 systems assessed attention, concentration, and working memory. In Whack-a-Mole and its variants (Sys11, Sys28, Sys29, Sys38, and Sys48) [72,101,102,116,117,127], players must pay attention and concentrate on hitting the moles while avoiding distractors such as bombs. Games such as Pac-Man (Sys37) [115] and Fruit Ninja (Sys39) [118] naturally challenge attention to avoid faults and obtain higher scores. Other games, such as Find the Pair (Sys9) [69] and Sudoku (Sys5) [57,58], require working memory to temporarily memorize useful information, such as patterns on cards and missing numbers in rows or columns.

Language

A total of 10 systems evaluated language skills. Quiz games (Sys15, Sys16, and Sys36) [78,79,113,114] often feature tasks such as naming objects and spelling words that start with specific letters. In simulation games, naming objects can also be easily integrated into the gameplay. Scrabble (Sys1 and Sys5) [49,57,58] also provides an enjoyable approach to test language comprehension. Contexts such as text passwords of luggage locks (Sys38) [116,117] can also assess language understanding. However, the articles included in this review indicate that none of the evaluated systems require voice input, meaning that language fluency is not assessed by any of them.

Orientation to Time and Place

In total, 7 systems evaluated orientation. We could not find game mechanics that test orientation to time naturally; instead, it is tested in a manner similar to word comprehension, for example, by using today’s date as a password hint (Sys22) [95] or by asking for the date directly (Sys2) [51,52]. Orientation to place, in contrast, was typically assessed through gamified tasks, often combined with visuospatial abilities, specifically the ability to navigate a path in a virtual environment (Sys4, Sys10, Sys12, Sys31, Sys40, and Sys47) [55,56,70,74,107,119,126]. Older adults must first identify the position of avatars using their orientation skills and then integrate these skills with their visuospatial abilities to find their way.

In summary, memory (28/49, 57%), visuospatial abilities (21/49, 43%), executive functions (23/49, 47%), and attention (25/49, 51%) are assessed more frequently than language (10/49, 20%) and orientation (7/49, 14%). A few systems (6/49, 12%) integrate the assessment of all MoCA functions into their gameplay. A tabular summary of GISs mapped to cognitive functions is provided in Multimedia Appendix 4.

RQ3: How Are GISs Evaluated (eg, Longitudinal or Cross-Sectional), and How Reliable Are These Studies in Reporting Diagnostic Performance?

Overview

The third RQ focused on evaluating the effectiveness of GISs. We first examined the research designs used in clinical validation studies to address this RQ. Next, we assessed the quality of these research designs and evaluated the performance in assessing MCI. Therefore, we excluded 47 articles covering 26 systems (Sys1, Sys5, Sys8, Sys11, Sys12, Sys14, Sys15, Sys18, Sys19, Sys23, Sys24, Sys27, Sys31, Sys32, Sys34, Sys35, Sys37-47, and Sys49) that lacked clinical validation studies (longitudinal or cross-sectional studies). In addition, we omitted the study that relied on synthetic data in which AI agents simulated the performance of healthy older adults and patients with MCI (Sys26) [99]. The studies without clear reference standards were also excluded (Sys4 and Sys22) [55,95]. This left us with 31 studies on 20 systems that had validation studies specifically aimed at assessing MCI. Finally, we summarized and reported the diagnostic performance of systems with clinical validation studies.

How Are GISs Evaluated?

Two studies (Sys13 and Sys14) [75,77] reported intraperson comparisons across sessions. Sys14 [77] involved 9000 healthy participants from different age groups playing 100 games while data were collected. The results showed a linear trend (P<.001) between scores and the number of sessions played, with all age groups improving across all minigames aimed at assessing cognitive functions related to MCI, although older adults progressed more slowly than younger participants. Sys13 [75] compared the mean total score of the last 20% of sessions with the scores from the first 20% of sessions in a 6-week study with 13 participants, who were asked to play the game twice a week. The results revealed no statistically significant difference.

The remaining 29 studies on 18 systems presented a cross-sectional design. Validation methods included (1) statistical analysis of game metrics to differentiate individuals with MCI from healthy controls and (2) machine learning classification methods. These studies are discussed in more detail in the following sections.
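As an illustration of validation method (2), the sketch below trains a classifier on hypothetical game metrics to separate patients with MCI from healthy controls and reports a cross-validated AUC. The feature names, group sizes, and effect sizes are invented for illustration and do not reproduce any reviewed system.

```python
# Generic sketch of a cross-sectional machine learning validation. All data
# below are simulated; the feature names are assumptions, not any system's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_mci, n_hc = 25, 40
# Hypothetical game metrics: completion time (s), error count, game score.
X = np.vstack([
    rng.normal([95, 6, 40], [20, 2, 12], size=(n_mci, 3)),   # MCI: slower, more errors
    rng.normal([70, 3, 55], [15, 1.5, 10], size=(n_hc, 3)),  # healthy controls
])
y = np.array([1] * n_mci + [0] * n_hc)  # 1 = MCI, 0 = healthy control

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} (SD {auc.std():.2f})")
```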

How Reliable Are These Studies in Assessing Accuracy?

The included studies were evaluated on 3 dimensions (reference standard, relevance of controls, and sample size), each of which is scored from 1 to 3, with a higher score indicating higher quality, based on the quality assessment method adapted from other studies [45-47]. Note that the evaluation targets the quality of the classification task (distinguishing healthy older adults from those with MCI), not the overall study quality with respect to other research goals.

Reference Standard

Studies in the review had varying criteria for enrolling participants as patients with MCI or healthy controls. The gold standard for diagnosing MCI is a comprehensive clinical examination conducted by health care professionals. A total of 14 studies explicitly stated that participants with MCI underwent a comprehensive clinical diagnosis (Sys3, Sys6, Sys7, Sys9, Sys10, Sys17, Sys20, Sys30, Sys33, and Sys48) [54,63,64,67,69,70,82,83,91,104-106,109,127]. Five studies used test batteries created by the authors as reference standards (Sys4 and Sys20) [56,88-90,92]. In total, 10 studies used the MMSE or MoCA as their reference standard (Sys2, Sys6, Sys16, Sys21, Sys25, Sys28, Sys29, and Sys36) [52,62,79,93,94,98,101,102,113,114].

Relevance of Controls

In most studies, healthy controls were of comparable age to the other groups (of patients with MCI). In 1 study [105] involving Sys30, the control group consisted of older adults with subjective cognitive decline who visited clinics. These participants self-reported cognitive decline before cognitive tests could assess the deficits. However, subjective cognitive decline may be a precursor of nonnormative cognitive decline and eventual progression to MCI and dementia [130]. In addition to including older adults with MCI, a study [76] involving Sys14 used healthy young people as controls for control experiments. Given the natural decline in cognitive function with age, young adults may be easier to distinguish from patients with MCI than age-matched older adults would be.

The participant groups also varied across studies in their MCI subtypes. Some studies (Sys6, Sys10, Sys30, and Sys35) [63,64,70,104,112] recruited participants with amnestic MCI, while others did not differentiate between the MCI subtypes. Some studies (Sys6, Sys7, Sys8, Sys16, Sys20, Sys25, Sys26, Sys29, Sys33, Sys46, Sys48, and Sys49) [68,76,88,89,91,98,109,127] included patients with moderate to severe AD for comparison with both the group with MCI and healthy controls.

Sample Size

The average number of patients with MCI and healthy controls in the included studies was 27.44 (SD 18.63) and 42.75 (SD 51.79), respectively. Among all cross-sectional studies included in the review, 1 study on Sys6 [62] had the largest number of participants, 1086 in total. However, the reference standard was based on the MoCA administered by software. Sys48 [127] had the second largest sample size, with 240 in total (75 healthy controls, 41 patients with MCI, and 124 patients with AD). Sys2 [52] had the largest MCI group sample size, with 65 participants. The study on Sys35 [112] had the largest number of healthy controls, totaling 280.

Quality of Studies

On the basis of the aforementioned analysis, the first author (YC) calculated study quality scores as defined in the Data Extraction subsection under the Methods section. There were 9 low-quality studies on 7 systems (Sys6, Sys13, Sys14, Sys20, Sys21, Sys28, and Sys36), 8 medium-quality studies on 8 systems (Sys2, Sys3, Sys4, Sys9, Sys16, Sys20, Sys25, and Sys29), and 14 high-quality studies on 8 systems (Sys6, Sys7, Sys10, Sys17, Sys20, Sys30, Sys33, and Sys48). Detailed scores are shown in Table 2.

Table 2. Quality of studies with respect to how reliable they are in assessing accuracy (3-5 indicates low quality, 6-7 indicates moderate quality, and 8-9 indicates high quality).
System number | Name of the gamified interactive system | Reference | Reference standard | Relevance of controls | Sample size | Quality of studies
Sys2 | Fun Cube | [52] | 1 | 2 | 3 | 6
Sys3 | VAP-Sa | [54] | 3 | 2 | 2 | 7
Sys4 | VREAD | [56] | 2 | 2 | 3 | 7
Sys6 | SmartAging | [62] | 1 | 1 | 3 | 5
Sys6 | SmartAging | [63] | 3 | 3 | 3 | 9
Sys6 | SmartAging | [64] | 3 | 2 | 3 | 8
Sys7 | FitForAll | [67] | 3 | 3 | 3 | 9
Sys9 | Find the pair | [69] | 3 | 1 | 3 | 7
Sys10 | VAP-Mb | [70] | 3 | 2 | 3 | 8
Sys13 | Smartkuber | [75] | 1 | 1 | 1 | 3
Sys14 | —c | [77] | 1 | 1 | 3 | 5
Sys16 | —c | [79] | 1 | 2 | 3 | 6
Sys17 | Dr. Solitaire | [82] | 3 | 2 | 3 | 8
Sys17 | Dr. Solitaire | [83] | 3 | 2 | 3 | 8
Sys20 | Panoramix | [88] | 2 | 2 | 1 | 5
Sys20 | Panoramix | [89] | 2 | 3 | 2 | 7
Sys20 | Panoramix | [90] | 2 | 3 | 3 | 8
Sys20 | Panoramix | [91] | 2 | 3 | 3 | 8
Sys20 | Panoramix | [92] | 3 | 3 | 2 | 8
Sys21 | Neuro-World | [93] | 1 | 1 | 2 | 4
Sys21 | Neuro-World | [94] | 1 | 1 | 2 | 4
Sys25 | RE@CH | [98] | 1 | 2 | 3 | 6
Sys28 | Holey Moley | [101] | 1 | 1 | 3 | 5
Sys29 | Hit-the-ball | [102] | 1 | 3 | 3 | 7
Sys30 | Virtual Supermarket | [104] | 3 | 2 | 3 | 8
Sys30 | Virtual Supermarket | [105] | 3 | 2 | 3 | 8
Sys30 | Virtual Supermarket | [106] | 3 | 3 | 3 | 9
Sys33 | Lucy | [109] | 3 | 2 | 3 | 8
Sys36 | COGNIPLAT | [113] | 1 | 2 | 1 | 4
Sys36 | COGNIPLAT | [114] | 1 | 2 | 1 | 4
Sys48 | The Ryokansan | [127] | 3 | 3 | 3 | 9

aVAP-S: virtual action planning supermarket.

bVAP-M: virtual action planning museum.

cNot available.

What Diagnostic Performances Are Reported?

In total, 19 studies comprising 14 different systems reported accuracy. The diagnostic performances are summarized in Table 3. The performance of GISs in assessing MCI shows promising results, with sensitivity ranging from 70.7% to 100%. This is comparable to the MoCA, which has a sensitivity of 90%, and significantly better than the MMSE, which has a much lower sensitivity of 18% [20]. The specificity ranges from 56.5% to 100%. The best-performing GISs were also comparable with the MMSE (94.7%) [131] and MoCA (87%) [132]. The AUC of the included GISs ranged from 0.774 to 1, which is also comparable with the MMSE (0.78) [43] and MoCA (0.833) [43]. The following paragraphs discuss studies that demonstrated performance exceeding that of the MoCA and MMSE.

Among all 14 systems, the GIS with the best performance (Sys3) [54] demonstrated a sensitivity of 100%, a specificity of 100%, and an AUC of 1 in the task of classifying 2 groups (healthy older adults versus older adults with MCI or AD). However, the study was a single-center, nonblinded, cross-sectional study with 6 healthy older adults and 6 older adults with MCI or early AD, 12 participants in total. Participants were recruited from 1 hospital.

The best-performing model for Panoramix (Sys20) [89] was the random forest. It achieved a sensitivity of 100%, a specificity of 70%, and an AUC of 0.98 for the task of classifying MCI among 3 groups (healthy older adults, older adults with MCI, and older adults with AD). The study included 64 participants: 28 healthy older adults, 16 older adults with MCI, and 20 older adults with AD. The newer version, Panoramix (version 2.0; Sys20) [91], also demonstrated a sensitivity of 100% and a specificity of 100% for the same task in a sample size of 30 participants, containing 10 healthy older adults, 10 older adults with MCI, and 10 older adults with AD. Notably, both Panoramix studies were single-center, nonblinded, cross-sectional studies. Participants were recruited from 1 association.

The study on Smart Aging (Sys6) [63] achieved an AUC of 0.986 (95% CI 0.962-1.000; P<.001) in distinguishing healthy older adults from those with cognitive impairment. The study was a single-center, nonblinded, cross-sectional investigation involving 91 participants, including 23 older adults with amnestic MCI, 20 older adults with Parkinson disease–related MCI, and 25 older adults with AD.

The study on Holey Moley (Sys28) [101] had a sensitivity of 100%, a specificity of 97.2%, and an AUC of 0.953 in classifying cognitively healthy older adults and those with cognitive impairment. The study was a nonblinded, cross-sectional investigation involving 47 participants recruited from a single neurology clinic. Cognitive impairment was identified using the MMSE as the reference standard. However, the study did not report the specific cutoff score used or distinguish between MCI and AD. It only stated that 12 out of 47 participants were classified as cognitively impaired.

In total, 2 studies were conducted on COGNIPLAT (Sys36) [113,114]. Both studies were single-center, nonblinded, cross-sectional studies. The reference standard for cognitive impairment was MoCA, corrected by education level. In 1 study [113], researchers collected 119 game sessions from 10 participants. The best-performing model was a support vector machine with 9 variables from the game as input. The model had a sensitivity of 91.8% and a specificity of 93.2%. In the other study [114], researchers collected 119 game sessions from 10 participants. The best-performing model was the multilayer perceptron, with a sensitivity of 96.6% and a specificity of 90%. The model training variables included demographic data (eg, age and education) and game data (eg, game time and points).

Table 3. Mild cognitive impairment diagnosis performance.
System number | Name of the gamified interactive system | Reference | Game metrics | Sensitivity (%) | Specificity (%) | AUCa
Sys3 | VAP-Sb | [54] | Features extracted from trajectory, region, and task performance | 100 | 100 | 1
Sys4 | VREAD | [55] | Correct path, incorrect path, correct sequences, incorrect sequences, overall score, and time | 75 | 96 | —c
Sys6 | SmartAging | [63] | Smart Aging Total Score | —c | —c | 0.99
Sys6 | SmartAging | [64] | Smart Aging Serious Game total score | 84.4 | 75.5 | 0.88
Sys7 | FitForAll | [67] | Evaluator-ranked age, HRMGd mean total, HRMG intercept total, HRMG mean level 1, HRMG mean level 3, HRMG mean level 4, and heart rate slope level 3 | —c | —c | 0.77
Sys9 | Find the pair | [69] | Game trials | 83 | 62 | —c
Sys9 | Find the pair | [69] | Game time | 82 | 67 | —c
Sys16 | —c | [79] | Best cutoff game points based on MoCAe | 88 | 77.8 | —c
Sys16 | —c | [79] | Best cutoff game points based on MoCA | 87.5 | 33.3 | —c
Sys17 | Dr. Solitaire | [83] | Nu-support vector classifier using digital biomarkers of cognitive performance in Klondike Solitaire | 77.8 | 88.9 | 0.9
Sys20 | Panoramix | [90] | Random forest using Panoramix battery | 100 | 100 | 0.98
Sys20 | Panoramix | [91] | Support vector machine using Panoramix (version 2.0) battery | 100 | 100 | —c
Sys25 | RE@CH | [98] | Total performance score | 78.2 | 75.7 | 0.82
Sys28 | Holey Moley | [101] | Holey Moley game classification based on MMSEf | 100 | 97.2 | 0.95
Sys30 | Virtual Supermarket | [104] | Score in the Virtual Supermarket Program | 85.9 | 79 | 0.87
Sys30 | Virtual Supermarket | [105] | All Virtual Supermarket variables | 76.3 | 91.4 | —c
Sys30 | Virtual Supermarket | [106] | All Virtual Supermarket variables | 74 | 85 | —c
Sys30 | Virtual Supermarket | [106] | All bought unlisted, correct money, and duration | 79 | 86 | —c
Sys33 | Lucy | [109] | Focused attention response profile and EEGg characteristics | 80 | 82.6 | —c
Sys36 | COGNIPLAT | [113] | Support vector machine using 9 variables | 91.8 | 93.2 | —c
Sys36 | COGNIPLAT | [114] | Manually selected feature set with 9 features: age, family medical history, exercising, education, average game round time in game session, orientation game importance, naming game importance, memory game importance, and recall (Anakilisi) game importance | 96.6 | 90 | 0.99
Sys48 | The Ryokansan | [127] | Flipping cards game score | 76.9 | 70.7 | —c
Sys48 | The Ryokansan | [127] | Game score from the “finding mistakes” task | 70.7 | 56.5 | —c

aAUC: area under the curve.

bVAP-S: virtual action planning supermarket.

cNot available.

dHRMG: high-resolution monitoring games.

eMoCA: Montreal Cognitive Assessment.

fMMSE: Mini-Mental State Examination.

gEEG: electroencephalogram.


Principal Findings

We identified 49 GISs for assessing MCI in this scoping review, mostly casual games and simulations, that support engagement through simple game mechanics or familiarity with daily tasks, such as shopping. We further identified that GISs mainly targeted cognitive functions, such as memory, attention, and executive function, but often lacked depth in language and orientation assessment. Finally, we found that only 14 systems reported diagnostic performance, with other studies remaining in early design stages or offering limited correlation studies. Moreover, although several GISs reported strong diagnostic performances, many of these studies (9/31, 29%) were limited by small samples, methodological flaws, and overfitting risks. In the following sections, we revisit our 3 RQs in more depth.

RQ1: What Are the Different Game Types (Genres and Interaction Paradigms) of GISs Used to Assess MCI?

We found that most GISs (30/49, 61%) consist of casual games with clear rules and straightforward gameplay, making them accessible and easy to play without requiring extensive learning or specialized technology. This simplicity may encourage older adults to engage with these systems, addressing the challenge that older adults generally have a lower game participation rate than other age groups [133]. We also noted the popularity of simulations (17/49, 35%). These offer a unique opportunity to represent and assess daily activities in a more controlled environment, providing valuable supportive data with higher ecological validity for evaluating MCI. In particular, shopping experiences are commonly integrated into many simulation game systems (7/49, 14%). The shopping experience is indeed a significant daily activity that requires short-term memory to memorize the shopping list, executive functions to recognize items, and spatial orientation to find them. Only a few systems (4/49, 8%) use full-body movement interaction paradigms to assess MCI. These include exergames and Kinect-based quiz games. Finally, only a few systems (3/49, 6%) were developed with dedicated hardware for the assessment of MCI, representing an extension of traditional GIS design.

In addition, we identified shared characteristics among the games examined in this study. All the GISs included in this research were noncompetitive and offered a single-player mode. Each gaming session was relatively brief, lasting around 15 minutes, and designed to be easy to understand. These observations align with the preference of older adults for puzzles and intellectually stimulating games [134]. Nevertheless, we would like to emphasize the ongoing debate regarding whether mere simulation experiences can genuinely be considered games [135]. When daily activities are simulated without a well-integrated game design, it may lead to a lackluster gaming experience [136,137]. Furthermore, the gaming experience of casual games may not provide meaningful play for all older audiences. As articulated in the gerontoludic manifesto [138], game and mental health researchers may want to put more effort into understanding what differentiates older players rather than considering them as united in their age-related impairments.

Although we categorized gamified systems into casual, simulation, full-body movement, or dedicated interactive games for descriptive purposes, it is important to note that these genre distinctions are blurry in practice. Many games combine features from multiple genres, and genre labels do not necessarily reflect the specific cognitive demands or mechanisms embedded within gameplay. In addition, most studies did not describe game mechanics in sufficient detail to allow for systematic mapping to targeted cognitive functions. This limits the extent to which we can draw mechanistic conclusions about genre-specific diagnostic performance.

RQ2: Which Cognitive Functions Are Assessed by GISs?

The scoping review revealed that a diverse range of cognitive functions were assessed by GISs, including short-term memory (28/49, 57% of GISs), visuospatial skills (21/49, 43%), executive functions (23/49, 47%), and attention and working memory (25/49, 51%). However, a significant gap exists in evaluating language abilities (10/49, 20%) and orientation to time and space (7/49, 14%).

Moreover, the GISs reviewed in this study vary in how comprehensively they address the 6 cognitive domains evaluated by the MoCA (ie, short-term memory, visuospatial abilities, executive functions, attention, language, and orientation). For example, some simulation games incorporate daily tasks such as shopping, cooking, or wayfinding that naturally engage multiple domains, particularly short-term memory, executive functions, and visuospatial abilities. In contrast, casual games more often target specific domains, such as attention or working memory, through task-focused mechanics such as matching, sequencing, or quick-response challenges. Full-body movement games and dedicated interactive systems were fewer in number, and their alignment with MoCA domains varied by design, with some addressing attention or memory through physical interaction.

It is important to note that many GISs incorporate features from multiple game types, and genre labels do not consistently reflect the underlying cognitive functions. In addition, we could not draw definitive conclusions about the relative diagnostic effectiveness of different game types due to heterogeneity in the reporting of GIS designs. A better understanding of how specific game elements engage specific cognitive functions can guide future design and evaluation efforts. Hence, a more granular analysis of game mechanics in relation to targeted cognitive functions, beyond genre, would strengthen future investigations.

In particular, GISs could extend the duration of their assessment. The engaging nature of GISs can potentially encourage older adults to use the system for extended periods, creating opportunities to collect data over a longer time, in a manner that might not be feasible through traditional assessment tools. Thus, GISs could play a valuable role in evaluating long-term cognitive functions by tracking subtle changes over time, providing extra information that current screening instruments, such as MMSE and MoCA, are not designed to capture. Despite this potential, no existing system was identified in this review that specifically targets long-term MCI evaluation. Hence, future studies should prioritize the development and validation of GISs that leverage game mechanics to promote repeated use and sustained engagement, while also exploring how these systems can complement traditional diagnostic methods.

RQ3: How Are GISs Evaluated (eg, Longitudinal or Cross-Sectional), and How Reliable Are These Studies in Reporting Diagnostic Performance?

Out of a total of 49 GISs, 29 (59%) remain in the system design stage or present only a preliminary user study. Hence, these GISs lack evidence of accuracy. Moreover, some systems (6/49, 12%) were evaluated only by correlating system metrics with traditional screening instruments, which limits the scientific rigor of those evaluations. In total, only 29% (14/49) of the GISs reported clinical validation to classify patients with MCI or intraperson comparisons.

The quality scores presented in Table 2 highlight notable variations in the methodological rigor of studies evaluating GISs for MCI assessment. While some studies used well-defined reference standards, relatively larger sample sizes, and appropriate validation strategies, others lacked clarity in participant selection or relied on less robust evaluation methods. This variability should be considered when interpreting reported diagnostic performance; higher-quality studies are more likely to yield reliable and generalizable findings, yet the results may be more nuanced. For researchers and clinicians, these scores offer a practical reference point for assessing the strength of evidence and identifying areas where future studies can improve in terms of design transparency and reporting consistency.

While clinical validation studies indicate that GISs may be useful tools for assessing MCI, the risk of model overfitting and methodological bias remains a critical concern. For example, the virtual action planning supermarket model achieved an extraordinary F1-score of 1.0 across 4 machine learning algorithms: decision trees, random forests, adaptive boosting, and extreme gradient boosting. Such a result must be interpreted with extreme caution. The study included only 12 participants and used data augmentation techniques, yet identified 45 out of 46 features as significant—without any dimensionality reduction. Such a high feature-to-sample ratio introduces a substantial risk of overfitting, undermining the robustness of the findings.
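To make this risk concrete, the following minimal Python sketch (illustrative only; it does not reproduce the code or data of any reviewed study) shows how a protocol that selects features on the full dataset before cross-validation can report well-above-chance accuracy on pure noise in a 12-participant, 45-feature setting, whereas refitting the selection inside each fold returns performance near chance:

```python
# Illustrative sketch only (not from any reviewed study): with 45 noise
# features and 12 participants, selecting features on the full dataset
# before cross-validation inflates apparent accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 45))   # pure-noise "features" (12 participants)
y = np.repeat([0, 1], 6)        # arbitrary labels: 6 "MCI", 6 "controls"

# Leaky protocol: feature selection sees all labels before cross-validation.
X_sel = SelectKBest(f_classif, k=5).fit_transform(X, y)
leaky = cross_val_score(RandomForestClassifier(random_state=0),
                        X_sel, y, cv=LeaveOneOut()).mean()

# Honest protocol: feature selection is refit inside every fold.
honest = cross_val_score(
    make_pipeline(SelectKBest(f_classif, k=5),
                  RandomForestClassifier(random_state=0)),
    X, y, cv=LeaveOneOut()).mean()

print(f"leaky LOO accuracy:  {leaky:.2f}")   # typically far above chance
print(f"honest LOO accuracy: {honest:.2f}")  # typically near 0.5 (chance)
```

This is the classic feature-selection leakage described by Ambroise and McLachlan; it illustrates why near-perfect scores from tiny samples with many retained features should not be read as diagnostic capability.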

More broadly, studies reporting near-perfect performance (eg, 1.0 accuracy or F1-score) likely reflect limitations in sample size, study design, or evaluation procedures rather than actual diagnostic capability. To prevent misinterpretation, especially by third-party stakeholders, it is essential to frame such results as preliminary and methodologically constrained. These findings should not be taken as evidence of clinical readiness.
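The uncertainty around such estimates is straightforward to quantify. As a minimal sketch (assuming, hypothetically, that all 12 of 12 participants with MCI were correctly classified), an exact Clopper-Pearson interval shows that even a "perfect" observed sensitivity is consistent with a true value below 75% at this sample size:

```python
# A minimal sketch: exact (Clopper-Pearson) 95% CI for a hypothetical
# "perfect" sensitivity of 12/12 in a 12-participant study.
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
    """Exact binomial confidence interval for a proportion."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(12, 12)
print(f"12/12 correct -> 95% CI for sensitivity: [{lo:.3f}, {hi:.3f}]")
# Prints roughly [0.735, 1.000]: a wide interval despite the "perfect" point estimate.
```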

A robust study design is crucial for the clinical validation of GISs. First, the use of age-matched control groups is essential to minimize confounding variables. A key limitation in interpreting the performance of GIS-based cognitive assessments is the potential confounding effect of previous gaming experience or digital proficiency. Younger participants may outperform older adults not solely because of better cognitive functioning but also because of greater familiarity with digital devices and the interaction patterns common in games. As shown in the results for RQ3, only a few of the reviewed studies (4/31, 13%) assessed or controlled for previous gaming experience, making it difficult to disentangle cognitive performance from gameplay proficiency. This concern is underscored by recent findings from the study by Murata et al [139], which demonstrate that peripheral metrics such as swipe speed—independent of game content—can predict cognitive impairment. Future research should consider including baseline assessments of digital literacy or gaming familiarity and explore interface designs that minimize performance biases related to device handling rather than cognitive ability.

In addition, previous research has highlighted sex-related differences in the risk, progression, and presentation of MCI [140,141]; most of the included studies (30/31, 97%) did not report results disaggregated by sex or analyze potential sex-based differences in diagnostic performance. As such, our review was unable to assess how sex may influence the effectiveness or generalizability of GISs for MCI assessment. The lack of such analyses represents a critical gap in the literature. Future research should explicitly report and analyze sex-disaggregated data to ensure that these systems are equitably designed and validated across diverse populations. Nevertheless, it is encouraging to see the inclusion of diverse groups, such as participants with AD, as MCI is regarded as a transitional phase between healthy aging and AD.
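One way to operationalize such control is to adjust game-derived scores for demographics and gaming familiarity in the analysis model. The sketch below is purely hypothetical: the data are simulated, and the variable names game_score and gaming_familiarity are illustrative rather than drawn from any reviewed system. It shows a logistic regression in which the game metric's association with MCI status is estimated while adjusting for age, sex, and self-reported gaming familiarity.

```python
# Hypothetical sketch of covariate adjustment (simulated data; variable
# names are illustrative, not from any reviewed study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "mci": rng.integers(0, 2, n),                 # 1 = MCI, 0 = control
    "game_score": rng.normal(50, 10, n),          # game-derived metric
    "age": rng.normal(72, 6, n),
    "sex": rng.integers(0, 2, n),                 # 1 = female
    "gaming_familiarity": rng.integers(1, 6, n),  # 1-5 Likert self-report
})

# Logistic regression: the game_score coefficient is estimated while
# adjusting for age, sex, and gaming familiarity rather than being
# confounded by them.
model = smf.logit(
    "mci ~ game_score + age + C(sex) + gaming_familiarity", data=df
).fit(disp=0)
print(model.summary())
```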

Second, comprehensive cognitive assessments should be administered to all participants, with those diagnosed with MCI specifically evaluated by health care professionals to ensure diagnostic precision. Solely relying on screening instruments for labeling MCI is inadequate and may overlook the complexity of cognitive decline.

Furthermore, most existing studies involve small, single-center cohorts, which limits generalizability. In addition, all the studies in the review used retrospective case-control designs, which can overestimate diagnostic performance by comparing well-defined cases and controls in artificial conditions. This introduces a high risk of bias and limits applicability in real-world diagnostic contexts. Future research would benefit from adopting larger, blinded, and multicenter studies—ideally through prospective diagnostic cohort designs—to strengthen the evidence base and enhance the clinical relevance of GISs.

Longitudinal studies, in particular, could facilitate a deeper understanding of long-term memory and its association with other cognitive functions, broadening the scope of cognitive domains assessed through GISs. By leveraging the engaging nature of gamified systems, researchers can explore how sustained interactions might reveal patterns or markers indicative of MCI progression or even early-stage cognitive decline. Future research could focus on designing and implementing longitudinal studies that incorporate diverse user populations, validate their findings against established clinical benchmarks, and investigate the potential of GISs to serve as an early warning system for cognitive decline. However, longitudinal studies present several challenges, such as participant retention, data standardization, and the ethical implications of prolonged data collection. Addressing these will be essential to maximize the utility and reliability of GISs in clinical practice.

Limitations

This review has several limitations. First, we did not expand our search beyond the initially included articles, which may have resulted in the exclusion of relevant studies related to specific GISs. For instance, in some cases, we only captured validation studies, while related user studies could not be identified from our database search. Second, our understanding of games within GISs was primarily derived from text descriptions and screenshots in the articles, without any direct interaction with the systems themselves. Consequently, the findings from RQ1 and RQ2 rely on the authors’ interpretations rather than firsthand experiences. This reliance on secondary information may restrict the accuracy and completeness of our descriptions of GISs. The community has also noted the challenges in articulating artifacts in experimental game research, and efforts are currently underway to refine standards [142].

Conclusions

GISs hold considerable potential to enhance the diagnostic process for MCI by capturing high-frequency, rich, and ecologically valid data, offering new insights beyond traditional screening instruments and test batteries. This rich data collection could provide health care professionals with a complementary perspective for diagnosis, enabling a more dynamic understanding of cognitive changes over time. To this end, this paper presents a scoping review of GISs used for assessing MCI. In total, 49 GISs were identified from 81 published articles and categorized into 4 types: casual games, simulation games, full-body movement games, and dedicated interactive games. We also examined the cognitive functions targeted and the accompanying diagnostic performance validation studies. Findings indicate a broad spectrum of GISs available for evaluating MCI, covering a wide range of cognitive functions. In addition, we identified several GISs reporting diagnostic performances comparable to screening instruments such as MMSE and MoCA—including a few studies (6/31, 19%) with near-perfect results. However, these findings should be interpreted with caution, as they often stem from studies with small sample sizes and limited methodological rigor. The analysis also identified limitations in the validation of GISs; only 2 systems included in this review conducted intraperson comparison studies, and no rigorous longitudinal study was identified. This gap represents a missed opportunity to investigate the full potential of GISs in assessing MCI progression. Such studies could provide critical insights into how gameplay proficiency, learning effects, and adaptation to game mechanics influence the evaluation of cognitive functions, thereby enhancing the interpretability and accuracy of these GISs. Overall, this review underscores the significant potential of GISs in assessing MCI, while highlighting the need for continued research to yield more conclusive results across various facets of these interactive systems.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Protocol for a scoping review of video games and gamification for assessing mild cognitive impairment.

DOCX File , 24 KB

Multimedia Appendix 2

Search queries.

DOCX File , 14 KB

Multimedia Appendix 3

Systems and game genres included in the review.

DOCX File , 22 KB

Multimedia Appendix 4

Cognitive functions evaluated in systems.

DOCX File , 21 KB

Multimedia Appendix 5

PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist.

DOCX File , 86 KB

  1. Park HL, O'Connell JE, Thomson RG. A systematic review of cognitive decline in the general elderly population. Int J Geriatr Psychiatry. Dec 2003;18(12):1121-1134. [CrossRef] [Medline]
  2. Kelley-Gillespie N. An integrated conceptual model of quality of life for older adults based on a synthesis of the literature. Appl Res Qual Life. Jul 9, 2009;4(3):259-282. [CrossRef]
  3. Power MC, Mormino E, Soldan A, James BD, Yu L, Armstrong NM, et al. Combined neuropathological pathways account for age-related risk of dementia. Ann Neurol. Jul 26, 2018;84(1):10-22. [FREE Full text] [CrossRef] [Medline]
  4. What is dementia? Centers for Disease Control and Prevention (CDC). 2019. URL: https://www.cdc.gov/aging/dementia/index.html [accessed 2022-08-04]
  5. Mitchell AJ, Shiri-Feshki M. Rate of progression of mild cognitive impairment to dementia--meta-analysis of 41 robust inception cohort studies. Acta Psychiatr Scand. Apr 2009;119(4):252-265. [CrossRef] [Medline]
  6. Gauthier S, Reisberg B, Zaudig M, Petersen R, Ritchie K, Broich K, et al. International Psychogeriatric Association Expert Conference on mild cognitive impairment. Mild cognitive impairment. Lancet. Apr 15, 2006;367(9518):1262-1270. [FREE Full text] [CrossRef] [Medline]
  7. Bruscoli M, Lovestone S. Is MCI really just early dementia? A systematic review of conversion studies. Int Psychogeriatr. Jun 2004;16(2):129-140. [FREE Full text] [CrossRef] [Medline]
  8. Waite LM. New and emerging drug therapies for Alzheimer disease. Aust Prescr. Jun 18, 2024;47(3):75-79. [FREE Full text] [CrossRef] [Medline]
  9. Chandler MJ, Parks AC, Marsiske M, Rotblatt LJ, Smith GE. Everyday impact of cognitive interventions in mild cognitive impairment: a systematic review and meta-analysis. Neuropsychol Rev. Sep 2016;26(3):225-251. [FREE Full text] [CrossRef] [Medline]
  10. Miller DI, Taler V, Davidson PS, Messier C. Measuring the impact of exercise on cognitive aging: methodological issues. Neurobiol Aging. Mar 2012;33(3):622.e29-622.e43. [CrossRef] [Medline]
  11. Pergher V, Schoenmakers B, Demaerel P, Tournoy J, Van Hulle MM. Differential impact of cognitive impairment in MCI patients: a case-based report. Case Rep Neurol. Jun 29, 2020;12(2):222-231. [FREE Full text] [CrossRef] [Medline]
  12. Livingston G, Sommerlad A, Orgeta V, Costafreda SG, Huntley J, Ames D, et al. Dementia prevention, intervention, and care. Lancet. Dec 16, 2017;390(10113):2673-2734. [CrossRef] [Medline]
  13. Edmonds EC, McDonald CR, Marshall A, Thomas KR, Eppig J, Weigand AJ, et al. Alzheimer's Disease Neuroimaging Initiative. Early versus late MCI: improved MCI staging using a neuropsychological approach. Alzheimers Dement. May 05, 2019;15(5):699-708. [FREE Full text] [CrossRef] [Medline]
  14. Mild cognitive impairment diagnosis. Mayo Clinic. URL: https://www.mayoclinic.org/diseases-conditions/mild-cognitive-impairment/diagnosis-treatment/drc-20354583?p=1 [accessed 2025-05-29]
  15. Boccardi M, Monsch AU, Ferrari C, Altomare D, Berres M, Bos I, et al. Consortium for the Harmonization of Neuropsychological Assessment for Neurocognitive Disorders (https://nextcloud.dzne.de/index.php/s/EwXjLab9caQTbQe). Harmonizing neuropsychological assessment for mild neurocognitive disorders in Europe. Alzheimers Dement. Jan 13, 2022;18(1):29-42. [FREE Full text] [CrossRef] [Medline]
  16. De Roeck EE, De Deyn PP, Dierckx E, Engelborghs S. Brief cognitive screening instruments for early detection of Alzheimer's disease: a systematic review. Alzheimers Res Ther. Feb 28, 2019;11(1):21. [FREE Full text] [CrossRef] [Medline]
  17. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. Nov 1975;12(3):189-198. [CrossRef] [Medline]
  18. Pangman VC, Sloan J, Guse L. An examination of psychometric properties of the mini-mental state examination and the standardized mini-mental state examination: implications for clinical practice. Appl Nurs Res. Nov 2000;13(4):209-213. [CrossRef] [Medline]
  19. Mitchell AJ. A meta-analysis of the accuracy of the mini-mental state examination in the detection of dementia and mild cognitive impairment. J Psychiatr Res. Jan 2009;43(4):411-431. [CrossRef] [Medline]
  20. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. Apr 2005;53(4):695-699. [CrossRef] [Medline]
  21. Aminisani N, Alimi R, Javadpour A, Asghari-Jafarabadi M, Jourian M, Stephens C, et al. Comparison between the accuracy of Montreal cognitive assessment and mini-mental state examination in the detection of mild cognitive impairment. Research Square. Preprint posted online on January 6, 2021. [FREE Full text] [CrossRef]
  22. Horton DK, Hynan LS, Lacritz LH, Rossetti HC, Weiner MF, Cullum CM. An abbreviated Montreal Cognitive Assessment (MoCA) for dementia screening. Clin Neuropsychol. May 15, 2015;29(4):413-425. [FREE Full text] [CrossRef] [Medline]
  23. Hoffman R, Al'Absi M. The effect of acute stress on subsequent neuropsychological test performance (2003). Arch Clin Neuropsychol. Jun 2004;19(4):497-506. [CrossRef] [Medline]
  24. Shehab A, Abdulle A. Cognitive and autonomic dysfunction measures in normal controls, white coat and borderline hypertension. BMC Cardiovasc Disord. Jan 11, 2011;11(1):3. [FREE Full text] [CrossRef] [Medline]
  25. Mc Ardle R, Taylor L, Cavadino A, Rochester L, Del Din S, Kerse N. Characterizing walking behaviors in aged residential care using accelerometry, with comparison across care levels, cognitive status, and physical function: cross-sectional study. JMIR Aging. Jun 04, 2024;7:e53020. [FREE Full text] [CrossRef] [Medline]
  26. Suzuki T, Murase S. Influence of outdoor activity and indoor activity on cognition decline: use of an infrared sensor to measure activity. Telemed J E Health. Jul 2010;16(6):686-690. [CrossRef] [Medline]
  27. Li A, Li J, Chai J, Wu W, Chaudhary S, Zhao J, et al. Detection of mild cognitive impairment through hand motor function under digital cognitive test: mixed methods study. JMIR Mhealth Uhealth. Jun 26, 2024;12:e48777. [FREE Full text] [CrossRef] [Medline]
  28. Seelye A, Hagler S, Mattek N, Howieson DB, Wild K, Dodge HH, et al. Computer mouse movement patterns: a potential marker of mild cognitive impairment. Alzheimers Dement (Amst). Dec 01, 2015;1(4):472-480. [FREE Full text] [CrossRef] [Medline]
  29. Austin J, Hollingshead K, Kaye J. Internet searches and their relationship to cognitive function in older adults: cross-sectional analysis. J Med Internet Res. Sep 06, 2017;19(9):e307. [FREE Full text] [CrossRef] [Medline]
  30. Yoshii K, Kimura D, Kosugi A, Shinkawa K, Takase T, Kobayashi M, et al. Screening of mild cognitive impairment through conversations with humanoid robots: exploratory pilot study. JMIR Form Res. Jan 13, 2023;7:e42792. [FREE Full text] [CrossRef] [Medline]
  31. Ferreira-Brito F, Fialho M, Virgolino A, Neves I, Miranda AC, Sousa-Santos N, et al. Game-based interventions for neuropsychological assessment, training and rehabilitation: which game-elements to use? A systematic review. J Biomed Inform. Oct 2019;98:103287. [FREE Full text] [CrossRef] [Medline]
  32. Lumsden J, Edwards EA, Lawrence NS, Coyle D, Munafò MR. Gamification of cognitive assessment and cognitive training: a systematic review of applications and efficacy. JMIR Serious Games. Jul 15, 2016;4(2):e11. [FREE Full text] [CrossRef] [Medline]
  33. Valladares-Rodríguez S, Pérez-Rodríguez R, Anido-Rifón L, Fernández-Iglesias M. Trends on the application of serious games to neuropsychological evaluation: a scoping review. J Biomed Inform. Dec 2016;64:296-319. [FREE Full text] [CrossRef] [Medline]
  34. Toril P, Reales JM, Ballesteros S. Video game training enhances cognition of older adults: a meta-analytic study. Psychol Aging. Sep 2014;29(3):706-716. [CrossRef] [Medline]
  35. Mandryk RL, Birk MV. The potential of game-based digital biomarkers for modeling mental health. JMIR Ment Health. Apr 23, 2019;6(4):e13485. [FREE Full text] [CrossRef] [Medline]
  36. Pavel M, Jimison H, Hagler S, McKanna J. Using behavior measurement to estimate cognitive function based on computational models. In: Patel VL, Arocha JF, Ancker JS, editors. Cognitive Informatics in Health and Biomedicine: Understanding and Modeling Health Behaviors. Cham, Switzerland. Springer; 2017:137-163.
  37. Piau A, Wild K, Mattek N, Kaye J. Current state of digital biomarker technologies for real-life, home-based monitoring of cognitive function for mild cognitive impairment to mild Alzheimer disease and implications for clinical care: systematic review. J Med Internet Res. Aug 30, 2019;21(8):e12785. [FREE Full text] [CrossRef] [Medline]
  38. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 02, 2018;169(7):467-473. [FREE Full text] [CrossRef] [Medline]
  39. Petersen RC. Mild cognitive impairment. N Engl J Med. Jun 09, 2011;364(23):2227-2234. [CrossRef]
  40. Video games and gamification for assessing mild cognitive impairment: a scoping review. Open Science Framework. URL: https://osf.io/m2sar/ [accessed 2025-05-29]
  41. Choi E, Shin SH, Ryu JK, Jung KI, Kim SY, Park MH. Commercial video games and cognitive functions: video game genres and modulating factors of cognitive enhancement. Behav Brain Funct. Feb 03, 2020;16(1):2. [FREE Full text] [CrossRef] [Medline]
  42. Caroux L, Isbister K, Le Bigot L, Vibert N. Player–video game interaction: a systematic review of current concepts. Comput Human Behav. Jul 2015;48:366-381. [CrossRef]
  43. Pinto TC, Machado L, Bulgacov TM, Rodrigues-Júnior AL, Costa ML, Ximenes RC, et al. Is the Montreal Cognitive Assessment (MoCA) screening superior to the Mini-Mental State Examination (MMSE) in the detection of mild cognitive impairment (MCI) and Alzheimer's Disease (AD) in the elderly? Int Psychogeriatr. Apr 14, 2019;31(4):491-504. [FREE Full text] [CrossRef] [Medline]
  44. McMurray J, Levy A, Pang W, Holyoke P. Psychometric evaluation of a tablet-based tool to detect mild cognitive impairment in older adults: mixed methods study. J Med Internet Res. Apr 19, 2024;26:e56883. [FREE Full text] [CrossRef] [Medline]
  45. Bezabih AM, Gerling K, Abebe W, Abeele VV. Behavioral theories and motivational features underlying eHealth interventions for adolescent antiretroviral adherence: systematic review. JMIR Mhealth Uhealth. Dec 10, 2021;9(12):e25129. [FREE Full text] [CrossRef] [Medline]
  46. Johnson D, Deterding S, Kuhn KA, Staneva A, Stoyanov S, Hides L. Gamification for health and wellbeing: a systematic review of the literature. Internet Interv. Nov 2016;6:89-106. [FREE Full text] [CrossRef] [Medline]
  47. Boyle EA, Hainey T, Connolly TM, Gray G, Earp J, Ott M, et al. An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Comput Educ. Mar 2016;94:178-192. [CrossRef]
  48. Jimison JB, Pavel M, Pavel J, McKanna J. Home monitoring of computer interactions for the early detection of dementia. Conf Proc IEEE Eng Med Biol Soc. 2004;2004:4533-4536. [CrossRef] [Medline]
  49. Jimison H, Pavel M, Le T. Home-based cognitive monitoring using embedded measures of verbal fluency in a computer word game. Annu Int Conf IEEE Eng Med Biol Soc. 2008;2008:3312-3315. [CrossRef] [Medline]
  50. McKanna JA, Jimison H, Pavel M. Divided attention in computer game play: analysis utilizing unobtrusive health monitoring. Annu Int Conf IEEE Eng Med Biol Soc. 2009;2009:6247-6250. [CrossRef] [Medline]
  51. Li H, Zhang T, Yu TC, Lin CC, Wong AM. Combine wireless sensor network and multimedia technologies for cognitive function assessment. In: Proceedings of the 3rd International Conference on Intelligent Control and Information Processing. 2012. Presented at: ICICIP '12; July 15-17, 2012:717-720; Dalian, China. URL: https://ieeexplore.ieee.org/document/6391483 [CrossRef]
  52. Zhang T, Lin CC, Yu TC, Sun J, Hsu WC, Wong AM. Fun cube based brain gym cognitive function assessment system. Comput Biol Med. May 01, 2017;84:1-8. [CrossRef] [Medline]
  53. Yeh SC, Chen YC, Tsai CF, Rizzo A. An innovative virtual reality system for mild cognitive impairment: diagnosis and evaluation. In: Proceedings of the 2012 IEEE-EMBS Conference on Biomedical Engineering and Sciences. 2012. Presented at: IECBES '12; December 17-19, 2012:23-27; Langkawi, Malaysia. URL: https://ieeexplore.ieee.org/document/6498023 [CrossRef]
  54. Tsai CF, Chen CC, Wu EH, Chung CR, Huang CY, Tsai PY, et al. A machine-learning-based assessment method for early-stage neurocognitive impairment by an immersive virtual supermarket. IEEE Trans Neural Syst Rehabil Eng. 2021;29:2124-2132. [CrossRef] [Medline]
  55. Shamsuddin SN, Ugail H, Lesk V, Walters E. VREAD: a virtual simulation to investigate cognitive function in the elderly. In: Proceedings of the 2012 International Conference on Cyberworlds. 2012. Presented at: CW '12; September 25-27, 2012:215-220; Darmstadt, Germany. URL: https://ieeexplore.ieee.org/document/6337422 [CrossRef]
  56. Lesk VE, Wan Shamsuddin SN, Walters ER, Ugail H. Using a virtual environment to assess cognition in the elderly. Virtual Real. Sep 26, 2014;18(4):271-279. [CrossRef]
  57. Wallace B, Goubran R, Knoefel F, Petriu M, McAvoy A. Design of games for measurement of cognitive impairment. In: Proceedings of the 2014 International Conference on Biomedical and Health Informatics. 2014. Presented at: BHI '14; June 1-4, 2014:117-120; Valencia, Spain. URL: https://ieeexplore.ieee.org/document/6864318 [CrossRef]
  58. Joshi V, Wallace B, Shaddy A, Knoefel F, Goubran R, Lord C. Metrics to monitor performance of patients with mild cognitive impairment using computer based games. In: Proceedings of the 2016 IEEE-EMBS International Conference on Biomedical and Health Informatics. 2016. Presented at: BHI '16; February 24-27, 2016:521-524; Las Vegas, NV. URL: https://ieeexplore.ieee.org/document/7455949 [CrossRef]
  59. Tost D, Pazzi S, von Barnekow A, Felix E, Puricelli S, Bottiroli S. SmartAgeing: a 3D serious game for early detection of mild cognitive impairments. In: Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare. 2014. Presented at: PervasiveHealth '14; May 20-23, 2014:1-3; Oldenburg, Germany. URL: https://eudl.eu/pdf/10.4108/icst.pervasivehealth.2014.255334 [CrossRef]
  60. Zucchella C, Sinforiani E, Tassorelli C, Cavallini E, Tost-Pardell D, Grau S, et al. Serious games for screening pre-dementia conditions: from virtuality to reality? A pilot project. Funct Neurol. 2014;29(3):153-158. [FREE Full text] [Medline]
  61. Tost D, von Barnekow A, Felix E, Pazzi S, Puricelli S, Bottiroli S. Early detection of cognitive impairments with the smart ageing serious game. In: Proceedings of the 2nd International Workshop on ICTs for Improving Patients Rehabilitation Research Techniques. 2014. Presented at: REHAB '14; May 20-23, 2014:183-195; Oldenburg, Germany. [CrossRef]
  62. Bottiroli S, Tassorelli C, Lamonica M, Zucchella C, Cavallini E, Bernini S, et al. Smart aging platform for evaluating cognitive functions in aging: a comparison with the MoCA in a normal population. Front Aging Neurosci. Nov 21, 2017;9:379. [FREE Full text] [CrossRef] [Medline]
  63. Bottiroli S, Bernini S, Cavallini E, Sinforiani E, Zucchella C, Pazzi S, et al. The smart aging platform for assessing early phases of cognitive impairment in patients with neurodegenerative diseases. Front Psychol. 2021;12:635410. [FREE Full text] [CrossRef] [Medline]
  64. Cabinio M, Rossetto F, Isernia S, Saibene FL, Di Cesare M, Borgnis F, et al. The use of a virtual reality platform for the assessment of the memory decline and the hippocampal neural injury in subjects with mild cognitive impairment: the validity of smart aging serious game (SASG). J Clin Med. May 06, 2020;9(5):1355. [FREE Full text] [CrossRef] [Medline]
  65. Konstantinidis EI, Antoniou PE, Bamidis PD. Exergames for assessment in active and healthy aging - emerging trends and potentialities. In: Proceedings of the 1st International Conference on Information and Communication Technologies for Ageing Well and e-Health. 2015. Presented at: SocialICT '15; May 20-22, 2015:325-330; Lisbon, Portugal. URL: https://www.scitepress.org/papers/2015/54945/54945.pdf [CrossRef]
  66. Konstantinidis EI, Billis AS, Mouzakidis CA, Zilidou VI, Antoniou PE, Bamidis PD. Design, implementation, and wide pilot deployment of FitForAll: an easy to use exergaming platform improving physical fitness and life quality of senior citizens. IEEE J Biomed Health Inform. Jan 2016;20(1):189-200. [CrossRef] [Medline]
  67. Konstantinidis EI, Bamidis PD, Billis A, Kartsidis P, Petsani D, Papageorgiou SG. Physical training in-game metrics for cognitive assessment: evidence from extended trials with the Fitforall exergaming platform. Sensors (Basel). Aug 26, 2021;21(17):5756. [FREE Full text] [CrossRef] [Medline]
  68. Manera V, Petit PD, Derreumaux A, Orvieto I, Romagnoli M, Lyttle G, et al. 'Kitchen and cooking,' a serious game for mild cognitive impairment and Alzheimer's disease: a pilot study. Front Aging Neurosci. 2015;7:24. [FREE Full text] [CrossRef] [Medline]
  69. Sirály E, Szabó Á, Szita B, Kovács V, Fodor Z, Marosi C, et al. Monitoring the early signs of cognitive decline in elderly by computer games: an MRI study. PLoS One. Feb 23, 2015;10(2):e0117918. [FREE Full text] [CrossRef] [Medline]
  70. Tarnanas I, Laskaris N, Tsolaki M, Muri R, Nef T, Mosimann U. On the comparison of a novel serious game and electroencephalography biomarkers for early dementia screening. Adv Exp Med Biol. 2015;821:63-77. [CrossRef] [Medline]
  71. Tong T, Guana V, Jovanovic A, Tran F, Mozafari G, Chignell M, et al. Rapid deployment and evaluation of mobile serious games: a cognitive assessment case study. Procedia Comput Sci. 2015;69:96-103. [CrossRef]
  72. Tong T, Chignell M, Tierney MC, Lee J. A serious game for clinical assessment of cognitive status: validation study. JMIR Serious Games. May 27, 2016;4(1):e7. [FREE Full text] [CrossRef] [Medline]
  73. Wilkinson A, Tong T, Zare A, Kanik M, Chignell M. Monitoring health status in long term care through the use of ambient technologies and serious games. IEEE J Biomed Health Inform. Nov 2018;22(6):1807-1813. [CrossRef] [Medline]
  74. Vallejo V, Mitache AV, Tarnanas I, Muri R, Mosimann UP, Nef T. Combining qualitative and quantitative methods to analyze serious games outcomes: a pilot study for a new cognitive screening tool. Annu Int Conf IEEE Eng Med Biol Soc. Aug 2015;2015:1327-1330. [CrossRef] [Medline]
  75. Boletsis C, McCallum S. Smartkuber: a serious game for cognitive health screening of elderly players. Games Health J. Aug 2016;5(4):241-251. [CrossRef] [Medline]
  76. Bonnechère B, Fabris C, Bier JC, Van Sint Jan S, Feipel V, Jansen B. Evaluation of cognitive functions of aged patients using video games. In: Proceedings of the 4th Workshop on ICTs for improving Patients Rehabilitation Research Techniques. 2016. Presented at: REHAB '16; October 13-14, 2016:21-24; Lisbon, Portugal. URL: https://dl.acm.org/doi/10.1145/3051488.3051491 [CrossRef]
  77. Bonnechère B, Bier JC, Van Hove O, Sheldon S, Samadoulougou S, Kirakoya-Samadoulougou F, et al. Age-associated capacity to progress when playing cognitive mobile games: ecological retrospective observational study. JMIR Serious Games. Jun 12, 2020;8(2):e17121. [FREE Full text] [CrossRef] [Medline]
  78. Rivas Costa C, Fernández Iglesias MJ, Anido Rifón LE, Gómez Carballa M, Valladares Rodríguez S. The acceptability of TV-based game platforms as an instrument to support the cognitive evaluation of senior adults at home. PeerJ. 2017;5(1):e2845. [FREE Full text] [CrossRef] [Medline]
  79. Hou CJ, Huang MW, Zhou JY, Hsu PC, Zeng JH, Chen YT. The application of individual virtual nostalgic game design to the evaluation of cognitive function. Annu Int Conf IEEE Eng Med Biol Soc. Jul 2017;2017:2586-2589. [CrossRef] [Medline]
  80. Gielis K, Brito F, Tournoy J, Abeele VV. Can card games be used to assess mild cognitive impairment?: a study of klondike solitaire and cognitive functions. In: Proceedings of the Extended Abstracts Publication of the Annual Symposium on Computer-Human Interaction in Play. 2017. Presented at: CHI PLAY '17; October 15-18, 2017:269-276; Amsterdam, the Netherlands. URL: https://dl.acm.org/doi/10.1145/3130859.3131328 [CrossRef]
  81. Gielis K. Screening for mild cognitive impairment through digital biomarkers of cognitive performance in games. In: Proceedings of the 2019 Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. 2019. Presented at: CHI PLAY '19; October 22-25, 2019:7-13; Barcelona, Spain. URL: https://dl.acm.org/doi/10.1145/3341215.3356332 [CrossRef]
  82. Gielis K, Vanden Abeele ME, De Croon R, Dierick P, Ferreira-Brito F, Van Assche L, et al. Dissecting digital card games to yield digital biomarkers for the assessment of mild cognitive impairment: methodological approach and exploratory study. JMIR Serious Games. Nov 04, 2021;9(4):e18359. [FREE Full text] [CrossRef] [Medline]
  83. Gielis K, Vanden Abeele ME, Verbert K, Tournoy J, De Vos M, Vanden Abeele V. Detecting mild cognitive impairment via digital biomarkers of cognitive performance found in klondike solitaire: a machine-learning study. Digit Biomark. Feb 19, 2021;5(1):44-52. [FREE Full text] [CrossRef] [Medline]
  84. Liu KY, Chen SM, Huang HL. Development of a game-based cognitive measures system for elderly on the basis of mini-mental state examination. In: Proceedings of the 2017 International Conference on Applied System Innovation. 2017. Presented at: ICASI '17; May 13-17, 2017:1853-1856; Sapporo, Japan. URL: https://ieeexplore.ieee.org/document/7988307 [CrossRef]
  85. Silva Neto HC, Cerejeira J, Roque L. Serious games for cognitive assessment with older adults: a preliminary study. In: Proceedings of the 16th IFIP TC 14 International Conference on Entertainment Computing. 2017. Presented at: ICEC '17; September 18-21, 2017:97-112; Tsukuba City, Japan. URL: https://link.springer.com/chapter/10.1007/978-3-319-66715-7_11 [CrossRef]
  86. Silva Neto H, Cerejeira J, Roque L. Cognitive screening of older adults using serious games: an empirical study. Entertain Comput. Dec 2018;28:11-20. [CrossRef]
  87. Valladares-Rodriguez S, Perez-Rodriguez R, Facal D, Fernandez-Iglesias MJ, Anido-Rifon L, Mouriño-Garcia M. Design process and preliminary psychometric study of a video game to detect cognitive impairment in senior adults. PeerJ. 2017;5:e3508. [FREE Full text] [CrossRef] [Medline]
  88. Valladares-Rodriguez S, Fernández-Iglesias MJ, Anido-Rifón L, Facal D, Pérez-Rodríguez R. Episodix: a serious game to detect cognitive impairment in senior adults. A psychometric study. PeerJ. 2018;6:e5478. [FREE Full text] [CrossRef] [Medline]
  89. Valladares-Rodríguez S, Anido-Rifón L, Fernández-Iglesias MJ, Facal-Mayo D. A machine learning approach to the early diagnosis of Alzheimer’s disease based on an ensemble of classifiers. In: Proceedings of the 19th International Conference on Computational Science and Its Applications. 2019. Presented at: ICCSA '19; July 1-4, 2019:396; Saint Petersburg, Russia. URL: https://link.springer.com/chapter/10.1007/978-3-030-24289-3_28 [CrossRef]
  90. Valladares-Rodriguez S, Fernández-Iglesias MJ, Anido-Rifón L, Facal D, Rivas-Costa C, Pérez-Rodríguez R. Touchscreen games to detect cognitive impairment in senior adults. A user-interaction pilot study. Int J Med Inform. Jul 2019;127:52-62. [CrossRef] [Medline]
  91. Valladares-Rodríguez S, Fernández-Iglesias MJ, Anido-Rifón LE, Pacheco-Lorenzo M. Evaluation of the predictive ability and user acceptance of Panoramix 2.0, an AI-based E-health tool for the detection of cognitive impairment. Electronics. Oct 22, 2022;11(21):3424. [CrossRef]
  92. Valladares-Rodriguez S, Pérez-Rodriguez R, Fernandez-Iglesias JM, Anido-Rifón LE, Facal D, Rivas-Costa C. Learning to detect cognitive impairment through digital games and machine learning techniques. Methods Inf Med. Sep 2018;57(4):197-207. [CrossRef] [Medline]
  93. Jung HT, Lee H, Kim K, Kim B, Park S, Ryu T, et al. Estimating mini mental state examination scores using game-specific performance values: a preliminary study. Annu Int Conf IEEE Eng Med Biol Soc. Jul 2018;2018:1518-1521. [CrossRef] [Medline]
  94. Jung HT, Daneault JF, Lee H, Kim K, Kim B, Park S, et al. Remote assessment of cognitive impairment level based on serious mobile game performance: an initial proof of concept. IEEE J Biomed Health Inform. May 2019;23(3):1269-1277. [CrossRef] [Medline]
  95. Seo K, Ryu H. Nothing is more revealing than body movement: measuring the movement kinematics in VR to screen dementia. In: Proceedings of the 2018 Asian HCI Symposium'18 on Emerging Research Collection. 2018. Presented at: ASIAN-CHI '18; April 21-26, 2018:21-24; Montreal, QC. URL: https://dl.acm.org/doi/10.1145/3205851.3205857 [CrossRef]
  96. Zeng Z, Fauvel S, Hsiang BT, Wang D, Qiu Y, Khuan P, et al. Towards long-term tracking and detection of early dementia: a computerized cognitive test battery with gamification. In: Proceedings of the 3rd International Conference on Crowd Science and Engineering. 2018. Presented at: ICCSE '18; July 28-31, 2018:1-10; Singapore, Singapore. URL: https://dl.acm.org/doi/10.1145/3265689.3265719 [CrossRef]
  97. Chessa M, Bassano C, Gusai E, Martis AE, Solari F. Human-computer interaction approaches for the assessment and the practice of the cognitive capabilities of elderly people. In: Proceedings of the 2018 International Conference on Computer Vision. 2019. Presented at: ECCV '18; September 8-14, 2018:66-81; Munich, Germany. URL: https://link.springer.com/chapter/10.1007/978-3-030-11024-6_5 [CrossRef]
  98. Chua SI, Tan NC, Wong WT, Allen Jr JC, Quah JH, Malhotra R, et al. Virtual reality for screening of cognitive function in older persons: comparative study. J Med Internet Res. Aug 01, 2019;21(8):e14821. [FREE Full text] [CrossRef] [Medline]
  99. Aljumaili M, McLeod R, Friesen M. Serious games and ML for detecting MCI. In: Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing. 2019. Presented at: GlobalSIP '19; November 11-14, 2019:1-5; Ottawa, ON. URL: https://ieeexplore.ieee.org/document/8969123 [CrossRef]
  100. Vovk A, Patel A, Chan D. Augmented reality for early Alzheimer’s disease diagnosis. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 2019. Presented at: CHI EA '19; May 4-9, 2019:1-6; Glasgow, UK. URL: https://dl.acm.org/doi/abs/10.1145/3290607.3313007 [CrossRef]
  101. Junaid Farrukh M, Bakry MM, Hatah E, Jan TH. Gamification technique to estimate mini mental state examination scores: a validation study. Curr Trends Biotechnol Pharm. Feb 22, 2020;14(5):140-146. [CrossRef]
  102. Chen YT, Hou CJ, Derek N, Huang SB, Huang MW, Wang YY. Evaluation of the reaction time and accuracy rate in normal subjects, MCI, and dementia using serious games. Appl Sci. Jan 11, 2021;11(2):628. [CrossRef]
  103. Iliadou P, Paliokas I, Zygouris S, Lazarou E, Votis K, Tzovaras D, et al. A comparison of traditional and serious game-based digital markers of cognition in older adults with mild cognitive impairment and healthy controls. J Alzheimers Dis. Jan 14, 2021;79(4):1747-1759. [CrossRef]
  104. Eraslan Boz H, Limoncu H, Zygouris S, Tsolaki M, Giakoumis D, Votis K, et al. A new tool to assess amnestic mild cognitive impairment in Turkish older adults: virtual supermarket (VSM). Neuropsychol Dev Cogn B Aging Neuropsychol Cogn. Sep 04, 2020;27(5):639-653. [CrossRef] [Medline]
  105. Zygouris S, Iliadou P, Lazarou E, Giakoumis D, Votis K, Alexiadis A, et al. Detection of mild cognitive impairment in an at-risk group of older adults: can a novel self-administered serious game-based screening test improve diagnostic accuracy? J Alzheimers Dis. Oct 27, 2020;78(1):405-412. [CrossRef]
  106. Yan M, Yin H, Meng Q, Wang S, Ding Y, Li G, et al. A virtual supermarket program for the screening of mild cognitive impairment in older adults: diagnostic accuracy study. JMIR Serious Games. Dec 03, 2021;9(4):e30919. [FREE Full text] [CrossRef] [Medline]
  107. Jagtap S, Kawade A, Banerjee V, Gadiwan S, Ramesh R, Shinde A, et al. Detection and monitoring of Alzheimer’s disease using serious games—a study. In: Proceedings of the 2020 International Conference on Sustainable Expert Systems. 2020. Presented at: ICSES '20; September 28-29, 2020:69-76; Kathmandu, Nepal. URL: https://link.springer.com/chapter/10.1007/978-981-33-4355-9_6 [CrossRef]
  108. Liu CL, Chang SR. A pilot tool of the virtual scenario initial dementia cognitive screening (VSIDCS) with a cultural exhibition for improving the standard traditional test. Healthcare (Basel). Sep 04, 2021;9(9):1160. [FREE Full text] [CrossRef] [Medline]
  109. Lee B, Lee T, Jeon H, Lee S, Kim K, Cho W, et al. Synergy through integration of wearable EEG and virtual reality for mild cognitive impairment and mild dementia screening. IEEE J Biomed Health Inform. Jul 2022;26(7):2909-2919. [CrossRef]
  110. Lee B, Lee TH. VR and EEG combined self-monitoring platform of cognitive care. In: Jung T, Dieck MC, Loureiro SM, editors. Extended Reality and Metaverse: Immersive Technology in Times of Crisis. Cham, Switzerland. Springer; 2023:253-262.
  111. Pearson C, De La Iglesia B, Sami S. Detecting cognitive decline using a novel doodle-based neural network. In: Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering. 2022. Presented at: MetroXRAINE '22; October 26-28, 2022:99-103; Rome, Italy. URL: https://ieeexplore.ieee.org/document/9967549 [CrossRef]
  112. Choi MG. Use of serious games for the assessment of mild cognitive impairment in the elderly. Appl Comput Sci. Jun 30, 2022;18(2):5-15. [CrossRef]
  113. Goumopoulos C, Skikos G, Karapapas C, Frounta M, Koumanakos G. Applying serious games and machine learning for cognitive training and screening: the COGNIPLAT approach. In: Proceedings of the 25th Pan-Hellenic Conference on Informatics. 2021. Presented at: PCI '21; November 26-28, 2021:63-68; Volos, Greece. URL: https://dl.acm.org/doi/10.1145/3503823.3503835 [CrossRef]
  114. Karapapas C, Goumopoulos C. Mild cognitive impairment detection using machine learning models trained on data collected from serious games. Appl Sci. Sep 03, 2021;11(17):8184. [CrossRef]
  115. Guedes LR, Schueda L, Hounsell MD, Paterno AS. Identifying deficient cognitive functions using computer games: a pilot study. In: Proceedings of the 2020 Conference on Brazilian Congress on Biomedical Engineering. 2020. Presented at: CBEB '20; October 26-30, 2020:1479-1486; Vitória, Brazil. URL: https://link.springer.com/chapter/10.1007/978-3-030-70601-2_218 [CrossRef]
  116. Oliveira FT, Tong BW, Garcia JA, Gay VC. CogWorldTravel: design of a game-based cognitive screening instrument. In: Proceedings of the 2022 Joint International Conference on Serious Games. 2022. Presented at: JCSG '22; September 22-23, 2022:125-139; Weimar, Germany. URL: https://link.springer.com/chapter/10.1007/978-3-031-15325-9_10 [CrossRef]
  117. Oliveira FT, Garcia JA, Gay VC. Evaluation of CogWorldTravel: a serious game for cognitive screening. In: Proceedings of the 10th International Conference on Serious Games and Applications for Health. 2022. Presented at: SeGAH '22; August 10-12, 2022:1-8; Sydney, Australia. URL: https://ieeexplore.ieee.org/document/9978549 [CrossRef]
  118. Intarasirisawat J, Ang CS, Efstratiou C, Dickens LW, Page R. Exploring the touch and motion features in game-based cognitive assessments. Proc ACM Interact Mob Wearable Ubiquitous Technol. Sep 09, 2019;3(3):1-25. [CrossRef]
  119. Ortega A, Lemos G, Martínez J. Artificial intelligence applied to video game for detection of mild cognitive impairment. In: Proceedings of the 2nd Doctoral Symposium on Information and Communication Technologies. 2022. Presented at: DSICT '22; October 12-14, 2022:161-172; Manta, Ecuador. URL: https://link.springer.com/chapter/10.1007/978-3-031-18347-8_13 [CrossRef]
  120. Ito S, Wira M, Thawonmas R. User friendly Minecraft mod for early detection of Alzheimer’s disease in young adults. In: Proceedings of the 2022 IEEE Conference on Games, Entertainment, Media. 2022. Presented at: GEM '22; November 27-30, 2022:1; St. Michael, Barbados. URL: https://ieeexplore.ieee.org/document/10017770 [CrossRef]
  121. Muñoz JE, Ali F, Basharat A, Mehrabi S, Barnett-Cowan M, Cao S, et al. Development of classifiers to determine factors associated with older adults' cognitive functions and game user experience in VR using head kinematics. IEEE Trans Games. 2024:1-9. [CrossRef]
  122. Huang P, Ishibashi Y. Study on early detection and prevention of dementia using olfactory and haptic senses. In: Proceedings of the 2023 International Conference on Consumer Electronics. 2023. Presented at: ICCE '23; July 17-19, 2023:315-316; PingTung, Taiwan. URL: https://ieeexplore.ieee.org/abstract/document/10227022 [CrossRef]
  123. Taghavi MF, Ghorbani F, Delrobaei M. Development of an augmented-reality-based serious game: a cognitive assessment study. IEEE Trans Cogn Dev Syst. Jun 2024;16(3):1087-1094. [CrossRef]
  124. Chen YT, Hou CJ, Huang MW, Dong JH, Zhou JY, Hung IC. The design of interactive physical game for cognitive ability detecting for elderly with mild cognitive impairment. In: Proceedings of the 7th International Conference of World Congress on Bioengineering. 2015. Presented at: WACBE '15; July 6-8, 2015:170-173; Singapore, Singapore. URL: http://link.springer.com/10.1007/978-3-319-19452-3_45 [CrossRef]
  125. House G, Burdea G, Polistico K, Ross J, Leibick M. A serious gaming alternative to pen-and-paper cognitive scoring: a pilot study of BrightScreener™. J Pain Manag. 2016;9(3):255. [FREE Full text]
  126. Vourvopoulos A, Faria AL, Ponnam K, Bermudez IB. RehabCity: design and validation of a cognitive assessment and rehabilitation tool through gamified simulations of activities of daily living. In: Proceedings of the 11th Conference on Advances in Computer Entertainment Technology. 2014. Presented at: ACE '14; November 11-14, 2014:1-8; Funchal, Portugal. URL: https://dl.acm.org/doi/abs/10.1145/2663806.2663852 [CrossRef]
  127. Fukui Y, Yamashita T, Hishikawa N, Kurata T, Sato K, Omote Y, et al. Computerized touch-panel screening tests for detecting mild cognitive impairment and Alzheimer's disease. Intern Med. 2015;54(8):895-902. [FREE Full text] [CrossRef] [Medline]
  128. Sacco G, Ben-Sadoun G, Bourgeois J, Fabre R, Manera V, Robert P. Comparison between a paper-pencil version and computerized version for the realization of a neuropsychological test: the example of the trail making test. J Alzheimers Dis. 2019;68(4):1657-1666. [CrossRef] [Medline]
  129. Kuittinen J, Kultima A, Niemelä J, Paavilainen J. Casual games discussion. In: Proceedings of the 2007 conference on Future Play. 2007. Presented at: Future Play '07; November 14-17, 2007:105-112; Toronto, ON. URL: https://dl.acm.org/doi/10.1145/1328202.1328221 [CrossRef]
  130. Rabin LA, Smart CM, Crane PK, Amariglio RE, Berman LM, Boada M, et al. Subjective cognitive decline in older adults: an overview of self-report measures used across 19 international research studies. J Alzheimers Dis. Sep 24, 2015;48 Suppl 1(0 1):S63-S86. [FREE Full text] [CrossRef] [Medline]
  131. Jv RE, Jl CL. Comparison of Montreal Cognitive Assessment (MoCA) and Mini-Mental State Examination (MMSE) for the detection of cognitive disorders. Crimson Publishers. URL: https://crimsonpublishers.com/tnn/fulltext/TNN.000538.php [accessed 2025-05-29]
  132. Trzepacz PT, Hochstetler H, Wang S, Walker B, Saykin AJ, Alzheimer’s Disease Neuroimaging Initiative. Relationship between the Montreal cognitive assessment and mini-mental state examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. Sep 07, 2015;15:107. [FREE Full text] [CrossRef] [Medline]
  133. Salmon JP, Dolan SM, Drake RS, Wilson GC, Klein RM, Eskes GA. A survey of video game preferences in adults: building better games for older adults. Entertain Comput. Jun 2017;21:45-64. [CrossRef]
  134. Blocker KA, Wright TJ, Boot WR. Gaming preferences of aging generations. Gerontechnology. 2014;12(3):174-184. [FREE Full text] [CrossRef] [Medline]
  135. Caserman P, Hoffmann K, Müller P, Schaub M, Straßburg K, Wiemeyer J, et al. Quality criteria for serious games: serious part, game part, and balance. JMIR Serious Games. Jul 24, 2020;8(3):e19037. [FREE Full text] [CrossRef] [Medline]
  136. Bruun-Pedersen JR, Serafin S, Kofoed LB. Restorative virtual environment design for augmenting nursing home rehabilitation. J Virtual Worlds Res. Feb 04, 2016;9(3):1-26. [CrossRef]
  137. Narayanasamy V, Wong KW, Fung CC, Depickere A. Distinguishing simulation games from simulators by considering design characteristics. In: Proceedings of the second Australasian conference on Interactive entertainment. 2005. Presented at: IE '05; November 23-25, 2005:141-144; Sydney, Australia. URL: https://dl.acm.org/doi/10.5555/1109180.1109202
  138. De Schutter B, Vanden Abeele V. Towards a Gerontoludic Manifesto. Anthropol Aging. Nov 19, 2015;36(2):112-120. [CrossRef]
  139. Murata N, Nishii S, Usuha R, Kodaka A, Fujimori M, Sugawara H, et al. A gamified N-back app for identifying mild-cognitive impairment in older adults. JMA J. Jan 15, 2025;8(1):174-182. [CrossRef] [Medline]
  140. Koran ME, Wagener M, Hohman TJ, Alzheimer’s Neuroimaging Initiative. Sex differences in the association between AD biomarkers and cognitive decline. Brain Imaging Behav. Feb 3, 2017;11(1):205-213. [FREE Full text] [CrossRef] [Medline]
  141. Martin J, Reid N, Ward DD, King S, Hubbard RE, Gordon EH. Investigating sex differences in risk and protective factors in the progression of mild cognitive impairment to dementia: a systematic review. J Alzheimers Dis. 2024;97(1):101-119. [CrossRef] [Medline]
  142. Gerling K, Birk MV. Reflections on rigor and reproducibility: moving toward a community standard for the description of artifacts in experimental games research. In: Extended Abstracts of the 2022 Annual Symposium on Computer-Human Interaction in Play. 2022. Presented at: CHI PLAY '22; November 7, 2022:266-267; Bremen, Germany. URL: https://dl.acm.org/doi/10.1145/3505270.3558360 [CrossRef]


AD: Alzheimer disease
AI: artificial intelligence
AUC: area under the curve
GISs: gamified interactive systems
MCI: mild cognitive impairment
MMSE: Mini-Mental State Examination
MoCA: Montreal Cognitive Assessment
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
RQ: research question
TMT: trail making test


Edited by J Torous; submitted 14.01.25; peer-reviewed by T Takebe, D Chambers; comments to author 30.03.25; revised version received 09.05.25; accepted 14.06.25; published 05.08.25.

Copyright

©Yu Chen, Kathrin Gerling, Katrien Verbert, Vero Vanden Abeele. Originally published in JMIR Mental Health (https://mental.jmir.org), 05.08.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Mental Health, is properly cited. The complete bibliographic information, a link to the original publication on https://mental.jmir.org/, as well as this copyright and license information must be included.