Usability of an at-home tablet-based cognitive test in older adults with and without cognitive impairment

Abstract

Background

Mobile device-based cognitive screening has the potential to overcome the limitations in diagnostic precision and efficiency that characterize conventional pen-and-paper cognitive screening. Several mobile device-based cognitive testing platforms have demonstrated usability, but the usability of take-home mobile device-based cognitive screening in typical adult primary care patients requires further investigation.

Methods

This study set out to test the usability of a prototype mobile device-based cognitive screening test in older adult primary care patients across a range of cognitive performance. Participants completed the St. Louis University Mental Status Examination (SLUMS) and then used a study-supplied mobile device application at home for 5 days. The application presented 7 modules lasting approximately 15 min in total. Participants completed the System Usability Scale (SUS) after using the application.

Results

A total of 51 individuals participated, with a median (IQR) age of 81 (74–85) years. Cognitive impairment (SLUMS score < 27) was present in 30 (59%) participants. The mean (95% confidence interval [CI]) SUS score was 76 (71–81), which indicates good usability. Usability scores were similar across ranges of cognitive impairment. A lower SLUMS score predicted early withdrawal from the study, with an area under the receiver operating characteristic curve (95% CI) of 0.78 (0.58–0.97).

Conclusion

Take-home mobile device-based cognitive testing is a usable strategy for many older adult primary care patients. Depending on patient preferences and abilities, it could be part of a flexible cognitive testing and follow-up strategy that includes mobile device-based testing in healthcare settings and pen-and-paper cognitive testing.

Background

Conventional cognitive screening methods have exhibited limitations in effectively capturing the intricacies of cognitive functioning among older adults. These methods often rely on in-person, pen-and-paper assessments, which can strain the already limited time available to healthcare providers in primary care settings and require a face-to-face visit [1]. Furthermore, these traditional methods may not fully capture the nuances of cognitive decline, especially in its early stages, potentially leading to missed opportunities for early intervention [2]. These screenings are often conducted in controlled clinical environments, which might not accurately reflect real-world cognitive performance in individuals' day-to-day lives [1]. Such limitations can hinder the accurate detection and tracking of cognitive impairment, thereby underscoring the need for innovative approaches that harness the capabilities of modern technology to provide more nuanced and accessible assessments.

Screening for cognitive change sooner may lead to eligibility for new drugs, participation in clinical trials, deployment of meaningful interventions, and overall better health care outcomes. Early detection of major neurocognitive disorders enables more timely deployment of pharmacologic and non-pharmacologic interventions to help both persons living with dementia (PLWD) and their professional or family caregivers. Cognitive screening tools must also become more inclusive for demographically diverse individuals. A body of prior work has documented limitations of screenings that are not sensitive to varying socioeconomic, cultural, racial, or other differences [3,4,5,6,7,8,9].

Although investigators have increasingly used mobile devices for cognitive testing in older adults, we lack evidence on at-home tablet-based cognitive testing in older adults requiring active participation [10]. A limited number of studies have demonstrated that various digital cognitive tests perform well in detecting dementia and mild cognitive impairment (MCI) [1]. Although a study of tablet-based cognitive assessments found high usability ratings in older adults in a controlled setting, we need further study about the usability of such testing in a take-home format [11]. A study of a self-downloaded cognitive test demonstrated feasibility in users of an online citizen science platform, but we need to test the generalizability of this finding to typical adult primary care populations [12]. Consumers with mobile devices can now obtain numerous applications designed to screen for dementia, and the products vary widely in their similarity to established cognitive screening instruments and the evidence for their usability and the validity of their results [13]. In order to reach users who are less comfortable with mobile devices, we need to understand how take-home mobile device cognitive testing will function in an older adult primary care population. We present the LifeBio Brain Phase 1 study. This study aimed to assess the usability of a take-home mobile tablet-based cognitive test in older adults with and without cognitive impairment who visit a geriatric primary care practice. We hypothesized that geriatric primary care clinic patients would find a prototype mobile cognitive testing application usable, as defined by a mean system usability score of 75 or greater.

Methods

Study design and participants

The purpose of LifeBio Brain Phase 1 was to prospectively assess the usability of a prototype mobile cognitive testing application. We recruited volunteers from an academically affiliated Geriatric Medicine practice focusing on primary care for older adults. Advarra provided Institutional Review Board (IRB) approval and oversight of all study materials and procedures through a reliance agreement with the Rhode Island Hospital IRB. The Advarra IRB approved the study on August 24, 2022, protocol number Pro00062144.

Inclusion and exclusion criteria

This study included patients of the Geriatric Medicine practice whose most recent St. Louis University Mental Status Exam (SLUMS), Montreal Cognitive Assessment, or Mini-Mental State Examination score was greater than 15, in order to sample individuals likely to engage with self-administered mobile device-based testing both in this study and in actual clinical practice. Additionally, we required that potential participants’ motor, hearing, and vision abilities enabled using a mobile tablet device and that they had reliable home Wi-Fi service.

Procedures

We recruited patients from November 2, 2022, to February 13, 2023. The principal investigator pre-screened all participants for study eligibility and impaired capacity for informed consent via medical record review under an IRB-approved HIPAA waiver. The principal investigator then notified healthcare providers (physicians and nurse practitioners) of all potentially eligible patients scheduled for visits and screened again for capacity via direct communication with the referring healthcare providers. Referring healthcare providers had knowledge of the study objectives and the inclusion and exclusion criteria but did not formally screen patients for eligibility. Referred potential subjects and, when applicable, legally authorized representatives met in person with a study team member for a concise overview of the study. The overview included a brief introduction to the mobile tablet device and instructions for opening the cognitive testing application. We required informed consent from a legally authorized representative for all subjects with impaired capacity for informed consent. We also required signed confirmation of assent from subjects with impaired capacity for informed consent.

Subjects were advised of their right to disenroll from the study at any time, for any reason, without any repercussions to their current or future medical care, and with pro-ration of the study financial incentive if they completed some but not all of the protocol. The study team then collected demographic information and administered the SLUMS. The SLUMS efficiently delivers a reliable measure of cognitive function with a single-factor structure and good discriminability and compares favorably to other brief performance-based cognitive screeners [14,15,16]. The study team then provided the study device, including a stylus and a detailed tutorial on using the device, connecting to Wi-Fi, and using the study application (See Supplemental file).

When an engaged care partner, such as a spouse or adult child, was present for the initial study visit, the study team encouraged the care partner to assist the participant in turning on the device and accessing the application but not completing the tests. A study team member was available throughout the study to answer questions about the device and the protocol. Participants were instructed to engage with the mobile cognitive testing application twice daily for 5 days—once during the morning and once in the evening. After this period, a study team member collected the study device and administered the System Usability Scale (SUS). Subjects who completed the protocol received a financial incentive for participation at the end of the protocol.

Mobile application

This study tested the usability of a prototype cognitive testing mobile application. The application displayed 7 distinct modules in random order (see Supplemental file for images of the mobile application). The modules were designed to last about 15 min in total. They included adaptations of validated cognitive tests, such as Trail Making Test A, the Clock Drawing Test, verbal fluency, and digit span, as well as game-like experiences designed to elicit eye movements and spontaneous speech that will be used to measure digital biomarkers in future iterations of LifeBio Brain:

  1. In ‘Touch the Dot,’ the mobile device displayed a dot moving between random positions on the screen and instructed participants to ‘touch the dot.’

  2. In ‘What is This?,’ the mobile device displayed a series of images and instructed participants to describe aloud what they saw on the screen. The app randomly selected 10 images to display from a set of 60.

  3. ‘Connect the Circles’ displayed an adaptation of Trail Making Test A, in which the application instructed participants to touch circles sequentially in numeric order and responded by displaying a trail of line segments [17].

  4. In ‘Animal Names,’ a verbal fluency test, the mobile device displayed a timer and instructed participants to name as many animals as possible.

  5. ‘Draw a Clock,’ a digital adaptation of the Clock Drawing Test [18], instructed participants to draw an analog clock on the touchscreen using a stylus, including all the numbers, and to draw the hands of the clock so that the time reads ‘10 minutes to 11 o’clock.’

  6. In ‘Remember Number,’ the device screen displayed a 4-digit number and instructed the participant to remember it. Subsequent screens instructed the participant to say the number aloud in forward and reverse order.

  7. In ‘Describe the Picture,’ the mobile device displayed a stylized photograph of a meal and instructed the participant to describe what they saw, and then displayed a landscape photograph and instructed the participant to talk about some places they had traveled.

Outcomes

The primary outcome was the usability of the mobile cognitive testing application, as measured by the SUS. A research assistant administered the SUS to each participant at the time of device return [19, 20]. The rationale for in-person survey administration was to increase the response rate and to avoid the non-response bias that would arise if participants who did not find the device usable were less likely to complete a device-based SUS. The SUS measures participants’ subjective experience with a digital system or product using 10 Likert items [21]. The instrument alternates between negatively-framed and positively-framed questions. SUS scores range from 0 to 100 [20]. We derived SUS scores from participant responses using previously described methods: we subtracted 1 from the Likert value for questions 1, 3, 5, 7, and 9, and subtracted the Likert value from 5 for questions 2, 4, 6, 8, and 10. We multiplied the sum of the resulting values by 2.5 to obtain the total SUS score [20].
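To make the scoring arithmetic concrete, here is a minimal Python sketch of the published SUS scoring rule; the function name and input format are ours, not part of the study software:

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items (1, 3, 5, 7, 9) are positively framed, so each
    contributes (value - 1); even-numbered items (2, 4, 6, 8, 10) are
    negatively framed, so each contributes (5 - value). The sum (0-40)
    is multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i is 0-based: even i = odd item
    return 2.5 * total

# A maximally favorable respondent scores 100:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```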

Sample size

With 42 participants completing the study, we estimated 80% power to detect an 8-point difference in the SUS with an alpha level of 0.05 in a 2-tailed t-test, assuming a standard deviation of 18 points on the SUS [19].
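As a hedged illustration, the stated power estimate can be reproduced with statsmodels by treating the 8-point difference and 18-point standard deviation as a standardized effect size in a one-sample, two-tailed t-test; this is our reconstruction of the calculation, not the authors' original code:

```python
from statsmodels.stats.power import TTestPower

# Parameters stated in the text: detectable difference of 8 SUS points,
# assumed SD of 18 points, alpha = 0.05 (two-tailed), 42 completers.
effect_size = 8 / 18  # Cohen's d of roughly 0.44
power = TTestPower().power(effect_size=effect_size, nobs=42,
                           alpha=0.05, alternative="two-sided")
print(f"estimated power with n = 42: {power:.2f}")  # approximately 0.80
```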

Statistical analysis

The primary outcome, the mean SUS score with 95% confidence intervals, was computed using standard methods for normally distributed data. We used Receiver Operating Characteristic (ROC) analysis to explore the relationship between SLUMS and study completion. For this analysis, we defined study completion as use of the study device for 5 days, completion of the SUS, and return of the device. The ROC analysis used nonparametric methods and estimated standard error using the method reported by DeLong [22]. All statistical analyses were performed using Stata SE 17.0 (StataCorp, College Station, Texas, USA).
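The AUC confidence interval in this study comes from DeLong's analytic standard error computed in Stata. As an illustrative substitute, the sketch below estimates an AUC with a bootstrap percentile CI in Python on simulated stand-in data; the study data are not public, so all values here are hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated stand-in data: 42 completers and 8 early withdrawals with SLUMS
# scores (8 of the 9 withdrawals completed the SLUMS), completers scoring
# higher on average. The distributions are hypothetical.
slums = np.concatenate([rng.normal(25, 3, 42), rng.normal(19, 5, 8)]).clip(0, 30)
completed = np.concatenate([np.ones(42), np.zeros(8)]).astype(int)

auc = roc_auc_score(completed, slums)

# Bootstrap percentile CI; the paper used DeLong's method instead.
boot = []
while len(boot) < 2000:
    idx = rng.integers(0, len(slums), len(slums))
    if completed[idx].min() == completed[idx].max():
        continue  # a valid resample needs both outcome classes
    boot.append(roc_auc_score(completed[idx], slums[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% bootstrap CI {lo:.2f}-{hi:.2f})")
```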

Results

Of the 51 participants who provided informed consent, the median age was 81 years, with an interquartile range (IQR) of 74 to 85. All 51 (100%) participants self-reported their race and ethnicity as White and not Hispanic or Latino. Thirty (59%) participants self-reported female gender. The median (IQR) SLUMS score was 25 (21–28) [23]. Table 1 summarizes the characteristics of participants.

Table 1 Participant characteristics

Of the 51 individuals who consented to participate, 9 (18%) voluntarily discontinued before completing the study. One of these participants stopped participation before completing the SLUMS.

The mean (95% confidence interval) System Usability Scale (SUS) rating was 76 (71–81) overall. Mean SUS ratings were similar across SLUMS score categories (Table 2). We interpreted the SUS according to the adjective ratings framework reported by Bangor et al. [19], where a score of 52 maps to ‘OK’ usability, 73 to ‘good’ usability, 85 to ‘excellent’ usability, and 100 to ‘best imaginable’ usability. Accordingly, we estimate that LifeBio Brain had good usability, although the confidence interval included ‘OK’ usability. The Pearson correlation coefficient between SUS and SLUMS was -0.03, consistent with weak or no correlation.

Table 2 System Usability Scale (SUS) rating by St. Louis University Mental Status Examination (SLUMS) score category

In the exploratory analysis of the relationship between study completion and SLUMS score category, the Pearson chi-square statistic was 6.10 (P = 0.047). The median (IQR) SLUMS score was 26 (23–28) in participants who completed the study and 20 (13–24) in participants who withdrew before completion. In the ROC analysis of the SLUMS score as a predictor of study completion, the area under the ROC curve (AUC) was 0.78, with a 95% confidence interval of 0.58–0.97 (Fig. 1). A SLUMS score cutpoint of ≥ 15 correctly classified study completion in most (88%) participants, with a sensitivity of 98% and a specificity of 38%.
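Reusing the simulated `slums` and `completed` arrays from the earlier bootstrap sketch, the cutpoint metrics reported above correspond to the following arithmetic (again illustrative, not the study's computation):

```python
# Predict completion at the SLUMS >= 15 cutpoint.
predicted = (slums >= 15).astype(int)

tp = int(((predicted == 1) & (completed == 1)).sum())  # true positives
tn = int(((predicted == 0) & (completed == 0)).sum())  # true negatives
fp = int(((predicted == 1) & (completed == 0)).sum())  # false positives
fn = int(((predicted == 0) & (completed == 1)).sum())  # false negatives

accuracy = (tp + tn) / len(completed)  # paper reports 88% correctly classified
sensitivity = tp / (tp + fn)           # paper reports 98%
specificity = tn / (tn + fp)           # paper reports 38%
print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f}")
```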

Fig. 1 Receiver Operating Characteristic (ROC) curve of St. Louis University Mental Status Examination (SLUMS) score as a predictor of withdrawal from the study before completion. The area under the ROC curve was 0.78.

Discussion

The objective of this study was to measure the usability of take-home mobile device-based cognitive testing in older adult primary care patients. Participants who completed the study protocol found the prototype to have good usability overall, and the mean SUS rating was similar across categories of SLUMS score. SLUMS score predicted study completion better than chance, but the ROC AUC was below the conventional lower limit of 0.8 for the ‘moderate’ range [24]. Our study demonstrates that, on average, patients in our geriatric primary care practice found mobile tablet-based take-home cognitive testing usable, but our confidence interval included ‘OK’ usability. We also found that participants with lower SLUMS scores tended to discontinue participation early, suggesting that participants with moderate to severe cognitive impairment did not find the testing platform usable.

In the context of prior research demonstrating that mobile device-based cognitive testing can have acceptable diagnostic performance, our study establishes that a geriatric primary care population considers this testing modality usable. Furthermore, our study demonstrates the usability of cognitive testing on a take-home device in a geriatric primary care setting. Cognitive testing using take-home mobile devices offers an appealing alternative to in-office pen-and-paper cognitive screenings, which strain limited provider time in primary care settings [2, 25]. Our findings align with other recent studies demonstrating the feasibility and acceptability of self-administered mobile device-based cognitive tests [26,27,28]. Our study also aligns with prior studies finding that cognitive testing is feasible with various digital platforms, including computer, web-based, and virtual-reality-based platforms [29]. Some distinguishing features of our study include the use of a take-home device rather than a patient’s own device, our inclusion of cognitively impaired volunteers, and the geriatric primary care setting. Take-home mobile device-based testing will enable longitudinal assessments, which could address limitations in the diagnostic specificity of tests such as the Montreal Cognitive Assessment [30]. Take-home mobile device-based testing will also enable the incorporation of digital biomarkers such as speech and eye movement parameters into testing protocols, whereby mobile device-based testing could eventually surpass the diagnostic performance of traditional testing methods in diagnosing Alzheimer’s Disease and related dementias [31].

This work helps establish a broad future role for mobile device-based cognitive testing. Users could interact with such testing platforms in traditional healthcare settings for a one-time screening or infrequent monitoring, and at home for screening and more frequent monitoring than is possible with conventional pen-and-paper tests. Beyond easing the time burden of administering every test and making these tests more enjoyable and gamified for patients, future development of mobile device-based cognitive testing could exceed the sensitivity and specificity of pen-and-paper tests by measuring: 1) attention (e.g., auditory, sustained vigilance, working memory); 2) processing speed; 3) language (e.g., generativity, fluency, object naming); 4) learning and memory (e.g., free recall and recognition); 5) executive functioning (e.g., mental flexibility, set-shifting, problem-solving, abstract reasoning); and 6) visual-perceptual reasoning. Mobile device-based cognitive testing could also offer clinicians highly interpretable computer-generated diagnostic reports and a testing experience far more pleasant and game-like than conventional pen-and-paper cognitive screening tools.

Limitations

This study tested the usability of a prototype; we have yet to establish the psychometric validity of this particular set of test modules. Several features of our study may limit the generalizability of our findings. All participants self-identified as White and non-Hispanic, so our findings may not generalize to a more diverse population of older adults, and our conclusions may not account for variation in cognitive experiences and preferences across racial, ethnic, and cultural backgrounds. This study required usable home Wi-Fi service and the motor and sensory ability to use the study device, so the results may not generalize to older persons without high-speed internet access or to economically disadvantaged persons. We required a legally authorized representative for subjects with questionable capacity for informed consent, so our results may not generalize to unbefriended older adults with mild-to-moderate cognitive impairment. Finally, self-selection likely occurred at multiple stages of recruitment, and our sample should be assumed to represent older adults who visit an academically affiliated geriatric primary care practice and are comfortable volunteering for research on cognitive testing, a possible source of healthy user bias. Future studies should enroll a more heterogeneous sample to establish the usability and acceptance of tablet-based cognitive testing across a broader spectrum of older adults.

In-person administration of the SUS could have induced social desirability bias, where participants could have scored usability more favorably than they would have using a digital or pen-and-paper instrument. Future studies should include objective usability outcomes such as device-collected emotional response data. Our study did not qualitatively explore the reasons for discontinuation among participants with more severe cognitive impairment. Qualitative inquiry into the discontinuation of self-administered cognitive testing would be an important direction for future research in this area.

Conclusion

Take-home mobile device-based cognitive testing is a usable strategy for older adult primary care patients across a range of cognitive function. However, more severe cognitive impairment predicts unwillingness to engage with this technology. Future studies must systematically enroll economically disadvantaged persons, non-English-speaking persons, and persons from racial minorities to ensure that results generalize to all potential users, so that the technology can achieve optimal public health impact. Take-home mobile device-based testing could be part of a flexible cognitive testing and follow-up strategy that includes mobile device-based testing in healthcare settings and pen-and-paper cognitive testing, depending on individual patient characteristics. Mobile device-based cognitive testing has the potential to increase the flexibility and reach of cognitive screening and follow-up for older adults at risk of or diagnosed with Major Neurocognitive Disorder.

Availability of data and materials

Participants in this research did not consent to the dissemination of individual-level data.

Abbreviations

SLUMS:

St. Louis University Mental Status Examination

SUS:

System Usability Scale

CI:

Confidence interval

IQR:

Interquartile range

PLWD:

Persons living with dementia

MCI:

Mild cognitive impairment

IRB:

Institutional Review Board

HIPAA:

Health Insurance Portability and Accountability Act

ROC:

Receiver Operating Characteristic

References

  1. Chan JYC, Yau STY, Kwok TCY, Tsoi KKF. Diagnostic performance of digital cognitive tests for the identification of MCI and dementia: a systematic review. Ageing Res Rev. 2021;72:101506.

  2. Yokomizo JE, Simon SS, De Campos Bottino CM. Cognitive screening for dementia in primary care: a systematic review. Int Psychogeriatr. 2014;26(11):1783–804.

  3. Manly JJ. Advantages and disadvantages of separate norms for African Americans. Clin Neuropsychol. 2005;19(2):270–5.

  4. Tolea MI, Chrisphonte S, Galvin JE. The effect of sociodemographics, physical function, and mood on dementia screening in a multicultural cohort. Clin Interv Aging. 2020;15:2249–63.

  5. Rossetti HC, Lacritz LH, Hynan LS, Cullum CM, Van Wright A, Weiner MF. Montreal cognitive assessment performance among community-dwelling African Americans. Arch Clin Neuropsychol. 2016;acn:acw095v1.

  6. Rossetti HC, Smith EE, Hynan LS, Lacritz LH, Cullum CM, Van Wright A, et al. Detection of mild cognitive impairment among community-dwelling African Americans using the montreal cognitive assessment. Arch Clin Neuropsychol. 2019;34(6):809–13.

  7. Devlin KN, Brennan L, Saad L, Giovannetti T, Hamilton RH, Wolk DA, et al. Diagnosing mild cognitive impairment among racially diverse older adults: comparison of consensus, actuarial, and statistical methods. Okonkwo O, editor. J Alzheimers Dis. 2022;85(2):627–44.

  8. Zahodne LB, Sharifian N, Kraal AZ, Zaheed AB, Sol K, Morris EP, et al. Socioeconomic and psychosocial mechanisms underlying racial/ethnic disparities in cognition among older adults. Neuropsychology. 2021;35(3):265–75.

  9. Bernstein A, Rogers KM, Possin KL, Steele NZR, Ritchie CS, Kramer JH, et al. Dementia assessment and management in primary care settings: a survey of current provider practices in the United States. BMC Health Serv Res. 2019;19(1):919.

  10. Koo BM, Vizer LM. Mobile technology for cognitive assessment of older adults: a scoping review. Innov Aging. 2019;3(1). Available from: https://academic.oup.com/innovateage/article/doi/10.1093/geroni/igy038/5266911. Cited 2023 Aug 3

  11. Nef T, Chesham A, Schütz N, Botros AA, Vanbellingen T, Burgunder JM, et al. Development and evaluation of maze-like puzzle games to assess cognitive and motor function in aging and neurodegenerative diseases. Front Aging Neurosci. 2020;21(12):87.

  12. Berron D, Ziegler G, Vieweg P, Billette O, Güsten J, Grande X, et al. Feasibility of digital memory assessments in an unsupervised and remote study setting. Front Digit Health. 2022;26(4):892997.

  13. Thabtah F, Peebles D, Retzler J, Hathurusingha C. Dementia medical screening using mobile applications: a systematic review with a new mapping model. J Biomed Inform. 2020;111:103573.

  14. Noyes ET, Major S, Wilson AM, Campbell EB, Ratcliffe LN, Spencer RJ. Reliability and factor structure of the Saint Louis University Mental Status (SLUMS) examination. Clin Gerontol. 2023;46(4):525–31.

  15. Shwartz SK, Morris RD, Penna S. Psychometric properties of the Saint Louis University Mental Status Examination. Appl Neuropsychol Adult. 2019;26(2):101–10.

  16. Cummings-Vaughn LA, Chavakula NN, Malmstrom TK, Tumosa N, Morley JE, Cruz-Oliver DM. Veterans affairs Saint Louis University Mental Status examination compared with the Montreal Cognitive Assessment and the short test of mental status. J Am Geriatr Soc. 2014;62(7):1341–6.

  17. Llinàs-Reglà J, Vilalta-Franch J, López-Pousa S, Calvó-Perxas L, Torrents Rodas D, Garre-Olmo J. The trail making test. Assessment. 2017;24(2):183–96.

  18. Gromisch ES, Beauvais J, Iannone LP, Marottoli RA. Optimizing clock drawing scoring criteria: development of the west haven-yale clock drawing test. J Am Geriatr Soc. 2019;67(10):2129–33.

  19. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum-Comput Interact. 2008;24(6):574–94.

  20. Brooke J. SUS: A “quick and dirty” usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland IL, editors. Usability evaluation in industry. London: Taylor & Francis; 1996. Available from: https://www.taylorfrancis.com/books/9781498710411. Cited 2023 Aug 30.

  21. Peres SC, Pham T, Phillips R. Validation of the System Usability Scale (SUS): SUS in the Wild. Proc Hum Factors Ergon Soc Annu Meet. 2013;57(1):192–6.

  22. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837.

  23. Tariq SH, Tumosa N, Chibnall JT, Perry MH, Morley JE. Comparison of the Saint Louis University mental status examination and the mini-mental state examination for detecting dementia and mild neurocognitive disorder–a pilot study. Am J Geriatr Psychiatry Off J Am Assoc Geriatr Psychiatry. 2006;14(11):900–10.

  24. Lasko TA, Bhagwat JG, Zou KH, Ohno-Machado L. The use of receiver operating characteristic curves in biomedical informatics. J Biomed Inform. 2005;38(5):404–15.

  25. Athilingam P, Visovsky C, Elliott AF, Rogal PJ. Cognitive screening in persons with chronic diseases in primary care: challenges and recommendations for practice. Am J Alzheimers Dis Dementiasr. 2015;30(6):547–58.

  26. Schweitzer P, Husky M, Allard M, Amieva H, Pérès K, Foubert-Samier A, et al. Feasibility and validity of mobile cognitive testing in the investigation of age-related cognitive decline. Int J Methods Psychiatr Res. 2017;26(3):e1521.

  27. Moore RC, Swendsen J, Depp CA. Applications for self-administered mobile cognitive assessments in clinical research: a systematic review. Int J Methods Psychiatr Res. 2017;26(4):e1562.

  28. Lancaster C, Koychev I, Blane J, Chinner A, Wolters L, Hinds C. Evaluating the feasibility of frequent cognitive assessment using the Mezurio smartphone app: observational and interview study in adults with elevated dementia risk. JMIR MHealth UHealth. 2020;8(4):e16142.

  29. Cubillos C, Rienzo A. Digital cognitive assessment tests for older adults: systematic literature review. JMIR Ment Health. 2023;8(10):e47487.

  30. Davis DH, Creavin ST, Yip JL, Noel-Storr AH, Brayne C, Cullum S. Montreal Cognitive Assessment for the detection of dementia. Cochrane Dementia and Cognitive Improvement Group, editor. Cochrane Database Syst Rev. 2021;2021(7). Available from: http://doi.wiley.com/10.1002/14651858.CD010775.pub3. Cited 2023 Aug 14.

  31. Kourtis LC, Regele OB, Wright JM, Jones GB. Digital biomarkers for Alzheimer’s disease: the mobile/wearable devices opportunity. Npj Digit Med. 2019;2(1):9.

Acknowledgements

The authors would like to acknowledge the individuals who graciously volunteered as subjects in LifeBio Brain Phase 1, and the Brown Medicine Geriatrics Practice whose cooperation made this work possible.

Funding

This work was funded by NIA SBIR 1R43AG076341-01.

Author information

Contributions

TAB was site principal investigator of the study, and led data analysis and manuscript production. YL contributed to data collection, data management, and manuscript production. IV contributed to data collection, management, and manuscript production. DB contributed to data interpretation and manuscript production. LS and JS led prototype design, and participated in study design, data interpretation, and manuscript production. AS contributed to prototype development and study design. RW contributed to project management, data interpretation, and study design. SG contributed to study design, project management, data collection, data management, data interpretation, and manuscript production.

Corresponding author

Correspondence to Thomas A. Bayer.

Ethics declarations

Ethics approval and consent to participate

Advarra provided Institutional Review Board (IRB) approval and oversight of all study materials and procedures through a reliance agreement with the Rhode Island Hospital IRB. Advarra complies with US federal regulations and the ethical principles of the Belmont Report, the Nuremberg Code, and the Declaration of Helsinki. The Advarra IRB approved this study on August 24, 2022, protocol number Pro00062144. All participants or their legally authorized representatives provided written informed consent using an IRB-approved Informed Consent Form.

Consent for publication

Not applicable.

Competing interests

Lisbeth Sanders, Rebecca Williams, and Anthony Serpico have a commercial interest in the development and commercialization of the prototype that was tested in this study. The remaining authors have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Bayer, T.A., Liu, Y., Vishnepolskiy, I. et al. Usability of an at-home tablet-based cognitive test in older adults with and without cognitive impairment. BMC Digit Health 2, 66 (2024). https://doi.org/10.1186/s44247-024-00123-7
