Artificial intelligence tools: evidence-mapping on the perceived positive effects on patient-care and confidentiality

Abstract

Background

Globally, healthcare systems have long contended with well-known and seemingly intractable challenges relating to the safety, quality, efficiency, and effectiveness of clinical and administrative patient-care services. To confront these and other healthcare challenges, the World Health Organisation proposed the full adoption of artificial intelligence (AI) applications in patient care to stimulate efficiency and guarantee quality in patient management.

Purpose

This review aimed to establish the extent and type of evidence on the positive effects of the use of AI tools in patient care. The review therefore mapped evidence from articles published between January 1, 2010, and October 31, 2023.

Methods

Consistent with the protocol by Tricco et al., a comprehensive literature search was executed across Nature, PubMed, Scopus, ScienceDirect, Dimensions, Web of Science, EBSCOhost, ProQuest, JSTOR, Semantic Scholar, Taylor & Francis, Emerald, the World Health Organisation, and Google Scholar. Applying the inclusion and exclusion criteria, 95 peer-reviewed articles were included in this review.

Findings

We report that the use of AI tools can significantly improve the accuracy of clinical diagnosis to guarantee better patient health outcomes. AI tools also have the ability to mitigate, if not eliminate, most of the factors that currently predict poor patient outcomes. Furthermore, AI tools are far more efficient in generating robust and accurate data in real time and can help ease and accelerate workflow at healthcare facilities.

Conclusion

If properly integrated into the healthcare system, AI will help reduce patients’ waiting time and accelerate the attainment of Sustainable Development Goals 3.4, 3.8, and 3.b. We propose that AI developers collaborate with public health practitioners and healthcare managers to develop AI applications that appreciate socio-cultural dimensions in patient care.

Introduction

The global healthcare system is challenged by the scarcity of critical healthcare professionals, changes in disease patterns, the high cost of healthcare, the adverse effects of climate change, pandemics, and access and equity issues, among others [1,2,3,4]. Historically, the healthcare system has always contended with well-known and seemingly intractable challenges, including safety, treatment-diagnosis mismatch, misdiagnosis, under- and over-prescription, inaccurate and incomplete patient records, and inadequate resources and workforce to sustain the ever-stretched patient-care services [5, 6]. Given that the deadline set for the realisation of the Sustainable Development Goals (SDGs) is fast approaching, healthcare managers are adopting several strategies to fix these challenges sustainably [3, 7]. Considering that there is no quick fix to the numerous sets of healthcare challenges, the World Health Organisation (WHO) proposed, in addition to other interventions, a full integration of artificial intelligence (AI) tools in healthcare to stimulate efficiency and accelerate the realisation of the health-related SDGs [3, 7, 8].

AI tools are a set of technologies with computerised features that have the capacity to simulate intelligent human behaviours [5, 8]. These tools offer speed and huge data storage and processing capacity; they are reliable, interoperable with other technological systems, and far more accurate in their interpretations and patient diagnoses [9, 10]. When effectively combined with human reasoning, AI tools can accurately establish subtle patterns and complex correlations in large, high-dimensional datasets that often escape traditional techniques [8, 11].

Though the full adoption of AI tools into healthcare is yet to be realised, there is evidence of the wide application of AI tools in patient care globally [10, 12, 13]. So far, AI applications in patient care are becoming more sophisticated, effective, and efficient in supporting clinical and administrative decisions [2, 12]. Regardless of the level of AI use in patient care across the globe, these intelligent machines appear to be highly supportive and could redefine the future of healthcare and change its face for the better [10, 14, 15]. The utilisation of AI and other new technologies surged globally during the SARS-CoV-2 pandemic [16,17,18]. For instance, various AI and new technological platforms and devices were utilised to provide continuous and essential healthcare services to patients, including predicting mortality during the pandemic [16,17,18]. Some of these platforms and devices include mobile-based self-care, video conferencing, virtual healthcare, tele-monitoring, tele-medicine, tele-consulting, tele-intensive care units, e-consult, tele-radiology, virtual visits, and telesurgical services [16,17,18]. Ultimately, the patient remains the central “subject” of this whole discourse, in whose best interest AI tools are deployed in the healthcare system [2, 19, 20].

While there is growing recognition of the utility of AI tools in patient care, their coverage in the developing world remains limited [21,22,23]. So far, Asia, North America, and Europe appear to be the continents with the fastest-growing AI coverage and the most widespread application in patient-care services [13, 22, 24]. Though these continents are far from realising full adoption of AI applications in all aspects of their healthcare systems, there are modest gains across them [25, 26]. For instance, funding for research projects on AI adoption in healthcare through the European Union Horizon 2020 scheme rose sharply between 2014 and 2020 [27]. Moreover, the European Commission developed several ethicolegal instruments to regulate and guarantee the responsible design and use of AI systems in patient care and beyond [22, 27].

In North America, for example, AI tools are currently being applied in the management of cancer, hypertension, cerebrovascular accidents and related conditions, and in obstetric as well as paediatric care services [6, 9, 27]. The other continents, including Australia, South America, and Africa, have also recorded modest successes in the application of AI tools in their healthcare systems [7, 13, 22, 28]. Though records exist of AI applications in invasive and non-invasive procedures in these continents, especially in Africa, their use is more associated with smart devices aided by applications such as AiCure and a gamut of AI chatbots [19, 20, 22].

Despite the growing application of AI in other fields worldwide, there seems to be an inadequate empirical account, especially evidence synthesis, to clearly establish common themes and concepts across existing literature on the use of AI tools in patient care [13, 15, 20]. Specifically, there is inadequate evidence mapping of how the use of AI tools in healthcare is positively impacting patient care globally [2]. Meanwhile, these pieces of evidence are essential for developing policy and for the evidence-based integration of AI tools into healthcare for improved patient outcomes. Moreover, the role of AI in achieving SDG 3.8 (attainment of universal health coverage, including access to quality essential healthcare services, medicines and vaccines for all) by 2030 is not clear [3, 7, 8]. Certainly, AI would be a critical resource in this global quest, and reviews collating such pieces of evidence are urgently needed. Therefore, this review aimed to fill this research gap by mapping the existing evidence of the positive effects of the use of AI tools in patient care.

Methods

Study design and search strategy

We examined, synthesised, and analysed peer-reviewed articles using the guidelines of Tricco et al. [29]. These include developing and examining the purpose of the study; crafting, reviewing, and refining the research questions; and identifying and discussing the search terms. The remaining guidelines include identifying and exploring relevant databases, downloading articles, data mining, organising and synthesising results, and carrying out consultation. Five questions defined the review: “What are the potential positive effects of AI applications on (1) healthcare data, (2) diagnostic decisions, (3) patient care, (4) medical errors, and (5) medical emergencies?”

To inject rigour and comprehensiveness into the search process, we first explored PubMed for Medical Subject Headings (MeSH) terms on the topic (see Table 1). The MeSH terms were validated by a research librarian with over 10 years of working experience in the university. The search for articles was executed at two levels based on the MeSH terms. First, the search terms “Confidentiality” or “Artificial Intelligence” yielded a total of 4,512 articles. Second, the search was based on 30 MeSH terms and produced a total of 1,688 articles (see Fig. 1 and Table 1). The search covered studies conducted between January 1, 2010, and October 31, 2023. The current study was executed between January 1 and October 31, 2023.
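
For readers who wish to reproduce a comparable date-limited PubMed query programmatically, the sketch below uses Biopython's Entrez utilities; the query string, contact e-mail, and record ceiling are illustrative assumptions rather than the exact strategy listed in Table 1.

```python
# Minimal sketch of a date-limited PubMed search (assumes Biopython is installed).
# The query string is illustrative; the full strategy used 30 MeSH terms (see Table 1).
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # assumed placeholder; NCBI asks for a contact e-mail

query = '"Artificial Intelligence"[MeSH Terms] AND "Patient Care"[MeSH Terms]'

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # restrict by publication date
    mindate="2010/01/01",
    maxdate="2023/10/31",
    retmax=5000,            # assumed ceiling on returned PMIDs
)
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])
print("First PMIDs:", record["IdList"][:10])
```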

Table 1 Search strategy
Fig. 1 PRISMA flow diagram

This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [29, 30]. Through a detailed and exhaustive data-screening process, all duplicate articles were kept in a folder and later deleted. Many of the deleted articles were also inconsistent with the inclusion standards (described below). The first-level screening was executed by five authors (FSA, SM, RVK, LAA, and IST). Where the suitability of an article was in contention, that article was referred to four other authors (EWA, CES, VKD, and NNB) for further assessment until consensus was attained. To ensure comprehensiveness and rigour in the search process, citation chaining was done on all full-text articles that met the inclusion standards to identify relevant additional articles for further assessment.
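
As a simple illustration of the deduplication step described above, the sketch below pools exported records from several databases and removes exact duplicates before title and abstract screening; the file names and column labels (doi, title) are assumptions, and the screening decisions themselves were made by the authors, not by code.

```python
# Sketch of pooling database exports (assumed CSV files with 'doi' and 'title' columns)
# and removing duplicate records before title/abstract screening.
import pandas as pd

files = ["pubmed.csv", "scopus.csv", "web_of_science.csv"]  # assumed export file names
records = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

# Normalise the matching fields so trivial case/whitespace differences do not hide duplicates.
records["doi_norm"] = records["doi"].str.lower().str.strip()
records["title_norm"] = records["title"].str.lower().str.strip()

deduplicated = records.drop_duplicates(subset=["doi_norm", "title_norm"])

print(f"Pooled records: {len(records)}; after deduplication: {len(deduplicated)}")
deduplicated.to_csv("records_for_screening.csv", index=False)
```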

Data sources

We searched for peer-reviewed articles from the following databases/publishers/search engines: Nature, PubMed, Scopus, ScienceDirect, Dimensions, Web of Science, EBSCOhost, ProQuest, JSTOR, Semantic Scholar, Taylor & Francis, Emerald, the World Health Organisation, and Google Scholar (see Fig. 1 and Table 1). Through a comprehensive and independent assessment of these databases/publishers/search engines conducted by five authors (EWA, VKD, CES, SM, and NNB), the sources mentioned above were found to contain a substantial number of relevant articles on the subject under review.

Study selection

A random sample of 12 titles and abstracts screened independently by four authors (EWA, NNB, CES, and SM) was used to standardise the inclusion and exclusion criteria. Weekly virtual and in-person meetings were held to discuss and reconcile disagreements and clarify eligibility among the four authors (EWA, NNB, CES, and SM) and the fifth author (VKD). Before progressing to full-text screening, authors ensured that all differences concerning the selection of articles were resolved by five authors (EWA, NNB, CES, SM, and VKD). Three authors (EWA, NNB, and CES) independently screened the articles before data extraction commenced.

Data extraction and thematic analysis

All authors carried out data extraction independently. Four authors (CES, RVK, LAA, and IST) extracted data on “authors, purpose, methods, and country,” while five authors (EWA, VKD, FSA, SM, and NNB) extracted data on “perceived positive effects and conclusions” (see Table 2).

Table 2 Extracted data

In consonance with Cypress [126] and Morse [127], thematic analysis was done by six authors (EWA, VKD, CES, SM, RVK, and NNB). Data were coded until themes emerged directly from the data, in line with the stated research questions [127,128,129]. Our analysis involved repeated reading to familiarise ourselves with the data, identifying candidate codes, and identifying and assessing emerging themes. Emerging themes were then reviewed, clearly named, and defined. Where doubts arose, we discussed extensively until consensus was established. Finally, a qualitative report was developed and extensively reviewed to guarantee internal and external homogeneity of the themes.

Quality rating

We conducted quality ratings on all candidate articles in line with the guidelines provided by Tricco et al. [29]. That is, the shortlisted articles must report a research background, aim, context, clear method, sampling technique, data collection and analysis, reflexivity, value of research, and ethics. All candidate articles were examined and scored against these criteria. Articles that scored “A” had little or no limitations, “B” had some limitations, “C” had substantial limitations but carried some relevance, and “D” had substantial flaws that could undermine the validity of the study as a whole; such articles were not used for this review [29].
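
A minimal sketch of how this A–D rating could be recorded is shown below; the criteria list follows the description above, while the numeric cut-offs mapping unmet criteria to a letter grade are our own illustrative assumption.

```python
# Illustrative quality-rating helper: each article is checked against the reporting
# criteria listed above and mapped to a letter grade. The numeric cut-offs are assumed.
CRITERIA = [
    "background", "aim", "context", "clear_method", "sampling",
    "data_collection_and_analysis", "reflexivity", "value_of_research", "ethics",
]

def quality_grade(satisfied):
    """Map the set of satisfied criteria to the A-D scale used in this review."""
    unmet = sum(1 for criterion in CRITERIA if criterion not in satisfied)
    if unmet == 0:
        return "A"  # little or no limitations
    if unmet <= 2:
        return "B"  # some limitations
    if unmet <= 4:
        return "C"  # substantial limitations but still some relevance
    return "D"      # substantial flaws; excluded from the review

# Example: an article meeting all criteria except reflexivity and ethics scores "B".
print(quality_grade({"background", "aim", "context", "clear_method", "sampling",
                     "data_collection_and_analysis", "value_of_research"}))
```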

Findings

We explored previous studies conducted from January 1, 2010, to October 31, 2023, on the positive effects of AI tools on patient care. A total of 1,688 articles were screened, of which 527 (31%) discussed the use of AI applications in healthcare. Upon further assessment of the 527 articles, 95 (18%) met the inclusion standards and were used in this review. The included 95 articles were distributed across the following years: 2023 = 1 (1%), 2022 = 7 (7%), 2021 = 30 (32%), 2020 = 23 (24%), 2019 = 11 (12%), 2018 = 9 (9.5%), 2017 = 9 (9.5%), 2016 = 1 (1%), 2015 = 2 (2%), 2014 = 1 (1%), and 2013 = 1 (1%) (see Fig. 2). Additionally, 65 (68%) of the reviewed articles adopted a quantitative approach, 20 (21%) applied a qualitative approach, and 10 (11%) used a mixed-methods approach. Furthermore, the reviewed articles were conducted across: Asia = 36 (39%), North America = 25 (26%), Europe = 19 (20%), Australia = 5 (5%), South America = 4 (4%), Africa = 3 (3%), North America & Asia = 1 (1%), North America & Europe = 1 (1%), and Europe & Asia = 1 (1%) (see Fig. 3). Clearly, the articles reviewed were disproportionately concentrated in three continents (Asia, North America, and Europe), which together account for more than four-fifths (84%) of the total articles reviewed. Though Australia, South America, and Africa recorded very few articles in this review, this may mean that most of the articles from these continents did not meet our inclusion standards.
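
The percentage distributions reported above can be recomputed directly from the raw counts; the short sketch below does so for the yearly, methodological, and geographical breakdowns (counts taken from this section, percentages rounded to whole numbers).

```python
# Recompute the reported percentage distributions from the raw counts in this section.
def distribution(counts):
    total = sum(counts.values())
    return {key: f"{value} ({value / total:.0%})" for key, value in counts.items()}

years = {"2023": 1, "2022": 7, "2021": 30, "2020": 23, "2019": 11, "2018": 9,
         "2017": 9, "2016": 1, "2015": 2, "2014": 1, "2013": 1}
methods = {"quantitative": 65, "qualitative": 20, "mixed methods": 10}
regions = {"Asia": 36, "North America": 25, "Europe": 19, "Australia": 5,
           "South America": 4, "Africa": 3, "North America & Asia": 1,
           "North America & Europe": 1, "Europe & Asia": 1}

for label, counts in [("Years", years), ("Methods", methods), ("Regions", regions)]:
    assert sum(counts.values()) == 95  # all three breakdowns cover the 95 included articles
    print(label, distribution(counts))
```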

Fig. 2 Yearly distribution of included articles in percentages

Fig. 3 Geographical distribution of included articles in percentages

Improvement in patient diagnosis

The majority of the reviewed articles, 66 (69%) [31, 32, 35, 37,38,39, 41, 43, 44, 47, 49,50,51,52, 55,56,57,58, 62, 63, 65,66,67,68,69,70, 73,74,75,76, 79,80,81,82, 84,85,86,87, 89, 92, 94,95,96,97,98,99, 101,102,103,104,105,106,107, 110,111,112, 114,115,116,117,118,119, 122,123,124,125], reported a high sensitivity of AI applications in detecting various clinical conditions. It is widely reported [44, 50, 58, 62, 63, 65,66,67, 92, 110, 115] that AI applications can significantly improve the accuracy of clinical diagnosis. While these intelligent machines could act independently during patient care [32, 37, 43, 98, 101], they may also enhance the quality of decisions reached by clinicians [49,50,51,52, 55,56,57,58, 69, 103, 119]. According to van der Zander et al. [112], Visram et al. [114], and Wittal et al. [118], because AI tools thrive on large datasets, these “artificial clinicians” can diagnose far more diseases in a relatively shorter time than human clinicians. This looks very promising, considering the ability of AI tools to leverage algorithms that accurately predict future outbreaks of diseases within specific populations [58, 62, 63, 65, 74, 81, 119]. Although there are concerns about the ability of AI tools to act independently, the public is cautiously optimistic that these “artificial clinicians” could still be controlled to act responsibly [66, 73, 76, 84, 89] and thereby reduce errors.

Reduction of medical errors and improvement in workflow

Evidence (50; 53%) suggests that as AI tools are increasingly introduced into the workflow, the incidence of medical errors associated with workload and stress will fall significantly [32, 34, 42,43,44, 47,48,49, 52,53,54,55, 57, 58, 61,62,63,64, 67, 68, 70, 71, 73, 74, 77, 79,80,81,82,83, 86, 88, 91, 93, 98, 100, 101, 104, 105, 109,110,111,112, 115, 116, 118,119,120,121,122, 124]. Meanwhile, the global healthcare system is seeing a rapid reduction in the workforce per unit population while the number of patients seeking care is ever-increasing [105, 110, 124]. This situation arguably contributes to increases in the incidence of medical errors (sometimes fatal) [55, 101]. However, if properly deployed, AI tools can offer superior care and significantly reduce these errors [55, 67, 104, 110, 116]. This is significant because patients can be assured of adequate protection from avoidable medical errors. Consistent with Catho et al. [45], several other articles [52, 55, 57, 58, 62, 63, 67, 68, 110, 122] found that AI applications could easily be integrated into the workflow, facilitate physicians' clinical decisions, and produce near-accurate data.

Accurate and reliable data

Healthcare decisions, especially those regarding patients’ diagnoses, rely heavily on data that are incontrovertible, accurate, and reliable [48, 77, 87, 89, 91, 92, 95,96,97,98,99, 101,102,103,104,105,106,107]. Therefore, it is incumbent on healthcare managers to develop health information management systems that guarantee an uninterrupted supply of accurate and reliable patient data in real time for both administrative and clinical decision-making [79, 82, 89, 91, 93, 104]. Many of the reviewed articles (39; 41%) [34, 35, 48, 77,78,79,80,81,82, 84, 85, 87, 89, 91,92,93, 95,96,97,98,99, 101,102,103,104,105,106,107, 111,112,113,114, 116,117,118,119, 122, 124, 125] reported that AI tools hold enormous potential to process large volumes of patients’ data and make timely and accurate inferences. Apart from providing rich and accurate data for decision-making in clinical settings, AI tools provide expeditious and reliable data for quick action in epidemiological and public health fields [80, 89, 112]. Although concerns have been raised about data privacy [84, 94, 104, 113], the public believes that AI tools would positively impact healthcare decisions and enhance trustworthiness to improve patient care.

Improvement in patient-care

Artificial intelligence tools are credited with the ability to mitigate, if not eliminate, most of the factors that currently predict poor patient outcomes (23; 24%) [32, 33, 36, 40, 45, 49, 54, 55, 60, 71, 82, 94,95,96, 100, 103, 108, 109, 113, 117, 121, 123, 125]. These include errors in clinical diagnosis, long waiting times, poor staff attitudes, inaccurate and missing patient data, workload and staff burnout, and discrimination arising from the large number of patients needing care [32, 40, 49, 55, 82, 109, 121]. According to Fritsch et al. [55], Wang et al. [117], MacPherson et al. [82], and Ploug et al. [95], when AI tools that serve as machine clinicians are applied in combination with human clinicians, a rich and valuable context is created for improving the quality of care provided to patients. Moreover, AI tools provide valuable opportunities for patients to receive needed care remotely [54, 96, 121, 123]. For instance, domestic caregivers could receive valuable guidance from machine clinician AI tools when confronted with difficult decisions regarding patients with chronic conditions such as cerebrovascular accident and related conditions, hypertension, and diabetes. This reduces the stressors associated with caring for patients with such chronic conditions and improves the quality of care [82, 95, 103], which may prevent or reduce medical emergencies.

Prompt detection of medical emergencies

Changes in the conditions of patients can sometimes be sudden and unpredictable, especially during emergency care [37, 51, 60, 106, 110]. As reported in 16 (17%) of the selected articles [31, 33, 35,36,37, 46, 51, 59, 60, 72, 90, 102, 106, 109, 110, 113], with the introduction of AI tools into healthcare, clinicians can now detect problems early and act swiftly in providing life-saving care to patients during medical emergencies. This is possible because AI tools have features that can trigger instantaneous alerts on imminent changes in patient conditions, such as seizures and strokes, and ensure timely medical interventions [31, 36, 51, 59, 107]. The public [51, 72, 110, 113] is hopeful that, if well implemented, AI tools could become valuable in the management of medical emergencies.

Discussion

The utilisation of AI tools in governance, academia, manufacturing, security, entertainment, space and marine exploration, health, and other fields is gaining popularity among researchers globally [15, 24]. There are several studies about the utility of AI tools in other fields [10, 15, 24], yet few studies exist on direct AI applications in patient care [13]. Moreover, most of the studies that examined the application of AI tools in health were not conducted in the area of patient care [2, 13]. Affirming this, the current study found that out of 527 articles on AI use in health, only 95 (18%) met the inclusion standards and formed part of this review. Besides, consistent with Khalid et al. [13] and Naik et al. [24], the current study reports that the articles reviewed were disproportionately concentrated in three continents (Asia, North America, and Europe), which together account for more than four-fifths of the total articles reviewed. We discuss our findings under the following themes: improvement in patient diagnosis, reduction in medical errors and facilitation of workflow, accurate and reliable data, improvement in patient care, and prompt detection of medical emergencies.

Improvement in patient diagnosis

Accurate and timely determination of a patient's diagnosis defines the patient-clinician relationship and is a key prerequisite for administering treatment for improved patient outcomes [13, 130]. The Center of Intellectual Property and Technology Law [12] explained that it is both a legal and moral obligation for a clinician to exercise due diligence in diagnosing patients' conditions and disclosing the same to the patient. The suggestion is that the design of AI tools confers a superior advantage in delivering accurate diagnoses in real time [14, 19]. Consistent with this, our review found that these machine clinician AI tools could significantly improve the accuracy of clinical diagnosis. Moreover, integrating AI tools into the care process will provide a robust and trustworthy context for clinicians to shape and improve their own diagnostic decisions. For instance, Al-Zaiti et al. [35] reported that an AI model outperformed both commercial interpretation software and experienced clinician interpretation in diagnosing underlying acute myocardial ischemia in patients with chest pain. Again, Amarbayasgalan et al. [37] reported that a reconstruction-error-based deep neural network (DNN) AI model outperformed other models in diagnosing coronary heart disease risk. Also, Ayatollahi et al. [38] found that artificial neural network (ANN) and support vector machine (SVM) algorithms had higher sensitivity in diagnosing cardiovascular disease.
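
To make the class of model referred to here concrete, the sketch below fits a support vector machine on a small synthetic tabular dataset and reports its sensitivity (recall); it is a generic illustration of the technique, not a reconstruction of the models evaluated by Ayatollahi et al. [38] or the other cited studies.

```python
# Generic illustration of an SVM classifier of the kind cited above, trained on
# synthetic data; the features, labels, and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

# Synthetic stand-in for tabular patient features (e.g., age, blood pressure, lipid levels).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Sensitivity is the recall of the positive (disease) class on held-out data.
sensitivity = recall_score(y_test, model.predict(X_test))
print(f"Sensitivity on held-out synthetic data: {sensitivity:.2f}")
```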

A note of caution, however, is that the current review contrasts with previous studies [10, 11] that reported the possibility of AI tools committing serious errors in their diagnostic decisions owing to factors such as inaccurate and biased data and inadequate machine training and learning. This notwithstanding, the findings of the current review, based on a large body of previous studies [12,13,14, 19, 130], suggest that AI tools in patient care significantly address shortcomings in the traditional diagnostic regime. Thus, if well implemented, AI tools in patient care could stimulate and accelerate the realisation of SDG 3.4, which calls for a one-third reduction in avoidable deaths from non-communicable diseases by 2030 [3, 7].

Reduction in medical errors and improvement in workflow

There is a correlation between increased workload, stress, and burnout among clinicians and the occurrence of medical errors in patient care [9,10,11]. Some of these medical errors result in serious negative health outcomes (including death) for patients. Our review found that AI tools can help minimise the incidence of medical errors associated with hospital-based stressors. In one example, Catho et al. [45] found that a computerised decision support system (CDSS) model could actually reduce medical errors and support the clinical decisions of physicians. Another finding [51] suggests that a coronary heart disease (CHD) prediction AI model could reduce medical errors and shape the clinical decisions of physicians. Thus, AI tools can enhance the trust that patients have in healthcare professionals. Additionally, the reviewed studies [5, 11, 15, 131] reported that AI tools will serve the best interest of patients. These machine clinician AI tools therefore appear to offer a promising way forward for healthcare.

Accurate and reliable data

The role of robust, accurate, and reliable data in the decision-making process regarding patient care cannot be overemphasised [13, 15]. Several studies [11, 14, 35, 80] recognised the superior capabilities of AI tools in procuring, organising, and preserving large volumes of data for use in both clinical and administrative decisions concerning patient care. For example, Huang et al. [67] reported that a major adverse cardiac events (MACE) prediction AI model can leverage a large data-mining-based approach in predicting acute coronary syndrome (ACS) in patients. Again, Kayvanpour et al. [73] reported that an in silico neural network AI model can utilise multi-modal data points in accurately predicting acute coronary syndromes in patients. However, the findings from the current study stand in contrast to some earlier ones [10, 12] which suggested that AI tools could seriously compromise data privacy. In addition, AI tools are vulnerable to attack by computer hackers who could misuse patient records [9, 19]. Although these concerns are legitimate, some previous studies [2, 15] suggest that AI tools have inherent security mechanisms against data leaks and theft. Moreover, in the long term, the odds are high that AI tools will guarantee more reliable, accurate, and timely data in patient care. Thus, with proper design and implementation, including an effective ethicolegal framework, AI tools used in patient care would help in achieving SDG 3.b, which calls for research and development of vaccines and medicines for communicable and non-communicable diseases by 2030 [3, 7].

Improvement in patient-care

The traditional patient-care regime is largely characterised by delays in receiving care, discrimination, poor care-provider attitudes, staff fatigue and stress, inadequate staffing, misdiagnosis, treatment-diagnosis mismatches, under- and over-prescription, and missing patient records, among other problems [8, 15]. These factors contribute to the ever-increasing incidence of mortality and complications in patient conditions recorded in most healthcare facilities worldwide [14]. However, our review found that AI tools have the capacity to significantly reduce, if not eliminate, most of the factors that currently undermine patient care. For example, Pattarabanjird et al. [93] reported a general improvement in the health of patients with severe coronary artery disease when a single nucleotide polymorphism (SNP) AI model was applied. Also, Uzir et al. [110] reported that AI-enabled smartwatch applications are promoting health democracy and personal healthcare. Meanwhile, Walter et al. [115] found that an automatic pain recognition (APR) AI system had a high sensitivity to pain in patients, which helps in pain management.

AI tools offer more effective and efficient data storage and protection, stronger data interpretation, more accurate diagnosis, and exceptionally expeditious and reliable service to patients [8, 11, 14]. For instance, previous studies [7, 13], as corroborated by the findings from our current review, indicated that AI tools have the ability to provide needed care remotely to patients who may not necessarily be physically present at the hospital. This will help reduce large patient numbers at the hospital and improve the overall turnaround time for care. However, our review contradicts other previous studies [20, 132], which raised concerns over AI tools providing care in a discriminatory manner; those studies also question the utility of AI tools in mental health services and are sceptical about their ability to provide non-pharmacological care. Regardless, a large body of previous studies [7, 13,14,15] suggests that AI tools could significantly improve the total quality of care to patients. Ultimately, AI tools would contribute significantly to the realisation of the universal health coverage provided for in SDG 3.8.

Prompt detection of medical emergencies

Critically ill patients are constantly under close monitoring for even the slightest change in condition [15, 130]. So far, AI tools have proven very helpful in saving the lives of many patients [11, 131]. For instance, with the introduction of AI tools into healthcare, clinicians can detect and act swiftly to provide life-saving care to patients during medical emergencies [5, 15]. One example is that the Automated Delirium-Risk Assessment System (Auto-DelRAS) has a high level of validity in predicting delirium risk in patients in the intensive care unit [85]. Also, Hu et al. [65] reported that Rough Set Theory (RST) and Dempster-Shafer Theory (DST) have higher sensitivity in predicting the incidence of major adverse cardiac events during medical emergencies. Further evidence [76] suggests that a deep neural network AI application has better sensitivity in predicting mortality among patients with spontaneous coronary artery dissection (SCAD). Given their algorithms, these intelligent applications can detect imminent changes in patient conditions and trigger instantaneous alerts for quick interventions [9]. Moreover, we found that there are AI applications that could significantly mitigate errors associated with AI tools [9, 14, 130]. So far, there seems to be no better or more competent alternative to AI tools in the management of medical emergencies.

Strengths and limitations

The review attempts to explore evidence of the perceived positive effects of AI applications in patient care from a global perspective. To ensure reproducibility, reliability, and trustworthiness of our findings, there was strict adherence to the following: first, all authors independently searched for and screened articles using the MeSH terms. Moreover, guided by the inclusion and exclusion guidelines and a checklist, all selected full-text articles were subjected to a quality rating. Additionally, to establish validity and replicability, all authors participated in a thorough data extraction process and review.

This review also has some limitations. First, relying only on peer-reviewed articles written in English limits the literature sample, because we might have excluded relevant articles published in other languages. Moreover, we recognise that the study may carry the weaknesses and biases contained in the reviewed articles. Therefore, the generalisability of our findings may be limited. Finally, it is important to acknowledge that, regardless of the positive outcomes reported for AI tools by several randomised trials, the generalisability of the utility of these intelligent tools is yet to be established, given that the overall utility of AI tools depends on the quality of the data and the training provided to the tools.

Recommendations for policy and research directions

We propose that governments leverage AI applications to aid and accelerate the realisation of the health-related SDGs in their jurisdictions. Specifically, governments in developing countries can provide financial support for the effective and efficient adoption of AI tools in healthcare. Secondly, there is a need to introduce academic courses on the application of AI tools in healthcare for all categories of health professionals, especially in developing countries. Thirdly, governments in developing countries need to sponsor biomedical engineers to be trained in the use of AI tools in healthcare; such training should, among other specialties, cover cyber intelligence. In addition, public health experts and healthcare managers need to collaborate with AI developers to develop applications that can efficiently provide both pharmacological and non-pharmacological care to patients. Furthermore, we propose that AI developers collaborate with healthcare managers to develop AI applications that are highly sensitive to socio-cultural dimensions in patient care. Lastly, we encourage the WHO and other agencies to provide sponsorship for research into AI applications, patient care, and the SDGs. Future research may explore how AI applications are promoting public health interventions, such as the fight against pandemics and epidemics. Also, assessing the potential negative effects of the use of AI tools in patient care is warranted.

While making these recommendations, we envisage some potential barriers. First, political decision-makers, especially in developing nations, may not commit to successfully implementing projects that incorporate AI tools into healthcare. Second, budgetary constraints and increasing competing demands on governments in most developing countries may limit the implementation of these recommendations. Third, unstable power supply, cultural sensitivities, potential hesitancy towards new technology, conspiracy theories, and the global prevalence of cyber threats could undermine the smooth implementation of AI tools in patient care in less-resourced countries.

Conclusion

AI applications are steadily and rapidly shaping the relationship between clinicians and patient care globally. This development has attracted some criticism, including potential breaches of privacy, data fraud, bias and discrimination, and a decline in humanity during patient care. However, AI applications are demonstrating an enhanced capacity that can change the course of our collective future for the better. Thus, AI tools have the ability to significantly improve the accuracy of clinical diagnosis, guarantee better health outcomes for patients, and mitigate, if not eliminate, most of the factors that currently predict poor patient outcomes. Furthermore, AI tools are far more efficient in generating robust and accurate data in real time and could help ease and accelerate workflow. Additionally, AI devices and applications are contributing largely to the management of medical emergencies.

Furthermore, if properly integrated into the healthcare systems and used in patient care, AI tools could accelerate the realisation of SDGs 3.4, 3.8, and 3.b. So far, there seems to be no going back on this journey of AI use, including their use as machine clinicians in patient care. Thus, the focus should be on ensuring that these tools are responsibly applied to provide the needed results—improvement in health outcomes. This study is a significant addition to existing evidence on the use of AI tools in healthcare.

Availability of data and materials

No datasets were generated or analysed during the current study.

Abbreviations

AI: Artificial intelligence
SDGs: Sustainable development goals
PRISMA-ScR: Preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews
MeSH: Medical subject headings

References

  1. Kassam I, Ilkina D, Kemp J, et al. Patient perspectives and preferences for consent in the digital health context: State-of-the-art literature review. J Med Internet Res. 2023. https://doi.org/10.2196/42507.

  2. Khan B, Fatima H, Qureshi A, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomedical Materials & Devices. 2023. https://doi.org/10.1007/s44174-023-00063-2.

  3. World Health Organization. The importance of ethics in artificial intelligence. In: WHO Consultation towards the development of guidance on ethics and governance of artificial intelligence for health: meeting report. Geneva: Switzerland. 2021. pp. 2–3. http://www.jstor.org/stable/resrep35680.6.

  4. Louiset M, Allwood D, Bailey S, et al. Let’s reconnect healthcare with its mission and purpose by bringing humanity to the point of care. BMJ Leader. 2023. https://doi.org/10.1136/leader-2023-000747.

  5. Coiera E, Liu S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Medicine. 2022. https://doi.org/10.1016/j.xcrm.2022.100860.

  6. Li X, Xu L, Gulliver TA, et al. Guest editorial: Special issue on artificial intelligence in e-healthcare and m-healthcare. Journal of Healthcare Engineering. 2021. https://doi.org/10.1155/2021/9857089.

  7. Earth Institute, Columbia University, and Ericsson. ICT & health: ICT & SDGs. Sustainable Development Solutions Network. 2016;60–75. http://www.jstor.org/stable/resrep15879.12.

  8. Wang C, Zhang J, Lassi N, et al. Privacy protection in using artificial intelligence for healthcare: Chinese regulation in comparative perspective. Healthcare. 2022. https://doi.org/10.3390/healthcare10101878.

  9. Chen C, Ding S, Wang J. Digital health for aging populations. Nat Med. 2023. https://doi.org/10.1038/s41591-023-02391-8.

  10. Davenport TH, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthcare Journal. 2019. https://doi.org/10.7861/futurehosp.6-2-94.

  11. Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ. 2020. https://doi.org/10.2471/BLT.19.237198.

  12. Center of Intellectual Property and Technology Law (CIPIT). State of AI in Africa 2023. Nairobi, Kenya: Author. 2023. https://creativecommons.org/licenses/by-nc-sa/4.0.

  13. Khalid N, Qayyum A, Bilal M, et al. Privacy-preserving artificial intelligence in healthcare: Techniques and applications. Comput Biol Med. 2023. https://doi.org/10.1016/j.compbiomed.2023.106848.

  14. Horgan D, Romao M, Morré SA, et al. Artificial intelligence: Power for civilisation - and for better healthcare. Public Health Genomics. 2019. https://doi.org/10.1159/000504785.

  15. Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med. 2019. https://doi.org/10.1177/0141076818815510.

  16. Mehraeen E, Mehrtak M, SeyedAlinaghi S, Nazeri Z, Afsahi AM, Behnezhad F, Vahedi F, Barzegary A, Karimi A, Mehrabi N, Dadras O, Jahanfar S. Technology in the Era of COVID-19: A Systematic Review of Current Evidence. Infect Disord Drug Targets. 2022;22(4):e240322202551. https://doi.org/10.2174/1871526522666220324090245.

  17. Mehraeen E, SeyedAlinaghi S, Heydari M, Karimi A, Mahdavi A, Mashoufi M, Sarmad A, Mirghaderi P, Shamsabadi A, Qaderi K, Mirzapour P, Fakhfouri A, Cheshmekabodi HA, Azad K, Bagheri Zargande S, Oliaei S, Yousefi Konjdar P, Vahedi F, Noori T. Telemedicine technologies and applications in the era of COVID-19 pandemic: A systematic review. Health Informatics J. 2023;29(2):14604582231167432. https://doi.org/10.1177/14604582231167431.

  18. Mohammadi S, SeyedAlinaghi S, Heydari M, Pashaei Z, Mirzapour P, Karimi A, Afsahi AM, Mirghaderi P, Mohammadi P, Arjmand G, Soleimani Y, Azarnoush A, Mojdeganlou H, Dashti M, Cheshmekabodi HA, Varshochi S, Mehrtak M, Shamsabadi A, Mehraeen E, Hackett D. Artificial Intelligence in COVID-19 Management: A Systematic Review. J Comput Sci. 2023;19(5):554–68. https://doi.org/10.3844/jcssp.2023.554.568.

  19. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017. https://doi.org/10.1136/svn-2017-000101.

  20. Besaw C, and Filitz J. Artificial intelligence in Africa is a double-edged sword. Science and Technology. 2019. https://ourworld.unu.edu/en/ai-in-africa-is-double-edged-sword.

  21. Kissi Mireku K, Zhang F, Komlan G. Patient knowledge and data privacy in healthcare records system. In: 2nd international conference on communication systems, computing and IP applications (CSCITA). Mumbai, India. 2017;154–159. https://doi.org/10.1109/CSCITA.2017.8066543.

  22. Arakpogun EO, Elsahn Z, Olan F, et al. Artificial intelligence in Africa: Challenges and opportunities. Springer International. 2021. https://doi.org/10.1007/978-3-030-62796-6_22.

  23. Okolo CT, Aruleba K, and Obaido G. Responsible AI in Africa: Challenges and opportunities. In: Eke DO, Wakunuma K, Akintoye S (Eds) Responsible AI in Africa. Social and Cultural Studies of Robots and AI. Palgrave Macmillan. Cham. 2023. https://doi.org/10.1007/978-3-031-08215-3.

  24. Naik N, Hameed BMZ, Shetty DK, et al. Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery. 2022. https://doi.org/10.3389/fsurg.2022.862322.

  25. Benjumea J, Ropero J, Rivera-Romero O, et al. Assessment of the fairness of privacy policies of mobile health apps: Scale development and evaluation in cancer apps. JMIR Mhealth Uhealth. 2020. https://doi.org/10.2196/17134.

  26. Leenes RE, Palmerini E, Koops B, et al. Regulatory challenges of robotics: Some guidelines for addressing legal and ethical issues. Law, Innovation and Technology. 2017. https://doi.org/10.1080/17579961.2017.1304921.

  27. Bak MA, Madai VI, Fritzsche M, et al. You can’t have AI both ways: Balancing health data privacy and access fairly. Front Genet. 2022. https://doi.org/10.3389/fgene.2022.929453.

  28. Donnelly D. First do no harm: Legal principles regulating the future of artificial intelligence in health care in South Africa. Potchefstroom Electron Law J. 2022. https://doi.org/10.17159/1727-3781/2022/v25ia11118.

  29. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann Intern Med. 2018. https://doi.org/10.7326/M18-0850.

  30. Munn Z, Peters MDJ, Stern C, et al. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018. https://doi.org/10.1186/s12874-018-0611-x.

  31. Al’Aref SJ, Singh G, van Rosendael AR, et al. Determinants of in-hospital mortality after percutaneous coronary intervention: A machine learning approach. J Am Heart Assoc. 2019;8(5):e011160.

  32. Al’Aref SJ, Singh G, Choi JW, et al. A boosted ensemble algorithm for determination of plaque stability in high risk patients on coronary CTA. JACC Cardiovasc Imaging. 2020;13(10):2162–73.

  33. Aljarboa S, Miah SJ. Acceptance of clinical decision support systems in Saudi healthcare organisations. Inf Dev. 2021;39(1):86–106. https://doi.org/10.1177/02666669211025076.

  34. Aljarboa S, Shah M, Kerr D, Houghton L, Kerr D. Perceptions of the adoption of clinical decision support systems in the Saudi healthcare sector. In: Blake J, Miah SJ, editors. In Proc. 24th Asia-Pacific Decision Science Institute International Conference. Asia Pacific Decision Sciences Institute; 2019. p. 40–53.

  35. Al-Zaiti S, Besomi L, Bouzid Z, et al. Machine learning-based prediction of acute coronary syndrome using only the pre-hospital 12-lead electrocardiogram. Nat Commun. 2020;11:3966–4010.

  36. Alumran A, et al. Utilization of an electronic triage system by emergency department nurses. J Multidiscip Healthc. 2020;13:339–44.

  37. Amarbayasgalan T, Park KH, Lee JY, Ryu KH. Reconstruction error based deep neural networks for coronary heart disease risk prediction. PLoS ONE. 2019;14(12):e0225991.

  38. Ayatollahi H, Gholamhosseini L, Salehi M. Predicting coronary artery disease: a comparison between two data mining algorithms. BMC Public Health. 2019;19:448–9.

  39. Baskaran L. Machine learning insight into the role of imaging and clinical variables for the prediction of obstructive coronary artery disease and revascularization: An exploratory analysis of the CONSERVE study. PLoS ONE. 2020;15(6):e0233791.

  40. Betriana F, Tanioka T, Osaka K, Kawai C, Yasuhara Y, Locsin RC. Interactions between healthcare robots and older people in Japan: A qualitative descriptive analysis study. Jpn J Nurs Sci. 2021;18:e12409.

  41. Beunza JJ, Puertas E, García-Ovejero E, et al. Comparison of machine learning algorithms for clinical event prediction (risk of coronary heart disease). J Biomed Inform. 2019;97:103257.

  42. Blanco N, et al. Health care worker perceptions toward computerized clinical decision support tools for Clostridium difficile infection reduction: A qualitative study at 2 hospitals. Am J Infect Control. 2018;46:1160–6.

  43. Borracci RA, Higa CC, Ciambrone G, Gambarte J. Treatment of individual predictors with neural network algorithms improves global registry of acute coronary events score discrimination. Arch Cardiol Mex. 2021;91(1):58–65.

  44. Bouzid Z, Faramand Z, Gregg RE, et al. In search of an optimal subset of ecg features to augment the diagnosis of acute coronary syndrome at the emergency department. J Am Heart Assoc. 2021;10(3):e017871.

  45. Catho G, et al. Factors determining the adherence to antimicrobial guidelines and the adoption of computerised decision support systems by physicians: A qualitative study in three European hospitals. Int J Med Inform. 2020;141:104233.

  46. Cho B-J, Choi YJ, Lee M-J, Kim JH, Son G-H, Park S-H, et al. Classification of cervical neoplasms on colposcopic photography using deep learning. Sci Rep. 2020;10(1):13652.

  47. Chow A, Lye DCB, Arah OA. Psychosocial determinants of physicians’ acceptance of recommendations by antibiotic computerised decision support systems: a mixed methods study. Int J Antimicrob Agents. 2015;45:295–304.

  48. Davari Dolatabadi A, Khadem SEZ, Asl BM. Automated diagnosis of coronary artery disease (CAD) patients using optimized SVM. Comput Methods Programs Biomed. 2017;138:117–26.

  49. Davis MA, Rao B, Cedeno P, Saha A, Zohrabian VM. Machine Learning and Improved Quality Metrics in Acute Intracranial Hemorrhage by Non-Contrast Computed Tomography. Curr Probl Diagn Radiol. 2020;51:556–61.

  50. Dogan MV, Beach S, Simons R, Lendasse A, Penaluna B, Philibert R. Blood-based biomarkers for predicting the risk for five-year incident coronary heart disease in the Framingham Heart Study via machine learning. Genes. 2018;9(12):641.

  51. Du Z, Yang Y, Zheng J, et al. Accurate prediction of coronary heart disease for patients with hypertension from electronic health records with big data and machine-learning methods: Model development and performance evaluation. JMIR Med Inform. 2020;8(7):e17257.

  52. Elahi C, et al. An attitude survey and assessment of the feasibility, acceptability, and usability of a traumatic brain injury decision support tool in Uganda. World Neurosurg. 2020;139:495–504.

  53. English D, Ankem K, English K. Acceptance of clinical decision support surveillance technology in the clinical pharmacy. Inform Health Soc Care. 2017;42:135–52.

  54. Fan X, et al. Utilization of self-diagnosis health chatbots in real-world settings: Case study. J Med Internet Res. 2021;23:e19928.

  55. Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients. Digital Health. 2022. https://doi.org/10.1177/20552076221116772.

  56. Garzon-Chavez D, et al. Adapting for the COVID-19 pandemic in Ecuador, a characterization of hospital strategies and patients. PLoS ONE. 2021;16:e0251295.

  57. Goldman O, Raphaeli O, Goldman E, Leshno M. Improvement in the prediction of coronary heart disease risk by using artificial neural networks. Qual Manag Health Care. 2021;30(4):244–50.

  58. Golpour P, Ghayour-Mobarhan M, Saki A, et al. Comparison of support vector machine, naïve Bayes and logistic regression for assessing the necessity for coronary angiography. Int J Environ Res Public Health. 2020;17(18):6449–50.

  59. Gonçalves LS, Amaro MLM, Romero ALM, Schamne FK, Fressatto JL, Bezerra CW. Implementation of an artificial intelligence algorithm for sepsis detection. Rev Bras Enferm. 2020;73:e20180421.

  60. Gonzalez-Briceno G, Sanchez A, Ortega-Cisneros S, Contreras MSG, Diaz GAP, Moya-Sanchez EU. Artificial intelligence-based referral system for patients with diabetic retinopathy. Computer. 2020;53:77–87.

  61. Grau LE, Weiss J, O’Leary TK, Camenga D, Bernstein SL. Electronic decision support for treatment of hospitalized smokers: A qualitative analysis of physicians’ knowledge, attitudes, and practices. Drug Alcohol Depend. 2019;194:296–301.

  62. Hand M, et al. A clinical decision support system to assist pediatric oncofertility: A short report. J Adolesc Young-Adult Oncol. 2018;7:509–13.

  63. Horsfall HL, et al. Attitudes of the surgical team toward artificial intelligence in neurosurgery: International 2-stage cross-sectional survey. World Neurosurg. 2021;146:e724–30. https://doi.org/10.1016/j.wneu.2020.10.171.

  64. Hsiao JL, Wu WC, Chen RF. Factors of accepting pain management decision support systems by nurse anesthetists. BMC Med Inform Decis Mak. 2013;13:1–13.

  65. Hu D, Dong W, Lu X, Duan H, He K, Huang Z. Evidential MACE prediction of acute coronary syndrome using electronic health records. BMC Med Inform Decis Mak. 2019;19(S2):61.

  66. Huang Z, Chan TM, Dong W. MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records. J Biomed Inform. 2017;66:161–70.

  67. Huang X, Chen P, Tang F, Hua N. Detection of coronary artery disease in patients with chest pain: a machine learning model based on magnetocardiography parameters. Clin Hemorheol Microcirc. 2021;78(3):227–36.

  68. Isbanner S, Pauline O, Steel D, Wilcock S, Carter S. The adoption of artificial intelligence in health care and social services in Australia: Findings from a methodologically innovative national survey of values and attitudes (the AVA-AI Study). J Med Internet Res. 2022. https://doi.org/10.2196/37611.

  69. Jauk S, et al. Technology acceptance of a machine learning algorithm predicting delirium in a clinical setting: A mixed-methods study. J Med Syst. 2021;45:48.

  70. Joloudari JH, Hassannataj Joloudari E, Saadatfar H, et al. Coronary artery disease diagnosis; ranking the significant features using a random trees model. Int J Environ Res Public Health. 2020;17(3):731.

  71. Jones EK, Banks A, Melton GB, Porta CM, Tignanelli CJ. Barriers to and facilitators for acceptance of comprehensive clinical decision support system–driven care maps for patients with thoracic trauma: interview study among health care providers and nurses. JMIR Hum Factors. 2022;9:e29019.

  72. Kanagasundaram NS, et al. Computerized clinical decision support for the early recognition and management of acute kidney injury: A qualitative evaluation of end-user experience. Clin Kidney J. 2016;9:57–62.

  73. Kayvanpour E, Gi WT, Sedaghat-Hamedani F, et al. MicroRNA neural networks improve diagnosis of acute coronary syndrome (ACS). J Mol Cell Cardiol. 2021;151:155–62.

  74. Kim JK, Kang S. Neural network-based coronary heart disease risk prediction using feature correlation analysis. J Healthc Eng. 2017;13.

  75. Kisling K, et al. Fully automatic treatment planning for external-beam radiation therapy of locally advanced cervical cancer: a tool for low-resource clinics. J Glob Oncol. 2019. https://doi.org/10.1200/JGO.18.00107.

  76. Krittanawong C, Virk HUH, Kumar A, et al. Machine learning and deep learning to predict mortality in patients with spontaneous coronary artery dissection. Sci Rep. 2021;11(1):8992.

  77. Lee EK, Atallah HY, Wright MD, Post ET, Thomas CIV, Wu DT, Haley LL. Transforming hospital emergency department workflow and patient care. Interfaces. 2015;45:58–82.

  78. Liberati EG, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement Sci. 2017;12:1–13.

  79. Li D, Xiong G, Zeng H, Zhou Q, Jiang J, Guo X. Machine learning-aided risk stratification system for the prediction of coronary artery disease. Int J Cardiol. 2021;326:30–4. https://doi.org/10.1016/j.ijcard.2020.09.070.

  80. Liu X, Jiang J, Wei L, et al. Prediction of all-cause mortality in coronary artery disease patients with atrial fibrillation based on machine learning models. BMC Cardiovasc Disord. 2021;21:1–12.

  81. Love SM, et al. Palpable breast lump triage by minimally trained operators in Mexico using computer-assisted diagnosis and low-cost ultrasound. J Glob Oncol. 2018. https://doi.org/10.1200/JGO.17.00222.

  82. MacPherson P, et al. Computer-aided X-ray screening for tuberculosis and HIV testing among adults with cough in Malawi (the PROSPECT study): A randomised trial and cost-effectiveness analysis. PLoS Med. 2021;18:e1003752.

  83. McCoy A, Das R. Reducing patient mortality, length of stay and readmissions through machine learning-based sepsis prediction in the emergency department, intensive care unit and hospital floor units. BMJ Open Quality. 2017;6:e000158.

  84. Mehta N, Harish V, Bilimoria K, et al. Knowledge and attitudes on artificial intelligence in healthcare: A provincial survey study of medical students. MedEdPublish. 2021. https://doi.org/10.15694/mep.2021.000075.1.

  85. Moon KJ, Jin Y, Jin T, Lee SM. Development and validation of an automated delirium risk assessment system (Auto-DelRAS) implemented in the electronic health record system. Int J Nurs Stud. 2018;77:46–53.

  86. Morgenstern JD, Rosella LC, Daley MJ, Goel V, Schünemann HJ, Piggott T. AI’s gonna have an impact on everything in society, so it has to have an impact on public health: A fundamental qualitative descriptive study of the implications of artificial intelligence for public health. BMC Public Health. 2021. https://doi.org/10.1186/s12889-020-10030-x.

  87. Motwani M, Dey D, Berman DS, et al. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: A 5-year multicentre prospective registry analysis. Eur Heart J. 2017;38(7):500–7.

  88. Betriana F, Tanioka T, Osaka K, Kawai C, Yasuhara Y, Locsin RC. Improving the delivery of palliative care through predictive modeling and healthcare informatics. J Am Med Inform Assoc. 2021;28:1065–73.

  89. Naushad SM, Hussain T, Indumathi B, Samreen K, Alrokayan SA, Kutala VK. Machine learning algorithm-based risk prediction model of coronary artery disease. Mol Biol Rep. 2018;45(5):901–10.

  90. Nydert P, Vég A, Bastholm-Rahmner P, Lindemalm S. Pediatricians’ understanding and experiences of an electronic clinical-decision-support-system. Online J Public Health Inform. 2017;9:e200.

  91. O’Leary P, Carroll N, Richardson I. The practitioner’s perspective on clinical pathway support systems. In: IEEE International Conference on Healthcare Informatics. 2014. p. 194–201.

  92. Orlenko A, Kofink D, Lyytikainen LP, et al. Model selection for metabolomics: Predicting diagnosis of coronary artery disease using automated machine learning. Bioinformatics. 2020;36(6):1772–8.

  93. Pattarabanjird T, Cress C, Nguyen A, Taylor A, Bekiranov S, McNamara C. A machine learning model utilizing a novel SNP shows enhanced prediction of coronary artery disease severity. Genes. 2020;11(12):1–14.

  94. Pieszko K. Predicting long-term mortality after acute coronary syndrome using machine learning techniques and hematological markers. Dis Markers. 2019;2019:9.

  95. Ploug T, Sundby A, Moeslund TB, Holm S. Population preferences for performance and explainability of artificial intelligence in health care: Choice-based conjoint survey. J Med Internet Res. 2021;e26611. https://doi.org/10.2196/26611.

  96. Polero LD. A machine learning algorithm for risk prediction of acute coronary syndrome (Angina). Revista Argentina de Cardiología. 2020;88:9–13.

  97. Prakash A, Das S. Intelligent conversational agents in mental healthcare services: a thematic analysis of user perceptions. Pacific Asia J Assoc Inf Syst. 2020;1–34.

  98. Pumplun L, Fecho M, Wahl N, Peters F, Buxmann P. Adoption of machine learning systems for medical diagnostics in clinics: Qualitative interview study. J Med Internet Res. 2021;23:e29301.

  99. Richardson JP, Smith C, Curtis S, Watson S, Zhu X, Barry B, et al. Patient apprehensions about the use of artificial intelligence in healthcare. npj Digit. Med. 2021. https://doi.org/10.1038/s41746-021-00509-1.

  100. Romero-Brufau S, Wyatt KD, Boyum P, Mickelson M, Moore M, Cognetta-Rieke C. Implementation of artificial intelligence-based clinical decision support to reduce hospital readmissions at a regional hospital. Appl Clin Inform. 2020;11:570–7.

  101. Sarwar S, Dent A, Faust K, Richer M, Djuric U, Ommeren RV, et al. Physician perspectives on integration of artificial intelligence into diagnostic pathology. npj Digit. Med. 2019. https://doi.org/10.1038/s41746-019-0106-0.

  102. Scheetz J, Koca D, McGuinness M, Holloway E, Tan Z, Zhu Z, et al. Real-world artificial intelligence-based opportunistic screening for diabetic retinopathy in endocrinology and indigenous healthcare settings in Australia. Sci Rep. 2021. https://doi.org/10.1038/s41598-021-94178-5.

  103. Schuh C, de Bruin JS, Seeling W. Clinical decision support systems at the Vienna General Hospital using Arden Syntax: Design, implementation, and integration. Artif Intell Med. 2018;92:24–33.

  104. Sendak MP, Ratliff W, Sarro D, Alderton E, Futoma J, Gao M. Real-world integration of a sepsis deep learning technology into routine clinical care: Implementation study. JMIR Med Inform. 2020;8(7):e15182. https://doi.org/10.2196/15182.

  105. Sherazi SWA, Jeong YJ, Jae MH, Bae JW, Lee JY. A machine learning–based 1-year mortality prediction model after hospital discharge for clinical patients with acute coronary syndrome. Health Informatics J. 2020;26(2):1289–304.

  106. Sujan M, White S, Habli I, Reynolds N. Stakeholder perceptions of the safety and assurance of artificial intelligence in healthcare. SSRN Electron J. 2022. https://doi.org/10.2139/ssrn.4000675.

  107. Tayefi M, Tajfard M, Saffar S, et al. hs-CRP is strongly associated with coronary heart disease (CHD): A data mining approach using decision tree algorithm. Comput Methods Programs Biomed. 2017;141:105–9. https://doi.org/10.1016/j.cmpb.2017.02.001.

  108. Terry AL, Kueper JK, Beleno R, Brown JB, Cejic S, Dang J, et al. Is primary health care ready for artificial intelligence? What do primary health care stakeholders say? BMC Med Inform Decis Mak. 2022. https://doi.org/10.1186/s12911-022-01984-6.

  109. Tscholl DW, Weiss M, Handschin L, Spahn DR, Nöthiger CB. User perceptions of avatar-based patient monitoring: A mixed qualitative and quantitative study. BMC Anesthesiol. 2018;18:188.

  110. Uzir MUH, Halbusi HA, Lim R, Jerin I, Hamid ABA, Ramayah T, Haque A. Applied Artificial Intelligence and user satisfaction: Smartwatch usage for healthcare in Bangladesh during COVID-19. Technol Soc. 2021;67:101780.

  111. Van der Heijden AA, Abramoff MD, Verbraak F, van Hecke MV, Liem A, Nijpels G. Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmol. 2018;96:63–8.

  112. Van der Zander QEW, van der Ende-van Loon MCM, Janssen JMM, Winkens B, van der Sommen F, Masclee AAM, et al. Artificial intelligence in (gastrointestinal) healthcare: Patients’ and physicians’ perspectives. Sci Rep. 2022. https://doi.org/10.1038/s41598-022-20958-2.

  113. Velusamy D, Ramasamy K. Ensemble of heterogeneous classifiers for diagnosis and prediction of coronary artery disease with reduced feature subset. Comput Methods Programs Biomed. 2021;198:105770.

  114. Visram S, Leyden D, Annesley O, et al. Engaging children and young people on the potential role of artificial intelligence in medicine. Pediatr Res. 2023;93:440–4. https://doi.org/10.1038/s41390-022-02053-4.

  115. Walter S, et al. What about automated pain recognition for routine clinical use? A survey of physicians and nursing staff on expectations, requirements, and acceptance. Front Med. 2020;7:566278.

  116. Wang D, et al. Brilliant AI Doctor in rural clinics: Challenges in AI-powered clinical decision support system deployment. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021. p. 1–18.

  117. Wang L, et al. CASS: Towards building a social-support chatbot for online health community. Proc ACM Hum-Comput Interact. 2021;5:1–31.

  118. Wittal CG, Hammer D, Klein F, Rittchen J. Perception and knowledge of artificial intelligence in healthcare, therapy and diagnostics: A population-representative survey. 2022. https://doi.org/10.1101/2022.12.01.22282960.

  119. Xu H, Li P, Yang Z, Liu X, Wang Z, Yan W, He M, Chu W, She Y, Li Y, et al. Construction and application of a medical-grade wireless monitoring system for physiological signals at general wards. J Med Syst. 2020;44:1–15.

  120. Yurdaisik I, Aksoy SH. Evaluation of knowledge and attitudes of radiology department workers about artificial intelligence. Ann Clin Anal Med. 2021;12:186–90.

  121. Zhai H, et al. Radiation oncologists’ perceptions of adopting an artificial intelligence-assisted contouring technology: Model development and questionnaire study. J Med Internet Res. 2021;23:1–16.

  122. Zhang H, Wang X, Liu C, et al. Detection of coronary artery disease using multi-modal feature fusion and hybrid feature selection. Physiol Meas. 2020;41(11):115007.

  123. Zheng B, et al. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: A comparative survey. BMC Health Serv Res. 2021;21:1067.

  124. Zhou N, et al. Concordance study between IBM Watson for Oncology and clinical practice for patients with cancer in China. Oncologist. 2019;24:812–9.

  125. Zhou LY, Yin W, Wang J, et al. A novel laboratory-based model to predict the presence of obstructive coronary artery disease: Comparison to Coronary Artery Disease Consortium ½ score, Duke clinical score and Diamond-Forrester score in China. Int Heart J. 2020;61(3):437–46.

  126. Cypress BS. Rigor or reliability and validity in qualitative research: Perspectives, strategies, reconceptualisation and recommendations. Dimens Crit Care Nurs. 2017. https://doi.org/10.1097/DCC.0000000000000253.

  127. Morse JM. Critical analysis of strategies for determining rigor in qualitative inquiry. Qual Health Res. 2015. https://doi.org/10.1177/1049732315588501.

  128. Sundler AJ, Lindberg E, Nilsson C, et al. Qualitative thematic analysis based on descriptive phenomenology. Nurs Open. 2019. https://doi.org/10.1002/nop2.275.

  129. Van Wijngaarden E, Meide HV, Dahlberg K. Researching health care as a meaningful practice: Towards a nondualistic view on evidence for qualitative research. Qual Health Res. 2017. https://doi.org/10.1177/1049732317711133.

  130. Lord R, Roseen D. Why should we care? In: Do no harm. New America; 2019. http://www.jstor.org/stable/resrep19972.6.

  131. Tusabe F. Bacterial contamination of healthcare worker’s mobile phones: A case study at two referral hospitals in Uganda. Research Square. 2021. https://doi.org/10.21203/rs.3.rs-955201/v1.

  132. Solanki P, Grundy J, Hussain W. Operationalising ethics in artificial intelligence for healthcare: a framework for AI developers. AI Ethics. 2023. https://doi.org/10.1007/s43681-022-00195-z.

Acknowledgements

We are grateful to Lieutenant Commander (Ghana Navy) Candice FLEISCHER-DJOLETO of 37 Military Hospital, Ghana Armed Forces Medical Services, for proofreading the draft manuscript.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sector.

Author information

Contributions

N.N.B., E.W.A., C.E.S., S.M., and V.K.D. conceptualised and designed the review protocols. E.W.A., V.K.D., C.E.S., F.S.A., I.S.T., L.A.A., S.M., and N.N.B. conducted data collection and acquisition. E.W.A., V.K.D., R.V.K., C.E.S., F.S.A., I.S.T., L.A.A., S.M., and N.N.B. carried out extensive data processing and management. E.W.A., C.E.S., N.N.B., and R.V.K. developed the initial manuscript. All authors edited and thoroughly reviewed the manuscript, proofread it for intellectual content, and consented to its publication.

Corresponding author

Correspondence to Nkosi N. Botha.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Botha, N.N., Ansah, E.W., Segbedzi, C.E. et al. Artificial intelligent tools: evidence-mapping on the perceived positive effects on patient-care and confidentiality. BMC Digit Health 2, 33 (2024). https://doi.org/10.1186/s44247-024-00091-y

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s44247-024-00091-y

Keywords