Research | Open access
Influence of the use of a tablet-based clinical decision support algorithm by general practitioners on the consultation process: the example of FeverTravelApp
BMC Digital Health volume 2, Article number: 59 (2024)
Abstract
Background
Despite the proven positive effects of clinical decision support systems (CDSSs) on general practitioners’ (GPs’) performance and patient management, their adoption remains slow. Several factors have been proposed to explain GPs' reluctance to adopt these tools. This study hypothesizes that the influence of CDSSs on patient-physician interactions could be a determining factor. To explore this hypothesis, we utilized the FeverTravelApp, designed to assist GPs in managing patients presenting with fever after returning from the tropics. A case–control study was conducted, observing and analyzing fourteen consultations between seven physicians and three simulated patients. Each physician conducted consultations both with and without the FeverTravelApp. The consultations were video-recorded and analyzed using a custom analysis grid based on three existing tools. Simulated patients completed the Communication Assessment Tool (CAT) after each consultation, and each physician participated in a semistructured interview following the use of the app.
Results
The use of the FeverTravelApp influenced multiple aspects of the consultation, particularly communication. Both patient and GP speaking times decreased, while active silence (no one talking while the GP actively performed a task) increased. GPs focused more on the app, which reduced direct patient interaction. However, this influence seemed to bother GPs more than simulated patients, who rated their GPs equally whether the app was used or not. This could be because patients felt better understood when GPs asked fewer but more specific questions related to travel medicine, thus effectively addressing their concerns.
Conclusions
This study supports the hypothesis that CDSSs influence consultation dynamics, which may contribute to their slow adoption. It is essential to involve clinicians early in the development of CDSSs to adapt them to clinical workflows and ensure system interoperability. Additionally, tools that allow clinicians to follow the entire clinical reasoning process, such as decision trees, are needed. Further research is necessary to confirm these findings in real patient settings and to develop CDSSs that meet both patients’ and GPs’ expectations.
Introduction
Humans have limited capacities for probabilistic reasoning, especially when facing situations with multiple variables, knowledge gaps or limited experience. Clinicians’ diagnostic decision-making is thus strongly influenced by their level of knowledge and experience, by common cognitive biases such as anchoring, information and availability biases, and by personality traits such as overconfidence or low tolerance of risk [1]. All these factors may distort physicians’ appraisal of disease probabilities, in particular by overestimating rare diseases and by missing frequent diseases with atypical clinical presentations [2].
To overcome these difficulties, there is growing interest in clinical decision support systems (CDSSs). A CDSS is an expert system that integrates medical knowledge with patient data to infer case-specific advice to support healthcare providers in their decisions and, ultimately, to improve patient care. Ledley and Lusted, with their founding article “Reasoning Foundations of Medical Diagnosis” in 1959, are among the pioneers in computer-assisted diagnosis. They deconstructed and analyzed, from a mathematical point of view, the medical diagnosis process and had already envisaged the decisive role of computers in providing structured information to their users, be they medical students or physicians. On the other hand, they also predicted physicians’ fear of losing their autonomy, or even worse, of being replaced by computers [3].
In recent years, the integration of artificial intelligence (AI) into healthcare, including the development of advanced CDSSs, has become increasingly prominent, enhancing the precision and efficiency of medical diagnostics and treatments.
Despite proven positive effects on general practitioners’ (GPs’) performance [4] and on the management of patients and treatments [5,6,7,8,9,10], the adoption of CDSSs is slow [11]. We may point out five main reasons for this limited success.
First, some CDSSs have low accuracy (defined as the capacity to find the right diagnosis and to avoid giving a wrong one). For example, a study comparing the performance of several symptom checkers revealed that the proper diagnosis was given for 34% of the patients and the appropriate triage advice for 57% of the patients [12]. Second, some existing tools perform particularly poorly when confronted with a complex clinical case because they fail to consider the full spectrum of clinical data. Neglecting the clinical context and time-related aspects (order of appearance of symptoms) sometimes leads to an unsuitable assessment of the situation [13]. Third, integrating a CDSS into the consultation workflow and the routine, habits, and administrative duties of GPs is particularly challenging [14]. The interaction of existing tools with the workflow of private practice also seems to be a key issue, with, for example, a duplication of administrative work when the GP has to enter the same information in the CDSS and in the electronic medical record [15]. Fourth, concerns about medical information security, confidentiality, and system interoperability make it difficult to create a tool adapted to the legislation and administrative structures already in place [16, page 67]. Finally, the economic impact of implementing CDSSs like the FeverTravelApp in a private practice setting cannot be overlooked. While the app itself may be offered at no initial cost, potential indirect expenses, such as the need for compatible hardware and the time investment required for training and integration into daily practice, may pose financial burdens. Long-term benefits, however, such as improved diagnostic efficiency and potentially reduced operational costs, suggest a nuanced cost–benefit scenario that warrants careful consideration.
However, the significant surge in mobile phone usage [16], coupled with the ever-growing use of smartphones, represents a turning point, as it has catalyzed the development of new CDSSs [17, 18]. As a result, in line with the increase in the use of apps by GPs, new CDSSs are being developed in very different fields, ranging from the diagnosis and treatment of childhood illness in resource-limited countries [19] to the assessment and management of suicide risk [20]. A study conducted in France in 2016 reported that 75% of GPs use their phone during consultations and draw on a plethora of apps, from prescription aids and clinical score calculators to dedicated social media apps for sharing medical advice between colleagues [21].
An area of clinical care in which CDSSs may be particularly useful is travel-related infectious diseases and tropical medicine. Solving such cases requires a thorough patient history and clinical examination, as well as broad knowledge of the clinical presentations of exotic diseases and of fast-changing epidemiological patterns. However, the majority of GPs in higher-income countries lack the necessary expertise, as they rarely deal with patients returning from the tropics. Notable CDSSs in this field are Gideon [22] and Kabisa [23], which are differential diagnosis generators (DDxs) that provide a list of all possible diagnoses in order of probability. The latter was initially created for pedagogical purposes but is currently also used by practitioners. In 2003, we developed evidence-based guidelines for the management of febrile patients upon return from the tropics in the form of freely accessible decision charts on a website (www.fevertravel.ch) [24]. The evidence behind these guidelines was reviewed in 2010 (Rossi et al., unpublished) and in 2020 by Buss et al. in a systematic review [25]. The format of these guidelines has recently been revamped and adapted into a prototype app called FeverTravelApp.
The FeverTravelApp is a CDSS based on clinical decision support algorithms (CDSAs). CDSAs take a decision-tree approach, guiding the clinician through a thorough patient history and an appropriate clinical examination and providing treatment and management recommendations, thereby mimicking the decision-making process an expert clinician would adopt.
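To make the decision-tree principle concrete, the following minimal Python sketch walks a clinician from question to question until a recommendation is reached. The questions, answers and advice are invented for illustration and do not reproduce the FeverTravelApp’s actual medical content:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """One step of a clinical decision support algorithm (CDSA)."""
    question: str = ""                            # history/exam item to ask about
    children: dict = field(default_factory=dict)  # answer -> next Node
    advice: str = ""                              # non-empty on leaf nodes


def run(node: Node) -> str:
    """Walk the tree one question at a time until advice is reached."""
    while not node.advice:
        answer = input(f"{node.question} (yes/no) ").strip().lower()
        node = node.children.get(answer, node)    # re-ask on unrecognized input
    return node.advice


# Hypothetical three-question tree mimicking an expert's triage of a febrile traveler.
tree = Node(
    question="Fever after return from a malaria-endemic area?",
    children={
        "yes": Node(advice="Perform a malaria rapid diagnostic test and blood smear now."),
        "no": Node(
            question="Maculopapular rash present?",
            children={
                "yes": Node(advice="Consider an arboviral infection; order serology."),
                "no": Node(advice="Broaden the differential; review exposures."),
            },
        ),
    },
)

if __name__ == "__main__":
    print(run(tree))
```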
Although several studies have shown that CDSSs have a positive influence on patient management [5,6,7,8,9,10], the effect of CDSSs on patient‒physician interactions during the consultation process remains unclear. A few studies have assessed patient-physician interactions through interviews [14, 26], but this effect has never been objectively assessed through real-time consultation analysis.
In our study, we examined the impact that CDSAs, such as the FeverTravelApp, may have on patient‒physician communication and the consultation flow.
Materials and methods
Study design
We conducted a case–control study by observing and analyzing fourteen consultations between seven physicians and three simulated patients; each physician consulted both with and without the FeverTravelApp.
Setting, participants, and data production
This study was conducted at the outpatient department of the Centre for Primary Care and Public Health, University of Lausanne, and at a private practice in Neuchâtel (Switzerland) between 28 May 2018 and 30 September 2018. The participating physicians were recruited on a voluntary basis and were contacted either by e-mail or directly by their supervisor. In the end, three female and four male physicians aged 25 to 61 years (median age = 31 years) and with various degrees of experience (from junior to senior) participated in the study. Most physicians had no experience in tropical medicine (ranging from 0 to 1 year). Each physician conducted two consultations with different simulated patients, the first without the app (C1) and the second with the app (C2). Two different clinical scenarios were used for the consultations. Both scenarios were designed to lead to differential diagnoses mainly involving tropical diseases. The final version of the scenarios was checked by tropical medicine specialists, and a grid of the adequate history items was established for each case. In the first scenario, the patient came back from Brazil, Bolivia and Chile, presenting with fever and conjunctival suffusion. In the second scenario, the patient returned from Nepal and India with fever and a maculopapular rash on the chest. The simulated patients were recruited from the Simulated Patient Program of the University of Lausanne (Switzerland), and a training session was organized to optimize their performance and familiarize them with the cases.
Physicians had a 20-min time limit to familiarize themselves with the app between C1 and C2 (see Fig. 1). The consultations were organized as follows:
During C1 and C2, physicians could use a computer to take notes and check medical information, as in usual consultations. As we used simulated patients (and our goal was not to evaluate practical examination skills), physicians did not have to examine their patients; all positive signs were provided in writing, with pictures when relevant. The fourteen consultations were video-recorded to facilitate the analysis.
After the consultation, the simulated patients had to complete the Communication Assessment Tool (CAT) to evaluate their physician’s communication skills. All the physicians underwent a semistructured interview to explore their experience using the app.
Description of the CDSA (FeverTravel App)
The FeverTravel algorithm takes the form of a mobile app that targets primary care physicians working at outpatient clinics, at emergency departments of hospitals or in private practices. The tool is aimed at assisting physicians in managing travelers and migrants with fever upon return from a tropical area.
It supports the user’s decision-making process with diagnostic and therapeutic advice during or after a consultation. Given that it was impossible to develop good-quality guidelines covering the whole internal medicine field, the FeverTravelApp includes only tropical diagnoses or diseases that are much more frequent in tropical countries (and not ubiquitous infectious diseases or noncommunicable diseases).
The app proposes a series of assessments: 1) “General questions” on age, sex, height, weight, dates of travel and symptom onset, countries visited, malaria prophylaxis taken and vaccines received before traveling; 2) “Vital signs”, including temperature, heart rate and respiratory rate; 3) “Exposures”, i.e., at-risk activities the migrant/traveler engaged in or was exposed to during the stay in the tropical area (e.g., freshwater, animals, food); 4) “Symptoms”; and 5) “Signs”. The app then proposes the “Differential diagnosis” and the corresponding “Investigations” to perform. The physician is then asked to enter the results of the investigations, based on which a list of “Final diagnoses” and the corresponding “Treatments” and “Managements” are proposed. Diagnoses that require presumptive treatment (because no immediate confirmatory investigation is available or because rapid complications may occur if left untreated) are listed directly under “Final diagnoses”. The tool is designed to be as flexible and intuitive as possible: it allows the user to answer questions in whichever sequence they wish, and subsequent recommendations are updated in real time (see Fig. 2). As questions are answered, new diagnoses appear in the “Differential diagnosis” list, and further investigations and treatments are suggested.
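As a rough sketch of this “answer in any order, update in real time” behavior, the Python fragment below recomputes a ranked differential and the corresponding work-up each time a finding is recorded. The disease rules are invented placeholders, not the app’s validated medical content:

```python
# Invented illustration of real-time differential updating; not the app's content.
RULES = {
    # diagnosis: (findings raising suspicion, investigation/treatment to propose)
    "Malaria": ({"fever", "travel_malaria_area"}, "Blood smear + rapid diagnostic test"),
    "Dengue": ({"fever", "rash", "travel_asia"}, "NS1 antigen / serology"),
    "Leptospirosis": ({"fever", "conjunctival_suffusion", "freshwater_exposure"},
                      "Serology; consider presumptive doxycycline"),
}


class Consultation:
    def __init__(self) -> None:
        self.findings: set[str] = set()

    def record(self, finding: str) -> None:
        """Record an answer in any order; the lists refresh immediately."""
        self.findings.add(finding)
        self.refresh()

    def refresh(self) -> None:
        ddx = []
        for dx, (required, workup) in RULES.items():
            support = len(required & self.findings) / len(required)
            if support > 0:
                ddx.append((support, dx, workup))
        print("Differential diagnosis:")
        for support, dx, workup in sorted(ddx, reverse=True):  # best supported first
            print(f"  {dx:15s} support={support:.0%} -> {workup}")


c = Consultation()
c.record("fever")                   # diagnoses pop up as soon as data arrive
c.record("conjunctival_suffusion")
c.record("freshwater_exposure")
```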
Data analysis
Three main aspects were analyzed: 1) objective quality of the communication between physicians and patients (analysis of the videotaped consultations); 2) communication skills of physicians according to the simulated patients (CAT questionnaires); and 3) physicians’ perception of the app (semistructured interviews).
Quality of the communication during consultations
To examine communicational aspects, we developed a custom analysis grid (Fig. 3) derived from the following existing tools: the Roter Interaction Analysis System (RIAS) [27], the Nonverbal Accommodation Analysis System (NAAS) [28], and the Calgary Cambridge-Global consultation scale (CC-GCS) [29]. We deemed it necessary to use a custom grid because of the scarcity of similar studies and tools able to capture the influence of an intervention such as an app. We also based our review on articles assessing physicians using an electronic device during a consultation [19, 20, 30,31,32,33,34,35].
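As an illustration of how such a grid can be operationalized, the Python sketch below tallies timestamped annotations into the quantities reported under Results (talking time, active vs. passive silence, gaze targets). The annotation format and codes are our own hypothetical simplification, not the exact grid items:

```python
from collections import defaultdict

# (start_s, end_s, code): hypothetical excerpt of one coded consultation.
# Talk/silence and gaze are coded on two parallel tracks.
events = [
    (0.0, 12.5, "gp_talks"),
    (12.5, 20.0, "patient_talks"),
    (20.0, 41.0, "active_silence"),   # no one talks, GP busy with a task (e.g., the app)
    (41.0, 45.0, "passive_silence"),  # no one talks, GP doing nothing
    (0.0, 18.0, "gaze_patient"),
    (18.0, 45.0, "gaze_device"),
]

totals: dict[str, float] = defaultdict(float)
for start, end, code in events:
    totals[code] += end - start

for track in ({"gp_talks", "patient_talks", "active_silence", "passive_silence"},
              {"gaze_patient", "gaze_device"}):
    track_total = sum(totals[c] for c in track)
    for code in sorted(track):
        print(f"{code:16s} {totals[code]:5.1f} s ({totals[code] / track_total:.0%})")
```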
Physicians’ communication skills rated by simulated patients
Simulated patients had to complete a questionnaire after C1 and C2. We used the Communication Assessment Tool (CAT) [31], a validated tool for the evaluation of physicians’ communication skills by their patients, to determine how those skills varied with and without the app and to evaluate patient satisfaction. The CAT consists of 15 items rated on a Likert-type scale from 0 to 5 (0 = very poor, 5 = very good). For the present study, we used only 14 items, as the last one (“the front desk staff treated me with respect”) was not relevant.
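For illustration, a per-consultation CAT score can be summarized as in the Python sketch below; the item ratings are invented, since the study reports only that ratings were similar with and without the app:

```python
from statistics import mean

# Hypothetical ratings of the 14 retained CAT items (scale 0-5) for one physician.
cat_c1 = [4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5]  # consultation without the app
cat_c2 = [4, 5, 5, 4, 5, 4, 4, 5, 4, 4, 4, 4, 5, 5]  # consultation with the app

for label, scores in (("C1 (no app)", cat_c1), ("C2 (app)", cat_c2)):
    top = sum(s == 5 for s in scores)
    print(f"{label:12s} mean={mean(scores):.2f}, items rated 5 (very good): {top}/{len(scores)}")
```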
Physicians’ perspectives on the app
Four main themes were explored through the 20-min semistructured interviews: i) user-friendliness of the app; ii) app content and approach to managing cases of fever upon return; iii) influence of the app on both the consultation flow and clinical communication; and iv) physicians’ global perception of the app and their advice for further improvement of the tool. The interviews were audio-recorded and transcribed verbatim. The transcripts were analyzed for thematic content using qualitative data analysis software (MAXQDA 12). The predefined themes were supplemented by themes that emerged during the analysis. Verbatim quotes were selected for illustration.
Results
In total, 14 consultations were analyzed: 7 conducted without the app and 7 with the app. Three simulated patients participated in the study, all of whom were professionals. The simulated patients completed a form after each consultation to rate their physician’s communication performance. A semistructured interview was conducted with each physician after the consultation with the app. The study took place between 28 May 2018 and 30 September 2018.
We present the results related to 1) communication during the consultation, 2) patients’ evaluation of the communication skills of the physicians, and 3) physicians’ perspectives on the app.
Communication during the consultation
Consultation duration
The overall length of the consultation slightly decreased between C1 (consultation without an App) and C2. The history-taking time (anamnesis) slightly increased in C2, whereas the amount of time needed to decide on the investigations and management (reflection time) decreased drastically.
Talking time and silences
The talking times of both patients and physicians decreased when the physician was using the app. However, active silence (no one talking while the physician actively performs a task) increased with the use of the app. This change was observed even though a computer was already used during C1. Passive silence (no one talking and the physician not performing any task) was not influenced by the app (see Fig. 4).
Gazing time
The use of the app increased the proportion of time the physician spent gazing at electronic devices, even though a computer was already used during C1. This increase mostly occurred at the expense of the time spent gazing at their patient and, to a lesser extent, at objects present in the consultation room (see Fig. 5).
Quantity, types, and content of questions
Physicians asked more questions without the app (median of 78 questions) than with it (median of 71 questions). Physicians asked slightly more open-ended questions without the app (median of 5.4) than with the app (median of 4.3). A similar phenomenon was observed for closed-ended questions (median of 65 vs. 70) and leading questions (median of 1.6 vs. 1.9).
Out of the 33 questions required to identify diseases causing fever, including tropical diseases, physicians asked a median of 14.5 (range 11 to 21) questions without the app and 22 (range 13 to 31) with the app. Specifically, of the 6 questions related to destination, timing and prophylaxis, patients were asked a median of 6 (range 3 to 6) without and 5 (range 3 to 6) with the app. Of the 9 questions related to symptoms, patients were asked a median of 7 (range 4 to 9) with and 8 (range 7 to 9) without the app. Of the 18 questions related to exposures, physicians asked a median of 4 (range 1 to 7) without and 10 (range 2 to 18) with the app (see Fig. 6).
Furthermore, of the 4 questions that belong to a standard anamnesis but were not prompted by the app (past medical history, allergies, usual treatment and treatment taken for the current symptoms), physicians asked a median of 3 (range 2 to 4) without the app and 1 (range 0 to 3) with the app.
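For clarity, the medians and ranges above are simple per-category summaries of the seven physicians’ individual question counts, as in this Python sketch (the counts are invented, chosen only to reproduce the exposure figures reported above):

```python
from statistics import median

# Hypothetical number of exposure questions asked by each of the 7 physicians
# during the consultation with the app.
exposure_with_app = [2, 6, 9, 10, 12, 15, 18]

print(f"median={median(exposure_with_app)}, "
      f"range={min(exposure_with_app)} to {max(exposure_with_app)}")
# -> median=10, range=2 to 18, i.e., the format reported in the text
```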
Checking for understanding and information provision
Physicians checked for patients’ understanding more often when they used the app (median of 13.4 times vs. 7.9 without the app) and provided more information (13.0 vs. 8.9).
Socioemotional exchange
With respect to the socioemotional exchange between physicians and patients, the only difference was a decrease in the number of physicians’ laughs and smiles when using the app (3.1 vs. 2.3). Empathy and reassurance statements were too rare to allow comparison.
Patients’ evaluation of physicians’ communication
Overall, the patients assessed their physicians’ communication skills similarly with and without the app (see Fig. 7).
Physicians’ perspectives on the app
Our interview guide was divided into 3 main themes: software, impact on consultation, and physicians’ global assessment.
Regarding the user-friendliness of the app, all the physicians found it user friendly and easy to use. Nevertheless, some physicians reported technical difficulties and did not consider themselves familiar enough with the app to feel comfortable. The majority commented that they would prefer to use the tool on a computer.
“It was not a consultation; it was a technical battle against the application.” (physician 1)
“I have spent quite a bit of time looking for the location of this lab test and for an element of history. It is truly a matter of use and habit.” (physician 2)
“You’d have to spend at least 15-30 minutes reading through the entire application to have an idea of all the questions it asks; that way, in the end, you could do your interview without looking at the application and just add the marks. (...)”
All the physicians said that the app had a negative impact on communication with their patients, and most of them were dissatisfied with the relationships they had with their patients when using the app. They mentioned a negative impact on the consultation flow and said they felt “unnatural”; for instance, the order of the questions was considered poor by some physicians. Overall, the physicians felt that the app disturbed their habits. Furthermore, physicians were worried that using the app could have a detrimental effect on patients’ trust in their medical capacities.
“I got stuck on the app because I could not use it as I wanted. So I had to focus on it, I lost the thread and the contact with the patient” (physician 4).
“It broke my usual pattern of approaching these patients, but I think it is something that is easy to fix. You actually have to get used to it. Afterwards, I think if you use it and get used to it, it gives you a good structure.” (physician 2)
“I was afraid the patient would say, "What is she doing?" because sometimes I would stop answering some questions. (...) The patient could say "Oh yeah, but then I could have done it myself".” (physician 5)
“It could alter the image of the perfect doctor that they [the patients] may have in mind.” (physician 6)
“I have the impression that this is becoming more and more common; patients are also informed on their side, and they do not necessarily see the doctor as someone who knows everything. I think that has changed a little bit over the last few years, and it can also reassure them to see that the doctor is using reference documents (...). I think it truly depends on the patient and the image they have of the doctor before.” (physician 6)
Overall, the physicians considered the app useful, and almost all of them would use it in the future. The majority thought that the medical content of the app was appropriate, while a minority considered the app to be too interventionist, for example, by offering a presumptive treatment. In this regard, most physicians reported that the app encouraged them to undertake more investigations than they usually would.
Some physicians were unsettled by the fact that the app did not consider the patient’s past medical history or common autochthonous diseases. Most of the physicians mentioned that they would have followed the app’s recommendations in real consultations, especially because of a lack of knowledge about tropical diseases. Suggestions for improvement were diverse. A frequent suggestion was to add a broader differential diagnosis than just tropical diseases, especially for frequent or potentially serious conditions. The need to know and understand the underlying reasoning also emerged: for example, it was suggested that physicians have access to the decision trees (see Note 1) to get a better overview of the algorithm. They also wished to obtain more specific information about tropical diseases.
“[The application] gets too specialized too fast, even if it is upon return from the tropics. You have to think about what common disease you can get, do not rule out banality too fast.” (physician 1)
“More serious diseases, such as lymphoma or Hodgkin's disease, can also happen when coming back from a trip” (physician 4).
“On the website, there are visible algorithms, and I think that is what’s a bit of a shame in an application like this, well, maybe it is not its purpose, but what would be interesting is to know the reasons for each question, and what’s behind it, and maybe to see the algorithms in the tool too.” (physician 6)
Discussion
The use of the app, as expected, influenced many aspects of the consultation, particularly communication. However, this influence seemed to bother the GPs more than the simulated patients, who rated their GPs equally whether or not the app was used. This could be explained by the fact that the patients felt better understood by their GP, who asked more specific questions and, as a consequence, seemed to understand their problem better.
In the following sections, we successively discuss communication, medical content, the simulated patients’ assessment, the physicians’ assessment, the design of the app, potential barriers to the widespread adoption of CDSAs in private practice, and ecological impact.
Communication
The app impacted gaze patterns, lengthening gazes at technological tools (computer and tablet) and shortening gazes at patients. Furthermore, GPs’ talking time decreased while using the app. This was mostly balanced by active silences (no one talking while the physician is busy performing a task), which increased dramatically. Overall, physicians struggled to raise their heads from the tablet. The computer did not monopolize their attention the way the app on the tablet did. This difference could be explained by the fact that GPs are familiar with taking notes on computers but not with using a tablet to guide a consultation; the app’s medical content tends to lead the conversation, thus becoming a true third party to the consultation. Another reason is that they were not familiar with the app and had only 20 min to learn how to use it.
Medical content
The total number of questions physicians asked to take the history did not change much, but the number of questions required to identify diseases causing fever, particularly tropical diseases with which physicians are generally unfamiliar, increased 1.5-fold. This was even more true for questions related to exposures. However, physicians tended to forget to ask their patients about their past medical history and treatments for chronic diseases while using the app (a 2.12-fold decrease), since those questions were not part of the app’s algorithm.
Simulated patients’ assessment
Despite the negative impact on communication quality markers such as gazes and silences, simulated patients rated physicians’ communication slightly better when the physicians used the app. Patients may have appreciated that their physicians provided more medical information and checked their understanding more often while using the app. Additionally, patients may place more value on the medical aspects of the consultation than on the communicational aspects, knowing that fever can sometimes be a sign of serious illness. Another explanation could be that our questionnaire (CAT) was not sensitive enough to capture the influence of the app.
Simulated patients also had the impression that their physicians spent more time with them when using the app, even though the overall consultation time was slightly shorter. Indeed, the app increased the time physicians spent effectively with their patient and drastically lowered the time they spent alone in the office thinking about the investigations and management (reflection time). Interestingly, the physicians’ impression was that the app lengthened the consultation.
Physicians’ assessment
Overall, physicians appreciated the positive impact of the app on the medical aspects of the consultation. They particularly valued the provision of a differential diagnosis proposal and the corresponding relevant investigations. However, some physicians stated that the app was “too interventionist” and that they would have preferred to “wait and see”. The reason might be that the balance between sensitivity and specificity chosen for some rare but potentially serious diseases was weighted in favor of sensitivity to avoid missing the diagnosis (this will be evaluated in a future study), or that they did not fully weigh the potential lethality of some travel-related diseases.
Regarding the impact on communication and the relationship with patients, their feedback on the app was rather negative, in line with the findings from the video analysis. Physicians noted that they felt less natural and that the app may have diminished patients’ trust by exposing the limits of their knowledge. They also struggled to find a convenient way to integrate the app into their usual consultation process. When the app guides the whole consultation, physicians expect it to be exhaustive in terms of possible diagnoses and treatments, including cosmopolitan (local) diseases, even if they are supposed to be familiar with those. This misunderstanding of the app’s role may have disrupted their usual consultation structure: they neglected essential aspects of a real consultation, such as asking about past medical history and treatments for chronic diseases.
Consequently, one of the main complaints was that the app covered tropical diseases but neither noninfectious nor autochthonous diseases. This is a challenge for developers of CDSAs, as addressing it would require an algorithm covering all existing infectious diseases, or even all health issues. The message provided at the beginning of the app and the possibility of manually adding other diagnoses, investigations and treatments were deemed insufficient. Physicians suggested that a clear message regularly remind them that the app mainly covers tropical medicine and that all other aspects of the consultation still need to be addressed.
Design of the App
Although the app was designed to be flexible, the interface still seems too rigid to allow a smooth consultation. Indeed, the physicians found themselves stuck following a series of questions that quickly made them lose full control of the consultation. With this new interface (vs. the old version in the form of decision trees), the overall view of the algorithm was lost. Paradoxically, traditional decision-tree algorithms seem to be a better answer to this problem, since physicians can see in which direction each question leads. This is all the more true for topics they only partially master, as is the case for fever upon return from the tropics, which remains a rare entity for Swiss GPs. The present study provided important user-experience feedback that will be used to improve the design of the app interface.
Potential barriers to the widespread adoption of CDSAs in private practice
CDSAs have struggled to find their place in the medical world. One reason might be the low accuracy (capacity to give the right diagnosis) of some CDSAs [12]. A second hypothesis is that existing tools struggle to integrate all the symptoms of a complex clinical case: the lack of consideration of the general clinical context (past medical history, habits, etc.) and of temporal aspects (order of appearance of symptoms) might lead to a wrong diagnosis, making the tool of little use to the physician [36]. Third, not only must such a tool suit the consultation workflow as much as possible, but it must also fit the physician’s habits and administrative obligations [14]; the interaction of existing tools with workflows in private practice seems to be a key issue [15]. Medical information security [33] and system interoperability [34] also appear to be obstacles to implementation. Finally, the potential negative impact of CDSAs on communication between patient and physician, as demonstrated in the present study, might also be a reason.
Physicians’ reluctance to use CDSSs was also mentioned by Khairat et al. [35], who proposed moving away from the “black box model”, which describes a tool whose functioning is not known by the end user. They therefore proposed the user acceptance and system adaptation design (UASAD) model, which includes the end user early in the conception of the CDSS. Physicians were included in the development of the FeverTravelApp from the beginning. However, the initial web-based noninteractive tree representation of the algorithm had to be abandoned when moving to the app format, leading to the abovementioned “black box” effect, which is known to contribute to physicians’ reluctance to use CDSSs. To overcome this problem, it is thus essential that a tree representation of the algorithm be provided to clinicians alongside the interactive part of the tool.
Ecological impact
None of the abovementioned aspects, however, considers one major challenge of our time: climate change. The paradoxical indirect impact of digital healthcare on health (and beyond) through CO2 emissions (running and cooling servers in datacenters, extraction of mineral components, etc.) must be taken into consideration when creating new digital tools [37]. The ultimate question should be, and should remain: “Do we truly need it?”
Limitations of the study
This study has several limitations that should be considered when interpreting its results:
1. Small sample size: The small sample size limits the statistical power and generalizability of our findings. Our primary aim was to qualitatively assess the influence of the FeverTravelApp on physician consultations, with an intent to generate insights for further app development rather than to establish broad statistical conclusions.
2. Use of simulated patients: Employing simulated patients, while beneficial for controlling experimental variables and ensuring consistency in data collection, inherently restricts the complexity of real patient interactions. Simulated scenarios typically present a narrower range of symptoms and lack the emotional and psychological depth seen in actual clinical settings. This can influence clinical decision-making and might alter physician behavior due to the absence of real-life consequences. Additionally, the use of scripted interactions limits the dynamism of consultations and may not accurately reflect genuine patient-physician communication dynamics. As a result, the transferability of our results to real-world clinical practice should be approached with caution.
3. Brevity of the training period: The physicians’ brief 20-min familiarization period with the app is a significant limitation. This constrained training likely affected their initial use of the app, as evidenced by an increased gaze time on the tablet, which may more accurately reflect the novelty of the technology than any intrinsic flaws in its design. Such a scenario can elevate cognitive load, complicating the integration of the app into the diagnostic process. We recommend extending training periods in future implementations to include comprehensive navigation tutorials and scenario-based exercises that mirror real-life clinical encounters. This approach would help mitigate initial unfamiliarity and support smoother integration of the app into practice.
4. Methodological limitations in communication analysis: The tools available for quantifying the quality of clinical communication often do not account for the modern high-connectivity environment characterized by frequent use of digital devices [38, 39]. The prevalent use of technology in clinical settings can reduce the time clinicians spend looking directly at patients, which could be misconstrued as a decline in communication quality. In response, we developed a custom analysis grid tailored to our study’s needs. However, this innovation has its limitations, notably the difficulty of comparing our findings with those from other studies due to the lack of similar methodologies.
Conclusion
The fact that CDSAs have an impact on consultation quality, rather positive from a clinical point of view and rather negative from a communication point of view, is increasingly well documented. Although the FeverTravelApp was designed to follow the consultation process as closely as possible, further improvements accounting for user experience, including the findings of the present study, are needed. We found that the app tended to monopolize GPs' attention, an aspect they were unsatisfied with, as they lost the lead of the consultation. Simulated patients, however, were generally more satisfied with the consultation performed with the app, most likely because they perceived that GPs were more comfortable with the medical part of the consultation.
Further research is needed to confirm our findings when CDSAs are used to manage real patients, the goal being to create CDSAs that are in line with both patients’ and GPs’ expectations.
Information technologies are undeniably reshaping our daily lives, and physicians’ consultation rooms are no exception. Health professionals must have a critical look at the opportunities but also the risks related to their impact on quality of care. Finding a balance between the benefits of improved medical decision-making and potential shortcomings regarding miscommunication or workflow will prove to be essential. Our study underlines the importance of taking patient-provider communication aspects into account when designing CDSAs if we want to ensure uptake by clinicians and acceptability by patients.
Availability of data and materials
Data will be available from the corresponding author to third parties interested in reusing them upon reasonable request.
Notes
1. This feature was already in our pipeline, but the prototype used for this study lacked it.
References
Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16(1):138.
De Alencastro L, Clair C, Locatelli I, Ebell MH, Senn N. Raisonnement clinique : de la théorie à la pratique… et retour. Rev Med Suisse. 2017;13:986–9.
Ledley RS, Lusted LB. Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science. 1959;130(3366):9–21.
Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerised clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38.
Shao AF, Rambaud-Althaus C, Samaka J, Faustine AF, Perri-Moore S, Swai N, et al. New Algorithm for Managing Childhood Illness Using Mobile Technology (ALMANACH): A Controlled Non-Inferiority Study on Clinical Outcome and Antibiotic Use in Tanzania. PLoS ONE. 2015;10(7):e0132316.
Bernasconi A, Crabbé F, Rossi R, Qani I, Vanobberghen A, Raab M, Du Mortier S. The ALMANACH Project: Preliminary results and potentiality from Afghanistan. Int J Med Inform. 2018;114:130–5.
Toth-Pal E, Wårdh I, Strender LE, Nilsson G. A guideline-based computerised decision support system (CDSS) to influence general practitioners management of chronic heart failure. Inform Prim Care. 2008;16(1):29–39.
Prasert V, Shono A, Chanjaruporn F, Ploylearmsang C, Boonnan K, Khampetdee A, Akazawa M. Effect of a computerized decision support system on potentially inappropriate medication prescriptions for elderly patients in Thailand. J Eval Clin Pract. 2019;25(3):514–20.
Jia P, Zhao P, Chen J, Zhang M. Evaluation of clinical decision support systems for diabetes care: An overview of current evidence. J Eval Clin Pract. 2019;25(1):66–77.
Keitel K, Kagoro F, Samaka J, Masimba J, Said Z, Temba H, et al. A novel electronic algorithm using host biomarker point-of-care tests for the management of febrile illnesses in Tanzanian children (e-POCT): A randomized, controlled non-inferiority trial. PLoS Med. 2017;14(10):e1002411.
Riches N, Panagioti M, Alam R, Cheraghi-Sohi S, Campbell S, Esmail A, et al. The effectiveness of electronic differential diagnoses (DDX) generators: a systematic review and meta-analysis. PLoS ONE. 2016;11(3):e0148991.
Semigran HL, Linder JA, Gidengil C, Mehrotra A. Evaluation of symptom checkers for self diagnosis and triage: audit study. BMJ. 2015;351:h3480.
Wasylewicz ATM, Scheepers-Hoeks AMJW. Clinical decision support systems. In: Fundamentals of Clinical Data Science. Cham (CH): Springer; 2019. Chapter 11.
Shao AF, Rambaud-Althaus C, Swai N, Kahama-Maro J, Genton B, D’Acremont V, et al. Can smartphones and tablets improve the management of childhood illness in Tanzania? A qualitative study from a primary health care worker’s perspective. BMC Health Serv Res. 2015;15:135.
Cahan A, Cimino JJ. A Learning Health Care System Using Computer-Aided Diagnosis. J Med Internet Res. 2017;19(3):e54.
ICT facts and figures 2017 - International Telecommunication Union. https://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2017.pdf
White A, Thomas DS, Ezeanochie N, Bull S. Health Worker mHealth Utilisation: A Systematic Review. Comput Inform Nurs. 2016;34(5):206–13.
Liu C, Zhu Q, Holroyd KA, Seng EK. Status and trends of mobile-health applications for iOS devices: a developer’s perspective. J Syst Softw. 2011;84(11):2022–33.
Keitel K, D’Acremont V. Electronic clinical decision algorithms for the integrated primary care management of febrile children in low-resource settings: review of existing tools. Clin Microbiol Infect. 2018;24(8):845–55.
Horrocks M, Michail M, Aubeeluck A, Wright N, Morriss R. An Electronic Clinical Decision Support System for the Assessment and Management of Suicidality in Primary Care: Protocol for a Mixed-Methods Study. JMIR Res Protoc. 2018;7(12):e11135.
Hemery MV. Utilisation des smartphones en médecine générale en Picardie, Thèse n°2016-39. UFR de médecine d’Amiens: Université de Picardie Jules Verne; 2016.
Edberg SC. Global Infectious Diseases and Epidemiology Network (GIDEON): a world wide Web-based program for diagnosis and informatics in infectious diseases. Clin Infect Dis. 2005;40(1):123–6.
Van den Ende J, Blot K, Kestens L, Van Gompel A, Van den Enden E. Kabisa: an interactive computer-assisted training program for tropical diseases. Med Educ. 1997;31(3):202–9.
D’Acremont V, Ambresin AE, Burnand B, Genton B. Practice guidelines for evaluation of Fever in returning travelers and migrants. J Travel Med. 2003;10(Suppl 2):S25-52.
Buss I, Genton B, D’Acremont V. Aetiology of fever in returning travellers and migrants: a systematic review and meta-analysis. J Travel Med. 2020;27(8):taaa207.
Perri-Moore S, Routen T, Shao AF, Rambaud-Althaus C, Swai N, Kahama-Maro J, D’Acremont V, Genton B, Mitchell M. Using an eIMCI-Derived Decision Support Protocol to Improve Provider-Caretaker Communication for Treatment of Children Under 5 in Tanzania. Glob Health Commun. 2015;1(1):41–7.
Roter D, Larson S. The Roter interaction analysis system (RIAS): utility and flexibility for analysis of medical interactions. Patient Educ Couns. 2002;46(4):243–51.
D’Agostino TA, Bylund CL. The Nonverbal Accommodation Analysis System (NAAS): initial application and evaluation. Patient Educ Couns. 2011;85(1):33–9.
Burt J, Abel G, Elmore N, Campbell J, Roland M, Benson J, Silverman J. Assessing communication quality of consultations in primary care: initial reliability of the Global Consultation Rating Scale, based on the Calgary-Cambridge Guide to the Medical Interview. BMJ Open. 2014;4(3):e004339.
Street RL Jr, Liu L, Farber NJ, Chen Y, Calvitti A, Weibel N, et al. Keystrokes, Mouse Clicks, and Gazing at the Computer: How Physician Interaction with the EHR Affects Patient Participation. J Gen Intern Med. 2018;33(4):423–8.
Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67(3):333–42.
Zill JM, Christalle E, Müller E, Härter M, Dirmaier J, Scholl I. Measurement of physician-patient communication–a systematic review. PLoS ONE. 2014;9(12):e112637.
Patrick K, Griswold WG, Raab F, Intille SS. Health and the mobile phone. Am J Prev Med. 2008;35(2):177–81.
Marcos M, Maldonado JA, Martínez-Salvador B, Boscá D, Robles M. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility. J Biomed Inform. 2013;46(4):676–89.
Khairat S, Marc D, Crosby W, Al Sanousi A. Reasons for physicians not adopting clinical decision support systems: critical analysis. JMIR Med Inform. 2018;6(2):e24.
Ryu S. Book Review: mHealth: New Horizons for Health through Mobile Technologies: Based on the Findings of the Second Global Survey on eHealth (Global Observatory for eHealth Series, Volume 3). Healthcare Informatics Research. 2012;18(3):231–3.
Thompson M. The environmental impacts of digital health. Digit Health. 2021;7:20552076211033420.
Brugel S, Postma-Nilsenová M, Tates K. The link between perception of clinical empathy and nonverbal behavior: The effect of a doctor’s gaze and body orientation. Patient Educ Couns. 2015;98(10):1260–5.
Mast MS. On the importance of nonverbal communication in the physician-patient interaction. Patient Educ Couns. 2007;67(3):315–8.
Acknowledgements
The authors would like to thank Prof. Senn and Dr. Jeannot for their review, Denis Roberge and Prof. Singy for their advice concerning the communicational aspect of the study, Mr. Brun for the logistical aspects, Ms. Bonsembiante for the transcription of the interviews and Ms. Viret for the coordination of the simulated patients. We would also like to acknowledge the commitment of physicians and simulated patients who donated their time to participate in this study.
Funding
Open access funding provided by University of Lausanne. This work was supported by the Centre for Primary Care and Public Health, University of Lausanne.
Author information
Authors and Affiliations
Contributions
J.V., V.D., O.S. and L.C. conceived and planned the experiments. J.V. and L.C. carried out the simulations. J.V., C.B., L.C. and V.D. contributed to the interpretation of the results. J.V. took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis and manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
The study was performed in accordance with relevant guidelines and regulations. All the experimental protocols were approved by the ethics committee of the Centre for Primary Care and Public Health, University of Lausanne. All the physicians and simulated patients provided written informed consent before they could participate in the study.
Competing interests
The authors declare no competing interests.
Consent for publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Vibert, J., Bourquin, C., De Santis, O. et al. Influence of the use of a tablet-based clinical decision support algorithm by general practitioners on the consultation process: the example of FeverTravelApp. BMC Digit Health 2, 59 (2024). https://doi.org/10.1186/s44247-024-00118-4