chatbot – Digital Health Global https://www.digitalhealthglobal.com digital health tools and services Mon, 25 Mar 2024 11:53:23 +0000 en-GB

Evaluating biases in Language Models for clinical use: a study on GPT-4 https://www.digitalhealthglobal.com/evaluating-biases-in-language-models-for-clinical-use-a-study-on-gpt-4/ Mon, 08 Jan 2024 14:32:29 +0000 https://www.digitalhealthglobal.com/?p=12070 Healthcare professionals have increasingly been exploring the potential of Large Language Models (LLMs) like GPT-4 to revolutionize aspects of patient care, from streamlining administrative tasks to enhancing clinical decision-making.

Despite their potential, language models can encode biases, impacting historically marginalized groups.

A recent study published in The Lancet Digital Health has shed light on potential pitfalls, emphasizing the need for cautious integration of these models into healthcare settings.

The study

In this study, researchers delved into whether GPT-4 harbors racial and gender biases that could adversely impact its utility in healthcare applications. Using the Azure OpenAI interface, the team assessed four key aspects: medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment.

To simulate real-world scenarios, researchers employed clinical vignettes from NEJM Healer and drew from existing research on implicit bias in healthcare. The study aimed to gauge how GPT-4’s estimations aligned with the actual demographic distribution of medical conditions, comparing the model’s outputs with true prevalence estimates in the United States.

The assessment was organized into three experiments:

  • Simulating Patients for Medical Education:
    • GPT-4 was evaluated for creating patient presentations based on specific diagnoses, revealing biases in demographic portrayals.
    • The analysis included 18 diagnoses, assessing GPT-4’s ability to model demographic diversity and comparing the generated cases with true prevalence estimates.
    • Various prompts and geographical factors were considered, and strategies for de-biasing prompts were explored.
  • Constructing Differential Diagnoses and Clinical Treatment Plans:
    • GPT-4’s response to medical education cases was analyzed, evaluating the impact of demographics on diagnostic and treatment recommendations.
    • Cases from NEJM Healer and additional scenarios were used, examining the effect of gender and race on GPT-4’s outputs.
    • Two specific cases, acute dyspnea and pharyngitis in a sexually active teenager, underwent a more in-depth analysis.
  • Assessing Subjective Features of Patient Presentation:
    • GPT-4’s perceptions were examined using case vignettes designed to assess implicit bias in registered nurses.
    • Changes in race, ethnicity, and gender were introduced to measure the impact on GPT-4’s clinical decision-making abilities across various statements and categories.
    • The study aimed to identify significant differences in GPT-4’s agreement with statements based on demographic factors.
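The core of the first comparison can be sketched in a few lines: measure how far the demographic mix of model-generated vignettes drifts from a reference prevalence distribution. The numbers below are invented for illustration and are not figures from the study.

```python
from collections import Counter

def demographic_gap(generated_cases, true_prevalence):
    """Return, per demographic label, the observed share among
    generated vignettes minus the expected share from a reference
    prevalence distribution (expected shares sum to 1.0)."""
    counts = Counter(generated_cases)
    total = len(generated_cases)
    return {
        label: counts.get(label, 0) / total - expected
        for label, expected in true_prevalence.items()
    }

# Hypothetical example: a condition with a roughly even true split
# that the model portrays as 90% one group.
gaps = demographic_gap(
    ["female"] * 90 + ["male"] * 10,
    {"female": 0.5, "male": 0.5},
)
```

A gap near zero for every label would indicate that the generated cases track real-world prevalence; large positive or negative gaps are the stereotyped over- and under-representation the study reports.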

The results were concerning.

GPT-4 consistently generated clinical vignettes that perpetuated stereotypes related to demographic presentations, failing to accurately model the diversity of medical conditions.
The differential diagnoses provided by the model were more likely to include stereotypical associations with certain races, ethnicities, and genders.
Additionally, assessments and plans created by GPT-4 revealed significant associations between demographic attributes and recommendations for more costly procedures, as well as variations in patient perception.

These findings underscore the critical importance of subjecting LLM tools like GPT-4 to thorough and transparent bias assessments before their integration into clinical care. The study discusses potential sources of biases and proposes mitigation strategies to ensure responsible and ethical use in healthcare settings.

The research, funded by Priscilla Chan and Mark Zuckerberg, calls on the healthcare community to approach the integration of advanced language models with caution and with a commitment to mitigating biases for improved patient care.

The influence of AI-powered chatbots on urolithiasis treatment https://www.digitalhealthglobal.com/the-influence-of-ai-powered-chatbots-on-urolithiasis-treatment/ Wed, 15 Nov 2023 08:54:01 +0000 https://www.digitalhealthglobal.com/?p=11668 Artificial Intelligence has the potential to revolutionize the healthcare industry, but how patients react to content produced by AI tools remains a question. A recent study examines the impact of AI-powered chatbots in the treatment of urolithiasis.

AI has emerged as a transformative force in various fields, and its application in healthcare is no exception.

The use of Large Language Models in healthcare

Large Language Models (LLMs) represent a significant milestone in AI development. They empower machines to comprehend and produce human-like language. Among these models, the generative pre-trained transformer (GPT), particularly the GPT-3.5 model, has gained attention for its capacity to generate intricate responses across various languages. Recent advances in GPT-4 have expanded its capabilities, allowing images to be uploaded as input.

These AI models have the potential to transform the medical field, but understanding how patients perceive and engage with the content generated by these tools is equally crucial to effectively integrate them into healthcare facilities.

The study on patients in treatment for urolithiasis

In a study recently published in the Journal of Digital Health, Seong Hwan Kim and colleagues analyzed a case study involving patients undergoing treatment for urolithiasis, a condition characterized by the formation of stones in the urinary tract. The authors examined how AI-powered chatbots, such as ChatGPT version 3.5, affected patients’ perceptions before and after they received information about lifestyle changes aimed at preventing the recurrence of urolithiasis. The goal of the study was to illuminate the evolving relationship between patients and the use of artificial intelligence in healthcare.

Patients involved in the study completed questionnaires via a self-administered survey: an initial questionnaire before the explanation of lifestyle modifications to prevent the recurrence of urolithiasis, and follow-up questionnaires after patients received the explanation generated by ChatGPT.

Inclusion criteria consisted of patients who had been diagnosed with urolithiasis through computed tomography, had undergone ureterorenoscopy treatment, and who were 18-80 years old. Patients who were unable to understand the ChatGPT indications, or to complete the questionnaires, were excluded.
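As a rough illustration, the inclusion and exclusion criteria above could be expressed as a screening function; the field names are hypothetical, not taken from the published protocol.

```python
def eligible(patient):
    """Apply the study's stated criteria to one patient record.
    `patient` is a dict; the keys used here are illustrative
    assumptions, not the study's actual data schema."""
    return (
        patient.get("ct_confirmed_urolithiasis", False)      # diagnosed via CT
        and patient.get("underwent_ureterorenoscopy", False)  # treated with URS
        and 18 <= patient.get("age", 0) <= 80                 # age window
        and patient.get("can_complete_questionnaire", False)  # exclusion rule
    )

cohort = [
    {"age": 45, "ct_confirmed_urolithiasis": True,
     "underwent_ureterorenoscopy": True, "can_complete_questionnaire": True},
    {"age": 17, "ct_confirmed_urolithiasis": True,
     "underwent_ureterorenoscopy": True, "can_complete_questionnaire": True},
]
included = [p for p in cohort if eligible(p)]  # only the 45-year-old qualifies
```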

The introduction of AI-based chatbots in healthcare can enhance patient engagement and education. However, the study found negative reactions among patients, especially those with lower levels of education, suggesting that these patients may perceive AI-generated content more negatively, potentially due to limited familiarity with digital tools.

The perception of AI in healthcare: how is it influenced?

Like any potentially innovative technology, the perception of AI in healthcare is influenced by many factors, including the nature of the technology itself and the characteristics of the individual patient. Education is, notably, one of the determinants of health identified by the WHO. While AI has the potential to improve healthcare and clinical outcomes, it must be used in a way that meets the needs of patients according to their educational background. This is particularly important in the medical field, where patients who rely on AI-generated information need that information to be accurate for the sake of their well-being.

Urolithiasis can profoundly affect patients’ physical and mental health. Lifestyle changes play a crucial role in preventing the formation of stones, alongside adherence to specific dietary guidelines. AI-powered chatbots, like ChatGPT, can help patients understand and follow these recommendations, summarizing complex information and offering guidance in a straightforward manner.

The reliability of AI-generated content

The reliability of AI-generated content continues to be a significant topic of debate. Chatbots like ChatGPT depend on data collected from the Internet, which may contain inaccuracies and errors. Ensuring the reliability of AI-generated medical information is essential for reducing risks for patients.

For this very reason, future developments of AI chatbots should prioritize the verification of medical content to increase its reliability in clinical settings. Allowing physicians to program the AI themselves should be considered to ensure the accuracy of the information and the responses provided to users.

Despite the challenges, chatbots are being used to simplify low-complexity tasks and improve the flow of information in healthcare. They aid physicians by summarizing clinical information, managing health records, and offering advice from evidence-based data. However, generative artificial intelligence cannot replace human physicians. To ensure responsible use in the medical field, problems surrounding data accuracy and the generation of false information need to be addressed as soon as possible.

Conclusions

In conclusion, the study highlighted how patients perceive AI in healthcare and how this perception is changing, specifically in the management of urolithiasis. As previously mentioned, patients with lower levels of education expressed a negative evaluation after receiving an explanation generated by ChatGPT. Even though chatbots have the potential to improve patient knowledge and engagement, some points need to be resolved to integrate them effectively.

To improve patient perceptions and promote the correct adoption of AI in healthcare, it is necessary to develop user-friendly interfaces, provide clear and accurate information, and prioritize authoritative verification of medical content. Chatbots have their limits, but their continued development and improvement promise to enhance healthcare delivery and clinical outcomes in the future.

Large Language Models: Revolutionizing Unstructured Data Analysis in Healthcare https://www.digitalhealthglobal.com/large-language-models-revolutionizing-unstructured-data-analysis-in-healthcare/ Thu, 05 Oct 2023 11:48:00 +0000 https://www.digitalhealthglobal.com/?p=13108 In the vast world of healthcare, the amount of unstructured data (medical records, clinical research papers, scientific publications, clinical trials, and more) can be overwhelming.

Extracting valuable information and knowledge from this unstructured data has long been a challenge, hindering the progress of medical research, diagnosis, and patient care.

However, with the advent of large-scale language models (LLMs), a breakthrough has occurred. These powerful artificial intelligence models have broken barriers, paving the way for unprecedented advances in the analysis of unstructured data in healthcare.

These models have incredible potential and are already transforming the data analytics landscape.

What are Large Language Models?

In recent years, large language models have emerged as an innovative development in artificial intelligence (AI) technology and natural language processing (NLP), transforming several fields.

They are designed to process and understand human language by exploiting large amounts of textual data. By learning patterns, relationships, and contextual information from this data, these models gain the ability to generate coherent and contextually appropriate responses and perform various language-related tasks.

LLMs are built from networks of interconnected artificial neurons loosely modelled on the human brain. They undergo extensive training on enormous datasets containing billions of sentences from diverse sources such as books, articles, and websites.

Another vital aspect of these models is their immense number of parameters, which can range from millions to billions. These parameters enable the models to grasp the intricacies of language, resulting in the generation of contextually relevant and high-quality text.
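The statistical intuition behind these models can be illustrated with a toy bigram model: count which word follows which, then predict the most frequent successor. Real LLMs use billions of neural parameters rather than raw counts, but the objective, predicting the next token from context, is the same. The corpus below is invented for illustration.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for every word, how often each other word follows it.
    This is the crudest possible 'language model'."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Predict the most frequent successor of `word`, or None."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the patient reports chest pain",
    "the patient denies fever",
]
model = train_bigram(corpus)  # "the" is followed by "patient" twice
```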

Real-world Examples of Large Language Models in Healthcare Analytics

Disease diagnosis and treatment recommendations
In a study, researchers trained a language model using a large amount of medical literature and medical records. The model was then used to analyze complex patient cases, accurately diagnosing rare diseases and recommending tailored treatment strategies based on the latest research findings.

Literature review and evidence synthesis
Researchers have used these models to analyze large volumes of scientific literature, enabling comprehensive reviews and evidence-based assessments. By automating the extraction and synthesis of information, language models accelerate the identification of relevant studies, summarize key findings, and support evidence-based decision making.
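As a simple stand-in for this kind of synthesis, a frequency-based extractive summarizer scores each sentence by the frequency of its words and keeps the top-scoring ones. LLM-based synthesis is far more capable, but this sketch conveys the idea of automated extraction; the example text is invented.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Minimal frequency-based extractive summarizer: score each
    sentence by the corpus-wide frequency of its words and return
    the top-scoring sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z]+", s.lower())),
        reverse=True,
    )
    return scored[:n_sentences]

text = ("Aspirin reduces pain. Aspirin reduces fever and pain. "
        "Placebo did nothing.")
top = extractive_summary(text)  # picks the sentence richest in frequent terms
```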

Medical image analysis and radiology
In many scenarios, models can interpret radiology reports and extract key findings, aiding radiologists in diagnosis. They can also help with automatic report generation, reducing reporting time and improving workflow efficiency in radiology.

Mental health support and chatbots
These models have been integrated into mental health support systems and chatbots, providing personalized assistance and resources to people. They are also able to initiate natural language conversations, understand emotional nuances, and provide support, information, and referrals for mental health issues.

Integrating Large Language Models in Life Sciences

LLMs are neither easily replicable nor affordable for all organizations. The cost of training GPT-4 is estimated to have approached $100 million, and it rises with the complexity of the model. As a result, large technology companies, including Google, Amazon, and OpenAI (backed by Microsoft, among others), have been the main players in this space.

Most users are therefore limited to working with these pre-trained models, adapting them to their needs through fine-tuning. For very specific domains, however, it is crucial to recognize that results and performance may fall substantially short of expectations.

Healthcare is a knowledge domain where many documents (scientific publications and the like) are publicly available; large language models have therefore already been trained on them and seem to work well. When, however, we submit private and highly specialized documents, performance may degrade: the LLM may fail to recognize concepts such as active ingredients, molecule names, or development processes that are internal knowledge.
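One way to build intuition for this gap is to check how many specialist terms fall outside a general-purpose vocabulary. Real tokenizers split unknown words into subword pieces rather than failing outright, so this is only a rough proxy; the vocabulary and terms below are toy examples.

```python
def oov_rate(terms, vocabulary):
    """Return the share of terms missing from `vocabulary`, plus the
    missing terms themselves. A rough proxy for how much of a
    specialist domain a general-purpose model has never seen."""
    unknown = [t for t in terms if t.lower() not in vocabulary]
    return len(unknown) / len(terms), unknown

general_vocab = {"patient", "treatment", "dose", "trial"}   # toy vocabulary
domain_terms = ["patient", "semaglutide", "lyophilization"]  # illustrative terms
rate, missing = oov_rate(domain_terms, general_vocab)
```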

Some LLMs, such as Google's BERT, have been specialized by universities and research centers through additional training on specific domains and released to the open-source community: BioBERT, MedBERT, and SciBERT, and more recently BioGPT, a version of the well-known GPT architecture verticalized on biomedical concepts.

It is therefore important to understand the scope of the intended use cases in order to choose the most suitable model, rather than defaulting to the mainstream ChatGPT.

The right process of development can thus be summarized as:

  • Identify the right use case: Assess business operations to identify areas where an LLM can add value.
  • Select the appropriate model: Choose an LLM that fits your needs, considering the complexity of the task, model capabilities and resource requirements.
  • Prepare and fine-tune data: Collect and, if necessary, pre-process relevant data to fine-tune the chosen model, ensuring that it is aligned with the business context and produces accurate, domain-specific results.
  • Plan integration with existing systems: Perform the integration of an LLM into existing business processes and technology infrastructure.
  • Monitor and evaluate performance: Continuously monitor the performance of the implemented LLM, using metrics such as accuracy, response time, and user satisfaction to identify areas for improvement.
  • Ethical and privacy considerations: Take into account potential ethical and privacy issues related to AI implementation, while ensuring compliance with data protection regulations and responsible use of AI technologies.
  • Promote a culture of AI adoption: Encourage understanding and acceptance of AI technologies throughout the company by providing training and resources for employees to embrace and leverage LLMs.
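The monitoring step above can be sketched as a small aggregator over interaction logs, tracking the metrics the checklist names: accuracy (here, against reviewer labels), response time, and user satisfaction. The log schema and threshold are assumptions for illustration, not a prescribed format.

```python
from statistics import mean

def summarise_llm_logs(interactions, latency_slo_s=2.0):
    """Aggregate accuracy, latency, and satisfaction from a list of
    interaction logs. Each log is a dict with `reviewer_approved`
    (bool), `latency_s` (float), and optional `user_rating` (1-5);
    these field names are illustrative assumptions."""
    correct = [i["reviewer_approved"] for i in interactions]
    latencies = [i["latency_s"] for i in interactions]
    ratings = [i["user_rating"] for i in interactions if i.get("user_rating")]
    return {
        "accuracy": sum(correct) / len(correct),
        "mean_latency_s": mean(latencies),
        "latency_slo_breaches": sum(l > latency_slo_s for l in latencies),
        "mean_user_rating": mean(ratings) if ratings else None,
    }

logs = [
    {"reviewer_approved": True, "latency_s": 1.2, "user_rating": 5},
    {"reviewer_approved": True, "latency_s": 2.8, "user_rating": 4},
    {"reviewer_approved": False, "latency_s": 0.9, "user_rating": 2},
]
report = summarise_llm_logs(logs)
```

Tracking these numbers over time is what turns the "monitor and evaluate" step from a one-off audit into a continuous process.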

Encouraging further exploration and experimentation

Ongoing research, development, and testing of language models are essential to fully unlock their potential in health data analytics, to ensure that data privacy and security standards are met, and to promote responsible use of AI technologies. Seamless integration of language models with existing healthcare systems and workflows is equally critical for widespread adoption. By developing interoperable platforms and APIs that allow easy access to the models and facilitate integration with electronic health records, clinical decision support systems, and other healthcare applications, the potential impact and usability of large language models can be maximized.

It’s clear that these technologies have disrupted the landscape of healthcare data analytics, providing healthcare providers with advanced capabilities to extract information, thus improving care, and driving medical research.

The way forward with Healthware

For years, we at Healthware have been following the evolution of artificial intelligence and have utilized our machine learning and data science expertise to help our customers.

The new LLM-based tools offer us and our customers new ways to accelerate, enhance, and develop processes, products, and projects. They won't make professionals obsolete; instead, they will empower them to work faster and more efficiently.

Our senior developers are already utilizing ChatGPT to speed up development work. Instead of researching documentation, the developer can ask the chatbot to help create a new component, which they can then review and integrate into the codebase. Chatbots are especially useful for more senior developers, who can adequately review the code and ensure it is suitable, working, and secure.

A similar approach allowed our designers to focus entirely on the core design. Looking toward the future, we could ask chatbots to generate ideas or sketches of characters and other visual assets. Ultimately, this design approach expedited the discovery process, allowing the designers to find the correct style and refine it.

And the number of opportunities just keeps growing. We can generate audio, video, text, images, code, and more with the current tools. The output usually cannot be used as-is at the moment, but it makes a great draft that our experts can finalize. As the technology evolves, more and more final production content will be generated with these tools. They have already opened up a new skillset of growing importance: prompt hacking, i.e. the ability to ask the right questions, with the proper context, in the right way, of the right chatbot, to get the best possible results.

The Health Innovation Ecosystem vs COVID-19: a landscape map by Frontiers Health and Healthware https://www.digitalhealthglobal.com/the-health-innovation-ecosystem-vs-covid-19-a-landscape-map-by-frontiers-health-and-healthware/ Thu, 28 May 2020 13:26:41 +0000 https://www.digitalhealthglobal.com/?p=3714 Created by a joint effort of the Frontiers Health and the Healthware Group teams, the map is a useful tool for those who want to discover the health innovation ecosystem response to the pandemic.

“We have worked to make the map as complete and accurate as possible, identifying the various areas of intervention in which health innovators have worked,” said Roberto Ascione, CEO of Healthware and Chairman of Frontiers Health.

The map divides the interventions by thematic areas:

  • remote monitoring
  • education & disease management
  • telehealth
  • screening, triage & virtual care
  • disease knowledge
  • diagnostics & detection
  • mental & behavioral health
  • disease tracking & predictive analysis
  • hubs & special initiatives

Check the interactive map here or download the high-res PDF version here.

The Coronavirus Diagnostic Chatbot by Paginemediche is now available in multiple languages https://www.digitalhealthglobal.com/the-coronavirus-diagnostic-chatbot-by-paginemediche-is-now-available-in-multiple-languages/ Fri, 03 Apr 2020 10:57:03 +0000 https://www.digitalhealthglobal.com/?p=3605 The Coronavirus emergency needs to be tackled seriously, but without panicking.

That is why correct information is the first step to preventing infection and containing the COVID-19 epidemic.

Since the beginning of the health emergency, Paginemediche, a start-up in the Healthware Ventures portfolio, has been committed to releasing free initiatives aimed at managing the crisis generated by COVID-19 and supporting patients and doctors.

On February 7th, Paginemediche made available an info-chat to check suspicious symptoms caused by the new coronavirus, following the guidelines of the Italian Ministry of Health and under the medical supervision of Dr. Emanuele Urbani, a General Practitioner in Milan. To date, there are about 90 thousand active chats, and the tool has also been adopted by many institutions on their public portals and used to support telephone triage in highlighting potential coronavirus cases.

Thanks to the collaboration with the Healthware team, the Paginemediche Coronavirus Chatbot is now available in a multilingual version (English, Italian, German, French, Spanish) and can be configured and used by health and other organizations.

To help provide a public service, Healthware Group has activated the Coronavirus Diagnostic Chatbot in English on its corporate website.

The widget invites users to start chatting with the bot, which can assess risk of exposure to and possible symptoms of COVID-19 through a series of short questions on behaviour and health.

By analysing the answers against the National/Provincial Health Guidelines, the chatbot provides recommendations on the behaviours to follow and the actions to take to monitor one's health status.
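A rule-based triage flow of this kind can be sketched as a function mapping yes/no answers to a recommendation. The questions and recommendations below are invented for illustration and do not reproduce the actual chatbot's guideline logic.

```python
def triage(answers):
    """Map a dict of yes/no answers to a recommendation string.
    The rules here are illustrative, not actual health guidance."""
    if answers.get("contact_with_confirmed_case") and answers.get("fever"):
        # highest-risk combination: exposure plus symptoms
        return "contact your local health authority by phone"
    if answers.get("fever") or answers.get("dry_cough"):
        # symptoms without known exposure
        return "self-isolate and monitor symptoms"
    # no reported risk factors
    return "follow general prevention guidance"
```

In a deployed chatbot, each branch would follow the applicable national or provincial guidelines rather than hard-coded strings, and the rule set would be updated as those guidelines change.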

It also facilitates the continuous collection and processing of epidemiological data useful to institutions for monitoring the spread of the disease and retrospective analysis.

Click here for more information.
