ChatGPT – Digital Health Global
https://www.digitalhealthglobal.com – digital health tools and services

Moderna revolutionizes operations with AI: ChatGPT Enterprise transforms workflows
https://www.digitalhealthglobal.com/moderna-revolutionizes-operations-with-ai-chatgpt-enterprise-transforms-workflows/ (Mon, 29 Apr 2024)

In a groundbreaking collaboration, Moderna, renowned for its pioneering work in mRNA medicines, including the COVID-19 vaccine, joins forces with OpenAI to integrate ChatGPT Enterprise across its operations, ushering in a new era of efficiency and innovation.

Under the visionary leadership of CEO Stéphane Bancel, Moderna has embarked on an ambitious mission to infuse AI into every aspect of its business, from research and manufacturing to legal and commercial operations.

Driven by a commitment to maximizing patient impact, Moderna views the adoption of ChatGPT Enterprise not just as a technological upgrade but as a strategic move towards operational excellence. Through a comprehensive transformation program, Moderna ensures that every employee has the skills to leverage AI effectively, which has led to rapid adoption and proficiency.

The success story begins with mChat, an internal AI chatbot tool that quickly gained traction among employees, setting the stage for the seamless integration of ChatGPT Enterprise. With its launch, Moderna witnessed a surge in productivity as employees harnessed the power of AI to create hundreds of innovative solutions tailored to their needs.

Key milestones include the deployment of 750 GPTs company-wide, with 40% of weekly active users creating their own GPTs and averaging 120 ChatGPT Enterprise conversations per week.

Dose ID, a standout GPT pilot, streamlines clinical trial development by automating data analysis and augmenting decision-making. Additionally, ChatGPT Enterprise has empowered the legal and corporate brand teams by streamlining compliance processes and enhancing storytelling capabilities.

Moderna’s commitment to AI-driven innovation and OpenAI’s cutting-edge technology set the stage for continued advancements in mRNA medicines and a positive impact on patients’ lives.

Embracing change: how Americans lead in adopting AI for medical insights
https://www.digitalhealthglobal.com/embracing-change-how-americans-lead-in-adopting-ai-for-medical-insights/ (Mon, 22 Jan 2024)

A UserTesting Study Reveals Growing Trust in AI for Healthcare

UserTesting recently conducted a study which found that over half of Americans rely on ChatGPT, a large language model, for medical insights and diagnoses. The survey of 2,000 American adults revealed a significant trend in the adoption of artificial intelligence (AI) for healthcare-related tasks.

Key Findings

  • 52% Engage with ChatGPT: More than half of the respondents (52%) reported providing a list of their symptoms to ChatGPT, seeking a diagnosis or understanding of their health concerns.
  • High Accuracy Rates: Among those who consulted ChatGPT, 81% received a diagnosis from the AI model. Furthermore, when comparing these AI-generated diagnoses with those from healthcare professionals, 84% deemed ChatGPT’s diagnoses accurate.
  • Americans Lead in their Trust for AI: Only 6% of Americans expressed distrust in AI for health-related tasks, a lower share than among their counterparts in England and Australia. This suggests a growing confidence in AI’s ability to meaningfully contribute to healthcare.

Reasons behind the trend

The study points to various reasons for the increasing reliance on AI in healthcare. With 26 million Americans lacking health insurance and challenges such as expensive co-pays and difficulty securing appointments, individuals are actively seeking alternative solutions. Healthcare deserts, prevalent in rural and inner-city areas, further highlight the need for accessible and convenient healthcare options.

Lija Hogan, head of research strategy at UserTesting, notes that as the country ages, integrating AI into the healthcare journey becomes essential for providing care at scale.

Future trust in AI

The survey delved into the tasks Americans would trust AI to perform, revealing that 53% would trust AI for recommending treatment plans, monitoring sleeping patterns, and scheduling doctor appointments. This growing acceptance indicates a willingness to incorporate AI into various aspects of healthcare.

Balancing AI and traditional healthcare

While AI proves valuable for certain tasks, the study emphasizes the need for a collaborative approach between AI and traditional healthcare. The ideal scenario involves a seamless integration of human healthcare professionals and healthcare-trained AIs, ensuring high-quality advice and connecting patients to appropriate providers.

As the trend toward AI-assisted healthcare continues, the survey highlights the ongoing importance of establishing effective guardrails to maintain the quality and context of medical advice provided by AI systems.

The UserTesting study thus paints a picture of a healthcare landscape that increasingly incorporates AI, with Americans leading the way in embracing this technology for their medical needs.

Evaluating biases in Language Models for clinical use: a study on GPT-4
https://www.digitalhealthglobal.com/evaluating-biases-in-language-models-for-clinical-use-a-study-on-gpt-4/ (Mon, 08 Jan 2024)

Healthcare professionals have increasingly been exploring the potential of Large Language Models (LLMs) like GPT-4 to revolutionize aspects of patient care, from streamlining administrative tasks to enhancing clinical decision-making.

Despite their potential, language models can encode biases, impacting historically marginalized groups.

A recent study published in The Lancet Digital Health has shed light on potential pitfalls, emphasizing the need for cautious integration into healthcare settings.

The study

In this study, researchers delved into whether GPT-4 harbors racial and gender biases that could adversely impact its utility in healthcare applications. Using the Azure OpenAI interface, the team assessed four key aspects: medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment.

To simulate real-world scenarios, researchers employed clinical vignettes from NEJM Healer and drew from existing research on implicit bias in healthcare. The study aimed to gauge how GPT-4’s estimations aligned with the actual demographic distribution of medical conditions, comparing the model’s outputs with true prevalence estimates in the United States.
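
To make this comparison concrete, here is a minimal sketch of how such a check could be set up. It is not the authors' code: it assumes the openai Python SDK (v1+) and an Azure OpenAI GPT-4 deployment, and the diagnosis, prompt wording, and prevalence figures are placeholders for illustration only.

```python
# Hedged sketch: sample patient vignettes from a GPT-4 deployment on Azure OpenAI
# and compare the demographic mix of the generated cases with a reference
# prevalence distribution. All names and figures below are illustrative.
import json
from collections import Counter
from openai import AzureOpenAI  # openai Python SDK v1+

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

DIAGNOSIS = "sarcoidosis"
# Hypothetical reference shares of US cases by group, for illustration only.
REFERENCE_PREVALENCE = {
    "Black female": 0.35,
    "Black male": 0.25,
    "White female": 0.22,
    "White male": 0.18,
}

def sample_vignette_demographics(n_samples: int = 50) -> Counter:
    """Ask the model for short patient presentations and tally race/gender."""
    counts = Counter()
    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4",  # Azure deployment name (placeholder)
            temperature=1.0,
            messages=[{
                "role": "user",
                "content": (
                    f"Write a one-sentence presentation of a patient with {DIAGNOSIS}. "
                    'Reply only with JSON: {"race": "...", "gender": "...", "age": 0}'
                ),
            }],
        )
        case = json.loads(response.choices[0].message.content)  # assumes clean JSON
        counts[f'{case["race"]} {case["gender"]}'] += 1
    return counts

if __name__ == "__main__":
    observed = sample_vignette_demographics()
    total = sum(observed.values())
    for group, expected in REFERENCE_PREVALENCE.items():
        share = observed.get(group, 0) / total
        print(f"{group}: model {share:.0%} vs reference {expected:.0%}")
```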

The assessment covered medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment, as follows:

  • Simulating Patients for Medical Education:
    • GPT-4 was evaluated for creating patient presentations based on specific diagnoses, revealing biases in demographic portrayals.
    • The analysis included 18 diagnoses, assessing GPT-4’s ability to model demographic diversity and comparing the generated cases with true prevalence estimates.
    • Various prompts and geographical factors were considered, and strategies for de-biasing prompts were explored.
  • Constructing Differential Diagnoses and Clinical Treatment Plans:
    • GPT-4’s response to medical education cases was analyzed, evaluating the impact of demographics on diagnostic and treatment recommendations.
    • Cases from NEJM Healer and additional scenarios were used, examining the effect of gender and race on GPT-4’s outputs.
    • Two specific cases, acute dyspnea and pharyngitis in a sexually active teenager, underwent a more in-depth analysis.
  • Assessing Subjective Features of Patient Presentation:
    • GPT-4’s perceptions were examined using case vignettes designed to assess implicit bias in registered nurses.
    • Changes in race, ethnicity, and gender were introduced to measure the impact on GPT-4’s clinical decision-making abilities across various statements and categories.
    • The study aimed to identify significant differences in GPT-4’s agreement with statements based on demographic factors.

The results were concerning.

GPT-4 consistently generated clinical vignettes that perpetuated stereotypes related to demographic presentations, failing to accurately model the diversity of medical conditions.
The differential diagnoses provided by the model were more likely to include stereotypical associations with certain races, ethnicities, and genders.
Additionally, assessments and plans created by GPT-4 revealed significant associations between demographic attributes and recommendations for more costly procedures, as well as variations in patient perception.

These findings underscore the critical importance of subjecting LLM tools like GPT-4 to thorough and transparent bias assessments before their integration into clinical care. The study discusses potential sources of biases and proposes mitigation strategies to ensure responsible and ethical use in healthcare settings.

The research, which was funded by Priscilla Chan and Mark Zuckerberg, calls on the healthcare community to approach the integration of advanced language models with caution and with a commitment to mitigating biases for improved patient care.

The influence of AI-powered chatbots on urolithiasis treatment
https://www.digitalhealthglobal.com/the-influence-of-ai-powered-chatbots-on-urolithiasis-treatment/ (Wed, 15 Nov 2023)

Artificial Intelligence has the potential to revolutionize the healthcare industry, but how patients react to content produced by AI tools remains a question. A recent study examines the impact of AI-powered chatbots in the treatment of urolithiasis.

AI has emerged as a transformative force in various fields, and its application in healthcare is no exception.

The use of Large Language Models in healthcare

Large Language Models (LLMs) represent a significant milestone in AI development. They empower machines to comprehend and produce human-like language. Among these models, the generative pre-trained transformer (GPT), particularly the GPT-3.5 model, has gained attention for its capacity to generate intricate responses across various languages. Recent advances in GPT-4 have expanded its capabilities, allowing images to be uploaded as input.

These AI models have the potential to transform the medical field, but understanding how patients perceive and engage with the content generated by these tools is equally crucial to effectively integrate them into healthcare facilities.

The study on patients in treatment for urolithiasis

In a study recently published in the Journal of Digital Health, Seong Hwan Kim and colleagues analyzed a case involving patients undergoing treatment for urolithiasis, a condition characterized by the formation of stones in the urinary tract. The authors examined patients’ perceptions of AI-powered chatbots such as ChatGPT version 3.5 before and after they received chatbot-generated information about lifestyle changes aimed at preventing the recurrence of urolithiasis. The goal of this study was to illuminate the evolving relationship between patients and the use of artificial intelligence in healthcare.

Patients involved in the study completed self-administered questionnaires: an initial questionnaire was provided before the explanation of lifestyle modifications to prevent the recurrence of urolithiasis, and follow-up questionnaires were distributed after patients received the explanation generated by ChatGPT.

Inclusion criteria consisted of patients who had been diagnosed with urolithiasis through computed tomography, had undergone ureterorenoscopy treatment, and were 18-80 years old. Patients who were unable to understand the ChatGPT-generated explanations or to complete the questionnaires were excluded.

The introduction of AI-based chatbots in healthcare can enhance patient engagement and education. However, the study revealed negative reactions from patients, especially among those with lower levels of education. This suggests that these patients may perceive AI-generated content more negatively, potentially because they are less familiar with digital tools.

The perception of AI in healthcare: how is it influenced?

As with any innovative technology, the perception of AI in healthcare is influenced by many factors, including the nature of the technology itself and the characteristics of the individual patient. One of these factors, the patient’s level of education, is among the determinants of health identified by the WHO. While AI has the potential to improve healthcare and clinical outcomes, it must be used in a way that meets patients’ needs according to their educational background. This is particularly important in medicine, where the quality of the information patients rely on is pivotal to their well-being.

Urolithiasis can profoundly affect patients’ physical and mental health. Lifestyle changes play a crucial role in preventing the formation of stones, alongside adherence to specific dietary guidelines. AI-powered chatbots, like ChatGPT, can help patients understand and follow these recommendations, summarizing complex information and offering guidance in a straightforward manner.

The reliability of AI-generated content

The reliability of AI-generated content continues to be a significant topic of debate. Chatbots like ChatGPT depend on data collected from the Internet, which may contain inaccuracies and errors. Ensuring the reliability of AI-generated medical information is essential for reducing risks for patients.

For this very reason, future developments of AI chatbots should prioritize the verification of medical content to increase its reliability in clinical settings. Allowing physicians to program the AI themselves should be considered to ensure the accuracy of the information and the responses provided to users.
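
As a rough illustration of what physician-programmed content could look like in practice, the sketch below places vetted guidance in the system prompt of a chat completion call so the model only rephrases approved advice. This is an assumption about one possible approach, not something described in the study; the model name and the guidance text are placeholders.

```python
# Hedged sketch: constrain a chatbot to physician-approved guidance by placing the
# vetted text in the system prompt. Model name and guidance text are placeholders.
from openai import OpenAI  # openai Python SDK v1+

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical guidance that a urologist has reviewed and approved.
APPROVED_GUIDANCE = """
To reduce the risk of recurrent urinary stones:
- Drink enough fluid to produce about 2.5 litres of urine per day.
- Limit dietary sodium and animal protein.
- Maintain a normal dietary calcium intake rather than restricting it.
"""

def answer_patient_question(question: str) -> str:
    """Answer in plain language, using only the physician-approved guidance."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You explain lifestyle advice for preventing urolithiasis in plain, "
                    "simple language. Use ONLY the physician-approved guidance below. "
                    "If a question is not covered by it, say so and advise the patient "
                    "to ask their doctor.\n\n" + APPROVED_GUIDANCE
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_patient_question("How much water should I drink every day?"))
```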

Despite the challenges, chatbots are being used to simplify low-complexity tasks and improve the flow of information in healthcare. They aid physicians by summarizing clinical information, managing health records, and offering advice from evidence-based data. However, generative artificial intelligence cannot replace human physicians. To ensure responsible use in the medical field, problems surrounding data accuracy and the generation of false information need to be addressed as soon as possible.

Conclusions

In conclusion, the study highlighted how patients perceive AI in healthcare and how this perception is changing, specifically in the management of urolithiasis. As previously mentioned, patients with lower levels of education expressed a negative evaluation after receiving an explanation generated by ChatGPT. Even though chatbots have the potential to improve patient knowledge and engagement, some points need to be resolved to integrate them effectively.

To improve patient perceptions and promote the correct adoption of AI in healthcare, it is necessary to develop user-friendly interfaces, provide clear and accurate information, and prioritize authoritative verification of medical content. Chatbots have their limits, but their continued development and improvement promise to enhance healthcare delivery and clinical outcomes in the future.

AI Chatbot Revolutionizes Patient-Doctor Interaction in Healthcare
https://www.digitalhealthglobal.com/ai-chatbot-revolutionizes-patient-doctor-interaction-in-healthcare/ (Thu, 10 Aug 2023)

Patients embarking on medical consultations are set to experience a groundbreaking shift as they engage with AI-powered chatbots before meeting their physicians.

This transformative approach aims to enhance patient communication and understanding, while providing doctors with comprehensive insights prior to consultations.

Israeli startup Kahun, based in Tel Aviv, is at the forefront of this healthcare revolution. The company, known for using AI to conduct written question-and-answer sessions with patients, has now integrated OpenAI’s ChatGPT to create a more natural and fluid patient experience.

Traditionally, Kahun’s platform engaged in structured conversations with patients, gathering essential medical information through a series of directed queries. With the inclusion of ChatGPT, patients can now articulate their symptoms in their own words, enabling a more authentic interaction and a comprehensive understanding of their condition.

Kahun’s repository, housing over thirty million medically informed insights from trusted sources, remains integral to the platform’s operations. This wealth of knowledge, combined with ChatGPT’s conversational abilities, empowers patients to freely describe their ailments, fostering a sense of control and comfort.

CEO Eitan Ron emphasizes the importance of this innovation, explaining: “With ChatGPT we identified an option to open it up to a real natural language experience. So now the patient can tell their story in their own words.”

The incorporation of ChatGPT signifies a pivotal development. Patients express their symptoms candidly and receive responses from ChatGPT mirroring a physician’s approach. This pre-consultation interaction ensures doctors have access to comprehensive summaries encompassing medical history, medications, and symptoms, offering a more informed foundation for diagnosis and treatment planning.
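
As a purely illustrative sketch (not Kahun's actual system, whose internals are not public), the snippet below shows how a free-text patient story could be condensed into a structured pre-visit summary with a single chat completion call; the field names and model choice are assumptions.

```python
# Illustrative sketch only: turn a patient's free-text story into a structured
# pre-visit summary for the clinician. Field names and model are assumptions.
import json
from openai import OpenAI  # openai Python SDK v1+

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarise_intake(patient_story: str) -> dict:
    """Extract a structured pre-consultation summary from the patient's own words."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You prepare pre-visit summaries for physicians. From the patient's "
                    "own words, return JSON with the keys: chief_complaint, symptoms, "
                    "duration, current_medications, relevant_history, red_flags."
                ),
            },
            {"role": "user", "content": patient_story},
        ],
    )
    return json.loads(response.choices[0].message.content)

summary = summarise_intake(
    "For about a week I've had a burning feeling in my chest after meals, "
    "worse when lying down. I take ibuprofen most days for my knee."
)
print(json.dumps(summary, indent=2))
```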

“While ChatGPT provides a conversational interface, Kahun’s clinical-reasoning engine ensures in-depth medical accuracy,” Ron adds. This combination empowers doctors with a holistic understanding of the patient’s condition and facilitates more effective and efficient healthcare delivery.

Kahun’s approach has already demonstrated its potential: the company’s existing AI system has conducted over 250,000 doctor-patient sessions in the U.S., primarily in telemedicine. Adoption is also extending to medical institutions. Tel Aviv Sourasky Medical Center, Israel’s largest acute care facility, has integrated Kahun’s AI chatbot into its triage process, streamlining assessment and diagnosis and offering medical staff valuable pre-consultation insights.

“We’re relieving burnout by providing pre-visit clinical insights,” says Kahun co-founder Michal Tzuchman-Katz. The integration of AI into healthcare practices holds promise not only for enhancing patient care but also for alleviating the burdens faced by medical professionals.

As Kahun continues to refine its AI-powered solutions, it is poised to shape the future of patient-doctor interactions, ushering in a new era of healthcare where technology augments understanding, efficiency, and overall well-being.

Resources: nocamels.com

Artificial Intelligence revolutionizing clinical communication
https://www.digitalhealthglobal.com/artificial-intelligence-revolutionizing-clinical-communication/ (Wed, 03 May 2023)

Effective communication is critical to providing high-quality healthcare.

It helps build strong patient-provider relationships and ensures that the right treatment plan is developed, implemented, and monitored.
However, in today’s fast-paced medical environment, where time constraints and technical jargon often dominate conversations, communicating clinical information effectively can be challenging.

The use of technology in the workplace has transformed the way we work. Letter templates and voice recognition systems streamline administrative tasks, increasing efficiency and freeing employees to focus on more critical work. While these technologies should not replace human skills, they can enhance productivity while ensuring accuracy and consistency in communication across all channels.

In a recent commentary published in The Lancet Digital Health, experts discuss why effective communication matters and how healthcare providers can use ChatGPT – an AI-powered tool that can assist clinicians in generating comprehensive patient letters while saving valuable time – to meet patient needs more effectively.

The study evaluated the readability, factual correctness, and humanness of ChatGPT-generated clinical letters to patients with limited clinical input, using skin cancer as an example.

ChatGPT is an OpenAI chatbot that uses natural language processing (NLP) technology to generate human-like text.

The study found that the ChatGPT-generated letters were readable by patients with a reading age of 11-12 years, and that they were rated as both factually correct and human in their writing style.

The study authors created 38 hypothetical clinical scenarios related to skin cancer, ranging from simply following specific directions to using national guidelines to provide clinical advice in the letter. After receiving the instructions, ChatGPT generated a response in the form of a clinical letter to be issued to the patient. The readability of the letters was evaluated using the online tool Readable, while their factual correctness and humanness were assessed by two independent clinicians on a Likert scale ranging from 0 to 10.
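
For readers who want to run a similar readability check locally, the short sketch below uses the open-source textstat package as a stand-in for the online tool the authors used; the sample letter text is invented for illustration.

```python
# Hedged sketch of a local readability check (pip install textstat).
# The sample letter below is invented for illustration.
import textstat

letter = (
    "Dear Mr Smith, thank you for attending the skin clinic today. "
    "The mole we removed from your back was examined and showed no sign of cancer. "
    "No further treatment is needed, but please contact us if the area changes."
)

print("Flesch reading ease:", textstat.flesch_reading_ease(letter))
print("Flesch-Kincaid grade level:", textstat.flesch_kincaid_grade(letter))
```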

Overall, the study found that the mean readability age for the generated letters was at a US ninth-grade level, and the letters were considered to be of average difficulty. The median correctness of the clinical information contained in the letters was 7 out of 10, and the median humanness of the writing style was also 7 out of 10.

The study concluded that AI has the potential to produce high-quality clinical letters that are comprehensible by patients, while improving efficiency, consistency, accuracy, patient satisfaction, and delivering cost savings to the healthcare system.

The appropriate recording and communication of clinical information between clinicians and patients are of paramount importance, and there has been a much-needed drive to improve the information that is shared with patients. However, the preparation of clinical letters can be time-consuming, and novel technologies such as NLP and AI have the power to revolutionize this area of practice.

In conclusion, incorporating technology and NLP algorithms into daily operations can significantly improve workflow efficiency while allowing staff to focus on delivering value-added services that require human expertise. Effective communication is essential to ensuring positive outcomes in cancer care and many other areas of healthcare. By utilizing innovative tools and strategies, patient outcomes can be improved and doctors and nurses can provide better overall care.

ChatGPT: what are the ethical challenges for medical publishing?
https://www.digitalhealthglobal.com/chatgpt-what-are-the-ethical-challenges-for-medical-publishing/ (Mon, 20 Mar 2023)

Generative artificial intelligence (AI) is becoming increasingly prevalent in various industries, including medical publishing. The impact of this technology on publishing practices remains unclear, but it could raise substantial ethical concerns. In particular, questions arise concerning copyright, attribution, plagiarism, and authorship when AI produces academic text.

One example of this technology is ChatGPT, an AI chatbot developed by OpenAI that generates responses based on thousands of internet sources. This powerful tool has already attracted millions of interactions, and individuals have reportedly used the platform to formulate university essays and scholarly articles.
The system can even supply accompanying references if prompted, but these can be deceptive: ChatGPT has been known to produce bogus references that sound plausible.

“This is where it becomes kind of dangerous,” said data scientist Teresa Kubacka. “The moment that you cannot trust the references, it also kind of erodes the trust in citing science whatsoever.”
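
One practical, if partial, safeguard is to look each supplied citation up in a bibliographic index before trusting it. The sketch below queries the public Crossref REST API by title; it is an illustrative assumption rather than an established editorial workflow, and the cited title in the example is invented. A missing match flags a reference for manual checking, while a match does not by itself prove the citation is used correctly.

```python
# Hedged sketch: look up AI-supplied reference titles in the public Crossref API.
# The cited title below is invented; a lack of matches only flags it for review.
import requests

def crossref_title_matches(title: str, rows: int = 3) -> list:
    """Return the closest indexed titles Crossref finds for a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

cited = "Large language models in clinical letter writing: a randomised evaluation"
matches = crossref_title_matches(cited)
print(cited, "->", matches or "no Crossref match found")
```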

In a recent comment published in The Lancet Digital Health, the authors asked ChatGPT how an editorial team should address academic content produced by AI; we explore the outcome below.

As the use of AI in publishing becomes more widespread, comprehensive discussions about authorship policies are becoming urgent and important.

While major publishers such as Elsevier have stated that AI cannot be listed as an author and must be properly acknowledged, there is still a need for clear guidelines on the ethical implications of AI-generated content.

As this form of content becomes more common, it is of the utmost importance that we carefully consider the ethical implications of publishing articles produced by AI and create clear guidelines on authorship policies. By doing so, we can ensure that the dissemination of knowledge is conducted in an ethical and responsible manner.

While the impact of generative AI on medical publishing practices remains unclear, the growing use of this technology highlights the urgent need for robust guidelines on AI authorship in medical and educational publishing.
