Groundbreaking deep-learning algorithm enhances prediction of postoperative mortality
https://www.digitalhealthglobal.com/groundbreaking-deep-learning-algorithm-enhances-prediction-of-postoperative-mortality/ (Tue, 09 Jan 2024)

In the United States, where more than 20 million surgeries occur annually, current methods for assessing risk before surgery, especially for predicting outcomes such as mortality, remain inadequate.

Despite ongoing advancements in guidelines and tools, such as biomarkers and other data, accurately predicting postoperative risks remains difficult. This underscores the need for simpler methods to assess risks before surgery, aiming to identify high-risk patients earlier and improve predictions for a diverse group undergoing surgery.

Deep-learning analyses offer a new opportunity to identify hidden risk markers and understand complex relationships using available clinical resources for risk prediction. One valuable resource is the 12-lead electrocardiogram (ECG), a cost-effective, non-invasive diagnostic test routinely done in preoperative settings.

Prior research has shown that using deep-learning algorithms on ECG waveforms can reveal clinical traits and outcomes not identified by standard ECG measures or expert human interpretations.

The hypothesis is that applying a deep-learning algorithm to a single preoperative ECG can effectively differentiate postoperative mortality outcomes and outperform established clinical preoperative assessment methods.
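Conceptually, such an algorithm slides learned filters across the raw ECG waveform, pools the resulting feature maps, and maps the pooled features to a risk probability. The toy sketch below illustrates that pipeline in plain Python; the synthetic waveform, kernels, and weights are all invented for illustration and bear no relation to the published model.

```python
import math
import random

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel across the waveform."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_avg_pool(xs):
    return sum(xs) / len(xs)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def risk_score(ecg_lead, kernels, weights, bias):
    """Map one ECG lead to a mortality-risk probability in [0, 1]."""
    features = [global_avg_pool(relu(conv1d(ecg_lead, k))) for k in kernels]
    logit = sum(w * f for w, f in zip(weights, features)) + bias
    return sigmoid(logit)

random.seed(0)
ecg = [math.sin(2 * math.pi * t / 50) for t in range(500)]  # toy waveform
kernels = [[random.gauss(0, 0.1) for _ in range(16)] for _ in range(4)]
weights = [random.gauss(0, 1) for _ in range(4)]
score = risk_score(ecg, kernels, weights, bias=0.0)
print(round(score, 3))
```

A real model would learn the kernels and weights from labelled outcomes and stack many convolutional layers, but the waveform-to-probability shape of the computation is the same.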

The Lancet Digital Health recently published a groundbreaking study by Cedars-Sinai Medical Center researchers, introducing a powerful deep-learning algorithm. This algorithm enhances postoperative mortality prediction by analyzing preoperative electrocardiograms (ECGs).

To test this hypothesis, researchers conducted a comprehensive study using an artificial intelligence (AI) algorithm trained on preoperative ECGs. They evaluated the performance of the resulting model on cohorts from three independent health-care systems.

Methods and results

The study involved 45,969 patients undergoing medical procedures who had a 12-lead ECG recorded within 30 days before their procedure.

Trained on Cedars-Sinai Medical Center data, the algorithm showed impressive discriminatory capability: it achieved an area under the receiver operating characteristic curve (AUC) of 0.83 in the internal test cohort, outperforming the Revised Cardiac Risk Index (RCRI), which achieved an AUC of 0.67.
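An AUC can be read as a ranking probability: the chance that a randomly chosen patient who died received a higher risk score than a randomly chosen survivor (0.5 is chance, 1.0 is perfect discrimination). A minimal sketch of that computation, with made-up scores rather than study data:

```python
def auc(scores_pos, scores_neg):
    """Probability that a random positive outranks a random negative
    (ties count half) - equivalent to the ROC AUC."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores: deceased patients (positives) vs survivors (negatives)
died = [0.9, 0.8, 0.6]
survived = [0.7, 0.4, 0.3, 0.2]
print(auc(died, survived))  # 11 of 12 pairs ranked correctly, ~0.917
```

On this reading, the gap between 0.83 and 0.67 means the algorithm correctly ranks a deceased patient above a survivor noticeably more often than the RCRI does.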

Researchers divided patients into training, internal validation, and final algorithm test cohorts using a diverse dataset that included 59,975 inpatient procedures and 112,794 ECGs.

They also tested the algorithm’s performance in two external hospital cohorts, showcasing its consistent and robust predictive power.

Key findings show the algorithm’s superiority over the RCRI score in predicting postoperative mortality. Patients the algorithm flagged as high risk had an unadjusted odds ratio of 8.83 for postoperative mortality; by comparison, patients with RCRI scores above 2 had an unadjusted odds ratio of 2.08.
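An unadjusted odds ratio comes straight from a 2x2 table of risk group versus outcome. The sketch below shows the arithmetic with invented counts, since the article does not report the underlying numbers:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table:
    a = high-risk with event,  b = high-risk without event,
    c = low-risk with event,   d = low-risk without event.
    OR = (a/b) / (c/d)."""
    return (a / b) / (c / d)

# Hypothetical counts for illustration only:
# 40 of 440 algorithm-flagged high-risk patients died (40 vs 400 survived),
# 50 of 4,500 low-risk patients died (50 vs 4,450 survived).
print(round(odds_ratio(40, 400, 50, 4450), 2))  # → 8.9
```

An odds ratio of 8.83 therefore means the odds of dying after surgery were roughly nine times higher in the algorithm's high-risk group than in its low-risk group.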

The algorithm proved effective across various medical procedures, including cardiac surgeries (AUC 0.85), non-cardiac surgeries (AUC 0.83), and catheterization or endoscopy suite procedures (AUC 0.76).

Conclusions

This deep-learning algorithm is a breakthrough in improving postoperative mortality risk stratification. Its ability to interpret preoperative ECGs could revolutionize medical procedure decision-making. Researchers validated the algorithm’s robustness in three healthcare systems, emphasizing its broad applicability.

The study, funded by the National Heart, Lung, and Blood Institute, marks a significant stride towards AI-based, precise, and personalized patient care in perioperative settings.

Deep learning system predicts systemic medical conditions from external eye photographs
https://www.digitalhealthglobal.com/deep-learning-system-predicts-systemic-medical-conditions-from-external-eye-photographs/ (Mon, 01 May 2023)

A new study suggests that artificial intelligence can detect biomarkers of systemic disease from external eye photographs.

Artificial intelligence (AI) is transforming healthcare in a myriad of ways, from diagnosing diseases to predicting outcomes and personalizing treatment plans. One of the most promising areas of AI in healthcare is the use of computer vision to analyze medical images and detect biomarkers of disease.

Researchers from the University of Southern California developed a deep learning system (DLS) that predicts systemic parameters related to the liver, kidney, bone or mineral, thyroid, and blood using external eye photographs as input.

The study analysed over 123,000 images of 38,398 diabetes patients across 11 sites in Los Angeles, California, and achieved statistically significant superior performance compared to baseline models in detecting multiple biomarkers.

The research found that external eye photographs can be used to detect biomarkers of systemic conditions involving the liver, the kidneys, bone or mineral metabolism, the thyroid, and the blood count.

Using deep learning, the team trained a model to predict systemic parameters including estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and white blood cell (WBC) count, among others.

The DLS was trained using 123,130 images from 38,398 diabetes patients undergoing diabetic eye screening. Results showed that the DLS outperformed baseline models for detecting various medical conditions, such as abnormal liver and kidney function and anemia.

The system was developed and validated using data from three sources: clinics in the Los Angeles County Department of Health Services, Veterans Affairs primary care clinics in the greater Atlanta area, and community-based outpatient clinics in the Atlanta VA Healthcare System.

The researchers trained a convolutional neural network that takes an external eye photograph as input and predicts all clinical and laboratory measurements in a multitask fashion. The DLS identified nine clinically useful parameters that it could predict accurately: albumin, AST, calcium, eGFR, haemoglobin, platelets, TSH, urine ACR, and WBC count.
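In a multitask setup, one shared backbone embedding of the image feeds a separate output head per target. The sketch below is a deliberately simplified, hand-weighted illustration of that idea, not the paper's network; the feature vector and weights are made up, and only the nine target names come from the article.

```python
import math

# The nine parameters reported in the study.
TASKS = ["albumin", "AST", "calcium", "eGFR", "haemoglobin",
         "platelets", "TSH", "urine_ACR", "WBC"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multitask_head(shared_features, weights, biases):
    """One shared feature vector (from the CNN backbone) feeds nine
    independent linear heads, one per laboratory target."""
    preds = {}
    for task in TASKS:
        logit = sum(w * f for w, f in zip(weights[task], shared_features))
        preds[task] = sigmoid(logit + biases[task])  # P(value is abnormal)
    return preds

features = [0.2, -1.1, 0.7, 0.05]  # toy embedding of one eye photo
weights = {t: [0.1, -0.2, 0.3, 0.0] for t in TASKS}
biases = {t: 0.0 for t in TASKS}
predictions = multitask_head(features, weights, biases)
print(len(predictions))  # → 9
```

Sharing the backbone lets every target benefit from the same learned image representation, which is why one photograph can yield predictions for all nine parameters at once.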

The AI system was able to outperform a baseline model that only considered clinicodemographic variables for predicting several health conditions, including severely increased albuminuria and moderate anemia.

The researchers also conducted explainability experiments to determine which parts of the eye image were most important for the AI’s performance, showing that colour information was at least somewhat important for most prediction targets.

This new technology has the potential to improve early detection and treatment of various health conditions, especially in areas with limited access to healthcare professionals.

The results showed that the DLS performed significantly better than baseline clinicodemographic models at predicting kidney function and blood count abnormalities across all three validation sets and at predicting abnormalities in liver and multiorgan parameters in validation set A.

The researchers believe the tool would be best used in screening settings: for example, to detect moderately reduced kidney function in a younger, healthier population, or severe kidney dysfunction in an older one.

The DLS also outperformed the baseline for detecting low haemoglobin. However, absolute performance for liver (AST) and thyroid (TSH) abnormalities was lacklustre, with AUCs only in the low 0.60s. The study’s limitations include that the datasets came primarily from diabetic retinopathy screening populations, and that all images were collected on fundus cameras.

This non-invasive screening method could enable early detection of systemic disease, but further work is needed to understand the translational implications.
