Serial platelet count as a dynamic prediction marker of hospital mortality among septic patients.
Burns & trauma
Background:Platelets play a critical role in hemostasis and inflammatory diseases. Low platelet count and activity have been reported to be associated with unfavorable prognosis. This study aims to explore the relationship between platelet count dynamics and in-hospital mortality among septic patients and to provide real-time updates on mortality risk to achieve dynamic prediction. Methods:We conducted a multi-cohort, retrospective, observational study that encompasses data on septic patients in the eICU Collaborative Research Database (eICU-CRD) and the Medical Information Mart for Intensive Care IV (MIMIC-IV) database. The joint latent class model (JLCM) was utilized to identify heterogeneous platelet count trajectories over time among septic patients. We assessed the association between different trajectory patterns and 28-day in-hospital mortality using a piecewise Cox hazard model within each trajectory. We evaluated the performance of our dynamic prediction model through the area under the receiver operating characteristic curve, concordance index (C-index), accuracy, sensitivity, and specificity calculated at predefined time points. Results:Four subgroups of platelet count trajectories were identified that correspond to distinct in-hospital mortality risks. Including platelet count did not significantly enhance prediction accuracy at early stages (day 1 C-index: 0.713 vs. 0.714). However, our model showed superior performance to the static survival model over time (day 14 C-index: 0.644 vs. 0.617). Conclusions:For septic patients in an intensive care unit, a rapid decline in platelet count is a critical prognostic factor, and serial platelet measurements are associated with prognosis.
10.1093/burnst/tkae016
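A minimal sketch of the survival stage described in the entry above: a piecewise Cox hazard model fitted separately within each platelet-trajectory subgroup. This is illustrative only, not the authors' code. The trajectory labels (traj_class) would in practice come from a joint latent class model (e.g. the R package lcmm); here they are random placeholders, as are the covariates, the simulated follow-up times, and the day-7 breakpoint at which the covariate effect is allowed to change.

```python
# Illustrative sketch only (not the authors' code). Trajectory labels and all
# clinical variables below are simulated placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter
from lifelines.utils import to_episodic_format

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "time": rng.exponential(14, n).clip(0.5, 28),   # follow-up (days, max 28)
    "death": rng.integers(0, 2, n),                 # 28-day in-hospital death
    "age": rng.normal(65, 12, n),
    "sofa": rng.integers(2, 15, n),
    "traj_class": rng.integers(0, 4, n),            # 4 assumed latent trajectories
})

# Fit a piecewise Cox model within each trajectory subgroup: follow-up is split
# at day 7 so the SOFA effect may differ before and after the breakpoint.
for k, sub in df.groupby("traj_class"):
    long = to_episodic_format(sub, duration_col="time", event_col="death",
                              time_gaps=7.0)
    long["sofa_late"] = long["sofa"] * (long["stop"] > 7)  # period-specific effect
    ctv = CoxTimeVaryingFitter()
    cols = ["id", "start", "stop", "death", "age", "sofa", "sofa_late"]
    ctv.fit(long[cols], id_col="id", event_col="death",
            start_col="start", stop_col="stop")
    print(f"trajectory class {k}:")
    print(ctv.summary[["coef", "p"]].round(3))
```

Splitting follow-up into episodes and adding the period-specific interaction term (sofa_late) is one way to express a piecewise hazard effect; the breakpoint and covariate set are assumptions, not taken from the paper.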
Machine Learning Electrocardiogram for Mobile Cardiac Pattern Extraction.
Sensors (Basel, Switzerland)
BACKGROUND:Internet-of-things technologies are reshaping healthcare applications. We take a special interest in long-term, out-of-clinic, electrocardiogram (ECG)-based heart health management and propose a machine learning framework to extract crucial patterns from noisy mobile ECG signals. METHODS:A three-stage hybrid machine learning framework is proposed for estimating heart-disease-related ECG QRS duration. First, raw heartbeats are recognized from the mobile ECG using a support vector machine (SVM). Then, the QRS boundaries are located using a novel pattern recognition approach, multiview dynamic time warping (MV-DTW). To enhance robustness with motion artifacts in the signal, the MV-DTW path distance is also used to quantize heartbeat-specific distortion conditions. Finally, a regression model is trained to transform the mobile ECG QRS duration into the commonly used standard chest ECG QRS durations. RESULTS:With the proposed framework, the performance of ECG QRS duration estimation is very encouraging, and the correlation coefficient, mean error/standard deviation, mean absolute error, and root mean absolute error are 91.2%, 0.4 ± 2.6, 1.7, and 2.6 ms, respectively, compared with the traditional chest ECG-based measurements. CONCLUSIONS:Promising experimental results are demonstrated to indicate the effectiveness of the framework. This study will greatly advance machine-learning-enabled ECG data mining towards smart medical decision support.
10.3390/s23125723
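A minimal sketch of the boundary-location idea in the entry above: align a template heartbeat with known QRS onset/offset to a noisy beat via dynamic time warping and read the onset/offset off the warping path. Only plain single-view DTW is shown; the paper's multiview variant (MV-DTW), the SVM beat detector, and the final regression stage are not reproduced, and the signals, annotations, and sampling rate are synthetic assumptions.

```python
# Sketch only: single-view DTW alignment on synthetic beats.
import numpy as np

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) DTW; returns accumulated cost and warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

def map_index(path, template_idx):
    """First query index aligned to a given template index on the DTW path."""
    return min(j for i, j in path if i == template_idx)

# Synthetic template beat with an annotated QRS window (samples 40-60).
t = np.linspace(0, 1, 120)
template = np.exp(-((t - 0.42) ** 2) / 0.001)            # narrow R-wave bump
qrs_on, qrs_off = 40, 60
# "Mobile" beat: same morphology, time-shifted and noisy.
query = np.exp(-((t - 0.50) ** 2) / 0.001) \
        + 0.05 * np.random.default_rng(1).normal(size=t.size)

_, path = dtw_path(template, query)
on_hat, off_hat = map_index(path, qrs_on), map_index(path, qrs_off)
fs = 120  # assumed sampling rate (Hz) for this synthetic example
print(f"estimated QRS duration: {(off_hat - on_hat) / fs * 1000:.1f} ms")
```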
Sepsis subphenotyping based on organ dysfunction trajectory.
Critical care (London, England)
BACKGROUND:Sepsis is a heterogeneous syndrome, and the identification of clinical subphenotypes is essential. Although organ dysfunction is a defining element of sepsis, subphenotypes of differential trajectory are not well studied. We sought to identify distinct Sequential Organ Failure Assessment (SOFA) score trajectory-based subphenotypes in sepsis. METHODS:We created 72-h SOFA score trajectories in patients with sepsis from four diverse intensive care unit (ICU) cohorts. We then used dynamic time warping (DTW) to compute heterogeneous SOFA trajectory similarities and hierarchical agglomerative clustering (HAC) to identify trajectory-based subphenotypes. Patient characteristics were compared between subphenotypes, and a random forest model was developed to predict subphenotype membership at 6 and 24 h after admission to the ICU. The model was tested on three validation cohorts. Sensitivity analyses were performed with alternative clustering methodologies. RESULTS:A total of 4678, 3665, 12,282, and 4804 unique sepsis patients were included in the development cohort and three validation cohorts, respectively. Four subphenotypes were identified in the development cohort: Rapidly Worsening (n = 612, 13.1%), Delayed Worsening (n = 960, 20.5%), Rapidly Improving (n = 1932, 41.3%), and Delayed Improving (n = 1174, 25.1%). Baseline characteristics, including the pattern of organ dysfunction, varied between subphenotypes. Rapidly Worsening was defined by a higher comorbidity burden, acidosis, and visceral organ dysfunction. Rapidly Improving was defined by vasopressor use without acidosis. Outcomes differed across the subphenotypes: Rapidly Worsening had the highest in-hospital mortality (28.3%, P-value < 0.001) despite a lower mean SOFA score at ICU admission (4.5) than Rapidly Improving (mortality: 5.5%, mean SOFA: 5.5). An overall prediction accuracy of 0.78 (95% CI, [0.77, 0.8]) was obtained at 6 h after ICU admission, which increased to 0.87 (95% CI, [0.86, 0.88]) at 24 h. Similar subphenotypes were replicated in three validation cohorts. The majority of patients with sepsis have an improving phenotype with a lower mortality risk; however, they make up over 20% of all deaths due to their larger numbers. CONCLUSIONS:Four novel, clinically defined, trajectory-based sepsis subphenotypes were identified and validated. Identifying trajectory-based subphenotypes has immediate implications for the powering and predictive enrichment of clinical trials. Understanding the pathophysiology of these differential trajectories may reveal unanticipated therapeutic targets and identify more precise populations and endpoints for clinical trials.
10.1186/s13054-022-04071-4
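A minimal sketch of the clustering pipeline described in the entry above: pairwise DTW distances between 72-h SOFA trajectories followed by hierarchical agglomerative clustering into four groups. The trajectories are simulated to loosely mimic the reported subphenotype shapes; cohort extraction, cluster validation, and the downstream random forest classifier are not reproduced.

```python
# Sketch only: simulated SOFA trajectories, DTW distances, then HAC into 4 groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from tslearn.metrics import cdist_dtw                 # pip install tslearn

rng = np.random.default_rng(0)
hours = np.arange(0, 72, 6)                           # SOFA scored every 6 h

def simulate(slope, start, n):
    """Noisy linear SOFA trajectories with a given start value and slope."""
    base = start + slope * hours / 72
    return np.clip(base + rng.normal(0, 1, size=(n, hours.size)), 0, 24)

# Four archetypes loosely mirroring the reported subphenotypes.
X = np.vstack([
    simulate(+6, 5, 50),    # rapidly worsening
    simulate(+3, 6, 50),    # delayed worsening
    simulate(-4, 6, 50),    # rapidly improving
    simulate(-2, 7, 50),    # delayed improving
])

D = cdist_dtw(X)                                      # pairwise DTW distances
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")
print(np.bincount(labels)[1:])                        # cluster sizes
```

Average linkage on the condensed DTW distance matrix is one reasonable choice; the abstract does not specify the exact linkage criterion used.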
Comparison of time series clustering methods for identifying novel subphenotypes of patients with infection.
Journal of the American Medical Informatics Association : JAMIA
OBJECTIVE:Severe infection can lead to organ dysfunction and sepsis. Identifying subphenotypes of infected patients is essential for personalized management. It is unknown how different time series clustering algorithms compare in identifying these subphenotypes. MATERIALS AND METHODS:Patients with suspected infection admitted between 2014 and 2019 to 4 hospitals in Emory Healthcare were included and split into separate training and validation cohorts. Dynamic time warping (DTW) was applied to vital signs from the first 8 h of hospitalization, and hierarchical clustering (DTW-HC) and partition around medoids (DTW-PAM) were used to cluster patients into subphenotypes. DTW-HC, DTW-PAM, and a previously published group-based trajectory model (GBTM) were evaluated for agreement in subphenotype clusters, trajectory patterns, and subphenotype associations with clinical outcomes and treatment responses. RESULTS:There were 12,473 patients in the training cohort and 8256 patients in the validation cohort. DTW-HC, DTW-PAM, and GBTM models resulted in 4 consistent vitals trajectory patterns with significant agreement in clustering (71-80% agreement, P < .001): group A was hyperthermic, tachycardic, tachypneic, and hypotensive. Group B was hyperthermic, tachycardic, tachypneic, and hypertensive. Groups C and D had lower temperatures, heart rates, and respiratory rates, with group C normotensive and group D hypotensive. Group A had a higher odds ratio of 30-day inpatient mortality (P < .01), and group D had a significant mortality benefit from balanced crystalloids compared to saline (P < .01) in all 3 models. DISCUSSION:DTW- and GBTM-based clustering algorithms applied to vital signs in infected patients identified consistent subphenotypes with distinct clinical outcomes and treatment responses. CONCLUSION:Time series clustering methods with distinct computational approaches demonstrate similar performance and significant agreement in the resulting subphenotypes.
10.1093/jamia/ocad063
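A minimal sketch of the method comparison in the entry above: cluster one precomputed distance matrix with hierarchical clustering and with a k-medoids (PAM-style) algorithm, then quantify agreement between the two partitions. The matrix D here is a Euclidean placeholder standing in for a pairwise DTW matrix over 8-h vital-sign trajectories, and the adjusted Rand index is used as a generic agreement measure rather than the paper's specific statistic.

```python
# Sketch only: D is a Euclidean placeholder for a pairwise DTW distance matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score
from sklearn_extra.cluster import KMedoids        # pip install scikit-learn-extra

rng = np.random.default_rng(0)
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
pts = np.repeat(centers, 50, axis=0) + rng.normal(scale=0.8, size=(200, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Hierarchical clustering and k-medoids on the same precomputed distances.
hc_labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                     t=4, criterion="maxclust")
pam_labels = KMedoids(n_clusters=4, metric="precomputed",
                      random_state=0).fit_predict(D)

print("adjusted Rand index (HC vs PAM):",
      round(adjusted_rand_score(hc_labels, pam_labels), 3))
```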
Development and validation of a deep learning model to predict the survival of patients in ICU.
Journal of the American Medical Informatics Association : JAMIA
BACKGROUND:Patients in the intensive care unit (ICU) are often in critical condition and have a high mortality rate. Accurately predicting the survival probability of ICU patients is beneficial for timely care and for prioritizing medical resources to improve overall survival. Models developed with deep learning (DL) algorithms show good performance in many prediction tasks. However, few DL algorithms have been validated in the dimension of survival time or compared with traditional algorithms. METHODS:Variables from the Early Warning Score, Sequential Organ Failure Assessment Score, Simplified Acute Physiology Score II, Acute Physiology and Chronic Health Evaluation (APACHE) II, and APACHE IV models were selected for model development. Cox regression, random survival forest (RSF), and DL methods were used to develop prediction models for the survival probability of ICU patients. Prediction performance was independently evaluated in the MIMIC-III Clinical Database (MIMIC-III), the eICU Collaborative Research Database (eICU), and the Shanghai Pulmonary Hospital Database (SPH). RESULTS:Forty variables were collected in total for model development, and 83,943 participants from the 3 databases were included in the study. The New-DL model accurately stratified patients into different survival probability groups with a C-index of >0.7 in MIMIC-III, eICU, and SPH, performing better than the other models. The calibration curves of the models at 3 and 10 days indicated that the prediction performance was good. A user-friendly interface was developed to make the model convenient to use. CONCLUSIONS:Compared with traditional algorithms, DL algorithms are more accurate in predicting the survival probability during ICU hospitalization. This novel model can provide reliable, individualized survival probability prediction.
10.1093/jamia/ocac098
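A minimal sketch of the model comparison in the entry above: a Cox proportional hazards model versus a random survival forest, each scored by concordance index (C-index) on a held-out split with scikit-survival. The bundled GBSG2 dataset stands in for the ICU databases, and the deep learning survival model itself is not reproduced.

```python
# Sketch only: Cox vs random survival forest on a bundled example dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_gbsg2                 # pip install scikit-survival
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

X, y = load_gbsg2()
X = pd.get_dummies(X, drop_first=True).astype(float)   # encode categoricals
event_field, time_field = y.dtype.names                # event indicator, then time

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Cox": CoxPHSurvivalAnalysis(),
    "RSF": RandomSurvivalForest(n_estimators=200, min_samples_leaf=15,
                                random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    risk = model.predict(X_te)                          # higher = higher risk
    cindex = concordance_index_censored(y_te[event_field], y_te[time_field],
                                        risk)[0]
    print(f"{name}: C-index = {cindex:.3f}")
```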
Machine learning for the prediction of acute kidney injury in patients with sepsis.
Journal of translational medicine
BACKGROUND:Acute kidney injury (AKI) is the most common and serious complication of sepsis, accompanied by high mortality and disease burden. The early prediction of AKI is critical for timely intervention and ultimately improves prognosis. This study aims to establish and validate predictive models based on novel machine learning (ML) algorithms for AKI in critically ill patients with sepsis. METHODS:Data of patients with sepsis were extracted from the Medical Information Mart for Intensive Care III (MIMIC-III) database. Feature selection was performed using the Boruta algorithm. ML algorithms such as logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), decision tree, random forest, Extreme Gradient Boosting (XGBoost), and artificial neural network (ANN) were applied for model construction using tenfold cross-validation. The performance of these models was assessed in terms of discrimination, calibration, and clinical application. Moreover, the discrimination of the ML-based models was compared with that of the Sequential Organ Failure Assessment (SOFA) and the customized Simplified Acute Physiology Score (SAPS) II models. RESULTS:A total of 3176 critically ill patients with sepsis were included for analysis, of whom 2397 (75.5%) developed AKI during hospitalization. A total of 36 variables were selected for model construction. The LR, KNN, SVM, decision tree, random forest, ANN, and XGBoost models and the SOFA and SAPS II scores achieved areas under the receiver operating characteristic curve of 0.7365, 0.6637, 0.7353, 0.7492, 0.7787, 0.7547, 0.821, 0.6457, and 0.7015, respectively. The XGBoost model had the best predictive performance in terms of discrimination, calibration, and clinical application among all models. CONCLUSION:ML models can be reliable tools for predicting AKI in septic patients. The XGBoost model has the best predictive performance and can be used to assist clinicians in identifying high-risk patients and implementing early interventions to reduce mortality.
10.1186/s12967-022-03364-0
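A minimal sketch of the modelling pipeline in the entry above: Boruta feature selection wrapped around a random forest, followed by an XGBoost classifier scored with ten-fold cross-validated AUROC. The data are synthetic; MIMIC-III extraction, the other six learners, and the SOFA/SAPS II comparators are not reproduced.

```python
# Sketch only: synthetic data stand in for the MIMIC-III sepsis cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from boruta import BorutaPy                    # pip install Boruta
from xgboost import XGBClassifier              # pip install xgboost

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           weights=[0.25, 0.75], random_state=0)

# Boruta keeps features that beat their own shadow (permuted) copies.
rf = RandomForestClassifier(n_jobs=-1, class_weight="balanced", max_depth=5,
                            random_state=0)
boruta = BorutaPy(rf, n_estimators="auto", random_state=0)
boruta.fit(X, y)
X_sel = X[:, boruta.support_]
print(f"selected {boruta.support_.sum()} of {X.shape[1]} features")

xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss", random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(xgb, X_sel, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUROC: {auc.mean():.3f} +/- {auc.std():.3f}")
```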
Independent effects of the triglyceride-glucose index on all-cause mortality in critically ill patients with coronary heart disease: analysis of the MIMIC-III database.
Cardiovascular diabetology
BACKGROUND:The triglyceride-glucose (TyG) index is a reliable alternative biomarker of insulin resistance (IR). However, whether the TyG index has prognostic value in critically ill patients with coronary heart disease (CHD) remains unclear. METHODS:Participants from the Medical Information Mart for Intensive Care III (MIMIC-III) were grouped into quartiles according to the TyG index. The primary outcome was in-hospital all-cause mortality. Cox proportional hazards models were constructed to examine the association between the TyG index and all-cause mortality in critically ill patients with CHD. A restricted cubic splines model was used to examine the associations between the TyG index and outcomes. RESULTS:A total of 1,618 patients (65.14% men) were included. The hospital mortality and intensive care unit (ICU) mortality rates were 9.64% and 7.60%, respectively. Multivariable Cox proportional hazards analyses indicated that the TyG index was independently associated with an elevated risk of hospital mortality (HR 1.71 [95% CI 1.25-2.33]; P = 0.001) and ICU mortality (HR 1.50 [95% CI 1.07-2.10]; P = 0.019). The restricted cubic splines regression model revealed that the risk of hospital mortality and ICU mortality increased linearly with increasing TyG index (P for non-linearity = 0.467 and 0.764, respectively). CONCLUSIONS:The TyG index was a strong independent predictor of greater mortality in critically ill patients with CHD. Larger prospective studies are required to confirm these findings.
10.1186/s12933-023-01737-3
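A minimal sketch related to the entry above: computing the TyG index as ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2) and fitting a Cox proportional hazards model across TyG quartiles with lifelines. The data are simulated, the quartiles are modelled as a simple ordinal trend, and the paper's covariate adjustment and restricted-cubic-spline analysis are not reproduced.

```python
# Sketch only: simulated data stand in for the MIMIC-III CHD cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1600
df = pd.DataFrame({
    "triglycerides": rng.lognormal(np.log(130), 0.4, n),   # mg/dL
    "glucose": rng.lognormal(np.log(120), 0.3, n),          # mg/dL
    "age": rng.normal(68, 10, n),
    "time": rng.exponential(20, n).clip(0.5, 30),            # follow-up (days)
    "death": rng.integers(0, 2, n),
})

# TyG index = ln( triglycerides [mg/dL] * fasting glucose [mg/dL] / 2 )
df["tyg"] = np.log(df["triglycerides"] * df["glucose"] / 2)
df["tyg_quartile"] = pd.qcut(df["tyg"], 4, labels=False)     # 0 = lowest quartile

cph = CoxPHFitter()
cph.fit(df[["time", "death", "age", "tyg_quartile"]],
        duration_col="time", event_col="death")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                   "exp(coef) upper 95%", "p"]])
```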
The Many Roles of Cholesterol in Sepsis: A Review.
American journal of respiratory and critical care medicine
The biological functions of cholesterol are diverse, ranging from cell membrane integrity, cell membrane signaling, and immunity to the synthesis of steroid and sex hormones, vitamin D, bile acids, and oxysterols. Multiple studies have demonstrated hypocholesterolemia in sepsis, the degree of which is an excellent prognosticator of poor outcomes. However, the clinical significance of hypocholesterolemia has been largely unrecognized. We undertook a detailed review of the biological roles of cholesterol, the impact of sepsis, its reliability as a prognosticator in sepsis, and the potential utility of cholesterol as a treatment. Sepsis affects cholesterol synthesis, transport, and metabolism. This likely impacts its biological functions, including immunity, hormone and vitamin production, and cell membrane receptor sensitivity. Early preclinical studies show promise for cholesterol as a pleiotropic therapeutic agent. Hypocholesterolemia is a frequent condition in sepsis and an important early prognosticator. Low plasma concentrations are associated with wider changes in cholesterol metabolism and its functional roles, and these appear to play a significant role in sepsis pathophysiology. The therapeutic impact of cholesterol elevation warrants further investigation.
10.1164/rccm.202105-1197TR