Primary immune deficiency differential diagnosis prediction via machine learning and data mining of the USIDNET registry.

INTRODUCTION: Computational aids based on data mining and machine learning can facilitate the diagnostic task by extracting rules from large datasets and making predictions on new problem cases. In this proof-of-concept data mining study, we aimed to predict primary immune deficiency (PID) diagnoses using a supervised machine learning algorithm based on classification tree boosting.

METHOD: Through a data query of the USIDNET registry we obtained a database of 2,396 patients with common PID diagnoses, including their clinical and laboratory features. We retained 286 features and all 12 diagnoses for inclusion in the model; we used XGBoost with parallel tree boosting for supervised classification and SHAP for feature-importance interpretation, all in Python. The patient database was split into training and testing subsets; after training via gradient boosting, the model yields measures of diagnostic prediction accuracy and individual feature importance.

RESULTS: The twelve PID diagnoses were CVID (1,098 patients), DiGeorge syndrome (406), chronic granulomatous disease (154), congenital agammaglobulinemia (135), PID not otherwise classified (132), specific antibody deficiency (117), complement deficiency (12), hyper-IgM syndrome (46), leukocyte adhesion deficiency (6), ectodermal dysplasia with immune deficiency (25), severe combined immune deficiency (202), and Wiskott-Aldrich syndrome (63). Across all diagnoses, accuracy ranged from 0.75 to 0.99, AUC from 0.46 to 0.87, Gini from 0.07 to 0.75, and LogLoss from 0.09 to 8.55. The diagnoses with the highest precision and recall were Wiskott-Aldrich syndrome, DiGeorge syndrome, and common variable immune deficiency, and predictive performance dropped sharply once a diagnosis was represented by fewer than 50-60 cases.
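The workflow described above (train/test split, boosted multiclass classification over imbalanced diagnosis counts, then feature-importance ranking) can be sketched as follows. This is a minimal illustration on simulated data, not the study's pipeline: scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, permutation importance stands in for SHAP, and the class counts, features, and all reported numbers are invented.

```python
# Illustrative stand-in for the study's workflow: the real analysis used
# XGBoost + SHAP on 286 USIDNET registry features across 12 diagnoses;
# here sklearn substitutes for both, on synthetic, imbalanced toy data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import train_test_split

# Imbalanced 4-class toy problem, echoing the skewed diagnosis counts
# (CVID at 1,098 patients vs. leukocyte adhesion deficiency at 6).
X, y = make_classification(
    n_samples=2000, n_features=30, n_informative=10,
    n_classes=4, weights=[0.55, 0.25, 0.15, 0.05], random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0,
)

model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)
acc = accuracy_score(y_test, model.predict(X_test))
ll = log_loss(y_test, proba)

# Feature-importance ranking: permutation importance here; the study used SHAP.
imp = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print(f"accuracy={acc:.2f} log_loss={ll:.2f} top features={top.tolist()}")
```

As in the study, per-class performance on the rare classes degrades with class size, which the stratified split and class `weights` above let you probe directly.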
CONCLUSIONS: Beyond positive predictors, clinicians should also weigh negative predictive features. The model's good performance is encouraging, and its feature-importance rankings may guide feature selection in future work. The rules derived from the model can inform a user-friendly decision tree for generating differential diagnoses.
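One common way to realize the "user-friendly decision tree" the conclusion envisions is to distill the boosted model into a shallow surrogate tree whose if/else rules a clinician can read. The sketch below assumes this surrogate approach; the teacher model, data, and feature names are all invented for illustration and are not taken from the study.

```python
# Distill a boosted "teacher" model into a readable surrogate decision tree.
# A hypothetical illustration of rule extraction, not the study's method.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=3, random_state=1)
teacher = GradientBoostingClassifier(n_estimators=50, random_state=1).fit(X, y)

# Surrogate: a shallow tree trained to mimic the teacher's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, teacher.predict(X))

feature_names = [f"feature_{i}" for i in range(8)]  # placeholder names
rules = export_text(surrogate, feature_names=feature_names)
print(rules)  # human-readable if/else rules approximating the model

# Fidelity: how often the surrogate agrees with the teacher.
fidelity = (surrogate.predict(X) == teacher.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

The depth cap trades fidelity for readability: a deeper surrogate tracks the boosted model more closely but quickly stops being something a clinician can follow at the bedside.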
