Abstract

Historically, probabilistic models for decision support have focused on discrimination, e.g., minimizing the ranking error of predicted outcomes. Unfortunately, these models ignore another important aspect, calibration, which indicates how closely predicted probabilities agree with observed outcome frequencies. Using discrimination and calibration simultaneously can be helpful for many clinical decisions. We investigated tradeoffs between these two goals and developed a unified maximum-margin method to handle them jointly. Our approach, called Doubly Optimized Calibrated Support Vector Machine (DOC-SVM), concurrently optimizes two loss functions: the ridge regression loss and the hinge loss. Experiments using three breast cancer gene-expression datasets (i.e., GSE2034, GSE2990, and Chanrion's datasets) showed that our model generated better-calibrated outputs than other state-of-the-art models such as the Support Vector Machine (p = 0.03, p = 0.13, and p < 0.001) and Logistic Regression (p = 0.006, p = 0.008, and p < 0.001). DOC-SVM also demonstrated better discrimination (i.e., higher AUCs) than the Support Vector Machine (p = 0.38, p = 0.29, and p = 0.047) and Logistic Regression (p = 0.38, p = 0.04, and p < 0.0001). DOC-SVM produced a model that was better calibrated without sacrificing discrimination, and hence may be helpful in clinical decision making.
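The core idea of jointly optimizing a calibration-oriented loss and a discrimination-oriented loss can be illustrated with a minimal sketch. The combined objective below is an assumption for illustration only, not the paper's exact DOC-SVM formulation: the function name `combined_loss`, the mixing weight `alpha`, and the regularization weight `lam` are all hypothetical.

```python
import numpy as np

def combined_loss(w, X, y, alpha=0.5, lam=1e-2):
    """Sketch of a joint ridge + hinge objective (illustrative, not DOC-SVM itself).

    w     : weight vector, shape (d,)
    X     : feature matrix, shape (n, d)
    y     : labels in {-1, +1}, shape (n,)
    alpha : hypothetical tradeoff between the two losses
    lam   : hypothetical L2 (margin) penalty weight
    """
    scores = X @ w
    # Hinge loss: penalizes margin violations, driving discrimination.
    hinge = np.maximum(0.0, 1.0 - y * scores).mean()
    # Ridge (squared-error) regression loss: pulls scores toward the
    # labels themselves, encouraging calibrated output magnitudes.
    ridge = ((scores - y) ** 2).mean()
    # L2 penalty on w, as in standard maximum-margin formulations.
    reg = lam * np.dot(w, w)
    return alpha * hinge + (1.0 - alpha) * ridge + reg
```

In this sketch, `alpha = 1` recovers a plain soft-margin SVM objective and `alpha = 0` recovers ridge regression on the labels, so varying `alpha` traces the discrimination-calibration tradeoff the abstract describes.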