This function plots key metrics for evaluating model performance, showing the mean value and 95% confidence interval for each of: accuracy, sensitivity (recall), specificity, positive predictive value (precision), negative predictive value, F1 score, prevalence, detection rate, detection prevalence, and balanced accuracy.
plot_ml_evaluation(ml_se, eval_method)
ml_se: A SummarizedExperiment object with results computed by ml_model.
eval_method: Character. The evaluation method to be used. Allowed methods
include 'Accuracy', 'Sensitivity', 'Specificity', 'Pos Pred Value',
'Neg Pred Value', 'Precision', 'Recall', 'F1', 'Prevalence', 'Detection Rate',
'Detection Prevalence', and 'Balanced Accuracy'. Default is 'Accuracy'.
Returns 1 interactive plot, 1 static plot, and 2 tables.
interactive_evaluation_plot & static_evaluation_plot: model performance plots.
table_evaluation: table of model evaluation information.
table_evaluation_plot: table of the data used to draw the evaluation plots.
data("ml_se_sub")
res <- plot_ml_evaluation(ml_se_sub, eval_method='Accuracy')
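Building on the example above, the returned components can be inspected by name. This is a minimal sketch assuming the function returns a named list whose element names match those listed in the Value section; the plotting classes (e.g. plotly for the interactive plot, ggplot2 for the static plot) are an assumption and may differ in practice.

```r
# Sketch: accessing the returned components (assumes a named list;
# element names taken from the Value section of this help page).
data("ml_se_sub")
res <- plot_ml_evaluation(ml_se_sub, eval_method = 'Accuracy')

res$static_evaluation_plot        # static performance plot (assumed ggplot2 object)
res$interactive_evaluation_plot   # interactive version (assumed plotly object)
head(res$table_evaluation)        # model evaluation summary table
head(res$table_evaluation_plot)   # underlying data used for plotting
```

If a different metric is of interest, re-run with another allowed value, e.g. `eval_method = 'F1'`.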