This function plots key indicators to help users evaluate model performance, showing the mean and 95% confidence interval for metrics such as accuracy, sensitivity (recall), specificity, positive predictive value (precision), negative predictive value, F1 score, prevalence, detection rate, detection prevalence, and balanced accuracy.

plot_ml_evaluation(ml_se, eval_method)

Arguments

ml_se

A SummarizedExperiment object with results computed by ml_model.

eval_method

Character. The evaluation method to be used. Allowed methods include 'Accuracy', 'Sensitivity', 'Specificity', 'Pos Pred Value', 'Neg Pred Value', 'Precision', 'Recall', 'F1', 'Prevalence', 'Detection Rate', 'Detection Prevalence', 'Balanced Accuracy'. Default is 'Accuracy'.

Value

Returns 1 interactive plot, 1 static plot, and 2 tables.

  1. interactive_evaluation_plot & static_evaluation_plot: interactive and static versions of the model performance plot.

  2. table_evaluation: table of model evaluation information.

  3. table_evaluation_plot: table of the values used to generate the evaluation plots.

Examples

data("ml_se_sub")
res <- plot_ml_evaluation(ml_se_sub, eval_method='Accuracy')
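The components listed under Value can then be pulled from the returned object. The snippet below is a sketch that assumes the result is a named list whose element names match the Value section (an assumption; this page does not state the return class):

```r
# Continuing from the example above. These accessors assume `res` is a
# named list (an assumption, not confirmed on this page).
res$interactive_evaluation_plot  # interactive model performance plot
res$static_evaluation_plot       # static version of the same plot
res$table_evaluation             # table of model evaluation information
res$table_evaluation_plot        # values used to generate the plots
```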