Assess, with scalar metrics, how well a model predicts whether the time to event falls below a certain threshold.

Public fields

metrics

Assess the model for these scalar metrics. Check out the initializer for possible choices.

prev_range

For metrics that need thresholding, only consider thresholds that yield a prevalence in this range.

confidence_level

Confidence level gamma, e.g. for confidence intervals.

benchmark

Name and pivot time cutoff of the benchmark Model.

round_digits

Round the results in tables to round_digits digits after the decimal point.

file

Save the resulting tibble to this csv file.

Methods


Method new()

Construct an AssScalar R6 object.

Usage

AssScalar$new(
  metrics = c("auc", "accuracy", "precision", "prevalence", "precision_ci_ll",
    "precision_ci_ul", "hr", "hr_ci_ll", "hr_ci_ul", "hr_p", "n_true", "perc_true",
    "n_samples", "logrank", "threshold"),
  prev_range = c(0, 1),
  confidence_level = 0.95,
  benchmark = NULL,
  file = NULL,
  round_digits = 3
)

Arguments

metrics

character. Assess the model for these metrics. For the currently offered choices, see "Usage". If the model has non-binary output (such as the linear predictor of a Cox model), a threshold is chosen by maximizing the left-most metric in metrics that is designed for classifiers with binary output (e.g. precision, restricted to prev_range below). If no such threshold can reasonably be found, an error is thrown. Make sure that hr precedes hr_ci_ll, hr_ci_ul and hr_p in metrics, and that precision_ci_ll precedes precision_ci_ul.

prev_range

numeric vector of length 2. For metrics that need thresholding, only consider thresholds that yield a prevalence in this range.

confidence_level

numeric. The confidence level gamma (e.g. for confidence intervals).

benchmark

list or NULL. If not NULL, it is a list with names

  • "name": the name attribute of the benchmark Model in the model_list parameter of the assess() and assess_center() method,

  • "prev_range": An extra value for the prev_range attribute used for the benchmark Model. Often, we need a higher prevalence for our, new models to gain statistical power and be able to significantly outperform the benchmark.

file

string or NULL. If not NULL, save the resulting tibble to this csv file.

round_digits

numeric. The number of digits to round the results to.

Returns

A new AssScalar object.
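As a sketch of how the constructor might be called (assuming the package exporting AssScalar is attached; the benchmark model name "cox_benchmark" and the file path are hypothetical):

```r
# Hedged sketch: assumes the package providing AssScalar is loaded.
# "cox_benchmark" is a hypothetical Model name used for illustration.
ass_scalar <- AssScalar$new(
  metrics = c("auc", "precision", "prevalence",
              "hr", "hr_ci_ll", "hr_ci_ul", "hr_p"),  # hr precedes hr_ci_*
  prev_range = c(0.05, 0.20),   # only thresholds yielding 5-20% prevalence
  confidence_level = 0.95,
  benchmark = list(
    name = "cox_benchmark",     # name attribute of the benchmark Model
    prev_range = c(0, 1)        # separate prevalence range for the benchmark
  ),
  file = "ass_scalar_results.csv",  # save the resulting tibble here
  round_digits = 3
)
```

Note that the ordering constraints from the metrics description are respected: hr comes before hr_ci_ll, hr_ci_ul and hr_p.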


Method assess()

Assess a single model.

Usage

AssScalar$assess(data, model, quiet = FALSE)

Arguments

data

Data object. Assess on this data. Data must already be read in and its cohort attribute set.

model

Model object. Assess this model.

quiet

logical. Whether to suppress messages.

Returns

named numeric vector. The calculated metrics.
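A minimal usage sketch, assuming `ass_scalar` was constructed as above and that `data` (a Data object with its cohort attribute set) and `model` (a trained Model object) already exist:

```r
# Hedged sketch: data and model are assumed objects from this package.
res <- ass_scalar$assess(data, model, quiet = TRUE)
res          # named numeric vector, one entry per requested metric
res["auc"]   # access a single metric by name
```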


Method assess_center()

Wrap assess() to assess multiple models and store the result.

Usage

AssScalar$assess_center(data, model_list, quiet = FALSE)

Arguments

data

Data object. Assess on this data. The cohort attribute of data must be set.

model_list

list of Model objects. Assess these models.

quiet

logical. Whether to suppress messages.

Returns

A tibble of shape (length(model_list) x length(metrics)).
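A sketch of assessing several models at once, assuming the same `ass_scalar` and `data` objects as above; the entries of `model_list` are hypothetical Model objects:

```r
# Hedged sketch: new_model and cox_benchmark are hypothetical Model objects.
tbl <- ass_scalar$assess_center(
  data = data,
  model_list = list(new_model, cox_benchmark),
  quiet = FALSE
)
# tbl is a tibble with one row per model and one column per metric;
# if `file` was set in the constructor, the tibble is also saved to that csv.
```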


Method clone()

The objects of this class are cloneable with this method.

Usage

AssScalar$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.