Evaluating Survival Models

The most frequently used evaluation metric for survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall's τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with the lower observed time experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the risk $\hat{f}$ estimated by a survival model is higher for the subject with the lower survival time, i.e., $\hat{f}_i > \hat{f}_j \land y_j > y_i$; otherwise the pair is discordant. Harrell's estimator of the c index is implemented in concordance_index_censored.
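
For illustration, here is a minimal, self-contained sketch of computing Harrell's c index with concordance_index_censored; the event indicators, times, and risk scores are made-up toy values, not data from an actual study.

```python
import numpy as np

from sksurv.metrics import concordance_index_censored

# Toy data: event indicator delta (True = event observed), observed
# times y, and risk scores f-hat from some hypothetical survival model.
event = np.array([True, False, True, True, False])
time = np.array([1.0, 5.0, 6.0, 8.0, 9.0])
risk_score = np.array([0.9, 0.3, 0.6, 0.4, 0.1])

# Returns the c index plus the counts of concordant, discordant,
# tied-risk, and tied-time pairs it was computed from.
cindex, concordant, discordant, tied_risk, tied_time = concordance_index_censored(
    event, time, risk_score
)
print(f"c index: {cindex:.3f}")
```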

While Harrell's concordance index is easy to interpret and compute, it has some shortcomings:

  1. it has been shown that it is too optimistic with an increasing amount of censoring [1],
  2. it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years).

Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, which addresses the first issue.
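
A sketch of how the IPCW-based estimator can be called; unlike Harrell's estimator, it additionally needs training data to estimate the inverse probability of censoring weights, and the optional tau truncates the time range under consideration. The toy data below are again made up.

```python
import numpy as np

from sksurv.metrics import concordance_index_ipcw
from sksurv.util import Surv

# Training data is only used to estimate the censoring distribution.
y_train = Surv.from_arrays(
    event=[True, False, True, True, False, True],
    time=[1.0, 3.0, 4.0, 6.0, 8.0, 9.0],
)
y_test = Surv.from_arrays(
    event=[True, False, True, True, False],
    time=[1.0, 5.0, 6.0, 8.0, 9.0],
)
risk_score = np.array([0.9, 0.3, 0.6, 0.4, 0.1])

# tau truncates the time range over which pairs are considered.
cindex_ipcw = concordance_index_ipcw(y_train, y_test, risk_score, tau=8.0)[0]
print(f"IPCW c index: {cindex_ipcw:.3f}")
```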

The second point can be addressed by extending the well known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points.
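
A sketch of evaluating the time-dependent AUC with cumulative_dynamic_auc; the time points below are a hypothetical grid and must lie within the follow-up range of the test data.

```python
import numpy as np

from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

y_train = Surv.from_arrays(
    event=[True, False, True, True, False, True],
    time=[1.0, 3.0, 4.0, 6.0, 8.0, 9.0],
)
y_test = Surv.from_arrays(
    event=[True, False, True, True, False],
    time=[1.0, 5.0, 6.0, 8.0, 9.0],
)
risk_score = np.array([0.9, 0.3, 0.6, 0.4, 0.1])

# One AUC per time point, plus a summary measure averaged over all of them.
times = np.array([2.0, 4.0, 6.0])
auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_score, times)
print(auc, mean_auc)
```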

The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.

scikit-survival 0.8 released

This release adds some nice enhancements for validating survival models.

Previously, scikit-survival only supported Harrell's concordance index to assess the performance of survival models. While it is easy to interpret and compute, it has some shortcomings:

  1. it has been shown that it is too optimistic with an increasing amount of censoring [1],
  2. it is not a useful measure of performance if a specific time point is of primary interest (e.g. predicting 2 year survival).

Convolutional Autoencoder as TensorFlow estimator

In my previous post, I explained how to implement autoencoders as TensorFlow Estimators. I thought it would be nice to add convolutional autoencoders in addition to the existing fully-connected autoencoder. So that's what I did. Moreover, I added the option to extract the low-dimensional encoding of the encoder and visualize it in TensorBoard.

The complete source code is available at https://github.com/sebp/tf_autoencoder.

Why convolutions?

For the fully-connected autoencoder, we reshaped each 28x28 image to a 784-dimensional feature vector. Next, we assigned a separate weight to each edge connecting one of 784 pixels to one of 128 neurons of the first hidden layer, which amounts to 100,352 weights (excluding biases) that need to be learned during training. For the last layer of the decoder, we need another 100,352 weights to reconstruct the full-size image. Considering that the whole autoencoder consists of 222,384 weights, it is obvious that these two layers dominate all other layers by a large margin. When using higher-resolution images, this imbalance becomes even more dramatic.
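
A quick back-of-the-envelope comparison in plain Python illustrates the difference; the 3x3 kernel with 16 filters is a hypothetical configuration, not necessarily the one used in this post.

```python
# Fully-connected: one weight per edge between pixels and hidden neurons.
n_pixels = 28 * 28   # 784-dimensional feature vector
n_hidden = 128
dense_weights = n_pixels * n_hidden  # 100,352 weights, excluding biases

# Convolutional: the weight count depends only on kernel size and channel
# counts, because the same kernel is shared across all spatial positions.
kernel_height, kernel_width = 3, 3   # hypothetical 3x3 kernel
in_channels, out_channels = 1, 16    # grayscale input, 16 filters
conv_weights = kernel_height * kernel_width * in_channels * out_channels  # 144

print(dense_weights, conv_weights)  # 100352 vs. 144
```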

Denoising Autoencoder as TensorFlow estimator

I recently started to use Google's deep learning framework TensorFlow. Since version 1.3, TensorFlow includes a high-level interface inspired by scikit-learn. Unfortunately, as of version 1.4, only 3 different classification and 3 different regression models implementing the Estimator interface are included. To better understand the Estimator interface, Dataset API, and components in tf-slim, I started to implement a simple Autoencoder and applied it to the well-known MNIST dataset of handwritten digits. This post is about my journey and is split into the following sections:

  1. Custom Estimators
  2. Autoencoder network architecture
  3. Autoencoder as TensorFlow Estimator
  4. Using the Dataset API
  5. Denoising Autoencoder

I will assume that you are familiar with TensorFlow basics. The full code is available at https://github.com/sebp/tf_autoencoder.

A second part on Convolutional Autoencoders is available too.

scikit-survival 0.5 released

Today, I released a new version of scikit-survival. This release adds support for the latest version of scikit-learn (0.19) and pandas (0.21). In turn, support for Python 3.4, scikit-learn 0.18 and pandas 0.18 has been dropped.

Many people are confused about the meaning of predictions. Often, they assume that predictions of a survival model should always be non-negative since the input is the time to an event. However, this is not always the case. In general, predictions are risk scores of arbitrary scale. In particular, survival models usually do not predict the exact time of an event, but the relative order of events. If samples are ordered according to their predicted risk score (in ascending order), one obtains the sequence of events, as predicted by the model. A more detailed explanation is available in the Understanding Predictions in Survival Analysis section of the documentation.
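
As a minimal sketch (using the WHAS500 data that ships with scikit-survival, restricted to a handful of numeric columns purely for illustration), note that predict returns risk scores on an arbitrary scale, which may well be negative:

```python
import numpy as np

from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis

X, y = load_whas500()
# Restrict to a few numeric columns purely for illustration.
X = X.loc[:, ["age", "bmi", "diasbp", "hr", "sysbp"]]

model = CoxPHSurvivalAnalysis().fit(X, y)
risk_scores = model.predict(X)  # arbitrary scale, can be negative

# Ordering samples by predicted risk score yields the sequence of
# events as predicted by the model, not actual event times.
predicted_order = np.argsort(risk_scores)
```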

scikit-survival 0.4 released and presented at PyCon UK 2017

I'm pleased to announce that scikit-survival version 0.4 has been released.

This release adds CoxnetSurvivalAnalysis, which implements an efficient algorithm to fit Cox's proportional hazards model with LASSO, ridge, and elastic net penalty. This allows fitting a Cox model to high-dimensional data and performing feature selection. Moreover, it includes support for Windows with Python 3.5 and later by making the cvxopt package optional.
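
As a hedged sketch (the WHAS500 columns below are used purely for illustration, not a recommended model), fitting the estimator computes coefficients along a path of penalty strengths; features whose coefficient is shrunk to exactly zero are effectively dropped:

```python
import numpy as np

from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxnetSurvivalAnalysis

X, y = load_whas500()
X = X.loc[:, ["age", "bmi", "diasbp", "hr", "sysbp"]]  # illustrative subset

# l1_ratio interpolates between ridge (0) and LASSO (1) penalties.
model = CoxnetSurvivalAnalysis(l1_ratio=0.9)
model.fit(X, y)

# coef_ holds one column of coefficients per penalty strength in alphas_;
# zero entries correspond to features excluded from the model.
print(model.coef_.shape, np.count_nonzero(model.coef_[:, -1]))
```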

scikit-survival 0.3 released

Today, I released a new version of scikit-survival, a Python module for survival analysis built on top of scikit-learn.

This release adds predict_survival_function and predict_cumulative_hazard_function to sksurv.linear_model.CoxPHSurvivalAnalysis, which return the survival function and cumulative hazard function using Breslow's estimator.
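
A minimal sketch of calling the new method (using WHAS500 data for illustration; the exact return type may differ across versions): in recent releases, each returned object is a step function that can be evaluated at the time points stored in its x attribute.

```python
from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis

X, y = load_whas500()
X = X.loc[:, ["age", "bmi", "diasbp", "hr", "sysbp"]]  # illustrative subset

model = CoxPHSurvivalAnalysis().fit(X, y)

# One step function per sample; evaluate it at the unique event times.
for fn in model.predict_survival_function(X.iloc[:2]):
    print(fn(fn.x[:5]))  # survival probability at the first five time points
```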

Moreover, it fixes a build error on Windows (#3) and adds the sksurv.preprocessing.OneHotEncoder class, which can be used in a scikit-learn pipeline.
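
A sketch of how the new encoder might be used (assuming the WHAS500 data for illustration); it converts categorical columns of a pandas DataFrame into dummy variables and passes numeric columns through unchanged:

```python
from sklearn.pipeline import make_pipeline

from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder

X, y = load_whas500()  # mix of numeric and categorical columns

# OneHotEncoder turns categorical columns into dummy variables and
# leaves numeric columns untouched, so the Cox model sees only numbers.
pipe = make_pipeline(OneHotEncoder(), CoxPHSurvivalAnalysis())
pipe.fit(X, y)
```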