## Sebastian Pölsterl

### Artificial Intelligence in Medical Imaging

I’m a researcher at the lab for Artificial Intelligence in Medical Imaging working on machine learning for biomedical applications. My research interests are time-to-event analysis (survival analysis) and using deep learning techniques to learn from non-Euclidean data such as graphs. Previously, I worked at The Institute of Cancer Research, London and was among the winners of the Prostate Cancer DREAM challenge. I’m the author of scikit-survival, a machine learning library for survival analysis built on top of scikit-learn.

### Interests

• Time-to-event analysis
• Non-Euclidean data
• High-dimensional data
• Biomedical applications
• Deep learning

### Education

• PhD in Computer Science, 2016

Technische Universität München

• MSc in Bioinformatics, 2011

Ludwig-Maximilians-Universität & Technische Universität München

• BSc in Bioinformatics, 2008

Ludwig-Maximilians-Universität & Technische Universität München

# Recent Posts

### Survival Analysis for Deep Learning

Most machine learning algorithms have been developed to perform classification or regression. However, in clinical research we often want to estimate the time to an event, such as death or recurrence of cancer, which leads to a special type of learning task that is distinct from classification and regression. This task is termed survival analysis, but is also referred to as time-to-event analysis or reliability analysis. Many machine learning algorithms have been adapted to perform survival analysis, such as Support Vector Machines, Random Forests, and Boosting. Only recently has survival analysis entered the era of deep learning, which is the focus of this post.

You will learn how to train a convolutional neural network to predict time to a (generated) event from MNIST images, using a loss function specific to survival analysis. The first part will cover some basic terms and quantities used in survival analysis (feel free to skip this part if you are already familiar with them). In the second part, we will generate synthetic survival data from MNIST images and visualize it. In the third part, we will briefly revisit the most popular survival model of them all and learn how it can be used as a loss function for training a neural network. Finally, we will put all the pieces together: train a convolutional neural network on MNIST and predict survival functions for the test data.
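The loss function alluded to above is, in most deep survival models, the negative partial log-likelihood of Cox's proportional hazards model. As a rough illustration of the idea, here is a minimal pure-Python sketch of that loss, assuming no tied event times; the function name and averaging over events are my own choices for this example, not the post's implementation.

```python
import math

def cox_ph_loss(risk_scores, times, events):
    """Negative Cox partial log-likelihood (no handling of tied times).

    risk_scores: predicted log-risk f(x_i) for each sample
    times:       observed time y_i for each sample
    events:      1 if the event was observed, 0 if censored
    """
    # Sort samples by observed time so that the risk set of sample i
    # consists of everyone with an equal or later observed time.
    order = sorted(range(len(times)), key=lambda i: times[i])
    loss = 0.0
    n_events = 0
    for pos, i in enumerate(order):
        if events[i] != 1:
            continue  # censored samples only contribute to risk sets
        # Risk set: all samples still at risk at time y_i.
        risk_set = [risk_scores[j] for j in order[pos:]]
        log_sum = math.log(sum(math.exp(r) for r in risk_set))
        loss -= risk_scores[i] - log_sum
        n_events += 1
    return loss / max(n_events, 1)
```

Because the loss only compares predicted risks within each risk set, it rewards a network for ranking shorter survival times with higher risk scores, which is exactly the property the concordance index (discussed below) measures.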

### scikit-survival 0.9 released

This release of scikit-survival adds support for scikit-learn 0.21 and pandas 0.24, along with a couple of smaller fixes. Please see the release notes for a full list of changes. If you are using scikit-survival in your research, you can now cite it using a Digital Object Identifier (DOI).

### Evaluating Survival Models

The most frequently used evaluation metric for survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with the lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the risk $\hat{f}$ estimated by a survival model is higher for the subject with the lower survival time, i.e., $\hat{f}_i > \hat{f}_j \land y_j > y_i$; otherwise the pair is discordant. Harrell’s estimator of the c index is implemented in concordance_index_censored.
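The pairwise definition above translates almost directly into code. The following is a hypothetical pure-Python illustration of Harrell's estimator written for this post, not the library's concordance_index_censored; following Harrell, tied risk scores count as half-concordant.

```python
def concordance_index(event_times, event_indicators, risk_scores):
    """Harrell's concordance index, computed from its definition.

    A pair (i, j) is comparable if y_i < y_j and sample i had an event
    (delta_i = 1); it is concordant if the model assigns the higher
    risk to the shorter survival time, i.e. f_i > f_j.
    """
    concordant = 0.0
    comparable = 0
    n = len(event_times)
    for i in range(n):
        if event_indicators[i] != 1:
            continue  # a censored sample cannot anchor a comparable pair
        for j in range(n):
            if event_times[j] > event_times[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0      # correctly ordered pair
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5      # tied risk counts as half
    return concordant / comparable
```

A perfect ranking yields 1.0, a completely reversed ranking 0.0, and a constant (uninformative) risk score 0.5, analogous to the area under the ROC curve.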

While Harrell’s concordance index is easy to interpret and compute, it has some shortcomings:

1. it has been shown to be overly optimistic with increasing amounts of censoring [1],
2. it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years).

Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue.

The second point can be addressed by extending the well-known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC curve for a given list of time points.
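To make the cumulative/dynamic notion concrete, here is a deliberately naive sketch that ignores censoring entirely; the library's estimator additionally applies inverse-probability-of-censoring weights, which this illustration omits, and the function name is mine.

```python
def dynamic_auc_naive(times, events, risk_scores, t):
    """Cumulative/dynamic AUC at time t, ignoring censoring.

    Cases:    subjects who experienced the event by time t.
    Controls: subjects still event-free after time t.
    The AUC is the fraction of case/control pairs in which the case
    received the higher risk score (ties count as 0.5).
    """
    cases = [f for y, d, f in zip(times, events, risk_scores)
             if y <= t and d == 1]
    controls = [f for y, f in zip(times, risk_scores) if y > t]
    if not cases or not controls:
        raise ValueError("need at least one case and one control at t")
    wins = 0.0
    for f_case in cases:
        for f_control in controls:
            if f_case > f_control:
                wins += 1.0
            elif f_case == f_control:
                wins += 0.5
    return wins / (len(cases) * len(controls))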

The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.

### scikit-survival 0.8 released

This release of scikit-survival 0.8 adds some nice enhancements for validating survival models. Previously, scikit-survival only supported Harrell’s concordance index for assessing the performance of survival models. While it is easy to interpret and compute, it has some shortcomings:

1. it has been shown to be overly optimistic with increasing amounts of censoring [1],
2. it is not a useful measure of performance if a specific time point is of primary interest (e.g. predicting 2 year survival).

### scikit-survival 0.7 released

This is a long-overdue maintenance release of scikit-survival 0.7 that adds compatibility with Python 3.7 and scikit-learn 0.20. For a complete list of changes, see the release notes.

# Recent Publications

(2019). Quantifying Confounding Bias in Neuroimaging Datasets with Causal Inference. Medical Image Computing and Computer-Assisted Intervention – MICCAI.