Models for time-to-event data – From Cox's proportional hazards model to deep learning

Abstract

Predictive models for time-to-event data are suitable when only partial information about the outcome is available for a subset of the data – these observations are censored. Right censoring is the most common form and arises frequently in clinical studies, because patients can drop out or fail to complete follow-up. For these patients, it is unknown whether they experienced an adverse event after their last day of contact. Cox’s proportional hazards model is by far the most popular model for time-to-event data, but many alternatives exist. Many traditional machine learning models have been extended to handle censored outcomes, e.g., support vector machines, gradient boosting, random forests, and multilayer perceptrons. In this talk, I will explain the underlying principles of modelling time-to-event data and how they extend to modern machine learning models, including deep learning.
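
As a small illustration of the kind of model discussed in the talk, the sketch below fits Cox's proportional hazards model to right-censored data using scikit-survival. This is not code from the talk; the WHAS500 example dataset and the default hyperparameters are used purely for demonstration.

```python
# Minimal sketch: Cox's proportional hazards model on right-censored data
# with scikit-survival (dataset and settings chosen only for illustration).
from sksurv.datasets import load_whas500
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder

# X: covariates; y: structured array with an event indicator and the
# observed time (time of event, or time of censoring if no event occurred)
X, y = load_whas500()
X = OneHotEncoder().fit_transform(X)  # encode categorical covariates

model = CoxPHSurvivalAnalysis()
model.fit(X, y)

# score() returns Harrell's concordance index, a common accuracy measure
# for censored outcomes (here evaluated on the training data)
print("concordance index:", model.score(X, y))
```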

Date
Oct 2, 2018
Event
Invited Talk
Location
École Centrale de Nantes, France
Sebastian Pölsterl
AI Researcher

My research interests include machine learning for time-to-event analysis, causal inference and biomedical applications.