In my previous post, I explained how to implement autoencoders as a TensorFlow Estimator. I thought it would be nice to add convolutional autoencoders in addition to the existing fully-connected autoencoder. So that's what I did. Moreover, I added the option to extract the low-dimensional encoding produced by the encoder and visualize it in TensorBoard.
The complete source code is available at https://github.com/sebp/tf_autoencoder.
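The core change is replacing dense layers with convolutions in the encoder and transposed convolutions in the decoder. The sketch below illustrates this with tf.layers; the filter counts and kernel sizes are illustrative assumptions, not necessarily those used in the repository.

```python
import tensorflow as tf

def conv_encoder(images):
    # images: [batch, 28, 28, 1]; strided convolutions halve the resolution.
    net = tf.layers.conv2d(images, filters=32, kernel_size=3, strides=2,
                           padding='same', activation=tf.nn.relu)  # -> 14x14x32
    net = tf.layers.conv2d(net, filters=64, kernel_size=3, strides=2,
                           padding='same', activation=tf.nn.relu)  # -> 7x7x64
    return net

def conv_decoder(encoding):
    # Transposed convolutions upsample back to the original image size.
    net = tf.layers.conv2d_transpose(encoding, filters=32, kernel_size=3,
                                     strides=2, padding='same',
                                     activation=tf.nn.relu)        # -> 14x14x32
    return tf.layers.conv2d_transpose(net, filters=1, kernel_size=3,
                                      strides=2, padding='same',
                                      activation=tf.nn.sigmoid)    # -> 28x28x1
```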
For the fully-connected autoencoder, we reshaped each 28x28 image to a 784-dimensional feature vector. Next, we assigned a separate weight to each edge connecting one of the 784 pixels to one of the 128 neurons of the first hidden layer, which amounts to 100,352 weights (excluding biases) that need to be learned during training. For the last layer of the decoder, we need another 100,352 weights to reconstruct the full-size image. Considering that the whole autoencoder consists of 222,384 weights and biases, it is obvious that these two layers dominate all other layers by a large margin. When using higher-resolution images, this imbalance becomes even more dramatic.
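For reference, these counts can be verified with a few lines of Python; the layer sizes below are the 784-128-64-32-64-128-784 architecture implied by the numbers above.

```python
# Assumed layer sizes of the fully-connected autoencoder.
layer_sizes = [784, 128, 64, 32, 64, 128, 784]

weights = sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
biases = sum(layer_sizes[1:])

print(weights)            # 221184, of which 2 * 100352 sit in the first and last layer
print(weights + biases)   # 222384 parameters in total
```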
I recently started to use Google's deep learning framework TensorFlow. Since version 1.3, TensorFlow includes a high-level interface inspired by scikit-learn. Unfortunately, as of version 1.4, only 3 different classification and 3 different regression models implementing the Estimator interface are included. To better understand the Estimator interface, the Dataset API, and components in tf-slim, I started to implement a simple autoencoder and applied it to the well-known MNIST dataset of handwritten digits. This post is about my journey and is split into the following sections (a short code sketch of the Estimator and Dataset pieces follows the list):
- Custom Estimators
- Autoencoder network architecture
- Autoencoder as TensorFlow Estimator
- Using the Dataset API
- Denoising Autoencoder
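To give an idea of what a custom Estimator looks like, here is a minimal sketch of a model_fn for a fully-connected autoencoder, together with an input_fn based on the Dataset API. It is a simplified illustration, not the exact code from the repository; the hidden layer sizes and hyper-parameters are assumptions.

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # features: flattened 28x28 images, shape [batch, 784] (assumed).
    net = features
    # Encoder: progressively compress the input.
    for units in params['hidden_units']:                # e.g. [128, 64, 32]
        net = tf.layers.dense(net, units, activation=tf.nn.relu)
    # Decoder: mirror the encoder and reconstruct the input.
    for units in reversed(params['hidden_units'][:-1]):
        net = tf.layers.dense(net, units, activation=tf.nn.relu)
    reconstruction = tf.layers.dense(net, 784, activation=tf.nn.sigmoid)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=reconstruction)

    loss = tf.losses.mean_squared_error(labels, reconstruction)
    train_op = tf.train.AdamOptimizer(params['learning_rate']).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn(images, batch_size=128):
    # For an autoencoder, the input doubles as the reconstruction target.
    dataset = tf.data.Dataset.from_tensor_slices((images, images))
    dataset = dataset.shuffle(10000).repeat().batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()

estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    params={'hidden_units': [128, 64, 32], 'learning_rate': 1e-3})
# estimator.train(input_fn=lambda: input_fn(mnist_images), steps=1000)
```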
I will assume that you are familiar with TensorFlow basics. The full code is available at https://github.com/sebp/tf_autoencoder.
A second part on Convolutional Autoencoders is available too.
Today, I released a new version of scikit-survival. This release adds support for the latest version of scikit-learn (0.19) and pandas (0.21). In turn, support for Python 3.4, scikit-learn 0.18 and pandas 0.18 has been dropped.
Many people are confused about the meaning of predictions. Often, they assume that predictions of a survival model should always be non-negative since the input is the time to an event. However, this is not always the case. In general, predictions are risk scores on an arbitrary scale. In particular, survival models usually do not predict the exact time of an event, but the relative order of events. If samples are ordered according to their predicted risk score (in ascending order), one obtains the sequence of events, as predicted by the model. A more detailed explanation is available in the Understanding Predictions in Survival Analysis section of the documentation.
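To make this concrete, here is a minimal sketch with synthetic data (not taken from the documentation): the model's predictions are risk scores that may well be negative, and sorting by them yields the predicted event order rather than event times.

```python
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.RandomState(0)
X = rng.standard_normal((100, 5))
y = Surv.from_arrays(event=rng.binomial(1, 0.7, 100).astype(bool),
                     time=rng.exponential(10.0, 100))

model = CoxPHSurvivalAnalysis().fit(X, y)
risk_scores = model.predict(X)   # arbitrary scale, may contain negative values
order = np.argsort(risk_scores)  # predicted relative order of events, not event times
```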
I'm pleased to announce that scikit-survival version 0.4 has been released.
This release adds CoxnetSurvivalAnalysis, which implements an efficient algorithm to fit Cox's proportional hazards model with LASSO, ridge, and elastic net penalty. This allows fitting a Cox model to high-dimensional data and performing feature selection. Moreover, the release includes support for Windows with Python 3.5 and later by making the cvxopt package optional.
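A minimal usage sketch, with synthetic high-dimensional data and an illustrative l1_ratio; the parameter values are assumptions, not recommendations:

```python
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.RandomState(0)
X = rng.standard_normal((100, 200))  # more features than samples
y = Surv.from_arrays(event=rng.binomial(1, 0.7, 100).astype(bool),
                     time=rng.exponential(10.0, 100))

# l1_ratio interpolates between ridge (towards 0) and LASSO (1.0).
model = CoxnetSurvivalAnalysis(l1_ratio=0.9).fit(X, y)

# One coefficient vector per penalty strength along the regularization path;
# exact zeros correspond to features excluded from the model.
print(model.coef_.shape)
```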
Today, I released a new version of scikit-survival, a Python module for survival analysis built on top of scikit-learn.
This release adds predict_survival_function and predict_cumulative_hazard_function to sksurv.linear_model.CoxPHSurvivalAnalysis, which return the survival function and cumulative hazard function using Breslow's estimator.
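The following sketch, using synthetic data, shows how the new methods might be called; each returned object is a step function that can be evaluated at the time points it is defined on:

```python
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.RandomState(0)
X = rng.standard_normal((100, 5))
y = Surv.from_arrays(event=rng.binomial(1, 0.7, 100).astype(bool),
                     time=rng.exponential(10.0, 100))

model = CoxPHSurvivalAnalysis().fit(X, y)

# One step function per sample; surv_fn.x holds the time points it is defined on.
for surv_fn in model.predict_survival_function(X[:2]):
    print(surv_fn(surv_fn.x)[:5])
```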
Last week, I attended the Docker Containers for Reproducible Research Workshop hosted by the Software Sustainability Institute. Many talks addressed how containers can be used in a high performance computing (HPC) environment. Since running the Docker daemon requires root privileges, most administrators are reluctant to allow users to run Docker containers in an HPC environment.
I've been meaning to do this release for quite a while now, and last week I finally had some time to package everything and update the dependencies. scikit-survival contains the majority of code I developed during my Ph.D.
About Survival Analysis
Survival analysis – also referred to as reliability analysis in engineering – refers to a type of problem in statistics where the objective is to establish a connection between a set of measurements (often called features or covariates) and the time to an event. The name survival analysis originates from clinical research: in many clinical studies, one is interested in predicting the time to death, i.e., survival. Broadly speaking, survival analysis is a type of regression problem (one wants to predict a continuous value), but with a twist. Consider a clinical study, which investigates coronary heart disease and has been carried out over a 1-year period as in the figure below.
IPython and IPython notebook (or Jupyter as it is now called) are great tools that make interactive Python work incredibly easy. If you want to do large-scale computations, like the nested cross-validation I mentioned in a previous post, you want to automatically distribute your work across multiple compute nodes instead of interactively looking at results. Thankfully, the folks behind IPython provide IPython.parallel, which is a library for parallel computing. It is versatile in the sense that you can choose whether you want to distribute your work locally, via SSH, or via MPI by simply adjusting config files.
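As a minimal sketch of the pattern, assuming a cluster has already been started with `ipcluster start`, and with a stand-in task function:

```python
from IPython.parallel import Client

rc = Client()                    # connect to the engines of the running cluster
lview = rc.load_balanced_view()  # dispatch tasks to whichever engine is free

def expensive_task(x):
    # stand-in for real work, e.g. fitting one cross-validation fold
    return x ** 2

results = lview.map_async(expensive_task, range(32)).get()
print(results)
```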
If you are working with machine learning, at some point you have to choose hyper-parameters for your model of choice and do cross-validation to estimate how well the model generalizes to unseen data. Usually, you want to avoid over-fitting when selecting hyper-parameters to get a less biased estimate of the model's true performance. Therefore, the data you perform the hyper-parameter search on has to be independent of the data you use to assess a model's performance.
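In scikit-learn, this separation is commonly expressed as nested cross-validation: an inner loop selects hyper-parameters, while an outer loop estimates performance on data the search never touched. A minimal sketch using the current scikit-learn interface (model and grid are arbitrary choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: hyper-parameter search via 3-fold cross-validation.
inner_cv = GridSearchCV(SVC(), param_grid={'C': [0.1, 1.0, 10.0]}, cv=3)

# Outer loop: each fold fits the full search on its training split only.
scores = cross_val_score(inner_cv, X, y, cv=5)
print(scores.mean(), scores.std())
```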
I've successfully been running an instance of GitLab for almost a year now. The same server is running Redmine, hence both GitLab and Redmine are running in their respective sub-directories. Phusion Passenger is my application server of choice. Unfortunately, it became increasingly difficult to keep this setup running with newer versions of GitLab. First, GitLab officially does not support running out of a sub-directory; second, by default it uses Unicorn. Here, I want to detail how you can still achieve the GitLab + Apache + Phusion Passenger combo, because I could only find slightly outdated guides online.
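To sketch the idea before going into details, a Passenger sub-directory deployment essentially combines an Alias with Passenger's sub-URI directives. The paths and the /gitlab prefix below are illustrative assumptions, and GitLab's own configuration must also be told about the relative URL.

```apache
# Illustrative snippet: serve the app under the /gitlab sub-URI.
# Adjust paths to your installation.
Alias /gitlab /home/git/gitlab/public

<Location /gitlab>
    PassengerBaseURI /gitlab
    PassengerAppRoot /home/git/gitlab
</Location>

<Directory /home/git/gitlab/public>
    Options -MultiViews
    Require all granted
</Directory>
```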