Abstract
Deep neural networks enable highly accurate image segmentation, but require
large amounts of manually annotated data for supervised training. Few-shot
learning aims to address this shortcoming by learning a new class from a few
annotated support examples. We introduce a novel few-shot framework for the
segmentation of volumetric medical images with only a few annotated slices.
Compared to related work in computer vision, the major challenges are
the absence of pre-trained networks and the volumetric nature of medical
scans. We address these challenges by proposing a new architecture for few-
shot segmentation that incorporates ‘squeeze & excite’ blocks. Our two-armed
architecture consists of a conditioner arm, which processes the annotated
support input and generates a task-specific representation, and a segmenter
arm, which uses this representation to segment the new query image. To
facilitate efficient interaction between the conditioner and segmenter arms,
we propose ‘channel squeeze & spatial excitation’ blocks, a lightweight
computational module that enables rich interaction between the two arms with
a negligible increase in model complexity. This contribution allows us to
perform image segmentation without relying on a pre-trained model, which is
generally unavailable for
medical scans. Furthermore, we propose an efficient strategy for volumetric
segmentation by optimally pairing a few slices of the support volume with all
the slices of the query volume. We perform experiments for organ segmentation
on whole-body contrast-enhanced CT scans from the Visceral Dataset. Our
proposed model outperforms multiple baselines and existing approaches in
segmentation accuracy by a significant margin. The source code
is available at
https://github.com/abhi4ssj/few-shot-segmentation.
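
To make the interaction mechanism concrete, the following is a minimal PyTorch sketch of an sSE-style interaction block: the conditioner features are squeezed along the channel axis with a 1x1 convolution, and the resulting sigmoid map spatially excites the segmenter features at the same scale. The module name, signature, and wiring are illustrative assumptions; the authors' actual implementation is in the repository linked above.

```python
# Minimal sketch of a 'channel squeeze & spatial excitation' (sSE)
# interaction block (illustrative; not the authors' released code).
import torch
import torch.nn as nn

class SpatialExcitation(nn.Module):
    """Squeezes conditioner features to a single channel and uses the
    resulting spatial map to re-weight the segmenter features."""

    def __init__(self, cond_channels: int):
        super().__init__()
        # Channel squeeze: project C_cond channels down to one map.
        self.squeeze = nn.Conv2d(cond_channels, 1, kernel_size=1)

    def forward(self, seg_feat: torch.Tensor, cond_feat: torch.Tensor) -> torch.Tensor:
        # cond_feat: (B, C_cond, H, W) -> attention map in [0, 1] of shape (B, 1, H, W)
        attn = torch.sigmoid(self.squeeze(cond_feat))
        # Spatial excitation: broadcast the map over all segmenter channels.
        return seg_feat * attn
```

Because each such block adds only a single 1x1 convolution, it can be placed at every encoder and decoder stage of the segmenter while keeping the parameter overhead negligible.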
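
The volumetric pairing strategy can likewise be sketched in a few lines: the query volume's slices are partitioned into as many contiguous groups as there are annotated support slices, and each group is segmented conditioned on the support slice at the matching relative position. The function below is a hypothetical illustration of this idea, not the paper's exact pairing rule.

```python
# Hypothetical sketch of pairing a few annotated support slices with all
# slices of a query volume by relative position along the scan axis.
import numpy as np

def pair_slices(num_query_slices, support_indices):
    """Return (query_slice, support_slice) index pairs.

    Query slices are split into len(support_indices) contiguous groups
    of roughly equal size; group i is paired with the i-th support slice.
    """
    groups = np.array_split(np.arange(num_query_slices), len(support_indices))
    pairs = []
    for support_idx, group in zip(support_indices, groups):
        pairs.extend((int(q), support_idx) for q in group)
    return pairs

# Example: a 10-slice query volume and 3 annotated support slices.
print(pair_slices(10, support_indices=[2, 5, 8]))
# [(0, 2), (1, 2), (2, 2), (3, 2), (4, 5), ..., (9, 8)]
```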
Publication
Medical Image Analysis