Robust Counterfactual Medical Image Generation

Paper | Code

Counterfactual images represent what an image would have looked like if it had different specified characteristics. They find use in a wide range of tasks in medical imaging:

| Counterfactual variable | Task |
| --- | --- |
| Age / time | Forecasting |
| Phenotype | Data exploration |
| Outcome | Biomarker visualization |
| Feature | Feature visualization |
| Acquisition settings | Modality transfer |
Counterfactual MRIs generated by a StarGAN equipped with spatial-intensity transforms, conditioned on two subjects' scans (middle column) from a dataset of acute ischemic stroke patients. The generator transforms each scan into its neighboring images by conditioning on changes in age (top) and stroke severity (bottom).

Generative models are a powerful tool for producing counterfactual images. However, we want such a model to make only the minimal changes needed to realize the counterfactual, whereas existing models often alter aspects of the input image that are unrelated to the counterfactual variable.

Although the counterfactual variable specifies only an increase in ventricular volume, the right image differs from the left in anatomy and orientation, making it a poor counterfactual.

These problems are exacerbated in small, real-world (clinical) datasets, where a generative model is more susceptible to artifacts and differences in scanner settings.

The problems in counterfactual images can be subtle. On the left, the generative model inserts a dark streaking artifact along the sulci. On the right, the generator inserts partial volume effects.

We address this problem by introducing spatial-intensity transforms. We parameterize the output of the generator as a smooth deformation field combined with a sparse intensity transform; the degree of smoothness and sparsity can be controlled flexibly for different applications.
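
To make the parameterization concrete, here is a minimal sketch of how a generator's output could be split into a smooth deformation plus a sparse intensity change. It is an illustration under assumptions, not the repository's implementation: the function names (`spatial_intensity_transform`, `sit_sparsity_loss`), the flow channel ordering, and the specific hyperparameters (`flow_scale`, `lam`) are hypothetical. Smoothness is enforced here by predicting the flow at low resolution and upsampling it; sparsity is encouraged with an L1 penalty on the intensity component.

```python
import torch
import torch.nn.functional as F


def spatial_intensity_transform(x, flow_lowres, intensity, flow_scale=1.0):
    """Warp image x with a smooth deformation, then add a sparse intensity change.

    x           : (B, 1, H, W) input image
    flow_lowres : (B, 2, h, w) low-resolution displacement field in normalized
                  coordinates (channels assumed ordered x, y); upsampling it to
                  full resolution keeps the deformation smooth
    intensity   : (B, 1, H, W) additive intensity map; penalizing its L1 norm
                  (see sit_sparsity_loss) keeps the intensity edits sparse
    """
    B, _, H, W = x.shape

    # Smoothness: interpolate the coarse flow up to full resolution.
    flow = F.interpolate(flow_lowres, size=(H, W),
                         mode="bilinear", align_corners=False) * flow_scale

    # Identity sampling grid in the [-1, 1] coordinates expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=x.device),
        torch.linspace(-1, 1, W, device=x.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)

    # Displace the grid by the predicted flow.
    grid = base_grid + flow.permute(0, 2, 3, 1)

    # Warp the input image, then apply the sparse intensity component.
    warped = F.grid_sample(x, grid, mode="bilinear",
                           padding_mode="border", align_corners=False)
    return warped + intensity


def sit_sparsity_loss(intensity, lam=0.1):
    # L1 penalty on the intensity map; lam (hypothetical) trades off sparsity.
    return lam * intensity.abs().mean()
```

In this sketch, the resolution of `flow_lowres` controls how smooth the deformation is, and `lam` controls how sparse the intensity edits are; tuning these two knobs is one plausible way to adapt the transform to different applications.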

We find that this simple construction confers improved robustness and image quality on a variety of architectures, counterfactual variables, and datasets. It also disentangles morphological changes from tissue intensity and textural changes.

Read our MICCAI paper for more details, and try out our code.

More Results

(Top Row) True longitudinal MRIs from a subject in ADNI with mild cognitive impairment. (Middle Row) Predicted MRIs from an unconstrained StarGAN. (Bottom Row) Predicted MRIs from StarGAN with spatial-intensity transforms.


A stroke patient's scan translated in age from 67 to 82 years old using four unconstrained models (top row) and their SIT variants (bottom row).


Ablations: A stroke patient's scan translated in age from 59 years old to 84 years old using different parameterizations of the generator in StarGAN.


BibTeX

@inproceedings{wang2020spatial,
  title={Spatial-intensity transform GANs for high fidelity medical image-to-image translation},
  author={Wang, Clinton J and Rost, Natalia S and Golland, Polina},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={749--759},
  year={2020},
  organization={Springer}
}