Interpolating between Images with Diffusion Models

Paper | Code

One little-explored frontier of image generation and editing is the task of interpolating between two input images. We present a method for zero-shot controllable interpolation using latent diffusion models.

By leveraging the powerful conditioning abilities of pre-trained diffusion models, we can generate controllable and creative interpolations between images with diverse styles, layouts, and subjects. In the interactive examples on this page, the first and last frames are the inputs.


We apply interpolation in latent space at a sequence of decreasing noise levels, then perform denoising conditioned on interpolated text embeddings derived from textual inversion and (optionally) subject poses derived from OpenPose. For greater consistency, or to specify additional criteria, we can generate several candidates and use CLIP to select the highest-quality image.
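
As a concrete illustration of the latent interpolation step, the sketch below spherically interpolates (slerps) two noisy latents at a given noise level. It assumes PyTorch tensors; the slerp helper and the commented scheduler calls are illustrative names and usage, not the exact interface of our released code.

    import torch

    def slerp(z0: torch.Tensor, z1: torch.Tensor, alpha: float, eps: float = 1e-8) -> torch.Tensor:
        """Spherical linear interpolation between two (noisy) latent tensors."""
        v0, v1 = z0.flatten(), z1.flatten()
        cos_theta = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
        theta = torch.acos(cos_theta.clamp(-1 + eps, 1 - eps))
        w0 = torch.sin((1 - alpha) * theta) / torch.sin(theta)
        w1 = torch.sin(alpha * theta) / torch.sin(theta)
        return w0 * z0 + w1 * z1

    # Illustrative usage: noise both input latents to the same timestep with shared noise,
    # interpolate, then denoise conditioned on the interpolated embeddings.
    # noisy0 = scheduler.add_noise(latent0, shared_noise, t)  # e.g. a diffusers-style scheduler
    # noisy1 = scheduler.add_noise(latent1, shared_noise, t)
    # noisy_mid = slerp(noisy0, noisy1, alpha=0.5)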

To generate a new frame, we interpolate the noisy latents of two existing frames. Text prompts and (if applicable) poses are extracted from the original input images and interpolated to serve as conditioning inputs for the denoiser. This process can be repeated with different noise vectors to generate multiple candidates. The best candidate is selected by computing its CLIP similarity to a prompt describing the desired characteristics.
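
The candidate-selection step can be sketched as follows, assuming the Hugging Face transformers CLIP model and candidates already decoded to PIL images; the clip_rank helper name is illustrative rather than part of our released code.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    def clip_rank(candidates: list[Image.Image], prompt: str) -> Image.Image:
        """Return the candidate image with the highest CLIP similarity to the prompt."""
        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
        inputs = processor(text=[prompt], images=candidates, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # logits_per_image has shape (num_candidates, 1): image-text similarity scores
        best = outputs.logits_per_image[:, 0].argmax().item()
        return candidates[best]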

We obtain convincing interpolations across diverse subject poses, image styles, and image content.

This paper was presented at the ICML 2023 Workshop on Challenges of Deploying Generative AI. Read our paper for more details, or check out our code.

BibTeX

@misc{wang2023interpolating,
      title={Interpolating between Images with Diffusion Models}, 
      author={Clinton J. Wang and Polina Golland},
      year={2023},
      eprint={2307.12560},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}