StyleCineGAN: Landscape Cinemagraph Generation
using a Pre-trained StyleGAN

Visual Media Lab, KAIST
CVPR 2024


Abstract

We propose a method that can generate cinemagraphs automatically from a still landscape image using a pre-trained StyleGAN. Inspired by the success of recent unconditional video generation, we leverage a powerful pre-trained image generator to synthesize high-quality cinemagraphs. Unlike previous approaches that mainly utilize the latent space of a pre-trained StyleGAN, our approach utilizes its deep feature space for both GAN inversion and cinemagraph generation. Specifically, we propose multi-scale deep feature warping (MSDFW), which warps the intermediate features of a pre-trained StyleGAN at different resolutions. By using MSDFW, the generated cinemagraphs are of high resolution and exhibit plausible looping animation. We demonstrate the superiority of our method through user studies and quantitative comparisons with state-of-the-art cinemagraph generation methods and a video generation method that uses a pre-trained StyleGAN.
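As an illustrative aside, the core operation behind MSDFW, backward-warping a StyleGAN feature map of a given resolution with a resized motion field, can be sketched in a few lines of PyTorch. This is a minimal sketch, not the authors' implementation: the names (warp, msdfw_frame, features, flow) are placeholders, and the looping mechanism is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp a feature map `feat` (B, C, H, W) with a flow field
    `flow` (B, 2, Hf, Wf) given in pixel units at its own resolution."""
    B, C, H, W = feat.shape
    # Match the flow resolution to the feature resolution and rescale magnitudes.
    scale = torch.tensor([W / flow.shape[-1], H / flow.shape[-2]],
                         device=flow.device).view(1, 2, 1, 1)
    flow = F.interpolate(flow, size=(H, W), mode="bilinear",
                         align_corners=True) * scale

    # Build a normalized sampling grid shifted by the flow.
    ys, xs = torch.meshgrid(torch.arange(H, device=feat.device),
                            torch.arange(W, device=feat.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow   # (B, 2, H, W)
    grid_x = 2.0 * grid[:, 0] / max(W - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                      # (B, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def msdfw_frame(features, flow):
    """Warp intermediate StyleGAN feature maps (keyed by resolution) with a
    shared motion field; the warped features would then be passed through the
    remaining synthesis layers to render one cinemagraph frame."""
    return {res: warp(feat, flow) for res, feat in features.items()}
```

The key design point this sketch reflects is that the warping is applied to deep features at multiple resolutions rather than to latent codes or the final RGB frame, which is what allows the generator to produce high-resolution, temporally plausible frames.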

Supplementary Video


Comparison to Baseline Methods

Side-by-side comparison videos: Ours, T2C, EMF, DL, AL

BibTeX

@misc{choi2024stylecinegan,
  title={StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN},
  author={Jongwoo Choi and Kwanggyoon Seo and Amirsaman Ashtari and Junyong Noh},
  year={2024},
  eprint={2403.14186},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}