Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts
[Paper] [Code coming soon!]
Automatic image synthesis research has been growing rapidly as deep networks become more expressive. In the last couple of years, we have observed images of digits, indoor scenes, birds, chairs, etc. being automatically generated. The expressive power of image generators has also been enhanced by introducing several forms of conditioning variables such as object names, sentences, bounding box and key-point locations. In this work, we propose a novel deep conditional generative adversarial network architecture that takes its strength from the semantic layout and scene attributes integrated as conditioning variables. We show that our architecture is able to generate realistic outdoor scene images under different conditions, e.g. day-night, sunny-foggy, with clear object boundaries.
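The exact architecture is detailed in the paper; as a rough illustration only, a generator conditioned on both a semantic layout and transient scene attributes can be sketched as below. All layer sizes, class/attribute counts, and names here are assumptions for the sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Hypothetical sketch: the semantic layout (one-hot label maps) is
    encoded with conv layers, the transient attribute vector and the noise
    are broadcast spatially, and all three are fused before upsampling
    back to an RGB image."""
    def __init__(self, n_classes=8, n_attrs=4, z_dim=64):
        super().__init__()
        self.layout_enc = nn.Sequential(
            nn.Conv2d(n_classes, 32, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),         # 32 -> 16
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(64 + n_attrs + z_dim, 128, 3, padding=1)
        self.decode = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, z, layout, attrs):
        h = self.layout_enc(layout)                       # (B, 64, 16, 16)
        b, _, hh, ww = h.shape
        a = attrs.view(b, -1, 1, 1).expand(-1, -1, hh, ww)  # broadcast attributes
        n = z.view(b, -1, 1, 1).expand(-1, -1, hh, ww)      # broadcast noise
        return self.decode(self.fuse(torch.cat([h, a, n], dim=1)))

# One forward pass: batch of 2, 64x64 layouts with 8 semantic classes,
# 4 transient attributes (e.g. day/night, fog) and a 64-d noise vector.
g = ConditionalGenerator()
img = g(torch.randn(2, 64), torch.rand(2, 8, 64, 64), torch.rand(2, 4))
print(img.shape)  # torch.Size([2, 3, 64, 64])
```

Keeping the attributes as a spatially broadcast vector (rather than a single input concatenation) lets the same conditioning reach every spatial location of the feature map, which is one common way to inject global scene conditions such as day-night or sunny-foggy.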
Experiments
Effect of Varying Transient Attributes
Incrementally Adding/Deleting Scene Elements
Searching for Nearest Training Images
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used in this study.