To access the full text documents, please follow this link: http://hdl.handle.net/2117/129790

Unsupervised person image synthesis in arbitrary poses
Pumarola Peris, Albert; Agudo Martínez, Antonio; Sanfeliu Cortés, Alberto; Moreno-Noguer, Francesc
Institut de Robòtica i Informàtica Industrial; Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial; Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents; Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
We present a novel approach for synthesizing photo-realistic images of people in arbitrary poses using generative adversarial learning. Given an input image of a person and a desired pose represented by a 2D skeleton, our model renders the image of the same person under the new pose, synthesizing novel views of the parts visible in the input image and hallucinating those that are not seen. This problem has recently been addressed in a supervised manner, i.e., during training the ground truth images under the new poses are given to the network. We go beyond these approaches by proposing a fully unsupervised strategy. We tackle this challenging scenario by splitting the problem into two principal subtasks. First, we consider a pose conditioned bidirectional generator that maps back the initially rendered image to the original pose, hence being directly comparable to the input image without the need to resort to any training image. Second, we devise a novel loss function that incorporates content and style terms, and aims at producing images of high perceptual quality. Extensive experiments conducted on the DeepFashion dataset demonstrate that the images rendered by our model are very close in appearance to those obtained by fully supervised approaches.
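The abstract describes two components: a pose-conditioned bidirectional generator whose output is mapped back to the original pose so it can be compared directly with the input image, and a loss combining content and style terms. Below is a minimal PyTorch-style sketch of those two ideas, for illustration only; the names (SimpleGenerator, gram_matrix, cycle_losses), the toy architecture, and the stand-in feature extractor are hypothetical and do not correspond to the authors' released code.

# Minimal sketch of (1) the bidirectional generation cycle
# I -> I_hat(pose_new) -> I_cycle(pose_orig), compared to the input without
# ground truth for the new pose, and (2) a content (L1) + style (Gram) loss.
# All names and shapes here are illustrative assumptions, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGenerator(nn.Module):
    """Toy pose-conditioned generator: concatenates the image with a pose map."""

    def __init__(self, img_channels=3, pose_channels=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + pose_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, image, pose):
        return self.net(torch.cat([image, pose], dim=1))


def gram_matrix(features):
    """Gram matrix of feature maps, used for the style term."""
    b, c, h, w = features.size()
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def cycle_losses(gen, image, pose_orig, pose_new, feat_extractor):
    """Forward cycle I -> I_new -> I_back, with content/style losses vs. the input."""
    fake_new = gen(image, pose_new)      # render under the desired pose
    cycled = gen(fake_new, pose_orig)    # map back to the original pose

    content = F.l1_loss(cycled, image)   # pixel-level content term
    style = F.mse_loss(gram_matrix(feat_extractor(cycled)),
                       gram_matrix(feat_extractor(image)))
    return content, style


if __name__ == "__main__":
    gen = SimpleGenerator()
    # Tiny stand-in feature extractor; a pretrained network would normally be used.
    feats = nn.Conv2d(3, 8, 3, padding=1)
    img = torch.rand(1, 3, 64, 64) * 2 - 1
    p0, p1 = torch.rand(1, 18, 64, 64), torch.rand(1, 18, 64, 64)
    c, s = cycle_losses(gen, img, p0, p1, feats)
    print(f"content={c.item():.4f}  style={s.item():.4f}")

In a full adversarial system, the rendered and cycled images would additionally feed a pose-conditioned discriminator, and the style term would typically be computed over several layers of a pretrained feature network rather than a single toy convolution.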
Peer Reviewed
-UPC subject areas::Computer science
-Human mechanics
-Pattern recognition systems
-Computer vision
-Optimisation
-Computational geometry
-Author keywords: GANs, Deep Learning, Conditioned Image Generation
Attribution-NonCommercial-NoDerivs 3.0 Spain
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
Article - Submitted version
Conference Object
Institute of Electrical and Electronics Engineers (IEEE)