To access the full-text documents, please follow this link: http://hdl.handle.net/2117/125337
dc.contributor | Institut de Robòtica i Informàtica Industrial |
---|---|
dc.contributor | Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial |
dc.contributor | Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents |
dc.contributor | Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI |
dc.contributor.author | Pumarola Peris, Albert |
dc.contributor.author | Agudo Martínez, Antonio |
dc.contributor.author | Martinez, Aleix M. |
dc.contributor.author | Sanfeliu Cortés, Alberto |
dc.contributor.author | Moreno-Noguer, Francesc |
dc.date | 2018 |
dc.identifier.citation | Pumarola, A., Agudo, A., Martinez, A., Sanfeliu, A., Moreno-Noguer, F. GANimation: anatomically-aware facial animation from a single image. A: European Conference on Computer Vision. "Computer Vision – ECCV 2018. 15th European Conference, Munich, Germany, September 8-14, 2018, proceedings, part I". Berlín: Springer, 2018, p. 835-851. |
dc.identifier.citation | 10.1007/978-3-030-01249-6_50 |
dc.identifier.uri | http://hdl.handle.net/2117/125337 |
dc.description.abstract | The final publication is available at link.springer.com |
dc.description.abstract | Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN, which conditions GANs' generation process on images of a specific domain, namely a set of images of people sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a fully unsupervised strategy to train the model that only requires images annotated with their activated AUs, and exploits attention mechanisms that make our network robust to changing backgrounds and lighting conditions. An extensive evaluation shows that our approach goes beyond competing conditional generators both in the capability to synthesize a much wider range of expressions, ruled by anatomically feasible muscle movements, and in the capacity to deal with images in the wild. |
dc.description.abstract | Peer Reviewed |
dc.description.abstract | Award-winning |
dc.language.iso | eng |
dc.publisher | Springer |
dc.relation | https://link.springer.com/chapter/10.1007%2F978-3-030-01249-6_50 |
dc.relation | info:eu-repo/grantAgreement/ES/2PE/MDM-2016-0656 |
dc.relation | info:eu-repo/grantAgreement/EC/H2020/644271-AEROARMS |
dc.relation | info:eu-repo/grantAgreement/ES/2PE/DPI2016-78957-R |
dc.rights | info:eu-repo/semantics/openAccess |
dc.subject | Àrees temàtiques de la UPC::Informàtica::Automàtica i control |
dc.subject | computer vision |
dc.subject | GANs |
dc.subject | Face Animation |
dc.subject | Action-Unit Condition |
dc.subject | Classificació INSPEC::Pattern recognition::Computer vision |
dc.title | GANimation: anatomically-aware facial animation from a single image |
dc.type | info:eu-repo/semantics/submittedVersion |
dc.type | info:eu-repo/semantics/conferenceObject |