Make static person walk again via separating pose action from shape
By: Yongwei Nie, Meihua Zhao, Qing Zhang, Ping Li, Jian Zhu, Hongmin Cai
Format: Article
Published: Elsevier, 2024-08-01
Description
This paper addresses the problem of animating a person in a static image, the core task of which is to infer future poses for the person. Existing approaches predict future poses in 2D space and therefore suffer from the entanglement of pose action and shape. We propose a method that generates actions in 3D space and then transfers them to the 2D person. We first lift the person's 2D pose to a 3D skeleton, then propose a 3D action synthesis network that predicts future skeletons, and finally devise a self-supervised action transfer network that transfers the actions of the 3D skeletons to the 2D person. Actions generated in 3D space look plausible and vivid. More importantly, self-supervised action transfer allows our method to be trained only on a 3D MoCap dataset while still being able to process images from different domains. Experiments on three image datasets validate the effectiveness of our method.
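The three-stage pipeline described in the abstract (2D-to-3D lifting, 3D action synthesis, 3D-to-2D action transfer) can be sketched as follows. This is a minimal illustrative mock-up, not the paper's method: the function names are hypothetical, the "lifter" appends a constant depth instead of running a learned network, the "synthesis network" is replaced by fixed-velocity extrapolation, and the "transfer network" is replaced by orthographic projection.

```python
import numpy as np

def lift_to_3d(pose_2d, depth=1.0):
    """Stage 1 (stand-in): lift a 2D pose of shape (J, 2) to a 3D
    skeleton of shape (J, 3). The paper uses a learned lifter; here we
    simply append a constant depth coordinate."""
    z = np.full((pose_2d.shape[0], 1), depth)
    return np.concatenate([pose_2d, z], axis=1)

def synthesize_action(skeleton_3d, n_frames=4, velocity=None):
    """Stage 2 (stand-in): predict future 3D skeletons. The paper's 3D
    action synthesis network is replaced by fixed-velocity extrapolation."""
    if velocity is None:
        velocity = np.zeros_like(skeleton_3d)
        velocity[:, 0] = 0.1  # translate all joints along x, a crude "walk"
    return [skeleton_3d + (t + 1) * velocity for t in range(n_frames)]

def transfer_to_2d(skeleton_3d):
    """Stage 3 (stand-in): map a 3D skeleton's action back to the 2D
    person. The paper's self-supervised transfer network is replaced by
    an orthographic projection that drops the depth coordinate."""
    return skeleton_3d[:, :2]

# Toy 3-joint 2D pose for a person detected in a static image.
pose_2d = np.array([[0.0, 0.0], [0.0, 1.0], [0.5, 2.0]])
skeleton = lift_to_3d(pose_2d)
future = synthesize_action(skeleton, n_frames=3)
animation = [transfer_to_2d(s) for s in future]
print(len(animation), animation[0].shape)  # 3 frames of (3, 2) poses
```

The point of the decomposition is that stage 2 operates purely on 3D skeletons, so it can be trained on a MoCap dataset alone, while stage 3 is the only part that must connect skeletons back to 2D images.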