This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
DeepHandMesh: A Weakly-supervised Deep Encoder-Decoder Framework for High-fidelity Hand Mesh Modeling
0
Citations
3
Authors
2020
Year
Abstract
Human hands play a central role in interacting with other people and objects. For realistic replication of such hand motions, high-fidelity hand meshes have to be reconstructed. In this study, we propose DeepHandMesh, a weakly-supervised deep encoder-decoder framework for high-fidelity hand mesh modeling. We design our system to be trained in an end-to-end and weakly-supervised manner; therefore, it does not require groundtruth meshes. Instead, it relies on weaker supervision such as 3D joint coordinates and multi-view depth maps, which are easier to obtain than groundtruth meshes and do not depend on the mesh topology. Although the proposed DeepHandMesh is trained in a weakly-supervised way, it produces significantly more realistic hand meshes than previous fully-supervised hand models. Our newly introduced penetration avoidance loss further improves results by replicating physical interaction between hand parts. Finally, we demonstrate that our system can also be applied successfully to 3D hand mesh estimation from general images. Our hand model, dataset, and code are publicly available at https://mks0601.github.io/DeepHandMesh/.