FIND: An Unsupervised Implicit 3D Model of Articulated Human Feet

BMVC 2022

Oliver Boyne James Charles Roberto Cipolla

University of Cambridge


Detailed, parameterised models of the human body, hands and face have been developed extensively. The foot, however, remains under-explored: available data is extremely limited, and existing models are low-fidelity and do not capture articulation.

We address this by developing FIND, an implicit foot model that represents shape, pose and texture as deformation fields over the surface of a template foot. We also improve the available data by collecting Foot3D, a dataset released to the research community.

Dataset

We release Foot3D, a dataset of high quality, textured scanned feet in a variety of poses.

Method

We construct a coordinate-based model which defines, for every point on the surface of a template mesh, a deformation and a colour value.

This model takes as input a 3D point on the surface, and latent codes describing the shape, articulation (pose) and texture of the foot.

At inference time, we evaluate the model at points densely sampled over the surface of a chosen template mesh to produce a target foot.
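The pipeline above can be sketched as a small coordinate-based network: each surface point is concatenated with the shape, pose and texture codes, and the network predicts a per-point offset and colour. This is a minimal numpy illustration of the idea only; the layer sizes, latent dimensions and the names `init_mlp`/`field` are assumptions, not the paper's actual architecture.

```python
import numpy as np

def init_mlp(in_dim, hidden, out_dim, rng):
    # Two-layer MLP weights; a stand-in for the deformation/colour networks.
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    b1 = np.zeros(hidden)
    w2 = rng.standard_normal((hidden, out_dim)) * 0.1
    b2 = np.zeros(out_dim)
    return w1, b1, w2, b2

def field(points, z_shape, z_pose, z_tex, params):
    # points: (N, 3) samples on the template surface.
    # Latent codes are broadcast to every point, then concatenated.
    n = points.shape[0]
    z = np.concatenate([z_shape, z_pose, z_tex])
    x = np.concatenate([points, np.tile(z, (n, 1))], axis=1)
    w1, b1, w2, b2 = params
    h = np.maximum(x @ w1 + b1, 0.0)           # ReLU hidden layer
    out = h @ w2 + b2
    delta, colour = out[:, :3], out[:, 3:]     # per-point offset and RGB
    # Deformed surface point, and colour squashed into (0, 1).
    return points + delta, 1.0 / (1.0 + np.exp(-colour))
```

Sampling the field over all template vertices then yields the deformed, textured target foot, e.g. `deformed, colours = field(template_verts, z_shape, z_pose, z_tex, params)`.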

Results

Qualitative comparison of reconstructions: GT, FIND, PCA, SUPR.

We fit our model to a selection of held-out 3D validation scans and compare reconstruction quality against a baseline PCA model (built using FoldingNet) and the rigged generative foot model SUPR (released at ECCV 2022, after publication of this paper). FIND shows significant qualitative and quantitative improvements.
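Fitting a model to a scan and scoring the reconstruction both rely on a point-set error such as the symmetric chamfer distance; the sketch below is a standard numpy formulation of that metric, shown for illustration (the paper's exact fitting losses are not reproduced here).

```python
import numpy as np

def chamfer(a, b):
    # Symmetric chamfer distance between point sets a (N, 3) and b (M, 3):
    # mean squared distance to the nearest neighbour, in both directions.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Lower is better: identical point sets score zero, and the metric penalises both missing and spurious geometry because it sums errors in both directions.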

Acknowledgements

We acknowledge the collaboration and financial support of Trya Srl.

If you make use of this project, please cite the following paper:
@inproceedings{boyne2022find,
  title={FIND: An Unsupervised Implicit 3D Model of Articulated Human Feet},
  author={Boyne, Oliver and Charles, James and Cipolla, Roberto},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2022}
}