High-fidelity hair reconstruction with 200x less memory through hierarchical card clustering and shared Gaussian textures.
We present a compact pipeline for high-fidelity hair reconstruction from multi-view images. While recent 3D Gaussian Splatting (3DGS) methods achieve realistic results, they often require millions of primitives, leading to high storage and rendering costs.
Observing that hair exhibits structural and visual similarities across a hairstyle, we cluster strands into representative hair cards and group these into shared texture codebooks. Our approach integrates this structure with 3DGS rendering, significantly reducing reconstruction time and storage while maintaining comparable visual quality. In addition, we propose a method that leverages a generative prior to accelerate reconstruction of the initial strand geometry from a set of images.
Our experiments demonstrate a 4x reduction in strand reconstruction time and comparable rendering quality at an over 200x lower memory footprint.
An efficient method for reconstructing strand-level hair geometry from multi-view images, leveraging the PERM parametric model for a 4x speedup over prior approaches.
A compact hair modeling pipeline that clusters strands into representative hair cards, significantly reducing redundancy at the strand level.
A shared Gaussian texture codebook for hair cards, enabling scalable and consistent appearance modeling across structurally similar hair regions with 200x memory reduction.
Given monocular video frames, we reconstruct hair strands with our efficient strand generator, group them into hair cards, and further cluster the cards into groups with shared Gaussian textures for compact appearance modeling.
Estimate camera poses from the multi-view images, then compute hair segmentation masks and orientation maps using Gabor filters for directional cues.
Reconstruct head and hair Gaussians separately, fit a FLAME head model, then use PERM's generative prior to efficiently decode strand geometry from latent UV textures.
Cluster strands into groups, construct representative hair cards for each cluster, and extract per-card geometry textures for compact representation.
Further cluster hair cards by appearance into groups with shared Gaussian texture codebooks, then optimize end-to-end with a tailored 3DGS scheme.
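The two-level grouping in the steps above (strands to hair cards, then cards to groups with a shared texture codebook) can be sketched as follows. This is a minimal illustration with plain k-means on toy data; the array shapes, cluster counts, and the geometric descriptor are assumptions for illustration, not the actual configuration of the pipeline.

```python
# Hypothetical sketch: cluster strands into hair cards, then cluster
# the cards into groups that share one Gaussian texture codebook entry.
# All sizes and the descriptor choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain k-means; a real pipeline would likely use a library version.
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return labels, centers

# Toy strands: 500 strands, each resampled to 16 3D points.
strands = rng.normal(size=(500, 16, 3))

# Level 1: cluster strands by flattened geometry to form hair cards.
card_labels, card_centers = kmeans(strands.reshape(500, -1), k=32)
cards = card_centers.reshape(32, 16, 3)  # representative card geometry

# Level 2: cluster cards by a crude appearance/shape descriptor so that
# each group shares a single texture codebook entry instead of storing
# per-strand appearance.
desc = np.concatenate([cards.mean(1), cards.std(1)], axis=1)  # (32, 6)
group_labels, _ = kmeans(desc, k=8)

print(f"strands={len(strands)}, cards={len(cards)}, "
      f"texture groups={len(set(group_labels.tolist()))}")
```

The memory saving follows from storing appearance once per group rather than once per strand: here 500 strands share at most 8 texture entries.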
Novel-view rendering results of our compact CGHair representation.
Qualitative comparison of our method against prior approaches.
Strand-level animations rendered with our compact CGHair representation across diverse hairstyles.