FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Images

Carnegie Mellon University          Google AR

SIGGRAPH Asia 2024


From a real-world 2D clothing image, FabricDiffusion extracts high-quality texture maps and prints and transfers them to target 3D garments of arbitrary shapes.

Abstract

We introduce FabricDiffusion, a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes. Existing approaches typically synthesize textures on the garment surface through 2D-to-3D texture mapping or depth-aware inpainting via generative models. Unfortunately, these methods often struggle to capture and preserve texture details, particularly due to challenging occlusions, distortions, or poses in the input image. Inspired by the observation that in the fashion industry most garments are constructed by stitching sewing patterns with flat, repeatable textures, we cast the task of clothing texture transfer as extracting distortion-free, tileable texture materials that are subsequently mapped onto the UV space of the garment. Building upon this insight, we train a denoising diffusion model with a large-scale synthetic dataset to rectify distortions in the input texture image. This process yields a flat texture map that enables a tight coupling with existing Physically-Based Rendering (PBR) material generation pipelines, allowing for realistic relighting of the garment under various lighting conditions. We show that FabricDiffusion can transfer a variety of features from a single clothing image, including texture patterns, material properties, and detailed prints and logos. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art methods on both synthetic data and real-world, in-the-wild clothing images while generalizing to unseen textures and garment shapes.
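To make the texture-rectification idea concrete, below is a minimal sketch, not the authors' code, of how a conditional denoising diffusion model could map a distorted texture crop to a flat texture estimate. The tiny UNet and the plain DDPM sampling loop are hypothetical placeholders standing in for the paper's trained denoiser.

import torch
import torch.nn as nn

class TinyCondUNet(nn.Module):
    """Hypothetical placeholder denoiser: predicts noise from (noisy texture, distorted crop)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x_t, cond, t):
        # Condition on the distorted input crop by channel-wise concatenation.
        return self.net(torch.cat([x_t, cond], dim=1))

@torch.no_grad()
def rectify_texture(denoiser, distorted_crop, steps=50):
    """Standard DDPM ancestral sampling, conditioned on the distorted texture crop."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(distorted_crop)            # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, distorted_crop, t)        # predicted noise
        a_t, ab_t = alphas[t], alpha_bars[t]
        # Posterior mean of the reverse step (standard DDPM update)
        x = (x - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x.clamp(-1, 1)                           # flat texture estimate in [-1, 1]

if __name__ == "__main__":
    crop = torch.randn(1, 3, 128, 128)              # stand-in for a distorted fabric capture
    flat = rectify_texture(TinyCondUNet(), crop)
    print(flat.shape)                               # torch.Size([1, 3, 128, 128])

In the actual system this denoiser is trained on a large-scale synthetic dataset of distorted/flat texture pairs; the sketch only illustrates the conditional sampling structure.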



Video and Results



Framework of FabricDiffusion

Given a real-life clothing image and region captures of its fabric materials and prints, (a) our model extracts normalized textures and prints, and (b) generates distortion-free, high-quality Physically-Based Rendering (PBR) materials and transparent prints. (c) These materials and prints can be applied to target 3D garment meshes of arbitrary shapes (d) for realistic relighting. Our model is trained purely on synthetic data and achieves zero-shot generalization to real-world images. A small sketch of steps (b)-(c) follows below.
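The following is a hypothetical sketch of how a rectified, tileable texture could be repeated across a garment's UV texture space and bundled with PBR maps for rendering. The map names and the tiling helper are illustrative assumptions, not the paper's API.

import torch

def tile_to_uv(texture, uv_size=2048):
    """Repeat a tileable texture (C, H, W) to fill a square UV texture map."""
    c, h, w = texture.shape
    reps_y, reps_x = -(-uv_size // h), -(-uv_size // w)   # ceiling division
    return texture.repeat(1, reps_y, reps_x)[:, :uv_size, :uv_size]

# Assemble a PBR material set for the garment mesh (random tensors stand in
# for the generated maps; real maps come from the material generation model).
flat_albedo = torch.rand(3, 256, 256)       # rectified texture from the diffusion model
material = {
    "albedo":    tile_to_uv(flat_albedo),
    "normal":    tile_to_uv(torch.rand(3, 256, 256)),
    "roughness": tile_to_uv(torch.rand(1, 256, 256)),
    "metallic":  tile_to_uv(torch.rand(1, 256, 256)),
}
print({k: tuple(v.shape) for k, v in material.items()})

Because the rectified texture is seamless, simple tiling covers the UV space without visible borders, which is what allows the same material to be reused across garments of arbitrary shapes.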



Acknowledgment

We thank Marc Ruiz from Carnegie Mellon University for help with the artistic design and Sam Sartor from William & Mary for advice on fine-tuning their material generation model.

BibTeX

@inproceedings{zhang2024fabricdiffusion,
    title     = {{FabricDiffusion}: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Images},
    author    = {Zhang, Cheng and Wang, Yuanhao and Vicente Carrasco, Francisco and Wu, Chenglei and 
                 Yang, Jinlong and Beeler, Thabo and De la Torre, Fernando},
    booktitle = {ACM SIGGRAPH Asia},
    year      = {2024},
}