Perceiving Systems Talk Biography
01 April 2021 at 11:00 - 12:00 | Remote talk on zoom

Hair & garment synthesis using deep learning methods

Meng Zhang

For both AR and VR applications, there is strong motivation to generate virtual avatars with realistic hair and garments, the two elements that most strongly personify a character. However, due to their complex structures and ever-changing fashion styles, modeling hair and garments remains tedious and expensive, requiring considerable professional effort. My research focuses on deep learning methods for 3D modeling, rendering, and animation, especially for synthesizing high-quality hair and garments with plausible details. In this talk, I will present the progression of three projects: (i) Hair-GAN: recovering 3D hair structure from a single image using a generative adversarial network; (ii) deep detail enhancement for any garment; and (iii) dynamic neural garments. The first project introduces a generative adversarial network architecture that recovers 3D hair structure from a single image. The second presents a method for synthesizing plausible wrinkle details on coarse garment geometry. The third proposes a new garment system that jointly simulates and synthesizes the dynamic appearance of a target garment directly from a 3D body motion sequence.

Speaker Biography

Meng Zhang (University College London)

PostDoc Associate Researcher

Dr. Meng Zhang is a PostDoc Associate Researcher in the Smart Geometry Processing Group at University College London, under the supervision of Prof. Niloy Mitra. She received her PhD (2019) from Zhejiang University, supervised by Prof. Kun Zhou, and her Bachelor's (2010) and Master's (2013) degrees from the School of Telecommunications Engineering, Xidian University. Her research focuses on deep learning, 3D shape capture and reconstruction, image-based modeling, texture mapping and synthesis, and style learning and transfer.