Tianfu Wang 王天夫
I am a master's student at ETH Zürich, supervised by Prof. Konrad Schindler.
Previously, I was a member of Prof. Luc Van Gool's Computer Vision Lab, where I worked with Anton Obukhov and Menelaos Kanakis on NeRFs and diffusion models.
Prior to that, I obtained my undergraduate degree from Northwestern University, where I worked with Prof. Oliver Cossairt and Prof. Florian Willomitzer on structured-light imaging and eye tracking. I was also fortunate to work with Prof. Jack Tumblin, who inspired my interest in visual computing.
I am always enthusiastic about building and utilizing intelligent systems that not only help us understand and better represent our 3D world, but also allow us to craft our own virtual realms. More specifically, my research interest lies at the crossroads of graphics, vision, and computational photography, including areas such as neural and inverse rendering, generative 3D content creation, and innovations in imaging and optics.
Email / CV / Scholar / Github / LinkedIn
Publications
DGInStyle: Domain-Generalizable Semantic Segmentation with Image Diffusion Models and Stylized Semantic Control
Yuru Jia, Lukas Hoyer, Shengyu Huang, Tianfu Wang, Luc Van Gool, Konrad Schindler, Anton Obukhov
arXiv preprint, in review, 2023
Project Page / Paper
We specialize a pre-trained latent diffusion model into an efficient data generation pipeline with style control and multi-resolution semantic adherence, improving downstream domain generalization performance.
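For a rough flavor of semantic-mask-conditioned generation with a style prompt, here is a minimal sketch using an off-the-shelf Stable Diffusion + ControlNet segmentation model via the diffusers library; this is not the paper's specialized pipeline, and the checkpoint names and input mask file are assumptions.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Off-the-shelf ControlNet conditioned on color-coded semantic segmentation maps.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Hypothetical input: a color-coded semantic layout of a street scene.
seg_map = Image.open("semantic_layout.png").convert("RGB")

# The text prompt acts as a simple style control (e.g., weather / time of day).
image = pipe(
    "a street scene at night in heavy fog, photorealistic",
    image=seg_map,
    num_inference_steps=30,
).images[0]
image.save("generated_training_image.png")
```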
Breathing New Life into 3D Assets with Generative Repainting
Tianfu Wang, Menelaos Kanakis, Konrad Schindler, Luc Van Gool, Anton Obukhov
BMVC, 2023   (Oral Presentation)
Project Page / Paper / Code
Our method lifts the power of generative 2D image models, such as Stable Diffusion, into 3D. We use them to synthesize unseen content for novel views and a NeRF to reconcile all generated views, producing vivid paintings of input 3D assets across a variety of shape categories.
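The general recipe, sketched below with the standard diffusers inpainting API (not the released code): render the asset from several viewpoints, let a 2D inpainting diffusion model paint unseen content, and reconcile the generated views with a NeRF. The render_view helper and the prompt are hypothetical placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def render_view(pose):
    # Hypothetical placeholder: a real pipeline would rasterize the (partially
    # painted) 3D asset from this camera pose and return the rendered RGB image
    # plus a mask of pixels that still need to be painted.
    rgb = Image.new("RGB", (512, 512), (127, 127, 127))
    mask = Image.new("L", (512, 512), 255)  # 255 = repaint this pixel
    return rgb, mask

prompt = "a hand-painted wooden toy car"
generated_views = []
for pose in range(8):  # loop over novel viewpoints
    rgb, mask = render_view(pose)
    painted = pipe(prompt=prompt, image=rgb, mask_image=mask).images[0]
    generated_views.append((pose, painted))

# Finally, fit a NeRF (or another 3D representation) to `generated_views`
# so that the per-view 2D generations are reconciled into one consistent asset.
```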
Optimization-Based Eye Tracking using Deflectometric Information
Tianfu Wang, Jiazhang Wang, Oliver Cossairt, Florian Willomitzer
arXiv preprint, in review, 2023
Paper
Using a differentiable renderer to simulate reflection light transport allows us to exploit dense structured screen illumination and accurately optimize the eye's gaze direction.
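At its core this is analysis-by-synthesis: a differentiable forward model simulates the screen's reflection off the eye, and the gaze parameters are optimized by gradient descent until the simulation matches the measurement. Below is a toy PyTorch sketch of that loop, with a made-up sinusoidal pattern standing in for the actual differentiable reflection renderer.

```python
import torch

torch.manual_seed(0)
u = torch.linspace(0.0, 1.0, 256)          # coordinates along the screen pattern
true_gaze = torch.tensor([0.12, -0.07])    # "unknown" gaze parameters

def forward_model(gaze):
    # Toy stand-in for differentiable reflection light transport: two phase-
    # shifted patterns whose phases depend on the two gaze parameters.
    return torch.stack([
        torch.sin(2 * torch.pi * (8 * u + gaze[0])),
        torch.sin(2 * torch.pi * (8 * u + gaze[1])),
    ])

measurement = forward_model(true_gaze)     # simulated "captured" observation

gaze = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.Adam([gaze], lr=1e-2)
for _ in range(500):
    optimizer.zero_grad()
    loss = torch.mean((forward_model(gaze) - measurement) ** 2)
    loss.backward()
    optimizer.step()

print("estimated gaze parameters:", gaze.detach().tolist())
```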
A Mitsuba-based Study on Trade-offs Between Projection and Reflection Based Systems in Structured-Light 3D Imaging
Tianfu Wang, Florian Schiffers, Florian Willomitzer, Oliver Cossairt
OPTICA COSI, 2021
Paper / Code
We present a first study of the trade-offs between projection- and reflection-based 3D imaging systems under mixtures of diffuse and specular reflection. Experiments are conducted in our projection/reflection simulator framework built on the Mitsuba 2 renderer.
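As a small illustration of the kind of reflectance mixture such a study varies, here is a minimal scene with a blended diffuse/specular BSDF. The sketch assumes Mitsuba 3 and its Python dict API (our simulator is built on Mitsuba 2) and uses a constant emitter rather than a real projector or screen model.

```python
import mitsuba as mi

mi.set_variant("scalar_rgb")

scene = mi.load_dict({
    "type": "scene",
    "integrator": {"type": "path"},
    "sensor": {
        "type": "perspective",
        "fov": 45,
        "to_world": mi.ScalarTransform4f.look_at(
            origin=[0, 0, 4], target=[0, 0, 0], up=[0, 1, 0]
        ),
        "film": {"type": "hdrfilm", "width": 256, "height": 256},
    },
    # Object under test: blend of a diffuse and a rough specular BSDF.
    "object": {
        "type": "sphere",
        "bsdf": {
            "type": "blendbsdf",
            "weight": 0.3,  # mixing weight between the two nested BSDFs
            "diffuse_part": {"type": "diffuse"},
            "specular_part": {"type": "roughconductor", "alpha": 0.05},
        },
    },
    # Stand-in illumination; a structured-light study would replace this with a
    # textured area emitter (screen) or a projector model.
    "light": {"type": "constant"},
})

image = mi.render(scene, spp=64)
mi.util.write_bitmap("render.exr", image)
```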
Misc
Aside from research, I delight in the symphony of flavors that dance on the tongue (good food) and the harmonious melodies that serenade the ears (music). A few of my musical heroes include 竇唯 (Dou Wei), Radiohead, Brian Eno, and Fleet Foxes.