About Me

I am a research scientist and manager in the Video Computer Vision org at Apple, based in Seattle. I lead an applied research team focused on neural rendering and generative AI for Apple Vision Pro, iPhone, and other platforms. I received my Ph.D. from MIT and my undergraduate degree from HKUST.

Shipped Products

I have contributed to depth estimation on iPhone Pro, depth estimation on Vision Pro, and face reconstruction for Persona (3D FaceTime on Vision Pro), among other features.

Research

I am broadly interested in computer vision, machine learning, and computer graphics. Much of my research focuses on neural rendering, 3D reconstruction, and generative models.

StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D
Pengsheng Guo, Hans Hao, Adam Caccavale, Alexander G. Schwing, Zhongzheng Ren, Alex Colburn, Edward Zhang, Fangchang Ma
arXiv preprint, 2023
[arXiv]
HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion
Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV, 2023
[arXiv] [webpage]
FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction
Noah Stier, Anurag Ranjan, Alex Colburn, Yajie Yan, Liang Yang, Fangchang Ma, Baptiste Angles
ICCV, 2023
[arXiv]
Generative Multiplane Images: Making a 2D GAN 3D-Aware
Xiaoming Zhao, Fangchang Ma, David Güera, Zhile Ren, Alexander Schwing, Alex Colburn
ECCV, 2022 (Oral Presentation)
[arXiv] [code] [webpage]
Texturify: Generating Textures on 3D Shape Surfaces
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ECCV, 2022
[arXiv] [video] [webpage]
RetrievalFuse: Neural 3D Scene Reconstruction with a Database
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV, 2021
[arXiv] [code] [video] [webpage]
Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera
Fangchang Ma, Guilherme Venturelli Cavalheiro, Sertac Karaman
ICRA, 2019
[arXiv] [code] [video]
FastDepth: Fast Monocular Depth Estimation on Embedded Systems
Diana Wofk*, Fangchang Ma*, Tien-Ju Yang, Sertac Karaman, Vivienne Sze
ICRA, 2019
[arXiv] [code] [video] [webpage]
Invertibility of Convolutional Generative Networks from Partial Measurements
Fangchang Ma, Ulas Ayaz, Sertac Karaman
NeurIPS, 2018
[pdf] [supplementary] [code]
Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image
Fangchang Ma, Sertac Karaman
ICRA, 2018
[arXiv] [video] [pytorch code] [torch code]
Sparse Depth Sensing for Resource-Constrained Robots
Fangchang Ma, Luca Carlone, Ulas Ayaz, Sertac Karaman
IROS, 2016 (extended version in IJRR)
[arXiv] [code] [video]
On Sensing, Agility, and Computation Requirements for a Data-gathering Vehicle
Fangchang Ma, Sertac Karaman
WAFR, 2014
[arXiv]
Velocity Estimator via Fusing Inertial Measurements and Multiple Feature Correspondences from a Single Camera
Guyue Zhou, Fangchang Ma, Zexiang Li, Tao Wang
ROBIO, 2013
[pdf]