Publications

NeMo: Learning 3D Neural Motion Fields from Multiple Video Instances of the Same Action

Published in Conference on Computer Vision and Pattern Recognition (CVPR), 2023

We aim to bridge the gap between monocular human mesh recovery (HMR) methods and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. To achieve this, we introduce the Neural Motion (NeMo) field, which is optimized to represent the underlying 3D motion across a set of videos of the same action. Empirically, we recover 3D motion in sports using videos from the Penn Action dataset and from a MoCap dataset we collected by mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction than various baselines.

Recommended citation: Wang, Kuan-Chieh, et al. "NeMo: Learning 3D Neural Motion Fields From Multiple Video Instances of the Same Action." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Wang_NeMo_Learning_3D_Neural_Motion_Fields_From_Multiple_Video_Instances_CVPR_2023_paper.html

Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations

Published in Conference on Neural Information Processing Systems (NeurIPS), 2021

Creating manual annotations for segmentation, especially for high-resolution 3D image data, is expensive and time-consuming. We exploit the natural hierarchical organization of 3D biological data as self-supervision to learn hyperbolic representations with a hyperbolic VAE, and use the learned representations to perform unsupervised 3D segmentation.

Recommended citation: Hsu, Joy, et al. "Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations." Advances in Neural Information Processing Systems 34 (2021): 5112-5123. https://proceedings.neurips.cc/paper_files/paper/2021/file/291d43c696d8c3704cdbe0a72ade5f6c-Paper.pdf

Staying in Shape: Learning Invariant Shape Representations using Contrastive Learning

Published in Conference on Uncertainty in Artificial Intelligence (UAI), 2021

Invariant and almost-invariant representations have long been important in shape analysis. Using contrastive learning, we develop an unsupervised method for learning invariant and almost-invariant shape representations, and demonstrate both the quality and the robustness of the learned representations.

Recommended citation: Gu, Jeffrey, and Serena Yeung. "Staying in Shape: Learning Invariant Shape Representations using Contrastive Learning." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI). 2021. https://arxiv.org/pdf/2107.03552.pdf

Learning Hyperbolic Representations for Unsupervised 3D Segmentation

Published in NeurIPS Differential Geometry Workshop (Top 5 paper), 2020

Creating manual annotations for segmentation, especially for high-resolution 3D image data, is expensive and time-consuming. We exploit the natural hierarchical organization of 3D biological data as self-supervision to learn hyperbolic representations with a hyperbolic VAE, and use the learned representations to perform unsupervised 3D segmentation.

Recommended citation: Gu, Jeffrey*, Joy Hsu*, and Serena Yeung. "Learning Hyperbolic Representations for Unsupervised 3D Segmentation." arXiv preprint arXiv:2012.01644 (2020). https://arxiv.org/pdf/2012.01644.pdf