Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in NeurIPS Differential Geometry Workshop (Top 5 paper), 2020
Creating manual annotations for segmentation, especially for high-resolution 3D image data, is expensive and time-consuming. We exploit the natural hierarchical organization of 3D biological data by using it as self-supervision to learn hyperbolic representations via a hyperbolic VAE, and use the learned representations to perform unsupervised 3D segmentation.
Recommended citation: Gu, Jeffrey*, Joy Hsu*, and Serena Yeung. "Learning Hyperbolic Representations for Unsupervised 3D Segmentation." arXiv preprint arXiv:2012.01644 (2020). https://arxiv.org/pdf/2012.01644.pdf
Published in Conference on Uncertainty in Artificial Intelligence (UAI), 2021
Invariant and almost-invariant representations have long been important in shape analysis. We develop an unsupervised, contrastive-learning-based method for learning invariant and almost-invariant shape representations, and demonstrate both representation quality and robustness.
Recommended citation: Gu, Jeffrey, and Serena Yeung. "Staying in Shape: Learning Invariant Shape Representations using Contrastive Learning." Conference on Uncertainty in Artificial Intelligence (UAI). 2021. https://arxiv.org/pdf/2107.03552.pdf
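The broad kind of contrastive objective used in this line of work can be sketched in a few lines of NumPy. This is a generic InfoNCE-style loss, not the paper's exact formulation; the function name, the temperature value, and the choice of dot-product similarity on L2-normalized embeddings are illustrative assumptions. The idea is that an anchor shape embedding is pulled toward the embedding of a transformed copy of the same shape and pushed away from other shapes, which drives the representation toward (almost-)invariance under those transformations.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative sketch).

    anchor/positive are embeddings of two views of the same shape;
    negatives are embeddings of other shapes. All are L2-normalized
    so that a dot product acts as cosine similarity.
    """
    def norm(v):
        return v / np.linalg.norm(v)

    a = norm(anchor)
    # Similarity of the anchor to the positive (index 0) and each negative.
    sims = np.array([norm(positive) @ a] + [norm(n) @ a for n in negatives])
    logits = sims / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # cross-entropy on the positive
```

Minimizing this loss over many (anchor, positive, negatives) triples makes embeddings of transformed copies of a shape nearly identical while keeping distinct shapes separated.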
Published in Conference on Neural Information Processing Systems (NeurIPS), 2021
Creating manual annotations for segmentation, especially for high-resolution 3D image data, is expensive and time-consuming. We exploit the natural hierarchical organization of 3D biological data by using it as self-supervision to learn hyperbolic representations via a hyperbolic VAE, and use the learned representations to perform unsupervised 3D segmentation.
Recommended citation: Hsu, Joy, et al. "Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations." Advances in Neural Information Processing Systems 34 (2021): 5112-5123. https://proceedings.neurips.cc/paper_files/paper/2021/file/291d43c696d8c3704cdbe0a72ade5f6c-Paper.pdf
Published in Conference on Computer Vision and Pattern Recognition (CVPR), 2023
We aim to bridge the gap between monocular human mesh recovery (HMR) methods and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. To achieve this, we introduce the Neural Motion (NeMo) field which is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from both the Penn Action dataset and a MoCap dataset we collected mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
Recommended citation: Wang, Kuan-Chieh, et al. "NeMo: Learning 3D Neural Motion Fields From Multiple Video Instances of the Same Action." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Wang_NeMo_Learning_3D_Neural_Motion_Fields_From_Multiple_Video_Instances_CVPR_2023_paper.html
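The core idea of a motion field — a single network queried with a time and a per-video code — can be sketched as a toy stand-in. This is not NeMo's architecture: the layer sizes, joint count `J`, latent size `D`, and the `(t, z) -> pose` parameterization are all assumptions for illustration. What the sketch conveys is the structure that lets information be shared across videos: the MLP weights are common to all videos of an action, while each video keeps its own latent code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "motion field": maps a normalized time t and a per-video latent code z
# to 3D joint positions. The MLP weights (W1, b1, W2, b2) would be shared
# across all videos of the same action; each video has its own code z.
J = 4                                            # number of joints (illustrative)
D = 8                                            # latent code size (illustrative)
W1 = rng.normal(scale=0.5, size=(1 + D, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, J * 3)); b2 = np.zeros(J * 3)

def motion_field(t, z):
    """Query the field at time t in [0, 1] for the video with code z."""
    h = np.tanh(np.concatenate([[t], z]) @ W1 + b1)
    return (h @ W2 + b2).reshape(J, 3)           # one (x, y, z) per joint
```

Fitting such a field would optimize the shared weights together with the per-video codes against per-video losses (e.g. 2D reprojection error); only the query interface is sketched here.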
Published in International Conference on Computer Vision (ICCV), 2023
We frame generalizable neural fields as partially observed neural processes and show that neural-process-based approaches outperform gradient-based meta-learning and other methods for neural field generalization.
Recommended citation: Gu, Jeffrey, Kuan-Chieh Wang, and Serena Yeung. "Generalizable Neural Fields as Partially Observed Neural Processes." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. https://arxiv.org/pdf/2309.06660.pdf
Published:
Abstract: There exists a need for unsupervised 3D segmentation on complex volumetric data, particularly when annotation ability is limited or discovery of new categories is desired. Using the observation that much of 3D volumetric data is innately hierarchical, we propose learning effective representations of 3D patches for unsupervised segmentation through a variational autoencoder (VAE) with a hyperbolic latent space and a proposed gyroplane convolutional layer, which better models the underlying hierarchical structure within a 3D image. We also introduce a hierarchical triplet loss and multi-scale patch sampling scheme to embed relationships across varying levels of granularity. We demonstrate the effectiveness of our hyperbolic representations for unsupervised 3D segmentation on a hierarchical toy dataset, BraTS whole tumor dataset, and cryogenic electron microscopy data. With Joy Hsu.
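The hyperbolic machinery the abstract refers to can be illustrated with a minimal NumPy sketch: the closed-form geodesic distance on the Poincaré ball (a standard model of hyperbolic space) and a triplet loss computed with that distance. The function names and margin are assumptions; this is neither the gyroplane convolutional layer nor the paper's exact hierarchical triplet loss, just the hyperbolic distance that such losses are built on.

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + 2 * sq / max(denom, eps))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin triplet loss in hyperbolic space (illustrative sketch):
    pull the anchor toward a related patch, push it from an unrelated one.
    """
    d_pos = poincare_distance(anchor, positive)
    d_neg = poincare_distance(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)
```

A useful property motivating this choice of geometry: distances grow rapidly near the boundary of the ball, so trees and other hierarchies embed with low distortion, with coarse concepts near the origin and fine-grained ones near the boundary.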
Published:
In my invited talk for Stanford MedAI, I give an introduction to hyperbolic representation learning and cover our recent NeurIPS 2021 publication Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations. You can find the talk online at Stanford MedAI’s YouTube channel here.
Published:
As part of our ECCV 2022 tutorial Hyperbolic Representation Learning for Computer Vision with Pascal Mettes, Mina Ghadimi Atigh, Martin Keller-Ressel, and Serena Yeung, I gave the third talk on recent research in unsupervised hyperbolic representation learning methods in computer vision. Slides for the talks can be found here and notebooks walking through the basics of hyperbolic representation learning and some recent research can be found here.
Course, Stanford University, Institute for Computational & Mathematical Engineering, 2019
Course assistant for Reinforcement Learning for Stochastic Control Problems in Finance (CME 241). A current webpage for the class can be found here.
Course, Stanford University, 2022
Course assistant for the Fall 2022 offering of Artificial Intelligence in Healthcare (BIODS 220/CS 271/BIOMEDIN 220) at Stanford University. A current webpage for the class can be found here.