
Short Bio

I am currently a Ph.D. student in the Electrical & Computer Engineering Department at Rice University, where I work in the DSP group with Dr. Richard G. Baraniuk. My research focuses on the intersection of Deep Learning, Probabilistic Modeling, and Neuroscience. I received my MSEE and BSEE from Rice in May 2018 and May 2014, respectively. During my Master's, I worked with Dr. Richard G. Baraniuk and Dr. Ankit B. Patel on probabilistic generative models for Deep Convolutional Networks (DCNs). As an undergraduate at Rice, I was part of the DSP group, where I conducted research in signal processing and computational neuroscience. I also worked in Dr. Robert Hauge's research group, where I investigated the effect of temperature and pressure on the collapse of carbon nanotubes. In the summers of 2012 and 2011, I was an undergraduate research assistant in Dr. Zhu Han's lab at the University of Houston and Dr. Stephan Link's lab at Rice University, respectively.

Curriculum Vitae

Research Interests

I am interested in understanding the probabilistic generative models underlying deep learning systems. Using these models, I develop new deep learning algorithms for solving challenging problems in computer vision. In particular, my advisors and I invented the Deep Rendering Model (DRM), the first graphical model whose inference procedure is exactly a DCN. The DRM unifies two perspectives: neural networks and probabilistic inference.
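
To give a flavor of this connection, the toy sketch below (my illustration, not the DRM construction itself) shows how MAP inference over a single latent nuisance variable, here translation, in a simple template-plus-Gaussian-noise rendering model reduces to a correlation (convolution) followed by a max, the same compute pattern as a convolutional layer with max-pooling.

# A minimal sketch, assuming a toy rendering model with one class template,
# translation as the only latent nuisance variable, and Gaussian pixel noise.
# It illustrates the idea only; it is not the DRM itself.
import numpy as np

def render(template, shift):
    """Generate a 1-D image by placing the template at a latent shift."""
    image = np.zeros(len(template) + 4)
    image[shift:shift + len(template)] = template
    return image

def infer_shift(image, template):
    """MAP inference over the latent shift.

    Under Gaussian noise, maximizing the likelihood over shifts amounts to
    sliding the template across the image (a correlation, i.e., the "conv"
    step) and taking the max over positions (the "max-pool" step).
    """
    scores = np.correlate(image, template, mode="valid")
    return int(np.argmax(scores)), float(scores.max())

template = np.array([1.0, 2.0, 1.0])
image = render(template, shift=3) + 0.05 * np.random.randn(7)
print(infer_shift(image, template))  # recovers the latent shift (3) with high probability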

I am also interested in developing systems that can learn from very little labeled data. In particular, I designed the Neural Rendering Model (NRM) for semi-supervised learning. Generation in the NRM is informed by Deep Convolutional Networks (DCNs) and is jointly designed with inference. Using the NRM, I further developed a new deep network architecture, the Max-Min networks, which exceed or match the state of the art for semi-supervised learning on various benchmarks, including SVHN, CIFAR10, and CIFAR100. The Max-Min networks also help improve the state of the art for supervised learning on CIFAR10 and ImageNet.
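
As a rough sketch only, one way a "max-min" style unit could be built is to pair each convolutional response h with its negation and keep both max(h, 0) and max(-h, 0) features; the block below is an assumption for illustration, since the exact Max-Min architecture is defined in the associated papers.

# A minimal, hypothetical sketch of a "max-min" style block; the real
# Max-Min network architecture is specified in the associated papers.
import torch
import torch.nn as nn

class MaxMinBlock(nn.Module):
    """Keeps both the positive part max(h, 0) and the negated negative
    part max(-h, 0) of each convolutional response, doubling the channels."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        h = self.conv(x)
        return torch.cat([torch.relu(h), torch.relu(-h)], dim=1)

block = MaxMinBlock(3, 16)
print(block(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 32, 32, 32])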

Furthermore, I find domain adaptation particularly fascinating and in high demand in practice. During my internship with AWS Deep Learning, I developed a new domain adaptation method that maps synthetic images to real images, which I call Mixed Reality Generative Adversarial Networks (MrGANs). MrGANs map both synthetic and real images into a shared space, and models trained on this shared space generalize better from synthetic to real data.
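
The sketch below shows the generic pattern of adversarially aligning two domains in a shared feature space; the encoder, discriminator, and losses here are placeholder assumptions for illustration, not the actual MrGAN objective.

# A minimal sketch of adversarially aligning synthetic and real images in a
# shared feature space; a generic domain-adaptation pattern, not MrGANs.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
domain_disc = nn.Linear(128, 1)      # predicts synthetic (1) vs real (0)
classifier = nn.Linear(128, 10)      # task head, trained on labeled synthetic data
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def alignment_losses(x_syn, y_syn, x_real):
    z_syn, z_real = encoder(x_syn), encoder(x_real)
    # Discriminator tries to tell the two domains apart in the shared space.
    d_loss = bce(domain_disc(z_syn), torch.ones(len(z_syn), 1)) + \
             bce(domain_disc(z_real), torch.zeros(len(z_real), 1))
    # Encoder is trained to fool the discriminator (labels flipped), while the
    # classifier learns the task from labeled synthetic embeddings only.
    g_loss = bce(domain_disc(z_syn), torch.zeros(len(z_syn), 1)) + \
             ce(classifier(z_syn), y_syn)
    return d_loss, g_loss

x_syn, x_real = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
d_loss, g_loss = alignment_losses(x_syn, torch.randint(0, 10, (8,)), x_real)
print(d_loss.item(), g_loss.item())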

Finally, I am excited about how the brain works and would like to use insights from the brain to develop new artificial intelligence systems that outperform existing ones. I am part of the NINAI (Neuroscience-Inspired Networks for Artificial Intelligence) team, whose goal is to conduct brain research for machine learning.
