Yingzhen Yang


About me

I am an assistant professor at Arizona State University (starting in Fall 2019). My research covers statistical machine learning, deep learning, optimization, and theoretical computer science.

Open Positions

I am always looking for self-motivated students. Two PhD research assistantships are available, and students with a background in deep learning, machine learning, or optimization are highly encouraged to apply. If you would like to work with me, please send your CV to yingzhen.yang@asu.edu with a brief introduction to your research interests and the projects/directions that interest you most. Undergraduate students with this background and strong programming skills are also welcome to conduct research in my group.

Research Interests

I am interested in statistical machine learning and its theory, including the theory and applications of deep learning, subspace learning, manifold learning, sparse representation and compressive sensing, nonparametric models, probabilistic graphical models, and generalization analysis of classification, semi-supervised learning, and clustering. I also devote effort to optimization theory for machine learning and to theoretical computer science.

In my early years, I also conducted research on computer vision and computer graphics. Click to see the details.

Contact

Office: BYENG 590, 699 S Mill Ave. Tempe, AZ 85281
Email: yingzhen.yang -AT- asu.edu (official), superyyzg -AT- gmail.com (personal)

Honors and Awards

2016 ECCV Best Paper Finalist (one of 11 finalists among all submissions)
2010 Carnegie Institute of Technology Dean's Tuition Fellowship
Before 2005: Bronze medal in National Senior High School Mathematics Competition, First Prize of National Junior High School Mathematics Competition


Professional Services & Activities

Program Committee Member: International Joint Conferences on Artificial Intelligence (IJCAI) 2015, IJCAI 2017, IJCAI 2018, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Reviewer: Journal of Machine Learning Research (JMLR), IEEE Transactions on Image Processing (TIP), Pattern Recognition (PR), Knowledge and Information Systems (KAIS), Machine Vision and Applications (Springer Journal)


Industrial Experience

Over ten years of experience in C/C++ programming and software design.

Research Intern, Microsoft Research at Redmond, WA. May 2015 to Aug. 2015. Developed online probabilistic topic models for large-scale applications with CUDA C/C++ programming.
Research Intern, Microsoft Research at Redmond, WA. May 2014 to Aug. 2014. Developed parallelized and accelerated probabilistic topic models with CUDA C/C++ programming.
Research Intern, Hewlett-Packard Labs, Palo Alto, California. May 2011 to Aug. 2011. Developed efficient markerless augmented reality with C/C++ programming.


Projects

Please refer to the details of my projects here.

Recent Publications (Full List)

Yingzhen Yang, Jiahui Yu.
Fast Proximal Gradient Descent for A Class of Non-convex and Non-smooth Sparse Learning Problems.
Proc. of Conference on Uncertainty in Artificial Intelligence (UAI) 2019. [Paper] [Supplementary] [Code]
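The paper's accelerated scheme is more involved; as an illustrative sketch only (not the authors' algorithm), plain proximal gradient descent on an L0-regularized least-squares problem alternates a gradient step with the hard-thresholding proximal operator of the L0 penalty. All function names below are hypothetical:

```python
import numpy as np

def prox_l0(x, thresh):
    """Proximal operator of t*lam*||x||_0: hard thresholding.

    Entries with magnitude below sqrt(2*t*lam) are set to zero.
    """
    out = x.copy()
    out[np.abs(out) < thresh] = 0.0
    return out

def proximal_gradient_l0(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_0 by proximal gradient descent."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size = 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth least-squares term
        x = prox_l0(x - t * grad, np.sqrt(2 * t * lam))
    return x
```

With `A` the identity, the method reduces to one-shot hard thresholding of `b`, which makes the behavior easy to check; for non-convex problems like this, convergence is to a stationary point rather than a global minimum.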

Yingzhen Yang.
Dimensionality Reduced L0-Sparse Subspace Clustering.
Proc. of International Conference on Artificial Intelligence and Statistics (AISTATS) 2018. [Paper] [Supplementary]

Xiaojie Jin, Yingzhen Yang, Ning Xu, Jianchao Yang, Nebojsa Jojic, Jiashi Feng, Shuicheng Yan.
WSNet: Compact and Efficient Networks Through Weight Sampling.
Proc. of International Conference on Machine Learning (ICML) 2018.

Yingzhen Yang, Jiashi Feng, Nebojsa Jojic, Jianchao Yang, Thomas S. Huang.
Subspace Learning by L0-Induced Sparsity.
International Journal of Computer Vision (IJCV) 2018, special issue on the best of European Conference on Computer Vision (ECCV) 2016.

Yingzhen Yang, Jiashi Feng, Jiahui Yu, Jianchao Yang, Pushmeet Kohli, Thomas S. Huang.
Neighborhood Regularized L1-Graph.
Proc. of Conference on Uncertainty in Artificial Intelligence (UAI) 2017. [Paper]

Yingzhen Yang, Jiahui Yu, Pushmeet Kohli, Jianchao Yang, Thomas S. Huang.
Support Regularized Sparse Coding and Its Fast Encoder.
Proc. of International Conference on Learning Representations (ICLR) 2017. [Paper]

Yingzhen Yang, Jiashi Feng, Nebojsa Jojic, Jianchao Yang, Thomas S. Huang.
L0-Sparse Subspace Clustering.
Proc. of European Conference on Computer Vision (ECCV) 2016. (Oral Presentation, Among 11 Best Paper Candidates) [Paper] [Supplementary] [Slides] [Code (Both CUDA C++ for extreme efficiency and MATLAB)]
Our work establishes the almost sure equivalence between L0 sparsity and the subspace detection property under the mild conditions of i.i.d. data generation and a nondegenerate distribution; these conditions are much milder than those required in the L1 sparse subspace clustering literature. Click to see the key points in the talk.
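For context, a hedged sketch of the per-point L0 subspace clustering formulation as it commonly appears in this literature (the paper's exact notation may differ):

```latex
% Given data X = [x_1, \dots, x_n], for each point x_i solve
\min_{z \in \mathbb{R}^n} \|z\|_0
\quad \text{s.t.} \quad x_i = Xz, \; z_i = 0.
% The subspace detection property holds when every nonzero entry z_j
% selects a point x_j lying in the same subspace as x_i.
```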

Yingzhen Yang, Zhangyang Wang, Zhaowen Wang, Shiyu Chang, Ding Liu, Honghui Shi, Thomas S. Huang.
Epitomic Image Super-Resolution.
Proc. of AAAI Conference on Artificial Intelligence (AAAI) 2016 (Best Poster/Best Presentation Finalist for Student Poster Program). [Project&Code]