ZERO Lab
Zlin’s Extraordinary Research Oasis
Zeal, Excellence, Reliability and Openness
Welcome to the ZERO Lab, the research group led by Prof. Zhouchen Lin (Zlin), affiliated with the School of Artificial Intelligence, Peking University. We do research on machine learning and computer vision.
Recruiting
- Our group at Peking University is recruiting tenure-track faculty members and postdocs (academic or industrial). For the 2020 postdoc program, please refer to the Chinese version or the English version.
- Our group at Peking University is recruiting Ph.D. students with strong mathematical abilities (this does not mean you must come from a mathematics department) and a keen interest in theoretical analysis, so that they can enjoy with me using mathematics to solve real problems elegantly.
News
Our paper won the Silver Best Paper Award in the Adversarial Machine Learning Workshop at ICML 2021!
Aug 9, 2021.
Four papers were accepted by ICML 2021! Two are oral!
Jun 15, 2021.
One paper was accepted by KDD 2021!
Jun 13, 2021.
One paper was accepted by SIGIR 2021!
Jun 13, 2021.
Media Report
Topics
Acceleration
ADMM
Adversarial Robustness
Adversarial Transferability
Alternating direction method
Alternating Direction Method of Multipliers
Bayes error
Boosting
Color Filter Array
Compressed Sensing
Compressive Phase Retrieval
computer vision
Contextual distance
Convergence Analysis
Convex Optimization
Data Compression
Deep Learning
Demosaicking
Denoising
Dictionary Learning
Dimensionality reduction
Discriminant analysis
Document analysis
Double quantization
Expectation Maximization
Face recognition
Feature detection
Feature extraction
forgery
Geometric Optimization
Handwriting recognition
Image Annotation
Image classification
Image Denoising
Image Processing
Image Reconstruction
Image rectification
Image restoration
Image Retrieval
Image segmentation
Laplace equations
Learning-based PDEs
Light field
Linear discriminant analysis
Lorentzian geometry
Low Rank
Low Rank Representation
Low-level vision
Lumigraph
Machine Learning
Majorization Minimization
Manifold learning
Manifolds
Matrix Completion
matrix decomposition
Metric learning
Mismatch Removal
Neural Networks
Nonconvex Optimization
Optimal control
Optimization
Partial Differential Equation
plenoptic functions
Pose Estimation
Principal component analysis
Robust PCA
Robust Principal Component Analysis
sampling
Semantic segmentation
Semi-supervised learning
Singular value decomposition
Sparse Coding
Sparse matrices
Sparse Representation
Spectral clustering
Subspace clustering
Subspace Recovery
Super-Resolution
Latest Publications
Symmetry Discovery for Different Data Types. NN, 2025.
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance. However, …
Explicit Discovery of Nonlinear Symmetries from Dynamic Data. ICML, 2025.
Symmetry is widely applied in problems such as the design of equivariant networks and the discovery of governing equations, but in …
High-Rank Irreducible Cartesian Tensor Decomposition and Bases of Equivariant Spaces. JMLR, 2025.
Irreducible Cartesian tensors (ICTs) play a crucial role in the design of equivariant graph neural networks, as well as in theoretical …
Low-Dimension-to-High-Dimension Generalization And Its Implications for Length Generalization. ICML, 2025.
Low-Dimension-to-High-Dimension (LDHD) generalization, a subset of Out-of-Distribution (OOD) generalization, involves training on a …
Projective Equivariant Network via Second-order Fundamental Differential Invariants. NeurIPS, 2025.
Equivariant networks enhance model efficiency and generalization by embedding symmetry priors into their architectures. However, most …
Affine Steerable Equivariant Layer for Canonicalization of Neural Networks. ICLR, 2025.
In the field of equivariant networks, achieving affine equivariance, particularly for general group representations, has long been a …
Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective. ICML, 2024.
Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, leading to the crucial …
Affine Equivariant Networks Based on Differential Invariants. CVPR, 2024.
Convolutional neural networks benefit from translation equivariance, achieving tremendous success. Equivariant networks further extend …
Contact