Pengcheng Yin
Hi! I'm Pengcheng. I am a research scientist at Google DeepMind, working on Gemini Code and research projects in natural language to code generation. Before that, I was a Ph.D. student at the Language Technologies Institute, Carnegie Mellon University.
Outside of work, I am a student pilot based at Palo Alto Airport. I also train Coke using RLHF :)
Research Papers
Please see my Google Scholar page for recent publications.
Past Industry Experience
- Research Intern, Microsoft Semantic Machines
- Part-time Research Collaborator, Facebook AI Research
- Research Intern, Facebook AI Research London
- Research Intern, Microsoft Research Cambridge, UK
- Research Intern, Microsoft Research
- Research Intern, Noah's Ark Lab, Huawei
- Research Intern, Microsoft Research Asia
Professional Services
- Area Chair: ICLR 2024, ICLR 2025, ACL 2025
- Reviewer: ACL (outstanding reviewer @ ACL 2020), EMNLP, NAACL, NeurIPS, ICML (top 33% reviewer @ ICML 2020), ICLR, etc.
Talks and Coding
- Stanford CS224N Natural Language Processing with Deep Learning: Code Generation (slides)
- TranX: a general-purpose syntax-driven neural semantic parser (a conceptual sketch of the approach follows this list)
- Strong results on six semantic parsing benchmarks
- pytorch_basic_nmt: a basic implementation of attentional neural seq2seq models (see the attention sketch after this list)
- Used for instructional purposes in Stanford CS224N Natural Language Processing with Deep Learning and CMU 11-731 Machine Translation and Sequence-to-Sequence Models.
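To give a flavor of the syntax-driven idea behind TranX: instead of emitting program tokens directly, the model emits a sequence of tree-construction actions (ApplyRule, GenToken) checked against a grammar, so every completed output is a well-formed AST by construction. The toy grammar, action format, and `apply_actions` helper below are hypothetical simplifications for illustration, not TranX's actual API.

```python
# A minimal sketch of grammar-constrained decoding, in the spirit of TranX.
# NOTE: the grammar, productions, and action encoding here are toy
# assumptions for illustration, not TranX's real data structures.
from dataclasses import dataclass, field
from typing import List, Tuple, Union

# Toy abstract grammar: production name -> number of child slots.
GRAMMAR = {
    "Call": 2,  # callee, one argument
    "Name": 1,  # one identifier token (filled by GenToken)
    "Str":  1,  # one string token (filled by GenToken)
}

@dataclass
class Node:
    production: str
    children: List[Union["Node", str]] = field(default_factory=list)

def apply_actions(actions: List[Tuple[str, str]]) -> Node:
    """Replay a linearized action sequence into an AST.

    ApplyRule opens a new grammar node in the next unfilled slot;
    GenToken fills a leaf slot. Every step is validated against GRAMMAR,
    so any completed derivation is a syntactically valid tree.
    """
    root = None
    stack: List[Node] = []  # nodes with unfilled child slots, DFS order
    for kind, value in actions:
        if kind == "ApplyRule":
            node = Node(value)
            if root is None:
                root = node
            else:
                stack[-1].children.append(node)
            stack.append(node)
        elif kind == "GenToken":
            stack[-1].children.append(value)
        else:
            raise ValueError(f"unknown action: {kind}")
        # Pop nodes whose child slots are all filled.
        while stack and len(stack[-1].children) == GRAMMAR[stack[-1].production]:
            stack.pop()
    return root

# Derivation for the Python expression `print("hi")`:
actions = [
    ("ApplyRule", "Call"),
    ("ApplyRule", "Name"), ("GenToken", "print"),
    ("ApplyRule", "Str"),  ("GenToken", '"hi"'),
]
print(apply_actions(actions))
```

In a real parser, a neural decoder scores these actions at each step and the grammar masks out invalid ones; the replay logic above is what guarantees well-formed outputs.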
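Similarly, here is a minimal sketch of the attention step at the heart of attentional seq2seq models like pytorch_basic_nmt, i.e. Luong-style global (dot-product) attention in the decoder. Tensor names and shapes are illustrative assumptions, not the repo's exact interface.

```python
# A minimal sketch of one decoder step of dot-product attention.
# NOTE: function name and shapes are illustrative, not pytorch_basic_nmt's API.
import torch
import torch.nn.functional as F

def attention_step(dec_hidden, enc_outputs, src_pad_mask):
    """Compute a context vector for one decoder step.

    dec_hidden:   (batch, hidden)          current decoder state
    enc_outputs:  (batch, src_len, hidden) encoder states per source token
    src_pad_mask: (batch, src_len)         True at padding positions
    Returns the context vector (batch, hidden) and attention weights.
    """
    # Alignment scores: dot product of decoder state with each encoder state.
    scores = torch.bmm(enc_outputs, dec_hidden.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    scores = scores.masked_fill(src_pad_mask, float("-inf"))  # ignore padding
    weights = F.softmax(scores, dim=-1)                       # (batch, src_len)
    # Context vector: attention-weighted sum of encoder states.
    context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)  # (batch, hidden)
    return context, weights

# Toy usage with random tensors.
batch, src_len, hidden = 2, 5, 8
enc = torch.randn(batch, src_len, hidden)
dec = torch.randn(batch, hidden)
mask = torch.zeros(batch, src_len, dtype=torch.bool)
mask[:, -1] = True  # pretend the last source position is padding
ctx, attn = attention_step(dec, enc, mask)
print(ctx.shape, attn.shape)  # torch.Size([2, 8]) torch.Size([2, 5])
```

The context vector is then typically concatenated with the decoder state to predict the next target token; that wiring is omitted here for brevity.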