Computer Science > Computation and Language
arXiv:2211.16740 (cs)
[Submitted on 30 Nov 2022 (v1), last revised 7 Jun 2023 (this version, v3)]
Title:Explicit Knowledge Transfer for Weakly-Supervised Code Generation
Abstract: Large language models (LLMs) can acquire strong code-generation capabilities through few-shot learning. In contrast, supervised fine-tuning is still needed for smaller models to achieve good performance. Such fine-tuning demands a large number of task-specific NL-code pairs, which are expensive to obtain. In this paper, we attempt to transfer the code generation ability of an LLM to a smaller model with the aid of weakly-supervised data. More specifically, we propose explicit knowledge transfer (EKT), which uses the few-shot capabilities of a teacher LLM to create NL-code pairs that we then filter for correctness and fine-tune the student on. We evaluate EKT on the task of generating code solutions to math word problems from the GSM8k dataset. We find that EKT not only yields better performance than training with expert iteration, but also outperforms knowledge distillation, another form of knowledge transfer. A GPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves a 12.4% pass@100 on GSM8k, while the same student and teacher trained with knowledge distillation yield only a 3.7% pass@100. We also show that it is possible for a student model to outperform the teacher using EKT.
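The EKT recipe described in the abstract has three mechanical steps: sample candidate programs from a few-shot teacher, keep only candidates whose execution reproduces the problem's final answer (the weak supervision signal), and fine-tune the student on the surviving NL-code pairs. The Python sketch below illustrates that loop under stated assumptions; teacher_generate, the problem record fields, and the "one verified solution per problem" policy are illustrative choices, not the authors' implementation.

import contextlib
import io

def run_solution(code: str):
    """Execute a candidate program and return its printed output, or None on error."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # no sandboxing in this sketch; real use needs isolation
    except Exception:
        return None
    return buf.getvalue().strip()

def build_ekt_dataset(problems, teacher_generate, num_samples=4):
    """Collect NL-code pairs whose executed output matches the gold final answer."""
    pairs = []
    for problem in problems:  # assumed format: {"question": str, "answer": str}
        for code in teacher_generate(problem["question"], n=num_samples):
            if run_solution(code) == str(problem["answer"]):
                pairs.append({"nl": problem["question"], "code": code})
                break  # keep at most one verified solution per problem
    return pairs

# The student (e.g. GPT-Neo 1.3B) would then be fine-tuned on `pairs` with a
# standard causal-LM objective; that framework-specific step is omitted here.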
| Comments: | Updated on Jun 7, 2023 with ICLR workshop header |
| Subjects: | Computation and Language (cs.CL) |
| Cite as: | arXiv:2211.16740 [cs.CL] |
| | (or arXiv:2211.16740v3 [cs.CL] for this version) |
| | https://doi.org/10.48550/arXiv.2211.16740 (arXiv-issued DOI via DataCite) |
Submission history
From: Zhangir Azerbayev
[v1] Wed, 30 Nov 2022 04:51:26 UTC (2,010 KB)
[v2] Mon, 13 Feb 2023 08:44:13 UTC (2,025 KB)
[v3] Wed, 7 Jun 2023 18:01:11 UTC (2,025 KB)