Recent years have seen rapid progress in meta-learning methods, which use data to learn and optimize the performance of learning methods, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has followed over the last decade: from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.

Meta-learning methods are also of substantial practical interest, since they have been shown, for example, to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.

Some of the fundamental questions that this workshop aims to address are:

  • How can we exploit our domain knowledge to effectively guide the meta-learning process?
  • What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
  • Which ML approaches are best suited for meta-learning, in which circumstances, and why?
  • What principles can we learn from meta-learning to help us design the next generation of learning systems?
  • What are the fundamental differences between the meta-learning “task” and that of traditional “non-meta” learners?
  • Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
  • How can we design more sample-efficient meta-learning methods?

The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.

In terms of prospective participants, our main targets are machine learning researchers interested in understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. Our invited speakers also include researchers who study human learning, to provide a broad perspective to the attendees.

Invited Speakers

  • Pieter Abbeel (UC Berkeley, Covariant.ai)
    Interaction of Model-based RL and Meta-RL

  • David Abel (Brown University)
    Abstraction & Meta-Reinforcement Learning
    Reinforcement learning is hard in a fundamental sense: even in finite and deterministic environments, it can take a large number of samples to find a near-optimal policy. In this talk, I discuss the role that abstraction can play in achieving reliable yet efficient learning and planning. I first introduce classes of state abstraction that induce a trade-off between optimality and the size of an agent’s resulting abstract model, yielding a practical algorithm for learning useful and compact representations from a demonstrator. Moreover, I show how these learned, simple representations can underlie efficient learning in complex environments. Second, I analyze the problem of searching for options that make planning more efficient. I present new computational complexity results illustrating that it is NP-hard to find the optimal options that minimize planning time, but showing that this set can be approximated in polynomial time. Collectively, these results provide a partial path toward abstractions that minimize the difficulty of high-quality learning and decision making.
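
    As a rough illustration of the kind of state abstraction discussed above (a sketch of our own, not code from the talk; the function name and the greedy clustering are purely illustrative), the following Python snippet groups ground states whose Q-values agree to within a tolerance epsilon, trading optimality against the size of the abstract model:

        def approx_q_abstraction(states, actions, Q, epsilon):
            """Group states whose Q-values differ by at most epsilon for every action.

            Q maps (state, action) -> value; returns a list of clusters, each of
            which acts as one abstract state. Greedy clustering keeps the sketch short.
            """
            clusters = []
            for s in states:
                for cluster in clusters:
                    rep = cluster[0]  # compare against the cluster's representative
                    if all(abs(Q[(s, a)] - Q[(rep, a)]) <= epsilon for a in actions):
                        cluster.append(s)
                        break
                else:
                    clusters.append([s])  # no existing cluster fits: open a new abstract state
            return clusters

        # Toy usage: three ground states collapse into two abstract states.
        states, actions = ["s0", "s1", "s2"], ["left", "right"]
        Q = {("s0", "left"): 1.00, ("s0", "right"): 0.00,
             ("s1", "left"): 1.05, ("s1", "right"): 0.02,
             ("s2", "left"): 0.10, ("s2", "right"): 0.90}
        print(approx_q_abstraction(states, actions, Q, epsilon=0.1))  # [['s0', 's1'], ['s2']]

    A larger epsilon yields a smaller abstract model but a weaker guarantee on the value of the resulting policy, which is exactly the optimality/compactness trade-off mentioned above.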

  • Jeff Clune (University of Wyoming, Uber AI)
    How Meta-Learning Could Help Us Accomplish Our Grandest AI Ambitions, and Early, Exotic Steps in that Direction
    A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. To produce AI-GAs, we need work on Three Pillars: meta-learning architectures, meta-learning learning algorithms, and automatically generating environments. In this talk I will present recent work from our team in each of the three pillars. Pillar 1: Generative Teaching Networks (GTNs); Pillar 2: differentiable plasticity, differentiable neuromodulated plasticity (“backpropamine”), and a Neuromodulated Meta-Learning algorithm (ANML); Pillar 3: the Paired Open-Ended Trailblazer (POET). My goal is to motivate future research into each of the three pillars and their combination.
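
    The plasticity methods in Pillar 2 build on the idea of giving each connection a slowly meta-learned component and a fast Hebbian component. The NumPy sketch below is our own simplification of that idea (names and constants are illustrative, not taken from the cited papers): the effective weight is w + alpha * hebb, where w and alpha are meta-learned and hebb is a Hebbian trace that adapts within an episode.

        import numpy as np

        def plastic_forward(x, w, alpha, hebb, eta=0.01):
            """One layer with a plastic component: effective weights are w + alpha * hebb."""
            y = np.tanh(x @ (w + alpha * hebb))                      # forward pass through plastic weights
            hebb = np.clip(hebb + eta * np.outer(x, y), -1.0, 1.0)   # fast Hebbian update of the trace
            return y, hebb

        # Toy usage: a 4-unit input feeding a 3-unit output layer.
        rng = np.random.default_rng(0)
        w, alpha = rng.normal(size=(4, 3)) * 0.1, rng.normal(size=(4, 3)) * 0.1
        hebb = np.zeros((4, 3))
        y, hebb = plastic_forward(rng.normal(size=4), w, alpha, hebb)

    In the actual work, w and alpha are trained by gradient descent across episodes (and, in the neuromodulated variants, the learning-rate signal itself is produced by the network), so the slow outer loop meta-learns how the fast Hebbian loop should adapt.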

  • Erin Grant (UC Berkeley)
    Meta-learning as hierarchical modelling

  • Raia Hadsell (DeepMind)
    Scalable Meta-Learning

  • Brenden Lake (NYU, Facebook AI Research)
    Compositional generalization in minds and machines
    People learn in fast and flexible ways that elude the best artificial neural networks. Once a person learns how to “dax,” they can effortlessly understand how to “dax twice” or “dax vigorously” thanks to their compositional skills. In this talk, we examine how people and machines generalize compositionally in language-like instruction learning tasks. Artificial neural networks have long been criticized for lacking systematic compositionality (Fodor & Pylyshyn, 1988; Marcus, 1998), but new architectures have been tackling increasingly ambitious language tasks. In light of these developments, we reevaluate these classic criticisms and find that artificial neural nets still fail spectacularly when systematic compositionality is required. We then show how people succeed in similar few-shot learning tasks and find that they utilize three inductive biases that can be incorporated into models. Finally, we show how more structured neural nets can acquire compositional skills and human-like inductive biases through meta-learning.
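
    To make the compositional setting concrete, the toy interpreter below (our own illustration, loosely in the spirit of SCAN-style instruction tasks; the grammar and names are hypothetical) shows why learning a single new primitive such as “dax” should immediately license composed instructions like “dax twice”:

        def interpret(instruction, primitives):
            """Map an instruction such as 'dax twice' to a sequence of actions.

            Modifiers operate on the meaning of the primitive, so a newly learned
            primitive composes with them for free instead of being memorized per phrase.
            """
            words = instruction.split()
            actions = [primitives[words[0]]]
            for modifier in words[1:]:
                if modifier == "twice":
                    actions = actions * 2
                elif modifier == "thrice":
                    actions = actions * 3
            return actions

        primitives = {"jump": "JUMP", "walk": "WALK"}
        primitives["dax"] = "DAX"                      # one-shot learning of a new primitive
        print(interpret("dax twice", primitives))      # ['DAX', 'DAX'], with no extra training

    People generalize this way from a single example; as the abstract notes, standard neural networks trained without such structure still struggle when this kind of systematic composition is required.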

Spotlights

Morning Session

Afternoon Session

Organizers

Important Dates

  • Submission deadline: 10 September 2019 (11:59 PM anywhere on Earth)
  • Notification: 1 October 2019
  • Camera ready: 4 December 2019
  • Workshop: 13 December 2019

Schedule

09:00 Introduction and opening remarks
09:10 Erin Grant
09:40 Jeff Clune
10:10 Coffee & posters
10:30 Poster spotlights 1
10:50 Posters
11:30 Pieter Abbeel
12:00 Discussion 1
12:30 Lunch break
14:00 David Abel
14:30 Raia Hadsell
15:00 Poster spotlights 2
15:20 Coffee & posters
16:30 Sebastian Flennerhag: Meta-Learning with Warped Gradient Descent
16:45 Jessica Lee: MetaPix: Few-shot video retargeting
17:00 Brenden Lake
17:30 Discussion 2
17:50 End

FAQ

  1. Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?

    Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).

  2. Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?

    We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.

  3. If a submission is accepted, will all authors of the accepted paper get a chance to register?

    We cannot confirm this yet, but it is most likely that we will have at most one registration to offer per accepted paper.

  4. Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?

    We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).

  5. Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?

    MetaLearn submissions are 4 pages, i.e., much shorter than standard conference submissions. But from our side it is perfectly fine to submit a condensed version of a parallel conference submission, provided it is also fine for the conference in question. Our workshop does not have archival proceedings, and therefore parallel submissions of extended versions to other conferences are acceptable.

  6. Are there any instructions for poster formatting or for the camera-ready?

  • Posters should be A0, preferably on light paper. Posters can be hung up before the start of the workshop or during the breaks or poster sessions.
  • The camera-ready version of your accepted paper should be limited to 4 pages, with up to 10 pages of additional material.

Accepted Papers

Program Committee

We thank the program committee for shaping the excellent technical program (in alphabetical order):

Aaron Klein, Abhishek Gupta, Alexander Toshev, Alexandre Galashov, Andre Carvalho, Andrei A. Rusu, Ang Li, Ashvin V. Nair, Avi Singh, Aviral Kumar, Ben Eysenbach, Benjamin Letham, Bradly C, Brandon Schoenfeld, Brian Cheung, Carlos Soares, Daniel Hernandez, Deirdre Quillen, Devendra Singh, Dumitru Erhan, Dushyant Rao, Eleni Triantafillou, Erin Grant, Esteban Real, Eytan Bakshy, Frank Hutter, Haoran Tang, Hugo Jair, Igor Mordatch, Jakub Sygnowski, Jan Humplik, Jan N. van Rijn, Jan Hendrik, Jiajun Wu, Jonas Rothfuss, Jonathan Schwarz, Jürgen Schmidhuber, Kate Rakelly, Katharina Eggensperger, Kevin Swersky, Kyle Hsu, Lars Kotthoff, Leonard Hasenclever, Lerrel Pinto, Luisa Zintgraf, Marc Pickett, Marta Garnelo, Marvin Zhang, Matthias Seeger, Maximilian Igl, Misha Denil, Parminder Bhatia, Parsa Mahmoudieh, Pavel Brazdil, Pieter Gijsbers, Piotr Mirowski, Rachit Dubey, Rafael Gomes, Razvan Pascanu, Ricardo B. Prudencio, Roger B. Grosse, Rowan McAllister, Sayna Ebrahimi, Sebastien Racaniere, Sergio Escalera, Siddharth Reddy, Stephen Roberts, Sungryull Sohn, Surya Bhupatiraju, Thomas Elsken, Tin K. Ho, Udayan Khurana, Vincent Dumoulin, Vitchyr H. Pong, Zeyu Zheng

Past Workshops

Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017

Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018

Sponsors

We are grateful for the support of our sponsors, which enabled us to offer travel grants to several participants.

Facebook, Amazon, DeepMind

Contacts

For any further questions, you can contact us at info@metalearning.ml.