Call for Papers

Submission deadline: 31st October 2022, Anywhere on Earth (AoE). Submit on OpenReview.

Reviewing period: 1st-14th November 2022

Decisions: 17th November 2022

Accepted papers can be found on OpenReview.

Machine learning, and deep learning in particular, typically assumes that all data is available at once and can be accessed whenever needed during training. This assumption is restrictive. Ideally, we want machines to learn as flexibly as humans do: humans adapt quickly to new environments and continue to learn throughout their lives, which current machine learning systems cannot. Over recent years, there has been growing interest in developing systems that can adapt in this way. In continual lifelong learning, methods must handle a stream of incoming data from an ever-changing source, where revisiting past data is difficult or even impossible. Ideally, such a system should be able to

  • quickly adapt to changes,
  • remember old knowledge and faithfully transfer it to new situations,
  • acquire new skills without forgetting existing ones,
  • adjust to drifts in the data and/or tasks, and
  • adapt the model/architecture accordingly, and so on.

Despite recent advances, many challenges remain. Different studies often formalise the problem differently and use different benchmarks. Even when there are empirical successes, there is little theoretical understanding. The field of continual lifelong learning remains an important, yet challenging, problem that we hope to discuss in this workshop.

The workshop welcomes submissions on a wide variety of topics aiming to address these challenges. We invite submissions (up to 5 pages, excluding references and appendix) in the ACML 2022 format. The submission deadline is 31st October (AoE). All submissions will be managed through OpenReview. The review process is double-blind, so submissions should be anonymised. Please edit the ACML template so that the Editors section is left blank.

Accepted work will be presented as posters during the workshop, and selected contributions will be invited to give spotlight talks.

We welcome work that has been accepted at non-archival workshops, as well as work currently under submission elsewhere. However, submissions that are substantially similar to papers previously published at conferences with proceedings may not be submitted.

We encourage submissions on topics including, but not limited to:

  • Fast adaptation,
  • Forward/backward transfer,
  • Continual Reinforcement Learning,
  • Bayesian continual learning,
  • Memory-based methods for continual learning,
  • Theory for continual lifelong learning,
  • Applications of continual lifelong learning,
  • Skill Learning, Temporal Abstractions for Continual RL,
  • Unsupervised, semi-supervised and self-supervised continual learning.

Please contact the organisers if you have any questions.