Scope and Topics

The AAAI Workshop on Machine Learning for Operations Research (ML4OR) builds on the momentum that the OR and ML communities have generated over the past five years towards establishing modern ML methods as “first-class citizens” at all levels of the OR toolkit.

ML4OR will serve as an interdisciplinary forum for researchers in both fields to discuss technical issues at this interface and present ML approaches that apply to basic OR building blocks (e.g., integer programming solvers) or specific applications.

Topics

ML4OR will place particular emphasis on:
  1. ML methodologies for enhancing traditional OR algorithms for integer programming, combinatorial optimization, stochastic programming, multi-objective optimization, location and routing problems, etc.;
  2. Deep Learning (DL) approaches that can exploit large datasets, particularly Graph Neural Networks (GNNs) and Deep Reinforcement Learning (DRL);
  3. Datasets and benchmark libraries that enable ML approaches for a particular OR application or challenging combinatorial problems.

Format

ML4OR is a one-day workshop consisting of a mix of events:
  • multiple invited talks by recognized speakers from both OR and ML, covering central theoretical, algorithmic, and practical challenges at this intersection;
  • technical sessions in which researchers briefly present their accepted papers;
  • a virtual poster session for accepted papers and abstracts;
  • a panel discussion with speakers from academia and industry on the state of the field and promising avenues for future research;
  • an educational session on best practices for incorporating ML in advanced OR courses, including open software and data, learning outcomes, etc.

Attendance

Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Important Dates

  • November 20, 2021 – Submission Deadline [Extended from November 12, 2021]
  • December 16, 2021 – Acceptance Notification
  • March 1, 2022 [09:00-14:10] – Workshop Date

Submission Information

Submission URL: https://openreview.net/group?id=AAAI.org/2022/Workshop/ML4OR-22

Submission Types

We invite researchers to submit either full-length research papers (8 pages) describing novel contributions or extended abstracts (2 pages) describing preliminary results on the topics above; a more extensive list of topics is available on the workshop website. Submissions tackling new problems or addressing more than one of the aforementioned topics simultaneously are encouraged.
Note: We invite researchers working on topics such as decision-focused learning/smart predict-and-optimize to submit their papers to the AAAI-22 Workshop on AI for Decision Optimization rather than to ML4OR, as the former is the more relevant venue for such topics.

Fast Track (Rejected NeurIPS or AAAI papers)

Rejected NeurIPS or AAAI papers with *average* scores of at least 5.0 may be submitted to ML4OR along with the previous reviews and scores and an optional letter indicating how the authors have addressed the reviewers' comments.
Please use the submission link above and indicate that the submission is a resubmission of a rejected NeurIPS/AAAI paper.
These submissions will undergo a light review process rather than the regular one, and will be accepted if the previous reviews are judged to meet the workshop's standard.

All papers must be submitted in PDF format, using the AAAI-22 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the NeurIPS 2021 and AAAI 2022 technical programs are welcome.

For questions about the submission process, contact the workshop chairs.

Registration

Registration in each workshop is required of all active participants, and is also open to all interested individuals. The early registration deadline is December 17. For more information, please refer to the AAAI-22 Workshop page.

Schedule

All times are in PST (UTC-8:00)
  • [09:00-09:05]: Workshop Opening
  • [09:05-09:50]: Invited Talk 1 [Ellen Vitercik]
  • [09:50-10:05]: Paper Presentations 1 (3 papers)
  • [10:05-10:50]: Invited Talk 2 [Stefanie Jegelka]
  • [10:50-11:05]: Paper Presentations 2 (3 papers)
  • [11:05-12:15]: BREAK
  • [12:15-13:00]: Invited Talk 3 [Pascal Van Hentenryck]
  • [13:00-13:15]: Paper Presentations 3 (3 papers)
  • [13:15-14:00]: Invited Talk 4 [Kevin Leyton-Brown]
  • [14:00-14:10]: Paper Presentations 4 (2 papers)
  • [14:10]: Farewells

Invited Talks

Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond - Ellen Vitercik [09:05-09:50]

Abstract: Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree search algorithm used to find optimal solutions. In this talk, we provide the first sample complexity guarantees for learning high-performing cut-selection policies tailored to the instance distribution at hand using samples. We then develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree. Finally, we describe how our techniques can be generalized to cover a diversity of other combinatorial algorithm configuration problems, including clustering, computational biology, and greedy algorithm configuration. This talk is based on joint work with Nina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, Siddharth Prasad, and Tuomas Sandholm.
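
To make the configuration setting concrete, here is a minimal sketch of learning a cut-selection parameter from sampled instances; it is our own illustration, not code from the talk, and solve_tree_size is a synthetic stand-in for running a solver with cut-scoring parameter mu and recording the resulting search-tree size:

    import random

    def solve_tree_size(instance_seed: int, mu: float) -> float:
        """Synthetic stand-in: tree size as a noisy, instance-dependent function of mu."""
        rng = random.Random(instance_seed)
        best_mu = 0.3 + 0.2 * rng.random()  # instance-specific sweet spot
        return 100.0 * (1.0 + (mu - best_mu) ** 2) + abs(rng.gauss(0, 5))

    def configure(mu_grid, n_samples: int = 200, seed: int = 0) -> float:
        """Pick the mu with the best average performance over sampled instances."""
        rng = random.Random(seed)
        instances = [rng.randrange(10**6) for _ in range(n_samples)]
        avg = {mu: sum(solve_tree_size(i, mu) for i in instances) / n_samples
               for mu in mu_grid}
        return min(avg, key=avg.get)

    print("selected mu:", configure([i / 10 for i in range(11)]))

Sample complexity guarantees of the kind described in the talk bound how large n_samples must be before the empirically best policy is near-optimal on the true instance distribution.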

Bio: Ellen Vitercik is a Miller postdoctoral fellow at UC Berkeley. In Fall 2022, she will join Stanford as an Assistant Professor with a joint appointment between the Management Science and Engineering department and the Computer Science Department. Her research revolves around machine learning theory, algorithm design, and the interface between economics and computation. She received a PhD in Computer Science from Carnegie Mellon University (CMU), where her thesis won the CMU School of Computer Science Distinguished Dissertation Award and the Honorable Mention Victor Lesser Distinguished Dissertation Award.

Generalization and Extrapolation in Graph Neural Networks for Learning Combinatorial Algorithms - Stefanie Jegelka [10:05-10:50]

Abstract: Graph Neural Networks (GNNs) have become a popular tool for learning certain algorithmic tasks, in particular related to combinatorial optimization. In this talk, we will focus on the “algorithmic reasoning” task of learning a full algorithm. While GNNs have shown promising empirical results, their generalization properties are less well understood. In particular, the task and data properties, architecture and learning algorithm all affect the results. For instance, empirically, we observe an interplay between the structure of the task — or target algorithm — and the inductive biases of the architecture: although many networks may be able to represent a task, some architectures learn it better than others. I will show an approach to formalize this relationship, and empirical and theoretical implications for generalization within and out of the training distribution. Our results suggest that a strong alignment between model and task can enable extrapolation, and, in particular, nonlinearities play a key role. This talk is based on joint work with Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du and Ken-ichi Kawarabayashi.
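
One concrete instance of this alignment, in our own minimal illustration (following the well-known Bellman-Ford example rather than any code from the talk): a message-passing layer with min-aggregation has the same structure as one Bellman-Ford relaxation step, so a GNN of that form only needs to learn the simple per-edge function h_u + w_uv:

    INF = float("inf")

    def bellman_ford_step(dist, edges):
        """One relaxation sweep: dist[v] <- min(dist[v], min over (u,v,w) of dist[u] + w)."""
        new = dict(dist)
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
        return new

    # A GNN update h_v <- min over neighbors u of MLP(h_u, w_uv) represents this
    # step exactly when the MLP computes h_u + w_uv; the min-aggregator does the rest.
    edges = [("s", "a", 2.0), ("a", "b", 1.0), ("s", "b", 5.0)]
    dist = {"s": 0.0, "a": INF, "b": INF}
    for _ in range(2):
        dist = bellman_ford_step(dist, edges)
    print(dist)  # {'s': 0.0, 'a': 2.0, 'b': 3.0}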

Bio: Stefanie Jegelka is an Associate Professor in the Department of EECS at MIT. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, Google research awards, a Two Sigma faculty research award, the German Pattern Recognition Award and a Best Paper Award at ICML. She has co-organized multiple workshops on (discrete) optimization in machine learning and graph representation learning, and serves as an Action Editor at JMLR and a program chair of ICML 2022. Her research interests span the theory and practice of algorithmic machine learning, in particular, learning problems that involve combinatorial structure.

Machine Learning for Engineering - Pascal Van Hentenryck [12:15-13:00]

Abstract: TBA

Bio: Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology and the Associate Chair for Innovation and Entrepreneurship. He is the director of the NSF AI Institute for Advances in Optimization and the director of the Socially Aware Mobility (SAM) and Risk-Aware Market Clearing (RAMC) labs. Several of his optimization systems have been in commercial use for more than 20 years for solving logistics, supply-chain, and manufacturing applications. His current research focuses on machine learning, optimization, and privacy with applications in energy, mobility, and supply chains.

What Does It Mean To Prefer The Fastest Algorithm? - Kevin Leyton-Brown [13:15-14:00]

Abstract: It is surprisingly difficult to identify a defensible, general rule describing when we prefer one runtime distribution to another, particularly if we have only a bounded amount of time for performing each run and not all runs will terminate. (We restrict our attention to settings in which all algorithm runs produce exact, correct answers, and where we care only about runtimes; think e.g. of complete SAT or MIP solvers.) Identifying such a scoring function is critical when we use machine learning approaches to learn new algorithms from data, as in the case of algorithm configuration.
This talk describes a new, utility-theoretic answer to the question of how we should formalize our preferences between arbitrary pairs of algorithms. It leverages axiomatic assumptions about preferences over runtime distributions to prove the existence of a scoring function and to identify its properties. This function depends on the way our value for a solution decreases with time and on the distribution from which captimes are drawn. We describe a maximum-entropy approach to modeling captime distributions and show how our overall framework can be applied in a variety of realistic scenarios.
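
As a rough illustration of why the choice of scoring function matters (our own sketch, not the talk's exact formalism), the following compares two empirical runtime distributions under an exponentially decaying utility and uniformly drawn captimes, where runs that exceed their captime yield zero utility:

    import random

    def utility(t: float, half_life: float = 10.0) -> float:
        """Value of a solution found at time t, halving every half_life seconds."""
        return 0.5 ** (t / half_life)

    def expected_utility(runtimes, captime_sampler, n_caps: int = 2000, seed: int = 0) -> float:
        """Monte Carlo estimate of expected utility under randomly drawn captimes."""
        rng = random.Random(seed)
        total = 0.0
        for t in runtimes:
            for _ in range(n_caps):
                total += utility(t) if t <= captime_sampler(rng) else 0.0
        return total / (len(runtimes) * n_caps)

    heavy_tail = [1, 1, 1, 200]   # usually fast, occasionally (almost) never done
    steady = [20, 25, 30, 35]
    caps = lambda rng: rng.uniform(0.0, 60.0)
    print(expected_utility(heavy_tail, caps), expected_utility(steady, caps))

Which distribution scores higher depends on the decay rate and the captime model, which is exactly why a principled, axiomatically grounded choice of scoring function is needed.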

Bio: Kevin Leyton-Brown is a professor of Computer Science and a Distinguished University Scholar at the University of British Columbia. He also holds a Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute and is an associate member of the Vancouver School of Economics. He received a PhD and an M.Sc. from Stanford University (2003; 2001) and a B.Sc. from McMaster University (1998). He studies artificial intelligence, mostly at the intersection of machine learning and (1) the design and operation of electronic markets and (2) the design of heuristic algorithms. He has co-written two books, "Multiagent Systems" and "Essentials of Game Theory," and over 150 peer-refereed technical articles; his work has received over 20,000 citations, and he has an h-index of 56. He is a Fellow of the Association for Computing Machinery (ACM; awarded in 2020) and the Association for the Advancement of Artificial Intelligence (AAAI; awarded in 2018).

Accepted Papers

Invited Speakers

  • Pascal Van Hentenryck – Georgia Tech
  • Kevin Leyton-Brown – University of British Columbia (UBC)
  • Stefanie Jegelka – Massachusetts Institute of Technology (MIT)
  • Ellen Vitercik – UC Berkeley

Workshop Chairs

  • Ferdinando Fioretto – Syracuse University – ffi...@syr.edu
  • Emma Frejinger – Université de Montréal – fre...@gmail.com
  • Elias B. Khalil – University of Toronto – kha...@mie.utoronto.ca