Twelfth Cargese-Porquerolles Workshop on Combinatorial Optimization

September 4 - 8, 2023

IGESA Resort, Porquerolles (France)

About

The yearly Cargese workshop aims to bring together researchers in combinatorial optimization around a chosen topic of current interest. It is intended to be a forum for the exchange of recent developments and powerful tools, with an emphasis on theory. Other goals include the promotion of interactions between participants and discussions of open problems.

Mathematicians are known to be the only ones who can lie down on a sofa, close their eyes, and pretend to be working.
We propose to swap the sofa for a hammock... Do you want to join?

Sea at Porquerolles

Topic

This edition's workshop will focus on "Machine Learning and Discrete Optimization".

Distinguished speakers

  • Christian Coester (Optimization with predictions and/or online)
  • Sanjeeb Dash (Discrete optimization methods for some problems in classification, regression, and causal structure learning)
  • Ellen Vitercik (Machine learning for algorithm design: Theoretical guarantees and applied frontiers)

Organization

The workshop will be hosted at the IGESA Resort. This year's organizers are:

The organizers receive advice from the steering committee, consisting of Fritz Eisenbrand, Michel Goemans, Monique Laurent, R. Ravi, and Jens Vygen.

Conference Room at Porquerolles

Transportation

We have chartered a private bus and ferry to the island of Porquerolles that should serve the majority of participants. The bus will serve both Marseille Provence Airport (MRS) and Marseille Saint-Charles Train Station. The IGESA resort has a private ferry, "La Valériane", which will provide transport from La Tour Fondue (Giens) to the island of Porquerolles and back.

Sunday, September 3: the bus will leave Marseille Airport at about 4pm. The bus will make a stop at Marseille Saint-Charles Train Station around 4:30pm and will leave again by 5pm. We will arrive at La Tour Fondue around 7pm, where the ferry will be waiting for us. The ferry ride is about 25 min, and the IGESA resort is a 10-15 min walk from the ferry.

Friday, September 8: the ferry will leave between 11:45am and noon, to get us to the bus at La Tour Fondue, which will leave by 12:30pm. We will make a stop at Marseille Saint-Charles Train Station by 2:30pm, and will arrive at Marseille Provence Airport (MRS) by 3:30pm.

Schedule

There will be morning and afternoon sessions. Morning sessions will be dedicated to longer survey-type talks or open problems. The afternoon sessions will be dedicated to discussions as well as shorter talks on related topics.

We encourage researchers to use their spare time to enjoy the beautiful landscape and the opportunities of the island, besides solving a few open problems.

The workshop schedule can be found here: schedule.

Beach at Porquerolles

Talks

Lectures

  • Christian Coester – Optimization with predictions and/or online

    The rapidly growing field of algorithms with predictions (aka learning-augmented algorithms) considers problems where an algorithm’s input is augmented with predictions that — if correct — provide some useful information about the problem instance. Access to such predictions is often realistic when similar instances have been solved in the past, and they could be obtained, for example, via machine-learning, human experts or simple heuristics. The goal is to design algorithms that benefit from predictions if they are good, but even if predictions are highly erroneous the performance of the algorithm shall not deteriorate compared to the case without predictions. We will see how predictions can be applied to the classical sorting problem to improve over the $O(n\log n)$ comparison complexity and to the weighted paging problem. We will also discuss the question of how to combine advice from multiple predictors that may contradict each other and whose qualities may vary over time. This latter question can be reduced to the layered graph traversal problem — a classical online version of the shortest paths problem. The final part of the lecture series is dedicated to two recent advances in classical online algorithms that resolve some 30+ year old open problems. The first tightens the competitive ratio of the aforementioned layered graph traversal problem from $O(\ell^{13})$ to $O(\ell^2)$. The second settles the competitive ratio of metrical task systems and refutes the randomized k-server conjecture. Based on joint works with Bai (“Sorting with predictions”), Bansal, Kumar, Purohit, Vee (“Learning-augmented weighted paging”), Antoniadis, Elias, Polak, Simon (“Mixing predictions for online metric algorithms”), Bubeck and Rabani (“Shortest paths without a map, but with an entropic regularizer” and “The randomized k-server conjecture is false!”).

  • Sanjeeb Dash – Discrete optimization methods for some problems in classification, regression, and causal structure learning

    We will discuss a few problems in machine learning, namely binary classification, symbolic regression, knowledge graph completion, and Bayesian network structure learning, and present integer, linear, and polynomial optimization methods for these problems.

  • Ellen Vitercik – Machine learning for algorithm design: Theoretical guarantees and applied frontiers

    When designing algorithmic solutions to computational problems in practice, there is often ample data about the application domains in which the algorithms will be used: data that can be harnessed to optimize algorithmic performance. This exposes a new frontier of algorithm design where machine learning techniques can be used to improve the performance of existing combinatorial optimization algorithms and design entirely new ones.
    In this talk, I will give an overview of research in this area ranging from theoretical (Parts 1 and 2) to applied (Part 3). Part 1 will cover statistical guarantees for algorithm configuration. Algorithms often have tunable parameters that considerably impact their runtime and solution quality. In automated algorithm configuration, the goal is to use a training set of problem instances sampled from an unknown, application-specific distribution to find parameter settings with strong average performance on the training set. I will present a broadly applicable theory for deriving generalization guarantees which relate average performance on the training set to expected future performance. Part 2 will cover theoretical guarantees for online algorithm configuration, where problem instances arrive one by one over time, not necessarily from any distribution. The goal is to select a sequence of configurations that maximize the algorithm’s cumulative performance.
    Finally, Part 3 will cover applied techniques for integrating machine learning into discrete optimization. I will discuss the use of graph neural networks to accelerate integer programming solvers and potentially even replace hand-crafted combinatorial algorithms. I will also describe how reinforcement learning can be harnessed to develop heuristics for NP-hard problems.

    Lecture Slides: slides. (A small illustrative sketch of the algorithm-configuration setting from Part 1 appears right after this list.)
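
As a small companion to Part 1 of Ellen Vitercik's lecture, the following Python sketch illustrates the algorithm-configuration setting described there: a heuristic with one tunable parameter is evaluated on a training set of instances drawn from a synthetic distribution, and the parameter value with the best average training performance is selected. The knapsack instance generator, the greedy heuristic, and the candidate parameter grid below are illustrative assumptions, not material from the lecture.

    import random

    def sample_instance(rng, n=60):
        # One synthetic knapsack instance: correlated values/weights and a capacity.
        weights = [rng.uniform(1.0, 10.0) for _ in range(n)]
        values = [w * rng.uniform(0.8, 1.2) for w in weights]
        capacity = 0.4 * sum(weights)
        return values, weights, capacity

    def greedy_knapsack(values, weights, capacity, alpha):
        # Tunable greedy heuristic: pack items by decreasing value / weight**alpha.
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i] ** alpha,
                       reverse=True)
        total_value = total_weight = 0.0
        for i in order:
            if total_weight + weights[i] <= capacity:
                total_weight += weights[i]
                total_value += values[i]
        return total_value

    def configure(candidates, train):
        # Algorithm configuration: pick the parameter with the best average
        # performance over the training instances.
        def avg_value(alpha):
            return sum(greedy_knapsack(*inst, alpha) for inst in train) / len(train)
        return max(candidates, key=avg_value)

    if __name__ == "__main__":
        rng = random.Random(0)
        train = [sample_instance(rng) for _ in range(30)]  # "past" instances
        test = [sample_instance(rng) for _ in range(30)]   # unseen instances
        best_alpha = configure([0.0, 0.5, 1.0, 1.5, 2.0], train)
        test_avg = sum(greedy_knapsack(*inst, best_alpha) for inst in test) / len(test)
        print(f"selected alpha = {best_alpha}, average test value = {test_avg:.1f}")

Whether the selected parameter also performs well on fresh instances from the same distribution is exactly the question addressed by the generalization guarantees of Part 1.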

Contributed talks

  • Eric Balkanski – Strategyproof Scheduling with Predictions

    In their seminal paper that initiated the field of algorithmic mechanism design, Nisan and Ronen (STOC '99) studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an $n$-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, $n$ remains the best known approximation and very recent work by Christodoulou et al. (FOCS '21) has been able to prove an $\Omega(\sqrt{n})$-approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the "learning-augmented framework", whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of Nisan and Ronen using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is $6$-consistent and $2n$-robust. We thus achieve the "best of both worlds": an $O(1)$ consistency and an $O(n)$ robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any $1$-consistent deterministic strategyproof mechanism has unbounded robustness.

  • Marek Eliáš – Learning-Augmented Algorithms with Explicit Predictors

    In the field of algorithmic design, there has been recent progress in utilizing predictions from machine learning models applied to the input data (or historical data). These approaches have demonstrated an enhancement in performance when the predictions are accurate, while also ensuring robustness by providing worst-case guarantees when predictions fail.
    In this manuscript we focus on online problems; prior research in this context has generally treated the machine learning algorithm as a black-box with unspecified training methods. In contrast, in this work, we unpack the algorithm and examine the learning problem it gives rise to, with the ultimate goal of designing online learning algorithms specifically tailored for the algorithmic task at hand. Adopting this perspective, we turn our attention to a number of fundamental problems, including caching and scheduling, which have been well-studied in the black-box setting. For each of the problems we consider, we introduce new algorithms that take advantage of explicit learning algorithms which we carefully design towards optimizing the overall performance. We demonstrate the potential of our approach by deriving performance bounds that show improvement over those established in previous work.
    Joint work with Haim Kaplan, Yishay Mansour, Shay Moran.

  • Stefano Leonardi – Learning Pricing Mechanisms for Trading

    The bilateral trade problem models a two-sided market with both buyers and sellers acting strategically. An ideal goal in bilateral trade is to maximize the social welfare of buyers and sellers with individually rational (IR), incentive compatible (IC), and budget-balanced (BB) mechanisms, where budget balance requires that the mechanism does not subsidize the market or make an arbitrary profit from the exchange. Unfortunately, Myerson and Satterthwaite proved in 1983 that this goal cannot be achieved even in the Bayesian setting and for the simple case of only one buyer and one seller. Meaningful trade-offs and algorithmic approximations of the above requirements have recently been proposed under the often-unrealistic assumption that the agents' priors are perfectly known beforehand. In this talk, I focus on learning pricing mechanisms that achieve near-optimal approximations with a single sample from each distribution (the minimum amount of information required to achieve any positive result), as well as on online learning mechanisms that achieve optimal regret bounds with no prior information while interacting with the agents in a sequence of repeated bilateral trade rounds.

  • Sammy Khalife – On the power of graph neural networks and the role of the activation function

    In this talk I will present new results about the expressivity of Graph Neural Networks (GNNs), answering an open question formulated by Grohe (LICS '21). For any GNN with piecewise polynomial activations and whose architecture size does not grow with the graph input sizes, there exists a pair of non-isomorphic rooted trees of depth two such that the GNN cannot distinguish their root vertex up to an arbitrary number of iterations. The proof relies on the algebra of symmetric polynomials. In contrast, it was already known that unbounded GNNs (those whose size is allowed to change with the graph sizes) with piecewise polynomial activations can distinguish these vertices in only two iterations. Our results imply a strict separation between bounded and unbounded size GNNs.
    If one allows activations that are not piecewise polynomial, then in two iterations a single neuron perceptron can distinguish the root vertices of any pair of non-isomorphic trees of depth two (our results hold for activations like the sigmoid, hyperbolic tan and others). This shows how the power of graph neural networks can change drastically if one changes the activation function of the neural networks. The proof of this result utilizes the Lindemann-Weierstrass theorem from transcendental number theory.
    Link to article: https://arxiv.org/abs/2307.04661.

  • Christoph Hertrich – Neural Networks and Extension Complexity

    The smallest known extended formulation for the spanning tree polytope has size $O(n^3)$. The smallest known (ReLU) neural network optimizing over the spanning tree polytope has size $O(n^3)$, too. Is this a coincidence? More generally, we ask whether any interesting relation between the extension complexity of a polytope and the minimum size of a neural network optimizing over the polytope can be proven. We report on progress towards answering this question by showing the following. If a neural network with $N$ neurons solves the optimization problem over a polytope P, then there exist two polytopes $Q$ and $R$ with extension complexity at most $2N$ each such that $R$ is the Minkowski sum of $P$ and $Q$. This implies that an optimal solution to the optimization problem over $P$ can be obtained as the difference of optimal solutions to the optimization problems of $R$ and $Q$, respectively (with appropriate tie-breaking).
    This is work in progress with Sam Fiorini, Georg Loho, and László Végh. (The standard support-function identity behind this reduction is restated after the list of contributed talks.)

  • Axel Parmentier – Combinatorial optimization layers in deep learning pipelines provide efficient policies for dynamic combinatorial optimization problems

    The 2022 EURO-NeurIPS challenge (1) on dynamic vehicle routing highlighted the difficulties of multistage stochastic optimization problems whose state and decision spaces are discrete and combinatorially large. Multistage stochastic optimization policies do not scale, while reinforcement learning approaches have difficulties with the combinatorial decisions. We suggested using a combinatorial optimization layer in a deep learning pipeline: a deep neural network takes as input the state of the system and returns as output the parameters of a deterministic vehicle routing problem, which we solve with a classic combinatorial optimization heuristic to obtain the decision. This approach seems to overcome the difficulties above and enabled us to win the challenge by a three percent margin. The approach requires a learning algorithm to choose the parameters of the neural network. The results of the challenge startled the winning team and opened a scientific question, because the learning algorithm used should not have worked. Indeed, it imitated the anticipative policy, which is known to perform poorly in multi-stage stochastic optimization. In this talk, we use tools from combinatorial optimization, reinforcement learning, and multi-stage stochastic optimization to reinterpret the learning algorithm, understand why it worked on the challenge problem but fails on others, and introduce better learning algorithms for policies based on combinatorial optimization layers. We illustrate the performance of these algorithms on several dynamic routing problems. In particular, we significantly improve the performance of the policy used to win the challenge. (A minimal illustrative sketch of the combinatorial-optimization-layer pattern appears after the list of contributed talks.)
    (1) EURO Meets NeurIPS 2022 Vehicle Routing Competition.

  • Bissan Ghaddar – Learning for Spatial Branching: An Algorithm Selection Approach

    The use of machine learning techniques to improve the performance of branch-and-bound optimization algorithms is a very active area in the context of mixed integer linear problems, but little has been done for non-linear optimization. To bridge this gap, we develop a learning framework for spatial branching and show its efficacy in the context of the Reformulation-Linearization Technique for polynomial optimization problems. The proposed learning is performed offline, based on instance-specific features and with no computational overhead when solving new instances. Novel graph-based features are introduced, which turn out to play an important role in the learning. Experiments on different benchmark instances from the literature show that the learning-based branching rule significantly outperforms the standard rules.

  • Merve Bodur – Neural Approximate Dynamic Programming for the Ultra-fast Delivery Problem

    Same-day delivery (SDD) services require optimized operational efficiency, especially for the two sub-tasks of order assignment/dispatching and vehicle routing. We consider the ultra-fast delivery variant of SDD, where critical time constraints make real-time decision-making particularly challenging. We propose a neural approximate dynamic programming (NeurADP) approach for this problem, a hybrid methodology combining approximate dynamic programming (ADP) and deep reinforcement learning (DRL). This work constitutes the first application of NeurADP beyond its original context of the ride-pooling problem, extending its versatility in solving real-world, dynamic, discrete optimization problems. NeurADP integrates neural network-based value function approximations into the integer programming models solved in the ADP algorithm in an efficient way via a two-step decomposition approach, and it also performs a more general Bellman update via the DRL connection. Our extensive numerical experiments demonstrate the benefits of the NeurADP approach, with the generated policies outperforming various DRL and myopic policies. A detailed sensitivity analysis further confirms the effectiveness of the NeurADP policies under varying operational constraints.

  • Andrea Lodi – Structured Pruning of Neural Networks for Constraint Learning

    In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity across diverse applications, including cancer treatment, algorithmic configuration, and chemical process optimization. In this domain, the combination of ML and OR often relies on representing the ML model output using Mixed Integer Programming (MIP) formulations. Numerous studies in the literature have developed such formulations for many ML predictors, with a particular emphasis on Artificial Neural Networks (ANNs) due to the significant interest they attract in many applications. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve, thereby impeding scalability. In fact, the ML community has already introduced several techniques to reduce the parameter count of ANNs without compromising their performance, since the substantial size of modern ANNs presents challenges for ML applications as it significantly impacts computational effort during training and necessitates significant memory resources for storage. In this work, we showcase the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs. By pruning the ANN, we achieve significant improvements in the speed of the solution process. We discuss why pruning is more suitable in this context compared to other ML compression techniques, and we identify the most appropriate pruning strategies. To highlight the potential of this approach, we conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.

  • Emiliano Traversi – Clique Merging Algorithms to Solve Semidefinite Relaxations of Optimal Power Flow Problems

    Semidefinite Programming (SDP) is a powerful technique to compute tight lower bounds for Optimal Power Flow (OPF) problems. Even using clique decomposition techniques, semidefinite relaxations are still computationally demanding. However, there are many different clique decompositions for the same SDP problem and they are not equivalent in terms of computation time. In this paper, we propose a new strategy to compute efficient clique decompositions with a clique merging heuristic. This heuristic is based on two different estimates of the computational burden of an SDP problem: the size of the problem and an estimation of a per-iteration cost for a state-of-the-art interior-point algorithm. We compare our strategy with other algorithms on MATPOWER instances and we show a significant decrease in solver time. In the last part of the talk we present recent developments on how to incorporate machine learning techniques to automatically identify effective clique decompositions.

  • Sander Borst – Node selection in branch-and-bound, using the explorable heap problem

    The branch-and-bound algorithm has two important ingredients: a branching rule and a node selection rule. The latter has not been studied much from a theoretical point of view. We study node selection rules using the explorable heap model. In this model, proposed by Karp, Saks and Wigderson (FOCS '86), the goal is to find a node selection rule that minimizes the amount of movement of the solver head through the branch-and-bound tree. This problem can be formulated in a very simple way: how many steps does an agent need to find the $n$-th smallest number in a heap if the agent can only learn a number by visiting the corresponding node in the tree? We provide a new randomized algorithm for this problem that substantially improves upon the best previously known algorithm. This algorithm implies a new node selection rule for branch-and-bound.

  • Afrouz Jabal Ameli – $4/3$-approximation for two node connectivity

    The $2$-Vertex-Connected Spanning Subgraph problem (2VCSS) is among the most basic NP-hard (Survivable) Network Design problems: we are given an (unweighted) undirected graph $G$, and our goal is to find a subgraph $S$ of $G$ with the minimum number of edges which is $2$-vertex-connected, namely, $S$ remains connected after the deletion of an arbitrary node. 2VCSS is well studied in terms of approximation algorithms, and the current best (polynomial-time) approximation factor is $10/7$ by Heeger and Vygen (SIDMA '17), improving on earlier results by Khuller and Vishkin (STOC '92) and Garg, Vempala and Singla (SODA '93). In this talk, we present a $4/3$-approximation algorithm for 2VCSS, improving on this bound.

  • Thomas Rothvoss – The Subspace Flatness Conjecture and Faster Integer Programming

    In a seminal paper, Kannan and Lovász (Annals of Math. '88) considered a quantity $\mu_{KL}(\Lambda,K)$ which denotes the best volume-based lower bound on the covering radius $\mu(\Lambda,K)$ of a convex body $K$ with respect to a lattice $\Lambda$. Kannan and Lovász proved that $\mu(\Lambda,K) \leq n \cdot \mu_{KL}(\Lambda,K)$, and the Subspace Flatness Conjecture by Dadush (2012) claims that an $O(\log n)$ factor suffices, which would match the lower bound from the work of Kannan and Lovász. We settle this conjecture up to a constant in the exponent by proving that $\mu(\Lambda,K) \leq O(\log^{3}(n)) \cdot \mu_{KL}(\Lambda,K)$. Our proof is based on the Reverse Minkowski Theorem due to Regev and Stephens-Davidowitz (STOC '17, Annals of Math. '23). Following the work of Dadush (2012, STOC '19), we obtain a $(\log n)^{O(n)}$-time randomized algorithm to solve integer programs in $n$ variables. Another implication of our main result is a near-optimal flatness constant of $O(n \log^{4}(n))$.
    This is joint work with Victor Reis.
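
For readers of Christoph Hertrich's abstract above, the stated reduction rests on the textbook support-function identity for Minkowski sums (a standard fact, not a new claim from the talk): if $R = P + Q$, then for every linear objective $c$,

    $$\max_{x \in R} \langle c, x \rangle = \max_{p \in P} \langle c, p \rangle + \max_{q \in Q} \langle c, q \rangle,$$

so, with appropriate tie-breaking, an optimal solution over $P$ can be recovered as the difference $r^*(c) - q^*(c)$ of optimal solutions $r^*(c)$ over $R$ and $q^*(c)$ over $Q$.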
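
To make the combinatorial-optimization-layer pattern from Axel Parmentier's talk concrete, here is a minimal, hypothetical Python sketch of the general architecture (it is not the challenge-winning pipeline): a small parametric model maps the observed state to the arc costs of a deterministic routing problem, and a classical heuristic run on those costs returns the decision. The state features, the linear cost model, and the nearest-neighbour heuristic are illustrative assumptions.

    import math
    import random

    def arc_features(state, i, j):
        # Hypothetical features of arc (i, j) extracted from the current state.
        locations, demands = state
        return [math.dist(locations[i], locations[j]), demands[j], 1.0]

    def arc_costs(state, theta):
        # Statistical model (here just a linear layer) mapping the state to the
        # arc costs of a deterministic vehicle routing problem.
        locations, _ = state
        n = len(locations)
        return {(i, j): sum(t * f for t, f in zip(theta, arc_features(state, i, j)))
                for i in range(n) for j in range(n) if i != j}

    def nearest_neighbour_tour(costs, n, depot=0):
        # Classical combinatorial heuristic applied to the predicted costs.
        tour, current = [depot], depot
        unvisited = set(range(n)) - {depot}
        while unvisited:
            nxt = min(unvisited, key=lambda j: costs[current, j])
            tour.append(nxt)
            unvisited.remove(nxt)
            current = nxt
        return tour + [depot]

    if __name__ == "__main__":
        rng = random.Random(1)
        locations = [(rng.random(), rng.random()) for _ in range(8)]  # depot + 7 requests
        demands = [0.0] + [rng.uniform(1.0, 5.0) for _ in range(7)]
        state = (locations, demands)
        theta = [1.0, 0.1, 0.0]  # parameters a learning algorithm would tune
        decision = nearest_neighbour_tour(arc_costs(state, theta), len(locations))
        print("routing decision:", decision)

A learning algorithm then adjusts theta so that the decisions produced by the heuristic perform well over time, which is precisely the part of the pipeline the talk revisits.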

Participants

  • Manuel Aprile
  • Eleonore Bach
  • Xingjian Bai
  • Eric Balkanski
  • Giulia Bernardini
  • Merve Bodur
  • Sander Borst
  • Deeparnab Chakrabarty
  • Antonia Chmiela
  • Christian Coester
  • Claudia D'Ambrosio
  • Daniel Dadush
  • Sanjeeb Dash
  • Ilias Diakonikolas
  • Jelena Diakonikolas
  • Fritz Eisenbrand
  • Marek Elias
  • Yuri Faenza
  • Samuel Fiorini
  • Bissan Ghaddar
  • Christoph Hertrich
  • Christopher Hojny
  • Dylan Hyatt-Denesik
  • Gabriele Iomazzo
  • Afrouz Jabal Ameli
  • Elias Kalil
  • Sammy Khalife
  • Stefan Köber
  • Stefano Leonardi
  • Alex Lindermayr
  • Siyue Liu
  • Andrea Lodi
  • Quentin Louveaux
  • Meike Neuwohner
  • Neil Olver
  • Joseph Paat
  • Philipp Pabst
  • Axel Parmentier
  • Thomas Rothvoss
  • Anna Russo Russo
  • Laura Sanità
  • Lara Scavuzzo
  • Jens Schlöter
  • Frits Spieksma
  • Ola Svensson
  • Emiliano Traversi
  • Mark Turner
  • Paolo Ventura
  • Ellen Vitercik
  • Stefan Weltge

Previous editions

A list of all previous editions can be found here.

Credits

Logo designed by Giuseppe Musolino. Website hosted at GitHub Pages. Layout based on skeleton using AmatiSC font.