Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
MS139, part 3: Combinatorics and algorithms in decision and reason
Time:
Saturday, 13/Jul/2019:
3:00pm - 5:00pm

Location: Unitobler, F-121
52 seats, 100 m²

Presentations
3:00pm - 5:00pm

Combinatorics and algorithms in decision and reason

Chair(s): Liam Solus (KTH Royal Institute of Technology, Sweden), Svante Linusson (KTH Royal Institute of Technology)

Combinatorial, or discrete, structures are a fundamental tool for modeling decision-making processes in a wide variety of fields including machine learning, biology, economics, sociology, and causality. Within these various contexts, the goal of key problems can often be phrased in terms of learning or manipulating a combinatorial object, such as a network, permutation, or directed acyclic graph, that exhibits pre-specified optimal features. In recent decades, major breakthroughs in each of these fields can be attributed to the development of effective algorithms for learning and analyzing combinatorial models. Many of these advancements are tied to new developments connecting combinatorics, algebra, geometry, and statistics, particularly through the introduction of geometric and algebraic techniques into the development of combinatorial algorithms. The goal of this session is to bring together researchers from each of these fields who are using combinatorial or discrete models in data science, so as to encourage further breakthroughs in this important area of mathematical research.

 

(25 minutes per presentation, including questions, followed by a 5-minute break; if there are x < 4 talks, the first x slots are used unless indicated otherwise)

 

Stability and Convergence Trade-off of Iterative Optimization Algorithms

Yuansi Chen
UC Berkeley

The expected excess risk of an iterative machine learning algorithm can be decomposed into training error and generalization error. While the former is controlled by the algorithm's convergence analysis, the latter can be tightly handled by algorithmic stability. The statistical machine learning community has a rich history of investigating convergence and stability separately. However, the question of the trade-off between these two quantities has remained open. In this talk, we show that for any iterative algorithm at any iteration, the expected excess risk is lower bounded by the minimax statistical error over an appropriately chosen loss function class. This lower bound has important implications for the trade-off between convergence and stability of the algorithm. Specifically, we show that a fast-converging algorithm cannot be too stable, and vice versa.
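The decomposition mentioned in the abstract's opening sentence can be written out in generic notation (the symbols below are standard choices, not taken from the talk itself): with population risk $R$, empirical risk $R_n$ over $n$ samples, iterate $\theta_t$ after $t$ steps, and population minimizer $\theta^*$,

```latex
\underbrace{\mathbb{E}\!\left[R(\theta_t)\right] - R(\theta^*)}_{\text{expected excess risk}}
\;=\;
\underbrace{\mathbb{E}\!\left[R(\theta_t) - R_n(\theta_t)\right]}_{\text{generalization error}}
\;+\;
\underbrace{\mathbb{E}\!\left[R_n(\theta_t) - R_n(\theta^*)\right]}_{\text{training (optimization) error}}
```

where the second term uses $\mathbb{E}[R_n(\theta^*)] = R(\theta^*)$, which holds because $\theta^*$ is fixed independently of the sample. Convergence analysis bounds the training term, stability bounds the generalization term.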

 

Probabilistic tensors and opportunistic Boolean matrix multiplication

Petteri Kaski
Aalto University

We introduce probabilistic extensions of classical deterministic measures of the algebraic complexity of a tensor, such as the rank and the border rank. We show that these probabilistic extensions satisfy various natural and algorithmically serendipitous properties, such as submultiplicativity under Kronecker products. Furthermore, for specific tensors of interest, the probabilistic extensions attain strictly lower rank than their deterministic counterparts, starting from the tensor <2,2,2> that represents 2-by-2 matrix multiplication. By submultiplicativity, this leads immediately to novel randomized algorithm designs, such as algorithms for Boolean matrix multiplication as well as for detecting and estimating the number of triangles and other subgraphs in a graph. Joint work with Matti Karppa (Aalto University).
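For context on the target problem: Boolean matrix multiplication replaces the sum and product of ordinary matrix multiplication with OR and AND. The following is a minimal cubic-time sketch of that definition — the baseline the randomized designs in the talk improve on, not the talk's algorithm:

```python
def boolean_matmul(A, B):
    """Naive Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j]).

    A is an n-by-m 0/1 matrix, B an m-by-p 0/1 matrix, given as lists of lists.
    Runs in O(n * m * p) time; fast algorithms reduce this via (probabilistic)
    low-rank decompositions of the matrix multiplication tensor.
    """
    n, m, p = len(A), len(B), len(B[0])
    return [
        [any(A[i][k] and B[k][j] for k in range(m)) for j in range(p)]
        for i in range(n)
    ]
```

Triangle counting, also mentioned in the abstract, reduces to this operation: a graph with adjacency matrix M has a triangle through (i, j) exactly when (M·M)[i][j] and M[i][j] are both nonzero in the Boolean sense.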

Reference: https://doi.org/10.1137/1.9781611975482.31

 

On attempts to characterize facets of the chordal graph polytope

Milan Studeny
Academy of Sciences of the Czech Republic

Our integer linear programming (ILP) approach to structure learning of decomposable graphical models, which are the models described by undirected chordal graphs, is to encode the models by special zero-one vectors named characteristic imsets. This leads to the study of a special polytope, defined as the convex hull of the characteristic imsets of all chordal graphs, which we name the chordal graph polytope (Studeny and Cussens, 2017). The talk will be devoted to attempts to characterize theoretically all facet-defining inequalities of this polytope, in order to utilize them in ILP-based procedures for learning decomposable models.
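As background on the graph class underlying the abstract: decomposable models are exactly those described by chordal graphs, i.e., graphs admitting a perfect elimination ordering. A self-contained sketch of the standard chordality test (maximum cardinality search plus an elimination-ordering check; this illustrates the graph class only, not the imset encoding or the polytope construction from the talk):

```python
from itertools import combinations

def is_chordal(adj):
    """Test chordality via maximum cardinality search (MCS).

    adj: dict mapping each vertex to the set of its neighbours (undirected).
    The reverse of an MCS visit order is a perfect elimination ordering
    exactly when the graph is chordal (Tarjan & Yannakakis, 1984).
    """
    # MCS: repeatedly visit the unvisited vertex with the most visited neighbours.
    visited, order = set(), []
    weight = {v: 0 for v in adj}
    while len(order) < len(adj):
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                weight[w] += 1
    # Elimination-ordering check: in the reversed order, each vertex's
    # later neighbours must form a clique.
    pos = {v: i for i, v in enumerate(reversed(order))}
    for v in adj:
        later = [w for w in adj[v] if pos[w] > pos[v]]
        for a, b in combinations(later, 2):
            if b not in adj[a]:
                return False
    return True
```

For example, a 4-cycle is not chordal (its only cycle of length 4 has no chord), while adding any chord makes it chordal.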