Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session: MS191, part 1: Algebraic and geometric methods in optimization

Time: Tuesday, 09/Jul/2019, 3:00pm - 5:00pm

Location: Unitobler, F021 (104 seats, 126 m²)

Presentations
3:00pm - 5:00pm

Algebraic and geometric methods in optimization.

Chair(s): Jesus A. De Loera (University of California, Davis, United States of America), Rekha Thomas (University of Washington)

Recently, advanced techniques from algebra and geometry have been used to prove remarkable results in optimization. Examples include polynomial algebra for non-convex polynomial optimization problems, tools from combinatorial geometry such as Helly's theorem to analyze and solve stochastic programs through sampling, and ideal bases to find optimality certificates. Test-set augmentation algorithms for integer programming, involving Graver sets for block-structured integer programs, likewise build on concepts from commutative algebra. In this session, experts will present a wide range of results that illustrate the power of these methods and their connections to applied algebra and geometry.

 

(25 minutes per presentation, including questions, followed by a 5-minute break; if there are x < 4 talks, the first x slots are used unless indicated otherwise)

 

Integer optimization from the perspective of subdeterminants

Robert Weismantel
ETH Zurich, Switzerland

For an integer optimization problem (IP), one important data parameter is the maximum absolute value of the determinants of the square submatrices of the constraint matrix. We present recent developments on this topic.
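
As a purely illustrative companion to this abstract (not part of the talk): the parameter in question is the largest absolute determinant over all square submatrices of the constraint matrix. The brute-force Python sketch below computes it for a small integer matrix; the example matrix and the function name are our own choices.

from itertools import combinations
import numpy as np

def max_abs_subdeterminant(A):
    # enumerate all k x k submatrices and track the largest |det|
    m, n = A.shape
    delta = 0
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))  # integer for an integer matrix
                delta = max(delta, abs(d))
    return delta

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(max_abs_subdeterminant(A))  # 2 for this example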

 

The Minimum Euclidean-Norm Point in a Convex Polytope: Wolfe's Combinatorial Algorithm is Exponential

Jamie Haddock
Department of Mathematics, UCLA, USA

The complexity of Philip Wolfe's method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. The method is important because it is used as a subroutine in one of the most practical algorithms for submodular function minimization. We present the first example on which Wolfe's method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly polynomial time to the minimum-norm point problem over a simplex. This is joint work with J.A. De Loera and L. Rademacher.
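
For illustration only (this is not Wolfe's combinatorial algorithm, and the points below are made up): the minimum Euclidean-norm point problem over the convex hull of points p_1, ..., p_n can be written as the quadratic program min ||sum_i x_i p_i||^2 subject to x >= 0 and sum(x) = 1, and handed to a generic solver.

import numpy as np
from scipy.optimize import minimize

# rows of P are the points whose convex hull we consider (illustrative data)
P = np.array([[2.0, 1.0],
              [1.0, 2.0],
              [3.0, 3.0]])
n = P.shape[0]

# minimize || sum_i x_i p_i ||^2 over the probability simplex
objective = lambda x: float(np.sum((P.T @ x) ** 2))
res = minimize(objective,
               x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}])
print(P.T @ res.x)  # approximately (1.5, 1.5): the closest point of this hull to the origin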

 

Matrices of bounded factor width and sums of $k$-nomial squares

Joao Gouveia
University of Coimbra, Portugal

In 2004, Boman et al. introduced the concept of the factor width of a semidefinite matrix $A$. This is the smallest $k$ for which one can write the matrix as $A=VV^T$ with each column of $V$ containing at most $k$ non-zeros. The cones of matrices of bounded factor width give a hierarchy of inner approximations to the PSD cone. In the polynomial optimization context, a generalized Hankel matrix of a polynomial having factor width $k$ corresponds to the polynomial being a sum of squares in which each polynomial being squared has support of size at most $k$.

This connection has recently been explored by Ahmadi and Majumdar to introduce SDSOS, a sum-of-squares hierarchy based on sums of binomial squares (sobs), although the study of sobs goes back to Robinson, Choi, Lam and Reznick, and ultimately to Hurwitz. In this presentation we will prove some results on the geometry of the cones of matrices of bounded factor width and their duals, and use them to derive new results on the existence of certificates of nonnegativity by sums of $k$-nomial squares.

Joint work with Mina Saee Bostanabad and Alexander Kovačec
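
To make the factor width definition above concrete, here is a small sketch of our own (not from the talk): every column of the matrix V below has at most two nonzero entries, so A = V V^T is a sum of rank-one PSD matrices, each supported on at most two indices, and hence has factor width at most 2. Matrices of factor width at most 2 are exactly the scaled diagonally dominant matrices underlying the SDSOS hierarchy mentioned above.

import numpy as np

# each column of V has at most 2 nonzeros, so A = V V^T has factor width <= 2
V = np.array([[1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
A = V @ V.T
print(A)
print(np.all(np.linalg.eigvalsh(A) >= -1e-9))  # True: A is positive semidefinite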

 

A friendly smoothed analysis of the simplex method

Sophie Huiberts
CWI, Amsterdam

The simplex method for linear programming is known for its good performance in practice, although its theoretical worst-case performance is exponential in the input size. The smoothed analysis framework of Spielman and Teng (2001) aims to explain this good practical performance. In our work, we improve on all previous smoothed complexity results for the simplex algorithm, in all parameter regimes, with a substantially simpler and more general proof. This is joint work with Daniel Dadush. https://arxiv.org/abs/1711.05667