Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
MS182, part 2: Matrix and tensor optimization
Time:
Wednesday, 10/Jul/2019:
10:00am - 12:00pm

Location: Unitobler, F021
104 seats, 126 m²

Presentations
10:00am - 12:00pm

Matrix and tensor optimization

Chair(s): Max Pfeffer (Max Planck Institute MiS, Leipzig, Germany), André Uschmajew (Max Planck Institute MiS, Leipzig, Germany)

Matrix and tensor optimization has important applications in modern data analysis and high dimensional problems. Specifically, low rank approximations and spectral properties are of interest. Due to their multilinear parametrization, sets of low rank matrices and tensors exhibit interesting, and sometimes challenging, geometric and algebraic structures. Studying such sets in the context of algebraic geometry is therefore not only helpful but also necessary for the development of efficient optimization algorithms and a rigorous analysis thereof. In this respect, the area of matrix and tensor optimization relates to the field of applied algebraic geometry through the problems it addresses and some of the concepts it employs. In this minisymposium, we wish to bring the latest developments in both of these aspects to attention.

 

(25 minutes for each presentation, including questions, followed by a 5-minute break; in case of fewer than four talks (x < 4), the first x slots are used unless indicated otherwise)

 

Matrix and Tensor Factorizations with Nonnegativity

Eugene Tyrtyshnikov1, Elena Scherbakova2
1Institute of Numerical Mathematics of Russian Academy of Sciences, Lomonosov Moscow State University, 2Lomonosov Moscow State University

In this talk we survey recent essential developments of the ideas of low-rank matrix approximation and consider their extensions to tensors. The practical importance of this approach lies in its paradigm of using only a small part of the matrix entries, which allows one to construct a sufficiently accurate approximation quickly for "big data" matrices that cannot fit in any available computer memory and are accessed implicitly through calls to a procedure producing any individual entry on demand. We consider how this approach can be used when nonnegativity of the elements must be maintained.
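As an illustration of the entrywise paradigm described in the abstract (not material from the talk itself), the following sketch builds a cross (skeleton) approximation of a nonnegative low-rank matrix from a handful of rows and columns, chosen by greedy pivoting on the residual; the matrix is formed explicitly here only so the example is self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
# Synthetic nonnegative rank-r matrix. In the "big data" setting it would
# only be accessed entrywise through a procedure, never stored in full.
A = rng.random((n, r)) @ rng.random((r, n))

# Greedy cross approximation: pick r pivot rows I and columns J by
# taking the largest-magnitude entry of the running residual.
I, J = [], []
E = A.copy()
for _ in range(r):
    i, j = np.unravel_index(np.argmax(np.abs(E)), E.shape)
    I.append(i); J.append(j)
    E = E - np.outer(E[:, j], E[i, :]) / E[i, j]

# Skeleton decomposition A ≈ C U^{-1} R from the selected cross.
C, U, R = A[:, J], A[np.ix_(I, J)], A[I, :]
A_cross = C @ np.linalg.solve(U, R)
err = np.linalg.norm(A - A_cross) / np.linalg.norm(A)
```

Since A has exact rank r, the cross built from r well-chosen rows and columns reproduces it to machine precision; note that the skeleton factors themselves need not be entrywise nonnegative, which is exactly the difficulty the talk addresses.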

 

Decompositions and optimizations of conjugate symmetric complex tensors

Zhening Li
University of Portsmouth, UK

Conjugate partial-symmetric (CPS) tensors are the higher-order generalization of Hermitian matrices. Mirroring the role played by Hermitian matrices in matrix theory and quadratic optimization, CPS tensors have attracted growing interest in tensor theory and optimization, particularly in applications including radar signal processing and quantum entanglement. We study CPS tensors with a focus on ranks, rank-one decompositions, and optimization over the spherical constraint. We propose a constructive algorithm, with proof, to decompose any CPS tensor into a sum of rank-one CPS tensors. Three types of ranks for CPS tensors are defined and shown to differ in general, which invalidates the conjugate version of Comon's conjecture. We then study rank-one approximations and matricizations of CPS tensors. By carefully unfolding CPS tensors to Hermitian matrices, rank-one equivalence can be preserved. This enables us to develop new convex optimization models and algorithms to compute best rank-one approximations of CPS tensors. Numerical experiments on various data are performed to demonstrate the capability of our methods.
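To make the Hermitian-matrix analogy in the abstract concrete (this is the matrix base case, not the talk's CPS-tensor machinery), the sketch below computes the best rank-one approximation of a Hermitian matrix in the Frobenius norm, which is given by the eigenpair of largest absolute eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (B + B.conj().T) / 2   # Hermitian matrix: the order-2 analogue of a CPS tensor

# Best rank-one approximation of a Hermitian matrix comes from the
# eigenpair with the largest |eigenvalue| (eigenvalues of H are real).
w, V = np.linalg.eigh(H)
k = np.argmax(np.abs(w))
H1 = w[k] * np.outer(V[:, k], V[:, k].conj())

# The residual norm is determined by the discarded eigenvalues.
resid = np.linalg.norm(H - H1)
```

The rank-one factor here is itself Hermitian (real multiple of v v*), just as the talk's decomposition expresses a CPS tensor as a sum of rank-one CPS tensors; the unfolding result mentioned in the abstract lifts this matrix picture to higher order.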

 

Chebyshev polynomials and best rank-one approximation ratio

Khazhgali Kozhasov
Max Planck Institute MiS, Leipzig, Germany

We establish a new extremal property of the classical Chebyshev polynomials in the context of the theory of rank-one approximations of tensors. We also give some necessary conditions for a tensor to be a minimizer of the ratio of spectral and Frobenius norms. This is joint work with Andrei Agrachev and André Uschmajew.
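As a numerical illustration of the quantity in the talk title (a sketch, not the talk's method): the best rank-one approximation ratio involves the ratio of the spectral norm to the Frobenius norm of a tensor. For a single random third-order tensor, a locally optimal rank-one fit, and hence a lower bound on its spectral norm, can be estimated with the higher-order power method:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
T = rng.standard_normal((n, n, n))

# Higher-order power method: alternately update each rank-one factor
# while holding the others fixed. Converges to a local maximizer of
# <T, u ⊗ v ⊗ w> over unit vectors.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
w = rng.standard_normal(n); w /= np.linalg.norm(w)
for _ in range(200):
    u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
    v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)

sigma = np.einsum('ijk,i,j,k->', T, u, v, w)  # ≈ spectral norm (local estimate)
ratio = abs(sigma) / np.linalg.norm(T)        # spectral / Frobenius norm ratio
```

By Cauchy-Schwarz the ratio is at most 1; the extremal question studied in the talk concerns the tensors for which this ratio is smallest, where Chebyshev polynomials enter.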

 

Optimization methods for computing low rank eigenspaces

André Uschmajew
Max Planck Institute MiS, Leipzig, Germany

We consider the task of approximating the eigenspace belonging to the lowest eigenvalues of a self-adjoint operator on a space of matrices, under the condition that it is spanned by low rank matrices sharing a common row space of small dimension. Such a problem arises, for example, in the DMRG algorithm in quantum chemistry. We propose a Riemannian optimization method based on trace minimization that takes the orthogonality and low rank constraints into account simultaneously, and that shows better numerical results in certain scenarios compared to other current methods. This is joint work with Christian Krumnow and Max Pfeffer.
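To illustrate the trace-minimization viewpoint underlying the talk (a toy sketch without the low rank constraint, which is the talk's actual contribution): the eigenspace of the p smallest eigenvalues of a symmetric operator minimizes tr(XᵀAX) over orthonormal frames X, and can be found by Riemannian gradient descent on the Stiefel manifold with a QR retraction:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 3
# Symmetric operator with known spectrum 1, 2, ..., n for easy checking.
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.arange(1.0, n + 1)) @ Q.T

# Gradient descent on tr(X^T A X) over the Stiefel manifold {X : X^T X = I}.
X = np.linalg.qr(rng.standard_normal((n, p)))[0]
for _ in range(2000):
    G = 2 * A @ X                      # Euclidean gradient of the trace
    G = G - X @ (X.T @ G)              # project onto the tangent space at X
    X = np.linalg.qr(X - 0.01 * G)[0]  # QR retraction back to the manifold

val = np.trace(X.T @ A @ X)            # converges to 1 + 2 + 3 = 6
```

The method in the talk enriches this basic scheme by additionally enforcing that the frame is built from low rank matrices with a shared row space, so that both constraint sets are handled within one Riemannian framework.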