Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
MS163: Theory and methods for tensor decomposition
Time:
Wednesday, 10/Jul/2019:
10:00am - 12:00pm

Location: Unitobler, F023
104 seats, 126 m²

Presentations
10:00am - 12:00pm

Theory and methods for tensor decomposition

Chair(s): Tamara Kolda (Sandia National Laboratories), Elina Robeva (MIT)

Tensors are a ubiquitous data structure with applications in numerous fields, including machine learning and big data. Decomposing a tensor is important for understanding the structure of the data it represents. Furthermore, there are different ways to decompose tensors, each of which poses its own theoretical and computational challenges and has its own applications. In our minisymposium, we will bring together researchers from different communities to share their recent research discoveries in the theory, methods, and applications of tensor decomposition.

 

(Each presentation lasts 25 minutes, including questions, and is followed by a 5-minute break; if there are x < 4 talks, the first x slots are used unless indicated otherwise.)

 

A nearly optimal algorithm to decompose binary forms

Elias Tsigaridas
Inria Paris

Symmetric tensor decomposition is equivalent to Waring’s problem for homogeneous polynomials; that is, to write a homogeneous polynomial in n variables of degree D as a sum of D-th powers of linear forms, using the minimal number of summands. We focus on decomposing binary forms, a problem that corresponds to the decomposition of symmetric tensors of dimension 2 and order D. We present the first quasi-linear algorithm to decompose binary forms. It computes a symbolic decomposition in O(M(D)log(D)) arithmetic operations, where M(D) is the complexity of multiplying two polynomials of degree D. We also bound the algebraic degree of the problem by min(rank, D − rank + 1) and show that this bound is tight.
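As a small worked example (my own illustration, not taken from the abstract): the binary quadratic xy admits a rank-2 Waring decomposition, i.e. it can be written as a combination of two squares of linear forms,

```latex
\[
xy \;=\; \tfrac{1}{4}\,(x+y)^{2} \;-\; \tfrac{1}{4}\,(x-y)^{2},
\]
```

which is easily checked by expanding the right-hand side.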

 

On convergence of matrix and tensor approximate diagonalization algorithms by unitary transformations

Konstantin Usevich1, Jianze Li2, Pierre Comon3
1CNRS and University of Lorraine, 2No affiliation, 3CNRS, Université Grenoble Alpes

Jacobi-type methods are commonly used in signal processing for approximate diagonalization of complex matrices and tensors by unitary transformations. In this paper, we propose a gradient-based Jacobi algorithm and prove several convergence results for this algorithm. We establish global convergence rates for the norm of the gradient and prove local linear convergence under mild conditions. The convergence results also apply to the case of approximate orthogonal diagonalization of real-valued tensors.
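The talk concerns Jacobi-type methods for tensors; as background, here is a minimal sketch of the classical matrix special case, where Givens rotations zero one off-diagonal entry at a time. This is my own illustration (the function name `jacobi_symmetric` and the fixed sweep count are assumptions, not the authors' algorithm):

```python
import numpy as np

def jacobi_symmetric(A, sweeps=10):
    """Classical Jacobi diagonalization of a real symmetric matrix.

    Each Givens rotation annihilates one off-diagonal entry A[p, q];
    the accumulated rotations form an orthogonal Q such that
    Q.T @ A @ Q is approximately diagonal.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Angle solving tan(2*theta) = 2*A[p,q] / (A[p,p] - A[q,q]),
                # which zeroes the (p, q) entry after rotation.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = -s, s
                A = G.T @ A @ G
                Q = Q @ G
    return Q, A
```

For matrices this converges to an exact eigendecomposition; the point of the talk is that for tensors only approximate diagonalization is possible, and convergence of the analogous iterations is the delicate question.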

 

Non-linear singular value decomposition

Mariya Ishteva1, Philippe Dreesen2
1Free University Brussels, 2Vrije Universiteit Brussel (VUB)

In data mining, machine learning, and signal processing, among others, many tasks such as dimensionality reduction, feature extraction, and classification are often based on the singular value decomposition (SVD). As a result, the usage and computation of the SVD have been extensively studied and are well understood. However, as current models take into account the non-linearity of the world around us, non-linear generalizations of the SVD are needed. We present our ideas on this topic. In particular, we aim at decomposing nonlinear multivariate vector functions with three goals in mind: (1) to provide an interpretation of the underlying processes or phenomena, (2) to simplify the model by reducing the number of parameters, and (3) to preserve its descriptive power. We use tensor techniques to achieve these goals and briefly discuss the potential of this approach for inverting nonlinear functions and curve fitting.
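For context on the linear decomposition being generalized: the dimensionality-reduction use of the SVD mentioned above is truncation to the leading singular triplets, which by the Eckart–Young theorem gives the best low-rank approximation. A standard numpy sketch (my own illustration; `best_rank_r` is an assumed name, not from the talk):

```python
import numpy as np

def best_rank_r(A, r):
    """Best rank-r approximation of A in the Frobenius (and spectral)
    norm, obtained by truncating the SVD (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep the r leading singular triplets and recombine.
    return U[:, :r] * s[:r] @ Vt[:r, :]
```

The nonlinear generalizations discussed in the talk aim to carry this kind of model simplification over from linear maps to multivariate vector functions.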

 

A symmetrization approach to hypermatrix SVD

Edinah Gnang
Johns Hopkins University

We describe how to derive the third-order hypermatrix SVD from the spectral decomposition of third-order hypermatrices obtained as products of transposes of a given third-order hypermatrix.