Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MS194: Latent graphical models
Time:
Thursday, 11/Jul/2019:
10:00am - 12:00pm

Location: Unitobler, F-121
52 seats, 100 m²

Presentations
10:00am - 12:00pm

Latent graphical models

Chair(s): Piotr Zwiernik (Universitat Pompeu Fabra, Spain)

Algebro-geometric methods have been extensively applied to study probabilistic graphical models. They became particularly useful in the context of graphical models with hidden variables (latent graphical models). Latent variables appear in graphical models in several important contexts: to represent processes that cannot be observed or measured (e.g. economic activity in business cycle dating, ancestral species in phylogenetics), in causal modelling (confounders), and in machine learning (deep learning, dimension reduction).

Graphical models with latent variables lead to sophisticated geometry problems. The simplest examples, like the latent class model, link directly to secant varieties of the Segre variety and to low-rank tensors. Understanding the underlying geometry proved to be the driving force behind the design of new learning algorithms and was essential to understanding the fundamental limits of these models. This minisession features three speakers who have been leading this research in the last couple of years.
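
The link between latent class models and low-rank tensors mentioned above can be made concrete in a small numerical sketch (a hypothetical toy instance, not from the session materials): with a hidden class variable H taking k values and observed variables conditionally independent given H, the joint probability tensor is a sum of k outer products, hence lies on the k-th secant variety of the Segre variety, and every matricization has matrix rank at most k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent class model: three observed binary variables X1, X2, X3,
# conditionally independent given a hidden class H with k = 2 states.
# The joint tensor is  P = sum_h pi[h] * p1[:,h] (x) p2[:,h] (x) p3[:,h],
# a nonnegative tensor of rank at most k.
k = 2
pi = rng.dirichlet(np.ones(k))                        # mixing weights over H
p1, p2, p3 = (rng.dirichlet(np.ones(2), size=k).T for _ in range(3))

P = np.einsum("h,ah,bh,ch->abc", pi, p1, p2, p3)      # shape (2, 2, 2)

# Tensor rank <= k forces every flattening to have matrix rank <= k;
# here the {1}-vs-{2,3} flattening is a 2 x 4 matrix.
flat = P.reshape(2, 4)
print(np.isclose(P.sum(), 1.0))           # True: a valid joint distribution
print(np.linalg.matrix_rank(flat) <= k)   # True: rank bound from the latent class
```

The rank bound on flattenings is exactly the kind of algebraic constraint that secant-variety geometry turns into identifiability and hardness results for these models.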


(25 minutes per presentation, including questions, followed by a 5-minute break; if there are x < 4 talks, the first x slots are used unless indicated otherwise)


Latent-variable graphical modeling with generalized linear models

Venkat Chandrasekaran
California Institute of Technology

We describe a convex optimization framework for fitting latent-variable graphical models in the class of generalized linear models. We discuss scaling laws under which our framework succeeds in identifying a population model with high probability as well as experimental results with real data. We also highlight natural tradeoffs in our setup between computational resources and sample size. (Joint with Armeen Taeb and Parikshit Shah)
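
For orientation, the Gaussian precursor of this framework (the latent-variable graphical model selection program of Chandrasekaran, Parrilo, and Willsky) fits a sparse-plus-low-rank decomposition of the precision matrix; the abstract's framework generalizes this template beyond the Gaussian likelihood. A sketch of the Gaussian-case program, stated here as background (the symbols below are standard in that line of work, not taken from the abstract):

```latex
(\hat S, \hat L) \;\in\; \operatorname*{arg\,min}_{S,\,L}\;
  -\ell(S - L;\, \hat\Sigma) \;+\; \lambda_n \bigl( \gamma \|S\|_1 + \operatorname{tr}(L) \bigr)
\qquad \text{s.t.} \quad S - L \succ 0,\;\; L \succeq 0,
```

where \(\ell(K; \hat\Sigma) = \log\det K - \operatorname{tr}(K \hat\Sigma)\) is the Gaussian log-likelihood at the sample covariance \(\hat\Sigma\), the sparse matrix \(S\) encodes the conditional graph among observed variables, the low-rank matrix \(L\) captures the effect of a few latent variables, \(\lambda_n\) scales with the sample size, and \(\gamma\) trades off sparsity against rank.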


Representation of Markov kernels with deep graphical models

Guido Montúfar
University of California Los Angeles

We revisit the topic of the representational power of deep probabilistic graphical models. We consider directed and undirected models with multiple layers of finite-valued hidden variables. We discuss relations between directed and undirected models, as well as between deep and shallow models, in terms of the number of layers, and of variables within each layer, that are necessary and sufficient to express any Markov kernel.


Conditional independence statements with hidden variables

Fatemeh Mohammadi
Bristol University

Conditional independence is an important tool in statistical modeling, as, for example, it gives a statistical interpretation to graphical models. In causal reasoning, it is important to know what constraints on the observed variables are caused by hidden variables. In general, given a sub-family of random variables satisfying a list of conditional independence (CI) statements, it is difficult to say which constraints are implied by the CI statements on this sub-family. However, the CI statements correspond to some determinantal conditions on the tensor of joint probabilities of the observed random variables. Hence, the algebraic analogue of this question relates to determinantal varieties and their irreducible decompositions. In a joint project with Ollie Clarke and Johannes Rauh, we generalize the intersection axiom for CI statements, and we study a family of CI statements whose corresponding variety and its irreducible components are all determinantal varieties.
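
The determinantal reading of a CI statement mentioned in the abstract can be illustrated with a small numerical sketch (a hypothetical toy example, not from the talk): for discrete variables, X ⊥ Y | Z says that every Z-slice of the joint probability tensor factors as an outer product, i.e. has matrix rank at most 1, so all of its 2 × 2 minors vanish.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a joint distribution satisfying X _||_ Y | Z:
#   P(x, y, z) = P(z) P(x | z) P(y | z).
nz, nx, ny = 2, 3, 3
pz = rng.dirichlet(np.ones(nz))
px_z = rng.dirichlet(np.ones(nx), size=nz)    # row z is P(x | z)
py_z = rng.dirichlet(np.ones(ny), size=nz)    # row z is P(y | z)
P = np.einsum("z,zx,zy->xyz", pz, px_z, py_z)  # shape (nx, ny, nz)

# Each Z-slice is an outer product, hence rank 1, hence all 2 x 2 minors vanish:
for z in range(nz):
    S = P[:, :, z]
    print(np.linalg.matrix_rank(S))            # 1 for every z
    minors = [np.linalg.det(S[np.ix_([i, j], [a, b])])
              for i in range(nx) for j in range(i + 1, nx)
              for a in range(ny) for b in range(a + 1, ny)]
    assert np.allclose(minors, 0.0)
```

The vanishing minors are precisely the determinantal conditions the abstract refers to; the variety they cut out, and its irreducible components, are the objects studied in the project with Clarke and Rauh.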