Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
MS148, part 1: Algebraic neural coding
Time:
Tuesday, 09/Jul/2019:
10:00am - 12:00pm

Location: Unitobler, F-105
53 seats, 70 m²

Presentations
10:00am - 12:00pm

Algebraic Neural Coding

Chair(s): Nora Youngs (Colby College), Zvi Rosen (Florida Atlantic University, United States of America)

Neuroscience aims to decipher how the brain represents information via the firing of neurons. Place cells of the hippocampus have been demonstrated to fire in response to specific regions of Euclidean space. Since this discovery, a wealth of mathematical exploration has described connections between the algebraic and combinatorial features of the firing patterns and the shape of the space of stimuli triggering the response. These methods generalize to other types of neurons with similar response behavior. At the SIAM AG meeting, we hope to bring together a group of mathematicians doing innovative work in this exciting field. This will allow experts in commutative algebra, combinatorics, geometry and topology to connect and collaborate on problems related to neural codes, neural rings, and neural networks.

 

(25 minutes for each presentation, including questions, followed by a 5-minute break; if a session has x < 4 talks, the first x slots are used unless indicated otherwise)

 

Flexible Motifs in Threshold-Linear Networks

Carina Curto
The Pennsylvania State University

Threshold-linear networks (TLNs) are popular models of recurrent networks, widely used to describe neural activity in the brain. The state space in these networks is naturally partitioned into regions defined by an associated hyperplane arrangement. The combinatorial properties of this arrangement, as captured by an oriented matroid, provide strong constraints on the network's dynamics. In recent work, we have studied how the graph of a TLN constrains the possible fixed points of the network by providing constraints on the combinatorics of the hyperplane arrangement. Here we study the case of flexible motifs, where the graph allows multiple possibilities for the set of fixed points FP(W), depending on the choice of connectivity matrix W. In particular, we find that mutations of oriented matroids correspond naturally to bifurcations in the dynamics. Flexible motifs are interesting from a neuroscience perspective because they allow us to study the effects of sensory and state-dependent modulation on the dynamics of neural ensembles.
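
To make the model concrete, here is a minimal simulation sketch (an illustration, not code from the talk), assuming the standard threshold-linear dynamics dx/dt = -x + [Wx + b]_+, where [.]_+ denotes elementwise rectification; the connectivity matrix W, drive b, and initial condition below are hypothetical.

import numpy as np

def simulate_tln(W, b, x0, dt=0.01, steps=5000):
    # Euler integration of the threshold-linear dynamics
    # dx/dt = -x + max(0, W @ x + b).
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        trajectory.append(x.copy())
    return np.array(trajectory)

# Hypothetical 3-neuron network with cyclic structure (values chosen for illustration).
W = np.array([[ 0.0, -0.5, -1.5],
              [-1.5,  0.0, -0.5],
              [-0.5, -1.5,  0.0]])
b = np.ones(3)          # constant external drive
x0 = [0.1, 0.0, 0.0]    # initial condition near the origin

traj = simulate_tln(W, b, x0)
print("state after simulation:", np.round(traj[-1], 3))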

 

Robust Motifs in Threshold-Linear Networks

Katherine Morrison
University of Northern Colorado

Networks of neurons in the brain often exhibit complex patterns of activity that are shaped by the intrinsic structure of the network. How does the precise connectivity structure of the network influence these patterns of activity? We address this question in the context of threshold-linear networks, a commonly used model of recurrent neural networks. We identify constraints on the dynamics that arise from network architecture and are independent of the specific values of connection strengths. By appealing to an associated hyperplane arrangement, we find families of robust motifs, which are graphs where the collection of fixed points of the corresponding networks is fully determined by the graph structure, irrespective of the particular connection strengths. These motifs provide a direct link between network structure and function, offering new insights into how connectivity may shape dynamics in real neural circuits.
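
As a rough sketch of the underlying fixed-point combinatorics (an illustration using the standard TLN fixed-point conditions, not code from the talk), one can enumerate the supports in FP(W) by brute force: a support sigma belongs to FP(W) if the linear equations (I - W_sigma) x_sigma = b_sigma have a strictly positive solution and every neuron outside sigma stays below threshold. Comparing the output across different choices of W compatible with the same graph illustrates what it means for a motif to be robust; the example W and b below are hypothetical.

import itertools
import numpy as np

def fixed_point_supports(W, b, tol=1e-9):
    # Enumerate supports sigma such that x' = -x + max(0, W @ x + b)
    # has a fixed point whose set of active neurons is exactly sigma.
    n = len(b)
    supports = []
    for k in range(1, n + 1):
        for sigma in itertools.combinations(range(n), k):
            idx = list(sigma)
            A = np.eye(k) - W[np.ix_(idx, idx)]
            try:
                x_sigma = np.linalg.solve(A, b[idx])
            except np.linalg.LinAlgError:
                continue
            if np.any(x_sigma <= tol):
                continue                      # active neurons must be strictly positive
            x = np.zeros(n)
            x[idx] = x_sigma
            off = [i for i in range(n) if i not in sigma]
            if np.all(W[off] @ x + b[off] <= tol):
                supports.append(sigma)        # inactive neurons stay below threshold
    return supports

# Hypothetical connectivity and drive (same form as the sketch above).
W = np.array([[ 0.0, -0.5, -1.5],
              [-1.5,  0.0, -0.5],
              [-0.5, -1.5,  0.0]])
b = np.ones(3)
print("fixed-point supports:", fixed_point_supports(W, b))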

 

An Algebraic Perceptron and the Neural Ideals

Vladimir Itskov
The Pennsylvania State University

Feedforward neural networks have been widely used in machine learning and theoretical neuroscience. The paradigm of "deep learning", which makes use of many consecutive layers of feedforward networks, has achieved impressive engineering success in the past two decades. However, a theoretical understanding of many-layer feedforward networks is still mostly lacking. While each layer of a feedforward network can be understood via the geometry of a hyperplane arrangement, a satisfactory understanding of the mathematical properties of many-layered networks remains elusive.
We propose a generalization of the perceptron, i.e. a single layer of a feedforward network. This perceptron is best described via a neural ideal, i.e. an ideal in the ring of functions on the Boolean lattice. It turns out that many machine-learning problems can be converted into purely algebraic problems about neural ideals. This opens up a new avenue for developing a commutative-algebra-based toolbox for machine learning. In my talk I will explain the connection between these two subjects and also give a concrete example of translating a machine-learning problem into commutative algebra.
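
For readers unfamiliar with neural ideals, the following small sketch (an illustration using the standard definition of the neural ideal of a combinatorial code, not the speaker's code) writes down its generators: for every non-codeword v, one includes the indicator pseudo-monomial rho_v = prod_{v_i = 1} x_i * prod_{v_j = 0} (1 - x_j), which vanishes on every codeword. The code C below is hypothetical.

from itertools import product
import sympy as sp

def neural_ideal_generators(code, n):
    # Generators of the neural ideal J_C: one indicator pseudo-monomial
    # rho_v for each non-codeword v in {0,1}^n.
    x = sp.symbols(f"x1:{n + 1}")          # symbols x1, ..., xn
    gens = []
    for v in product((0, 1), repeat=n):
        if v in code:
            continue
        rho = sp.Integer(1)
        for xi, vi in zip(x, v):
            rho *= xi if vi else (1 - xi)  # rho_v equals 1 exactly at v
        gens.append(sp.expand(rho))
    return gens

# Hypothetical code on 3 neurons, codewords written as 0/1 tuples.
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)}
for g in neural_ideal_generators(C, 3):
    print(g)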

 

Properties of Hyperplane Neural Codes

Alexander Kunin
The Pennsylvania State University

The firing patterns of neurons in sensory systems give rise to combinatorial codes, i.e. subsets of the Boolean lattice. These firing patterns represent the abstract intersection patterns of subsets of a Euclidean space, and an open problem is identifying the combinatorial properties of neural codes which distinguish the geometric properties of the corresponding subsets. We introduce the polar complex, a simplicial complex associated to any combinatorial code, and relate its associated Stanley-Reisner ring to the ring of $\mathbb{F}_2$-valued functions on the code to identify some distinguishing characteristics of codes arising from feedforward neural networks. In particular, we show the associated ring is Cohen-Macaulay, and make connections to other questions in the study of Boolean functions.
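
As a small sketch of the central object (a reconstruction from the abstract with hypothetical naming, not the speaker's code), the polar complex of a code on n neurons can be built with one facet per codeword, pairing a vertex x_i for each active neuron with a vertex y_i for each silent one; every facet then has size n.

from itertools import combinations

def polar_complex_facets(code, n):
    # One facet per codeword: vertex "x_i" if neuron i fires in the codeword,
    # vertex "y_i" if it is silent.
    facets = []
    for c in code:
        facet = frozenset(f"x{i + 1}" if c[i] else f"y{i + 1}" for i in range(n))
        facets.append(facet)
    return facets

def faces(facets):
    # All faces of the simplicial complex generated by the given facets.
    all_faces = set()
    for F in facets:
        for k in range(len(F) + 1):
            all_faces.update(map(frozenset, combinations(sorted(F), k)))
    return all_faces

# Hypothetical code on 3 neurons, codewords as 0/1 tuples.
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)}
facets = polar_complex_facets(C, 3)
print(len(facets), "facets;", len(faces(facets)), "faces in total")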