Program

The outline of the program below is mostly correct, but certain details may change.

Time           Wednesday                Thursday                 Friday
08:15 – 09:00  Registration
09:00 – 10:00  Sporns (pt. 1)           Sporns (pt. 2)           Sporns (pt. 3)
10:00 – 11:00  Stolz                    Kanari                   Rybakken
11:00 – 11:45  Coffee break             Coffee break             Coffee break
11:45 – 12:45  Jost                     Perea                    Giusti
12:45 – 14:15  Lunch                    Lunch                    Lunch
14:15 – 15:15  Curto (pt. 1)            Curto (pt. 2)            Curto (pt. 3)
15:15 – 16:00  Coffee break             Coffee break             Coffee break
16:00 – 17:00  Expert                   Wagner                   Leinster
17:00 – 18:00  Levi                     Turner
18:30 – 20:30  Apéro & poster session
19:30 – ∞                               Workshop dinner

Talks

Topology in neuroscience: some examples from neural coding and neural networks

Carina Curto, Penn State — slides online

In this series of talks I will give a sampling of examples illustrating how topological ideas arise in neuroscience. First, I'll tell you about some interesting neurons, such as place cells and grid cells, and the topology associated to their neural activity. I will also explain how convex neural codes capture additional features of the stimulus space, such as intrinsic dimension. Second, I'll explain how the statistics of persistent homology can be used to detect - or reject - geometric organization in neural activity data, using examples from hippocampus and olfaction. In the third and final talk, I'll transition to studying attractor neural networks, where we will see how network dynamics are determined by topological features of special network motifs and their embeddings in the larger network.

Localisation and embedding of topological features

Paul Expert, Imperial College London — slides online

The characterisation of topological objects with respect to their environment, or their embedding in that environment, has a long history in graph theory: node metrics are a well-known example. This has also been extended to mesoscopic structures such as clusters of nodes or communities. It is therefore natural to try to extend this approach to more abstract mesoscopic objects, such as holes in simplicial complexes. The main obstacle is the localisation problem, as holes can have multiple representatives. It is nevertheless an important question, as our ability to attach a meaning to holes ultimately relies on their "physical" localisation. This is particularly important in neuroscience, for example for the identification of certain types of biomarkers. In this talk, I will present our attempts at attaching an embedding and an interpretation to holes.

Path space cochains for neural time series analysis

Chad Giusti, University of Delaware — slides online

One of the fundamental questions in the study of complex neural systems is how to assign causality to the activity patterns of individual units (neurons, cortical stacks, anatomical regions, etc.). Standard tools for causality inference rely on some level of model for the underlying system, and in practice require strong assumptions, such as stationarity and Gaussian noise, that are clearly violated by the systems being studied. Here, we describe a semantic interpretation of Chen's iterated-integral cochain model for path spaces as a model-free measure of potential influence between observed time series, which can in turn be used to infer causality under appropriate experimental conditions. We demonstrate this method in the context of experimental transcranial stimulation/EEG data and discuss generalizations to other pertinent mapping spaces.
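The lowest-order nontrivial term of Chen's construction already conveys the idea: the pairwise iterated integral of two time series is a signed ("lead-lag") area whose sign indicates which series tends to lead the other. Below is a minimal numpy sketch of this second-order term only, not the full cochain model of the talk.

```python
import numpy as np

def lead_matrix(X):
    """Pairwise signed areas: the lowest-order (second) iterated
    integrals of a multivariate time series X of shape
    (n_series, n_times). A[i, j] > 0 suggests series i leads j."""
    dX = np.diff(X, axis=1)              # discrete increments dx(t)
    Xm = X[:, :-1]                       # left endpoint of each step
    # A_ij = 1/2 * sum_t (x_i dx_j - x_j dx_i), antisymmetric.
    return 0.5 * (Xm @ dX.T - dX @ Xm.T)

# Toy check: y is a delayed copy of x, so x should lead y.
t = np.linspace(0, 4 * np.pi, 400)
x, y = np.sin(t), np.sin(t - 0.5)
print(lead_matrix(np.vstack([x, y]))[0, 1] > 0)   # True
```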

Geometric methods for the analysis of neuroscience data

Jürgen Jost, Max Planck Institute for Mathematics in the Sciences — slides not shared

The structure of a network, from the neurosciences or elsewhere, is embodied in its connections. Therefore, formal approaches to network analysis should take the edges, and not the vertices, as the basic objects. Recently, inspired by Riemannian geometry, a new class of quantities, called Ricci curvatures, has been introduced that quantifies how an edge is connected inside a network, how important an edge is for information gathering and dispersal, and how it links the neighborhoods of its endpoints. We describe these concepts and present some applications to correlations between voxels in brain imaging data. This is ongoing work.
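Several notions of discrete Ricci curvature are in use (Ollivier's and Forman's are the most common); the talk does not say which variant it uses, so the sketch below shows the simplest one as an assumption: the combinatorial Forman-Ricci curvature of an unweighted graph without higher-dimensional cells, F(u, v) = 4 - deg(u) - deg(v).

```python
import networkx as nx

def forman_curvature(G):
    """Forman-Ricci curvature of each edge of an unweighted graph
    (no 2-cells): F(u, v) = 4 - deg(u) - deg(v). Strongly negative
    edges behave like bridges between well-connected regions."""
    return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

G = nx.karate_club_graph()
curv = forman_curvature(G)
print(min(curv.items(), key=lambda kv: kv[1]))   # most negative edge
```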

Understanding the shapes of neurons with algebraic topology

Lida Kanari, The Blue Brain Project — slides not shared

The morphological diversity of neurons supports the complex information-processing capabilities of biological neuronal networks. A major challenge in neuroscience has therefore been to reliably describe neuronal shapes with universal morphometrics that generalize across cell types and species. Inspired by algebraic topology, we have conceived a topological descriptor of trees that couples the topology of a tree's complex arborization with the geometric features of its structure, retaining more information than traditional morphometrics. The topological morphology descriptor (TMD) has proved very powerful in categorizing cortical neurons into concrete groups on morphological grounds, and has led to the discovery of two distinct classes of pyramidal cells in the human cortex. In this talk, I will describe the transformation of neuronal trees into persistence barcodes and how the TMD algorithm can be used for the computational generation of neuronal morphologies, enabling the reconstruction of large-scale networks.
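A minimal sketch of the elder-rule bookkeeping behind such a barcode is below, assuming a rooted tree with a scalar value on each node (e.g. radial distance from the soma); the published TMD algorithm and its software handle details omitted here, such as ties and the exact bar convention.

```python
def tmd_barcode(children, f, root):
    """Simplified TMD: persistence barcode of a rooted tree with
    respect to a node function f. children maps node -> list of
    children (leaves map to []). Each bar pairs a branch-point
    value with the leaf value of a sibling branch that dies there."""
    bars = []

    def climb(v):
        # Return the largest leaf value carried up through v.
        if not children[v]:
            return f[v]
        carried = [climb(c) for c in children[v]]
        winner = carried.index(max(carried))
        for i, val in enumerate(carried):
            if i != winner:                 # elder rule: losers die here
                bars.append((f[v], val))
        return carried[winner]

    bars.append((f[root], climb(root)))     # the surviving branch
    return bars

# Toy tree: root 0 splits into a short branch and a longer subtree.
children = {0: [1, 2], 1: [], 2: [3, 4], 3: [], 4: []}
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 5.0, 4: 3.0}
print(tmd_barcode(children, f, 0))   # [(2.0, 3.0), (0.0, 1.0), (0.0, 5.0)]
```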

The magnitude of graphs and finite metric spaces

Tom Leinster, University of Edinburgh — slides online

Magnitude is a numerical invariant defined in wide categorical generality, and therefore applicable to many kinds of mathematical object. In topology and algebra, magnitude is closely related to Euler characteristic. For subsets of Euclidean space, it combines classical geometric invariants such as volume, surface area and perimeter. But I will focus here on the cases of graphs and finite metric spaces. Here, it is less clear what magnitude "means", but it appears to convey useful information about dimensionality and number of clusters. There is even a magnitude homology theory available (due to Hepworth, Willerton and Shulman), lifting magnitude from a numerical to an algebraic invariant. I will give an overview.
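For a finite metric space the definition fits in a few lines: form the similarity matrix Z with Z_ij = exp(-d(x_i, x_j)), solve Zw = 1 for the weighting w, and sum the weights. A sketch (scaling the metric by t traces out the magnitude function):

```python
import numpy as np
from scipy.spatial.distance import cdist

def magnitude(points, t=1.0):
    """Magnitude of the finite metric space tX: solve Z w = 1 with
    Z_ij = exp(-t d(x_i, x_j)) and return the sum of weights."""
    Z = np.exp(-t * cdist(points, points))
    return np.linalg.solve(Z, np.ones(len(points))).sum()

X = np.random.default_rng(0).normal(size=(50, 2))
# Magnitude tends to 1 as t -> 0 (the points merge) and to the
# number of points as t -> infinity (the points separate).
for t in (0.1, 1.0, 10.0):
    print(t, magnitude(X, t))
```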

Complexes of Tournaments, Directionality Filtrations and Persistent Homology

Ran Levi, The University of Aberdeen — slides online

Complete graphs whose edges are oriented are referred to in the combinatorics literature as tournaments. We consider a family of semi-simplicial complexes, which we refer to as "tournaplexes", whose simplices are tournaments. In particular, given a directed graph G, we associate with it a "flag tournaplex": a tournaplex containing the directed flag complex of G, but also the geometric realisations of cliques that are not directed. We define two types of filtration on tournaplexes and, exploiting persistent homology, we observe that filtered flag tournaplexes provide finer means of distinguishing graph dynamics than the directed flag complex. We then demonstrate the power of these ideas by applying them to graph data arising from the Blue Brain Project's digital reconstruction of a rat's neocortex.
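As a small illustration of the objects involved (not of the talk's filtrations): the 3-cliques of a directed graph split into transitive tournaments, which are exactly the 2-simplices of the directed flag complex, and cyclic ones, which only the tournaplex keeps. The sketch assumes a digraph with no reciprocal connections.

```python
import random
import networkx as nx

def classify_triangles(D):
    """Split the 3-cliques of a digraph into transitive tournaments
    (2-simplices of the directed flag complex) and cyclic ones,
    which a flag tournaplex retains as well."""
    directed, cyclic = [], []
    cliques = nx.enumerate_all_cliques(D.to_undirected())
    for tri in (c for c in cliques if len(c) == 3):
        # A transitive 3-tournament has a source beating both others.
        source = any(all(D.has_edge(v, w) for w in tri if w != v)
                     for v in tri)
        (directed if source else cyclic).append(tuple(tri))
    return directed, cyclic

# Random orientation of an Erdos-Renyi graph: one arrow per edge.
rng = random.Random(0)
G = nx.gnp_random_graph(30, 0.25, seed=1)
D = nx.DiGraph((u, v) if rng.random() < 0.5 else (v, u)
               for u, v in G.edges())
d, c = classify_triangles(D)
print(len(d), "transitive and", len(c), "cyclic 3-cliques")
```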

Data coordinatization with classifying spaces

Jose Perea, Michigan State University — slides not shared

When dealing with complex high-dimensional data, several machine learning tasks rely on having appropriate low-dimensional representations. These reductions are often phrased in terms of preserving statistical or metric information. We will describe in this talk several schemes for taking advantage of the underlying topology of a data set, in order to produce informative low-dimensional coordinates.

Decoding of neural data using cohomological feature extraction

Erik Rybakken, Norwegian University of Science and Technology — slides not shared

I will present the results of a project with Benjamin Dunn and Nils Baas at NTNU, in which we devised a method to decode, with minimal assumptions, features in data from large neural population recordings, using cohomological feature extraction. We applied our approach to neural recordings of mice moving freely in a box, where we captured head direction neurons and decoded the head direction of the mice from the neural population activity alone, without knowing a priori that the neurons were encoding head direction. Interestingly, the decoded values conveyed more information about the neural activity than the tracked head direction did, with differences that had some spatial organization.

Preprint: https://arxiv.org/abs/1711.07205
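A sketch of the first steps of such a pipeline on synthetic head-direction-like data, using the ripser package (the tuning model and parameters below are illustrative; the actual method further smooths the chosen cocycle into an explicit circle-valued decoding):

```python
import numpy as np
from ripser import ripser

# Synthetic population of 12 'head direction cells': cosine tuning
# curves driven by a latent angle that sweeps the circle twice.
rng = np.random.default_rng(0)
angle = np.linspace(0, 4 * np.pi, 400) + rng.normal(0, 0.2, 400)
prefs = np.linspace(0, 2 * np.pi, 12, endpoint=False)
rates = np.exp(2 * np.cos(angle[:, None] - prefs[None, :]))
rates += rng.normal(0, 0.1, rates.shape)

# Persistent cohomology over a prime field: the activity traces out
# a loop, so one H^1 class should dominate.
res = ripser(rates, maxdim=1, coeff=47, do_cocycles=True)
h1 = res['dgms'][1]
top = np.argmax(h1[:, 1] - h1[:, 0])
print("most persistent H^1 bar:", h1[top])
# res['cocycles'][1][top] is the representative that would be
# smoothed into a circle-valued map decoding the latent angle.
```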

Computational Connectomics: Mapping and Modeling Complex Brain Networks

Olaf Sporns, Indiana University — slides not shared

The confluence of modern brain mapping techniques and network science has given rise to a new field, computational connectomics – the study of brain connectivity patterns with the tools and techniques of complex systems and networks. I will provide an introductory overview of how experimental data on brain networks is acquired, analyzed and modeled, with an emphasis on the topology of structural connectivity, the prevalent occurrence of modules and hubs, the network mechanisms supporting functional integration, and the relation of structural topology to brain dynamics.

Application of topological data analysis to vascular networks of tumours

Bernadette Stolz, University of Oxford — slides online

A tumour, like an organ, relies on its own blood supply to survive. However, compared to blood vessels in healthy tissue, blood vessels in a tumour are highly inefficient and are characterised by structural abnormalities such as many loops and twists. Even though these differences in network structure are obvious to the human eye, quantifying them has so far proved very difficult. Over the past 20 years many mathematical models have been developed to understand the growth of tumour blood vessels. This has led to insights, but also to many challenges, for example in parametrisation and model selection. In my talk I will motivate the study of tumour blood vessels and give an overview of some of the mathematical models to date. I will further present a new filtration that, combined with persistent homology, aims to capture the unique characteristics of tumour blood vessels at different stages of tumour growth and/or during treatment. I will present preliminary results of the new filtration on biological networks and lay out how we aim to use it to gain insight into both the biology and the modelling of tumour blood vessels.

Euler measures of simplicial feature maps

Kate Turner, ANU — slides not shared

Feature maps are a way of placing data points in a parameter space of interest. By mapping into a common feature space, we can compare different data sets by comparing their images under the feature map; in particular, we can measure the number of data points from each set that fall within different regions. In this way we construct a measure over the feature space. In this talk we consider data sets that come as a pair: a vertex set on which a feature map is defined, alongside an abstract simplicial complex over this vertex set representing a connectivity structure on the original data. We extend the feature map from the vertices to the geometric realisation of the abstract simplicial complex, and then define Euler measures over the feature space using the Euler characteristic with compact support. After discussing some properties of these Euler measures of simplicial feature maps, we look at applications to neurological data from the Blue Brain Project.
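As a one-dimensional stand-in for the construction (an illustration only; the talk's feature spaces are more general), the sketch below computes the Euler characteristics of the sublevel sets of a vertex map extended over a simplicial complex: each simplex contributes once all of its vertices have appeared.

```python
import numpy as np

def euler_curve(simplices, f, thresholds):
    """Euler characteristic of sublevel sets of a vertex map f
    extended over a complex. simplices: vertex tuples, singletons
    included; a simplex enters at the max of f over its vertices."""
    entry = np.array([max(f[v] for v in s) for s in simplices])
    sign = np.array([(-1) ** (len(s) - 1) for s in simplices])
    return [int(sign[entry <= t].sum()) for t in thresholds]

# A hollow triangle (0,1,2) sharing an edge with a filled one (1,2,3).
simplices = [(0,), (1,), (2,), (3,),
             (0, 1), (1, 2), (0, 2), (1, 3), (2, 3),
             (1, 2, 3)]
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
print(euler_curve(simplices, f, [0, 1, 2, 3]))
# [1, 1, 0, 0]: chi drops when the hollow triangle closes a loop.
```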

On data, information theory and topology

Hubert Wagner, IST Austria — slides not shared

While the main topic of the talk is TDA, I focus on data, in particular high dimensional point cloud data. Using concepts from information theory, I explore why non-metric measurements between data points are preferred in certain practical situations. Finally, I discuss recent progress in generalizing TDA methods to this non-metric setting.

Posters

Learning from hierarchy via persistence forests, with applications to graph learning

Sergei Burkin, University of Tokyo — poster online

We introduce persistence forests, a generalization of persistent homology and of single-linkage hierarchical clustering, whose summary can serve as an interface between "spaces" and machine learning. While persistent homology is a (quiver) representation of the real line, a persistence forest is a representation of the merge forest. The latter is a more sensitive, but still stable, topological summary. We show that persistence forests improve the performance of a modern graph learning algorithm (DGCNN) on several common datasets, and describe how to use persistence forests to find weaknesses in such algorithms.
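The merge forest underlying single-linkage clustering can be extracted with a union-find pass over sorted pairwise distances. The sketch below records only these merge events, i.e. the shadow of the construction that ordinary H_0 persistence sees, not the full persistence-forest summary.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def merge_events(X):
    """Single-linkage merge forest of a point cloud: the sequence
    of (scale, root_a, root_b) events at which components merge."""
    D, n = squareform(pdist(X)), len(X)
    parent = list(range(n))

    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    events = []
    for d, i, j in sorted((D[i, j], i, j)
                          for i in range(n) for j in range(i + 1, n)):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            events.append((d, ri, rj))
    return events                       # n - 1 merges in total

X = np.random.default_rng(0).normal(size=(10, 2))
print(merge_events(X)[:3])
```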

Convolutional Neural Network on Simplicial Complexes

Michaël Defferrard, EPFL — poster not shared

Convolutional Neural Networks (CNNs) are a cornerstone of the Deep Learning toolkit that enabled many breakthroughs in Artificial Intelligence. The recent generalization of CNNs to graphs received a lot of attention in the Machine Learning community and powers many applications, from autonomous driving to brain analysis and protein design. Simplicial complexes (SCs) are good models for structured data where relations connect more than two entities and exhibit subset closure. An example is the projection of a paper-author bipartite graph on the set of authors: the resulting collaboration network is a simplicial complex. This contribution aims to enable the development of learned data processing on simplicial complexes, under the assumption that they are better models than graphs for certain kinds of networks.
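The paper-author example is easy to make concrete: each paper's author set spans a simplex, and subset closure is enforced by adding all sub-tuples. A small sketch of the data model only (the convolutional architecture itself is out of scope here):

```python
from itertools import combinations

def collaboration_complex(papers):
    """Project a paper-author bipartite structure onto the authors:
    each paper's author set spans a simplex, together with all of
    its faces (subset closure)."""
    sc = set()
    for authors in papers:
        authors = tuple(sorted(authors))
        for k in range(1, len(authors) + 1):
            sc.update(combinations(authors, k))
    return sc

papers = [("ada", "bob", "eve"), ("bob", "eve"), ("ada", "dan")]
print(sorted(collaboration_complex(papers), key=len))
# The triple paper contributes the 2-simplex ('ada', 'bob', 'eve').
```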

Topological data analysis for brain network dynamics

Ali Nabi Duman, King Fahd University of Petroleum and Minerals — poster online

Recent developments in brain imaging techniques have expanded the size, scope and complexity of high-temporal-resolution neural data. These powerful new techniques and methodologies measure the connections and interactions between the elements of neurobiological systems. The mathematical formalisms of graph theory and network science play a vital role in understanding the interconnected architecture of the brain. One can model the network dynamics of the brain by considering levels of activity in order to describe how one brain network evolves into another. In this work, we apply tools from topological data analysis (TDA) to discover meaningful patterns in neural network dynamics. TDA methods can combine the best features of different methods, such as principal component analysis and cluster analysis, to provide geometric representations of complex data sets. We investigate the temporal dynamics of brain networks using the Mapper algorithm, a TDA method, and find that cognitively more demanding memory tasks have higher coreness scores in the resulting Mapper graphs.
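A bare-bones Mapper fits in a few lines: cover the range of a lens (filter) function with overlapping intervals, cluster each preimage, and connect clusters that share points. The sketch below uses a 1-D lens and DBSCAN; all parameters are illustrative, and the poster's coreness scoring is not included.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper(X, lens, n_intervals=10, overlap=0.3, eps=0.5):
    """Bare-bones Mapper graph: (nodes, edges), where nodes are
    clusters of points in overlapping lens intervals and edges join
    clusters sharing at least one point."""
    lo, width = lens.min(), (lens.max() - lens.min()) / n_intervals
    nodes = []
    for i in range(n_intervals):
        a = lo + (i - overlap) * width
        b = lo + (i + 1 + overlap) * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if len(idx):
            labels = DBSCAN(eps=eps).fit_predict(X[idx])
            nodes += [set(idx[labels == lab]) for lab in set(labels) - {-1}]
    edges = {(i, j) for i in range(len(nodes))
             for j in range(i + 1, len(nodes)) if nodes[i] & nodes[j]}
    return nodes, edges

# Noisy circle: the Mapper graph of the x-coordinate lens is a loop.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))
nodes, edges = mapper(X, lens=X[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```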

Rips Magnitude

Dejan Govc, University of Aberdeen — poster not shared

Magnitude is a numerical invariant of metric spaces (and, more generally, enriched categories) introduced by Tom Leinster, which has been shown to arise as the graded Euler characteristic of a certain homology theory. Richard Hepworth has recently suggested examining an analogous invariant for persistent homology, called Rips magnitude, which arises as a graded Euler characteristic of persistent homology. We examine some of its basic properties and its asymptotic behaviour in the case of finite subsets of the circle. This is joint work with Richard Hepworth.

Quantified persistence homology: stability and good practices for topological data analysis

Esther Ibanez, ISI Foundation — poster online

Brain activity can be described as the exploration of a repertoire of different dynamical states. These states are usually defined, both in resting state and task-based recordings, on the basis of correlations between functional signals, the most notable example being fMRI functional connectivity (Honey et al., 2009). However, the exact definition, the properties and even the modalities of transitions among such states are still controversial (Liégeois et al., 2017).

Against this background, topological observables of brain function have emerged as versatile and powerful, yet still immature, candidates to capture robust features of how the human brain processes information. Indeed, topology-based analyses of functional connectivity have recently shown the capacity of such tools to describe well the transformations of the shape of brain activation patterns, highlighting differences in altered states of consciousness (Petri et al., 2014), neurological disorders (Lee et al., 2012) and even the resting state (Saggar et al., 2018; Lord et al., 2016).

However, to date, there has been little focus on the reliability, reproducibility and requirements necessary for the stability of homological features extracted from fMRI data. We take a first step in this direction by investigating to what degree homological features are reproducible in a test-retest rs-fMRI BOLD signal dataset. We find that even simple (one-dimensional) homological features provide discriminative power comparable to that of the full correlation matrix, while requiring significantly less information, underlining the information-compression capacity of topological observables. We then compare the speed at which brain states, both correlation-based and homology-based, change in time. We find that brain states perform a Lévy-like walk in activation space (similar to Battaglia et al. (2017)), but intriguingly we also find that topological observables provide better discrimination (as compared to correlation matrices alone) between data and a set of benchmark null models. Finally, in order to provide insights into their biological relevance, we characterize some of the most notable transitions both correlationally and homologically, and compare explicit examples of transitions between states that preserve homology with examples where the persistent homology changes significantly despite small apparent changes in the correlation structure.

This is joint work with D. Battaglia, M. Saggar and F. Vaccarino.
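A minimal sketch of the kind of pipeline investigated, with random data standing in for a BOLD session and sqrt(1 - C) as one common (here assumed) correlation-to-distance conversion:

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
signals = rng.normal(size=(90, 200))        # 90 regions x 200 volumes
C = np.corrcoef(signals)
D = np.sqrt(np.clip(1 - C, 0, 2))           # correlation -> dissimilarity
dgm1 = ripser(D, distance_matrix=True, maxdim=1)['dgms'][1]
# One-dimensional homological features of this 'state', comparable
# across sessions, subjects, or sliding windows:
print("H_1 bars:", len(dgm1),
      "total persistence:", float((dgm1[:, 1] - dgm1[:, 0]).sum()))
```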

High-dimensional cortical E/I balance for movement generation

Xizi Li, University of Cambridge — poster not shared

Many theoretical models of motor cortical circuits have been proposed, but it remains unclear which of these, if any, underlies the transformation of internal decisions into concrete motor plans and robust sequences of muscle activation. To rule in and rule out models of motor cortex dynamics, further theoretical analysis is required to extract essential diagnostic predictions from each model and confront them with experimental data. Here, we revisit a family of models of primary motor cortex (M1) that were shown to capture key qualitative aspects of both single-neuron and population dynamics in M1 during movement preparation and execution (Hennequin, Vogels & Gerstner, 2014). These so-called "stability-optimised circuits" (SOCs) are composed of a population of strongly connected excitatory (E) neurons with random topology, complemented by a population of inhibitory (I) neurons providing stabilising feedback. Importantly, the disordered structure of the excitatory connections implies the existence of many unstable modes that are dynamically stabilised by inhibition, thus generalising the classical notion of excitation/inhibition balance to high dimensions.

Here, we provide further theoretical and geometric analyses of the dynamics of SOCs, leading to new predictions to be tested using experimental recordings. First, by coupling the network's dynamics to a mechanical model of a two-link arm and requiring the system to produce straight reaches, we are able to predict the tuning of excitatory, inhibitory, and total inputs into single cells to the variables defining the arm kinematics; all of these quantities are, or should eventually become, experimentally measurable. In particular, we predict that for most cells, E and I inputs should evolve from being oppositely tuned shortly before movement onset to being almost perfectly co-tuned by the time the hand attains its peak velocity. Second, while spontaneous (noise-driven) activity in single model neurons is largely non-oscillatory, we show that there exist many linear combinations of activities that oscillate strongly, with peak frequencies spanning a large range. We develop a novel method to extract such collective oscillations from population data.
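The classical low-dimensional picture that SOCs generalise can be shown directly (a textbook-style sketch, not the stability optimisation itself): an excitatory population that is unstable on its own is rendered stable by inhibitory feedback. All parameter values below are chosen only for illustration.

```python
import numpy as np

# Linearised rate dynamics dx/dt = J x around a fixed point.
w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 3.0, 2.0
J_E_only = np.array([[w_EE - 1.0]])             # dE/dt = (w_EE - 1) E
J_full = np.array([[w_EE - 1.0, -w_EI],         # inhibition feeds back
                   [w_IE,       -w_II - 1.0]])

for name, J in [("E alone   ", J_E_only), ("E + I loop", J_full)]:
    # Spectral abscissa < 0 means the network is dynamically stable.
    print(name, np.linalg.eigvals(J).real.max())
# E alone: +1.0 (unstable); with inhibition: -1.0 (stabilised), the
# low-dimensional analogue of the balance SOCs achieve mode by mode.
```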

Reducing higher order connectivity in a model of neocortical microcircuitry: a comparative study

Max Nolte, EPFL — poster not shared

We study the relationship between higher order connectivity and electrical activity in a neocortical microcircuit model by comparing activity in a fully biologically constrained microcircuit model (Markram et al., 2015) with a control model which has significantly fewer higher order connectivity motifs, but similar first order connectivity (Reimann et al., 2017).

Markram et al. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell.
Reimann et al. (2017). Morphological Diversity Strongly Constrains Synaptic Connectivity and Plasticity. Cereb. Cortex 27, 4570–4585.

Construction of and efficient sampling from the simplicial configuration model

Alice Patania, IUNI, Indiana University — poster online

[No abstract]

Topological phase transitions in functional brain networks

Fernando Santos, UFPE — poster not shared

Functional brain networks are often constructed by quantifying correlations among brain regions. Their topological structure includes nodes, edges, triangles and even higher-dimensional objects. Topological data analysis (TDA) is the emerging framework to process datasets under this perspective. In parallel, topology has proven essential for understanding fundamental questions in physics. Here we report the discovery of topological phase transitions in functional brain networks by merging concepts from TDA, topology, geometry, physics, and network theory. We show that topological phase transitions occur when the Euler entropy has a singularity, which remarkably coincides with the emergence of multidimensional topological holes in the brain network. Our results suggest that a major alteration in the pattern of brain correlations can modify the signature of such transitions, and may point to suboptimal brain functioning. Due to the universal character of phase transitions and noise robustness of TDA, our findings open perspectives towards establishing reliable topological and geometrical biomarkers of individual and group differences in functional brain network organization.
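A minimal sketch of the central quantity, with a random correlation matrix standing in for functional connectivity: threshold the matrix, take the Euler characteristic of the resulting clique complex, and track the Euler entropy S = ln|chi|, whose singularities (chi = 0) flag the candidate transitions.

```python
import numpy as np
import networkx as nx

def euler_characteristic(G):
    """Euler characteristic of the clique (flag) complex of G:
    alternating sum over cliques of every order."""
    return sum((-1) ** (len(c) - 1) for c in nx.enumerate_all_cliques(G))

rng = np.random.default_rng(0)
C = np.corrcoef(rng.normal(size=(30, 100)))     # stand-in connectivity
for eps in (0.3, 0.2, 0.1, 0.0):
    A = (C >= eps) & ~np.eye(len(C), dtype=bool)
    chi = euler_characteristic(nx.from_numpy_array(A.astype(int)))
    S = np.log(abs(chi)) if chi else float("-inf")   # Euler entropy
    print(f"eps={eps}: chi={chi}, S={S}")
```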

Role of Topology in Short-term Synaptic Plasticity

Marco A Roque Sol, Texas A&M University — poster online

This poster reports on ongoing research, still in its early stages, reviewing the application of topology to understanding short-term plasticity in neurons. The idea is to investigate, in particular, how tools from algebraic topology can give a better understanding of the process.

Morse-theoretical clustering algorithm for annotated networks

Fabio Strazzeri, University of Southampton — poster not shared

We present a novel clustering algorithm, called Morse, which integrates node annotations on a network with Morse theory, a well-known topological theory, to reveal the "basins of attraction" induced by the annotation. The algorithm mimics discrete Morse theory on an annotated network, defining on it a Morse flow that reveals a hierarchical structure on the links as well as the nodes. We show how Morse was used with topological data analysis to identify asthma phenotypes from blood gene expression profiles.
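A crude steepest-ascent caricature of such a flow (an illustration of the basin idea, not the published Morse algorithm): each node steps to its highest-valued neighbour, local maxima become critical nodes, and the basins of attraction give the clusters.

```python
import networkx as nx

def basins(G, value):
    """Steepest-ascent flow on an annotated network: every node
    steps to its highest-valued neighbour if that improves on its
    own value; the attractors' basins define the clustering."""
    flow = {v: max(G[v], key=lambda u: value[u], default=v) for v in G}
    flow = {v: u if value[u] > value[v] else v for v, u in flow.items()}

    def attractor(v):
        while flow[v] != v:
            v = flow[v]
        return v

    out = {}
    for v in G:
        out.setdefault(attractor(v), set()).add(v)
    return out

G = nx.path_graph(7)
value = {0: 3, 1: 2, 2: 1, 3: 0, 4: 1, 5: 2, 6: 3}   # two 'peaks'
print(basins(G, value))   # {0: {0, 1, 2, 3}, 6: {4, 5, 6}}
```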

Effective learning is accompanied by high dimensional and efficient representations of neural activity

Evelyn Tang, Max Planck Institute of Dynamics and Self-Organization — poster online

A fundamental cognitive process is the ability to map value and identity onto objects as we learn about them. Exactly how such mental constructs emerge and what kind of space best embeds this mapping remains incompletely understood. Here we develop tools to quantify the space and organization of such a mapping, thereby providing a framework for studying the geometric representations of neural responses as reflected in functional MRI. Considering how human subjects learn the values of novel objects, we show that quick learners have a higher dimensional geometric representation than slow learners, and hence more easily distinguishable whole-brain responses to objects of different value. Furthermore, we find that quick learners display a more compact embedding of the task-based information and hence have a higher ratio of task-based dimension to embedding dimension, consistent with a greater efficiency of cognitive coding. Lastly, we investigate the neurophysiological drivers of high dimensional patterns at both regional and voxel levels, and complete our study with a complementary test of the distinguishability of associated whole-brain responses. Our results demonstrate a spatial organization of neural responses characteristic of learning, and offer a suite of geometric measures applicable to the study of efficient coding in higher-order cognitive processes more broadly.
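The abstract does not name its dimensionality measure here, so the sketch below uses one standard stand-in (an assumption): the participation ratio of the covariance spectrum, which runs from 1 for fully collapsed responses to the number of recorded dimensions for isotropic ones.

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of responses X (samples x features) as
    PR = (sum lambda_i)^2 / sum lambda_i^2 over the covariance
    spectrum; between 1 and the number of features."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
spread = rng.normal(size=(500, 10))                  # isotropic responses
flat = np.outer(rng.normal(size=500), np.ones(10)) + 0.1 * spread
print(participation_ratio(spread))   # close to 10: high-dimensional
print(participation_ratio(flat))     # close to 1: low-dimensional
```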

Topological Analysis of Population Activity in a Secondary Auditory Region of a Songbird

Bradley Theilman, UC San Diego — poster not shared

It is commonly thought that the complex spatiotemporal activity patterns of neural populations carry information relevant to perception, cognition, and behavior. Yet, our understanding of how invariant information is encoded in these activity patterns remains primitive and coarse. Applied topology is emerging as a promising tool to discover invariant structure in neuroscience datasets. Here, applying the method developed by Curto and Itskov (2008), we characterize the spiking activity of large populations of simultaneously recorded neurons in a secondary auditory forebrain region of songbirds trained on an auditory recognition task. The simplicial complexes derived from neural activity show non-trivial topological structure that emerges from the temporally specific patterns of coincident firing across the neuronal population. These topological structures carry information related to learned acoustic features that distinguish individual stimuli and learned stimulus classes. We also introduce a novel, information-theoretic method for comparing simplicial complexes based on the notion of simplicial Laplacians. We demonstrate that this method is sensitive to invariant structure in neural activity and extracts neural representations of learned behavioral features, suggesting that topology may provide a functionally relevant language with which to describe neural activity.
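A minimal sketch of the algebra behind simplicial Laplacians: the k-th Hodge Laplacian L_k = B_k^T B_k + B_{k+1} B_{k+1}^T is assembled from boundary matrices, its kernel dimension recovers the k-th Betti number, and its spectrum is the kind of summary that can then be compared across complexes.

```python
import numpy as np

def hodge_laplacian(Bk, Bk1):
    """k-th combinatorial (Hodge) Laplacian from boundary matrices
    B_k: C_k -> C_{k-1} and B_{k+1}: C_{k+1} -> C_k. The dimension
    of its kernel equals the k-th Betti number."""
    return Bk.T @ Bk + Bk1 @ Bk1.T

# Hollow triangle: edges (0,1), (0,2), (1,2), no 2-simplices.
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
B2 = np.zeros((3, 0))
eigs = np.linalg.eigvalsh(hodge_laplacian(B1, B2))
print(np.round(eigs, 6))   # [0. 3. 3.]: one zero <-> one 1-cycle
```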