The program outlined below is essentially final, but certain details may change.

Time Wednesday Thursday Friday
08:15 – 09:00 Registration
09:00 – 10:00 Sporns (pt. 1) Sporns (pt. 2) Sporns (pt. 3)
10:00 – 11:00 Stolz Kanari Rybakken
11:00 – 11:45 Coffee break Coffee break Coffee break
11:45 – 12:45 Jost Perea Giusti
12:45 – 14:15 Lunch Lunch Lunch
14:15 – 15:15 Curto (pt. 1) Curto (pt. 2) Curto (pt. 3)
15:15 – 16:00 Coffee break Coffee break Coffee break
16:00 – 17:00 Levi Wagner Leinster
17:00 – 18:00 Expert Turner
18:00 – 20:00 Apéro & poster session
20:00 – ∞ Workshop dinner


Topology in neuroscience: some examples from neural coding and neural networks

Carina Curto, Penn State

In this series of talks I will give a sampling of examples illustrating how topological ideas arise in neuroscience. First, I'll tell you about some interesting neurons, such as place cells and grid cells, and the topology associated to their neural activity. I will also explain how convex neural codes capture additional features of the stimulus space, such as intrinsic dimension. Second, I'll explain how the statistics of persistent homology can be used to detect - or reject - geometric organization in neural activity data, using examples from hippocampus and olfaction. In the third and final talk, I'll transition to studying attractor neural networks, where we will see how network dynamics are determined by topological features of special network motifs and their embeddings in the larger network.
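To illustrate the notion of a combinatorial neural code mentioned above, here is a minimal sketch in which the interval place fields are invented for illustration: each neuron fires when the animal's position lies inside its convex receptive field, and a codeword records which neurons are co-active.

```python
# Combinatorial neural code from hypothetical 1-D place fields.
# Each neuron fires when the position x lies in its (convex) interval;
# a codeword is the set of co-active neurons.

fields = {0: (0.0, 0.4), 1: (0.3, 0.7), 2: (0.6, 1.0)}  # invented intervals

def codeword(x, fields):
    """Set of neurons active at position x."""
    return frozenset(i for i, (a, b) in fields.items() if a <= x <= b)

# Sample the track finely and collect the distinct codewords.
positions = [i / 1000 for i in range(1001)]
code = {codeword(x, fields) for x in positions}
print(sorted(map(sorted, code)))  # [[0], [0, 1], [1], [1, 2], [2]]
```

The overlaps of the fields appear as codewords with more than one active neuron; from such codewords alone one can try to recover topological features of the underlying stimulus space.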

Localisation and embedding of topological features

Paul Expert, Imperial College London

The characterisation of topological objects with respect to their environment, or their embedding in that environment, has a long history in graph theory: node metrics are a well-known example. This approach has also been extended to mesoscopic structures such as clusters of nodes or communities. It is therefore natural to try to extend it to more abstract mesoscopic objects such as holes in simplicial complexes. The main obstacle is the localisation problem, as holes can have multiple representatives. It is nonetheless an important question, as our ability to attach a meaning to holes ultimately relies on their "physical" localisation. This is particularly important in neuroscience, for example in the identification of certain types of biomarkers. In this talk, I will present our attempts at attaching an embedding and an interpretation to holes.

Path space cochains for neural time series analysis

Chad Giusti, University of Delaware

One of the fundamental questions of interest in the study of complex neural systems is how to assign causality to activity patterns of individual units (neurons, cortical stacks, anatomical regions, etc.). Standard tools for causality inference rely on some model of the underlying system, and in practice require strong assumptions, such as stationarity and Gaussian noise, which are clearly violated by the systems being studied. Here, we describe a semantic interpretation of Chen's iterated-integral cochain model for path spaces as a model-free measure of potential influence between observed time series, which can in turn be used to infer causality under appropriate experimental conditions. We demonstrate this method in the context of experimental transcranial stimulation/EEG data and discuss generalizations to other pertinent mapping spaces.
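The simplest instance of an iterated-integral statistic is the antisymmetric part of the level-two integral: the signed area swept out by a pair of time series, which is nonzero when one series systematically leads the other. The sketch below (the function name and the piecewise-linear discretization are illustrative assumptions, not the speaker's actual construction) computes it for sampled series:

```python
import numpy as np

def lead_lag(x, y):
    """Signed area 0.5 * int(x dy - y dx) of the planar path (x(t), y(t)):
    the antisymmetric part of the level-two iterated integral, computed
    exactly for the piecewise-linear interpolation of the samples."""
    dx, dy = np.diff(x), np.diff(y)
    xm, ym = (x[:-1] + x[1:]) / 2, (y[:-1] + y[1:]) / 2
    return 0.5 * np.sum(xm * dy - ym * dx)

# Two sinusoids a quarter-period apart trace out a circle, so the
# signed area recovers (approximately) the enclosed area pi.
t = np.linspace(0, 2 * np.pi, 1000)
print(lead_lag(np.cos(t), np.sin(t)))  # approximately 3.1416
```

A consistently positive or negative area over many trials is the kind of model-free asymmetry that can suggest a lead-lag relationship without assuming stationarity or a noise model.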

Geometric methods for the analysis of neuroscience data

Jürgen Jost, Max Planck Institute for Mathematics in the Sciences

The structure of a network, from the neurosciences or elsewhere, is embodied in its connections. Therefore, formal approaches to network analysis should take the edges, and not the vertices, as the basic objects. Recently, inspired by Riemannian geometry, a new class of quantities, called Ricci curvatures, has been introduced; they quantify how an edge is connected inside a network, how important an edge is for information gathering and dispersal, and how it links the neighborhoods of its endpoints. We describe these concepts and present some applications to correlations between voxels in brain imaging data. This is ongoing work.
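One of the simplest such notions is the Forman-Ricci curvature, which for an edge (u, v) of an unweighted graph reduces to F(u, v) = 4 - deg(u) - deg(v); edges between high-degree vertices are strongly negatively curved. A minimal sketch (this is one member of the class of curvatures mentioned above, not necessarily the one used in the work described):

```python
from collections import defaultdict

def forman_ricci(edges):
    """Forman-Ricci curvature F(u, v) = 4 - deg(u) - deg(v) for each
    edge of an unweighted, undirected graph given as a list of pairs."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# A triangle with a pendant vertex: edges into the hub vertex 2
# (degree 3) are the most negatively curved.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(forman_ricci(edges))  # {(0, 1): 0, (1, 2): -1, (0, 2): -1, (2, 3): 0}
```

In network analysis, very negative Forman curvature tends to flag edges that act as bottlenecks for information dispersal, which is the role the abstract assigns to edge-based quantities.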

Understanding the shapes of neurons with algebraic topology

Lida Kanari, The Blue Brain Project

The morphological diversity of neurons supports the complex information-processing capabilities of biological neuronal networks. A major challenge in neuroscience has therefore been to reliably describe neuronal shapes with universal morphometrics that generalize across cell types and species. Inspired by algebraic topology, we have conceived a topological descriptor of trees that couples the topology of a tree's complex arborization with the geometric features of its structure, retaining more information than traditional morphometrics. The topological morphology descriptor (TMD) has proved to be very powerful in categorizing cortical neurons into concrete groups on morphological grounds, and has led to the discovery of two distinct classes of pyramidal cells in the human cortex. In this talk, I will describe the transformation of neuronal trees into persistence barcodes and how the TMD algorithm can be used for the computational generation of neuronal morphologies, enabling the reconstruction of large-scale networks.
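The core of the tree-to-barcode transformation can be sketched in a few lines. In this simplified version (the example tree is invented, and the published TMD algorithm handles weights and tie-breaking more carefully), each leaf starts a component at its function value, e.g. radial distance from the soma; at every bifurcation the branch carrying the largest value survives while the others die, each contributing one bar:

```python
def tmd_barcode(children, f, root):
    """Simplified sketch of the Topological Morphology Descriptor:
    returns a list of (birth, death) bars for a rooted tree.
    children: dict node -> list of child nodes.
    f: dict node -> function value (e.g. radial distance from the soma)."""
    bars = []

    def climb(node):
        kids = children.get(node, [])
        if not kids:                      # a leaf starts a component
            return f[node]
        vals = sorted(climb(c) for c in kids)
        survivor = vals.pop()             # the largest value survives
        for v in vals:                    # the rest die at this bifurcation
            bars.append((v, f[node]))
        return survivor

    bars.append((climb(root), f[root]))   # last component dies at the root
    return bars

# A Y-shaped tree: root r at distance 0, bifurcation b at 1,
# leaves l1 and l2 at distances 3 and 2 (invented example).
children = {"r": ["b"], "b": ["l1", "l2"]}
f = {"r": 0, "b": 1, "l1": 3, "l2": 2}
print(tmd_barcode(children, f, "r"))  # [(2, 1), (3, 0)]
```

The shorter branch yields the bar (2, 1); the longest path from leaf to soma yields the bar (3, 0), so long bars correspond to the dominant branches of the arborization.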

The magnitude of graphs and finite metric spaces

Tom Leinster, University of Edinburgh

Magnitude is a numerical invariant defined in wide categorical generality, and therefore applicable to many kinds of mathematical object. In topology and algebra, magnitude is closely related to Euler characteristic. For subsets of Euclidean space, it combines classical geometric invariants such as volume, surface area and perimeter. I will focus here, however, on the cases of graphs and finite metric spaces. In these cases it is less clear what magnitude "means", but it appears to convey useful information about dimensionality and number of clusters. There is even a magnitude homology theory available (due to Hepworth, Willerton and Shulman), lifting magnitude from a numerical to an algebraic invariant. I will give an overview.
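Concretely, for a finite metric space with distance matrix D, the magnitude at scale t is the sum of the entries of the inverse of the similarity matrix Z with entries Z_ij = exp(-t d_ij), whenever Z is invertible. A short sketch:

```python
import numpy as np

def magnitude(D, t=1.0):
    """Magnitude of a finite metric space at scale t: the sum of the
    entries of the inverse of the similarity matrix Z_ij = exp(-t * d_ij),
    assuming Z is invertible."""
    Z = np.exp(-t * np.asarray(D, dtype=float))
    return np.linalg.inv(Z).sum()

# Two points at distance d have magnitude 1 + tanh(t*d/2), which
# interpolates between "one point" (t*d -> 0) and "two points" (t*d -> inf).
D = [[0.0, 1.0], [1.0, 0.0]]
print(magnitude(D, t=1.0))  # 1 + tanh(0.5), about 1.4621
```

Varying t and watching the "effective number of points" grow is one way magnitude conveys information about the number of clusters at different scales.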

Complexes of Tournaments, Directionality Filtrations and Persistent Homology

Ran Levi, The University of Aberdeen

Clique graphs whose edges are oriented are referred to in the combinatorics literature as tournaments. We consider a family of semi-simplicial complexes, which we refer to as "tournaplexes", whose simplices are tournaments. In particular, given a directed graph G, we associate with it a "flag tournaplex", which is a tournaplex containing the directed flag complex of G, but also the geometric realisation of cliques that are not directed. We define two types of filtration on tournaplexes and, exploiting persistent homology, we observe that filtered flag tournaplexes provide a finer means of distinguishing graph dynamics than the directed flag complex. We then demonstrate the power of these ideas by applying them to graph data arising from the Blue Brain Project's digital reconstruction of a rat's neocortex.
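For orientation, here is a sketch of the directed flag complex that the flag tournaplex contains (the tournaplex construction itself is not reproduced here): a k-simplex is an ordered tuple of vertices (v0, ..., vk) with an edge vi -> vj whenever i < j.

```python
def directed_flag_simplices(vertices, edges, max_dim=2):
    """Enumerate simplices of the directed flag complex of a digraph:
    ordered tuples (v0, ..., vk) with an edge vi -> vj whenever i < j.
    Returns a list of lists, indexed by dimension."""
    E = set(edges)
    simplices = [[(v,) for v in vertices]]
    for _ in range(max_dim):
        nxt = []
        for s in simplices[-1]:
            for v in vertices:
                # v extends s if every vertex of s points to v
                if all((u, v) in E for u in s):
                    nxt.append(s + (v,))
        simplices.append(nxt)
    return simplices

# A transitive triangle 0->1, 0->2, 1->2 yields one directed 2-simplex,
# whereas a directed 3-cycle yields none (it is a non-directed clique).
tri = directed_flag_simplices([0, 1, 2], [(0, 1), (0, 2), (1, 2)])
print(tri[2])  # [(0, 1, 2)]
```

The directed 3-cycle is exactly the kind of tournament that the directed flag complex discards but a flag tournaplex retains as a simplex.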

Data coordinatization with classifying spaces

Jose Perea, Michigan State University

When dealing with complex high-dimensional data, several machine learning tasks rely on having appropriate low-dimensional representations. These reductions are often phrased in terms of preserving statistical or metric information. We will describe in this talk several schemes for taking advantage of the underlying topology of a data set, in order to produce informative low-dimensional coordinates.

Decoding of neural data using cohomological feature extraction

Erik Rybakken, Norwegian University of Science and Technology

I will present the results of a project that I worked on with Benjamin Dunn and Nils Baas at NTNU, where we devised a method to decode features in neural data coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We applied our approach to neural recordings of mice moving freely in a box, where we captured head direction neurons and decoded the head direction of the mice from the neural population activity alone, without knowing a priori that the neurons were encoding head direction. Interestingly, the decoded values conveyed more information about the neural activity than the tracked head direction did, with differences that had some spatial organization. Preprint:

Computational Connectomics: Mapping and Modeling Complex Brain Networks

Olaf Sporns, Indiana University

The confluence of modern brain mapping techniques and network science has given rise to a new field, computational connectomics – the study of brain connectivity patterns with the tools and techniques of complex systems and networks. I will provide an introductory overview of how experimental data on brain networks is acquired, analyzed and modeled, with an emphasis on the topology of structural connectivity, the prevalent occurrence of modules and hubs, the network mechanisms supporting functional integration, and the relation of structural topology to brain dynamics.

Application of topological data analysis to vascular networks of tumours

Bernadette Stolz, University of Oxford

A tumour, like an organ, has its own blood supply that it relies on to survive. However, compared to blood vessels in healthy tissue, blood vessels in a tumour are highly inefficient and are characterised by structural abnormalities such as many loops and twists. Even though these differences in network structure are obvious to the human eye, quantifying them has so far been very difficult. In the past 20 years many mathematical models have been developed to understand the growth of tumour blood vessels. This has led to insights, but also to many challenges, for example in parametrisation or model selection. In my talk I will motivate the study of tumour blood vessels and give an overview of some of the mathematical models to date. I will further present a new filtration that, combined with persistent homology, aims to capture the unique characteristics of tumour blood vessels at different stages of tumour growth and/or during treatment. I will present preliminary results of the new filtration on biological networks and lay out how we aim to use it to gain insight into both the biology and the modelling of tumour blood vessels.

Euler measures of simplicial feature maps

Kate Turner, Australian National University

Feature maps are a way of placing data points in a parameter space of interest. By mapping different data sets into a common feature space, we can compare them by comparing their images under the feature map. In particular, we can measure the number of data points of each set lying within different regions, thereby constructing a measure over the feature space. In this talk we consider data sets that come as a vertex set on which a feature map is defined, together with an abstract simplicial complex over this vertex set representing a connectivity structure on the original data. We extend the feature map from the vertices to the geometric realisation of the abstract simplicial complex, and then define Euler measures over the feature space using the Euler characteristic with compact support. After discussing some properties of these Euler measures of the simplicial feature map, we look at some applications to neurological data from the Blue Brain Project.
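The building block of such Euler measures is the Euler characteristic itself, the alternating sum of simplex counts. A minimal sketch of that computation (the Euler measure of a feature map, which weights regions of the feature space by the characteristic of their preimages, is not reproduced here):

```python
from itertools import combinations

def euler_characteristic(maximal_simplices):
    """Euler characteristic of an abstract simplicial complex, given
    by its maximal simplices; all faces are generated, then counted
    with alternating signs by dimension."""
    faces = set()
    for s in maximal_simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))
    return sum((-1) ** (len(f) - 1) for f in faces)

# Hollow triangle (three edges, no 2-cell): chi = 3 - 3 = 0.
print(euler_characteristic([(0, 1), (1, 2), (0, 2)]))  # 0
# Filled triangle: chi = 3 - 3 + 1 = 1.
print(euler_characteristic([(0, 1, 2)]))               # 1
```

Because the Euler characteristic is additive over pieces, integrating it region by region over a feature space yields a well-defined (possibly signed) measure.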

On data, information theory and topology

Hubert Wagner, IST Austria

While the main topic of the talk is TDA, I focus on data, in particular high dimensional point cloud data. Using concepts from information theory, I explore why non-metric measurements between data points are preferred in certain practical situations. Finally, I discuss recent progress in generalizing TDA methods to this non-metric setting.


To be announced. Posters of consenting presenters will appear here after the presentation.