## Online Course: Introduction to Topological Data Analysis

**February 8 to April 11, 2024**

https://sites.google.com/view/introductiontotda/home

## Jornadas de Topología de Datos

**February 1 and 2, 2024**

https://sites.google.com/view/tda2024

## Spring Seminar Series on Theoretical Neuroscience

**6 June 2023** [ Recording ]

17:15 – 18:15 Albert Compte (IDIBAPS) *Neural circuits of visuospatial working memory* [ Slides ]

Abstract: One elementary brain function that underlies many of our cognitive behaviors is the ability to maintain parametric information briefly in mind, on the time scale of seconds, to span delays between sensory information and actions. This component of working memory is fragile and quickly degrades with delay length along various quantifiable dimensions. Under the assumption that behavioral delay dependencies mark core functions of the working memory system, our goal is to find a neural circuit model that captures their neural mechanisms and to apply it to research on working memory deficits in neuropsychiatric disorders. We have constrained computational models of spatial working memory with delay-dependent behavioral effects and with neural recordings in the prefrontal cortex during visuospatial working memory. I will show that a simple bump attractor model with inhomogeneities and long-time-scale synaptic mechanisms can link neural data with fine-grained behavioral output on a trial-by-trial basis and account for the main delay-dependent limitations of working memory: precision, cardinal biases, and serial dependence. I will finally present data from participants with neuropsychiatric disorders suggesting that serial dependence in working memory is specifically altered, and I will use the model to infer the neural mechanisms possibly affected.

**9 May 2023** [ Recording ]

17:15 – 18:15 Rubén Moreno-Bote (UPF) *Entropy maximization is the goal of natural behavior and neural networks* [ Slides ]

Abstract: Intrinsic motivation generates behaviors that do not necessarily lead to immediate reward but help exploration and learning. I will show that agents whose sole goal is to maximize occupancy of future actions and states, that is, to keep moving and exploring over the long term, are capable of complex behavior. We find that action-state path entropy is the only measure consistent with additivity and other intuitive properties of expected future action-state path occupancy. Using discrete and continuous state tasks, I will show that ‘dancing’, hide-and-seek, and a basic form of altruistic behavior naturally result from entropy seeking without external rewards. Neural networks can also be trained to maximize action path entropy, generating interesting dynamical regimes close to those of brain networks.

[2205.10316] Seeking entropy: complex behavior from intrinsic motivation to occupy action-state path space (arxiv.org)

[2302.01098] A general Markov decision process formalism for action-state entropy-regularized reward maximization (arxiv.org)
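
A toy numerical illustration of the occupancy idea (not the papers' algorithm): an agent that randomizes its actions, and therefore has high action-state path entropy, occupies far more of a small gridworld than a deterministic, single-minded policy. All task details below are made up for illustration.

```python
import numpy as np

# An entropy-seeking (uniformly random) policy versus a deterministic
# "always move right" policy on an 8x8 gridworld: the random policy's high
# action-state path entropy translates into far broader state occupancy.
rng = np.random.default_rng(0)
SIZE, STEPS = 8, 2000
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def occupancy(policy):
    pos, visited = (0, 0), {(0, 0)}
    for _ in range(STEPS):
        dr, dc = MOVES[policy(pos)]
        pos = (min(max(pos[0] + dr, 0), SIZE - 1),
               min(max(pos[1] + dc, 0), SIZE - 1))
        visited.add(pos)
    return visited

entropy_seeking = occupancy(lambda pos: int(rng.integers(4)))
single_minded = occupancy(lambda pos: 3)          # always "move right"
print(len(entropy_seeking), len(single_minded))   # random policy visits many more states
```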

**2 May 2023** [ Recording ]

17:15 – 18:15 Emmanuel Guigon (Sorbonne Université – CNRS – ISIR) *A computational theory for the production of limb movements* [ Slides ]

Abstract: Motor control is a fundamental process that underlies all voluntary behavioral responses. Several theories based on different principles (task dynamics, equilibrium-point theory, passive-motion paradigm, active inference, optimal control) account for specific aspects of how actions are produced, but fail to provide a unified view of this problem. We propose a concise theory of motor control based on three principles: optimal feedback control, control with a receding time horizon, and task representation by a series of via-points updated at a fixed frequency. By construction, the theory provides a suitable solution to the degrees-of-freedom problem, that is, trajectory formation in the presence of redundancies and noise. We show through computer simulations that the theory also explains the production of discrete, continuous, rhythmic, and temporally constrained movements, and their parametric and statistical properties (scaling laws, power laws, speed/accuracy trade-offs). The theory has no free parameters, and only limited variations in its implementation details and in the nature of noise are necessary to guarantee its explanatory power. An assumption of the model is that the optimal feedback controller is universal (i.e., independent of the task at hand) and thus should not be modified during motor adaptation. We tested this prediction in a force-field adaptation experiment and showed that the shape of after-effect trajectories is not compatible with a modification at the control level, as proposed by compensation/reoptimization models. Accordingly, adaptation would occur not at the control level but at the goal level. An open question is the neural basis of the model. We have trained a neural network to approximate the optimal feedback controller (not only to learn optimal trajectories) and have shown that the properties of one layer of the network closely match those of the primate primary motor cortex. Some peculiar results were also obtained in a neural network trained to approximate a forward model.
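
The interplay of the three principles can be caricatured in a few lines: a plain PD feedback law (standing in for the optimal feedback controller, which it is not) drives a 1D point mass through a series of via-points, re-planning toward the next via-point as soon as the current one is reached. Gains, tolerances, and the via-point list are hypothetical.

```python
import numpy as np

# 1D point mass driven through a sequence of via-points.  A PD feedback law
# stands in for the optimal feedback controller; the receding horizon appears
# as re-planning toward the next via-point once the current one is reached.
def track_via_points(via_points, dt=0.01, kp=40.0, kd=12.0, steps=3000):
    x, v = 0.0, 0.0
    targets = list(via_points)
    trajectory = []
    for _ in range(steps):
        goal = targets[0]
        a = kp * (goal - x) - kd * v      # feedback toward the current via-point
        v += a * dt
        x += v * dt
        trajectory.append(x)
        if abs(goal - x) < 0.01 and len(targets) > 1:
            targets.pop(0)                # horizon recedes: switch to the next via-point
    return np.array(trajectory)

traj = track_via_points([0.5, -0.3, 1.0])
print(traj[-1])  # settles near the final via-point
```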

**25 April 2023 ONLINE** [ Recording ]

17:15 – 18:15 Paul Cisek (Université de Montréal) *Neural mechanisms of dynamic decisions* [ Slides ]

Abstract: Psychological and neurophysiological studies of decision-making have focused primarily on scenarios in which subjects are faced with abstract choices that are stable in time. This has led to serial models of decision-making which begin with the representation of relevant information about costs and benefits, followed by careful deliberation about the choice leading to commitment. These cognitive models are separate from models of motor planning and execution, which normally begin with a single target or goal. However, the brain evolved to interact with a dynamic and constantly changing world, in which the choices themselves as well as their relative costs and benefits are defined by the momentary geometry of the immediate environment and are continuously changing, even during ongoing activity. To deal with the demands of real-time interactive behavior, animals require a neural architecture in which the sensorimotor specification of potential actions, their valuation, selection, and even execution can all take place in parallel. I will describe a general hypothesis for how the brain deals with the challenges of such dynamic and embodied behavior, and present the results of a series of behavioral and neurophysiological experiments in which humans and monkeys make decisions on the basis of sensory information that changes over time. These experiments suggest that sensory information pertinent to decisions is processed quickly and combined with a growing signal related to the urge to act, and the result biases a competition between potential actions that unfolds within the same sensorimotor circuits that guide action. Finally, I will present analyses and a computational model describing how the processes of deliberation, commitment, and movement execution can be considered as states of an integrated dynamical system distributed across cortical and subcortical circuits.

Suggested reading: https://pubmed.ncbi.nlm.nih.gov/36520685/

**18 April 2023 ONLINE** [ Recording ]

17:15 – 18:15 Caroline I. Jahn (Princeton Neuroscience Institute) *Learning and using generalized attentional templates in the frontal and parietal cortex* [ Slides ]

Abstract: Attention filters the flood of sensory inputs, allowing us to focus on information that is relevant to our task. In general, our attention is guided by an ‘attentional template’ that encapsulates the set of stimulus features that are relevant for the current situation. For example, when hailing a taxicab in New York City, we attend to stimuli with features that match a ‘yellow sedan’. The computational goal of an attentional template is to establish a cognitive state in which task-relevant stimuli have increased representation and, therefore, drive behavior. Importantly, attention is not static: as the environment, or our goals, change, attention adapts to focus on what is currently relevant. For example, our template of a taxicab must be updated to ‘black hatchback’ in London and to ‘gondola’ in Venice. While the neural mechanisms representing, and applying, attentional templates have been well studied, relatively little is known about how templates are learned and relate to one another. To address this, we trained monkeys to perform a novel attention-learning task that required the animal to repeatedly learn new attentional templates in a continuous stimulus space (color). Large-scale neural recordings in prefrontal and parietal cortex showed that the attentional template was represented in both regions in a generalized structured space. The template was updated on every trial, such that it shifted towards features that were rewarded. The template transformed stimulus representations into a generalized value representation, allowing the decision-making process to generalize across templates and locations.

**11 April 2023** [ Recording ]

17:15 – 18:15 Gustavo Deco (UPF – ICREA) *The thermodynamics of mind* [ Slides ]

Abstract: Finding precise signatures of different brain states is a central, unsolved question in neuroscience. The difference in brain state can be described as differences in the detailed causal interactions found in the underlying intrinsic brain dynamics. We use a thermodynamics framework to quantify the breaking of the detailed balance captured by the level of asymmetry in temporal processing, i.e., the arrow of time. We also formulate a novel whole-brain model paradigm allowing us to derive the generative underlying mechanisms for changing the arrow of time between brain regions in different conditions. We found precise, distinguishing signatures in terms of the reversibility and hierarchy of large-scale dynamics in radically different brain states (cognition, rest, deep sleep, and anaesthesia) in fMRI and electrocorticography data from human and non-human primates. Overall, this provides signatures of the breaking of detailed balance in different brain states, reflecting different levels of computation.

**14 March 2023** [ Recording ]

17:15 – 18:15 Matthieu Gilson (Aix-Marseille Université) *Learning in neuronal networks: Processing high-order statistics embedded in time series for classification tasks* [ Slides ]

Abstract: In biological neuronal networks, information representation and processing are achieved through plasticity learning rules that have been empirically characterized as sensitive to second- and higher-order statistics in spike trains, which has led to the development of models for both unsupervised and supervised learning on structured time series [1, 2]. This contrasts with most models in machine learning, which aim to convert diverse statistical properties of inputs into first-order statistics in outputs (i.e., static labels), as in modern deep learning networks. In the context of classification, such schemes have merit for inputs like static images, but they may not be well suited to capture the temporal structure in time series. I will first present the recent “covariance perceptron”, which maps input covariances to output covariances, enabling the design of a consistent processing pipeline for second-order statistics [2]. Then, I will explore the applicability of covariance-based readouts for reservoir computing networks to classify time series, both with synthetic data with controlled structure at different statistical orders (first and second) and with a real dataset of spoken digits [3]. Along the way, we compare covariance decoding with the classical mean-decoding paradigm in terms of classification accuracy. The results highlight the important role of recurrent connectivity in transforming information representations in biologically inspired architectures.

[1] Gilson, Burkitt, van Hemmen, Front Comput Neurosci 2011

[2] Gilson, Dahmen, Helias, PLoS Comput Biol 2020

[3] Lawrie, Moreno-Bote, Gilson, bioRxiv https://www.biorxiv.org/content/10.1101/2021.04.30.441789v1
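
The core mapping behind the covariance perceptron can be checked numerically: for a linear readout y = Bx, the input covariance P is transformed into the output covariance Q = B P Bᵀ, so information carried by second-order statistics is transformed rather than discarded. The matrices below are random placeholders, not those of [2].

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 4))              # linear readout weights (illustrative)
A = rng.normal(size=(4, 4))
P = A @ A.T                              # input covariance (SPD by construction)

Q = B @ P @ B.T                          # output covariance predicted by the mapping

# Empirical check: sample x ~ N(0, P), read out y = B x, estimate cov(y).
X = A @ rng.normal(size=(4, 100000))     # samples with covariance P
Y = B @ X
Q_emp = Y @ Y.T / X.shape[1]
print(np.max(np.abs(Q_emp - Q)))         # small sampling error
```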

## TML Spring 2023

**28 June 2023** [ Recording ]

11:00 – 12:00 Ana Romero (Universidad de La Rioja) *Effective computation of spectral systems and their relation to multiparameter persistent homology* [ Slides ]

Abstract: Spectral systems are a tool from computational algebraic topology that provides topological information about spaces with generalized filtrations over a partially ordered set, generalizing the classical construction of spectral sequences associated with filtered chain complexes. In this talk we present algorithms and programs for computing spectral systems. Our programs have been implemented as a new module for the Kenzo system and solve the classical problems of spectral sequences, namely differentials and extensions. Moreover, combined with the use of effective homology, our programs make it possible to compute spectral systems of complicated spaces of infinite type. We will also show the relation between spectral systems and multiparameter persistent homology, which we have used to extend our spectral-system programs to compute multiparameter persistent homology with integer coefficients (valid for spaces of infinite type). This is joint work with J. Divasón, A. Guidolin and F. Vaccarino.

**28 April 2023** [ Recording ]

17:00 – 18:00 Tom Gebhart (University of Minnesota) *Generalizing graph representation learning with cellular sheaves* [ Slides ]

Abstract: Graph representation learning algorithms like graph neural networks have led to state-of-the-art performance across a number of relational domains such as drug discovery, knowledge graph completion, and recommender systems. Despite this success, these algorithms are typically derived from relatively simple graph-theoretic concepts, which implicitly limits their expressibility and generalizability in particular scenarios. In this talk, I will introduce a generalization of the graph representation learning paradigm from the perspective of cellular sheaf theory. This topological lens provides insight into the causes of a number of shortcomings of standard graph representation learning approaches, permits the definition of relational representations that are more expressive than those definable within a traditional graph-theoretic framework, and can be implemented efficiently using the recently developed spectral theory of cellular sheaves. As an example application of this sheaf representation learning framework, I will introduce sheaf neural networks: a deep learning architecture for learning functions on sheaves, which can also be used to reduce oversmoothing in graph learning tasks. Time permitting, I will also discuss how the representational consistency constraints underlying cellular sheaves may be leveraged to formalize and extend link prediction tasks like knowledge graph completion.
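
A minimal numerical sketch of the central object, assuming only the standard definition of the sheaf Laplacian: on a single edge u—v with 2-dimensional stalks and restriction maps F_u, F_v (values chosen arbitrarily), the Laplacian vanishes exactly on global sections, i.e. assignments with F_u x_u = F_v x_v.

```python
import numpy as np

# Cellular sheaf on one edge u—v with 2-dimensional stalks.  The restriction
# maps send node data into the edge stalk; the sheaf Laplacian below vanishes
# exactly on assignments that agree along the edge (global sections).
F_u = np.array([[1.0, 0.0], [0.0, 2.0]])
F_v = np.array([[0.0, 1.0], [1.0, 0.0]])

L = np.block([[F_u.T @ F_u, -F_u.T @ F_v],
              [-F_v.T @ F_u, F_v.T @ F_v]])

# Build a global section: choose x_u, then solve F_v x_v = F_u x_u.
x_u = np.array([3.0, -1.0])
x_v = np.linalg.solve(F_v, F_u @ x_u)
x = np.concatenate([x_u, x_v])
print(L @ x)  # ≈ 0: global sections lie in the kernel of the sheaf Laplacian
```

Sheaf neural networks replace the graph Laplacian in message passing with this operator, which is what lets edges transform rather than merely average neighboring features.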

**21 March 2023** **ONLINE** [ Recording ]

11:15 – 12:00 Morgane Goibert (Criteo AI Lab and Télécom Paris) *What paths do adversarial examples take in neural networks to fool them? An adversarial robustness perspective on the topology of neural networks* [ Slides ]

Abstract: We investigate the impact of neural network topologies on adversarial robustness. Specifically, we study the graph produced when an input traverses all the layers of a neural network, and show that such graphs differ between clean and adversarial inputs. We find that graphs from clean inputs are more centralized around highway edges, whereas those from adversarial inputs are more diffuse, leveraging under-optimized edges. Using the topological structure extracted with persistence diagrams computed on these graphs, we are able to study their differences, which leads us to a better understanding of how adversarial examples behave compared to clean ones. We then develop a detection method based on the persistence diagrams that correctly spots adversarial examples on a variety of datasets and architectures, illustrating that under-optimized edges are a source of vulnerability.

**7 March 2023** [ Recording ]

11:15 – 12:00 Eduardo Sáenz de Cabezón (UB and Universidad de La Rioja) *Some tools from commutative algebra in topological data analysis*

Abstract: The Stanley-Reisner correspondence establishes a fruitful relationship between abstract simplicial complexes and monomial ideals in polynomial rings. Based on this relationship, we propose a structural filtration for a given simplicial complex, which allows us to calculate the persistent homology of the complex with respect to this filtration. These tools are useful in certain situations where the usual topological data analysis techniques are not easily applicable.
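
The correspondence itself is easy to compute in small cases: the Stanley-Reisner ideal of a simplicial complex is generated by the monomials supported on its minimal non-faces. The sketch below computes those generators from a facet list (the filtration proposed in the talk is not reproduced here).

```python
from itertools import combinations

# The Stanley-Reisner ideal of a simplicial complex is generated by the
# monomials x_S for the minimal non-faces S.  Compute them from the facets.
def minimal_nonfaces(vertices, facets):
    facets = [frozenset(f) for f in facets]
    subsets = [frozenset(c) for r in range(1, len(vertices) + 1)
               for c in combinations(sorted(vertices), r)]
    nonfaces = [s for s in subsets if not any(s <= f for f in facets)]
    return [s for s in nonfaces if not any(t < s for t in nonfaces)]

# Hollow triangle: all three edges of {1,2,3}, but not the filled 2-simplex.
gens = minimal_nonfaces({1, 2, 3}, [{1, 2}, {1, 3}, {2, 3}])
print(gens)  # the single minimal non-face {1,2,3}: ideal generated by x1*x2*x3
```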

**21 February 2023** **ONLINE** [ Recording ]

11:15 – 12:00 Thanasis Zoumpekas (UB) *Exploring superquadrics for 3D shape parsing* [ Slides ]

Abstract: The ability to learn complex 3D geometrical shapes using primitives facilitates the analysis of 3D shapes, which can have wide-ranging applications in various fields, for instance, object recognition and segmentation, 3D modeling and animation, medical imaging, and virtual and augmented reality. In addition, unsupervised learning of part-based representations of 3D shapes enhances the discovery of structural relationships and enables generative modeling. Superquadrics are a class of geometric primitives that can take on complex forms and are used to represent 3D shapes. Recently, Paschalidou et al. [1] proposed the utilization of superquadrics to learn 3D shape parsing into consistent 3D representations without supervision. Their method learns the geometric parameters of superquadrics and outperforms other methods that use cuboids to represent parts of a 3D shape. They use a volumetric representation, i.e., voxelized meshes as input, containing rich geometrical information. In this talk, we will cover the basics around superquadrics, discuss recent studies on 3D shape parsing, and open the discussion for potential research directions.

[1] D. Paschalidou, A. O. Ulusoy, and A. Geiger, *Superquadrics revisited: Learning 3D shape parsing beyond cuboids*, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10344–10353, 2019
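
For concreteness, the standard superquadric inside-outside function used in this line of work is shown below, with size parameters a and shape exponents ε1, ε2 (the values chosen here are arbitrary): F < 1 inside the shape, F = 1 on its surface, F > 1 outside.

```python
import numpy as np

# Superquadric inside-outside function: F < 1 inside, F = 1 on the surface,
# F > 1 outside.  a = per-axis sizes, eps = shape exponents.
def superquadric_F(p, a=(1.0, 1.0, 1.0), eps=(0.5, 0.5)):
    x, y, z = (abs(p[i]) / a[i] for i in range(3))
    e1, e2 = eps
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

print(superquadric_F((0.0, 0.0, 0.0)))   # 0.0 -> inside
print(superquadric_F((1.0, 0.0, 0.0)))   # 1.0 -> on the surface
print(superquadric_F((2.0, 2.0, 2.0)))   # > 1 -> outside
```

Shape-parsing networks in the style of [1] regress a, ε1, ε2 (plus a pose) per primitive and train against a distance derived from this function.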

## TML Fall 2022

**7 February 2023**

11:15 – 12:00 Ignacio Javier Morera Barrios (UB) *Enhancing cardiac image segmentation through persistent homology regularization* [ Slides ]

Abstract: Cardiovascular diseases are a major cause of death and disability. Deep learning-based segmentation methods could help to reduce their severity by aiding in early diagnosing. In response, some promising deep learning methods have emerged to increase the accuracy of segmentation. Among them, incorporating prior knowledge about the topology of segmented objects has recently come into the spotlight. One of the most promising methods is topological data analysis.

**24 January 2023** [ Recording ]

11:15 – 12:00 Adrián Inés Armas (Universidad de La Rioja) *Semi-supervised machine learning: A homological approach* [ Slides ]

Abstract: Machine learning and deep learning methods have become the state-of-the-art approach to data classification tasks. To use these methods, it is necessary to acquire and label a considerable amount of data, which is not straightforward in some fields. This challenge can be tackled by semi-supervised learning methods, which take advantage of both labelled and unlabelled data. In this talk, we present a new semi-supervised learning method based on techniques from Topological Data Analysis, together with a thorough analysis of the method on five structured datasets. The results show that this semi-supervised method outperforms both models trained only on manually labelled data and classical semi-supervised learning methods.

**13 December 2022** [ Recording ]

11:15 – 12:00 Aina Ferrà (UB) *Topological analysis in a neuroimaging study* [ Slides ]

**29 November 2022** [ Recording ]

11:15 – 12:00 Rubén Ballester (UB) *Quiver representations of neural networks* [ Slides ]

**22 November 2022** [ Recording ]

11:15 – 12:00 Polyxeni Gkontra (UB) *Cardiovascular magnetic resonance radiomics: First successful applications* [ Slides ]

Abstract: Radiomic analysis refers to the extraction of quantitative features from medical images. After successful applications in oncology, radiomics is gaining increasing attention for the analysis of cardiac magnetic resonance (CMR) images, the reference imaging modality for the assessment of cardiac structure and function. In this talk, we will introduce the basics of radiomics with particular focus on their application to CMR. Moreover, we will dive into example applications of CMR radiomics, from unraveling unknown associations between imaging and non-imaging phenotypes to the prediction of cardiovascular diseases and of biological heart age. Last but not least, we will discuss advantages over conventional approaches, as well as pitfalls and challenges.

**25 October 2022** [ Recording ]

11:15 – 12:00 Elchanan Solomon (Duke University) *A convolutional persistence transform* [ Slides ]

Abstract: By viewing an image as a function on a cubical complex, we can compute persistence features for image data and use them in classification and regression tasks. This approach has a number of limitations: it is unstable to outliers, expensive to compute, inflexible, and non-injective. Instability and computational complexity can be addressed using downsampling techniques, as in [S-Wagner-Bendich, 2021]. Inflexibility means that the invariant is not context-dependent, so that the same diagram is produced for a fixed image, regardless of the ambient data set and learning task. Non-injectivity means that different images can produce identical diagrams, so there is necessarily some loss of information when using image persistence, and this loss of information is hard to make precise.

We propose to add a very simple step in the image persistence pipeline: pre-processing our images via various convolutions and thereby computing multiple associated persistence diagrams for each image. In addition to improving robustness and computational complexity, the resulting pipeline is more flexible because we are allowed to choose/learn the convolutions in question. Moreover, if we use enough convolutions the invariant becomes injective, so there is no loss of information in going from an image to its convolutional persistence. We conclude with some experiments showing that this is borne out in practice, and that convolutional persistence can improve image classification accuracy even when using random convolutions and extremely simple persistence vectorizations.
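
The pipeline can be sketched end to end with numpy alone, using a hand-rolled union-find for 0-dimensional sublevel-set persistence; in practice one would use a cubical-persistence library, and the kernel here is an arbitrary stand-in for the learned or random convolutions discussed above.

```python
import numpy as np

def conv2_valid(img, k):
    # Plain 'valid' 2D cross-correlation with a small kernel.
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def sublevel_h0(img):
    # 0-dimensional sublevel-set persistence of a grayscale image via
    # union-find: sweep pixels by increasing value; when two components
    # merge, the one with the higher (younger) birth value dies (elder rule).
    H, W = img.shape
    parent, birth, bars = {}, {}, []
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for idx in np.argsort(img, axis=None, kind="stable"):
        i, j = divmod(int(idx), W)
        v = float(img[i, j])
        parent[(i, j)], birth[(i, j)] = (i, j), v
        for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if nb in parent:
                ra, rb = find((i, j)), find(nb)
                if ra == rb:
                    continue
                young, old = (ra, rb) if birth[ra] >= birth[rb] else (rb, ra)
                if birth[young] < v:
                    bars.append((birth[young], v))   # finite H0 bar
                parent[young] = old
    bars.append((min(birth.values()), np.inf))       # essential component
    return bars

# Convolutional persistence pipeline: filter the image, then take persistence.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = np.array([[0.25, 0.25], [0.25, 0.25]])      # an arbitrary smoothing filter
diagram = sublevel_h0(conv2_valid(image, kernel))
```

Repeating the last two lines for several kernels yields the multiple diagrams per image that the transform vectorizes.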

**18 October 2022** [ Recording ]

11:15 – 12:00 Esmeralda Ruiz (UB) *Graph-based segmentation methods: From a classical approach to graph convolutional networks* [ Slides ]

Abstract: We will go through classical segmentation methods focusing on graph-based segmentation methods and how those methods have evolved with the rise in popularity of Deep Learning techniques. Traditional graph cuts, normalized cuts and random walk algorithms will be explored and described in detail. The evolution of those algorithms in order to integrate the potential of Deep Learning techniques will also be discussed by presenting the current line of research combining both methodologies and showing the potential of hybrid methods.
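
Of the classical methods mentioned, the random walker admits a particularly compact sketch: in Grady's formulation, the foreground probability of each unlabeled node solves a Dirichlet problem in the graph Laplacian. The tiny path graph below is a made-up example standing in for a pixel-adjacency graph.

```python
import numpy as np

# Random walker segmentation on a path graph 0-1-2-3-4 with unit edge
# weights; node 0 is a foreground seed, node 4 a background seed.  The
# probability of reaching the foreground seed first solves L_uu x_u = -L_us x_s.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W                 # graph Laplacian

seeds, unlabeled = [0, 4], [1, 2, 3]
x_s = np.array([1.0, 0.0])                # seed labels: fg = 1, bg = 0
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_us = L[np.ix_(unlabeled, seeds)]
x_u = np.linalg.solve(L_uu, -L_us @ x_s)
print(x_u)  # probabilities decrease with distance from the foreground seed
```

On image graphs the edge weights are derived from intensity differences, which is what bends these harmonic probabilities around object boundaries.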

**4 October 2022** [ Recording ]

11:15 – 12:00 Julian Pfeifle (UPC) *Direct sum configurations in weight space* [ Slides ]

Abstract: In a recent paper entitled *Toy models of superposition*, the authors posit that neural networks learn correlated features by embedding them as direct sums of well spread-out vector configurations. We discuss how to validate this claim using the embedding of the Grassmannian as projection matrices.
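
The geometric ingredient, vectors that are "well spread out", can be illustrated directly: many more than d random unit vectors fit in R^d with small pairwise interference, which is what lets networks superpose features. Dimensions and counts below are arbitrary.

```python
import numpy as np

# Superposition's geometric ingredient: 40 random unit vectors in R^256 are
# almost orthogonal, so many more features than dimensions can coexist with
# only weak pairwise interference.
rng = np.random.default_rng(0)
n, d = 40, 256
V = rng.normal(size=(n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # n unit vectors in R^d

G = V @ V.T                                     # Gram matrix of cosines
off_diag = G[~np.eye(n, dtype=bool)]
print(np.abs(off_diag).max())  # small: features interfere only weakly
```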

## TML Spring 2022

**20 May 2022** [ Recording ]

11:00 – 11:45 Rocío González (Universidad de Sevilla) *Computational topology inside the REXASIPRO project (REliable & eXplAinable Swarm Intelligence for People with Reduced mObility)* [ Slides ]

11:45 – 12:15 Questions and discussion

**13 May 2022** [ Recording ]

9:30 – 10:00 Rubén Ballester (UB) *Practical examples using Giotto-TDA*

Jupyter notebook of the session: Persistent homology of Vietoris-Rips and Čech complexes

Example: MNIST classification with Giotto-TDA
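
For the degree-0 part of the notebook's computation, a library-free sketch: H0 classes of the Vietoris–Rips filtration are born at scale 0 and die at the merge heights of the point cloud, i.e. the edge lengths of its minimum spanning tree (Giotto-TDA's `VietorisRipsPersistence` returns these together with higher-dimensional classes).

```python
import numpy as np

# Degree-0 persistent homology of the Vietoris-Rips filtration: every point
# is born at scale 0, and components merge (classes die) at the edge lengths
# of the minimum spanning tree, computed here with Prim's algorithm.
def h0_deaths(points):
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()                   # distance of each point to the tree
    deaths = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        deaths.append(float(best[j]))    # one H0 class dies at this scale
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return sorted(deaths)                # one essential class persists forever

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
print(h0_deaths(pts))  # [1.0, 3.0]
```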

10:00 – 10:45 Discussion about collaboration possibilities

**6 May 2022** [ Recording ]

9:30 – 10:15 Josep Vives (UB) *Persistence landscapes and correlation: An application to financial multivariate time series* [ Slides ]

10:15 – 10:45 Questions and discussion

**22 April 2022** [ Recording ]

9:30 – 10:15 Víctor Manuel Campello (UB) *Conditional generative adversarial networks for cardiac aging synthesis using cross-sectional data* [ Slides ]

10:15 – 10:45 Questions and discussion

**25 March 2022** [ Recording ]

9:30 – 10:15 Marina Camacho (UB) *Early diagnosis of dementia prediction using accessible exposome features: A comparison of statistical and machine learning models* [ Slides ]

10:15 – 10:45 Questions and discussion

**11 March 2022**

9:30 – 10:15 Vien Ngoc Dang (UB) *Fairness and bias correction in machine learning for healthcare data: A case study in adolescent mental illness*

10:15 – 10:45 Questions and discussion

**25 February 2022** [ Recording ]

9:30 – 10:15 Rubén Ballester (UB) *The topology of neural networks: How topology helps us to understand generalization and future challenges* [ Slides ]

10:15 – 10:45 Questions and discussion

**11 February 2022** [ Recording 1 ] [ Recording 2 ]

9:30 – 10:00 Ignasi Cos (UB) *Assessing the effect of consequence on cognitive decisions: A theoretical-experimental framework*

10:15 – 10:45 Gloria Cecchini (UB) *A topological classifier for neural data* [ Slides ]

**28 January 2022** [ Recording 1 ] [ Recording 2 ]

9:30 – 10:00 Lidia Garrucho (UB) *Domain generalization in deep learning based mass detection in mammography: A large multi-center study*

10:15 – 10:45 Akis Linardos (UB) *Federated learning for multi-center breast cancer classification in the real world* [ Slides ]

**14 January 2022** [ Recording ]

10:00 – 10:30 Carles Casacuberta (UB) *A theoretical and practical overview of homological persistence* [ Slides ]

10:45 – 11:15 Aina Ferrà (UB) *Classification based on topological data analysis* [ Slides ]