**Carina Curto** (Penn State University, USA)

Title: Emergent dynamics from network connectivity: a minimal model

Abstract:

Many networks in the brain display internally-generated patterns of activity — that is, they exhibit emergent dynamics that are shaped by intrinsic properties of the network rather than inherited from an external input. While a common feature of these networks is an abundance of inhibition, the role of network connectivity in pattern generation remains unclear.

In this talk I will introduce Combinatorial Threshold-Linear Networks (CTLNs), which are simple “toy models” of recurrent networks consisting of threshold-linear neurons with binary inhibitory interactions. The dynamics of CTLNs are controlled solely by the structure of an underlying directed graph. By varying the graph, we observe a rich variety of emergent patterns including multistability, neuronal sequences, and complex rhythms. These patterns are reminiscent of population activity in cortex, hippocampus, and central pattern generators for locomotion. I will present some theorems about CTLNs, and explain how they allow us to predict features of the dynamics by examining properties of the underlying graph. Finally, I’ll show examples illustrating how these mathematical results guide us to engineer complex networks with prescribed dynamic patterns.
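The standard CTLN construction is simple enough to sketch directly: the weight matrix is built from the graph alone (weights of −1 + ε on edges, −1 − δ on non-edges), and the rates follow a threshold-linear ODE. The parameter values and the 3-cycle example below are illustrative choices, not necessarily those used in the talk.

```python
import numpy as np

def ctln_weights(A, eps=0.25, delta=0.5):
    """Build a CTLN weight matrix from a binary adjacency matrix A,
    where A[i, j] = 1 means there is an edge j -> i in the graph.
    Weights are 0 on the diagonal, -1 + eps on edges, -1 - delta
    on non-edges (eps, delta chosen here for illustration)."""
    W = np.where(A == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(W, theta=1.0, x0=None, dt=0.01, T=20.0):
    """Forward-Euler integration of dx/dt = -x + [W x + theta]_+."""
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# A directed 3-cycle (0 -> 1 -> 2 -> 0): a classic graph whose CTLN
# produces a limit cycle, i.e. a repeating neuronal sequence.
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
traj = simulate_ctln(ctln_weights(A), x0=[0.2, 0.0, 0.0])
```

Because all off-diagonal weights are inhibitory and the external drive θ is uniform, every qualitative feature of the resulting activity comes from which entries are −1 + ε versus −1 − δ, i.e. from the graph.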

**Rachel Kuske** (University of British Columbia, Canada)

Title: Non-smooth and non-autonomous dynamics revisited in the context of stochastic facilitation

Abstract:

There is a long history of studying reduced subsystems of larger neural (and biological) models. For example, dynamic bifurcations appear through reductions based on slow and fast subsystems, and integrate-and-fire models capture key features of threshold crossings. Asymptotic analyses and geometric singular perturbation theory are just some of the techniques used to understand the underlying complex nonlinear interactions. At the same time, new analyses of non-smooth dynamics and (quasi)-periodic fluctuations suggest scenarios where noise plays a critical role in selecting the behaviour. We outline some preliminary results which indicate new perspectives and open problems in considering the interplay of noise and these dynamical features in neural models.
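The integrate-and-fire reduction mentioned above also gives the simplest setting in which noise selects behaviour: a leaky integrate-and-fire neuron whose deterministic drive is subthreshold never fires on its own, so every threshold crossing is noise-induced. This toy sketch (parameter values are assumptions for illustration, not from the talk) makes that concrete.

```python
import numpy as np

def lif_spike_times(mu=0.9, sigma=0.3, tau=10.0, v_th=1.0, v_reset=0.0,
                    dt=0.1, T=1000.0, seed=0):
    """Leaky integrate-and-fire neuron with additive white noise,
    tau dV = (mu - V) dt + sigma sqrt(tau) dW (Euler-Maruyama).
    With mu < v_th the deterministic neuron never reaches threshold;
    noise alone produces the spikes (stochastic facilitation)."""
    rng = np.random.default_rng(seed)
    v, t, spikes = v_reset, 0.0, []
    for _ in range(int(T / dt)):
        v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

spikes = lif_spike_times()
```

Setting `sigma=0.0` yields no spikes at all, which is the cleanest way to see that the firing here is not a perturbed deterministic rhythm but a genuinely noise-selected behaviour.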

**Ruben Moreno-Bote** (Foundation Sant Joan de Deu, Spain)

Title: Dynamic explaining away and causal inference by spiking networks

Abstract:

While the brain uses spiking neurons for communication, theoretical research on brain computations has for the most part focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as probabilistic inference about objects, is largely unknown. In this talk I will show that a family of non-linear, high-dimensional quadratic optimization problems, which correspond to causal inference problems, can be solved exactly and efficiently by networks of spiking neurons. The network naturally implements the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and the neuron's spike-generating and reset non-linearities. The network infers the set of most likely causes from an observation using ‘dynamic explaining away’, which is implemented by tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification.

**Hiroya Nakao** (Tokyo Institute of Technology, Japan)

Title: Stochastic synchronization of rhythmic spatiotemporal patterns and network dynamics

Abstract:

Coupled-oscillator models have played important roles in studying various rhythmic activities in biological systems. Recently, the phase reduction method for limit-cycle oscillators, which is a simple and useful theoretical technique for analyzing synchronization dynamics of weakly coupled limit-cycle oscillators, has been extended to rhythmic spatiotemporal patterns in reaction-diffusion systems and to collective oscillations in networks of coupled dynamical systems. In this talk, I will outline some of the recent results on the phase reduction method for such non-conventional rhythmic dynamical systems. As an application, I will analyze stochastic synchronization induced by common or correlated noise in reaction-diffusion systems and in networks of coupled dynamical systems.
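The common-noise synchronization discussed here is already visible in the simplest phase-reduced model: two identical, uncoupled phase oscillators driven by the same noise realization tend to lock together. The phase sensitivity function Z(θ) = sin θ and all parameter values below are assumptions made for this sketch, not taken from the talk.

```python
import numpy as np

def common_noise_sync(omega=1.0, sigma=0.5, dt=0.002, T=400.0, seed=1):
    """Two identical phase oscillators driven by the SAME white noise,
    d(theta) = omega dt + sigma Z(theta) dW with Z(theta) = sin(theta)
    (an assumed sensitivity function).  Common noise contracts the
    phase difference: noise-induced synchronization without coupling."""
    rng = np.random.default_rng(seed)
    th1, th2 = 0.0, 2.0                            # start far apart
    for _ in range(int(T / dt)):
        dw = np.sqrt(dt) * rng.standard_normal()   # shared noise increment
        th1 += omega * dt + sigma * np.sin(th1) * dw
        th2 += omega * dt + sigma * np.sin(th2) * dw
    # wrapped phase difference in [-pi, pi]
    d = (th1 - th2 + np.pi) % (2 * np.pi) - np.pi
    return abs(d)

final_gap = common_noise_sync()
```

The contraction rate can be read off from the linearized difference dynamics, whose Lyapunov exponent is negative whenever Z is non-constant; running the two oscillators with independent noise realizations instead destroys the effect.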

**Hinke Osinga** (University of Auckland, New Zealand)

Title: Intrinsic excitability and the role of saddle slow manifolds

Abstract:

Excitable cells, such as neurons, exhibit complex oscillations in response to external input, e.g., from other neurons in a network. We consider the effect of a brief stimulation from the rest state of a minimal neuronal model with multiple time scales. The transient dynamics arising from such short current injections brings out the intrinsic bursting capabilities of the system. We focus on transient bursts, that is, the transient generation of one or more spikes, and use a simple polynomial model to illustrate our analysis. We take a geometric approach to explain how spikes arise and how their number changes as parameters are varied. We discuss how the onset of new spikes is controlled by stable manifolds of a slow manifold of saddle type. We give a precise definition of such a stable manifold and use numerical continuation of suitable two-point boundary value problems to approximate them. (Joint work with Krasimira Tsaneva-Atanasova (University of Exeter), Vivien Kirk, and Saeed Farjami (both University of Auckland).)

**Alex Roxin** (Centre de Recerca Matematica, Spain)

Title: A model of spatial learning in rodent hippocampus

Abstract:

Place cells of the rodent hippocampus fire action potentials when the animal traverses a particular spatial location in a given environment. Therefore, for any given trajectory one will observe a repeatable sequence of place cell activations as the animal explores. Interestingly, when the animal is quiescent or sleeping, one can observe similar sequences of activation, although at a highly compressed rate, known as “replays”. It is hypothesized that this replay underlies the process of memory consolidation whereby memories are “transferred” from hippocampus to cortex. However, it remains unclear how the memory of a particular environment is actually encoded in the place cell activity and what the mechanism for replay is.

Here I study how spike-timing dependent plasticity (STDP) during spatial exploration shapes the patterns of synaptic connectivity in model networks of place cells. I will show how a simple pairwise STDP rule can lead to the formation of attracting manifolds, essentially patterns of activity which represent the spatial environment learned. These states become spontaneously active when the animal is quiescent, reproducing the phenomenology of replays. Interestingly, the attractors are formed most rapidly when place cell activity is modulated by an ongoing oscillation. The optimal oscillation frequency can be calculated analytically, is directly related to the STDP rule, and for experimentally determined values of the STDP window in rodent slices gives values in the theta range.
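A pairwise STDP rule of the kind invoked here is conventionally written with exponential windows: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A minimal sketch, with amplitudes and time constants chosen for illustration rather than taken from the talk:

```python
import numpy as np

def stdp_dw(dt_post_minus_pre, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pairwise exponential STDP window (illustrative parameters).
    dt > 0: pre fires before post -> potentiation;
    dt < 0: post fires before pre -> depression.
    Times are in ms; the return value is the weight change."""
    dt = np.asarray(dt_post_minus_pre, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# pre 5 ms before post -> potentiation; post 5 ms before pre -> depression
dw_pot = stdp_dw(5.0)
dw_dep = stdp_dw(-5.0)
```

Because the window is temporally asymmetric, repeated traversals of a path potentiate synapses in the direction of travel, which is the basic mechanism by which exploration can imprint a directed structure onto the connectivity.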

**Jonathan Rubin** (University of Pittsburgh, USA)

Title: Identifying timescales in respiratory bursting dynamics

Abstract:

Mammalian respiration emerges from the rhythmic activity of neuronal populations in the brain stem. Detailed mathematical models have been developed for this system and have led to novel predictions and insights about respiratory rhythmogenesis and its control. At the same time, the study of these models has inspired new mathematical developments in several directions. In this talk, I will focus on particularly complicated activity patterns observed experimentally in respiratory neurons and computationally in models. I will address the questions of how to determine the timescale classes present in biologically detailed neuronal models and needed to produce particular outputs in a robust way.

**Eric Shea-Brown** (University of Washington, USA)

Title: Assembling collective activity in neural circuits

Abstract:

We often find coherent, or correlated, spiking in simultaneously recorded cells. What features of this correlation matter for neural coding? And what are their origins in circuit mechanisms and connectivity? I’ll report ongoing work on both of these questions. First, we study how correlated spiking arises among a class of retinal ganglion cells, and why this seems to add to the precision of encoded signals. Second, we use graphical and point-process tools to isolate the contribution of successively more complex network features to coherent spiking in neural populations.

**Si Wu** (Beijing Normal University, China)

Title: Continuous Attractor Neural Network: A Canonical Model for Neural Information Representation

Abstract:

Owing to its many computationally desirable properties, the model of continuous attractor neural networks (CANNs) has been successfully applied to describe the encoding of simple continuous features in neural systems, such as orientation, moving direction, head direction and spatial location of objects. Recent experimental and computational studies revealed that complex features of external inputs may also be encoded by low-dimensional CANNs embedded in the high-dimensional space of neural population activity. The new experimental data also confirmed the existence of the M-shaped correlation between neuronal responses, which is a correlation structure associated with the unique dynamics of CANNs. This body of evidence suggests that CANNs may serve as a canonical model for neural information representation. In this talk, I will introduce our recent studies on CANNs and the potential application of CANNs in brain-inspired computation.
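A minimal 1D ring CANN captures the key property behind this canonical-model claim: local Gaussian excitation plus global divisive inhibition sustains a localized bump of activity, and by translation symmetry an identical bump could sit at any position on the ring, giving a continuous family of attractor states. The connectivity shape and all parameter values below are illustrative assumptions, not those of the studies discussed.

```python
import numpy as np

def simulate_cann(n=128, a=0.5, j0=5.0, k=0.001, tau=1.0, dt=0.05, T=30.0):
    """Minimal 1D ring CANN: u is synaptic input, r the firing rate.
    Dynamics: tau du/dt = -u + J r, with r = [u]_+^2 / (1 + k * sum r),
    i.e. Gaussian recurrent excitation and divisive global inhibition
    (illustrative parameters)."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    d = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi  # ring distance
    J = j0 * np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a) * (2 * np.pi / n)
    u = np.exp(-x**2 / (2 * a**2))               # seed a bump at x = 0
    for _ in range(int(T / dt)):
        r = np.maximum(u, 0.0) ** 2
        r = r / (1.0 + k * r.sum())              # divisive normalization
        u = u + (dt / tau) * (-u + J @ r)
    return x, u

x, u = simulate_cann()
```

Seeding the bump at any other position produces the same profile centered there, which is exactly the marginal (neutral) stability along the attractor manifold that makes CANNs natural encoders of continuous features such as orientation or head direction.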