Randomness in Neuroscience

To build a reliable system, it seems natural to require components that behave precisely and predictably. Neuroscientists, however, know very well that neurons, the building blocks of the brain, come with huge variance in their properties, and that these properties also change over time. Synapses, the connections between neurons, are highly unreliable at forwarding signals: some 40%-80% of the time they simply ignore an incoming signal instead of forwarding it. There are many indications that this randomness in the brain is not a 'deficiency' that needs to be overcome but, quite to the contrary, an essential design principle. This is also our motivation: to exhibit examples of the usefulness of randomness in various aspects of neuroscience.
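As a toy illustration (our own sketch, not code from any of the publications below), such an unreliable synapse can be modelled as an independent Bernoulli gate on each connection. The Python snippet below assumes a hypothetical 60% failure rate and shows that, averaged over many trials, the postsynaptic input is simply the fully reliable input scaled by the success probability, so the signal survives the noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_synapses(x, W, failure_rate=0.6):
        # Toy model: each synapse independently drops the incoming signal
        # with probability `failure_rate`; surviving synapses transmit as usual.
        gate = rng.random(W.shape) >= failure_rate
        return (W * gate) @ x

    x = rng.random(100)             # presynaptic activity (arbitrary)
    W = rng.normal(size=(10, 100))  # synaptic weights (arbitrary)
    avg = np.mean([stochastic_synapses(x, W) for _ in range(2000)], axis=0)

    # The trial average approaches the reliable response W @ x scaled by the
    # success probability 0.4; the deviation shrinks as trials increase.
    print(np.abs(avg - 0.4 * (W @ x)).max())

In expectation each synapse simply behaves like its weight scaled by the success probability, which is one way to see why stochastic transmission need not destroy information; the failure rate and dimensions above are arbitrary choices for illustration.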

Grants

Together with João Sacramento, we are funded by an ETH Research Grant.

Neuro-MiSe (weekly seminar)

Subscribe to the mailing list here to learn about our weekly talks.

Members (Neuro-CSA)

Angelika Steger
Johannes Lengler
Asier Mujika
Robert Meier
Johannes von Oswald
Nicolas Zucchet
Yassir Akram
Alexander Meulemans
Seijin Kobayashi
Simon Schug

Publications

A Simple Optimal Algorithm for the 2-Arm Bandit Problem
M. Larcher, R. Meier and A. Steger
2023 Symposium on Simplicity in Algorithms (SOSA)

Random initialisations performing above chance and how to find them
F. Benzing, S. Schug, R. Meier, J. von Oswald, Y. Akram, N. Zucchet, L. Aitchison*, A. Steger*
Workshop paper at the 14th International OPT Workshop on Optimization for Machine Learning, 2022.

Open-Ended Reinforcement Learning with Neural Reward Functions (arXiv version)
R. Meier*, A. Mujika*
to appear at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Our Poster at the Workshop "Agent Learning in Open-Endedness" at ICLR 2022

The least-control principle for learning at equilibrium
A. Meulemans*, N. Zucchet*, S. Kobayashi*, J. von Oswald and J. Sacramento
selected as an oral, to appear at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

A contrastive rule for meta-learning
N. Zucchet*, S. Schug*, J. von Oswald*, D. Zhao and J. Sacramento
to appear at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
S. Kobayashi, P. Vilimelis Aceituno, J. von Oswald
to appear at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

Beyond backpropagation: implicit gradients for bilevel optimization
N. Zucchet and J. Sacramento
to appear in Neural Computation.

Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
F. Benzing
ICML, 2022.

Presynaptic stochasticity improves energy efficiency and helps alleviate the stability-plasticity dilemma
S. Schug*, F. Benzing*, A. Steger
eLife, 10/2021.

Learning where to learn: Gradient sparsity in meta and continual learning
J. von Oswald*, D. Zhao*, S. Kobayashi, S. Schug, M. Caccia, N. Zucchet and J. Sacramento
35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021.

Adaptive Tuning Curve Widths Improve Sample Efficient Learning
F. Meier, R. Dang-Nhu, and A. Steger
Frontiers in Computational Neuroscience, 2020

Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions
F. Meier*, A. Mujika*, M. Gauy and A. Steger
Deep Reinforcement Learning Workshop at NeurIPS and Workshop on Optimization Foundation for Reinforcement Learning at NeurIPS, 2019

Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses
A. Mujika*, F. Weissenberger* and A. Steger

Optimal Kronecker-Sum Approximation of Real Time Recurrent Learning
F. Benzing*, M. Gauy*, A. Mujika, A. Martinsson and A. Steger
ICML, 2019

Mutual Inhibition with Few Inhibitory Cells via Nonlinear Inhibitory Synaptic Interaction
F. Weissenberger, M. Gauy, X. Zou and A. Steger
Neural Computation, 2019

A hippocampal model for behavioral time acquisition and fast bidirectional replay of spatio-temporal memory sequences
M. Gauy, H. Einarsson, J. Lengler, F. Meier, F. Weissenberger, M. F. Yanik and A. Steger
Frontiers in Neuroscience, Systems Biology, 2018.

On the origin of lognormal network synchrony in CA1
F. Weissenberger, H. Einarsson, M. Gauy, F. Meier, A. Mujika, J. Lengler and A. Steger
Hippocampus, 2018

Approximating Real-Time Recurrent Learning with Random Kronecker Factors
A. Mujika, F. Meier, and A. Steger
NIPS, 2018

Voltage dependence of synaptic plasticity is essential for rate based learning with short stimuli
F. Weissenberger, M. Gauy, F. Meier, J. Lengler, H. Einarsson, and A. Steger
Scientific Reports, 2018

Fast-Slow Recurrent Neural Networks
A. Mujika, F. Meier, and A. Steger
NIPS, 2017

Long synfire chains emerge by spike-timing dependent plasticity modulated by population activity
F. Weissenberger, F. Meier, J. Lengler, H. Einarsson, and A. Steger
International Journal of Neural Systems, 2017

A model of fast Hebbian spike latency normalization
H. Einarsson, M. Gauy, J. Lengler, and A. Steger
Frontiers in Computational Neuroscience, 2017

Multiassociative Memory: Recurrent Synapses Increase Storage Capacity
M. Gauy, F. Meier, and A. Steger
Neural Computation, 2017

Note on the coefficient of variations of neuronal spike trains
A. Steger and J. Lengler
Biological Cybernetics, 2017

Randomness as a Building Block for Reproducibility in Local Cortical Networks
J. Lengler and A. Steger
In Reproducibility: Principles, Problems, Practices, and Prospects, Wiley, 2016. Editors: H. Atmanspacher, S. Maasen.

Normalization phenomena in asynchronous networks
A. Karbasi, J. Lengler, and A. Steger
In Proceedings of the 42nd International Conference on Automata, Languages, and Programming (ICALP '15), 2015, 688-700.

Bootstrap Percolation with Inhibition (preprint)
H. Einarsson, J. Lengler, F. Mousset, K. Panagiotou, and A. Steger

A high-capacity model for one shot association learning in the brain
H. Einarsson, J. Lengler, and A. Steger
Frontiers in Computational Neuroscience, 07 November 2014.

Reliable neuronal systems: the importance of heterogeneity
J. Lengler, F. Jug, and A. Steger
PLOS ONE, December 2013.

Recurrent competitive networks can learn locally excitatory topologies
M. Cook, F. Jug, and A. Steger
In Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), 2012, 1-8.

Interacting maps for fast visual interpretation
M. Cook, L. Gugelmann, F. Jug, C. Krautz, and A. Steger
In Proceedings of the International Joint Conference on Neural Networks (IJCNN '11), 2011, 770-776.

Neuronal Projections Can Be Sharpened by a Biologically Plausible Learning Mechanism
M. Cook, F. Jug, and C. Krautz
In Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN '11)
Lecture Notes in Computer Science 6791, 2011, 101-108.

Unsupervised Learning of Relations
M. Cook, F. Jug, C. Krautz, and A. Steger
Lecture Notes in Computer Science (ICANN 2010) 6352, 2010, 164-173.