Asier Mujika

ETH Zürich
Asier Mujika
Institut für Theoretische Informatik
CAB J 21.2
Universitätstrasse 6
8092 Zürich

Phone: +41 44 632 74 03
E-Mail: asierm@ethz.ch

Research Interests

I am a second-year PhD student in the group of Prof. Angelika Steger. My research goal is to understand and build intelligence. To this end, I work at the intersection of neuroscience and deep learning: I take inspiration from the brain to build new learning algorithms, and from machine learning techniques to find parallels in the brain.

Publications

Accepted Publications

Optimal Kronecker-Sum Approximation of Real Time Recurrent Learning
(joint work with Frederik Benzing, Marcelo Gauy, Anders Martinsson and Angelika Steger)
International Conference on Machine Learning (ICML), 2019

Approximating Real-Time Recurrent Learning with Random Kronecker Factors
(joint work with Florian Meier and Angelika Steger)
Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), 2018
Oral presentation at the Credit Assignment in Deep Learning and Deep Reinforcement Learning Workshop, ICML 2018

On the origin of lognormal network synchrony in CA1
(joint work with Felix Weissenberger, Hafsteinn Einarsson, Marcelo Gauy, Johannes Lengler and Angelika Steger)
Hippocampus

The linear hidden subset problem for the (1+1) EA with scheduled and adaptive mutation rates
(joint work with Hafsteinn Einarsson, Marcelo Matheus Gauy, Johannes Lengler, Florian Meier, Angelika Steger and Felix Weissenberger)
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2018

Fast-Slow Recurrent Neural Networks
(joint work with Florian Meier and Angelika Steger)
Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), 2017

Workshops

Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions
(joint work with Florian Meier, Marcelo Gauy and Angelika Steger)
Deep Reinforcement Learning Workshop at the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS), 2019
Optimization Foundations of Reinforcement Learning Workshop at the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS), 2019

Multi-task learning with deep model based reinforcement learning
Deep Reinforcement Learning Workshop at the 30th Annual Conference on Neural Information Processing Systems (NIPS), 2016

Preprints

Decoupling Hierarchical Recurrent Neural Networks With Locally Computable Losses
(joint work with Felix Weissenberger and Angelika Steger)