David Klindt
I am a Machine Learning Research Scientist.
If you are a student looking for an internship, a bachelor's or master's thesis, or other sorts of collaboration, feel free to reach out.
Email / Google Scholar / Github / LinkedIn / Twitter / YouTube
Research Interests
Our brains are remarkably good at perceiving and interpreting
inputs in a dynamic environment. The goal of my research is
to: a) study, with modern machine learning methods, how brains
perform intelligent perception, and b) use these insights to
build the next generation of machine learning models, with
the same flexibility and robustness, that identify
meaningful structure in high-dimensional sensory data
across different environments. I am particularly
interested in machine learning, computer vision, and
computational neuroscience. Recently, I have been working on
generative models, disentanglement (nonlinear ICA), domain
adaptation, robustness, topological data analysis, and
various topics in computational neuroscience.
Score-Based Generative Classifiers
Roland Zimmermann,
Lukas Schott,
Yang Song,
Benjamin Dunn,
David Klindt*
NeurIPS, 2021, Deep Generative Models and Downstream Applications Workshop
paper
Analysis by synthesis
performs classification using the class-conditional likelihoods of a
generative model. This approach has proven adversarially robust on MNIST,
but it has been difficult to scale to natural images. In this work,
we build on recent advances in score-based generative models and
show that they can be turned into competitive classifiers on natural
images. However, they are no longer adversarially robust, inviting further research.
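The decision rule behind analysis by synthesis is just Bayes' rule over per-class generative models. A minimal sketch (the function name and the uniform-prior default are illustrative, not from the paper):

```python
import numpy as np

def classify(log_likelihoods, log_priors=None):
    # log_likelihoods: (n_classes,) values of log p(x | y), one per class,
    # e.g. from one generative model per class (here: score-based models).
    ll = np.asarray(log_likelihoods, dtype=float)
    if log_priors is None:
        log_priors = np.zeros_like(ll)  # uniform prior over classes
    # Predict argmax_y [ log p(x | y) + log p(y) ]
    return int(np.argmax(ll + np.asarray(log_priors, dtype=float)))
```

With exact likelihoods this is the Bayes-optimal classifier; in practice the likelihoods are model estimates.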
Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience
Dominic Gonschorek,
Larissa Höfling,
Klaudia P. Szatko,
Katrin Franke,
Timm Schubert,
Benjamin Dunn,
Philipp Berens,
David Klindt,
Thomas Euler*
NeurIPS, 2021 (Spotlight)
paper /
code
Inter-experimental variability is commonly overlooked in systems neuroscience.
We show how this leads to problems in the case of retinal cell type classification.
As a solution, we propose a flexible model, based on recent machine learning
advances in domain adaptation, that removes this variability to unmask biological effects.
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
David Klindt*,
Lukas Schott*,
Yash Sharma*,
Ivan Ustyuzhaninov,
Wieland Brendel,
Matthias Bethge,
Dylan Paiton
ICLR, 2021 (Oral,
top 0.1% of submissions)
paper /
code /
talk
We show that accounting for the temporally sparse nature of
natural transitions leads to a proof of identifiability and to
reliable learning of disentangled representations, both on several
established benchmark datasets and on newly contributed
datasets with natural dynamics.
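As a rough illustration of a sparse-transition prior (not the paper's exact objective), one can penalize the L1 norm of latent changes between consecutive time steps, so that only a few latent dimensions change per transition:

```python
import numpy as np

def temporal_sparsity_penalty(z):
    # z: (T, d) latent trajectory over T time steps.
    # L1 penalty on latent transitions: most latent dimensions stay
    # constant between frames, a few change (temporally sparse transitions).
    dz = np.diff(z, axis=0)      # (T-1, d) frame-to-frame latent changes
    return float(np.abs(dz).sum())
```

Added to a reconstruction loss, such a penalty encourages exactly the sparse-change structure that the identifiability argument exploits.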
Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations
Yongrong Qiu,
Zhijian Zhao,
David Klindt,
Magdalena Kautzky,
Klaudia P. Szatko,
Frank Schaeffel,
Katharina Rifai,
Katrin Franke,
Laura Busse,
Thomas Euler
Current Biology, 2021
doi
We record natural visual scenes of mouse habitats and show
that the upper and lower visual fields have different UV and
green contrast statistics. This is reflected in mouse retinal
circuits and facilitates predator detection.
Modelling functional wiring and processing from retinal bipolar to ganglion cells
David Klindt*,
Cornelius Schröder*,
Anna Vlasits,
Katrin Franke,
Philipp Berens,
Thomas Euler
Cosyne, 2021
poster /
presentation
We extend our model of the inner retina (see below) to
predict the activity of ganglion cells, the next processing
layer in the retina. The model matches known connectivity and
stratification profiles.
System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina
Cornelius Schröder*,
David Klindt*,
Sarah Strauss,
Katrin Franke,
Matthias Bethge,
Thomas Euler,
Philipp Berens
NeurIPS, 2020 (Spotlight)
paper /
talk
We construct a mechanistically detailed yet fully
differentiable model of the inner retina. It performs well
against black-box deep learning models, reproduces essential
retinal circuit functions and drug effects, and makes new
cell-type-specific experimental predictions.
Unsupervised Learning of Image Manifolds with Mutual Information
David Klindt*,
Johannes Ballé*,
Jonathon Shlens,
Eero Simoncelli
NAISys, 2020
poster
We present a neural network layer and an unsupervised
objective function that learn sparse features
from natural images and project them into a low-dimensional
space while preserving their topology. Stacking such unsupervised
layers yields a model that is not fooled by shortcuts on a
benchmark dataset.
The temporal structure of the inner retina at a single glance
Zhijian Zhao*,
David Klindt*,
André Maia Chagas,
Klaudia P. Szatko,
Luke E. Rogerson,
Dario A. Protti,
Christian Behrens,
Deniz Dalkara,
Timm Schubert,
Matthias Bethge,
Katrin Franke,
Philipp Berens,
Alexander S. Ecker,
Thomas Euler
Scientific Reports, 2020
doi
Using a novel electrically tunable lens, we record from the entire
inner retina, which shows strong inter-experimental
variability. We construct models that adjust for differences in
response speed caused by temperature fluctuations across experiments.
These models recover most of the response variability and, for the
first time, allow a precise characterization of the kinetic
response gradient across the inner retina.
Adjusting For Batch Effects In Two Photon Imaging Recordings Of The Retinal Inner Plexiform Layer
David Klindt*,
Luke E. Rogerson*,
Zhijian Zhao,
Klaudia P. Szatko,
Katrin Franke,
Dmitry Kobak,
Alexander S. Ecker,
Matthias Bethge,
Philipp Berens,
Thomas Euler
Cosyne, 2019
poster
We demonstrate strong inter-experimental variability in
recordings of retinal bipolar cells and propose both a simple
linear method and a parametric method that adjust for these effects.
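A simple linear adjustment of this kind can be sketched as re-centering each experiment at the global mean; the function and variable names below are illustrative, not taken from the poster:

```python
import numpy as np

def remove_batch_means(responses, batch_ids):
    # responses: (n_cells, n_features) response measurements
    # batch_ids: (n_cells,) experiment label per cell
    # Shift each experiment so its mean matches the global mean,
    # removing additive per-experiment (batch) offsets.
    adjusted = np.asarray(responses, dtype=float).copy()
    batch_ids = np.asarray(batch_ids)
    global_mean = adjusted.mean(axis=0)
    for b in np.unique(batch_ids):
        idx = batch_ids == b
        adjusted[idx] += global_mean - adjusted[idx].mean(axis=0)
    return adjusted
```

This only corrects additive offsets; multiplicative or nonlinear batch effects need the richer parametric or domain-adaptation models mentioned above.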
Neural system identification for large
populations – separating “what” and “where”
David Klindt*,
Alexander S. Ecker,
Thomas Euler,
Matthias Bethge
NeurIPS, 2017
paper /
code
We build a model for neural system identification with
convolutional neural networks that not only outperforms the
previous state of the art in predictive performance but also
allows clustering neurons into distinct functional cell types.
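The "what"/"where" separation can be illustrated as a factorized readout of a CNN feature map: a spatial mask captures where a neuron pools its input, and a feature-weight vector captures what it responds to. A schematic sketch, not the paper's exact implementation:

```python
import numpy as np

def factorized_readout(features, spatial_mask, feature_weights):
    # features: (H, W, C) CNN feature map for one stimulus
    # spatial_mask: (H, W) "where" — the neuron's spatial pooling weights
    # feature_weights: (C,) "what" — which features drive the neuron
    # Pool each channel with the spatial mask, then weight the channels.
    pooled = np.tensordot(spatial_mask, features, axes=([0, 1], [0, 1]))  # (C,)
    return float(pooled @ np.asarray(feature_weights, dtype=float))
```

Because the "what" weights are shared across space, neurons with similar feature weights can be grouped into functional cell types.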
Does the way we read others' minds change over the lifespan? Insights from a massive web poll of cognitive skills from childhood to late adulthood
David Klindt*,
Marie Devaine,
Jean Daunizeau
Cortex, 2017
doi /
media coverage /
project
In a large-scale crowd-sourced project with more than 10,000
participants, we test the development and decline of cognitive
and social functions over the human lifespan.