I want to understand how our brains enable us to see. Opening our eyes gives us an almost instant sense of our surroundings, so visual computations must be rapid. This suggests that the system uses fast feedforward computations to map from retinal images to abstract representations of the scene and the objects it contains. However, understanding the structure of the scene, the relationships among the objects, and their implications requires relating the visual signals to prior knowledge about the world in a deep and highly flexible way. This suggests that vision is also an inference process that involves the active construction of internal models reflecting both prior knowledge and present evidence. In machine learning, these two paradigms of perceptual processing have been explored somewhat separately, with feedforward computational models (which still dominate computer vision) and probabilistic generative models (which more cleanly separate the roles of prior knowledge and the inference algorithm, and promise powerful generalization to new perceptual challenges). The primate brain combines the computational efficiency of the former paradigm with the statistical efficiency of the latter. It appears to seamlessly integrate these two computational paradigms using an algorithm that is yet to be discovered. This algorithm is the central mystery that drives my interest in vision.
My lab uses deep neural networks, a brain-inspired artificial intelligence technology, to build computer models that can see and recognize objects in ways similar to biological visual systems. We take a top-down engineering approach, designing models that perform complex visual tasks and match human behavioral performance. In addition, the models are constrained, from the bottom up, by neuroscientific data. They must use only neurobiologically plausible dynamic components, and they must be able to explain the internal image representations and dynamic transformations observed in biological brains with techniques including functional magnetic resonance imaging, magnetoencephalography, and cell-array recordings.
Beyond building computational models of biological vision, my lab develops methods for testing such models with brain and behavioral data. Just as vision must relate complex models of the world to the massive stream of retinal data, computational neuroscience must test complex neural network models against the increasingly rich measurements of brain activity and behavior that we can now acquire in humans and animals. We are developing exploratory visualization methods for high-dimensional data, as well as confirmatory methods for inferential comparisons among brain-computational models.
Kriegeskorte is a Professor of Psychology and Neuroscience at Columbia University. He is an affiliated member of the Department of Electrical Engineering. He is also a Principal Investigator and Director of Cognitive Imaging at the Zuckerman Mind Brain Behavior Institute. Kriegeskorte is a co-founder of the conference “Cognitive Computational Neuroscience”, which had its inaugural meeting in September 2017 at Columbia University. Kriegeskorte received his MA from the University of Cologne in Germany, did his PhD thesis research at Maastricht University in the Netherlands, and worked as a postdoctoral fellow at the University of Minnesota and at the US National Institute of Mental Health. From 2009 to 2017, he was a Programme Leader at the Medical Research Council's Cognition and Brain Sciences Unit at the University of Cambridge, UK.