MIT (Computational Neuroscience)
My lab investigates the computational mechanisms underlying human object recognition as a gateway to understanding the neural bases of intelligent behavior. The ability to recognize objects is a fundamental cognitive task in every sensory modality, e.g., for friend/foe discrimination, social communication, reading, or hearing, and its loss or impairment is associated with a number of neural disorders (e.g., in autism, dyslexia, or schizophrenia). Yet despite the apparent ease with which we see and hear, object recognition is widely acknowledged to be a very difficult computational problem. It is even more difficult from a biological systems perspective, since it involves several levels of understanding, from the computational level, through the levels of cellular and biophysical mechanisms and neuronal circuits, up to the level of behavior.
In our work, we combine computational models with human behavioral and fMRI data (and, most recently, EEG and NIRS data) from our lab and collaborators, as well as with single-unit data obtained in collaboration with physiology labs. This comprehensive approach addresses one of the major challenges in neuroscience today: the need to combine experimental data from a range of approaches in order to develop a rigorous and predictive model of human brain function that quantitatively and mechanistically links neurons to behavior. This is of interest not only for basic research, but also for the investigation of the neural bases of behavioral deficits in mental disorders. Finally, understanding the neural mechanisms underlying object recognition in the brain is also of significant interest for Artificial Intelligence, as the capabilities of pattern recognition systems (e.g., in machine vision or speech recognition) still lag far behind those of their human counterparts in terms of robustness, flexibility, and the ability to learn from few exemplars.
We are especially interested in understanding the influence of visual experience and task demands on visual processing, in the form of long-term plasticity as well as short-term attentional and task-dependent modulations. Most of our work is focused on the domain of vision, reflecting its status as the most accessible sensory modality. However, given that similar problems of specificity and invariance have to be solved in other sensory modalities as well (for instance, in audition), it is likely that similar computational principles underlie processing in those domains, and we are interested in understanding the commonalities and differences in processing between modalities.
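To make the specificity/invariance problem mentioned above concrete, the following is a minimal toy sketch, not the lab's actual model: a template-matching unit responds only when an object appears at one position, while pooling over positions with a max operation (a mechanism used in hierarchical models of object recognition) yields a response that is invariant to the object's position yet still selective for its identity. All names and stimuli here are illustrative assumptions.

```python
def match(pattern, image, offset):
    """Response of a position-specific template unit: fraction of
    pattern elements matching the image at a given offset."""
    hits = sum(1 for i, p in enumerate(pattern)
               if 0 <= i + offset < len(image) and image[i + offset] == p)
    return hits / len(pattern)

def invariant_unit(pattern, image):
    """Pool over positions with max: the unit stays selective for the
    pattern's shape but becomes tolerant to where it appears."""
    return max(match(pattern, image, off)
               for off in range(len(image) - len(pattern) + 1))

# Toy one-dimensional "images" (illustrative only)
face = [1, 0, 1]                  # template for the target object
scene_left  = [1, 0, 1, 0, 0, 0]  # target at the left
scene_right = [0, 0, 0, 1, 0, 1]  # same target shifted right
scene_other = [1, 1, 1, 0, 0, 0]  # a different object

print(invariant_unit(face, scene_left))   # 1.0: full response
print(invariant_unit(face, scene_right))  # 1.0: invariant to position
print(invariant_unit(face, scene_other))  # lower: still shape-selective
```

The point of the sketch is the trade-off itself: a single template is specific but not invariant, and pooling buys invariance without discarding selectivity, which is one candidate computational principle that could generalize across sensory modalities.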