Research Summary: My intellectual interests revolve around a central theme: how do internal mental (brain) states determine the way in which external information is processed? I explore this broad motif of the interaction between endogenous (top-down) and exogenous (bottom-up) determinants of cognition and behavior along two main avenues, investigating the computational and neural mechanisms of (i) cognitive control and (ii) visual cognition.
(i) Invertebrates and lower mammals have limited behavioral repertoires, composed of small sets of relatively inflexible stimulus-response associations. By contrast, higher mammals, and in particular humans, have evolved highly flexible behavior: depending on our current goals and context, we can respond to the same stimulus in myriad ways, and can override habitual stimulus-response associations in favor of temporarily more adaptive actions. This capacity, referred to as cognitive control, represents a key mystery of cognitive neuroscience and is a central focus of my research. I posit that cognitive control reflects complex interactions among numerous, currently ill-defined, evaluative and regulatory mechanisms, which I aim to characterize both in terms of a cognitive ontology of control processes and in terms of their neural implementation. I pursue this aim using noninvasive neuroimaging and neurostimulation techniques in human subjects performing tasks that independently manipulate different control demands.
(ii) The fact that we can reliably recognize and navigate our surroundings is astonishing. Our visual environment is too data-rich to be processed in any great detail, and it is also inherently ambiguous: the same retinal image can be cast by countless different stimuli. Von Helmholtz concluded some 130 years ago that visual cognition must therefore rely on inference, the top-down use of prior knowledge to interpret a given percept. In spite of this insight, 20th-century visual neuroscience has been resolutely feed-forward, treating visual cognition as emerging purely from an assemblage of bottom-up feature detectors. A major interest of mine lies in challenging this notion. I do so by adjudicating empirically between neural and computational hypotheses derived from this traditional perspective and those derived from recent rival models that, in the Helmholtzian tradition, instead view vision as a heavily top-down, predictive process. To this end, I combine noninvasive neuroimaging techniques in human subjects with probabilistic task-structure manipulations to gauge the influence of perceptual expectations on visual cognition and its neural substrates.