We are interested in how the brain uses prediction and context to integrate information from different sensory systems into multimodal representations of the world.

With Petra Vetter and Fraser Smith, we have shown that abstract auditory information can be decoded from regions of human early visual cortex that process unimodal visual information. Using 3T fMRI in humans blind from birth, now also with Amir Amedi, we have further shown that visual imagery is not necessary for the feedback of auditory information into early visual cortex.

See "Decoding Natural Sounds in Early 'Visual' Cortex of Congenitally Blind Individuals."

In other studies, we have demonstrated that the capture of auditory motion by vision is represented by an activation shift from auditory to visual motion cortex.

See also "Retinotopic effects during spatial audio-visual integration" and "Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools."