Research

Develop imaging biomarkers for functional characterization, early diagnosis, and treatment of the atypical brain

We are actively pursuing research directions where our technical advances can translate into surrogate, and even clinically actionable, imaging biomarkers for the early diagnosis and treatment of cognitive brain disorders:

Alzheimer's disease and related dementias: Our long-term research goal is to identify and broadly deploy Alzheimer's disease (AD) biomarkers that are diagnostically effective, that elucidate mechanisms underlying the disorder in order to generate hypotheses for disease-modifying interventions, and that have potential for clinical translation. Our current objective is to develop a combinatorial MEG/EEG and PET-based biomarker for early detection and monitoring of AD. Early detection of dementia is of paramount importance for the treatment of AD, as evidenced by recent phase 2 and 3 clinical trials. (Collaborators: Quanzheng Li, Fernando Maestu, John Mosher, Katia Andrade, Elisabeth Breese Marsh, Richard Leahy)

Variability in the auditory-evoked neural response as a potential mechanism for dyslexia: The goal of this project is to investigate the role of neural variability in dyslexia. In particular, we explore whether trial-by-trial neural variability differs in the auditory and/or visual cortex of children with dyslexia compared to neurotypical children. Preliminary results indicate that dyslexia is associated with decreased consistency of the neural response to both auditory and visual stimuli. (Collaborators: John Gabrieli, Tracy Centanni, Sara Beach)
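Below is a minimal sketch of one way such trial-by-trial consistency could be quantified: each trial is correlated with the leave-one-out average evoked response, and the mean correlation is compared across groups. The data are simulated and all names are illustrative; this is not the study's actual analysis pipeline.

```python
# Minimal sketch: quantify trial-by-trial consistency of an evoked response.
# Assumes epochs are stored as a NumPy array (n_trials x n_times); all names
# and the simulated data below are illustrative, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)

def intertrial_consistency(epochs):
    """Mean correlation of each trial with the leave-one-out average evoked response."""
    n_trials = epochs.shape[0]
    corrs = []
    for i in range(n_trials):
        loo_evoked = np.delete(epochs, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(epochs[i], loo_evoked)[0, 1])
    return float(np.mean(corrs))

# Simulated auditory-evoked responses: same underlying waveform, different noise levels.
times = np.linspace(0, 0.5, 250)                      # 0-500 ms post-stimulus
template = np.exp(-((times - 0.1) / 0.02) ** 2)       # idealized early auditory component
typical  = template + 0.5 * rng.standard_normal((60, times.size))
dyslexia = template + 1.0 * rng.standard_normal((60, times.size))  # noisier single trials

print("typical consistency :", round(intertrial_consistency(typical), 3))
print("dyslexia consistency:", round(intertrial_consistency(dyslexia), 3))
```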

Sensitivity to speech distributional information in children with autism: This project investigates whether children with autism spectrum disorder are sensitive to probability cues in speech. In the typical language acquisition literature, ample evidence suggests that neurotypical children are exquisitely attuned to the distributional information embedded in speech and use it to learn various aspects of phonotactic and syntactic rules. Children with autism, however, show impaired performance on such tasks. We use an auditory mismatch paradigm (syllables ‘ba’ and ‘da’ delivered with different probabilities) to detect deficits in probabilistic learning. Preliminary findings reveal that impaired reading skills in autism are associated with atypical sensitivity to syllable frequency. (Collaborators: John Gabrieli, Zhenghan Qi)
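The sketch below illustrates the logic of the mismatch paradigm under simplified, simulated conditions: a syllable sequence with unequal probabilities is generated and the standard deviant-minus-standard contrast is computed. The probabilities, waveforms, and latencies are placeholders, not the experiment's actual stimuli or preprocessing.

```python
# Minimal sketch of an auditory mismatch (oddball) analysis: build a stimulus
# sequence with unequal syllable probabilities and contrast deviant vs. standard
# responses. Data are simulated; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_times = 400, 200
syllables = rng.choice(["ba", "da"], size=n_trials, p=[0.85, 0.15])  # standard vs. deviant

# Simulated single-trial responses: deviants carry an extra mismatch-like deflection.
times = np.linspace(0, 0.4, n_times)
base = np.sin(2 * np.pi * 4 * times)
mmn  = -0.8 * np.exp(-((times - 0.17) / 0.03) ** 2)   # mismatch-like component ~170 ms
epochs = np.stack([
    base + (mmn if s == "da" else 0.0) + 0.5 * rng.standard_normal(n_times)
    for s in syllables
])

standard_erp = epochs[syllables == "ba"].mean(axis=0)
deviant_erp  = epochs[syllables == "da"].mean(axis=0)
mismatch_wave = deviant_erp - standard_erp            # classic deviant-minus-standard contrast

peak_latency_ms = 1000 * times[np.argmin(mismatch_wave)]
print(f"mismatch peak amplitude {mismatch_wave.min():.2f} at ~{peak_latency_ms:.0f} ms")
```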

 

Bridge the gap between human and machine vision

The past five years have seen considerable progress in using deep neural networks (DNNs) to model responses in the visual cortex. DNNs are now the most successful biologically inspired models in computer vision, making them invaluable tools for studying the computations performed by the human visual system. Recent work has shown that these models achieve accuracy on par with human performance in many tasks. We have also shown that computer vision models share a hierarchical correspondence with neural object representations.
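As a toy illustration of how a feedforward DNN can serve as a stage-by-stage model of the ventral stream, the sketch below passes placeholder stimuli through a small, untrained convolutional network and collects each stage's activations, which are the quantities one would then compare, layer by layer, against neural data. The architecture and inputs are illustrative only.

```python
# Minimal sketch: collect stage-wise activations from a small feedforward CNN.
# The untrained network and random "stimuli" are placeholders for a trained
# model and real images; only the layer-by-layer logic is the point here.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),              # early stage
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),             # intermediate stage
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),     # late stage
)

stimuli = torch.randn(20, 3, 64, 64)          # 20 placeholder "images"
layer_activations = {}
x = stimuli
with torch.no_grad():
    for layer in model:
        x = layer(x)
        if isinstance(layer, (nn.MaxPool2d, nn.AdaptiveAvgPool2d)):
            layer_activations[f"stage_{len(layer_activations) + 1}"] = x.flatten(1)

for name, acts in layer_activations.items():
    print(name, tuple(acts.shape))            # (n_stimuli, n_features) per stage
```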

DNNs typically adopt a feedforward architecture that sequentially transforms visual signals into increasingly complex representations, akin to the human ventral stream. Although purely feedforward models can easily recognize whole objects, they often mislabel objects in challenging conditions, such as incongruent object-background pairings or ambiguous and partially occluded inputs. In contrast, models that incorporate recurrent connections are more robust to partially occluded objects, underscoring the importance of recurrent processing for object recognition.
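The sketch below contrasts a single feedforward sweep with a simple recurrent convolutional block that iteratively refines its representation over several time steps, the kind of lateral recurrence thought to help with occluded inputs. The architecture and sizes are illustrative and do not correspond to a specific published model.

```python
# Minimal sketch of a recurrent convolutional block: the feedforward drive is
# computed once, then a lateral connection repeatedly updates the hidden state,
# letting later steps "fill in" evidence missing from the initial sweep.
# Channel counts, kernel sizes, and step counts are illustrative only.
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    def __init__(self, channels, steps=4):
        super().__init__()
        self.feedforward = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.lateral = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x):
        drive = self.feedforward(x)           # single feedforward sweep
        h = torch.relu(drive)
        for _ in range(self.steps):           # recurrent refinement over time steps
            h = torch.relu(drive + self.lateral(h))
        return h

block = RecurrentConvBlock(channels=16, steps=4)
image = torch.randn(1, 3, 64, 64)             # dummy input standing in for a (possibly occluded) object
features = block(image)
print(features.shape)                         # torch.Size([1, 16, 64, 64])
```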

To continue bridging the gap between human and computer vision, we explore how the duration and sequencing of ventral stream processes can be used as constraints to guide the development of computational models with recurrent architectures.

 

Develop novel neuroimaging methods to holistically capture the spatiotemporal and representational space of brain activation

Combining multimodal data to capture an integrated view of brain function in representational space is a powerful approach to studying the human brain, one that will yield a new perspective on the fundamental analysis of behavior and its neurophysiological underpinnings. The approach, termed representational similarity analysis (RSA), compares representational matrices (stimulus x stimulus similarity structures) across imaging modalities and data types. We are developing computational tools that use RSA to link neural data (MEG, fMRI), behavioral data (e.g., button presses, video camera data), and computational models (deep neural networks, DNNs).
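A minimal sketch of the core RSA computation is shown below: a stimulus-by-stimulus representational dissimilarity matrix (RDM) is built for each data source, and the RDMs are compared via rank correlation of their condensed (upper-triangle) forms. The random patterns stand in for real MEG, fMRI, or DNN features.

```python
# Minimal sketch of representational similarity analysis (RSA): build an RDM
# per data source and compare RDMs with a rank correlation. The random
# "patterns" are placeholders for real MEG sensor, fMRI voxel, or DNN features.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli = 20

def rdm(patterns):
    """Condensed RDM: 1 - Pearson correlation between stimulus patterns."""
    return pdist(patterns, metric="correlation")

meg_patterns  = rng.standard_normal((n_stimuli, 306))   # e.g., sensor pattern per stimulus
fmri_patterns = rng.standard_normal((n_stimuli, 500))   # e.g., voxel pattern per stimulus
dnn_patterns  = rng.standard_normal((n_stimuli, 4096))  # e.g., one DNN layer's activations

rho_meg_fmri, _ = spearmanr(rdm(meg_patterns), rdm(fmri_patterns))
rho_meg_dnn,  _ = spearmanr(rdm(meg_patterns), rdm(dnn_patterns))
print(f"MEG-fMRI RDM similarity: {rho_meg_fmri:.3f}")
print(f"MEG-DNN  RDM similarity: {rho_meg_dnn:.3f}")

# The condensed RDM can be expanded to its square n_stimuli x n_stimuli form.
print("full MEG RDM shape:", squareform(rdm(meg_patterns)).shape)
```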

These tools are exemplified by a novel computational method we recently developed, which fuses fMRI and MEG data to yield a first-of-its-kind visualization of the dynamics of object processing in humans. Intuitively, the method links MEG temporal patterns and fMRI spatial patterns by requiring stimuli to be equivalently represented in both modalities (if two visual stimuli evoke similar MEG patterns, they should also evoke similar fMRI patterns). To demonstrate the method, we captured the spatiotemporal dynamics of ventral stream activation evoked by visual objects in sighted individuals in two independent data sets.
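The sketch below conveys the intuition of the fusion in simplified form: a time-resolved MEG RDM is correlated with the RDM of each fMRI region, yielding one fusion time course per region. All inputs are simulated placeholders, and the sketch omits ingredients of the full method such as searchlight mapping, cross-validated distances, and statistical testing.

```python
# Minimal sketch of RSA-based MEG-fMRI fusion: correlate the MEG RDM at each
# time point with the RDM of each fMRI region, producing a fusion time course
# per region. All data are simulated placeholders; region names are illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli, n_sensors, n_times = 20, 306, 120

# Simulated single-stimulus MEG patterns over time and fMRI patterns per region.
meg = rng.standard_normal((n_stimuli, n_sensors, n_times))
fmri_rois = {
    "V1": rng.standard_normal((n_stimuli, 400)),
    "IT": rng.standard_normal((n_stimuli, 250)),
}

fmri_rdms = {roi: pdist(pat, metric="correlation") for roi, pat in fmri_rois.items()}

fusion = {roi: np.empty(n_times) for roi in fmri_rois}
for t in range(n_times):
    meg_rdm_t = pdist(meg[:, :, t], metric="correlation")   # MEG RDM at time point t
    for roi, fmri_rdm in fmri_rdms.items():
        fusion[roi][t], _ = spearmanr(meg_rdm_t, fmri_rdm)

for roi, series in fusion.items():
    print(f"{roi}: peak fusion correlation {series.max():.3f} at time index {series.argmax()}")
```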

Our efforts concentrate on: a) methodological development of these tools, by extending MEG-fMRI fusion maps to experimental contrasts, deriving statistical maps and thresholds, and optimizing spatiotemporal resolution; b) validation, by concretely demonstrating that the MEG-fMRI fusion approach can access deep neural signals that are very hard to localize with MEG alone; and c) efficient software implementations, by creating effective MATLAB and GPU tools. In the long run, our goal is to expand the limits of imaging technologies by developing and popularizing computational tools that integrate the spatial and temporal richness of multimodal data.
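As one example of how statistical thresholds for fusion time courses can be derived, the sketch below uses a sign-permutation test with a maximum statistic to control the family-wise error rate across time points. The subject-level data are simulated, and the approach is a common illustrative choice rather than a description of our released tools.

```python
# Minimal sketch of permutation-based thresholding for group fusion time courses:
# randomly sign-flip each subject's correlation series to build a null
# distribution of the maximum statistic (FWE control). Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_times = 15, 120

# Simulated subject-level fusion correlations with a genuine effect around samples 60-80.
fusion = 0.02 * rng.standard_normal((n_subjects, n_times))
fusion[:, 60:80] += 0.05

observed = fusion.mean(axis=0)

n_perm = 2000
max_null = np.empty(n_perm)
for p in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))   # random sign flip per subject
    max_null[p] = (signs * fusion).mean(axis=0).max()       # max statistic over time points

threshold = np.quantile(max_null, 0.95)                     # FWE-corrected, alpha = 0.05
significant = np.flatnonzero(observed > threshold)
print(f"threshold = {threshold:.4f}; {significant.size} significant time points")
```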