NeurIPS 2019 Workshop on Context and Compositionality

We were notified that our workshop proposal was accepted! We are excited to organize the Context and Compositionality in Biological and Artificial Neural Systems workshop this December at NeurIPS 2019 in Vancouver, Canada.

During the workshop we will explore advances in context for language processing and their relationship with compositionality from the different angles of Neuroscience, Linguistics/NLP, and Machine Learning. We therefore invite researchers in these and adjacent fields to submit their work by September 18th (11:59pm PT). Check the Call for Papers for more information.

We are looking forward to your interesting contributions and discussions!

NeurIPS 2019 Workshop Submitted

Over the past year, I have been steering my research toward a fascinating new topic: language processing and the brain. Together with Prof. Alex Huth and his PhD student Shailee Jain, we are exploring how the brain processes and utilizes context for language understanding. We came up with the idea of extending our internal discussions to a much broader community that includes researchers from Neuroscience, Linguistics, and Machine Learning. One of the best places where researchers from all these fields congregate is the NeurIPS conference.

Therefore, we embarked on recruiting an amazing group of people who are both interested in the topic and willing to work together to organize such a workshop. A few emails later, Emma Strubell, Leila Wehbe, Chris Honey, Tal Linzen, Kyunghyun Cho, and Alan Yuille had joined the organizing team. Several meetings with these amazing people later, we submitted our NeurIPS workshop proposal. Let’s see how it goes!

New Machine Learning Method for Learning about Cognition

In the past few months, I’ve been collaborating with researchers from the Turk-Browne Lab at Yale University. Their ongoing work is about understanding the origins of cognition in the human brain. Equipped with fMRI scanners, they scan kids to analyze their cognitive skills at different ages. Their proposal is simple but quite challenging. The challenges range from recruiting families, keeping them safe and comfortable during the experiments, and developing tasks suitable for kids of very young ages, to overcoming the data challenges. In particular, the latter requires rethinking the machine learning methods that neuroscientists typically use to analyze data from experiments with adults. The brain develops quickly at these ages, and changes are to be expected over time.


Pushing the Limits of Neuroscience

Neuroscience is the science of learning how the brain works and understanding, among other things, how the brain stores and processes all the information it receives from the world around it. Several imaging techniques developed in recent years allow neuroscientists to peek inside the human brain. The most important step in this direction is functional Magnetic Resonance Imaging, or fMRI, which captures brain activation indirectly through blood oxygenation levels. With fMRI we can capture a full brain scan every few seconds. Each scan is a volume of the brain comprising thousands to millions of voxels. These scans are usually processed with machine learning algorithms and statistical tools.
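To make the data concrete, here is a minimal sketch of how one such scan might be turned into a voxels-by-time matrix for machine learning, using the nibabel and NumPy libraries in Python; the file name is hypothetical and stands in for any 4D NIfTI recording.

# Minimal sketch: load a 4D fMRI recording and flatten it into a
# voxels-by-time matrix, the usual input for machine learning analyses.
import numpy as np
import nibabel as nib  # common Python library for neuroimaging file formats

img = nib.load("subject01_bold.nii.gz")  # hypothetical 4D NIfTI: (x, y, z, time)
data = img.get_fdata()                   # float array with the raw intensities

n_voxels = int(np.prod(data.shape[:3]))  # often hundreds of thousands of voxels
n_trs = data.shape[3]                    # one column per scan (TR)

# One row per voxel, one column per time point.
X = data.reshape(n_voxels, n_trs)
print(X.shape)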

Storing a single subject's data in memory is possible with today's servers; storing tens of subjects at once, however, quickly becomes limiting, so holding all of this data requires spreading it across multiple machines. Moreover, using multi-subject datasets helps improve the statistical power of the machine learning methods used in neuroscience experiments. In recent work from our research group, we published a manuscript describing how we scale out two factor analysis methods (for dimensionality reduction), showing that it is possible to use hundreds to thousands of subjects in neuroscience studies.

The first method is the Shared Response Model (SRM). SRM computes a set of mappings from each subject's volumes to a shared subspace. These mappings improve the predictive power of the model and help increase the accuracy of subsequent machine learning algorithms used in a study. The second method, Hierarchical Topographic Factor Analysis (HTFA), abstracts brain activity as hubs (spheres) of activity with dynamic links between them. HTFA helps with the interpretation of brain dynamics, outputting networks like the one in the figure below. For both methods, we present algorithms that run distributed across machines and can process a 1000-subject dataset. Our work, “Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets,” aims to push the limits of what neuroscientists can do with multi-subject data and enable them to propose experiments that were unthinkable before.

Brain Topographical Factor Analysis
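To illustrate the core idea behind SRM, here is a small NumPy sketch, not the probabilistic, distributed implementation described in the paper. It alternates between estimating a shared response and solving an orthogonal Procrustes problem for each subject's mapping; the function name, parameters, and toy data are illustrative only.

# Conceptual sketch of the Shared Response Model idea: find per-subject
# orthogonal mappings W_i and a shared response S such that X_i ~ W_i @ S.
import numpy as np

def srm_sketch(subject_data, k=10, n_iter=20, seed=0):
    """subject_data: list of (voxels_i x time) arrays with the same time length."""
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization of each subject's mapping.
    ws = []
    for x in subject_data:
        q, _ = np.linalg.qr(rng.standard_normal((x.shape[0], k)))
        ws.append(q)
    for _ in range(n_iter):
        # Shared response: average of the back-projected subject data.
        s = np.mean([w.T @ x for w, x in zip(ws, subject_data)], axis=0)
        # Per-subject mapping: orthogonal Procrustes solution via SVD.
        ws = []
        for x in subject_data:
            u, _, vt = np.linalg.svd(x @ s.T, full_matrices=False)
            ws.append(u @ vt)
    return ws, s

# Toy usage: three "subjects" with different voxel counts, 50 time points each.
data = [np.random.randn(v, 50) for v in (200, 180, 220)]
mappings, shared = srm_sketch(data, k=10)
print(shared.shape)  # (10, 50): the shared, low-dimensional response

The sketch captures why SRM scales per subject: each mapping is estimated from that subject's own data given the shared response, which is what makes a distributed, thousand-subject version feasible.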