Sensory perception and behavior depend on the brain’s ability to create internal representations of the outside world. These representations are formed through the coordinated electrical activity of networks of neurons in different brain areas. The primary goal of our research is to understand the nature of these representations.
The brain’s internal representations, or ‘neural codes’, can be highly complex. Each brain area contains millions of individual neurons, each of which can transmit information through rapid electrical events known as action potentials. The first step in characterizing a neural code requires identifying the spatial and temporal scales at which information is encoded.
Spatial scale: Is it necessary to consider the activity of each neuron separately? How much information is lost when individual neurons are ignored and the overall activity of the entire network is considered instead? Do networks of neurons work together in a concerted manner such that the information in the activity of any one neuron can only be understood when considered together with the activity of the other neurons in the network?
Temporal scale: Is it necessary to consider the timing of individual action potentials with sub-millisecond precision? How much information is lost when the timing of individual action potentials is ignored and only the average level of activity in the recent past is considered instead? Do patterns of action potentials carry synergistic information that would be lost when considering each action potential alone?
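The temporal-scale question can be made concrete with a small sketch. The code below is a hypothetical illustration (the spike times are simulated, not recorded data): it contrasts a coarse code, which keeps only the total spike count in a trial, with a fine code that preserves spike timing at 1 ms resolution. Both carry the same total count, but only the fine code retains the pattern of action potentials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: spike times (in seconds) from one neuron
# during a 1-second trial.
spike_times = np.sort(rng.uniform(0.0, 1.0, size=20))

# Coarse code: a single spike count for the whole trial (timing ignored).
coarse = len(spike_times)

# Fine code: a binary word at 1 ms resolution (timing preserved).
bin_edges = np.arange(0.0, 1.0 + 1e-9, 0.001)  # 1000 bins of 1 ms
fine, _ = np.histogram(spike_times, bins=bin_edges)

# The two codes agree on the total, but the fine code also says *when*
# each spike occurred.
print("total spikes (coarse):", coarse)
print("total spikes (fine):  ", fine.sum())
print("number of 1 ms bins:  ", len(fine))
```

Comparing how much stimulus information survives as the bin width grows is one way to identify the temporal scale at which a neuron’s code operates.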
Identifying the spatial and temporal scales of a neural code requires specialized analytical tools, many of which we develop ourselves together with our collaborators. These tools are important because they allow us to reduce the dimensionality of the data that we collect in our experiments, i.e. they allow us to find ways to simplify a neural code without losing the information that it contains. Without this simplification, analyzing data from our experiments would be difficult, even with today’s high-powered computers.
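One widely used family of dimensionality-reduction tools is principal component analysis (PCA). The sketch below is a minimal, hypothetical illustration (simulated spike counts, not our actual analysis pipeline): the activity of 100 neurons is driven by only 3 shared underlying signals, so a 3-component summary captures nearly all of the variability in the 100-dimensional data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: spike counts for 100 neurons across 500 trials,
# constructed so that most variability lies along 3 shared directions.
latent = rng.normal(size=(500, 3))    # 3 underlying signals
mixing = rng.normal(size=(3, 100))    # how each neuron weights those signals
counts = latent @ mixing + 0.1 * rng.normal(size=(500, 100))

# PCA via the singular value decomposition of the mean-centered data.
centered = counts - counts.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()  # fraction of variance per component

# The first 3 components summarize the population almost losslessly.
print("variance captured by 3 components:", explained[:3].sum())
```

In real recordings the effective dimensionality is not known in advance; finding it without discarding stimulus information is exactly the challenge the analytical tools address.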
Once the spatial and temporal scales of a neural code have been defined, the next step is to determine which features of the outside world are encoded. One of our main interests is the representation of sound, so we study the activity of neurons in the brain’s auditory system. While it is obvious that activity of auditory neurons conveys information about sound, it is not necessarily clear exactly which features of sound are the most relevant for a particular neuron.
The initial stage of the auditory system appears to consist of a number of parallel processing streams. The information that reaches the brain from the ear is split to provide input to several different areas in the brainstem, each of which is specialized for the extraction of a specific sound feature. The outputs of each of these parallel streams then converge in an area of the midbrain known as the inferior colliculus (IC), where a feature-based representation of sound is formed.
Each neuron in the IC receives a different set of inputs from the brainstem and, thus, is sensitive to a different combination of sound features such as intensity, frequency, or location. Together, the activity of the network of neurons in the IC provides a complete representation of the features of sound, and it is this representation that allows the final stage of the auditory system, the auditory cortex, to generate complex perceptions and behaviors.
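The convergence of feature streams onto IC neurons can be caricatured with a toy linear model. The sketch below is purely illustrative (the feature values, weights, and readout are assumptions, not a model of the actual circuit): each simulated IC neuron mixes the brainstem streams with its own weights, and the population as a whole still retains enough information for a downstream readout to recover every feature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical example: three brainstem streams, each encoding one
# sound feature (arbitrary units).
feature_values = np.array([0.8, 0.3, 0.5])  # intensity, frequency, location

# Each simulated IC neuron receives its own mix of the three streams,
# so each is sensitive to a different combination of features.
n_neurons = 200
weights = rng.random((n_neurons, 3))
ic_activity = weights @ feature_values

# A linear readout (here, least squares) standing in for a downstream
# area can recover all three features from the population activity.
recovered, *_ = np.linalg.lstsq(weights, ic_activity, rcond=None)
print("recovered features:", recovered)
```

No single model neuron encodes any feature cleanly, yet the features are fully recoverable from the network, which is the sense in which the IC population carries a complete feature-based representation.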
One of the biggest challenges faced by the brain is isolating the particular sensory inputs that are important at any given time. In the auditory system, this is called the ‘cocktail party problem’. In a crowd of talking people, the ears are overwhelmed by the sound of many different voices; yet, remarkably, the brain is able to isolate a particular voice of interest so that it is perceived much more clearly than the rest.
This clarification appears to take place only at the final stage of processing in the auditory cortex. The feature-based representation of sound in the IC contains information about all of the different voices, but the cortex selectively amplifies the activity of those IC neurons that are sensitive to the sound features that best distinguish the voice of interest from the other voices. The mechanisms by which this selective amplification is carried out are still not well understood, and are one of the main topics of our research.