Wednesday, November 6, 2013
8:50 am Welcome Statements
Rachel Wu, University of Rochester
9:00 am Opening Statements
Jeff Elman, University of California, San Diego
9:15 am Attending to learn, learning to attend: Developmental dynamics
Rachel Wu, University of Rochester
There are extensive literatures on attention and on learning in both human and non-human animals, but surprisingly little research has addressed how the two interact. This separation is misleading, even as a first approximation, because attention enables learning and learning guides attention in an information-rich environment. This fundamental interaction between attention and learning is most evident in human infants, who must actively select what to learn when facing intractable levels of informational complexity. I will present studies from two streams of research showing both sides of the interaction in infants, as well as in adults placed in infant-like learning situations (i.e., given few explicit task demands). Studying the attention and learning strategies of infants, arguably the best learners, not only helps us understand them but also provides insights into lifelong learning.
Questions:
9:45 am Understanding attention as an information seeking mechanism
Jackie Gottlieb, Columbia University
Selective attention has been subject to intensive investigation, and we have made significant progress in elucidating its neural mechanisms. However, most of these investigations focus on attentional modulations of sensory perception, and less so on the goal of attention. Why do we direct attention in the context of a task, and how do we decide when and to what to attend? Working in the system of spatial attention and eye movement control, I propose that attention (and, specifically, eye movements) is an information-seeking mechanism whose goal is to select accurate predictors and reduce uncertainty for subsequent actions (e.g., looking at the traffic light at an intersection). This implies that oculomotor decisions are guided by a metric of the validity (or accuracy) of sensory cues. I will discuss new results from our laboratory regarding the coding of cue validity in the lateral intraparietal area and its relation to reward and uncertainty in a given task.
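The core computation can be sketched in a few lines. The following toy example (my illustration, not the laboratory's model; the priors, likelihoods, and helper names are invented) ranks cues by how much observing them is expected to reduce uncertainty about an upcoming state:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction about the state from sampling a cue.

    prior: p(state), shape (S,); likelihoods: p(obs | state), shape (S, O).
    """
    joint = prior[:, None] * likelihoods           # p(state, obs)
    p_obs = joint.sum(axis=0)                      # p(obs)
    posterior = joint / np.maximum(p_obs, 1e-12)   # p(state | obs), per column
    residual = sum(p_obs[o] * entropy(posterior[:, o]) for o in range(len(p_obs)))
    return entropy(prior) - residual

prior = np.array([0.5, 0.5])                       # two hidden states
valid_cue = np.array([[0.9, 0.1], [0.1, 0.9]])     # highly diagnostic cue
weak_cue = np.array([[0.55, 0.45], [0.45, 0.55]])  # barely diagnostic cue
gains = [expected_info_gain(prior, c) for c in (valid_cue, weak_cue)]
print("attend to cue", int(np.argmax(gains)))      # 0: the valid cue
```

A valid cue yields a large expected entropy reduction, so an information-seeking controller directs gaze to it first.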
Questions:
10:45 am Multiple learning systems, automaticity and attention
Nathaniel Daw, New York University
Much evidence supports the idea that the brain contains multiple dissociable systems for decision making. We have argued that this dissociation can be quantitatively understood in terms of two distinct computational algorithms for learning to choose advantageous actions, known as model-based and model-free reinforcement learning (RL). But since this theory mainly originates in the computational neuroscience and behavioral psychology of animal learning, it is less clear how this dichotomy relates to other seemingly related notions of automatic vs. deliberative control from human cognitive neuroscience. It is also unclear how the brain arbitrates between the two controllers. I report experiments suggesting that model-free learning indeed dominates when automaticity is expected -- e.g., under dual-task interference and following acute stress -- while model-based learning is strongest when subjects engage more attentive cognitive control.
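For readers less familiar with the dichotomy, here is a minimal textbook-style sketch of the two algorithm families (a generic illustration, not the experimental paradigm or the specific models used in the talk):

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.9  # learning rate and temporal discount

def model_free_update(Q, s, a, r, s_next):
    """Model-free (Q-learning): cache values via reward prediction errors."""
    delta = r + GAMMA * Q[s_next].max() - Q[s, a]  # reward prediction error
    Q[s, a] += ALPHA * delta
    return Q

def model_based_values(T, R, iters=50):
    """Model-based: plan by value iteration over a learned world model.

    T: p(s' | s, a), shape (S, A, S); R: expected reward, shape (S, A).
    """
    V = np.zeros(T.shape[0])
    for _ in range(iters):
        V = np.max(R + GAMMA * np.einsum('san,n->sa', T, V), axis=1)
    return V
```

The model-free learner is cheap but slow to revalue outcomes (its cached Q values must be relearned), whereas the model-based planner flexibly recomputes values from the model at greater computational cost; this asymmetry is what links the dichotomy to automatic vs. deliberative control.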
Questions:
11:15 am How attention and reinforcement guide learning
Aaron Seitz, University of California, Riverside
Both attention and reinforcement are known to guide learning; however, it is difficult to disentangle their individual roles in the learning process. Here, I present a series of studies using the framework of task-irrelevant learning, in which we dissociate reinforcement and directed attentional signals by examining how attention and reinforcement related to one task guide learning on a secondary, independent task. I suggest that reinforcement up-regulates learning nonspecifically (benefiting even task-irrelevant stimuli), whereas directed attention is selective and regulates stimulus signals according to behavioral goals (i.e., up-regulates stimuli of interest and down-regulates distracting stimuli). This simple dissociation between directed attention and reinforcement both fits within existing literatures and explains otherwise counterintuitive findings.
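The proposed dissociation can be written down compactly. The toy update rule below is my formalization of the abstract's claim, not the authors' model; all names and values are invented for exposition:

```python
def learning_update(weights, signals, relevance, reward, base_lr=0.05):
    """Update stimulus weights: reward gates ALL learning nonspecifically,
    while attentional relevance (>1 attended, <1 distractor) is selective."""
    for i, s in enumerate(signals):
        weights[i] += base_lr * reward * relevance[i] * s
    return weights

# Even the task-irrelevant stimulus (relevance < 1) learns when reward is
# high: the nonspecific up-regulation described in the abstract.
print(learning_update([0.0, 0.0], signals=[1.0, 1.0],
                      relevance=[1.5, 0.5], reward=1.0))
```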
Questions:
12:15 pm Discussion panel
John Richards, University of South Carolina
Marvin Chun, Yale University
1:45 pm Generalized perceptual learning modifies attentional modulation in early visual cortex
John Serences, University of California, San Diego
Learning to better discriminate a specific visual feature (e.g., a specific orientation in a specific region of space) has been associated with plasticity in early sensory areas (sensory modification) and with improvements in the transfer of sensory information from early visual areas to downstream sensorimotor and decision mechanisms (enhanced read-out). However, in most real-world scenarios requiring perceptual expertise, an observer must be able to efficiently process many exemplars from a broad stimulus class as opposed to just a single feature. The extent to which this type of generalized perceptual learning acts via the modification of sensory responses in early areas of human visual cortex remains largely unexplored. Here, we used fMRI and a multivariate analysis method to assess orientation-selective modulations in visual cortex while subjects made difficult discriminations about small offsets from one of nine different ‘pedestal’ orientations. Subjects performed the behavioral task in the fMRI scanner before and after 10 days of training, and behavioral performance improved across training sessions. Importantly, there was a training-related increase in the selectivity of feature-based attentional modulations in visual areas V1, V2, and V3, such that the gain of orientation-selective responses increased when orientation was task-relevant and decreased when it was irrelevant. These results show that general expertise is supported by amplified attentional gain on early sensory responses in visual cortex.
2:15 pm Infants’ everyday experience and learning to attend and attending to learn
Lisa Oakes, University of California, Davis
Infants’ looking behavior has been the primary source of information about their attention. Studies have examined how infants look at novel versus familiar stimuli, how they change their looking over time, how they learn to look in anticipation of an event, and so on. Assumptions are made about how infants’ looking reflects their existing knowledge versus on-line learning. This talk will discuss how each of these factors might contribute to infants’ looking behavior, and how infants’ learning in everyday experiences shapes looking—and attentional—strategies. Importantly, evidence suggests that infants learn as a function of their attention, and that infants’ attention changes as a function of their learning. Data collected using a variety of methods, including habituation and familiarization procedures and eye-tracking, illustrate how infants attend—or look—to learn, and how learning from everyday experiences (such as those with household pets) shapes their attention.
Questions:
2:45 pm How learning in infancy enhances and constrains face and object processing
Lisa Scott, University of Massachusetts Amherst
Using a combination of cross-sectional and longitudinal training designs, measures of looking time, eye-tracking, and electrophysiological recordings of neural activity (event-related potentials; ERPs), we have begun to elucidate the perceptual and cognitive experiences that enhance or bias the development of face and object processing during the first year of life. In this talk, I will first present research examining the factors that contribute to face and object learning during infancy. Second, I will examine whether or not early experience in infancy influences later face and object processing in childhood. Finally, I will present results that speak to the contribution of sustained attention to the development of face and object representations. Combined, the results from this research demonstrate how specific perceptual and conceptual learning in infancy enhances versus constrains later perceptual, cognitive, and social abilities, as well as the development of cortical representations.
Questions:
3:45 pm Learning to share attention and sharing attention to learn
Gedeon Deak, University of California, San Diego
Attention-sharing (e.g., gaze-following, pointing) is believed to facilitate learning in social contexts. In particular, attention-sharing is assumed to be a critical facilitator of language development. Yet this raises a question: How is attention-sharing learned? What precursors and processes for learning attention-sharing further contribute to early language learning? Is there continuity among early attention-sharing skills, later attention-sharing skills, and language learning? I will outline a theory of how infants learn attention-sharing skills, and describe supporting evidence from computational, experimental, and ethnographic studies. I will then briefly review what is now known about the relation between infant attention-sharing skills and early language development.
Questions:
4:15 pm Productive and counterproductive attention: Learning about signals of value
Mike Le Pelley, University of New South Wales
Recent studies have demonstrated that stimuli come to capture attention as an increasing function of the rewards that they predict (Anderson, Laurent & Yantis, 2011a, 2011b; Theeuwes & Belopolsky, 2012). In most of these studies, during an initial training phase participants receive a large reward for attending rapidly to stimulus X, but only a small reward for attending to stimulus Y. In a subsequent unrewarded test phase, attention is more likely to be captured by stimulus X than by stimulus Y, even though these stimuli are no longer goal-relevant. This effect may occur because people have been extensively trained to make an attentional response to stimulus X, or because stimulus X captures attention by virtue of its status as a signal of high reward. Using an analogue of an omission procedure, in which participants are effectively rewarded for ignoring stimuli, we show that attention is captured by stimuli that participants have been highly rewarded for ignoring. This counterproductive capture of attention suggests that attentional allocation is driven by Pavlovian learning about the signal value of stimuli, rather than by instrumental learning about the consequences of responding to stimuli.
Questions:
4:45 pm Discussion panel
Steve Luck, University of California, Davis
Richard Aslin, University of Rochester
Thursday, November 7, 2013
8:30 am Curiosity and attention in young children and macaques
Celeste Kidd, University of Rochester
Efficient attentional choices require accurate expectations about what is likely to happen in the future. Adults' attention is guided by their substantial experience in the world. Very young children, however, possess far less data. In this talk, I will discuss work that explores the mechanisms that guide young children's early visual attention decisions and subsequent learning. I present eye-tracking experiments in both human and non-human primates that combine behavioral methods and computational modeling to test competing theories of attentional choice. I present evidence that young learners rely on rational utility maximization both to build complex models of the world starting from very little knowledge and, more generally, to guide their behavior. I will also discuss recent results from related ongoing projects about learning and attention in macaque learners.
9:00 am Neural mechanisms of information seeking
Ethan Bromberg-Martin, National Eye Institute
Research on motivated behavior often focuses on decisions that help us collect a greater amount of reward. However, even when humans and animals cannot control the rewards in their environment, they often express a strong preference over which reward-related sensory cues they attend to: choosing to view sensory cues that provide information to help predict future rewards, while ignoring cues that are uninformative. I will present evidence that this ‘information-seeking’ behavior is motivated by some of the same neural circuits as conventional forms of reward-seeking, as well as new data on how these circuits compute the value of information and use it to motivate behavior.
Questions:
9:30 am Life versus the laboratory: Learning what to attend to in a messy modern world
Natasha Kirkham, Birkbeck, University of London
To learn in a dynamic, busy, multisensory environment, infants must select which sensory events hold useful information and identify how those events relate to one another. Evidence from laboratory experiments suggests that infants can quickly learn statistically defined (or probabilistic) patterns in both auditory and visual domains, which allows them to segment streams of input (Baldwin et al., 2008; Gomez & Gerken, 1999; Saffran et al., 1996; Kirkham et al., 2002; Wu et al., 2011), bind features within and across modalities (Fiser & Aslin, 2002; Richardson & Kirkham, 2004), and map words onto objects (Graf Estes et al., 2007; Smith & Yu, 2008; Vouloumanos & Werker, 2009). This research has painted a compelling picture of the infant as a statistical tracker and prediction processor. However, particularly in the visual domain, investigations of statistical learning have presented infants with simplified patterns in an extremely sparse environment, limiting our understanding of their learning capabilities to these ideal conditions. Can infants learn about noisy events in a more natural, variable environment? What happens when attention is distracted, or when cues are unreliable? In this talk, I will present evidence from a series of experiments (with Kristen Swan) that aim to investigate infants’ ability to learn when faced with multiple (occasionally unreliable) sources of information.
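As a concrete reference point for the segmentation findings cited above, the statistic infants are thought to track can be computed in a few lines. This is my toy example (the stream and its units are invented), not a reanalysis of any of the cited studies:

```python
from collections import Counter

def transitional_probs(stream):
    """P(next | current) for each adjacent pair in a sequence."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# 'AB' and 'CD' are within-unit pairs; 'BC', 'DA', etc. span unit boundaries.
stream = list("ABCDABABCDCDAB")
for pair, tp in sorted(transitional_probs(stream).items()):
    print(pair, round(tp, 2))  # within-unit pairs have TP 1.0; boundaries lower
```

Dips in transitional probability mark candidate boundaries, which is the cue exploited in the segmentation studies above.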
Questions:
10:30 am The interaction of cued attention and spatial learning in infancy: A computational model
Thomas Hannagan, Aix-Marseille University
Eight-month-olds learn more when they are cued to a target by social stimuli (e.g., a friendly human face) than when they are cued by other equally salient stimuli (flashing squares; Wu and Kirkham, 2010). To explain this finding, we introduce a connectionist model of cued learning in infancy. Its architecture is inspired by computational studies from the fields of infant habituation and visual attention. The model embodies in its simplest form the notion that attention and learning interact. We show that the learning differences obtained by Wu and Kirkham (2010) can be explained by the amount of information let through from non-cued locations. We discuss the role of self-reward signals in this model and tentatively describe ways to look at the model from a higher level of description. Finally, we present predictions for future studies in this line of work.
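The gating idea at the heart of the model can be illustrated with a toy sketch (my construction; the actual model's architecture and parameters differ):

```python
import numpy as np

def gated_input(inputs, cued_loc, leak=0.1):
    """Scale each location's input by attention: full gain at the cued
    location, a small leak everywhere else. A larger `leak` lets more
    information through from non-cued locations, diluting cued learning."""
    gate = np.full(len(inputs), leak)
    gate[cued_loc] = 1.0
    return gate * inputs

rng = np.random.default_rng(0)
x = rng.normal(size=4)             # input at four spatial locations
print(gated_input(x, cued_loc=2))  # mostly the cued location survives
```

On this view, a more effective cue (e.g., a face) corresponds to a smaller leak, concentrating the learner's limited capacity on the cued location.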
Questions:
11:00 am Neural mechanisms of attention-dependent reductions of ongoing cortical activity
Jude Mitchell, The Salk Institute
Over the past four decades, studies in non-human primates have shown that attention modulates the mean firing rates of neurons. I find that attention also profoundly reduces variability in neuronal responses. Much of this variability originates from ongoing activity that is shared across large, recurrently connected populations. Attention-dependent reductions of variability account for most (80%) of the improvements in sensory processing. Reductions in the variability of neuronal responses are also associated with changes in perceptual learning over longer timescales (Adab and Vogels, 2011). This raises questions about what role the variability from ongoing cortical activity plays in perception, and how attention and learning alter ongoing activity to give higher-fidelity sensory responses.
I will outline a series of experiments that have helped illuminate the neural mechanisms underlying attention-dependent reductions in variability. First, I show that attention modulation is not uniform across cell classes, but rather is stronger among putative fast-spiking interneurons, suggesting a key role for inhibition. I will then describe a spiking model of cortical circuits with realistic recurrent activity that can account for the emergence of correlated variability and also its reduction by attention-dependent increases in inhibition. This model makes predictions for how attention-dependent regulation of ongoing activity could also regulate synaptic plasticity and thus the learning of efficient sensory representations. In the final part of my talk, I describe steps that I am taking to test these predictions in the New World monkey, the common marmoset. Due to its lissencephalic (flat) cortex and the development of primate transgenic lines, the marmoset offers many new opportunities for research in attention and learning.
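To make the proposed mechanism concrete, here is a deliberately simplified caricature (mine, not the spiking model described above): increasing inhibitory gain damps a shared ongoing fluctuation, lowering trial-to-trial variability without changing the stimulus-driven signal.

```python
import numpy as np

rng = np.random.default_rng(1)

def population_responses(n_neurons, n_trials, inhibition):
    """Signal plus shared and private noise; inhibition damps the shared part."""
    shared = rng.normal(size=n_trials) / (1.0 + inhibition)  # common fluctuation
    private = 0.5 * rng.normal(size=(n_neurons, n_trials))   # independent noise
    return 5.0 + shared + private                            # mean rate of 5

for inhibition, label in [(0.0, "unattended"), (2.0, "attended")]:
    r = population_responses(20, 2000, inhibition)
    print(label, "mean trial-to-trial variance:", round(r.var(axis=1).mean(), 2))
```

The shared component is what produces correlated variability across the population, so damping it reduces both single-neuron variance and pairwise correlations.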
Questions:
11:30 am Discussion panel
Linda Smith, Indiana University
Robert Desimone, Massachusetts Institute of Technology
2:15 pm Adaptation, attention, and prediction in perceptual classification
Chris Summerfield, University of Oxford
Decisions often involve integrating evidence from multiple sources. Where sources are equally reliable, the information they provide should contribute equally to choices. However, information processing is limited by capacity, so that some sources might enjoy attentional priority. Moreover, neural systems also adapt to the context provided by recent stimulation, so that information might influence choices differently according to whether it is expected or unexpected. I will describe behavioural and neural data from experiments in which observers view multiple discrete samples of evidence before making a category judgment. The weight given to each sample is strongly influenced by the statistics of the information accumulated thus far, and by the information provided by other simultaneously available sources. I will outline a model in which the gain of information processing adapts to these contextual factors. The model is supported by data from EEG, fMRI and pupillometry.
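A schematic of the gain-adaptation idea (my toy rendering, not the model in the talk): each sample's contribution to the decision variable is scaled by a gain that depends on its consistency with the evidence accumulated so far.

```python
import numpy as np

def accumulate(samples, k=1.0):
    """Decision variable after accumulation with context-dependent gain.

    Samples whose sign agrees with the running total receive full gain;
    inconsistent (unexpected) samples are down-weighted by a factor 1 + k.
    """
    dv = 0.0
    for x in samples:
        consistent = dv == 0.0 or np.sign(x) == np.sign(dv)
        gain = 1.0 if consistent else 1.0 / (1.0 + k)
        dv += gain * x
    return dv

# The lone inconsistent sample (-0.3) is down-weighted relative to the rest.
print(accumulate([0.5, 0.4, -0.3, 0.6]))  # 1.35 rather than 1.2 at equal gain
```

Whether unexpected samples are up- or down-weighted is exactly the kind of question the EEG, fMRI, and pupillometry data speak to; the sketch fixes one possibility only for concreteness.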
2:45 pm Paying attention to attention in statistical learning
Lauren Emberson, University of Rochester
The world around us is highly structured, and this structure is available to us through statistical information present in sensory input. Statistical learning is the ability to learn about the structure of the world around us incidentally, and it is present starting early in life. While statistical learning is considered a unitary behavioral phenomenon, it likely involves multiple stages, from basic (passive?) pattern extraction to using statistical information to formulate and act upon hypotheses or predictions about the world. Statistical learning may also receive support from other independent cognitive systems, such as exogenous attention, and may in turn affect these systems when statistically determined patterns are violated.
In the first half of my talk, I'll present a study examining both attentional sampling (eye-tracking) and neural activity (fMRI) as participants' visual perception changes as a result of incidental experience with environmental structure. The data suggest that attentional systems are biased by the presence of learnable statistical information, but that these changes in sampling are not sufficient for behavioral evidence of learning. Implications for the role of attention in statistical learning are explored.
In the second half of the talk, I'll present neuroimaging data from infants (fNIRS) examining changes in activity in temporal and occipital cortices as a result of exposure to statistical information. After learning that an auditory event predicts a visual event, we find that the unexpected absence of this visual event produces activity in the occipital cortex. While this result alone is evidence that perceptual systems are modulated by statistical information, what are the behavioral consequences of this activity? Infants also extend their looking in response to these visual omissions; is this the result of a modulation of attentional systems?
Questions:
3:45 pm Memory-guided attention: How learning begets further learning
Nick Turk-Browne, Princeton University
Past experience can shape our priorities in the future. Here I present two examples of such influences of memory on attention, or memory-guided attention. First, I show that learning processes can automatically recruit attention: in a statistical learning paradigm, spatial and feature-based attention were biased toward structured over random sources of information without conscious effort or awareness. Second, I show that perceptual memory can bias processing toward novel information: in an fMRI adaptation paradigm, greater adaptation for an old stimulus in visual cortex was associated with better subsequent memory for a new stimulus presented concurrently. These effects are not easily accounted for by standard theories of attention built around the dichotomy between stimulus-driven and goal-directed control. Moreover, they demonstrate the cyclical nature of attention-learning interactions, whereby learning and memory influence attention, which in turn influences what gets processed and stored in memory, which influences attention, and so on.
Questions:
4:15 pm Discussion panel
Marisa Carrasco, New York University
Richard Aslin, University of Rochester
4:45 pm Closing Statements
Terry Sejnowski, The Salk Institute