Abstracts and Videos

Wednesday, November 6, 2013

8:50 am Welcome Statements

Rachel Wu, University of Rochester

9:00 am Opening Statements

Jeff Elman, University of California, San Diego

9:15 am Attending to learn, learning to attend: Developmental dynamics

Rachel Wu, University of Rochester

There are extensive literatures on attention and on learning in both human and non-human animals, but surprisingly little research has addressed how attention and learning interact. This separation is misleading, even as a first approximation, because attention enables learning and learning guides attention in an information-rich environment. This fundamental interaction between attention and learning is most evident in human infants, who must actively select what to learn when facing intractable levels of informational complexity. I will present studies from two streams of research showing both sides of the interaction in infants, as well as in adults placed in infant-like learning situations (i.e., given few explicit task demands). Studying the attention and learning strategies of infants, arguably the best learners, not only helps us understand them but also provides insights into lifelong learning.

Questions:

    1. How do learners use the information they acquire?
    2. How does learned information come to constrain the selection of future information?
    3. Given multiple events to learn or uncertain targets, what selection priors will learners use?
    4. How can we train “what” and “how” to learn in a unified paradigm?
    5. How do different learning situations (capitalizing on different selection priors – innate or learned) change what and how we learn across the lifespan?

9:45 am Understanding attention as an information seeking mechanism

Jackie Gottlieb, Columbia University

Selective attention has been the subject of intensive investigation, and we have made significant progress in elucidating its neural mechanisms. However, most of these investigations focus on attentional modulations of sensory perception, and less so on the goal of attention. Why do we direct attention in the context of a task, and how do we decide when, and to what, to attend? Working in the system of spatial attention and eye movement control, I propose that attention (and specifically eye movements) is an information-seeking mechanism whose goal is to select accurate predictors and reduce uncertainty for subsequent actions (e.g., looking at the traffic light at an intersection). This implies that oculomotor decisions are guided by a metric of the validity (or accuracy) of sensory cues. I will discuss new results from our laboratory regarding the coding of cue validity in the lateral intraparietal area and its relation to reward and uncertainty in a given task.
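To make the information-seeking idea concrete, consider scoring a fixation by the uncertainty it is expected to remove. The sketch below is illustrative only (the binary go/stop setting, numbers, and function names are my own, not from the talk): it computes the expected reduction in entropy about an upcoming action afforded by a cue of a given validity.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_value_of_fixation(prior, validity):
    """Expected uncertainty reduction from fixating a cue.

    prior    -- prior probabilities over two possible actions (e.g., go/stop)
    validity -- probability that the cue correctly signals the required action
    Assumes a symmetric binary setting for simplicity.
    """
    return entropy(prior) - entropy([validity, 1.0 - validity])

# A traffic light at an intersection: before looking, go/stop are equally likely.
print(info_value_of_fixation(prior=[0.5, 0.5], validity=0.99))  # ~0.92 bits
print(info_value_of_fixation(prior=[0.5, 0.5], validity=0.60))  # ~0.03 bits
```

On this kind of metric, a highly valid cue is worth fixating even though it delivers no reward itself, which is the sense in which eye movements can be treated as information seeking.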

Questions:

    1. How does the brain learn where to attend while performing a task?
    2. Is "attention for learning" different from "attention for action" and if so, how?
    3. How does the brain detect novelty and surprise, and how do these factors impinge on attention areas?
    4. What are the neural substrates for attention-dependent learning? Does attention directly increase plasticity and if so, how?
    5. How does the brain generate intrinsic rewards to motivate "attention for learning" (e.g., curiosity) when agents do not seem to expect physical rewards?

10:45 am Multiple learning systems, automaticity and attention

Nathaniel Daw, New York University

Much evidence supports the idea that the brain contains multiple dissociable systems for decision making. We have argued that this dissociation can be quantitatively understood in terms of two distinct computational algorithms for learning to choose advantageous actions, known as model-based and model-free reinforcement learning (RL). But since this theory mainly originates in the computational neuroscience and behavioral psychology of animal learning, it is less clear how this dichotomy relates to other, seemingly related notions of automatic vs. deliberative control from human cognitive neuroscience. It is also unclear how the brain arbitrates between the two controllers. I report experiments suggesting that model-free learning indeed dominates when automaticity is expected (e.g., under dual-task interference and following acute stress), while model-based learning is strongest when subjects engage more attentive cognitive control.
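For readers outside the RL literature, the contrast can be caricatured in a few lines. This is a minimal sketch of the two algorithm families, not the task or model from these experiments: a model-free learner caches action values updated by reward prediction errors, while a model-based learner plans by iterating over a learned model of transitions and rewards.

```python
import numpy as np

n_states, n_actions = 3, 2
gamma, alpha = 0.9, 0.1  # discount factor, learning rate

# Model-free (e.g., Q-learning): cache values, update from prediction errors.
Q = np.zeros((n_states, n_actions))

def model_free_update(s, a, r, s_next):
    delta = r + gamma * Q[s_next].max() - Q[s, a]  # reward prediction error
    Q[s, a] += alpha * delta

# Model-based: estimate T(s'|s,a) and R(s,a) from experience, then plan.
T = np.ones((n_states, n_actions, n_states)) / n_states  # transition estimates
R = np.zeros((n_states, n_actions))                      # reward estimates

def plan(n_sweeps=50):
    """Action values computed by value iteration over the learned model."""
    Q_mb = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        V = Q_mb.max(axis=1)      # value of the best action in each state
        Q_mb = R + gamma * T @ V  # one backup through the model
    return Q_mb
```

The behavioral signature exploited in two-step tasks follows directly: cached model-free values ignore a change in the transition structure until it is re-experienced, whereas planned values adjust immediately.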

Questions:

    1. Are there meaningful parallels between attention in the lower-level sense (e.g., in visual search) and higher-level control processes, as in learning or the Stroop task?
    2. How does attention affect valuation?
    3. How do the well-known attentional (cue competition) phenomena in conditioning differentially affect different sorts of learning?
    4. Which aspects or steps of a decision (valuation, updating, etc.) require attention?

11:15 am How attention and reinforcement guide learning

Aaron Seitz, University of California, Riverside

Both attention and reinforcement are known to guide learning; however, it is difficult to disentangle their individual roles in the learning process. Here, I present a series of studies using the framework of task-irrelevant learning, in which we dissociate reinforcement and directed attentional signals by examining how attention and reinforcement related to one task guide learning on a secondary, independent task. I suggest that reinforcement up-regulates learning nonspecifically (benefiting even task-irrelevant stimuli), whereas directed attention is selective and regulates stimulus signals according to behavioral goals (i.e., up-regulates stimuli of interest and down-regulates distracting stimuli). This simple dissociation between directed attention and reinforcement both fits within existing literatures and explains otherwise counterintuitive findings.

Questions:

    1. How do these findings extend to typical task settings where attention and reinforcement are directed to the same task-relevant stimuli?
    2. How can one extend this model to account for the roles of other forms of attention (feature-based, arousal, alerting, etc.) in learning?
    3. To what extent are the underlying neural mechanisms of attention and reinforcement distinct? For example, models of attention and reinforcement implicate common neurochemical factors (e.g., norepinephrine, acetylcholine, dopamine).
    4. The model implies that attention is modulatory and doesn't gate learning. Is this true?
    5. The model implies learning is automatic and implicit. Can humans choose when or what to learn? Or do we merely attempt to "trick" our brains into the right state to capture what we hope to learn?

12:15 pm Discussion panel

John Richards, University of South Carolina

Marvin Chun, Yale University

1:45 pm Generalized perceptual learning modifies attentional modulation of responses in early visual cortex

John Serences, University of California, San Diego

Learning to better discriminate a specific visual feature (i.e. a specific orientation in a specific region of space) has been associated with plasticity in the early sensory areas (sensory modification) and with improvements in the transfer of sensory information from early visual areas to downstream sensorimotor and decision mechanisms (enhanced read-out). However, in most real-world scenarios requiring perceptual expertise, an observer must be able to efficiently process many exemplars from a broad stimulus class as opposed to just a single feature. The extent to which this type of generalized perceptual learning acts via the modification of sensory responses in early areas of human visual cortex remains largely unexplored. Here, we used fMRI and a multivariate analysis method to assess orientation-selective modulations in visual cortex while subjects made difficult discriminations about small offsets from one of nine different ‘pedestal’ orientations. Subjects performed the behavioral task in the fMRI scanner before and after 10 days of training, and behavioral performance improved across training sessions. Importantly, there was a training-related increase in the selectivity of feature-based attentional modulations in visual areas V1, V2, and V3, such that the gain of orientation-selective responses increased when orientation was task-relevant and decreased when it was irrelevant. These results show that general expertise is supported by amplified attentional gain on early sensory responses in visual cortex.

2:15 pm Infants’ everyday experience and learning to attend and attending to learn

Lisa Oakes, University of California, Davis

Infants’ looking behavior has been the primary source of information about their attention. Studies have examined how infants look at novel versus familiar stimuli, how they change their looking over time, how they learn to look in anticipation of an event and so on. Assumptions are made about how infants’ looking reflects their existing knowledge versus on-line learning. This talk will discuss how each of these factors might contribute to infants’ looking behavior, and how infants’ learning in everyday experiences shapes looking—and attentional—strategies. Importantly, evidence suggests that infants learn as a function of their attention, and that infants’ attention changes as a function of their learning. Data collected using a variety of methods, including habituation and familiarization procedures and eye-tracking, illustrate how infants attend—or look—to learn, and how learning from everyday experiences (such as those with household pets) shapes their attention.

Questions:

    1. How do we relate infants’ on-line looking behaviors (such as switching glances from one stimulus to another, duration of looking, etc.) to aspects of attention?
    2. Can eye-tracking methods help clarify these issues? That is, can we use eye-tracking not to do what we’ve always done better, but to help us to ask new questions that uncover attentional processes?
    3. What are the mechanisms by which daily experience helps infants learn to learn? Is it mere exposure or do new strategies emerge through active processing and learning?
    4. Why and how would past learning interact with on-line learning strategies?
    5. Are there ways to experimentally manipulate these factors, or are we doomed to examine correlational relations?

2:45 pm How learning in infancy enhances and constrains face and object processing

Lisa Scott, University of Massachusetts Amherst

Using a combination of cross-sectional and longitudinal training designs, measures of looking time, eye-tracking, and electrophysiological recordings of neural activity (event-related potentials; ERPs), we have begun to elucidate the perceptual and cognitive experiences that enhance or bias the development of face and object processing during the first year of life. In this talk, I will first present research examining the factors that contribute to face and object learning during infancy. Second, I will examine whether or not early experience in infancy influences later face and object processing in childhood. Finally, I will present results that speak to the contribution of sustained attention in the development of face and object representations. Combined, the results from this research demonstrate how specific perceptual and conceptual learning in infancy enhances versus constrains later perceptual, cognitive, and social abilities as well as the development of cortical representations.

Questions:

    1. Our work suggests that individual-level learning (regardless of whether it is a face or object) during the first year of life influences later general face processing abilities in childhood. Are there any other possible explanations for these long-lasting effects on face processing?
    2. We found a shift from attention-based processing at 6 months to more perceptually based processing at 9 months. Have others found or looked at shifts in cortical or behavioral processing for speech or other domains during this same time in development?
    3. Much of our training involves labeling faces or objects. Labeling a face or object may shift our training from perceptual training to conceptual training. Is there a line between conception and perception? What role does attention (e.g., sustained attention) play for each?

3:45 pm Learning to share attention and sharing attention to learn

Gedeon Deak, University of California, San Diego

Attention-sharing (e.g., gaze-following, pointing) is believed to facilitate learning in social contexts. In particular, attention-sharing is assumed to be a critical facilitator of language development. Yet this raises a question: How is attention-sharing learned? What precursors and processes for learning attention-sharing further contribute to early language learning? Is there continuity between early attention-sharing skills, later attention-sharing skills, and language learning? I will outline a theory of how infants learn attention-sharing skills and describe supporting evidence from computational, experimental, and ethnographic studies. I will then briefly review what is now known about the relation between infant attention-sharing skills and early language development.

Questions:

    1. How are interpersonal "attention policies" learned in the short-term (e.g., over several trials, involving spatial working memory) versus the long-term (e.g., how much does each interlocutor tend to look at things that interest me)?
    2. How is reinforcement learning reflected in selective attention? That is, how is the process of policy-adjustment reflected in "shaping" of the content of the state representation at any given t?
    3. Selective attention vs. attentiveness policies: Whether or not one learns what information is relevant in a given situation, there seems to be "meta-attention" learning in development: that is, in a given situation, how important is it to focus attention on what is relevant, how easy is it to re-focus after being distracted, etc.? Is learning WHAT to select a different process than learning HOW to attend (meta-attention policies) in particular contexts?
    4. For all of these, what are the neural mechanisms/systems in play, and how do they develop, particularly in the first 3-4 years?

4:15 pm Productive and counterproductive attention: Learning about signals of value

Mike Le Pelley, University of New South Wales

Recent studies have demonstrated that stimuli come to capture attention as an increasing function of the rewards that they predict (Anderson, Laurent & Yantis, 2011a, 2011b; Theeuwes & Belopolsky, 2012). In most of these studies, during an initial training phase participants receive a large reward for attending rapidly to stimulus X, but only a small reward for attending to stimulus Y. In a subsequent unrewarded test phase, attention is more likely to be captured by stimulus X than stimulus Y even though these stimuli are no longer goal-relevant. This effect may occur because people have been extensively trained to make an attentional response to stimulus X, or because stimulus X captures attention by virtue of its status as a signal of high reward. Using an analogue of an omission procedure, in which participants are effectively rewarded for ignoring stimuli, we show that attention is captured by stimuli which participants have been highly rewarded for ignoring. This counterproductive capture of attention suggests that attentional allocation is driven by Pavlovian learning about the signal value of stimuli, rather than instrumental learning about the consequences of responding to stimuli.
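The Pavlovian-versus-instrumental logic can be seen in a toy simulation. Everything below is invented for illustration (payoffs, learning rate, the capture_prob link function), not the actual procedure: under an omission schedule, reward is cancelled whenever gaze is captured, yet a value signal updated purely by stimulus-reward pairings still sustains capture by the high-value stimulus.

```python
import math
import random

alpha = 0.2
V = {"high": 0.0, "low": 0.0}   # Pavlovian signal value of each distractor

def capture_prob(v, k=3.0):
    """Probability that the distractor captures gaze; increases with its value."""
    return 1.0 / (1.0 + math.exp(-k * (v - 0.5)))

for _ in range(5000):
    stim = random.choice(["high", "low"])
    captured = random.random() < capture_prob(V[stim])
    # Omission schedule: the reward is cancelled if gaze was captured,
    # so ignoring the distractor is what actually pays.
    reward = 0.0 if captured else (1.0 if stim == "high" else 0.1)
    # Pavlovian-style update: V tracks the reward occurring in the stimulus's
    # presence, taking no account of which response earned or cancelled it.
    V[stim] += alpha * (reward - V[stim])

print(V)  # the high-value stimulus keeps a higher signal value, hence more capture
```

An instrumental learner credited for the gaze response itself would instead learn to suppress looking at the high-value stimulus, so persistent capture of this kind is the Pavlovian signature.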

Questions:

    1. At what level is attention acting here? E.g., is the perception of the stimuli altered, or does learning instead affect the guidance of attention?
    2. Is attentional learning about value dysfunctional in clinical disorders? E.g., Kapur’s (2003) “incentive salience” account of psychosis implicates a deficit in assigning motivational salience to stimuli appropriately. See also other disorders in which the signal value of stimuli conflicts with the response that ought to be made to them (e.g., addiction, obesity).
    3. What are the “healthy” psychological correlates of the extent of attentional capture by stimuli that should be ignored? Does this relate to impulsivity?
    4. Would learning about punishments have an effect on attention similar to, or opposite from, that of learning about rewards?
    5. What is the best way to model these effects?

4:45 pm Discussion panel

Steve Luck, University of California, San Diego

Richard Aslin, University of Rochester

Thursday, November 7, 2013

8:30 am Curiosity and attention in young children and macaques

Celeste Kidd, University of Rochester

Efficient attentional choices require accurate expectations about what is likely to happen in the future. Adults' attention is guided by their substantial experience in the world. Very young children, however, possess far less data. In this talk, I will discuss work that explores the mechanisms that guide young children's early visual attention decisions and subsequent learning. I present eye-tracking experiments in both human and non-human primates which combine behavioral methods and computational modeling in order to test competing theories of attentional choice. I present evidence that young learners rely on rational utility maximization both to build complex models of the world starting from very little knowledge and, more generally, to guide their behavior. I will also discuss recent results from related on-going projects about learning and attention in macaque learners.

9:00 am Neural mechanisms of information seeking

Ethan Bromberg-Martin, National Eye Institute

Research on motivated behavior often focuses on decisions that help us collect a greater amount of reward. However, even when humans and animals cannot control the rewards in their environment, they often express strong preferences over which reward-related sensory cues they attend to: choosing to view sensory cues that provide information to help predict future rewards, while ignoring cues that are uninformative. I will present evidence that this ‘information-seeking’ behavior is motivated by some of the same neural circuits as conventional forms of reward-seeking, as well as new data on how these circuits compute the value of information and use it to motivate behavior.

Questions:

    1. Humans and animals are willing to pay for informative cues, even in experiments where they cannot use this information to increase their yield of rewards. This behavior seems suboptimal (costing rewards) in the experimental environment; what purpose does it serve in a more ecological setting?
    2. Does the brain use the same mechanism to motivate attention to information that helps predict rewards, as it does for information that helps control rewards? What about information that helps predict or control aspects of the environment that are not rewarding?
    3. There are many ways we can “pay attention” to a reward-informative cue: we can improve our perception of the cue’s appearance, our ability to use it to predict future rewards, or our ability to use it as feedback to learn a model of the world. Are these entirely separate forms of attention, or do they share common neural mechanisms?

9:30 am Life versus the laboratory: Learning what to attend to in a messy modern world

Natasha Kirkham, Birkbeck, University of London

To learn in a dynamic, busy, multisensory environment, infants must select which sensory events hold useful information and identify how those events relate to one another. Evidence from laboratory experiments suggests that infants can quickly learn statistically defined (or probabilistic) patterns in both auditory and visual domains, which allows them to segment streams of input (Baldwin et al., 2008; Gomez & Gerken, 1999; Saffran et al., 1996; Kirkham et al., 2002; Wu et al., 2011), bind features within and across modalities (Fiser & Aslin, 2002; Richardson & Kirkham, 2004), and map words onto objects (Graf Estes et al., 2007; Smith & Yu, 2008; Vouloumanos & Werker, 2009). This research has painted a compelling picture of the infant as a statistical tracker and prediction processor. However, particularly in the visual domain, investigations of statistical learning have presented infants with simplified patterns in an extremely sparse environment, limiting our understanding of their learning capabilities to these ideal conditions. Can infants learn about noisy events in a more natural, variable environment? What happens when attention is distracted, when cues are unreliable? In this talk, I will present evidence from a series of experiments (with Kristen Swan) that aim to investigate infants’ ability to learn when faced with multiple (occasionally unreliable) sources of information.
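For readers unfamiliar with the computation these studies build on, here is a minimal sketch of segmentation by transitional probabilities, in the spirit of Saffran et al. (1996); the syllable inventory and stream length are invented for illustration.

```python
import random
from collections import Counter

# A continuous stream made of three "words", concatenated without pauses.
words = ["tupiro", "golabu", "bidaku"]
stream = []
for _ in range(300):
    w = random.choice(words)
    stream += [w[i:i + 2] for i in range(0, len(w), 2)]  # syllables: "tu", "pi", ...

# Transitional probability P(next | current), estimated from bigram counts.
pair_counts = Counter(zip(stream, stream[1:]))
syllable_counts = Counter(stream[:-1])

def tp(a, b):
    return pair_counts[(a, b)] / syllable_counts[a]

print(tp("tu", "pi"))  # within-word transition: ~1.0
print(tp("ro", "go"))  # across a word boundary: ~1/3
```

The drop in transitional probability at word boundaries is the segmentation cue; the question posed here is whether that computation survives noisy, distracting, occasionally unreliable input.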

Questions:

    1. The method question – How do we get more ecological validity into our study of early learning/attention without losing our ability to draw strong conclusions?
    2. The big picture question – What does infants’ sensitivity to patterns tell us about real-world learning?
    3. The modern world question – How much does learning depend on specific stimuli in infants’ environments (e.g., watching television versus watching people in a park)? Will the omnipresent screens change infant learning?
    4. The messy world question – How is attention deployed in the real world, when colour, motion, faces, sounds, smells, etc. are all dragging attention around the scene?
    5. The gambling question – Is there really a sweet spot of statistical reliability?

10:30 am The interaction of cued attention and spatial learning in infancy: a computational model

Thomas Hannagan, Aix-Marseille University

Eight-month-olds learn more when they are cued to a target by social stimuli (e.g., a friendly human face) than when they are cued by other equally salient stimuli (flashing squares; Wu and Kirkham, 2010). To explain this finding, we introduce a connectionist model of cued learning in infancy. Its architecture is inspired by computational studies from the fields of both infant habituation and visual attention. The model embodies in its simplest form the notion that attention and learning interact. We show that the learning differences obtained by Wu and Kirkham (2010) can be explained by the amount of information let through from non-cued locations. We discuss the role of self-reward signals in this model and tentatively describe ways to look at the model from a higher level of description. Finally, we present predictions for future studies in this line of work.
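The gating idea can be caricatured in a few lines. The sketch below is mine, not the model's actual architecture; the gate parameter g stands in for "the amount of information let through from non-cued locations".

```python
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_features = 4, 8
target = rng.normal(size=n_features)

def learn(g, n_trials=200, lr=0.05, cued_loc=0):
    """Hebbian learning of location-to-target associations with gated input.

    g scales input from non-cued locations: g = 0 gates them out entirely,
    g = 1 lets all of their (distracting) activity through.
    """
    W = np.zeros((n_locations * n_features, n_features))
    for _ in range(n_trials):
        x = rng.normal(size=(n_locations, n_features))  # clutter everywhere
        x[cued_loc] = target                            # the cue marks the target
        gates = np.full(n_locations, g)
        gates[cued_loc] = 1.0
        W += lr * np.outer((gates[:, None] * x).ravel(), target)
    return W

# Noise from non-cued locations contaminates the weights as g grows.
for g in (0.0, 0.5, 1.0):
    off_cue = np.abs(learn(g)[n_features:]).mean()
    print(f"g={g}: mean off-cue weight magnitude {off_cue:.3f}")
```

On this reading, a social cue that drives g toward zero yields cleaner associations than an equally salient cue that leaves g high.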

Questions:

    1. What would the world look like, if attention and learning (A&L) were not interacting?
    2. How far can you go with cognitive models that do not capture the interaction between A&L?
    3. What would count as real progress in our understanding of the A&L interplay?
    4. What insights can we gain from other disciplines that deal with complex, interacting dynamical systems?
    5. What is the role of self-motivation/reward signals in the A&L interaction, and how can we test that?

11:00 am Neural mechanisms of attention-dependent reductions of ongoing cortical activity

Jude Mitchell, The Salk Institute

Over the past four decades, studies in non-human primates have shown that attention modulates the mean firing rates of neurons. I find that attention also profoundly reduces variability in neuronal responses. Much of this variability originates from ongoing activity that is shared across large, recurrently connected populations. Attention-dependent reductions of variability account for most (80%) of the improvements in sensory processing. Reductions in the variability of neuronal responses are also associated with changes in perceptual learning over longer timescales (Adab and Vogels, 2011). This raises questions about what role the variability from ongoing cortical activity plays in perception, and how attention and learning alter ongoing activity to yield higher-fidelity sensory responses.

I will outline a series of experiments that have helped illuminate the neural mechanisms underlying attention-dependent reductions in variability. First, I show that attention modulation is not uniform across cell classes, but rather is stronger among putative fast-spiking interneurons, suggesting a key role for inhibition. I will then describe a spiking model of cortical circuits with realistic recurrent activity that can account for the emergence of correlated variability and also for its reduction by attention-dependent increases in inhibition. This model makes predictions for how attention-dependent regulation of ongoing activity could also regulate synaptic plasticity and thus the learning of efficient sensory representations. In the final part of my talk, I describe steps that I am taking to test these predictions in a New World monkey, the common marmoset. Due to its lissencephalic (flat) cortex and the development of primate transgenic lines, the marmoset offers many new opportunities for research in attention and learning.
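For concreteness, the variability measures at issue are typically the Fano factor (spike-count variance over mean) and the spike-count correlation across neurons; the synthetic example below is illustrative only, showing how a shared ongoing fluctuation inflates both.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spike counts for two neurons over repeated identical trials,
# with a shared "ongoing activity" fluctuation added to both firing rates.
n_trials = 400
shared = rng.normal(0.0, 2.0, n_trials)       # shared ongoing fluctuation
counts_a = rng.poisson(np.clip(10 + shared, 0, None))
counts_b = rng.poisson(np.clip(12 + shared, 0, None))

fano = counts_a.var() / counts_a.mean()        # > 1: super-Poisson variability
r_sc = np.corrcoef(counts_a, counts_b)[0, 1]   # shared (correlated) variability

print(f"Fano factor: {fano:.2f}, spike-count correlation: {r_sc:.2f}")
# On the account above, attention shrinks the shared term, pushing the Fano
# factor toward 1 (Poisson) and the correlation toward 0.
```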

Questions:

    1. If ongoing cortical activity limits the fidelity of sensory responses, why has it evolved as a key feature of cortical activity?
    2. What mechanisms contribute to the reductions of ongoing activity observed in attention and learning paradigms?
    3. How do attention-dependent reductions in ongoing activity contribute to synaptic plasticity, and learning efficient sensory representations?

11:30 am Discussion panel

Linda Smith, Indiana University

Robert Desimone, Massachusetts Institute of Technology

2:15 pm Adaptation, attention, and prediction in perceptual classification

Chris Summerfield, University of Oxford

Decisions often involve integrating evidence from multiple sources. Where sources are equally reliable, the information they provide should contribute equally to choices. However, information processing is limited by capacity, so that some sources might enjoy attentional priority. Moreover, neural systems also adapt to the context provided by recent stimulation, so that information might influence choices differently according to whether it is expected or unexpected. I will describe behavioural and neural data from experiments in which observers view multiple discrete samples of evidence before making a category judgment. The weight given to each sample is strongly influenced by the statistics of the information accumulated thus far, and by the information provided by other simultaneously-available sources. I will outline a model in which the gain of information processing adapts to these contextual factors. The model is supported by data from EEG, fMRI and pupillometry.
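One toy way to express "gain adapting to context" (my illustration, not the model presented in the talk) is to weight each evidence sample by its agreement with the evidence accumulated so far:

```python
import numpy as np

def adaptive_gain_decision(samples, beta=1.0):
    """Integrate signed evidence samples with consistency-dependent gain.

    samples -- signed evidence values (e.g., the tilt of each discrete sample)
    beta    -- strength of the consistency effect; beta = 0 recovers the
               ideal observer's equal weighting of all samples.
    """
    total = 0.0
    for x in samples:
        agreement = np.sign(x) * np.sign(total) if total != 0 else 0.0
        gain = 1.0 + beta * agreement   # context-consistent samples up-weighted
        total += gain * x
    return np.sign(total)               # category judgment

print(adaptive_gain_decision([0.4, -0.1, 0.5, 0.2]))  # 1.0
```

With beta > 0 the observer down-weights samples that conflict with the prevailing context, the qualitative pattern that adaptive-gain accounts are meant to capture.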

2:45 pm Paying attention to attention in statistical learning

Lauren Emberson, University of Rochester

The world around us is highly structured, and this structure is available to us through statistical information present in sensory input. Statistical learning is the ability to learn about the structure of the world around us incidentally and is present starting early in life. While statistical learning is considered a unitary behavioral phenomenon, it likely involves multiple stages from basic (passive?) pattern extraction to using statistical information to formulate and act upon hypotheses or predictions about the world. Statistical learning may also receive support from other independent cognitive systems such as exogenous attention and may in turn affect these systems when statistically-determined patterns are violated.

In the first half of my talk, I'll present a study examining both attentional sampling (eye tracking) and neural activity (fMRI) while participants change their visual perception as a result of incidental experience with environmental structure. The data suggest that attentional systems are biased as a result of the presence of learnable statistical information but that these changes in sampling are not sufficient for behavioral evidence of learning. Implications for the role of attention in statistical learning are explored.

In the second half of the talk, I'll present neuroimaging data from infants (fNIRS) examining changes in activity in temporal and occipital cortices as a result of exposure to statistical information. After learning that an auditory event predicts a visual event, we find that the unexpected absence of this visual event produces activity in the occipital cortex. While this result alone is evidence that perceptual systems are being modulated by statistical information, what are the behavioral consequences of this activity? While infants also extend their looking as a result of these visual omissions, is this a result of a modulation of attentional systems?

Questions:

    1. Does attention interact with/affect learning differently at different stages of learning? (e.g., during initial pattern extraction vs. later model testing)
    2. We often assume that if a neural or low-level behavioral phenomenon is present in adults, it will be present in infants. When is this the case and when isn't it? What does it mean if behavioral phenomena depend upon the experimental context (e.g., when different, less abstract stimuli are used)? Is the ability present across ages in this case or not?
    3. How can studying development add clarity to issues that are difficult to disentangle in the adult state (in humans and monkeys)?
    4. Given that both attention and learning are largely heterogeneous terms, each encompassing numerous cognitive abilities and/or systems, what traction would be gained by being specific with definitions and operationalizations of these broader concepts vs. examining patterns exhibited across many instantiations of them? I.e., will being more reductionistic help, or just make us lose the forest for the trees?
    5. Big answers will likely emerge across multiple fields/methods/species, but how can we be certain that we are investigating the same cognitive ability in diverse experimental contexts? Are there simplifying assumptions that can be made to make interdisciplinary investigations easier (e.g., assuming that eye movements are a correlate of attentional focus and not worrying about endogenous vs. exogenous attention in infancy)?

3:45 pm Memory-guided attention: How learning begets further learning

Nick Turk-Browne, Princeton University

Past experience can shape our priorities in the future. Here I present two examples of such influences of memory on attention, or memory-guided attention. First, I show that learning processes can automatically recruit attention: In a statistical learning paradigm, spatial and feature-based attention were biased toward structured over random sources of information without conscious effort or awareness. Second, I show that perceptual memory can bias processing toward novel information: In an fMRI adaptation paradigm, greater adaptation for an old stimulus in visual cortex was associated with better subsequent memory for a new stimulus presented concurrently. These effects are not easily accounted for by standard theories of attention built around the dichotomy between stimulus-driven and goal-directed control. Moreover, they demonstrate the cyclical nature of attention-learning interactions, whereby learning and memory influence attention, which in turn influences what gets processed and stored in memory, which influences attention, and so on.

Questions:

    1. When is attention allocated to familiar vs. novel information?
    2. How automatic is the control of attention by learning/memory?
    3. How do different forms of learning and memory influence attention?
    4. What memory systems in the brain influence attention and how?
    5. What are the dynamics of the bidirectional interaction between attention and learning?

4:15 pm Discussion panel

Marisa Carrasco, New York University

Richard Aslin, University of Rochester

4:45 pm Closing Statements

Terry Sejnowski, The Salk Institute