Abstract
Recent research has shown that simple motor actions, such as pointing or grasping, can modulate the way we perceive and attend to our visual environment. Here we examine the role of action in spatial context learning. Previous studies using keyboard responses have revealed that people are faster at locating a target on repeated visual search displays ("contextual cueing"). However, this learning appears to depend on the task and response requirements. In Experiment 1, participants searched for a T-target among L-distractors and responded either by pressing a key or by touching the screen. Comparable contextual cueing was found in both response modes. Moreover, learning transferred between keyboard and touch screen responses. Experiment 2 showed that learning occurred even for repeated displays that required no response, and this learning was as strong as learning for displays that required a response. Learning on no-response trials cannot be accounted for by oculomotor responses, as learning was observed when eye movements were discouraged (Experiment 3). We suggest that spatial context learning is abstracted from motor actions.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1563-1579 |
| Number of pages | 17 |
| Journal | Quarterly Journal of Experimental Psychology |
| Volume | 64 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2011 |
| Externally published | Yes |
Bibliographical note
Funding Information: Correspondence should be sent to Tal Makovski, Department of Psychology, University of Minnesota, N218 Elliott Hall, Minneapolis, MN 55455, USA. E-mail: [email protected] This study was supported in part by NIH 071788. We thank Khena Swallow and Ming Bao for help with eye tracking, Ameante Lacoste, Birgit Fink, and Sarah Rudek for comments, and Eric Bressler, Jacqueline Caston, and Jen Decker for help with data collection.
Keywords
- Contextual cueing
- Eye movement
- Perception and action
- Touch response
- Visual search