
Gesture gives memory a hand

Presenting new words with accompanying gestures activates deeper brain processing in the learner, resulting in better retention, report Manuela Macedonia and colleagues at universities in Austria, Italy and Germany.

When any stimulus, such as a new word, is presented, information from the senses is ‘encoded’. This encoding may be shallow and transient (a novel word heard just once is unlikely to be remembered), or it may be deeper and then pass – hopefully – into long-term memory. Repetition deepens the encoding process and thereby increases the likelihood that the new word will pass into long-term storage for later retrieval. This is why learning new vocabulary can be so time-consuming and laborious. Vocabulary that does not move from the shallow encoding of the moment into storage cannot later be retrieved.

Previous research has shown that foreign language learners encode new vocabulary more deeply when it is presented in more than one modality. For example, presenting words audiovisually (as the written word and/or a picture, as well as the spoken word) facilitates learning more than either visual or auditory presentation alone.

A third modality is to use gestures. These gestures could come initially from the teacher as they present the new word, and later be used by the learners themselves during practice. The resulting boost to learning is called ‘the enactment effect’.

How this comes about, however, is not well understood, and consequently it is not clear how best to make use of these techniques. It could be that the action, especially when performed by the learner, produces a motor memory that strengthens the overall memory. This makes sense, as the more paths there are to a single memory the more securely it is embedded in the memory network.

Alternatively, watching the gestures may increase perception and attention, which are crucial to moving incoming information into storage as memories. In this case it is not really the gestures themselves that are responsible.

Macedonia and colleagues wanted to find out which process best accounted for the effect of gestures on word learning by scanning the brain activity of 31 German native-speaking students while they learned new words presented in different modalities. The students were asked to read; to read and listen to; or to read, listen to and watch a set of 30 words (while lying in a brain scanner), for which they were offered the princely sum of 10 euros.

To be certain that the 30 words really were new to all the participants, a new language, Vimmi, was generated for the study: three-syllable words were randomly computer-generated according to Italian phonotactic rules, while controlling for confounding similarities with real English or German words and excluding words made especially memorable by being distinctive or bizarre. For example, the German word ‘Blume’ (‘flower’) was translated as ‘giketa’, and the German for ‘knife’, ‘Messer’, was translated as ‘ganuma’.

The set of 30 concrete nouns for everyday objects was split into three sets of 10, to be presented visually (by just reading the word), audio-visually (by reading and listening to the word) or audio-visually with a video of an actress making related gestures. Recall was tested after 5 minutes – and again after 45 days. Sample size is always a limiting factor when using expensive, high-demand equipment such as an fMRI scanner, and this was unfortunately compounded when only 18 participants came back for the final recall test after 45 days (presumably funds precluded further incentives).

When asked to freely recall the German or Vimmi words presented, recall was significantly greater when students read, listened and also watched gestures as new words were presented than when they just read, or read and listened. The advantage was no longer significant after 45 days, which could be due to the smaller returning sample.

Brain scans during the learning experience showed activity in areas associated with language learning. As the number of modalities used in presenting new words increased, so did the complexity of the neural networks involved – supporting the idea that using gestures deepens processing of the new word and embeds it more firmly, with more connections, into the memory network.

Macedonia and colleagues were expecting to see brain activity reflecting increased semantic processing when learners viewed gestures, but the increase was actually observed in the motor areas of the brain. This could mean that rather than deepening semantic long-term memory, using gestures taps into procedural long-term memory – the kind of memory store used when we learn to ride a bike.


■ Macedonia, M., Repetto, C., Ischebeck, A. and Mueller, K. (2019) ‘Depth of Encoding Through Observed Gestures in Foreign Language Word Learning’, Frontiers in Psychology, 10, Article 33. doi: 10.3389/fpsyg.2019.00033

Image courtesy of Ruth Hartnup
Gill Ragsdale
Gill has a PhD in Evolutionary Psychology from Cambridge, and teaches Psychology with the Open University, but also holds an RSA-Cert TEFL. Gill has taught EFL in the UK, Turkey, Egypt and to refugees in the Calais ‘Jungle’ in France. She currently teaches English to refugees in the UK.