      In sum, the visual recognition of letters is facilitated by handwriting experience.

      The ability to recognize written symbols, such as letters, is made easier by producing them by hand [Molfese et al. 2011, Longcamp et al. 2005, Hall et al. 2015]. For instance, the National Research Council and the National Early Literacy Panel both found that letter writing in preschool had a significant impact on future literacy skills [Snow et al. 1998]. Why handwriting facilitates letter recognition above and beyond other types of practice can be understood from the multimodal-multisensory learning perspective. Although it is generally accepted that multisensory learning of letters (e.g., hearing and seeing with no motor action) facilitates letter learning beyond unisensory learning, incorporating multimodal production of letters contributes even more to the learning experience. The act of producing a letterform by hand is a complicated task, requiring efficient coordination among multiple systems. We have hypothesized that handwriting facilitates letter recognition through the production of variable forms. Each letter production is accompanied by a unique combination of visual and somatosensory stimulation. Manual dexterity in children is somewhat poor, resulting in a variety of possible visual and tactile combinations every time a child attempts to write a letter. The variability of this experience is amplified by the use of tools (writing implements), which requires fine motor skill, an ability that matures more slowly than gross motor skill. The perceptual result is letterforms that are often quite variable and “messy” (see Figure 2.5). We have recently found that children who produce variable forms, whether by writing freehand or by tracing handwritten symbols, are better able to recognize a novel set of symbols than peers who trace typed versions of the same symbols [Li and James 2016]. It is well known that learning a category through variable exemplars facilitates learning of that category compared with studying more similar exemplars [Namy and Gentner 2002]. The more variability that is perceived and integrated into a named category (such as the letter “A”), the more novel instances can then be matched to that category. Put simply, once children understand the many instances of the letter “p,” they can begin to recognize new, unique instances of that letter. Thus, the multimodal production of a letterform creates perceptually variable instances that facilitate category learning, in addition to building a visual and somatosensory history with that category.
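
      To make this intuition concrete, consider the following toy simulation (purely illustrative; the feature dimension, spread values, and acceptance threshold are arbitrary assumptions, not parameters from the cited studies). A learner that studies highly variable exemplars of a “letter” acquires a broad category representation and accepts novel distorted instances, whereas a learner trained on near-identical, typed-like exemplars rejects them.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 16            # crude feature vector standing in for a letterform
    N_TRAIN, N_TEST = 20, 500
    TRUE_SPREAD = 1.0   # how much real handwritten letters actually vary
    THRESHOLD = 3.0     # accept items within 3 SDs of the learned category

    prototype = rng.normal(size=DIM)   # the "ideal" form of one letter

    def hit_rate(train_spread):
        # Study phase: exemplars of one letter, more or less variable.
        train = prototype + rng.normal(scale=train_spread, size=(N_TRAIN, DIM))
        mu = train.mean(axis=0)
        sigma = train.std(axis=0).clip(min=0.05)   # floor avoids divide-by-zero
        # Test phase: novel exemplars drawn with the category's true variability.
        test = prototype + rng.normal(scale=TRUE_SPREAD, size=(N_TEST, DIM))
        z = np.abs(test - mu) / sigma              # per-feature z-scores
        return (z.max(axis=1) < THRESHOLD).mean()  # fraction recognized

    print("variable (handwritten-like) training:", hit_rate(1.0))
    print("uniform (typed-like) training:       ", hit_rate(0.05))

      With variable training, the learned per-feature spread matches the category’s true variation, so most novel exemplars fall inside the acceptance region; with near-uniform training, almost none do.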

       2.4.4 Neural Changes as a Result of Active Experience

      If we were to try to understand human behavior without some understanding of the neural circuitry that underlies it, we would fail to give the fact that the brain produces all behavior the attention it deserves. The claim here is that to truly understand behavior, we must also understand how the brain produces that behavior. Because our understanding of the brain is in its infancy, however, this is difficult and controversial. Nonetheless, we can still interpret and predict behavior based on what we know so far about neural functioning from comparative work with other species and from human neuroimaging studies. This is especially true when we consider the effects that multimodal-multisensory experiences have on learning. As outlined in the introduction, learning in the brain occurs through the association of inputs. We argue here that human action serves to combine multisensory inputs and, as such, is a crucial component of learning. This claim is based on the assumption that there are bidirectional, reciprocal relations between perception and action (e.g., [Dewey 1896, Gibson 1979]). From this perspective, action and perception are intimately linked: the ultimate purpose of perception is to guide action (see, e.g., [Craighero et al. 1996]), and actions (e.g., movements of the eyes, head, and hands) are necessary in order to perceive (e.g., [Campos et al. 2000, O’Regan and Noë 2001]). When humans perceive objects, they automatically generate actions appropriate for manipulating or interacting with those objects if they have previously interacted with them actively [Ellis and Tucker 2000, Tucker and Ellis 1998]. Actions and perceptions therefore form associated networks in the brain under some circumstances; that is, perception and action become linked through our multimodal experiences.
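
      As a deliberately simple illustration of such association-based linking, the sketch below implements a Hebbian outer-product learning rule between a “visual” and a “motor” layer. This is a toy model of our own construction, not one proposed in this chapter; layer sizes, learning rate, and episode count are arbitrary. When the two layers are co-active, as during self-produced action, cross-modal weights form and a later visual input alone reactivates the motor layer; when the motor layer is silent during learning, no link forms.

    import numpy as np

    rng = np.random.default_rng(1)
    N_VIS, N_MOT = 32, 32

    visual = rng.normal(size=N_VIS)   # activity pattern evoked by seeing an object
    motor = rng.normal(size=N_MOT)    # activity pattern evoked by acting on it

    def learn(motor_active, episodes=50, eta=0.01):
        # Hebbian rule: a visual-to-motor weight grows only when the two
        # units it connects fire at the same time.
        w = np.zeros((N_MOT, N_VIS))
        for _ in range(episodes):
            m = motor if motor_active else np.zeros(N_MOT)  # passive: no action
            w += eta * np.outer(m, visual)
        return w

    # Test: present the visual pattern ALONE and read out the motor layer.
    for label, w in [("active", learn(True)), ("passive", learn(False))]:
        reactivation = np.linalg.norm(w @ visual)
        print(f"motor reactivation after {label} experience: {reactivation:.2f}")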

      Figure 2.5 Examples of letters handwritten by 4-year-old children. The top row shows traced letters; the bottom two rows were written freehand.

      In what follows, we provide some evidence of this linking in the brain and of the experiences that are required to form these multimodal-multisensory networks. We will focus on functional magnetic resonance imaging (fMRI) as a method of human neuroimaging, given its high spatial resolution of patterns of neural activation, its safety, its widespread use in human research, and its applicability to research on the neural pathways created through learning. We will concentrate on a multimodal network that includes: (1) the fusiform gyrus, a structure in the ventral temporal-occipital cortex that has long been known to underlie visual object processing and that becomes tuned, with experience, for processing faces in the right hemisphere (e.g., [Kanwisher et al. 1997]) and letters/words in the left hemisphere (e.g., [Cohen and Dehaene 2004]); (2) the dorsal precentral gyrus, the top portion of the primary motor cortex, a region that has long been known to produce actions [Penfield and Boldrey 1937]; (3) the middle frontal gyrus, a region in the premotor cortex involved in motor programming and traditionally thought to underlie fine motor skills (e.g., [Exner 1881, Roux et al. 2009]); and (4) the ventral primary motor/premotor cortex, which overlaps with Broca’s area, a region thought to underlie speech production (e.g., [Broca 1861]) and, more recently, associated with numerous types of fine motor production skills (for a review, see [Petrides 2015]). As outlined below, this network becomes linked only after individuals experience the world through multimodal-multisensory learning.
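
      For readers who want a sense of how such a network is probed in practice, the snippet below sketches one common approach: extracting BOLD time series from spherical regions of interest with the nilearn library and correlating them across regions. The MNI coordinates are rough, illustrative left-hemisphere locations supplied for this sketch, not coordinates reported in the studies cited above, and “func.nii.gz” is a placeholder for a preprocessed 4D functional image.

    import numpy as np
    from nilearn.maskers import NiftiSpheresMasker

    # Approximate MNI coordinates for the four-region network
    # (assumed values for this sketch, not from the cited studies).
    rois = {
        "left fusiform gyrus (letters/words)": (-44, -57, -12),
        "dorsal precentral gyrus":             (-38, -24, 56),
        "middle frontal gyrus":                (-26, -6, 50),
        "ventral premotor / Broca overlap":    (-50, 10, 16),
    }

    masker = NiftiSpheresMasker(
        seeds=list(rois.values()),
        radius=8,          # 8 mm spheres around each coordinate
        detrend=True,
        standardize=True,
    )

    # "func.nii.gz" stands in for a preprocessed 4D BOLD image.
    timeseries = masker.fit_transform("func.nii.gz")  # shape: (n_volumes, 4)

    # Inter-region correlations: a simple index of how "linked" the network is.
    print(np.round(np.corrcoef(timeseries.T), 2))

      Comparing such inter-region correlations before and after a learning experience is one way the linking described here can be quantified.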

      Functional neuroimaging methods provide us with unique information about the neural patterns that occur alongside an overt behavioral response. As such, the resultant data are correlational but highly suggestive of the neural processing that underlies human behavior.

      In addition to providing neural correlates of overt behavior, neuroimaging studies can also generate hypotheses and theories about human cognition. In what follows, we briefly outline the use of this method in the service of understanding the experiences that are required to link sensory and motor systems.

       2.5.1 The Effects of Action on Sensory Processing of Objects

      According to embodied cognition models, a distributed representation of an object concept is created by brain-body-environment interactions (see [Barsalou et al. 2003]). Perception of a stimulus via a single sensory modality (e.g., vision) can therefore engage the entire distributed representation. In neuroimaging work, this is demonstrated when motor systems in the brain are activated as participants simply look at objects that they are accustomed to manipulating (e.g., [Grèzes and Decety 2002]), even without acting upon the objects at that time. Motor system activation is more pronounced when participants must make judgments about the manipulability of objects rather than about their function [Boronat et al. 2005, Buxbaum and Saffran 2002, Simmons and Barsalou 2003]. The motor system is also recruited in simple visual perception tasks. For example, in a recent study, we asked adult participants to study actual novel objects constructed to produce a sound upon specific types of interaction (e.g., pressing the top made a novel rattling sound) (see Figure 2.6). Participants learned these novel sound-action-object associations in one of two ways: either by producing the sound themselves (active interaction) or by watching an experimenter produce it (passive interaction). Note that both conditions provided equal exposure to the visual and auditory information; the only difference was whether participants produced the sound themselves or watched another person produce it.
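
      To make the logic of this comparison concrete, a group-level analysis of such a design might contrast motor-region responses to the two object sets during later passive viewing. The sketch below runs a paired t-test on synthetic per-subject response estimates; all numbers are invented for illustration, and this is not the analysis or data reported in [Butler and James 2013].

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(2)
    n_subjects = 20

    # Hypothetical per-subject motor-ROI responses (e.g., GLM betas)
    # while participants merely VIEW each set of learned objects.
    beta_active = rng.normal(loc=0.6, scale=0.4, size=n_subjects)    # actively learned
    beta_passive = rng.normal(loc=0.2, scale=0.4, size=n_subjects)   # passively learned

    # Within-subject (paired) contrast: active-learned vs. passive-learned.
    t, p = ttest_rel(beta_active, beta_passive)
    print(f"active > passive: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")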

      Figure 2.6 Examples of novel, sound-producing objects. (From Butler and James [2013])

      After participants studied these objects and learned the pairings, they underwent fMRI scanning while they