
Using both sides of the brain to speak American Sign Language

At a talk Thursday evening, CU Boulder researcher Karen Boyd spoke about two of her studies on American Sign Language (ASL), conducted with colleagues in linguistics and psychology.


American Sign Language is both a spatial and visual language, meaning it uses both hemispheres of the brain. Even for something as simple as pauses, different brain functions are necessary to best communicate in American Sign Language (ASL).

University of Colorado Boulder researcher Karen Boyd, an instructor in the Department of Speech, Language and Hearing Sciences, has studied which brain hemispheres are used for specific parts of the grammatical structures and pauses in ASL.

Boyd was born to deaf parents and has deaf siblings, so ASL is her heritage language. Growing up, she struggled to learn English, but had a breakthrough when she began to understand the language “not through a normal English class, but through a linguistics class,” she said at a Thursday afternoon talk focused on her research findings.


CU Boulder researcher and ASL instructor Karen Boyd (center, at her Thursday presentation) studies which brain hemispheres are used for specific parts of the grammatical structures and pauses in ASL.

Boyd highlighted a prominent aspect of her research: the second-language learner perceptions of spatial tracing constructions in ASL.

“There's a formal term within ASL called size and space specifiers, also known as SASSes, which help describe a person or an object,” she explained. “This study focuses on your perceptions of understanding where my hands move in space, how your brain and those hemispheres can cross and understanding what exactly you're seeing being constructed in space.

“I decided on the research topic of learning ASL as a second language and how that affects your dominant hand. While you are signing, you are required to pick one hand that acts as the dominant throughout your formation of the language. In ASL, there are signs that only require one hand, but there are also two-handed non-symmetrical signs, where your non-dominant hand stays stationary and your dominant hand is the one that moves, and two-handed symmetrical signs.”

However, Boyd noted that these differences in signs are what can cause issues for second-language learners.

“In my research, I noticed that some people would incorrectly switch their dominant hand when indicating shapes. So, why is it that the brain is switching things?”

Boyd chose to focus on the right hemisphere, the part of the brain that deals with objects in space. She cited a study by her mentor, Karen Emmorey of San Diego State University, which found that second-language learners struggled with spatial awareness compared to people who are deaf.

“The brain is a phenomenal thing,” Boyd said. “You may be wondering, do hearing people use their right hemisphere? And they do, but not in the same way that I do, because I have to sign every day.”

Translating pauses

Boyd has also researched the use of pauses in both spoken English and ASL.

“Something that I noticed that was quite cool in hearing culture is called Utterance Boundary Pause,” she explained. “With spoken language, we're used to pauses in a sentence, where we use filler words such as ‘ah,’ or ‘um.’ I wanted to see what the people were looking for within these pauses and how that translates over into ASL.

“When language is involved, where there is a greater vocabulary, there are more pauses. Within spoken language, there tend to be pauses right after the end of a word. In ASL, it follows almost the same rules, except if we were to have a pause while signing, we would repeat the last signed word over and over. That's considered a pause. You can't really do that in spoken language. It is very unique.”

Boyd noted that this difference in how pauses work in each language can also cause issues, especially when someone is using a blend of the two languages.

“It is important that these pauses are conveyed to students in educational settings, especially in K-12 settings,” she said. “For example, a deaf child might get a little frustrated if they’re listening to the English that’s being spoken by the teacher, and the teacher takes a pause. The interpreter will never convey that pause with an ‘um,’ or a pause, which might lead the deaf children to think that spoken language is always perfect and that they are able to speak without issue. That's not true.

“The interpreters must convey to those deaf kids that the speaker is struggling for the word to teach the child that when you are stuck trying to think of a word, it's OK, that’s perfectly normal.”

