We Aren’t Outliers

“You had strong family support.”

“You went to a good school.”

“You got lots of one-on-one time, didn’t you?”

“You were exposed to other cuers.”

Sometimes, when I tell others what Cued Speech did for me growing up, someone will bring up one of the above, as if those factors somehow negate or diminish Cued Speech’s efficacy. It’s as though they’re implying that Cued Speech itself didn’t work, that the other factors had to compensate, or that I was simply a lucky exception.

It’s true that family and educational support are immensely important– often, if not usually, the deciding factor in a child’s success. Home and school are where the child spends most of his time. However, communication access and literacy depend heavily on what the people in those environments are equipped to provide.

In a residential school, or a mainstreamed program with a strong Deaf presence, everyone is either d/hh, more visually oriented, or has (ideally!) received training and support to meet language requirements. Staff are able to act as appropriate language models, which ensures communication access and, to some degree, academic success.

Outside of residential schools, though, getting that access to appropriate language models can be much more challenging– not to mention the complexities of using a manual language to impart literacy in a completely separate aural language. That’s if you have access to ASL; more often, what I’ve seen is a mixture of auditory-verbal therapy and manually-coded sign systems, and the results can vary just as much from very, very good to very, very bad. In fact, many cueing parents took up Cued Speech precisely because their local programs or residential schools were not a viable option for one reason or another.

In evaluating different approaches in d/hh education, we need to look at that approach’s overall results, not just specific examples. We can’t cherry-pick outliers to prove our point. That’s probably why those statements at the beginning somewhat annoy me, because in my experience, success at attaining language and literacy through Cued Speech is the norm, not the exception.

In my experience, signing d/hh people who can write or read well tend to be in the minority. On the flip side, cueing d/hh people who have those odd grammatical or spelling flukes– not typos, but more like what you might see from ESL speakers– are the exception; the rest read, write, and talk like native hearing speakers (with varying degrees of a “deaf” voice). I’ve had more than one person tell me that they wouldn’t know I was deaf just by reading my posts.

The studies on Cued Speech that I’ve read bear this out– in fact, I haven’t yet found any studies with negative results on Cued Speech’s use. (I do recall one with “meh” results in a group of hard-of-hearing students, but that’s about it.)

I suspect that you won’t see such consistent results among deaf signers mainly due to these reasons:

  1. The learning curve involved in picking up any manually-coded or signed system. It demands greater commitment and effort from parents and teachers over the long term, so you’re much more likely to see wide variation in usage and proficiency.
  2. The linguistic and conceptual gap between sign language and spoken language (the same gap you’d find between any two languages). You can patch that gap somewhat, but patching will never replace the incidental learning that comes from full linguistic immersion (and not necessarily just through reading and writing).

This isn’t to make Cued Speech out to be a magic bullet that bestows language and literacy the instant someone starts using it for their kid. What it does do is enable one to visually “recode” a language she already knows, without the delay of learning and translating through a second language. In this way, the d/hh kid is put on a level playing field with hearing children for literacy and language acquisition, so d/hh cuers are much more likely to pick up spoken/written language at the same pace as their hearing counterparts.

Cued Speech and Sign Language: Spoken Language Accommodation

Disclaimer: This is not meant to be a value comparison between ASL and Cued Speech. I’m sharing my personal experience with both in different areas, and it depends on several factors.

For spoken language accommodation, my personal preference is Cued Speech, hands down. Not ASL, not Signed English, not CASE, not LOVE.

Since leaving college, I’ve usually used sign language interpreters because that is what is available here in TX, but it really is not my preferred method. Captioning is fine for lecture-based presentations, but a bit slow for discussion-type forums.

It’s my opinion that signed language cannot accurately represent all of the nuances of spoken language on the hands alone. Or if it can be done, it’ll be difficult and cumbersome. That’s why Dr. Cornett designed Cued Speech the way he did: half of the information on the lips, half on the hands, and all based in phonemes, not meaning.

With Signed English, if you already know English and/or have enough hearing or enough context, or you happen to be a superb lipreader/prolific reader… basically, if you have extra support, you can fill in the gaps. Somewhat.

I have had some less-than-ideal experiences with interpreters: my native language is English, and the other person is voicing in English, but we have to communicate through a sign language medium, and it’s quite challenging to be precise– especially when the interpreter is used to interpretation rather than transliteration. It’s worse when the interpreter does not have any background information, especially in specialized fields like medicine or engineering. Often (but not always), she can relay that information to me– even if I have to mentally translate it back into English– but if I try to feed it back through her, it falls apart.

Knowing the context is, I think, more essential for sign language interpretation because you are working with vocabulary and semantics. Context does help cued language transliterators too, but I think there is less demand for it, because CLT is word-for-word (well, really, cue-for-sound) and not concept-to-concept. With a CLT, I usually feel like I have a much more solid grasp of the other person’s message than I do with a sign language interpreter; there is far less reliance on her understanding of the subject matter or the context.

Cued Speech and Sign Language: Availability of Services

American Sign Language beats Cued Speech in terms of availability, especially for socialization and finding real-time accommodations. Most everyone knows of sign language or some variant of it (Signed English, LOVE, CASE, etc.). Although a lot of cuers, particularly those affiliated with the NCSA, are trying to expand resources so it’s more available, Cued Speech is still very much in the minority.

Hence, you can find sign language interpreters in just about every sizable city. Cued Speech… it depends on the area. That said, Daily Cues is working on this nifty Cue Connector that will show you a geographical concentration of cuers all around the world so you can see what the availability is in various areas.

For sure, I know that Chicago, Minnesota, central Colorado, the East Coast, and maybe California and Seattle, have a sizable population of cuers and cueing service providers. Austin, TX, also has a small cue community.

I am the only cuer in DFW that I know of, and I was the only known cuer in Milwaukee– maybe the entire state– from when I first learned it in 1994 or thereabouts. That isn’t an unusual scenario for cuers, incidentally: being the only one in the school, or even the entire state, who uses Cued Speech– although it’s getting better as we develop more cue communities around the nation.

Cueing Expressively as a Receptive Cuer

One thing about being the only cuer in the entire state: you get really, really good at cuereading. If you have only a few transliterators (or only one!), sometimes you get really, really good at reading their particular style of cueing. When I reconnect with other cuers in Illinois and Colorado, it takes me a while to adjust myself to reading their cues– partly because I see them only once, maybe twice a year. I don’t have that issue with my transliterators in Wisconsin.

Conversely, the transliterator gets used to your voice, so you find that you don’t need to cue as much, or as accurately, with them. As a matter of fact, I know many cuers who just voice for themselves without any cues whatsoever. I don’t know the ratio of cuers who cue expressively versus those who don’t, but I’ve seen more in the latter category. My guess is that for the majority of d/hh cuers, it’s just easier to drop the hands and talk.

The downside is that, well, these cuers don’t get to practice expressive cueing a lot, so either they can’t do it, or they do it sloppily. I was/am in the latter category, although I have been much more mindful of it over the past few years. By cueing sloppily, I mean we drop certain handshapes, or don’t put our hands in the right position (e.g., placing the hand on the cheek instead of at the corner of the mouth for “ee” and “ur” sounds). It usually doesn’t impact our overall comprehension, I think, but it’s not technically the correct way to cue.

I do suspect that part of it is probably just cuers co-opting the system to their own style, much like native signers or native speakers become a bit sloppier in everyday conversation. Part of it is due to cuers not getting enough exposure to correct cueing, and/or not being around other cuers. I imagine as cueing becomes more mainstream, and hopefully as we establish a stronger base of Cued Speech transliterators, we’ll have more good models to work from. For now, this is a good issue to be aware of, especially with young d/hh cuers.

How Cued Speech Represents Spoken Language

Anyone who’s familiar with manually coded English (MCE) such as Signed English or Visual Phonics may wonder, rightly so, how Cued Speech can provide 100% access to English on the lips and hands. Fortunately, Aaron Rose of Cue Cognatio has designed and illustrated a 3-D model that shows the relationship between Cued Speech and spoken language.

Image courtesy of Aaron Rose.

Aaron explains this model as follows:

“There are three components to each ‘system’ [speech and Cued speech] for the purpose of expressing traditionally spoken language via speech and Cued Speech.

1.) Both systems use the same mouth shapes.
2.) The hand shapes take the place of the tongue placements (place of articulation).
3.) The hand placements take the place of the voicing/air (manner of articulation).

This is a general model and should not be used strictly for research purposes, but is intended to provide a better idea of how and why spoken language and cued language express the same linguistic information in different modes.”
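Aaron’s mapping can be pictured as a simple lookup: each cue pairs a handshape (standing in for place of articulation) and a hand placement (standing in for manner/voicing) with whatever is already visible on the lips. The Python sketch below is purely illustrative– the handshape numbers and phoneme groupings are simplified stand-ins, not the official Cued Speech chart– but it shows why lipreading look-alikes such as /p/, /b/, and /m/ become distinct once the hands are added:

```python
# Toy model of the speech/Cued Speech relationship described above.
# NOTE: handshape numbers and groupings here are illustrative, not the real chart.
cue_chart = {
    # phoneme: (mouthshape, handshape, hand placement)
    "p": ("closed lips", 1, "side"),
    "b": ("closed lips", 4, "side"),  # same lips as /p/; handshape differs
    "m": ("closed lips", 5, "side"),  # same lips again; still distinct
}

def looks_same_on_lips(a, b):
    """Lipreading alone: two phonemes collide if their mouthshapes match."""
    return cue_chart[a][0] == cue_chart[b][0]

def cue_is_distinct(a, b):
    """With cueing: a different handshape or placement breaks the tie."""
    return cue_chart[a][1:] != cue_chart[b][1:]

print(looks_same_on_lips("p", "b"))  # True: identical on the mouth
print(cue_is_distinct("p", "b"))     # True: the hands disambiguate
```

The point of the model is exactly this division of labor: the mouth carries what it already carries for free, and the hands supply only the information the mouth can’t.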

TED Talk and Captioning

It’s finally been released: a TED talk on Cued Speech by Cathy Rasmussen.

Now, a fellow cuer, Benjamin Lachman, posted the video to our Facebook page and asked for some crowdsourcing on adding accurate captions. Another cuer, Aaron Rose, took him up on that request, and that link up there on the amara.org website now has accurate captions– although for some reason, the direct YouTube link still transcribes Cued Speech as “cute speech,” among other things (which admittedly makes me giggle).

For me, just seeing that request made me think of the possibilities for captioning Cued Speech videos. See, I’ve captioned sign language videos before, both my own and others’. Captioning is not extraordinarily difficult, but it can be very time-consuming. Essentially, you’ve got to break up the caption lines and align them with the correct timestamps, which entails a lot of right-clicking the frame and watching mouth movements to make sure you end on the right word. It’s even trickier when you have to translate the content into a different language, and a phrase in the original language doesn’t match up with the timing for the captioned language. This applies even when you’re the one who produced the content.
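For context, a caption file is just numbered blocks of text with start and end timestamps. Here’s a minimal Python sketch that emits cues in the common SubRip (.srt) format– the timings and lines are made up for illustration:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    whole = int(seconds)
    ms = round((seconds - whole) * 1000)
    h, rem = divmod(whole, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Hypothetical cues; in practice the timings come from scrubbing the video
# frame by frame to make sure each caption line ends on the right word.
cues = [
    (0.0, 2.5, "Hello, and thanks for watching."),
    (2.5, 6.0, "Today I want to talk about Cued Speech."),
]

for i, (start, end, text) in enumerate(cues, 1):
    print(i)                                              # cue number
    print(f"{srt_timestamp(start)} --> {srt_timestamp(end)}")
    print(text)
    print()                                               # blank line between cues
```

Sites like Amara handle the file format for you; the tedious part is still deciding where each cue starts and ends, which is exactly the bookkeeping described above.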

But with Cued Speech, I think seeing the handshapes with the mouth would help facilitate that process, especially when combined with speech recognition software that will automatically sync a pre-uploaded transcript with the correct timestamps. It would also enable other cuers to contribute captions to the video (as Aaron did) without any discrepancies in interpretation, because it’s straight-up transliteration. Not to mention, it would be excellent cue-reading practice for budding cuers.

It’s kind of exciting to think how accessible Cued Speech videos can be with the captioning process. In that kind of work, every little bit to make it easier helps.

Talk to the Experts!

If there’s just one thing I could tell anybody trying to learn more about the myriad issues involved in deafness, it’d be this:

If you want to learn more about Cued Speech, ask someone who uses Cued Speech. If you want to learn more about American Sign Language, ask someone who uses American Sign Language. Same for cochlear implants, hearing aids, visual phonics, whatever. And take their word for it. Don’t patronize by implying that they’re an outlier. And don’t mix ’em up– that is, don’t expect an in-depth, balanced view on Alexander Graham Bell or cochlear implants from a 70-year-old Deaf signer. Likewise, a spoken-language proponent may not be terribly knowledgeable about, or sympathetic to, Deaf Culture and ASL.

This isn’t to say that you can’t share opinions and resources. But like any other community, the d/hh population has its share of controversial topics, especially regarding children. Bias is always, always a factor. So is lack of knowledge and direct experience. It’s worse if the community itself tends to be rather homogeneous. As a result, misinformation can spread quickly, with no one to correct it. And I can assure you, I’ve seen my share of misinformation about Cued Speech, especially in deaf education.

This isn’t necessarily deliberate, by the way. In my experience, most educational professionals are simply not aware of Cued Speech. Those who are aware fall into four broad categories:

1) They don’t know of anyone who uses it and/or have not seen the research, so they may assume that it doesn’t work.

2) They think it’s another variant of Visual Phonics and/or may not see it as a viable communication option.

3) They don’t see the need for it, citing that they use Signed English or a Bi-Bi approach with ASL.

4) They are open to it, but don’t know of any local resources nor demand for it.

Likewise, most d/hh people don’t use or see Cued Speech in action, although most people I meet are very accepting of the fact that I use it, and many are curious about how it works. But for the most part, they don’t know anything beyond what I’ve shown them. Often, a good portion of our initial conversation is spent debunking misconceptions about Cued Speech.

As for those who have had experience with it, including me, most of the feedback has been very positive. I did meet a few who had tried Cued Speech and decided it didn’t work for them, either because of resources or because they just didn’t ‘click’ with it. And that’s fair; everyone is different. The key here is that they tried it out for themselves and formulated their opinions based on what they had personally encountered. More than that, these people could share the nuances that factored into their situation: a strong family network, mental and physical health, finances, access to resources, etc.

This, by the way, applies to anything in the deaf and hard-of-hearing community. Take any second-hand experience with a grain of salt.