Anyone who’s familiar with manually coded English (MCE) systems such as Signed English or Visual Phonics may reasonably wonder how Cued Speech can provide 100% access to English on the lips and hands. Fortunately, Aaron Rose of Cue Cognatio has designed and illustrated a 3-D model that shows the relationship between Cued Speech and spoken language.
Aaron explains this model as follows:
“There are three components to each ‘system’ [speech and Cued Speech] for the purpose of expressing traditionally spoken language via speech and Cued Speech.
1.) Both systems use the same mouth shapes.
2.) The hand shapes take the place of the tongue placements (place of articulation).
3.) The hand placements take the place of the voicing/air (manner of articulation).
This is a general model and should not be used strictly for research purposes, but is intended to provide a better idea of how and why spoken language and cued language express the same linguistic information in different modes.”