…and it doesn’t need to be. Cued Speech is a communication mode that visually represents an existing language in real time. That’s why professionals make a distinction between Cued Speech, Cued English, and Cued language.
Sometimes I’ve heard that statement used as a put-down: supposedly, because Cued Speech isn’t a language in and of itself, it either can’t be used to instill language into children, or its input will be fragmentary at best. My experience says otherwise.
As a rough analogy, you could say the same about writing. Writing itself isn’t a language; it’s a way of representing language in another format. It codifies sound into print… mostly. (English is a stupid, stupid language.) Likewise, Cued Speech also codifies sound into a visual format, and much more faithfully than written English (again, stupid language). Both are valid, successful teaching tools and modes of communication.
I think part of the confusion comes from the frequent misidentification of Cued Speech as a variant of Visual Phonics. Unlike Visual Phonics, Cued Speech was designed for smooth transitions between handshapes and placements, which makes real-time communication possible; in other words, you can cue as you speak at a "normal" pace. As far as I know, that isn't feasible with Visual Phonics, which inhibits its use for immersive language acquisition. However, because the two systems share a basic premise (visually conveying the properties of sound), they often get lumped in with each other.