Manually coded language


Manually coded languages (MCLs) are a family of gestural communication methods which include gestural spelling as well as constructed languages that directly interpolate the grammar and syntax of oral languages in a gestural-visual form; that is, signed versions of oral languages. Unlike the sign languages that have evolved naturally in deaf communities, these manual codes are the conscious invention of deaf and hearing educators, and as such lack the distinct spatial structures present in native deaf sign languages. MCLs mostly follow the grammar of the oral language, or, more precisely, of the written form of the oral language that they interpolate. They have been mainly used in deaf education in an effort to "represent English on the hands" and by sign language interpreters in K-12 schools, although they have had some influence on deaf sign languages where their implementation was widespread.

Major approaches


There have been many different approaches to manually coding oral languages. Some consist of fingerspelling everything, a technique sometimes known in English as the "Rochester method".

The Paget Gorman Sign System (PGSS) is an MCL whose development was begun in the 1930s by Sir Richard Paget. He studied extant sign languages and sought to create an easier way to understand signs that were pantomimic in nature. He worked with his wife, Grace Paget, and Pierre Gorman, who both took over his work after his death in 1955. Paget published a book in 1951 focusing on children's vocabulary that included 900 signs.

In 1964, PGSS was taught for the first time to a group of deaf adults in an experiment. It evolved from education for the deaf to teaching those with speech and language disorders. New systems were developed for deaf adults to transition into British Sign Language (BSL).

PGSS currently has an estimated 56,000 word combinations.

These systems ("Signed English", "Signed German" and so on) were the vehicle for the world-wide explosion of MCLs in deaf education in the second half of the 20th century, and are what is generally meant by the phrase "manually coded language" today. They aim to be a word-for-word rendition of the written form of an oral language, and accordingly require the development of an enormous vocabulary. They usually achieve this by taking signs ("lexicon") from the local deaf sign language as a base, then adding specially created signs for words and word endings that do not exist in the deaf sign language, often using "initializations", and filling in any gaps with fingerspelling. Thus "Signed English" in America (based on ASL) has a lexicon quite different from "Signed English" in Britain (based on BSL), as well as the Signed Englishes of Ireland, Australasia and South Africa. "Signing Exact English" (SEE2) was developed in the United States in 1969, has also been taught around the world, and is now used in deaf schools in Singapore and taught in classes by the Singapore Association for the Deaf.

Another widespread approach is to visually represent the phonemes (sounds) of an oral language, rather than using signs for the words. These systems are sometimes known as "Mouth Hand Systems" (MHS). An early example was developed in Denmark in 1903 by Georg Forchhammer. Others include the Assisted Kinemes Alphabet (Belgium) and a Persian system developed in 1935 by Jabar Baghtcheban, in addition to the most widespread MHS worldwide, Cued Speech. As the entire set of phonemes for an oral language is small (English has 35 to 45, depending on the dialect), an MHS is relatively easy to adapt for other languages.

Cued Speech can be seen as a manual supplement to lipreading. A small number of hand shapes (representing consonants) and locations near the mouth (representing vowels) differentiate between sounds not distinguishable on the lips; in tonal languages, the inclination and movement of the hand follows the tone. When viewed together with lip patterns, the gestures make all phonemes of the oral language visually intelligible.

Cued Speech is not traditionally referred to as a manually coded language; although it was developed with the same aims as the signed oral languages, to improve English-language literacy in deaf children, it follows the sounds rather than the written form of the oral language. Thus, speakers with different accents will "cue" differently.

Cued speech has been used to prepare deaf children for hearing aids and cochlear implants by teaching the prospective wearer the oral language's phonemes. By the time the child has received a hearing aid or has been implanted with a cochlear implant, the child does not need such intense auditory training to learn to hear the oral language.