Design and the Digital Divide. Alan F. Newell

Title: Design and the Digital Divide
Author: Alan F. Newell
Series: Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving Technologies
ISBN 9781608457410

2.3(b) used a light-emitting diode array. The whole system consisted of the display mounted in a breast pocket, a battery pack, and a keyboard. This fulfilled the requirements I had laid down, and an indication of the success of the Talking Brooch idea was given by a child's parent, who said that the very first time his child had told a joke was via his Talking Brooch.


      Figure 2.3: The Talking Brooch. (a) A prototype Talking Brooch; (b) the commercially available Talking Brooch.

      A portable device, the ELKOMI 2, marketed by Diode (Amsterdam), had a nine-letter walking display but, even though the display was longer, our results indicated that it would be less easy to read at normal typing speeds.

      At much the same time, Toby Churchill of Toby Churchill Ltd had developed the Lightwriter, which consisted of a much longer single-line display integrated with a keyboard [Lowe et al., 1974], and is shown in Figure 2.4(a). In later versions of the Lightwriter, such as that shown in Figure 2.4(b), there is a two-sided display, with one side facing the communication partner and an identical display facing the operator, again with an integrated keyboard. Although the Lightwriter was not as good at promoting eye contact, it did facilitate an appropriate body language for face-to-face communication. In addition, the integrated nature of the system meant that there was only one "box", and no external wiring. The only other portable device available at that time was the Canon Communicator, which was essentially similar in style to a pocket calculator, but with an alphanumeric keyboard and a strip printer.


      Figure 2.4: The Lightwriter. (a) Earliest version; (b) 2010 version.

      In 1976, Vanderheiden [1976] reviewed the literature in this field, addressing the issues of accessing communication aids and the relative merits of direct selection (as employed in the Talking Brooch) versus scanning and encoding techniques. He also surveyed the range of AAC devices that were available at that time. There were very few portable devices but, in addition to the ones mentioned above, he described the MCM device marketed by Micon Industries (California), which had been primarily designed as a communication aid for deaf people. He also cited the Versicom and Autocom, developed by the Trace Center at the University of Wisconsin-Madison, as examples of wheelchair-portable systems.

      The Lightwriter and the Talking Brooch had made slightly different design compromises. The Talking Brooch majored on eye contact and immediacy, whereas the Lightwriter allowed the disabled user to see what they were typing, and had no external wiring or sockets, with their associated fragility. The Canon Communicator had the advantage of being a single box, but did not facilitate appropriate body language. The Talking Brooch [Newell, A., 1974a] was marketed by the University of Southampton and had modest sales. The Lightwriter is still selling well in the early 21st century. This shows how important it is to really examine the use of any system in real contexts and, where necessary, to compromise on the "purity" of the goal for pragmatic reasons.

       A full appreciation of the potential uses of systems in real contexts is essential.

      The experience of developing the Talking Brooch led to a range of projects, all designed to improve the efficacy of communication aids for people with speech and language dysfunction. It also led to my award of a Winston Churchill Travel Fellowship to investigate communication aids for non-speaking people in the U.S. This formed the basis of much of my future work in this field. I met Arlene Kraat, who subsequently became my mentor from the field of Speech Therapy, and President of the International Society for Augmentative and Alternative Communication. Other very important friends and colleagues from that Fellowship included Greg Vanderheiden, from the University of Wisconsin-Madison, who has made a major contribution to technological development for disabled people at research, development, and political levels, and Rick Foulds, who led very exciting research in this area for many years at Tufts University and the University of Delaware.

      I had noted that the Talking Brooch could also be used by deaf people, but the catalyst for my next research projects was a visit to the office of Lewis Carter-Jones, MP. He was a colleague of Jack (now Lord) Ashley [Ashley, J., 1973], who had become deaf and was struggling to continue his parliamentary career. It is impossible to lip-read in the Chamber, and he was surviving by relying on a fellow MP, sitting next to him in the House, writing notes for him on what was said. I arranged to meet Jack and his wife Pauline in the House to demonstrate the Talking Brooch. His (accurate) assessment was that it would be no better than written notes; what he required was a verbatim transcript of what was being said. A good typist can type at 60-80 words per minute, but speech can reach over 200 words per minute. In the British Parliament, particularly at Prime Minister's Questions, it often happened that an innocent aside (which would not be deemed worth writing down for Jack) would be picked up a couple of speeches later, often as a joke. If he did not have a verbatim transcript, Jack was likely to miss the point of these references [Ashley, J., 1992]. This was similar to the reported complaints of deaf students who were offered a real-time version of lectures on a visual system, using an operator who listened to the lecture and dictated a synopsis to a typist [Hales, G., 1976].

       "Fortune favors the prepared mind" (Pasteur, 1854). Therefore be a research "butterfly" and read widely.

      It was clear to me that automatic speech recognition would not work within this environment: a couple of years previously I had written that "we must put firmly out of our minds any thoughts of, or hopes for, a 'mechanical typist'. If we do this we will be in a better position to specify the sort of machine that can be built and may be useful in helping the deaf" [Newell, A., 1974b]. (The limitations of speech recognition are discussed in more detail in Newell [Newell, A., 1992c].) This was my opportunity to put these comments into effect. During my ASR research I had come across attempts, some ten years previously, to transcribe the British Palantype machine shorthand [Price, W., 1971] and the American machine shorthand system, Stenograph [Newitt and Odarchenko, 1970]. I thus knew that it was possible to input Palantype data into a computer, but also that current systems required large computers and were not accurate enough to make them a commercial possibility for court reporting. Jack Ashley, however, did not need a fully correct transcription, just one which was readable; but he did need a portable system which worked in real time. Thus research which had been a commercial failure at the time led my team to develop a system that was appropriate for people with disabilities.

       The excellent is an enemy of the good.

      Palantype, Stenograph, and the French Grand Jean system work in similar ways (Figure 2.5(a) shows a Palantype machine). All these systems have chord keyboards, on which a number of keys are pressed at the same time, and they work in a syllabic mode, which means that each syllable is encoded in one stroke in a pseudo-phonetic form. The left-hand keys encode the initial phonemes, the right-hand keys the final phonemes, and the center keys the vowels. Word boundaries are not encoded. The output from these machines is a roll of paper on which the coded speech is printed. An example of the output from a Palantype machine is shown in Figure 2.5(b).
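      The chord-to-syllable scheme described above can be sketched in a few lines of code. This is only an illustration of the principle; the key names and key-to-phoneme mappings below are invented for the example and do not reflect the actual Palantype keyboard layout.

```python
# Illustrative sketch of syllabic chord decoding, Palantype-style.
# Key layouts here are HYPOTHETICAL, chosen only to show the structure:
# left-hand keys -> initial phonemes, center keys -> vowels,
# right-hand keys -> final phonemes.
LEFT = {"P": "p", "T": "t", "K": "k", "S": "s"}    # initial consonants (invented)
CENTER = {"A": "a", "O": "o", "E": "e", "U": "u"}  # vowels (invented)
RIGHT = {"-N": "n", "-T": "t", "-S": "s"}          # final consonants (invented)

def decode_stroke(keys):
    """Map one chord (the set of keys pressed together in one stroke)
    to a pseudo-phonetic syllable: initial + vowel + final."""
    initial = "".join(LEFT[k] for k in sorted(keys) if k in LEFT)
    vowel = "".join(CENTER[k] for k in sorted(keys) if k in CENTER)
    final = "".join(RIGHT[k] for k in sorted(keys) if k in RIGHT)
    return initial + vowel + final

# One stroke per syllable; word boundaries are NOT encoded, so the result
# is a continuous syllable stream that later processing must segment.
strokes = [{"K", "A", "-T"}, {"P", "E", "-N"}]
print("".join(decode_stroke(s) for s in strokes))  # -> "katpen"
```

      Note that the output is an unbroken stream of syllables; recovering word boundaries from such a stream is exactly the transcription problem discussed below.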

      Palantype follows relatively strict phonetic rules, but Stenograph uses more complex, less phonetic coding. Grand Jean, being weak on final consonants, is not appropriate for English. These machines provide a verbatim record of speech in the form of printed strips of paper that require significant skill to read; they are translated into orthography by trained operators. Automatic translation would clearly be valuable and was being investigated in both the UK and the U.S. A major challenge in the transcription of machine shorthand is to determine word boundaries, and this