Information and Society Research Division
Associate Professor
Introduction of research by science writer
Breaking through the conventional wisdom of language, from the field of communication
How can human beings communicate so well? Even though everyday conversation is often full of grammatical errors, the listener usually understands correctly what the speaker wants to say. I would like to gain a deeper understanding of the structures of Japanese conversation and of Japanese Sign Language conversation through close analysis of interactions, including speech and gestures.
Focusing on what linguistics overlooks
My original area of specialization is linguistics, which has traditionally focused on grammar, and whose main subject of research has so far been written text. However, spoken language does not simply convey meaning through speech (for example, through intonation); it also involves simultaneous communication through other modalities, such as gaze direction and body movement.
To understand these multimodal language activities of human beings, I adopt an approach of comparing sign language with spoken language, drawing on the methods of comparative linguistics. This involves examining, point by point, how the two differ in matters such as hand and mouth movements. Incidentally, did you know that sign language is a natural language just like spoken language, and was not created artificially? Spoken language has abundant corpora, and computer processing of such language has already advanced significantly; that methods for transcribing everyday interactions in detail and accumulating such data have still not been established is a serious oversight.
For sign language, which has never had a written form, there is no established method of recording it as text. For example, sign language includes numerous variations corresponding to dialects, but these are now in the process of being lost, and will indeed be lost if not preserved as data. In addition, using multimodal data to show that sign language is a language just like spoken language will dispel people's misunderstandings of sign language and open a route toward reforming public awareness of it.
Toward development of a multimodal corpus
Accordingly, I am collecting data in a variety of settings. For example, conversational structures visibly change, as when a six-person conversation splits into groups of two or three and then merges again, and certain rules clearly apply, such as participants taking turns speaking. The "takoyaki party" activity is well suited to observing how multiple modalities combine. It is fascinating how two activities overlap as participants converse by ear while using their eyes to check on the takoyaki as they cook.
The rules governing how a conversation starts and ends also remain unclear. Since this issue bears directly on today's media design, the advent of next-generation media such as teleconferencing and humanoid robots will provide an excellent opportunity for groundbreaking research in this area. I am working proactively to create a variety of opportunities to gather data, taking a long-term point of view.