November 19, 2021 14:52 ITmedia NEWS
"See-Through Captions", a digital nature laboratory at the University of Tsukuba, recognizes spoken voice in real time at the exhibition "DC EXPO 2021" (Makuhari Messe, November 17-19) and displays it as subtitles on a transparent display. Was exhibited. I actually experienced it, but it was convenient because the speed of understanding the story increased dramatically.
See-Through Captions uses Google's speech recognition engine to convert speech captured by a microphone into text and shows it on a transparent display being developed by Japan Display. Demonstration experiments have also been conducted at the Japan Science Museum and at the Tsukuba City Hall in Ibaraki Prefecture, positioning the system as a technology to ease communication with deaf and hard-of-hearing people.
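The article describes the pipeline only at a high level: microphone audio goes through Google's speech recognition and the resulting text is rendered on the transparent panel. As a rough illustration of what such a real-time captioning loop can look like, here is a minimal sketch using Google Cloud Speech-to-Text's streaming API in Python; the function name, audio source, and display hook are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch (assumed names, not the project's code) of a real-time
# captioning loop: raw microphone audio in, recognized Japanese text out.
from google.cloud import speech


def caption_stream(audio_chunks):
    """Yield caption text for an iterator of 16 kHz, 16-bit mono PCM chunks."""
    client = speech.SpeechClient()

    streaming_config = speech.StreamingRecognitionConfig(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="ja-JP",
        ),
        interim_results=True,  # partial hypotheses keep subtitle latency low
    )

    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk)
        for chunk in audio_chunks
    )

    # The server streams back interim and final transcripts as it recognizes.
    for response in client.streaming_recognize(config=streaming_config,
                                               requests=requests):
        for result in response.results:
            yield result.alternatives[0].transcript, result.is_final


# Hypothetical usage: mic_chunks() would capture audio and render() would
# push text to the transparent display; both are placeholders here.
# for text, is_final in caption_stream(mic_chunks()):
#     render(text, final=is_final)
```

Requesting interim results is what keeps the displayed captions close to real time, which matches the low-lag experience described below.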
This reporter is somewhat hard of hearing, and voices are especially difficult to make out in noisy places like exhibition halls; even so, conversations through See-Through Captions were comfortable. Above all, being able to follow long sentences without a time lag was a great help.
The transparent display has a slight blue tint, but you can see through to the other side without any trouble. The characters themselves are not transparent and render crisply, so there is no difficulty reading them. Recognition accuracy is also high, with nothing to complain about. The subtitles are shown on the speaker's side as well, so the speaker can check whether the captions contain mistakes and simply rephrase if something comes out wrong.
The point of using a transparent display is that you can communicate while watching the other person's face and gestures. Some people with hearing impairments pick up what is being said from mouth movements, facial expressions, and gestures. Information can be conveyed with text alone, but a display that keeps the other person in view is valuable because their emotions and intentions often come through in those cues.
In practice, though, things do not entirely work out that way. Having used it, I found that you need to look at the text on the display to follow the content, so the other person's face only appears at the edge of your field of vision. If their face is far from the display, you cannot focus on both at once and end up looking at one or the other. Unfortunately, facial expressions and gestures are only vaguely visible, and the research members say they recognize this as an issue to work on.
Going forward, the team plans to verify the effectiveness of See-Through Captions through further demonstration experiments while expanding availability so that the general public can use it.