Context-sensitive speech synthesiser
Currently, there are many speech synthesisers out there. For example, some read out text that has been typed into a computer, while others have the user select a prepared conversational sentence.
But for day-to-day conversation, both approaches can feel unnatural. Even the same word can change in meaning depending on the time and the situation, and yet it is impossible to prepare sentences for every possible situation in advance.
The conversation may well be over by the time the user has found the right wording.
Now let’s apply BoI here.
First, let’s assume the synthesiser takes a shape that can be held in one hand, so that the user can operate the device whilst looking at their conversation partner’s face. This’ll limit the user to actions they can take while in the same room as their conversation partner.
Yes! Think swinging, tilting and so on — almost like playing with a Wii remote whilst looking at a screen. This should help the synthesiser read between the lines!
Still, this is a BoI speech synthesiser.
With an acceleration sensor, the sounds that can be produced are limited to vowels and the sounds in between them. What’s more, sound plays whenever the device is tilted, so beginners will find it hard to produce clear sounds like “ah” or “ee”. But once you’ve experimented with how to swing and twist the device, it’s all up to you. For example, gliding from “ah” to “ee” could make something like “hi”, and gliding from “oh” to “eh” something like “okay”.
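The text doesn’t specify how tilt is turned into sound, so here is a minimal sketch of one possible mapping, assuming the tilt angle selects a point on a continuum between two vowels. The formant frequencies below are rough textbook averages, and the function name and the 0°–90° range are my own assumptions, not part of the original design.

```python
# Hypothetical first two formant frequencies (Hz) for the endpoint vowels.
# These are rough averages; real values vary by speaker.
VOWELS = {
    "ah": (730.0, 1090.0),  # open vowel /a/
    "ee": (270.0, 2290.0),  # close vowel /i/
}

def tilt_to_formants(tilt_deg: float) -> tuple[float, float]:
    """Map a tilt angle to a vowel sound on the "ah"-to-"ee" continuum.

    0 degrees gives "ah", 90 degrees gives "ee", and angles in between
    give the in-between sounds described above, so sweeping the device
    smoothly glides from one vowel to the other.
    """
    t = max(0.0, min(1.0, tilt_deg / 90.0))  # clamp to the usable range
    f1_a, f2_a = VOWELS["ah"]
    f1_b, f2_b = VOWELS["ee"]
    # Linear interpolation between the two vowels' formants.
    return (f1_a + t * (f1_b - f1_a), f2_a + t * (f2_b - f2_a))
```

A real device would feed these formant targets into a synthesis engine; here the point is only that a continuous gesture naturally yields a continuous sound, rather than discrete prepared phrases.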
With this, it should be possible to hold conversations that are rich in expression and that fully read between the lines.
Why not try a new conversation method, using gestures?