Shall we talk?

by Vittoria Marino on 08/01/2014

In recent months we’ve seen growing confirmation of how important voice interfaces will be in the applications of the future. Just consider how many wearable devices have come to market, and how Google keeps adding voice functions to its browser. We are increasingly certain that our applications will need a voice interface.

To reach that goal, the first step was to introduce Voice Commands in Instant Developer 13.0, which use Google’s Speech API to let users interact by voice with web applications inside Chrome. A review of the feedback we received through our roadmap helped us choose the right direction for the next step.

The first implementation let you keep your hands free while giving commands to the application, but you still had to look at the device to read the answer. Also, only panels supported voice commands, so you still needed your hands when working with interfaces based on books.

This was our starting point in deciding what to introduce in the second version of Instant Developer voice commands, which I’m pleased to present to you today:

  • a voice synthesizer;
  • voice command support for books;
  • new, even more natural interaction.

Thanks to the voice synthesizer, the application now responds to user commands by speaking, making it possible to interact with the device without needing to look at it to see the effect of a command. This opens up new usage scenarios where conventional interaction with the application is difficult, such as when the screen is very small or the device must be used while moving. I’m thinking of small, round devices with a strap.
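To give an idea of the kind of loop involved, here is a minimal sketch of voice-in, voice-out interaction using the browser’s own Web Speech API in Chrome (the same family of APIs the post refers to). The command names and the `matchCommand` helper are purely illustrative assumptions, not part of Instant Developer’s actual API; the framework generates this plumbing for you.

```javascript
// Illustrative command registry: each command accepts a few
// natural-language variants, in the spirit of the "more natural
// interaction" described above.
const commands = {
  nextPage: ["next page", "go to the next page", "turn the page"],
  search:   ["search for", "find", "look for"]
};

// Match a recognized transcript to a registered command, or null.
function matchCommand(transcript) {
  const text = transcript.trim().toLowerCase();
  for (const [name, phrases] of Object.entries(commands)) {
    if (phrases.some(p => text.startsWith(p))) return name;
  }
  return null;
}

// In Chrome, wire speech recognition to speech synthesis so the
// app answers out loud instead of only on screen.
if (typeof window !== "undefined" && "speechSynthesis" in window) {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognizer = new Recognition();
  recognizer.lang = "en-US";
  recognizer.onresult = (event) => {
    const transcript = event.results[0][0].transcript;
    const command = matchCommand(transcript);
    const reply = command
      ? `Executing ${command}`
      : "Sorry, I did not understand";
    // The spoken answer: no need to look at the device.
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
  };
  recognizer.start();
}
```

The key design point is the last line of the handler: the answer goes through `speechSynthesis.speak` rather than only into the DOM, which is exactly what makes hands-free, eyes-free usage possible.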

Voice command support for books now makes it possible to move between pages and search for text in a report without using your hands.

Finally, since the language we use matters, we have improved the interaction to break down even more barriers between app and user: it is now more natural and more forgiving, accepting phrasing closer to how we’d speak to another person. All of this while still letting the developer customize commands and the voice recognizer’s responses so they fit the context of your app.

Stay tuned, because the next step will deal with offline apps: they’re just itching to talk to you.

Do you already have an application that you'd like to talk to?

  • I've already implemented voice commands; I just need to update to 13.1.
  • Now that apps respond, I know where to use voice commands.
  • Not yet. I need voice commands in offline apps.
  • I don't think the interface of the future will involve voice.

Image: Mustafa Khayat
