Wednesday, November 10, 2010

Voice Search in Underrepresented Languages



Welkom*!

Today we’re introducing Voice Search support for Zulu and Afrikaans, as well as South African-accented English. The addition of Zulu in particular represents our first effort in building Voice Search for underrepresented languages.

We define underrepresented languages as those which, while spoken by millions, have little presence in electronic and physical media, e.g., webpages, newspapers and magazines. Underrepresented languages have also often received little attention from the speech research community. Their phonetics, grammar, acoustics, etc., haven’t been extensively studied, making the development of automatic speech recognition (ASR) systems such as Voice Search challenging.

We believe that the speech research community needs to start working on many of these underrepresented languages in order to advance speech recognition, translation and other Natural Language Processing (NLP) technologies. The development of NLP technologies in these languages is critical for enabling information access for everybody. Indeed, these technologies have the potential to break language barriers.

We also think it’s important that researchers in these countries take a leading role in advancing the state of the art in their own languages. To this end, we’ve collaborated with the Multilingual Speech Technology group at South Africa’s North-West University, led by Prof. Etienne Barnard (also of the Meraka Institute), an authority on speech technology for South African languages. Our development effort was spearheaded by Charl van Heerden, a South African intern and a student of Prof. Barnard’s. With the help of Prof. Barnard’s team, we collected acoustic data in the three languages and developed lexicons and grammars, which Charl and others used to build the three Voice Search systems. A team of language specialists traveled to several cities collecting audio samples from hundreds of speakers in a variety of acoustic conditions, such as street noise and background speech. Speakers were asked to read typical search queries into an Android app designed specifically for audio data collection.

For Zulu, we faced the additional challenge of few text sources on the web. We often analyze the search queries from local versions of Google to build our lexicons and language models. However, for Zulu there weren’t enough queries to build a useful language model. Furthermore, since it has few online data sources, native speakers have learned to use a mix of Zulu and English when searching for information on the web. So for our Zulu Voice Search product, we had to build a truly hybrid recognizer, allowing free mixture of both languages. Our phonetic inventory covers both English and Zulu and our grammars allow natural switching from Zulu to English, emulating speaker behavior.
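As a rough illustration of how a language model can permit free code-switching, here is a toy bigram model over mixed Zulu/English query tokens. The example queries, the smoothing scheme and the scale are all invented for this sketch; the production recognizer is far larger and works jointly with acoustic models and a bilingual phonetic inventory:

```python
from collections import defaultdict

# Toy corpus of code-switched queries (invented examples; the real
# system is trained on far more query and speech data).
corpus = [
    "izindaba soccer results",
    "amaphupho meaning",
    "izindaba zanamuhla",
    "soccer results today",
]

BOS, EOS = "<s>", "</s>"
bigrams = defaultdict(int)
unigrams = defaultdict(int)
for query in corpus:
    tokens = [BOS] + query.split() + [EOS]
    for prev, cur in zip(tokens, tokens[1:]):
        bigrams[(prev, cur)] += 1
        unigrams[prev] += 1

def score(query, alpha=0.4):
    """Bigram probability of a query with crude additive smoothing."""
    tokens = [BOS] + query.split() + [EOS]
    vocab = len(unigrams) + 1
    p = 1.0
    for prev, cur in zip(tokens, tokens[1:]):
        p *= (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab)
    return p

# A code-switched query the model has seen evidence for scores higher
# than an unseen one, because Zulu and English tokens share one model.
print(score("izindaba soccer results") > score("izindaba cricket scores"))  # → True
```

Because Zulu and English words live in a single vocabulary, nothing in the model forbids a transition from a Zulu word to an English one; the training data alone determines how likely such switches are.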

This is our first release of Voice Search in a native African language, and we hope that it won’t be the last. We’ll continue to work on technology for languages that have until now received little attention from the speech recognition community.

Salani kahle!**

* “Welcome” in Afrikaans
** “Stay well” in Zulu

Friday, November 5, 2010

Suggesting a Better Remote Control



It seems clear that the TV is a growing source of online audio-video content that you select by searching. Entering the characters of a search string one by one with a traditional remote control and an onscreen keyboard is extremely tiresome. People have been working on better ways to search on the TV, ranging from small keyboards to voice input to interesting gestures you might make to let the TV know what you want. But for now, the traditional left-right-up-down clicker dominates as the family room input device. To enter the letters of a show, you click over and over until you reach the desired letter on the on-screen keyboard, then hit enter to select it. You repeat this mind-numbingly slow process until you have typed your query, or at least enough letters for the system to put up a list of suggested completions. Can we instead use a Google AutoComplete-style prediction model and a novel interface to make character entry less painful?
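To get a feel for how slow D-pad entry is, here is a rough click-count calculation. It assumes a simplified 6-column alphabetic grid keyboard and a cursor that starts on “a” (both assumptions for the sketch; real TV keyboards vary in layout):

```python
# Count D-pad presses needed to type a query on a 6-column alphabetic
# on-screen keyboard: each letter costs the Manhattan distance from the
# current cursor position plus one press of "select".
COLS = 6

def pos(ch):
    """(row, col) of a lowercase letter on the grid."""
    return divmod(ord(ch) - ord('a'), COLS)

def presses(query):
    total = 0
    r, c = pos('a')  # cursor starts on 'a'
    for ch in query:
        nr, nc = pos(ch)
        total += abs(nr - r) + abs(nc - c) + 1  # moves + select
        r, c = nr, nc
    return total

print(presses("seinfeld"))  # → 35
```

Eight letters already cost 35 presses under this model, which is the pain a next-letter suggestion scheme aims to remove.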

We have developed an interaction model that reduces the distance to the predicted next letter without scrambling or moving letters on the underlying keyboard (which is annoying and increases the time it takes to find the next letter). We reuse the highlight ring around the currently selected letter and fill it with 4 possible characters that might be next, but we do not change the underlying keyboard layout. With 4 slots to suggest the next letter and a good prediction model trained on the target corpus, the next letter is often right where you are looking and just a click away.
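The prediction side can be sketched as simple prefix statistics over a corpus of titles: count which letter follows each typed prefix, then offer the top four as one-click suggestions. The title list below is invented for illustration and this is not the actual QuickSuggest model or its training corpus:

```python
from collections import Counter, defaultdict

# Toy corpus of show titles (invented; the real model is trained on
# the target search corpus).
titles = ["seinfeld", "sesame street", "star trek", "star wars",
          "the office", "the simpsons", "top gear"]

# For every prefix of every title, count the letter that follows it.
next_letter = defaultdict(Counter)
for title in titles:
    for i in range(len(title)):
        next_letter[title[:i]][title[i]] += 1

def suggest(prefix, k=4):
    """Top-k most likely next characters after the typed prefix."""
    return [ch for ch, _ in next_letter[prefix].most_common(k)]

print(suggest(""))    # → ['s', 't']  (first letters ranked by frequency)
print(suggest("st"))  # → ['a']
```

With four suggestion slots filled this way, the predicted next letter is frequently in the ring around the cursor, turning a multi-press traversal into a single click.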

To learn more about this combination of User Experience and Machine Learning to address a growing problem with searching on TVs, check out our WWW 2010 publication, QuickSuggest.