Thursday, August 2, 2012

Reflections on Digital Interactions: Thoughts from the 2012 NA Faculty Summit



Last week, we held our eighth annual North America Computer Science Faculty Summit at our headquarters in Mountain View. Over 100 leading faculty joined us from 65 universities located in North America, Asia Pacific and Latin America to attend the two-day Summit, which focused on new interactions in our increasingly digital world.

In my introductory remarks, I shared some themes that are shaping our research agenda. The first relates to the amazing scale of systems we can now contemplate. How can we get to computational clouds of, perhaps, a billion cores (or processing elements)? How can such clouds be efficient and manageable, and what will they be capable of? Google is actively working on most aspects of large-scale systems, and we continue to look for opportunities to collaborate with our academic colleagues. I also noted that we announced a cloud-based program to support education, based on Google App Engine technology.

Another theme in my introduction was semantic understanding. With the introduction of our Knowledge Graph and other work, we are making great progress toward data-driven analysis of the meaning of information. Users, who provide a continual stream of subtle feedback, drive continuous improvement in the quality of our systems, whether the information concerns a celebrity, the meaning of a word in context, or a historical event. In addition, we have found that combining information from multiple sources helps us understand meaning more efficiently. When multiple signals are aggregated, particularly with different types of analysis, we make fewer errors and achieve better semantic understanding. Applying this “combination hypothesis” makes systems more intelligent.

Finally, I talked about User Experience. Our field is developing ever more creative user interfaces (which both present information to users and accept information from them), partly due to the revolution in mobile computing and partly due to the availability of large-scale processing in the cloud and deeper semantic understanding. There is no doubt that our interactions with computers will be vastly different 10 years from now; they will be significantly more fluid and natural.

This page lists the Googler and faculty presentations at the Summit.

One of the highest-intensity sessions we had was the panel on online learning, with Daphne Koller from Stanford/Coursera and Peter Norvig and Bradley Horowitz from Google. While there is a long way to go, I am so pleased that academicians are now thinking seriously about how information technology can be used to make education more effective and efficient. The infrastructure and user-device building blocks are there, and I think the community can now quickly get creative and provide the experiences we want for our students. Certainly, our own recent experience with our online Power Searching course shows that the baseline approach works, but it also illustrates how much more can be done.

I asked Elliot Solloway (University of Michigan) and Cathleen Norris (University of North Texas), two faculty attendees, to provide their perspective on the panel and they have posted their reflections on their blog.

The digital era is changing the human experience. The summit talks and sessions exemplified the new ways in which we interact with devices, each other, and the world around us, and revealed the vast potential for further innovation in this space. Events such as these keep ideas flowing, and it’s immensely fun to be part of a very broadly based computer science community.

Friday, November 5, 2010

Suggesting a Better Remote Control



It seems clear that the TV is a growing source of online audio-video content that you select by searching. Entering the characters of a search string one by one using a traditional remote control and an on-screen keyboard is extremely tiresome. People have been working on better ways to search on the TV, ranging from small keyboards to voice input to interesting gestures you might make to let the TV know what you want. But currently the traditional left-right-up-down clicker dominates as the family-room input device. To enter the letters of a show’s title, you click over and over until you reach the desired letter on the on-screen keyboard, then hit enter to select it. You repeat this mind-numbingly slow process until you have typed your query string, or at least enough letters that the system can put up a list of suggested completions. Can we instead use a Google Autocomplete-style recommendation model and a novel interface to make character entry less painful?
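
To get a rough sense of just how costly this is, here is a small, purely illustrative sketch (not from the paper): it counts d-pad clicks needed to type a title on a hypothetical 6-column A–Z grid keyboard, assuming row/column navigation plus one select press per character.

```python
# Illustrative estimate of d-pad clicks on a simple on-screen A-Z grid.
# The 6-column layout and the example title are assumptions for illustration.

KEYBOARD = ["abcdef", "ghijkl", "mnopqr", "stuvwx", "yz"]

# Map each letter to its (row, column) position on the grid.
POS = {ch: (r, c) for r, row in enumerate(KEYBOARD) for c, ch in enumerate(row)}

def click_count(query: str, start: str = "a") -> int:
    """Total navigation moves plus select presses needed to enter `query`."""
    clicks, cur = 0, POS[start]
    for ch in query.lower():
        if ch not in POS:
            continue  # skip spaces/punctuation for simplicity
        nxt = POS[ch]
        clicks += abs(nxt[0] - cur[0]) + abs(nxt[1] - cur[1])  # move the cursor
        clicks += 1                                            # press select
        cur = nxt
    return clicks

print(click_count("modern family"))  # dozens of clicks for one short title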

We have developed an interaction model that reduces the distance to the predicted next letter without scrambling or moving letters on the underlying keyboard (which is annoying and increases the time it takes to find the next letter). We reuse the highlight ring around the currently selected letter and fill it with 4 possible characters that might be next, but we do not change the underlying keyboard layout. With 4 slots to suggest the next letter and a good prediction model trained on the target corpus, the next letter is often right where you are looking and just a click away.
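
As a rough illustration of the idea (not the actual QuickSuggest model), the sketch below builds a simple prefix-frequency predictor over a small, made-up corpus of titles and returns the top 4 next-character candidates that would fill the four slots of the highlight ring. The class name, corpus, and model choice are all hypothetical.

```python
# A minimal sketch, assuming a prefix-frequency model trained on a corpus of
# titles; the real system's prediction model may be more sophisticated.

from collections import Counter, defaultdict

class NextCharSuggester:
    def __init__(self, titles):
        # For every prefix seen in the corpus, count which character follows it.
        self.counts = defaultdict(Counter)
        for title in titles:
            t = title.lower()
            for i in range(len(t)):
                self.counts[t[:i]][t[i]] += 1

    def suggest(self, prefix, k=4):
        """Return up to k most likely next characters for the typed prefix."""
        return [ch for ch, _ in self.counts[prefix.lower()].most_common(k)]

# Usage: fill the four ring slots around the currently highlighted key.
corpus = ["modern family", "monk", "mad men", "madagascar", "the office"]
suggester = NextCharSuggester(corpus)
print(suggester.suggest("m"))  # e.g. ['o', 'a'] for "mo..." and "ma..." titles
```

In practice the predictor would be trained on the full target corpus, but the interaction stays the same: the predicted characters ride along with the highlight ring rather than rearranging the keyboard underneath.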

To learn more about this combination of User Experience and Machine Learning to address a growing problem with searching on TVs, check out our WWW 2010 publication, QuickSuggest.