Wednesday, October 3, 2012
EMEA Faculty Summit 2012
Last week we held our fifth Europe, Middle East and Africa (EMEA) Faculty Summit in London, bringing together 94 of EMEA’s foremost computer science academics from 65 universities in 25 countries, along with more than 60 Googlers.
This year’s jam-packed agenda included a welcome reception at the Science Museum (plus a tour of the special exhibition “Codebreaker - Alan Turing’s life and legacy”), a keynote on “Research at Google” by Alfred Spector, Vice President of Research and Special Initiatives, and a welcome address by Nelson Mattos, Vice President of Engineering and Products in EMEA, covering Google’s engineering activity and recent innovations in the region.
The Faculty Summit is a chance for us to meet with academics in computer science and other areas to discuss the latest exciting developments in research and education, and to explore ways in which we can collaborate via our University Relations programs.
The two and a half day program consisted of tech talks, breakout sessions, a panel on online education, and demos. The program covered a variety of computer science topics including Infrastructure, Cloud Computing Applications, Information Retrieval, Machine Translation, Audio/Video, Machine Learning, User Interfaces, e-Commerce, Digital Humanities, Social Media, and Privacy. For example, Ed H. Chi summarized how researchers use data analysis to understand the ways users share content with their audiences using the Circles feature in Google+. Jens Riegelsberger summarized how UI design and user experience research are essential to creating a seamless experience on Google Maps. John Wilkes discussed some of the research challenges - and opportunities - associated with building, managing, and using computer systems at massive scale. Breakout sessions ranged from technical follow-ups on the talk topics to discussions of ways to increase the presence of women in computer science.
We also held one-on-one sessions where academics and Googlers could meet privately and discuss topics of personal interest, such as how to develop a compelling research award proposal, how to apply for a sabbatical at Google or how to gain Google support for a conference in a particular research area.
The Summit provides a great opportunity to build and strengthen research and academic collaborations. Our hope is to drive research and education forward by fostering mutually beneficial relationships with our academic colleagues and their universities.
Tuesday, August 21, 2012
Faculty Summit 2012: Online Education Panel
Posted by Peter Norvig, Director of Research
On July 26th, Google's 2012 Faculty Summit hosted computer science professors from around the world for a chance to talk and hear about some of the work done by Google and by our faculty partners. One of the sessions was a panel on Online Education. Daphne Koller's presentation on "Education at Scale" describes how a talk about YouTube at the 2009 Google Faculty Summit was an early inspiration for her, as she was formulating her approach that led to the founding of Coursera. Koller started with the goal of allowing Stanford professors to have more time for meaningful interaction with their students, rather than just lecturing, and ended up with a model based on the flipped classroom, where students watch videos out of class, and then come together to discuss what they have learned. She then refined the flipped classroom to work when there is no classroom, when the interactions occur in online discussion forums rather than in person. She described some fascinating experiments that allow for more flexible types of questions (beyond multiple choice and fill-in-the-blank) by using peer grading of exercises.
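To give a flavor of how peer grading can yield reliable scores at scale, here is a minimal sketch of one common aggregation scheme (an illustration only, not necessarily the scheme Coursera uses): each submission is scored by several peers, and taking the median resists any single overly harsh or generous grader.

```python
from statistics import median

def peer_grade(scores_by_submission):
    """Aggregate several peer scores per submission via the median,
    which resists a single outlier grader."""
    return {sub: median(scores) for sub, scores in scores_by_submission.items()}

# Hypothetical scores from four peers per submission, on a 0-10 scale.
print(peer_grade({
    "essay_1": [8, 9, 8, 2],   # one outlier grader; the median stays at 8
    "essay_2": [5, 6, 6, 7],
}))
# -> {'essay_1': 8.0, 'essay_2': 6.0}
```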
In my talk, I described how I arrived at a similar approach but starting from a different motivation: I wanted a textbook that was more interactive and engaging than a static paper-based book, so I too incorporated short videos and frequent interactions for the Intro to AI class I taught with Sebastian Thrun.
Finally, Bradley Horowitz, Vice President of Product Management for Google+, gave a talk describing the goals of Google+. The goal is not to build the largest social network; rather, it is to understand our users better, so that we can serve them better, while respecting their privacy and keeping each of their conversations within the appropriate circle of friends. This allows people to have more meaningful conversations, within a limited context, and turns out to be very appropriate to education.
By bringing people together at events like the Faculty Summit, we hope to spark the conversations and ideas that will lead to the next breakthroughs, perhaps in online education, or perhaps in other fields. We'll find out a few years from now what ideas took root at this year's Summit.
Thursday, August 2, 2012
Reflections on Digital Interactions: Thoughts from the 2012 NA Faculty Summit
Posted by Alfred Spector, Vice President of Research and Special Initiatives
Last week, we held our eighth annual North America Computer Science Faculty Summit at our headquarters in Mountain View. Over 100 leading faculty joined us from 65 universities located in North America, Asia Pacific and Latin America to attend the two-day Summit, which focused on new interactions in our increasingly digital world.
In my introductory remarks, I shared some themes that are shaping our research agenda. The first relates to the amazing scale of systems we can now contemplate. How can we get to computational clouds of, perhaps, a billion cores (or processing elements)? How can such clouds be efficient and manageable, and what will they be capable of? Google is actively working on most aspects of large-scale systems, and we continue to look for opportunities to collaborate with our academic colleagues. I also noted that we announced a cloud-based program to support education based on Google App Engine technology.
Another theme in my introduction was semantic understanding. With the introduction of our Knowledge Graph and other work, we are making great progress toward data-driven analysis of the meaning of information. Users, who provide a continual stream of subtle feedback, drive continuous improvement in the quality of our systems, whether the subject is a celebrity, the meaning of a word in context, or a historical event. In addition, we have found that combining information from multiple sources helps us understand meaning more efficiently. When multiple signals are aggregated, particularly with different types of analysis, we have fewer errors and improved semantic understanding. Applying this “combination hypothesis” makes systems more intelligent.
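To make the intuition behind the combination hypothesis concrete, here is a small, purely illustrative simulation (not any actual Google system): five independent signals, each only 70% accurate on its own, become roughly 84% accurate when aggregated by a simple majority vote.

```python
import random

def noisy_signal(truth, accuracy=0.7):
    """One individually weak binary signal: right with probability `accuracy`."""
    return truth if random.random() < accuracy else 1 - truth

def majority_vote(votes):
    """Aggregate independent binary signals by simple majority."""
    return 1 if 2 * sum(votes) > len(votes) else 0

random.seed(0)
trials = 10_000
single = combined = 0
for _ in range(trials):
    truth = random.randint(0, 1)
    votes = [noisy_signal(truth) for _ in range(5)]  # five independent signals
    single += votes[0] == truth
    combined += majority_vote(votes) == truth

print(f"one signal      : {single / trials:.3f}")    # ~0.70
print(f"majority of five: {combined / trials:.3f}")  # ~0.84
```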
Finally, I talked about user experience. Our field is developing ever more creative user interfaces (which both present information to users and accept information from them), partly due to the revolution in mobile computing and partly due to the availability of large-scale processing in the cloud and deeper semantic understanding. There is no doubt that our interactions with computers will be vastly different 10 years from now, and that they will be significantly more fluid and natural.
This page lists the Googler and Faculty presentations at the summit.
One of the highest-intensity sessions we had was the panel on online learning with Daphne Koller from Stanford/Coursera, and Peter Norvig and Bradley Horowitz from Google. While there is a long way to go, I am so pleased that academics are now thinking seriously about how information technology can be used to make education more effective and efficient. The infrastructure and user-device building blocks are there, and I think the community can now quickly get creative and provide the experiences we want for our students. Certainly, our own recent experience with our online Power Searching course shows that the baseline approach works, but it also illustrates how much more can be done.
I asked Elliot Soloway (University of Michigan) and Cathleen Norris (University of North Texas), two faculty attendees, to provide their perspectives on the panel, and they have posted their reflections on their blog.
The digital era is changing the human experience. The summit talks and sessions exemplified the new ways in which we interact with devices, each other, and the world around us, and revealed the vast potential for further innovation in this space. Events such as these keep ideas flowing, and it’s immensely fun to be part of this very broadly based computer science community.
Wednesday, August 1, 2012
Natural Language in Voice Search
Posted by Jakob Uszkoreit, Software Engineer
On July 26 and 27, we held our eighth annual Computer Science Faculty Summit on our Mountain View Campus. During the event, we brought you a series of blog posts dedicated to sharing the Summit's talks, panels and sessions, and we continue with this glimpse into natural language in voice search. --Ed
At this year’s Faculty Summit, I had the opportunity to showcase the newest version of Google Voice Search. This version hints at how Google Search, in particular on mobile devices and by voice, will become increasingly capable of responding to natural language queries.
I first outlined the trajectory of Google Voice Search, which was initially released in 2007. Voice actions, launched in 2010 for Android devices, made it possible to control your device by speaking to it. For example, if you wanted to set your device alarm for 10:00 AM, you could say “set alarm for 10:00 AM. Label: meeting on voice actions.” To indicate the subject of the alarm, a meeting about voice actions, you would have to use the keyword “label”! Certainly not everyone would think to frame the requested action this way. What if you could speak to your device in a more natural way and have it understand you?
At last month’s Google I/O 2012, we announced a version of voice actions that supports much more natural commands. For instance, your device will now set an alarm if you say “my meeting is at 10:00 AM, remind me”. This makes even previously existing functionality, such as sending a text message or calling someone, more discoverable on the device -- that is, if you express a voice command in whatever way feels natural to you, whether it be “let David know I’ll be late via text” or “make sure I buy milk by 3 pm”, there is now a good chance that your device will respond as you anticipated.
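To illustrate the general shape of the problem (this toy sketch is hypothetical and hand-written, unlike the learned system discussed here), one can think of the interpreter as mapping many different phrasings onto a small set of structured actions with slots:

```python
import re
from typing import Optional

# Hand-written patterns, each mapping a family of natural phrasings onto one
# structured action. The intent names and rules below are purely illustrative.
PATTERNS = [
    (re.compile(r"my (?P<subject>.+) is at (?P<time>\d{1,2}(:\d{2})?\s*(am|pm)), remind me", re.I),
     "set_alarm"),
    (re.compile(r"let (?P<contact>\w+) know (?P<message>.+) via text", re.I),
     "send_sms"),
    (re.compile(r"make sure i (?P<task>.+) by (?P<time>.+)", re.I),
     "set_reminder"),
]

def interpret(utterance: str) -> Optional[dict]:
    """Map an utterance to a structured action (intent plus slots), or None."""
    for pattern, intent in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return {"intent": intent, **match.groupdict()}
    return None

print(interpret("my meeting is at 10:00 AM, remind me"))
# -> {'intent': 'set_alarm', 'subject': 'meeting', 'time': '10:00 AM'}
print(interpret("let David know I'll be late via text"))
# -> {'intent': 'send_sms', 'contact': 'David', 'message': "I'll be late"}
```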
I then discussed some of the possibly unexpected decisions we made when designing the system we now use for interpreting natural language queries and requests. For example, as you would expect from Google, our approach to interpreting natural language queries is data-driven and relies heavily on machine learning. In complex machine learning systems, however, it is often difficult to figure out the underlying cause of an error: after supplying them with training and test data, you merely obtain a set of metrics that hopefully give a reasonable indication of the system’s quality, but they fail to explain why a certain input led to a given, possibly wrong, output.
As a result, even understanding why certain mistakes were made requires experts in the field and detailed analysis, making it nearly impossible for non-experts to help analyze and improve such systems. To avoid this, we aim to make every partial decision of the system as interpretable as possible. In many cases, any speaker of English could look at the system’s possibly erroneous behavior in response to some input and quickly identify the underlying issue - and in some cases even fix it!
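One simple way to approximate that goal, sketched hypothetically below, is to have every stage of the pipeline record a human-readable justification alongside its output, so that a non-expert can see exactly where an interpretation went wrong:

```python
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    """Carries the evolving analysis plus a human-readable decision trace."""
    text: str
    intent: str = "unknown"
    trace: list = field(default_factory=list)

    def decide(self, what: str, why: str):
        self.trace.append(f"{what} (because {why})")

def classify_intent(interp: Interpretation) -> Interpretation:
    # A deliberately simple stage: every decision it makes is written down.
    if "remind me" in interp.text.lower():
        interp.intent = "set_reminder"
        interp.decide("intent = set_reminder",
                      "the utterance contains the phrase 'remind me'")
    else:
        interp.decide("intent = unknown", "no known trigger phrase was found")
    return interp

result = classify_intent(Interpretation("my meeting is at 10:00 AM, remind me"))
print(result.intent)
for step in result.trace:
    print(" -", step)  # each partial decision is inspectable by a non-expert
```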
We are especially interested in working with our academic colleagues on some of the many fascinating research and engineering challenges in building large-scale, yet interpretable natural language understanding systems and devising the machine learning algorithms this requires.
Friday, July 27, 2012
Big Pictures with Big Messages
Posted by Maggie Johnson, Director of Education and University Relations
Google’s Eighth Annual Computer Science Faculty Summit opened today in Mountain View with a fascinating talk by Fernanda Viégas and Martin Wattenberg, leaders of the data visualization group at our Cambridge office. They provided insight into their design process for visualizing big data by highlighting Google+ Ripples and a wind map they created.
To preface his explanation of the design process, Martin shared that his team “wants visualization to be ‘G-rated,’ showing the full detail of the data - there’s no need to simplify it, if complexity is done right.” Martin discussed how their wind map started as a personal art project but has attracted particular interest from groups who depend on wind information (sailors, surfers, firefighters). The map displays surface wind data from the US National Digital Forecast Database and updates hourly. You can zoom around the United States looking for where the winds are fastest - often around lakes or just offshore - or check out the gallery to see snapshots of the wind from days past.
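For readers who would like to experiment with this kind of display, here is a minimal sketch of a wind vector-field plot over synthetic data; the actual map pulls hourly NDFD forecasts and uses a custom animated particle renderer, so everything below is illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic surface winds on a coarse lat/lon grid covering the continental
# US (toy formulas stand in for real forecast data).
lon, lat = np.meshgrid(np.linspace(-125, -67, 24), np.linspace(25, 49, 12))
u = 5 * np.cos(np.radians(lat * 6))  # eastward component
v = 3 * np.sin(np.radians(lon * 4))  # northward component
speed = np.hypot(u, v)

plt.quiver(lon, lat, u, v, speed, cmap="Greys")  # arrows shaded by speed
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Toy surface-wind field")
plt.show()
```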
Fernanda discussed the development of Google+ Ripples, a visualization that shows how news spreads on Google+. The visualization shows spheres of influence and different patterns of spread. For example, someone might post a video to their Google+ page, and if it goes viral, we’ll see several circles in the visualization. This depicts the influence of different individuals sharing content, both in terms of the number of their followers and the re-shares of the video, and has revealed that individuals are at times more influential than organizations in the social media domain.
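Conceptually, one can think of Ripples as rendering the reshare cascade as nested circles, one per influential resharer, sized by the amount of sharing beneath them. A hypothetical back-of-the-envelope sketch of that underlying structure:

```python
from collections import defaultdict

# Toy reshare edges: (resharer, whom they reshared from). Illustrative data
# only; Ripples itself is built from real Google+ reshare events.
edges = [
    ("alice", "origin"), ("bob", "origin"), ("carol", "alice"),
    ("dave", "alice"), ("erin", "carol"), ("frank", "bob"),
]

children = defaultdict(list)
for resharer, source in edges:
    children[source].append(resharer)

def cascade_size(user):
    """Reshares beneath a user -- a proxy for that user's circle area."""
    return sum(1 + cascade_size(child) for child in children.get(user, ()))

# Anyone whose post or reshare triggered further sharing gets a circle.
for user in sorted(children):
    print(f"{user}: cascade of {cascade_size(user)}")
# -> alice: 3, bob: 1, carol: 1, origin: 6 (alice drives more spread than bob)
```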
Martin and Fernanda closed with two important lessons in data visualization: first, don’t “dumb down” the data. If complexity is handled correctly and in interesting ways, our users find the details appealing and find their own ways to interact with and expand upon the data. Second, users like to see their personal world in a visualization. Being able to see the spread of a Google+ post, or to zoom in to see the wind around one’s town, is what makes a visualization personal and compelling -- we call this the “I can see my house from here” feature.
The Faculty Summit will continue through Friday, July 27 with talks by Googlers and faculty guests as well as breakout sessions on specific topics related to this year’s theme of digital interactions. We will be looking closely at how computation and bits have permeated our everyday experiences via smart phones, wearable computing, social interactions, and education.
We will be posting here throughout the summit with updates and news as it happens.