Thursday, March 28, 2013

Imagery on Google Maps of Fukushima Exclusion Zone Town Namie-machi

From time to time we invite guests to post about items of interest and are pleased to have Mr. Tamotsu Baba, Mayor of Namie-machi, Fukushima, Japan, join us here. - Ed.

Namie-machi is a small town in Fukushima Prefecture sitting along the coast of the Pacific. We are blessed with both ocean and mountains, and are known as a place where you can experience both the beauty of the sea and the forests. Tragically, however, since the nuclear accident caused by the Great East Japan Earthquake of March 11, 2011, all of Namie-machi’s 21,000 townspeople have had to flee their homes.

Two years have passed since the disaster, but people still aren’t allowed to enter Namie-machi. Many of the displaced townspeople have asked to see the current state of their town, and there are surely many people around the world who want a better sense of how the nuclear incident affected surrounding communities.

Working with Google, we were able to drive Street View cars through Namie-machi to capture panoramic images of the abandoned town exactly as it stands today. Starting today, this Street View imagery is available on Google Maps and the Memories for the Future site, so anyone from Namie or around the world can view it.



Here is one of Namie-machi’s main streets, which we often used for outdoor events like our big Ten Days of Autumn festival that saw 300 street stalls and 100,000 visitors.



Many buildings, like this one in the foreground, collapsed during the earthquake, and we still have not been able to remove them. We are also unable to repair damaged buildings and shops, or to prepare them for the potential impact of further aftershocks.



This image shows an area located one kilometer inland from the Pacific Ocean. In the distance you can see Ukedo Elementary School. Nearby Ukedo Harbor once proudly boasted 140 fishing boats and 500 buildings, but suffered some of the worst tsunami damage. Since the area was declared off-limits, we have not been able to clean up the wreckage on the side of the road, including the many fishing boats that were washed several kilometers inland.

Ever since the March disaster, the rest of the world has been moving forward, and many places in Japan have started recovering. But in Namie-machi time stands still. With the lingering nuclear hazard, we have only been able to do cursory work for two whole years. We would greatly appreciate it if you viewed this Street View imagery to understand the current state of Namie-machi and the tremendous gravity of the situation.

Those of us in the older generation feel that we received this town from our forebears, and we feel great pain that we cannot pass it down to our children. It has become our generation’s duty to make sure future generations understand the town’s history and culture—even those who will have no memory of the Fukushima nuclear accident. We want this Street View imagery to become a permanent record of what happened to Namie-machi in the earthquake, tsunami, and nuclear disaster.

Finally, I want to make a renewed commitment to recovering from the nuclear hazard. It may take many years and many people’s help, but we will never give up on taking back our hometown.

(Cross-posted and translated from the Google Japan Blog)

Education Awards on Google App Engine



Cross-posted with Google Developers Blog

Last year we invited proposals for innovative projects built on Google’s infrastructure. Today we are pleased to announce the 11 recipients of a Google App Engine Education Award. Professors and their students are using the award in cloud computing courses to study databases, distributed systems, web mashups and to build educational applications. Each selected project received $1000 in Google App Engine credits.

Awarding computational resources to classroom projects is always gratifying. It is impressive to see the creative ideas students and educators bring to these programs.
Below is a brief introduction to each project. Congratulations to the recipients!

John David N. Dionisio, Loyola Marymount University
Project description: The objective of this undergraduate database systems course is for students to implement one database application on two technology stacks: a traditional relational database and Google App Engine. Students are asked to study both models and provide concrete comparison points.
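
To make the comparison concrete, here is a minimal sketch, not taken from the course materials, of what the App Engine side of such a two-stack exercise could look like using the Python NDB datastore API. The Book entity and its properties are hypothetical examples chosen for illustration.

# Sketch only: an App Engine datastore model using the Python NDB API.
# The Book entity is illustrative and not part of the course described above.
from google.appengine.ext import ndb

class Book(ndb.Model):
    # Roughly the counterpart of a relational table such as:
    #   CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author TEXT, year INTEGER)
    title = ndb.StringProperty(required=True)
    author = ndb.StringProperty()
    year = ndb.IntegerProperty()

# Store an entity and query it back: the datastore analogue of INSERT and SELECT.
Book(title='Example Title', author='A. Author', year=2013).put()
recent_books = Book.query(Book.year >= 2010).fetch(10)

Comparing code like this against the equivalent SQL schema and queries is the kind of side-by-side study the course asks students to perform.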

Xiaohui (Helen) Gu, North Carolina State University
Project description: Advanced Distributed Systems Class
The goal of the project is to allow students to learn distributed systems concepts by developing real distributed system management systems and testing them on real-world cloud computing infrastructures such as Google App Engine.

Shriram Krishnamurthi, Brown University
Project description: WeScheme is a programming environment that runs in the web browser and supports interactive development. WeScheme uses App Engine to handle user accounts, server-side compilation, and file management.

Feifei Li, University of Utah
Project description: A graduate-level course that will be offered in Fall 2013 on the design and implementation of large data management system kernels. The objective is to integrate features from a relational database engine with some of the new features from NoSQL systems to enable efficient and scalable data management over a cluster of commodity machines.

Mark Liffiton, Illinois Wesleyan University
Project description: TeacherTap is a free, simple classroom-response system built on Google App Engine. It lets students give instant, anonymous feedback to teachers about a lecture or discussion from any computer or mobile device with a web browser, facilitating more adaptive class sessions.

Eni Mustafaraj, Wellesley College
Project description: Topics in Computer Science: Web Mashups. A CS2 course that combines Google App Engine and MIT App Inventor. Students will learn to build apps with App Inventor to collect data about their life on campus. They will use Google App Engine to build web services and apps to host the data and remix it to create web mashups. Offered in the 2013 Spring semester.

Manish Parashar, Rutgers University
Project description: Cloud Computing for Scientific Applications -- Autonomic Cloud Computing teaches students how a hybrid HPC/Grid + Cloud cyberinfrastructure can be effectively used to support real-world science and engineering applications. The goal of our efforts is to explore application formulations, as well as Cloud and hybrid HPC/Grid + Cloud infrastructure usage modes, that are meaningful for various classes of science and engineering application workflows.

Orit Shaer, Wellesley College
Project description: GreenTouch
GreenTouch is a collaborative environment that enables novice users to engage in authentic scientific inquiry. It consists of a mobile user interface for capturing data in the field, a web application for data curation in the cloud, and a tabletop user interface for exploratory analysis of heterogeneous data.

Elliot Soloway, University of Michigan
Project description: WeLearn Mobile Platform: Making Mobile Devices Effective Tools for K-12. The platform makes mobile devices (Android, iOS, WP8) effective, essential tools for all-the-time, everywhere learning. WeLearn’s suite of productivity and communication apps enable learners to work collaboratively; WeLearn’s portal, hosted on Google App Engine, enables teachers to send assignments, review, and grade student artifacts. WeLearn is available to educators at no charge.

Jonathan White, Harding University
Project description: Teaching Cloud Computing in an Introduction to Engineering class for freshmen. We explore how well-designed systems are built to withstand unpredictable stresses, whether that system is a building, a piece of software or even the human body. The grant from Google is allowing us to add an overview of cloud computing as a platform that is robust under diverse loads.

Dr. Jiaofei Zhong, University of Central Missouri
Project description: By building an online Course Management System, students will be able to work on their team projects in the cloud. The system allows instructors and students to manage the course materials, including course syllabus, slides, assignments and tests in the cloud; the tool can be shared with educational institutions worldwide.

Wednesday, March 27, 2013

Enabling the next generation of computer scientists with CS4HS

For the past four years, Google has sponsored an initiative called Computer Science for High School (CS4HS). The mission of this aptly named collaboration is simple: to bring computer science professional development to educators through hands-on workshops. In collaboration with universities, colleges and technical schools, we have helped K-12 educators bring CS into their classrooms around the world; to date, we have helped train more than 6,000 teachers worldwide. From Canada to China and Germany to New Zealand, our programs reach more countries with every iteration.

Today, we are pleased to announce the recipients of the 5th annual CS4HS Google grant. (To see the full list, visit our site.) As our program grows, we are working to engage as many teachers as possible in our CS efforts. To that end, this year we are offering four free MOOC courses for educators who may not be physically close to one of our workshops, but who are eager to learn the basics of computer science. In addition, we are launching our new CS4HS Community page; join the conversation and help shape the next generation of computer scientists!


Monday, March 25, 2013

Global Impact Awards’ hunt for U.K.’s most innovative social entrepreneurs starts today

From cracking the human genome to advancing medical research through computer games, British social entrepreneurs have a proud history of using technology to make the world a better place.

Last year, we launched the Global Impact Awards to support nonprofits using technology to tackle some of the world’s toughest problems. We gave $23 million to seven organizations working on projects ranging from aerial technology that protects wildlife to data algorithms that ensure more girls and minorities get placed in advanced math and science classes.

Today, as the next step in the Impact Awards, we’re kicking off our first Global Impact Challenge in the U.K., inviting British nonprofits to tell us how they would use technology to transform lives. Four nonprofits will each receive a £500,000 Global Impact Award, as well as Chromebooks and technical assistance from Googlers to help make their project a reality.

Applications open today, and registered British nonprofits are invited to apply online at g.co/impactchallenge. We’ll review applications and announce 10 finalists on May 22. At that point, people across the U.K. can learn more about the projects of the top 10 finalists, donate to the ones they like and cast a vote for the fan favorite. On June 3, the top 10 finalists will pitch their concepts to a judging panel that includes us (Matt Brittin and Jacquelline Fuller), Sir Richard Branson, Sir Tim Berners-Lee and Jilly Forster. The three awardees and the fan favorite will be revealed at the event, which will take place at Google London.

Technology can help solve some of the world’s most pressing challenges and we’re eager to back innovators who are finding new ways to make an impact. Today we’re starting the hunt in the U.K., but we also know that nonprofits all over the world are using techy approaches to develop new solutions in their sector. Who knows, the Global Impact Challenge might head your way next.


Thursday, March 21, 2013

Urban art, zoomorphic whistles and Hungarian poetry

There are few places (if any) in the world where you could find urban art, zoomorphic whistles* and Hungarian poetry in a single place—except, of course, on the Internet.

Today 30 new partners are joining the Google Art Project, contributing nearly 2,000 diverse works including contemporary art from Latin America, ancient art from China, rare Japanese paintings and Palaeolithic flint heads from Spain.

One highlight of the new collection is a project to capture the growing trend of urban art and graffiti in Brazil. More than 100 works from walls, doors and galleries in São Paulo have been photographed and will be included in the Art Project. The pieces were chosen by a group of journalists, artists and graffiti experts and include artists such as Speto, Kobra and Space Invader, as well as images of São Paulo’s most famous building-size murals. You can see the contrast in styles in the Compare tool and image below.


Photography features strongly in the works our partners are bringing online this time around. The Fundacion MAPFRE in Spain showcases one of the largest collections with more than 300 photos from a number of renowned photographers. For example, you can explore Mexican photographer Graciela Iturbide’s black and white images of indigenous Mexican culture inspired by themes of ritual, death and feminism.

The Art Project is also becoming a home to rare and precious items that go beyond paintings. Petőfi Literary Museum in Hungary has contributed the Nemzeti Dal or “National Song,” a Hungarian poem which is said to have been the inspiration for the Hungarian Revolution of 1848. The original document has rarely been seen in public, to prevent humidity and light from fading the script further. Online now for the first time, it can be explored by anyone in the world.

With 40,000+ artworks to explore from more than 200 museums in more than 40 countries, we look forward to seeing these new works featured in the hundreds of thousands of user galleries you have created to date. Keep an eye on our Google+ page for more details about the new collections.

*ceramic whistles in the shape of animals!

Google Keep—Save what’s on your mind

Every day we all see, hear or think of things we need to remember. Usually we grab a pad of sticky-notes, scribble a reminder and put it on the desk, the fridge or the relevant page of a magazine. Unfortunately, if you’re like me you probably often discover that the desk, fridge or magazine wasn’t such a clever place to leave the note after all...it’s rarely where you need it when you need it.

To solve this problem we’ve created Google Keep. With Keep you can quickly jot ideas down when you think of them and even include checklists and photos to keep track of what’s important to you. Your notes are safely stored in Google Drive and synced to all your devices so you can always have them at hand.

If it’s more convenient to speak than to type that’s fine—Keep transcribes voice memos for you automatically. There’s super-fast search to find what you’re looking for and when you’re finished with a note you can archive or delete it.



Changing priorities isn’t a problem: just open Keep on your Android phone or tablet (there’s a widget so you can have Keep front and center all the time) and drag your notes around to reflect what matters. You can choose the color for each note too.

Pro tip: for adding thoughts quickly without unlocking your device there's a lock screen widget (on devices running Android 4.2+).


Google Keep is available on Google Play for devices running Android 4.0, Ice Cream Sandwich and above. You can access, edit and create new notes on the web at http://drive.google.com/keep and in the coming weeks you'll be able to do the same directly from Google Drive.

Wednesday, March 20, 2013

Make a silent movie by talking to Chrome

Last month, the Web Speech API brought voice recognition to Chrome users in more than 30 languages. We thought it would be fun to demonstrate this new technology by using an old one: silent film.

The Peanut Gallery lets you add intertitles to old black-and-white movie clips just by talking out loud while you watch them. Create a film and share it with friends, so they can bring out their inner screenwriters too.



We hope that developers will find many uses for the Web Speech API, both fun and practical—including new ways to navigate, search, enter text, and interact with the web. We can’t wait to see how people use it.



(Cross-posted from the Chrome blog)

Monday, March 18, 2013

Explore Everest, Kilimanjaro and more with Google Maps

Most of us have a bucket list of the places we want to visit in our lifetime. If you’re like me, the list is pretty long—to be honest I’d be lucky to get to all of mine. Google Maps has a bucket list too, and today we’re checking off a couple of our favorites so we can make our map more comprehensive and share it with you. And if tall mountains are your thing, you’re in luck.

Now you can explore some of the most famous mountains on Earth, including Aconcagua (South America), Kilimanjaro (Africa), Mount Elbrus (Europe) and Everest Base Camp (Asia) on Google Maps. These mountains belong to the group of peaks known as the Seven Summits—the highest mountain on each of the seven continents. While there’s nothing quite like standing on the mountain, with Google Maps you can instantly transport yourself to the top of these peaks and enjoy the sights without all of the avalanches, rock slides, crevasses, and dangers from altitude and weather that mountaineers face.

Start your adventure on Tanzania’s Mount Kilimanjaro, the dormant volcano known as the Roof of Africa. See amazing views of the highest freestanding mountain in the world covered in snow just three degrees south of the equator.


At 19,341 ft, Uhuru is the highest point on Mount Kilimanjaro

Next, travel to the tallest mountain in Europe, Russia’s Mount Elbrus, and see huts made from Soviet-era fuel barrels. Climbers have to take refuge in the huts built on the mountain when the weather turns wretched.

Get imagery of Mt. Elbrus and all of the other mountains on Google Maps on your iPhone and Android device

Explore Argentina’s mighty Aconcagua, the highest peak in both the Western and Southern Hemispheres. See how a base camp is set up amongst the exposed rock in Plaza Argentina and how expeditions eat, camp and prepare for their ascent.


A permanent park ranger camp, as well as a helipad and medical center, are available during climbing seasons at Plaza Argentina

Finally, make your way to Everest Base Camp, where expeditions stage their attempts to reach the top of the world. Along the ascent, steal glimpses of the snow-capped Himalayan mountain peaks and the awesome Khumbu glacier.

The route to Everest Base Camp is one of the most popular trekking routes in the Himalayas and is visited by thousands of trekkers each year

This imagery was collected with a simple lightweight tripod and digital camera with a fisheye lens—equipment typically used for our Business Photos program. See the slideshow below and our Lat Long Blog for a behind-the-scenes look at the regular Googlers who actually climbed these mountains to capture this stunning photography.


Behind-the-scenes shots of the expedition team

Whether you’re scoping out the mountain for your next big adventure or exploring it from the comfort and warmth of your home, we hope you enjoy these views from the top of the world. See more of our favorite shots on the Street View Gallery. We’ll also be hosting a Hangout on Air today at 10:00 am PT where we’ll share stories from our expeditions and answer questions about this special collection.

Thursday, March 14, 2013

Think Insights: Marketer data, information and inspiration just got a new address

Today marks the debut of the new Think Insights, Google’s hub for marketing insights and inspiration for advertisers and agencies. On google.com/think, you can learn about the latest research in digital marketing, be inspired by creative brand campaigns, and find useful products and tools. You’ll also find industry-leading case studies and Google’s latest research, strategic perspectives, interviews with innovators and experts and more—all to help you make the most of the web.

Every week, we’ll feature content that spans industries and interests. Here’s a snapshot of our top stories:

  • In Understanding the Full Value of Mobile, learn how sporting goods industry leader adidas worked with digital performance agency iProspect to understand how mobile drives value beyond mobile commerce, particularly in-store sales. The campaign proved that mobile brought a 680% incremental increase in ROI.
  • The Hyundai Elantra: Driveway Decision Maker campaign lets you watch your favorite Hyundai model drive right to your driveway, using a combination of Google Maps Street View, projection mapping and real-time 3D animation.
  • YouTube Ads Leaderboard shows which YouTube ads most moved audiences this month, through a winning combination of savvy promotion and smart creative strategy; a new list is featured each month.

In our Perspectives section, we tap our own experts—as well as heads of industry, digital visionaries and Wharton professors—to lend their insights and analyses on the topics that matter most to marketers. The Product & Tools section contains information about our products and advertising platforms, as well as Planning Tools like the popular Real-Time Insights finder.

We built google.com/think to help you do it all—stay up-to-date on the latest in digital marketing, arm yourself with data to support your business cases and create inspiring campaigns. Explore the site now, and if you like what you discover, don't forget to subscribe to our Think Letter for a monthly round-up of our most popular content.

Sharing stories of Bletchley Park: home of the code-breakers

For decades, the World War II codebreaking centre at Bletchley Park was one of the U.K.’s most closely guarded secrets. Today, it’s a poignant place to visit and reflect on the achievements of those who worked there. Their outstanding feats of intellect, coupled with breakthrough engineering and dogged determination, were crucial to the Allied victory—and in parallel, helped kickstart the computing age.

We’ve long been keen to help preserve and promote the importance of Bletchley Park. Today we’re announcing two new initiatives that we hope will bring its story to a wider online audience.

First, we’re welcoming the Bletchley Park Trust as the latest partner to join Google’s Cultural Institute. Their digital exhibit features material from Bletchley’s archives, providing a vivid snapshot of the work that went on cracking secret messages and the role this played in shortening the war. Included are images of the Bombe machines that helped crack the Enigma code; and of Colossus, the world’s first programmable electronic computer, used to crack the German High Command code—including this message showing the Germans had been successfully duped about the location for the D-Day invasion.


Second, as a followup to our film about Colossus, we’re pleased to share a personal story of the Bombe, as told by one of its original operators, Jean Valentine. Women like Jean made up the majority of Bletchley Park’s personnel—ranging from cryptographers, to machine operators, to clerks. In her role operating the Bombe, Jean directly helped to decipher messages encoded by Enigma. In this film Jean gives us a firsthand account of life at Bletchley Park during the war, and demonstrates how the Bombe worked using a replica machine now on show at the museum.



We hope you enjoy learning more about Bletchley Park and its fundamental wartime role and legacy. For more glimpses of history, explore the Cultural Institute’s other exhibitions on www.google.com/culturalinstitute.

A second spring of cleaning

We’re living in a new kind of computing environment. Everyone has a device, sometimes multiple devices. It’s been a long time since we have had this rate of change—it probably hasn’t happened since the birth of personal computing 40 years ago. To make the most of these opportunities, we need to focus—otherwise we spread ourselves too thin and lack impact. So today we’re announcing some more closures, bringing the total to 70 features or services closed since our spring cleaning began in 2011:

  • Apps Script will be deprecating the GUI Builder and five UiApp widgets in order to focus efforts on Html Service. The rest of the Ui Service will not be affected. The GUI Builder will continue to be available until September 16, 2013. For more information see our post on the Google Apps Developer Blog.
  • CalDAV API will become available for whitelisted developers, and will be shut down for other developers on September 16, 2013. Most developers’ use cases are handled well by the Google Calendar API, which we recommend using instead (a brief illustrative sketch of a Calendar API call follows this list). If you’re a developer and the Calendar API won’t work for you, please fill out this form to tell us about your use case and request access to the whitelisted-only CalDAV API.
  • Google Building Maker helped people to make three-dimensional building models for Google Earth and Maps. It will be retired on June 1, but users are still able to access and export their models from the 3D Warehouse. We’ll continue to expand the availability of comprehensive and accurate new 3D imagery on Google Earth, and people can still use Google Map Maker to add building information such as outlines and heights to Google Maps.
  • Google Cloud Connect is a plug-in to help people work in the cloud by automatically saving Microsoft Office files from Windows PCs in Google Drive. But installing Google Drive on your desktop achieves the same thing more effectively—and Drive works not only on Windows, but also on Mac, Android and iOS devices. Existing users will no longer be able to use Cloud Connect as of April 30.
  • We launched Google Reader in 2005 in an effort to make it easy for people to discover and keep tabs on their favorite websites. While the product has a loyal following, over the years usage has declined. So, on July 1, 2013, we will retire Google Reader. Users and developers interested in RSS alternatives can export their data, including their subscriptions, with Google Takeout over the course of the next four months.
  • Beginning next week, we're ending support for the Google Voice app for BlackBerry. For BlackBerry users who want to continue using Google Voice, we recommend they use our HTML5 app, which is more secure and easier for us to keep up to date. Our HTML5 site is compatible with BlackBerry version 6 and newer.
  • We’re deprecating our Search API for Shopping, which has enabled developers to create shopping apps based on Google’s Product Search data. While we believe in the value this offering provided, we’re shifting our focus to concentrate on creating a better shopping experience for users through Google Shopping. We’ll shut the API down completely on September 16, 2013.
  • Beginning today we’ll no longer sell or provide updates for Snapseed Desktop for Macintosh and Windows. Existing customers will continue to be able to download the software and can contact us for support. We’ll continue to offer the Snapseed mobile app on iOS and Android for free.
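
For developers migrating off CalDAV, here is a minimal, illustrative sketch of reading events with the Google Calendar API (v3) from Python. It assumes the google-api-python-client library and an already-authorized httplib2.Http object; obtaining OAuth credentials is not shown, and this is not an official migration recipe.

# Sketch only: list upcoming event titles via the Calendar API (v3).
from apiclient.discovery import build  # google-api-python-client (later renamed googleapiclient)

def list_event_titles(authorized_http, calendar_id='primary'):
    # `authorized_http` must be an OAuth 2.0-authorized httplib2.Http instance.
    service = build('calendar', 'v3', http=authorized_http)
    response = service.events().list(calendarId=calendar_id, maxResults=10).execute()
    return [event.get('summary') for event in response.get('items', [])]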

These changes are never easy. But by focusing our efforts, we can concentrate on building great products that really help people in their lives.

Update March 15, 2013: We worked with the developers who provide 98 percent of our current CalDAV traffic to assure access to the CalDAV API, which means many popular products will not be impacted. We remain committed to supporting open protocols like CalDAV.

Scaling Computer Science Education



Last week, I attended the annual SIGCSE (Special Interest Group on Computer Science Education) conference in Denver, CO. Google has been a platinum sponsor of SIGCSE for many years now, and the conference provides an opportunity for hundreds of computer science (CS) educators to share ideas and work on strategies to bring high-quality CS education to K-12 and undergraduate students.

Significant accomplishments over the last few years have laid a strong foundation for scaling CS curriculum, professional development (PD) and related programs in this country. The NSF has been funding curriculum and PD around the new CS Principles Advanced Placement course. The CSTA has published standards for K-12 CS and a report on the limited extent to which schools, districts and states provide CS instruction to their students. The CS advocacy group Computing in the Core even provides a toolkit communities can follow as they urge legislators to integrate computer science education into the core K-12 curriculum.

All of this work has made an impact, but there is still more to do.

I see our priorities in CS education as awareness and access. As CS educators, we must continue to raise awareness about the tremendous demand for jobs in the computing sector, and counter misconceptions with accurate data. Many students, parents, teachers and administrators remember the hype and disillusionment of the dot-com period and the myths about outsourcing and dwindling jobs, yet the US Bureau of Labor Statistics (BLS) projects that two-thirds of all job growth in science and engineering over the next decade will be in computer science employment. (See the 2010 BLS report here.) Clearing up this misconception is essential if we hope to satisfy US labor needs with recent graduates over the next several years.
Source: Gianchandani, Erwin. Revisiting ‘Where the Jobs Are’. The Computing Community Consortium Blog post on 23 May 2012. Link accessed on 8 March 2013.

Another misconception surrounds the range of CS-focused occupations that exist. The world of CS is expanding rapidly and we should celebrate the diversity of CS applications that are gaining momentum. Instead of the archetype of the sun-starved computer scientist, or of software engineers working in isolation with few opportunities for teamwork or communication, educators can encourage project-based learning, video game development, robotics, and graphic design as more concrete representations of abstract computational thinking.

Google believes that computing and CS are critical to our future, not only in the high tech sector, but for everyone. Our economy is becoming more and more dependent on technology-based solutions, which will require a future workforce with significant levels of CS knowledge and experience. In addition, we anticipate new career opportunities opening up in the next 3-5 years as more businesses move into the cloud and shift the way they run their IT departments.

Help us get the word out about the great opportunities in computing through organizations such as code.org, ACM, and NCWIT. Google is doing its part to support CS education and outreach through many programs including CS4HS, our Exploring Computational Thinking curriculum, and several student and teacher programs. So much opportunity, so little time!

Wednesday, March 13, 2013

Update from the CEO

Sergey and I first heard about Android back in 2004, when Andy Rubin came to visit us at Google. He believed that aligning standards around an open-source operating system would drive innovation across the mobile industry. Most people thought he was nuts. But his insight immediately struck a chord because at the time it was extremely painful developing services for mobile devices. We had a closet full of more than 100 phones and were building our software pretty much device by device. It was nearly impossible for us to make truly great mobile experiences.

Fast forward to today. The pace of innovation has never been greater, and Android is the most used mobile operating system in the world: we have a global partnership of over 60 manufacturers; more than 750 million devices have been activated globally; and 25 billion apps have now been downloaded from Google Play. Pretty extraordinary progress for a decade’s work. Having exceeded even the crazy ambitious goals we dreamed of for Android—and with a really strong leadership team in place—Andy’s decided it’s time to hand over the reins and start a new chapter at Google. Andy, more moonshots please!

Going forward, Sundar Pichai will lead Android, in addition to his existing work with Chrome and Apps. Sundar has a talent for creating products that are technically excellent yet easy to use—and he loves a big bet. Take Chrome, for example. In 2008, people asked whether the world really needed another browser. Today Chrome has hundreds of millions of happy users and is growing fast thanks to its speed, simplicity and security. So while Andy’s a really hard act to follow, I know Sundar will do a tremendous job doubling down on Android as we work to push the ecosystem forward.

Today we’re living in a new computing environment. People are really excited about technology and spending a lot of money on devices. This is driving faster adoption than we have ever seen before. The Nexus program—developed in conjunction with our partners Asus, HTC, LG and Samsung—has become a beacon of innovation for the industry, and services such as Google Now have the potential to really improve your life. We’re getting closer to a world where technology takes care of the hard work—discovery, organization, communication—so that you can get on with what makes you happiest… living and loving. It’s an exciting time to be at Google.

Tenth annual Global Code Jam registration opens today

Algorithmic competitions are to programmers what tournaments are to tennis players: an opportunity to feel the rush of competition, learn new techniques and face off against their best counterparts from around the globe. Code Jam, Google's worldwide online programming competition, gives developers a chance to use their favorite programming languages to solve algorithmic problems created by a team of contest champions at Google.

Our 10th annual global Code Jam kicks off next month, starting with a qualification round on April 12. After three more online rounds, the top 25 contestants will be invited to Google’s London office on August 16 for a final matchup and a chance to win the coveted title of Code Jam Champion.

With more than 20,000 participants last year, Code Jam has grown by leaps and bounds since it began in 2003*. To celebrate the competition's 10th anniversary, we’ve raised the stakes: the winner will claim $15,000, and will automatically qualify for the 2014 Code Jam finals to defend his or her title.

If you’re up to the challenge of solving tough problems and coding elegant solutions (and perhaps debugging less elegant solutions), then register now. Want to warm up for the Qualification Round with a problem or two? How about finding the margin of safety for contestants on a television show, optimizing a tower defense game or swinging through the jungle on vines? You have a whole month to prepare yourself for the first hurdle on Friday, April 12.



*To the mathematically inclined (all of our competitors), 2003-2013 sounds like enough time for 11 Code Jams. Nevertheless, this one will actually be our tenth global contest: we went through a major format change between 2006 and 2008, and there wasn't a global contest in 2007.

Tuesday, March 12, 2013

Our Commitment to Social Computing Research: Social Interactions Focused Awards Announcement



Social interactions have always been an important part of the human experience. Research on social interaction has produced results ranging from the influence of social networks on our behavior [Aral2012], to the effect of social belonging on health [Walton2011], to how conflict and coordination play out in Wikipedia [Kittur2007]. Social scientists have studied social interactions for many years, but only very recently has the explosion of services and data on web-based social systems allowed researchers to study these mechanisms at scale.

From information dissemination and the spread of innovation and ideas, to scientific discovery, we are seeing how a deep understanding of social interactions is affecting many different fields, such as health and education. For instance, scientists now have strong evidence that social interactions underlie many fundamental learning mechanisms starting from infancy well into adulthood [Meltzoff2009], and that peer discussions are critical in conceptual learning in college classes [Smith2009]. How might these learning science findings be built into social systems and products so that users maximize what they learn on the Web?

We know that interactions on the Web are diverse and people-centered. Google now enables social interactions to occur across many of our products, from Google+ to Search to YouTube. To understand the future of this socially connected web, we need to investigate fundamental patterns, design principles, and laws that shape and govern these social interactions.

We envision research at the intersection of disciplines including Computer Science, Human-Computer Interaction (HCI), Social Science, Social Psychology, Machine Learning, Big Data Analytics, Statistics and Economics. These fields are central to the study of how social interactions work, particularly as they are driven by new sources of data: open data sets from Web 2.0 and social media sites, government databases, crowdsourcing, new survey techniques, and crisis management data collections. New techniques from network science and computational modeling, social network and sentiment analysis, and statistical and machine learning methods, as well as ideas from evolutionary theory, physics, and information theory, are actively being used in social interaction research.

We’re pleased to announce that Google has awarded over $1.2 million to support the Social Interactions Research Awards, which are given to university research groups doing work in social computing and interactions. Research topics include crowdsourcing, social annotations, social media behavior, social learning, conversation curation, and scientific studies of how to start online communities.

We have made awards to 15 researchers at 7 universities. We selected these proposals after a rigorous internal review. We believe the results will be broadly useful to product development and will further scientific research.

  • Joseph Konstan, Loren Terveen, and John Riedl from University of Minnesota. Precision Crowdsourcing: Closing the Loop to turn Information Consumers into Information Contributors.
  • Mor Naaman from Rutgers University, and Oded Nov from Polytechnic Institute of New York University. Examining the Impact of Social Traces on Page Visitors’ Opinions and Engagement.
  • Paul Resnick, Eytan Adar, and Cliff Lampe from University of Michigan. MTogether: A Living Lab for Social Media Research.
  • Marti Hearst from UC Berkeley. Understanding Social Learning Among Subgroups Within Large Online Learning Environments.
  • David Karger and Rob Miller from MIT. Crowdsourced Curation of Conversations.
  • Robert Kraut, Laura Dabbish, Jason Hong, Aniket Kittur from CMU. Successfully Starting Online Groups.

We look forward to working with these researchers, and we hope that we will jointly push the frontier of social interactions research to the next level.

References
[Aral2012] Aral, S., & Walker, D. (2012). Identifying Influential and Susceptible Members of Social Networks. Science, 337(6092), 337–341. doi:10.1126/science.1215842
[Walton2011] Walton, G. M., & Cohen, G. L. (2011). A Brief Social-Belonging Intervention Improves Academic and Health Outcomes of Minority Students. Science, 331(6023), 1447–1451. doi:10.1126/science.1198364
[Kittur2007] Kittur, A., Suh, B., Pendleton, B., & Chi, E. H. (2007). He Says, She Says: Conflict and Coordination in Wikipedia. In Proc. of the ACM Conference on Human Factors in Computing Systems (CHI 2007), pp. 453–462. ACM Press, San Jose, CA.
[Meltzoff2009] Meltzoff, A. N., Kuhl, P. K., Movellan, J., & Sejnowski, T. J. (2009). Foundations for a New Science of Learning. Science, 325(5938), 284–288. doi:10.1126/science.1175626
[Smith2009] Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why Peer Discussion Improves Student Performance on In-Class Concept Questions. Science, 323(5910), 122–124. doi:10.1126/science.1165919

Saturday, March 9, 2013

Learning from Big Data: 40 Million Entities in Context



When someone mentions Mercury, are they talking about the planet, the god, the car, the element, Freddie, or one of some 89 other possibilities? This problem is called disambiguation (a word that is itself ambiguous), and while it’s necessary for communication, and humans are amazingly good at it (when was the last time you confused a fruit with a giant tech company?), computers need help.

To provide that help, we are releasing the Wikilinks Corpus: 40 million total disambiguated mentions within over 10 million web pages -- over 100 times bigger than the next largest corpus (about 100,000 documents, see the table below for mention and entity counts). The mentions are found by looking for links to Wikipedia pages where the anchor text of the link closely matches the title of the target Wikipedia page. If we think of each page on Wikipedia as an entity (an idea we’ve discussed before), then the anchor text can be thought of as a mention of the corresponding entity.
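
As a rough illustration of that heuristic (not Google’s actual extraction pipeline), a link can be treated as a labeled mention when its anchor text is close enough to the target page’s title. The normalization and similarity threshold below are assumptions chosen for the example.

import difflib
from urllib.parse import unquote

def normalize(text):
    # Lowercase, turn underscores into spaces, and collapse whitespace.
    return ' '.join(text.lower().replace('_', ' ').split())

def is_labeled_mention(anchor_text, wikipedia_url, threshold=0.8):
    # The page title is the last path component of the Wikipedia URL,
    # e.g. http://en.wikipedia.org/wiki/Buick_Riviera -> "Buick Riviera".
    title = unquote(wikipedia_url.rsplit('/', 1)[-1])
    similarity = difflib.SequenceMatcher(
        None, normalize(anchor_text), normalize(title)).ratio()
    return similarity >= threshold

# is_labeled_mention('Buick Riviera', 'http://en.wikipedia.org/wiki/Buick_Riviera') -> True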

Dataset | Number of Mentions | Number of Entities
Bentivogli et al. (data) (2008) | 43,704 | 709
Day et al. (2008) | less than 55,000 | 3,660
Artiles et al. (data) (2010) | 57,357 | 300
Wikilinks Corpus | 40,323,863 | 2,933,659

What might you do with this data? Well, we’ve already written one ACL paper on cross-document co-reference (and received lots of requests for the underlying data, which partly motivates this release). And really, we look forward to seeing what you are going to do with it! But here are a few ideas:
  • Look into coreference (when different mentions refer to the same entity) or entity resolution (matching a mention to the underlying entity)
  • Work on the bigger problem of cross-document coreference, which is how to find out if different web pages are talking about the same person or other entity
  • Learn things about entities by aggregating information across all the documents they’re mentioned in
  • Type tagging tries to assign types (they could be broad, like person, location, or specific, like amusement park ride) to entities. To the extent that the Wikipedia pages contain the type information you’re interested in, it would be easy to construct a training set that annotates the Wikilinks entities with types from Wikipedia.
  • Work on any of the above, or more, on subsets of the data. With existing datasets, it wasn’t possible to work on just musicians or chefs or train stations, because the sample sizes would be too small. But with 10 million Web pages, you can find a decent sampling of almost anything.

Gory Details

How do you actually get the data? It’s right here: Google’s Wikilinks Corpus. Tools and data with extra context can be found on our partners’ page: UMass Wiki-links. Understanding the corpus, however, is a little bit involved.

For copyright reasons, we cannot distribute actual annotated web pages. Instead, we’re providing an index of URLs, and the tools to create the dataset, or whichever slice of it you care about, yourself. Specifically, we’re providing:
  • The URLs of all the pages that contain labeled mentions, which are links to English Wikipedia
  • The anchor text of the link (the mention string), the Wikipedia link target, and the byte offset of the link for every page in the set
  • The byte offset of the 10 least frequent words on the page, to act as a signature to ensure that the underlying text hasn’t changed -- think of this as a version, or fingerprint, of the page
  • Software tools (on the UMass site) to: download the web pages; extract the mentions, with ways to recover if the byte offsets don’t match; select the text around the mentions as local context; and compute evaluation metrics over predicted entities.
The format looks like this:

URL http://1967mercurycougar.blogspot.com/2009_10_01_archive.html
MENTION Lincoln Continental Mark IV 40110 http://en.wikipedia.org/wiki/Lincoln_Continental_Mark_IV
MENTION 1975 MGB roadster 41481 http://en.wikipedia.org/wiki/MG_MGB
MENTION Buick Riviera 43316 http://en.wikipedia.org/wiki/Buick_Riviera
MENTION Oldsmobile Toronado 43397 http://en.wikipedia.org/wiki/Oldsmobile_Toronado
TOKEN seen 58190
TOKEN crush 63118
TOKEN owners 69290
TOKEN desk 59772
TOKEN relocate 70683
TOKEN promote 35016
TOKEN between 70846
TOKEN re 52821
TOKEN getting 68968
TOKEN felt 41508
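
As a sketch of how the released index might be consumed, the following groups the MENTION and TOKEN records shown above under their page URL. It assumes whitespace-separated fields with the mention string allowed to contain spaces, and it is not one of the official UMass tools.

def parse_wikilinks_index(lines):
    # Group MENTION and TOKEN records under the preceding URL record.
    pages, page = [], None
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'URL':
            page = {'url': parts[1], 'mentions': [], 'tokens': []}
            pages.append(page)
        elif parts[0] == 'MENTION' and page is not None:
            # The mention string may contain spaces; the last two fields are
            # the byte offset and the Wikipedia target URL.
            *words, offset, target = parts[1:]
            page['mentions'].append((' '.join(words), int(offset), target))
        elif parts[0] == 'TOKEN' and page is not None:
            # Least-frequent-word fingerprint entries: (word, byte offset).
            page['tokens'].append((parts[1], int(parts[2])))
    return pages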


We’d love to hear what you’re working on, and look forward to what you can do with 40 million mentions across over 10 million web pages!

Thanks to our collaborators at UMass Amherst: Sameer Singh and Andrew McCallum.

Friday, March 8, 2013

Voices of women in technology

A diverse workforce is critical in helping us build products that can help people change the world. That includes diversity of all life experiences, including gender.

Women were some of the first programmers and continue to make a major impact on the programming world today. We think it’s important to highlight the great work women are doing in computer science, to help provide role models for young women thinking about careers in computing.

Tomorrow is International Women’s Day, and as one of our contributions to the celebration, we’re proud to support Voices Global Conference, presented by Global Tech Women. As part of this 24-hour live streamed event, Google will provide more than a dozen hours of free talks featuring women working in computer science, beginning today. To access the full schedule and our ongoing broadcasts, see our section on the Voices website, which will be updated throughout the day.

The Voices Global Conference is the brainchild of Global Tech Women’s founder Deanna Kosaraju, who also started India’s Grace Hopper Celebration of Women in Computing in 2010 with grant support from Google. The India conferences, which provide a forum for women to share their professional and research work in computing, have grown rapidly, with more than 800 attendees in 2012. So when Deanna proposed this global, 24-hour streamed conference, we knew it was a great opportunity to help women and other audiences around the world learn more and get inspired about the contributions women are making to technology and computer science.

Our sessions will feature a range of material, from new episodes of the Women Techmakers series and interviews with women leaders like Mani Abrol, head of Lexity India, to discussions focusing on technologies like Google Compute Engine. For a sneak peek of the type of content we’ll be providing, check out Pavni’s story below, produced in conjunction with PBS’ MAKERS series. I’ve provided advice to many young people in India interested in studying computer science and pursuing their own dreams—so Pavni’s tenacity, coupled with the encouragement and support she received from her father, resonated with me. We’re excited to share her story and others like it alongside technical conversations and discussions on women in technology as part of this conference.



I hope you’ll join us for our sessions—and in the meantime, you can learn more about our efforts to support women at Google and beyond.

Thursday, March 7, 2013

Art, Copy & Code: a series of experiments to re-imagine advertising

Last year, we started a program to partner with advertisers and agencies to re-imagine how brands tell stories in a connected world. Project Re: Brief set out to recreate some of the advertising industry’s most iconic, classic campaigns using the latest technology tools. This year we’re expanding that program to work with some of today’s most iconic brands and innovative marketers, in our new project: Art, Copy & Code.

Art, Copy & Code is a series of projects and experiments to show how creativity and technology can work hand in hand. Some of these will include familiar brands like Volkswagen, Burberry and adidas—projects developed in partnership with their creative teams and agencies. Others will be creative experiments with innovative filmmakers, creative directors and technologists to explore how brands can connect with consumers through a whole range of digital tools—including ads, mobile apps and social experiences. Our first partner project is a new social driving experience—Volkswagen Smileage.

Building off their 2012 campaign, “It’s not the miles, it’s how you live them,” Volkswagen Smileage is a mobile app and web service that aims to add a little bit of fun to every drive, from your daily commutes to holiday road trips. The app measures the fun factor of each trip using a metric called “smileage,” based on signals like weather, traffic, location, time and social interactions (e.g., a long drive on a sunny Saturday afternoon might accumulate more smileage than a morning commute in the snow). You can use it with any car, not just Volkswagens.

Powered by the new Google+ sign-in, you can choose to share your Smileage experience with friends and family. For example, during a road trip, photos and videos taken by you and your co-passengers can be automatically added to a live interactive map. The inspiration for the service came from a recent study showing that every day, 144 million Americans on average spend 52 minutes in a car—76 percent of them alone. We wanted to make that time a more shareable experience. Volkswagen Smileage will be available soon in beta—you can sign up on this webpage for early access.


We’ll have many more experiments to share in the Art, Copy & Code project soon—subscribe for updates at ArtCopyCode.com. We’re committed to investing in technology and tools over the long term to help brands and their agencies succeed not just today, but in a digital future that will look very different.

If you’re planning on attending SXSW, stop by the Google Playground on March 9 to see demos of these experiments, or attend our talk on March 10.

Public Alerts for Google Search, Google Now and Google Maps available in Japan

With nearly 5,000 earthquakes a year, it’s important for people in Japan to have crisis preparedness and response information available at their fingertips. And from our own research, we know that when a disaster strikes, people turn to the Internet for more information about what is happening.

With this in mind, we’re launching Google Public Alerts today in Japan—the first international expansion of a service we debuted last year in the United States. Google Public Alerts is a platform designed to provide accurate and relevant emergency alerts when and where you’re searching for them online.

Relevant earthquake and tsunami warnings for Japan will now appear on Google Search, Google Maps and Google Now when you search online during a time of crisis. If a major earthquake alert is issued in Kanagawa Prefecture, for example, the alert information will appear on your desktop and mobile screens when you search for relevant information on Google Search and Google Maps.

Example of a Google Search result on a tablet showing a tsunami warning

Example of a tsunami warning on Google Maps

If you click “詳細” (“More info”) right under the alert, you’ll see more details about the announcement, including the full description from the Japan Meteorological Agency, a link to their site, and other useful information like observed arrival times and wave heights for tsunamis.

Example of how a tsunami alert would work in Fukushima

And when you open Google Now on your Android device, recommended actions and information will be tailored to where you are. For example, if you happen to be in Tokyo at a time when a tsunami alert is issued, Google Now will show you a card containing information about the tsunami alert, as well as any available evacuation instructions:

Example of a tsunami warning card on Google Now

We’re able to provide Public Alerts in Japan thanks to the Japan Meteorological Agency, whose publication of data enables Google and others to make critical and life-saving information more widely available.

We hope our technology, including Public Alerts, will help people better prepare for future crises and create more far-reaching support for crisis recovery. This is why in Japan, Google has newly partnered with 14 Japanese prefectures and cities, including seven from the Tōhoku region, to make their government data available online and more easily accessible to users, both during a time of crisis and after. The devastating Tōhoku Earthquake struck Japan only two years ago, and the region is still slowly recovering from the tragedy.

We look forward to expanding Google Public Alerts to more countries and working with more warning providers soon. We also encourage potential partners to read our FAQ and to consider putting data in an open format, such as the Common Alerting Protocol. To learn more about Public Alerts, visit our Public Alerts homepage.
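
For potential partners wondering what the Common Alerting Protocol looks like in practice, here is a small, illustrative sketch of pulling a few core fields out of a CAP 1.2 XML alert with Python’s standard library. The element names are standard CAP 1.2 fields, but the code itself is only an example and is not part of Public Alerts.

import xml.etree.ElementTree as ET

CAP_NS = '{urn:oasis:names:tc:emergency:cap:1.2}'

def summarize_cap_alert(xml_text):
    # Return the event type, severity and affected areas from a CAP 1.2 alert.
    alert = ET.fromstring(xml_text)
    info = alert.find(CAP_NS + 'info')
    if info is None:
        return None
    event = info.findtext(CAP_NS + 'event')
    severity = info.findtext(CAP_NS + 'severity')
    areas = [area.findtext(CAP_NS + 'areaDesc')
             for area in info.findall(CAP_NS + 'area')]
    return event, severity, areas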

Wednesday, March 6, 2013

Celebrating Google Play’s first birthday

Accessing digital entertainment should be simple, whether you like to read books on your tablet, listen to music on your phone and computer, or watch movies on all three. That’s why one year ago today we launched Google Play, where you can find and enjoy your favorite music, movies, books and apps on your Android phone and tablet, or on the web.

Google Play has grown rapidly in the last year, bringing you more content in more languages and places around the globe. In addition to offering more than 700,000 apps and games, we’ve partnered with all of the major music companies, movie studios and publishers to bring you the music, movies, TV shows, books and magazines you love. And we’ve added more ways for you to buy them, including paying through your phone bill and gift cards, which we're beginning to roll out in the U.K. this week.

Since no birthday is complete without presents, we’re celebrating with a bunch of special offers across the store on songs, TV shows, movies and books. We’re even offering a collection of games with some fun birthday surprises created by developers.

It’s been a busy year, but we’re just getting started. We look forward to many more years of bringing you the best in entertainment!


Transparency Report: Shedding more light on National Security Letters

Our users trust Google with a lot of very important data, whether it’s emails, photos, documents, posts or videos. We work exceptionally hard to keep that information safe—hiring some of the best security experts in the world, investing millions of dollars in technology and baking security protections such as 2-step verification into our products.

Of course, people don’t always use our services for good, and it’s important that law enforcement be able to investigate illegal activity. This may involve requests for personal information. When we receive these requests, we:

  • scrutinize them carefully to ensure they satisfy the law and our policies;
  • seek to narrow requests that are overly broad;
  • notify users when appropriate so they can contact the entity requesting the information or consult a lawyer; and
  • require that government agencies use a search warrant if they’re seeking search query information or private content, like Gmail and documents, stored in a Google Account.

When conducting national security investigations, the U.S. Federal Bureau of Investigation can issue a National Security Letter (NSL) to obtain identifying information about a subscriber from telephone and Internet companies. The FBI has the authority to prohibit companies from talking about these requests. But we’ve been trying to find a way to provide more information about the NSLs we get—particularly as people have voiced concerns about the increase in their use since 9/11.

Starting today, we’re now including data about NSLs in our Transparency Report. We’re thankful to U.S. government officials for working with us to provide greater insight into the use of NSLs. Visit our page on user data requests in the U.S. and you’ll see, in broad strokes, how many NSLs for user data Google receives, as well as the number of accounts in question. In addition, you can now find answers to some common questions we get asked about NSLs on our Transparency Report FAQ.


You'll notice that we're reporting numerical ranges rather than exact numbers. This is to address concerns raised by the FBI, Justice Department and other agencies that releasing exact numbers might reveal information about investigations. We plan to update these figures annually.



(Cross-posted on the Public Policy Blog)

Monday, March 4, 2013

Introducing Art Talks on Google+

An art gallery or museum’s collection is often best brought to life by an excellent guide. Starting this week, we’re hoping to bring this experience online with “Art Talks,” a series of Hangouts on Air on our Google Art Project Google+ page. Each month, curators, museum directors, historians and educators from some of the world’s most renowned cultural institutions will reveal the hidden stories behind particular works, examine the curation process and provide insights into particular masterpieces or artists.

The first guided visit will be held this Wednesday, March 6 at 8pm ET from The Museum of Modern Art. Deborah Howes, Director of Digital Learning, along with a panel of artists and students, will discuss how to teach art online. To post a question, visit the event page. If this talk falls too late for you to tune in live, you can watch afterward on our Google Art Project YouTube channel.

The next talk is from London. On March 20, Caroline Campbell and Arnika Schmidt from the National Gallery will discuss depictions of the female nude. Details are available on the Art Project’s event page. In April we’ll host a panel examining one of the Google Art Project’s popular gigapixel works, Bruegel’s “Tower of Babel,” featuring Peter Parshall, curator at the National Gallery of Art in Washington.



Additional talks are planned by curators from high-profile institutions such as The Metropolitan Museum of Art, the Museum of Contemporary Art in Los Angeles, the Museo Nacional de Arte in Mexico and the Museum of Islamic Art in Qatar.

Google Art Project aims to make art more accessible to all. We hope that Art Talks is the next step in bringing art to your armchair, wherever you are in the world, with just a click of a button. Stay tuned to the Art Project and Cultural Institute Google+ pages for more information on dates and times of these online lectures.

Friday, March 1, 2013

Making the cloud more accessible with Chrome and Android

If you’re a blind or low-vision user, you know that working in the cloud poses unique challenges. Our accessibility team had an opportunity to address some of those challenges at the 28th annual CSUN International Technology and Persons with Disabilities Conference this week. While there, we led a workshop on how we’ve been improving the accessibility of Google technologies. For all those who weren’t at the conference, we want to share just a few of those improvements and updates:

Chrome and Google Apps
  • Chrome OS now supports a high-quality text-to-speech voice (starting with U.S. English). We’ve also made spoken feedback, along with screen magnification and high-contrast mode, available out of the box to make Chromebook and Chromebox setup easier for users with accessibility needs.
  • Gmail now has a consistent navigation interface, backed by HTML5 ARIA, which enables blind and low-vision users to effectively navigate using a set of keyboard commands.
  • It’s now much easier to access content in your Google Drive using a keyboard—for example, you can navigate a list of files with just the arrow keys. In Docs, you can access features using the keyboard, with a new way to search menu and toolbar options. New keyboard shortcuts and verbalization improvements also make it easier to use Docs, Sheets and Slides with a screenreader.
  • The latest stable version of Chrome, released last week, includes support for the Web Speech API, which developers can use to integrate speech recognition capabilities into their apps. At CSUN, our friends from Bookshare demonstrated how they use this new functionality to deliver ReadNow—a fully integrated ebook reader for users with print disabilities.
  • Finally, we released a new Help Center Guide specifically for blind and low-vision users to ease the transition to using Google Apps.

Android
  • We added Braille support to Android 4.1; since then, Braille support has been expanded in Google Drive for Android, making it easier to read and edit your documents. You can also use TalkBack with Docs and Sheets to edit on the go.
  • With Gesture Mode in Android 4.1, you can reliably navigate the UI using touch and swipe gestures in combination with speech output.
  • Screen magnification is now built into Android 4.2—just enable “Magnification gestures,” then triple tap to enter full screen magnification.
  • The latest release of TalkBack (available on Play soon) includes several highly requested features, like structured browsing of web content and the ability to suspend and resume TalkBack via an easy-to-use radial menu.

These updates to Chrome, Google Apps, and Android will help create a better overall experience for our blind and low-vision users, but there’s still room for improvement. Looking ahead, we’re focused on the use of accessibility APIs that will make it easier for third-party developers to create accessible web applications, as well as pushing the state of the art forward with technologies like speech recognition and text-to-speech. We’re looking forward to working with the rest of the industry to make computers and the web more accessible for everyone.