Tuesday, August 23, 2011

Google at the Joint Statistical Meetings in Miami



The Joint Statistical Meetings (JSM) were held in Miami, Florida, this year. Nearly 5,000 participants from academia and industry came to present and discuss the latest in statistical research, methodology, and applications. Similar to previous years, several Googlers shared expertise in large-scale experimental design and implementation, statistical inference with massive datasets and forecasting, data mining, parallel computing, and much more.

Our session "Statistics: The Secret Weapon of Successful Web Giants" attracted over one hundred people; surprising for an 8:30 AM session! Revolution Analytics reviewed this in their official blog post "How Google uses R to make online advertising more effective"

Googlers gave a number of talks at JSM 2011; please check the upcoming Proceedings of JSM 2011 for the full papers.

Google has participated in JSM every year since 2004. We have been increasing our involvement significantly by providing sponsorship, organizing and giving talks at sessions and roundtables, teaching courses and workshops, hosting a booth with demos of new Google products, submitting posters, and more. This year Googlers participated in sessions sponsored by ASA sections for Statistical Learning and Data Mining, Statistics and Marketing, Statistical Computing, Bayesian Statistical Science, Health Policy Statistics, Statistical Graphics, Quality and Productivity, Physical and Engineering Sciences, and Statistical Education.

We also hosted the Google faculty reception, which was well attended by faculty and their promising students. Google hires a growing number of statisticians, and we were happy to participate in JSM again this year. People had a chance to talk to Googlers, ask about working here, encounter elements of Google culture (good food! T-shirts! 3D puzzles!), meet old friends and make new ones, and just have fun!

Thanks to everyone who presented, attended, or otherwise engaged with the statistical community at JSM this year. We’re looking forward to seeing you in San Diego next year.

Tuesday, August 16, 2011

A new MIT center for mobile learning, with support from Google



MIT and Google have a long-standing relationship based on mutual interests in education and technology. Today, we took another step forward in our shared goals with the establishment of the MIT Center for Mobile Learning, which will strive to transform learning and education through innovation in mobile computing. The new center will be actively engaged in studying and extending App Inventor for Android, which Google recently announced it will be open sourcing.

The new center, housed at MIT’s Media Lab, will focus on designing and studying new mobile technologies that enable people to learn anywhere, anytime, with anyone. The center was made possible in part by support from Google University Relations and will be run by me and two distinguished MIT colleagues: Professors Eric Klopfer (science education) and Mitchel Resnick (media arts and sciences).

App Inventor for Android—a programming system that makes it easy for learners to create mobile apps for Android smartphones—currently supports a community of about 100,000 educators, students and hobbyists. Through the new initiatives at the MIT Center for Mobile Learning, App Inventor will be connected to MIT’s premier research in educational technology and MIT’s long track record of creating and supporting open software.

Google first launched App Inventor internally in order to move it forward with speed and focus, and then developed it to a point where it started to gain critical mass. Now, its impact can be amplified by collaboration with a top academic institution. At MIT, App Inventor will adopt an enriched research agenda with increased opportunities to influence the educational community. In a way, App Inventor has now come full circle, as I actually initiated App Inventor at Google by proposing it as a project during my sabbatical with the company in 2008. The core code for App Inventor came from Eric Klopfer’s lab, and the inspiration came from Mitch Resnick’s Scratch project. The new center is a perfect example of how industry and academia can collaborate effectively to create change enabled by technology, and we look forward to seeing what we can do next, together.

Saturday, August 13, 2011

Our Faculty Institute brings faculty back to the drawing board

Posted by Nina Kim Schultz, Google Education Research

Cross-posted with the Official Google Blog

School may still be out for summer, but teachers remain hard at work. This week, we hosted Google’s inaugural Faculty Institute at our Mountain View, Calif., headquarters. The three-day event was created for esteemed faculty from schools of education, math and science to explore teaching paradigms that leverage technology in K-12 classrooms. Selected via a rigorous nomination and application process, the 39 faculty members hail from 19 California State Universities (CSUs), as well as Stanford and UC Berkeley, and teach high school STEM (Science, Technology, Engineering and Math) teachers who are currently earning their teaching credentials. CSU programs credential 60 percent of California’s teachers—or 10 percent of all U.S. K-12 teachers—and one CSU campus alone can credential around 1,000 new teachers in a year. The purpose of gathering together at the Institute was to ensure our teachers’ teachers have the support they need to help educators adjust to a changing landscape.

There is so much technology available to educators today, but unless they learn how to use it effectively, it does little to change what is happening in our classrooms. Without the right training and inspiration, interactive displays become merely expensive projection screens, and laptops simply replace paper rather than shifting the way teachers teach and students learn. Although the possibilities for technology use in schools are endless, teacher preparation for the 21st century classroom also has many constraints. For example: beyond the expense involved, there’s the time it costs educators to match a technological innovation to the improvement of pedagogy and curriculum; there’s a distinct shift in thinking that needs to take place to change classrooms; and there’s an essential challenge to help teachers develop the dispositions and confidence to be lifelong evaluators, learners and teachers of technology, instead of continuing to rely on traditional skill sets that will soon be outdated.

The Institute featured keynote addresses from respected professors from Stanford and Berkeley, case studies from distinguished high school teachers from across California, hands-on technology workshops with a variety of Google and non-Google tools, and panels with professionals in the tech-education industry. Notable guests included representatives from Teach for America, The New Teacher Project, the Department of Education and Edutopia. Topics included how to differentiate learning paths, how to use technology to transform classrooms into project-based, collaborative spaces, and how to adopt a more interactive teaching style in place of the traditional lecture model.

On the last day of the Institute, faculty members were invited to submit grant proposals to scale best practices outside of the meeting. Deans of the participating universities will convene at the end of the month to further brainstorm ways to scale new ideas in teacher preparation programs. Congratulations to all of the faculty members who were accepted into the inaugural Institute, and thank you for all that you do to help bring technology and new ways of thinking into the classroom.



This program is a part of Google’s continued commitment to supporting STEM education. Details on our other programs can be found at www.google.com/education.

Thursday, August 11, 2011

Culturomics, Ngrams and new power tools for Science



Four years ago, we set out to create a research engine that would help people explore our cultural history by statistically analyzing the world’s books. In January 2011, the resulting method, culturomics, was featured on the cover of the journal Science. More importantly, Google implemented and launched a web-based version of our prototype research engine, the Google Books Ngram Viewer.

Now scientists, scholars, and web surfers around the world can take advantage of the Ngram Viewer to study a vast array of phenomena. And that's exactly what they've done. Here are a few of our favorite examples.

Poverty
Martin Ravallion, head of the Development Research Group at the World Bank, has been using the ngrams to study the history of poverty. In a paper published in the journal Poverty and Public Policy, he argues for the existence of two ‘poverty enlightenments’ marked by increased awareness of the problem: one towards the end of the 18th century, and another in the 1970s and 80s. But he makes the point that only the second of these enlightenments brought with it a truly enlightened idea: that poverty can be and should be completely eradicated.



The Science Hall of Fame
Adrian Veres and John Bohannon wondered who the most famous scientists of the past two centuries were. But there was no hall of fame for scientists, or a committee that determines who deserves to get into such a hall. So they used the ngrams data to define a metric for celebrity – the milliDarwin – and algorithmically created a Science Hall of Fame listing the most famous scientists born since 1800. They found that things like a popular book or a major controversy did more to increase discussion of a scientist than, for instance, winning a Nobel Prize.
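For readers who want to experiment with this idea, here is a minimal sketch of a Darwin-relative fame score computed from yearly ngram frequencies. It is only an illustration: the precise definition of the milliDarwin used by Veres and Bohannon, the frequency values, and the function names below are assumptions, not the published methodology.

```python
# A minimal sketch (not the authors' actual pipeline) of a Darwin-relative fame
# score, assuming we have yearly ngram frequencies for each scientist's name.
# The exact definition of the milliDarwin in the Science Hall of Fame may differ.

def fame_in_millidarwins(freq_by_year, darwin_freq_by_year):
    """Average frequency of a name relative to 'Charles Darwin', in thousandths."""
    years = sorted(set(freq_by_year) & set(darwin_freq_by_year))
    ratios = [freq_by_year[y] / darwin_freq_by_year[y]
              for y in years if darwin_freq_by_year[y] > 0]
    return 1000.0 * sum(ratios) / len(ratios) if ratios else 0.0

# Hypothetical usage with made-up frequencies:
darwin = {2000: 1.2e-6, 2001: 1.1e-6}
candidate = {2000: 0.6e-7, 2001: 0.5e-7}
print(fame_in_millidarwins(candidate, darwin))  # roughly 48 milliDarwins
```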

(Other users have been exploring the history of particular sciences with the Ngram Viewer, covering everything from neuroscience to the nuclear age.)


The History of Typography
When we introduced the Ngram Viewer, we pointed out some potential pitfalls with the data. For instance, the ‘medial s’ ( ſ ), an older form of the letter s that looked like an integral sign and appeared at the beginning or in the middle of words, tends to be classified as an instance of the letter ‘f’ by the OCR algorithm used to create our version of the data. Andrew West, blogging at Babelstone, found a clever way to exploit this error: using queries like ‘husband’ and ‘hufband’ to study the history of medial s typography, he pinned down the precise moment when the medial s disappeared from English (around 1800), French (1780), and Spanish (1760).
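As a rough illustration of the approach (not Andrew West’s actual method), one could track the yearly share of ‘hufband’ relative to ‘husband’ in the downloadable ngram counts; the file name and column layout below are hypothetical.

```python
# A rough sketch: estimate when the medial s disappeared by tracking the share
# of 'hufband' (the OCR reading of 'huſband') relative to 'husband' per year.
# Assumes a local TSV of 1-gram counts with columns (ngram, year, match_count);
# the file name and layout are hypothetical, not the official export format.
import csv
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))  # counts[word][year] = matches
with open("eng_1grams_sample.tsv", newline="") as f:
    for word, year, match_count, *_ in csv.reader(f, delimiter="\t"):
        if word in ("husband", "hufband"):
            counts[word][int(year)] += int(match_count)

for year in sorted(set(counts["husband"]) | set(counts["hufband"])):
    total = counts["husband"][year] + counts["hufband"][year]
    if total:
        share = counts["hufband"][year] / total
        print(f"{year}\t{share:.1%} of occurrences still use the medial s")
```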

People are clearly having a good time with the Ngram Viewer, and they have been learning a few things about science and history in the process. Indeed, the tool has proven so popular and so useful that Google recently announced its imminent graduation from Google Labs to become a permanent part of Google Books.

Similar ‘big data’ approaches can also be applied to a wide variety of other problems. From books to maps to the structure of the web itself, 'the world's information' is one amazing dataset.

Erez Lieberman Aiden is Visiting Faculty at Google and a Fellow of the Harvard Society of Fellows. Jean-Baptiste Michel is Visiting Faculty at Google and a Postdoctoral Fellow in Harvard's Department of Psychology.

Friday, July 29, 2011

President's Council Recommends Open Data for Federal Agencies



Cross-posted with the Public Sector and Elections Lab Blog

One of the things I most enjoy about working on data management is the ability to work on a variety of problems, both in the private sector and in government. I recently had the privilege of serving on a working group of the President’s Council of Advisors on Science and Technology (PCAST) studying the challenges of conserving the nation’s ecosystems. The report, titled “Sustaining Environmental Capital: Protecting Society and the Economy” was presented to President Obama on July 18th, 2011. The full report is now available to the public.

The press release announcing the report summarizes its recommendations:
The Federal Government should launch a series of efforts to assess thoroughly the condition of U.S. ecosystems and the social and economic value of the services those ecosystems provide, according to a new report by the President’s Council of Advisors on Science and Technology (PCAST), an independent council of the Nation’s leading scientists and engineers. The report also recommends that the Nation apply modern informatics technologies to the vast stores of biodiversity data already collected by various Federal agencies in order to increase the usefulness of those data for decision- and policy-making.

One of the key challenges we face in assessing the condition of ecosystems is that much of the data pertaining to these systems is locked up in individual databases. Even though this data is often collected using government funds, it is not always available to the public, and in other cases it is available but not in usable formats. This is a classic example of a data integration problem that occurs in many other domains.

The report calls for creating an ecosystem, EcoINFORMA, around data. The crucial piece of this ecosystem is to make the relevant data publicly available in a timely manner and, most importantly, in a machine-readable form. Publishing data embedded in a PDF file is a classic example of what does not count as machine readable. For example, if you are publishing a tabular data set, a computer program should be able to directly access the meta-data (e.g., column names, date collected) and the data rows without having to heuristically extract them from surrounding text.
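As a concrete illustration of the difference, consider publishing the same table as a CSV file plus a small meta-data sidecar instead of a PDF; a program can then read both directly. The file names and fields below are made up for illustration and are not part of the PCAST report.

```python
# An illustrative sketch of what "machine readable" means in practice: the
# meta-data and rows can be read directly by a program, with no text scraping.
# The file names and fields are hypothetical, not from the PCAST report.
import csv, json

with open("stream_temperature_metadata.json") as f:
    meta = json.load(f)          # e.g. {"columns": [...], "date_collected": "..."}

with open("stream_temperature.csv", newline="") as f:
    rows = list(csv.DictReader(f))   # column names come straight from the header

print(meta["date_collected"], meta["columns"])
print(rows[0])                        # each row is a dict keyed by column name
```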

Once the data is published, it can be discovered by search engines. Data from multiple sources can be combined to provide additional insight, and the data can be visualized and analyzed by sophisticated tools. The main point is that innovation should be pursued by many parties (academics, commercial, government), each applying their own expertise and passions.

There is a subtle point about how much meta-data should be provided before publishing the data. Unfortunately, requiring too much meta-data (e.g., standard schemas) often stymies publication. When meta-data exists, that’s great, but when it’s not there or is not complete, we should still publish the data in a timely manner. If the data is valuable and discoverable, there will be someone in the ecosystem who will enhance the data in an appropriate fashion.

I look forward to seeing this ecosystem evolve and am excited that Google Fusion Tables, our own cloud-based service for visualizing, sharing and integrating structured data, can contribute to its development.

Thursday, July 21, 2011

Studies Show Search Ads Drive 89% Incremental Traffic



Advertisers often wonder whether search ads cannibalize their organic traffic. In other words, if search ads were paused, would clicks on organic results increase, and make up for the loss in paid traffic? Google statisticians recently ran over 400 studies on paused accounts to answer this question.

In what we call “Search Ads Pause Studies”, our group of researchers observed organic click volume in the absence of search ads. Then they built a statistical model to predict the click volume for given levels of ad spend using spend and organic impression volume as predictors. These models generated estimates for the incremental clicks attributable to search ads (IAC), or in other words, the percentage of paid clicks that are not made up for by organic clicks when search ads are paused.
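As a simplified sketch of this kind of analysis (not the published methodology, and with entirely made-up numbers), one could fit a linear model of organic clicks on ad spend and organic impressions, then compare the model’s predictions with spend at its observed level versus spend set to zero:

```python
# A simplified sketch of the kind of model described above, not the study's
# actual methodology. We fit organic clicks as a function of ad spend and
# organic impression volume, then ask how many of the paused paid clicks the
# model predicts would come back as organic clicks. All data is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Weekly observations for one account (made up): the last three weeks have ads paused.
spend       = np.array([100, 120,  90, 110,   0,   0,   0])
org_impr    = np.array([5000, 5200, 4900, 5100, 5000, 5050, 4950])
org_clicks  = np.array([400,  410,  395,  405,  430,  435,  425])
paid_clicks = np.array([300,  350,  280,  320,    0,    0,    0])

model = LinearRegression().fit(np.column_stack([spend, org_impr]), org_clicks)

# Predicted organic clicks for the ads-on weeks, with ads on vs. spend set to zero.
on  = model.predict(np.column_stack([spend[:4], org_impr[:4]]))
off = model.predict(np.column_stack([np.zeros(4), org_impr[:4]]))
recovered = (off - on).sum()                 # organic clicks gained when paused
iac = 1 - recovered / paid_clicks[:4].sum()  # share of paid clicks not replaced
print(f"Estimated incremental ad clicks: {iac:.0%}")
```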

The results were surprising. On average, the incremental ad clicks percentage across verticals is 89%. This means that a full 89% of the traffic generated by search ads is not replaced by organic clicks when ads are paused. This number was consistently high across verticals. The full study can be found here.

Faculty from across the Americas meet in New York for the Faculty Summit



(Cross-posted from the Official Google Blog)

Last week, we held our seventh annual Computer Science Faculty Summit. For the first time, the event took place at our New York City office; nearly 100 faculty members from universities in the U.S., Canada and Latin America attended. The two-day Summit focused on systems, artificial intelligence and mobile computing. Alfred Spector, VP of research and special initiatives, hosted the conference and led lively discussions on privacy, security and Google’s approach to research.

Google’s Internet evangelist, Vint Cerf, opened the Summit with a talk on the challenges involved in securing the “Internet of things”—that is, uniquely identifiable objects (“things”) and their virtual representations. With almost 2 billion international Internet users and 5 billion mobile devices out there in the world, Vint expounded upon the idea that Internet security is not just about technology, but also about policy and global institutions. He stressed that our new digital ecosystem is complex and large in scale, and includes both hardware and software. It also has multiple stakeholders, diverse business models and a range of legal frameworks. Vint argued that making and keeping the Internet secure over the next few years will require technical innovation and global collaboration.

After Vint kicked things off, faculty spent the two days attending presentations by Google software engineers and research scientists, including John Wilkes on the management of Google's large hardware infrastructure, Andrew Chatham on the self-driving car, Johan Schalkwyk on mobile speech technology and Andrew Moore on the research challenges in commerce services. Craig Nevill-Manning, the engineering founder of Google’s NYC office, gave an update on Google.org, particularly its recent work in crisis response. Other talks covered the engineering work behind products like Ad Exchange and Google Docs, and the range of engineering projects taking place across 35 Google offices in 20 countries. For a complete list of the topics and sessions, visit the Faculty Summit site. Also, a few of our attendees heeded Alfred’s call to recap their breakout sessions in verse—download a PDF of one of our favorite poems, about the future of mobile computing, penned by NYU professor Ken Perlin.

A highlight of this year’s Summit was Bill Schilit’s presentation of the Library Wall, a Chrome OS experiment featuring an eight-foot tall full-color virtual display of ebooks that can be browsed and examined individually via touch screen. Faculty members were invited to play around with the digital-age “bookshelf,” which is one of the newest additions to our NYC office.

We’ve already posted deeper dives on a few of the talks—including cluster management, mobile search and commerce. We also collected some interesting faculty reflections. For more information on all of our programs, visit our University Relations website. The Faculty Summit is meant to connect forerunners across the computer science community—in business, research and academia—and we hope all our attendees returned home feeling informed and inspired.

Wednesday, July 20, 2011

Google Americas Faculty Summit: Reflections from our attendees



Last week, we held our seventh annual Americas Computer Science Faculty Summit at our New York City office. About 100 faculty members from universities in the Western Hemisphere attended the two-day Summit, which focused on systems, artificial intelligence and mobile computing. To finish up our series of Summit recaps, I asked four faculty members to share their perspectives on the Summit, thinking their views would complement our own blog posts: Jeannette Wing from Carnegie Mellon, Rebecca Wright from Rutgers, Andrew Williams from Spelman and Christos Kozyrakis from Stanford.

Jeannette M. Wing, Carnegie Mellon University
Fun, cool, edgy and irreverent. Those words describe my impression of Google after attending the Google Faculty Summit, held for the first time at its New York City location. Fun and cool: The Library Wall prototype, which attendees were privileged to see, is a peek at the future where e-books have replaced physical books, but where physical space, equipped with wall-sized interactive displays, still encourages the kind of serendipitous browsing we enjoy in the grand libraries of today. Cool and edgy: Being in the immense old Port Authority building in the midst of the Chelsea district of Manhattan is just plain cool and adds an edgy character to Google not found at the corporate campuses of Silicon Valley. Edgy, or more precisely “on the edge,” is Google as it explores new directions: social networking (Google+), mobile voice search (check out the microphone icon in your search bar) and commerce (e.g. selling soft goods on-line). Why these directions? Some are definitely for business reasons, but some are also simply because Google can (self-driving cars) and because it’s good for society (e.g., emergency response in Haiti, Chile, New Zealand and Japan). “Irreverent” is Alfred Spector’s word and sums it up—Google is a fun place to work, where smart people can be creative, build cool products and make a difference in untraditional ways.

But the one word that epitomizes Google is “scale.” How do you manage clusters on the order of hundreds of thousands of processors where the focus is faults, not performance or power? What knowledge about humanity can machine learning discover from 12 million scanned books in 400 languages, which yielded five billion digitized pages and two trillion words? Beyond Google, how do you secure the Internet of Things when eventually everything from light bulbs to pets will all be Internet-enabled and accessible?

One conundrum. Google’s hybrid model of research clearly works for Google and for Googlers. It is producing exciting advances in technology and having an immeasurable impact on society. Evident from our open and intimate breakout sessions, Google stays abreast of cutting-edge academic research, often by hiring our Ph.D. students. The challenge for computer science research is, “how can academia build on the shoulders of Google’s scientific results?”

Academia does not have access to the scale of data or the complexity of system constraints found within Google. For the good of the entire industry-academia-government research ecosystem, I hope that Google continues to maintain an open dialogue with academia—through faculty summits, participation and promotion of open standards, robust university relations programs and much more.
-----

Rebecca Wright, Rutgers University
This was my first time attending a Google Faculty Summit. It was great to see it held in my "backyard," which emphasized the message that much of Google's work takes place outside their Mountain View campus. There was a broad variety of excellent talks, each of which only addressed the tip of the iceberg of the particular problem area. The scope and scale of the work being done at Google is really mind-boggling. It both drives Google’s need for new solutions and allows the company to consider new approaches. At Google’s scale, automation is critical and almost everything requires research advances, engineering advances, considerable development effort and engagement of people outside Google (including academics, the open source community, policymakers and "the crowd").

A unifying theme in much of Google’s work is the use of approaches that leverage its scale rather than fight it (such as MapMaker, which combines Google's data and computational resources with people's knowledge about and interest in their own geographic areas). In addition to hearing presentations, the opportunity to interact with the broad variety of Googlers present as well as other faculty was really useful and interesting. As a final thought, I would like to see Google get more into education, particularly in terms of advancing hybrid in-class/on-line technologies that take advantage of the best features of each.
-----

Andrew Williams, Spelman College
At the 2011 Google Faculty Summit in New York, the idea that we are moving past the Internet of computers to an "Internet of Things" became a clear theme. After hearing presentations by Googlers, such as Vint Cerf dapperly dressed in a three piece suit, I realized that we are in fact moving to an Internet of Things and People. The pervasiveness of connected computing devices and very large systems for cloud computing all interacting with socially connected people were expounded upon both in presentations and in informal discussions with faculty from around the world. The "Internet of people" aspect was also evident in emerging policies we touched on, involving security, privacy and social networks (like the Google+ project). I also enjoyed the demonstration of the Google self-driving car as an advanced application of artificial intelligence that integrates computer vision, localization and decision making in a real world transportation setting. I was impressed with how Google volunteers its talent, technology and time to help people, as it did with its crisis response efforts in Haiti, Japan and other parts of the world.

As an educator and researcher in humanoid robotics and AI at a historically black college for women in Atlanta, the Google Faculty Summit motivated me to improve how I educate our students to eventually tackle the grand challenges posed by the Internet of Things and People. It was fun to learn how Google is actively seeking to solve these grand challenges on a global scale.
-----

Christos Kozyrakis, Stanford University
What makes the Google Faculty Summit a unique event to attend is its wide-reaching focus. Our discipline-focused conferences facilitate in-depth debates over a narrow set of challenges. In contrast, the Faculty Summit is about bringing together virtually all disciplines of computer science to turn information into services with an immediate impact on our everyday lives. It is fascinating to discuss how large data centers and distributed software systems allow us to use machine learning algorithms on massive datasets and get voice-based search, tailored shopping recommendations or driverless cars. Apart from the general satisfaction of seeing these applications in action, one of the important takeaways for me is that specifying and managing the behavior of large systems in an end-to-end manner is currently a major challenge for our field. Now is probably the best time to be a computer scientist, and I am leaving with a better understanding of which advances in my area of expertise can have the biggest overall impact.

I also enjoyed having the summit at the New York City office, away from Google headquarters in Silicon Valley. It’s great to see in practice how the products of our field (networking, video-conferencing and online collaboration tools) allow for technology development anywhere in the world.
-----

As per Jeannette Wing’s comments about Google being “irreverent,” I own up to using the term—initially about a subject on which Aristophanes once wrote (I’ll leave that riddle open). As long as you take my usage in the right way (that is, we’re very serious about the work we do, but perhaps not about all the things one would expect of a large company), I’m fine with it. There’s so much in the future of computer science and its potential impact that we should always be coming at things in new ways, with the highest aspirations and with joy at the prospects.

Tuesday, July 19, 2011

Google Americas Faculty Summit Day 2: Shopping, Coupons and Data



On July 14 and 15, we held our seventh annual Faculty Summit for the Americas with our New York City offices hosting for the first time. Over the next few days, we will be bringing you a series of blog posts dedicated to sharing the Summit's events, topics and speakers. --Ed

Google is ramping up its commitment to making shopping and commerce fun, convenient and useful. As a computer scientist with a background in algorithms and large scale artificial intelligence, what's most interesting to me is the breadth of fundamental new technologies needed in this area. They range from the computer vision technology that recognizes fashion styles and visually similar items of clothing, to a deep understanding of (potentially) all goods for sale in the world, to new and convenient payments technologies, to the intelligence that can be brought to the mobile shopping experience, to the infrastructure needed to make these technologies work on a global scale.

At the Faculty Summit this week, I took the opportunity to engage faculty in some of the fascinating research questions that we are working on within Google Commerce. For example, consider the processing flow required to present a user with an appropriate set of shoes from which to choose, given the input of an image of a high heel shoe. First, we need to segment or identify the object of interest in the input image. If the input is an image of a high heel with the Alps in the background, we don’t want to find images of different types of shoes with the Alps in the background; we want images of high heels.

The second step is to extract the object’s “visual signature” and build an index using color, shape, pattern and metadata. Then, a search is performed using a variety of similarity measures. The implementation of this processing flow raises several research challenges. For example, the calculations required to determine similar shoes could be slow due to the number of factors that must be considered. Segmentation can also pose a difficult problem because of the complexity of the feature extraction algorithms.
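To make the last step concrete, here is a toy sketch of similarity search over precomputed visual signatures; the feature vectors, item names and the use of plain cosine similarity are illustrative assumptions, far simpler than the production system.

```python
# A toy sketch of the similarity-search step only: given a feature vector for
# the segmented query shoe (color/shape/pattern descriptors computed upstream),
# rank catalog items by cosine similarity. Vectors and item names are made up.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

catalog = {
    "red_high_heel":   np.array([0.9, 0.1, 0.8, 0.2]),
    "black_high_heel": np.array([0.1, 0.9, 0.8, 0.2]),
    "hiking_boot":     np.array([0.2, 0.6, 0.1, 0.9]),
}
query = np.array([0.85, 0.15, 0.75, 0.25])   # visual signature of the input image

for name, sim in sorted(((n, cosine(query, v)) for n, v in catalog.items()),
                        key=lambda x: -x[1]):
    print(f"{name}\t{sim:.3f}")
```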

Another important consideration is personalization. Consumers want items that correspond to their interests, so we should include results based on historical search and shopping data for a particular person (who has opted in to such features). More importantly, we want to downweight styles that the shopper has indicated they do not like. Finally, we also need to include some creative items to simulate the serendipitous connections one makes when shopping in a store. This is a new kind of search experience, which requires a new kind of architecture and new ways to infer shopper satisfaction. As a result, we find ourselves exploring new kinds of statistical models and the underlying infrastructure to support them.


Saturday, July 16, 2011

Google Americas Faculty Summit Day 1: Cluster Management



On July 14 and 15, we held our seventh annual Faculty Summit for the Americas with our New York City offices hosting for the first time. Over the next few days, we will be bringing you a series of blog posts dedicated to sharing the Summit's events, topics and speakers. --Ed

At this year’s Faculty Summit, I had the opportunity to provide a glimpse into the world of cluster management at Google. My goal was to brief the audience on the challenges of this complex system and explain a few of the research opportunities that these kinds of systems provide.

First, a little background. Google’s fleet of machines is spread across many data centers, each of which consists of a number of clusters (a set of machines with a high-speed network between them). Each cluster is managed as one or more cells. A user (in this case, a Google engineer) submits jobs to a cell for it to run. A job could be a service that runs for an extended period, or a batch computation such as a MapReduce that updates an index.
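To illustrate the job/cell model (purely hypothetically: this is not Google’s internal job specification format), a submitted job might look something like this:

```python
# A hypothetical illustration of the job/cell model described above; this is
# not Google's internal job specification. A user submits a job (either a
# long-running service or a batch run) to a cell, asking for some number of
# identical tasks with given resource requirements.
job = {
    "name": "index-update-mapreduce",
    "cell": "example-cell-a",
    "kind": "batch",              # or "service" for a long-running job
    "tasks": 2000,                # identical tasks spread across many machines
    "resources_per_task": {"cpus": 2, "ram_gb": 4, "disk_gb": 50},
    "priority": "batch",          # lower priority than user-facing services
}

def submit(job):
    """Pretend submission: the cell's scheduler would place each task on a machine."""
    print(f"Submitting {job['tasks']} tasks of {job['name']} to {job['cell']}")

submit(job)
```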

Cluster management operates on a very large scale: whereas a storage system that can hold a petabyte of data is considered large by most people, our storage systems will send us an emergency page when they have only a few petabytes of free space remaining. This scale gives us opportunities (e.g., a single job may use several thousand machines at a time), but also many challenges (e.g., we constantly need to worry about the effects of failures). The cluster management system juggles the needs of a large number of jobs in order to achieve good utilization, trying to strike a balance between a number of conflicting goals.

To complicate things, data centers can have multiple types of machines, different network and power-distribution topologies, a range of OS versions and so on. We also need to handle changes, such as rolling out a software or a hardware upgrade, while the system is running.

Our current cluster management system is about seven years old now (several generations for most Google software) and, although it has been a huge success, it is beginning to show its age. We are currently prototyping a new system that will replace it; most of my talk was about the challenges we face in building this system. We are building it to handle larger cells, to look into the future (by means of a calendar of resource reservations) to provide predictable behavior, to support failures as a first-class concept, to unify a number of today’s disjoint systems and to give us the flexibility to add new features easily. A key goal is that it should provide predictable, understandable behavior to users and system administrators. For example, the latter want to know answers to questions like “Are we in trouble? Are we about to be in trouble? If so, what should we do about it?”

Putting all this together requires advances in a great many areas. I touched on a few of them, including scheduling and ways of representing and reasoning with user intentions. One of the areas that I think doesn’t receive nearly enough attention is system configuration—describing how systems should behave, how they should be set up, how those setups should change, etc. Systems at Google typically rely on dozens of other services and systems. It’s vital to simplify the process of making controlled changes to configurations that result in predictable outcomes, every time, even in the face of heterogeneous infrastructure environments and constant flux.
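A minimal sketch of that declarative, controlled-change idea, with hypothetical fields rather than any real Google configuration system: describe the desired setup, compute the difference from the current one, and review the plan before rolling it out.

```python
# A minimal sketch of controlled, predictable configuration changes: describe
# the desired setup declaratively, diff it against the current state, and only
# then apply. Names and fields are hypothetical, not from a Google system.
current = {"binary_version": "1.7", "replicas": 500, "flags": {"cache_mb": 256}}
desired = {"binary_version": "1.8", "replicas": 500, "flags": {"cache_mb": 512}}

def plan(current, desired):
    """Return the set of changes needed to move from current to desired."""
    changes = {}
    for key in desired:
        if current.get(key) != desired[key]:
            changes[key] = (current.get(key), desired[key])
    return changes

for key, (old, new) in plan(current, desired).items():
    print(f"{key}: {old} -> {new}")   # review before rolling out incrementally
```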

We’ll be taking steps toward these goals ourselves, but the intent of today’s discussion was to encourage people in the academic community to think about some of these problems and come up with new and better solutions, thereby raising the level for us all.