On the last day of the Google I/O developers conference, we sat down with engineering director Peter Magnusson to digest the introduction of Compute Engine, which adds Google-scale processing power to the company’s list of cloud offerings designed to take on Amazon Web Services. Here are the announcement’s five key implications:
1. Forget Web vs. Native. There’s Only One Cloud
According to Magnusson, all the hand-wringing over Web versus native apps is really nothing to worry about. It’s a short-term problem. “We’ll have both for now,” he said, “and small teams will have to prioritize” based on what they learn from their customers. But eventually, Google intends to make so much possible from the cloud that the particular interface to an application won’t matter.
We’re almost at the inflection point, in fact. Magnusson points out that high-profile exits of small apps such as Instagram wouldn’t be possible without a managed cloud infrastructure. The new pieces of Google’s cloud offerings are trying to expand that flexibility to broader kinds of applications. “We’re trying to build the future cloud global computer,” Magnusson said.
2. The Trend Over Time Is Toward Managed Services
The early offerings in Google App Engine – the forerunner to Compute Engine – solved certain kinds of computing problems for developers, but many App Engine apps handle their traffic capacity on infrastructure elsewhere. That’s costly and confusing, and Google thinks that era is nearly over. What comes next, in Google’s view, is an era in which developers have to worry about only their products, not their uptime. “The trend over time is toward managed services,” Magnusson said.
There are already more than a million App Engine applications, and Magnusson points out that the number of engineers required to manage the infrastructure is vastly lower at Google’s scale than if every company reinvented the wheel with its own machines. Those savings get passed down to customers, he said.
3. If You Need Computing Power, Google Can Handle It
At Google I/O on Thursday, SVP of technical infrastructure Urs Hölzle unveiled Compute Engine with a flashy demo. The Institute for Systems Biology can use a whopping 600,000 computing cores for genetic analysis at speeds that were impossible before, all on Google’s infrastructure.
“There’s not a lot of startups that need 600,000 cores,” Magnusson pointed out, but the demo was meant to prove a point. “To enterprise and startups: Don’t worry. You aren’t going to outgrow the stack. It’s spare capacity.”
4. You Don’t Have to Know Everything to Make Something
With an infrastructure company handling the computing, the hosting and the flexible scaling for these applications, developers no longer need to be experts in everything. If Google’s dream comes true, it will lower the threshold for people to write their own programs, and many more people will be able to create many more apps.
Google provides the most basic kinds of applications – search, email, calendars and maps – itself, and the rest of the capacity is open to what Magnusson called the “very, very, very long” tail of other uses. There are over a million Google-hosted apps now; Google expects that number to reach 10 million within a few years, and Magnusson said the vision is for “tens of billions of applications.”
Who makes all those apps? What do they do? In response to this question, Magnusson asked rhetorically, “How many spreadsheets have you had to look at recently?” In other words, the amount of data we generate is endless, and the ways to slice and dice it are even more so. “You’re going to get teachers writing custom applications for a particular class,” Magnusson suggested, and their students, too, for that matter.
5. Google Wants to Scale With You
Google’s main goal is to get new applications in on the ground floor. Eventually, Google hopes it will be cost-effective for you to host your pet project there, and Google’s capacity will be able to expand with you. “It doesn’t matter whether it’s two [virtual machines] or 10,000,” Magnusson said.