by Rachel Simmons and Bruce Wang
We have found, through thousands of conversations, that business owners often assume there is nothing they can do to cut their energy bill, and therefore don’t measure their energy data. Google assumes no such thing: it measures everything, and it recently used machine learning to cut the energy it spends cooling its data centers by 40%. Because that’s how Google rolls.
The company has focused on reducing its energy usage for the past decade. As they share in a blog post, they’ve built super-efficient servers, found ways to cool their data centers more efficiently, and invested heavily in cleaner energy sources, setting a goal to be 100% powered by renewable energy.
They even dedicate a page on their website to their data center efficiency, detailing how they do it and how others can too.
Still, as they say, “major breakthroughs…are few and far between.” So reducing the amount of energy they use for cooling by 40%—especially given all the progress they’ve already made and how sophisticated their energy efficiency at their data centers already is—is a pretty big deal.
So how did Google do it and what is machine learning?
Earlier this summer, my husband and I happened to catch “Sleepless in Seattle” on TV one night. Call me a ’90s girl, but I’m a sucker for a Meg Ryan/Tom Hanks love story and never fail to get drawn into one.
There’s a scene in the movie where Meg Ryan’s character, Annie—a reporter for the Baltimore Sun—uses her work database to track down Tom Hanks’s Sam, an architect in Seattle whose son calls into a radio show because he wants his broken-hearted father to find a new wife.
Annie’s MS-DOS program is basically text on a screen, asking questions and searching as she types in her answers. She enters a query and the program performs exactly the task she asks of it (which is, basically, to find out who this Sam guy is so she can learn what happened to his former wife and decide whether it’s star-crossed love for them).
Our computer experience is worlds more sophisticated now, but behind the scenes, computers still pretty much do whatever we tell them to. They don’t think for themselves; they’re not capable of it.
Machine learning, however, is the science of getting computers to think for themselves—to take action without being explicitly programmed or told what to do. It involves building generic algorithms, feeding those algorithms with data, and then allowing the program to build its own logic based on that data.
In other words, machine learning programs look for patterns in data and then take action based on what they learn from those patterns, rather than following rules a human wrote out in advance.
Your Gmail account, for example, employs a machine-learning algorithm that identifies and categorizes spam. How does it do this magic? By examining large sets of data and developing logic around what, according to that data, equals spam. Cornell is building an algorithm that would identify whales and their location based on audio recordings, which would help ships avoid hitting them. And a number of companies are using machine learning to build algorithms that work on health issues, from predicting emergency room wait times to a phone app that can distinguish everyday jostles from emergency situations, such as strokes or seizures.
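To make the spam example concrete, here’s a minimal sketch of learning from labeled data. The four-message training set is invented for illustration, and the word-frequency scorer is far simpler than anything Gmail actually runs, but the principle is the same: the program builds its own logic from examples rather than from hand-written rules.

```python
from collections import Counter

# Tiny invented training set: messages labeled spam or ham (not spam).
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("lunch meeting moved to noon", "ham"),
    ("project update attached", "ham"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(text, counts):
    """Score a message by how strongly its words match each label.
    Add-one smoothing keeps unseen words from flattening a score."""
    scores = {}
    for label, words in counts.items():
        scores[label] = sum(words[w] + 1 for w in text.split())
    return max(scores, key=scores.get)

counts = train(training)
print(classify("free prize money", counts))    # words match the spam examples
print(classify("lunch project update", counts))  # words match the ham examples
```

Nothing here was told what spam looks like; the logic comes entirely from the labeled examples.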
One step further: Google DeepMind
Google DeepMind is an artificial intelligence division created after the company acquired the British startup of the same name in 2014.
DeepMind creates artificial intelligence (AI) software that thinks for itself by drawing on huge sets of data that teach the AI how to accomplish certain goals or tasks and predict outcomes. Ultimately the team behind DeepMind wants to create what they call “general artificial intelligence,” which would closely mimic human intelligence’s ability to take on any task—not just the specific one it’s been trained to focus on.
They are working toward this using reinforcement learning: software explores an environment, receives a virtual reward, and adjusts its behavior to increase that reward.
Cofounder Mustafa Suleyman explains to Business Insider:
“Everything starts with an agent. You can think of an agent as a control system for a robotic arm or a self-driving car or a recommendation engine and that agent has some goal that it’s trying to optimise.
We hand code that goal. It’s the only thing we give the agent. We say these are the things that you should find rewarding in some environment. And the environment can also be very general, so it could be a simulator to train a self-driving car, or it could be YouTube where you’re trying to recommend videos that people find entertaining and engaging.
The agent is able to take a set of actions in some environment [and is] able to experimentally interact, independent and autonomously, in the environment and that environment then provides back a set of observations about how the state has changed as a result of the agent interacting with that environment. And, of course, the environment passes back a reward, which the agent is able to learn from. So it’s really learning through feedback or through the reinforcement learning process.”
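Suleyman’s agent/environment/reward loop can be sketched in a few lines. This toy example is our own, not DeepMind code: it uses Q-learning, one common reinforcement-learning algorithm, to teach an agent to walk to the rewarding end of a five-cell corridor. The agent is never told “go right”; it only receives a reward when it reaches the last cell, and learns the rest through feedback.

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)          # five cells; move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        # The environment passes back an observation (the new state)
        # and a reward: +1 only for reaching the goal cell.
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next action.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the agent prefers moving right from every cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Swap the corridor for a data-center simulator and the reward for energy savings, and the shape of the loop is the same.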
Recently, DeepMind made news when its AI agent, AlphaGo, beat a human Go grandmaster—Go being a game more complex than chess, in fact, “too complex to be tackled by software that primarily relies on calculating the possible outcomes of different moves, the method that IBM’s DeepBlue used to defeat world chess champion Garry Kasparov in 1997.”
Reducing energy usage with machine learning
Google, which represents 5% of the internet’s cloud usage, has a lot of servers: servers that power Search, Gmail, YouTube, and more. One of the biggest sources of energy usage at those data centers is cooling. Our devices create a lot of heat when they’re powered on, and servers are no different, except that their heat must be removed to keep them running (and cat videos streaming).
To tackle Google’s energy usage at its data centers, the DeepMind team used sensors to collect five years of historical energy data—temperatures, power usage, pump speeds, and so on. That data informed and trained a predictive model of how much energy the data center would need based on likely server usage.
The goal, then, was to “improve data centre energy efficiency.” They pursued it by training their DeepMind agent on a metric called average future PUE—Power Usage Effectiveness, the ratio of total building energy usage to IT energy usage.
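The PUE formula itself is simple enough to write down directly; the numbers below are purely illustrative, not Google’s:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total building energy divided by IT energy.
    A PUE of 1.0 would mean every watt goes to computing; real facilities
    run higher because of cooling, lighting, and other overhead."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh overall while its servers use 1,200 kWh:
print(pue(1500, 1200))  # 1.25 — a quarter again of the IT load goes to overhead
```

The closer the ratio gets to 1.0, the less energy is being spent on anything other than the computing itself.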
This is a formula that has long been in use at Google. As I mentioned earlier, Google has spent the past decade “focused on reducing [their] energy use while serving the growth of the Internet.” And in true Google fashion, they like to share what has worked for them—a massive energy user—with the rest of us.
Their number one best practice? Measure your energy data, or more specifically, measure PUE. “You can’t manage what you don’t measure, so be sure to track your data center’s energy use,” they say. They measure PUE often—at least once per second—and also capture it over the entire year. This is what we call measuring both real-time and historical energy data.
So they’ve been measuring this for a while, and they’ve already taken action to significantly reduce their energy costs. What they did next was train their machine learning programs to predict PUE. They also trained additional programs to “predict the future temperature and pressure of the data centre over the next hour.”
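As a rough illustration of what such a predictive model does, the sketch below fits a model to synthetic sensor readings and predicts PUE an hour ahead. Everything here is an assumption for demonstration: DeepMind used deep neural networks on real data from thousands of sensors, whereas this fits a plain linear regression on three made-up features.

```python
import numpy as np

# Synthetic training data: each row is a snapshot of hypothetical readings
# [outside temp in °C, server load fraction, pump speed fraction], and the
# target is the PUE measured an hour later.
rng = np.random.default_rng(0)
X = rng.uniform([5, 0.2, 0.3], [35, 1.0, 1.0], size=(500, 3))
# Invented ground truth: hotter weather and heavier load worsen PUE.
y = 1.1 + 0.004 * X[:, 0] + 0.08 * X[:, 1] + 0.02 * X[:, 2]
y += rng.normal(0, 0.005, size=500)          # add sensor noise

# Fit coefficients with ordinary least squares (bias column prepended).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_pue(temp_c, load, pump):
    """Predict PUE one hour ahead from current sensor readings."""
    return coef @ [1.0, temp_c, load, pump]

print(round(predict_pue(30, 0.9, 0.8), 3))  # hot day, heavy load: higher PUE
print(round(predict_pue(10, 0.3, 0.4), 3))  # cool day, light load: lower PUE
```

The operational idea is the same at any scale: once you can predict where efficiency is headed, you can adjust cooling before the building wastes the energy.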
Then they deployed their model live to test it.
The big dip and spike you see in the energy usage graph above are essentially when DeepMind applied the machine learning recommendations (dip) and then turned them off again (spike).
This is cool, but I’m not Google. So how can I use machine learning to cut my energy bill?
Google operates at a scale that is nearly unrivaled, and what they’re doing can seem daunting to most of us. So what do you do when you’re not Google?
A lot of the technology that Google uses to measure its energy data (sensors, controls, etc.) is available to us all, and Google’s methods can be applied on a smaller scale. It’s much like what Google is doing with its Cloud platform: taking the highly complex, expensive technology and software it has built and letting others take advantage of it. DeepMind and other machine-learning platforms like it are no different.
For example, here at Brightergy we gather a variety of data points on behalf of our clients—from their utility bills and real-time consumption monitoring to solar generation and thermostat readings—and help digest it all by making simple recommendations to them.
While we haven’t fully adopted machine learning just yet, we see its potential to change the way people understand their energy and act to control it. The nice part for our clients is that they don’t have to go out and research all this themselves; they can rely on us not only to do the heavy lifting—assessing the virtues of DeepMind vs. Watson or keeping an eye on the developing IoT infrastructure—but also to apply it to their data.
This way, when our clients analyze their energy data, it has more nuance to it than simply a collection of graphs. And they uncover the details they need to take action to reduce their energy bill.
None of this means we believe Skynet is ready to take over just yet.
Because machine learning doesn’t take the human out of the equation—it only enhances it. While the science is starting to make real progress, we believe in combining technology with human intelligence and interaction to super-charge our offerings and really make that extra bit of impact that will save our clients money.
See? You really can be just like Google.