Next Wave: AI in the Data Center

Artificial intelligence is changing the world. AI innovations are evolutionary, revolutionary and intriguing. Some make headlines, like Sophia the Robot. Others are already woven into the fabric of our lives, changing how we shop, watch and listen. But how we got here – how machine learning actually happens – is just as fascinating. Take self-driving cars.

In Detroit this winter, Waymo (formerly the Google Self-Driving Car Project) had a fleet of cars experiencing all the gale-force conditions a Michigan winter has to offer. With a human in the driver’s seat, ready to take over if needed, Waymo’s Chrysler Pacificas spent months navigating the kind of winter-weather conditions human drivers dread. Whiteouts. Black ice. Fishtailing. The works.

In the lab, Waymo engineers analysed the sensor data, incorporating vast amounts of information from every available source – cameras, radars and advanced sensors called lidars (light detection and ranging) that provide detailed 3D views – and used it to model a single driving situation in full detail.

Then they added thousands of variations – snow turning to sleet, lane changes, turns without a signal, emergency vehicles, wildlife in the road – and trained the car to respond to each situation safely. Once a skill is mastered, it is added to the knowledge hub feeding the entire network of cars. Shared intelligence in the age of AI.
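To give a feel for what “thousands of variations” means in practice, here is a minimal sketch of scenario-variation training. It is an illustration only – the scenario fields, variation axes and the placeholder policy_response function are invented for this example and are not Waymo’s actual system.

```python
# Hypothetical sketch of scenario-variation training, not Waymo's actual pipeline.
# A base scenario is perturbed along a few labelled axes, and each variant is fed
# to a placeholder driving policy whose response is scored for safety.
import itertools
import random

BASE_SCENARIO = {"road": "two-lane highway", "speed_kph": 70}

# Illustrative variation axes; a real system would cover thousands of combinations.
WEATHER = ["clear", "snow", "sleet", "black ice"]
EVENTS = ["none", "unsignalled lane change", "emergency vehicle", "deer in the road"]

def make_variants(base):
    """Yield one scenario per combination of weather and road event."""
    for weather, event in itertools.product(WEATHER, EVENTS):
        yield {**base, "weather": weather, "event": event}

def policy_response(scenario):
    """Placeholder for the driving policy; returns a mock safety score in [0, 1]."""
    return random.random()

if __name__ == "__main__":
    for scenario in make_variants(BASE_SCENARIO):
        score = policy_response(scenario)
        outcome = "retrain" if score < 0.8 else "ok"
        print(f"{scenario['weather']:>10} | {scenario['event']:<25} -> {score:.2f} ({outcome})")
```

Real training pipelines work at vastly larger scale, but the pattern is the one described above: one well-modelled base situation, many systematic variations, and a learned response to each.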

From aviation and fleet transport to security and education, industries worldwide are exploring the potential for machine learning to increase productivity, reliability and, when it comes to cars, eventually safety. About 90 per cent of crashes are caused by human error. Driverless cars aim to reduce, if not erase, that number, although the tragic death of a pedestrian struck by a self-driving Uber in Arizona this month has called current capabilities into question and shown that further work on safety training is needed.

IDC estimates the artificial intelligence market will grow from $8 billion to more than $47 billion by 2020. In data centers, it’s fair to say that outside the area of power optimization, machine learning has had a limited impact on operations to date. Traditionally we’ve developed operational expertise specific to each piece of software and hardware, incorporated a network of electronic sensors to monitor the power and cooling infrastructure, and employed a specially trained team to oversee daily operations.
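To make that traditional approach concrete, here is a minimal sketch of rule-based monitoring: fixed thresholds per sensor, the kind of logic a machine learning system would instead learn from the telemetry itself. The sensor names and limits are illustrative assumptions, not values from any specific facility.

```python
# A minimal sketch of traditional, rule-based monitoring: fixed thresholds per
# sensor rather than behaviour learned from telemetry. Sensor names and limits
# are illustrative assumptions, not values from any specific facility.
THRESHOLDS = {
    "inlet_temp_c": (18.0, 27.0),   # ASHRAE-style recommended envelope
    "humidity_pct": (20.0, 80.0),
    "ups_load_pct": (0.0, 80.0),
}

def check_reading(sensor, value):
    """Return an alert string if a reading falls outside its fixed band, else None."""
    low, high = THRESHOLDS[sensor]
    if value < low or value > high:
        return f"ALERT: {sensor}={value} outside [{low}, {high}]"
    return None

if __name__ == "__main__":
    readings = {"inlet_temp_c": 29.5, "humidity_pct": 45.0, "ups_load_pct": 62.0}
    for sensor, value in readings.items():
        print(check_reading(sensor, value) or f"ok: {sensor}={value}")
```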

This matrix – complex technology supported by highly skilled engineers who adapt quickly and find solutions when they encounter the unexpected – works.

But here’s the thing: data generation is growing 50 per cent per year, a rate never before seen in history. Corporate systems, mobile devices and the Internet of Things are the primary drivers. At that pace, data volumes more than triple in three years, and major cloud providers are anticipating a need to triple their infrastructure by 2020.

Today the world’s largest data centers can occupy acres; the largest, China’s Range facility, is roughly the size of the Pentagon. Increasing scope and scale mean increased complexity. A data center environment can have thousands of applications, hundreds of thousands of software and hardware components supporting those applications, and perhaps millions of end users. Operations are affected by changing loads and weather conditions. The sheer volume of interactions means a risk of downtime at even the best-run data centers.

In addressing the risk of downtime, artificial intelligence is in the driver’s seat. The potential of AI in the data center is inspiring pioneers to break new ground in pursuit of reliable, 100 per cent uptime. To learn about new work using AI to meet the biggest challenge facing the data center industry, read Artificial Intelligence in the Data Center: ROOT Data Center First to Use Machine Learning to Maintain 100 Per Cent Reliability.