"As [data centres] reach scale and speeds we have not reached before, we get to the point where graphs of alerts aren't actually useful for humans to interpret in a fast enough time to maintain 100 percent availability.
"We need to surface-up AI-based responses to the infrastructure and applications to be able to take action on them. Not 'so-and-so subsystem x's disk seven is failing', it should be 'this is the impact on your system now'."
He added: "You will probably see the management toolsets having much more of a machine learning, AI focus over the next four or five years."
Baguley said the aim is to be able to predict when problems in infrastructure may arise. This resembles the demand forecasting that online retailer Amazon applies.
"It is the kind of thing Amazon are doing with their supply chain. Sometimes you order things that are delivered the same day instead of the next day because they know that in that area people tend to order that kind of thing around that time, so they ship stuff out. Why are we not doing that in data centres? Why are we not pre-shipping stuff out and spotting trends?"
The interest in machine learning is the next step in automating technology infrastructure.
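The predictive approach Baguley describes, flagging a failing component before it fails rather than after, can be illustrated with a minimal sketch. This is not any vendor's actual tooling; the metric, threshold, and data below are hypothetical, and the method is a simple least-squares trend on disk error telemetry.

```python
# Illustrative sketch (hypothetical metric and threshold): estimate when a
# disk's error count will cross a failure threshold by fitting a linear
# trend to recent hourly telemetry samples.

def hours_until_threshold(samples, threshold):
    """Fit a least-squares line to (hour, error_count) samples and return
    the estimated hours from the last sample until the count reaches
    `threshold`, or None if the trend is flat or improving."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    var = sum((x - mean_x) ** 2 for x, _ in samples)
    if var == 0:
        return None
    slope = cov / var
    if slope <= 0:
        return None  # errors not increasing: no predicted failure
    intercept = mean_y - slope * mean_x
    last_x = samples[-1][0]
    return (threshold - intercept) / slope - last_x

# Hourly reallocated-sector counts for a single disk (invented data)
telemetry = [(0, 2), (1, 4), (2, 7), (3, 9), (4, 12)]
eta = hours_until_threshold(telemetry, threshold=50)
if eta is not None:
    print(f"Disk predicted to hit failure threshold in {eta:.1f} hours")
```

A real system would layer impact analysis on top, translating "this disk will likely fail in 15 hours" into the kind of system-level statement Baguley describes, but the core idea is the same: act on a forecast rather than an alert.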
Amadeus' Krips added that a reduction in the need for manual data centre operations will mean a shift in the role of IT operations staff.
"My department or business unit, which is actively involved in day-to-day transactions and service configuration, will not do so in future," he said.
"These guys will become automation engineers. Like in an automotive plant, the workers go away from the conveyor belt and they start programming the robots.
"If you want to get to these levels of stability and agility you have to change the whole way you deliver the services. That is the big transformation that is happening."