Evidence is mounting that AI is moving out of the shadows of niche applications and into the mainstream spotlight. And as companies transition from proof-of-concept pilots and individual use cases to broader, enterprise-wide AI strategies, the machine learning platforms at the heart of many of those strategies often change as well.
“We’re moving out of the phase of narrow AI into the beginnings of the phase of broader AI use,” said Jamie Thomas, general manager of systems strategy and development at IBM.
For production-level projects, machine learning platforms need to scale, support comparing and retraining models, and integrate with enterprise data systems and other technology infrastructure. More often than not, that means migrating to the cloud.
Companies can now store their data, run their applications and access extensive machine learning and AI tool sets and libraries in cloud systems. “All the main cloud computing platforms are working to provide [these capabilities],” said Jennifer Fernick, head of research and engineering at NCC Group, an IT security company.
Cloud providers are also making it easier to move small, on-premises projects — often built with open source tools — into cloud environments. “Google supports the use of TensorFlow natively in the cloud,” Fernick noted. “That’s because, often, the things that people start with when first exploring machine learning will become the things that they use in the cloud as well.”
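That portability is concrete: a model built with open source TensorFlow on a laptop can typically run unchanged on a managed cloud service. The sketch below is a minimal, hypothetical illustration of that idea, not code from the article; the layer sizes and synthetic data are arbitrary assumptions, and the cloud target (e.g. a managed training service) is left unspecified.

```python
# Minimal sketch: a small Keras model trained locally on synthetic data.
# The same model code would run unchanged on a managed cloud service;
# only the data source and compute target would differ (assumption).
import numpy as np
import tensorflow as tf

# Tiny dense network for illustration; sizes are arbitrary.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data; in production this would come from
# enterprise data systems, on premises or in cloud storage.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```

Because nothing in the script is tied to local hardware, moving it to the cloud is mostly a matter of pointing it at cloud-hosted data and a remote training job, which is the pattern Fernick describes.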