Adversarial machine learning explained: How attackers disrupt AI and ML systems

As more companies roll out artificial intelligence (AI) and machine learning (ML) projects, securing them becomes more important. A report released by IBM and Morning Consult in May stated that of more than 7,500 global businesses surveyed, 35% are already using AI, up 13% from last year, while another 42% are exploring it. However, almost 20% of companies say they are having difficulty securing data and that it is slowing down AI adoption.

In a Gartner survey conducted last spring, security concerns were a top obstacle to adopting AI, tied for first place with the complexity of integrating AI solutions into existing infrastructure.

According to a paper Microsoft released last spring, 90% of organizations aren’t ready to defend themselves against adversarial machine learning. Of the 28 organizations, large and small, covered in the report, 25 lacked the tools they needed to secure their ML systems.

Securing AI and machine learning systems poses significant challenges, and some are not unique to AI. For example, AI and ML systems need data, and if that data contains sensitive or proprietary information, it becomes a target for attackers. Other aspects of AI and ML security are new, including defending against adversarial machine learning, in which attackers feed a model carefully crafted inputs designed to make it misbehave.
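To make the idea concrete, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one well-known adversarial attack: nudge each input value in the direction that most increases the model's loss, so a small, nearly invisible perturbation can flip the model's prediction. The toy model, random input, and epsilon below are illustrative stand-ins, not anything from the reports cited above.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarial version of input x for the given model."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp
    # back to the valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative example: a tiny classifier on a random 28x28 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])  # assumed true class for the toy input
x_adv = fgsm_attack(model, x, label)
print((x_adv - x).abs().max().item())  # perturbation bounded by epsilon
```

Because the perturbation is capped at epsilon per value, the adversarial input can look unchanged to a human while still changing the model's output, which is what makes attacks like this hard to catch with conventional input validation.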

Read the full article at CSO magazine.