Explainable AI: Bringing trust to business AI adoption

When it comes to making use of AI and machine learning, trust in results is key. Many organizations, particularly those in regulated industries, hesitate to adopt AI systems because of what is known as AI’s “black box” problem: the algorithms reach their decisions opaquely, offering no explanation of the reasoning behind them.

This is an obvious problem. How can we trust AI systems with life-or-death decisions in areas such as medical diagnostics or self-driving cars if we don’t know how they work?

At the center of this problem is a technical question shrouded by myth. There is a widely held belief that AI technology has become so complex that the systems cannot explain why they make the decisions they do, and that even if they could, the explanations would be too complicated for human brains to understand.

The reality is that many of the most common algorithms used today in machine learning and AI systems can have what is known as “explainability” built in. We’re simply not using it, or not being given access to it. For other algorithms, explainability and traceability functions are still being developed, but they are not far off.

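As one way to see that “built-in” explainability, here is a minimal sketch, assuming scikit-learn and its bundled iris dataset (neither is named in the article): a decision tree and a logistic regression each expose per-feature explanations of their decisions without any extra tooling.

```python
# Minimal sketch of "built-in" explainability, assuming scikit-learn and its
# bundled iris dataset; purely illustrative, not a method from the article.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# A decision tree reports how much each feature contributed to its splits.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
for name, importance in zip(feature_names, tree.feature_importances_):
    print(f"tree importance  {name}: {importance:.2f}")

# A linear model's coefficients give each feature's learned weight;
# here we print the weights for the first class.
logreg = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, logreg.coef_[0]):
    print(f"logreg weight    {name}: {coef:+.2f}")
```
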
Here you will find what explainable AI means, why it matters for business use, and what forces are moving its adoption forward, as well as which are holding it back.

Read the full article at CIO magazine.