AI governance touches many functional areas within the enterprise — data privacy, algorithm bias, compliance, ethics, and much more. As a result, governing the use of artificial intelligence technologies requires action on many levels.
“It does not start at the IT level or the project level,” says Kamlesh Mhashilkar, head of the data and analytics practice at Tata Consultancy Services. AI governance also happens at the government level, at the board of directors level, and at the CSO level, he says.
In healthcare, for example, AI models must pass stringent audits and inspections, he says. Many other industries also have applicable regulations. “And at the board level, it’s about economic behaviors,” Mhashilkar says. “What kinds of risks do you embrace when you introduce AI?”
As for the C-suite, AI agendas are purpose-driven. For example, the CFO will be attuned to shareholder value and profitability. CIOs and chief data officers are also key stakeholders, as are marketing and compliance chiefs. And that’s not to mention customers and suppliers.
Not all companies will need to take action on all fronts in building out an AI governance strategy. Smaller companies in particular may have little influence on what big vendors or regulatory groups do. Still, all companies are or will soon be using artificial intelligence and related technologies, even if they are simply embedded in the third-party tools and services they use.
And when used without proper oversight, AI has the potential to make mistakes that harm business operations, violate privacy rights, run afoul of industry regulations, or create bad publicity for a company.
Here’s how forward-thinking companies are starting to address AI governance as they expand AI projects from pilots to production, focusing on data quality, algorithmic performance, compliance, and ethics.