Depending on which Terminator movies you watch, the evil artificial intelligence Skynet has either already taken over humanity or is about to do so. But it’s not just science fiction writers who are worried about the dangers of uncontrolled AI.
In a 2019 survey by Emerj, an AI research and advisory company, 14% of AI researchers said that AI was an “existential threat” to humanity. Even if the AI apocalypse doesn’t come to pass, shortchanging AI ethics poses big risks to society — and to the enterprises that deploy those AI systems.
Central to these risks are factors inherent to the technology itself, such as the difficulty of understanding how a particular AI system arrives at a given conclusion (its "explainability"), and factors endemic to an enterprise's use of AI, such as reliance on biased data sets or deploying AI without adequate governance in place.
And while AI can give businesses a competitive advantage in a variety of ways, from uncovering overlooked business opportunities to streamlining costly processes, deploying AI without adequate attention to governance, ethics, and evolving regulations can be catastrophic.