“Would I trust if my doctor says, ‘This is what ChatGPT is saying, and based on that, I’m treating you.’ I wouldn’t want that,” says Bhavani Thuraisingham, professor of computer science and founding director of the Cyber Security Research and Education Institute at UT Dallas.
And that was long before news came out that ChatGPT advised a man to replace table salt with sodium bromide, causing him to hallucinate and endure three weeks of treatment. “Today, for critical systems, we need a human in the loop,” Thuraisingham says.
She’s not the only one who thinks so. Keeping a human in the loop is the most common advice for reducing the risks associated with AI, and it is core to how many companies roll the technology out. At Thomson Reuters, for instance, keeping humans involved is integral to the company’s AI deployments.