OpenAI head calls for slow, careful release of AI — after releasing ChatGPT with no warning

(Image by Maria Korolov via Midjourney.)

I can’t tell if he’s just being tone-deaf or desperately trying to do some damage control, but after releasing ChatGPT on an unsuspecting world with no warning late last year, OpenAI CEO Sam Altman is now calling for the slow and careful release of AI.

If you remember, ChatGPT was released on November 30, 2022, just in time for take-home exams and final papers. Everyone started using it. Not just to make homework easier, but to save time on their jobs, or to create phishing emails and computer viruses. It reached one million users in just five days. According to UBS analysts, 100 million people were using it by January, making it the fastest-growing consumer application in history.

And according to a February survey by Fishbowl, a work-oriented social network, 43 percent of professionals now use ChatGPT or similar tools at work, up from 27 percent a month prior. And when they do, 70 percent of them don’t tell their bosses.

Last week, OpenAI released an API for ChatGPT, allowing developers to integrate it into their apps. Approval is automatic, and the cost is only a tenth of what OpenAI was charging for previous versions of its GPT models.
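To give a sense of how low the bar is, here’s a minimal sketch of what calling that API looked like with OpenAI’s Python client at the time. The API key and the prompt below are placeholders of my own, not anything from OpenAI’s docs.

```python
# pip install openai
import openai

openai.api_key = "sk-..."  # placeholder; use your own secret key

# gpt-3.5-turbo is the model OpenAI exposed for ChatGPT-style chat completions
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a two-sentence summary of universal basic income."},
    ],
)

# Print the assistant's reply from the first (and only) choice
print(response["choices"][0]["message"]["content"])
```

That’s essentially the whole integration: sign up, paste a key, and you’re shipping an AI feature.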

So. Slow and careful, right?

According to Altman, the company’s mission is to create artificial general intelligence.

That means building AIs that are smarter than humans.

He admits that there are risks.

“AGI would also come with serious risk of misuse, drastic accidents, and societal disruption,” he said.

He forgot about the killer robots that will wipe us all out, but okay.

(Image by Maria Korolov via Midjourney.)

He says that AGI can’t be stopped. It’s coming, and there’s nothing we can do about it. But it’s all good, because the potential benefits are so great.

Still, he says that the rollout of progressively more powerful AIs should be slow.

“A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place,” he said.

Maybe he should have considered that before putting ChatGPT out there.

“We think it’s important that efforts like ours submit to independent audits before releasing new systems,” he added.

Again, I’m sure that there are plenty of high school teachers and college professors who would have appreciated a heads-up.

However, he also said that he’s in favor of open source AI projects.

He’s not the only one — there are plenty of competitors out there furiously trying to come up with an open source version of ChatGPT that companies and individuals can run on their own computers without fear of leaking information to OpenAI. Or without having to deal with all the safeguards that OpenAI has been trying to put in place to keep people from using ChatGPT maliciously.

The thing about open source is that, by definition, it’s not within anyone’s control. People can take the code, tweak it, do whatever they want with it.
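For illustration, this is roughly what running an open model on your own hardware looks like with Hugging Face’s transformers library. The model name here is a hypothetical placeholder, not a reference to any specific project.

```python
# pip install transformers torch
from transformers import pipeline

# Hypothetical placeholder model name; substitute whichever open chat model you're testing.
generator = pipeline("text-generation", model="example-org/open-chat-model")

# The model weights sit on your own disk, so prompts never leave your machine,
# and there is no hosted safety layer between you and the output.
result = generator("Write a friendly reminder email about the team meeting.", max_new_tokens=100)
print(result[0]["generated_text"])
```

Once the weights are downloaded, nobody can revoke your access, patch the model, or monitor what you do with it.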

“Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history,” he said. “Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.”

There is one part of the statement that I found particularly interesting, however. He said that OpenAI has a cap on shareholder returns and is governed by a non-profit, which means that, if needed, the company can cancel its equity obligations to shareholders “and sponsor the world’s most comprehensive UBI experiment.”

UBI, or universal basic income, would be something like getting your Social Security check early. Instead of having to adapt to the new world, learn new skills, and find new meaningful work, you could retire to Florida and play shuffleboard. Assuming Florida is still above sea level. Or you could use the money to pursue your hobbies or your creative passions. As a journalist whose career is most definitely in the AI cross-hairs, color me curious.