Generative AI is one of the world’s top three geopolitical risks this year — right after Russia and China — according to a report released last month by the Eurasia Group, a US-based risk consultancy.
“This year will be a tipping point for disruptive technology’s role in society,” the report said.
According to the organization, the generative AI technology that was all over the news in 2022 is capable of creating realistic images, videos, and text with just a few sentences of guidance.
“Large language models like GPT-3 and the soon-to-be-released GPT-4 will be able to reliably pass the Turing test,” the report said.
The most famous application of these models is OpenAI's ChatGPT, but the technology has also been licensed to many other vendors; Microsoft has already begun adding it to Bing and has announced plans to embed it in Office and other Microsoft applications.
The Turing test is an experiment in which a human interacts with another entity via a computer and has to guess whether the entity on the other side is another human or an AI.
Some users are already convinced that ChatGPT is either sentient or is actually manned by an army of humans in the Philippines. And a Google engineer was fired last summer because he became convinced that Google's version of the technology, LaMDA, had become self-aware.
Now, these tools have become simple enough to use that anyone can harness the power of AI, the report said.
“These advances represent a step-change in AI’s potential to manipulate people and sow political chaos,” the report said. “When barriers to entry for creating content no longer exist, the volume of content rises exponentially, making it impossible for most citizens to reliably distinguish fact from fiction.”
This will have adverse impacts on political discourse. Conspiracy theorists can generate bot armies to support their views, as can dictators.
Companies can also be affected: malicious actors can impersonate key executives, drown legitimate product reviews in a sea of AI-generated comments, and flood social media with posts that move stock prices and overwhelm sentiment-driven investment strategies.
Implications for small business owners
If you are a small business owner, it’s time to create a strategy for responding to these threats.
In the OpenSim ecosystem, we’ve occasionally seen instances where individuals were impersonated by someone else in order to harm their reputations — or people created fake personas in order to promote a particular grid or service.
We can expect this kind of activity to accelerate as AI technology allows bad actors to operate on a much more massive scale than before.
At Hypergrid Business, we haven't yet seen a flood of AI-generated comment spam. Hopefully, the Disqus platform we use for comments will be able to filter out the worst of it before we have to deal with it.
Grids that have a social media presence should start thinking about a possible strategy, or a reaction plan, in case something happens. It’s always better to come up with a plan ahead of time instead of reacting in the moment based on emotion, which will usually just make the situation worse.
But there are, of course, also opportunities for businesses to use generative AI for good.
OpenSim grids can use the technology to power interactive NPCs that give visitors something interesting to talk to, use ChatGPT to create in-world storylines for users to experience, and use generative art platforms to create textures, images, 3D objects, and even entire scenes.
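One common way to wire an NPC to a large language model is to keep the NPC's persona as a system prompt and replay the conversation history on each turn. Here is a minimal sketch in Python; the persona text, speaker labels, and the final hand-off to a vendor's chat endpoint are all illustrative assumptions, not a tested OpenSim integration:

```python
# Sketch of wiring an in-world NPC to a chat-completion-style LLM API.
# Everything here is illustrative: the persona text and the vendor
# hand-off described in the comments are assumptions, not a tested
# integration with any particular grid.

def build_npc_messages(persona, history, visitor_line):
    """Assemble a chat transcript: the NPC persona as the system prompt,
    prior turns in order, then the visitor's newest line."""
    messages = [{"role": "system", "content": persona}]
    for speaker, text in history:
        # Turns spoken by the NPC map to the "assistant" role;
        # everything else is treated as visitor ("user") input.
        role = "assistant" if speaker == "npc" else "user"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": visitor_line})
    return messages

# A grid-side script would then send `messages` to its LLM vendor's
# chat endpoint (for example, OpenAI's chat completions API) and speak
# the returned reply through the NPC.
```

The advantage of rebuilding the message list every turn is that the NPC stays in character: the persona prompt is always present, and the model sees the full dialogue rather than isolated lines.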
Grids can also use AI to help create marketing and promotional content such as articles, videos, or podcast episodes.
Marketing is the single biggest challenge that OpenSim grids and service companies have today. If AI can reduce some of the burden, that will be a big win for the whole ecosystem.
Source: Hypergrid Business