How ChatGPT Can Help and Hinder Data Center Cybersecurity

The world changed on Nov. 30, 2022, when OpenAI released ChatGPT to an unsuspecting public.

Universities scrambled to figure out how they could assign take-home essays if students could simply ask ChatGPT to write them. Then ChatGPT passed law school tests, business school tests, and even medical licensing exams. Employees everywhere began using it to write emails, reports, and even computer code.

It’s not perfect and isn’t up to date on current events, but it’s more powerful than any AI system the average person has ever had access to before. It’s also more user-friendly than the enterprise-grade AI systems that came before it.

It seems that once a large language model like ChatGPT gets big enough – with enough training data, enough parameters, and enough layers in its neural networks – strange things start to happen. It develops “emergent properties” that aren’t evident or possible in smaller models. In other words, it starts to act as if it has common sense and an understanding of the world – or at least some approximation of those things.

Major technology companies scrambled to react. Microsoft invested $10 billion in OpenAI and added ChatGPT functionality to Bing, suddenly making the search engine a topic of conversation for the first time in a long time.
