The five threat actors had names just like the mechs seen in Guillermo del Toro’s “Pacific Rim.”
Warner Bros. Pictures; Jonathan Raa/NurPhoto via Getty Images
OpenAI says it has terminated the accounts of five malicious state actors. The actors were given codenames like “Charcoal Typhoon” and “Crimson Sandstorm.” OpenAI said they were affiliated with China, Iran, North Korea, and Russia.
OpenAI says it shut down the accounts of five state-linked parties attempting to use its platform for nefarious means. Which is well and good, but their codenames also sound like something straight out of a sci-fi movie.
On Tuesday, the ChatGPT maker said in a blog post that it had worked with Microsoft to “disrupt five state-affiliated malicious actors.”
The company named the five actors — China-affiliated “Charcoal Typhoon” and “Salmon Typhoon,” Iran-affiliated “Crimson Sandstorm,” North Korea-affiliated “Emerald Sleet,” and Russia-affiliated “Forest Blizzard.”
Fans of the monster-fighting mechs in Guillermo del Toro’s 2013 sci-fi film “Pacific Rim” might find those names a little familiar.
The movie, which spawned a sequel and anime series, featured giant robots with equally rad-sounding names. The Chinese mecha in the film was named “Crimson Typhoon,” while the Russian mecha was called “Cherno Alpha.”
In “Pacific Rim,” the Chinese mecha was called “Crimson Typhoon” (center).
Warner Bros. Pictures
OpenAI and Microsoft, which worked together to identify the five threat actors, named them according to Microsoft’s weather-themed naming convention.
For instance, threat actors associated with China are given the family name “Typhoon” and so on. Adjectives are then added at the front to distinguish between threat actors within the same family.
This has resulted in a vast assortment of names being logged by Microsoft. Besides “Crimson Sandstorm,” Iran is also affiliated with “Lemon Sandstorm” and “Marigold Sandstorm.”
“These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks,” OpenAI alleged in its blog post.
OpenAI said it terminated the accounts associated with the five actors and would be “taking a multi-pronged approach to combating malicious state-affiliated actors’ use of our platform.”
Besides monitoring and disrupting threat actors, the company said it has been working with industry partners to exchange information.
Managing the threats posed by AI appears to be a big priority for the company and its CEO, Sam Altman.
On Tuesday, Altman made a virtual appearance at Dubai’s World Governments Summit, where he talked about what keeps him up at night.
“Easy. It’s all of the sci-fi stuff,” Altman told attendees.
“I’m not that interested in the killer robots walking on the street direction of things going wrong,” he said. “I’m much more interested in the very subtle societal misalignments, where we just have these systems out in society, and through no particular ill intention, things just go horribly wrong.”