Wed. Jul 3rd, 2024

OpenAI Releases Plan to Prevent a Robot Apocalypse

REUTERS/Dado Ruvic/Illustration

OpenAI is putting together a new team of experts solely dedicated to preventing a potential robot uprising. The artificial intelligence company behind ChatGPT announced on Monday its plans for mitigating the dangers that may emerge from its technology—including cybersecurity risks and the potential that its bots may be used to create nuclear or biological weapons.

The company outlined the goals for the new “Preparedness Framework” in a 27-page document, saying that it would be used specifically to conduct regular tests and monitor its advanced models for any dangers they may eventually pose. The team would be dedicated to preventing such threats from emerging, while also ensuring that the company’s products are deployed responsibly.

“The central thesis behind our Preparedness Framework is that a robust approach to AI catastrophic risk safety requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment,” the paper reads.

Read more at The Daily Beast.
