
OpenAI is so worried about AI causing human extinction, it’s putting together a team to control ‘superintelligence’

OpenAI fears that superintelligent AI could lead to human extinction.
It is putting together a team to ensure that superintelligent AI aligns with human interests.
Beyond recruiting for the team, it is also dedicating 20% of its computing power towards this aim.

ChatGPT creator OpenAI is so worried about the potential dangers of smarter-than-human AI that it’s putting together a new team to ensure these advanced systems work for the good of people, not against them.

In a July 5 blog post, OpenAI said that though “superintelligent” artificial intelligence seemed far off, it could arrive within the decade. 

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI co-founder Ilya Sutskever and alignment head Jan Leike, who will co-lead the new team, wrote in the blog post.

And though this technology could help solve many of the world’s most pressing problems, superintelligent AI “could lead to the disempowerment of humanity or even human extinction,” the authors wrote. 

The new team, called Superalignment, aims to develop a roughly human-level AI system that can supervise superintelligent AI within the next four years.

OpenAI is currently hiring for the team. The company said it plans to dedicate 20% of its computing power towards this research, per the blog post.

OpenAI CEO Sam Altman has long called for regulators to treat AI risk as a global priority.

In May, Altman joined hundreds of other key figures in tech in signing an open letter containing one sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Other key figures in tech, including Elon Musk, have also sounded the alarm over AI and called for a six-month pause on its development, though some saw the proposed pause as a ploy for Musk to play catch-up.

To be sure, not everyone shares OpenAI’s concerns about future problems posed by superintelligent AI. 

In a letter published by the Distributed AI Research Institute on March 31, prominent AI ethicists called attention to concrete, present-day harms that AI companies are already exacerbating.

Read the original article on Business Insider