
AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather

AI godfather Yann LeCun has fired shots at notable AI leaders.

An AI godfather has had it with the doomsdayers. Meta’s Yann LeCun thinks tech bosses’ bleak comments on AI risks could do more harm than good. The naysaying is actually about keeping control of AI in the hands of a few, he said.

AI godfather Yann LeCun wants us to forget some of the more far-fetched doomsday scenarios.

He sees a different, real threat on the horizon: the rise of power-hungry one-percenters who rob everyone else of AI’s riches.

Over the weekend, Meta’s chief AI scientist accused some of the most prominent founders in AI of “fear-mongering” and “massive corporate lobbying” to serve their own interests.

He named OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei in a lengthy weekend post on X.

“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote, referring to these founders’ role in shaping regulatory conversations about AI safety. “They are the ones who are attempting to perform a regulatory capture of the AI industry.”

He added that if these efforts succeed, the outcome would be a “catastrophe” because “a small number of companies will control AI.”

That’s significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.

Altman, Hassabis, and Amodei did not immediately respond to Insider’s request for comment.

LeCun’s comments came in response to a post on X from physicist Max Tegmark, who suggested that LeCun wasn’t taking the AI doomsday arguments seriously enough.

“Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone,” Tegmark wrote, referring to the UK’s upcoming global AI safety summit.

Yann, I’d love to hear you make arguments rather than acronyms. Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone. https://t.co/Zv1rvOA3Zz

— Max Tegmark (@tegmark) October 29, 2023

LeCun says founder fretting is just lobbying

Since the launch of ChatGPT, AI’s power players have become major public figures.

But, LeCun said, founders such as Altman and Hassabis have spent a lot of time drumming up fear about the very technology they’re selling.

In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

The letter cited “profound risks to society and humanity” posed by hypothetical AI systems. Tegmark, one of the letter’s signatories, has described AI development as “a suicide race.”

LeCun and others say these kinds of headline-grabbing warnings are just about cementing power and skating over the real, imminent risks of AI.

Those risks include worker exploitation and data theft that generates profit for “a handful of entities,” according to the Distributed AI Research Institute (DAIR).

The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape.

LeCun has described how people are “hyperventilating about AI risk” because they have fallen for what he describes as the myth of the “hard take-off.” This is the idea that “the minute you turn on a super-intelligent system, humanity is doomed.”

But imminent doom is unlikely, he argues, because every new technology in fact goes through an orderly development process before wider release.

Every new technology is developed and deployed the same way:
You make a prototype, try it at a small scale, make limited deployment, fix the problems, make it safer, and then deploy it more widely.
At that point, governments regulate it and establish safety standards.
1/

— Yann LeCun (@ylecun) April 2, 2023

So the area to focus on is, in fact, how AI is developed right now. And for LeCun, the real danger is that the development of AI gets locked into private, for-profit entities that never release their findings, while AI’s open-source community gets obliterated.

His consequent worry is that regulators let it happen because they’re distracted by killer robot arguments.

Leaders like LeCun have championed open-source developers, since their work on tools that rival, say, OpenAI’s ChatGPT brings a new level of transparency to AI development.

LeCun’s employer, Meta, made its own large language model, Llama 2, which competes with OpenAI’s GPT models, (somewhat) open source. The idea is that the broader tech community can look under the hood of the model. No other big tech company has done a similar open-source release, though OpenAI is rumored to be thinking about it.

For LeCun, keeping AI development closed is a real reason for alarm.

“The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people’s entire digital diet,” he wrote.

“What does that mean for democracy? What does that mean for cultural diversity?”

Read the original article on Business Insider
