
Meta’s plan to stop AI from ruining elections is about to get its first big test

Meta is taking steps against AI abuse ahead of the EU’s Parliament election. The company is adding labels to AI content and new ad restrictions to improve transparency, and says it has invested more than $20 billion in safety and security since 2016.

Meta says it’s been working hard to prevent a repeat of the 2016 election, when Facebook became a breeding ground for misinformation.

Now, the EU Parliament elections will put the social networking platform to the test.

Meta released a statement on Sunday outlining a new plan to protect the integrity of the EU Parliament elections, which take place June 6 through 9.

“While each election is unique, this work drew on key lessons we have learned from more than 200 elections around the world since 2016,” Marco Pancini, Meta’s head of EU affairs, said in a statement.

Pancini said that Meta is focusing on three key areas: misinformation, influence operations, and generative AI abuse.

Misinformation refers to the spread of false information, while influence operations are coordinated deceptive campaigns. The company has so far partnered with 26 fact-checking organizations and releases a quarterly report on threat findings, according to the statement.

Most recently, the company has been fine-tuning its approach to generative AI. Meta joined the Partnership on AI, an organization that promotes guidelines and best practices, and signed the Tech Accord, which aims to prevent deceptive AI content on major platforms during the 2024 elections.

While all posts are subject to the same policy guidelines, Meta is taking extra steps to monitor AI content, according to the statement. The company works with independent fact-checking partners to review AI content, and if a piece of content is identified as fake, manipulated, or transformed, it will be ranked lower in users’ feeds.
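As a rough illustration of what such a demotion could look like inside a ranking pipeline, here is a minimal sketch; the verdict names, multipliers, and function are assumptions for illustration, not Meta’s actual feed-ranking code.

```python
# Illustrative sketch only -- not Meta's actual ranking code.
# Assumes an independent fact-checker's verdict is attached to a post
# before feed ranking runs.

from dataclasses import dataclass
from typing import Optional

# Hypothetical demotion multipliers per fact-check verdict.
DEMOTION_FACTORS = {
    "false": 0.1,
    "altered": 0.2,            # e.g. manipulated or AI-transformed media
    "missing_context": 0.5,
}

@dataclass
class Post:
    post_id: str
    base_score: float                  # score from the usual ranking signals
    fact_check_verdict: Optional[str]  # verdict from a fact-checker, if any

def ranked_score(post: Post) -> float:
    """Return the feed score after applying any fact-check demotion."""
    factor = DEMOTION_FACTORS.get(post.fact_check_verdict, 1.0)
    return post.base_score * factor

# Example: a flagged post drops below an unflagged one despite a higher base score.
flagged = Post("p1", base_score=0.9, fact_check_verdict="altered")
clean = Post("p2", base_score=0.6, fact_check_verdict=None)
assert ranked_score(flagged) < ranked_score(clean)
```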

As generative AI advances, it offers a quick and effective way to produce material for political campaigns. But it also may lead to more disinformation.

Multiple instances of deepfakes impersonating political figures, such as President Joe Biden or UK Prime Minister Rishi Sunak, threaten to spread fake news to anyone who has access to AI tools.

Meanwhile, a developer built a “propaganda machine” with OpenAI’s tools just to demonstrate how cheap and easy it is to create AI-powered propaganda. The project took two months to build and cost less than $400 a month to operate.

The company is also building tools to label AI-generated content created with other companies’ tools. It already automatically labels images created with Meta AI.

Additionally, Meta will be adding a feature that lets users disclose whether content uses AI-generated video or audio. Meta will add a separate label if the material is particularly high risk, and those who fail to label their own content may face consequences.
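Labels like these generally lean on provenance signals embedded in files by the generating tool, alongside user self-disclosure. The sketch below shows what such a check might look like once metadata has been extracted; the field names loosely follow IPTC/C2PA conventions, and the label text and logic are illustrative assumptions rather than Meta’s implementation.

```python
# Illustrative sketch only -- assumes image metadata has already been
# extracted (e.g. with an EXIF/IPTC/C2PA reader) into a plain dict.

from typing import Optional

# Provenance markers loosely based on published industry conventions
# (IPTC DigitalSourceType values, C2PA manifests); treat these as assumptions.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",
    "compositeWithTrainedAlgorithmicMedia",
}

def looks_ai_generated(metadata: dict) -> bool:
    """Heuristic check for AI-provenance signals in extracted metadata."""
    if metadata.get("iptc_digital_source_type") in AI_SOURCE_TYPES:
        return True
    if metadata.get("c2pa_manifest_present"):  # signed content credentials
        return True
    return False

def label_for(metadata: dict, user_disclosed: bool) -> Optional[str]:
    """Apply a label when AI use is detected or self-disclosed by the poster."""
    if looks_ai_generated(metadata) or user_disclosed:
        return "AI info"  # hypothetical label text
    return None

print(label_for({"iptc_digital_source_type": "trainedAlgorithmicMedia"}, False))  # AI info
print(label_for({}, user_disclosed=True))                                          # AI info
```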

Like users, advertisers will have to disclose whether they used AI to create their content.

Ads also have to display a “paid for by” disclaimer to show users who is behind each ad. Between July and December of 2023, Meta removed 430,000 ads in the EU for failing to include a disclaimer, according to the company’s statement.

The company will also have an Ad Library that shows what ads are running, who they are targeting, and how much was spent on them. Advertisers on Meta will have to go through a verification process to prove they are who they say they are and that they live in the EU.
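A minimal sketch of the kind of pre-publication check these rules imply, assuming a simple ad-submission record; the field names and rejection reasons are hypothetical and are not Meta’s advertising API.

```python
# Illustrative sketch only -- not Meta's advertising API.
# Models the disclosure and verification rules described above as a validation step.

from dataclasses import dataclass
from typing import List

@dataclass
class AdSubmission:
    advertiser_verified: bool  # passed identity and EU-residency verification
    paid_for_by: str           # "paid for by" disclaimer text
    uses_ai_media: bool        # creative contains AI-generated video, audio, or images
    ai_disclosed: bool         # advertiser disclosed the AI use

def validation_errors(ad: AdSubmission) -> List[str]:
    """Return the reasons a political or social-issue ad would be rejected."""
    errors = []
    if not ad.advertiser_verified:
        errors.append("advertiser has not completed verification")
    if not ad.paid_for_by.strip():
        errors.append("missing 'paid for by' disclaimer")
    if ad.uses_ai_media and not ad.ai_disclosed:
        errors.append("AI-generated media used but not disclosed")
    return errors

ad = AdSubmission(advertiser_verified=True, paid_for_by="", uses_ai_media=True, ai_disclosed=False)
print(validation_errors(ad))
# ["missing 'paid for by' disclaimer", 'AI-generated media used but not disclosed']
```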

Read the original article on Business Insider
