
The call to regulate AI is getting louder. But how exactly do you regulate such technology?


Last week, artificial intelligence pioneers and experts urged major AI labs to immediately suspend training of AI systems more powerful than GPT-4 for at least six months.

An open letter published by the Future of Life Institute warned that AI systems with “human-competitive intelligence” could become a major threat to humanity. Among the risks is the possibility of AI outsmarting humans, rendering us obsolete and taking control of civilization.

The letter highlights the need to develop a comprehensive set of protocols to govern the development and deployment of AI. It states:

These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger, unpredictable black-box models with emergent capabilities.

Typically, the battle for regulation has pitted governments and big tech companies against each other. But the recent open letter – so far signed by more than 5,000 people, including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – suggests more parties are finally moving in the same direction.

Can we really implement a streamlined, global framework for AI regulation? And if so, what would this look like?


Read more: I used to work at Google and now I’m an AI researcher. This is why it makes sense to slow down AI development

What regulations already exist?

In Australia, the government has established the National AI Centre to help develop the nation’s AI and digital ecosystem. Under this umbrella is the Responsible AI Network, which aims to promote responsible practices and provide leadership on laws and standards.

However, there is currently no specific regulation of AI and algorithmic decision-making. The government has taken a light-touch approach that broadly embraces the concept of responsible AI, but stops short of setting parameters to ensure it is achieved.

Similarly, the US has adopted a hands-off strategy. Lawmakers have shown little urgency in efforts to regulate AI, relying instead on existing laws to govern its use. The US Chamber of Commerce recently called for AI regulation to ensure it doesn’t hurt growth or become a national security risk, but no action has yet been taken.

Leading the way on AI regulation is the European Union, which is racing to pass an Artificial Intelligence Act. The bill sorts AI applications into three risk categories (illustrated in the short sketch after the list):

applications and systems that create “unacceptable risk”, such as government-run social scoring of the kind used in China, will be banned
applications deemed “high risk”, such as resume-scanning tools that rank job applicants, will be subject to specific legal requirements, and
all other applications will be largely unregulated.
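
To make the tiered structure concrete, here is a minimal sketch in Python of how such a three-tier scheme might sort applications and their obligations. It is purely illustrative: the tier names follow the list above, while the example applications and the obligations() helper are hypothetical, not drawn from the act or any official tooling.

```python
# Purely illustrative sketch of the EU AI Act's proposed three-tier scheme.
# Tier names follow the draft act; the example applications and this helper
# are hypothetical, not legal text or official tooling.

UNACCEPTABLE = "unacceptable risk"  # banned outright
HIGH = "high risk"                  # permitted, with specific legal requirements
MINIMAL = "minimal risk"            # largely unregulated

# Hypothetical example applications mapped to tiers.
RISK_TIERS = {
    "government-run social scoring": UNACCEPTABLE,
    "resume-scanning applicant ranker": HIGH,
    "email spam filter": MINIMAL,
}

def obligations(application: str) -> str:
    """Return the regulatory consequence implied by an application's tier."""
    tier = RISK_TIERS.get(application, MINIMAL)
    if tier == UNACCEPTABLE:
        return "banned"
    if tier == HIGH:
        return "subject to specific legal requirements"
    return "largely unregulated"

for app in RISK_TIERS:
    print(f"{app}: {obligations(app)}")
```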

Although some groups argue the EU’s approach will suppress innovation, it’s one Australia should watch closely, as it strikes a balance between providing predictability and keeping pace with the development of AI.

China’s approach to AI focuses on targeting specific algorithmic applications and writing regulations that address their deployment in certain contexts, such as algorithms that generate harmful information. While this approach offers specificity, it risks creating rules that quickly fall behind evolving technology.


Read more: AI chatbots with Chinese features: why Baidu’s ChatGPT rival may never measure up

The pros and cons

There are several arguments both for and against taking a cautious approach to controlling AI.

On the one hand, AI is celebrated for its ability to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias, plagiarize and – of course – some experts worry it threatens humanity’s collective future. Even OpenAI’s CTO, Mira Murati, has suggested there should be movement toward regulating AI.

Some scholars have argued that excessive regulation may hinder AI’s full potential and interfere with “creative destruction” – a theory suggesting long-standing norms and practices must be broken apart for innovation to thrive.

Likewise, over the years business groups have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t impede competition. And industry associations have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate.

But citizens seem to favour more oversight. According to reports from Bristows and KPMG, about two-thirds of Australians and Britons believe the AI industry should be regulated and held accountable.

What’s next?

A six-month pause on the development of cutting-edge AI systems could offer welcome respite from an AI arms race that shows no sign of letting up. However, to date there has been no effective global effort to meaningfully regulate AI. Efforts the world over have been fragmented, delayed and generally lax.

A global moratorium would be difficult to enforce, but not impossible. The open letter raises questions about the role of governments, which have been largely silent on the potential harm of highly capable AI tools.

If anything is to change, governments and national and supranational regulators will need to take the lead in ensuring accountability and safety. As the letter argues, decisions about AI at the societal level should not be in the hands of “unelected technology leaders.”

Governments must therefore work with industry to jointly develop a global framework with comprehensive rules for the development of AI. This is the best way to protect against harmful effects and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants compete for dominance over the future of AI.


Read more: The AI arms race highlights the urgent need for responsible innovation
