Q&A: White House AI advisor says US is 'catching up' to the tech, as Biden's new executive order demands transparency for the world's biggest AI models

President Joe Biden.

President Biden just signed an executive order regarding the development and use of AI technology. It demands a new level of transparency from companies developing AI, yet it has limitations. "The tech's had a bit of a head start," White House advisor Ben Buchanan told Insider.

This Q&A is based on a conversation with Ben Buchanan, advisor to the White House Office of Science and Technology Policy and the Biden administration.

President Biden on Monday signed a new executive order demanding greater transparency from major tech companies about the development of artificial intelligence models and tools, and setting new rules for how the tech can be used. It may be too little, too late.

The broad executive order touches on more than a dozen possible uses of AI and generative AI that already do, or could in the future, directly impact people's lives, such as when AI technology is used in corporate decision-making related to housing, hiring, or even jail time. It also demands a new level of transparency from companies engaged in the creation and development of AI tools.

None of the large language models and related AI tools released in the last year by Meta, Google, Microsoft, OpenAI, and others are subject to the safety testing outlined in the executive order. The size threshold is so high that most currently available models do not meet the criteria for the further transparency called for in Biden's executive order, although all of the major tech companies earlier this year agreed to adhere to standards of responsibility and training in their AI work.

Companies that do meet the threshold will need to notify the federal government of their work and share safety-testing results before releasing models to the public. The National Institute of Standards and Technology has been directed to set "rigorous standards" for such testing, including testing whether generative AI tools could direct users in creating biological weapons. Meta's Llama 2 model, for instance, is already capable of telling a user how to weaponize anthrax, as Insider reported.

“The tech has had a bit of a head start, but I think we moved very quickly to catch up,” Ben Buchanan, who has worked at the Biden White House since 2021, most recently as a special advisor on AI, told Insider in an 8-minute interview.

For Insider’s complete interview with Buchanan, edited for length and clarity, see below:

How often did you speak to the leaders of the big tech companies in the run-up to this executive order? I heard, for instance, that the White House spoke to Andrew Bosworth of Meta pretty frequently, a couple of times a month.

We had senior-level conversations with all of the companies, but not really about the EO. We had a lot of company conversations about the voluntary commitments that we rolled out in July. I don’t think anyone has spoken to Andrew Bosworth in particular since July when we rolled out the voluntary commitments. But we did engage at that level and the CEO level in the run-up. We have spoken to the companies some since then, but more about the voluntary commitments than about the EO.

Was there anything about the voluntary commitments, as you worked on them, that was a sticking point for the companies, that you got a lot of pushback?

We had a pretty collaborative relationship and we had pretty high standards for that. The companies acknowledged it and they know they need to develop safe, secure, and trustworthy products. I think we got to a good spot.

Within the order, it says that companies developing any foundation model that poses a serious risk to national security will need to be more transparent and disclose their safety testing, et cetera. Does that cover most anyone developing an LLM, or is there a certain size or threshold that somebody needs to meet to be required to do that disclosure?

There is a threshold and we tried to make the fact sheet a little more plain English, but you’ll see in the executive order itself when the text comes out, that the threshold applies to models that use more than 10 to the 26th FLOPs in training.

That threshold is adjustable by the Department of Commerce. Technology is moving very fast in this space, but that’s the baseline threshold. And then there’s a lower threshold if it’s a model trained primarily on biological sequence data because of the enhanced risk in that space. It’s 10 to the 23rd for bio, but generally speaking, it’s 10 to the 26th.
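For a rough sense of what those numbers mean in practice, here is a minimal sketch of how one might check a hypothetical training run against the thresholds Buchanan describes. The 6 × parameters × tokens rule of thumb for estimating training compute, and the example model size, are assumptions used for illustration only; they are not part of the executive order.

```python
# Sketch: comparing an estimated training run against the reporting
# thresholds Buchanan cites (10^26 FLOPs generally, 10^23 for models
# trained primarily on biological sequence data). The 6 * N * D
# heuristic for training FLOPs is an assumption for illustration.

GENERAL_THRESHOLD_FLOPS = 1e26   # baseline threshold for most models
BIO_THRESHOLD_FLOPS = 1e23       # lower threshold for bio-sequence models


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute with the common 6 * params * tokens heuristic."""
    return 6 * parameters * tokens


def crosses_threshold(parameters: float, tokens: float, bio_data: bool = False) -> bool:
    """Check an estimated run against the applicable reporting threshold."""
    threshold = BIO_THRESHOLD_FLOPS if bio_data else GENERAL_THRESHOLD_FLOPS
    return estimated_training_flops(parameters, tokens) > threshold


# Hypothetical example: a 70-billion-parameter model trained on 2 trillion
# tokens comes out around 8.4e23 FLOPs, well under the 1e26 general
# threshold but over the 1e23 bio threshold.
print(crosses_threshold(70e9, 2e12))                 # False
print(crosses_threshold(70e9, 2e12, bio_data=True))  # True
```

As the example suggests, today's widely available models sit orders of magnitude below the general threshold, which is why the disclosure requirements would mostly apply to future, larger training runs.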

That’s big, huge really. What happens when somebody, hypothetically, gives you training data that is flagged by a federal agency? What’s the process then? Does everything just stop and the company is in trouble?

A lot of that's still in development. In the EO, we're laying out the direction of travel here for the departments.

You can imagine that there’s some kind of legal authority that would come into play, depending on what the flag is. Just hypothetically, if there were a nuclear risk in play from a system, you would implicate the Atomic Energy Act and the like. So this is about getting the right information and then bringing the appropriate authorities to bear.

You also could imagine a world in which Congress passes legislation that builds on what we’re doing here and sets up a licensing regime. And a lot of members of Congress have talked about that. We’re not taking a position on that. But what we’re doing here is setting a direction of travel for the US government, which needs to know that these systems are safe, secure, and trustworthy before they’re rolled out.

A lot of the order is a directive for these various agencies to act. How long do they have to set these standards?

On the Defense Production Act regulations that force the disclosure, that’s 90 days. The companies are doing a lot of the red team testing already. The development of standards could be a little bit longer than that, but there’s some provisional guidance on standards in the EO itself that largely mirrors what we agreed to in the voluntary commitments, saying these are the kinds of things you have to red-team for.

The transparency that’s being asked for, is it just the future models or is it current models that have already been rolled out? Does it apply retroactively in any way?

The voluntary commitments apply to future models. As you can imagine, it's hard to do pre-release red team testing for a model that's already out there.

Do you think AI is in a hype cycle, and everybody’s overreacting to what it’s going to mean? Given how many models are out there, how powerful they already are, et cetera, are government regulators everywhere a little bit behind the tech right now?

I won’t comment on hype in terms of what you should invest in, but I think from a government perspective, we think there are real opportunities here and we think there are real risks. We wouldn’t be doing this accelerated comprehensive action if we thought this was all going to fade away in a week. This is a priority for the President and his team.

In terms of being behind the tech: the tech has had a bit of a head start, but I think we moved very quickly to catch up.

One thing I point to is we secured agreement from the G7 on the first-ever international code of conduct on AI, which builds upon the voluntary commitments that have been signed off on by all seven governments in the G7. The United States is working in conjunction with allies and partners and is moving very quickly.

Do you use generative AI, Ben? Does anybody in your office use it?

To my knowledge, we don't have ChatGPT on the systems here, but all of us use AI every single day as we go about daily life.

Have a tip or insight to share? Contact Kali Hays at khays@insider.com, on the secure messaging app Signal at 949-280-0267, or through DM on X at @hayskali. Reach out using a nonwork device.

Read the original article on Business Insider
