
A former OpenAI exec says AI could be the last technology humans ever invent

Zack Kass, a former executive of OpenAI, told BI he is an eternal optimist about AI’s ability to help solve our problems, but the world has to be willing to endure some unpleasant costs to achieve the technology’s potential.

Former OpenAI exec Zack Kass told BI he’s optimistic about AI’s ability to solve global problems.
While AI can usher in unimaginable innovation, Kass said we’ll also have to deal with growing pains.
Since there’s no going back now, he said it’s best to focus on effective policy and managing corruption.

With artificial intelligence influencing the way we do business, study, interact with the world, and even cook our favorite meals, some researchers are concerned the latest AI advancements are also unleashing a Pandora’s box of new, technology-driven woes upon the world.

But not Zack Kass. The former OpenAI executive, one of the company’s first hundred employees, ran its Go-to-Market division before leaving last year and sees AI as a gift.

Kass spoke to Business Insider to explain why he’s an eternal optimist about AI’s ability to help solve global problems, and why the world has to be willing to endure some unpleasant costs to unlock the technology’s full potential.

This conversation has been edited for length and clarity.

What’d you do at OpenAI and why’d you leave?

At OpenAI I was one of the first 100 employees and was the first employee on the Go-to-Market team, the head of Go-to-Market, so I got to build the foundations of what, today, is a very large business: the sales, partnerships, and customer success. After some family challenges, I made probably the hardest decision I’ll ever make: to come home to Santa Barbara and design a career around what really inspires me now, which is promoting AI as the future of the human experience. And when I left I spent a bunch of time talking to Sam [Altman] about how I could make the biggest impact, and where I landed was being the voice of the counter-narrative — that AI isn’t doom and gloom, that the future isn’t dystopian and post-apocalyptic, but that it is bright and full of more joy and less suffering. So now basically my purpose and mission is to promote a really exciting, bright future. I think I’m accused sometimes of being naive, and I certainly am happy to wear that label; I almost wear it proudly. But I basically believe if we don’t have people promoting this future, we don’t have a chance of building it.

You describe yourself as an AI futurist. What does that mean to you?

For me, it’s helping imagine a future in business and culture, in medicine, in education, powered by AI. It’s helping people imagine their businesses and their lives influenced by AI, and also how AI should best be harnessed given ethical or societal considerations, and how it should best be governed by policy.

So in the short term, what do you see as the most significant social benefits to AI?

In the short term, I think we’re gonna see a whole lot of first-world implications that are really uncomfortable for people, but actually net good. I think we’re gonna end up working a lot less, and people are terrified about those implications. But, frankly, I welcome it. Let’s work less; let’s go do the things that give us purpose and hope. In the second and third world, it’s not hard to imagine every child has an AI-powered teacher who knows what they know, knows how they like to learn, and can communicate with their family how to best help them. It’s not hard to imagine that everyone will have an AI-powered general physician who can help them diagnose and at least triage problems to a specialist. Last week, it went under the radar, but we discovered our first new antibiotic in 60 years, thanks to AI. So now you start compounding the implications on life sciences and biosciences, and it gets really, really exciting.

And is everyone you know in the industry as optimistic as you are?

That’s a really good question. I will say this: I don’t think that it is currently popular to espouse an overwhelmingly positive sentiment, because I think it is seen as naive. I think the biggest thing that is true among industry insiders and peers is that they all recognize the incredible potential of this technology, but where people seem to split is on the likelihood that we will actually realize it, whether because of bad policy, bad execution, or misalignment in the model. I have not met an industry insider whom I respected who didn’t acknowledge that this could be the last technology humans ever invent and that, from here on out, AI just propels us at an exceptional rate and we live more and more fulfilling, joyful lives with less suffering and, you know, we explore other worlds and galaxies, etc. But I think it’s also very, very true that a lot of industry insiders take a much more dubious view of the downside than I do. I think there are people who will tell you that the risk is so great that even the upside isn’t worth it. I just definitely don’t agree with that. I just don’t see any evidence to suggest that AI will want to kill us all.

So what are the risks that you see, if AI is not a Terminator-style world-ender?

I think that there are basically four downside risks. They are 1) idiocracy, 2) identity displacement, 3) the alignment problem or what we call existentialism, and 4) bad actors.

I think there’s a really real chance that a generation or multiple generations actually stop evolving because of the effects of no longer having to solve interesting, hard problems. If AI solves all of our problems, what will our brains do? And I think there’s a reasonable likelihood that, for a little while, our brains stop evolving.

Now I’ve got a whole lot of reasons to think that that’s not a long-term problem, in the same way that the internet produced social media, and social media is going to be seen as a blight on a generation, and then we will solve for it; in the same way that we invented coal, and coal killed a lot of coal miners and did a lot of bad things to the environment, and now we’re phasing coal out. These are, in my opinion, sort of short-term consequences.

The second is identity displacement. The real problem is going to be people who have spent lives, generations, you know, caring about a specific craft or trade, much like millworkers in the Industrial Revolution, or ironsmiths before them, who passed down these family trades and practices and apprenticeships, and suddenly, they lost their identities and their purpose as we invented these new tools to do their jobs. Their issue wasn’t ‘Hey, I can’t feed my family.’ They went and found other jobs. Their issue was, ‘Hey, this was my purpose. This is what I love.’ And I worry a ton about identity displacement. I worry a ton about, you know, for a little while, people really struggling to figure out who they are.

The really big one is existentialism, and the whole idea around existentialism is super reasonable, which is: can we train a model to be aligned with human interest? And my opinion is yes, we can and we will. There are so many people interested in working on this that it’s almost certainly the thing that will get regulated most aggressively. You can expect that a congressional committee will want to review all models that get released, by 2026 would be my guess, maybe 2025, and they’ll need to meet international alignment standards in the same way that we built international standards very quickly for nuclear disarmament and nuclear power. We will do the same here. Everyone is interested in solving this; even the greediest actors would want this problem solved. You can’t enjoy your winnings if AI has enslaved you. And so it’s just in everyone’s interest for the alignment problem to be solved.

And you’re confident AI wouldn’t turn against us somehow?

No one can give me a reason AI would want to enslave us if it is aligned and trained on the human experience. Why would AI want to harm humans? I think we have so internalized the idea that humans are bad, that anything we create would have to inherently be bad and I just don’t think that’s true. I also think it is so dangerous to anthropomorphize AI. I think we are so wrapped up in the Terminator Skynet idea, and I just don’t think that’s even remotely interesting given what we think we’re building.

So what does regulating the industry look like to keep things in alignment while also promoting the innovation needed to get us to Artificial General Intelligence (AGI)?

The three things that would stop us from building AGI are a compute deficit, an energy deficit, and over-regulation or bad policy. So we built the climate crisis through policy when we made nuclear power effectively illegal in the ’70s and ’80s. Now we’re all sitting around hand-wringing about climate change when we’ve been burning trillions of tons of coal. I point to nuclear as the best example of how one policy, especially one influenced by public perception, can have a really, really incredible consequence on the human experience. I think we’re going to be able to unwind a lot of climate change problems, but I don’t think we would have to be staring down this barrel if we had simply built nuclear power plants and accepted some of the costs of progress. You know, how many Three Mile Islands and Chernobyls would we have accepted in exchange for not burning trillions of tons of coal? I don’t know what the number is. But gosh, it’s more than two, and that’s really hard math. But we definitely do need policy, because I think simply saying, ‘Hey, everyone’s gonna behave appropriately at scale’ is also a little naive. I think it’d be really smart for policymakers to basically require that companies release annual economic impacts of their AI adoption and to manage the corruption that’s inherent in something like this. It’s really hard to actually manage, but I think we should be very cognizant of what’s happening in our society and in our business world as a result.

But how do we navigate those types of regulations when we’re concerned about how other countries, like China for example, are developing AI?

I think there is not enough attention paid to the issue of AI versus AI. And one of the risks I think a lot of people point out is that moving this fast can inspire a sort of arms race. I think that it’s really important to acknowledge that AGI, by definition, if aligned properly, accrues value to the human species. And so the goal, therefore, should be for the world’s good actors, whomever they may be, and however you want to classify them, to build an aligned AGI and to share it with the world. Now that being said, I do think it just remains imperative that we build international coalitions to create reasonable standards for AI. That seems like a really good way to sort of force actors to work collectively, to build toward the right things. And it stands to reason the sooner we do that, the sooner we’ll start to really define who is actually trying to build for the benefit of humanity versus who is trying to build for the benefit of a state or for the benefit of an individual or a group.

Read the original article on Business Insider
