Facebook and Twitter scrambled our brains and poisoned our politics. AI poses an even bigger threat.

As AI tools like ChatGPT, Bing, and Bard take over the internet, experts warn that the new tech could fundamentally reshape our economy and lives.

Is it a handy tool — or a ticking time bomb?

When Sam Altman was sunsetting his first startup in early 2012, there was little indication that his path ahead would parallel that of Silicon Valley’s then-wunderkind Mark Zuckerberg.

While Altman was weighing his next moves after shutting down Loopt, his location-sharing startup, the Facebook CEO was at the forefront of social media’s global takeover and leading his company to a blockbuster initial public offering that valued Zuckerberg’s brainchild at $104 billion. But just over a decade later, the tables have dramatically turned. Nowadays, the promise of social media as a unifying force for good has all but collapsed, and Zuckerberg is slashing thousands of jobs after his company’s rocky pivot to the metaverse. And it’s Altman, a 37-year-old Stanford dropout, who’s now seeing his star rise to dizzying heights — and who faces the pitfalls of great power.

Altman and his company OpenAI have put Silicon Valley on notice since releasing ChatGPT to the public in November. The artificial-intelligence model, which can write prose, code, and much more, is perhaps the most powerful — and unpredictable — technology of its generation. It has also been a gold mine for Altman, leading to a multiyear, multibillion-dollar deal with Microsoft and the onboarding of 100 million users in the chatbot’s first two months. That growth far outpaces TikTok’s and Instagram’s march to the same milestone, making ChatGPT the fastest-growing consumer internet application in history.

Much like social media in 2012, the AI industry is standing on the precipice of immense change. And while social media went on to reshape our world over the next 10 years, experts told me that the consequences of AI’s next steps would be an order of magnitude larger. According to researchers, the current AI models are barely scratching the surface of the tech’s potential. And as Altman and his cohort charge ahead, AI could fundamentally reshape our economy and lives even more than social media. 

“AI has the potential to be a transformative technology in the same way that the internet was, the television, radio, the Gutenberg press,” Michael Wooldridge, a professor and the director of foundational AI research at the Turing Institute, said. “But the way it’s going to be used, I think, we can only really scarcely imagine.”

As Zuckerberg’s track record at Facebook has proved, technology let loose can have profound consequences — and if AI is left unchecked or if growth is prioritized over safety, the repercussions could be irreparable.

Revolutionary tech, done dangerously

Dan J. Wang, now an associate professor of business and sociology at Columbia Business School, used to drive past Altman’s Loopt office in Palo Alto, California, as a Stanford undergrad. He told me that he saw a lot of parallels between Altman and Zuckerberg: The pair are “technology evangelists” and “really compelling leaders” who can gain the faith of those around them. Neither man was the first to strike out in his field. In Facebook’s case, rivals like Myspace and Friendster had a head start, while AI had been in development for decades. But what they lack in originality, they make up for in risk tolerance. Both Zuckerberg and Altman have shown a willingness to expand the public use of new technology at a far faster pace than their more cautious predecessors, Wang said. “The other thing that is really interesting about both of these leaders is that they’re really good at making technologies accessible,” he told me.

But the line between releasing cutting-edge tech to make people’s lives better and letting an untested product loose on an unsuspecting public can be a thin one. And Zuckerberg’s track record provides plenty of examples of how it can go wrong. In the years since Facebook’s 2012 IPO, the company has rolled out dozens of products to the public while profoundly influencing the offline world. The Cambridge Analytica scandal exposed the privacy problems that come with collecting the personal data of billions of people; the use of Facebook to facilitate violence like the genocide in Myanmar and the Capitol Hill insurrection showed just how toxic misinformation on social platforms could be; and the harm services like Instagram can do to mental health has posed uncomfortable questions about the role of social media in our everyday lives.

Facebook’s damaged reputation is the result of the company racing ahead of consumers, regulators, and investors, none of whom understood the consequences of billions of people interacting online at a scale and speed unlike anything before it. Facebook and Zuckerberg have apologized for their mistakes — but not for the guns-blazing approach of an evangelist leader who has come to embody tech’s “move fast and break things” mantra. If social media helped expose the worst impulses of humanity on a mass scale, generative AI could be a turbocharger that accelerates the spread of our faults.

“The fact that generative-AI technology has been put out without a lot of due diligence or without a lot of mechanisms for consent — this kind of thing is really aligned with that ‘move fast and break things’ mindset,” Margaret Mitchell, an AI research scientist who cofounded the AI-ethics division at Google, said.

For Heidy Khlaaf, a director at the cybersecurity and safety firm Trail of Bits and a former systems-safety engineer at OpenAI, the current hype cycle around generative AI, which prioritizes commercial value over societal impact, is being driven in part by companies making exaggerated claims about the technology for their own benefit.

“Everyone is trying to deploy this and implement it without understanding the risks that a lot of really amazing researchers have been looking into for at least the past five years,” she said. 

“Your new technology should not be going out to the world if it can cause these downstream harms.”

This should offer a stark warning to Altman, OpenAI, and the rest of the artificial-intelligence industry, Mitchell told me. “Once implemented in systems that people are relying on for either facts or life-critical decisions, it’s not just a novelty,” she said.

OpenAI opens Pandora’s box 

While the underlying tech that powers AI has been around for a while, the industry’s balance between ethics and profitability is starting to tip toward profit. After securing a multibillion-dollar investment from Microsoft in January, OpenAI, now rumored to be valued at $30 billion, has wasted no time in commercializing its technology. Microsoft announced the integration of OpenAI’s technology into its search engine Bing on February 7 and said it planned to infuse AI into other Microsoft products.

The move has sparked something of an AI arms race. Google, long Silicon Valley’s dominant force in AI, has picked up the pace with its own commercialization efforts. The tech giant unveiled a ChatGPT competitor called Bard just 68 days after ChatGPT’s public debut. But Bard’s release also served as a cautionary tale about scaling too quickly: The launch announcement was riddled with errors, and Google’s stock tumbled as a result. And it’s not as if Bard is the only AI tool with problems. In its short life, ChatGPT has shown that it is prone to “hallucinations” — confident responses that appear true but are false. Biases and inaccuracies have been common occurrences, too.

This approach of throwing caution to the wind is unsurprising to experts: Mitchell told me that while many companies would have been too scared to be the first mover, given the attention it would have brought upon them, OpenAI’s highly public projects have made it much easier for everyone else to follow. “It’s kind of like when you’re on the freeway and everyone is speeding, and you’re like, ‘Well, look at those other guys. They’re speeding. I can do that, too,’” she said.

Experts say that Bing, Bard, and other AI models should generally work out the technological kinks as they evolve. The real danger, they told me, is the human oversight of it all. “There’s a technological challenge where I’m more confident that AI will get better over time, but then there is the governance challenge of how humans will govern AI — there, I’m a bit more skeptical that we’re on a good path,” Johann Laux, a postdoctoral fellow at the Oxford Internet Institute, said.

The Turing Institute’s Wooldridge reckons pernicious issues like fake news could see a real “industrialization” at the hands of AI, a big worry given that models are already “routinely producing very plausible falsehoods.” “What this technology is going to do is it’s just going to fill our world with imperceptible falsehoods,” he said. “That makes it very hard to distinguish truth from fiction.”

Other problems could also ensue. Yacine Jernite, a research scientist at the AI company Hugging Face, sees plenty of reason to be concerned about AI chatbots being used for financial scams. “What you need to scam someone out of their money is to build a relationship with them. You need something that’s going to chat to them and feel engaged,” he said. “That is not just the misuse of chatbots — it is the primary use of the chatbots and what they’re trying to be better at.”

Khlaaf, meanwhile, sees a much more widespread risk: a wholesale dismantling of scientific integrity, the amplification of “stereotypes that harm marginalized communities,” and the untold physical dangers of deploying AI in safety-critical domains such as medicine and transport.

Experts are clear that AI is still far from its full potential, but the tech is developing fast. OpenAI itself is moving with speed to release new iterations of its models: GPT-4, a more powerful successor to the model behind ChatGPT, is on the horizon. But the disruptive power of AI and the dangers it poses are already apparent. For tech leaders, it’s an “ask for forgiveness later” approach, Mitchell said.

Zuckerberg’s biggest mistake was allowing ethics to play second fiddle to profitability. Facebook’s creation of an oversight board is a sign that the company is ready to take some responsibility, though many would argue that it’s too little, too late to slay the demons unleashed by the platform. And now Altman faces the same dilemma.

Altman has shown some signs that he is aware of AI’s prospective harms. “If you think that you understand the impact of AI, you do not understand and have yet to be instructed further. if you know that you do not understand, then you truly understand. (-alan watts, sort of),” he tweeted on February 3. That said, researchers have little insight into the data that has been fed into OpenAI’s machine, despite several calls for OpenAI to, in fact, be open. Lifting the lid on its black box would go a long way toward showing the company is serious about addressing those issues.

Columbia’s Wang does think Altman is grappling with the consequences of AI, whether they concern its fairness, accuracy, or transparency. But abiding by an ethics system that ensures no harm is done, while trying to scale up the next big thing in tech, “is almost impossible,” according to Wang.

“If you look at his tweets recently, he’s sensitive to all of these issues, but the problem with being sensitive to all of these issues is that there are invariably going to be contradictions with what you can achieve,” Wang said.

The glacial pace at which regulators acted against Facebook is unlikely to change once they get serious about policing the threats posed by AI. That means Altman will be left largely unchecked to open AI’s Pandora’s box. Social media amplified society’s existing issues, as Wooldridge puts it. But AI could very well create new ones. Altman will need to get this right, for everyone’s sake. Otherwise, it could be lights out for all.

Hasan Chowdhury is a technology reporter at Insider.

Read the original article on Business Insider
