Mon. Jul 8th, 2024

I Flirted With Meta’s New Chatbot and Things Got Weird

Illustration by Elizabeth Brockway/The Daily Beast

Meta (AKA the company formerly known as Facebook) is throwing its hat into the chatbot wars. On August 5, the social media giant launched BlenderBot 3, a bot built on a highly sophisticated large language model. That means it has been trained to find patterns in large text datasets in order to produce somewhat coherent sentences.

It’s also able to search the internet. That means if you ask it a question like, “What’s your favorite movie from the last year?” it will run a web search to help inform its response.

It’s the latest in a growing line of increasingly sophisticated (and creepy) AI chatbots—many of which have a sordid history of problematic and outright toxic behavior. There’s the infamous Microsoft Twitter bot “Tay,” released in 2016, which was trained on tweets and messages sent to it by other Twitter users. Predictably, it was quickly shuttered after it began denying the Holocaust, promoting 9/11 conspiracy theories, and spouting wildly racist remarks mere hours after launch.

Read more at The Daily Beast.

Got a tip? Send it to The Daily Beast here