From deepfakes to Bing’s chatbot, AI-generated content is everywhere. Here’s how to spot it.

OpenAI’s ChatGPT, a chatbot that was released in November 2022, has reignited concerns about the proliferation of AI-generated content.

ChatGPT’s popularity is stirring concerns about the proliferation of AI-generated content.
Researchers have developed tools to detect machine-made videos or text.
Simple steps like checking your source are crucial for the average media consumer, one researcher told Insider.

Concerns about artificial intelligence programs taking over jobs or robots going rogue are nothing new. But the debut of ChatGPT and Microsoft’s Bing chatbot has put some of those fears back at the forefront of the general public’s mind — and with good reason.

Professors are catching students cheating with ChatGPT, jobs initially thought to require a human’s judgment may soon be on the chopping block, and — like so many other AI models — tools like ChatGPT are still plagued by bias.

There’s also the ever-growing threat of misinformation, which can be all the more potent with AI chatbots.

Chelsea Finn, an assistant professor of computer science at Stanford University and a member of Google Brain’s robotics team, sees valid use cases for tools like ChatGPT.

“They’re useful tools for certain things when we ourselves know the right answer, and we’re just trying to use them to speed up our own work or to edit text, for example, that we’ve written,” she told Insider. “There are reasonable uses for them.”

The concern for Finn is when people start to believe everything that is produced by these models and when bad actors use the tools to deliberately sway public perception.

“A lot of the content these tools generate is inaccurate,” Finn said. “The other thing is that these sorts of models could be used by people who don’t have the best intentions and try to deceive people.”

Researchers have already developed tools to spot AI-generated content, claiming accuracy rates of up to 96%.

The tools will only get better, Finn said, but the onus will be on the public to be constantly mindful of what it sees on the internet.

Here’s what you can do to detect AI-generated content.

AI detection tools exist

There are several tools available to the public that can detect text generated by large language models (LLMs), the technology underlying chatbots like ChatGPT.

OpenAI, which developed ChatGPT, has an AI classifier that aims to distinguish between human-written and AI-written text, as well as an older detector demo. One professor who spoke with Insider used the latter tool to determine that a student essay was 99% likely to be AI-generated.

Eric Anthony Mitchell, a computer science graduate student at Stanford, and his colleagues developed a ChatGPT detector aptly called DetectGPT. Finn acted as an advisor for the project. A demo and paper on the tool were released in January.

All of these tools are in their early stages, take different approaches to detection, and have their own limitations, Finn said.

There are essentially two classes of tools, she explained. The first relies on collecting large amounts of text — some written by people, some by machine learning models — and training a classifier to distinguish between the two.
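
To make that idea concrete, here is a minimal sketch of the first class of detector. The TF-IDF features and logistic regression model are deliberately simple stand-ins for whatever a production system would actually use, and the tiny corpora are placeholders, not real training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpora: a real detector needs large amounts of
# representative data covering many languages, registers, and domains.
human_texts = [
    "honestly the movie kinda dragged in the middle, but the ending was great",
    "we ran out of milk again so I'm improvising the recipe with oat milk",
]
ai_texts = [
    "The film exhibits a compelling narrative arc and strong character development.",
    "In conclusion, there are several important factors to consider in this matter.",
]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 0 = human, 1 = AI

# TF-IDF features plus logistic regression: an illustrative stand-in
# for whatever features and model a production detector uses.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new passage is machine-generated.
print(detector.predict_proba(["It is important to note that..."])[0][1])
```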

The challenge with this approach is that it relies on a large amount of “representative data,” Finn said. This becomes an issue if, for example, the tool is only trained on English text, or on text written mostly in colloquial language.

If you were to feed this tool Spanish-language text, or technical text like something from a medical journal, it would struggle to detect AI-generated content.

OpenAI adds the caveat that its classifier is “not fully reliable” on short texts below 1,000 characters and texts written in other languages besides English. 

The second class of tools relies on the large language model’s own prediction of a text being AI-generated or human. It’s almost like asking ChatGPT if a text is AI-generated or not. This is essentially how Mitchell’s DetectGPT operates.

“One of the big upsides to this approach is you don’t have to actually collect the representative dataset, you actually just look at the model’s own predictions,” Finn said.

The limitation is that you need access to the underlying model, which is not always publicly available, Finn explained. In other words, researchers need access to a model like the one behind ChatGPT to run tests that “ask” the program whether a text is human- or AI-generated. The model behind ChatGPT is not currently open to researchers for that kind of testing.
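
For illustration, here is a rough sketch of the “probability curvature” idea described in the public DetectGPT paper: model-generated text tends to sit near a local maximum of the model’s log-probability, so small perturbations lower its score more than they would for human-written text. GPT-2 stands in here as the scoring model, and the random word swaps are a crude substitute for the paper’s T5 mask-filling perturbations; this is not the authors’ actual implementation.

```python
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_likelihood(text: str) -> float:
    """Average per-token log-probability of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

def perturb(text: str, n_swaps: int = 3) -> str:
    """Crude perturbation: swap a few randomly chosen word pairs.
    (The paper instead rewrites spans with a T5 mask-filling model.)"""
    words = text.split()
    if len(words) < 2:
        return text
    for _ in range(n_swaps):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def curvature_score(text: str, n_perturbations: int = 10) -> float:
    """Gap between the text's score and the average score of perturbed
    copies; a larger gap suggests machine-generated text."""
    original = log_likelihood(text)
    perturbed = [log_likelihood(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)
```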

Mitchell and his colleagues report that their tool successfully identified text generated by large language models 95% of the time.

Finn said every tool has its pros and cons, but the main question to ask is what type of text is being evaluated. DetectGPT had accuracy similar to the first class of detection tools overall, but it performed better on technical texts.

Detecting Deepfakes? Human eyes — and veins — provide clues

There are also tools to detect deepfakes, a portmanteau of “deep learning” and “fake” that refers to digitally fabricated images, videos, or audio.

Image forensics is a field that has existed for a long time, Finn said. As early as the 19th century, people were manipulating images using composites of multiple photos — and then came Photoshop.

Researchers at the University at Buffalo said they’ve developed a tool to detect deepfake images with 94% effectiveness. The tool looks closely at the reflections in a subject’s eyes. If the reflections in the two eyes don’t match, it’s a sign that the image was digitally rendered.
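
A toy version of that corneal-highlight cue might look like the sketch below: in a genuine photo, both eyes reflect the same light sources, so their bright highlights should roughly agree. The aligned grayscale eye crops, the fixed brightness threshold, and the intersection-over-union scoring are all illustrative assumptions, not the researchers’ actual pipeline.

```python
import numpy as np

def highlight_mismatch(left_eye: np.ndarray, right_eye: np.ndarray,
                       threshold: int = 200) -> float:
    """left_eye, right_eye: same-sized grayscale crops of the two corneas.
    Returns 1 - IoU of their bright-highlight masks
    (0.0 = highlights identical, 1.0 = completely different)."""
    left_mask = left_eye > threshold      # bright specular highlights
    right_mask = right_eye > threshold
    union = np.logical_or(left_mask, right_mask).sum()
    if union == 0:
        return 0.0  # no highlights found in either eye
    intersection = np.logical_and(left_mask, right_mask).sum()
    return 1.0 - intersection / union
```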

Microsoft announced its own deepfake detector, Microsoft Video Authenticator, ahead of the 2020 election with the goal of catching misinformation. The company tested the tool with Project Origin, an initiative that works with a team of media organizations, including the BBC and The New York Times, to provide journalists with tools to trace the origin of videos. According to the tech company, the detector closely examines small imperfections at the edge of a fake image that are undetectable by the human eye.

Last year, Intel announced its “real-time” deepfake detector, FakeCatcher, claiming a 96% accuracy rate. The tool looks for signs of “blood flow” in a real human face in a video and uses those clues to determine the video’s authenticity, according to the company.

“When our hearts pump blood, our veins change color. These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps,” the company wrote in an announcement of its tool. “Then, using deep learning, we can instantly detect whether a video is real or fake.”
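
The underlying signal, known as remote photoplethysmography, can be sketched in a few lines: skin color fluctuates slightly with each heartbeat, and averaging a color channel over a face crop reveals that periodic change. The pre-cropped face frames and simple green-channel average below are assumptions made for illustration; Intel’s spatiotemporal maps and deep-learning classifier are far more involved.

```python
import numpy as np

def dominant_pulse_frequency(face_frames: np.ndarray, fps: float = 30.0) -> float:
    """face_frames: array of shape (n_frames, height, width, 3), RGB crops
    of one face over time. Returns the dominant frequency (Hz) of the mean
    green-channel signal; a live face should show a peak in roughly the
    0.7-4 Hz heart-rate band (about 40-240 beats per minute)."""
    green = face_frames[:, :, :, 1].mean(axis=(1, 2))  # mean green per frame
    green = green - green.mean()                        # drop the DC component
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]  # skip the zero-frequency bin
```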

Detection tools are an evolving science. As models like ChatGPT or deepfake applications get better, the tools to detect them also have to improve.

“Unlike other problems, this one is constantly changing,” Ragavan Thurairatnam, founder of technology company Dessa, told The New York Times in a story about internet companies’ fight against deepfakes.

Other ways to spot AI-generated content

The effectiveness of detection tools still relies on an individual’s better judgment.

Darren Hick, a Furman University philosophy professor, previously told Insider that he turned to a ChatGPT detector for a student essay only after he noticed that the paper was well written but “made no sense” and was “just flatly wrong.” 

As Finn said, ChatGPT can be helpful when the user already knows the right answer. For average media consumers, the old adage of checking one’s source remains salient.

“I think it’s good to just try not to believe everything you read or see,” Finn said, whether that’s information from a large language model, from a person, or from the internet.

Social media makes media consumption a seamless experience, so it’s important for users to pause for a moment and check the account or outlet from which they’re seeing a piece of news, especially if it’s something sensational or particularly shocking, according to a guide on spotting fake news from Washington University in St. Louis.

Viewers should ask themselves if they’re seeing a video or text from a meme page, an entertainment site, an individual’s account, or a news outlet. After seeing a piece of information online and confirming the source, it helps to compare what else is out there on that subject from other reliable sources, according to the university’s guide. 

When it comes to AI-generated videos or images, there are also still visual cues the naked eye can detect. AI has been reported to have issues drawing hands or teeth.

“Usually there are some small artifacts, maybe in people’s eyes, or, if it’s in a video, the way that their mouth is moving looks a little bit unrealistic,” Finn said.

The photo-editing app Lensa AI, which recently became popular for its Magic Avatar feature, had a habit of leaving “ghost signatures” in the corner of its AI-generated portraits. That’s because the tool was trained on pre-existing images in which artists often left their signatures somewhere on their paintings, ARTnews reported.

“Right now it’s still possible to spot some of these if you’re looking for the right thing,” Finn said. “That said, in the long run, I suspect that these kinds of machine learning models will probably get better, and that may not be a reliable way to detect images and video in the future.”

Read the original article on Business Insider
