
Can you delete your ChatGPT conversation history? Here’s what OpenAI says happens if you disable it.

OpenAI is signaling efforts to give users more control over their conversation data.

OpenAI lets users disable their chat histories, saying those chats will be “permanently deleted.”
Microsoft has also taken steps to inform users about ways to review and erase search histories.
AI tools can improve in part based on feedback and conversations with users.

OpenAI has said that one of the ways ChatGPT gets better is through interactions with users. But as the mass experiment underway since the AI chatbot’s launch to the public last year moves past the novelty stage, the company has signaled it is thinking carefully about safety and trust.

ChatGPT users are already greeted with a pop-up alert that their conversations can be seen by “AI trainers,” and are warned not to type in “sensitive information.” In April, OpenAI said it’s also giving users the choice to disable their conversation history with ChatGPT, seeking to offer more visibility and control over data. (Insider’s Sarah Jackson has a helpful explainer on how to do that.)

Keeping conversation history off means those chats won’t be used to train the tool, and that any conversations held under that higher-privacy mode will be deleted after 30 days, according to OpenAI’s website.
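As a rough, purely hypothetical sketch of what a 30-day retention rule reduces to (the names and logic below are illustrative assumptions, not OpenAI’s actual data pipeline), the check is essentially a date comparison:

```python
from datetime import datetime, timedelta, timezone

# Purely illustrative sketch: these names and this logic are assumptions for
# explanation only, not OpenAI's actual implementation.
RETENTION_WINDOW = timedelta(days=30)

def past_retention_window(created_at: datetime, history_disabled: bool) -> bool:
    """True if a history-disabled conversation has aged past the 30-day window."""
    if not history_disabled:
        # Conversations kept in history are retained normally.
        return False
    return datetime.now(timezone.utc) - created_at > RETENTION_WINDOW

# Example: a chat started 45 days ago with history turned off is past the window.
started = datetime.now(timezone.utc) - timedelta(days=45)
print(past_retention_window(started, history_disabled=True))  # True
```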

The stakes can be high for both regular users and companies dealing with confidential information, who should also consider their own policies for how such tools should be used at work, said Duane Pozza, a partner at Wiley Rein LLP who advises on privacy, data, and other matters. 

“When looking at AI chatbots, there is a potential for these tools to collect a lot of consumer personal information that could include things like conversation histories,” he told Insider, speaking generally about such tools and not about any specific company. 

“Average consumers and businesses using these tools have to make sure they understand the privacy policies,” he added. “They should understand if they have options or settings to understand how data is collected by these tools.”  
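One concrete, entirely hypothetical example of the kind of workplace policy Pozza describes is a pre-submission scrubber that strips obviously sensitive patterns before text is pasted into a chatbot. The sketch below uses deliberately simplistic patterns and does not reflect any particular company’s tooling:

```python
import re

# Minimal sketch of a guardrail a company policy might require before employees
# paste text into a chatbot. The patterns are deliberately simplistic and
# illustrative; real tooling would be far more thorough.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# Contact [REDACTED EMAIL], card [REDACTED CARD_NUMBER]
```

A real deployment would rely on far more robust detection, but even a simple filter like this illustrates the sort of setting-level control such policies aim for.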

A representative for OpenAI did not comment beyond pointing to the company’s resources on its website. 

The privacy of user data on popular websites has been the subject of heightened consumer protection scrutiny over the past decade, amid the rise of social media sites. Meta, for instance, will be making payments to Facebook users after reaching a $725 million settlement over data issues involving Cambridge Analytica.  

The popularity of AI websites may raise similar concerns, given the scale of potential users wrangling with privacy questions, said Rudina Seseri, founder and managing partner of Glasswing Ventures, a venture firm that invests in AI.

“I reiterate best practices here — absolutely don’t share with ChatGPT what you don’t want the world to know,” she said. 

“And this is not somehow grounded on any mal-intent from OpenAI, or, forget ChatGPT, any large language model,” she said. “It also has to do with the fact that, the more surface — if you were to think of the digital world as a surface — the more surface, the more reach, the more opportunity for exploitation.”  

Microsoft’s new Bing search bot launched in February has also rapidly gained ground, amassing more than 100 million daily users in March. 

The company offers a “privacy dashboard” where users can get a sense of how their search history is used and explore options to clear that history. The dashboard essentially allows users to “view, export, and delete stored conversation history,” according to a recently updated Microsoft document titled “The new Bing: Our approach to Responsible AI.” 

A note on the page says Bing uses “web search history to improve your search experience by showing you suggestions as you type, providing personalized results, and more.”  

Microsoft also generally uses privacy measures such as encryption, and keeps customer data “for as long as it is necessary,” a company representative said in a statement. 

“We also provide users with transparency and control over their search data via the Microsoft Privacy Dashboard,” the representative said.

Read the original article on Business Insider
