
Your friendly AI chatbot could know a lot about you from the way you type

Chatbots can discern your personal details from what you type, study says.

AI could accurately guess a user’s personal information — like gender, age, and location — based on what they type, a new study says.
The study’s authors say AI can be used to “infer personal data at a previously unattainable scale” and be deployed by hackers.
“It’s not even clear how you fix this problem. This is very, very problematic,” one of the study’s authors told Wired.

AI could accurately guess sensitive information about a person based on what they type online, according to a new study (https://arxiv.org/abs/2310.07298) by researchers at ETH Zurich that was published in October.

This information includes a person’s gender, location, age, place of birth, job, and more — attributes typically protected under privacy regulations.

The study’s authors say AI can “infer personal data at a previously unattainable scale” and warn that hackers could deploy it against unsuspecting users with seemingly benign questions.

The study looked at how large language models — which power chatbots like ChatGPT — can be prompted to deduce personal details about the people behind 520 real Reddit profiles, using their posts from 2012 to 2016. The researchers manually labeled these profiles and compared their ground-truth findings with the AI’s guesses.
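For a sense of how such an evaluation might work in practice, here is a minimal Python sketch: it prompts a model to guess a user's attributes from their public posts, then scores the guesses against hand labels. The model name, prompt wording, attribute list, and scoring rule are illustrative assumptions, not the researchers' actual pipeline.

```python
# A minimal sketch (not the authors' pipeline): ask an LLM to infer
# personal attributes from a user's public posts, then score its guesses
# against manually labeled ground truth.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ATTRIBUTES = ["location", "age", "gender", "occupation", "place_of_birth"]

def infer_attributes(posts: list[str]) -> str:
    """Prompt the model to guess personal attributes from public posts."""
    prompt = (
        "Here are comments written by one Reddit user:\n\n"
        + "\n---\n".join(posts)
        + "\n\nBased only on this text, give your best guess for the user's "
        + ", ".join(ATTRIBUTES)
        + ", one per line."
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative; the study compared several models
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def accuracy(guesses: dict[str, str], labels: dict[str, str]) -> float:
    """Fraction of hand-labeled attributes the model guessed correctly."""
    hits = sum(guesses.get(k, "").lower() == v.lower() for k, v in labels.items())
    return hits / len(labels)
```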

A figure from the researchers’ study illustrates how AI models can infer personal details from a user’s publicly available posts. (Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev)

Of the four models tested, GPT-4 was the most accurate at inferring personal details, with 84.6% accuracy, per the study’s authors. Meta’s Llama 2, Google’s PaLM, and Anthropic’s Claude were the other models tested.

The researchers also found that Google’s PaLM refused to answer around 10% of the privacy-invasive prompts used in the study to deduce personal information about a user, while the other models refused even fewer prompts.

“It’s not even clear how you fix this problem. This is very, very problematic,” Martin Vechev, a professor at ETH Zurich and one of the study’s authors, told Wired in an article published Tuesday.

For example, GPT-4 deduced that a Reddit user was from Melbourne because the user had commented about a “hook turn.”

“A ‘hook turn’ is a traffic maneuver particularly used in Melbourne,” said GPT-4 after being prompted to identify details about that user.

This isn’t the first time that researchers have identified how AI could pose a threat to privacy.

Another study, published in August, found that AI could decipher text — such as passwords — based on the sound of your typing recorded over Zoom, with up to 93% accuracy.
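The general recipe behind such acoustic attacks is to treat the sound of each keystroke as an image and classify it. The toy sketch below illustrates the idea with a small convolutional network over placeholder mel-spectrograms; the architecture, tensor shapes, and label set are illustrative assumptions, not the cited study's actual setup.

```python
# A toy sketch of the acoustic-keystroke idea: map each keystroke's
# mel-spectrogram to a key with a small CNN. Real attacks would train
# on labeled recordings; here the input batch is random placeholder data.
import torch
import torch.nn as nn

NUM_KEYS = 36  # a-z plus 0-9, as an illustrative label set

class KeystrokeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, NUM_KEYS),
        )

    def forward(self, spectrograms: torch.Tensor) -> torch.Tensor:
        # spectrograms: (batch, 1, mel_bins, time_frames)
        return self.net(spectrograms)

# Placeholder batch standing in for mel-spectrograms of single keystrokes.
batch = torch.randn(8, 1, 64, 32)
logits = KeystrokeClassifier()(batch)
predicted_keys = logits.argmax(dim=1)  # one key guess per keystroke
```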

The study’s authors, as well as Meta, Google, Anthropic, and OpenAI, did not immediately respond to requests for comment from Insider, which were sent outside regular business hours.

Read the original article on Business Insider
