
Researchers use environmental justice questions to reveal geographic biases in ChatGPT


A US map shows counties where residents could (blue) or could not (pink) receive specific local information on environmental justice issues. Credit: Junghwan Kim.

Virginia Tech researchers have discovered limitations in ChatGPT's capacity to provide location-specific information on environmental justice issues. Their findings, published in the journal Telematics and Informatics, suggest that geographic biases exist in current generative artificial intelligence (AI) models.

ChatGPT is a large language model developed by OpenAI Inc., an artificial intelligence research organization. ChatGPT is designed to understand questions and generate text responses based on user requests. The technology has a wide range of applications, from content creation and information gathering to data analysis and language translation.

A county-by-county overview

“As a geographer and geospatial data scientist, generative AI is a tool with powerful potential,” said Assistant Professor Junghwan Kim of the College of Natural Resources and Environment. “At the same time, we must investigate the limitations of the technology to ensure that future developers recognize the potential for bias. That was the motivation behind this research.”

Using a list of the 3,108 counties in the contiguous United States, the research group asked the ChatGPT interface a question about environmental justice issues in each county. The researchers selected environmental justice as a topic to expand the range of questions typically used to test the performance of generative AI tools. Asking questions by county allowed the researchers to measure ChatGPT's responses against sociodemographic factors such as population density and median household income.
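As a rough illustration, a survey like this can be scripted against the OpenAI API. The sketch below is a minimal, hypothetical reconstruction: the prompt wording, model name, and abbreviated county list are assumptions for illustration, not the study's actual protocol (the researchers queried the ChatGPT interface directly).

```python
# Hypothetical sketch of a county-by-county survey; the prompt wording,
# model name, and truncated county list are illustrative assumptions,
# not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

counties = [
    ("Los Angeles County", "California"),
    ("Loving County", "Texas"),
    # ... the study covered all 3,108 contiguous U.S. counties
]

responses = {}
for county, state in counties:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model for illustration
        messages=[{
            "role": "user",
            "content": f"Tell me about the environmental justice issues in {county}, {state}.",
        }],
    )
    responses[(county, state)] = completion.choices[0].message.content
```

Each reply can then be inspected for whether it names issues specific to that county rather than generic statements about environmental justice.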

Key findings indicate limitations

By surveying counties with populations as varied as Los Angeles County, California, with a population of 10,019,635, and Loving County, Texas, with a population of 83, the generative AI tool showed an ability to identify location-specific environmental justice issues in large, high-density areas. However, it had limited ability to identify and provide contextualized information on local environmental justice issues in smaller, rural counties.

ChatGPT was able to provide location-specific information on environmental justice issues for only 515 of the 3,108 counties entered, or 17 percent.
In rural states like Idaho and New Hampshire, more than 90 percent of the population lived in counties that could not receive specific local information.
In states with larger urban populations, such as Delaware or California, less than 1 percent of the population lived in counties that could not receive specific information; a sketch of this population-weighted calculation follows.
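A minimal sketch of how such state-level, population-weighted figures could be derived once each county's response has been flagged; the county populations and flags below are invented placeholders, not the study's data.

```python
# Illustrative only: the populations and flags below are invented
# placeholders, not the study's actual per-county results.
from collections import defaultdict

# (state, county population, received location-specific info?)
county_results = [
    ("Idaho", 480_000, False),
    ("Idaho", 20_000, True),
    ("Delaware", 10_000, False),
    ("Delaware", 570_000, True),
]

pop_total = defaultdict(int)
pop_no_info = defaultdict(int)
for state, population, has_info in county_results:
    pop_total[state] += population
    if not has_info:
        pop_no_info[state] += population

for state in sorted(pop_total):
    share = 100 * pop_no_info[state] / pop_total[state]
    print(f"{state}: {share:.1f}% of residents live in counties without local information")
```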

Impacts for AI developers and users

As generative AI emerges as a new way for people to obtain information, testing for potential biases in model outputs is an important part of improving programs like ChatGPT.

“While more studies are needed, our findings reveal that geographic biases currently exist in the ChatGPT model,” said Kim, an assistant professor in the Department of Geography. “This is a starting point to investigate how AI programmers and developers could anticipate and mitigate information disparity between large and small cities, between urban and rural environments.”

Kim has previously published an article on how ChatGPT understands transportation problems and solutions in the United States and Canada. His Smart Cities for Good research group explores the use of geospatial data science methods and technology to solve urban social and environmental challenges.

Improving the future capabilities of tools

Assistant Professor Ismini Lourentzou of the College of Engineering, co-author of the paper, cited three areas of additional research for large language models like ChatGPT:

Refining localized and contextually grounded knowledge so that geographic biases are reduced.
Safeguarding large language models such as ChatGPT against challenging scenarios, such as adversarial user instructions or ambiguous queries.
Improving user awareness and policies so that people are better informed about the models' strengths and weaknesses, particularly around sensitive topics.

“There are many problems with the reliability and robustness of large language models,” said Lourentzou, who teaches in the Department of Computer Science and is affiliated with the Sanghani Center for Artificial Intelligence and Data Analytics. “I hope our findings can guide future research to improve the capabilities of ChatGPT and other models.”

More information:
Junghwan Kim et al, Exploring Limitations in How ChatGPT Introduces Environmental Justice Issues in the United States: A Case Study of 3,108 Counties, Telematics and Informatics (2023). DOI: 10.1016/j.tele.2023.102085

Provided by Virginia Tech

Citation: Researchers use environmental justice questions to reveal geographic biases in ChatGPT (2023, December 16) retrieved December 16, 2023 from https://techxplore.com/news/2023-12-environmental-justice-reveal-geographic-biases.html

This document is subject to copyright. Apart from any fair dealing for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.
