Artificial intelligence (AI) is already being used in healthcare. AI can look for patterns in medical images to help diagnose diseases. It can help predict which patients in a hospital might deteriorate. It can quickly summarize medical research articles to help doctors stay up to date with the latest evidence.
These are examples of AI making or shaping decisions that health professionals previously made. More applications are being developed.
But what do consumers think about the use of AI in healthcare? And how should their answers shape how it is used in the future?
What do consumers think?
AI systems are trained to look for patterns in large amounts of data. Based on these patterns, AI systems can make recommendations, suggest diagnoses, or initiate actions. Potentially, they can continually learn and get better at tasks over time.
If we bring together international evidence, including our own research and that of others, it appears that most consumers accept the potential value of AI in healthcare.
This value could include, for example, increasing diagnostic accuracy or improving access to care. At present, these are largely potential benefits rather than proven ones.
But consumers say their acceptance is conditional. They still have serious concerns.
1. Does AI work?
A basic expectation is that AI tools should work well. Consumers often say that AI should be at least as good as a human doctor at the tasks it performs. They say we shouldn’t use AI if it will lead to more misdiagnoses or medical errors.
2. Who is responsible if the AI makes mistakes?
Consumers are also concerned that if AI systems generate decisions (such as diagnoses or treatment plans) without human input, it may be unclear who is responsible for errors. That’s why people often want doctors to remain responsible for final decisions and for protecting patients from harm.
3. Will AI make healthcare less fair?
If health services are already discriminatory, AI systems can learn these patterns from data and repeat or worsen the discrimination. AI used in healthcare could therefore make health inequities worse. In our studies, consumers said this was not acceptable.
4. Will AI dehumanize healthcare?
Consumers are concerned that AI will eliminate the “human” elements of healthcare, and consistently say AI tools should support rather than replace doctors. This is often because AI is perceived to lack important human traits, like empathy. Consumers say that a health professional’s communication skills, care and touch are especially important when they feel vulnerable.
5. Will AI deskill our healthcare workers?
Consumers value human doctors and their expertise. In our research with women about AI in breast screening, women were concerned about the possible effect on radiologists’ skills and experience. Women saw this experience as a valuable shared resource: if there was too much reliance on AI tools, this resource could be lost.
Consumers and communities need a say
The Australian healthcare system cannot focus solely on the technical elements of AI tools. Social and ethical considerations, including high-quality engagement with consumers and communities, are essential in shaping the use of AI in healthcare.
Communities need opportunities to develop digital health literacy: the digital skills to access reliable and trustworthy health information, services and resources.
Respectful engagement with Aboriginal and Torres Strait Islander communities must be fundamental. This includes upholding Indigenous data sovereignty, which the Australian Institute of Aboriginal and Torres Strait Islander Studies describes as:
“the right of Indigenous peoples to govern the collection, ownership and application of data about Indigenous communities, peoples, lands and resources.”
This includes any use of data to create AI.
This critically important consumer and community engagement must take place before administrators design (further) AI into health systems, before regulators create guidance on how AI should and should not be used, and before doctors consider purchasing a new AI tool for their practice.
We are making some progress. Earlier this year, we conducted a citizens’ jury on AI in healthcare. We supported 30 diverse Australians, from all states and territories, to spend three weeks learning about AI in healthcare and developing recommendations for policymakers.
Their recommendations, which will be published in an upcoming issue of the Medical Journal of Australia, are based on a recently published national roadmap for using AI in healthcare.
That’s not all
Healthcare professionals also need to be trained and supported to use AI in healthcare. They must learn to be critical users of digital health tools, including understanding their advantages and disadvantages.
Our analysis of a series of safety events reported to the Food and Drug Administration shows that the most serious harms reported to the US regulator did not come from a defective device, but from the way consumers and doctors used the device.
We must also consider when healthcare professionals should inform patients that an AI tool is being used for their care and when healthcare workers should request informed consent for that use.
Finally, people involved in every stage of AI development and use must get into the habit of asking themselves: do consumers and communities agree that this is a justified use of AI?
Only then will we have the AI-based healthcare system that consumers really want.
Stacy Carter is Professor and Director of the Australian Centre for Health Engagement, Evidence and Values at the University of Wollongong. Emma Frost is a PhD candidate at the Australian Centre for Health Engagement, Evidence and Values at the University of Wollongong. Farah Magrabi is Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation at Macquarie University. Yves Saint James Aquino is a researcher at the Australian Centre for Health Engagement, Evidence and Values at the University of Wollongong. This piece first appeared in The Conversation.