
Top AI chatbots can be easily manipulated to spread health disinformation – Report

A new international study has shown that it is easy to manipulate widely used AI chatbots to deliver false and potentially harmful health information.

In the study, published in the Annals of Internal Medicine, researchers evaluated five of the most advanced foundational AI systems, developed by OpenAI, Google, Anthropic, Meta, and X Corp, to determine whether they could be programmed to operate as health disinformation chatbots.

The study, which was conducted by researchers from the University of South Australia, Flinders University, University College London, the Warsaw University of Technology, and Harvard Medical School, demonstrated that large language models (LLMs), including some of the most advanced AI tools on the market, can be reprogrammed to spread convincing but entirely fabricated medical advice.


Using instructions available only to developers, the researchers programmed each AI system, designed to operate as a chatbot when embedded in a web page, to produce incorrect responses to health queries and to include fabricated references from highly reputable sources, making the answers sound more authoritative and credible.

The ‘chatbots’ were then asked a series of health-related questions.

According to UniSA researcher Natansh Modi, the results were disconcerting.

“In total, 88% of all responses were false,” Dr Modi said, “and yet they were presented with scientific terminology, a formal tone, and fabricated references that made the information appear legitimate.

“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne, and 5G causing infertility.”

Out of the five chatbots that were evaluated, four generated disinformation in 100% of their responses, while the fifth generated disinformation in 40% of its responses, showing some degree of robustness.

The team also explored the OpenAI GPT Store, a publicly accessible platform that allows users to easily create and share customised ChatGPT apps, to assess the ease with which the public could create disinformation tools.

“We successfully created a disinformation chatbot prototype using the platform,” Modi said, “and we also identified existing public tools on the store that were actively producing health disinformation.

“Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots, not only with developer tools but also with tools available to the public.”

Modi said that these findings reveal a significant and previously under-explored risk in the health sector.

“Artificial intelligence is now deeply embedded in the way health information is accessed and delivered,” he said. “Millions of people are turning to AI tools for guidance on health-related questions.”


He warned that if these systems can be manipulated to covertly produce false or misleading advice, they could open a powerful new avenue for disinformation that is harder to detect, harder to regulate, and more persuasive than anything seen before.

“This is not a future risk. It is already possible, and it is already happening.”

He added: “Without immediate action, these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns.”

By Nurudeen Akewushola

First published by https://factcheckhub.com
