Evaluating AI systems for motivational interviewing in chronic disease management
Changing health habits – like quitting smoking, exercising more, or sticking to prescribed treatments – is difficult but crucial for preventing and managing chronic diseases. Motivational interviewing (MI), a patient-centered counseling method that helps people find their own motivation to change, has proven effective across many health care settings.
Yet despite strong evidence, MI is not widely used in clinical practice due to challenges like limited time, training demands and payment barriers. Advances in artificial intelligence, however, are opening new possibilities to bring MI to more people through digital tools.
AI-powered chatbots, apps and virtual agents can simulate the supportive, empathetic conversations at the heart of MI. Using approaches ranging from scripted dialogues to advanced large language models like GPT-4 (the model underlying ChatGPT), these tools provide around-the-clock, judgment-free support. They may be especially helpful for people who do not seek traditional behavioral health care.
Early studies suggest these AI tools are feasible and acceptable, but it remains unclear how closely they follow core MI principles such as empathy and promoting autonomy, and whether they effectively change behaviors. Evaluating this "MI fidelity" is challenging, as traditional methods need detailed human review and don't scale well.
To fill these important knowledge gaps, researchers from Florida Atlantic University's Charles E. Schmidt College of Medicine conducted the first scoping review of studies on AI-driven systems designed to deliver motivational interviewing.
They focused on exploring how AI tools such as chatbots and large language models are being used to deliver MI, what is known about their usability and acceptability, the extent to which these systems adhere to core MI principles, and the behavioral or psychological outcomes reported so far.
Results, published in the Journal of Medical Internet Research, reveal that the most commonly used AI tools were chatbots, along with some virtual agents and mobile apps, built on technologies ranging from rule-based systems to advanced models like GPT-3.5 and GPT-4. While all aimed to simulate motivational interviewing, the quality and rigor of their evaluations varied. Only a few studies addressed safety concerns around AI-generated content, and most did not detail safeguards against misinformation or inappropriate responses.
While only a few studies reported actual behavioral changes, most focused on important psychological factors like readiness to change and feeling understood. Importantly, no studies looked at long-term behavioral outcomes, and follow-up periods were often short or missing entirely. So, while AI tools appear able to deliver motivational content and influence early signs of change, their ability to create lasting behavior shifts remains unclear.