Viewing Study NCT06865534


Ignite Creation Date: 2025-12-25 @ 2:32 AM
Ignite Modification Date: 2025-12-27 @ 11:14 PM
Study NCT ID: NCT06865534
Status: RECRUITING
Last Update Posted: 2025-08-26
First Post: 2025-02-11
Is NOT Gene Therapy: True
Has Adverse Events: False

Brief Title: Large Language Models to Aid Gynecological Oncology Treatment
Sponsor: Philipps University Marburg
Organization:

Study Overview

Official Title: Medical Students and Their Perception of Large Language Models (LLMs) in Gynecologic Oncology
Status: RECRUITING
Status Verified Date: 2025-08
Last Known Status: None
Delayed Posting: No
If Stopped, Why?: Not Stopped
Has Expanded Access: False
If Expanded Access, NCT#: N/A
Has Expanded Access, NCT# Status: N/A
Acronym: EASING
Brief Summary: This trial aims to assess the impact of providing medical students with access to large language models, compared with treatment guideline PDFs, on treatment concordance with a conventional multidisciplinary tumor board.
Detailed Description: Advanced artificial intelligence (AI) technologies, particularly large language models such as OpenAI's ChatGPT, hold significant potential for enhancing medical decision-making. While ChatGPT was not specifically designed for medical applications, it has shown utility in various healthcare scenarios, including answering patient inquiries, drafting medical documentation, and aiding clinical consultations. Despite these advancements, its role in supporting treatment decision-making, particularly in complex oncological cases, remains underexplored.

Treatment decision-making in gynecological oncology is a multifaceted process that integrates evidence-based guidelines, tumor biology, patient-specific factors, and clinical expertise. AI tools like ChatGPT could assist in synthesizing relevant guideline-based recommendations, improving decision accuracy, and facilitating more efficient clinical workflows. However, ChatGPT is not specifically tailored for oncological treatment decisions and lacks comprehensive validation in this domain. Additionally, it may generate misinformation or plausible-sounding but inaccurate recommendations, which could impair clinical judgment. Therefore, understanding how medical professionals, including students and early-career physicians, interact with such AI tools is essential before broader integration into clinical practice. Locally deployable models, such as Llama, enable secure, on-premise use, while retrieval-augmented generation can ground their recommendations in the applicable treatment guidelines.
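
As an illustration only, the following minimal Python sketch shows what such a retrieval-augmented setup can look like: guideline text is split into excerpts, the excerpts most relevant to a case vignette are retrieved (here with a toy term-overlap scorer), and a grounded prompt is assembled for the locally deployed model. The function names, excerpt texts, and prompt wording are assumptions for illustration and are not taken from the study protocol.

from collections import Counter

def tokenize(text):
    # Lowercase, split on whitespace, and strip simple punctuation.
    return [t.lower().strip(".,;:()") for t in text.split()]

def top_chunks(question, chunks, k=3):
    # Toy retriever: rank guideline excerpts by term overlap with the question.
    q_terms = Counter(tokenize(question))
    ranked = sorted(
        chunks,
        key=lambda c: sum(q_terms[t] for t in set(tokenize(c))),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question, chunks):
    # Assemble a grounded prompt: retrieved guideline excerpts plus the case.
    context = "\n\n".join(f"[Guideline excerpt]\n{c}" for c in chunks)
    return (
        "Answer the case using only the guideline excerpts below.\n\n"
        f"{context}\n\nCase vignette: {question}\nProposed treatment:"
    )

# Usage with placeholder data (not real guideline content):
chunks = [
    "Excerpt A: first-line systemic therapy recommendations ...",
    "Excerpt B: surgical staging recommendations ...",
    "Excerpt C: follow-up and surveillance recommendations ...",
]
question = "Text of one gynecologic oncology case vignette"
prompt = build_prompt(question, top_chunks(question, chunks, k=2))
# `prompt` would then be sent to the locally deployed model (for example a
# Llama instance served on-premise); the model call itself is omitted here.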

This study will investigate the impact of language models on treatment decision support for medical students managing gynecological oncology cases. It uses a crossover design in which participants are randomized into two groups. All participants begin with access to ChatGPT for two vignettes. They then work through two cases using a locally deployed language model followed by two cases relying on guideline PDFs, or the same blocks in reverse order, depending on group assignment.
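
For readability, the sketch below (illustrative only) enumerates the two tool sequences of this crossover design and randomly allocates participants to one of them. The 1:1 simple randomization, the participant IDs, and the function names are assumptions; the protocol does not specify the randomization procedure.

import random

# Both groups start with two ChatGPT vignettes; group A then uses the local
# model before the guideline PDFs, group B uses the PDFs first.
SEQUENCES = {
    "A": ["ChatGPT", "ChatGPT", "local LLM", "local LLM", "guideline PDF", "guideline PDF"],
    "B": ["ChatGPT", "ChatGPT", "guideline PDF", "guideline PDF", "local LLM", "local LLM"],
}

def allocate(participant_ids, seed=None):
    # Simple 1:1 randomization to sequence A or B (assumed, not protocol-specified).
    rng = random.Random(seed)
    return {pid: rng.choice(["A", "B"]) for pid in participant_ids}

for pid, arm in allocate(["P01", "P02", "P03", "P04"], seed=42).items():
    print(pid, "->", arm, SEQUENCES[arm])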

Each participant will analyze clinical cases, propose treatment plans, and rate both their confidence in their decisions and the usability of the decision support system. This study aims to provide insights into the potential benefits and limitations of integrating AI tools like ChatGPT into oncological treatment decision-making.

Study Oversight

Has Oversight DMC: False
Is an FDA Regulated Drug?: False
Is an FDA Regulated Device?: False
Is an Unapproved Device?: None
Is a PPSD?: None
Is a US Export?: None
Is an FDA AA801 Violation?:

Secondary ID Infos

Secondary ID: 25-29 ANZ
Type: OTHER
Domain: Philipps University Marburg