Viewing Study NCT07179861


Ignite Creation Date: 2025-12-24 @ 11:47 AM
Ignite Modification Date: 2026-02-28 @ 6:07 AM
Study NCT ID: NCT07179861
Status: COMPLETED
Last Update Posted: 2025-09-23
First Posted: 2025-09-18
Is NOT Gene Therapy: True
Has Adverse Events: False

Brief Title: Comparing Artificial Intelligence and Physicians: A Vignette-Based Study in Pediatric Clinical Decision-Making
Sponsor: Haseki Training and Research Hospital
Organization: Haseki Training and Research Hospital

Raw JSON

{
  "hasResults": false,
  "derivedSection": {
    "miscInfoModule": {"versionHolder": "2025-12-24"}
  },
  "protocolSection": {
    "designModule": {
      "studyType": "OBSERVATIONAL",
      "designInfo": {"timePerspective": "CROSS_SECTIONAL", "observationalModel": "COHORT"},
      "enrollmentInfo": {"type": "ACTUAL", "count": 30},
      "patientRegistry": false
    },
    "statusModule": {
      "overallStatus": "COMPLETED",
      "startDateStruct": {"date": "2025-08-27", "type": "ACTUAL"},
      "expandedAccessInfo": {"hasExpandedAccess": false},
      "statusVerifiedDate": "2025-09",
      "completionDateStruct": {"date": "2025-09-11", "type": "ACTUAL"},
      "lastUpdateSubmitDate": "2025-09-17",
      "studyFirstSubmitDate": "2025-09-11",
      "studyFirstSubmitQcDate": "2025-09-11",
      "lastUpdatePostDateStruct": {"date": "2025-09-23", "type": "ESTIMATED"},
      "studyFirstPostDateStruct": {"date": "2025-09-18", "type": "ESTIMATED"},
      "primaryCompletionDateStruct": {"date": "2025-09-10", "type": "ACTUAL"}
    },
    "outcomesModule": {
      "primaryOutcomes": [
        {"measure": "AI Interpretation Accuracy (%)", "timeFrame": "Day 1", "description": "Proportion of correct laboratory/imaging interpretations or appropriate next-test selections, per AI tool and pooled; stratified by difficulty tier. Unit: percent (0-100)."},
        {"measure": "AI Diagnostic Accuracy (%)", "timeFrame": "Day 1", "description": "Proportion of vignettes with a correct primary diagnosis produced by each anonymized AI tool and pooled across tools. Correctness is defined against a pre-specified reference answer key; results are also stratified by pre-defined difficulty tiers (easy/moderate/difficult/very difficult). Unit of measure: percent (0-100)."},
        {"measure": "AI Medication-Dosing Accuracy (%)", "timeFrame": "Day 1", "description": "Proportion of dose recommendations meeting pediatric standards (weight- or BSA-based ranges, route, frequency) per reference rubric, per AI tool and pooled; stratified by difficulty tier. Unit: percent (0-100)."}
      ],
      "secondaryOutcomes": [
        {"measure": "Change in Physician Diagnostic Accuracy (percentage points) (Group 2 only)", "timeFrame": "Day 1: Baseline (pre-AI) and immediate Post-AI within the same session (0-15 min after baseline).", "description": "Post-AI accuracy minus pre-AI accuracy per participant on the same case set; also categorized as beneficial (incorrect→correct), harmful (correct→incorrect), or no change. Accuracy is the proportion of cases with a correct final diagnosis according to a prespecified answer key."},
        {"measure": "Confidence Shift (Δ on a 1-10 scale) (Group 2 only)", "timeFrame": "Day 1: Baseline (pre-AI) and immediate Post-AI within the same session (0-15 min after baseline).", "description": "Post-AI self-rated confidence minus pre-AI confidence; association with correctness is examined. Unit: scale points (-9 to +9)."},
        {"measure": "Answer-Change Frequency (%) (Group 2 only)", "timeFrame": "Day 1", "description": "Proportion of vignettes for which physicians revised their initial answer after AI suggestions; reported overall and by difficulty tier. Unit: percent (0-100)."},
        {"measure": "AI Response Time (seconds per vignette)", "timeFrame": "Day 1", "description": "Time from vignette display to final AI output, reported per tool and pooled; also by difficulty tier. Unit: seconds."},
        {"measure": "Net Benefit Index of AI Exposure (percentage points) (Group 2 only)", "timeFrame": "Day 1", "description": "Beneficial change rate (incorrect→correct) minus harmful change rate (correct→incorrect) for diagnostic items; sensitivity analyses for dosing/interpretation. Unit: percentage points."}
      ]
    },
    "oversightModule": {"oversightHasDmc": false, "isFdaRegulatedDrug": false, "isFdaRegulatedDevice": false},
    "conditionsModule": {"conditions": ["Artificial Intelligence (AI) in Diagnosis", "Decision Support Systems, Clinical", "Clinical Decision-making", "Pediatrics"]},
    "referencesModule": {
      "references": [
        {"pmid": "40489764", "type": "BACKGROUND", "citation": "Su H, Sun Y, Li R, Zhang A, Yang Y, Xiao F, Duan Z, Chen J, Hu Q, Yang T, Xu B, Zhang Q, Zhao J, Li Y, Li H. Large Language Models in Medical Diagnostics: Scoping Review With Bibliometric Analysis. J Med Internet Res. 2025 Jun 9;27:e72062. doi: 10.2196/72062."},
        {"pmid": "39504445", "type": "BACKGROUND", "citation": "Bicknell BT, Butler D, Whalen S, Ricks J, Dixon CJ, Clark AB, Spaedy O, Skelton A, Edupuganti N, Dzubinski L, Tate H, Dyess G, Lindeman B, Lehmann LS. ChatGPT-4 Omni Performance in USMLE Disciplines and Clinical Skills: Comparative Analysis. JMIR Med Educ. 2024 Nov 6;10:e63430. doi: 10.2196/63430."},
        {"pmid": "39509461", "type": "BACKGROUND", "citation": "Cross JL, Choma MA, Onofrey JA. Bias in medical AI: Implications for clinical decision-making. PLOS Digit Health. 2024 Nov 7;3(11):e0000651. doi: 10.1371/journal.pdig.0000651. eCollection 2024 Nov."},
        {"pmid": "32908284", "type": "BACKGROUND", "citation": "Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ; SPIRIT-AI and CONSORT-AI Working Group; SPIRIT-AI and CONSORT-AI Steering Group; SPIRIT-AI and CONSORT-AI Consensus Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020 Sep;26(9):1351-1363. doi: 10.1038/s41591-020-1037-7. Epub 2020 Sep 9."},
        {"pmid": "35584845", "type": "BACKGROUND", "citation": "Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, Denniston AK, Faes L, Geerts B, Ibrahim M, Liu X, Mateen BA, Mathur P, McCradden MD, Morgan L, Ordish J, Rogers C, Saria S, Ting DSW, Watkinson P, Weber W, Wheatstone P, McCulloch P; DECIDE-AI expert group. Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ. 2022 May 18;377:e070904. doi: 10.1136/bmj-2022-070904."}
      ]
    },
    "descriptionModule": {
      "briefSummary": "This study evaluates how well anonymized artificial-intelligence (AI) tools perform on standardized pediatric case vignettes and whether showing AI suggestions can improve clinicians' answers. About 30 board-certified/eligible pediatric specialists at a single hospital complete a one-time session. Participants are randomized to two groups. Group A (n≈15): physicians answer each vignette once. Group B (n≈15): physicians answer and rate confidence (1-10), then review anonymized suggestions from five different AI tools (tool names not shown) and may keep or change their answer; changes and confidence are recorded.\n\nPrimary focus: measure AI performance (diagnostic accuracy, medication-dosing accuracy, interpretation accuracy) overall and by difficulty tier, and record AI response time. Secondary focus: quantify how AI suggestions affect human performance (change in accuracy, direction of change, confidence shift, and time). No patients or biospecimens are involved; risks are minimal (time and possible discomfort with performance review). Findings may inform safe, evidence-based ways to use AI alongside clinicians in pediatrics."
    },
    "eligibilityModule": {
      "sex": "ALL",
      "stdAges": ["ADULT"],
      "maximumAge": "40 Years",
      "minimumAge": "28 Years",
      "samplingMethod": "NON_PROBABILITY_SAMPLE",
      "studyPopulation": "Board-certified/eligible general pediatrics specialists recruited from SBÜ Sultangazi Haseki Training and Research Hospital network.",
      "healthyVolunteers": true,
      "eligibilityCriteria": "Inclusion Criteria:\n\n* Board-certified or board-eligible pediatric specialist (general pediatrics) (in the first 10 years of expertise)\n* Actively practicing at the participating institution/network at the time of enrollment.\n* Able and willing to complete all vignette items individually in a single session and to follow study instructions for the assigned cohort (direct answers or confidence rating + viewing anonymized AI suggestions).\n* Fluent in Turkish and able to use a computer interface.\n* Provides written informed consent.\n\nExclusion Criteria:\n\n* Pediatric subspecialist practice as primary role (e.g., cardiology, infectious diseases, neurology, neonatology, etc.), to maintain a homogeneous general pediatrics cohort.\n* Prior access to or participation in creating the study vignettes, answer keys, or scoring rubrics; direct involvement with the study team.\n* Inability to complete the session without external help or use of non-protocol resources (internet/AI tools) during answering (outside of anonymized AI suggestions shown by the system in Group 2).\n* Failure to complete ≥90% of items or major protocol deviation (e.g., discussion with others during the task).\n* Any condition judged by investigators to interfere with valid participation (e.g., severe time constraints, inability to provide consent)."
    },
    "identificationModule": {
      "nctId": "NCT07179861",
      "briefTitle": "Comparing Artificial Intelligence and Physicians: A Vignette-Based Study in Pediatric Clinical Decision-Making",
      "organization": {"class": "OTHER", "fullName": "Haseki Training and Research Hospital"},
      "officialTitle": "A Prospective, Cross-Sectional, Vignette-Based Observational Study Comparing Clinical Decision-Making Performance of Pediatricians and AI Models",
      "orgStudyIdInfo": {"id": "140-2025"}
    },
    "armsInterventionsModule": {
      "armGroups": [
        {"label": "Group/Cohort 1: Direct Answer (Physician-only)", "description": "Board-certified/eligible pediatric specialists answer each standardized vignette once, without confidence scoring and without viewing AI suggestions.\n\nOutcomes captured: Diagnostic accuracy, dosing accuracy, interpretation accuracy, completion time."},
        {"label": "Group/Cohort 2: Confidence + AI Suggestions", "description": "Pediatric specialists first answer and rate confidence on a 1-10 scale; then they view anonymized suggestions from five distinct AI tools (tool names not shown) and may keep or revise their answer. All changes and confidence shifts are recorded.\n\nOutcomes captured: Pre- vs post-AI accuracy (and direction of change), dosing and interpretation accuracy changes, confidence shift, completion time.", "interventionNames": ["Other: AI Suggestions (Anonymized 5-tool panel)", "Other: Confidence Rating Task (1-10 Likert)"]}
      ],
      "interventions": [
        {"name": "AI Suggestions (Anonymized 5-tool panel)", "type": "OTHER", "description": "What: Display of AI-generated suggestions for each vignette, aggregated from five large language model tools (names not shown to participants).\n\nWhen/Who: Shown only in Group 2, after the physician's initial answer and confidence score.\n\nPurpose: Measure AI performance (primary) and quantify the effect of AI suggestions on physicians' answers (secondary).\n\nApplies to: Group 2.", "armGroupLabels": ["Group/Cohort 2: Confidence + AI Suggestions"]},
        {"name": "Confidence Rating Task (1-10 Likert)", "type": "OTHER", "description": "What: Self-rated confidence for the initial answer on a 1-10 scale.\n\nWhen/Who: Group 2, before viewing AI suggestions.\n\nPurpose: Quantify confidence changes pre- vs post-AI and relate confidence to correctness.\n\nApplies to: Group 2.", "armGroupLabels": ["Group/Cohort 2: Confidence + AI Suggestions"]}
      ]
    },
    "contactsLocationsModule": {
      "locations": [
        {"zip": "34010", "city": "Istanbul", "state": "Sultangazi", "country": "Turkey (Türkiye)", "facility": "SBÜ Sultangazi Haseki Training and Research Hospital", "geoPoint": {"lat": 41.01384, "lon": 28.94966}}
      ]
    },
    "ipdSharingStatementModule": {
      "ipdSharing": "NO",
      "description": "IPD Description:\n\nNo IPD will be shared. The dataset comprises detailed, item-level responses from a small, single-center cohort of pediatric specialists. Despite de-identification, the risk of re-identification is non-trivial given granular performance metrics and professional identifiers. The informed consent did not include permission to share raw individual responses outside the study team.\n\nPlan to Share Supporting Materials: Yes. Supporting Documents: Study Protocol, Statistical Analysis Plan, scoring rubrics, redacted vignette templates, and analysis code.\n\nTime Frame: Available within 6 months after the primary manuscript is published and for at least 36 months thereafter.\n\nAccess Criteria and URL: Materials will be provided upon reasonable request to the Principal Investigator (email: drberkerokay@gmail.com). A data use agreement will be required; use is limited to non-commercial research and aggregate reporting."
    },
    "sponsorCollaboratorsModule": {
      "leadSponsor": {"name": "Haseki Training and Research Hospital", "class": "OTHER"},
      "responsibleParty": {"type": "PRINCIPAL_INVESTIGATOR", "investigatorTitle": "MD - Pediatrician (Principal Investigator)", "investigatorFullName": "Berker Okay", "investigatorAffiliation": "Haseki Training and Research Hospital"}
    }
  }
}