Raw JSON
{
  "hasResults": false,
  "derivedSection": {
    "miscInfoModule": {"versionHolder": "2025-12-24"}
  },
  "protocolSection": {
    "designModule": {
      "phases": ["NA"],
      "studyType": "INTERVENTIONAL",
      "designInfo": {
        "allocation": "RANDOMIZED",
        "maskingInfo": {"masking": "NONE"},
        "primaryPurpose": "OTHER",
        "interventionModel": "PARALLEL",
        "interventionModelDescription": "One intervention group and one control group"
      },
      "enrollmentInfo": {"type": "ACTUAL", "count": 41}
    },
    "statusModule": {
      "overallStatus": "COMPLETED",
      "startDateStruct": {"date": "2019-04-19", "type": "ACTUAL"},
      "expandedAccessInfo": {"hasExpandedAccess": false},
      "statusVerifiedDate": "2019-06",
      "completionDateStruct": {"date": "2019-05-12", "type": "ACTUAL"},
      "lastUpdateSubmitDate": "2019-06-14",
      "studyFirstSubmitDate": "2019-06-11",
      "studyFirstSubmitQcDate": "2019-06-13",
      "lastUpdatePostDateStruct": {"date": "2019-06-18", "type": "ACTUAL"},
      "studyFirstPostDateStruct": {"date": "2019-06-17", "type": "ACTUAL"},
      "primaryCompletionDateStruct": {"date": "2019-05-12", "type": "ACTUAL"}
    },
    "outcomesModule": {
      "primaryOutcomes": [
        {
          "measure": "Change in empathy",
          "timeFrame": "before intervention to 20 minutes",
          "description": "Jefferson Scale of Empathy Questionnaire, Self administered Likert scale of 1-7 (Strongly Disagree to Strongly Agree)"
        },
        {
          "measure": "Change in geriatric attitudes",
          "timeFrame": "before intervention to 20 minutes",
          "description": "UCLA Geriatric Attitudes Scale, Self administered Likert scale of 1-7 (Strongly Disagree to Strongly Agree)"
        },
        {
          "measure": "Change in embodiment",
          "timeFrame": "before intervention to 20 minutes",
          "description": "VR Embodiment Scale, Self administered Likert scale of 1-7 (Strongly Disagree to Strongly Agree)"
        }
      ]
    },
    "oversightModule": {
      "oversightHasDmc": false,
      "isFdaRegulatedDrug": false,
      "isFdaRegulatedDevice": false
    },
    "conditionsModule": {"conditions": ["Empathy"]},
    "descriptionModule": {
      "briefSummary": "The Embodied Empathy pilot study proposes to use VR technology to create original narratives of real life patients from their own perspectives for medical students to embody. Instead of using an animated avatar, researchers will use live-action first-person 220 degree video to capture these vignettes. In \"Virtual Body Swap: A New Feasible Tool to Be Explored in Health and Education,\" Oliveria discusses the impact of using an actual person as an avatar as opposed to animation. \"Many possibilities stem from the concept of body swapping. The relationship between individuals and their own bodies has implications on their ego and own personality. Feeling to be in another person's skin and controlling another body's movement, can facilitate the development of empathy, playing with one's ego and emotions. Such experiments could, for example, be used as a theme for discussion and behavioral changes related to issues such as racism, altruism, inclusion and anorexia, among others\" (Oliveria, Bertrand, Lesur, Palomo, Demarzo, and Cebolla, 2016). Although the study will not focus on a true one-to-one body swap, as in BeAnotherLab's The Machine To Be Another, the assertion of a real person as an avatar is essential to our project.",
      "detailedDescription": "The Embodied Empathy pilot study designed for first year medical students intends to bring baseline data, and a formal research study into the embodied VR and empathy training conversation. With very little data supported research happening in the field, there is an abundance of thought, and opinion concerning the potential of embodied VR by academics, the media, and the tech industry. The Embodied Empathy pilot study hopes to build the case for this kind of work as a viable method of experiential training to increase empathy and reduce bias with the implementation of validated scales, controlled data collection, and in depth research through a pilot study, because essentially, it is the study the field needs to move forward in our embodied VR work.\n\nWe are Alfred, created by Carrie Shaw of Embodied Labs, is an embodied VR narrative project of an elderly man who has difficulty understanding his doctor's explanation of his diagnosis due to both vision and hearing loss. The VR user sees a visual spot comparable to that which keeps Alfred from fully seeing his doctor and the papers before him. The user also struggles to hear the diagnosis - an embodied loss of hearing. Alfred tries to complete a cognitive test and fails, not because he is not intellectually sound, but because he cannot hear or see what is happening in a meaningful way (Wanshel, 2016, Shaw). Similarly, Embodied Empathy will illuminate the patient's experience, but with the addition of sensorial input, and engaged movement, bringing to light the subtle and not so subtle barriers to patient-centered care. In We are Alfred, the VR user sits passively as they watch Alfred's story unfold in a 360 degree virtual world.\n\nThe University of New England has initiated a multi-year educational project that uses VR technology and software from Embodied Labs, focusing on older adult health, but they did not utilize validated scales for data collection or a control group among other limitations (Dyer, Swartzlander, Gugliucci, 2018). A noted difference between the Embodied Labs software, and the Embodied Empathy pilot study is that researchers also incorporate user interaction between the VR world and the real world through sensorial input - touch, smell, interacting with objects, and mimicking the body movements of the patient, whose point of view the students are embodying. Beyond these differences, researchers also hope to produce a pilot study on firmer ground that can be further built upon by ourselves and others in the field with validated scales, and formal research study practices.\n\nThe Embodied Empathy pilot study will give medical students the opportunity to proverbially walk a mile in their patients' shoes. Allowing them to embody patients' stories, they will face their hardships and impairments to understand their perspectives and to honor the feeling of identifying with another and wanting to help from a place of deep understanding (Bertrand, Guegan, Robieux, McCall, and Zenasni, 2018). Embodied Empathy has the potential to impact patient care by highlighting movement patterns or lack of movement patterns, identifying states of being, detailing how the patient takes and receives information, being sensitive to the relationship of the patient to the space they occupy, as well as how they move through the world.\n\nResearchers hope the Embodied Empathy pilot study will have an impact on the first year medical student's understanding of what aging means to a body and a person's life with the hopes that the immersive, and embodied VR experience will inspire interest in the field of gerontology.\n\nThe VR experience will make immediately palpable a sensorial experience of geriatric patients' daily lives (e.g., difficulty opening a pill bottle, difficulty arising from a sitting position, blurred vision due to glaucoma, decreased memory, depression, etc.), ultimately allowing users to look past patients' external circumstances to humanize and increase the quality of their care."
    },
    "eligibilityModule": {
      "sex": "ALL",
      "stdAges": ["CHILD", "ADULT", "OLDER_ADULT"],
      "healthyVolunteers": true,
      "eligibilityCriteria": "Inclusion Criteria:\n\n* Generally healthy 1st year medical students\n\nExclusion Criteria:\n\n* None"
    },
    "identificationModule": {
      "nctId": "NCT03987269",
      "briefTitle": "Embodied Empathy; Virtual Reality and Experiencing Geriatrics",
      "organization": {"class": "OTHER", "fullName": "Virginia Commonwealth University"},
      "officialTitle": "Embodied Empathy; Using Virtual Reality Embodiment to Increase Empathy and Reduce Elder Bias in 1st Year Medical Students.",
      "orgStudyIdInfo": {"id": "HM20015472"}
    },
    "armsInterventionsModule": {
      "armGroups": [
        {
          "type": "ACTIVE_COMPARATOR",
          "label": "Intervention, Virtual Reality Group",
          "description": "The intervention group to experience the virtual reality embodied narrative experience.",
          "interventionNames": ["Behavioral: Embodied Virtual Reality"]
        },
        {
          "type": "PLACEBO_COMPARATOR",
          "label": "Control, Narrative Video Group",
          "description": "The control group to watch the same narrative video content that has been formatted for standard television",
          "interventionNames": ["Behavioral: Narrative Video"]
        }
      ],
      "interventions": [
        {
          "name": "Embodied Virtual Reality",
          "type": "BEHAVIORAL",
          "description": "The video will be formatted for viewing within a standard virtual reality headset. Once the video begins, subjects will follow the movements of the new body they are inhabiting in synchronicity to the video. Movement and haptic input include\n\n1. Mimicking the hand gestures at the beginning in order to establish the body transfer illusion.\n2. Shaking someone's hand\n3. Someone holding the hand and elbow to assist them in standing, taking a few steps and returning to a seated position.\n4. Shaking hands\n5. Having their blood pressure taken\n6. Having their pulse taken\n7. Fill a pill organizer\n8. Color a picture\n\nAll of the above interactions will be facilitated by the two Co-PI's (Blatter, Ware). Blatter and Ware have been creating and performing similar Virtual Reality Embodiments with haptics for the past year, and to date, have performed hundreds of them for demos, colleagues, students and the public.",
          "armGroupLabels": ["Intervention, Virtual Reality Group"]
        },
        {
          "name": "Narrative Video",
          "type": "BEHAVIORAL",
          "description": "The video will be formatted for a standard flat screen television. Participants will watch the video alone in a standard School of the Arts classroom.",
          "armGroupLabels": ["Control, Narrative Video Group"]
        }
      ]
    },
    "contactsLocationsModule": {
      "locations": [
        {
          "zip": "23284",
          "city": "Richmond",
          "state": "Virginia",
          "country": "United States",
          "facility": "Virginia Commonwealth University",
          "geoPoint": {"lat": 37.55376, "lon": -77.46026}
        }
      ],
      "overallOfficials": [
        {"name": "Jill B Ware, MFA", "role": "PRINCIPAL_INVESTIGATOR", "affiliation": "Virginia Commonwealth University"},
        {"name": "John H Blatter, MFA", "role": "PRINCIPAL_INVESTIGATOR", "affiliation": "Virginia Commonwealth University"}
      ]
    },
    "ipdSharingStatementModule": {
      "ipdSharing": "NO",
      "description": "Data will only be shared out in aggregate form with no personal identifiers."
    },
    "sponsorCollaboratorsModule": {
      "leadSponsor": {"name": "Virginia Commonwealth University", "class": "OTHER"},
      "responsibleParty": {"type": "SPONSOR"}
    }
  }
}
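For downstream use, a record with this shape (the ClinicalTrials.gov API v2 study layout seen above) can be parsed with Python's standard json module. A minimal sketch follows; it embeds only an excerpt of the record, and the defensive .get() chains are an assumption based on the fact that most modules in these records are optional:

```python
import json

# Excerpt of the study record above; real records carry many more modules.
raw = """
{
  "protocolSection": {
    "identificationModule": {
      "nctId": "NCT03987269",
      "briefTitle": "Embodied Empathy; Virtual Reality and Experiencing Geriatrics"
    },
    "designModule": {"enrollmentInfo": {"type": "ACTUAL", "count": 41}},
    "outcomesModule": {"primaryOutcomes": [
      {"measure": "Change in empathy"},
      {"measure": "Change in geriatric attitudes"},
      {"measure": "Change in embodiment"}
    ]}
  }
}
"""

def summarize(study: dict) -> dict:
    """Pull a few commonly used fields; modules may be absent, hence .get()."""
    proto = study.get("protocolSection", {})
    return {
        "nct_id": proto.get("identificationModule", {}).get("nctId"),
        "enrollment": proto.get("designModule", {})
                           .get("enrollmentInfo", {}).get("count"),
        "primary_outcomes": [
            o["measure"]
            for o in proto.get("outcomesModule", {}).get("primaryOutcomes", [])
        ],
    }

summary = summarize(json.loads(raw))
print(summary)
```

The same summarize() helper works unchanged on a full record, since it only reads the paths it knows about and ignores everything else.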