Description Module

The Description Module contains narrative descriptions of the clinical trial, including a brief summary and detailed description. These descriptions provide important information about the study's purpose, methodology, and key details in language accessible to both researchers and the general public.

The Description Module path is as follows:

Study -> Protocol Section -> Description Module
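
For illustration only, the sketch below shows one way such a record might be read programmatically along this path. The API endpoint and the exact JSON field names (protocolSection, descriptionModule, briefSummary) are assumptions made here for the example and are not specified by this document.

    import json
    import urllib.request

    def fetch_description_module(nct_id: str) -> dict:
        # Assumed endpoint for a single study record; adjust to the actual data source.
        url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
        with urllib.request.urlopen(url) as resp:
            study = json.load(resp)
        # Walk the path: Study -> Protocol Section -> Description Module
        protocol_section = study.get("protocolSection", {})
        return protocol_section.get("descriptionModule", {})

    if __name__ == "__main__":
        module = fetch_description_module("NCT05105620")
        print(module.get("briefSummary", "")[:200])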

Ignite Creation Date: 2025-12-25 @ 4:28 AM
Ignite Modification Date: 2025-12-25 @ 4:28 AM
NCT ID: NCT05105620
Brief Summary: Diabetic macular edema (DME) is one of the leading causes of visual impairment in patients with diabetes. Fluorescein angiography (FA) plays an important role in diabetic retinopathy (DR) staging and in the evaluation of the retinal vasculature. However, FA is an invasive technique and does not permit the precise visualization of the retinal vasculature. Optical coherence tomography (OCT) is a non-invasive technique that has become popular for diagnosing and monitoring DR and its laser, medical, and surgical treatment. It provides a quantitative assessment of retinal thickness and of the location of edema in the macula. Automated OCT retinal thickness maps are routinely used to monitor DME and its response to treatment. However, standard OCT provides only structural information and therefore does not delineate blood flow within the retinal vasculature.

By combining the physiological information in FA with the structural information in OCT, zones of leakage can be correlated with structural changes in the retina for better evaluation and monitoring of the response of DME to different treatment modalities. The occasional unavailability of either imaging modality may impair decision-making during the follow-up of patients with DME.

The problem of medical data generation, particularly of images, has attracted great interest and has been studied extensively in recent years, especially with the advent of deep convolutional neural networks (DCNNs), which are progressively becoming the standard approach in most machine learning tasks such as pattern recognition and image classification. Generative adversarial networks (GANs) are neural network models in which a generator network and a discriminator network are trained simultaneously; working together, the two networks can effectively generate new, plausible image samples. The aim of this work is to assess the efficacy of a GAN implementing pix2pix image translation for translating original FA images into synthetic OCT color-coded macular thickness maps and the reverse (original OCT color-coded macular thickness maps into synthetic FA images).
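
As a rough illustration of the pix2pix approach described above, the following sketch pairs a small encoder-decoder generator with a patch-based discriminator and trains them with the usual adversarial plus L1 objective. It is a minimal sketch assuming paired, pre-aligned FA and OCT thickness-map tensors and the PyTorch library; the network sizes, loss weighting, and training details are illustrative assumptions, not the study's actual implementation.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        # Small encoder-decoder; a full pix2pix generator is a deeper U-Net with skip connections.
        def __init__(self, in_ch=3, out_ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            return self.net(x)

    class PatchDiscriminator(nn.Module):
        # Scores overlapping patches of a (source, target) image pair as real or fake.
        def __init__(self, in_ch=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
                nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
            )

        def forward(self, src, tgt):
            return self.net(torch.cat([src, tgt], dim=1))

    def train_step(G, D, opt_g, opt_d, fa, oct_map, lambda_l1=100.0):
        # One pix2pix update: adversarial (BCE) loss plus L1 reconstruction loss.
        bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

        # Discriminator update on real and generated pairs.
        fake = G(fa).detach()
        d_real, d_fake = D(fa, oct_map), D(fa, fake)
        loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                        bce(d_fake, torch.zeros_like(d_fake)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator update: fool the discriminator while staying close to the real target.
        fake = G(fa)
        pred = D(fa, fake)
        loss_g = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(fake, oct_map)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

For translation in the reverse direction (original OCT thickness map to synthetic FA), the roles of the fa and oct_map tensors would simply be swapped when training a second model.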