Description Module

The Description Module contains narrative descriptions of the clinical trial, including a brief summary and detailed description. These descriptions provide important information about the study's purpose, methodology, and key details in language accessible to both researchers and the general public.

The path to the Description Module is as follows:

Study -> Protocol Section -> Description Module


NCT ID: NCT06081569
Brief Summary: Alzheimer's disease (AD) is the most common dementia and is among the most expensive and lethal diseases. As the population ages rapidly, AD will place a growing burden on society and the economy. AD manifests as progressive loss of memory, language and visuospatial function, executive function, daily living abilities, and so forth. The pathophysiological changes of AD begin 10-20 years before clinical symptoms appear, yet effective strategies for early diagnosis are still lacking. Mild cognitive impairment (MCI) is considered a transitional state between healthy aging and the clinical diagnosis of dementia and has received increasing attention as a separate diagnostic entity. To make the diagnosis, doctors must comprehensively consider multimodal medical information, including clinical symptoms, neuroimaging, neuropsychological tests, and laboratory examinations.

Multimodal deep learning has risen to this challenge: it can integrate the various modalities of biological information and capture the relationships among them, contributing to higher accuracy and efficiency. It has been widely applied in imaging, tumor pathology, genomics, and other fields. To date, however, deep learning studies of AD have focused mainly on multimodal neuroimaging, while the broader range of multimodal medical information still requires comprehensive integration and intelligent analysis. Moreover, studies reveal that some imperceptible symptoms in MCI and early-stage AD, such as gait disorder, facial expression identification dysfunction, and speech and language impairment, may also play an effective role in diagnosis and assessment. Doctors can hardly detect these slight and complex changes, however, and detecting them may rely on fully mining video and audio information with multimodal deep learning.

In conclusion, we aim to explore the features of gait disorder, facial expression identification dysfunction, and speech and language impairment in MCI and AD, and to analyze their diagnostic efficacy. We will identify the different degrees to which diagnosis depends on each modality of medical information and build an optimal multimodal diagnostic method that uses the most convenient and economical information. In addition, based on follow-up observations of how multimodal medical information changes as AD and MCI progress, we expect to establish an effective and convenient diagnostic strategy.
Detailed Description: Our objective is to achieve early diagnosis and assessment of AD and MCI based on multimodal deep learning. First, gait disorder, facial expression identification dysfunction, and speech and language impairment are of great significance in the occurrence and development of AD and MCI; however, because these clinical symptoms are highly complex and subtle, no uniform conclusions have been reached about them. We therefore attempt to apply machine learning methods to video and audio information, explore the changing characteristics of gait, expression, and language in AD and MCI, and analyze their effectiveness as diagnostic markers, providing new ideas and experimental data for the diagnosis of AD and MCI. Second, multimodal medical information needs to be integrated and comprehensively analyzed. We aim to propose an optimal diagnostic strategy that reflects the different degrees to which diagnosis depends on each modality. Moreover, by observing how multimodal medical information changes as AD and MCI progress, we expect to build a predictive model of AD diagnosis and prognosis. The methods are as follows:

1. Collecting multimodal medical information. A variety of multimodal medical information will be carefully collected, including baseline demographic data, chief complaint and medical history, peripheral organ function assessment, laboratory examinations, imaging examinations, neuroelectrophysiological examinations, neurocognitive and psychological examinations, information on gait, expression, and language, and biological samples.

2. Revealing the changes of gait, expression, and language in patients with AD and MCI, and verifying their diagnostic efficacy. For gait, the OpenPose model will be used to extract human key points and construct a skeleton graph; instantaneous action analysis of single-frame images will be carried out with graph neural networks and convolutional neural networks, and gait sequence analysis will then be carried out by integrating multi-frame video with a Transformer model (a sketch of this sequence-modeling step follows below). For facial expression, the Dlib library will be used to extract facial key points, which will be combined with facial expression images and analyzed with a spatiotemporal Transformer model. For language, the ASRT model will be used for speech recognition and text extraction; at the same time, Fourier and wavelet transforms will be applied to extract frequency-domain information, and speech features will be analyzed by integrating language content, intonation, speech rate, and other information (see the feature-extraction sketch below). Based on an attention model, the gait, expression, and language analysis results of AD and MCI patients will be compared with those of the control group to reveal the features of AD and MCI and provide evidence for disease diagnosis.
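As an illustration of the gait sequence analysis described above, the following is a minimal sketch in PyTorch, assuming 2-D skeleton key points have already been extracted per frame (e.g., 25 OpenPose joints). All class names, hyperparameters, and the three-way control/MCI/AD grouping are illustrative assumptions, and the per-frame graph neural network is simplified to a linear embedding for brevity:

    import torch
    import torch.nn as nn

    # Hypothetical sketch, not the study's actual code. Assumes 2-D skeleton
    # key points (25 joints, as in OpenPose) were already extracted per frame.
    class GaitTransformer(nn.Module):
        def __init__(self, n_joints=25, d_model=128, n_heads=4,
                     n_layers=2, n_classes=3):
            super().__init__()
            # Simplification: a linear per-frame embedding stands in for the
            # graph/convolutional per-frame analysis described in the protocol.
            self.frame_embed = nn.Linear(n_joints * 2, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
            # Integrates information across frames (the multi-frame step).
            self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Assumed three-way grouping: control, MCI, AD.
            self.classifier = nn.Linear(d_model, n_classes)

        def forward(self, keypoints):
            # keypoints: (batch, frames, n_joints * 2)
            x = self.frame_embed(keypoints)        # per-frame embedding
            x = self.temporal(x)                   # temporal self-attention
            return self.classifier(x.mean(dim=1))  # pool frames, classify

    # Toy usage: 4 clips, 60 frames each, 25 joints with (x, y) coordinates.
    model = GaitTransformer()
    logits = model(torch.randn(4, 60, 25 * 2))     # shape: (4, 3)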
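Similarly, the frequency-domain speech feature extraction could look like the following minimal sketch, assuming a mono waveform is already loaded as a floating-point array; the specific features (spectral centroid, wavelet sub-band energies) are illustrative choices, not the protocol's definitive feature set:

    import numpy as np
    import pywt  # PyWavelets

    # Hypothetical sketch, not the study's actual feature set. Assumes a mono
    # speech waveform is already loaded as a float array.
    def speech_features(waveform, sample_rate=16000):
        # Fourier transform: spectral magnitudes and the spectral centroid,
        # a rough summary of where the voice's energy sits in frequency.
        spectrum = np.abs(np.fft.rfft(waveform))
        freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
        centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))

        # Wavelet decomposition: energy per sub-band describes how signal
        # power is distributed across time scales.
        coeffs = pywt.wavedec(waveform, "db4", level=5)
        band_energy = [float(np.sum(c ** 2)) for c in coeffs]

        return {"spectral_centroid": centroid, "wavelet_energy": band_energy}

    # Toy usage: one second of synthetic audio at 16 kHz.
    features = speech_features(np.random.randn(16000))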
3. Analyzing the different degrees of dependency on multimodal information in the diagnosis of AD and MCI, and establishing an optimal diagnostic strategy. During supervised learning, an attention-mechanism-based method will be used to analyze the influence of each modality of information on the final result. At the same time, knowledge from other fields, such as patients' blood biochemical indicators and genomic information, will be added to the model via a knowledge graph. Based on Bayesian probabilistic inference and causal inference theory, causal programming will be used to model the causal relationships between the different modalities of information and the diagnostic results. Based on AutoML methods, multimodal information will be combined and optimized, and a reliable optimal diagnostic strategy will be established according to the experimental results.

4. Exploring the changes of multimodal medical information with the progression of the disease, and building a predictive model for early diagnosis and disease progression of AD. Treating multimodal medical information as the conditioning input, a Transformer model will be used to model time-sequence information, and a conditional diffusion model will be used to generate patients' MRI image changes and other progression-related information, providing a basis for disease progression prediction (a minimal sequence-model sketch follows below). Based on large multimodal model technology, the model's output will be adjusted according to the judgments and descriptions of professional doctors, so that the predictions agree with professional judgment, finally yielding an interpretable model for early diagnosis and disease progression prediction.
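As a minimal sketch of the time-sequence modeling step only (the conditional diffusion and large-multimodal-model components are omitted), the following assumes one fused multimodal feature vector per follow-up visit and regresses a scalar progression score; all dimensions and names are illustrative assumptions:

    import torch
    import torch.nn as nn

    # Hypothetical sketch, not the study's actual model. Assumes one fused
    # multimodal feature vector per follow-up visit, ordered in time; the
    # conditional diffusion component of the protocol is omitted.
    class ProgressionModel(nn.Module):
        def __init__(self, n_features=64, d_model=128, n_heads=4, n_layers=2):
            super().__init__()
            self.embed = nn.Linear(n_features, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Regress a scalar progression score (e.g., a cognitive scale)
            # from the representation of the most recent visit.
            self.head = nn.Linear(d_model, 1)

        def forward(self, visits):
            # visits: (batch, n_visits, n_features)
            h = self.encoder(self.embed(visits))
            return self.head(h[:, -1])  # predict from the latest visit

    # Toy usage: 8 patients, 5 visits, 64 fused features per visit.
    model = ProgressionModel()
    pred = model(torch.randn(8, 5, 64))  # shape: (8, 1)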