- Title
Multi‐branch convolutional neural network for Alzheimer’s Disease versus normal control classification using PET images
- Authors
Sharma, Rishabh; Sibille, Ludovic; Fahmi, Rachid
- Abstract
Background: Brain PET imaging techniques provide in-vivo information about brain metabolism and about the density/distribution of amyloid and tau proteins, the two hallmarks of Alzheimer's disease (AD). Combining such imaging biomarkers has improved the performance of deep-learning models designed for disease classification and for prediction of disease progression. However, training such networks is often a challenge, especially when subjects have missing imaging markers at given time-points; such incomplete data is often excluded from the training process. We propose a "multibranch" convolutional-neural-network (CNN) architecture to cope with this issue and make use of all available data for better AD vs. normal-control (NC) classification performance.

Method: We obtained multi-time-point ADNI FDG- and AV45-PET scans corresponding to 257 NC and 222 AD subjects. N = 124 subjects had only one PET scan available at different time-points. We designed a CNN with three training "branches", taking as input either FDG, AV45, or a combination of the two when available (see Fig. 1). In total, 1832 scans (786 singles and 523 pairs) were used as input for training. Branches are weighted differently (with 0 or 1) to control how training weights are updated. When both imaging biomarkers are available, each branch is fed the appropriate input and contributes to the overall training. The overall network architecture is shown in Fig. 1. For each branch, the calculated loss is multiplied by the associated weight and backpropagated through the network to update its training parameters. For comparison purposes, we independently trained a CNN on the same image pairs as above.

Result: For validation, we used only cases with both scans available, so that each branch was assessed on the same number of inputs. Classification sensitivity, specificity, accuracy, and area-under-the-curve, averaged over 10 folds, were: (i) FDG branch: 93.11%, 88.88%, 91.35%, and 0.954; (ii) AV45 branch: 96.53%, 74.87%, 90.27%, and 0.941; (iii) multimodal branch: 94.43%, 86.50%, 91.56%, and 0.961; (iv) independently trained multi-input CNN: 93.71%, 71.80%, 88.60%, and 0.959.

Conclusion: We designed a multibranch CNN to handle missing data when training a multimodal classification CNN. Better classification accuracy was achieved with the multi-input branch than with the independently trained multimodal CNN. We will test our network on other types and/or numbers of modalities.
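The weighted per-branch loss described in the Method can be sketched as follows. This is a minimal illustration, not the authors' implementation: each branch's per-sample classification loss is multiplied by a 0/1 weight indicating whether that branch's input modality is available, so missing-modality samples contribute no gradient to that branch. All function and variable names here are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multibranch_loss(logits_per_branch, target, branch_weights):
    """Weighted multi-branch cross-entropy (illustrative sketch).

    logits_per_branch: list of (B, C) arrays, one per branch
                       (e.g. FDG, AV45, combined).
    target:            (B,) integer class labels (AD vs. NC).
    branch_weights:    (B, n_branches) 0/1 mask of available modalities.
    """
    total = 0.0
    for i, logits in enumerate(logits_per_branch):
        p = softmax(logits)
        # Per-sample cross-entropy for this branch.
        per_sample = -np.log(p[np.arange(len(target)), target])
        w = branch_weights[:, i]
        # Zero-weighted samples (missing modality) contribute nothing;
        # normalize by the number of contributing samples.
        total += (w * per_sample).sum() / max(w.sum(), 1.0)
    return total
```

In a framework such as PyTorch, the same masking would be applied to per-sample losses before calling backward, so that a branch with weight 0 for a given subject does not update its parameters from that subject.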
- Publication
Alzheimer's & Dementia, 2023, Vol 19, Issue S3
- ISSN
1552-5260
- DOI
10.1002/alz.061092