April 24, 2019
Detecting Alzheimer’s disease with deep learning
The images named in this post are from the ADNI dataset (Alzheimer’s Disease Neuroimaging Initiative) (adni.loni.usc.edu)
Alzheimer’s is the most common type of dementia. It causes problems with memory, language, and behavior; these symptoms worsen over time and interfere with the patient’s daily tasks.
Alzheimer’s has no cure and its cause is still unknown, but treatments can improve patients’ quality of life. The disease can be definitively diagnosed only after death, through an examination of brain tissue.
Doctors use several methods to diagnose the disease when a patient presents symptoms, including brain scans such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). We are interested in the last of these.
Positron emission tomography scans show the brain and how its tissues function, whereas CT and MRI scans only show the brain’s structure. PET scans use radioactive materials called radiopharmaceuticals or radiotracers: molecules linked or labeled with a small amount of radioactive material that can be detected by the scanner. The most common radiotracer is F-18 fluorodeoxyglucose (FDG), a molecule similar to glucose; this is the tracer that was used on the patients in the dataset. The radiotracer is injected, swallowed, or inhaled as a gas and accumulates in the organ being examined, in this case the brain.
Thanks to this radiotracer we can detect different behaviors in the patient’s brain: if the patient has Alzheimer’s, many nerve cells have died, and this is visible in the scans.
ADNI dataset scans
The scans produced are 3D scans of the brain. In the ADNI dataset we can search for scans with specific parameters, such as the scan type (CT, MRI, PET), the radiotracer given to the patient (F-18-FDG), and the patient’s group: AD (Alzheimer's Disease, the positive class) or CN (Cognitively Normal, the negative class). Once the scans are filtered, we can download them in the NIfTI (.nii) format.
There are convolutional neural networks with 3D convolutions that can take 3D images as input, but training them requires powerful hardware and a lot of storage and time: the uncompressed 3D scans take up almost 40 GB, and since I train neural networks on Google Colab, I could not handle this dataset there. The alternative was to convert the 3D scans into 2D JPG images. To achieve this I used a Python library called nibabel, which reads scans in the NIfTI format and loads each one as a 4-dimensional NumPy array: the first three dimensions are the x, y, and z axes, and the last one contains the color channels (only one in this case, since the scans are in grayscale). Once we have the scan as a NumPy array we can access specific sections with an index; for each scan I obtained multiple images from the coronal section of the brain.
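The slicing step can be sketched as follows. The scan dimensions, the number of slices, and the `coronal_slices` helper are illustrative assumptions, not taken from the original code; with nibabel installed, the volume would come from `nib.load("scan.nii").get_fdata()` instead of the random array used here.

```python
import numpy as np

# With nibabel installed, a scan would be loaded like this:
#   import nibabel as nib
#   volume = nib.load("scan.nii").get_fdata()
# Here we simulate a small grayscale 4-D volume (x, y, z, channel) instead.
volume = np.random.rand(160, 192, 160, 1)

def coronal_slices(vol, n_slices=16):
    """Extract n_slices evenly spaced 2-D coronal sections (fixed y index)."""
    y_size = vol.shape[1]
    # Skip the outer edges of the volume, which are mostly background.
    idxs = np.linspace(y_size // 4, 3 * y_size // 4, n_slices).astype(int)
    slices = []
    for y in idxs:
        sl = vol[:, y, :, 0]
        # Normalize to 0-255 so each slice can be saved as an 8-bit JPG.
        sl = (255 * (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)).astype(np.uint8)
        slices.append(sl)
    return slices

imgs = coronal_slices(volume)
```

Each element of `imgs` is a 2D grayscale array that can be written out as a JPG with any image library (e.g. Pillow).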
You can use this notebook to check the details of the model and the training process.
In the end I obtained 79,506 2D images, 39,753 for each class. I used 20% of these images for validation and the rest for training. I also used data augmentation to improve training, along with the MobileNetV2 model, a learning rate of 0.0003, and SGD as the optimizer. Finally, I trained the model for 100 epochs.
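A minimal sketch of this setup is shown below. The input size, the sigmoid head for the binary AD-vs-CN decision, and `weights=None` are assumptions on my part (the notebook linked above has the actual configuration); the learning rate and optimizer match the values stated in the text.

```python
import tensorflow as tf

IMG_SIZE = 224  # assumed input size; MobileNetV2's default

# MobileNetV2 as the convolutional base; weights=None here to avoid a
# download in this sketch (the real model may use pretrained weights).
base = tf.keras.applications.MobileNetV2(
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
    include_top=False,
    weights=None,
)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # AD vs CN
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.0003),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Training would then run for 100 epochs on the augmented image generators:
# model.fit(train_data, validation_data=val_data, epochs=100)
```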
I obtained 98% accuracy and a sensitivity and specificity of 99%.
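For reference, sensitivity and specificity are computed from the confusion-matrix counts; the counts below are made up purely for illustration and are not the study's actual numbers.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of AD scans correctly flagged (true positive rate).
    Specificity: fraction of CN scans correctly cleared (true negative rate)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration only.
sens, spec = sensitivity_specificity(tp=99, fn=1, tn=99, fp=1)
```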
Class Activation Maps
I used Class Activation Maps to explore the decisions the neural network made. With these maps, it is easier to see which regions of the scan the network took into account when computing the classification.
If you want to know more about activation maps and how to implement this technique in TensorFlow or Keras, you can read this tutorial.
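The core of the technique is a weighted sum of the last convolutional layer's feature maps, using the classifier weights of the predicted class. The shapes below are assumptions (they match MobileNetV2's final feature map for a 224×224 input), and the random arrays stand in for real activations and weights.

```python
import numpy as np

# Hypothetical shapes: feature maps from the last conv layer (H, W, K)
# and the dense-layer weights for the predicted class (K,).
feature_maps = np.random.rand(7, 7, 1280)
class_weights = np.random.rand(1280)

# A class activation map is the per-pixel weighted sum of the feature maps.
cam = np.einsum("hwk,k->hw", feature_maps, class_weights)

# Normalize to [0, 1]; in practice the map is then upscaled to the input
# size and overlaid on the scan as a heatmap.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```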
You can see and use an example of this network at this link.