Deep Learning, Machine Learning, and Learning
Reading papers can be a hard task if we don't understand all the math behind the methods and models they implement. In this post, we will explore several models from a paper used to animate objects in images. Additionally, we will work through the math from this paper and explain why it is used.
In this post, we will learn about a tree network architecture from a paper that relies on linear algebra. We will see how to make sense of this math and turn it into code.
In this post, we will talk about the paper Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification, in which the authors implemented a neural network with an attention mechanism. We will also discuss some problems related to medical data and the deep learning models trained on it.
We have already seen Computed Tomography (CT) images and how to load and preprocess them for use in machine learning models. Now it is the turn of two other popular medical imaging modalities: MRI and PET.
There are many techniques for training neural networks: some involve adding layers such as dropout or batch normalization, while others change the training process or the hyper-parameters. In this post, we will look at a learning-rate technique that improves model training, as well as a technique to reduce the size of the final model.
If you work, or want to work, with medical images and use them to train deep learning models, there is a lot you need to know, since these images differ from the ordinary photos you take with your phone. In this post, I will do my best to explain the concepts needed to preprocess medical images.
Face recognition is an easy task for humans; we do it almost all the time. However, machines need more help to accomplish it. In this post, we will see how face detection, a prerequisite for face recognition, works.