DL-CADET
Full Title
Deep multimodal Learning for breast CAncer DETection
Description
Breast cancer is the most common cancer in women worldwide, but it also has a very high survival rate when diagnosed early through annual screening. This comes at the cost of a large number of radiologist hours spent evaluating mammograms. Artificial intelligence systems are being developed to improve the efficiency and effectiveness of the triage process. Deep learning (DL) has been hugely successful in a wide range of computer vision applications, including medical image analysis, but training deep convolutional neural networks (CNNs) typically requires large labeled datasets. This is an obstacle to using deep CNNs in medical imaging, where large amounts of data and labels are particularly expensive to obtain. Hence, CNNs are the state-of-the-art approach for breast cancer detection, but at the same time they pose demanding data requirements. This project brings together a group of scientists with backgrounds in AI and medical imaging to advance the application of state-of-the-art CNNs for breast cancer detection in realistic scenarios. We aim to relax the current data and label requirements for training deep networks through a new set of methods at the intersection of multiscale learning, self-supervised learning, and multimodal learning. Addressing the data limitations of CNNs reduces the cost of data acquisition and labeling, leading to:
1. faster development of new CNN methods;
2. cheaper deployment of CNNs in real applications;
3. the democratization of access to CNNs for smaller health units;
4. faster adaptation of an existing system to a new reality, e.g. a new imaging sensor or the emergence of a new disease.
Integrating deep CNNs into medical diagnostic processes can have a great impact on society. This project is a step forward in that direction.
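To make the self-supervised direction mentioned above concrete: the idea is to pretrain a network on unlabeled images before fine-tuning it on a much smaller labeled set. The sketch below is purely illustrative and is not the project's method; it shows a minimal SimCLR-style contrastive pretraining step in PyTorch, where the backbone, augmentations, temperature, and data are all placeholder assumptions.

```python
# Illustrative sketch only: SimCLR-style contrastive pretraining on unlabeled
# images, one generic way to reduce label requirements. Not the DL-CADET method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Two random augmented "views" of the same unlabeled image.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
])

backbone = models.resnet18(weights=None)        # no labels needed for pretraining
backbone.fc = nn.Identity()                     # drop the classification head
projector = nn.Linear(512, 128)                 # small projection head for the loss


def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss: views of the same image attract, all others repel."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))       # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# One pretraining step on a placeholder batch; real mammograms are grayscale
# and would need a 1-channel (or channel-replicated) input instead.
images = torch.rand(8, 3, 256, 256)
v1 = torch.stack([augment(img) for img in images])
v2 = torch.stack([augment(img) for img in images])
loss = nt_xent(projector(backbone(v1)), projector(backbone(v2)))
loss.backward()                                 # backbone is later fine-tuned with few labels
```

After pretraining of this kind, the backbone would be fine-tuned on the (much smaller) labeled dataset, which is where the reduced labeling cost pays off.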