Automated Detection and Classification of Polyps in Colonoscopy Videos

Continuous assessment for an early and accurate diagnosis of colorectal cancer (CRC) is crucial for better prognosis and clinical management. CRC is considered one of the leading causes of cancer-related death worldwide, and polyps are the precursors to such cancer. Colonoscopy is the medical screening modality used to detect polyps in the mucosa of the colon. First, doctors detect and localize the polyps in the captured colonoscopy video frames. The polyps are then resected, i.e., the regions of interest (ROIs) are segmented out from the normal mucosa. Subsequently, useful features of the polyps are analyzed for dysplasia grading. A typical colonoscopy procedure therefore includes polyp detection and localization, segmentation, and classification. However, manual inspection and annotation of the polyps are cumbersome and inefficient because diseases exhibit similar pathological manifestations across the huge number of acquired frames. Moreover, polyp features may not be visible to the naked eye, making abnormalities difficult to diagnose. Our current work therefore proposes automated and efficient frameworks for polyp analysis using colonoscopy video frames. The proposed approaches can perform a virtual biopsy to detect dysplasia in polyps. They may help lessen the burden on clinicians and provide early and quick diagnosis, better decision-making, and telemonitoring. The automated diagnostic systems proposed here can be adopted in a medical setup to diagnose polyp diseases effectively. In the first proposed method, important polyp cues like color, shape, and texture are incorporated into a modified particle filtering framework to track and localize the polyps in each frame of the colonoscopy video. Subsequently, the polyps are segmented using an active contour (AC). This method can handle the specularity and occlusion generally encountered during colonoscopy.
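The cue-driven particle filtering idea behind the first method can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the thesis implementation: a single intensity-histogram cue replaces the combined color/shape/texture likelihood, the active-contour segmentation step is omitted, and all function names and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity_histogram(patch, bins=8):
    # Normalized intensity histogram as a stand-in appearance cue
    # (the thesis combines color, shape, and texture cues).
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def likelihood(frame, particle, ref_hist, patch=16):
    # Bhattacharyya-style similarity between the reference polyp
    # histogram and the histogram around a particle position.
    x, y = int(particle[0]), int(particle[1])
    x0, y0 = max(0, x - patch), max(0, y - patch)
    region = frame[y0:y + patch, x0:x + patch]
    bc = np.sum(np.sqrt(intensity_histogram(region) * ref_hist))
    return np.exp(-(1.0 - bc) / 0.1)

def particle_filter_step(frame, particles, weights, ref_hist, motion_std=5.0):
    # 1) Propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # 2) Re-weight each particle by the appearance likelihood.
    w = np.array([likelihood(frame, p, ref_hist) for p in particles]) * weights
    w /= w.sum()
    # 3) Resample to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: track a bright blob on a synthetic 128x128 frame.
frame = np.zeros((128, 128))
frame[40:60, 40:60] = 200                      # "polyp" centered at (50, 50)
ref = intensity_histogram(frame[40:60, 40:60])
particles = rng.uniform(0, 128, size=(200, 2))  # (x, y) hypotheses
weights = np.full(200, 1.0 / 200)
for _ in range(10):
    particles, weights = particle_filter_step(frame, particles, weights, ref)
estimate = particles.mean(axis=0)               # converges near the blob
```

The resampling step is what lets such a filter recover from the partial occlusion and specular highlights mentioned above: poorly supported hypotheses are discarded each frame while plausible ones are duplicated.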
Our second approach simultaneously detects and localizes the polyps in colonoscopy videos for real-time analysis. It is based on a deep learning-based attention YOLOv4 architecture, in which the proposed spatial and channel attention blocks are incorporated into the YOLOv4 framework. Results suggest that the proposed model performs better than state-of-the-art methods, and its generalization and robustness are validated. The localized polyps are further classified as malignant (cancerous/adenomatous) or benign (noncancerous/hyperplastic). Delineation, or segmentation, of the polyps enables 3-D visualization, better resection, and classification. As preliminary work, clinically significant frames are extracted using the depth information of the polyps, followed by their segmentation. However, this approach cannot be applied to flat and serrated polyps. Therefore, we propose two segmentation approaches that utilize the dominant polyp cues for different polyp structures. The first is based on an unsupervised adaptive Markov random field (MRF), which encapsulates the polyp's global texture and spatial information. Most existing methods in this domain are supervised; our approach provides competitive polyp segmentation performance while avoiding the need for massive labeled data. The second approach uses a saliency-map-guided geometric shape compactness prior for better polyp segmentation, exploiting both textural and shape information. For the classification of the polyps, three methods are proposed. The first uses the local texture and shape information of the polyps, which doctors generally study for cancer detection. The second combines shape features with deep embedded features learned via a deep Siamese network; here, polyp ROIs obtained using our proposed attention YOLOv4 are used.
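The spatial and channel attention blocks can be sketched in a dependency-light way. The sketch below follows the common squeeze-and-gate pattern (average- and max-pooling followed by a sigmoid gate); it does not reproduce the thesis's exact block design inside YOLOv4, and all weights and dimensions are illustrative stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    # fmap: (C, H, W). Squeeze spatial dims by average- and max-pooling,
    # pass both through a shared two-layer bottleneck, then gate each
    # channel with the summed, sigmoid-squashed response.
    avg = fmap.mean(axis=(1, 2))                    # (C,)
    mx = fmap.max(axis=(1, 2))                      # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0) +
                   w2 @ np.maximum(w1 @ mx, 0))     # (C,)
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    # Pool across channels, then gate each spatial location. A real block
    # convolves the stacked pooled maps; a plain sum stands in here to
    # keep the sketch dependency-free.
    avg = fmap.mean(axis=0)                         # (H, W)
    mx = fmap.max(axis=0)                           # (H, W)
    gate = sigmoid(avg + mx)                        # stand-in for conv([avg, mx])
    return fmap * gate[None, :, :]

# Toy usage on a random feature map; C=8 channels, R=2 bottleneck units.
C, H, W, R = 8, 4, 4, 2
rng = np.random.default_rng(1)
fmap = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(R, C)) * 0.1                  # random stand-in weights
w2 = rng.normal(size=(C, R)) * 0.1
out = spatial_attention(channel_attention(fmap, w1, w2))
```

Because both gates lie in (0, 1), the blocks can only re-weight responses, never amplify them, which is why such modules can be dropped into a detector backbone without destabilizing it.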
The proposed spatial shape descriptor, based on pyramid histogram of oriented gradients (PHOG) features, extracts the local shape and spatial layout information of polyp images of each class, while the embedded features capture the discriminative characteristics of each category's samples. Clinicians sometimes analyze the histopathology of the polyps for cancer grading. In this view, we propose a semi-supervised approach based on a generative adversarial network (GAN) for CRC grading using histopathological images. Experimental results validate the proposed method's efficiency even in a minimal-data environment.
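The PHOG features mentioned above combine local shape with spatial layout by computing gradient-orientation histograms over a spatial pyramid and concatenating them. The sketch below illustrates the general construction; the pyramid depth, bin count, and normalization are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def phog(img, levels=2, bins=8):
    # Pyramid Histogram of Oriented Gradients: a magnitude-weighted
    # orientation histogram per pyramid cell, concatenated across levels,
    # so the descriptor encodes both local shape and its spatial layout.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    H, W = img.shape
    feats = []
    for lvl in range(levels + 1):
        n = 2 ** lvl                               # n x n grid at this level
        for i in range(n):
            for j in range(n):
                ys = slice(i * H // n, (i + 1) * H // n)
                xs = slice(j * W // n, (j + 1) * W // n)
                h, _ = np.histogram(ang[ys, xs], bins=bins,
                                    range=(0, np.pi), weights=mag[ys, xs])
                feats.append(h)
    f = np.concatenate(feats)                      # (1 + 4 + 16) * bins values
    return f / max(np.linalg.norm(f), 1e-12)       # L2-normalize

# Toy usage: a square silhouette standing in for a polyp ROI.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
desc = phog(img)                                   # 21 cells x 8 bins = 168-D
```

A descriptor like this can then be concatenated with learned embeddings (e.g., from a Siamese network) before the final classifier.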
Supervisor: Manas Kamal Bhuyan
Colorectal Cancer (CRC), Polyps, Attention YOLOv4, Active Contour (AC), Markov Random Field (MRF), Modified Particle Filtering, Generative Adversarial Network (GAN)