Automated Detection and Classification of Polyps in Colonoscopy Videos

dc.contributor.author: Sasmal, Pradipta
dc.date.accessioned: 2022-09-29T06:18:03Z
dc.date.accessioned: 2023-10-26T11:57:44Z
dc.date.available: 2022-09-29T06:18:03Z
dc.date.available: 2023-10-26T11:57:44Z
dc.date.issued: 2022
dc.description: Supervisor: Manas Kamal Bhuyan
dc.description.abstract: Continuous assessment for an early and accurate diagnosis of colorectal cancer (CRC) is essential for better prognosis and clinical management. CRC is considered one of the leading causes of death worldwide, and polyps are the precursor to such cancer. Colonoscopy is the medical screening modality used to detect polyps in the mucosa of the colon. First, the doctors detect and localize the polyps in the captured colonoscopy video frames. The polyps are then resected, i.e., the regions of interest (ROIs) are segmented out from the normal mucosa. Subsequently, useful features of the polyps are analyzed for dysplasia grading. A typical colonoscopy procedure therefore includes polyp detection and localization, segmentation, and classification. However, manual inspection and annotation of the polyps are cumbersome and inefficient because the diseases show similar pathological manifestations across the large number of acquired frames, and the polyp features may not be visible to the naked eye, making the abnormalities difficult to diagnose. Our current work therefore proposes automated and efficient frameworks for polyp analysis using colonoscopy video frames. The proposed approaches can perform a virtual biopsy to detect dysplasia in polyps, which may lessen the burden on clinicians and support early and quick diagnosis, better decision-making, and telemonitoring. The automated diagnostic systems proposed by our methods can be adopted in the medical setup to diagnose diseases in polyps effectively.

In the first proposed method, important polyp cues such as color, shape, and texture are incorporated into a modified particle filtering framework to track and localize the polyps in each frame of the colonoscopy video. The polyps are subsequently segmented using an active contour (AC). This method can handle specularity and occlusion, which are generally encountered during colonoscopy. Our second approach simultaneously detects and localizes the polyps in colonoscopy videos for real-time analysis. It is based on a deep learning-based attention YOLOv4 architecture, in which the proposed spatial and channel attention blocks are incorporated into the YOLOv4 framework. Results suggest that the proposed model performs better than the state-of-the-art methods, and its generalization and robustness are validated. The localized polyps are further classified into malignant (cancerous/adenomatous) and benign (noncancerous/hyperplastic).

Delineation or segmentation of the polyps enables 3-D visualization, better resection, and classification. As a preliminary work, clinically significant frames are extracted using the depth information of the polyps, followed by their segmentation. However, this approach cannot be applied to flat and serrated polyps. Therefore, we propose two segmentation approaches that utilize the dominant polyp cues for different polyp structures. The first approach is based on an unsupervised adaptive Markov random field (MRF), which encapsulates the polyp's global texture and spatial information. Most of the existing methods in this domain are supervised; our approach provides competitive polyp segmentation performance while avoiding the need for massive labeled data. The second segmentation approach uses a saliency-map-guided geometric shape compactness prior for better polyp segmentation, combining textural and shape information.
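The abstract does not give implementation details for the spatial and channel attention blocks added to YOLOv4. As a rough, non-authoritative sketch only, the snippet below shows a CBAM-style channel/spatial attention pair of the general kind such a detector might use; the class names, reduction ratio, kernel size, and feature-map shape are all assumptions, not details taken from the thesis.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    # Squeeze each feature map over H x W, then reweight the channels.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        w = torch.sigmoid(avg + mx)[:, :, None, None]
        return x * w


class SpatialAttention(nn.Module):
    # Pool over channels, then learn a per-pixel attention map.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


# Hypothetical usage: refine a 256-channel backbone feature map
# before it is passed to a YOLO detection head.
feat = torch.randn(1, 256, 52, 52)
feat = SpatialAttention()(ChannelAttention(256)(feat))
```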
For the classification of the polyps, three methods are proposed. The first method uses the local texture and shape information of the polyps, which doctors generally study for cancer detection. The second method combines shape features with deep embedded features learned via a deep siamese network; in this work, polyp ROIs obtained using our proposed attention YOLOv4 are used. The proposed spatial shape descriptor, based on pyramid histogram of oriented gradients (PHOG) features, extracts the local shape and spatial layout information of the polyp images of each class, while the embedded features extract the discriminating features from each category's samples. Clinicians sometimes analyze the histopathology of the polyps for cancer grading. In this view, we propose a semi-supervised approach based on a generative adversarial network (GAN) for CRC grading using histopathological images. Experimental results validate the proposed method's efficiency even in a minimal-data environment.
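For orientation only, a minimal sketch of a PHOG-style spatial shape descriptor of the kind the abstract mentions is given below, built with scikit-image's hog over a spatial pyramid. The ROI size (128 x 128), the pyramid levels, and the normalisation are assumptions for illustration; how the thesis actually computes PHOG and fuses it with the siamese embeddings is not specified in this record.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize


def phog_descriptor(gray_roi, levels=(1, 2, 4), orientations=9):
    """Concatenate HOG histograms over a spatial pyramid of a polyp ROI.

    At pyramid level L the ROI is split into an L x L grid with one HOG
    cell per grid cell, so the descriptor encodes local edge structure
    together with its spatial layout.
    """
    gray_roi = resize(gray_roi, (128, 128), anti_aliasing=True)
    parts = []
    for grid in levels:
        cell = 128 // grid
        parts.append(hog(gray_roi,
                         orientations=orientations,
                         pixels_per_cell=(cell, cell),
                         cells_per_block=(1, 1),
                         feature_vector=True))
    desc = np.concatenate(parts)
    return desc / (np.linalg.norm(desc) + 1e-8)   # L2-normalise
```

A fused feature vector for the second classification method could then be formed, for example, as np.concatenate([phog_descriptor(roi), embedding]), where embedding is the siamese-network output for the same ROI; this fusion step is likewise only a hypothetical illustration.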
dc.identifier.other: ROLL NO. 156102005
dc.identifier.uri: https://gyan.iitg.ac.in/handle/123456789/2182
dc.language.iso: en
dc.relation.ispartofseries: TH-2722;
dc.subject: Colorectal Cancer (CRC)
dc.subject: Polyps
dc.subject: Attention YOLOv4
dc.subject: Active Contour (AC)
dc.subject: Markov-random Field (MRF)
dc.subject: Modified Particle Filtering
dc.subject: Generative Adversarial Network (GAN)
dc.title: Automated Detection and Classification of Polyps in Colonoscopy Videos
dc.type: Thesis
Files

Original bundle (2 files)
- Abstract-TH-2722_156102005.pdf (393.05 KB, Adobe Portable Document Format): ABSTRACT
- TH-2722_156102005.pdf (190.08 MB, Adobe Portable Document Format): THESIS

License bundle (1 file)
- license.txt (1.71 KB, Plain Text)