Automatic Discrimination of Color Retinal Images using the Bag of Words Approach

Title: Automatic Discrimination of Color Retinal Images using the Bag of Words Approach
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Sadek, I., D. Sidibé, and F. Meriaudeau
Conference Name: SPIE Medical Imaging
Date Published: 03/2015
Publisher: International Society for Optics and Photonics (SPIE)
Conference Location: Orlando, Florida, United States
Keywords: Bag of words, Diabetic retinopathy, Drusen, Exudates, SVM

Diabetic retinopathy (DR) and age-related macular degeneration (ARMD) are among the major causes of visual impairment worldwide. DR is mainly characterized by small red spots, namely microaneurysms, and bright lesions, specifically exudates, whereas ARMD is mainly identified by tiny yellow or white deposits called drusen. Since exudates might be the only visible signs of early diabetic retinopathy, there is an increased demand for automatic retinopathy diagnosis. Exudates and drusen may share similar appearances; as a result, discriminating between them plays a key role in improving screening performance. In this research, we investigate the role of the bag of words approach in the automatic diagnosis of diabetic retinopathy. Initially, the color retinal images are preprocessed in order to reduce intra- and inter-patient variability. Subsequently, SURF (Speeded Up Robust Features), HOG (Histogram of Oriented Gradients), and LBP (Local Binary Patterns) descriptors are extracted from the retinal images. We propose single-based and multiple-based methods to construct the visual dictionary; in the multiple-based method, the histograms of word occurrences from each dictionary are combined into a single histogram. Finally, this histogram representation is fed into a support vector machine with a linear kernel for classification. The proposed approach is evaluated for the automatic diagnosis of normal and abnormal color retinal images with bright lesions such as drusen and exudates. It has been evaluated on 430 color retinal images drawn from six publicly available datasets and one local dataset. The mean accuracies achieved are 97.2% and 99.77% for the single-based and multiple-based dictionaries, respectively.
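The pipeline in the abstract (descriptor extraction → visual dictionary → histogram of word occurrences → linear-kernel SVM) can be sketched as below. This is a minimal illustration, not the authors' implementation: it assumes the dictionary is built with k-means (the usual choice for bag of visual words, though the abstract does not name the clustering method), and it uses random arrays as stand-ins for real SURF/HOG/LBP descriptors extracted from retinal images.

```python
# Sketch of a single-dictionary bag-of-visual-words classifier.
# Assumptions (not stated in the abstract): k-means builds the
# dictionary; descriptors here are random stand-ins for SURF/HOG/LBP.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_descriptors(n_patches=50, dim=64):
    # Stand-in for SURF/HOG/LBP extraction from one retinal image.
    return rng.normal(size=(n_patches, dim))

def bow_histogram(descriptors, dictionary):
    # Assign each descriptor to its nearest visual word and count,
    # yielding a normalized histogram of word occurrences.
    words = dictionary.predict(descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters)
    return hist / hist.sum()

# Build the visual dictionary from all training descriptors.
n_images, n_words = 40, 20
train_desc = [extract_descriptors() for _ in range(n_images)]
dictionary = KMeans(n_clusters=n_words, n_init=10, random_state=0)
dictionary.fit(np.vstack(train_desc))

# Represent each image as a histogram and train a linear-kernel SVM.
X = np.array([bow_histogram(d, dictionary) for d in train_desc])
y = rng.integers(0, 2, size=n_images)  # dummy normal/abnormal labels
clf = SVC(kernel="linear").fit(X, y)
```

The multiple-based variant described in the abstract would repeat the dictionary step once per descriptor type (SURF, HOG, LBP) and concatenate the resulting per-dictionary histograms into the single feature vector fed to the SVM.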