Chapter 1
Introduction

Diabetic retinopathy is an eye disease caused by diabetes. It is a medical condition in which patients with diabetes show abnormalities in the affected eyes because fluid leaks from the blood vessels of the light-sensitive tissue at the back of the eye (the retina). Ophthalmologists recognize diabetic retinopathy based on features such as blood vessel area, exudates, hemorrhages, microaneurysms and texture.

1.1 Project Overview

In our project we focus on diagnosis through fundus photographs, which involves careful observation of photographs taken with expensive equipment by highly trained clinicians.

This detection technique is very sophisticated and requires very specialized clinician knowledge [1]. Here we want to develop a computer vision method that matches human performance. For more information on diabetic retinopathy (DR) and an analysis of how early detection of the disease can help slow or even stop its progression, consult [2]. For a comparative research paper on studies of risk factors of DR, consult Yau et al. [3]. Fluorescein angiography is not the only technique for diagnosis of DR; an extensive analysis of other detection techniques for diagnosing DR can be found in [4].

The general format of our method is as follows.

We downsample the images to a tractable size and use them as input images. These images are preprocessed with normalizing and denoising techniques, and then we use a convolutional neural network. After that, test images are passed through the network and the model attempts to classify the progression of DR.

1.2 Motivation of the Project

There exist multiple techniques for the diagnosis of diabetic retinopathy (DR), an ocular manifestation of diabetes that affects more than 75% of patients with longstanding diabetes and is the leading cause of blindness in the 20-64 age group [5].

Research shows that it contributes around 5% of total cases of blindness. The WHO estimates that 347 million people worldwide have diabetes, and about 40-45% of them have some stage of the disease. The images below show the difference between a normal eye and a DR-affected eye [6].

Fig: (a) Normal eye, (b) DR-affected eye [6]

There are various factors affecting the disease, such as duration of diabetes, poor control and pregnancy, but research shows that if we can detect DR at an early stage, progression to vision impairment can be slowed or averted. So the aim of our project is to provide an automated, suitable and sophisticated model using image processing techniques that can detect DR at early stages easily, so that damage to the retina can be minimized.

Chapter 2
Background Literature Survey

Previous work has been done using machine learning and various models for automated DR screening.

For the development of our method and result analysis, we have conducted a literature survey describing DR features and past work done to detect DR.

Giri Babu Kande et al. [7] presented "Segmentation of Vessels in Fundus Images using Spatially Weighted Fuzzy c-Means Clustering", an algorithm for the extraction of blood vessels from fundus images. They used a set of linear filters sensitive to vessels of different thickness and orientation. The vessel detection method is simple, and an experimental evaluation demonstrates excellent performance over global thresholding. Their algorithm was expected to be applicable to a variety of other applications due to its simplicity and general nature.

Faust et al.

[8] review algorithms for the automated detection of diabetic retinopathy using digital fundus images, providing a brief analysis of models that use explicit feature extraction for DR screening. These studies are limited in the magnitude of their scope: they are derived from fewer than 400 total data points, with homogeneous datasets and a narrow set of explicit features extracted from the images.

Yuji Hatanaka et al. [9] presented an improvement of automatic hemorrhage detection methods using brightness correction on fundus images. They indicate the importance of developing several automated models for finding abnormalities in fundus images.

The purpose of their paper was to improve their automated hemorrhage detection model to diagnose diabetic retinopathy. They presented a new method for preprocessing and false-positive elimination. They removed false positives by using a 45-feature analysis. To verify their new method, they examined 125 fundus images, including 35 images with hemorrhages and 90 normal images.

The sensitivity and specificity for the detection of abnormal cases were 80% and 88%, respectively. These results indicate that their new method may effectively improve the performance of their diagnosis system for hemorrhages.

Vujosevic et al. [10] built a binary classifier on a dataset of 55 patients by explicitly forming single-lesion features. The scope of this study is limited by the size of its dataset.

Wang et al. [11] used a CNN (LeNet-5 architecture) as a feature extractor for addressing blood vessel segmentation. The model has three heads at different layers of the convnet, which then feed into three random forests.

The final classifier ensembles the random forests for a final prediction, achieving an accuracy and AUC of 0.97/0.94 on a standard dataset for comparing models that address vessel segmentation.

Lim et al. [12] built a convolutional neural network for lesion-level classification and then used the learned feature representations for image-level classification. The scope of this study is limited in that the dataset contains only 200 images.

Clara I. Sánchez et al. [35] presented an automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods. They showed the performance of the system on an independent, publicly available database. The performance of the method on that dataset was comparable to that of human experts and to the results obtained in previous studies. Their method offers retinopathy screening programs a very fast solution to lessen the burden of screening for diabetes while maintaining high specificity and sensitivity.

Chapter 3
Dataset

The National Eye Institute provides a standardized description of the severity classes of DR patients (which are the classes that our classifier predicts).

There are four severity classes: the first three describe non-proliferative DR (NPDR) and the last proliferative DR (PDR). The severity scale is characterized through a progression of four stages [14,15]:

- Mild NPDR – Lesions of microaneurysms, small areas of balloon-like swelling in the retina's blood vessels.
- Moderate NPDR – Swelling and distortion of blood vessels, extensive microaneurysms, retinal hemorrhage, and hard exudates.
- Severe NPDR – Various abnormalities: large blot hemorrhages, cotton-wool spots, and many blocked blood vessels, which cause abnormal growth factor secretion.
- PDR – Growth factors induce proliferation of new blood vessels on the inside surface of the retina; the new vessels are fragile and may leak or bleed, and scar tissue from them can cause retinal detachment.

Fig: The severity scales of DR [14]

This is an ongoing problem on Kaggle [16], which aims to develop a model for DR detection. Our dataset is taken from the challenge data. The dataset consists of high-resolution eye images graded by trained professionals into 5 classes (0-4), according to the table and figure below [17].

Table: Class name descriptions

Class name | Meaning
Class 0    | Normal eye
Class 1    | Mild DR eye
Class 2    | Moderate DR eye
Class 3    | Severe DR eye
Class 4    | Proliferative DR eye

Fig: 5 classes (0-4) of DR-affected eyes [17]: (a) Normal eye, (b) Mild DR eye, (c) Moderate DR eye, (d) Severe DR eye, (e) Proliferative DR eye

Chapter 4
Methodology

4.1 Hemorrhage Detection

The earliest sign of retinopathy is small red dots in the superficial layers of the retina. These are termed microaneurysms when they are small and, depending on their depth within the retina, they are termed hemorrhages. This occurs because of leakage from the blood vessels of the retina and indicates mild retinopathy. But when macular edema thickens within 2 disc diameters of the centre of the macula, this creates microvascular changes and causes leaking of plasma components in the area. This represents a moderate type of retinopathy. Since hemorrhages are hard to detect, we need some preprocessing in order to get a noise-free, bright, contrast-enhanced image.

The preprocessing steps to detect hemorrhages are:
(a) Resize the image to 512 x 512 pixels.
(b) Convert the RGB image into a grey-scale image.
(c) Use median filtering to remove artifacts such as vignetting.
(d) Equalise the image and enhance contrast by histogram equalization.

First we extract the green channel from the image, because in this channel the affected area is seen clearly and is easy to identify. Then we apply a median filter with a radius of 8 pixels to create a background image and subtract this background from the original image. This produces an image containing blood vessels and hemorrhages. Using a vessel mask we remove the blood vessels, which results in an image with the hemorrhages indicated, as sketched below.
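A minimal MATLAB sketch of this pipeline (Image Processing Toolbox) is given below; the input file name is hypothetical, the 17 x 17 median window only approximates the 8-pixel-radius filter, and the vessel mask is a placeholder for a separate vessel-segmentation step:

    rgb  = imresize(imread('fundus.jpg'), [512 512]);  % (a) hypothetical input, resized to 512 x 512
    gray = rgb2gray(rgb);                              % (b) grey-scale conversion
    gray = medfilt2(gray, [3 3]);                      % (c) median filtering against small artifacts
    gray = histeq(gray);                               % (d) histogram equalization
    green      = rgb(:,:,2);                           % lesions are clearest in the green channel
    background = medfilt2(green, [17 17]);             % large median filter (about 8 px radius) as background
    lesions    = imsubtract(background, green);        % dark lesions and vessels become bright
    vesselMask = false(size(lesions));                 % placeholder: substitute a real vessel segmentation
    hemorrhages = lesions;
    hemorrhages(vesselMask) = 0;                       % drop vessels, keep candidate hemorrhages
    imshow(hemorrhages, []);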

Fig: Retina with hemorrhages and exudates [18]

4.2 Exudate Detection

The method we have applied to detect exudates on the human retina is inspired by the work described in [35]. However, since our dataset has completely different characteristics, we have changed the approach in various respects.

That is why we describe every step and the reason behind taking it. Here we need to mention that we have used some library code provided by [19]. We have also used MATLAB version 2017a for this project, and the detection consists of the following steps:

(a) Preprocessing the image.

(b) Detection of the optic disc and other artifacts.
(c) Detection of exudates with respect to the optic disc and artifacts.

In the preprocessing step we first extract the intensity component from the image. Here we work with gray-scale images because exudates are most visible in such images. We then apply median filtering to reduce noise and histogram equalization to enhance contrast and brightness. The resulting image helps us to detect the optic disc and, accordingly, the exudates.

This works as the input image. Exudates have high intensity values, as does the optic disc. Therefore, in order to detect exudates, we first need to find the optic disc and then differentiate between the optic disc and exudates near and inside the optic disc area. To do this we assume that the optic disc is the largest and most circular part of the brightest portion of the image. We apply gray-scale closing to remove blood vessels in the retina, mostly in the optic disc area. Here we take a flat disc-shaped structuring element with a radius of eight. We threshold the image to binarize it and use the resulting image as a mask. The mask is then inverted before overlaying it onto the original image.

We then apply reconstruction by dilation on the overlaid image. We threshold the result and take the difference between the original image and the reconstructed image. Consequently, the high-intensity optic disc is detected and the rest is removed. A rough sketch of this step is given below.
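The following MATLAB sketch illustrates this detection step; the input file name and the threshold values are assumptions, since the exact values are not stated above:

    gray   = histeq(medfilt2(rgb2gray(imread('fundus.jpg'))));  % preprocessed intensity image (hypothetical file)
    closed = imclose(gray, strel('disk', 8));     % grey-scale closing removes vessels (radius 8)
    mask   = imbinarize(closed, 0.9);             % bright candidate regions (assumed threshold)
    marker = gray;
    marker(mask) = 0;                             % inverted mask overlaid on the original: bright areas suppressed
    recon  = imreconstruct(marker, gray);         % reconstruction by dilation
    bright = imsubtract(gray, recon);             % the difference keeps only the suppressed bright regions
    discMask = imbinarize(bright, 0.05);          % high-intensity optic-disc and artifact candidates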

In this part we faced a big problem with this approach. At the beginning of the process, vessels were removed by the gray-scale closing, and reconstruction was applied on the image created from the original image. Therefore the vessels in the optic disc area are reconstructed as well. The problem we face is that we do not get one big circular optic disc; rather, we actually detect two or three big connected components in this step. To solve this problem we applied an additional dilation to the final mask. As a result, the independent areas are connected together into a circular shape. Note that we have already detected artifacts and other bright spots in the image.

That is why, if we use too big a dilation, it can merge the optic disc with those areas. For the additional dilation we considered a flat disc-shaped structuring element with a radius of four. Since some bright artifacts are detected in this process along with the optic disc, we estimate an extra value for every component of the mask in order to distinguish between the features. These additional values are termed scores. Thus we have

Score = area × circularity^3

Here we have to pay attention to one case. Since a feature other than the optic disc can become much larger than the optic disc itself, we needed to give circularity more importance. We take elements larger than 1100 pixels as optic disc candidates, keeping the rest as artifacts. Here we do not classify small areas, which may turn out to be exudates, as artifacts. A sketch of this scoring step is given below.
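The sketch below illustrates this step; it assumes discMask is the binary candidate mask from the previous sketch, computes circularity as 4*pi*Area/Perimeter^2, and reads the rule as "the highest-scoring component larger than 1100 pixels is the optic disc", which is our interpretation rather than the exact rule used:

    discMask = imdilate(discMask, strel('disk', 4));      % additional dilation (radius 4) joins split pieces
    stats  = regionprops(discMask, 'Area', 'Perimeter', 'PixelIdxList');
    scores = zeros(numel(stats), 1);
    for k = 1:numel(stats)
        circ      = 4*pi*stats(k).Area / max(stats(k).Perimeter^2, eps);
        scores(k) = stats(k).Area * circ^3;               % Score = area * circularity^3
    end
    opticDiscMask = false(size(discMask));
    artifactMask  = false(size(discMask));
    [~, best] = max(scores);
    for k = 1:numel(stats)
        if stats(k).Area > 1100 && k == best
            opticDiscMask(stats(k).PixelIdxList) = true;  % largest, most circular bright component
        elseif stats(k).Area > 1100
            artifactMask(stats(k).PixelIdxList) = true;   % other large bright regions become artifacts
        end                                               % small regions are left for exudate detection
    end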

At this stage, after optic disc extraction and artifact detection, we detect the exudates. As before, high-intensity blood vessels are removed by gray-scale closing. Then we compute a standard deviation image, which shows the main characteristics of closely arranged exudates.

The resulting image is thresholded, taking the radius as six. We then remove the outside shape of the retina and fill the holes with imfill(). We apply a threshold to remove the optic disc and artifacts. Finally, the result is achieved when we apply a threshold at a level of 0.01 to the difference between the original image and the reconstructed one, as sketched below.
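A sketch of this exudate step is given below; gray, opticDiscMask and artifactMask are assumed to come from the earlier sketches, the standard-deviation window and retina threshold are illustrative, and only the final 0.01 difference threshold is taken from the text:

    closed = imclose(gray, strel('disk', 8));             % suppress vessels again
    sd     = stdfilt(closed, ones(13));                   % local standard deviation (window of about radius 6)
    cand   = imbinarize(mat2gray(sd));                    % high-variation candidate regions
    retina = imbinarize(gray, 0.05);                      % rough retina region (assumed threshold)
    cand   = imfill(cand & retina, 'holes');              % drop the background border and fill holes
    cand(opticDiscMask | artifactMask) = false;           % remove the optic disc and artifacts
    marker = gray;
    marker(cand) = 0;                                     % suppress candidates, then reconstruct
    recon  = imreconstruct(marker, gray);
    exudates = imbinarize(imsubtract(gray, recon), 0.01); % final difference threshold of 0.01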

The produced exudate mask image is overlaid onto the main image to get a proper view.

Fig: Exudates detection. (a) Original image (b) Exudates [15]

Chapter 5
Classification

After all the feature extraction has been done, we perform binary classification. Here we have used a neural network with an input layer, one hidden layer and an output layer (three layers in total). For this we have created a feed-forward backpropagation network (newff).

Here the term 'newfit' is for regression and 'newpr' is for classification; together they are covered by 'newff', the generic name, which is still available and gives better output in our classification [20]. First of all we have created a two-layer feed-forward network. The first layer consists of three 'tansig' neurons and the second layer has one 'purelin' neuron. Thus we have

net = newff(p,t,3,{'tansig','purelin'});

Here, p is the matrix of input vectors and t is the matrix of target vectors. For the input vectors we have used three components, namely the optic disc mask, the artifacts mask and the exudates mask. Then the network is simulated and its output is plotted.

Thus we have

y = sim(net,p);

We need to mention here that the network is trained for 5000 epochs, with a training goal of 0.01. When training is done, a .mat file is created and later loaded to test our datasets. Once loaded, we are able to distinguish an image as a good one or a bad one. Then the corr2 library function is used to find the correlation with the four classes of images; the test image is assigned to the class with which it correlates most. A sketch of this flow is given below. The related algorithm we used is based on the feed-forward neural network described in [21].
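A minimal sketch of this training and testing flow is shown below, using the deprecated but still available newff interface; p, t, testImage and classTemplates are placeholders (classTemplates would hold one representative grey-scale image per class, the same size as testImage):

    net = newff(p, t, 3, {'tansig','purelin'});   % hidden layer of 3 tansig neurons, purelin output
    net.trainParam.epochs = 5000;                 % trained for 5000 epochs
    net.trainParam.goal   = 0.01;                 % training goal of 0.01
    net = train(net, p, t);
    y   = sim(net, p);                            % simulate the trained network on the inputs
    save('drNet.mat', 'net');                     % store the trained model in a .mat file (name assumed)
    load('drNet.mat', 'net');                     % reload the model at test time
    corrs = zeros(1, numel(classTemplates));      % correlation of the test image with each class
    for c = 1:numel(classTemplates)
        corrs(c) = corr2(testImage, classTemplates{c});
    end
    [~, predictedClass] = max(corrs);             % assign the class with the highest correlation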

Chapter 6
Result and Discussion

Using 250 images as the training dataset, we obtain the classification results. We consider these results satisfactory according to our analysis. The test process took around 5-6 hours to run over the images. The whole dataset is also run for validation.

We have used a confusion matrix, which determines the accuracy of our classification, listing the correct predictions as 'true positives' or 'true negatives' and the incorrect ones as 'false positives' or 'false negatives'. This method also yields sensitivity, specificity, positive predictive values and more. Here we use plotconfusion(targets, outputs), which returns a confusion matrix plot for the target and output vectors; a minimal sketch of this evaluation is given below, and the resulting plot follows.
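A minimal sketch, assuming targets and outputs are one-hot matrices (classes by samples) as expected by the Neural Network Toolbox:

    plotconfusion(targets, outputs);              % confusion matrix plot (figure below)
    [c, cm] = confusion(targets, outputs);        % c = fraction misclassified, cm = count matrix
    fprintf('Accuracy: %.1f%%\n', 100 * (1 - c));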

Fig: Confusion matrix

From the figure we see that our procedure gives 87.5% accuracy in the results. We are satisfied with these results; although they include some inaccurate findings, they demonstrate a reasonably good approach based on morphological methods.

Chapter 7
Future Scope

Although we have faced many problems dealing with the findings, the features we have picked may seem simple, but they are efficient according to our study. A possible future approach would be based on richer feature extraction. One can study further in order to detect hemorrhages more specifically. We would expect such feature engineering to improve the method enough to perform the whole 5-class classification well.

Moreover, more can be done in exploring more nuanced data normalization and denoising techniques.

Chapter 8
Conclusion

Our project is an analysis of a model to identify the severity of DR from fundus photographs. Our method performed well in comparison to other methods. It is a fact that the better and more accurate the diagnosis, the more exact the treatment plan will be.

So diagnostic measures should aim for accuracy to enable an effective treatment regimen. In our study we were able to establish good accuracy in the diagnosis results.
