Sep 12, 2023

Protocol for Deep Learning-Based Classification of Vitreomacular Adhesion in Diabetic Macular Edema using SD-OCT

  • 1Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India;
  • 2School of Optometry and Vision Science, University of New South Wales, Sydney, NSW, 2052, Australia;
  • 3Department of Computer Science and Telecommunication Engineering, Noakhali Science and Technology University;
  • 4School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia;
  • 5The University of Sydney Save Sight Institute, Sydney, New South Wales, Australia;
  • 6Sydney Hospital and Sydney Eye Hospital, Sydney, New South Wales, Australia
Protocol citation: Brughanya Subramanian, A. Q. M. Sala Uddin Pathan, Maitreyee Roy, Dhanashree Ratra, Salil S. Kanhere, Matthew P. Simunovic, Rajiv Raman 2023. Protocol for Deep Learning-Based Classification of Vitreomacular Adhesion in Diabetic Macular Edema using SD-OCT. protocols.io https://dx.doi.org/10.17504/protocols.io.4r3l2295pl1y/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License,  which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: In development
We are still developing and optimizing this protocol
Created: September 09, 2023
Last Modified: September 12, 2023
Protocol Integer ID: 87591
Keywords: vitreomacular adhesion, machine learning, diabetic macular edema, diabetic retinopathy
Abstract
This study is a continuation of our previous work published in PLOS ONE [1]. This protocol provides a walk-through for designing a deep learning-based AI model that aids in classifying the vitreomacular adhesion observed in patients with diabetic macular oedema using SD-OCT images.
Introduction
Vitreomacular adhesion (VMA) describes residual adhesion between the vitreous and the macula, occurring within the context of an incomplete posterior vitreous detachment. It might not lead to any retinal abnormality, but it may exert traction on the underlying macula, distorting the retinal architecture and leading to vitreomacular traction (VMT). [2] VMA can be broadly classified as either focal (<1500 µm) or broad (≥1500 µm) based on the size of the adhesion. [3]

Researchers have so far concentrated mainly on the outer retina in OCT scans when searching for biomarkers that predict visual outcomes in patients with DME, whereas the vitreomacular interface has remained largely unexplored as a potential biomarker. Recently, it was shown to be of some importance in DME: in the presence of VMA, OCT biomarkers such as hyperreflective dots, bridging processes and inner nuclear layer cysts were more likely to be associated with visual impairment than in its absence. [1] It is therefore worthwhile to explore this further and define the role of the vitreomacular interface as a biomarker in DME.

The purpose of this protocol is to design an automated deep learning model that can effectively identify the presence of VMA and categorize OCT images into broad VMA, focal VMA, and control groups, which in turn can serve as a reliable and efficient diagnostic tool. This accurate classification has the potential to significantly benefit ophthalmologists in making well-informed decisions, thereby enhancing patient care.
Materials and methods
Study design: Prospective observational study

Sample: We will use the same sample from our previous study to design the AI model.
Collection method followed

In our previous study, we categorized participants into two groups: cases in the presence of VMA (+) and controls in the absence of VMA (−). The presence of VMA was identified according to the International Vitreomacular Traction Study Group classification [4], in which VMA is defined by the following criteria: (1) evidence of perifoveal vitreous cortex detachment from the retinal surface; (2) macular attachment of the vitreous cortex within a 3 mm radius of the fovea; and (3) no detectable change in foveal contour or underlying retinal tissues.

We followed the same International Vitreomacular Traction Study Group classification to further subdivide the VMA (+) group into two subgroups, focal (<1500 µm) or broad (≥1500 µm), based on the size of the adhesion. [4]
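
For illustration only, the size criterion can be expressed as a simple threshold. The function below is a hypothetical helper, not part of the study pipeline, and assumes the maximal adhesion width has already been measured in micrometres on the SD-OCT B-scan.

```python
# Hypothetical helper: classify a measured adhesion width (micrometres)
# using the 1500 µm cut-off described above.
def classify_vma(adhesion_width_um: float) -> str:
    """Return the VMA subgroup for a given adhesion width in micrometres."""
    return "focal VMA" if adhesion_width_um < 1500 else "broad VMA"

print(classify_vma(900))    # -> focal VMA
print(classify_vma(2100))   # -> broad VMA
```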

The medical records of all individuals were reviewed for baseline demographics, including age, gender, duration of DM, DR severity, cardiovascular comorbidities, dyslipidaemia, slit-lamp biomicroscopy examination, intraocular pressure, the number and type of anti-VEGF injections given, and dilated fundus evaluation. BCVA was measured with Snellen charts and converted to the logarithm of the minimum angle of resolution (logMAR).
Inclusion and Exclusion criteria
Inclusion and exclusion criteria will remain the same as in our prior work.
Inclusion:

Subjects were included in the study if they met the following criteria: (1) individuals (18 years or older) with type 2 diabetes mellitus and DME, (2) availability of SD-OCT scans of sufficient quality for grading, and (3) absence of any confounding ocular condition, other than DME, that could decrease visual acuity.

Exclusion:

Subjects were excluded if they exhibited any of the following: (1) vitreomacular interface abnormalities besides VMA, such as ERM, VMT, proliferative membranes, tractional retinal detachment, hazy media, vitreous haemorrhage and lamellar or full-thickness macular hole, (2) pre-existing retinal or macular disease other than DR or DME, and (3) SD-OCT images of quality insufficient for assessment.
Methods

1) Data processing

a. The dataset comprises high-quality SD-OCT TIFF images from broad VMA, focal VMA, and control group cases, ensuring a representative sample of each class.

b. Pre-processing steps involve noise reduction techniques, intensity normalization to address variations in brightness, and cropping to remove irrelevant regions (a minimal sketch follows this step).

c. Anonymization and patient privacy measures are implemented to comply with ethical guidelines and ensure confidentiality.
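
A minimal pre-processing sketch in Python, assuming OpenCV is available; the filter choice, crop margin, and target size are illustrative assumptions rather than the study's actual settings.

```python
import cv2
import numpy as np

def preprocess_oct(path: str, crop: int = 50, size: tuple = (224, 224)) -> np.ndarray:
    """Load an SD-OCT TIFF B-scan, denoise, crop borders, normalize intensity, resize."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)              # read B-scan as grayscale
    img = cv2.medianBlur(img, 3)                              # simple speckle-noise reduction
    img = img[crop:-crop, crop:-crop]                         # remove uninformative borders
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)   # intensity normalization
    img = cv2.resize(img, size)                               # uniform input size for the network
    return img.astype(np.float32) / 255.0                     # scale to [0, 1]

# example: x = preprocess_oct("scan_001.tif")
```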

2) Data Annotation and Labelling

a. In our previous work, two graders (optometrists) reviewed each OCT image, annotated the data, and assigned appropriate labels, classifying each scan as broad VMA, focal VMA, or control based on specific diagnostic criteria. In case of discrepancies, the annotations were examined by an expert (retina specialist).

b. In this study, we will use the pre-annotated data to train the AI model (a loading sketch follows).
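
A minimal sketch of how the pre-annotated labels might be loaded for training; the file name vma_annotations.csv and the column names filename and label are assumptions, not the study's actual schema.

```python
import pandas as pd

# One row per graded OCT image; "label" holds the grader-assigned class.
labels = pd.read_csv("vma_annotations.csv")
CLASSES = ["control", "focal VMA", "broad VMA"]
labels["class_id"] = labels["label"].map({c: i for i, c in enumerate(CLASSES)})
print(labels["label"].value_counts())          # inspect the class distribution
```
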
3) Data Split and Validation

a. Divide the dataset into training and testing subsets to facilitate model development and evaluation.

b. The split ratio is determined, considering factors such as dataset size, class distribution, and the need for robust performance evaluation.

c. Employ stratified splitting so that each subset contains a proportional representation of broad VMA, focal VMA, and control group images, maintaining the original class distribution; imbalanced-learning techniques may additionally be applied if class imbalance affects training (see the split sketch after this step).
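
A sketch of a stratified train/test split that preserves the class proportions, continuing the hypothetical labels table above; the 80/20 ratio and the random seed are illustrative assumptions.

```python
from sklearn.model_selection import train_test_split

train_files, test_files, y_train, y_test = train_test_split(
    labels["filename"], labels["class_id"],
    test_size=0.2,                  # assumed 80/20 split; adjust to dataset size
    stratify=labels["class_id"],    # keep broad/focal/control proportions identical in both subsets
    random_state=42,                # fixed seed for reproducibility
)
```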

4) Deep Learning Model Architecture

a. Select suitable deep learning architectures for the classification task, considering previous research and the complexity of the problem (one candidate is sketched after this step).

b. Describe the chosen architectures in detail, including the layers, activation functions, and any specific modifications made for the VMA classification task.

c. Use the selected architectures to capture relevant features from the OCT images and facilitate accurate classification.
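
One candidate architecture, shown only as an assumption rather than the study's final choice: an ImageNet-pretrained ResNet50 backbone with a three-class softmax head, built with Keras/TensorFlow.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                                    # freeze backbone for initial training
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.3)(x)
out = tf.keras.layers.Dense(3, activation="softmax")(x)   # control / focal VMA / broad VMA
model = tf.keras.Model(base.input, out)
# Grayscale B-scans can be stacked to three channels to match the pretrained input.
```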

5) Model Training

a. Train the deep learning model using the labelled training subset of the dataset.

b. Specify the optimization algorithm and its hyperparameters.

c. Determine the batch size, number of epochs, and early-stopping criteria to prevent overfitting and achieve optimal training performance.

d. Employ data augmentation techniques to increase the dataset's diversity and enhance model generalization.
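
An illustrative training configuration, continuing the hypothetical model above; the optimizer, learning rate, batch size, epoch count, early-stopping patience, and augmentation settings are assumptions to be tuned, not prescribed values.

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),   # assumed optimizer and learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

augment = tf.keras.Sequential([                 # on-the-fly data augmentation
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomTranslation(0.05, 0.05),
])

early_stop = tf.keras.callbacks.EarlyStopping(  # stop when validation loss plateaus
    monitor="val_loss", patience=10, restore_best_weights=True)

# model.fit(train_ds.map(lambda x, y: (augment(x, training=True), y)),
#           validation_data=val_ds, epochs=100, callbacks=[early_stop])
# (train_ds / val_ds are assumed tf.data pipelines already batched, e.g. batch size 16)
```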

6) Model Evaluation and Validation

a. Define evaluation metrics, including accuracy, precision, recall, F1-score, and AUC, to assess the model's performance (see the evaluation sketch after this step).

b. Evaluate the trained model using the testing subset to measure its classification accuracy and generalization ability.

c. Calculate performance metrics for each class separately to analyze the model's effectiveness in identifying VMA Broad, VMA focal, and Control group OCT images.
d. Discuss potential challenges or limitations encountered during the evaluation process and highlight areas for improvement.
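
A sketch of the per-class evaluation on the held-out test set, assuming the hypothetical pipeline above; the class order (control, focal VMA, broad VMA) and the names y_test and test_ds are assumptions.

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

def evaluate(y_true: np.ndarray, probs: np.ndarray) -> None:
    """Print per-class precision/recall/F1 and macro one-vs-rest AUC."""
    y_pred = np.argmax(probs, axis=1)
    print(classification_report(
        y_true, y_pred, target_names=["control", "focal VMA", "broad VMA"]))
    print("macro one-vs-rest AUC:", roc_auc_score(y_true, probs, multi_class="ovr"))

# usage once the model is trained:
# evaluate(y_test, model.predict(test_ds))
```
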
7) Model Optimization and Fine-Tuning

a. Outline a model optimization procedure based on the initial evaluation results.
b. Employ hyperparameter tuning, architecture modifications, or ensemble methods to improve the model's performance (one fine-tuning option is sketched after this step).
c. Validate the optimized model using a separate testing subset to ensure unbiased assessment and verify enhanced classification accuracy.
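
One possible fine-tuning step, continuing the hypothetical ResNet50 sketch above (an assumption, not the protocol's mandated procedure): unfreeze the upper backbone layers and re-train at a lower learning rate.

```python
import tensorflow as tf

base.trainable = True
for layer in base.layers[:-30]:      # keep the lower backbone layers frozen
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),   # smaller learning rate for fine-tuning
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[early_stop])
```
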
8) Result Analysis and Interpretation
a. Present the classification results obtained by the trained deep learning model for VMA Broad, VMA focal, and Control group OCT images.
b. Provide in-depth analysis and interpretation of the results, identifying patterns, correlations, and potential insights into the classification performance.
c. Compare the performance of the developed models with existing approaches or expert annotations, highlighting the advantages and limitations.
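
An illustrative way to compare model output with the graders' labels; Cohen's kappa is a suggested statistic here, not one prescribed by the protocol, and y_test / test_ds follow the hypothetical pipeline above.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def agreement(y_expert: np.ndarray, probs: np.ndarray):
    """Confusion matrix and Cohen's kappa between expert labels and model output."""
    y_pred = np.argmax(probs, axis=1)
    cm = confusion_matrix(y_expert, y_pred, labels=[0, 1, 2])   # rows: true control/focal/broad
    return cm, cohen_kappa_score(y_expert, y_pred)

# usage: cm, kappa = agreement(y_test, model.predict(test_ds))
```
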
Timeline of the study
The study is planned for a duration of 6 months.
Discussion
OCT has become indispensable in diagnosing various retinal conditions such as DR, AMD, ERM and MH, alongside other techniques such as fundus photography and fluorescein angiography. [7-9] Deep learning (DL), a subset of machine learning (ML), is revolutionizing how we approach the diagnosis and management of medical conditions. [10] Advanced DL methods can effectively identify pathological features, and in recent years several ML methods have emerged for recognizing OCT images in patients with significant eye disorders, including DR, AMD, ERM, glaucoma and CSCR. [11-15]

The automatic detection of abnormal signs in retinal OCT images serves as a crucial component in diagnosing retinal pathologies. This capability provides ophthalmologists with valuable insights, aiding them in the decision-making process. In conclusion, it lays the foundation for a more comprehensive and accessible approach to diagnosing and managing retinal diseases.

Study limitations:
1) High-performance pre-processing is required to enhance image quality.
2) Deep learning (DL) models are often considered "black boxes": it is difficult to understand how they arrive at their predictions, and this limited interpretability can create trust issues among healthcare professionals and patients.
3) The DL model will be trained on a relatively small sample because of limited data availability.
Protocol references
1) Subramanian B, Devishamani C, Raman R, Ratra D. Association of OCT biomarkers and visual impairment in patients with diabetic macular oedema with vitreomacular adhesion. PLOS ONE. 2023 Jul 18;18(7):e0288879.

2) Duker JS, Kaiser PK, Binder S, de Smet MD, Gaudric A, Reichel E et al. The International Vitreomacular Traction Study Group classification of vitreomacular adhesion, traction, and macular hole. Ophthalmology 2013; 120:2611-9.

3) Jackson TL, Nicod E, Simpson A, Angelis A, Grimaccia F, Kanavos P. Symptomatic vitreomacular adhesion. Retina 2013; 33:1503-11.

4) Duker JS, Kaiser PK, Binder S, de Smet MD, Gaudric A, Reichel E et al. The International Vitreomacular Traction Study Group classification of vitreomacular adhesion, traction, and macular hole. Ophthalmology 2013; 120:2611-9.

5) Ting DS, Cheung CY, Lim G, Tan GS, Quang ND, Gan A, Hamzah H, Garcia-Franco R, San Yeo IY, Lee SY, Wong EY. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017 Dec 12;318(22):2211-23.

6) Lains I, Wang JC, Cui Y, Katz R, Vingopoulos F, Staurenghi G, Vavvas DG, Miller JW, Miller JB. Retinal applications of swept source optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA). Progress in retinal and eye research. 2021 Sep 1;84:100951.

7) Schneider EW, Fowler SC. Optical coherence tomography angiography in the management of age-related macular degeneration. Current opinion in ophthalmology. 2018 May 1;29(3):217-25.

8) Corvi F, Cozzi M, Invernizzi A, Pace L, Sadda SR, Staurenghi G. Optical coherence tomography angiography for detection of macular neovascularization associated with atrophy in age-related macular degeneration. Graefe's Archive for Clinical and Experimental Ophthalmology. 2021 Feb;259:291-9.

9) Lindtjørn B, Krohn J, Forsaa VA. Optical coherence tomography features and risk of macular hole formation in the fellow eye. BMC ophthalmology. 2021 Dec;21(1):1-7.

10) Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y. Artificial intelligence in healthcare: past, present and future. Stroke and vascular neurology. 2017 Dec 1;2(4).

11) Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016 Dec 13;316(22):2402-10.

12) Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology. 2018 Aug 1;125(8):1199-206.
13) Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA ophthalmology. 2017 Nov 1;135(11):1170-6.
14) Lo YC, Lin KH, Bair H, Sheu WH, Chang CS, Shen YC, Hung CL. Epiretinal membrane detection at the ophthalmologist level using deep learning of optical coherence tomography. Scientific reports. 2020 May 21;10(1):8424.
15) Crincoli E, Savastano MC, Savastano A, Caporossi T, Bacherini D, Miere A, Gambini G, De Vico U, Baldascino A, Minnella AM, Scupola A. New artificial intelligence analysis for prediction of long-term visual improvement after epiretinal membrane surgery. Retina. 2022 Aug 17:10-97.