Abstract
Magnetic resonance imaging (MRI) and computed tomography (CT) are reliable imaging modalities in modern medicine, providing clear images for diagnosis by physicians and radiologists. MRI and CT scans are especially important in neuro-oncology for imaging tumors after a patient presents with symptoms indicating brain cancer. Although imaging produces a lucid depiction of possible cancerous growth in the brain, inspection by a physician can be challenging because of subtleties in the image or human error. A diagnosis from imaging alone can also never be definitive, as a biopsy is the only diagnostic test that can confirm meningioma growth; a physician could mistake a noncancerous cyst located near the meninges of the brain for a meningioma tumor. Furthermore, the World Health Organization (WHO) grades of tumors can be difficult to differentiate. One possible remedy for these human limitations is the convolutional neural network (CNN), a machine learning method commonly used for image feature extraction and classification. For this primary research, a multimodal CNN was given training and testing data covering different types of brain cancer, to test whether it could correctly classify CT and MRI scans of meningioma against glioma, pituitary tumor, and scans with no tumor. The no-tumor dataset included noncancerous cysts, as mentioned above, that could be confused with meningioma. A separate CNN was given training and testing data on meningioma tumors of WHO grades one through three. The CNNs were run in a private GPU environment in a Visual Studio Jupyter Notebook and were given input data as standardized JPEG image files from research institutes around the world. The patient data spanned various ages, nationalities, and both genders. Transfer learning, in which a model trained on one problem is reused to solve another, was used to train the models.
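The transfer-learning setup described above can be sketched in miniature: a pretrained backbone is kept frozen as a fixed feature extractor, and only a small new classification head is trained on the target task. The sketch below is illustrative only; the frozen random projection, toy data, and two-class labels are stand-ins (hypothetical, not the study's network or scans), chosen so the example runs self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen, pretrained CNN backbone: in the real
# pipeline this would be a pretrained network mapping each scan to a feature
# vector. Here a fixed (never-updated) random projection plays that role.
W_frozen = rng.normal(size=(64, 16)) / 8.0   # "pretrained" weights, frozen

def frozen_features(x):
    # ReLU feature extractor; its parameters are not trained.
    return np.maximum(x @ W_frozen, 0.0)

# Toy two-class data standing in for e.g. "meningioma" vs. "no tumor".
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# New classification head: the ONLY part that is trained (transfer learning).
w = np.zeros(16)
b = 0.0
lr = 0.05

def loss_and_grad(w, b):
    F = frozen_features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid output
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad_w = F.T @ (p - y) / len(y)          # cross-entropy gradients
    grad_b = np.mean(p - y)
    return loss, grad_w, grad_b

losses = []
for _ in range(200):
    loss, gw, gb = loss_and_grad(w, b)
    losses.append(loss)
    w -= lr * gw                             # gradient descent on the head only
    b -= lr * gb
```

Freezing the backbone is the usual first stage of transfer learning; fine-tuning some backbone layers at a low learning rate is a common follow-up step.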
The models achieved accuracies above 98%, with an upward trend across the twelve epochs run, indicating stability. Recall and precision scores were also high, indicating quality. The AUC scores were all above 0.99, reflecting the CNN's threshold-invariant and scale-invariant performance. Finally, an attention study demonstrated the CNN's tendency to direct most of its attention to the tumor mass itself rather than to extraneous variables.
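The metric types reported above can be made concrete with a small sketch: precision and recall from a confusion matrix, and AUC via its rank-based (Mann-Whitney) formulation, which is what makes it threshold- and scale-invariant. The labels and scores below are hypothetical toy values, not the study's data.

```python
def precision_recall(y_true, y_pred):
    # Counts from the confusion matrix for the positive class.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

def auc(y_true, scores):
    # Mann-Whitney formulation: the probability that a randomly chosen
    # positive case scores higher than a randomly chosen negative case.
    # Only the ranking of scores matters, so the value is unchanged by any
    # monotone rescaling of the scores (threshold/scale invariance).
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0]             # toy ground truth (1 = tumor class)
scores = [0.9, 0.8, 0.4, 0.3, 0.1]   # toy classifier scores
y_pred = [1 if s >= 0.5 else 0 for s in scores]

prec, rec = precision_recall(y_true, y_pred)
roc_auc = auc(y_true, scores)
```

Note that doubling every score leaves `auc` unchanged, while precision and recall depend on the 0.5 decision threshold; this is the invariance the abstract refers to.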
Publisher
Cold Spring Harbor Laboratory