September 2022: Applying Machine Learning to Specific Learning Disability Screening


Mor, N. S., & Dardeck, K. (2021). Applying a convolutional neural network to screen for specific learning disorder. Learning Disabilities: A Contemporary Journal, 19(2), 161-169. https://eric.ed.gov/?id=EJ1314763

Summary by Diana Salem and Dr. Yusra Ahmed

Background

Specific learning disabilities (SLD) are neurodevelopmental disorders that impede the ability to read, write, or understand mathematics and vary in magnitude of impairment. Dyslexia, dysgraphia, and dyscalculia are common types of SLD, with dyslexia being the most common. SLD negatively impact spelling, reading, expression, fluency, mathematical calculations, memory, and writing (including problems with handwriting, grammar, punctuation, and clarity and organization of written expression).

Undiagnosed SLD can negatively impact children's learning experiences and interfere with their ability to thrive in an academic setting. Research shows that children with learning disabilities are often isolated among their peers due to feelings of inadequacy (Cavioni et al., 2017). Academic challenges can hinder social functioning and contribute to emotional concerns with self-esteem, motivation, and mental health. Children who do not experience social difficulties often end up aligning and conforming to social groups that demonstrate high-risk behaviors (Cavioni et al., 2017). Children with SLD are more likely to succumb to the social pressures of their peers in order to gain acceptance (Bryan et al., 1989). Early detection therefore prevents academic and emotional suffering by enabling timely interventions and accommodations.

One challenge to the early identification of SLD is that there is no "gold standard" that reliably identifies or classifies all children. Processes used to identify SLD vary across settings (e.g., schools and clinics), districts, and states. Further, theoretically and mathematically derived models (e.g., low achievement and discrepancy models) have not demonstrated adequate validity, convergence, or stability over time. Improving the identification and classification process is therefore necessary.

Machine Learning and Diagnosis

Comprehensive evaluations of SLD must use a variety of technically sound assessment tools and assess children in all areas of a suspected disability (i.e., it is a data-gathering process; see Fletcher & Miciak, 2019). Data-driven detection of SLD using machine learning is relatively recent in the scientific literature, but these models offer a promising avenue for refining existing identification methods. Machine learning is a subset of artificial intelligence in which software is trained with examples in order to make predictions, such as whether a child has an SLD. Machine learning models can outperform humans on many tasks, such as medical diagnosis based on visual data (e.g., skin cancer classification).

Mor and Dardeck (2021) use deep convolutional neural networks (CNNs), a machine learning technique, to detect SLD in students. This is the first study to use a data set of handwriting images to screen for SLD. The authors argue that symptoms may be visible in handwriting because SLD hinder writing skills. Research shows that a person with an SLD may produce written assignments with poorer handwriting, greater variation in character size, lower clarity, and longer completion times than their typically developing peers.

Deep learning applications are increasingly used for medical diagnoses and mental disorder screening because of advances in computation, the availability of very large data sets, and emerging new techniques. Deep learning algorithms can surpass human perception in accuracy and performance when recognizing complex patterns in data. Deep CNNs are artificial neural networks that perform complex visual tasks, such as image, facial, and object detection, as well as pattern recognition (Albawi et al., 2018). The present study uses the deep CNN MobileNetV2 to detect SLD based on handwriting. This lightweight model can be deployed on mobile devices, allowing rapid inference from a photo taken with a phone.

Methods and Model

To conduct this study, the authors collected handwriting samples from 152 high school students between the ages of 15 and 18. Seventeen participants had a previous diagnosis of SLD, and participants provided pages of handwriting from their old notebooks, roughly 500 pages in total, which were scanned and examined.

Handwriting images (224 x 224 pixels) were input into MobileNetV2, a model pretrained on the ImageNet data set, which extracted visual features from each image. ImageNet is a database that contains millions of photographs spanning 1,000 groups (i.e., object categories). The authors used transfer learning (applying knowledge from one type of visual problem to solve a similar problem) because visual features such as edges are broadly relevant to images from different domains. Transfer learning involved modifying the architecture of the pretrained MobileNetV2 model to suit the current study. The final layer of their model produced the output value, which was classified into the study's two predictive categories: diagnosis of an SLD or no diagnosis of an SLD.
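The paper does not include code, but the transfer-learning setup described above can be sketched roughly as follows in Keras; the pooling layer, single sigmoid output unit, and optimizer are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of the transfer-learning setup described above (Keras).
# The pooling layer, sigmoid output, and optimizer are assumptions,
# not the authors' exact configuration.
import tensorflow as tf

# MobileNetV2 pretrained on ImageNet, with its 1,000-class output layer removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # keep the pretrained visual features (e.g., edges) fixed

# Replace the final layer with a single sigmoid unit: SLD vs. no SLD.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```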

The data set of 497 handwriting images was split at random into a training set (447 images) and a testing/validation set (50 images); the validation set was used to assess the accuracy of the model. The authors trained the model for 25 epochs and reached maximum accuracy after 21 epochs. An epoch is one complete pass of the entire training data set through the neural network. The number of epochs is limited to avoid overfitting (i.e., when the model learns patterns that harm its ability to generalize).
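Assuming the scanned pages are already loaded as arrays, the split and training run described above might look like the following sketch; the placeholder arrays, shuffling seed, and batch size are assumptions beyond the 447/50 split and 25 epochs reported.

```python
# Sketch of the 447/50 split and 25-epoch training run, continuing from the
# `model` built in the previous sketch. `images` and `labels` are placeholders
# for the 497 preprocessed handwriting images and their 0/1 SLD labels.
import numpy as np

images = np.zeros((497, 224, 224, 3), dtype=np.float32)  # placeholder data
labels = np.zeros(497, dtype=np.int32)                   # placeholder labels

rng = np.random.default_rng(seed=0)
order = rng.permutation(len(images))           # shuffle the 497 images
train_idx, val_idx = order[:447], order[447:]  # 447 training, 50 validation

model.fit(
    images[train_idx], labels[train_idx],
    validation_data=(images[val_idx], labels[val_idx]),
    epochs=25,       # the authors report peak accuracy after 21 epochs
    batch_size=16,   # assumed; not reported in the summary
)
```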

Results

Validation and accuracy metrics, including the F-score, AUC, precision, recall, and accuracy, were obtained from the validation set and used to measure the efficacy of the model. True positives and true negatives are outcomes where the model correctly predicts an SLD diagnosis or no diagnosis, respectively. False positives and false negatives are outcomes where the model incorrectly classifies individuals (e.g., those with an SLD diagnosis are classified as not having an SLD). As the authors explain (a short code sketch after the list writes these definitions out):

  • Accuracy is defined as all true predictions of the model divided by the total of all predictions.
  • Precision is defined as true positives divided by the sum of true positives and false positives.
  • Recall is defined as true positives divided by the sum of true positives and false negatives.
  • The F-score is a balanced metric, defined as a weighted average of precision and recall.
  • The area under the curve (AUC) of a classifier refers to the probability that a classifier will rank a randomly chosen SLD case higher than a non-SLD case. Higher values indicate better model performance.
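Written out in code, these definitions look like the following; the confusion-matrix counts below are placeholders for illustration, not values reported by the authors.

```python
# The metric definitions above, written out directly. tp, fp, tn, fn are
# placeholder confusion-matrix counts, not values reported in the paper.
tp, fp, tn, fn = 40, 2, 5, 3

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # all true predictions / all predictions
precision = tp / (tp + fp)                    # true positives / predicted positives
recall    = tp / (tp + fn)                    # true positives / actual positives
f_score   = 2 * precision * recall / (precision + recall)  # balance of precision and recall

# AUC is computed from the model's ranked scores rather than a single
# threshold, e.g., with sklearn.metrics.roc_auc_score(y_true, y_score).
print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, "
      f"recall={recall:.2f}, F-score={f_score:.2f}")
```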

The results indicated good model performance (AUC = 0.89, precision = 0.94, recall = 0.89, F-score = 0.91, and accuracy = 0.92).

Discussion

The findings demonstrate the feasibility of using deep learning algorithms to screen for and detect SLD, assisting (not replacing) existing diagnostic processes (e.g., observations, interviews, family history, school reports, neuropsychological assessment) and helping to better characterize SLD. The results establish a predictive model that is effective in detecting SLD, especially when compared to other applications of deep learning for mental disorders. The study suggests that the model can aid the early detection of SLD in students so that the necessary interventions can be applied to improve the quality of learning.

A smartphone application makes early SLD screening accessible. Handwriting samples collected for initial screening are input into a deep learning model that returns results quickly and accurately. The technology is fairly affordable and simple to use: a picture of the handwriting is taken with a smartphone and input into the model, which then provides a result.
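Although the authors' application code is not published, the screening step described above (photograph the handwriting, feed it to the model, read the result) could look roughly like this; the file name, preprocessing call, and 0.5 decision threshold are assumptions for illustration.

```python
# Sketch of the screening step described above: load a photo of handwriting,
# resize it to the model's 224 x 224 input, and read the model's prediction.
# The file name, preprocessing, and 0.5 threshold are assumptions.
import numpy as np
import tensorflow as tf

img = tf.keras.utils.load_img("handwriting_photo.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)
)
x = np.expand_dims(x, axis=0)  # batch of one image

score = float(model.predict(x)[0][0])  # `model` from the earlier sketches
print("possible SLD" if score >= 0.5 else "no SLD indicated", f"(score={score:.2f})")
```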

Because of the participants' language (Hebrew), the small data set collected in this study, and the participants' age range, the results are limited and cannot be generalized to all populations. Recommendations for future work are to test the algorithm on students of different ages and language backgrounds and to collect a larger data set. The authors recommend that a new system be developed, tested, and examined for every population of interest.

References

Albawi, S., Bayat, O., Al-Azawi, S., & Ucan, O. N. (2018). Social touch gesture recognition using convolutional neural network. Computational Intelligence and Neuroscience. https://doi.org/10.1155/2018/6973103

Bryan, T., Pearl, R., & Fallon, P. (1989). Conformity to peer pressure by students with learning disabilities: A replication. Journal of Learning Disabilities, 22(7), 458-459. https://doi.org/10.1177/002221948902200713

Cavioni, V., Grazzani, I., & Ornaghi, V. (2017). Social and emotional learning for children with learning disability: Implications for inclusion. International Journal of Emotional Education, 9(2), 100-109. https://www.um.edu.mt/library/oar/handle/123456789/24346

Fletcher, J. M., & Miciak, J. (2019). The identification of specific learning disabilities: A summary of research on best practices. Austin, TX: Texas Center for Learning Disabilities. https://texasldcenter.org/library/resource/the-identification-of-specific-learning-disabilities-a-summary-of-research