Affiliation:
1. School of Computing and Information Technology, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
Abstract
Machine learning (ML) has been applied to human gait data to predict appropriate assistive devices. However, uptake in medical settings remains low because of the black-box nature of ML models, which prevents clinicians from understanding how they operate. This has led to the exploration of explainable ML. Studies have recommended local interpretable model-agnostic explanations (LIME) because it builds sparse linear models around an individual prediction in its local vicinity, making it fast, and because it can be applied to any ML model. LIME, however, is not always stable. This research aimed to enhance LIME to attain stability by replacing its random sampling step with Gaussian mixture model (GMM) sampling. To test the performance of GMM-LIME, supervised ML models were used because prior studies reported accuracies above 90% on human gait data. Neural networks were adopted for the GaitRec dataset, and Random Forest (RF) was applied to the HugaDB dataset. Maximum accuracies attained were 95% for the multilayer perceptron and 99% for RF. Graphical results on stability and Jaccard similarity scores are presented for both the original LIME and GMM-LIME. Unlike the original LIME, GMM-LIME produced not only more accurate and reliable but also consistently stable explanations.
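The core idea the abstract describes, fitting a GMM to the data and drawing the LIME neighborhood from it instead of from random Gaussian perturbations, can be sketched as follows. This is a hedged illustration of the general technique, not the paper's implementation: the synthetic dataset, the `gmm_lime_explain` helper, the Ridge surrogate, and all parameter choices (kernel width, number of mixture components) are assumptions for the example.

```python
# Illustrative sketch of GMM-based neighborhood sampling for a LIME-style
# local surrogate (assumed interpretation of the GMM-LIME idea; not the
# authors' code). Dataset and hyperparameters here are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.mixture import GaussianMixture

# Toy stand-in for gait data (the paper uses GaitRec and HugaDB).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Black-box model to be explained (RF, as used on HugaDB in the paper).
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a Gaussian mixture to the training data; sampling from it replaces
# LIME's random perturbation step, so neighborhoods stay on the data
# manifold across runs, which is what stabilizes the explanations.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

def gmm_lime_explain(x, n_samples=1000, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around instance x
    using samples drawn from the GMM instead of raw Gaussian noise."""
    Z, _ = gmm.sample(n_samples)
    # Proximity weights: closer samples influence the surrogate more.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # Query the black box on the sampled neighborhood.
    p = model.predict_proba(Z)[:, 1]
    # Sparse/linear local surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

coefs = gmm_lime_explain(X[0])
print(coefs.shape)  # one local importance coefficient per feature
```

Because the GMM is fitted once and sampled with a fixed random state, repeated calls yield the same neighborhood and hence identical coefficients, which is the stability property (measurable via Jaccard similarity of top features) that the abstract attributes to GMM-LIME.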
Funder
International Laboratory of Dynamic Systems and Applications, National Research University Higher School of Economics