Author:
Svantesson Mats, Eklund Anders, Thordstein Magnus
Abstract
Background
Expert interrater agreement for epileptiform discharges can be moderate. This will reasonably affect performance when developing classifiers based on expert annotations. In addition, evaluating classifier performance will be difficult since the ground truth itself is variable. In this pilot study, these aspects were investigated to evaluate the feasibility of conducting a larger study on the subject.
Methods
A multi-channel EEG of 78 minutes' duration with abundant periodic discharges was independently annotated for epileptiform discharges by two experts. Based on these annotations, several deep learning classifiers were developed, which in turn produced new annotations. The agreement of all annotations was evaluated by pairwise comparisons using Cohen's kappa and Gwet's AC1. A cluster analysis was performed on all periodic discharges, using a newly developed version of parametric t-SNE, to assess the similarity between annotations.
Results
The Cohen's kappa values were 0.53 between the experts, 0.52–0.65 between experts and classifiers, and 0.67–0.82 between classifiers. The corresponding Gwet's AC1 values were 0.92 between the experts, 0.92–0.94 between experts and classifiers, and 0.94–0.96 between classifiers. Although the annotations differed regarding which discharges were selected as epileptiform, the selected discharges were mostly similar according to the cluster analysis. Almost all epileptiform discharges identified by the classifiers were also periodic discharges.
Conclusions
There was a discrepancy between the agreement scores produced by Cohen's kappa and Gwet's AC1, probably due to the skewed prevalence of epileptiform discharges, which constitute only a small part of the whole EEG. Gwet's AC1 is often considered the better option, and its results would indicate an almost perfect agreement. However, this conclusion is questionable considering the number of differently classified discharges. The difference in annotations between experts affected the learning of the classifiers, but the cluster analysis indicates that all annotations were relatively similar. The difference between experts and classifiers is speculated to be partly due to intrarater variability of the experts and partly due to underperformance of the classifiers. For a larger study, in addition to using more experts, intrarater agreement should be assessed, the classifiers can be further optimized, and the cluster method can hopefully be further improved.
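The discrepancy between Cohen's kappa and Gwet's AC1 under skewed prevalence can be illustrated with a small, purely hypothetical simulation; this is not the study's code, and the prevalence, error rates, and segment count below are illustrative assumptions only. Two synthetic raters label a rare positive class (standing in for epileptiform segments within a long EEG), and the same pair of annotations yields a moderate kappa but a high AC1.

```python
# Hypothetical sketch (not the study's code): skewed prevalence can make
# Cohen's kappa moderate while Gwet's AC1 stays high for the same annotations.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def gwet_ac1(a, b):
    """Gwet's AC1 for two raters and a binary label (0 = background, 1 = epileptiform)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                # observed agreement
    pi = (a.mean() + b.mean()) / 2      # mean positive rate across the two raters
    pe = 2 * pi * (1 - pi)              # chance agreement under the AC1 model
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(0)
n = 10_000                              # assumed number of labeled EEG segments
truth = rng.random(n) < 0.05            # rare positive class (skewed prevalence)
# Two simulated raters, each disagreeing with the truth on a small fraction of segments
rater1 = truth ^ (rng.random(n) < 0.03)
rater2 = truth ^ (rng.random(n) < 0.03)

print("Cohen's kappa:", cohen_kappa_score(rater1, rater2))  # roughly 0.6 in this setup
print("Gwet's AC1:  ", gwet_ac1(rater1, rater2))            # roughly 0.93 in this setup
```

In this toy setting the raters agree on almost all segments because the negative class dominates; Cohen's chance-agreement correction is driven up by that imbalance, whereas AC1's correction is not, mirroring the pattern of scores reported above.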
Publisher
Cold Spring Harbor Laboratory