Author:
Berthomier C., Muto V., Schmidt C., Vandewalle G., Jaspar M., Devillers J., Gaggioni G., Chellappa S. L., Meyer C., Phillips C., Salmon E., Berthomier P., Prado J., Benoit O., Brandewinder M., Mattout J., Maquet J.
Abstract
Study Objectives: New challenges in sleep science require describing fine-grained phenomena or dealing with large datasets. Besides the human-resource challenge of scoring huge datasets, inter- and intra-expert variability may also reduce the sensitivity of such studies. To disentangle the variability induced by the scoring method from the actual variability in the data, visual and automatic sleep scorings of healthy individuals were examined.
Methods: A first dataset (DS1, 4 recordings) scored by 6 experts plus an autoscoring algorithm was used to characterize inter-scoring variability. A second dataset (DS2, 88 recordings) scored a few weeks later was used to investigate intra-expert variability. Percentage agreements and Conger's kappa were derived from epoch-by-epoch comparisons of pairwise, consensus, and majority scorings.
Results: On DS1, the number of epochs of agreement decreased as the number of experts increased, for both majority and consensus scoring, with agreement ranging from 86% (pairwise) to 69% (all experts). Adding autoscoring to the visual scorings changed the kappa value from 0.81 to 0.79. Agreement between the expert consensus and autoscoring was 93%. On DS2, intra-expert variability was evidenced by a systematic decrease in kappa between autoscoring and each single expert across datasets (from 0.75 to 0.70).
Conclusions: Visual scoring induces inter- and intra-expert variability, which is difficult to address, especially in big-data studies. When proven reliable and perfectly reproducible, autoscoring methods can cope with intra-scorer variability, making them a sensible option when dealing with large datasets.
Statement of Significance: We confirmed and extended previous findings highlighting the intra- and inter-expert variability in visual sleep scoring. On large datasets, these variability issues cannot be completely addressed by either practical or statistical solutions such as group training, majority scoring, or consensus scoring. When an automated scoring method can be proven to be as reasonably imperfect as visual scoring, yet perfectly reproducible, it can serve as a reliable scoring reference for sleep studies.
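The agreement metrics named in the abstract (epoch-by-epoch percentage agreement and Conger's multi-rater kappa) can be illustrated with a minimal sketch. The code below is not the authors' pipeline; the function names and toy stage labels are illustrative assumptions, and it only shows the standard formulas: observed agreement over rater pairs per epoch, and chance agreement from per-rater category proportions (Conger's generalization of Cohen's kappa).

```python
import numpy as np

def conger_kappa(ratings):
    """Conger's multi-rater kappa for epoch-by-epoch scorings.

    ratings : (n_epochs, n_raters) integer array of stage labels.
    """
    ratings = np.asarray(ratings)
    n, r = ratings.shape
    categories = np.unique(ratings)

    # counts[i, k] = number of raters assigning category k to epoch i
    counts = np.stack([(ratings == c).sum(axis=1) for c in categories], axis=1)

    # Observed agreement: proportion of agreeing rater pairs per epoch
    p_o = (counts * (counts - 1)).sum() / (n * r * (r - 1))

    # Per-rater marginal proportions p[g, k], their mean and variance per category
    p = np.stack([(ratings == c).mean(axis=0) for c in categories], axis=1)
    p_bar = p.mean(axis=0)
    s2 = p.var(axis=0, ddof=1)
    p_e = (p_bar ** 2 - s2 / r).sum()  # chance agreement (Conger)

    return (p_o - p_e) / (1 - p_e)

def pairwise_agreement(a, b):
    """Epoch-by-epoch percentage agreement between two scorings."""
    a, b = np.asarray(a), np.asarray(b)
    return 100.0 * (a == b).mean()

# Toy example: 3 hypothetical scorers over 5 epochs (stages coded as integers)
scores = np.array([[0, 0, 0],
                   [1, 1, 2],
                   [2, 2, 2],
                   [3, 3, 3],
                   [1, 2, 1]])
print(conger_kappa(scores))
print(pairwise_agreement(scores[:, 0], scores[:, 1]))
```

For two raters this reduces to Cohen's kappa, which is why the same coefficient can be reported for pairwise, majority, and consensus comparisons alike.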
Publisher
Cold Spring Harbor Laboratory