Authors:
Maxime Darrin, Ashwin Samudre, Maxime Sahun, Scott Atwell, Catherine Badens, Anne Charrier, Emmanuèle Helfer, Annie Viallat, Vincent Cohen-Addad, Sophie Giffard-Roisin
Abstract
The fraction of red blood cells adopting a specific motion under low shear flow is a promising inexpensive marker for monitoring the clinical status of patients with sickle cell disease. Its high-throughput measurement relies on the video analysis of thousands of cell motions for each blood sample to eliminate a large majority of unreliable samples (out of focus or overlapping cells) and discriminate between tank-treading and flipping motion, characterizing highly and poorly deformable cells, respectively. Moreover, these videos are of different durations (from 6 to more than 100 frames). We present a two-stage end-to-end machine learning pipeline able to automatically classify cell motions in videos with a high class imbalance. By extending, comparing, and combining two state-of-the-art methods, a convolutional neural network (CNN) model and a recurrent CNN, we are able to automatically discard 97% of the unreliable cell sequences (first stage) and classify highly and poorly deformable red cell sequences with 97% accuracy and an F1-score of 0.94 (second stage). The dataset and code are publicly released for the community.
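To make the two-stage design described in the abstract more concrete, the sketch below shows one way such a pipeline could be wired up. It is not the authors' released code: the layer sizes, the GRU recurrence over frames, the class names, and the class weights are illustrative assumptions, chosen only to show a per-frame CNN feeding a recurrent head (so sequences of 6 to 100+ frames fit), a first stage that discards unreliable clips, and a second stage that separates tank-treading from flipping.

```python
# Minimal sketch (assumptions, not the authors' implementation) of a
# two-stage video classification pipeline:
#   stage 1: reliable vs. unreliable cell sequence
#   stage 2: tank-treading vs. flipping motion
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Small CNN that embeds each grayscale frame into a feature vector."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):                 # x: (batch, frames, 1, H, W)
        b, t = x.shape[:2]
        feats = self.net(x.flatten(0, 1))  # (batch*frames, embed_dim)
        return feats.view(b, t, -1)        # (batch, frames, embed_dim)


class RecurrentCNNClassifier(nn.Module):
    """CNN per frame + GRU over time, so variable-length clips are handled."""
    def __init__(self, num_classes: int, embed_dim: int = 64):
        super().__init__()
        self.encoder = FrameEncoder(embed_dim)
        self.gru = nn.GRU(embed_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        seq = self.encoder(x)
        _, h = self.gru(seq)               # h: (1, batch, 64)
        return self.head(h[-1])            # logits: (batch, num_classes)


# Stage 1 filters unreliable sequences; stage 2 classifies the survivors.
stage1 = RecurrentCNNClassifier(num_classes=2)   # unreliable / reliable
stage2 = RecurrentCNNClassifier(num_classes=2)   # flipping / tank-treading

# A class-weighted loss is one common way to address strong class imbalance;
# the weights here are placeholders.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))

video = torch.randn(4, 20, 1, 64, 64)            # 4 clips, 20 frames each
keep = stage1(video).argmax(dim=1) == 1          # keep clips flagged reliable
if keep.any():
    motion_logits = stage2(video[keep])          # tank-treading vs. flipping
```

In a sketch like this, both stages share the same CNN-plus-recurrence backbone; the paper's contribution lies in extending, comparing, and combining the CNN and recurrent-CNN approaches and handling the heavy class imbalance, not in the specific layer choices shown here.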
Publisher
Springer Science and Business Media LLC
Cited by
13 articles.