DropTrack—Automatic droplet tracking with YOLOv5 and DeepSORT for microfluidic applications

Authors:

Mihir Durve (1), Adriano Tiribocchi (2), Fabio Bonaccorso (2,3), Andrea Montessori (4), Marco Lauricella (2), Michał Bogdan (5), Jan Guzowski (5), Sauro Succi (1,2,6)

Affiliations:

1. Center for Life Nano- & Neuro-Science, Fondazione Istituto Italiano di Tecnologia (IIT), viale Regina Elena 295, 00161 Rome, Italy

2. Istituto per le Applicazioni del Calcolo del Consiglio Nazionale delle Ricerche, via dei Taurini 19, 00185 Rome, Italy

3. Department of Physics and National Institute for Nuclear Physics, University of Rome “Tor Vergata,” Via Cracovia, 50, 00133 Rome, Italy

4. Dipartimento di Ingegneria, Università degli Studi Roma Tre, via Vito Volterra 62, 00146 Rome, Italy

5. Institute of Physical Chemistry, Polish Academy of Sciences, Kasprzaka 44/52, 01-224 Warsaw, Poland

6. Department of Physics, Harvard University, 17 Oxford St., Cambridge, Massachusetts 02138, USA

Abstract

Deep neural networks are rapidly emerging as data analysis tools, often outperforming the conventional techniques used in complex microfluidic systems. One fundamental analysis frequently desired in microfluidic experiments is counting and tracking the droplets. Droplet tracking in dense emulsions is particularly challenging because the droplets are inherently small and move in tightly packed configurations; sometimes the individual droplets in these dense clusters are hard to resolve even for a human observer. Here, two cutting-edge deep learning algorithms for object detection [you only look once (YOLO)] and object tracking (DeepSORT) are combined into a single image analysis tool, DropTrack, to track droplets in microfluidic experiments. DropTrack analyzes input microfluidic experimental videos, extracts the droplets' trajectories, and infers other observables of interest, such as droplet numbers. Training an object detector network for droplet recognition with manually annotated images is a labor-intensive task and a persistent bottleneck. In this work, this problem is partly resolved by training many object detector networks (YOLOv5) with several hybrid datasets containing real and synthetic images. We present an analysis of a double emulsion experiment as a case study to measure DropTrack's performance. For our test case, the YOLO network trained by combining 40% real images and 60% synthetic images yields the best accuracy in droplet detection and droplet counting in real experimental videos. This strategy also reduces the labor-intensive image annotation work by 60%. DropTrack's performance is measured in terms of mean average precision of droplet detection, mean squared error in counting the droplets, and image analysis speed for inferring the droplets' trajectories. The fastest configuration of DropTrack can detect and track droplets at approximately 30 frames per second, well within the standards for real-time image analysis.
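For readers who want a concrete picture of the detect-then-track pipeline described in the abstract, the following is a minimal illustrative sketch, not the authors' DropTrack code. It assumes the public ultralytics/yolov5 torch.hub interface and the third-party deep-sort-realtime package; the weights file droplet_yolov5.pt and the video experiment.mp4 are hypothetical placeholders for a custom-trained droplet detector and an experimental recording.

```python
# Illustrative sketch only (not the authors' DropTrack implementation):
# per-frame droplet detection with YOLOv5 followed by identity association
# across frames with DeepSORT, yielding one trajectory per droplet.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

# Hypothetical custom-trained weights for droplet detection.
model = torch.hub.load("ultralytics/yolov5", "custom", path="droplet_yolov5.pt")
tracker = DeepSort(max_age=30)  # frames to keep a lost track alive

cap = cv2.VideoCapture("experiment.mp4")  # hypothetical input video
trajectories = {}  # track_id -> list of (frame_index, x_center, y_center)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV delivers BGR frames.
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        # DeepSORT expects ([left, top, width, height], confidence, class).
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))
    # Associate current detections with existing tracks.
    tracks = tracker.update_tracks(detections, frame=frame)
    for t in tracks:
        if not t.is_confirmed():
            continue
        left, top, right, bottom = t.to_ltrb()
        cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
        trajectories.setdefault(t.track_id, []).append((frame_idx, cx, cy))
    frame_idx += 1

cap.release()
print(f"Tracked {len(trajectories)} droplets over {frame_idx} frames")
```

In this sketch, YOLOv5 supplies per-frame bounding boxes and DeepSORT links them across frames so that each droplet keeps a persistent ID, from which droplet counts and trajectories can be read off.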

Funder

Horizon 2020 Framework Programme

National Science Centre (Sonata Bis programme)

Horizon Europe Marie Skłodowska-Curie Actions

Publisher

AIP Publishing

Subject

Condensed Matter Physics, Fluid Flow and Transfer Processes, Mechanics of Materials, Computational Mechanics, Mechanical Engineering

Cited by 17 articles.