Author:
Marrett Karl, Zhu Muye, Chi Yuze, Choi Chris, Chen Zhe, Dong Hong-Wei, Park Chang Sin, Yang X. William, Cong Jason
Abstract
Interpreting the influx of microscopy and neuroimaging data is bottlenecked by neuronal reconstruction's long-standing issues in accuracy, automation, and scalability. Rapidly increasing data size is particularly concerning for modern computing infrastructure due to the memory-bandwidth wall, which has historically seen the slowest rate of technological advancement. Recut is an end-to-end reconstruction pipeline that takes raw large-volume light microscopy images and yields filtered or tuned automated neuronal reconstructions that require minimal proofreading and no other manual intervention. By leveraging adaptive grids and other methods, Recut also has a unified data representation with up to a 509× reduction in memory footprint, resulting in an 89.5× throughput increase and enabling an effective 64× increase in the scale of volumes that can be skeletonized on servers or resource-limited devices. Recut also employs coarse- and fine-grained parallelism to achieve speedup factors beyond CPU core count in sparse settings when compared to the current fastest reconstruction method. By leveraging the sparsity of light microscopy datasets, this can allow full brains to be processed in memory, a property which may significantly shift the compute needs of the neuroimaging community. The scale and speed of Recut fundamentally change the reconstruction process, allowing an interactive yet deeply automated workflow.
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.