Affiliation:
1. University of Missouri-Kansas City, MO
2. University of Arizona, Tucson, AZ
3. Army Research Lab, Adelphi, MD
Abstract
In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, a fundamental task during learning. While exact scores can be computed using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation on large datasets by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and of memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data become available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the full datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.
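The decentralized, gossip-based estimation of statistics mentioned in the abstract can be illustrated with a push-sum averaging sketch. This is our own minimal illustration, not the paper's actual protocol: the function `gossip_average`, its parameters, and the single-process simulation of nodes are all assumptions made for demonstration. Each node repeatedly keeps half of its (sum, weight) pair and pushes the other half to a random peer; the per-node ratio sum/weight converges to the global average without any central coordinator.

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Simulate push-sum gossip averaging over `len(values)` nodes.

    Each node i holds a pair (sums[i], weights[i]). In every round,
    each node halves its pair and pushes one half to a uniformly
    random peer. Both the total sum and the total weight are
    conserved, so the ratio sums[i] / weights[i] at every node
    converges to the global average of `values`.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible simulation
    n = len(values)
    sums = [float(v) for v in values]
    weights = [1.0] * n
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)          # pick a random peer
            sums[i] *= 0.5                # keep half of the pair...
            weights[i] *= 0.5
            sums[j] += sums[i]            # ...and push the other half
            weights[j] += weights[i]
    return [s / w for s, w in zip(sums, weights)]
```

In the paper's setting the averaged quantities would be the local counts needed for score computation rather than raw values, and the exchange would happen over the network rather than inside one process; the convergence behavior of the ratio estimates is the point of the sketch.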
Funder
National Science Foundation
King Abdullah Scholarship Program
U.S. Air Force Summer Faculty Fellowship Program and the University of Missouri Research Board
Publisher
Association for Computing Machinery (ACM)
References: 64 articles.
1. 2010. Java-Gossip. Retrieved from https://code.google.com/archive/p/java-gossip/.
2. Gossip Algorithms
3. 2017. CloudLab. Retrieved from https://www.cloudlab.us/.
4. 2017. Kryo. Retrieved from https://github.com/EsotericSoftware/kryo.
5. 2017. LZ4 - Extremely Fast Compression. Retrieved from https://github.com/lz4/lz4.
Cited by
4 articles.