Abstract
Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design – including the separation into two phases, the form of the programming language, and the properties of the aggregators – exploits the parallelism inherent in having data and computation distributed across many machines.
Subject
Computer Science Applications, Software
Cited by
96 articles.
1. Enhancing Accuracy for Super Spreader Identification in High-Speed Data Streams;Proceedings of the VLDB Endowment;2024-07
2. Couper: Memory-Efficient Cardinality Estimation under Unbalanced Distribution;2023 IEEE 39th International Conference on Data Engineering (ICDE);2023-04
3. Online Cardinality Estimation by Self-morphing Bitmaps;2022 IEEE 38th International Conference on Data Engineering (ICDE);2022-05
4. The Go programming language and environment;Communications of the ACM;2022-04
5. SpaceSaving±;Proceedings of the VLDB Endowment;2022-02