Abstract
Computer architectures have reached a watershed: the quantity of network data generated by user applications now exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while the gap between the quantity of networked data and the capacity for per-system data processing continues to grow. Despite this, demand continues to grow unabated in both task variety and task complexity. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation on the growth of their capacity and capabilities becomes an important constraint of concern to all computer users. Taking a networked computer system capable of processing terabits per second as a benchmark for scalability, we critique the state of the art in commodity computing and propose a wholesale reconsideration of the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers.
Funder
UK Engineering and Physical Sciences Research Council Internet Project
Defense Advanced Research Projects Agency and the Air Force Research Laboratory
Subject
General Physics and Astronomy, General Engineering, General Mathematics
Cited by
4 articles.
1. Computing in the Network. Shaping Future 6G Networks, 2021-11-05.
2. Do Switches Dream of Machine Learning? Proceedings of the 18th ACM Workshop on Hot Topics in Networks, 2019-11-14.
3. Software-Defined "Hardware" Infrastructures: A Survey on Enabling Technologies and Open Research Directions. IEEE Communications Surveys & Tutorials, 2018.
4. Communication networks beyond the capacity crunch. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2016-03-06.