PipeArch

Authors:

Kaan Kara 1, Gustavo Alonso 1

Affiliation:

1. ETH Zurich, Switzerland

Abstract

Data processing systems based on FPGAs offer high performance and energy efficiency for a variety of applications. However, these advantages are achieved through highly specialized designs. The high degree of specialization leads to accelerators with narrow functionality and designs adhering to a rigid execution flow. For multi-tenant systems, this limits the scope of applicability of FPGA-based accelerators, because, first, supporting a single operation is unlikely to have any significant impact on the overall performance of the system, and, second, serving multiple users satisfactorily is difficult due to the simplistic scheduling policies enforced when using the accelerator. Standard operating system and database management system features that would help address these limitations, such as context-switching, preemptive scheduling, and thread migration, are practically non-existent in current FPGA accelerator efforts. In this work, we propose PipeArch, an open-source project for developing FPGA-based accelerators that combine the high efficiency of specialized hardware designs with the generality and functionality known from conventional CPU threads. PipeArch provides programmability and extensibility in the accelerator without losing the advantages of SIMD-parallelism and deep pipelining. PipeArch supports context-switching and thread migration, thereby enabling for the first time new capabilities such as preemptive scheduling in FPGA accelerators within a high-performance data processing setting. We have used PipeArch to implement a variety of machine learning methods for generalized linear model training and recommender systems, empirically showing their advantages over a high-end CPU and even over fully specialized FPGA designs.
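The abstract names generalized linear model training as one of the workloads implemented on PipeArch. As a point of reference for what that workload involves, the sketch below shows the kind of per-sample stochastic gradient descent loop such training reduces to on a CPU. It is a hypothetical illustration only: the function name sgd_epoch, the step size, and the toy data are ours, not from the paper, and this is not the PipeArch implementation, which executes this computation as a SIMD-parallel, deeply pipelined design on the FPGA.

    // Hypothetical host-side reference: SGD training of a linear model,
    // the kind of generalized linear model workload the paper targets.
    // Plain CPU sketch for illustration only, not PipeArch code.
    #include <vector>
    #include <cstddef>
    #include <cstdio>

    // One SGD epoch over (samples, labels); updates weights in place.
    void sgd_epoch(const std::vector<std::vector<float>>& samples,
                   const std::vector<float>& labels,
                   std::vector<float>& weights,
                   float step_size) {
        for (std::size_t i = 0; i < samples.size(); ++i) {
            // Dot product: the model's prediction for sample i.
            float dot = 0.0f;
            for (std::size_t j = 0; j < weights.size(); ++j)
                dot += weights[j] * samples[i][j];
            // Gradient of the squared loss, applied per sample.
            float err = dot - labels[i];
            for (std::size_t j = 0; j < weights.size(); ++j)
                weights[j] -= step_size * err * samples[i][j];
        }
    }

    int main() {
        // Tiny toy problem: learn y = 2*x0 + 1*x1.
        std::vector<std::vector<float>> x = {{1, 0}, {0, 1}, {1, 1}, {2, 1}};
        std::vector<float> y = {2, 1, 3, 5};
        std::vector<float> w(2, 0.0f);
        for (int epoch = 0; epoch < 200; ++epoch)
            sgd_epoch(x, y, w, 0.05f);
        std::printf("w = [%f, %f]\n", w[0], w[1]);
        return 0;
    }

On an accelerator of the kind described in the abstract, the per-sample dot product and update would be unrolled across SIMD lanes and overlapped in a hardware pipeline rather than executed sequentially as in this reference loop.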

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science

