Affiliation:
1. Intel Corporation, Marlow, United Kingdom
2. Intel Corporation, Hillsboro, OR, United States
3. Intel Corporation, San Jose, CA, United States
4. Intel Corporation, Meudon, France
Abstract
The advent of AI has driven the exploration of high-density, low-precision arithmetic on FPGAs. This has resulted in new methods for mapping both arithmetic functions and dataflows onto the fabric, as well as changes to the embedded DSP Blocks. Technologies outside the FPGA realm have also evolved, such as the addition of tensor structures to GPUs and the introduction of numerous AI ASSPs, all of which claim higher performance and efficiency than current FPGAs. In this article, we introduce the Stratix 10 NX device, an FPGA variant specifically optimized for the AI application space. In addition to the computational capabilities of the standard programmable soft-logic fabric, a new type of DSP Block provides the dense arrays of low-precision multipliers typically used in AI implementations. The architecture of the block is tuned for the matrix-matrix and vector-matrix multiplications common in AI, with capabilities designed to work efficiently for both small and large matrix sizes. The base precisions are INT8 and INT4, along with shared-exponent support for block FP16 and block FP12 numerics. All additions and accumulations can be performed in INT32 or IEEE 754 single-precision floating point (FP32), and multiple blocks can be cascaded together to support larger matrices. We also describe methods by which the smaller precision multipliers can be aggregated to create larger multipliers that are more applicable to standard signal processing requirements.
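To make the numerics concrete, the following is a minimal NumPy sketch of the shared-exponent ("block FP16") scheme and of aggregating INT8 multipliers into a wider multiply. The block size of 10, the 8-bit mantissa width, the exponent handling, and the helper names (to_block_fp16, block_dot, mul16_from_int8) are illustrative assumptions, not the exact hardware parameters.

```python
import numpy as np

def to_block_fp16(x, block_size=10):
    """Quantize a float vector into shared-exponent ("block FP16") form:
    one exponent per block plus an 8-bit signed mantissa per element.
    Block size and mantissa width are assumptions for illustration."""
    x = np.asarray(x, dtype=np.float64)
    mantissas = np.empty(x.size, dtype=np.int8)
    exponents = []
    for start in range(0, x.size, block_size):
        blk = x[start:start + block_size]
        # One shared exponent per block, set by the largest magnitude.
        e = int(np.ceil(np.log2(np.max(np.abs(blk)) + 1e-30)))
        exponents.append(e)
        # 7 fraction bits: scale so every value lands in the INT8 range.
        mantissas[start:start + block_size] = np.clip(
            np.round(blk * 2.0 ** (7 - e)), -128, 127).astype(np.int8)
    return mantissas, exponents

def block_dot(ma, ea, mb, eb, block_size=10):
    """Dot product of two block-FP16 vectors: INT8 multiplies accumulate
    in INT32 within a block, and per-block results are combined in FP32,
    mirroring the INT32/FP32 accumulation options described above."""
    total = np.float32(0.0)
    for i, (xa, xb) in enumerate(zip(ea, eb)):
        s = slice(i * block_size, (i + 1) * block_size)
        acc = np.dot(ma[s].astype(np.int32), mb[s].astype(np.int32))
        total += np.float32(acc) * np.float32(2.0 ** (xa + xb - 14))
    return total

def mul16_from_int8(a, b):
    """Aggregate four 8x8 partial products into one 16x16 multiply,
    the classic decomposition behind fusing small multipliers:
    a*b = (aH*bH << 16) + ((aH*bL + aL*bH) << 8) + aL*bL."""
    aH, aL = a >> 8, a & 0xFF   # arithmetic shift keeps the sign in aH
    bH, bL = b >> 8, b & 0xFF
    return (aH * bH << 16) + ((aH * bL + aL * bH) << 8) + aL * bL
```

Under these assumptions, block_dot(*to_block_fp16(a), *to_block_fp16(b)) approximates np.dot(a, b) while using only INT8 multiplies, which is the essential trade the shared-exponent formats make.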
In the AI market, the FPGA must compete directly with other types of devices rather than occupy a unique niche. Deterministic system performance is as important as the performance of individual FPGA elements such as logic, memory, and DSP. We show that the feed-forward datapath structures needed to support the typical AI matrix-vector and matrix-matrix multiplication operations can consistently close timing at over 500 MHz on a mid-speed-grade device, even when all of the Tensor Blocks on the device are used. We also show a full-chip NPU processor implementation that outperforms GPUs at the same process node on a variety of AI inferencing workloads, even though it operates at a lower frequency of 365 MHz.
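As a rough software model of how such a feed-forward datapath consumes a matrix-vector workload, the sketch below tiles an INT8 matrix-vector product into fixed-width dot products whose INT32 partial sums chain from one unit to the next; the dot-product width of 10 and the function name are assumptions for illustration, not the device's documented microarchitecture.

```python
import numpy as np

def tiled_matvec_int8(W, x, dot_width=10):
    """Model an INT8 matrix-vector multiply mapped onto chained
    dot-product units: each unit produces a dot_width-element INT8
    dot product, and partial sums cascade forward in INT32."""
    rows, cols = W.shape
    y = np.zeros(rows, dtype=np.int32)
    for r in range(rows):
        acc = np.int32(0)  # cascaded INT32 partial sum
        for c in range(0, cols, dot_width):
            seg_w = W[r, c:c + dot_width].astype(np.int32)
            seg_x = x[c:c + dot_width].astype(np.int32)
            acc += np.dot(seg_w, seg_x)  # one dot-product unit's output
        y[r] = acc
    return y
```

The loop body is purely feed-forward (no result feeds back into an earlier stage), which is the property that lets such structures pipeline deeply and close timing at high clock rates.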
In terms of overall compute throughput, Stratix 10 NX is specified at 143 TOPS (INT8) / 143 TFLOPS (block FP16), or 286 TOPS (INT4) / 286 TFLOPS (block FP12). Depending on the configuration, power efficiency is in the range of 1–4 TOPS/W or TFLOPS/W.
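As a plausibility check on the headline figure (the resource counts here are an assumption, not stated above): with roughly 3,960 AI Tensor Blocks, each producing 30 INT8 multiply-accumulates (2 ops each) per cycle at a 600 MHz clock,

3,960 blocks × 30 MACs × 2 ops/MAC × 600 MHz ≈ 142.6 × 10^12 ops/s ≈ 143 TOPS,

with INT4 doubling the per-block multiplier count and hence the rate to 286 TOPS.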
Publisher
Association for Computing Machinery (ACM)
Cited by
5 articles.
1. Efficient 8-bit Matrix Multiplication on Intel Agilex-5 FPGAs. 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2024-05-05.
2. Challenges and Opportunities to Enable Large-Scale Computing via Heterogeneous Chiplets. 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), 2024-01-22.
3. BRAMAC: Compute-in-BRAM Architectures for Multiply-Accumulate on FPGAs. 2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2023-05.
4. HPIPE NX: Boosting CNN Inference Acceleration Performance with AI-Optimized FPGAs. 2022 International Conference on Field-Programmable Technology (ICFPT), 2022-12-05.
5. Trade-Off-Oriented Impedance Optimization of Chiplet-Based 2.5-D Integrated Circuits With a Hybrid MDP Algorithm for Noise Elimination. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022-12.