Affiliation:
1. APT research group, Department of Computer Science, The University of Manchester, Manchester, UK
Abstract
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency, memory footprint and memory bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples from an important domain for numerical computation in neuroscience: the solution of ordinary differential equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding plays an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors than single-precision floating-point and fixed-point arithmetic with round-to-nearest, across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit in digital signal processing. These results have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by partial differential equations.
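The two rounding schemes contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `frac_bits` parameter and the use of Python's `random` module are assumptions made here for clarity. Stochastic rounding rounds a value up with probability equal to its residual fraction, so the result is unbiased in expectation; the dither variant adds a noise sample before truncating, which with uniform noise has the same expected behaviour but lets the noise come from a cheap precomputed sequence rather than a per-operation random number generator.

```python
import math
import random

def stochastic_round_fixed(x, frac_bits=15):
    """Round x onto a fixed-point grid with frac_bits fractional bits,
    rounding up with probability equal to the residual fraction (so the
    rounding is unbiased in expectation)."""
    scale = 1 << frac_bits
    scaled = x * scale
    low = math.floor(scaled)
    frac = scaled - low            # residual in [0, 1)
    if random.random() < frac:     # round up with probability frac
        low += 1
    return low / scale

def dither_round_fixed(x, noise, frac_bits=15):
    """Cheaper alternative: add a noise sample in [0, 1) to the scaled
    value, then truncate. With uniform noise this matches stochastic
    rounding in expectation, but the noise can be drawn from a short
    precomputed table instead of a full RNG."""
    scale = 1 << frac_bits
    return math.floor(x * scale + noise) / scale
```

Averaged over many operations, both schemes recover resolution below the least significant bit: the mean of the rounded values converges to the true value, whereas round-to-nearest would commit the same signed error on every call.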
This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
Funder
H2020 Future and Emerging Technologies
Engineering and Physical Sciences Research Council
Subject
General Physics and Astronomy,General Engineering,General Mathematics
Cited by: 35 articles.