Training PPA Models for Embedded Memories on a Low-data Diet

Authors:

Felix Last¹, Ulf Schlichtmann²

Affiliation:

1. Technical University of Munich, Munich, Germany and Intel Deutschland GmbH, Neubiberg, Germany

2. Technical University of Munich, Munich, Germany

Abstract

Supervised machine learning requires large amounts of labeled data for training. In power, performance, and area (PPA) estimation of embedded memories, every new memory compiler version is considered independently of previous compiler versions. Since the data of different memory compilers originate from similar domains, transfer learning may reduce the amount of supervised data required by pre-training PPA estimation neural networks on related domains. We show that provisioning times of PPA models for new compiler versions can be reduced significantly by exploiting similarities among different compilers, versions, and technology nodes. Through transfer learning, we shorten the time to provision PPA models for new compiler versions, which speeds up time-critical periods of the design cycle. Using only 901 training samples (10%) is sufficient to achieve an almost worst-case (98th percentile) estimation error of 2.67% and allows us to shorten model provisioning times from 40 days to less than one week without sacrificing accuracy. To enable a diverse set of source domains for transfer learning, we devise a new, application-independent method for overcoming structural domain differences through domain equalization that attains competitive results when compared to domain-free transfer. A high degree of automation necessitates the efficient assessment of the best source domains. We propose using various metrics to accurately identify four of the five best among 45 datasets with low computational effort.
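To make the transfer-learning workflow described in the abstract concrete, the sketch below pre-trains a small PPA estimation network on a related source domain (e.g., a previous compiler version) and then fine-tunes it on a small labeled sample from the new target compiler. This is a minimal illustration only: the network shape, the `load_domain` helper, and all hyperparameters are assumptions for demonstration and do not reproduce the paper's actual architecture or training setup.

```python
# Minimal transfer-learning sketch: pre-train on a source compiler's data,
# fine-tune on a small target sample. Architecture and hyperparameters are
# illustrative assumptions, not the authors' actual configuration.
import torch
import torch.nn as nn

def make_ppa_model(n_features: int, n_targets: int = 3) -> nn.Sequential:
    """Small MLP mapping memory-compiler parameters to PPA estimates."""
    return nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_targets),  # e.g., power, performance, area
    )

def train(model: nn.Module, X: torch.Tensor, y: torch.Tensor,
          epochs: int = 200, lr: float = 1e-3) -> nn.Module:
    """Plain full-batch regression training loop."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

# 1) Pre-train on abundant labeled data from a related source domain,
#    e.g. a previous compiler version. `load_domain` is hypothetical.
# X_src, y_src = load_domain("compiler_v1")
# model = train(make_ppa_model(X_src.shape[1]), X_src, y_src)

# 2) Fine-tune on the small labeled set of the new target compiler,
#    freezing early layers and lowering the learning rate.
# for layer in list(model.children())[:2]:
#     for p in layer.parameters():
#         p.requires_grad = False
# X_tgt, y_tgt = load_domain("compiler_v2_10pct")  # ~10% of full data
# model = train(model, X_tgt, y_tgt, lr=1e-4)
```

Freezing the early layers during fine-tuning is one common way to retain source-domain features when target data is scarce; the fine-tuning strategy used in the paper itself may differ.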

Publisher

Association for Computing Machinery (ACM)

Subject

Electrical and Electronic Engineering, Computer Graphics and Computer-Aided Design, Computer Science Applications

