Abstract
Deploying energy disaggregation models in the real world is a challenging task. These models are usually deep neural networks, which can be costly to run on a server or prohibitive when the target device has limited resources. Deep learning models are typically computationally expensive and have large storage requirements. Reducing the computational cost and the size of a neural network without trading off performance is not a trivial task. This paper proposes a novel neural architecture with fewer learnable parameters, a smaller size, and faster inference time, without sacrificing performance. The proposed architecture performs on par with two popular, strong baseline models. Its key characteristic is the Fourier transform, which has no learnable parameters and can be computed efficiently.
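As a rough illustration of this key idea, the sketch below shows a parameter-free, FNet-style Fourier mixing layer in PyTorch. The function name fourier_mixing and the tensor shapes are illustrative assumptions, not the paper's exact architecture; the sketch only demonstrates how a Fourier transform can stand in for a learnable mixing layer at O(n log n) cost with zero stored weights.

    import torch

    def fourier_mixing(x: torch.Tensor) -> torch.Tensor:
        """Illustrative, parameter-free mixing via a 2D FFT.

        Assumed input shape: (batch, window_length, hidden_dim),
        e.g. embedded windows of aggregate power readings.
        The FFT is applied over the sequence and hidden dimensions;
        keeping only the real part yields a real-valued tensor of
        the same shape. No weights are learned.
        """
        return torch.fft.fft2(x, dim=(-2, -1)).real

    # Hypothetical usage: a batch of 8 windows, 128 samples each,
    # embedded to 64 dimensions.
    x = torch.randn(8, 128, 64)
    y = fourier_mixing(x)  # same shape, zero trainable parameters

Because such a layer stores no parameters, it shrinks the model on disk and in memory, which matches the abstract's claim about size and inference speed.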
Funder
European Regional Development Fund of the European Union and Greek national funds
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
16 articles.