Abstract
A time series is a sequence of real values ordered in time. Time series classification (TSC) is the task of assigning a time series to one of a set of predefined classes, usually based on a model learned from examples. Dictionary-based methods for TSC rely on counting the frequency of certain patterns in time series and are important components of the currently most accurate TSC ensembles. One of the early dictionary-based methods was WEASEL, which at the time of its introduction achieved state-of-the-art (SotA) results while also being very fast. It has since been outperformed in both speed and accuracy by other methods, and its design leads to an unpredictably large memory footprint, making it unusable for many applications. In this paper, we present WEASEL 2.0, a complete overhaul of WEASEL based on two recent advancements in TSC: dilation and ensembling of randomized hyper-parameter settings. These two techniques allow WEASEL 2.0 to work with a fixed-size memory footprint while at the same time improving accuracy. Compared to 15 other SotA methods on the UCR benchmark set, WEASEL 2.0 is significantly more accurate than other dictionary methods and not significantly worse than the currently best methods. In fact, it achieves the highest median accuracy over all data sets and performs best in 5 out of 12 problem classes. We thus believe that WEASEL 2.0 is a viable alternative for current TSC and also a potentially interesting input for future ensembles.
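To make the two ideas in the abstract concrete, the following is a minimal, hypothetical sketch of a dictionary-style feature extractor over dilated sliding windows. It is not the paper's actual SFA-based WEASEL 2.0 transform; the function name, the crude mean-based binarization into symbols, and all parameter values are illustrative assumptions only. It merely shows how dilation spreads a fixed-length window over a wider time span and how pattern ("word") frequencies form a bag-of-patterns representation.

```python
# Hypothetical sketch of a dilated bag-of-patterns extractor.
# NOT the paper's SFA-based method; binarization and names are assumptions.
from collections import Counter

def dilated_word_counts(series, window=4, dilation=2):
    """Count symbolic 'words' over dilated sliding windows of a 1-D series."""
    counts = Counter()
    span = (window - 1) * dilation          # total reach of one dilated window
    for start in range(len(series) - span):
        # take every `dilation`-th value inside the window
        values = [series[start + i * dilation] for i in range(window)]
        mean = sum(values) / window
        # crude binarization: below/above the window mean -> symbol 'a'/'b'
        word = "".join("a" if v <= mean else "b" for v in values)
        counts[word] += 1
    return counts

if __name__ == "__main__":
    ts = [0.1, 0.5, 0.3, 0.9, 0.2, 0.8, 0.4, 0.7, 0.1, 0.6]
    print(dilated_word_counts(ts, window=3, dilation=2))
```

In a dictionary-based classifier, such word counts (typically produced for an ensemble of randomized window sizes and dilations) would be fed to a linear classifier; hashing the words into a fixed number of buckets is one way to obtain the fixed-size memory footprint the abstract refers to.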
Funder
Humboldt-Universität zu Berlin
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software
Cited by
12 articles.