Abstract
Simple neural network classifiers perform feature extraction as transformations of the data set simultaneously with the optimization of weights in individual layers. In this paper, the Representation 7 (R7) architecture is proposed, whose primary assumption is to divide the inductive procedure into separate blocks – transformation and decision – which may lead to a better generalization ability of the model. The architecture builds on the processing context of a typical neural network and unifies data sets into a shared, generically sampled space. It is applicable to difficult problems – defined not by imbalance or streaming data but by low class separability and high dimensionality. This article tests the hypothesis that, under such conditions, the proposed method can achieve better results than reference algorithms by comparing the R7 architecture with state-of-the-art methods: a raw MLP and the TabNet architecture. The contributions of this work are the proposition of the new architecture and complete experiments on synthetic and real data sets, evaluating the quality and loss achieved by R7 and by the reference methods.
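The central idea of the abstract – separating the inductive procedure into a transformation block that maps inputs into a shared representation space and a decision block that classifies in that space – can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual R7 implementation: the class names, the random-projection transformation, the nearest-centroid decision rule, and the 7-dimensional representation size are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class TransformationBlock:
    """Hypothetical transformation block: a fixed nonlinear projection
    of high-dimensional inputs into a shared low-dimensional space.
    (The real R7 transformation is learned; this stands in for it.)"""
    def __init__(self, n_features, n_repr=7):
        self.W = rng.standard_normal((n_features, n_repr)) / np.sqrt(n_features)

    def transform(self, X):
        return np.tanh(X @ self.W)  # shared, nonlinearly sampled representation

class DecisionBlock:
    """Hypothetical decision block: a nearest-class-centroid classifier
    fitted only on the transformed representation, never on raw inputs."""
    def fit(self, Z, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([Z[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, Z):
        d = ((Z[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic high-dimensional two-class problem (two Gaussian blobs)
X = np.vstack([rng.normal(0.0, 1.0, (50, 30)), rng.normal(3.0, 1.0, (50, 30))])
y = np.array([0] * 50 + [1] * 50)

Z = TransformationBlock(n_features=30).transform(X)       # block 1: transformation
pred = DecisionBlock().fit(Z, y).predict(Z)               # block 2: decision
accuracy = (pred == y).mean()
```

Because the two blocks expose independent interfaces, either can be swapped or retrained without touching the other, which is the separation the abstract attributes the improved generalization to.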
Publisher
Springer Science and Business Media LLC