Abstract
Artificial intelligence technologies are developing rapidly today and form an important branch of computer science. Artificial intelligence centers on the research and development of theories, methods, technologies, and applications for modeling and extending human intelligence. Artificial intelligence technology rests on three key pillars, namely data, algorithms, and computing power: training an algorithm to produce a classification model requires large amounts of data, and the learning process demands substantial computing capabilities. In the age of big data, information can come from a variety of sources (such as sensor systems, Internet of Things (IoT) devices, and social media platforms) and/or belong to different stakeholders. This gives rise to a number of problems. One of the key problems is isolated data islands, where data held by a single source or stakeholder is unavailable to other parties for training an artificial intelligence model, or where it is financially prohibitive or impractical to gather large amounts of distributed data for centralized processing and training. Centralized architectures also risk becoming a single point of failure, which can lead to data breaches. In addition, data from different sources may be unstructured and vary in quality, and it may be difficult to verify the provenance and validity of the data. There is also a risk of invalid or malicious data. All of these limitations can degrade prediction accuracy. In practice, artificial intelligence models are created, trained, and used by different parties. The learning process is not transparent to users, and users may not fully trust the model they are using. Moreover, as artificial intelligence algorithms grow more complex, it becomes difficult for people to understand how the training results are obtained. Consequently, there has been a recent trend away from centralized approaches to artificial intelligence toward decentralized ones.