A Deep Learning-Based Multimodal Resource Reconstruction Scheme for Digital Enterprise Management
-
Published:2023-03-03
-
ISSN:0218-1266
-
Container-title:Journal of Circuits, Systems and Computers
-
Language:en
-
Short-container-title:J CIRCUIT SYST COMP
Author:
Yang Tingting1,
Zheng Bing2
Affiliation:
1. Nanchang Institute of Technology, Nanchang 330044, Jiangxi, P. R. China
2. School of Information Engineering, Hainan Vocational University of Science and Technology, Haikou, Hainan 571126, P. R. China
Abstract
Nowadays, almost all enterprises handle resources and materials in multimodal formats: textual information is often embedded in visual scenes, and visual information is likewise accompanied by textual content. Fusing information across such multimodal materials therefore consumes a large amount of human labor in daily management. To address this issue, this paper introduces deep learning to characterize the gap between vision and text, and proposes a deep learning-based multimodal resource reconstruction scheme built on awareness of table documents, so as to facilitate digital enterprise management. A deep neural network is developed to automatically extract table text from images, so that multimodal information fusion can be realized. This reduces the human labor needed to recognize textual content in visual scenarios, which in turn facilitates resource dispatching in digital enterprise management. Experiments conducted on a real-world data set show that the proposal achieves considerable efficiency.
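The abstract describes the pipeline only at a high level: locate table structure in a page image, then read out the cell text for downstream multimodal fusion; the authors' network itself is not reproduced here. As a rough illustration of that pipeline, the following Python sketch finds cell-like regions with simple OpenCV morphology and reads each region with Tesseract's LSTM recogniser via pytesseract. The library choices, the extract_table_text function, and the size thresholds are assumptions made for this sketch, not the authors' model.

# Minimal sketch of table-text extraction from a page image (an illustration,
# not the paper's network): detect cell-like regions via ruling-line morphology,
# then OCR each region with Tesseract's LSTM recogniser.
# Requires: opencv-python, pytesseract, and a local Tesseract installation.
import cv2
import pytesseract

def extract_table_text(image_path: str) -> list[dict]:
    """Return a list of {'box': (x, y, w, h), 'text': str} records for one page image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Binarise and emphasise ruling lines so cell borders form closed contours.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 10)
    horiz = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                             cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1)))
    vert = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40)))
    grid = cv2.add(horiz, vert)
    contours, _ = cv2.findContours(grid, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cells = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 20 and h > 10:  # drop line fragments and noise
            roi = gray[y:y + h, x:x + w]
            text = pytesseract.image_to_string(roi, config="--psm 6").strip()
            if text:
                cells.append({"box": (x, y, w, h), "text": text})
    # Approximate reading order: group rows by vertical position, then left to right.
    cells.sort(key=lambda cell: (cell["box"][1] // 20, cell["box"][0]))
    return cells

if __name__ == "__main__":
    for cell in extract_table_text("table_page.png"):
        print(cell["box"], cell["text"])

A learned detector or recogniser, as described in the paper, would replace the morphology and Tesseract steps respectively; the overall structure of the pipeline (region detection followed by text recognition and ordering) stays the same.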
Funder
China University Industry-University-Research Innovation Fund Project
Publisher
World Scientific Pub Co Pte Ltd
Subject
Electrical and Electronic Engineering, Hardware and Architecture, Media Technology