Abstract
The Internet era is an era of information explosion. By 2022, global Internet users numbered more than 4 billion and social media users exceeded 3 billion. People encounter a vast amount of news content every day, and it is practically impossible to find information of interest by browsing all of it. Against this background, personalized news recommendation technology has been widely adopted, but it still requires further optimization and improvement. To push news content of interest to different readers more effectively and to raise users' satisfaction with major news websites, this study proposes a new recommendation algorithm that combines deep learning with reinforcement learning (RL). First, an RL algorithm is introduced on top of deep learning. Deep learning excels at processing large-scale data and recognizing complex patterns, but it often suffers from low sample efficiency in complex decision-making and sequential tasks. RL, by contrast, learns an optimal policy through continuous trial and error while interacting with the environment, making it better suited to scenarios that demand long-term decision-making. By feeding back a reward signal for each action, the system can adapt to unknown environments and complex tasks, compensating for deep learning's relative weaknesses in these respects. Mapping states to actions allows the sequential decision problem in the news dissemination process to be solved. To let the news recommendation system account for dynamic changes in users' interest in news content, the Deep Deterministic Policy Gradient (DDPG) algorithm is applied to the news recommendation scenario, combining a Deep Q-Network (the critic) with a policy network (the actor) so that the two complement each other. On this basis, the paper proposes a mode of intelligent news dissemination and push, together with a push process for news information based on edge computing technology. Finally, a Q-Learning Area Under Curve (AUC) metric for RL models is proposed based on the AUC. This indicator efficiently measures the strengths and weaknesses of RL models and facilitates model comparison and offline evaluation. The results show that the DDPG algorithm improves the click-through rate by 2.586% compared with a conventional recommendation algorithm, demonstrating that the proposed algorithm has a clear advantage in accurate per-user recommendation. By optimizing the push mode of intelligent news dissemination, this paper effectively improves the efficiency of news dissemination. It also studies the innovative application of intelligent edge technology in news communication, bringing new ideas and practices to the development of news communication methods. Optimizing the intelligent push mode not only improves the user experience but also provides strong support for applying intelligent edge technology in this field, giving the work important practical application prospects.
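The abstract's description of DDPG as combining a Deep Q-Network critic with a policy-network actor can be made concrete with a minimal sketch. The PyTorch code below is illustrative only: the state and action dimensions, network sizes, and hyperparameters (STATE_DIM, ACTION_DIM, GAMMA, TAU) are assumptions for exposition, not details taken from the paper.

# Minimal DDPG actor-critic sketch for a news-recommendation setting.
# All names and dimensions here are assumed, not the paper's implementation.
import torch
import torch.nn as nn

STATE_DIM = 64    # assumed size of the user-interest state embedding
ACTION_DIM = 32   # assumed size of the news-item action embedding

class Actor(nn.Module):
    """Policy network: maps a user state to a continuous action embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),  # bounded action
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-network: scores a (state, action) pair, e.g. expected click reward."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
actor_target, critic_target = Actor(), Critic()
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005  # assumed discount factor and soft-update rate

def ddpg_update(state, action, reward, next_state):
    """One DDPG step on a batch (reward has shape [batch, 1])."""
    # Critic: regress Q(s, a) toward r + gamma * Q'(s', pi'(s')).
    with torch.no_grad():
        target_q = reward + GAMMA * critic_target(next_state, actor_target(next_state))
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, pi(s)).
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update the target networks toward the online networks.
    for tgt, src in ((actor_target, actor), (critic_target, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

In this actor-critic division of labor, the policy network proposes a recommendation action for the current user state while the Q-network evaluates it against the observed click feedback, which is how the two networks complement each other as described above.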