Affiliation:
1. School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
Abstract
In recent years, predicting mobile app usage has become increasingly important for areas such as app recommendation, user behaviour analysis, and mobile resource management. Existing models, however, struggle with the heterogeneous nature of contextual data and the user cold-start problem. This study introduces a novel prediction model, Mobile App Prediction Leveraging Large Language Model Embeddings (MAPLE), which employs Large Language Models (LLMs) and installed-app similarity to overcome these challenges. MAPLE uses LLMs to process contextual data and discern the intricate relationships within it. In addition, it exploits installed-app similarity to address the cold-start problem, enabling the modelling of user preferences and habits even for new users with little historical data. In experiments on two real-world datasets, MAPLE outperforms contemporary models in both standard and cold-start scenarios, validating its capacity for precise app usage prediction and its resilience to the cold-start problem. This performance stems from the model's ability to capture complex temporal patterns and to leverage contextual information. MAPLE thus offers a practical and effective approach that can markedly improve personalised app usage prediction and the user experience.
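The abstract does not describe MAPLE's implementation, but the cold-start idea it mentions, matching a new user to existing users by the similarity of their installed apps, can be illustrated with a minimal sketch. Everything below (the sentence-transformers encoder, the averaged profile vectors, the toy app lists) is an assumption chosen for illustration, not the paper's actual method.

```python
# Hypothetical sketch of installed-app similarity for cold start.
# The encoder choice and profile construction are illustrative assumptions;
# the MAPLE paper's own embedding and matching scheme may differ.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf text encoder

def user_profile(installed_apps: list[str]) -> np.ndarray:
    """Embed each installed app's name and average into one unit-length profile vector."""
    vecs = model.encode(installed_apps, normalize_embeddings=True)
    profile = vecs.mean(axis=0)
    return profile / np.linalg.norm(profile)

# Existing users with usage history (toy data).
known_users = {
    "u1": ["Gmail", "Google Maps", "Spotify"],
    "u2": ["WhatsApp", "Instagram", "TikTok"],
}
profiles = {uid: user_profile(apps) for uid, apps in known_users.items()}

# A new user with no usage history, only an installed-app list.
new_profile = user_profile(["Outlook", "Apple Maps", "Apple Music"])

# Rank existing users by cosine similarity; vectors are unit-normalised,
# so the dot product equals the cosine similarity.
ranked = sorted(profiles.items(), key=lambda kv: -float(new_profile @ kv[1]))
print("Most similar existing user:", ranked[0][0])
```

In a full system along these lines, the usage histories of the most similar existing users could then seed predictions for the new user until enough of their own history accumulates.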
Funder
Royal Thai Government scholarship
UNSW RTP scholarship
Publisher
Association for Computing Machinery (ACM)