Author:
Khaled Boughanmi, Asim Ansari
Abstract
The success of creative products depends on the felt experience of consumers. Capturing such consumer reactions requires the fusion of different types of experiential covariates and perceptual data in an integrated modeling framework. In this article, the authors develop a novel multimodal machine learning framework that combines multimedia data (e.g., metadata, acoustic features, user-generated textual data) in creative product settings and apply it to predict the success of musical albums and playlists. The authors estimate the proposed model on a unique data set collected from different online sources. The model integrates different types of nonparametrics to flexibly accommodate diverse types of effects. It uses penalized splines to capture the nonlinear impact of acoustic features and a supervised hierarchical Dirichlet process to represent crowdsourced textual tags, and it captures dynamics via a state-space specification. The authors show the predictive superiority of the model with respect to several benchmarks. The results illuminate the dynamics of musical success over the past five decades. The authors then use the components of the model for marketing decisions such as forecasting the success of new albums, conducting album tuning and diagnostics, constructing playlists for different generations of music listeners, and providing contextual recommendations.
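One component of the model described above, the penalized spline for nonlinear acoustic effects, can be illustrated with a minimal sketch. This is not the authors' implementation: it fits a cubic B-spline basis with a second-order difference (P-spline style) penalty to simulated data, where a standardized acoustic feature (e.g., tempo) has a hypothetical nonlinear effect on a success score.

```python
# Minimal P-spline sketch (simulated data; NOT the authors' model):
# a nonlinear effect of one acoustic feature on a success score.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

# Simulated standardized acoustic feature and success score
x = rng.uniform(-2, 2, 300)
y = np.sin(1.5 * x) + 0.3 * rng.standard_normal(300)  # assumed nonlinear effect

# Cubic B-spline basis with repeated boundary knots
degree, n_interior = 3, 10
knots = np.concatenate([
    np.repeat(x.min(), degree),
    np.linspace(x.min(), x.max(), n_interior),
    np.repeat(x.max(), degree),
])
n_basis = len(knots) - degree - 1
B = np.column_stack([
    BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False)(x)
    for j in range(n_basis)
])
B = np.nan_to_num(B)  # basis elements are NaN outside their support

# Second-order difference penalty shrinks the fit toward smoothness
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0  # smoothing parameter; chosen by cross-validation in practice
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

Larger values of `lam` yield smoother effect curves; the choice of smoothing parameter is a modeling decision the sketch leaves fixed for simplicity.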
Funder
W. Edwards Deming Center of Columbia Business School
Subject
Marketing, Economics and Econometrics, Business and International Management
Cited by
22 articles.