Abstract
Purpose
We argue that a fundamental issue in how to search, and how to switch between different cognitive modes, lies in the decision rules that shape the dynamics of learning and exploration. We examine the search logics underlying these decision rules and propose conceptual prompts that can be applied mentally or computationally to aid managers' decision-making.

Design/methodology/approach
By applying Multi-Armed Bandit (MAB) modeling to simulate agents' interaction with dynamic environments, we compared the patterns and performance of selected MAB algorithms under different configurations of environmental conditions.

Findings
We develop three conceptual prompts. First, the simple heuristic-based exploration strategy works well in conditions of low environmental variability and few alternatives. Second, an exploration strategy that combines simple and de-biasing heuristics is suitable for most dynamic and complex decision environments. Third, the uncertainty-based exploration strategy is more applicable in conditions of high environmental unpredictability, as it can more effectively recognize deviating patterns.

Research limitations/implications
This study contributes to emerging research on using algorithms to develop novel concepts and on combining heuristics with algorithmic intelligence in strategic decision-making.

Practical implications
This study shows that managers have a range of exploration strategies to apply conceptually, and that the adaptability of cognitively distant search may be underestimated in turbulent environments.

Originality/value
Drawing on insights from machine learning and cognitive psychology research, we demonstrate the fitness of different exploration strategies under different dynamic environmental configurations by comparing the search logics that underlie the three MAB algorithms.
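To make the contrast between exploration strategies concrete, the following is a minimal sketch of the kind of MAB simulation the abstract describes. It is not the authors' model: the environment (a Bernoulli bandit whose arm means drift over time), the two example policies (an epsilon-greedy rule as a stand-in for a "simple heuristic" strategy, and UCB1 as a stand-in for an "uncertainty-based" strategy), and all parameter values are illustrative assumptions.

```python
import math
import random

def run_bandit(policy, n_arms=5, horizon=2000, drift=0.0, seed=0):
    """Simulate one agent on a Bernoulli bandit; `drift` > 0 makes the
    arm means wander each step (environmental variability)."""
    rng = random.Random(seed)
    means = [rng.random() for _ in range(n_arms)]
    counts = [0] * n_arms           # times each arm was pulled
    values = [0.0] * n_arms         # running mean reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        arm = policy(t, counts, values, rng)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
        # dynamic environment: arm means drift within [0, 1]
        means = [min(1.0, max(0.0, m + rng.gauss(0, drift))) for m in means]
    return total / horizon          # average reward achieved

def epsilon_greedy(eps=0.1):
    """Simple heuristic: exploit the best-looking arm, explore
    uniformly at random with probability eps."""
    def policy(t, counts, values, rng):
        if 0 in counts or rng.random() < eps:
            return rng.randrange(len(counts))
        return max(range(len(values)), key=values.__getitem__)
    return policy

def ucb1():
    """Uncertainty-based: pick the arm with the highest empirical
    mean plus an optimism bonus for rarely tried arms."""
    def policy(t, counts, values, rng):
        for a, c in enumerate(counts):
            if c == 0:              # try every arm once first
                return a
        return max(range(len(values)),
                   key=lambda a: values[a]
                   + math.sqrt(2.0 * math.log(t) / counts[a]))
    return policy
```

Running both policies with `drift=0.0` versus, say, `drift=0.02` lets one compare how each search logic copes with a stable versus a changing environment, which is the kind of comparison the study performs across environmental configurations.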