Emergent behavior in repeated collective decisions made by minimally intelligent agents, who at each time step invoke majority rule to choose between a status quo and a randomly drawn challenger, can be characterized by the long-run stationary probability distribution of a Markov chain. We use this known technique to compare two kinds of voting agendas: a zero-intelligence agenda that draws the challenger uniformly at random from all alternatives, and a minimally intelligent agenda that draws it from the union of the status quo and the set of challengers that defeat the status quo. Using Google Colab's GPU-accelerated computing environment, with code we have hosted on GitHub, we compute stationary distributions for simple examples from spatial-voting and budget-allocation scenarios. We find that the voting model using the zero-intelligence agenda converges more slowly, but in some cases to better outcomes.
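
To make the transition structure concrete, the sketch below (illustrative only, not the released GitHub code) builds the zero-intelligence chain for a toy one-dimensional spatial-voting example and computes its stationary distribution by eigendecomposition; the grid of alternatives, the voter ideal points, and all variable names are assumptions introduced for this example.

    import numpy as np

    # Toy 1-D spatial-voting example (illustrative values, not from the paper).
    alternatives = np.linspace(0.0, 1.0, 11)   # candidate policy positions
    voter_ideals = np.array([0.2, 0.5, 0.9])   # each voter prefers closer positions
    n = len(alternatives)

    # Zero-intelligence agenda: from status quo i, a challenger j is drawn
    # uniformly at random; it replaces i only if a strict majority of voters
    # is closer to j than to i, otherwise the status quo is retained.
    P = np.zeros((n, n))
    for i, q in enumerate(alternatives):
        for j, c in enumerate(alternatives):
            votes_for_c = np.sum(np.abs(voter_ideals - c) < np.abs(voter_ideals - q))
            if votes_for_c > len(voter_ideals) / 2:
                P[i, j] += 1.0 / n
            else:
                P[i, i] += 1.0 / n

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi /= pi.sum()
    print(np.round(pi, 3))   # mass settles on the alternative nearest the median voter

The minimally intelligent agenda would instead restrict the challenger draw to the status quo together with the alternatives that currently defeat it; the GPU-accelerated computations in the paper apply the same stationary-distribution analysis to larger spatial-voting and budget-allocation state spaces.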