When analyzing the results of an experiment, we often assume that interpreting the data is a straightforward act; however, reproducibility experiments in which large numbers of scientists analyze the same dataset show that even subtle differences in workflows can lead to drastically different results. Beyond the use of different analysis methods, one potential reason for different conclusions from the same data is confirmation bias, a phenomenon well documented in psychology. We reasoned that confirmation bias might not only influence the choice of analysis methods and the depth of analysis, but might even shape how different researchers interpret the same graphical representation of data. To test this notion, we designed a simple experiment in which we contrasted participants' prior expectations about the relationship between two variables (income and happiness) with their interpretation of a corresponding data plot. We artificially engineered the data such that, under superficial examination, it shows an overall negative correlation, while a closer look at the distinct age groups (distinguished by color) reveals a positive correlation within each group. Before showing the plot, we asked participants whether they expected a positive or a negative relationship. We found that participants who expected a positive correlation were more than twice as likely to detect the positive within-group correlation as those expecting a negative one. This simple experiment demonstrates the presence of confirmation bias in the interpretation of graphical data representations.
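The paper does not specify how the stimulus data were generated, but the described structure, a negative pooled trend concealing positive within-group trends, is a classic Simpson's paradox construction. The sketch below is one plausible way to engineer such a dataset; all group means, slopes, and noise levels are hypothetical assumptions chosen only so that the pooled correlation comes out negative while each age group's correlation is positive.

```python
# Illustrative sketch (not the authors' actual procedure): build a dataset
# whose pooled income-happiness correlation is negative, while each age
# group shows a positive within-group correlation (Simpson's paradox).
import numpy as np

rng = np.random.default_rng(0)
n_per_group = 50

# Hypothetical group centers: older/higher-income groups are placed at
# lower happiness levels, which drives the pooled trend negative.
group_income_means = [30, 50, 70, 90]        # e.g., thousands per year
group_happiness_means = [8.0, 6.5, 5.0, 3.5]  # e.g., on a 0-10 scale
within_slope = 0.08                           # positive slope inside each group

income, happiness, group = [], [], []
for g, (mi, mh) in enumerate(zip(group_income_means, group_happiness_means)):
    x = rng.normal(mi, 5.0, n_per_group)                       # income within group
    y = mh + within_slope * (x - mi) + rng.normal(0, 0.4, n_per_group)
    income.append(x)
    happiness.append(y)
    group.append(np.full(n_per_group, g))

income = np.concatenate(income)
happiness = np.concatenate(happiness)
group = np.concatenate(group)

# Pooled correlation is negative; each within-group correlation is positive.
print("overall r =", np.corrcoef(income, happiness)[0, 1])
for g in range(len(group_income_means)):
    mask = group == g
    print(f"group {g} r =", np.corrcoef(income[mask], happiness[mask])[0, 1])
```

In a scatter plot of this dataset with points colored by group (as in the described stimulus), a superficial reading follows the negative pooled trend, while attending to the colors reveals the positive trend within each age group.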