How negative results are reported and interpreted following null hypothesis significance testing is often criticised. With small sample sizes and often low numbers of test trials, studies in animal cognition are prone to producing non-significant p-values, irrespective of whether the result is a false negative or a true negative. We therefore assessed how negative results are reported and interpreted across published articles in animal cognition and related fields. We manually extracted and classified how researchers report and interpret non-significant p-values, and we examined the distribution of these non-significant p-values. We found considerable heterogeneity in how researchers report non-significant p-values in the results sections of articles, and in how they interpret them in titles and abstracts. “No Effect” interpretations were common in the titles (84%), abstracts (64%), and results sections (41%) of papers, whereas “Non-Significant” interpretations were absent from titles (0%), less common in abstracts (26%), and present in results sections (52%). Discussions of effect sizes were rare (<5% of articles). A p-value distribution analysis was consistent with research being conducted with low statistical power to detect effect sizes of interest.
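To illustrate the logic behind the final point (this is a minimal sketch, not the authors' analysis), the simulation below compares the distribution of non-significant p-values when the null hypothesis is actually true with the distribution when a true effect exists but studies are underpowered. The sample size, effect size, and simulation count are hypothetical choices made for illustration only; under a true null the non-significant p-values are roughly uniform, whereas under an underpowered true effect they pile up closer to the significance threshold.

```python
# Illustrative sketch (assumed parameters, not taken from the study):
# compare non-significant p-values under a true null vs. a small,
# underpowered true effect in a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims = 10_000        # number of simulated studies (assumption)
n_per_group = 15       # small samples -> low power (assumption)
effect_size = 0.4      # modest true effect in SD units (assumption)

def nonsig_pvalues(d):
    """Return the non-significant (p > .05) p-values from simulated t-tests."""
    pvals = []
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(d, 1.0, n_per_group)
        p = stats.ttest_ind(a, b).pvalue
        if p > 0.05:
            pvals.append(p)
    return np.array(pvals)

null_p = nonsig_pvalues(0.0)            # true null: ~uniform on (.05, 1]
lowpow_p = nonsig_pvalues(effect_size)  # underpowered effect: skewed toward .05

print(f"True null:        {len(null_p) / n_sims:.0%} non-significant, "
      f"median p = {np.median(null_p):.2f}")
print(f"Low-power effect: {len(lowpow_p) / n_sims:.0%} non-significant, "
      f"median p = {np.median(lowpow_p):.2f}")
```

Comparing the observed distribution of reported non-significant p-values against these two reference shapes is one way such a p-value distribution analysis can suggest that non-significant results arose from underpowered tests of real effects rather than from true nulls.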