Abstract
Teachers need technology-related knowledge to use technology effectively in the classroom. Previous studies have often used self-reports to assess such knowledge. However, it is questionable whether self-reports are valid measures for this purpose. This study investigates how mathematics teachers’ self-reports correlate with their scores on a paper–pencil knowledge test of technological pedagogical content knowledge (TPCK), content knowledge (CK), pedagogical content knowledge (PCK), and technological knowledge (TK). Participants were $$N = 173$$
pre- and in-service mathematics teachers. To assess self-reports, we adapted an existing survey from the literature. To assess knowledge, we compiled a paper–pencil test based on items from existing test instruments. To increase comparability between the two instruments, both the self-report and the knowledge test addressed the specific topic of fractions. The four subscales of both instruments showed sufficient reliability. The correlations between the self-reports and the paper–pencil test scores were low or very low for all subscales $$\left(r = .00-.23\right)$$, suggesting that the two instruments captured different underlying constructs. While paper–pencil tests seem more suitable for assessing knowledge, self-reports may be influenced more strongly by participants’ personal traits such as self-efficacy. Our findings raise concerns about the validity of self-reports as measures of teachers’ professional knowledge and about the comparability of studies that use distinct assessment instruments. We recommend that researchers be more cautious when interpreting self-reports as measures of knowledge and rely more strongly on externally assessed tests.
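The following is a minimal illustrative sketch of the kind of subscale-level correlation analysis summarized above, not the authors’ actual code or data; the variable names, score scales, and simulated values are hypothetical.

```python
# Illustrative sketch (hypothetical data): correlating self-reported knowledge
# with paper-pencil test scores for each of the four subscales.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 173  # number of pre- and in-service mathematics teachers in the study

subscales = ["TPCK", "CK", "PCK", "TK"]
# Hypothetical per-participant scores; in the actual study these would come
# from the adapted self-report survey and the compiled knowledge test.
self_report = {s: rng.normal(3.5, 0.8, n) for s in subscales}   # e.g. Likert-type scale means
test_score = {s: rng.normal(0.6, 0.15, n) for s in subscales}   # e.g. proportion of items correct

for s in subscales:
    r, p = pearsonr(self_report[s], test_score[s])
    print(f"{s}: r = {r:.2f}, p = {p:.3f}")
```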
Funder: Technische Universität München
Publisher: Springer Science and Business Media LLC