Evidence from various domains underlines the key role that human factors, and especially trust, play in the adoption of AI-based technology by professionals. As AI-based educational technology increasingly enters K-12 education, issues of trust are expected to influence educators' acceptance of such technology as well, but little is known about this matter. In this work, we present the opinions and attitudes of science teachers who interacted with several types of AI-based technology for K-12. Among other things, our findings indicate that teachers are reluctant to accept AI-based recommendations that contradict their prior knowledge about their students, and that they expect AI to be absolutely correct even in situations where an absolute truth may not exist (e.g., grading open-ended questions). The purpose of this paper is to provide initial findings and begin mapping the terrain of this aspect of teacher-AI interaction, which is critical for the wide and effective deployment of AIED technologies in K-12 education.