In low-stakes assessments with a strict time limit, some students may fail to reach the end of the test and leave some items unanswered for various reasons, such as fatigue or a lack of test-taking motivation; these unanswered items are referred to as not-reached items (NRIs). NRIs were ubiquitous in the Problem Solving and Inquiry (PSI) portion of eTIMSS 2019, as many students could not reach all PSI items because they either ran out of time or stopped responding. When calibrating test items with an item response theory (IRT) model, there are several methods for dealing with NRIs, such as treating them as not administered or scoring them as incorrect. However, research shows that these methods may yield undesirable outcomes, such as biased estimates of item and person parameters. To deal with NRIs more effectively, additional data retrieved from computer-based assessments can be used to develop a new scoring rule that adjusts test scores for NRIs based on students’ response behaviors. In this study, we utilize item response times (RTs) to create a scoring procedure for the PSI items in eTIMSS 2019. We employ the Normative Threshold method to determine per-item RT thresholds that identify three response behaviors: rapid guessing (i.e., answering an item with an unrealistically low RT), idling (i.e., lingering over an item and thereby wasting too much time), and optimal responding (i.e., balancing speed and accuracy in responding to the items). We then transform the original item scores into polytomous scores depending on how accurately students answer the items while making optimal use of the allotted time. This approach prioritizes correct responses given with optimal responding over correct or incorrect responses given with either rapid guessing or idling. Using the Grade 4 PSI math and science tasks, we investigate whether the eTIMSS 2019 results would change if students’ test-taking behaviors were considered in scoring the PSI tasks.
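The behavior classification and rescoring described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual rule: the threshold percentages (10% of an item's average RT for rapid guessing, 300% for idling) and the score values (2/1/0) are assumptions chosen only to show the ordering in which correct, optimally timed responses outrank responses given with rapid guessing or idling.

```python
# Illustrative sketch of RT-based polytomous scoring.
# Threshold percentages and score values are assumptions, not the paper's rule.

def classify_behavior(rt, rapid_threshold, idle_threshold):
    """Classify a response by its response time (RT) against per-item cutoffs,
    e.g. cutoffs derived with a Normative Threshold approach (a fixed
    percentage of the item's average RT)."""
    if rt < rapid_threshold:
        return "rapid_guessing"
    if rt > idle_threshold:
        return "idling"
    return "optimal"

def polytomous_score(correct, behavior):
    """Hypothetical ordering: a correct answer with optimal responding
    outranks answers given with rapid guessing or idling."""
    if correct and behavior == "optimal":
        return 2
    if correct:
        return 1
    return 0

# Example item with a 60-second average RT: rapid cutoff at 10% of the
# average, idle cutoff at 300% (both percentages are assumptions).
rapid, idle = 0.10 * 60, 3.0 * 60
print(polytomous_score(True, classify_behavior(45, rapid, idle)))  # prints 2 (correct, optimal)
print(polytomous_score(True, classify_behavior(3, rapid, idle)))   # prints 1 (correct, rapid guess)
```

In this sketch the RT information enters only through the behavior label, so the same rescoring function could be paired with any thresholding method.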
To address this goal, we evaluate how country rankings change between PSI scores based on the original item responses and those based on the polytomous responses. The results show that when students’ response behaviors are considered (i.e., their response times are incorporated into the scoring process), the country rankings in math change only below the top 10 countries, whereas the rankings in science change drastically for many countries. Changes in the rankings are associated with the number of NRIs and students’ response behaviors in each country. Additional analyses of the utility of RTs in scoring the PSI items showed that the polytomous responses from the first seven items could help test administrators identify early the students who are likely to have NRIs.