Automated writing evaluation (AWE) software is an increasingly popular tool for learners of English as a second language. However, research on the accuracy of such software has been both scarce and limited in scope. To address this gap, this article broadens the field of research on AWE accuracy by using a mixed-methods design to holistically evaluate the accuracy of the corrective feedback provided by the leading AWE program Grammarly. A total of 1,136 Grammarly-identified errors relating to style, lexis, and form were graded and discussed by two native English speakers. The software achieved an overall accuracy rate of 78.86%, rising to 91.60% when style-related errors were excluded. However, several issues were also identified, relating to the promotion of a fixed writing style, variance in feedback quality, and the accuracy of style-related corrective feedback.