ChatGPT, in its GPT-4 version, is currently among the most powerful generative pre-trained transformers on the market. To date, however, empirical studies investigating the quality and use of ChatGPT in higher education are lacking. We therefore address the following research questions: What kind of prompt is needed to ensure high-quality AI feedback in higher education? How do novice, expert, and AI feedback differ in terms of feedback quality and content accuracy? To answer these questions, we formulated a learning goal containing three errors and developed a theory-based manual for determining prompt quality. Based on this manual, we constructed three prompts of varying quality and used them to generate feedback with ChatGPT. We also gave the best prompt to novices and experts, who formulated feedback themselves. Our results showed that only the prompt of the highest quality generated almost consistently high-quality feedback. They further revealed that both expert and AI feedback were of significantly higher quality than novice feedback, and that AI feedback was not only less time-consuming but also of higher quality than expert feedback in the categories of explanation, questions, and specificity. In conclusion, feedback generated with ChatGPT can be an economical and high-quality alternative to expert feedback. However, our findings underscore the importance of a manual for generating prompts to ensure both the quality of the prompt and the quality of the output. Moreover, ethical and data-related questions regarding the future use of ChatGPT in higher education need to be discussed.