The emergence of large language models (LLMs) has sparked considerable interest in their potential application in psychological research, whether as human-like entities used as models of the human psyche or as general text-analysis tools. However, using LLMs carelessly in psychological studies, a trend we rhetorically refer to as ``GPTology,'' can have negative consequences, especially given the convenient access to models such as ChatGPT. We elucidate the promises, limitations, and ethical considerations of using LLMs in psychological research. First, LLM-based research should attend to the substantial psychological diversity around the globe, as well as demographic diversity within populations. Second, while LLMs are convenient tools, we caution against treating them as a one-size-fits-all method for psychological text analysis. Third, LLM-based psychological research needs methods and standards that compensate for LLMs' opaque, black-box nature, so as to facilitate reproducibility, transparency, and robust inference from AI-generated data. While acknowledging the prospects LLMs offer for easy task automation (e.g., text annotation) and for expanding our understanding of human psychology (e.g., by contrasting human and machine psychology), we make a case for diversifying human samples and expanding psychology's methodological toolbox to achieve a truly inclusive and generalizable science, rather than homogenizing samples and methods through over-reliance on LLMs.