This study analyzes large language models (LLMs) as a methodology for computational sociology, focusing on applications to supervised text classification. We consider how the latest generation of text-to-text transformer models can make predictions using prompts and minimal training examples, and we assess the sensitivity of these approaches to prompt wording and the composition of training examples. Through a comprehensive case study on identifying opinions expressed about politicians on Twitter and Facebook, we evaluate four LLMs that vary in size, training data, and architecture. We compare performance across training regimes, from prompt-based zero-shot learning to fine-tuning with thousands of annotated examples. Our findings demonstrate that LLMs can perform complex text classification tasks with high accuracy, substantially outperforming conventional baselines. We use these results to provide practical recommendations for sociologists interested in employing LLMs for text classification tasks. For most researchers, fine-tuning smaller models offers the best balance of relatively high accuracy and low cost. We discuss the trade-offs between proprietary and open-source models, the importance of evaluating models for bias, and concerns related to transparency and reproducibility. This study contributes to understanding the capabilities and limitations of these models in a sociological context, providing a foundation for future research and applications in the field.