Transformer-based large language models have shown strong text-generation ability. However, because such models demand significant computing resources, little work has explored emotional text generation with language models such as GPT-2. To address this issue, the authors propose an affective prompt-tuning-based language model (APT-LM) equipped with an affective decoding (AD) method, aiming to improve emotional text generation under limited computing resources. Specifically, the proposed model injects emotional attributes into a soft prompt using the NRC emotion intensity lexicon and updates only these additional parameters while keeping the language model frozen. At decoding time, it steers generation toward a given emotion by computing the cosine distance between the affective soft prompt and the candidate tokens produced by the language model. Experimental results show that APT-LM significantly improves emotional text generation and achieves competitive sentence fluency compared with baseline models under both automatic and human evaluation.
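The affective decoding step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, the `alpha` weighting between LM probability and emotional similarity, and the use of a single pooled affect vector are all assumptions introduced here for clarity; the paper specifies only that generation is steered by the cosine distance between the affective soft prompt and candidate tokens.

```python
import numpy as np

def affective_decode_step(logits, token_embeddings, affect_embedding,
                          top_k=5, alpha=0.5):
    """Illustrative sketch of affective decoding (AD).

    Re-ranks the language model's top-k candidate tokens by blending each
    token's LM probability with its cosine similarity to an affective
    soft-prompt embedding, then returns the highest-scoring token id.
    `alpha` is a hypothetical knob trading fluency against emotion strength.
    """
    # Softmax over the LM logits to get next-token probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Keep only the top-k candidates proposed by the language model.
    top = np.argsort(probs)[::-1][:top_k]
    cand = token_embeddings[top]
    # Cosine similarity between each candidate embedding and the affect vector.
    sims = cand @ affect_embedding / (
        np.linalg.norm(cand, axis=1) * np.linalg.norm(affect_embedding) + 1e-9)
    # Blend fluency (LM probability) with emotion (similarity mapped to [0, 1]).
    scores = (1 - alpha) * probs[top] + alpha * (sims + 1) / 2
    return int(top[np.argmax(scores)])
```

With `alpha=0` the sketch reduces to greedy decoding; with `alpha=1` it picks the candidate closest to the affect vector, illustrating how the emotional signal can override the raw LM preference.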