Many existing fine-grained sentiment analysis (FGSA) methods suffer from loss of fine-grained information, difficulty resolving polysemy, and imbalanced sample categories. A Transformer-based FGSA method for Weibo comment text is therefore proposed. First, a knowledge-enhanced RoBERTa model dynamically encodes the text to resolve polysemy. Then, BiLSTM captures bidirectional global semantic dependency features. Next, a Transformer fuses multi-dimensional features and adaptively strengthens key features to mitigate the loss of fine-grained information. Finally, an improved Focal Loss function is used during training to address imbalanced sample categories. Experimental results on the SMP2020-EWECT, NLPCC 2013 Task 2, NLPCC 2014 Task 1, and weibo_senti_100k datasets show that the proposed method outperforms advanced comparison methods.
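The encoder–BiLSTM–Transformer–Focal Loss pipeline described above can be sketched in PyTorch. This is a minimal illustrative sketch, not the paper's implementation: all hyperparameters are assumptions, a plain `nn.Embedding` stands in for the knowledge-enhanced RoBERTa encoder, and since the paper's specific improvement to Focal Loss is not detailed here, the standard multi-class Focal Loss is shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Standard multi-class Focal Loss: FL = -alpha * (1 - p_t)^gamma * log(p_t).

    Down-weights easy examples (high p_t) so training focuses on the
    hard, typically minority-class samples.
    """
    def __init__(self, alpha=1.0, gamma=2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=-1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
        pt = log_pt.exp()
        return (-self.alpha * (1.0 - pt) ** self.gamma * log_pt).mean()

class FGSAModel(nn.Module):
    """Sketch of the pipeline: embedding -> BiLSTM -> Transformer fusion -> classifier."""
    def __init__(self, vocab_size, emb_dim=128, hidden=64, num_classes=3):
        super().__init__()
        # Stand-in for the knowledge-enhanced RoBERTa dynamic encoder.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # BiLSTM captures bidirectional global semantic dependencies.
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Transformer encoder fuses features and reweights key tokens via self-attention.
        layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=1)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)              # (B, T, emb_dim)
        x, _ = self.bilstm(x)                  # (B, T, 2*hidden)
        x = self.fusion(x)                     # (B, T, 2*hidden), attention-weighted
        return self.classifier(x.mean(dim=1))  # mean-pool tokens -> class logits
```

A forward pass plus the loss might look like `FocalLoss()(FGSAModel(vocab_size)(ids), labels)`; in practice the embedding layer would be replaced by the pretrained RoBERTa encoder's hidden states.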