1. Encoder comparisons: TA-BERT outperforms generic BERT in text encoding, demonstrating the value of task-adaptive pretraining, while ConvNeXt-Small excels in image encoding.
2. Fusion strategy comparisons: we examined six feature fusion strategies and found that low-rank bilinear attention used alone (strategy b) does not enhance model performance; fusion was effective only when the bilinear attention features were combined with lower-layer features (strategy e). The effect of introducing a Transformer in the multi-layer feature fusion stage (strategy f) was also examined (a minimal sketch of strategy e follows below).
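A minimal PyTorch sketch of strategy e as described above: a low-rank bilinear attention over image regions whose attended output is concatenated with the lower-layer (unimodal) text feature before the output head. All dimensions, module names, and the exact combination scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LowRankBilinearFusion(nn.Module):
    """Sketch of low-rank bilinear attention fusion (strategy e).
    Dimensions and the fusion layout are assumptions for illustration."""

    def __init__(self, text_dim=768, img_dim=768, rank=256, out_dim=512):
        super().__init__()
        # Low-rank factorisation of the bilinear interaction: W ~= U V^T.
        self.u = nn.Linear(text_dim, rank)   # projects the text vector
        self.v = nn.Linear(img_dim, rank)    # projects each image region
        self.attn = nn.Linear(rank, 1)       # scores each region
        # Head that fuses the bilinear feature with the lower-layer text feature.
        self.out = nn.Linear(rank + text_dim, out_dim)

    def forward(self, text_feat, img_feats):
        # text_feat: (B, text_dim) pooled text vector (e.g. a TA-BERT [CLS] embedding)
        # img_feats: (B, R, img_dim) region features (e.g. a ConvNeXt-Small feature grid)
        t = torch.tanh(self.u(text_feat)).unsqueeze(1)   # (B, 1, rank)
        v = torch.tanh(self.v(img_feats))                # (B, R, rank)
        joint = t * v                                    # low-rank bilinear interaction
        alpha = torch.softmax(self.attn(joint), dim=1)   # (B, R, 1) attention over regions
        bilinear = (alpha * joint).sum(dim=1)            # (B, rank) attended bilinear feature
        # Strategy e: concatenate the bilinear attention feature with the
        # lower-layer text feature, rather than using the bilinear feature alone.
        return self.out(torch.cat([bilinear, text_feat], dim=-1))

# Usage with random tensors (batch of 4, 49 image regions):
fusion = LowRankBilinearFusion()
fused = fusion(torch.randn(4, 768), torch.randn(4, 49, 768))  # -> (4, 512)
```

Using the bilinear feature alone would correspond to strategy b; the concatenation with `text_feat` is what distinguishes the effective variant (strategy e) in the finding above.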