Abstract
Deep convolutional neural networks (CNNs) have demonstrated remarkable success in various applications. However, deploying these models on mobile or embedded devices is challenging due to limited memory and computational resources, and lightweight models often suffer from reduced classification accuracy. We propose a novel design, NCANet (Normalized Channel Attention Network), an enhanced version of MobileNetV3-large, to address the challenges of feature representation in lightweight neural networks. First, a normalized channel attention mechanism is added to reweight image-feature channels and improve the model's recognition accuracy. Second, the MetaACON activation function is introduced in place of ReLU to further enhance performance. Third, to reduce computational cost and the number of parameters, we replace the traditional 5×5 convolution with asymmetric 1×5 and 5×1 convolutions. Experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets achieve top accuracies of 93.24%, 80.12%, and 77.9%, respectively. These results demonstrate that NCANet is more efficient than comparable lightweight models and outperforms state-of-the-art networks at lower complexity.
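To make the third contribution concrete, the sketch below shows one way the asymmetric-convolution substitution can be implemented in PyTorch: a 5×5 kernel is factorized into a 1×5 convolution followed by a 5×1 convolution, cutting the kernel weights per filter from 25 to 10. The module name, channel sizes, and layer ordering are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class AsymmetricConv5(nn.Module):
    """Illustrative replacement of a 5x5 convolution with a 1x5 + 5x1 pair.

    The factorization reduces kernel parameters per filter from 5*5 = 25
    to 1*5 + 5*1 = 10 while preserving the 5x5 receptive field.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # 1x5 convolution: the kernel spans the width dimension only.
        self.conv_1x5 = nn.Conv2d(in_channels, out_channels,
                                  kernel_size=(1, 5), padding=(0, 2))
        # 5x1 convolution: the kernel spans the height dimension only.
        self.conv_5x1 = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(5, 1), padding=(2, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv_5x1(self.conv_1x5(x))


# Usage: output spatial size matches the input, as with a padded 5x5 conv.
x = torch.randn(1, 64, 32, 32)
block = AsymmetricConv5(64, 64)
print(block(x).shape)  # torch.Size([1, 64, 32, 32])
```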