Authors:
Swapnil Justin, Aaradhya Waoo, Akhilesh A. Waoo
Abstract
Activation functions are pivotal components of neural networks, acting as decision-making units that transform the outputs of network nodes and thereby shape overall performance. Selecting an appropriate activation function is crucial to a network's effectiveness. While many activation functions exist, not all are suitable for every scenario, and some have been deprecated due to operational limitations. Properties such as monotonicity, derivative behavior, and range finiteness are crucial for effective learning. This research assesses commonly used activation functions such as Swish, ReLU, and Sigmoid, examining their properties, advantages, and disadvantages. Understanding activation functions is vital to maximizing neural network (NN) performance. By exploring the diverse types of activation functions and their respective merits and drawbacks, researchers and practitioners can make informed choices to optimize NN efficacy across different applications [1][2][3].
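For reference, the three activation functions named above have the following standard definitions (Swish is shown with its common beta = 1 parameterization; the paper may use a different variant):

\[
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\mathrm{ReLU}(x) = \max(0,\, x), \qquad
\mathrm{Swish}(x) = x \cdot \sigma(x) = \frac{x}{1 + e^{-x}}
\]

These definitions illustrate the properties mentioned in the abstract: Sigmoid is monotonic with a finite range of (0, 1); ReLU is monotonic but unbounded above and non-differentiable at zero; Swish is smooth everywhere but non-monotonic near the origin.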
Publisher
Granthaalayah Publications and Printers