Optimization of Deep Convolutional Neural Network with the Integrated Batch Normalization and Global Pooling


Priyanga K K
Sabeen S

Abstract

Deep convolutional neural networks (DCNNs) have made significant progress in a wide range of applications in recent years, including image identification, audio recognition, and machine translation, assisting machine intelligence in a variety of ways. However, because of the large number of parameters and floating-point operations, deployment on machine terminals remains difficult. To handle this issue, the convolution in the DCNN is optimized so that the characteristics of the neural network are adjusted, the loss of information is minimized, and performance is enriched. Minimizing the convolution function addresses these optimization issues. Batch normalization is performed first, and then, instead of reducing small neighbourhood values, each full feature map is reduced to a single value using the global pooling approach. Traditional convolution is split into depthwise and pointwise convolutions to decrease the model size and the number of calculations. The performance of the optimized convolution-based DCNN is evaluated in terms of accuracy and error rate. The optimized DCNN is compared with existing state-of-the-art techniques and outperforms them.
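As a rough illustration of the components named in the abstract, the sketch below (assuming PyTorch; all layer sizes, class names, and the ReLU activations are illustrative assumptions, not taken from the paper) stacks a depthwise convolution and a 1x1 pointwise convolution, each followed by batch normalization, and replaces local pooling before the classifier with global average pooling that collapses every full feature map to a single value.

```python
# Minimal sketch of the abstract's building blocks (assumed PyTorch API;
# channel counts and depth are illustrative, not the paper's architecture).
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution,
    each with batch normalization, in place of a standard convolution."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups = in_channels)
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # Pointwise: 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


class OptimizedDCNN(nn.Module):
    """Illustrative network: stacked separable blocks, then global average
    pooling reduces each feature map to one value before the classifier."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            DepthwiseSeparableBlock(32, 64),
            DepthwiseSeparableBlock(64, 128, stride=2),
        )
        self.global_pool = nn.AdaptiveAvgPool2d(1)  # full feature map -> 1 value
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.global_pool(x).flatten(1)
        return self.classifier(x)


if __name__ == "__main__":
    model = OptimizedDCNN(num_classes=10)
    out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 10])
```

Splitting a standard 3x3 convolution into depthwise and pointwise stages in this way reduces both parameters and multiply-accumulate operations roughly by a factor of the kernel size squared, which is the model-size and computation saving the abstract refers to.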

Article Details

How to Cite
K K, P., & S, S. (2023). Optimization of Deep Convolutional Neural Network with the Integrated Batch Normalization and Global pooling. International Journal of Communication Networks and Information Security (IJCNIS), 15(1), 59–69. https://doi.org/10.17762/ijcnis.v15i1.5617
Section: Research Articles