Accuracy analysis in back propagation neural network considering neurons proportionality among hidden layers

Y. Yong *, M. Guoe, J. Shan

Department of Mechanical and Aerospace Engineering, Old Dominion University, Norfolk, VA, United States

Abstract

In this paper, an analysis is performed on the importance of neuron proportionality among the multiple hidden layers of a back propagation neural network (BPNN). For most problems, a maximum of two hidden layers is enough to train the whole network to the desired result. However, in situations where accuracy is the chief criterion and the training samples are highly similar across classes (as in multiscript numeral recognition, where the shapes of different numerals resemble one another), the accuracy of the network matters most. This paper describes a five-hidden-layer approach, implemented in MATLAB, for obtaining the most accurate possible results in a multiscript pin code recognition system. To obtain the desired output, the proportions of neurons among the hidden layers are semi-optimized to a precise level, although the optimal proportions may vary with the training data set and the type of problem.
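The proportionality idea can be sketched as follows. This is an illustrative example only, not the authors' MATLAB implementation: the shrink ratio, layer widths, sigmoid activation, and input/output sizes below are all hypothetical choices made for the sketch.

```python
import numpy as np

def proportional_layer_sizes(n_input, n_output, n_hidden_layers=5, ratio=0.75):
    """Size each of the hidden layers as a fixed proportion of the previous layer."""
    sizes = [n_input]
    width = n_input
    for _ in range(n_hidden_layers):
        width = max(n_output, int(round(width * ratio)))
        sizes.append(width)
    sizes.append(n_output)
    return sizes

def forward(x, weights, biases):
    """Forward pass with sigmoid activations, as in a classic BPNN."""
    a = x
    for W, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))
    return a

rng = np.random.default_rng(0)
# Hypothetical sizes: 64 input features (e.g., an 8x8 glyph image), 10 digit classes.
sizes = proportional_layer_sizes(n_input=64, n_output=10)
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

out = forward(rng.standard_normal(64), weights, biases)
print(sizes)       # hidden-layer widths shrink by the chosen ratio: [64, 48, 36, 27, 20, 15, 10]
print(out.shape)   # (10,)
```

Tying layer widths to a single ratio reduces the architecture search to one parameter per network; in practice that ratio would be tuned (semi-optimized) against the training set, as the abstract suggests.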

Keywords

Multiple hidden layers, Back propagation neural network, Multiscript pin code recognition

Digital Object Identifier (DOI)

https://doi.org/10.21833/AEEE.2019.06.002

Article history

Received 5 February 2019, Received in revised form 1 May 2019, Accepted 2 May 2019

Full text

Available in PDF

How to cite

Yong Y, Guoe M, and Shan J (2019). Accuracy analysis in back propagation neural network considering neurons proportionality among hidden layers. Annals of Electrical and Electronic Engineering, 2(6): 6-9
