How good a shallow neural network is for solving non-linear decision-making problems


dc.contributor.author He, Hongmei
dc.contributor.author Zhu, Zhilong
dc.contributor.author Xu, Gang
dc.contributor.author Zhu, Zhenhuan
dc.date.accessioned 2019-01-09T09:35:14Z
dc.date.available 2019-01-09T09:35:14Z
dc.date.issued 2018-12-31
dc.identifier.citation Hongmei He, Zhilong Zhu, Gang Xu, Zhenhuan Zhu. (2018) How good a shallow neural network is for solving non-linear decision-making problems. In: BICS 2018: Advances in Brain Inspired Cognitive Systems. International Conference on Brain Inspired Cognitive Systems. Lecture Notes in Computer Science, Volume 10989, pp. 14-24 en_UK
dc.identifier.isbn 978-3-030-00562-7
dc.identifier.uri https://doi.org/10.1007/978-3-030-00563-4_2
dc.identifier.uri https://dspace.lib.cranfield.ac.uk/handle/1826/13800
dc.description.abstract The universal approximation theorem states that a shallow neural network (one hidden layer) can represent any non-linear function. In this paper, we examine how good a shallow neural network is at solving non-linear decision-making problems. We propose a performance-driven incremental approach to searching for the best shallow neural network for decision making, given a data set. Experimental results on two benchmark data sets, Wisconsin Breast Cancer and SMS Spam, demonstrate the correctness of the universal approximation theorem, and show that a hidden layer of about half the number of inputs is sufficient to represent the function underlying the data. Performance-driven BP learning is faster than error-driven BP learning, and the performance of the SNN obtained by the former is no worse than that of the SNN obtained by the latter. This indicates that, when training a neural network with the BP algorithm, the performance reaches a certain value quickly, while the error may keep decreasing. The performance of the SNNs on the two data sets is comparable to or better than that of the optimal linguistic attribute hierarchy, obtained either by a genetic algorithm in a wrapper or manually in terms of semantics, both of which are much more time-consuming. en_UK
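The performance-driven incremental search described in the abstract could be sketched roughly as follows: train a one-hidden-layer network by backpropagation, grow the hidden layer one neuron at a time, and stop as soon as a performance target is met. All details here (learning rate, epoch count, restart count, target accuracy, the toy XOR task) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_snn(X, y, n_hidden, epochs=4000, lr=2.0, seed=0):
    """Train a shallow (one-hidden-layer) network with full-batch backprop."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
    b2 = np.zeros(1)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # hidden activations
        out = sigmoid(h @ W2 + b2)            # network output
        d_out = (out - t) * out * (1.0 - out)  # output-layer delta (MSE loss)
        d_h = (d_out @ W2.T) * h * (1.0 - h)   # hidden-layer delta
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def accuracy(params, X, y):
    W1, b1, W2, b2 = params
    pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) >= 0.5
    return float((pred.ravel() == (y >= 0.5)).mean())

def incremental_search(X, y, target_acc=1.0, max_hidden=8, restarts=3):
    """Grow the hidden layer until the performance target is reached."""
    best = (0, 0.0)
    for n_hidden in range(1, max_hidden + 1):
        for seed in range(restarts):  # random restarts guard against bad inits
            params = train_snn(X, y, n_hidden, seed=seed)
            acc = accuracy(params, X, y)
            if acc > best[1]:
                best = (n_hidden, acc)
            if acc >= target_acc:
                return n_hidden, acc  # performance target met: stop growing
    return best

# XOR: the classic non-linearly-separable decision problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
n_hidden, acc = incremental_search(X, y)
print(n_hidden, acc)
```

The stopping rule is what makes the search performance-driven rather than error-driven: training halts on a classification-performance criterion instead of waiting for the BP error to bottom out, which is why the abstract reports the former as faster.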
dc.language.iso en en_UK
dc.publisher Springer en_UK
dc.rights Attribution-NonCommercial 4.0 International
dc.rights.uri http://creativecommons.org/licenses/by-nc/4.0/
dc.subject Shallow neural network en_UK
dc.subject Performance-driven BP learning en_UK
dc.subject Incremental approach en_UK
dc.subject Non-linear decision making en_UK
dc.subject Universal approximation theorem en_UK
dc.title How good a shallow neural network is for solving non-linear decision-making problems en_UK
dc.type Conference paper en_UK

