Locally versus globally interlayer-connected feed-forward neural networks: a performance comparison
1 January 1990
John D. Provence, S. Naganathan
Abstract
During the past few years, artificial neural networks have been applied to problems such as pattern matching, associative memory recall, and optimization. A property common to all of the network structures that have been proposed and studied is global interconnectivity among the neural processing nodes. While global interconnectivity presents few problems for software simulations of networks, it is a major obstacle to implementation in VLSI and/or optical technology. In this paper, we present the results of a study comparing the performance of globally interlayer-connected multi-layer neural networks with that of networks which employ local connections. The locally connected neural network contains multiple hidden layers, and each neural processing node in a given layer connects to at most its three nearest-neighbor processing nodes in the next higher layer. The advantages of a locally connected network are reduced interconnection complexity, reduced I/O requirements for each neural processing node, and faster processing at each node. The locally connected and globally connected networks are compared with respect to training iterations, number of neural processing nodes needed, number of hidden layers needed, and error-rate performance for a number of different problems.
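The interlayer connection pattern described in the abstract can be viewed as a masked weight matrix: a fully populated matrix gives the globally connected layer, while a banded mask restricts each node to its nearest neighbors in the next layer. The sketch below is not the authors' code; the function names (local_mask, forward), the tanh activation, and the exact neighbor mapping are illustrative assumptions, intended only to show the contrast in connection counts.

```python
# Minimal sketch (assumed interpretation, not the paper's implementation):
# globally connected vs. locally connected (<=3 nearest-neighbor) layers.
import numpy as np

def local_mask(n_in, n_out, fan_out=3):
    """Binary mask letting each input node reach at most `fan_out`
    nearest-neighbor nodes in the next (higher) layer."""
    mask = np.zeros((n_out, n_in))
    for i in range(n_in):
        # map input index onto the output index range, then take neighbors
        centre = int(round(i * (n_out - 1) / max(n_in - 1, 1)))
        lo = max(0, centre - fan_out // 2)
        hi = min(n_out, lo + fan_out)
        mask[lo:hi, i] = 1.0
    return mask

def forward(x, weights, masks):
    """Feed-forward pass; an all-ones mask reproduces the global network."""
    a = x
    for W, M in zip(weights, masks):
        a = np.tanh((W * M) @ a)   # masked weights keep only local links
    return a

rng = np.random.default_rng(0)
sizes = [16, 16, 16, 4]            # input, two hidden layers, output (arbitrary)
weights = [rng.standard_normal((sizes[k + 1], sizes[k])) * 0.1
           for k in range(len(sizes) - 1)]

global_masks = [np.ones_like(W) for W in weights]
local_masks = [local_mask(sizes[k], sizes[k + 1]) for k in range(len(sizes) - 1)]

x = rng.standard_normal(sizes[0])
print("global output:", forward(x, weights, global_masks))
print("local  output:", forward(x, weights, local_masks))
print("connections (global):", sum(int(m.sum()) for m in global_masks))
print("connections (local): ", sum(int(m.sum()) for m in local_masks))
```

In a masked formulation like this, training would also apply the mask to the weight gradients so that pruned links stay absent; the much smaller connection count per layer is the source of the reduced interconnection complexity and per-node I/O cited in the abstract.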
© (1990) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
John D. Provence and S. Naganathan "Locally versus globally interlayer-connected feed-forward neural networks: a performance comparison", Proc. SPIE 1293, Applications of Artificial Intelligence VIII, (1 January 1990); https://doi.org/10.1117/12.21078
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Neural networks
Artificial neural networks
Content addressable memory
Very large scale integration
RELATED CONTENT
Neural model suitable for optical implementation, Proceedings of SPIE (February 2, 1993)
Neural network technology for automatic target recognition, Proceedings of SPIE (August 1, 1990)
Analog CMOS contrastive Hebbian networks, Proceedings of SPIE (September 16, 1992)
Storing temporal sequences of patterns in neural networks, Proceedings of SPIE (October 29, 1993)