Training a feed-forward network by feeding gradients forward rather than by back-propagation of errors
UNSPECIFIED (1997) Training a feed-forward network by feeding gradients forward rather than by back-propagation of errors. Neurocomputing, 16 (2). pp. 117-126. ISSN 0925-2312.
This paper demonstrates how a multi-layer feed-forward network may be trained by gradient descent while feeding gradients forward, rather than feeding errors backwards as is usual in back-propagation. The method of steepest descent requires that the gradient of the network's output with respect to each connection matrix be calculated, along with the output of the final layer. The paper shows how the gradients of the final output are determined by feeding the gradients of the intermediate outputs forward at the same time as the intermediate outputs themselves are fed forward to determine the final output. The method turns out to be equivalent to back-propagation for a two-layer network but is much more readily extended to several layers. It exposes a great potential for concurrency, and the algorithm may be implemented directly in an array-processing language. It may be used, without modification, for a network with an arbitrary number of layers, and the logic complexity of the algorithm is independent of the number of layers in the network.
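The scheme the abstract describes amounts to what is now called forward-mode accumulation: each layer's activations and the derivatives of those activations with respect to every weight matrix seen so far are carried forward together in a single pass, so no backward error pass is needed. Since the full text is not available here, the following NumPy sketch is only one plausible reading of that idea, not the paper's own code: it uses a sigmoid network with no bias terms, and the function name forward_with_gradients and all variable names are assumptions of this illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_with_gradients(x, weights):
    """Feed the input forward, carrying d(output)/d(W_k) for every weight
    matrix W_k seen so far, instead of back-propagating errors afterwards."""
    a = x          # current layer output (vector)
    grads = []     # grads[k][i, j, m] = d a[m] / d W_k[i, j]
    for W in weights:
        a_new = sigmoid(a @ W)
        s = a_new * (1.0 - a_new)        # sigmoid derivative at this layer
        # Chain rule, fed forward: gradients of all earlier weight matrices
        # are pushed through the current layer alongside the activations.
        grads = [np.einsum('ijm,mn,n->ijn', g, W, s) for g in grads]
        # Gradient of the new output w.r.t. the current weight matrix:
        # d a_new[n] / d W[i, j] = a[i] * s[n] if j == n, else 0.
        g_new = np.zeros((W.shape[0], W.shape[1], a_new.shape[0]))
        for j in range(W.shape[1]):
            g_new[:, j, j] = a * s[j]
        grads.append(g_new)
        a = a_new
    return a, grads

# One gradient-descent step for squared error E = 0.5 * ||y - t||**2:
# dE/dW_k is contracted from the forward-fed gradients, layer by layer.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
x, t = rng.standard_normal(3), np.array([0.0, 1.0])
y, grads = forward_with_gradients(x, weights)
for W, g in zip(weights, grads):
    W -= 0.1 * np.einsum('ijm,m->ij', g, y - t)
```

Note how the loop body is identical for every layer, which matches the abstract's claims: the same code handles an arbitrary number of layers without modification, and each update is an array contraction that an array-processing language can execute concurrently.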
Item Type: Journal Article
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Journal or Publication Title: Neurocomputing
Publisher: Elsevier Science BV
Date: 31 July 1997
Number of Pages: 10
Page Range: pp. 117-126