
The popular backpropagation algorithm for training neural nets is a special case of an earlier principle of significance feedback, which in turn has much in common with Selfridge’s “Pandemonium” and a connection with McCulloch’s “redundancy of potential command”. Ways in which the effects might operate in real neural nets are reviewed, and the ideas are related to the current interest in heterogeneous agents. The tendency to restrict attention to numerical optimisation is regretted.
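The backward flow of credit that the abstract alludes to can be illustrated with a minimal sketch (not from the article): backpropagation through a two-unit scalar network, where each unit's "significance" is the error signal fed back to it via the chain rule. All names and values here are illustrative assumptions, verified against a finite-difference gradient.

```python
import math

def forward(x, w1, w2):
    """Two-layer scalar network: input -> hidden tanh unit -> output tanh unit."""
    h = math.tanh(w1 * x)          # hidden activation
    y = math.tanh(w2 * h)          # output activation
    return h, y

def backprop(x, target, w1, w2):
    """Return gradients of the squared error w.r.t. w1 and w2."""
    h, y = forward(x, w1, w2)
    e = y - target                 # output error
    dy = e * (1 - y * y)           # error at the output's pre-activation
    dw2 = dy * h                   # gradient for the output weight
    dh = dy * w2                   # error "fed back" to the hidden unit
    dw1 = dh * (1 - h * h) * x     # gradient for the hidden weight
    return dw1, dw2

def loss(x, t, w1, w2):
    return 0.5 * (forward(x, w1, w2)[1] - t) ** 2

# Check the backward pass against a numerical (central-difference) gradient.
x, t, w1, w2 = 0.7, 0.3, 0.5, -0.4
g1, g2 = backprop(x, t, w1, w2)
eps = 1e-6
n1 = (loss(x, t, w1 + eps, w2) - loss(x, t, w1 - eps, w2)) / (2 * eps)
n2 = (loss(x, t, w1, w2 + eps) - loss(x, t, w1, w2 - eps)) / (2 * eps)
```

The finite-difference check is the usual sanity test that the backward signals really are gradients; the article's point, as the abstract states, is that this gradient computation is one instance of a more general feedback-of-significance principle.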
