Two broad approaches to computing are known: connectionist (which includes Turing Machines but is demonstrably more powerful) and selectionist. Human computer engineers tend to prefer the connectionist approach, which includes neural networks. Nature uses both but may show an overall preference for selectionism.
"Looking back into the history of biology, it appears that whenever a phenomenon resembles learning, an instructive theory was first proposed to account for the underlying mechanisms. In every case, this was later replaced by a selective theory." - N. K. Jerne, Nobelist in Immunology.
Recently, there has been some effort to use selectionist methods to optimize neural networks. Farhat (1) used optically generated noise to assist in fast simulated annealing. Shamir et al. (2-4) used Genetic Algorithms (GA) to optimize pattern recognition masks for Fourier processors (a very limited type of fully interconnected neural network). Both built selectionist-designed connectionist systems.
I hope to show that a true hybrid selectionist/connectionist system of useful proportions can be put together optically. That is, multiple neural networks can be assembled optically to work in parallel. Each seeks to learn the task independently, but those that do best are allowed to exchange information among themselves to produce a new generation which replaces the old.
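The scheme just described, independent learners whose best members exchange information to seed the next generation, is essentially an evolutionary algorithm running over a population of networks. A minimal electronic sketch of that idea (not the optical system proposed here; the XOR task, population size, and all function names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): learn XOR with tiny 2-3-1 networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_PARAMS = 13  # W1 (3x2) + b1 (3) + W2 (1x3) + b2 (1)

def forward(w, inputs):
    """Run one flat weight vector as a 2-3-1 feed-forward network."""
    W1, b1 = w[:6].reshape(3, 2), w[6:9]
    W2, b2 = w[9:12].reshape(1, 3), w[12]
    h = np.tanh(inputs @ W1.T + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2.T + b2))).ravel()

def fitness(w):
    """Higher is better: negative mean squared error on the task."""
    return -np.mean((forward(w, X) - y) ** 2)

def next_generation(pop, n_elite=2, mutation=0.1):
    """Rank the population; the best half exchange information
    (uniform crossover) plus mutation to replace the old generation."""
    scores = np.array([fitness(w) for w in pop])
    pop = pop[np.argsort(scores)[::-1]]          # best first
    children = [pop[i].copy() for i in range(n_elite)]  # elites survive intact
    parents = pop[: len(pop) // 2]
    while len(children) < len(pop):
        a = parents[rng.integers(len(parents))]
        b = parents[rng.integers(len(parents))]
        mask = rng.random(a.size) < 0.5           # gene-wise exchange
        children.append(np.where(mask, a, b) + rng.normal(0, mutation, a.size))
    return np.array(children)

# Evolve: each network "learns" only through selection and recombination.
pop = rng.normal(0, 1, (30, N_PARAMS))
best = [max(fitness(w) for w in pop)]
for _ in range(200):
    pop = next_generation(pop)
    best.append(max(fitness(w) for w in pop))
```

Because the elite copies pass to the next generation unmutated, the best fitness in the population never decreases, which mirrors the requirement that the exchange of information among the best performers not lose what has already been learned.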
I will then argue that a parallel, evolving, coordinating neural network population is superior in principle to a single, even "optimum," neural network for real-world problems, and that optics is far better suited for such a system than electronics.