Damien Sullivan <phoenix at ugcs.caltech.edu> writes:

> On Sat, Aug 17, 2002 at 02:16:36PM -0500, David Dyer-Bennet wrote:
>
> > Now, at *some* level thought occurs by diffusing neurotransmitters
> > through the soup.  If what's in our head *actually* works anything
> > like a "neural network" (the modern technological concept) works,
> > I'd say that "symbol" isn't a very relevant concept to it.
>
> I'd disagree.  Nothing stops symbols from being shunted around in, or
> implemented in, a neural network.  And one book had specific examples
> where a neural net was trained in the standard way, and then the
> resulting black box was examined very carefully, and lo, you could
> see the logical rules right there.  They're obfuscated, not magical.

I'm not surprised they might be findable.  I wonder, though, whether
any property of neural networks *guarantees* that they're findable.
At one level we understand neural networks very well (after all, most
of them are software simulations, quite deterministic), but at other
levels I don't think we understand them *at all*.

-- 
David Dyer-Bennet, dd-b at dd-b.net / New TMDA anti-spam in test
John Dyer-Bennet 1915-2002 Memorial Site: http://john.dyer-bennet.net
Book log: http://www.dd-b.net/dd-b/Ouroboros/booknotes/
New Dragaera mailing lists, see http://dragaera.info
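
P.S. To make Damien's "rules hiding in the black box" point concrete,
here's a toy sketch (mine, not from whatever book he has in mind; the
setup and numbers are purely illustrative): train a single perceptron
on logical AND with the standard learning rule, then read the rule
straight back out of the weights.

    import itertools

    # Toy training set: target is logical AND of the two inputs.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [0.0, 0.0]  # weights
    b = 0.0         # bias

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    # Standard perceptron learning rule; AND is linearly separable,
    # so this converges quickly.
    for epoch in range(20):
        for x, target in data:
            error = target - predict(x)
            w[0] += 0.1 * error * x[0]
            w[1] += 0.1 * error * x[1]
            b += 0.1 * error

    # Examine the "black box": the weights spell out the rule.
    print("weights:", w, "bias:", b)
    for x in itertools.product([0, 1], repeat=2):
        print(x, "->", predict(x))
    # Both weights come out positive, and the bias is negative enough
    # that the unit fires only when *both* inputs are 1: the logical
    # rule AND, obfuscated as arithmetic.

Nothing in a two-input example bears on whether the rules stay
recoverable once the nets get big and messy, which is exactly the part
I'm wondering about.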