Dragaera

OT: bois (was: Sethra Lavode vs. Enchantress of Dzur Mountain)

Sun Aug 25 10:27:16 PDT 2002

On Sat, Aug 24, 2002 at 09:35:44AM -0400, Mark A Mandel wrote:
> On 23 Aug 2002, David Dyer-Bennet wrote:
> 
> #I'm not surprised they might be findable.  I wonder, though, if any
> #properties of neural networks *guarantee* that they're findable?  At

[ they being logical rules ]

After reading Wolfram, I have to admit perhaps not.  You can execute simple
rules in a large space and get behavior not derivable from the rules, except
by seeing what they do.  Of course, that's not peculiar to NNs.  Whether a
trained NN tends to be understandable or magic might depend on the training.
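As an illustrative sketch (the rule number is just one of Wolfram's standard
examples, not anything from this thread): elementary cellular automaton Rule
110 has an eight-entry update table you can read in a glance, but the patterns
it produces aren't derivable from the table except by running it.

```python
# Elementary cellular automaton Rule 110: the 8-entry rule table is
# trivial, but the long-run behavior isn't obvious from reading it.
RULE = 110  # the rule number's bits give the output for each 3-cell pattern

def step(cells):
    """Update every cell from its (wraparound) 3-cell neighborhood."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(12):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Run it and you get the familiar growing triangular tangle; nothing in the
rule table tells you that in advance, which is the point.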

(There was the genetic-algorithm-grown electronic circuit for adding or
multiplying numbers which was smaller than usual, and had a portion not even
conventionally connected to the rest of the circuit; it turned out to be doing
funky stuff with induction.  It only worked within a very narrow temperature
range, though, so not really robust.)

> #one level we understand neural networks very well (after all most of
> #them are software simulations, quite deterministic), but at other
> #levels I don't think we understand them *at all*.

Also depends on the NNs we're talking about.  Simple feedforward ones have
been likened to regression techniques in statistics.  Cool implementation, but
nothing all that special.  Recurrent NNs have a lot more potential for being
hard to predict and analyze (but again, the same is true of a randomly
generated Turing machine, say.  It's the halting problem generalized.)
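To make the regression comparison concrete (my own toy sketch, not anything
from the thread): a single sigmoid unit in a feedforward net is, term for
term, the logistic regression model from statistics.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def unit(weights, bias, inputs):
    """One 'neuron': weighted sum plus bias, squashed through a sigmoid.
    This is exactly the logistic regression model from statistics."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

p = unit([2.0, -1.0], 0.5, [1.0, 3.0])  # a probability-like value in (0, 1)
```

Stack layers of these and you get something statisticians don't usually
write down, but one unit by itself is nothing special.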

> And let's not forget that despite their name, the software constructs
> that their creators optimistically named "neural nets" probably have
> very little in common with wetware.

To try to respond to subsequent discussion, again, yes and no.  If our brains
are doing information processing we should only have to simulate the firing
patterns and neurotransmitter diffusion, and the latter should be imitable
by proper auxiliary wiring.  Ignoring fine details of biology shouldn't
matter.  And obviously the basic idea of NNs is taken from ideas about brains.
OTOH, Michael Dawson's _Understanding Cognitive Science_ said that NNs aren't
always as biologically imitative as they could be.  Threshold functions, for
example, have been slowly getting more lifelike (improving the NN performance,
too.)  Recurrent NNs have often been avoided, because they're hard to write
math equations about.  And the learning mechanisms are often utterly
unlifelike: this backpropagation or other teaching mechanism comes in from
outside and messes with the weights.  Yeah, right.  But Dawson outlined a
network which would be self-training.  Of course, this about tripled the size
of the network.
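To show what "comes in from outside and messes with the weights" looks like
(a hedged toy example of a delta-rule update for one sigmoid unit -- my own
sketch, not Dawson's self-training network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The 'outside teacher': an external procedure computes the error and
# reaches in to adjust the unit's weights.  Nothing the unit itself
# does -- which is the biologically dubious part.
def train_step(weights, bias, inputs, target, lr=0.5):
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    delta = (target - out) * out * (1.0 - out)  # gradient of squared error
    new_w = [w + lr * delta * x for w, x in zip(weights, inputs)]
    return new_w, bias + lr * delta, out

weights, bias = [0.0, 0.0], 0.0
for _ in range(200):
    weights, bias, out = train_step(weights, bias, [1.0, 0.0], target=1.0)
# out creeps toward the target as the outside procedure keeps meddling
```

No neuron has a little gradient-computing homunculus attached to it, hence
Dawson's interest in networks that do the equivalent job internally.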

The point is that NNs aren't just limited by current knowledge of biology;
according to Dawson, they've also avoided known and relevant details because
those were too hard to model.

Mark later brings up number of connections.  NNs I've read about often start
life fully connected, at least from level to level, and I think Hopfield nets
are totally connected.  In brains the average number of synapses per neuron
is about 1000, out of millions or billions (or, in humans, ~100 billion)
neurons.  OTOH, our NNs are often small, so "fully connected" isn't saying
that much.
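Back-of-envelope, with illustrative numbers only (the ~1000-synapse and
100-billion-neuron figures from above): a fully connected net of N units
needs on the order of N^2 connections, so "fully connected" is only cheap
because N is small, while brain-style wiring scales linearly.

```python
# Fully connected: every unit talks to every other unit.
def full_connections(n):
    return n * (n - 1)  # directed connections, no self-loops

# Brain-style: a fixed average fan-out regardless of total size.
def sparse_connections(neurons, synapses_per_neuron=1000):
    return neurons * synapses_per_neuron

toy_net = full_connections(100)                 # 9,900 -- easy to simulate
brain = sparse_connections(100_000_000_000)     # ~10^14 synapses
```

At brain scale the fully connected version would need ~10^22 connections,
eight orders of magnitude more than the real thing, so the comparison is
only fair for tiny networks.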

-xx- Damien X-)