I seem to have plenty of science projects to dabble in, and they help to pleasantly ‘kill time’ on this side of eternity. So I thought I would take a bit of a sabbatical from the study of the evolution/ID debate, with its ‘red in tooth and claw’ feel, and go back to my old word association program ‘thinknet’. This project, which has its roots in code going back to the nineties and in ideas that go back even further, links words, or rather ‘ideas’, into a network of associations. The links are weighted with an appropriate probability, and these probabilities emerge as a result of the statistical properties of dictionaries of links. (That’s the theory, at least.)

In this model a link is naturally a two-way thing: a link from concept 1 to concept 2 implies a link from concept 2 to concept 1, although in the most general case the two directions of the link will carry different probability weightings. (A concept may be linked to any number of other concepts.) This picture seems to have a fairly fundamental mathematical basis in the distribution of properties over items. However, it is natural to wonder just what isomorphisms this model has with the neural networks in the brain. A single neuron is most likely to have a large number of incoming links connected to its dendrites but only one outgoing link via its axon. What does that mean?
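To make the two-way, asymmetrically weighted linking concrete, here is a minimal sketch of the idea in Python. This is not the original thinknet code; the class and method names are hypothetical, and the weights are simply supplied by hand rather than emerging from the statistics of a dictionary of links:

```python
from collections import defaultdict

class AssociationNet:
    """Sketch of a thinknet-style association network (hypothetical
    design, not the original program). Every link is stored in both
    directions, but each direction carries its own probability weight."""

    def __init__(self):
        # weights[a][b] holds the strength of the link a -> b
        self.weights = defaultdict(dict)

    def link(self, a, b, w_ab, w_ba=None):
        # A link from a to b implies a link from b to a; the reverse
        # weight may differ, and defaults to the forward weight.
        self.weights[a][b] = w_ab
        self.weights[b][a] = w_ab if w_ba is None else w_ba

    def associations(self, concept):
        # Concepts linked to `concept`, strongest first. A concept may
        # be linked to any number of other concepts.
        return sorted(self.weights[concept].items(),
                      key=lambda kv: kv[1], reverse=True)

net = AssociationNet()
net.link("dog", "bark", 0.9, 0.6)  # asymmetric pair of links
net.link("dog", "cat", 0.5)        # symmetric by default
print(net.associations("dog"))     # -> [('bark', 0.9), ('cat', 0.5)]
print(net.associations("bark"))    # -> [('dog', 0.6)]
```

Note that the reverse link is created automatically, which is the point of the model: linking is a two-way affair, with the asymmetry living only in the weights.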
(I will, of course, continue with the evolution/ID studies in due course, and thanks to all those who make this study such an exciting project.)