In artificial intelligence (AI) circles, we have two main ways of doing things - rule-based and connectionist. Actually, that is a major simplification, but it will do for the moment.
If you want a
computer to learn to do something, you can provide it with a set of rules
about how to do simple tasks and further rules on how to create new
rules from old ones and from experience. And there is nothing wrong
with this. The provision of the original rules need not represent a
huge investment, depending on the level of acceptable fault tolerance.
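To make that a little more concrete, here is a minimal sketch of the idea in Python - the rules and the "learn from experience" meta-rule are invented for illustration, not taken from any particular system:

    # Rules map a condition to an action; a crude meta-rule creates
    # new rules from experience. All names here are illustrative.
    rules = {
        "hungry": "eat",
        "tired": "rest",
    }

    def act(state):
        # Apply the rule whose condition matches, or fall back to exploring.
        return rules.get(state, "explore")

    def learn(state, action, reward):
        # Meta-rule: if an action paid off in a new situation, keep it.
        if reward > 0 and state not in rules:
            rules[state] = action

    learn("bored", "read", reward=1)  # experience becomes a new rule
    print(act("bored"))               # -> "read"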
AI systems of this sort normally operate in two different modes - one explorative, where the system tries to learn the rules of the environment in which it has to exist, and the other exploitative, where it seeks to exploit the knowledge it has gained through learning.
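As a concrete illustration of that trade-off, here is a minimal sketch of an epsilon-greedy strategy in Python - the three-armed set-up and the payoff probabilities are invented for the example:

    import random

    def pull(arm):
        # Hypothetical environment: each arm pays off with a fixed probability.
        payoff = [0.2, 0.5, 0.8]
        return 1.0 if random.random() < payoff[arm] else 0.0

    estimates = [0.0, 0.0, 0.0]  # learned estimates of each arm's value
    counts = [0, 0, 0]
    epsilon = 0.1  # fraction of the time spent exploring

    for step in range(1000):
        if random.random() < epsilon:
            arm = random.randrange(3)              # explore: try anything
        else:
            arm = estimates.index(max(estimates))  # exploit: use what we know
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    print(estimates)  # should drift towards [0.2, 0.5, 0.8]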
The other sort of system is connectionist - based on some sort of neural network. There are many different types of neural network, but the one most commonly associated with the term is probably the multilayer perceptron. It is far from the best artificial neural network in many ways, but it does have an elegant simplicity, and is a fairly good firing-frequency analogue of a biological neural network (though it would be best to consider it as inspired by nature, rather than really being a model of it).
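For those who have not met one, a multilayer perceptron's forward pass can be sketched in a few lines of Python - the layer sizes and random weights below are arbitrary, chosen only to show the structure:

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weights, biases):
        # Each unit emits a squashed weighted sum of its inputs -
        # loosely analogous to a biological neuron's firing rate.
        return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
                for ws, b in zip(weights, biases)]

    random.seed(0)
    w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
    w_out = [[random.uniform(-1, 1) for _ in range(3)]]

    hidden = layer([0.5, -0.2], w_hidden, [0.0] * 3)  # 2 inputs -> 3 hidden
    output = layer(hidden, w_out, [0.0])              # 3 hidden -> 1 output
    print(output)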
Connectionism
Connectionism is essentially a theoretical position which holds that cognitive processes can be modelled and understood using highly connected, brain-like networks. But it has wider application: it can be seen as having some impact on connectivism, and on the design and analysis of multi-agent systems.
Essentially the idea is that there are a large number of autonomous agents, which maintain relationships with one another of varying strengths. As the agents pass messages to each other, the strengths of those relationships change, and the system can, as a consequence, be seen to learn. Of course, there are many ways of setting up a system in such a way that it won't learn, or at least won't learn anything useful, but that doesn't matter - it is the fact that some can learn appropriate responses which matters.
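One way to picture this - a minimal sketch using a simple Hebbian-style update, which is only one of many rules such a system might use - is:

    # Agents' connection strengths change as messages pass between them.
    # The update rule and the agents are invented for illustration.
    strengths = {}  # (sender, receiver) -> connection strength in [0, 1]

    def send(sender, receiver, useful, rate=0.1):
        # Strengthen the link when a message proves useful, weaken it otherwise.
        w = strengths.get((sender, receiver), 0.5)
        target = 1.0 if useful else 0.0
        strengths[(sender, receiver)] = w + rate * (target - w)

    send("alice", "bob", useful=True)
    send("alice", "bob", useful=True)
    send("alice", "carol", useful=False)
    print(strengths)  # alice-bob strengthened, alice-carol weakened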
And we can consider groups of people in these terms, especially groups where there are clear modes of communication. Corporate bodies, for instance, have clear mechanisms for the transmission of information between their constituent members, and the individual agents maintain their own impression of the strengths of the relationships. Do businesses learn to cope with their environments? Well, some do, some don't.
How far?
How far can we go in using connectionism to describe communities, rather than as a model of the internal workings of the individual?
It seems likely to me, at least, that the model scales up quite well. Although, technically, it is more scaling down - the average human brain has something in the order of 1×10^11 neurons (and the additional glial cells may have more to do with information processing than we normally give them credit for), each of which has connections with 1×10^4 others. There are currently only about 6×10^9 people in the world, and each generally has connections of consequence with up to about 1×10^4 others during their entire lifetime.
This figure is approximate of course - working on the basis that a university lecturer may have an intake of 200 students per year for 40 years (8,000 students, so of the order of 1×10^4), and that the rest of their connections will be minimal in comparison. Some people will have more - but I would argue that the strength of the connections will be much less - and the majority will have far fewer connections.
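Putting the two scales side by side, using the rough figures above (order-of-magnitude estimates only):

    # Order-of-magnitude comparison, using the figures quoted above.
    neurons = 1e11
    synapses_per_neuron = 1e4
    people = 6e9
    contacts_per_person = 1e4

    brain_connections = neurons * synapses_per_neuron  # ~1e15
    social_connections = people * contacts_per_person  # ~6e13

    print(brain_connections / social_connections)  # ~17, so the brain has
    # more connections, but only by an order of magnitude or so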
At least, they used to. With the many-to-many communications which are now available via the internet - and specifically not just the web but web 2.0 - the number of connections has the capacity to rise dramatically. And, of course, we also have an improved level of persistence of communication, thanks to writing, which gives more continuity than neurons can achieve.