
Connectionism was explicitly put forward as an alternative to the classical, computer-based paradigm of cognitive science. Many philosophers see connectionism as a basis for denying structured symbols. Empirical evidence from neuroscience suggests that no discrete symbol, proposition, sentence, or algorithm can be located in the brain, so there must be an alternative, more basic mechanism for the representation and processing of knowledge. The goal of the connectionist approach is to construct an abstract model of the neural processes taking place in the brain.

Connectionist models seem particularly well matched to what we know about neurology. Neural networks are also particularly well adapted for problems that require the resolution of many conflicting constraints in parallel. There is ample evidence from research in artificial intelligence that cognitive tasks such as object recognition, planning, and even coordinated motion present problems of this kind.

The main claims of connectionism:

Knowledge is distributed: One of the central claims associated with the parallel distributed processing approach is that knowledge is coded in a distributed fashion, and localist representations are widely rejected within this perspective. Bowers (2002) notes, however, that connectionist networks can learn localist representations, and that many connectionist models depend on localist coding for their functioning. He argues that there are fundamental challenges that have not been addressed by connectionist theories but are readily accommodated within localist approaches. In word and non-word naming tasks, it has been found that distributed representations make it difficult for models to name monosyllabic non-words and to deal with more complex language phenomena. Neural networks have great difficulty representing information that specifies who is doing what with whom and with what (e.g. John hit Paul with the hammer). In contrast, models that learn localist representations support many of the core language functions that distributed connectionist models fail to account for. It is concluded that the common rejection of localist coding schemes, and the complete reliance on distributed representations in many connectionist theories, may be premature, and that more research is needed before the issue is fully understood.

Knowledge is stored by content: The connectionist processing and learning paradigm has many implications. Because of its associative processing mechanism, a connectionist network has a content-addressable memory: the incoming pattern of activation produced when thinking of something shares parts with a previously stored pattern, and this overlap is sufficient to reactivate the rest of that pattern. This is hard to achieve in classical architectures, where items are typically accessed on the basis of knowing where they were stored.
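Content addressability can be sketched with a Hopfield-style associative memory; the ±1 patterns and simple Hebbian weights below are illustrative assumptions, not a model from the work discussed here:

```python
def train(patterns):
    """Hebbian outer-product weights; no self-connections."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, probe, steps=5):
    """Synchronously update all units until the pattern settles."""
    s = list(probe)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([stored])

# A partial, corrupted cue (the CONTENT) retrieves the full stored pattern;
# no address is ever consulted.
cue = [1, -1, 1, -1, -1, -1, 1, 1]   # two bits flipped
print(recall(w, cue) == stored)      # True
```

The cue reactivates the whole pattern because its surviving parts match the stored one, which is the pattern-completion story described above.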

As Norman (1981) put it, “information is not stored anywhere in particular. Rather, it is stored everywhere”. An immediate consequence of connectionism is that memories are deeply sensitive to context.

Graceful degradation is another feature typical of natural and artificial nervous systems: if small parts of the network are damaged, this has only a small effect on its overall performance. Learning is based on a process of self-organization at a pre-linguistic level.
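Graceful degradation can be illustrated with a toy Hebbian associator: the network and the crude "lesion" procedure below are my own illustrative assumptions, but they show how recall survives the loss of a large fraction of the connections:

```python
def hebbian_weights(inp, out):
    """Outer-product weights linking an input pattern to an output pattern."""
    return [[out[i] * inp[j] for j in range(len(inp))] for i in range(len(out))]

def recall(w, inp):
    """Threshold the weighted sums to recover the output pattern."""
    return [1 if sum(w[i][j] * inp[j] for j in range(len(inp))) >= 0 else -1
            for i in range(len(w))]

def lesion(w, k=3):
    """Crude deterministic damage: knock out every k-th connection."""
    for i in range(len(w)):
        for j in range(len(w[i])):
            if j % k == 0:
                w[i][j] = 0.0

inp = [1, -1, 1, 1, -1, 1, -1, -1, 1, -1]
out = [1, 1, -1, -1, 1, -1]
w = hebbian_weights(inp, out)

lesion(w)  # zeroes 4 of the 10 connections into every output unit

# Recall is still perfect: each output unit pools evidence from its
# many surviving connections, so the damage is absorbed.
print(recall(w, inp) == out)  # True
```

Contrast this with a classical architecture, where deleting 40% of a stored record would simply destroy it.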

Information is processed in parallel: Neural networks often have many hundreds of thousands of small units, each processing different information. Connectionism implies that information is not processed serially; rather, many computations are performed simultaneously, in parallel. Townsend (2004) argues, however, that it is extremely difficult to separate reasonable serial and parallel models entirely on the basis of typical data; his study found strong evidence for both pure serial and pure parallel processing, with differences occurring across individuals and inter-stimulus conditions.
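A toy example, assumed purely for illustration, of why the distinction matters computationally: a synchronous (parallel) update, in which every unit reads the same snapshot of the network, can give a different result from a serial update, in which each unit sees its predecessors' new values. Here each unit simply copies its left neighbour's activation:

```python
def parallel_step(units):
    """All units read the SAME snapshot, then update together."""
    return [units[i - 1] for i in range(len(units))]

def serial_step(units):
    """Units update one at a time, each seeing earlier updates."""
    units = list(units)
    for i in range(len(units)):
        units[i] = units[i - 1]
    return units

start = [1, 0, 0, 0]
print(parallel_step(start))  # [0, 1, 0, 0] — the pulse of activation shifts along
print(serial_step(start))    # [0, 0, 0, 0] — serial overwriting erases the pulse
```

This is also why, as Townsend notes, behavioural data alone often cannot tell the two regimes apart: for many update rules the outcomes coincide, and only carefully chosen conditions separate them.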

Inactive knowledge is nowhere: In connectionism, knowledge is represented by a pattern of activation; when that pattern is not active, the information is not represented anywhere in the system. In a simple feedforward network, activation flows directly from the input units to the hidden units and then on to the output units. More realistic models of the brain would include many layers of hidden units, and recurrent connections that send signals back from higher to lower levels. Such recurrence is necessary to explain cognitive features such as short-term memory. Connectionists have tended to avoid recurrent connections because little is understood about the general problem of training recurrent nets. However, Elman (1991) and others have made some progress with simple recurrent nets, where the recurrence is tightly constrained (Garson, 2007).
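The forward pass of an Elman-style simple recurrent net can be sketched as follows; the weights and layer sizes are arbitrary illustrative assumptions. The key constraint is that the "context" units hold a frozen copy of the previous hidden state, so the recurrence is tightly controlled:

```python
import math

def srn_step(x, context, w_in, w_ctx):
    """One time step: hidden units see the current input AND the previous
    hidden state (via the context units), then apply a logistic squash."""
    hidden = []
    for i in range(len(w_in)):
        net = sum(w_in[i][j] * x[j] for j in range(len(x)))
        net += sum(w_ctx[i][j] * context[j] for j in range(len(context)))
        hidden.append(1.0 / (1.0 + math.exp(-net)))
    return hidden

# Two input units, two hidden/context units; fixed illustrative weights.
w_in = [[1.0, -1.0], [0.5, 0.5]]
w_ctx = [[0.5, 0.0], [0.0, 0.5]]

context = [0.0, 0.0]
for x in ([1, 0], [0, 1]):       # feed a two-step input sequence
    context = srn_step(x, context, w_in, w_ctx)

# The same final input, presented with a blank context, yields a different
# hidden state: the net's response depends on the whole sequence.
fresh = srn_step([0, 1], [0.0, 0.0], w_in, w_ctx)
print(context != fresh)  # True: history changes the response
```

This is exactly the short-term memory the passage describes: past inputs survive only as the currently active hidden pattern, and once that pattern decays, the information is nowhere.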
