
REDUNDANCY in NETWORKS

Charles François (2004). REDUNDANCY in NETWORKS, International Encyclopedia of Systems and Cybernetics, 2(2): 2774.
Collection: International Encyclopedia of Systems and Cybernetics
Year: 2004
Vol. (num.): 2(2)
ID: 2774
Object type: Methodology or model

The existence of a multiplicity of nodes and paths between them, able to perform in an equivalent manner.

The theory of redundancy in networks was developed by W.S. McCULLOCH, W.H. PITTS and J. von NEUMANN, who showed that redundancy offered the only hope of obtaining satisfactory operative results from networks made of sometimes unreliable components.

In St. BEER's words: “…von NEUMANN propounded a mathematical theorem which showed that if one is prepared to go on increasing the redundancy of a network without limit, it is possible to obtain an output of arbitrarily high accuracy from a network whose components are of arbitrarily low reliability” (1968, p.201).
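The effect BEER describes can be illustrated with a small simulation. The sketch below is not von NEUMANN's actual multiplexing construction; it uses the simpler device of majority voting over independently replicated unreliable components, and it assumes each component fails independently with a fixed probability below one half. The function names and parameters are illustrative only. Under these assumptions, the network's overall error rate shrinks rapidly as redundancy grows.

```python
import random

def unreliable_component(correct_value: bool, failure_prob: float) -> bool:
    """Return the correct value, except with probability `failure_prob`
    the component fails and returns the wrong value."""
    return correct_value if random.random() > failure_prob else not correct_value

def redundant_network(correct_value: bool, failure_prob: float, copies: int) -> bool:
    """Run `copies` independent unreliable components and take a majority vote."""
    votes = sum(unreliable_component(correct_value, failure_prob) for _ in range(copies))
    return votes > copies / 2

def error_rate(failure_prob: float, copies: int, trials: int = 50_000) -> float:
    """Estimate how often the redundant network as a whole gives the wrong answer."""
    wrong = sum(redundant_network(True, failure_prob, copies) is False for _ in range(trials))
    return wrong / trials

if __name__ == "__main__":
    # Each individual component is wrong 30% of the time, yet the network's
    # error rate falls steeply as redundancy (number of copies) increases.
    for copies in (1, 3, 9, 27, 81):
        print(f"{copies:>3} copies -> error rate {error_rate(0.3, copies):.4f}")
```

Majority voting is only one way of spending redundancy; the general point is that equivalent nodes and paths allow the whole to remain accurate even when individual parts misbehave.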

This is so in natural networks, like the cerebral cortex, which remains more or less reliable as a whole even when numerous neurons have been lost, for example through a cerebral haemorrhage. It is also the case in artificial networks, and even in social ones, as may be seen when they repair themselves, sometimes after massive destruction of elements and interconnections.
