Bias arises from various concerns of life, loyalty, local risk, and attention that are difficult to separate or codify.
In physics, researchers are concerned with observer effects, which set clear limits on the process of observing, as in the Uncertainty Principle or in attempts to untangle quantum entanglement. These are well-accepted foundations of 20th-century philosophy of science and, along with a few other such discoveries (like the constancy of the speed of light), form its core epistemology. Knowing these limits has helped develop a cognitive science by which humans might reasonably characterize the limits of their own perception.
However, bias does not end with cognition. Interpreting data about what humans 'can' observe becomes controversial when few individuals are capable of reproducing experiments and compiling new models. Culture arises.
The models and goals of experiments also come into question as "normal science" nears a "paradigm shift" and considers changing its ontology, among other changes. Thomas Kuhn argued that the ontology or methodology accepted by prevailing scientists often provided a cultural bias, much like the one that previously guided human cognition of other species as being unfeeling or unconscious. Assigning all freedom of choice to the observer and none to the observed has been a common theme, and error, in all science prior to the 20th century.
Finally, there are notation problems. Aside from the question of the neutrality of mathematics as studied in the philosophy of mathematics, there are many shallower disputes:
For example, a quite different ontology problem, identified by anti-reductionist physics, is that describing the bonds or relations between particles as the first-class objects is just as legitimate as the particle physics model in which the particles themselves are primary. Ontological ("what exists") and epistemological ("how you know") concerns often overlap, as they do in theology and philosophy, where the terms originate. But descriptions of the relations between cognitive bias, culture bias, and notation bias are as controversial as other questions of individual, social, and institutional responsibility, as is most clearly visible in the wide range of economic studies of human capital.
The most all-encompassing example of cognitive bias may be the Anthropic Principle: in its "weak" form, this states that humans cannot observe any of the possible universes in which humans cannot exist, and therefore all human observations of the universe are constrained by the many fundamental constants that gave rise to human cognition itself. It is a tautology, and most scientists accept that peering into other universes is not possible; thus theories about them have serious problems of falsifiability.
The "strong" form of the Anthropic Principle (which most scientists do not accept) argues the reverse: that the fundamental constants are constrained by cognition. How can this be, when mathematics predicts future events so reliably?
One controversial argument, by Buckminster Fuller, held that cognitive bias arose not directly out of any fundamental physical constants, but rather out of the shared life, loyalty, and locality imposed by living in one gravity well sustaining one biosphere: rather than binding to the whole universe, intelligent creatures are bound emotionally and cognitively to their home planet. In his book "Critical Path" he concluded, somewhat controversially, that "gravity is love".
However, arguing that planets or biospheres must hopelessly bias all aspects of our cognition in their favor is not even the most extreme claim. Some argue that whatever human cognitive bias exists is extended by culture into notations, where it becomes institutionalized:
Many theorists have held that mathematical notation itself must contain a strong human cognitive bias, and that it predicts not future events but future human cognition, a common theme in theology and in the philosophy of mathematics. A few, led by George Lakoff, have attempted to create a cognitive science of mathematics that binds the basic constants and operations of mathematics to human cognitive assumptions (about the body and its movements and negotiations).
However, behavioral psychology and artificial intelligence studies in the 20th century had a similar agenda, and both more or less failed to explain more than a bare minimum of animal and planning behavior, respectively.
See also: falsifiability, cognitive science, philosophy of mathematics, epistemology.