It can reasonably be considered a confirmation bias, a culture bias, a notation bias, or a cognitive bias, depending on the specifics of the situation. Accordingly, it is best treated separately, as an independent influence. The variant terms technology bias, tool bias, or, most narrowly, experimental apparatus effects usually reflect attempts to minimize or hide this bias, which is quite pervasive and (like infrastructure itself) usually invisible and implicit in experimental activities and in proposals for them.
These terms all refer to specific impacts of infrastructural capital as used in the scientific process, and sometimes in other processes, e.g. economics, ethics, and urban planning. Those other impacts will not be discussed in this article, except to note that sunk costs are held to be zero in economics but are rarely treated as such in real life.
"The Universe doesn't tell us when we're right, only when we're wrong," said Karl Popper, but something other than personal feelings or peers must ultimately tell us that we're right, and to stop testing our thesis. What is that thing? For much of science, especially hard science, it is the cost and limits of infrastructure.
These have been issues at least since Galileo's (unfunded) dispute with the Catholic Church over the reliability of his telescope. Some have suggested that it is infrastructural capital itself, the experimental tools or technologies, that provides us with the maximum incentive to agree with each other, and that conflicts in science often rage over such issues, e.g. the still-disputed claims regarding cold fusion. However, the limits of apparatus rarely bias science as much as its costs do, especially the cost of training scientists in its use, which leads to a profound culture bias.
There is also a risk of moral hazards: those who own or control infrastructure feel pressure to prove it useful and to retain funding for experimental work that requires that apparatus for empirical validation. In the extreme case, the apparatus might be proven obsolete, raising the risk that scientific careers and institutions may end and that new equipment to pursue new avenues of research may not be funded. In these circumstances, scientists may come under heavy peer pressure to avoid conclusively disproving hypotheses that continue to justify a large expenditure on equipment and salaries. If a general theory driving research requires these expenditures, there will be a confirmation bias towards continuing to test and retest that theory.
Self-interest thus helps bias research programs as a whole towards smaller, purely incremental experiments ("normal science", as Thomas Kuhn called it) that do not challenge the dominant theory. This has also been called paradigm bias to the degree that it is pressure to conform to prior norms, and notation bias to the degree that it is pressure specifically to use a known means of expression. In practice, however, it is often difficult to distinguish pressure to conform to a "paradigm" from more obvious pressures. The more general term mindset often covers both paradigms and notations.
However, some bias is assumed to arise purely from shared assumptions rather than shared interests or beliefs, e.g. relying on similar parts in equipment or on a common source of test animals. To the degree that such assumptions go unremarked, they constitute a very widely shared cognitive bias of a whole class of researchers or experimental protocol designers, one that has become embedded in the technology or tool itself and presents certain affordances that encourage some experiments and discourage others:
"If all you have is a hammer, everything looks like a nail." - Anonymous
See: mindset, culture bias, cognitive bias, notation bias, confirmation bias, falsifiability, philosophy of science