
Hello

I am a young Luxembourger living in New York City, who is trying to make sense of the world around her. Here are glimpses of my journey. 

Enjoy ❤︎

Can we ever have evidence for our theories?

Scientific theories do more than just tell us about the fabric of reality. They exist in a web of beliefs that serve practical purposes too. Evidence for scientific theories can take many forms and is not confined to evidence for a theory’s truth value. In fact, though a theory may aim to describe phenomena in a way that is isomorphic to its ontological structure, a theory’s value is also weighed by many other attributes. In practice, scientific theories are part and parcel of any scientific endeavor: they inform what questions are asked, what experiments are conducted to answer them, what counts as data, and how that data is interpreted. Moreover, theories can be used to make essential calculations used in everything from microwaves to medical devices that we cannot imagine living without.

One significant complication in the use of theories is theory choice – what theory one chooses when faced with empirically equivalent alternatives. Intuitively, it may seem as though we choose the theory that is “truer”, where true refers to the ontology of the world, both in terms of identifying the substances that make up the universe and in terms of how those substances stand in causal relation with one another. Following this reasoning, what we count as evidence for a theory is nothing more than evidence that the theory maps onto some ontology. In this essay, I will argue that one cannot have evidence for a theory on the basis of truth, but one can have evidence based on other equally important criteria. I will explore three of these criteria, namely style, efficacy, and coherence, finally concluding that we do not have a normative basis for believing a theory is truer than its counterparts, but can nevertheless adhere to imperatives of a different kind. The conclusions reached in this paper hold crucial implications for the status of science in society and the nature of progress in the sciences.

I will begin by clarifying my use of some key terms. By empirically equivalent theories I mean two or more theories that have the same referent, assuming it is observable, and hence the same observable consequences. They are nevertheless different theories, as they stipulate different – even incompatible – things about the unobservable aspects of their frameworks. Given this definition, there are no empirical grounds for ranking or preferring one theory over another, but believing in both is also troublesome. The upshot is that counting a certain observation as evidence for one theory simultaneously counts as evidence for an incompatible alternative, leading to a contradiction. The argument can be formalized as follows:

1.     Theory T says that P is responsible for Q, where Q is some observable consequence

2.     Theory F says that R is responsible for Q

3.     P and R are incompatible such that P ├ ¬R

4.     An observation of Q counts as evidence for both P and R; since P entails ¬R, the same observation is evidence both for and against R, which is a contradiction

5.     We cannot have evidence for either T or F

The possibility of empirically equivalent theories such as T and F leads one to question whether one can count any observation as evidence and, if so, on what grounds one can justify choosing one theory over another. I will begin by demonstrating that one cannot have evidence for a theory’s ontology. I will next argue that though this is the case, it is not in the purview of science to discern the ontology of nature given this empirical limitation. Here I refer to ontology as an account of what exists or, in more colloquial terms, what is “really there”. Theories cannot be vindicated on ontological grounds, as they will always be underdetermined by the evidence available; that is, we will never be able to observe the complete set of observable entities, nor even extract all the observable aspects of the observed entities. This underdetermination curtails a theory’s ability to say anything meaningful about the nature of what exists. Since I maintain that for a theory to be true it must accurately depict what is really there in a context-independent way, a theory cannot be considered true or truer than its empirically equivalent counterpart. For example, a microscope produces images of a tissue sample that we count as data, but what precisely the image is of (as opposed to what the instrument is programmed to capture) is unclear. On the one hand, we cannot say that what really lies on the stage is what we are taking an image of, since the instruments used to extract that information are based on the very theories in question. On the other hand, we take the data the microscope produces at face value and use it to propel projects and hence theories.

To bring these ideas into even more vivid terms, let us take atomic theory as an example. Atomic theory stipulates that matter is made of particles with different states that interact causally and behave in characteristically different ways. This theory forms the foundation of particle physics, on which chemistry and biology rest and rely. The Scanning Electron Microscope presupposes the behavior of these particles of matter and the interactions between a beam of electrons and the atoms in the sample to detect signals that then form an image on the computer screen. However, there could be a theory radically different from atomic theory that predicts and explains the same empirical observations, but says something incommensurably different about what the image appearing on the computer screen is of. This is where premise 3 of the argument above comes into play. As such, the appearance of the image from the microscope is nothing more than a function of a theory, rather than a statement about the truth of nature. Even corroboration of this result against what a given theory stipulates about the ontology of a tissue sample is futile, as the instruments have presumably been calibrated vis-à-vis the very theory they are a function of. Any argument to this end becomes circular given the theory-driven nature of our evidence generation.

In like manner, any given theory has a set of things that are currently observable or amenable to observation, and a set of things that are by definition unobservable and solely gleaned through their empirical consequences. The methods of science are concerned with observable phenomena, yet each scientific theory has an unobservable component, and it is precisely those unobservables that are at variance with one another. It follows that the methods of science will never be able to evaluate the aspects of theories that make them incompatible. As such, science itself cannot generate evidence that supports a given theory’s ontology. All this is to say that we cannot choose a theory based on truth, since that would require evidence for its being truth-tracking, and we can never have such evidence. This argument can be formalized as follows (2):

1.     The methods of science are concerned with observable phenomena

2.     A given theory has observable components Q and unobservable components 

3.     The unobservable components are incompatible across empirically equivalent theories T and F, despite Q being equivalent

4.     Since T and F are incompatible, they cannot both be true

5.     To eliminate either theory one must evaluate what makes them different, namely the unobservable

6.     The methods of science cannot evaluate which theory is true or truer

Taken together, I have so far shown that given the possibility of empirically equivalent theories, we cannot have evidence for a theory’s truth, either from a logical standpoint or through the methods of science. In what remains I will argue that despite this type of evidence being ruled out, we can have evidence based on other criteria. What’s more, without qualifications for evidence of this latter type, we would be in theoretical, or I dare say epistemological, anarchy, where anything could count as evidence for any fathomable theory. In what follows I will argue for evidence based on style, efficacy, and coherence.

I refer to attributes such as elegance, simplicity, and parsimony as the style of a theory, and I argue that style can be used to qualify evidence. Taking the formalizations above, the style of a theory dictates how it is regarded and used. Indeed, the philosophy of science has privileged theories that are parsimonious, rendering parsimony a heuristic for theory choice. This privilege, however, is surely not of an ontological kind: we have no evidence that nature is truly always elegant or simple, i.e., no basis for believing that “natural kinds” are inherently parsimonious or that the universe is inherently simple or elegant. These are attributes of a theory that may be palatable to humans, but they are not indicative of anything other than human taste and values. Nevertheless, they are, albeit often subconsciously, used as criteria for evidence in support of one empirically equivalent theory over another.

Besides style, we adjudicate between theories based on various goals the scientific community has outlined, often in response to pressing needs in society. The degree to which a theory is conducive to research and problem solving in line with these goals I will refer to as efficacy. I will illustrate this with an example from immunology. We have evidence that our bodies produce antibodies that fight off pathogens, and that once these antibodies are produced, the body creates antibody-producing memory cells that confer immunity against future exposure to the pathogen. This theory has been used to develop vaccines that prevent deadly diseases and curb pandemics. Now, it is possible that there is an empirically equivalent theory of how the body recovers from disease, but one that is less conducive to the type of problem solving we need for things like disease prevention. As such, we can have evidence for the efficacy of the first theory of antibodies and immunity. Importantly, this type of evidence is not indicative of what is really out there, but rather of what predictions and calculations are possible, and of whether the evidence supports a theory that produces these effective results. Indeed, efficacy must inevitably bear some relation to ontology, but an efficacious theory need not be true with regard to the irreconcilable unobservables; hence evidence for that theory could be evidence for its efficacy rather than its truth. By way of this type of evidence, the contradiction found in premises 3 and 4 of the first argument presented in this paper is avoided.

Now that we have explored evidence on the basis of style and efficacy, let us finally consider evidence on the basis of consistency and coherence. Let us return briefly to my examples of empirically equivalent theories T and F. What if T is preferable to F on the basis of its being more consistent with the rest of our belief system? In this model, what Laurence BonJour refers to as Coherentism[1], we are justified in privileging T because it is “logically consistent, probabilistically-coherent, mutually explanatory and mutually justifying”[2] with our other beliefs. Saliently, this still does not mean it maps onto some ontological truth. Furthermore, there is no need to reference natural kinds or truth in this model, as one need only reference other theories and beliefs. Here, prediction serves as a litmus test for whether our theories are consistent, where a wrong prediction is evidence that they are inconsistent, and hence that we are no longer justified in believing them. Evidence of this type for a theory’s accuracy is merely coherence and consistency across auxiliary theories.

One example worth mentioning here to clarify this final type of evidence concerns two competing theories of perception in the neuroscience community, namely the representational theory and the predictive processing theory. Briefly, the representational framework focuses on bottom-up-driven sensory computations, where portions of sensory space known as receptive fields can elicit neuronal responses when stimulated[3]. The central claim of the predictive processing framework, on the other hand, is that neocortical function comprises a generative model of the world used to predict sensory input. In other words, the brain generates internal models of the world that are then used to form percepts. For the purposes of our discussion, these two theories are empirically equivalent, and in the neuroscience community there is little consensus on which framework more accurately describes the nature of cortical function. The individual neuroscientist is thus forced to adjudicate and adopt one of these two theories. In this case, à la BonJour, she can choose one over the other based on how it assimilates with the rest of her belief system. In this context, it could be that she opts for predictive processing because other members of her belief system include the theory that higher-level cognitive processes such as decision making are also inferential and predictive. Thus, the predictive processing theory is more consistent with her belief system than the representational theory. As such, she will count evidence for the consistency of the predictive processing framework of perception with the rest of her belief system as evidence for this theory’s superiority.

One major caveat with this view is that evaluating consistency between all the constituents of a belief system is a far-fetched and unfeasible task. It would require surveying each member of one’s belief system and performing a calculus to see whether a given observation is more or less consistent, probabilistically coherent, etc., with other beliefs. Furthermore, it imposes an unrealistic benchmark on our theories, namely that they must cohere with all our beliefs in order to be justified. We know this not to be the case, which poses a significant challenge for appealing to theory T over F on coherentist grounds.

We have thus far established that when it comes to two empirically equivalent theories T and F, we cannot use evidence on the basis of ontology to motivate our theory choice. We have explored three other metrics, which I have in this essay referred to as style, efficacy, and coherence. I want to round out this paper by briefly reassessing the conditions of theory choice. If we have no evidence on ontological grounds for why one empirically equivalent theory is truer than the other, we have no basis for being more justified in believing that one theory rather than another is true. Hence, from a normative standpoint, we cannot claim that we ought to believe in T over F qua their truth. Norms in science cannot be based on truth, but must instead be grounded in other imperatives. These imperatives could include, but are not limited to, the attributes of style, efficacy, and coherence. For example, Newton’s theories are far more effective and practical than Einstein’s theories in engineering, and hence engineers are bound by a norm to use Newton’s theories in their work. We are not saying that engineers should, in some objective sense, categorically prefer Newton’s theories, but merely that there has been evidence for the efficacy of these laws in the quotidian affairs of an engineer.

In this essay I have argued against the existence of evidence on ontological grounds, but advocated for evidence on the grounds of style, efficacy, and coherence. I believe that truth is not, nor should it be, the sole qualifier for evidence. This by no means implies that we can never prefer a theory or count data as evidence for a theory, but rather that we have to readjust how we think of evidence. Furthermore, given the aforementioned weakness of the normative aspects of theory choice, it seems that much of what we count as evidence for a theory is governed by subjective taste and communal values that have been inculcated in our education systems and research institutions. As such, we can resort to finding reassurance in the collective theory choices made, even though they have no ontological footing.


[1] Laurence BonJour, The Structure of Empirical Knowledge (Cambridge: Harvard University Press, 1985).

[2] Ibid.

[3] Keller, G. B. & Mrsic-Flogel, T. D. Predictive Processing: A Canonical Cortical Computation. Neuron 100, 424–435 (2018).
