

Normativity in Thought: Where Our Systems Fail Us

There are many different approaches to characterizing representations and computations, though very little work has explored whether and how these processes can err. Previous accounts of misrepresentation and miscomputation have been misled by appealing to external norms and intentions instead of assessing the success of these processes qua the processes themselves. Furthermore, many mental processes are considered to be rule-based, and implementing the correct rule is conducive to effective perception or cognition; violating these rules would also constitute a systematic failure. In this paper, I will use the example of classical conditioning to investigate the possibility of misrepresentation, miscomputation, and rule-violation, as well as the interplay between these different mistakes. I will conclude that miscomputations and misrepresentations do not necessarily rely on rule-violation. This work will hopefully shed light on the nature of rule implementation in the brain.

Keywords: misrepresentation, miscomputation, rule-violation, classical conditioning, normativity

Normativity in Thought: Where Our Systems Fail Us

I. Introduction

One of the mandates of modern-day psychology is to understand how we behave correctly – how we get things right about our environments. One could, however, argue that a complete explanation also needs to account for how we get things wrong. The possibility that the representational and computational processes that help us behave correctly can err has not gained as much purchase. Tied to the possibility of miscomputation and misrepresentation is the notion that there is a manner in which a system ought to behave, which is normatively loaded.

The very existence of this normativity, of rules, presupposes a correctness and, hence, an incorrectness: what I refer to here, generally, as a mistake. In addition, there are various rules or laws we follow that are conducive to proper thinking. Following these rules is not analogous to an agential obedience to laws; it is rather implicit, built into the system's programming.

Thus, we have three ways in which we can make mistakes, three ways in which nature's apparatus can fail us, each with different etiologies and consequences. In this piece, I will argue that there are distinct cases in experimental psychology that qualify as misrepresentations and miscomputations, but that these are distinct from cases in which a rule, per se, is violated. For the purposes of this essay, I will be using examples of classical conditioning experiments using odors in rodents, and I assume that the same foundational principles apply to more sophisticated minds like ours. Furthermore, I am concerned with physical computation, i.e. what it means for a physical system to compute and hence to fail to compute. Another assumption is that animals have a nonlinguistic, syntactic form of thought, making them eligible systems for studying cognition.

Systems can only misrepresent or miscompute if they are representing and computing in the first place. Thus, we will be assessing their ability to err qua representing and computing, and not qua the designer's intent or any other external norm. To miscompute is to compute in a way that violates a computational norm, and to misrepresent is to represent in a way that violates a representational norm.

 

II. Considering Rule-Violation

There are various rules that are built into mental programming; they are not explicitly represented, and we do not necessarily consult them when we make inferences. They do, however, confer a logical automaticity on our thinking and are thereby endowed with correctness conditions. One simple example is the modus ponens style of argument, as opposed to affirming the consequent. We adhere to these rules when we conclude that we will get wet if it rains, but we cannot reasonably conclude that because we are wet, it rained.
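
Schematically (a standard textbook presentation, added here only for concreteness), the valid and the invalid argument forms look like this:

\[
\frac{p \rightarrow q \qquad p}{q}\;\;(\text{modus ponens})
\qquad\qquad
\frac{p \rightarrow q \qquad q}{p}\;\;(\text{affirming the consequent, invalid})
\]

With p read as "it rains" and q as "we get wet", the first schema licenses the inference from rain to wetness, while the second would illicitly infer rain from mere wetness.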

From this notion of rules arises the notion of norms. Speaking in purely semantic terms, there is a sense in which something ought to function: the heart ought to pump blood, and the descriptor "cat" ought to refer to a cat. In other words, it is constitutive of the behavior of the heart that it pumps blood, and, similarly, it is constitutive of the concept of a cat that it ought to be applied to cats. If the heart fails to pump blood, or we apply the descriptor "cat" to something other than a cat, one could say a rule has been violated.

Mental activity can also take the form of rule-governed symbolic manipulation. Fodor (1981) regards rule following as the manipulation of symbols in virtue of their formal syntactic properties. Thus, rather than expressions having semantic content, they are formalized and described as pieces of formal syntax, without considering what the expressions mean in the conventional sense of the term (Fodor, 1981). Inference rules, for example, are syntactic rather than semantic, and these inference rules deductively carry true premises to true conclusions. As such, physical machines can be built that manipulate symbols according to these syntactically driven rules and can be programmed to implement inference rules appropriately, transforming true premises into true conclusions.

There is a sense in which these symbols carry truth conditions about how they ought to be applied, and it follows that there is no guarantee that an expression will be applied correctly. After all, rule following, rather than merely conforming to a rule, can be viewed as an intentional act that demands the alignment of one's behavior with the dictates of a given rule. I argue that pondering the correctness of the application of these rules is tantamount to exploring the possibility of violating a rule, even if this violation is not explicit.

Rules come in many additional flavors when one considers computational models. According to classical computational models, computations rely only on abstract mathematical descriptions, and hence only the rules governing the mathematical computation are relevant (Egan, 2013). Connectionism, by contrast, contends that information is stored non-symbolically in the connection weights between individual nodes in a network: rather than computing according to a rule, certain behaviors are learned (Clark, 2014). On this view, cognitive processes are not rule-governed, and it is hence unclear whether cognition involves rule-based manipulation of sequences of distributed activity. Although it remains to be seen whether abstract rules governing cognition can be learned by neural networks, there are still various principles governing such computations, i.e. rules that make processes such as backpropagation effective at training a neural network. In other words, there are precise rules that dictate both how to update the weights of the various connections in light of new evidence and how the appropriate action should be selected on the basis of the resulting probabilities.
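
As a minimal sketch of what such a rule can look like (illustrative only, not a model drawn from any of the sources cited here; the single logistic unit, the feature vector, the learning rate, and the decision threshold are all arbitrary choices made for the example), the snippet below updates weights by gradient descent on a prediction error and then selects an action by thresholding the resulting probability:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_weights(w, x, target, lr=0.1):
    # Rule for learning: nudge the weights against the gradient of the
    # prediction error (the core step that backpropagation generalizes
    # to deeper networks).
    p = sigmoid(w @ x)           # predicted probability of the outcome
    grad = (p - target) * x      # gradient of the cross-entropy loss w.r.t. w
    return w - lr * grad

def select_action(w, x, threshold=0.5):
    # Rule for action selection: avoid if the predicted P(shock) is high.
    return "avoid" if sigmoid(w @ x) > threshold else "approach"

w = np.zeros(2)                              # weights for [odor present, context A]
for _ in range(30):                          # repeated paired trials: odor + context, shock follows
    w = update_weights(w, np.array([1.0, 1.0]), target=1.0)

print(select_action(w, np.array([1.0, 0.0])))  # odor alone, novel context -> "avoid"
```

The point is simply that both the weight update and the action selection are fixed, precise rules, even though nothing in the system explicitly represents them.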

            Given these accounts of rules, as well as the very possibility of a rule, it pays to examine whether rule violation is at all necessary for misrepresenting or miscomputing. 

III. Misrepresentation

One key idea I pointed to earlier is that rules are not explicitly represented and that inference rules, for example, are not necessarily consulted before thinking a thought. To use Dennett's (1978) famous example, when playing chess we do not explicitly consult the rule that you should get your queen out early; the different ways in which you can get the queen out, on the other hand, may be explicit. Given that these rules are implicit, do we necessarily violate them when we misrepresent? I will explore this question with an example from experimental psychology.

Picture a classical fear conditioning paradigm in which a mouse is conditioned for three consecutive days against an otherwise innocuous odor (see Figure 1). In the "paired" paradigm, the odor (e.g. acetophenone, an almond-like odor) is presented and co-terminates with a foot shock, so that the mouse associates the odor (conditioned stimulus) with the foot shock (unconditioned stimulus). After three days of this paired paradigm, these mice display aversive behavior every time the odor is presented, which can be measured by the fear-potentiated startle response, by freezing, or by placing them in an odor Trichamber (Morrison et al., 2015). Furthermore, according to Dias and Ressler (2014, 2015), this fear conditioning paradigm can result in an increase in the olfactory receptors tuned to the specific odor. That is, these animals become hypersensitive to a previously innocuous odor, signaling not only a behavioral but also a physiological and morphological change.

 

There are various levels at which this could count as a misrepresentation. During the trial, the odor is being represented as a signal for a future painful stimulus. After the third day, upon presentation of the odor, it is represented as a threat, which is why the mice display aversive behaviors such as freezing. Furthermore, when you place the mouse in a new context, such as a Trichamber, it avoids the side with the acetophenone. In other words, it misrepresents that odor as dangerous, even in the absence of the shock or of the context in which the shock was administered. In both cases, the mouse is representing the odor: the odorant molecules are binding to their respective receptors and the information is presumably being sent to the piriform cortex and limbic system. However, the content of that representation, whether formal or semantic, is somehow misleading the mice into perceiving the odor as threatening, when in reality it has nothing to do with the foot shock other than being temporally contiguous with it.

Odors such as fox urine are innately aversive, since smelling fox urine in the wild indicates a nearby fox, and hence representing the odor as threatening is evolutionarily beneficial. This is a case where proponents of teleosemantic theories of representation, such as Millikan, would say that the representation has a natural function, namely to protect the animal from harm (Millikan, 1995). The acetophenone example, by contrast, fits the teleosemantic definition of misrepresentation, since there is a failure to perform a function that has historically accounted for the continued reproduction of its type. Although acetophenone is a harmless odor, recent research has shown that the valence of innately appetitive odors can also be reversed, which would provide further evidence that this is a case of misrepresentation for the teleosemanticist, since in order to proliferate a mouse needs to be able to correctly distinguish between harmful, innocuous, and beneficial sensory cues. Similarly, Neander believes that sensory representations represent that which they are supposed to carry information about (Neander, 2013). This information is naturally indicative. A misrepresentation would then be a failure to carry this information, or the carrying of information that is not naturally indicative.

The aforementioned example is illustrative of misrepresentation according to both criteria: the mouse has failed to carry the information that this odor is only dangerous in a specific context (since the aversive, fearful behavior is elicited in a different context), and the mouse has failed to carry information that is naturally indicative, since over the millennia in which mice have evolved, almond-like odors have never been naturally indicative of a predator. The mouse is misrepresenting qua its natural function to represent threatening stimuli (Dretske, 1985). It is also misrepresenting qua the system's mandate to represent the external environment accurately. The odor and the foot shock come from different sources (the railings at the bottom of the chamber are distinct from the vials from which the odor is pumped), a feature not captured by the representation of the odor. As a result, a resource-intensive fight-or-flight response is being triggered.

The misrepresentations occurring in this example pertain to associations we make all the time, some of which can be detrimental to our mental health. Early-life stressors that alter the stress response can lead to pernicious associations in which various innocuous situations are misrepresented as dangerous or stress-inducing, leading, for example, to a hyperactive hypothalamic-pituitary-adrenal axis. Many psychiatric disorders, such as post-traumatic stress disorder, can similarly be linked to a history of misrepresenting stimuli. Furthermore, recent work on the descendants of Holocaust survivors indicates that these misrepresentations could carry over to future generations who did not themselves form them (Bowers and Yehuda, 2015).

IV. Miscomputation

There is, however, a deeper sense in which this example is also a miscomputation. Dewhurst (2020) defines computation as following a rule embodied by a system's physical structure. Computation refers to the way a system transforms information to produce an appropriate output, and a good account of computation should also account for when it can go wrong. As seen in the example above, depicted in Figure 1, the mouse is misrepresenting an innocuous odor as a threat as a result of a three-day classical fear conditioning paradigm. It may seem prima facie that the mouse is only afraid of the odor in the context in which it was shocked, which would not be a miscomputation, since that context really is associated with the foot shock; however, the aversive behavior is displayed in novel contexts, such as a Trichamber. Thus, on the computational level, two temporally contiguous events (the presentation of acetophenone and the foot shock) are being bound or associated together so that the mouse is conditioned to fear the odor.

This qualifies as an example of a miscomputation because the system is failing qua computing: temporally contiguous events are bound together and associated with one another in the absence of any intrinsic connection. As such, the system is failing according to its actual function, not according to the experimenter's intentions, and it is miscomputing relative to the physical structure of which it is a part. Whether the substrate of this miscomputation is representational or not will depend on the theory, but what has become clear is that there is a computational error: although the association may make sense during the fear conditioning trials, given the temporal contiguity of the conditioned and unconditioned stimuli, it is erroneous because the stimuli are not in essence causally linked; they are only linked at the computational level.

The power of this example lies also in its applicability to various models. From the connectionist standpoint, the association between the odor and the foot shock is stored in the weights between individual nodes in the network. In the paired group, these weights were strengthened, albeit erroneously: they were strengthened on the false pretense that temporal proximity implies a causal connection. Qua the connectionist computational framework, therefore, this is a miscomputation. From a semantic standpoint, what is being computed here are representations, and since the odor is being falsely represented as a red flag, the subsequent computation will necessarily be faulty. Interestingly, however, it remains unclear how this example squares with Fodor's Language of Thought (LoT). As stated previously, LoT seeks to formalize deductive reasoning, whereas this may be a case of inductive reasoning: acetophenone has never before in the mouse's evolutionary history been threatening, and to assume, on the basis of a few isolated pairings, that it now signals harm is on some level problematic in and of itself.
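
To make the connectionist reading concrete, here is a minimal sketch (purely illustrative; the trial counts and learning parameters are invented, and this is not a model taken from the cited papers) of a Hebbian-style rule in which the weight between an odor unit and a shock unit grows whenever the two are active in the same time window. Nothing in the rule checks for a genuine causal link; co-activation alone does the work, which is precisely the sense in which the resulting association can be called a miscomputation.

```python
def hebbian_weight(trials, eta=0.2, decay=0.02):
    """trials: list of (odor_active, shock_active) pairs, one per time window."""
    w = 0.0
    for odor_active, shock_active in trials:
        w += eta * odor_active * shock_active   # strengthen only on co-activation
        w -= decay * w                          # mild passive decay
    return w

paired   = [(1, 1)] * 15          # odor co-terminates with the shock on every trial
unpaired = [(1, 0), (0, 1)] * 15  # same stimuli and exposure, never co-active

print(round(hebbian_weight(paired), 2))    # weight grows steadily: odor now drives the fear response
print(round(hebbian_weight(unpaired), 2))  # stays at zero: no association despite identical exposure
```

The same rule that produces adaptive learning in the paired case would strengthen the weight for any spurious but reliably contiguous pairing just as readily.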

One objection worth considering here concerns the utility of miscomputations. If this example does indeed fulfill the criteria necessary to be a miscomputation, would one have to refer to all cases of classical conditioning as a miscomputation? Similarly, would all learning be a miscomputation? Taken together, these two lines of reasoning would suggest that miscomputations could have utility, which muddles the claims teleosemanticists make. All I will say to this objection in this essay is that it challenges the assumption that mistakes are always unfavorable. The proverbial notion of a useful mistake may carry more weight than previously thought.

V. The Relationship Between These Different Mistakes

As the previous three sections make clear, a system can fail in different ways, and the relationship between these various errors is as yet unclear. On one level, it could be argued that a rule is being followed when the system misrepresents and miscomputes. Classical conditioning in some respects (say, if it occurred in humans) employs a version of modus ponens reasoning of the form "if odor presentation, then shock." Violating this rule could then take the form of not expecting or reacting to the shock when presented with the odor. One possible argument against this is raised by Dan Sperber and Hugo Mercier in their book The Enigma of Reason (2019). According to them, these instances of miscomputation and misrepresentation do not qualify as rule violation, because mice have no grasp of the rules governing their cognition. There is no modus ponens rule of conditional inference being applied; the conditioning is simply a response to the odor-shock regularity, the output of what they term a "conditioned reflex module." That is, the mice do not understand that temporal contiguity does not necessarily imply a causal connection. That being said, in both cases (misrepresentation and miscomputation), there was a mistake and the system is to blame, even if the errors lie in the realm of learned behavior rather than a priori syntactic rules.

There is perhaps a sense in which these mistakes are all related within the connectionist framework. The relevant rule could look like probabilistic inference according to Bayes' law, in which case the mice are correctly representing the odor as signaling a shock. However, the probabilistic inference also depends on the specific context, as well as on the frequency of the association. If the mouse had previously been familiarized with this odor without the induction of fear or stress, then the Bayesian inference should take into account the specific spatial and temporal contexts in which the acetophenone and the foot shock are causally related and those in which they are not. As such, freezing or avoidance behavior in other contexts could be considered a violation of this rule.
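
As a rough illustration of this point (the trial counts below are invented and the calculation is a deliberately simple Beta-binomial estimate, not a model taken from the literature discussed here), compare the estimated probability of shock given the odor when context is ignored with the estimates obtained once context is conditioned on:

```python
# Observations: (context, shock_delivered) for trials on which the odor was presented.
trials = (
    [("conditioning_chamber", True)] * 15   # odor paired with shock in the training context
    + [("trichamber", False)] * 5           # odor presented, no shock, in the novel context
)

def p_shock_given_odor(observations, context=None, prior=(1, 1)):
    # Beta(1, 1) prior with a binomial likelihood -> posterior mean shock probability.
    relevant = [shock for ctx, shock in observations if context in (None, ctx)]
    shocks = sum(relevant)
    return (shocks + prior[0]) / (len(relevant) + sum(prior))

print(round(p_shock_given_odor(trials), 2))                          # pooled over contexts: ~0.73
print(round(p_shock_given_odor(trials, "conditioning_chamber"), 2))  # training context: ~0.94
print(round(p_shock_given_odor(trials, "trichamber"), 2))            # novel context: ~0.14
```

On the pooled estimate, avoiding the odor everywhere looks reasonable; once context is taken into account, generalizing the fear to the Trichamber looks like a violation of the contextual rule.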

Another dimension of the interplay between misrepresentation and miscomputation concerns computational content. Whereas Egan (1995) thinks computational content is strictly formal, Peacocke (1999) posits that it must have some semantic content in order to influence behavior in the way it does. In the examples provided above, it is the content, and specifically the errors in the content (via misrepresentation and subsequent miscomputation), that has an observable effect on the mouse's behavior, i.e. the startle response, aversion, and freezing. If the content were strictly formal, not only would this behavior be inexplicable, but there would be no clear relationship between the misrepresentation of the odor and the behavior it elicits. The premise of contextual fear conditioning relies on there being different responses in different contexts; the startle response to fox urine is categorically different from the startle response to acetophenone, for example. As such, there is a computation from one content-involving state, namely the state of smelling the odor, to another, the state of expecting the shock. Thus, misrepresentation and miscomputation seem to be linked if one adopts Peacocke's stance.

In a similar vein, if misrepresentation and miscomputation are not related to rule violation, the “how” question persists. If this were not a miscomputation, how was it achieved, according to what rule? If it was, in violation of what rule? These questions imply that there must have been a violation of some rule that led to either of these mistakes, though an argument for what rule that may be is tenuous. 

VI. Conclusion

Rule-violation, misrepresentation, and miscomputation are three ways in which systems can fail or make a mistake. In this essay I have explored each case vis-à-vis a classical fear conditioning paradigm. Although how the relationship between these three mistakes is drawn depends on which camp one belongs to, what is hopefully clear is that (i) miscomputations and misrepresentations do not necessarily rely on rule violation, and (ii) miscomputations and misrepresentations can be favorable in certain cases where a stimulus does indeed cue harm. Future work should examine these principles in more sophisticated systems like the human brain and reconcile empirical data with the theoretical models that make postulations about them.
