Introduction
“The heart is deceitful above all things, and desperately wicked: who can know it?”
– Jeremiah 17:9 KJV
“Good afternoon, ma’am. Would you like to buy three boxes of chocolates? I’m raising money for my school’s baseball team,” cheerfully said a young man, after a kindly old woman opened her door for him.
“I’m sorry,” said the old woman with a feigned frown on her face, “I don’t think I’ll need three boxes of chocolates.”
“Oh, that makes sense. Well, how about five chocolate bars? I really would like to raise some money for my school’s baseball team,” exclaimed the young man, hoping she’d be convinced by his plea.
“Only five? Alright, I’ll be happy to pay you for five,” said the old woman, her face now painted with a sincere smile.
“Excellent,” shouted the young man, preparing the chocolate bars for the old woman.
In the preceding dialogue, the following occurred: a high initial offer was made, the high offer was denied, then another, much lower, offer was made and finally accepted. This tactic is known by salesmen as the door-in-the-face strategy, the mirror image of the foot-in-the-door strategy. It relies on an understanding of game theory, but more importantly, it highlights a protean form of anchoring.
Over the past week, I took the time to review the topic of anchoring: how it affects us, how it is elicited, potential resolutions to its seemingly automatic effect, and how it can be observed in our legal institutions, such as the courts and the legislature.
I found the concept of anchoring to be intriguing. With every paper I read on the topic, I felt as if the concept unfolded before me, and by the end of my investigations, I could see where even later researchers seemed to have misunderstood the topic – which is perhaps, in part, why it was subject to the reproducibility crisis that pervaded psychology. Of course, this crisis did not diminish the reality of the phenomenon of anchoring, but it did call into question many papers, some of which – had they been done correctly – would have generated fascinating findings. Regardless, many of the papers I reviewed, as far as I can see, shouldn’t be excluded because of the reproducibility issues of some anchoring papers.
I’d like to review the concept of anchoring in this paper, simply for the sake of improving my ability to understand the world around me and the psychological traps to which we are all subjected. I will start by discussing the psychology of anchoring, and then I will proceed to anchoring vis-à-vis the legal system. Lastly, I will review the conclusions covered in the body of this paper and try to synthesize them, concluding with the implications of the conclusions reached through that synthesis of psychological and legal studies on anchoring.
Again, I think it is prudent to study topics like anchoring, just as I studied the topic of framing. Framing, like anchoring, reveals very dark secrets about the nature of the human mind and how it thinks, secrets that need to be explored through investigation. Topics like these, I have found, are somewhat disturbing – once these kinds of phenomena are revealed, and once one has a thorough grasp of them, it is hard to rid oneself of their reality. However, like any naturally occurring stream, one also has the capacity to harness them for one’s own benefit, whatever that benefit may be. That is my aim: to study the concept of anchoring so that I can make effective use of it as a psychological device, and so that I may do my best to avoid its effects, even if such effects are seemingly unavoidable. Now, let us begin.
What Is Anchoring?
The dialogue at the beginning of this paper was inspired by an early paper investigating mental heuristics. In their 1975 paper, Cialdini et al. covered the topic in an early psychological experiment, highlighting the efficacy of what they call “the door-in-the-face” technique, a mirror image of the foot-in-the-door technique.
Their hypothesis rested on a principle of game theory, namely that “the likelihood of a concession by one party is positively related to the occurrence of concession by another party” (ibid). From this principle, the authors hypothesized that “if [you] were to begin by asking for an extreme favor which was sure to be refused by the other, and then [you] were to move to a smaller request, the other would feel a normative strain to match [your] concession with one of his own” (ibid). In other words, “by means of an illusory retreat from [your] initial position, [you] should be able to obtain another agreement to the request [you] desired from the outset” (ibid).
The authors’ first experiment was a resounding success. The authors state that it “is clear from the findings… that making an extreme initial request which is sure to be rejected and then moving to a smaller request significantly increases the probability of a target person’s agreement to the second request.” However, the authors were not finished; several issues still needed resolution. Namely, they needed to clarify whether the subject conceded specifically to the concession made by the requester. In other words, if a second person were to make the smaller request, and the subject were to agree to it, rather than only conceding to the original favor-asker, then the principle in question would be falsified.
The authors’ hypothesis was, once again, confirmed through the second experiment. The authors state that it “appears from the results of Experiment 2 that the target’s perception of concession by the requester is a crucial [factor] in producing compliance with the smaller request.” To be clear, the principle held: the target did not concede to a smaller request when it came from a second requester rather than from the requester whose extreme request had been denied. The authors argue that it “seems that [the subjects] increased the frequency of assent to the smaller request only in response to what could be interpreted as concession behavior on the part of the requestor” (ibid). I.e., the concession by the target was reciprocal.
But the authors were still skeptical about their results. They were worried that “the subjects… acquiesced to the critical [request,] not because of pressure to reciprocate a concession but because they were dunned into accession by a tenacious requester or because they wanted to avoid the requester’s perception of them as having a generally antisocial or unhelpful nature” (italics added, ibid). The authors concluded that “the data from [the third experiment] argue against the interpretation that a requester’s persistence in making requests accounts [for] the superiority of the rejection-moderation condition” (ibid).
The authors conclude that the door-in-the-face technique “is quite a powerful one for inducing compliance” (ibid). Continuing, they state that they “were able to double the likelihood of compliance through the use of the rejection-then-moderation procedure.” They also make it clear that “the technique does not limit a requester to the receipt of small favors. It is only necessary that the critical request be smaller than the initial one for a reciprocal concessions mechanism to come into play” (ibid). The authors identified that “the rejection-then-moderation procedure[’s]… force seems to derive from the existence of a social norm” (ibid). Lastly, the authors note that “a compliance induction procedure which uses concessions involves the feelings of the target person toward the outcome of the interaction,” i.e., “the person to whom [the technique] is applied will feel more responsible for and satisfied with the outcome” (ibid). This means that, because they are satisfied with the transaction, the target is likely to engage in future interactions with the requester; i.e., “the target person of a rejection-then-moderation procedure may well be vulnerable to subsequent requests by the same requester” (italics added, ibid).
However, the authors note some limitations as well. They make clear that “the rejection-then-moderation procedure has been shown to work under a fairly limited set of circumstances” (ibid). E.g., the interaction may not work over the phone or online, but only in a face-to-face situation. The authors also note that “while the present research appears to support a reciprocal concessions interpretation of the effect, it in no way ultimately confirms that interpretation” (ibid). Regardless, the phenomenon clearly exists and is applicable. In fact, because the aim of the requester is relative to that requester, the requester may even be capable of inducing non-compliance, a potentially potent manipulation tactic.
The reason why I thoroughly reviewed this particular case was to give the reader a taste of just how potent and unconscious the effects of anchoring can be, the door-in-the-face and foot-in-the-door strategies being just two variations of anchoring strategies. For decades, the phenomenon of anchoring has been studied by researchers in the fields of economics, psychology, and law, at least, and its effects are generally quite reproducible.
Anchoring, as defined by Tversky and Kahneman (1974), “is the disproportionate influence on decision makers to make judgments that are biased toward an initially presented value.” Importantly, a review paper by Furnham and Boo (2011), citing Strack and Mussweiler (1997), identifies that “anchor values serve as the reference point for people to adjust the boundary of the range of plausible values for the question, presuming that the given anchor is more extreme than the boundary value for the range of plausible answers.”
As of 2011, Furnham and Boo identified the ‘dominant paradigm’ explaining anchoring effects as confirmatory hypothesis testing (Chapman and Johnson, 1999; Mussweiler and Strack, 1999, 2001; Strack and Mussweiler, 1997; Wegener et al., 2010). To be clear, the judge of a problem may use an anchor to consider a hypothesis that is defined by the anchor; i.e., the anchor promotes confirmation bias. Furnham and Boo argue that “‘confirmatory search’ (Chapman and Johnson, 1994) and ‘selective accessibility’ (Strack and Mussweiler, 1997) [contribute] to the fundamental mechanism that accounts for the anchoring effect.”
Furnham and Boo also highlight that there are several types of anchors. For example, the authors argue that “anchors that have informational relevance to the task can lead to anchoring effect[s]” (Furnham and Boo, 2011). For one, “in the legal domain, higher damage awards [have been] obtained when higher compensations are requested in court” (Hastie et al., 1999; Marti and Wissler, 2000). Furnham and Boo also identify that “the sentencings for rape cases are influenced by the prosecutor’s sentencing demand” (Englich et al., 2005). The effects of the latter finding were supported by a selective accessibility model (Englich et al., 2006). For example, the authors found “participants who were exposed to high anchor values responded faster in categorizing incriminating arguments than those presented with low anchor values, indicating that anchor consistent information is activated by relevant anchors… [providing] support for the argument that the anchoring effect is vulnerable to the relevance of the reference value in the task” (Furnham and Boo, 2011). This finding differs from the preceding model insofar as no hypothesis is being tested, whether self-generated or experimenter-provided, and the anchor is not itself necessarily reflected in the answer. Instead, the anchor – if it is relevant – biases judgments, leading to higher awards, harsher sentences, and quicker judgments, at least.
Furnham and Boo also argue that “implausible or extreme anchors lead to a larger anchoring effect compared to plausible anchors” (Strack and Mussweiler, 1997; Wegener et al., 2010). They highlight that “under the selective accessibility model, extreme answers would be provided as targets consistent with the anchor becoming activated” (Furnham and Boo, 2011). I.e., differences between high and low anchors ‘occurred’ only when the answers were plausible, but not for implausible or extreme answers. At the same time, it has also been identified that “extreme anchors generated smaller anchoring effects than moderate anchors” (ibid; Wegener et al., 2001).
This latter effect can be explained by an anti-confirmation-bias mechanism. Furnham and Boo (2011) state that “when values are too extreme, people generate counterarguments to question [their] validity or ignore the values completely, therefore leading to less attitude change” (Wegener et al., 2001). To be concise: anchoring effects are caused by reduced counter-hypothesis testing, and unreasonable anchors provoke counter-hypothesis testing.
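The dynamic described above can be caricatured in a toy model (my own sketch, with entirely hypothetical parameters, not drawn from any cited study): an estimate blends a prior belief with the anchor, and the anchor’s weight collapses once it falls outside the judge’s plausible range, because counterarguments discount it.

```python
# Toy model of anchoring with plausibility discounting.
# All parameters are hypothetical and purely illustrative.

def anchored_estimate(prior, anchor, plausible_range, base_weight=0.4):
    """Blend a prior belief with an anchor; anchors outside the
    plausible range are heavily discounted, modeling the
    counterargument generation described by Wegener et al."""
    lo, hi = plausible_range
    weight = base_weight if lo <= anchor <= hi else base_weight * 0.05
    return (1 - weight) * prior + weight * anchor

# A judge privately believes a fair damage award is 100 (arbitrary
# units) and considers 50-200 to be the plausible range.
moderate = anchored_estimate(100, 180, (50, 200))   # plausible anchor
extreme = anchored_estimate(100, 1000, (50, 200))   # implausible anchor

print(round(moderate, 1))  # 132.0 -> pulled strongly toward the anchor
print(round(extreme, 1))   # 118.0 -> barely moved, despite the larger anchor
```

In this sketch the implausible anchor of 1000 shifts the estimate less than the plausible anchor of 180, mirroring the smaller effects reported for extreme anchors.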
Furnham and Boo also highlighted that anchoring effects differ across individuals. For example, it seems as if happy people may be more susceptible to anchoring effects than sad people because happier people process information more rapidly (Englich and Soder, 2009). Yet, as Furnham and Boo identify, this effect is not observable in judgmental anchoring situations, where sad participants are even more susceptible to anchoring effects than happier participants (Bodenhausen et al., 2000; Englich and Soder, 2009).
Two arguments explain these effects, respectively: “sad mood causes people to engage in more effortful processing, where people interpret information through elaboration on their existing knowledge and determine the claim to be acceptable or unacceptable” (Blankenship et al., 2008); and, “sad mood induces judges to engage in more thorough information processing… and hence activate the confirmatory search for anchor consistent information… [suggesting] that a happy mood may lead to a judgment that is uninfluenced by the robust influence of anchoring” (Englich and Soder, 2009).
Participants with more relevant knowledge also seem to be less influenced by anchors; i.e., if you know more about the information a relevant anchor refers to, you may be less susceptible to its effects (Chapman and Johnson, 1994). However, experts may not be freed from the effects of anchoring in “judgmental domains.” Car experts (Mussweiler et al., 2000), real estate agents (Northcraft and Neale, 1987), and legal professionals (Enough and Mussweiler, 2001; Englich et al., 2005, 2006) can all be affected by anchoring effects, despite their expertise.
Motivation doesn’t seem to affect anchoring, however. Tversky and Kahneman (1974) “offered payoffs for accuracy to motivate participants in order to reduce the anchoring effect, but to no effect” (Furnham and Boo, 2011). Wilson et al. (1996) also “demonstrat[ed] that anchoring effects are not eliminated with incentives and forewarnings” (Furnham and Boo, 2011). “Some studies have found the effectiveness of forewarning in diminishing the effects of anchoring when warnings about insufficient adjustment (LeBoeuf and Shafir, 2009) and self-generated anchors are given (Epley and Gilovich, 2005)” (ibid). Yet these effects are not absolute, and anchoring effects can still be observed despite motivation and forewarning to avoid the anchor.
Personality differences also have an effect on anchoring. For example, “participants with high conscientiousness and agreeableness and low extroversion (Eroglu and Croxton, 2010), as well as… high openness to experience (McElroy and Dowd, 2007) are more susceptible to the anchoring effect” (Furnham and Boo, 2011). This can be explained by the fact that “individuals with high conscientiousness engage in [a] more thorough thought process before judgments are made, those with high agreeableness take the provided anchors seriously, [and those with] high openness to experience… are more sensitive to anchor cues” (ibid). Furnham and Boo argue that “these attitudes lead to the activation of confirmatory search, and selective accessibility mechanisms of anchoring” (emphasis added). It should also be noted that, based on these findings, women may also be more susceptible to anchoring effects than men because they score higher on trait conscientiousness and agreeableness (Schmitt et al., 2008); in other words, women may be less likely to search for disconfirming evidence than men, on average.
Interestingly, there doesn’t seem to be much evidence that anchoring effects are contingent upon cognitive differences, such as intelligence. Of course, this doesn’t mean it’s impossible for IQ to be correlated with anchoring effects or for intelligence levels to alter anchoring effects in some meaningful way; there simply doesn’t seem to be good evidence on the matter. In theory, as I will show, this could still be the case, however.
One final note before proceeding: as was stated in the introduction of this paper, the anchoring effect is a victim of the reproducibility crisis. To be clear, for some time, studies were claiming that ‘incidental’ anchoring effects were observable. This claim is extremely dubious. Incidental anchoring effects allegedly occur in the presence of totally irrelevant anchors: a restaurant called Studio 32 will, so the reasoning from incidental anchoring goes, receive a higher rating than one called Studio 5. Another example: football players with higher jersey numbers are rated more highly than those with lower numbers. One final example, from a paper I was planning on reading but was dissuaded from doing so: the last digits of one’s social security number can allegedly affect numeric responses. These kinds of anchors are not reproducible and fail to generate significant effects when reviewed (Shanks, 2020; Schimmack, 2021). Thus, it is my duty to inform the reader that not every anchoring study can be trusted, and some of the papers cited in Furnham and Boo are just such papers; however, none that I referenced, upon reviewing them, appear so untrustworthy as to claim that an incidental anchor, like the ones above, has significant effects.
An Examination of ‘The Semantics of Anchoring’
How exactly does the anchoring effect play itself out; i.e., what are the psychological mechanisms that generate anchoring effects? To answer just this question, Thomas Mussweiler and Fritz Strack devised the Selective Accessibility (SA) model of anchoring, which can be found in their 2001 paper, The Semantics of Anchoring. For Mussweiler and Strack, the SA model essentially assumes that “anchoring is, in essence, a knowledge accessibility effect and is thus semantic in nature.” I.e., “the model postulates that comparing the judgmental target to the provided anchor value changes the accessibility of knowledge about the target.” To be precise, “the accessibility of an anchor-consistent subset of target knowledge is selectively increased” (Mussweiler and Strack, 2001). When judges need to assess a problem, they rely on easily accessible knowledge; the anchor serves as a kind of easily accessible knowledge and thus biases judges’ answers in the direction of the anchor.
To reinforce the claims made about incidental anchors, Mussweiler and Strack state that it “has repeatedly been demonstrated that the extent to which increasing the accessibility of a specific concept in a priming task influences a subsequent judgment is determined by how applicable the activated concept is to this judgment (Higgins and Brendl, 1995; Higgins, Rholes, and Jones, 1977)” (italics added, Mussweiler and Strack, 2001). In other words, random, incidental anchors should, in theory, be insignificant. I.e., “the magnitude of anchoring depends on how applicable the knowledge that was rendered accessible during the comparative task is to the critical absolute judgment” (ibid).
The authors also note that “the direction of anchoring effects… depends on whether the knowledge that was activated during the comparative task pertains to a target that is similar to the judgmental target of the absolute question or to a target that is largely dissimilar” (ibid). The authors provide the example of comparing the mean winter temperature in the Antarctic to a high or low anchor (-20°C or -50°C). When this comparison is used to elicit answers about temperatures in Hawaii, however, a contrast effect can be observed. I.e., “higher estimates for the mean winter temperature on Hawaii are given if the temperatures in the Antarctic had previously been compared to the low rather than the high anchor (Strack and Mussweiler, 1997)” (ibid). In other words, “the direction of the anchoring effect appears to depend on the similarity of the activated concept and the judgmental target, just as is true for knowledge accessibility effects in general” (ibid).
To further support the selective accessibility model, the authors also highlighted how “participants were faster in judging whether a given word constitutes a word if a semantically related word had been presented beforehand (Neely, 1977)” (ibid). I.e., “[r]esponse latencies for the absolute anchoring task have been demonstrated to depend on the extent to which the accessibility of judgment-relevant knowledge has been increased during the comparative task” (Mussweiler and Strack, 1999, 2000a, 2000b; Strack and Mussweiler, 1997). This kind of evidence seems to directly support the SA model. For example, when participants in a study were asked to compare the average price of a German car to either a high or low anchor value (40,000 vs. 20,000 German Marks), the judges were “faster in recognizing words associated with expensive cars after a comparison with a high anchor than after a comparison with the low anchor,” while “words associated with inexpensive cars were recognized faster after a comparison with the low anchor… [demonstrating] that the accessibility of anchor-consistent semantic knowledge about the target… is increased as a consequence of the comparative judgment” (Mussweiler and Strack, 2000a; Mussweiler and Strack, 2001).
Mussweiler and Strack also express that there is another, competing model available to explain these effects, but that – because it focuses exclusively on numerical values – it is too “narrow a perspective to allow for a complete understanding of the standard anchoring paradigm” (ibid). The authors state that a “purely numeric account can neither explain the fact that anchoring effects depend on changes in the judgmental dimension or the judgmental target nor that they are characterized by a striking temporal robustness and also appear to involve a selective increase in the accessibility of semantic knowledge that pertains specifically to the judgmental target itself” (ibid). Because the semantic content affects the absolute judgment, the anchor value itself doesn’t appear to be the cause. If a purely numeric account held, then judges given the higher anchor for the mean winter temperature in the Antarctic (-20°C) should have given a comparatively higher answer about the mean winter temperature in Hawaii than those given the low anchor (-50°C). They did not; the opposite occurred. Effects of this kind suggest that the semantics of the anchor are at least as relevant as the numeric value itself.
To test this hypothesis, Mussweiler and Strack conducted several studies. In the first study, “the applicability of accessible knowledge to the absolute judgment was manipulated by changing the target of the absolute judgment and holding the target of the comparative judgment constant.” In the second study, “the target of the comparative judgment was varied and the target of the absolute judgment remained unchanged” (ibid). In the first study, “the size of [the] anchoring effect critically depended on whether the comparative and the absolute question pertained to the same judgmental target or to two different targets… anchoring was more pronounced if the judgmental target remained the same than when it was changed” (ibid). In the second study, the same effect was observed. “[T]he anchoring effect critically depended on whether the comparative and the absolute judgment pertained to the same target or to two different targets,” and “[i]f the target was changed… this effect did not occur.” Specifically, “[s]ubsequent analyses revealed that the difference between the high and… low anchor was only reliable in the unchanged target condition, t(51) = 3.55, p < .001, but not in the changed target condition, t(51) = 1.1, ns.” (ibid). In other words, anchoring is dependent on the applicability of semantic knowledge.
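For readers unfamiliar with the statistics reported above, the t(51) values are independent-samples t-tests comparing the high- and low-anchor groups. A minimal sketch of how such a statistic is computed (the data here are invented for illustration; only the formula is standard):

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Two-sample Student's t with pooled variance, the statistic
    reported as t(df) in the anchoring studies discussed above."""
    na, nb = len(a), len(b)
    # pool the two sample variances, weighted by degrees of freedom
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2  # statistic and degrees of freedom

# Invented absolute estimates under a high vs. a low anchor.
high_anchor = [62, 70, 65, 72, 68, 66]
low_anchor = [48, 55, 50, 58, 52, 54]
t, df = students_t(high_anchor, low_anchor)
print(f"t({df}) = {t:.2f}")  # a reliable difference between conditions
```

A large t relative to its degrees of freedom (small p) indicates, as in the unchanged-target condition above, that the high- and low-anchor groups gave reliably different estimates.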
In their final study, the authors investigated whether “semantic and numeric processes influence absolute estimates in an additive manner” or whether “pure numeric influences [are] only apparent if semantic effects are undermined” (ibid). This study essentially asks the following: if you provide an anchor with the same semantic magnitude but a very different absolute number (e.g., 5,100 m compared to 5.1 km), will you net a different answer? If semantic processes dominate purely numeric effects, the unit should not matter so long as the semantic knowledge remains applicable. The authors identified the following: “If the comparative and the absolute question pertained to the same target, absolute estimates did not depend on the anchor unit (M = .04 for m and M = .00 for km)” yet “[if] the judgmental target was changed… higher estimates resulted if the anchor had been expressed in meters (M = .30) rather than kilometers (M = -.35)” (ibid). This effect was significant if the target had been changed, “t(102) = 2.6, p < .01,” yet insignificant if the target remained the same, “t(102) = .18, ns.” (ibid). Thus the “semantic and numeric influence of a given anchor critically depends on the applicability of semantic knowledge”; “[a]nchors with extremely different absolute values but similar semantic implications produced similar estimates”; suggesting “that purely numeric influences are of minor importance in the paradigm… typically associated with anchoring, and are limited to judgmental contexts that are atypical of the classic anchoring situation” (ibid).
Essentially, purely numeric anchoring likely has very little effect in an anchoring paradigm. In other words, it should have been fairly obvious that incidental anchoring effects were highly dubious, had social psychologists been familiar with the literature on anchoring. Of critical importance, as I see it, and as the authors of this study note, “Once a comparison standard has been selected, it appears to [be] the semantic content that is activated during its comparison with the target rather than its pure [numeric] qualities that influence target evaluations” (ibid). The authors proceed to suggest that there are two potential types of anchoring effects if this is the case, as it clearly appears to be: there is a “relatively shallow anchoring influence that operates at the stage of standard selection” and a “deeper anchoring effect that is rooted in the comparison stage” (ibid). In the comparison stage, judges test a hypothesis – their own or a given one – by generating knowledge that affirms it rather than seeking to disconfirm it with contradictory data. Lastly, this finding calls into question the heuristic nature of anchoring, as Tversky and Kahneman (1974) defined it, and suggests that “the processes that underlie standard anchoring effects [are] fairly elaborate and systematic in nature” (ibid).
Putting in the Effort – When Effortful Thinking Influences Judgmental Anchoring
Just how much can we avoid anchoring effects if we try? This was precisely what Nicholas Epley and Thomas Gilovich sought to discover in their 2005 paper, When Effortful Thinking Influences Judgmental Anchoring: Differential Effects of Forewarning and Incentives on Self-generated and Externally Provided Anchors. Specifically, they wanted to know “when increased effortful thought [influences] the impact of anchors on intuitive judgment and when it will not” (ibid).
The research, as far as these authors were concerned, had shown that the kind of mental effort necessary to diminish anchoring effects either could not be willed or was insufficient for participants to overcome the biasing effects of anchors. Specifically, the evidence they reviewed showed that neither incentives, forewarning, nor additional time meaningfully diminished anchoring effects in the traditional paradigm. However, the authors call this into question. Specifically, they argue that the psychological processes governing anchoring are two-fold: in one circumstance, the anchoring effect should be “systematically influenced by effortful thought,” while in the other, it should not (ibid).
The authors argue that “[b]ecause people evaluate hypotheses by trying to confirm them (Klayman and Ha, 1987),” the exact opposite of what they should be trying to do, “comparative assessment[s] [are] likely to activate information consistent with the target value” (ibid). In support of this argument, the authors claim that “people spend more time attending to shared features between the target and the anchor than to unique features (Chapman and Johnson, 1999) and anything that leads people to attend to unique features diminishes the magnitude of anchoring effects (Chapman and Johnson, 1999; Mussweiler, Strack, and Pfeiffer, 2000)” (Epley and Gilovich, 2005). They also argue that “people who have just answered questions in the standard anchoring paradigm are faster to identify words consistent with the implication of an anchor value… than to identify words inconsistent with the anchor value.” Lastly, they argue that “altering the hypothesis considered in the comparative assessment alters participants’ absolute assessments… [resulting in] people [giving] larger absolute estimates after being asked whether a target value is more than the anchor value than after being asked whether a target value is less than the anchor value” (ibid). In other words, effortful thought does not influence anchoring effects because the thought is tending in the wrong direction: confirmation rather than disconfirmation.
The authors argue that there is a meaningful difference between self-generated and experimenter-provided anchors. Specifically, self-generated anchors “do not activate the same selective accessibility mechanisms that novel ‘experimenter provided’ anchors do, but instead initiate a process of effortful serial adjustment that modifies the initial anchor in a direction that seems appropriate until a plausible estimate is reached” (ibid). To support this claim, the authors provide evidence showing that “because the adjustment from self-generated anchors is conscious and deliberate, participants are able to consciously report utilizing a process of adjustment when responding to self-generated anchors, but report no such adjustment process when responding to experimenter-provided anchors (Epley and Gilovich, 2001)” (Epley and Gilovich, 2005).
From the evidence cited in the paper, the authors argued that “increasing the motivation or tendency to engage in effortful thought would have no influence on responses to experimenter-provided anchors”; yet they expected that “both would increase the amount of adjustment from self-generated anchors, and thus diminish the anchoring bias” (ibid).
In their first study, the authors predicted that “financial incentives for accuracy would influence responses to self-generated anchors but not to experimenter-provided anchors” (ibid). Half of their participants were given a financial incentive to answer correctly and the other half were not. The self-generated anchor questions were of this kind: ‘In what year was George Washington elected President of the United States?’; ‘In what year did the second European explorer land in the West Indies?’; and ‘What is the freezing point of vodka?’ The results suggest that, in the self-generated anchor paradigm, participants given an incentive adjusted more on average than participants given no incentive. The opposite was true in the experimenter-provided anchor paradigm: no difference existed between those who were given an incentive for accuracy and those who were not. Thus, “[i]ncreasing the amount of effortful thought devoted to such questions should therefore increase the amount of adjustment from… self-generated anchor values” (ibid).
In their second study, the authors hypothesized that “warning participants about the potential error [within an] adjustment process should lead people to adjust further.” In the experimenter-provided anchor paradigm, this effect should be non-significant. The results suggest that individuals who are forewarned about insufficient adjustment provide answers further from their self-generated anchor values than those who are not. In the experimenter-provided anchor condition, no significant effects were observed. The authors state that “Serial adjustment is effortful, deliberate, and therefore consciously available, whereas semantic priming mechanisms that produce anchoring effects in the standard anchoring paradigm are effortless, unconscious, and unintentional… Knowledge may be power,” they conclude, “but only if one knows when and how to use it” (ibid).
Thus, effortful adjustment is contingent upon the kind of anchoring condition a participant is subjected to. The authors suggest, however, that when you are provided with an external anchor, all is not lost. They claim “the antidote is akin to the ‘consider-the-opposite’ strategy commonly employed in debiasing research – in this case, considering the ways in which the anchor value might be wrong (Chapman and Johnson, 1999)” (Epley and Gilovich, 2005). I.e., a sufficient amount of skepticism might be quite useful. For self-generated anchors, the authors state that “[they] are anything but arbitrary and so eliminating their influence on judgment is likely to be counterproductive,” especially if one is temperamentally prone (say, highly agreeable or conscientious) to confirmation bias and self-deception. I.e., “the relevant debiasing efforts are perhaps best geared toward fine-tuning the governing psychological processes, not counteracting them completely” (ibid) – for example, by drawing analogies across examples to find broadly useful similarities among many like cases.
Knowledge: When Is It Power?
Before proceeding to analyze the effects of anchoring on law, and thus the culture, it will be relevant to examine whether knowledge is a relevant variable affecting the impact of anchoring. In their 2013 paper, Smith, Windschitl, and Bruchmann set out to answer just this question. They suggest that knowledgeable people may be less influenced by the anchoring effect than less-knowledgeable people.
To provide evidence for their hypothesis, the authors cite Blankenship et al. (2008), wherein participants “learned either anchor-consistent or anchor-inconsistent information.” Those “exposed to anchor-inconsistent information exhibited smaller anchoring effects… when not under cognitive load.” Another study cited by these authors (Mussweiler and Strack, 2000) showed that “Participants exhibited smaller anchoring effects when they could specify the category the target belonged to as compared with when they lacked this information” (Smith, Windschitl, and Bruchmann, 2013). Thus, it appeared entirely possible that knowledge has some impact on anchoring effects.
The authors also, justifiably, cited evidence showing that knowledge did not moderate anchoring. For example, in one study (Northcraft and Neale, 1987), real estate agents showed anchoring effects similar to those of amateurs. From these kinds of studies, and as was stated in Furnham and Boo’s review, ‘expertise was typically found to have little if any influence on anchoring’ (Englich and Soder, 2009).
However, this did not dissuade Smith, Windschitl, and Bruchmann. Instead, they made the argument that expert knowledge could act as a moderator. They argue that “high-knowledge people might be more likely to know the exact answer of the target estimate.” Secondly, they claim that “the range of responses that people will consider plausible is likely to be narrower among high-knowledge people than low-knowledge people”; i.e., “a low-knowledge person who is exposed to an extreme anchor will likely recruit information that is consistent with [the] extreme anchor,” whereas a high-knowledge person may not. And lastly, “someone with a great deal of knowledge about a target has more overall information… [and will, thus,] be more likely to come up with anchor-inconsistent information than a person with little or no knowledge about a target” (Smith, Windschitl, and Bruchmann, 2013). Yet this, itself, may not be the case: a high-knowledge person may recruit the kind of information (because they have a greater pool of information to choose from) that is more anchor-consistent than a low-knowledge person would. I.e., the anchor will serve as a point of reference for the information pulled by the high-knowledge person, where no such effect would be observable in the low-knowledge person. To test this theory, the experimenters conducted four studies.
In their first study, the authors investigate anchoring effects in a specific domain: football. In the second study, the authors “had participants from the USA and India” answer questions about US and Indian topics (ibid). In the third study, the authors had “participants first indicate[] their level of knowledge about 14 different domains,” and then “answer[] two anchor questions in each domain” (ibid). I feel it necessary to exclude the last study these authors conducted because it used an irrelevant anchor.
The first study showed a “robust anchoring effect in the overall sample” (ibid). Moreover, the authors “found that participants with higher knowledge exhibited smaller anchoring effects.” They also concede that it “is possible that people who tend to be knowledgeable about football also tend to be resistant to judgmental biases for reasons other than football knowledge” (ibid). For example, perhaps they are less conscientious and agreeable than other participants.
The second study revealed that “Indian participants exhibited significantly larger anchoring effects for the US questions than the Indian questions, F(1, 62) = 9.63, p = .003, ηp² = .13” (ibid). “However, for US participants, the anchoring effects… were not significantly different across the two question domains, F(1, 62) = 1.72, p = .20, ηp² = .03” (ibid). The overall finding of this study, the authors claim, is that “knowledge is inversely related to the size of anchoring effects” (ibid); i.e., “participants tended to exhibit smaller anchoring effects for questions from their high-knowledge domains rather than low-knowledge domains” (ibid).
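As an aside, effect sizes of this kind can be sanity-checked against the reported F statistics, since partial eta-squared is recoverable from an F test as F·df_effect / (F·df_effect + df_error). A minimal sketch in Python (the helper name is my own):

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta-squared implied by a reported F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Indian participants, US vs. Indian questions: F(1, 62) = 9.63
print(round(partial_eta_squared(9.63, 1, 62), 2))  # 0.13, matching the reported value
# US participants: F(1, 62) = 1.72
print(round(partial_eta_squared(1.72, 1, 62), 2))  # 0.03, matching the reported value
```

The same arithmetic reproduces the effect sizes reported throughout these studies, which is a useful check against transcription errors.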
The third study revealed the expected effect from each respective knowledge domain; i.e., if you had more knowledge in a specific domain, you were less impacted by the anchoring effect than if you were not.
The fourth and final study’s results were in line with the previous three studies’. “As predicted,” the authors write, “the decrease in anchoring effects occurred when the participants were given relevant background information” (ibid). To generate this result, the authors argue, a “manipulation designed to reduce anchoring effects must be quite strong,” and “Instructions encouraging the participants to process the information were also necessary to help reduce the biasing influence of anchors” (ibid). The authors did not think these effects were the result of effortful processing.
The authors came away with several conclusions. First, “high-knowledge participants may be more likely to have the exact answers to the posed questions stored in memory” and thus are less susceptible to anchoring effects. Secondly, “high-knowledge participants are likely to have a narrower range of plausible responses than their low-knowledge counterparts.” And lastly, “high-knowledge participants likely have access to more information that is inconsistent with the anchor,” which suggests they’re more likely to disconfirm false positives.
The authors did not assume that knowledge would always reduce anchoring effects, either. Specifically, if a study used moderate anchors, knowledge may not significantly reduce the anchoring effect of knowledgeable participants relative to less-knowledgeable ones. For example, the authors cite Englich et al. (2006), wherein “legal experts and non-experts read a hypothetical scenario about a woman convicted of shoplifting.” The anchoring effect between the two groups was non-significant. Yet, had the anchor been extreme, Smith, Windschitl, and Bruchmann argue, there might have been a different effect; i.e., the experts would have recognized that the extreme anchor was inconsistent with the norm and discounted it, whereas a less-knowledgeable individual wouldn’t have. This finding, in particular, leads us quite smoothly into our next analysis of knowledge effects on anchoring: what kind of knowledge is relevant to reducing anchoring effects?
In another paper, by Smith and Windschitl (2015), ‘knowledge’ was broken into two different kinds: metric and mapping. Mapping knowledge “refers to how items compare with one another,” while metric knowledge “refers to the general statistical properties (e.g., mean, range) that items tend to have” (ibid). They chose to study this topic on the basis that appropriate knowledge can diminish the impact of anchoring effects. The authors argue that “a person with high metric knowledge would have a better grasp of the distribution of values within the range.” Someone with high mapping knowledge, however – knowing, e.g., the location of France with respect to Russia and Germany – may have no idea about the relative populations of Germany or France.
To test this theory, the authors conducted several studies. The first study predicted not only that a knowledge-acquisition condition would lead to smaller anchoring effects than a no-knowledge condition, but also that knowledge about one category (old countries) would carry over to a new category (new countries). The authors found “a significant anchoring… effect, F(1, 49) = 160.64, p < .001, ηp² = .76,” which was expected. Secondly, they found “a predicted Knowledge Condition by Anchor interaction, F(1, 49) = 55.11, p < .001, ηp² = .53.” And thirdly, “The Knowledge by Anchor interaction was significant for both the old countries, F(1, 50) = 36.82, p < .001, ηp² = .42, and the new countries, F(1, 50) = 28.51, p < .001, ηp² = .36” (ibid). The authors interpret these results as suggesting that “the full-knowledge participants were less biased by the anchors than were the no-knowledge participants… [thus,] the impact of the knowledge gained by full-knowledge participants extended beyond simply knowing which direction to adjust from the anchor” (ibid).
The authors also identified that “metric knowledge” specifically “gained about the old countries was useful when making estimates about the new countries.” However, mapping knowledge, when “gained in the learning phase by the full-knowledge participants,” didn’t help the participants when they encountered new countries. Thus, “it seems unlikely that mapping knowledge played a role in mitigating the anchoring effects for the new countries” (ibid). But to what degree? Study two resolved just this conundrum.
In study two, the participants were provided with two new knowledge conditions. The first was a distribution condition, providing the participants with the populations of African countries (the metric learning condition). The second was a rank-order condition, providing those participants with information about how countries compare with one another (ibid). The authors expected that “the condition that provided metric information (the distribution condition) would show smaller anchoring effects than would the condition that provided mapping information (the rank-order condition).”
The authors observed a “pattern of results [that supports their] primary expectation that the anchoring effects would be smaller in the conditions designed to enhance metric knowledge… than in the conditions designed to enhance mapping knowledge” (ibid). They also observed that participants in the rank-order condition made more errors than those in the distribution condition, suggesting that metric knowledge leads to more accurate guesses than mapping knowledge. As the authors see it, they clearly demonstrated that it is metric knowledge, not mapping knowledge, that leads to smaller anchoring effects.
Overall, these findings suggest that particular types of knowledge have an impact on one’s ability to resist the anchoring effect. In this study, the specific knowledge capable of diminishing the anchoring effect was metric knowledge. One consideration I think relevant is that the semantics of anchoring questions matter, specifically in self-induced anchor scenarios, which may be more typical than experimenter-induced ones. In other words, the positive effect of metric knowledge may be more significant in scenarios where metric knowledge is more relevant than otherwise. Regardless, the authors did identify that, when “focusing [their] analyses only on those participants who adjusted in the correct direction from high and low anchors, higher metric knowledge was associated with smaller anchoring effects” (ibid). And secondly, “people with high metric knowledge also had a better sense of the distribution of African countries… [which means] they were better [at overcoming] the biasing influence of anchors” (ibid).
Knowledge, in general, appears to diminish the effects of anchoring. Metric knowledge is evidently more useful in reducing this effect than knowledge generally, but this may be an artifact of the conditions of the specific study and the types of questions asked. For example, if one knows a lot about birds, and the word bird is used as an anchor, would participants be more or less likely to respond with a bird type more quickly? Or what about the words expensive and inexpensive? If I used these words as anchors and then showed someone pictures of numerous cars, would their knowledge of cars diminish the effects of the anchors? Would teaching them mere model names and prices affect their ability to judge an expensive from an inexpensive car? From the papers I read, we can derive no clear answers to these kinds of questions. All that can be stated is that knowledge does seem to diminish the effects of extreme anchors, not moderate ones, and that metric knowledge is more useful than mapping or comparative knowledge.
Interestingly, earlier in the paper I claimed that intelligence had not been shown to diminish anchoring effects. I still stand by this. However, based on this evidence, I think it’s clear that intelligence should play some role in reducing anchoring effects in specific situations, specifically when the anchor is absurd. More intelligent people may have greater access to information, and may be better at acquiring it, than less intelligent people. As such, they may be better equipped to overcome the biasing effects of anchors with the knowledge they’ve acquired. Specifically, if intelligent people are better at acquiring metric knowledge, and are thus more numerically savvy, they should be better at overcoming biasing effects when the anchor is an extreme numeric value. In short, anchoring clearly seems to be affected by knowledge and thus is probably affected by intelligence.
Anchoring and Law
Clearly, anchoring effects exist, so we need not review the legal literature demonstrating their existence. Instead, I think it best to only briefly review how anchoring may affect particular kinds of cases and then proceed to a more theoretical discussion of anchoring and the law.
For example, in their 1994 paper, Korobkin and Guthrie hypothesize “that the litigant who makes an extreme opening settlement offer is less likely to reach a settlement with his adversary than the litigant who opens with a moderate offer” (Korobkin and Guthrie, 1994), an effect rooted in anchoring theory. Subjects were asked to assume the role of the plaintiff, and after they reviewed each scenario, they were given a final settlement offer that they were instructed to accept or reject. Two versions of the scenario existed, and the researchers did not expect any differences between the two groups of subjects with respect to risk aversion. The information given to Group A differed from that given to Group B only in so far as Group A was told merely that an initial offer had been made, whereas Group B was told that the dealer’s initial offer, which was rejected, was $10,000. Subjects in Group A were more likely to accept the final settlement than those in Group B, and the difference was significant. Contrary to the researchers’ hypothesis, “the strategic use of an extreme opening offer actually made an eventual settlement more likely than a more equitable opening offer,” all things considered.
The authors generated another hypothesis to investigate this matter further. They “conducted a second experiment in an attempt to address” whether a moderate opening offer was better than no opening offer at all. They predicted that “an opening offer relatively consistent with the offeree’s sense of legal entitlement would establish a sense of goodwill and make settlement more likely.” To test this hypothesis, the researchers gave their subjects “a hypothetical lawsuit involving a simple landlord-tenant dispute.” Group B received only one extra bit of information: “before they consulted with their attorney, the landlord had offered them $900 if they would agree not to take any action, but they had refused to accept this at that time” (ibid). Group B was much more likely to reject the settlement offer than Group A, the effect of which was statistically significant.
In both experiments, plaintiffs “were less likely to accept a final settlement offer when it was preceded by a reasonable opening offer than when it was preceded by an extreme offer, or no offer at all” (ibid). This surprised the researchers. They had assumed that whether “a defendant previously made a moderate opening offer, rather than an extreme opening offer or no offer at all, should not [have] systematically influence[d] plaintiffs’ responses to the final settlement offer” the way it did in their experiments (ibid). Of course, this is predicated on what is assumed to be ‘reasonable’ rather than what is. The authors suggest that their “findings provide empirical support for the proposition that litigants, when making reasonable opening offers to their adversaries, inadvertently erect psychological barriers that impede settlement” (ibid). Regardless, as was previously identified, had the plaintiffs been presented with information about how such cases had been settled in the past, and shown what fair settlement offers for people in their circumstances look like, the anchoring effect may have been diminished or rendered non-significant.
In order to get a broader view of these kinds of effects and which ones have been shown to be significant, I think it will be prudent to turn to a meta-analysis done by Bystranowski et al., (2021).
With regard to stimulus, the authors argue that “short vignettes were somewhat better at isolating the effect of anchoring than longer written materials, but they might have done that at the cost of decreased… validity” (ibid). Of interest, the authors “observed a noticeably, although not significant, larger effect with audio/visual materials” (ibid). This could be a good thing, but such materials may also distract jurors or judges from other meaningful pieces of evidence, or cause the visual evidence to become more salient than the rest, resulting in a more biased decision.
The authors also state that “the fact that overall effect sizes of anchoring for bound scales (in both groups) are significant means that it is not the open character of some scales used in law that [is] the primary reason behind the omnipresence of the anchoring effect in legal contexts and merely setting statutory boundaries (such as legal caps) would not insulate the decision-maker from the undesirable influence of an anchor” (ibid). I.e., legislating caps on damage awards has no bearing on whether a ‘fair’ and unbiased award is doled out.
The authors make very clear that “the analysis of the psychological moderators confirmed [their] earlier conjecture that forming a unified theory of the anchoring effect may be problematic,” specifically because different analytical points may be relative to specific theories; i.e., there is no unified theory of anchoring effects, although something like them clearly seems to exist. The authors argue – and I think this makes the most sense based on my review of the literature – that “in most of the cases, the selective accessibility model supplemented by the elaboration-based model might account for the effect in question.” Scale-distortion theory, a theory I did not run across in my own review of the literature, has not been employed as much as the selective accessibility and elaboration-based models, and thus any absolute claim that it does not contribute to anchoring effects in the legal system may be “premature.”
Regardless, their study also found support for the significance of the ad damnum. I.e., “the opportunity to start with a numeric demand gives the plaintiff a clear first-move advantage” (ibid). The authors also note that the prosecutor’s demands are “less of an issue in criminal law, although the pattern of results for the moderator area of law clearly shows that… anchoring influences decisions in the criminal law context, as well” (ibid). I.e., just because the prosecutor can make a demand doesn’t mean that other anchors do not also have potency in criminal proceedings.
The authors note that professionals may be susceptible to anchoring, but as was previously discussed, this may be a knowledge-contingent effect: perhaps the professionals were not as knowledgeable about the specific anchor they were dealing with as they could have been. The authors also highlight the need to investigate debiasing techniques and their potential effects on anchoring in legal proceedings, meaning there may be legitimate ways to diminish the anchoring effect. Specifically, they suggest “Larrick’s (2004) cognitive (training in biases, training in representations, etc.) or motivational (accountability, etc.) strategies” (ibid). They also suggest that another “potential debiasing measure (that could be classified as a case of training in representations) employed in studies [is] changing the format of the anchor” (ibid); e.g., “this might be achieved by comparing anchors in the form of lump sum and per diem” (Campbell et al., 2017; McAuliff and Bornstein, 2010).
All in all, these findings provide a much broader view of the anchoring effect in legal research and allow us to proceed through more theoretical analyses of anchoring effects in the legal system with greater insight.
The Herd and Covenant
According to Kahan and Klausner (1996), corporate contracts are not free from the effects of anchoring and other psychological heuristics. They argue that “lawyers and other professionals design and draft contract terms… presumably… that promote the interests of their clients,” yet “the interests of the draftsman may diverge from those of the client firm.” These divergences, the authors argue, “may create a bias on the part of the draftsman to employ a standard term rather than customizing an alternative term, even if customization would be best from the client’s perspective” (Kahan and Klausner, 1996).
The authors argue that a standard term generates less uncertainty than a novel term, and that the lawyer or contractor may prefer the standard term for several reasons. First, “even if the reputational payoff to the lawyer is linearly related to the value of a term to the client, risk aversion on the part of a lawyer would create a bias in favor of a standard term”; “lawyers will frequently be more risk-averse than their clients.” Secondly, “the reputational payoff to the lawyer may not be linearly related to the value of a contract term”; i.e., “if a standard term offers a lower variance in potential outcomes for the client than does a customized term, even a risk-neutral lawyer will have a bias in favor of employing a standard contract term” (ibid).
The authors primarily argue that this effect is the result of “herd behavior.” “Herd behavior,” so the authors argue, “loosely refers to a situation in which people imitate the actions of others and in so doing ignore, to some extent, their own information and judgments regarding the merits of their decisions.” Essentially, herd behavior decreases the costs associated with engaging in novel or non-standardized behavior. Of course, this necessarily entails a question: how was the behavior standardized?
Regardless, it is clear that herd behavior helps to lower the costs associated with many different kinds of behavior. For example, “herd behavior will occur as a consequence of agents’ rational attempts to enhance their reputations or… ‘to manipulate the labor market’s inference regarding their ability’” (Scharfstein and Stein, 1990). If something bad happens from taking a standardized route, and a client bears this cost, the attorney has an out; he can claim that the risks would have been greater had he taken a non-standardized route, making his choice appear more rational.
Another way to think about herd behavior is that “Payoffs to an agent… depend on the market’s inference of his ability” (Kahan and Klausner, 1996). For example, if B is doing well, and A is mimicking B, then A is probably doing well; i.e., the market will perceive A as doing as well as B. This is a basic inference from Jeffrey Zwiebel’s model (1990), in which “agents have a choice of taking one of two actions, each of which leads to randomly distributed outcomes: the industry standard; or an innovation with a higher expected value.” The risks of the two choices are effectively the same. However, future payoffs depend on inferences about an agent’s ability. If he chooses the industry standard, then unless his ability relative to that standard is called into question, he will look competent and retain future benefits that someone who takes the innovative route, and fails, will not.
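The reputational logic can be illustrated with a toy simulation. To be clear, this is my own sketch, not Zwiebel’s actual model, and every parameter in it is a hypothetical assumption: an agent picks either the standard or a higher-expected-value innovation, and the market credits ability generously when the agent follows the herd but reads an innovator’s below-standard outcome as evidence of incompetence.

```python
import random

random.seed(42)

# All parameters below are illustrative assumptions, not values from the paper.
STANDARD_MEAN = 1.0      # expected outcome of the industry-standard action
INNOVATION_MEAN = 1.2    # the innovation has a higher expected outcome...
NOISE_SD = 0.5           # ...but outcomes are equally noisy for both actions
REPUTATION_WEIGHT = 2.0  # how much the agent values the market's inference

def expected_utility(action, trials=100_000):
    """Monte Carlo estimate of outcome + reputational payoff for one action."""
    total = 0.0
    for _ in range(trials):
        mean = STANDARD_MEAN if action == "standard" else INNOVATION_MEAN
        outcome = random.gauss(mean, NOISE_SD)
        if action == "standard":
            # Following the herd: a bad draw is blamed on the environment,
            # so the market's inference of ability is unharmed.
            reputation = 1.0
        else:
            # Innovating: the market attributes a below-standard outcome
            # to low ability.
            reputation = 1.0 if outcome > STANDARD_MEAN else 0.0
        total += outcome + REPUTATION_WEIGHT * reputation
    return total / trials

# Despite the innovation's higher expected outcome, the asymmetric
# reputational payoff makes the standard the utility-maximizing choice.
print(expected_utility("standard") > expected_utility("innovation"))  # True
```

On these assumptions the standard dominates even though it is worse in expectation, which is the sense in which reputational concerns of this kind can sustain herding.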
The authors argue that “Only agents that have very good or very bad reputations at the start will innovate – the former because they are relatively unlikely to suffer from bad outcomes, and the latter because they will benefit from inaccurate evaluation” (Kahan and Klausner, 1996). Essentially, this means the margins of a society can benefit, and our previous question has been answered: behavior is standardized by those who are in the position and willing to take risks because they are either free from the costs associated with risk decisions or have nothing to lose.
The authors argue that this kind of herd behavior may be the result of anchoring. They state the following: “Standard contract terms may have an anchoring effect analogous to those observed in [anchoring experiments]. Standard terms carry an aura of stability and objectivity even more than the anchors used in these studies. Although the presence of learning and network externalities may provide a rational reason for a firm to adopt a standard term, the possibility of an anchoring bias suggests that a decision to adopt such a term may not be wholly rational or value-enhancing” (ibid). In other words, anchoring can create a salient effect in the minds of contractors that biases them in a particular direction, unconsciously dissuading them from choosing the non-standardized, unconventional route. Interestingly, knowledge may have no effect on this kind of anchoring, which would be classified as an entirely self-induced anchor. Perhaps knowledge about a specific client’s circumstances may benefit a contractor or attorney when generating or reviewing a contract, as would information about framing effects. But the costs associated with producing a bad contract – including reputational loss that could ruin an attorney – may be so great that loss aversion, and the desire to keep one’s head out from under the Sword of Damocles, maintain the herd mentality. An attorney who seeks to make a difference must either be beyond reproach or on the margins of society; a model of greatness or a rebel with a cause, but never something in between.
Tradition and Progress in Law
In his 2006 paper, Frederick Schauer argues that our common law system in the United States (US) may produce significantly biased results due to the nature of anchoring. He states that if “concrete cases are more often distorting than illuminating, then the very presence of such cases may produce inferior law whenever the concrete case is nonrepresentative of the full array of events that the ensuing rule or principle will encompass.” His argument stems from the conceit that, like it or not, judges do make law. Yet, he also makes clear, this was not always the perception.
For many, “The common law judge was engaged… in locating and articulating a preexisting principle, no less preexisting for never before having been formally articulated.” I.e., the common law approach was about the discovery of principle, not its creation. Effectively, there seems to be a false distinction between law and principle. Either way – be it law, principle, standard, or even reason – the decision of the court holds as a norm for future cases, especially if it is of the Court. Schauer states that “As long as the announced norm exerts at least some constraint on and for future cases, the argument presented here holds.” The question is whether the rule (stated as such or not) is better or worse for “having been initially announced in the context of a concrete dispute that a court is expected to resolve.” Of course, this entire argument is predicated on what one means by ‘better’ and why one should adopt that conceit of ‘better’ rather than the one they already have or the one they desire.
I will leave this question to the side for now. Continuing on, Schauer relays the argument that a common law “judge has before her a concrete token of the type of case for which she is making a rule, while other rulemakers make their rules without having before them in the same immediate way a particular token of the case-type that the rule will encompass.” For this reason, so the argument for the common law system goes, the common law is better than a system that makes law from no firm example. I.e., the best argument for the common law system, as Schauer puts it, is that “rulemaking and lawmaking are better done when the rulemaker has before her a live controversy, a controversy that enables her to see all of the real-world implications of making one rule rather than another.” Continuing, Schauer argues that “By requiring that constitutional rules be made only in the context of actual cases and controversies, and not on the basis of abstract speculation, judges are presumed to be able to make better rules than they would in the absence of that case-generated context.” Schauer’s concern is with the quality of that kind of law, not the authority determining the law. Yet from whom does that kind of law follow? The authority. For example: if Albert is a bad musician, and he plays you a song, he will probably play you a bad song. If Sandra is a bad judge, and she rules on a case, she will probably rule on it poorly. Thus, doesn’t Schauer implicitly have an issue with the authority of the court? I think so, and I do not think his alternative is as removed from the effects that cause biased decisions in the common law system as he suggests.
Specifically, he argues that “the rulemaker, seeing a concrete case before her, is likely to believe that this case is representative of a larger array… [and] if that immediate case is not representative, it may still be mistakenly thought to be representative, a mistake generated precisely by the fact that this case is before the decisionmaker, while other cases within the class are not.” Simply put, a member of a class is not the class itself, and one member cannot stand in for the rest: a screw, a shaft, and a head together make up a hammer, at least, but the screw is not the shaft, the shaft is not the head, and none of them alone is the hammer. To resolve this issue, the rulemaker would have to determine “the extent to which the larger class did or did not resemble the particular class member whose immediate presence before the decisionmaker prompted making a rule” (Schauer, 2006).
Obviously, as we have seen with anchoring effects, this is easier said than done. Schauer argues that “when decision makers are in the thrall of a highly salient event, that event will so dominate their thinking that they will make aggregate decisions that are overdependent on the particular event and that overestimate the representativeness of that event within some larger array of events.” With regard to anchoring specifically, “because judges… in their lawmaking and rulemaking capacity are necessarily engaged in a process of mapping a large array of future events that will be governed by the rules they make, the risk is that judges who have a particular case before them to decide will systematically overestimate the extent to which those future events will resemble the one they are now most immediately confronting.” I.e., the salience of the case will generate a biased rule that then serves as a biased standard that few, as has been previously discussed, will buck, which also serves as an anchor, creating a circular effect. For example, “If the case that has prompted the rulemaking exercise has some number of particularly salient features, even the judge consciously surveying a larger field of real and predicted cases in order to make a rule will likely focus disproportionately on those cases containing the salient features of the first case, even when those salient features are present to a lesser extent in the larger field” (ibid).
The author argues that “Although a dynamic case-based system,” like the common law system, “possess[es] the capacity for change, it is not clear that those changes take place at the right time or that those changes are necessarily or even systematically for the better” (italics added, ibid). It is at this point that I have become skeptical of the argument. For example, when exactly is the right time? Who decides this? Why that time and not another time? Essentially, if I were to go up to numerous random people on the street and ask them what “the right time” and “for the better” meant in this context, would I get an absolute answer, or an answer that occurred with greater frequency than any of the others? Would their answers even look alike? I highly doubt it; these kinds of terms are utterly ambiguous and allow the author to slyly slip in a subversive suggestion that genuinely undermines the tradition of the common law system.
Towards the end of the paper, after some fairly insightful observations about the cyclical and circular nature of precedent, which I am only skipping to avoid belaboring the point, the author states what I just made clear: “The largest implications of what I have argued here… are those that go to questions of institutional design” or as I see it, the foundations upon which such claims can even be made. While he is not wrong that the common law system is likely subject to anchoring effects and path-dependent effects (such as those reviewed in Kahan and Klausner’s paper), when he makes the claim that it can be better, he must answer this question: better relative to what?
Thankfully, which I say very ironically, Schauer provides us with a kind of answer: “If the implications of case-based rulemaking are to be heeded” then there would be a desire for a less biased rule-making process. “This would then constitute an argument… for administrative agencies making rules in the context of formal rulemaking” (what does formal rulemaking consist of?) “rather than as an adjunct to adjudication.” In other words, Federal Agencies, ‘overseen’ by the Legislature, should make rules that govern your lives because the Court can be biased, or so Schauer’s argument goes. Of course, Schauer makes clear, in many dependent clauses, that this is only a suggestion, and is not the only option, just the one he’s suggesting be the case after the common law system he stands upon is demolished.
While Schauer makes some legitimate points, I believe another author provides a more balanced insight into the rulemaking process that governs our culture.
A Rising Tide and Falling Rain
In his 2006 paper, Rachlinski asks a very similar question to the one asked by Schauer. He asks, which is better for making law: an adjudication process or a legislative process? His answer: both. “Adjudication,” so Rachlinski argues, “builds law from the ground up, one dispute at a time.” Whereas “Legislation,” he continues “builds law from the top down by creating general principles that cover future disputes.”
Just as Schauer recognized, and as I am perfectly content with admitting, Rachlinski states that “Flash issues that fade from the public sphere are more likely to influence legislatures than courts because courts can only decide those cases that come before them.” On the other hand, “Legislatures… can raise any issue they choose.” Thus, courts are limited in their ability to provide ‘solutions’ to novel issues whereas legislatures have a much broader scope of legislative authority.
Unlike Schauer, Rachlinski argues that the Legislative process and the Adjudicative process each have their place. “For certain kinds of problems, the adjudicative approach of building law from the ground up might thwart the goals of the legal system. For others, the top-down approach might be more troublesome. Sometimes, each leads to different answers that cannot be said to be inferior or superior from a normative perspective” (ibid). In effect, he acknowledges the ambiguity of Schauer’s appeal to a ‘better’ institutional design.
In his comparison of the Adjudicative system to the Legislative system, Rachlinski starts by acknowledging the weakness of the Adjudicative process by highlighting the strengths of the Legislative process. First, “it is prospective”; “legislatures make rules to govern future conduct and hence, legislatures act without knowing precisely how individuals will be affected by the rules they adopt or even who might be affected.” Thus, it can be inferred that the immediacy of an event is not biasing their rulemaking faculties. Secondly, “legislatures are exposed to tradeoffs that are not part of judicial proceedings”; “Legislatures are keenly aware that money spent on one project either will be taken from another, will require a tax increase, or will be added to a deficit” (ibid). With respect to such tradeoffs, Rachlinski argues that courts “usually lack such information or even shun it explicitly.”
Courts are likely to be deceived by the immediacy of a case, especially when they are observing one case at a time. To boot, “Careful statistical reasoning is also apt to be more difficult in an individual case than in the aggregate,” meaning the aggregate perspective enjoyed by legislatures can reduce the biasing effects an immediate case produces. In total, the negative effects associated with the courts stem from the fact that they observe single cases, one at a time, and can be biased by whichever variables in those cases are salient and serve as an anchor. Rachlinski even goes so far as to say “The widely accepted doctrine of res ipsa loquitur, in fact, is founded precisely on a logical fallacy akin to base-rate neglect.” I.e., the courts can act unreasonably, do not take statistics or statistical reasoning seriously, and “have a lengthy history of embracing logical fallacies that arise from a narrow, subjective perspective… cementing them into broad legal principles” and, thus, the law.
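The base-rate neglect Rachlinski invokes can be made concrete with a short worked example (the numbers here are hypothetical and not drawn from his paper): a piece of evidence that is individually impressive can still be weak proof once the rarity of the underlying condition is factored in via Bayes' rule.

```python
# Hypothetical illustration of base-rate neglect: a test that is 95%
# accurate, applied to a condition with a 1% base rate. Intuition says a
# positive result implies ~95% certainty; Bayes' rule says otherwise.
base_rate = 0.01        # P(condition)
sensitivity = 0.95      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

# Total probability of a positive result, from either source.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# Posterior probability of the condition given a positive result.
posterior = sensitivity * base_rate / p_positive
print(round(posterior, 3))  # 0.161 -- far below the intuitive 0.95
```

Judging the case "in front of you" by the test's accuracy alone, and ignoring how rare the condition is in the aggregate, is exactly the narrow, single-case perspective Rachlinski criticizes.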
The legislative system, however, is not perfect either. Specifically, one capacity the legislative system lacks, or at least has more difficulty with, is adaptation. “[J]udges might learn to avoid the myopic focus on individuals that leads them to neglect broader forces that produce the disputes they adjudicate… [meaning] that courts, as institutions, evolve over time to take a broader perspective, much like the legislature, and avoid common cognitive errors” (Rachlinski, 2006). Importantly, this process is a hierarchical one. “The common law process is a process of trial and error with both parallel and hierarchical mechanisms for improving legal rules. The hierarchical appellate process offers a means of correcting a foolish legal rule produced at trial” (ibid). Importantly, the biasing facts, the anchoring effects, can be less prominent “in the appellate process than at trial,” so Rachlinski argues. If textual length does affect anchoring effects, he probably has a point. Also, of significant importance, the judicial process produces parallel authorities. “One court’s pronouncement of a legal rule might be contradicted by the conclusions of a coequal court in a different jurisdiction.” I.e., even if a court is wrong, the entire judicial system is not wrong, and “so long as lawyers, professors, and subsequent courts have the means of identifying the sensible decisions of their predecessors, the misguided rulings [should] lose influence.” Unfortunately, the Court, the SCOTUS, is not free from this problem, and in fact is much more limited by its errors, as are other courts vis-à-vis the Court, than the legal system as a whole.
In fact, this suggests that a decentralized process for the courts and common law system is best; i.e., when the Court gives more power to the common law system and various lower courts than it does itself, it preserves the dynamic nature of the common law system, in effect preserving its ability to more effectively evolve and be dissuaded from adopting a firm and potentially biased standard. Essentially, the legislative process lacks these kinds of hierarchical and parallel processes that constitute the common law.
Rachlinski also argues that the “decentralized nature of the common law process in the United States might, for example, mitigate the influence of framing effects on decisions.” Whereas the legislature “approaches most social problems from a single, natural frame created by the status quo.” Although it can be argued that the immediacy of a court case can be seen as a weakness, in reality, it can also be a strength. First, “the differences in frame might lead individual courts to adopt different legal rules. As case law accumulates, the courts might select from these rules the most sensible without regard to frame.” Secondly, “the courts might notice that the status quo is having an undesirable effect on the rules that they are adopting… [potentially leading] courts to be able to step outside the frame and see the kinds of disputes that they face from a broader perspective than the default presents.”
Next, “Courts have a much more limited range of remedial authority than legislatures.” This might sound like a bad thing, but it’s not. Great power comes with great responsibility. “Legislatures might be tempted to use their limitless regulatory authority in ways that are overly ambitious.” Perhaps some might even say prideful. “In some circumstances, simple decision rules predict future results better than results of a multiple regression analysis.” This comes from the fact that multiple regression analyses can ‘overfit’ data. “Regression is tailored tightly to the unique pattern of data analyzed, so much so that a regression equation might be unreliable when it is applied to new data.” Simple rules, on the other hand, can be more durable. Thus, a court’s ability to generate simple rules may yield more reliable predictions than a multiple regression equation, and it is precisely this latter kind of modeling that the legislature engages in.
The main problem of a legislature is as follows: “The observations that a statistician feeds into a regression analysis invariably incorporate some error in the measurement of the parameters in the model (Cohen et al., 2014). This measurement error limits the ability of the regression model to predict future observations, or the ‘validity’ of the regression model (Allen and Yen, 1979)… the reliability of the observations necessarily limits the validity of the model (Cohen et al., 2014). A regression equation is blind to this concern. A regression analysis will produce a unique equation that best fits the observations fed into the model even though observations invariably include some variability” (Rachlinski, 2006). The legislative process works in precisely this manner, and as a result will only produce results based on the information that is fed into it and processed by it, which means that what it produces can be irrelevant to the problems actually facing the society it is supposed to be legislating for.
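The overfitting point above can be sketched in a few lines of standard-library Python (a hypothetical illustration, not anything from Rachlinski's paper): a model tailored tightly to its training observations fits them perfectly, noise and all, yet predicts new observations worse than a crude but durable rule.

```python
import random

random.seed(0)

def make_data(n):
    """Noisy observations of a true relationship y = 2x."""
    return [(x, 2 * x + random.gauss(0, 5))
            for x in (random.uniform(0, 10) for _ in range(n))]

train, test = make_data(20), make_data(200)

def memorizer(x):
    """'Tightly tailored' model: predicts from the nearest training
    observation, so it fits the training noise along with the signal."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def simple_rule(x):
    """Crude but durable rule: y is roughly twice x."""
    return 2 * x

def mse(model, data):
    """Mean squared prediction error of a model over a data set."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# The memorizer fits the training data perfectly (zero error), but the
# simple rule predicts the held-out test data more accurately.
print("train error:", mse(memorizer, train), "vs", mse(simple_rule, train))
print("test error: ", mse(memorizer, test), "vs", mse(simple_rule, test))
```

On the analogy in the text, the finely fitted model is the legislature's tailored statute and the simple rule is the court's blunt common law principle: the latter sacrifices fit to the cases already seen in exchange for robustness to the cases still to come.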
Lastly, the courts rely on pattern recognition. Encountering the same issue repeatedly enables humans to recognize patterns about that issue and overcome it. In fact, Rachlinski argues that it is precisely from this pattern recognition ability, this ability to see patterns where there are seemingly none, that we derive myths, legends, and gods. “In developing the common law, judges have relied heavily on pattern-recognition abilities.” This ability to first recognize patterns and then categorize them “simplifies the litigation process enormously and ingeniously.” Rachlinski argues that “The categorical approach distills the complex inquiry of how to determine who is responsible for an injury into a handful of simple rules.” But the pattern recognition abilities of the common law system do not stop there: “the common law process [is also] remarkabl[e] at creat[ing] categories… in seemingly disparate cases.” By highlighting similarities between disparate decisions, “such as a hunting accident and modern pharmaceutical marketing,” the court can efficiently resolve situations without starting from scratch, which can very easily be classified as a highly creative act. It is precisely this kind of efficient simplicity that positively differentiates the adjudicative process from the legislative.
Thankfully, and I do not say this ironically, Rachlinski does not provide us with the same kind of answer Schauer provides us. However, he does argue that Schauer’s arguments should “inspire a reassessment of the appropriate institution in which to resolve social problems.” For example, courts were thought to be better at resolving social harms, and the lawmaking body better at resolving disasters. But the inverse may actually be the case. Social harms, by their very definition, may be better resolved by the legislature, whereas the adjudicative process, by its nature, may be better equipped to handle natural disasters by “categorizing and processing all manner of accidents… [and] uncover[ing] the underlying forces that produced them more effectively” (Rachlinski, 2006). In short, the bottom-up adjudicative process and the top-down legislative process each seem finely tuned and suited to their niche when they are operating according to their nature. Of course, this means that if they were not, for example, if the legislature were corrupted and was only producing laws tailored to solve a specific problem for a specific group, they would not be functioning properly, and thus would not be operating in a manner conducive to the public’s good. In other words, this means that Rulemaking Agencies, which Schauer proposes as a solution to the problem of anchoring and biasing effects in the common law courts, may be just as biased and ineffective as a biased SCOTUS.
Conclusions and Discussion
“Where does madness leave off and reality begin?”
― H.P. Lovecraft, The Shadow over Innsmouth
To briefly review the findings: anchoring is the disproportionate influence of an initially presented value, the anchor, on decision makers, biasing their judgments toward, or by, that value. Anchoring effects are mainly caused by reduced counter-hypothesis testing. However, this also means that unreasonable anchors can provoke counter-hypothesis testing, producing doubt that diminishes anchoring effects.
Interestingly, personality does seem to affect whether one is susceptible to anchoring. Specifically, individuals higher in openness, agreeableness, and conscientiousness are more likely to be affected by an anchor than those who are lower in those traits. This also suggests that women, who are on average higher in agreeableness and conscientiousness, will be more affected by deceptive anchors than men. Importantly, incidental anchors are pretty much irrelevant. I.e., random digits do not cause anchoring effects.
Anchoring, and this is why incidental anchors are irrelevant, is a semantic phenomenon. I.e., a purely numeric anchor is an irrelevant anchor; anchors are contingent upon the meaning of the anchor and its relationship to the problem or question posed. Anchors also appear to make the features of their content appear more salient than they otherwise are. This could potentially explain the effects of audio/visual media as anchors, which may be more potent than text media.
Because anchoring effects are a kind of confirmation bias effect, seeking to disconfirm an anchor, or being aware that it may have influence, may in certain circumstances diminish anchoring effects. Knowledge may also be key to this process. Knowledgeable people may be better at recognizing an absurd anchor and thus avoiding it. However, a knowledgeable person may also use their pool of knowledge to highlight information that is more anchor consistent than a less-knowledgeable person would. These strategies also seem to work only if the knowledgeable individual is highly perceptive. Interestingly, knowledge type is also relevant for overcoming anchoring effects. I.e., higher metric or numerical knowledge about a subject has been shown to be better at reducing the effects of an anchor than mapping or comparative knowledge.
These kinds of effects are also likely to be related to intelligence. Specifically, intelligent people may be better at acquiring knowledge than less intelligent people. They may also be better at applying this knowledge. In turn, this means that more intelligent people may be less susceptible to the effects of anchoring than less intelligent people.
With respect to the law, award caps, which create limits for damage awards in civil tort cases, at least, do not seem to diminish the effects of anchoring and may even contribute to it, leading to larger awards and penalties than would have otherwise been the case. As stated, evidence can also affect the outcome of a case and can serve as an anchor. Specifically, if visual or audio evidence is consistently presented to a judge or jury, they may be biased by it, ignoring other meaningful data that could either lead to conviction or acquittal. Some of these anchoring effects could be overcome by providing judges and juries with metric knowledge about a specific case.
Interestingly, anchors may induce herd behavior. The anchor can serve as a standard, and thus sets the limits for what is deemed appropriate or inappropriate behavior in a market setting. It is risky to move beyond the limits of the anchor, even if the anchor is, for all intents and purposes, as risky as a more innovative route. People simply are more concerned about their reputation. If choosing the standard route means that no loss in reputation will be incurred, then the standard will be chosen. Effectively, this means that the standard works as an anchor and as a frame that limits the exploratory nature of the market.
Anchors can also be generated by the legal system, specifically through the rulings of judges. These rulings, principles, reasons, rules, or laws serve as the anchor. However, they are also produced in a setting that is influenced by anchors. And thus, the statistical likelihood of a case used as a standard for other cases being representative of the whole class of cases it standardizes is unreasonably low. In other words, when the courts create law, they do so in a way that has significant deleterious effects.
However, rulemaking agencies, which are allegedly overseen by the legislature, are not removed from this effect. These agencies effectively are subjected to the same problem regression equations are. If you create a legal formula that is tailored to solve a specific problem, then the output of that formulaic law will be limited in its ability to ameliorate the problems faced by society generally.
The resolution to this quagmire: let the legislature and common law courts act as they are naturally intended to act. I.e., these systems can be seen as two superorganisms. One is best at generating variation when it is not centralized, the common law courts, and the other is best at resolving impersonal, social issues when it is not captured by rent-seekers, the legislature. This also impels us to use these tools as they are naturally intended to be used. Why should courts be tasked with creating standards that are intended to resolve social issues when they are much better at generating standards for personal issues? And why should the legislature focus on hyper-specific or generally irrelevant issues when it could take the time to draft legislation that is well considered and reflects statistical and informational realities? Obviously, both should do what they do best.
Ultimately, anchors are a kind of tool, and thus, like any tool, their value is predicated on what you, individually, value. A society that makes use of this technique regularly, I think, reflects a disturbing trend. Specifically, this is a manipulative technique: it is intended to deceive you and to get you to do that which you wouldn’t have done otherwise. If we use this kind of technique, it genuinely forces us to ask questions about who we are as a people: what kind of person wants to manipulate someone into doing what they want unconsciously? Is this good, is this virtuous behavior, or is it purely self-serving behavior? Perhaps this is why some were so quick to adopt the notion that incidental anchors were a reality. In a society where everyone is doing anything and everything they can to get a leg-up, if you could manipulate someone into doing something for you with random, irrelevant, or incidental numbers or anchors, wouldn’t you? Would you?
More importantly, in this kind of society, if it were to dissolve all notions of morality, if it were to purely act on what gave it pleasure and caused it pain, what might its use of anchors say about it? To what extent would such a society’s manipulative tendencies go? A manipulative society as such, given the ‘gift’ of anchors, likely would not only deceive others, but also delude itself. If consumeristic framing reflects a society overcome by meaninglessness and nothingness, then anchoring, and the manipulative tendency correlated with it reflects the desire to deceive oneself and others out of that feeling of meaningless and nothingness. If all is nothing, pleasure and pain your only motivators, and morality nothing more than a limit on the extent to which you can act, then what’s to stop you from desiring the capacity to manipulate or control others, even if doing so reflects your own delusion? A society as such not only is cruel, in some respects, but also insane. Madness and Void, so it seems, go hand in hand.
Bibliography
Mussweiler, T. and Strack, F., 2001. The semantics of anchoring. Organizational behavior and human decision processes, 86(2), pp.234-255.
Allen, M.J. and Yen, W.M., 2001. Introduction to measurement theory. Waveland Press.
Blankenship, K.L., Wegener, D.T., Petty, R.E., Detweiler-Bedell, B. and Macy, C.L., 2008. Elaboration and consequences of anchored estimates: An attitudinal perspective on numerical anchoring. Journal of Experimental Social Psychology, 44(6), pp.1465-1476.
Bodenhausen, G.V., Gabriel, S. and Lineberger, M., 2000. Sadness and susceptibility to judgmental bias: The case of anchoring. Psychological Science, 11(4), pp.320-323.
Bystranowski, P., Janik, B., Próchnicki, M. and Skórska, P., 2021. Anchoring effect in legal decision-making: A meta-analysis. Law and Human Behavior, 45(1), p.1.
Campbell, J., Chao, B. and Robertson, C., 2017. Time is money: An empirical assessment of non-economic damages arguments. Wash. UL Rev., 95, p.1.
Chapman, G.B. and Bornstein, B.H., 1996. The more you ask for, the more you get: Anchoring in personal injury verdicts. Applied cognitive psychology, 10(6), pp.519-540.
Chapman, G.B. and Johnson, E.J., 1994. The limits of anchoring. Journal of Behavioral Decision Making, 7(4), pp.223-242.
Chapman, G.B. and Johnson, E.J., 1999. Anchoring, activation, and the construction of values. Organizational behavior and human decision processes, 79(2), pp.115-153.
Cialdini, R.B., Vincent, J.E., Lewis, S.K., Catalan, J., Wheeler, D. and Darby, B.L., 1975. Reciprocal Concessions Procedure for Inducing Compliance: The Door-in-the-Face Technique. Journal of Personality and Social Psychology, 31(2), pp.206-215.
Cohen, P., West, S.G. and Aiken, L.S., 2014. Applied multiple regression/correlation analysis for the behavioral sciences. Psychology press.
Englich, B. and Soder, K., 2009. Moody experts: How mood and expertise influence judgmental anchoring. Judgment and Decision Making, 4(1), p.41.
Englich, B., Mussweiler, T. and Strack, F., 2005. The last word in court—A hidden disadvantage for the defense. Law and Human Behavior, 29(6), pp.705-722.
Englich, B., Mussweiler, T. and Strack, F., 2006. Playing dice with criminal sentences: The influence of irrelevant anchors on experts’ judicial decision making. Personality and Social Psychology Bulletin, 32(2), pp.188-200.
Enough, B. and Mussweiler, T., 2001. Sentencing under uncertainty: Anchoring effects in the courtroom. Journal of applied social psychology, 31(7), pp.1535-1551.
Epley, N. and Gilovich, T., 2001. Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological science, 12(5), pp.391-396.
Epley, N. and Gilovich, T., 2005. When effortful thinking influences judgmental anchoring: differential effects of forewarning and incentives on self‐generated and externally provided anchors. Journal of Behavioral Decision Making, 18(3), pp.199-212.
Epley, N. and Gilovich, T., 2006. The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological science, 17(4), pp.311-318.
Eroglu, C. and Croxton, K.L., 2010. Biases in judgmental adjustments of statistical forecasts: The role of individual differences. International Journal of Forecasting, 26(1), pp.116-133.
Furnham, A. and Boo, H.C., 2011. A literature review of the anchoring effect. The journal of socio-economics, 40(1), pp.35-42.
Hammitt, J.K., Carroll, S.J. and Relles, D.A., 1985. Tort standards and jury decisions. The Journal of Legal Studies, 14(3), pp.751-762.
Hastie, R., Schkade, D.A. and Payne, J.W., 1999. Juror judgments in civil cases: Effects of plaintiff's requests and plaintiff's identity on punitive damage awards. Law and Human Behavior, 23(4), pp.445-470.
Higgins, E.T. and Brendl, C.M., 1995. Accessibility and applicability: Some "activation rules" influencing judgment. Journal of Experimental Social Psychology, 31(3), pp.218-243.
Higgins, E.T., Rholes, W.S. and Jones, C.R., 1977. Category accessibility and impression formation. Journal of experimental social psychology, 13(2), pp.141-154.
Kahan, M. and Klausner, M., 1996. Path dependence in corporate contracting: Increasing returns, herd behavior and cognitive biases. Wash. ULQ, 74, p.347.
Klayman, J. and Ha, Y.W., 1987. Confirmation, disconfirmation, and information in hypothesis testing. Psychological review, 94(2), p.211.
Koehler, D.J. and Harvey, N. eds., 2008. Blackwell handbook of judgment and decision making. John Wiley & Sons.
Korobkin, R. and Guthrie, C., 1994. Opening Offers and Out-of-Court Settlement: A Little Moderation May Not Go a Long Way. Ohio St. J. on Disp. Resol., 10, p.1.
LeBoeuf, R.A. and Shafir, E., 2009. Anchoring on the "here" and "now" in time and distance judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), p.81.
Marti, M.W. and Wissler, R.L., 2000. Be careful what you ask for: The effect of anchors on personal-injury damages awards. Journal of Experimental Psychology: Applied, 6(2), p.91.
McAuliff, B.D. and Bornstein, B.H., 2010. All anchors are not created equal: The effects of per diem versus lump sum requests on pain and suffering awards. Law and Human Behavior, 34(2), pp.164-174.
Mussweiler, T. and Strack, F., 1999. Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35(2), pp.136-164.
Mussweiler, T. and Strack, F., 2000a. The use of category and exemplar knowledge in the solution of anchoring tasks. Journal of personality and social psychology, 78(6), p.1038.
Mussweiler, T. and Strack, F., 2000b. Numeric judgments under uncertainty: The role of knowledge in anchoring. Journal of experimental social psychology, 36(5), pp.495-518.
Mussweiler, T., Strack, F. and Pfeiffer, T., 2000. Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9), pp.1142-1150.
Neely, J.H., 1977. Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of experimental psychology: general, 106(3), p.226.
Northcraft, G.B. and Neale, M.A., 1987. Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions. Organizational behavior and human decision processes, 39(1), pp.84-97.
Rachlinski, J.J., 2006. Bottom-up versus Top-down Lawmaking. The University of Chicago Law Review, pp.933-964.
Ross, L. and Nisbett, R.E., 2011. The person and the situation: Perspectives of social psychology. Pinter & Martin Publishers.
Scharfstein, D.S. and Stein, J.C., 1990. Herd behavior and investment. The American economic review, pp.465-479.
Schauer, F., 2006. Do cases make bad law?. The University of Chicago Law Review, 73(3), pp.883-918.
Schimmack, U. (2021, June 6th). Incidental Anchoring Bites the Dust. Replicability-Index: Improving the Replicability of Empirical Research. Replicationindex.com.
Shanks, D.R., Barbieri-Hermitte, P. and Vadillo, M.A., 2020. Do Incidental Environmental Anchors Bias Consumers’ Price Estimations?. Collabra: Psychology, 6(1).
Smith, A.R. and Windschitl, P.D., 2015. Resisting anchoring effects: The roles of metric and mapping knowledge. Memory & cognition, 43(7), pp.1071-1084.
Smith, A.R., Windschitl, P.D. and Bruchmann, K., 2013. Knowledge matters: Anchoring effects are moderated by knowledge level. European Journal of Social Psychology, 43(1), pp.97-108.
Strack, F. and Mussweiler, T., 1997. Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of personality and social psychology, 73(3), p.437.
Tversky, A. and Kahneman, D., 1974. Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty. Science, 185(4157), pp.1124-1131.
Wegener, D.T., Petty, R.E., Blankenship, K.L. and Detweiler-Bedell, B., 2010. Elaboration and numerical anchoring: Implications of attitude theories for consumer judgment and decision making. Journal of consumer psychology, 20(1), pp.5-16.
Wegener, D.T., Petty, R.E., Detweiler-Bedell, B.T. and Jarvis, W.B.G., 2001. Implications of attitude change theories for numerical anchoring: Anchor plausibility and the limits of anchor effectiveness. Journal of Experimental Social Psychology, 37(1), pp.62-69.
Zwiebel, J., 1990. Corporate conservatism, herd behavior and relative compensation. Journal of.