How do both the hindsight bias and overconfidence impact psychological research?

R. MacCoun, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.3 Hindsight Bias

Hindsight bias is the ex post tendency to overestimate the ex ante likelihood of an outcome, relative to what one would have actually guessed before the event. Because most legal judgments are made ex post, they are vulnerable to this bias, as documented in a variety of experimental studies (Rachlinski 1998). Hindsight bias also influences citizens' ex post reactions to newsworthy legal decisions. Trial evidence appears more incriminating to people who believe a defendant has been convicted than to those who believe the defendant was acquitted (Bodenhausen 1990).

A pernicious example of this phenomenon was the elite and popular reaction to the acquittal of actor/athlete O. J. Simpson at his criminal homicide trial in the USA. Because 9 of the 12 jurors were African American, a large number of commentators, including the lead prosecutor, ‘explained’ the verdict as a case of ‘racial nullification’—the notion that African Americans are more lenient when judging other African Americans. In an October 1995 Gallup poll, 48 percent of whites vs. 18 percent of African Americans said they had ‘less confidence’ that ‘jurors can reach a verdict in a trial without letting their racial attitudes affect their judgment’ (Moore and Saad 1995). In fact, the jury literature provides little support for ‘racial nullification’ as a general phenomenon (Kerr et al. 1995). The reasons for the acquittal will never be known with certainty, but it seems clear that pronouncements of ‘racial nullification’ reflected hindsight bias. Despite widespread knowledge of the composition of the jury, surveys during the trial showed that few trial attorneys and citizens expected an acquittal; indeed Sporting Index reported that most American gamblers were betting on a conviction (Los Angeles Times 1995).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767029144

Using Blind Reviews to Address Biases in Medical Malpractice

Jeffrey D. Robinson, in Blinding as a Solution to Bias, 2016

Visual Hindsight Bias

Visual hindsight bias refers to the retention of visual information in the mind, either consciously or subconsciously (Harley et al., 2004). In the popular children’s book series Where’s Waldo?, the reader tries to find Waldo, a person in a red-and-white striped shirt, glasses, and a cap, in a visually cluttered picture. Initially this can be a very difficult task, but once Waldo has been found, viewers locate him more quickly when returning to the picture, even after many months. Psychologists have shown that once a person is aware of a fact, or has made an observation, the memory of it can be readily recalled. A radiologist (or juror) who sees an abnormality, either independently or after being shown the finding, will be visually drawn to it on subsequent presentations. This is particularly pernicious in radiology malpractice litigation, in which the missed observation is frequently displayed at great magnification, because the defense’s claim may be that the abnormality was so subtle that failing to perceive it was within the standard of practice. When the finding is first presented in such a setting, it is difficult for any observer to recapture the naïve state of mind experienced by the original radiologist.

URL: https://www.sciencedirect.com/science/article/pii/B9780128024607000127

Cognitive Psychology of Memory

M. Ross, ... E. Schryer, in Learning and Memory: A Comprehensive Reference, 2008

2.47.1 The Effects of the Present on Recall

In everyday life, the hindsight bias, or the I-knew-it-all-along effect, is perhaps the most widely recognized example of the influence of the present on recall. According to a popular cliché, hindsight is 20-20. In research on the hindsight bias, psychologists compare hindsight judgments made with knowledge of an outcome (e.g., the winner of an obscure military battle) to foresight judgments made without such knowledge. Participants in the hindsight condition typically regard the actual outcome as more likely than do those in the foresight condition (Fischhoff and Beyth, 1975; Slovic and Fischhoff, 1977; Hoffrage et al., 2000). For example, Fischhoff and Beyth (1975) asked university students to predict the likelihood of various events before President Nixon’s visits to Beijing and Moscow. After Nixon’s trips, these students remembered assigning higher probabilities than they originally did to events that actually occurred. Presumably participants had a difficult time recalling the exact probabilities that they had assigned. In reconstructing their predictions, they used their present knowledge to estimate their prior probabilities (Hoffrage et al., 2000).
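The comparison described above, between original predictions and post-outcome recall, can be sketched as a simple scoring rule: shifts toward a known outcome (recalled probability higher for events that occurred, lower for events that did not) are the signature of hindsight bias. A minimal Python sketch with fabricated numbers; the function name and data are illustrative, not from Fischhoff and Beyth (1975):

```python
def hindsight_shift(original, recalled, occurred):
    """Mean signed shift of recalled probabilities toward the known outcome.

    For events that occurred, a positive shift means recall exceeds the
    original estimate; for events that did not occur, a positive shift
    means recall is lower. A positive mean indicates hindsight bias.
    """
    shifts = [
        (r - o) if happened else (o - r)
        for o, r, happened in zip(original, recalled, occurred)
    ]
    return sum(shifts) / len(shifts)

# Fabricated data: probabilities assigned before an event, recalled after.
original = [0.30, 0.60, 0.50, 0.20]
recalled = [0.45, 0.70, 0.35, 0.10]
occurred = [True, True, False, False]

print(hindsight_shift(original, recalled, occurred))  # 0.125
```

Here every recalled probability has drifted toward what actually happened, so the mean shift is positive even though no single distortion is large.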

The hindsight bias has potentially important implications for people’s assessments of behavior and individuals. When people evaluate past performances, they often know the outcomes (e.g., whether a medical diagnosis was valid or a military tactic was successful). The hindsight bias can inappropriately lead people to criticize individuals who fail and admire those who succeed. For example, physicians informed of both a patient’s symptoms and autopsy results indicating the cause of death are surprised that other physicians could have made an incorrect diagnosis prior to the autopsy. Physicians told of the symptoms but not of the autopsy results are less certain of the diagnosis (Dawson et al., 1988). Alternatively, when events turn out well, successful people are sometimes credited with too much foresight. In War and Peace, Tolstoy accuses Russian historians of making this error in judgment (cited in Hoffrage et al., 2000). Historians wrote that the Russian army defeated Napoleon by tricking him into marching toward Moscow. However, the Russian victory was probably more attributable to luck than to foresight.

Researchers have extended investigations of hindsight by examining how people’s current knowledge and beliefs influence their recall of their earlier attitudes, feelings, and behaviors. While recalling the past, people are often very aware of the present (Ross, 1989). For example, people know how they currently feel about a politician but may be less certain how they felt years earlier. Unless they have a compelling reason to think that they have changed, they often presume personal consistency, supposing that their earlier opinions resemble their current beliefs (Ross, 1989). The perception of consistency helps people to sustain a sense of personal identity: They are the same individuals that they were yesterday or last year (Erikson, 1968; Epstein, 1973; James, 1950). When people assume stability in the face of actual change, they exaggerate the similarity of the past to the present and evidence a consistency bias in recall.

URL: https://www.sciencedirect.com/science/article/pii/B9780123705099001741

Planning

M.V. Pezzo, ... O. Wilder, in Encyclopedia of Human Behavior (Second Edition), 2012

Hindsight Bias

Sense-making also produces another bias, called hindsight bias. Hindsight bias is the tendency to believe that one could have predicted an event with greater accuracy than is really the case. Although the bias is not large, it is somewhat akin to the belief that you ‘knew it all along.’ Many researchers argue that hindsight bias prevents us from learning from the past because our mistakes are less surprising than they should be. The implication for planning, of course, is that people will continue to make the same mistakes in the future. One way to counteract the hindsight bias is to make an explicit prediction for an event before the event occurs. This allows you to compare your (unbiased) prediction with reality and, with any luck, learn from the past so that you can plan for a more successful future.

URL: https://www.sciencedirect.com/science/article/pii/B9780123750006002809

Cognitive Psychology of Memory

D.S. Lindsay, in Learning and Memory: A Comprehensive Reference, 2008

2.19.5.6 The Knew-It-All-Along Effect

The knew-it-all-along (KIA) effect, or hindsight bias, is observed when persons report that they possessed knowledge at a previous point in time that they in fact acquired subsequent to that time (Fischhoff, 1975; Wood, 1978; Hasher et al., 1981). Of particular interest here is the memory version of the KIA effect, in which subjects answer a set of questions in phase 1, are then exposed to the correct answers to some of those questions in phase 2, and in phase 3 are asked to re-answer the questions exactly as they did in the first phase. The standard finding in this procedure is that subjects’ re-answers to items for which they had been shown the correct answers are often shifted in the direction of the correct answers.
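For numeric-answer materials, the shift described above can be quantified as the fraction of the initial error that the re-answer recovers, compared between items that did and did not receive feedback. A minimal sketch under that assumption; the function, questions, and values are hypothetical, not taken from any KIA study:

```python
def mean_shift_toward_correct(first, second, correct):
    """Mean fraction of the initial error removed by the re-answer.

    0 = no movement toward the correct answer; 1 = re-answer exactly
    correct; negative = moved away. Items answered correctly in phase 1
    are skipped (no room to shift).
    """
    shifts = []
    for a1, a2, c in zip(first, second, correct):
        if a1 == c:
            continue
        shifts.append((abs(a1 - c) - abs(a2 - c)) / abs(a1 - c))
    return sum(shifts) / len(shifts)

# Hypothetical numeric trivia (e.g., heights, distances), phase-1 answers,
# and phase-3 re-answers with and without exposure to the correct values.
correct      = [8849, 6371, 40075]
phase1       = [8000, 5000, 30000]
with_feedback    = [8600, 6000, 38000]
without_feedback = [8100, 5200, 29000]

print(mean_shift_toward_correct(phase1, with_feedback, correct))     # larger
print(mean_shift_toward_correct(phase1, without_feedback, correct))  # smaller
```

The KIA effect corresponds to the first value reliably exceeding the second: re-answers for fed-back items are pulled toward the truth.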

When subjects demonstrate a KIA effect, do they have an (illusory) subjective experience of remembering themselves giving newly learned correct answers on the initial test? Or is their experience merely one of guessing or inferring their prior responses? There is evidence that, under at least some conditions, subjects fail to appreciate the extent to which their re-answers are influenced by the experimental exposure phase in KIA procedures (Begg et al., 1996) and in closely related procedures (e.g., Prentice and Gerrig, 1999; Marsh et al., 2003), but do subjects remember giving correct answers that they did not really give?

To explore this question, Michelle Arnold and I (Arnold and Lindsay, in press) conducted KIA experiments in which subjects were asked to report, for each re-answer, whether they: (1) remembered giving that answer initially, (2) knew they had given that answer without being able to recollect having done so, or (3) felt that they were merely guessing or inferring that they had given that answer. Under standard KIA procedures (passive exposure to the correct answers to trivia questions), when subjects showed a KIA effect they almost always reported guessing or inferring their prior answers. But when the materials were insight problems and the second phase involved providing subjects with sufficient cues to solve the problems, then they quite often subsequently reported false memories of answering questions correctly in the first phase. Presumably in the latter procedure, memories of having been led to figure out a problem in phase 2 were highly confusable with memories of having spontaneously solved that problem in phase 1.

URL: https://www.sciencedirect.com/science/article/pii/B9780123705099001753

Elicitation of Probabilities and Probability Distributions

L.J. Wolfson, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Cognitive Aspects of Elicitation

In reporting subjectively held beliefs and preferences, there are several psychological heuristics that can lead to misrepresentation (see Cognitive Psychology: Overview). For more detailed discussion, early work on the subject can be found in Kahneman et al. (1982), Kyburg and Smokler (1980), and Hogarth (1987); updated coverage is provided by Poulton (1986) and Wright and Ayton (1994). The most common problems in eliciting subjective opinions come from:

(a)

Overconfidence. Probability assessors tend to underestimate variability and the tails of the distribution.

(b)

Hindsight bias. Most assessors believe they would have correctly predicted the outcome of an event; thus only the outcomes that actually occurred are viewed as having nonzero probability of occurrence.

(c)

Representativeness. Expert judgments can be based on the synthesis of previously observed data. If that data came from small samples, it may not be representative. Other terms often used in conjunction with this heuristic are base-rate neglect, small-sample fallacy, and misperception of randomness. Another well-known aspect of representativeness is the conjunction fallacy, where higher probability is given to a well-known event that is a subset of an event to which lower probability is assigned.

(d)

Availability. The probability of an event is judged by the frequency with which instances of it can be recalled from memory. The classic example of this is in the elicitation of beliefs about likely causes of death; botulism, which typically gets a great deal of press attention, is usually overestimated as a cause of death, whereas diabetes, which does not generate a great deal of media attention, is underestimated as a cause of death.

(e)

Adjustment and anchoring. When an initial assessment is made, elicitees often make subsequent assessments by adjusting from the initial anchor, rather than using their expert knowledge.

To overcome possible biases introduced in the elicitation of probabilities and utilities by these heuristics, Kadane and Wolfson (1998) summarize several principles for elicitation:

(a)

Expert opinion is the most worthwhile to elicit.

(b)

Experts should be asked to assess only observable quantities, conditioning only on covariates (which are also observable) or other observable quantities.

(c)

Experts should not be asked to estimate moments of a distribution (except possibly the first moment); they should be asked to assess quantiles or probabilities of the predictive distribution.

(d)

Frequent feedback should be given to the expert during the elicitation process.

(e)

Experts should be asked to give assessments both unconditionally and conditionally on hypothetical observed data.
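Principle (c) above, asking for quantiles rather than moments, can be illustrated with a small sketch: if the analyst is willing to assume a normal predictive distribution, an elicited median and 90th percentile fully determine it. This is a sketch under that normality assumption, not a prescription from Kadane and Wolfson (1998), and the numbers are hypothetical:

```python
from statistics import NormalDist

def normal_from_quantiles(median, p90):
    """Back out (mu, sigma) of a normal predictive distribution from an
    elicited median and 90th percentile, so the expert is never asked
    for a variance directly."""
    z90 = NormalDist().inv_cdf(0.90)   # standard-normal 90th percentile, ~1.2816
    mu = median                        # for a normal, median = mean
    sigma = (p90 - median) / z90
    return mu, sigma

# Hypothetical elicitation: expert says median 100, 90th percentile 120.
mu, sigma = normal_from_quantiles(100.0, 120.0)
print(mu, round(sigma, 2))  # 100.0 15.61
```

Feedback (principle (d)) could then be given by reporting other quantiles of the fitted distribution back to the expert for confirmation.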

URL: https://www.sciencedirect.com/science/article/pii/B0080430767004125

Medicolegal Issues in Diagnostic Imaging

Saurabh Jha MBBS, in Radiology Secrets Plus (Third Edition), 2011

13 What is hindsight bias, and why is it important medicolegally?

Medicolegally, radiology is unique in that the evidence for examination (the image) remains for subsequent scrutiny, in contrast to physical examination findings or findings at endoscopy. It also lends itself to a phenomenon known as the hindsight bias—the tendency for people with knowledge of the actual outcome to believe falsely that they would have predicted it. In 90% of chest films of patients with lung cancer that were initially reported as normal, the cancer could be seen in hindsight. When a radiologist makes a perceptual error and the finding is subsequently seen, it is difficult to determine whether the finding was apparent only in retrospect, in light of all the clinical information. In other words, it is difficult to determine when a “miss” is negligent, and when it is simply an error of perception.

URL: https://www.sciencedirect.com/science/article/pii/B9780323067942000754

Cognitive Bias☆

Andreas Wilke, Rui Mata, in Reference Module in Neuroscience and Biobehavioral Psychology, 2017

Memory Biases: Cognitive and Motivational Determinants

Would humans be better off if we had been blessed with superior cognitive abilities, such as unfailing memories? One view on the rather limited cognitive capacities of the human mind is that limitations, such as forgetting, have functional significance. Some researchers, like John Anderson, have suggested that the function of memory is not simply to store information, but rather to provide relevant information in specific situations. According to this view, the human memory system is organized such that it facilitates the retrieval of information that is recent, frequent, and relevant to the current context. In other words, memory is designed to provide the information we are most likely to need. Many man-made information systems are built in such a way. For example, computer applications usually incorporate a timesaving feature as follows: When a user tries to open a document file, the application presents a “file buffer,” a list of recently opened files from which the user can select. Whenever the desired file is included on the list, the user is spared the effort of searching through the file hierarchy. For this device to work efficiently, the application must provide the user with the desired file. It does so by “forgetting” files that are considered unlikely to be needed, on the basis of the assumption that the time since a file was last opened is negatively correlated with its likelihood of being needed now. In other words, such a system has a bias toward information that is likely to be needed.

Although memory systems are very often efficient they can sometimes fail because forgetting and sensitivity to contextual knowledge may lead to systematic error. The hindsight bias is one of the most frequently cited and researched cognitive biases in the psychological literature. Hindsight bias is a type of memory distortion in which, with the benefit of feedback about the outcome of an event, people's recalled judgments are typically closer to the outcome of the event than their original judgments were. Research on hindsight bias is particularly important because it is a ubiquitous phenomenon and one with potentially detrimental consequences in applied settings, such as law and medicine.

In the 1970s, Baruch Fischhoff was concerned with professionals’, such as physicians’ or politicians’, exaggerated feeling of having known all along how patients’ recoveries or elections were going to turn out. To study this issue empirically, Fischhoff asked participants to assess the probabilities of various possible outcomes concerning upcoming events, for example, President Nixon's historic trips to China and the Soviet Union (e.g., Pres. Nixon will meet Chairman Mao; Pres. Nixon will announce that the trip was a success). After the trips, participants were asked to recall their predictions. Results showed that participants tended to exaggerate what they had known in foresight.

There are two common experimental designs that have been used in the psychological literature. In the memory design, participants first make judgments concerning some stimuli, then receive feedback on some or all of the items, and are finally asked to recall the original judgments. In the hypothetical design, participants first receive feedback concerning some or all of the items and are then asked to say what they would have estimated had they not been given feedback. Empirical results using either design have shown that recalled or hypothetical estimates are commonly biased toward the feedback information.

At present, there is no single theory that can explain all patterns of data and moderator variables that have been studied in laboratory or real-world settings (e.g., expertise, experimental materials). One potential reason for this is that multiple processes are involved in producing the effect. In fact, there is broad consensus that the bias is multiply determined and involves both cognitive and motivational factors.

Regarding cognitive factors, the prevalent idea is that both processes of retrieval and reconstruction play a role. For example, when reporting the original judgment, participants try to retrieve the specific memory of the event as well as reconstruct the original judgment process. Accordingly, the hindsight bias effect can occur by new information (feedback) biasing (1) the retrieval cues used to query memory for the original judgment, (2) the reconstruction of the judgment process, or (3) both. This view also suggests a prominent role for inhibition processes. Accordingly, research shows that individuals with strong inhibitory deficits have more difficulties inhibiting feedback about the outcome of an event from entering working memory and thus show increased hindsight bias. As expected, this is particularly the case when the correct response is either in sight or accessible in working memory at the time of the attempt to recall one's original response.

In addition, there is evidence that hindsight bias may serve motivational goals. For example, people seem to change the perceived probabilities of events so that negative events appear inevitable as a way to mitigate disappointment and personal blame. This seems to occur, however, mostly for situations that people can control and that are unexpected, suggesting that such phenomena should be interpreted in the light of people's attempts at preparing for future events. In other words, these forms of hindsight bias can be seen as arising from the use of a sense-making process, whereby people integrate all they know about a topic into a coherent mental model. In this light, human memory is not so much designed to accurately reconstruct the past as it is to make sense of it in order to better deal with the future.

URL: https://www.sciencedirect.com/science/article/pii/B9780128093245063768

Subjective Probability Judgments

M. Bar-Hillel, in International Encyclopedia of the Social & Behavioral Sciences, 2001

4 Subjective Uncertainty

When judging probability, people can locate the source of the uncertainty either in the external world or in their own imperfect knowledge (Kahneman and Tversky 1982). The latter is what Bayesian theory means by the term subjective probability (whereas in this article, recall, the adjective refers to the mode of assessment, rather than to the locus of the uncertainty). When assessing their own uncertainty, people tend to underestimate it. The two major manifestations of this tendency are called overconfidence, and hindsight bias.

Overconfidence concerns the fact that people overestimate how much they actually know: when they are p percent sure that they have answered a question correctly or predicted correctly, they are in fact right on average less than p percent of the time (e.g., Lichtenstein et al. 1982, Keren 1991). Hindsight bias concerns the fact that people overestimate how much they would have known had they not possessed the correct answer or prediction: events that are given an average probability of p percent before they are known to have occurred are given, in hindsight, probabilities higher than p percent (Fischhoff 1975)—a phenomenon sometimes known as ‘I knew it all along.’
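The overconfidence finding, being right less than p percent of the time when p percent sure, is a statement about calibration, and is straightforward to compute from paired confidence ratings and accuracy scores. A minimal sketch with fabricated responses (the function and data are illustrative only):

```python
from collections import defaultdict

def calibration(confidences, correct):
    """Observed accuracy at each stated confidence level.

    Overconfidence appears as observed accuracy falling below the
    stated level, e.g., only 60% correct among '90% sure' answers.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for p, ok in zip(confidences, correct):
        totals[p] += 1
        hits[p] += ok
    return {p: hits[p] / totals[p] for p in totals}

# Fabricated responses: stated confidence and whether the answer was right.
stated  = [0.9, 0.9, 0.9, 0.9, 0.9, 0.7, 0.7, 0.7, 0.7]
correct = [1,   1,   1,   0,   0,   1,   1,   0,   0]

print(calibration(stated, correct))  # {0.9: 0.6, 0.7: 0.5}
```

In this toy data both confidence levels are overconfident; a well-calibrated assessor would produce accuracies matching the stated levels.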

Both these biases can result from biased availability. After all, what we know is almost by definition more available to us than what we do not know. Regarding hindsight, it is enough to note that once something (such as the final outcome) has been brought to mind, it is nearly impossible to ignore (try to ‘not think of a white elephant’).

Regarding overconfidence, it has been shown that it can be dampened by instructing people to ‘think of reasons you might be wrong’ prior to asking them to assess their own confidence (Koriat et al. 1980)—a direct manipulation to increase the availability of the complementary event, the one competing for a share of the total 100 percent probability. In addition, overconfidence turns to underconfidence in very hard questions (e.g., May 1986). Perhaps in answering easy questions, one's best guess is mentally prefaced by: ‘I'm not sure I know, but I think that …’ which focuses on the available. In answering hard questions, on the other hand, one's best guess might be mentally prefaced by: ‘I don't think I know, but I guess that …’ which focuses on the unavailable.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767006379

Multinomial processing trees as theoretical bridges between cognitive and social psychology

Jimmy Calanchini, ... Jeffrey W. Sherman, in Psychology of Learning and Motivation, 2018

2.1 The process dissociation model and its relatives

Jacoby's process dissociation model is one of the most widely-applied MPTs within both cognitive and social psychology. The initial version of the model that Jacoby (1991) proposed is a control-dominant, or “early selection,” model, in which the relatively more unintentional process influences responses only when the relatively more intentional process fails. For example, on a recognition memory task, recollection is a relatively more intentional process than familiarity. This control-dominant version of the model specifies that when the two processes would produce different responses (e.g., recollecting that a word was not presented before, even though it seems familiar), recollection will drive the response if both processes are activated, and familiarity can only drive a response when recollection fails. Lindsay and Jacoby (1994) proposed an alternate automatic-dominant, or “late correction,” version of the PD model. In this version of the model, the relatively more intentional process only influences responses when the relatively more unintentional process fails. For example, on a Stroop task, color naming is a relatively more intentional process than word reading. This version of the model specifies that when the two processes would produce different responses (e.g., RED printed in green), word reading will drive the response if both processes are activated, and color naming can only drive a response when word reading fails (see also Jacoby, 1998 for a synthesis of these approaches).
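The control-dominant structure described above yields Jacoby's (1991) standard estimation equations for a recognition task with inclusion and exclusion conditions: P(inclusion) = R + F(1 − R) and P(exclusion) = F(1 − R), which can be solved for recollection R and familiarity F. A minimal sketch; the function name and the input proportions are illustrative:

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Solve the control-dominant process dissociation equations
        P(inclusion) = R + F(1 - R)
        P(exclusion) = F(1 - R)
    for recollection R and familiarity F."""
    r = p_inclusion - p_exclusion            # subtracting removes F(1 - R)
    f = p_exclusion / (1 - r) if r < 1 else float("nan")
    return r, f

# Hypothetical observed proportions of old responses in each condition.
r, f = process_dissociation(p_inclusion=0.80, p_exclusion=0.30)
print(round(r, 2), round(f, 2))  # 0.5 0.6
```

The subtraction works precisely because, under the control-dominant assumption, familiarity drives responding only when recollection fails, so the F(1 − R) term is common to both conditions.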

Within cognitive psychology, a number of variations of Jacoby's (1991) process dissociation model have been proposed. For example, Buchner, Erdfelder, and Vaterrodt-Plunnecke (1995) added a parameter representing guessing or response biases that determines behavior when neither the intentional nor the unintentional memory process drives responses. By separately accounting for response biases, this extended model provides relatively more pure estimates of both types of memory. Hütter, Sweldens, Stahl, Unkelbach, and Klauer (2012) and Hütter and Sweldens (2013) used a similar approach to examine the extent to which evaluative conditioning depends on contingency awareness. They operationalized memory for stimulus pairings as the relatively more intentional process, the conditioned attitude resulting from stimulus pairings as the relatively less intentional process, and accounted for response biases in the absence of either of these influences. In doing so, they demonstrated that evaluative conditioning can create attitudes even when participants are not aware of stimulus contingencies.

Though Jacoby and colleagues' process dissociation models were developed within the domain of cognitive psychology, and have primarily been applied to the study of memory, the process dissociation procedure has also been successfully applied to a variety of topics within social psychology. For example, social psychologists often distinguish between attitudes that are measured explicitly versus implicitly: Explicit attitudes are assessed directly, through self-report measures, whereas implicit attitudes are inferred indirectly, often from the speed or accuracy of responses rather than the contents of responses, per se. Moreover, implicit measures often obscure what is being measured to a greater degree than do explicit measures, and responses on implicit measures are more difficult to strategically feign than are responses on explicit measures. Consequently, implicit attitude measures were initially assumed to assess qualitatively distinct processes from those assessed by explicit attitude measures (Greenwald & Banaji, 1995; Wilson, Lindsey, & Schooler, 2000). Implicit measures were thought to assess automatic or unconscious attitudes, whereas explicit measures were thought to assess conscious or deliberately controlled attitudes. In retrospect, this assumption clearly mirrors the conflation of task with process in recognition memory that Jacoby (1991) addressed. Subsequent social psychological research using the process dissociation procedure had similar results, demonstrating that responses on implicit measures are influenced by both relatively automatic (e.g., stimulus-driven behavioral impulses) and controlled (e.g., intentional responding) processes.

Within social psychology, Jacoby's (1991) control-dominant model has been applied to a wide variety of implicit measures of stereotyping and prejudice in order to reveal the joint contributions of multiple processes, including the weapons identification task (Conrey et al., 2005; Payne, 2001), the shooter task (Plant & Peruche, 2005), and the IAT (Payne & Bishara, 2009). Additionally, the process dissociation procedure has been used by social psychologists to identify and measure different processes in a variety of domains, such as moral reasoning (Conway & Gawronski, 2013; Gawronski, Armstrong, Conway, Friesdorf, & Hütter, 2017), processing fluency (Fazio, Brashier, Payne, & Marsh, 2015; Unkelbach & Stahl, 2009) and judgment and decision making (Damian & Sherman, 2013; Ferreira, Garcia-Marques, Sherman, & Sherman, 2006). For example, Ferreira et al. (2006) tested the assumption that logical reasoning and heuristic decision making are opposite poles on a processing continuum, such that increasing the use of one form of processing necessarily decreases the use of the other (e.g., Petty & Cacioppo, 1986). To do so, they created a series of decisions in which logical and heuristic processing would produce the same judgment in some cases, but produce conflicting judgments in other cases. They also varied the instructions given to participants in ways that should be expected to increase reliance on either logical (e.g., behave like a scientist) or heuristic (e.g., use your intuition) reasoning. By applying the process dissociation model to participants' responses across a series of conditions, Ferreira et al. (2006) demonstrated that logical and heuristic reasoning make independent and dissociable contributions to judgments. As such, the process dissociation procedure provided a more nuanced understanding of the relationship between two processes already assumed to drive responses in a given domain, and provided a means to measure those processes separately (but see Klauer, Dittrich, Scholtes, & Voss, 2015).

Additional processes have been incorporated into conceptual extensions of Jacoby's (1991) model, which, in turn, have expanded process-level understanding of a variety of behaviors. For example, Sherman and colleagues' quadruple process model (Quad model: Conrey et al., 2005; Sherman et al., 2008) builds upon the basic assumption that a relatively unintentional process (i.e., activated mental associations) and a relatively intentional process (i.e., detection of appropriate responses) jointly drive responses on implicit measures. Additionally, the Quad model accounts for guessing or response bias (e.g., Buchner et al., 1995), and includes a process that intervenes to overcome the behavioral responses activated by mental associations when they conflict with the detected correct response.

The structure of the Quad model is depicted as a processing tree in Fig. 2. Using as an example an IAT that presents stimuli representing the ingroup and outgroup along with positive and negative words, a stimulus representing the outgroup might activate negative mental associations (AC), which produce an incorrect response tendency in the incompatible condition (i.e., when “outgroup” and “good” share a response key). In contrast, accuracy-oriented detection (D) always produces a correct response tendency (i.e., to press the task-appropriate button). To the extent that biasing associations are overcome (OB), detection will drive a correct response. Thus, the likelihood of one path toward a correct response on this trial type (an outgroup stimulus in the incompatible condition) can be represented by an equation reflecting the activation of these three processes: AC × D × OB. However, to the extent that the overcoming bias process fails (1 − OB), activated negative associations will drive an incorrect response on this trial type, which can be represented by the equation: AC × D × (1 − OB). Importantly, these are not the only possible combinations of processes through which responses can be made on a task like the IAT; instead, the Quad model posits that multiple combinations of processes can drive responses. For example, a correct response to an outgroup stimulus on an incompatible trial can also result from no associations activated and detection succeeding, (1 − AC) × D, or from no associations activated, detection failing, and a positivity bias driving the response, (1 − AC) × (1 − D) × G. Similarly, an incorrect response to this trial type can also result from activated associations and detection failing, AC × (1 − D), or from no associations activated, detection failing, and a negativity bias driving the response, (1 − AC) × (1 − D) × (1 − G). 
Taken together, the likelihood of making a correct response to an outgroup stimulus on an incompatible trial can be represented by the sum of these three pathways: [AC × D × OB] + [(1 − AC) × D] + [(1 − AC) × (1 − D) × G]; and the likelihood of making an incorrect response can be represented by the sum of these three pathways: [AC × D × (1 − OB)] + [AC × (1 − D)] + [(1 − AC) × (1 − D) × (1 − G)].
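The path equations above can be checked numerically. In the sketch below, the parameter values for AC, D, OB, and G are illustrative assumptions (the Quad model would estimate them from trial-level data); the point is only that the three correct-response paths and the three incorrect-response paths partition all outcomes, so their probabilities sum to 1.

```python
# Quad model path probabilities for an outgroup stimulus on an incompatible
# IAT trial (Conrey et al., 2005). Parameter values are illustrative
# assumptions, not estimates from any dataset.

AC = 0.6  # probability that (negative) associations are activated
D  = 0.8  # probability of detecting the correct response
OB = 0.7  # probability of overcoming the biased association
G  = 0.5  # guessing bias: probability of responding "positive"

# Three paths to a correct response (see text):
p_correct = (AC * D * OB) + ((1 - AC) * D) + ((1 - AC) * (1 - D) * G)

# Three paths to an incorrect response:
p_incorrect = (AC * D * (1 - OB)) + (AC * (1 - D)) + ((1 - AC) * (1 - D) * (1 - G))

print(round(p_correct, 3), round(p_incorrect, 3))  # → 0.696 0.304
```

Because the six paths exhaust the processing tree for this trial type, `p_correct + p_incorrect` equals 1 for any parameter values in [0, 1].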


Fig. 2. A portion of Conrey et al.'s (2005) quadruple process (Quad) model. The oval represents a test stimulus, and the rectangles represent latent cognitive processes hypothesized to influence responses to the stimulus. Parameters with lines leading to them are conditional upon all preceding parameters. The table on the right side of the figure depicts correct (✓) and incorrect (✕) responses as a function of process pattern and trial type.

The Quad model has been successfully applied to a variety of implicit measures, including the IAT, priming tasks (Conrey et al., 2005), and the Go/No-Go association task (Gonsalkorale, von Hippel, Sherman, & Klauer, 2009; Ramos et al., 2015). One way in which the Quad model has been instrumental is by expanding understanding of implicit attitude variability and malleability. A process-pure interpretation of implicit measures can only attribute variations in implicit attitudes to variations in mental associations. In contrast to this perspective, research using the Quad model has demonstrated a number of cases in which other non-associative processes contribute to implicit attitude variability. For example, older people demonstrate greater implicit racial bias than younger people, but biased mental associations do not vary with age. Instead, the ability to inhibit the influence of associations decreases with age, and can account for age differences in IAT performance (Gonsalkorale, Sherman, & Klauer, 2009, 2014). Thus, research using the Quad model has provided a more nuanced understanding of the combinations of processes that contribute to variations in implicit task performance.

This section of the chapter is not meant to provide a complete list of MPTs that have been applied to response conflict tasks, or even a comprehensive discussion of the MPTs described here. MPTs have been used to investigate a wide variety of topics within cognitive and social psychology, such as source monitoring (Batchelder & Riefer, 1990; Batchelder, Riefer, & Hu, 1994; Bayen, Murnane, & Erdfelder, 1996; Klauer & Ehrenberg, 2005; Klauer & Meiser, 2000), social categorization (Klauer & Wegener, 1998), illusory truth (Begg, Anas, & Farinacci, 1992), hindsight bias (Erdfelder & Buchner, 1998), gender bias (Buchner & Wippich, 1996), age-related false memory (Jacoby, Bishara, Hessels, & Toth, 2005), stereotype formation (Meiser & Hewstone, 2004), and propositional reasoning (Klauer & Oberauer, 1995; Oberauer, 2006), among many others. Additionally, a number of MPTs have been applied to various implicit measures, such as the extrinsic affective Simon task (Stahl & Degner, 2007), the affect misattribution procedure (Payne, Hall, Cameron, & Bishara, 2010), the stereotype misperception task (Krieglmeyer & Sherman, 2012), and the IAT (Meissner & Rothermund, 2013). In the following sections, we highlight how MPTs have been used—and can be further used—to build bridges between cognitive and social psychology.


URL: https://www.sciencedirect.com/science/article/pii/S0079742118300148

How does hindsight bias affect psychological research?

Hindsight bias gets in the way by distorting the internal track record we keep of our past predictions. This distortion can lead to overly confident future predictions, which in turn justify risky decisions with bad outcomes. More broadly, the bias prevents us from learning from our experiences.

Why should research psychologists beware of hindsight bias and overconfidence?

Hindsight bias can lead to overconfidence in your ability to predict what is going to happen. This overconfidence can produce poor decision-making and can affect how you assign blame for events.

How does hindsight bias create a need for psychological research?

Hindsight bias demonstrates the need for psychological research because things are not always what they seem: common sense is internally inconsistent. One claim can feel like common sense while its polar opposite also feels like common sense, so intuition alone cannot tell us which is true.

How does hindsight bias and overconfidence relate to intuition?

Hindsight bias refers to people's tendency to overrate, after the fact, their ability to have predicted the outcome of a past event. It leads them to believe that they can predict future events as well, and it may mislead individuals into thinking they have exceptional intuition, prompting irrational decisions.