
An out-of-control trolley is speeding toward five workers who cannot move out of the way in time. You see a switch connected to the tracks that, if pulled, will divert the trolley onto a sidetrack and kill a single worker who is standing there. Would you pull the switch to sacrifice one person to save five? Or would you push a large man over a footbridge into the path of the trolley to stop it and save those workers? The former is widely known as the trolley dilemma, the latter as the footbridge dilemma. What is philosophically interesting about these dilemmas is that people often react differently to them, even though both involve sacrificing an innocent person to save a greater number of innocent people. When presented with the trolley and footbridge dilemmas, most people intuitively accept pulling the switch, but reject pushing the man, as a morally permissible way of sacrificing an innocent person to save more innocent people []. What explains these different reactions?

Joshua Greene [; 2009] asserts that most people find pushing the large man off the footbridge morally unacceptable because their evolutionary-based negative emotional reactions to actions involving “up close and personal” harm distort their judgments. Moreover, Greene holds that our negative reactions to actions involving personal harm result in deontological judgments. The idea is that since our deontological judgments have this epistemically suspect, evolutionary origin, they are normatively inferior to our consequentialist judgments, which arise from more rational thought processes such as cost-benefit analysis. My aim in this paper is to show that Greene is not entitled to make a substantive normative distinction between consequentialism and deontology.

In the first section, I lay out Greene’s main argument and the results of his initial study, and in the second section, I consider an objection to Greene’s argument and present his revised position. I distinguish, in the third and fourth sections, between different types of moral intuitions and claim that our theoretical moral intuitions, as opposed to concrete and mid-level ones, are independent of direct evolutionary influence because they are the product of autonomous [gene-independent] moral reasoning. Since both consequentialist and deontological theoretical intuitions are immune to nonmoral biases, it would be wrong to make a substantive normative distinction between the two theories. Finally, in the fifth section, I describe how the exercise of moral reasoning and the process of cultural evolution could generate gene-independent consequentialist and deontological moral intuitions that allow us to grasp objective moral facts and distinctions. I conclude that [1] the theory I support offers a better explanation of our consequentialist and deontological responses to particular cases, and that [2] Greene is not justified in his claim that deontology is normatively inferior to consequentialism.

1. Greene’s argument

Greene claims that consequentialism is normatively superior to deontology. To bolster his claim, he aims to undermine the justification of our deontological beliefs by focusing on their evolutionary origins. That is, on his account, deontological judgments are typically fueled not by the rational pull of objective moral facts but by morally irrelevant emotional responses which evolved as a means of increasing biological fitness. These emotional responses, according to Greene, are more likely to occur when one finds oneself in a situation involving personal rather than impersonal harm. Greene’s crucial claim is that whether harm is “up close and personal” or impersonal is morally irrelevant; only the harm itself carries moral weight. Thus, strong negative responses to cases involving personal harm cannot be the result of the rational force of a deontological principle. Rather, they must be rooted in our evolutionary past: “These responses evolved as a means of regulating the behavior of creatures who are capable of intentionally harming one another, but whose survival depends on cooperation and individual restraint” [, p. 43].

Greene bases his claim on the results of a study he and his colleagues conducted regarding responses to various moral scenarios, including the famous trolley dilemma. Greene et al.’s [2001] study shows that whether the harm is “up close and personal” affects people’s moral judgments. In the study, subjects are presented with both personal and impersonal moral dilemmas, and their brain activity is measured using fMRI as they respond. The results are interesting: [a] when subjects are presented with a personal dilemma like the footbridge case, higher activity in brain regions connected with emotional response is observed compared to their brain activity when presented with an impersonal dilemma like the trolley case; [b] people who, contrary to the majority opinion, decided to push the large man in the footbridge case took longer to finalize their decisions than people confronted with the trolley case. Greene draws two conclusions from these findings: [1] people’s emotional responses to morally indistinguishable dilemmas differ according to the way harm is inflicted; [2] the longer response time shows that the cognitive system in the subjects’ brains overrides the emotional system and results in the decision to push the large man. Therefore, Greene and his colleagues think that “the crucial difference between the trolley dilemma and the footbridge dilemma lies in the latter’s tendency to engage people’s emotions in a way that the former does not” [, p. 2106].

Greene thinks that consequentialism is normatively superior to deontology because he associates deontological judgments with the emotional system and consequentialist judgments with the cognitive system, and more importantly, he claims that the emotional system that produces deontological judgments is distorted by morally irrelevant factors such as the proximity of harm or physical contact. Deontologists usually concede the importance of producing the best consequences, as they do in the trolley case. Nevertheless, they sometimes choose to place constraints on consequentialism on account of rights or intrinsic human dignity, constraints which apply in the footbridge case. Greene et al.’s study suggests that deontologists offer these constraints not due to the rational force of a deontological principle but due to their strong negative reaction to instances involving personal harm. Greene’s argument can be summarized as follows:

  • P1. The emotional system that induces deontological judgments is affected by features that render a moral dilemma personal.

  • P2. The features that render a moral dilemma personal are morally irrelevant.

  • C1. Therefore, the emotional system that induces deontological judgments is affected by morally irrelevant features [P1, P2; empirical conclusion].

  • P3. Moral judgments that are affected by morally irrelevant features do not reliably track moral facts.

  • P4. Deontological judgments do not reliably track moral facts [C1, P3].

  • P5. Moral judgments have a genuine rational or normative power only if they reliably track moral facts.

  • C2. Therefore, deontological judgments, in contrast to consequentialist judgments, lack a genuine rational or normative power [P4, P5; philosophical conclusion].
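
To make the inferential structure of this summary explicit, it can be rendered schematically. The following is a minimal sketch in the Lean proof language; the propositional labels are my own shorthand for the claims above [they are not Greene’s terms], and the sketch only displays how the conclusions depend on the premises.

    -- Propositional placeholders [my labels, not Greene's]:
    --   AffectedByIrrelevant : the emotional system behind deontological judgments
    --                          is affected by morally irrelevant features [C1]
    --   TracksMoralFacts     : deontological judgments reliably track moral facts
    --   HasNormativePower    : deontological judgments have genuine rational or
    --                          normative power
    variable (AffectedByIrrelevant TracksMoralFacts HasNormativePower : Prop)

    -- Given C1, P3 [applied to deontological judgments], and P5, C2 follows:
    example
        (c1 : AffectedByIrrelevant)
        (p3 : AffectedByIrrelevant → ¬ TracksMoralFacts)
        (p5 : HasNormativePower → TracksMoralFacts) :
        ¬ HasNormativePower :=
      fun h => (p3 c1) (p5 h)

As the sketch makes plain, once C1 is granted, C2 follows by elementary steps; the philosophical work is done by the premises, above all P2 and P3.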

2. Personal force

There is a quick objection to Greene’s first premise. Suppose that we change the conditions in the footbridge case: as in the footbridge case, a large man is standing on a footbridge, but you are standing near a remote switch. If you pull the switch, you will knock the victim off the bridge by activating a trap door. Let’s call this case Remote Footbridge. In Remote Footbridge, we are using someone as a mere means as we do in the footbridge case, but now the harm appears to be impersonal. If this variation does not change our negative reaction, then Greene is mistaken: the nature of our reaction to footbridge cases does not depend on whether the harm is personal. Rather, we have strong emotional responses to such cases because we have good reasons to believe that using an innocent person as a mere means is wrong.

In a subsequent study [], Greene and his colleagues introduce variations of the footbridge case, including Remote Footbridge. The aim of the study is to improve on the 2001 study by making pairwise comparisons between variations of the footbridge case in order to find out which morally irrelevant factor we respond to. There are three morally irrelevant factors in the standard footbridge dilemma: [i] spatial proximity, [ii] physical contact, [iii] personal force. Personal force differs from physical contact in that personal force occurs “when the force that directly impacts the other is generated by the agent’s muscles, as when one pushes another with one’s hands or with a rigid object” [, p. 365]. Greene and his colleagues introduce three variations in addition to the standard case:

  • Remote Footbridge: Everything is the same except that you knock the victim off the bridge using a trap door and a remote switch. This action involves none of the morally irrelevant factors.

  • Footbridge Pole: Everything is the same except that you use a pole instead of your hands to push the victim. This action involves spatial proximity and personal force.

  • Footbridge Switch: Identical to the Remote Footbridge except that you and the switch are next to the victim. This action only involves spatial proximity.

Comparing Remote Footbridge to the Footbridge Switch isolates spatial proximity. Comparing Standard Footbridge to Footbridge Pole isolates physical contact. And comparing Footbridge Switch to Footbridge Pole isolates personal force.
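
The logic of these comparisons can be made concrete with a small illustrative sketch in Python. The case names and factor assignments follow the descriptions above, but the code itself is mine and is not part of Greene et al.’s materials: each variant is tagged with the factors it involves, and a pairwise comparison isolates a factor exactly when the two variants differ in that factor alone.

    # Illustrative sketch of the factor-isolation logic; the factor assignments
    # follow the case descriptions in the text, the variable names are mine.
    cases = {
        "Standard Footbridge": {"spatial_proximity", "physical_contact", "personal_force"},
        "Footbridge Pole":     {"spatial_proximity", "personal_force"},
        "Footbridge Switch":   {"spatial_proximity"},
        "Remote Footbridge":   set(),
    }

    def isolated_factor(case_a, case_b):
        """Return the single factor on which two cases differ, if there is exactly one."""
        difference = cases[case_a] ^ cases[case_b]   # symmetric difference of factor sets
        return difference.pop() if len(difference) == 1 else None

    print(isolated_factor("Remote Footbridge", "Footbridge Switch"))   # spatial_proximity
    print(isolated_factor("Standard Footbridge", "Footbridge Pole"))   # physical_contact
    print(isolated_factor("Footbridge Switch", "Footbridge Pole"))     # personal_force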

The results reveal that there is no significant effect of spatial proximity [Remote Footbridge vs. Footbridge Switch] and physical contact [Standard Footbridge vs. Footbridge Pole] on our responses, but there is a significant effect of personal force [Footbridge Switch vs. Footbridge Pole]. That is, we are inclined to find harmful actions involving personal force morally wrong. Greene thereby shows that we generally don’t react negatively to actions that don’t involve personal force such as the actions in the Remote Footbridge and Footbridge Switch. The objection mentioned above, thus, collapses.

In his 2013 book Moral Tribes, Greene cites further evidence that could show that our gut reaction against actions involving personal force is in fact sensitive to the means/side-effect distinction. There is conflicting evidence on our sensitivity to that distinction. On the one hand, most people [81%] approve of harming someone using personal force if the harm occurs as a side effect rather than a means to an end [obstacle collide case]. On the other hand, most people [81%] approve of harming someone as a means when there is no personal force [loop case]. And most people [69%] disapprove of harming someone as a means when personal force is involved, as the footbridge case shows. Greene’s solution is what he calls the modular myopia hypothesis, according to which our emotional system is blind to harmful side effects and detects only harmful actions that are intended as a means to some end. That is, most people find it permissible to use personal force when the harm occurs as a side effect because they are emotionally [not cognitively] blind to the resulting harm. Greene thinks the means/side-effect distinction is morally irrelevant, and the reason we find it morally significant is that we first have these distorted intuitive judgments about the distinction, and we then imperfectly organize them into the Doctrine of Double Effect. So, Greene believes that the means/side-effect distinction is unjustified, just like the judgment that pushing the large man in the footbridge case is impermissible.

3. Types of moral intuitions and autonomous reasoning

Does Greene’s argument cast doubt on all deontological judgments? Michael Huemer’s categorization of moral intuitions will help us show that Greene is not entitled to associate selective pressures with our theoretical moral intuitions, be they consequentialist or deontological. On Huemer’s account, there are three types of moral intuitions:

  1. Concrete intuitions: Intuitions about particular cases such as trolley and footbridge dilemmas.

  2. Abstract [theoretical] intuitions: Intuitions about general moral principles such as the consequentialist intuition that the right action is the one whose consequence promotes the overall happiness, or the deontological intuition that it is wrong to use people as mere means.

  3. Mid-level intuitions: Intuitions about principles with an intermediate level of generality, such as the principle that one ought not to kill sentient beings for entertainment, or that one ought not to lie even if it is the only way to save one’s life.

Huemer claims that concrete and mid-level intuitions are more likely to be responsive to evolutionary biases, whereas abstract or theoretical intuitions are probably not directly responsive to them. We can give two reasons for this conclusion: [1] concrete and mid-level intuitions are related to strong emotional reactions, and [2] evolutionary mechanisms select for certain types of behavior to promote biological fitness.

Consider a concrete intuition such as “It is wrong that David had sex with his sister,” or a mid-level intuition such as “It is not permissible to kill sentient beings for entertainment.” These intuitions are more likely to arouse strong emotional reactions associated with our evolutionary past than a theoretical intuition such as “S is morally obligated to do x only if S is capable of doing x.” Furthermore, evolution might have endowed us with intuitions that favor certain types of behavior that are more likely to increase our reproductive success. These could include, for example, intuitions about incest, infanticide, murder, and promiscuity, among many others. However, it is less likely that it has endowed us with intuitions that favor certain types of abstract principles. Admittedly, there are studies that indicate our ancestors’ involvement in moral reflection to some extent; nevertheless, it is less than obvious that there is a meaningful relation between theoretical intuitions and reproductive success. It is true that some of our shared evaluative dispositions may have arisen from properties conducive to survival and reproduction. For instance, properties such as fractal patterns and symmetry could be the evolutionary foundations for our appreciation of beauty. Theoretical intuitions, by contrast, do not seem to be very helpful in terms of survival and reproduction. Abstract moral principles [e.g., one should always maximize utility, or one should never use people as a mere means] are too general to qualify as useful guides to survival and reproduction. It is likely that we reach abstract moral principles through autonomous reflection on our evolved reactive attitudes to particular cases. And over time, we convert these principles into intuitions and internalize them, as I describe in the fifth section.

Is the distinction between different types of moral intuitions too vague? We arguably need specific criteria that an intuition must meet for us to classify it reliably as theoretical, mid-level or concrete. I propose three criteria which an intuition must meet to qualify as a theoretical intuition: [1] it does not arise from a specific set of environmental circumstances of time and place [e.g., trolley cases], [2] it does not require acting in a specific way [e.g., pulling a switch], and [3] it does not involve a particular person [e.g., large man]. Let’s look at some examples:

  • [D1] It is unfair when some people are worse off than others owing to differences in their unchosen circumstances.

  • [D2] It is our duty to fulfill a promise, regardless of the goodness of the consequences.

  • [D3] It is morally wrong to prosecute and punish those known to be innocent.

  • [C1] An action is morally good if it increases the overall level of pleasure in the world.

  • [C2] An action is morally bad if it increases the overall level of pain in the world.

While D1, D2, and D3 are deontological theoretical intuitions, C1 and C2 are consequentialist theoretical intuitions. These intuitions do not specify any circumstances, any particular action, or any particular individual. Thus, it seems that they are not generated directly by our emotional responses to particular cases but rather by autonomous moral reasoning.

What makes moral reasoning autonomous or gene-independent? One way to interpret the evolutionary influence on our capacity for morality and the content of our moral judgments is to claim that moral thought and behavior are the direct result of evolutionary mechanisms and have the sole function of adapting humans to their changing surroundings. This strictly behavioristic and eliminativist view sees morality as a “certain type of behavioral pattern or habit, accompanied by some emotional responses” rather than as “a theoretical inquiry that can be approached by rational methods, and that has internal standards of justification and criticism” [, p. 142]. The latter theoretical conception of morality does not deny the fact that we and some species of nonhuman animals share certain basic psychological dispositions and emotional responses that have been shaped by evolutionary forces. Its distinctive claim is that we also possess a superior intelligence that allows us to evaluate, systematize, and occasionally say no to these pre-reflective dispositions and emotional responses. That is, we don’t automatically act on our adaptive dispositions or emotions but we also think about them, evaluate them, try to justify them by appealing to reasons, decide between them, and guide our actions in line with the judgments reached through rational reflection. Moral reasoning is autonomous, then, in the sense that moral thought is not significantly determined by specific, evolutionary-based, psychological dispositions. For example, it may turn out that dispositions that promote slavery, racism, and condemnation of homosexuality are adaptive. But after exercising moral reasoning, revising our judgments about these cases, and passing down our revised judgments to subsequent generations, many people have come to suspect that beliefs that support slavery, racism, and discrimination against LGBTIQ+ people are unjustified, and accordingly they have been trying to act against these adaptive dispositions. The fact that we have the uniquely human capacity to criticize and revise our adaptive responses supports the idea that human morality is not just a totality of behavioral patterns but also a theoretical inquiry.

The fact that pre-reflective, adaptive dispositions cannot determine the content of certain mental activities points to the human capacity for autonomous [gene-independent] reasoning. For example, it would be a mistake to explain our mathematical or scientific judgments only by appeal to their psychological origins. The reasons behind those judgments should also be taken into account. We can justify our mathematical or scientific beliefs only when they correspond to mathematical or scientific facts, some of which are independent of how evolutionary mechanisms work. For instance, truths of the propositions “1+1=2” and “Water boils at 100°C” are independent of how organisms evolve and what kind of basic dispositions they possess. Although it may be true that our general disposition to become involved in mathematics and science is an adaptation, it is less likely that evolutionary influence is so pervasive as to determine the content of these activities. This indicates a shift in the function of our intellectual capacities: they do not only facilitate survival and reproduction, but they also help us track the truth of certain objective facts that go beyond the workings of evolutionary mechanisms.

Furthermore, even though moral codes must be consistent with our biological nature, “moral norms are independent of [biologically conditioned human] behaviors in the sense that some norms may not favor and may hinder [reproductive success]” [, p. 245]. For example, religion and patriotism facilitate cooperation, but they can also give rise to racist beliefs and genocide, which may act against biological fitness. Obedience is considered conducive to survival [], but it can turn into mass killings in the wrong hands. The rapid decline in birth rates in some areas of Italy in the nineteenth century is another example of human values going beyond evolutionary aims []. All of these examples point to the fact that some of the content of our moral beliefs consists of exaptations rather than adaptations. Thus, we have good reasons to claim that human moral behavior differs from the altruistic behavior of nonhuman animals due to the distinct human capacity for reasoning and evaluation, along with the effect of cultural transmission. Greene does not deny that moral reasoning is autonomous. Rather, on his account, autonomous moral reasoning favors consequentialism because he believes that his studies indicate that deontological judgments, unlike consequentialist ones, are shaped by our evolutionary-based emotional reactions to morally irrelevant factors. My point is that the nature of our consequentialist and deontological theoretical moral intuitions indicates that both are the result of the autonomous application of human intelligence in moral thinking.

Turning back to theoretical moral intuitions, one important objection to the contrast between emotional responses to particular cases and reasoning about abstract theories is the claim that all abstract moral theories ultimately rest upon our responses to particular cases; thus, they are subject to evolutionary influence as well. That is, we first have emotional reactions to particular cases. We then reflect on these reactions and come up with abstract moral theories. Since these abstract theories originate from our evolved reactive attitudes, they also must have been shaped by natural selection. This is the GIGO problem, but it applies to both consequentialist and deontological theoretical intuitions. If deontological theoretical intuitions are epistemically suspect because they are distorted by our evolved attitudes to particular cases, so are consequentialist theoretical intuitions because they arise from our attitudes to particular cases as well. For instance, it is highly unlikely that we arrived at C1 and C2 without reflecting on our evolved attitudes to particular cases involving specific kinds of pleasure and pain. This is one side of the coin: a possible story about how we reach moral judgments. The other side of the coin involves another equally plausible story: the fact that particular cases arouse emotional reactions does not tell us whether the actions in these cases are “morally right, wrong, or neutral”. To decide that, we employ autonomous moral reasoning and decide whether our reactions are appropriate. In cases like condemnation of homosexuality, we have come to suspect that our reactions are not appropriate, whereas in cases like looking after our children we believe that our reactive attitudes are on the right track. What I claim is not that the former story is absolutely wrong, and the latter is absolutely correct. Although I favor the latter story in the rest of this paper, my point in this paragraph is that the truth of either story would go against Greene’s claim that consequentialism is normatively superior to deontology.

The only weakness of our theoretical intuitions seems to be that they are vulnerable to exceptions. This means one could always come up with a counterexample to abstract moral principles, and once a counterexample is found, the intuition at issue loses its initial credibility. This occurs quite often in ethical discussions. Deontological ethics is often criticized for being too restrictive. For instance, if lying amounts to using another rational being as a mere means, then lying must be forbidden in all circumstances, including the one in which telling the truth gives rise to the death of your children. Once this counterexample is raised, we develop a more critical attitude towards the theoretical intuition that we should never use people as a mere means. Consequentialism, on the other hand, is often criticized for being too demanding. This is because consequentialism requires us to give away almost all our wealth to the poor, since doing so would increase overall happiness. For many, this goes above and beyond what morality requires of us. Such counterexamples pose a challenge to the generalizations that are reflected in our different consequentialist or deontological theoretical intuitions, making it quite difficult to construct a coherent moral system. They encourage revision not only in our theoretical intuitions but also in our mid-level and concrete intuitions.

4. Formal intuitions

I have argued that while concrete and mid-level intuitions are susceptible to nonmoral biases, theoretical intuitions seem to be immune to them. At the same time, theoretical intuitions of both consequentialism and deontology are vulnerable to exceptions. Should we now conclude that our moral intuitions are never trustworthy as a guide to practical reasoning? Is there no way forward? Huemer proposes that our most trustworthy intuitions are formal intuitions, which are a subset of theoretical intuitions. The function of formal intuitions is not to make any moral evaluation but to place formal constraints on moral theories. Consider three of the examples Huemer gives:

  1. If x is better than y and y is better than z, then x is better than z.

  2. If it is wrong to do x, and it is wrong to do y, then it is wrong to do both x and y.

  3. If two states of affairs, x and y, are so related that y can be produced by adding something valuable to x, without creating anything bad, lowering the value of anything in x, or removing anything of value from x, then y is better than x [p. 386].

Formal intuitions, according to Huemer, are products of reflection upon what is required by the nature of the ‘better than’ relation, wrongness, moral evaluation, permissibility, and so on. While we arrive at non-formal theoretical intuitions first by reflecting on particular cases and then reaching a general conclusion, formal theoretical intuitions are generated by reasoning about what is entailed by the nature of moral predicates such as ‘better than’ or moral evaluation in general. In other words, the reasoning from particular cases to non-formal theoretical intuitions is inductive, whereas the reasoning from formal [theoretical] intuitions to particular moral facts is deductive. We can liken formal intuitions to axioms in geometry: we derive particular moral facts from ethical principles like the principle of transitivity of ‘better than.’ Since formal intuitions follow directly from the nature of evaluative/moral concepts, they do not arise from observation of particular cases. Therefore, formal intuitions are less likely to be affected by nonmoral, evolutionary biases associated with reactive attitudes to particular cases, and they also do not seem to be vulnerable to exceptions.

When a counterexample is found to a theoretical intuition, the intuition loses its initial credibility, which could even result in its outright rejection. However, when one comes up with a counterexample to a formal intuition, we generally call the case a paradox instead of giving up the intuition altogether. For example, take Derek Parfit’s [] famous ‘repugnant conclusion’:

  • A: 100 people live a very high quality of life.

  • B: 200 people live a slightly lower quality of life than the people in A.

  • C: 400 people live a slightly lower quality of life than the people in B.

  • Z: A vastly larger population lives in conditions barely worth living [pp. 419-430].

Most people have the intuition that “B is better than A,” “C is better than B,” and so on. And if the principle of the transitivity of ‘better than’ is to hold for our intuitions, then we must expect to have the intuition that “Z is better than A.” Despite that, most people have the intuition that “A is better than Z.” We seem to have three options to solve this paradox: [1] reject the transitivity of the ‘better than’ relation, [2] reject some or all of our earlier intuitions [e.g., B is better than A], or [3] accept the “repugnant” conclusion. Denying the principle of the transitivity of ‘better than’ would block the reasoning leading us to the repugnant conclusion, at the cost of giving up one of the least controversial evaluative principles. Some philosophers [; 2012] espouse such a radical approach; one of them, for example, denies what he calls the ‘Internal Aspects View’ in favor of the ‘Essentially Comparative View’ in order to reject the principle of the transitivity of ‘better than.’ Still, there are various ways of coming to terms with the repugnant conclusion. My point here is not that we should accept the repugnant conclusion. Regardless of who is right on this matter, rejecting the transitivity of ‘better than’ is clearly a radical step that challenges one of our most pervasive evaluative principles. My point is that counterexamples to formal intuitions yield paradoxes, whereas counterexamples to non-formal theoretical intuitions do not. Moreover, one plausible way to solve paradoxes generated by formal intuitions is to reject otherwise appealing concrete intuitions in favor of preserving formal ones. Formal intuitions are, therefore, helpful in assessing and, at least sometimes, revising our moral theories.
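
To see concretely how the formal intuition of transitivity generates the paradox, consider the following illustrative sketch in Python [my own, not Parfit’s]. Feeding the pairwise intuitions “each population is better than its predecessor” through the transitivity principle forces the judgment that Z is better than A, which collides with the widespread intuition that A is better than Z.

    # Illustrative sketch: running the chain of pairwise intuitions through the
    # transitivity of 'better than'. Population labels follow the text; the code is mine.
    import string

    populations = list(string.ascii_uppercase)             # 'A', 'B', ..., 'Z'
    better_than = {(populations[i + 1], populations[i])    # each population is judged
                   for i in range(len(populations) - 1)}   # better than its predecessor

    changed = True
    while changed:                                          # naive transitive closure
        changed = False
        for (x, y) in list(better_than):
            for (y2, z) in list(better_than):
                if y == y2 and (x, z) not in better_than:
                    better_than.add((x, z))
                    changed = True

    print(("Z", "A") in better_than)   # True: transitivity yields "Z is better than A"
    # ...yet most people also judge that A is better than Z, which is the paradox.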

The third formal intuition listed above seems to be a suitable candidate for a consequentialist formal intuition. Are there any deontological formal intuitions? One possible candidate is Kant’s first formulation of the Categorical Imperative [also known as the universalizability principle]: “Act only in accordance with that maxim through which you can at the same time will that it become a universal law” [Kant 1998, p. 31]. In other words, your maxim [the subjective principle that motivates you to perform a certain act] can qualify as a moral principle if and only if it does not lead to a contradiction when raised to the level of a universal law. For example, if everyone were to act on the subjective principle that “lying is permissible if you can get away with it” with the same regularity as laws of nature, that would lead to a contradiction. This is because the assumption of truth-telling is a necessary condition for lying: since nobody would assume that anyone is telling the truth, it would be impossible for anyone to lie under that maxim. In such cases, we think that a particular consideration gives rise to a paradox about a formal intuition rather than giving up the intuition itself [the law of non-contradiction in Kant’s case]. Even if we accept that concrete and mid-level intuitions are unreliable due to the distortion created by nonmoral factors, we cannot reach the same conclusion for Kant’s first formulation of the Categorical Imperative, because it is a purely formal principle without any experiential content. Thus, it is not directly susceptible to evolutionary influences. Kant’s universalizability principle itself may have nothing to do with our evolutionary history or emotions. It is devoid of any empirical content, yet it is the foundation of Kant’s deontological moral theory. Emotions and evolutionary influences could come into play only when we try to derive concrete moral duties [e.g., “You ought not to commit suicide”] from that formal principle.

It is possible that there exist both consequentialist and deontological formal intuitions that are safe from morally irrelevant factors and exceptions. Can we use these formal intuitions to make substantive normative judgments? This is the real difficulty that faces both consequentialists and deontologists when they try to move from the level of purely formal intuitions down to particular cases. It seems that different moral theories can have different secure formal intuitions. However, once they try to fill the gap between formal intuitions and particular cases, their theories become vulnerable to exceptions and nonmoral factors. The problem that confronts all such theories is, thus, to find a way to translate formal intuitions into action-guiding general principles with content, without falling prey to exceptions and morally irrelevant biases. Although Greene successfully shows us the way in which some of our concrete and mid-level deontological intuitions are susceptible to distortions that arise from evolutionary forces in some moral dilemmas, his evidence does not seem to apply to [non-formal] theoretical and formal [theoretical] intuitions. Hence, he is not entitled to claim that consequentialism is normatively superior to deontology.

5. How to acquire theoretical intuitions

I have argued that while our concrete and mid-level intuitions are susceptible to evolutionary influence, our theoretical intuitions [formal ones included] often are not directly susceptible to it. But are theoretical intuitions really “intuitions”? Or are they generalizations, convictions which we develop only after we reflect on particular cases? Since intuitions are “immediate intellectual grasps or appearances prior to reasoning,” it may seem wrong to classify such generalizations as intuitions. In this section, I describe how we could turn abstract moral theories into intuitions.

Actions arouse emotions. Due to our shared biological nature, we are inclined to react in particular ways to particular actions. These reactive attitudes are the products of our ‘concrete intuitions.’ For example, when we hear about an instance of incest, or someone beating their children, or someone cheating on their partner, our concrete intuitions make us feel that these acts are wrong, and we react accordingly. Then we use our capacity for abstraction [the capacity to see actions or objects as members of general categories] and find certain kinds of actions right or wrong [e.g., “Incest is wrong”]. We call such reactions ‘mid-level intuitions.’ Mid-level intuitions are similar to what [, p. 828] calls “post hoc rationalizations”: they appear to originate from moral reasoning, but they are often merely expressions of our emotions [plus abstraction]. Then we systematically think about our concrete and mid-level intuitions and question their appropriateness, credibility, coherence with each other, and so on. We reflect on various circumstances we might find ourselves in and ask whether different conditions affect the rightness/wrongness of particular actions. We also try to see whether our concrete and mid-level intuitions square with our formal intuitions. After reflecting systematically on our concrete and mid-level intuitions, we reach generalizations or abstract moral theories. And when we can think of exceptions or counterexamples to our moral theories, we either reject them completely or revise them. This is roughly the way we exercise our capacity for autonomous moral reasoning. But it would be quite difficult, if not impossible, and ineffective to stop what one is doing each time and repeat this process for each action. We certainly need context-sensitive intuitions to guide our actions effectively. Can our generalizations or moral theories be converted into intuitions? And if so, how?

To explain how we turn moral theories into “immediate intellectual grasps or appearances”, we need to focus on Greene’s distinction between the cognitive and the emotional. Assuming a sharp distinction between the emotional and the [dispassionately] cognitive is too hasty, of course. A purely cognitive or purely emotional judgment could be an impossibility due to the extremely complex nature of the brain. What we call ‘the cognitive’ could involve emotions, or what we call ‘the emotional’ could involve cognitive elements [Greene, 2008; ]. However, it is plausible that we are functionally divided into two systems: an emotional system and a cognitive system.

The distinction between the emotional and the cognitive has been popularized by Daniel Kahneman, who calls them “System 1” and “System 2”, respectively []. System 1 involves automatic, effortless, and unconscious reactions to particular cases. It is like an autopilot, a stranger within, that “operates automatically and quickly, with little or no effort and no sense of voluntary control” [, p. 20]. System 1 includes our innate, evolved, psychological dispositions that we share with other animals. It is our fast, effortless, and mostly unconscious mental system for jumping to conclusions to increase our chances of survival. Activities that are linked with System 1 include “making a ‘disgust face’ when shown a horrible picture”, “understanding simple sentences”, “driving a car on an empty road”, and so on. By contrast, System 2 “allocates attention to the effortful mental activities that demand it, including complex computations” [, p. 21]. It is associated with deliberate, logical, and slow reasoning that requires a lot of energy. System 2 is an expert at solving problems, but it has a limited capacity to exert effort. This is because we are not machines: it is difficult for us to maintain our focus for a long time and on different things at the same time. Activities that are linked with System 2 include “searching memory to identify a surprising sound”, “parking in a narrow space”, “filling out a tax form”, “checking the validity of a complex logical argument”, and so on. Edward Slingerland uses the term “hot cognition” for System 1 and “cold cognition” for System 2. He identifies hot cognition with “knowing how” and cold cognition with “knowing that.”

Now I can explain how we turn our moral theories into intuitions. Our Pleistocene ancestors used to live in small groups of hunter-gatherers, and they interacted mainly with relatives or people well known to them. Under these conditions, they developed psychological adaptations to facilitate and maintain cooperation, such as emotions [e.g., empathy and resentment], the ability to recognize and remember faces, the ability to detect cheaters, and so on. We share these hot cognition processes with some nonhuman animals. The important question is “how one particular primate, us, managed the abrupt transition from our ancient hunter-gatherer lifestyle to the large-scale, urban way of life made possible by agriculture” [, p. 175]. In other words, how did it become possible for our hominin ancestors, equipped only with hot cognitions, to adapt to urban life? It is less likely that hot cognitions are responsible for this adaptation because the time scale is too short for the evolution of new and complex psychological dispositions. One idea is that the abrupt transition became possible because of the introduction of new, external social institutions, which are designed to keep hot cognitions in check. However, given that cold cognition has a limited capacity to exert effort, it does not seem to be well-equipped for this task. Therefore, it is more likely that the transition to dense urban life “was managed not by consciously suppressing our tribal emotions but by using cold cognition to extend or redirect instincts through a process of emotional education” [Slingerland, 2014, p. 176; emphasis added]. That is, instead of trying constantly to keep our pre-reflective intuitions in check, we create a set of shared values, some of which are the product of autonomous reasoning, and, more importantly, we internalize them: they become second nature to us.

The role of cultural evolution in achieving this is immense: cultural inculcation can train us to internalize the input from autonomous reasoning and react accordingly to particular cases so rapidly [even within a single generation] that biological evolution cannot be responsible for it. Cultural evolution is “a distinctive human mode of evolution that has surpassed the biological mode because it is a more effective form of adaptation; it is faster than biological evolution, and it can be directed” [, p. 252]. We call the effect of culture on transforming our ideas and beliefs ‘cultural evolution’ due to the striking resemblance between evolution of ideas/beliefs and biological evolution. First, there is heredity: ideas/beliefs are transmitted from one individual to another or from one society to another. Second, there is variation: beliefs, ideas, and values differ among individuals and societies. And third, there is differential reproduction: some ideas/beliefs are transmitted more efficiently than others among individuals or groups.

Just as some biological traits are favored by natural selection, so some ideas/beliefs are favored by cultural selection. Cultural ideas/beliefs compete for our attention, and some of them are more successful in replicating themselves than others. What could be the reason for such success? Coercion could be one reason. For example, after the Roman emperor Constantine the Great became a Christian in 312 CE, most of the Western world suddenly became Christian []. Psychological attractiveness could be another reason. For a long time and in most cultures, people have believed that God exists. The reason for this success could be that the belief gives people security, love, and tranquility, i.e., the idea of God is psychologically appealing. However, the reproductive success of some cultural ideas/beliefs is independent of biological fitness. We often opt for certain beliefs not because they are psychologically attractive but because they track the truth. For example, the reason why no reasonable person believes that the geocentric model of the universe is correct is not that the belief is psychologically unappealing but that it does not track the truth. The capacity for autonomous reasoning creates a gap between biological and cultural evolution by enabling us to create cultural mutations that track the truth, regardless of their biological advantages.
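
The point that a belief can spread because it tracks the truth rather than because it is psychologically attractive can be illustrated with a deliberately crude toy model in Python. The belief labels, the numerical weights, and the proportion of ‘reflective’ learners are all invented for illustration and carry no empirical weight.

    # Toy model of cultural transmission: heredity, variation, and selection driven
    # partly by psychological attractiveness and partly by truth-tracking reflection.
    # All labels and numbers are invented for illustration; this is not an empirical model.
    import random

    BELIEFS = {
        "geocentrism":   {"attractiveness": 0.7, "tracks_truth": False},
        "heliocentrism": {"attractiveness": 0.3, "tracks_truth": True},
    }

    def learn(teacher_belief, reflective_share=0.5, mutation_rate=0.02):
        """One learner acquires a belief: heredity, occasional variation, then selection."""
        belief = teacher_belief
        if random.random() < mutation_rate:                 # variation: a novel belief appears
            belief = random.choice(list(BELIEFS))
        if random.random() < reflective_share:
            # reflective learners abandon a belief that fails to track the truth
            if not BELIEFS[belief]["tracks_truth"]:
                belief = random.choice([b for b, v in BELIEFS.items() if v["tracks_truth"]])
        elif random.random() > BELIEFS[belief]["attractiveness"]:
            # non-reflective learners drop unappealing beliefs and copy attractive ones
            names = list(BELIEFS)
            weights = [BELIEFS[b]["attractiveness"] for b in names]
            belief = random.choices(names, weights=weights, k=1)[0]
        return belief

    population = ["geocentrism"] * 200                      # everyone starts with the false belief
    for _ in range(50):                                     # each generation learns from the last
        population = [learn(random.choice(population)) for _ in population]

    print(population.count("heliocentrism"), "of 200 hold the truth-tracking belief")

In typical runs of this toy model, the truth-tracking belief comes to be widely held even though it starts with no adherents and is the less attractive option, simply because a portion of learners select beliefs for their truth rather than for their appeal, which is all the analogy is meant to convey.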

Cultural evolution enables us to develop ideas and find solutions to long-term problems much faster and more efficiently than biological evolution, because we can transmit our ideas, beliefs, and values not only to our children but also to all humankind. However, our ability to use cultural information is restricted because cold cognition is slow, effortful, costly, and limited. The solution is domestication or internalization: just as we can deliberately change the behavior of animals and plants in line with our needs, so “the conscious mind can acquire new, desirable goals and then download them onto the unconscious self, where they can then be turned into habits and implemented without the need for constant monitoring” [, p. 65]. For example, when beginner drivers learn how to drive, prefrontal regions in their brains become much more active because they constantly try to maintain their focus to understand the traffic rules, how to turn the wheel, how to press the pedals, and so on. After a certain amount of practice, however, the once-nervous driver starts talking and making jokes while driving because their brain activity decreases. As they internalize driving skills, their conscious mind or cold cognition becomes less active. Likewise, beginner chess players have to think a lot about how the pieces move, how to devise strategies to win, and so on. But over time, thanks to the human capacity to recognize patterns, they develop intuitions that allow them to make quick and efficient decisions without having to think or calculate for too long. [Especially in bullet chess games, where each player is given only one minute in total, players mostly rely on their ‘chess intuitions’ instead of active thinking.] In both examples, cold cognitions are transformed into hot cognitions over time.

In similar fashion, we turn our abstract moral theories into theoretical intuitions over time. Although we arrive at moral theories through conscious deliberation, we internalize them later. And once we turn our moral theories into hot cognitions, they start informing our concrete intuitions and affect our reactions to particular cases. That is, our theoretical intuitions help us grasp moral facts and distinctions and react to moral cases immediately, without thinking much about them, like a chess grandmaster playing a bullet game. Admittedly, chess intuitions might go wrong because they are not perfectly reliable in grasping chess facts. Chess grandmasters usually analyze games they played to improve their skills and to learn from their mistakes. And once a better move or strategy is discovered, the chess grandmaster revises or gives up their chess intuitions. Similarly, our theoretical intuitions are not perfectly reliable in grasping moral facts and distinctions. Therefore, when a counterexample is raised, we question the credibility of our theoretical intuitions; we either revise them, or we give them up.

6. Concluding remarks

I have argued that our theoretical moral intuitions are immune to direct evolutionary influence because they are the products of reflection on pre-reflective, evolved psychological dispositions and of the process of cultural evolution. Since we have both consequentialist and deontological theoretical intuitions, Greene is not justified in his claim that deontology is normatively inferior to consequentialism. Moreover, the theory I support has wider benefits. I have offered a model of how to respond to a great many evolutionary debunking arguments [EDAs]. EDAs attempt to undermine the justification of our evaluative/moral beliefs by asserting that there is a pervasive evolutionary influence on the content of those beliefs. If my discussion of the different types of moral intuitions gets things right, then we cannot talk about a pervasive evolutionary influence, because some of the content of our evaluative/moral beliefs is determined by the autonomous application of human intelligence in moral thinking. I leave the task of debunking EDAs for another occasion, in order to do justice to their complexities.
