We are guided to a large extent by our emotional responses to the world around us. Our emotions shape our actions, both for better and for worse. How do our emotions influence moral actions?
There has been an explosion of scholarly work on how emotions affect the way people reason, and much of that research is relevant to our question. The conclusions researchers draw generally fall into one of two camps: 1) emotions overwhelm and hinder reasoning and, by extension, intentional action; or, 2) emotions facilitate and help reasoning and intentional action. As is often the case in science (and elsewhere), each of these strictly opposing views is too neat and tidy on its own. My own view is that the relationship between emotions and reasoning is more complex. While emotions can shape our moral reasoning, the inverse is also true: moral reasoning can shape our emotional responses. As a consequence, over time, we can train our emotional responses to be in line with our moral beliefs and our goals.
Those who argue that emotions overwhelm reasoning and intentional action can appeal to decades of work suggesting that people who give in to their emotion-based impulses and desires suffer a host of negative outcomes. The classic work in this area was conducted by Walter Mischel in the 1960s and 1970s. He brought children into the laboratory and faced them with a difficult challenge (the particulars varied among experiments): They could eat one marshmallow that was sitting in front of them immediately, or they could wait ten minutes and eat two marshmallows. The results of this “marshmallow test” are well known: The children who waited — who “delayed gratification” — ended up becoming more successful later in life. This general line of work has continued into the present day, and there is now compelling evidence that people who are better able to control their impulses and delay gratification are less likely to overeat, abuse alcohol or drugs, or drop out of school, and are more successful overall. One way to interpret these findings is that people who allow emotional impulses to influence their actions are less likely to make good decisions overall.
The idea that emotions overwhelm reasoning is also echoed in an emerging body of scholarly literature on moral decision-making. Jonathan Haidt has argued that people often base moral judgments on emotional reactions. For example, people judge incest to be morally wrong even when risks such as pregnancy or emotional and psychological damage are removed from the equation. Haidt concludes that such judgments are rooted more in emotion than in rational deliberations about possible risks.
Other researchers, using a famous thought experiment originally offered by moral philosopher Philippa Foot, have argued that emotional responses can lead to suboptimal moral decisions. You’ve heard the scenario before: Say there’s a trolley coming down the tracks and it’s headed toward five people. You can flip a switch and redirect the trolley away from those five people, but the trolley will then be careening toward another, single person. Do you flip the switch? In an alternative version, you can push one person from an overpass onto the tracks to stop the trolley before it reaches the five people. Do you push the one person to save the lives of the five others? Most people answer “yes” to the first question but “no” to the second. Some scientists argue that this difference can be explained by the fact that people have stronger negative emotional reactions to the idea of pushing someone in front of the trolley than they do to flipping a switch. If one accepts that saving five people at the expense of one is the moral choice, then these findings suggest that emotions hinder moral action.
On the other side of the debate, those who maintain that emotions facilitate reasoning and intentional action can appeal to an equally compelling body of scholarship. Research from this camp suggests that emotions are foundational for all of our cognitive and behavioral processes and can result in outcomes consistent with people’s needs and goals. Emotional responses often guide people toward the most beneficial choice even in the absence of any conscious reasoning.
In one famous study, people were given a cash loan and told to gamble by turning over cards from four decks. The decks varied in potential payoff and penalty: two were risky (high payoff, high penalty) and two were safe (low payoff, low penalty). Picking repeatedly from the safe decks would result in larger gains overall. But the players were not given any information about the decks and learned of the gain or loss from each card only after turning it, so they had no way of predicting overall gains and losses. Nevertheless, over time, most people made the right choice: They began to choose the more advantageous decks more often than not. Interestingly, many of them also began to experience anticipatory skin conductance responses when contemplating choosing from the risky decks, even before they could verbally describe why they thought the decks were risky.
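To make the structure of this task concrete, here is a minimal simulation sketch (in Python) of a gambling setup like the one described above. The specific payoff amounts and the reward-averaging learner are illustrative assumptions, not the study's actual procedure; the point is simply that feedback from outcomes alone, with no explicit knowledge of the deck statistics, is enough to shift choices toward the advantageous decks over time.

```python
# Minimal sketch of an Iowa-Gambling-Task-style setup.
# The deck values and the learning rule are illustrative assumptions,
# not the parameters of the original study.
import math
import random

# Hypothetical decks: (payoff per card, penalty probability, penalty size).
# The risky decks pay more per card but have negative expected value;
# the safe decks pay less but come out ahead over time.
DECKS = {
    "risky_A": (100, 0.5, 250),
    "risky_B": (100, 0.1, 1250),
    "safe_C": (50, 0.5, 50),
    "safe_D": (50, 0.1, 250),
}

def draw(deck):
    """Turn one card: collect the payoff, sometimes pay a penalty."""
    payoff, p_penalty, penalty = DECKS[deck]
    loss = penalty if random.random() < p_penalty else 0
    return payoff - loss

def softmax_choice(values, temperature=50.0):
    """Pick a deck with probability weighted by its current value estimate."""
    names = list(values)
    weights = [math.exp(values[name] / temperature) for name in names]
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for name, weight in zip(names, weights):
        cumulative += weight
        if r <= cumulative:
            return name
    return names[-1]

def simulate(trials=200, learning_rate=0.1):
    """Play many cards, updating a running value estimate after each outcome."""
    values = {name: 0.0 for name in DECKS}  # no initial preference
    safe_picks = 0
    for t in range(trials):
        deck = softmax_choice(values)
        outcome = draw(deck)
        # Nudge the estimate toward the observed outcome (feedback only).
        values[deck] += learning_rate * (outcome - values[deck])
        if t >= trials // 2 and deck.startswith("safe"):
            safe_picks += 1
    return safe_picks / (trials - trials // 2)

if __name__ == "__main__":
    random.seed(0)
    rate = sum(simulate() for _ in range(100)) / 100
    print(f"Share of safe-deck choices in the second half of play: {rate:.2f}")
```

In this sketch, the simulated player drifts toward the safe decks in the later trials, loosely mirroring the gradual shift the participants showed, even though nothing in the learner "knows" the decks' statistics in advance.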
This work, and other subsequent research in a similar vein, suggests that emotions can help people learn to make better choices, even when they’re unable to reason consciously about those choices. This idea has not been taken up in contemporary research on moral action as much as the contrary idea that emotions overwhelm reasoning. Still, some researchers have suggested that the prevalence of certain strong, negative emotional responses, such as those elicited by the trolley problem, indicates that emotions can help moral action. Although choosing not to push someone in front of an oncoming trolley to save five people is not a morally utilitarian choice, choosing not to kill someone remains a moral action, and one that can be rationally defended.
Depending on one’s views about morality, these lines of research into how emotions might help moral action could prove challenging. For example, some philosophers claim that to be moral an action must be the result of deliberative thought. According to this view, people must consider the reasons for or consequences of their actions when making a moral choice. If one takes this position to its extreme and says that moral actions must be the result of an emotion-free and thus purely rational analytic process, as in extreme forms of deontological and utilitarian ethics, then it would seem that people are incapable of behaving morally, since psychological research suggests that all our reasoning is to some extent influenced by our emotions.
But an alternative and more nuanced understanding of moral action avoids this dispiriting conclusion. What if moral action is the result of deliberative choices made over time? On this view, people can, through deliberative reasoning, change their interpretations of situations and hence also their emotional responses to those situations. Over time, this process can become habitual and thereby alter one’s initial reactions to events and stimuli. In other words, as research has suggested, we can train our emotional responses over time. This idea does not imply that we all share the same view of what constitutes moral behavior, only that we all can bring our emotional responses into line with our own moral beliefs and intentions, whatever those are.
Consider someone who values the virtue of temperance — that things should be enjoyed in moderation — and strives to be temperate in daily life. If that person loves chocolate, emotional responses may overwhelm the intention to be temperate, resulting in overindulgence in chocolate at the first available opportunity. But since the person is striving to be temperate, such overindulgence may prompt negative emotional reactions, such as guilt, shame, and regret. Recent work suggests that these negative reactions provide the perfect occasions for us to develop our emotional responses, to make them better align with our moral attitudes and goals. There are at least three ways this can happen.
1. Negative emotions signal the need to adjust our behavior. It has long been acknowledged by psychologists that one of the primary functions of emotion is to signal that something important is happening and needs attention. Recent work has shown that this aspect of emotion can help us make better decisions. People often have the feeling that “something is wrong” when they make incorrect judgments. Studies show that people also experience greater physiological arousal (the biological equivalent of that feeling of wrongness) when they make incorrect judgments, and those who experience this arousal spend more time deliberating and are more likely to correct their mistakes. This offers another opportunity to redirect our actions toward pursuits that are more in line with our attitudes and beliefs. When the temperate person who loves chocolate encounters the object of his desire, his strong positive emotional response can act as a signal to pay attention and redirect his cognitive resources toward regulating his behavior rather than toward immediate gratification.
2. Negative emotions can help us learn from our mistakes. We are all familiar with emotions such as regret, shame, guilt, or disappointment that result from acting immorally or making choices that are not consistent with our own beliefs. Such emotions are unpleasant, of course. But studies have shown that they can nevertheless be integral to our ability to learn from experience. How? Such emotions trigger rumination on counterfactual thoughts: an analysis of what went wrong and what we could have done differently in a given situation. They present a rare window of opportunity to reflect on our actions and prepare ourselves to make different choices in the future. There’s evidence that focusing on actions that we could have changed — rather than, say, focusing on our shortcomings or weaknesses — can result in our making better choices the next time we encounter a similar situation.
3. Emotional responses can be reshaped over time. The idea that behaviors can be changed through experience is a hallmark of many different approaches in psychology. What is important here, however, is the idea that individuals can deliberatively set out to alter their own emotional responses. Recent work in psychology on the topic of “mindfulness,” the psychological state of being receptive to and aware of our surroundings, complements this idea — that we can become, with some effort, more aware of our emotional reactions and bring them into better accord with our attitudes, beliefs, and goals.
Our emotions are both powerful and unavoidable. But we need not be at their mercy, unable to control them or prevent them from overwhelming our reasoning and behavior. On the contrary, research suggests that we can act as agents who actively shape our emotional responses. Recognizing the impact emotions have on our actions can help us to develop stronger and more deliberative moral habits over time.
Discussion Questions:
- Is it possible to encounter a moral choice without having an emotional response? Are emotional responses part of what define moral choices?
- What are the moral implications of people who deliberatively develop their emotional responses to be consistent with harmful or otherwise maladaptive attitudes or beliefs?
- What does it mean for evolutionary theories of moral action that people can alter their emotional responses over time? Does it suggest that all moral responses are learned? Are there some moral responses that are innate and cannot be altered?
- Are there particular emotions that are especially effective in prompting us to alter our moral behavior?
Discussion Summary
Two important themes emerged from the discussion, which capture the next big questions for this area of inquiry. The first issue that came up was the value of emotional responses versus rationality. In other words, is it better to attempt to eliminate or reduce the impact of emotions, or is it better to embrace emotions as part of the moral process? This is an ongoing matter of debate, of course. But more and more evidence is suggesting that it is not possible to separate emotion from rational analysis. And, in fact, some studies have found that the more we attempt to control emotional processes by reducing their influence, the stronger their impact becomes. This is an important area for future investigation, particularly with respect to moral action and decision making.
The second issue that arose was about the moral value of trying to change responses. From some perspectives, there is virtue in attempting to change how we act and behave to bring actions more in line with our values. This has interesting implications for how we perceive and evaluate moral action. Imagine two individuals who engage in the same moral action — offering aid to someone just injured in a car accident. One of the individuals is hemophobic and has a strong aversion to the sight or thought of blood; the other is a nurse trained in emergency response. Is the same moral action more virtuous for the person who has to overcome his or her natural aversions in order to help than for the person who has the skills necessary to help? This scenario can be extended to consider the value of moral actions performed by those whose genetic or other organic conditions predispose them to failures of self-control, or who have urges to commit immoral acts.
New Big Questions:
- What are the consequences of embracing emotions versus attempting to eliminate the influence of emotions on moral actions?
- Do people consider only the outcome of an action to evaluate the virtuousness of that action? Or does the effort required for moral action also indicate virtue?
Couldn’t you still argue that moral actions are the result of an “emotion-free” or purely analytic process, just that in making our deliberations we have to bracket emotions? In other words, why can’t we accept the research that says we can’t ever totally eliminate our emotions, but also think that we can (or should try) not to let them determine our moral decision making to the extent possible?
You could definitely make that argument, and many scientists and philosophers do make that argument. But more and more evidence suggests that this bracketing is not possible. Consider, for example, work on motivated reasoning and confirmatory bias in evaluating evidence. The upshot of this work is that people quickly accept as true evidence that supports their beliefs and discount or counter-argue evidence that does not support their beliefs. The more motivated people are to discount the evidence, the more they engage in this motivated reasoning. Further, work by Pronin and colleagues suggests that we notice that other people engage in motivated reasoning but we all believe that we are perfectly rational. Emotions occur quickly, are compelling, and are closely linked with motivational and behavioral systems. Very strong emotions are easy to detect and we can recognize how they’re affecting us and make efforts to control their influence. But most emotions occur at relatively low intensity, making it difficult to even detect their influence on our thought processes.
Thank you for your essay! I’m really interested in virtue ethics and found your insights to be helpful in thinking about that. I am wondering whether you see your view as compatible with that approach, which sees moral choices as the result of character, developed through practice and habituation? Can virtue ethicists like me take scientific research as evidence for our views?
Great question! This is fertile ground for future study into how psychological processes contribute to character development over time. The fact that emotions can be changed intentionally does seem to be consistent with a virtue ethics account. From a psychological perspective, there is nothing inherently good or bad about the process of changing emotional responses — it is a basic learning mechanism. Psychological findings can elucidate the process through which character development, as described in virtue ethics, can occur. But findings like these cannot speak to whether there is moral good in this type of process or whether the resulting changes qualify as character.
I’m a little confused about the gambling example and why that’s supposed to show that emotions positively influence our reasoning? Is it similar to the idea that people’s instincts can be a good guide when making choices?
Also: what’s the explanation for the phenomenon described in the gambling experiment? Do researchers know why this works this way?
Yes, the gambling example is similar to the idea that people’s instincts can be a good guide when making choices, except that the “instincts” in this study are emotional responses based on non-conscious learning from experience. The explanation, oversimplified (see publications by Damasio and colleagues for full details), is that outcomes produce emotional reactions (a bad outcome produces a negative response), and those reactions become the basis for learning, which can occur without conscious awareness.
To the extent that virtuous behavior and moral actions involve reasoning, they can be taught — not perfectly, of course, but taught to some extent. But you mention the famous “marshmallow test,” which is often performed on very young children, kids as young as 3 who may not be great at ratiocination but who may have strong feelings about such important things as marshmallows. If I understand that test correctly, its results show that self-control (or a lack thereof) correlates with a range of adult outcomes. Does that mean, and does your argument in this essay mean, that the emotional, instinctual elements of morality to a large extent cannot be taught? That we may be destined, by our genes or some other biological mechanism that affects our emotions, to be more or less moral beings?
Eduardo — Thanks for your question! You might also enjoy our essay from 2015 on whether good character is “caught” or “taught”: https://www.bigquestionsonline.com/2015/02/17/good-character-caught-taught/
I’ll check it out! Thanks for the link.
That is an interesting insight. The evidence suggests that, with rare exception, people can learn from their emotional reactions and can intentionally modify their emotional responses. But, given evidence that there are individual differences in the ability to regulate responses, this process might be much easier and/or much more effective for some people than for others. The findings cannot speak to whether this makes some people more or less moral beings. From a philosophical standpoint, evaluating whether this makes some people more or less moral is likely to hinge on whether the effort to be moral represents value in and of itself, or whether the behavior and its outcome determine morality.