The Cognitive Science of Moral Failure: Dumbfounded

Four "moral situations" from Haidt, Koller, & Dias (1993), "Affect, culture, and morality, or is it wrong to eat your dog?" Journal of Personality and Social Psychology, 65, 613–628:

1) A woman is cleaning out her closet, and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.

2) A family’s dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog’s body and cooked it and ate it for dinner.

3) A brother and sister like to kiss each other on the mouth. When nobody is around, they find a secret hiding place and kiss each other on the mouth.

4) A man goes to the supermarket once a week and buys a dead chicken. But before cooking the chicken, he has sexual intercourse with it. Then he thoroughly cooks it and eats it.

Do these things seem wrong to you? Most would say yes. But let me ask you a question: what, exactly, is wrong with each? That is, without simply restating the problem (e.g., that’s unpatriotic, you shouldn’t eat your dog, brothers and sisters should not kiss, etc.), what moral principle is being violated in each situation?

If you are like most people, you’ll find it hard to locate a moral or ethical principle being violated in any of these scenarios. Yet, without a doubt, we know and feel that each scenario is wrong. As I've written about before, this feeling, the strong sense of wrongness combined with being at a loss for a moral argument, is called "moral dumbfounding" in Jonathan Haidt's interesting research.

For our purposes Haidt's research nicely illustrates the fissure between System 1 and System 2 in our moral judgments. Specifically, in Haidt's scenarios the "feeling of wrongness" is immediately activated by System 1. It's an automatic appraisal. The "reason for wrongness" is a System 2 search. And Haidt cleverly selected his scenarios to flummox that search. A feeling of wrongness exists in System 1 while System 2 spins its wheels to provide reasons, justifications, bible verses, rationales, and warrants for that judgment.

The shocking thing about Haidt's research is that it tends to turn our understanding of moral judgments upside down. Specifically, we tend to think that reasons drive our moral feelings. I judge that X is wrong and, as a consequence, feel that it is wrong. Cognition precedes emotion. Judgment causes feeling.

But Haidt's research suggests that this just might be backwards. Emotion precedes cognition. Feeling causes judgment. We feel something to be wrong and then go in search of a reason. Moral warrants (the stuff of an ethics class) are, essentially, post hoc justifications.

And, for most of us, we operate with a "good enough" search criterion. That is, people, seeking to justify their knee-jerk moral judgments, generally land upon warrants that provide "just enough" justification. It doesn't matter if these judgments are logically consistent or coherent upon inspection; all that matters is that they quickly help us reconcile our feelings with our self-concepts.

This is why moral reasoning, as any philosopher can tell you, is generally so poor and unreflective. The reason is that moral reasoning isn't creating our moral judgments. What generally passes for "moral reasoning" is simply a quickly marshaled justification (for you and me) for why I feel the way I do. I'm not really offering an argument of any kind, although I'd like for you to think that I am. In short, for the most part moral reasoning is painted, as a kind of cognitive decoration, upon an underlying, unshakable conviction.

No wonder moral or political discourse is so broken. Emotions are driving the car. All the talking at town halls is just so much verbiage. People already know what they believe. Or, more properly, they feel it. Deep in their bones. And words just don't penetrate.

In short, we are back to the conflict between System 1 and System 2. System 1 is driving our moral judgments. As a consequence, argument has a difficult time affecting moral judgments.

So can we change, morally speaking? Yes, just not through sharing reasons. The only way to change System 1 is to change emotionally and experientially. When your feelings change then you begin to prefer different kinds of moral warrants. Your heart has softened in some way and what previously sounded persuasive no longer moves you. It doesn't ring true. And some verses in the bible now seem cold and distant while others seem warm and alive.

This entry was posted by Richard Beck.

13 thoughts on “The Cognitive Science of Moral Failure: Dumbfounded”

  1. Nicely put. And I think this explains why many Christians change their moral views on racial equality, the role of women in the church, and homosexuality once they get to know a person of another race, a woman church leader, or a homosexual person. The gut reaction of System 1 changes with personal experience.

  2. Richard, your posts on morality have come at a point where I have been thinking about these issues in broader form. I have been struck by how many of our decisions are really moral choices. In addition, I have been thinking about societal (perhaps group) morality. The health care debate, outside of the nuts and bolts, is really a question with moral dimensions. So is the deficit. Our entire society seems to be looking at things through the wrong moral lens or no lens at all. (From my perspective) Our morality seems to be misplaced. Digging into these larger questions is interesting, and in doing so I ran into a Harvard psychologist named Josh Greene, whose book on the subject is due out next summer. His group is exploring some interesting ideas. I'm anxious to read it, and I instinctively feel that a correct understanding of our moral intuitions is needed before we can truly become an egalitarian and balanced society. It seems to be the next level, if you will. I need a better understanding of the base issues (i.e., System 1 and 2, etc.) to better grasp the big picture. However, the more I think about morality and ethics the more disjointed and multidimensional it looks. I'm not finding the center or the foundation.

    I don't know if you can read between the lines on this short note, but I'd like to know your thoughts. How big are these issues? How are our collective misunderstandings affecting our society? Are these issues the fundamental separation between the "left" and the "right"? What are the foundational moral maxims?

  3. Dr. Beck, I'm very appreciative of this series of posts. I've been searching along these lines after taking a course in non-rationality in decision-making for managers and was introduced to heuristics, biases, and preference reversals. Do you think both systems existed before the fall? If so how do you think that process played out?

    Do you think this emotional and experiential way to change System 1 comes only by acknowledging our "missing the mark" and accepting Christ's sacrifice? I'm thinking along those lines, and that would explain the spiritual amnesia I tend to experience. But how would you approach people such as my professor, who are not believers and think that if you can learn enough about the situations and circumstances where these biases creep up, you can train your brain to rationalize? (dude is addicted to Brain Age)

  4. Thank you for posting this, Richard. As I intuited back in graduate school, I think that we tend to gravitate toward certain theologians, thinkers, concepts, etc. not because of the tightness or coherence of the respective arguments but rather because of how we are touched on an affective level. On a theological level, since that's where I live most of the time, John Cobb, Jr. and David Griffin write [my words in brackets]:

    "Religious doctrines claiming universal validity are to be accepted, if at all, because of their self-evidence. The verbal expression of a universally experienced fact elicits a believing response [what I believe is connected deeply to emotionality] in us because we had already apprehended the fact. Accordingly, theology [or any other organizational system] should not primarily be argumentation. It should primarily be the attempt to state the basic tenets of one’s faith in such a way as to elicit a responsive perception of these as self-evidently true."

  5. I believe this post is true, but we are in a limited world, where humans exist in contexts that are different and unique. This is culture that brings about identity. And identity breeds defense of "self".

    I think that our American culture, where the individual is valued in his own right, is the highest cultural norm. Without the individual being of value, there is no "outside the box" dimension, where the creative is allowed to "breathe". As our Founders were not orthodox Christians, I wonder if they understood that orthodoxy limits social change. And social change is necessary for a wider understanding of universality.

    The "human" is the universal, devoid of cultural underpinning or defenses. But, is this "freedom of culture" a possibility? We are ingrained by our cultural references, whether they be familial, religious or political traditions.

    I believe that Hirsi Ali overcame her cultural identity in escaping Somalia and her religious tradition in leaving Islam, becoming an atheist. She talks about the psychological "pull" of her tradition's upbringing in her book, "Infidel". And I believe she has found her "niche" here in America at a "think tank".

  6. Great post! I do have an issue with the last paragraph though. Haidt himself says that there is a place for moral reasoning. In his 2001 article where he lays out his "Social Intuitionist Model" of morality, Haidt says that Person A sharing their reasons can influence Person B's intuition. However, in line with what you said, Haidt says it is not the reason itself that changes the intuition, but the affective weight of the reason. Other research (e.g., Rydell & McConnell, 2006), though, has demonstrated that repeated exposure to counter-attitudinal information can change our implicit attitudes. So I think it would be wrong to say that we can't change morally through reasoning; it is just a different effect than we would intuitively expect.

  7. Aaron,
    How do you understand "the affective weight of the reason" as affecting moral judgment?

    It seems if one's identity is tied up with certain unreasonable commitments to defend self-identity, then one has to "grab" another's identifying factors through story, or analogy. Politicians do this all the time in their rhetorical skills, and this sways public opinion. But, those who think critically, watch for the follow through.

    As a social psychology student, do you believe there is such a thing as a detached "self"? Or do you think that the "detached self" just attaches to another social group identity? (Like with Hirsi Ali, who detached from Islam to attach to atheism.) If you believe that one cannot be detached, then what is the determining factor in an individual's decision-making process in attachment?

  8. Richard! I haven’t been my usual mouthy self because your systems fissures have sent me into moral black hole hell.

    Have a little mercy, man.

    I’ve been reading along. Just last night, I ranted at the computer screen in cant, “this is dumbfounding!”

    Now I have your pastoral blessing.

    And I see you’ve been jerking my chain all along. I feel like a rhesus under your knife.

    We still don’t know the extent to which morality is discretely modular or synthetic in an analog flow. Yeah, I took the IAT. And I know this assertion contradicts the systems analyses here - a little. Maybe more. I’m probably feeling this from my tectum. But the Haidt, Koller & Dias hypotheticals look a little too close to my practice.

    I hate to say this, but I’m still calculatingly conscious of giving heuristic answers to clients on the Blink because I’ve already made many modest and imperfect calculations that my ignorance-based reasoning in certain cases (not in others) can’t be much improved by pretending that I have Laplace’s demon of omniscient superintelligence.

    This is a “conscious” judgment of calculation on my part. What? Or, is it?

    It’s a systematic reason for crediting fast and furious decisions; but, only to test them. Alas.

    One clinical application (just one) of Thaler and Sunstein (gargoyles who haunt the halls at my old stomping grounds) is the advice to re-frame complex choices into simpler modes so as to ease difficulties for clients in making useful decisions.

    Easy to say. It slides right off the academic forked-tongue.

    This advice assumes a bridge of at least rough and tumble integrity between the two systems.

    Yes, God’s grace is sufficient. But not optimal.

    Is this what you’re telling me, Richard? After all this?

    I’m really no better than a differential atheist finding my way. When all my charismatic conceit is stripped away.

    Dumbfounded is about right.

    I love you too.

    Or, is it the “two of you?”

    Jim

  9. If my system 2 finds no good reason for any of the four examples you gave, shall I try to make my system 1 feel ok about them?

  10. sorry, I mean:
    If my system 2 finds no good reason against any of the four examples you gave at the beginning, shall I try to make my system 1 feel ok about them?

  11. There can be a reason beyond our reasoning. This is a purely conservative (status-quo-preserving) argument. Knowledge is frequently embedded in superstition or tradition. This is not mapped-out knowledge, in a systemic sense, but it is part and parcel of the knowledge necessary to move forward in the world. Much of this knowledge is embedded in a "that's just wrong" moral intuition. The fact is that intuitive prejudices may be efficiency shortcuts.
    This is not to say that such shortcuts don't require periodic investigation and re-justification. They do. But such tasks require incredible amounts of time to investigate and thus are best left to specialists. For the rest of us, it is best to trust our moral intuitions most of the time.
    Nathanael Snow

  12. Naturalist, aye that. Globally low rates, longitudinally stable rates, universally criminalized, comparative-religiously demonized – infanticide – an instinctual protection of the status-quo offspring of assortative mating, itself in a pretty long tradition of doing the reproductive Watusi. It’s all in play: evolved instincts, codified morality, law, and religion. Parents protecting big-brain babies who will grow up to have their own moral dilemmas in techno-civil ecologies, for which our big brains are imperfectly adapted.

    I agree it wouldn’t take a jurist nor a naturalist nor a trip to collared clergy for religious advice, not in order for most anyone to know in their knower, not to try to falsify such a universal human innate trait, by attempting infanticide - in any country.

    An innate instinct, I’d say to trust.

    Would a bigger brain do? - better?
