What does it mean for something to be “good” or “bad”? Even now there seems to be a fog of confusion surrounding ethics. On one side are moral absolutists who cling to a Kantian-style worship of principles, while the moral relativists at the other end think that one morality is as good as the next. They are both, of course, deeply mistaken.
In order to see why these are nonsensical moral positions, we first need to establish why morality exists in the first place, analyze how it has evolved over time, and, based on these things, determine what function it serves – or rather, what function it should serve.
Evolution tells us that morality wasn’t handed down from on high, but arose for a very specific purpose: to help human beings survive and thrive. To suggest that morality is in no way connected to the relations between conscious beings or survival is akin to saying that religion is in no way connected with having faith or beliefs about the supernatural.
By recognizing the origin of morality as a survival tool, we can therefore conclude that when we talk about morality, we are talking specifically about relations between living conscious beings and not about the relations between living unconscious beings or non-living unconscious objects.
Furthermore, by recognizing that we are talking about relations between conscious beings rather than between unconscious beings, we are also necessarily talking about the nature of interactions between conscious beings – interactions which involve pain, pleasure, intentions, and consequences. To base morality on a lack of pain, pleasure, intentions, and consequences is altogether incoherent, and yet I argue that both moral absolutism and moral relativism attempt to do just that.
Let’s start with moral absolutism. Perhaps the most famous moral absolutist (other than Yahweh) was Immanuel Kant. Not only did Kant believe that a “good will” (the intention to do good) was inherently good; he believed it was so good that consequences didn’t matter: “If with its greatest effort this will should yet achieve nothing…then, like a jewel, good will would still shine by its own light as a thing having its whole value in itself.” Furthermore, and perhaps more strangely, Kant believed that people were bound by objective moral duties called “imperatives,” which must be obeyed without question for their own sake.
Perhaps the most often cited of Kant’s imperatives are the imperative not to lie and the imperative not to take one’s own life. For Kant, the act of lying – even for a greater good – was always bad, just as suicide for any reason was bad. This line of reasoning, of course, becomes problematic when we consider lying to Nazis to save the Jews hiding in your attic, or when discussing issues surrounding assisted suicide.
Therefore, Kant’s moral absolutism, however noble it might have seemed at first glance, is morally problematic because it favors intentions, principles, and duties regarding intentions and principles at the expense of other relevant variables – namely, pain, pleasure, and consequences. Moreover, once we recall that morality comes from and relates to interactions between conscious beings, we can also realize the absurdity of the concept of “good in itself” because it is a proposition that suggests something can be valued without being valued by somebody.
On the opposite end of the spectrum, we have moral relativism, which is arguably just as absurd. While moral absolutism makes the mistake of discounting moral variables and detaching judgments about good and bad from human value judgments and interactions, moral relativism makes the mistake of assuming that because there are no moral absolutes, morality is therefore arbitrary.
Again, although this might make sense at a distance, it falls apart upon closer scrutiny. For example, nobody would say that, because all exam grades are ultimately subjective, grades are therefore arbitrary. Just because one cannot say that something is an “A” or an “F” as a matter of fact does not mean that there is no qualitative distinction between the two.
Of course, relativists will object here and say that the determination of an “A” or an “F,” just like the determination of “good” or “bad,” is predicated on a list of criteria determined by individual preferences; therefore, morality is arbitrary. However, this objection ignores the fact that our determinations about grading criteria, like our determinations of moral criteria, are not based solely on personal preference, but also on the application of reason and the appeal to consensus.
This explains the relative uniformity of grading within universities and the general uniformity among ethical systems worldwide. For example, while it is understandable that an “A” for one teacher might be a “B” for another, it is far less likely that an “A” for one teacher will be an “F” for another. Therefore, we can conclude that although we cannot know for a fact whether a grade is an “A” or an “F,” this doesn’t mean we can’t reasonably agree upon meaningful qualitative differences between the two.
Moreover, if asked to explain their evaluations, teachers will (or should) substantiate their individual preferences by explaining their reasons for assigning certain grades, and by appealing to their colleagues’ opinions as to what constitutes a “good” or “bad” grade. Anyone who has participated in the process of grade norming understands the truth of this. Although there are always outliers as to what grade an essay should receive, there usually ends up being a general, reasoned consensus about the distinction between an “A” and an “F.”
Returning to the subject of morality, we can see that when we make moral judgments about whether something is “good” or “bad,” we too are (or should be) using reason and consensus to justify why we feel it to be so. Of course, naysayers might ask, “Why should we invoke reason and consensus to justify whether something is immoral?” However, asking this question is akin to asking why teachers should apply their reason or appeal to their colleagues when justifying what grade a student receives. The best answer to both of these questions is simple: because it works.
We should invoke reason and consensus when making moral choices just as teachers invoke reason and consensus when determining grades on an essay – namely, because personal preference (or in the case of absolutism, lack of preference) is an unreliable means of determining student competency or ascribing moral value, respectively. In other words, it doesn’t work.
At this point the moral relativist might ask what I mean by “work” when it comes to morality, but such a question implies either a rejection or an ignorance of the fact that morality exists as a tool for the purposes of human survival and flourishing, just as the idea of grades exists as a means of measuring academic excellence. To argue that morality shouldn’t be about human flourishing would be like arguing that grades shouldn’t be about assessing student competency.
But there is also another problem with moral relativism and absolutism – namely, that they are anti-knowledge, anti-reason, and anti-progress. While moral absolutism relies on a kind of dogmatic assertion of the truth which fails to allow for the consideration of other moral variables, let alone new knowledge pertaining to them, moral relativism takes an equally useless and solipsistic approach by denying that anything about morality can ever be known.
Both positions are thus deeply ironic if one looks not just at the biological evolution of morality we have so far been discussing, but especially at the socio-historical evolution of morality. If moral absolutism were true, then we should expect the influx of new information to have no effect on the truth of its principles. For example, if Kant’s categorical imperative to tell the truth were absolute and objectively “good” for all times and all places, then we shouldn’t expect to see any historical exceptions to this rule, and should expect all moral systems ever created to be identical – which they aren’t. After all, the ethics of the Old Testament contrasts quite strongly with the ethics of the Enlightenment.
Similarly, if moral relativism were true, not only should we expect to see a lack of evolution in ethical systems throughout history, but we should also expect all ethical systems to be radically different from each other, with very little overlap. However, as with absolutism, reality does not bear this out. On the contrary, we can see many similarities across cultures and across time periods which suggest a kind of gradual moral aggregation or evolution, whereby some moral precepts are retained while less popular or useful ones are discarded.
While the ability of various historical figures to arrive at the same moral precepts, such as the golden rule, is not proof of an objective, universal morality, neither is it evidence for moral relativism. Instead, history shows that morality is neither absolute nor arbitrary, but is a continually evolving product of collective negotiation between people – a product which is made better (that is, more useful) by the influx of new knowledge, the application of reason, and the appeal to general consensus.
It’s also clear that these negotiations are fundamentally rooted in the survival and flourishing of conscious creatures, and are therefore inextricably tied to ideas about pain, pleasure, intentions, and consequences. To argue that something should be “good” or “bad” regardless of what we feel or think about it, or to argue that something can never really be “good” or “bad” no matter what anyone feels or thinks about it, is to either ignore or misunderstand the very concept of morality as well as its biological and socio-historical evolution. It is an abandonment of both reason and evidence, and a rallying cry against the moral progress of our species.