Blog

Morality: the antidote to free will

https://maxwickham.substack.com/p/morality-the-antidote-to-free-will

I have always had a deep aversion to debates on the topic of morality, an irritation stemming from how loosely we define the core ground upon which we are arguing, and from the fact that, more often than not, each side has a completely different basis for that definition. Imagine for a moment an atheist arguing with a religious zealot over whether an action someone performed was right or wrong; if they already disagree on the morality of the action, neither has any real ability to persuade the other. From the point of view of the devout, who take as fact their belief in a universal morality, the atheist’s view that morality is simply a construct of humanity means any argument the atheist makes rests on a faulty foundation by definition.

The usual sidestep to this issue, I’ve found, that allows for some sort of coherent debate, is to take a set of actions both sides agree are right or wrong (or assume the other does, given societal norms), and then try to demonstrate some logical inconsistency: given that x is right, y must be wrong, and so on. This, however, is itself built upon the assumption that morality should obey some law of consistency or logic, something neither of the two previous conceptions (universal morality or a mere human construct) actually dictates.

Having said this, it is of course impossible to avoid discussion of the moral topic; whether or not I believe in its “fundamental” nature is irrelevant, since there is no denying its constant presence in our societal lives. So surely there must be some better, common, and agreeable definition upon which we can ground our fevered debates.


A couple of years ago I was listening to a discussion on ethics in which one of the participants described their position to the other in a form I would label ethical egoism. In this view, the reason to do good is to maximise your own happiness, which is generally best achieved by cooperating with others (“do good unto others and they’ll do good unto you”). Therefore, doing good leads to a greater positive experience for yourself. The second participant’s follow-up to this definition was roughly along the following lines:

“Imagine you were stranded on a desert island with several others and you all go exploring. By chance you happen to find some food and no one else is aware of this. Assuming there would be no benefit to you sharing said food and no negative repercussions if you didn’t tell anyone about it - in this case, according to the previous definition of your own morality, you wouldn’t share the food?”

The first participant struggled with this hypothetical for a moment before conceding the trap and stating that in fact they would share, contradicting their own previously stated moral thesis.

This response annoyed me a little, mainly because I think it completely misses the “point” of morality. I would argue that morality is not about making the correct decision; it’s about removing the ability to make a bad one. By this I mean that by the time the decision to act arrives, morality should already have impeded your free will so that you are unable to make the “bad” choice. For instance, I don’t avoid murdering people because I think it’s a bad thing to do; I am simply unable to do it, because the morality is, in a sense, baked into me.


To give a better explanation of this, I think it’s easier to imagine applying this concept to other moral hypotheticals. Take the famous game theory problem, the prisoner’s dilemma.

In this problem two criminals have been captured and are put in separate rooms. Each is told that if they snitch on the other and the other stays silent, they walk free and the other receives ten years in prison. If neither snitches they both receive one year, and if both snitch they each receive nine years. In this scenario, from the point of view of prisoner A, it doesn’t matter what prisoner B does - it is always better for A to snitch (and likewise for B). If B has snitched on A, then A gets a shorter sentence by also snitching (nine years instead of ten); if B has not snitched, again A gets less time (zero years versus one). So if both prisoners act in their own best interests they will both snitch and serve nine years, despite the fact that they could each have stayed silent and served one. The core problem is that neither prisoner has any way to enforce or know what the other will do, and there is no repercussion for screwing the other over.

Now imagine we add a new mechanism to this game. Before the prisoners are separated, they are able to write a contract that, if they both sign it, some magical force will make absolutely binding upon both of them - it is impossible by any means to break it. In this case the obvious thing for any sensible party to do is to sign a contract ensuring they both stay silent and serve one year each.

This, at its essence, is what morality is: this magical contract that is enforced on all who have “signed” it, no matter if they would be able to get away with breaking it. Not a decision, but a removal of free will.
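To make this concrete, here is a minimal sketch in Python (the year counts are the ones from above; the function and variable names are my own, and the “contract” is modelled simply as a restriction on the set of allowed choices):

```python
# Years prisoner A serves, indexed by (A's choice, B's choice); the game is symmetric,
# so B's sentence is the same table with the choices swapped.
YEARS_FOR_A = {
    ("stay_silent", "stay_silent"): 1,
    ("stay_silent", "snitch"): 10,
    ("snitch", "stay_silent"): 0,
    ("snitch", "snitch"): 9,
}

def rational_choice(allowed):
    """Pick the allowed choice with the best worst-case sentence; with both options
    available this coincides with the dominant choice (snitching)."""
    return min(allowed, key=lambda mine: max(YEARS_FOR_A[(mine, other)] for other in allowed))

# Without a contract both options are on the table, self-interest points to snitching,
# and both prisoners end up serving nine years.
print(rational_choice(["stay_silent", "snitch"]))  # -> 'snitch'

# The binding contract is not a promise to choose well: it deletes 'snitch' from the
# choice set before any decision is made, and both prisoners serve one year.
print(rational_choice(["stay_silent"]))            # -> 'stay_silent'
```

The point the sketch tries to capture is that the contract changes the inputs to the decision rather than the decision-making itself.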


Thomas Hobbes, in Leviathan, argued that in the state of nature - absent any social order - life would be “solitary, poor, nasty, brutish, and short,” as rational self-interest leads inevitably to conflict. To escape this, individuals agree to a social contract, surrendering certain freedoms to a sovereign power (the Leviathan) who enforces the terms of cooperation. Crucially, for Hobbes, the contract is only effective because it is externally enforced; without the threat of punishment, rational agents would defect whenever it suited them.

This is where a moral contract differs: enforcement is inherent, because the contract is itself binding. There is no moment of temptation followed by deterrence; the person simply cannot act against the contract (otherwise they cannot be said to be bound by that set of moral laws).


Given this premise - morality operating as a binding contract - I think the natural next step for any individual defining their morality is to ask: what binding contract, if everyone in the world were beholden to it, would they want to see? Of course the goals and beliefs of each person may differ, so a better, more universal definition might be: what contract would maximise human happiness? (In fact I would say it’s slightly more complicated, depending on what you mean by maximisation; for instance, making a million people’s lives slightly better while making one person’s life far worse may be seen as worse overall, despite improving the average.)
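To make that parenthetical concrete, here is a small, purely illustrative sketch - every number and the “worst-off” rule are invented - comparing two ways of aggregating happiness in the million-versus-one case:

```python
# Hypothetical happiness scores, invented purely for illustration.
baseline = [50] * 1_000_001                   # leave everyone as they are
redistribution = [51] * 1_000_000 + [5]       # a million slightly better off, one far worse

def average(scores):
    return sum(scores) / len(scores)

def worst_off(scores):
    return min(scores)

print(average(baseline), average(redistribution))      # 50.0 vs ~50.99: the average prefers redistribution
print(worst_off(baseline), worst_off(redistribution))  # 50 vs 5: judging by the worst-off prefers the baseline
```

Whichever aggregation you pick is itself a term of the contract; the sketch only shows that the choice is not a neutral one.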

This description may seem obvious, and not particularly different from the definition we would usually use, but it does give a nice solution to some other common moral conundrums.


At the risk of using endless examples, let’s now imagine the infamous trolley problem. A tram operator sees several people lying on the track ahead. The operator can switch to another track, but that track has a single individual lying on it. The choice is between doing nothing, in which case several people die, and switching tracks, killing the lone individual while saving the several. I think it’s quite easy to see that if people’s predetermined programming made switching tracks the compelled, “morally correct” action, this would in general improve the lives of the population (simply because more people are saved).

The common follow-up to this, however, is to now imagine someone walks into a doctor’s office. Unfortunately for this medical miracle of an individual, the observant doctor notices that they have three different organs compatible with three patients who will die in the next hour without transplants of those exact organs. In this case the doctor could murder the unsuspecting patient and save the three, or allow the three to die and leave the individual alone.

On the face of it this looks like the same moral dilemma as the trolley problem: the doctor can perform an action that results in the death of one but the survival of three. If the moral contract dictated that the doctor commit the murder, more people would be saved in that room; however, an additional consequence of such a term in the contract might be that people become scared to go to the doctor at all. Overall this could end up being worse for society, showing that under this system the two problems are not equivalent after all.
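A toy calculation makes the second-order effect visible. Every figure below is invented purely for illustration - the only point is that a rule which wins three-to-one in the consulting room can still lose once it changes people’s behaviour:

```python
# All figures are made up for illustration only.
population = 1_000_000
visit_rate = 0.5                      # fraction of people who see a doctor in a year
lives_saved_per_visit = 0.001         # average lives a routine visit saves
harvest_saves_per_year = 100          # extra lives saved if doctors may harvest organs
deterred_fraction = 0.25              # people too scared to visit under a harvesting rule

def lives_saved_per_year(harvesting_allowed: bool) -> float:
    visits = population * visit_rate * ((1 - deterred_fraction) if harvesting_allowed else 1)
    return visits * lives_saved_per_visit + (harvest_saves_per_year if harvesting_allowed else 0)

print(lives_saved_per_year(False))  # 500.0 lives saved per year without the harvesting rule
print(lives_saved_per_year(True))   # 475.0 - the deterred visits cost more than the harvesting saves
```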


So far we have generally assumed that whatever moral contract is designed, it is applied equally to all individuals. An interesting next step, then, is to imagine a contract that imposes morality on individuals in some random manner.

For instance, let’s say the contract defines the level of selfishness a person should have (assuming for now that this is a measurable metric). It may turn out that for societies to organise better, selfishness should follow some distribution - that is, a small number of people end up more selfish than the rest. Perhaps these people are more likely to have the temperament required to create companies or other institutions, and the sacrifice of having more selfish individuals in society - breaking some of the assumptions you can otherwise make about a person’s actions - is worth it because a better society results for all.
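As a sketch of what such a distribution might look like - the exponential shape, the cap, and the 0.3 threshold below are arbitrary choices of mine, not anything the argument depends on:

```python
import random

random.seed(0)

# Draw each person's "selfishness" from a skewed distribution: most people land
# near zero, while a small minority come out far more selfish.
def sample_selfishness(n_people: int) -> list[float]:
    return [min(1.0, random.expovariate(10)) for _ in range(n_people)]

population = sample_selfishness(10_000)
highly_selfish = [s for s in population if s > 0.3]
print(f"{len(highly_selfish)} of {len(population)} people fall in the highly selfish tail")
```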

This slightly modifies the definition of the contract: instead of a list of actions being right or wrong, each action is defined as right or wrong for each individual according to some probability distribution. Whilst interesting - and, I would argue, very present in the morality humanity has naturally evolved - this point is worth ignoring in most discussions, to avoid the endless complexity it brings.


One other thing to note when defining a moral contract is the limit of what contract can exist in practice. It is of course theoretically possible for a contract to define the right or wrong action for every conceivable situation in the universe, and by extension, given a well-defined notion of maximal human happiness, there is a universally “correct” answer to what the true contract should be, if it is defined as the one that maximises this metric. (As a quick side note, I have ignored here the circular problem that most people’s definition of happiness is already shaped by their existing instinctual moral beliefs, so any moral system itself changes the definition of the thing we aim to maximise.)

In reality, however, there are limits on the complexity of predefined decision-making we can embed into the human consciousness, whether through genetics or upbringing. There is a finite set of knobs we can play with, which is why I mentioned the very general metric of “selfishness.” The precise meaning of selfishness is entrusted to society to instil correctly in each person, rather than being spelled out in the inherent morality itself.
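To make the “finite set of knobs” idea concrete, here is a toy sketch in which the contract can set only a single coarse parameter, tuned against an entirely invented welfare function:

```python
# A toy version of the "finite knobs" idea: the contract cannot specify every situation,
# only a coarse parameter, so we search that small space for the setting that maximises
# an entirely invented welfare metric.
def welfare(selfishness: float) -> float:
    # Invented trade-off: cooperation falls as selfishness rises, but a moderate amount
    # of selfishness adds some enterprise, so the toy optimum is small but nonzero.
    return (1.0 - selfishness) + 2.0 * selfishness * (1.0 - selfishness)

candidates = [i / 100 for i in range(101)]   # the one knob the contract can turn
best = max(candidates, key=welfare)
print(f"Toy-optimal selfishness level: {best:.2f}")  # 0.25 under this made-up trade-off
```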


All in all, the core point and raison d’être of this article is my grievance with this arguably subtle nuance in how people discuss ethics. Morality, however you see it, only has meaning if a person is bound to it before they find themselves in a situation where it comes into play. At the point of decision-making it’s already too late! This removes the “consequences” discussion from the argument of “why” you should do good entirely, and makes the question simply “what do we want everyone’s good to be?” How we would embed our chosen good into others is itself an entirely different discussion.