Utilitarian or Side Constraint Rules?
In his book Anarchy, State, and Utopia, Robert Nozick distinguishes two types of moral rules: utilitarian rules and side constraints. Utilitarian rules are essentially end goals that a person or society must reach. For example, there might be a moral rule saying that society must achieve the greatest amount of good possible. Under such a system, rules and institutions are implemented because they lead to this end, which is treated as the moral goal to aim for. Side constraints, by contrast, are not concerned with the end goal of their implementation, but with the actions themselves. For example, a side constraint might be ‘do not murder’. This counts as a side constraint if it is adopted without regard to what the consequences of not murdering would be. Murder is considered bad whatever consequences follow, even if a murder would increase the happiness and prosperity of a nation or group of people.
My question is: which of these two types of moral rules is correct? That is, are there intrinsic flaws or inconsistencies in one of these lines of thinking? I would contend that the utilitarian line of thinking does have such flaws.
First of all, in order to implement a utilitarian moral rule, there is a chain of actions leading up to the end goal: Action X leads to Action Y, which leads to Action Z, which finally achieves the goal. But there is a problem with this: when we perform Action X, we cannot look into the future and know what Actions Y and Z will be, because people have free will. If a person performs Action X hoping to reach the end goal, there is no way to know whether the goal will actually be achieved through this action. If people have free will, then the future is variable. Action X could lead to further actions, by the same person or by others, that achieve the end goal, or it could not. The future cannot be predicted with physical laws, assuming free will.
In addition, I think there are problems with many utilitarian goals, such as the greatest good. For example, if a politician says that we will fight some war, or implement some regulatory measure, in order to achieve the greatest good, there are two problems I see with this. First, how do we calculate the greatest good? To calculate it, we must assign numerical values to actions. For example, we would have to say that a murder is worth -10 goodness units, or something like that, and maybe that an act of charity is worth +4 goodness units. As far as I know, nobody actually does such calculations.
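To see what such a calculation would even have to look like, here is a toy sketch in Python. The action values are invented purely for illustration; nothing in utilitarian theory tells us what the right numbers are, which is part of the point:

```python
# Hypothetical "goodness units" for actions -- these numbers are made up
# for illustration; utilitarian theory gives us no way to fix them.
GOODNESS = {
    "murder": -10,
    "charity": +4,
    "theft": -3,
}

def total_good(actions):
    """Sum the goodness units of a sequence of actions."""
    return sum(GOODNESS[a] for a in actions)

# Two hypothetical courses of action to compare.
policy_a = ["charity", "charity", "theft"]   # 4 + 4 - 3 = +5
policy_b = ["charity", "murder"]             # 4 - 10   = -6

# A strict utilitarian rule would pick whichever scores higher.
best = max([policy_a, policy_b], key=total_good)
```

Even this trivial version shows the difficulty: the comparison only works once someone has already decided, arbitrarily, what each action is "worth".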
Second, to calculate the greatest good, we also need a time frame over which to calculate it. As far as we know, time goes on forever, or at least very far into the future. If we want to calculate the amount of good and bad that results from an action, we would need to tally it until the end of time, which would require an infinite number of calculations.
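The time-horizon problem can be made concrete with a small sketch. The per-year benefit and the discount rate below are numbers I am making up for illustration, and discounting itself is a workaround borrowed from economics, not something the original argument endorses:

```python
# With no time limit, even a tiny constant benefit per year grows
# without bound as the horizon extends -- the total never settles.
def undiscounted_good(per_year_benefit, years):
    return per_year_benefit * years

# One common workaround (an assumption borrowed from economics, not
# part of utilitarian theory itself) is to discount future good, so
# that distant years count for less and the infinite sum converges.
def discounted_good(per_year_benefit, discount, years):
    return sum(per_year_benefit * discount**t for t in range(years))

# At a 5% yearly discount (discount = 0.95), one unit of good per year
# converges toward the finite limit 1 / (1 - 0.95) = 20 units, no
# matter how far into the future we look.
```

Whether discounting the future is morally defensible is itself a contested question, so this only relocates the problem rather than solving it.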
What do you think? Feel free to agree or disagree in the comments section below.