Bentham might say that utilitarianism isn’t about comparing more or less arbitrary values of utility across different actions or outcomes, but about forcing ourselves to ask whether there is any utility to the outcome at all.
Is it better to donate money to cancer research, or to give the money to a beggar in the street? Entirely unclear; it’s essentially impossible to calculate the relative utility of these actions until you agree on some measure of utility. That’s fine; that’s not really what utilitarianism is for.
Is it moral for the state to execute people for their homosexuality, as the UK did in Bentham’s time? Maybe according to religious morals, or traditional morals, or duty ethics. Not according to utilitarianism. Absolutely nobody benefits from this, and the suffering is immense.
Utilitarianism, when applied correctly, forces us to critically investigate every action that causes suffering and ask: can this actually be justified?
I hate to break it to you, but all values are made up.
look at this user, they don’t even believe in universal moral truth
I am reminded of the goose chasing the person meme:
‘Where’s your source of universal moral truth?!’
What is universal? What is moral, and what is truth? Oh man, maybe I did smoke too much weed in the ’60s.
The thing is that utilitarians have this pseudo-arithmetic concept that looks objective when it isn’t. Other schools of thought are more openly subjective, and therefore more honest.
Do you have an example of this pseudo-arithmetic? You mean like the trolley problem, that saving five people is better than ~~saving~~ not murdering one? *ahem*
“…that murdering one person is better than letting five die”
FTFY
True, fixed it
Trolley is a good example. Or “You run into a burning house and can either save a dog or a human who is in a coma”. Like, don’t even pretend you have a metric for situations that specific. There is also longtermism, which I’m sure not all utilitarians subscribe to, basically saying there will be so many people in the future that it’s more important to invest in the technology of my stakeholders than to help our contemporaries. And it doesn’t matter that I’m rich, because I will have more offspring than the poor, so it’s a net positive. As if you could foresee all the consequences. What you can in fact foresee is the consequence of treating people as your equals.
Three good examples - I’d say that:
- the trolley problem is a reasonable application of utilitarianism, depending on no metric other than “it is good to stop a person from dying”. The main problem with applying it in practice is not the arithmetic (which is sound), but that you are almost never guaranteed that killing the one person will actually save the others.
- comatose man vs dog in a burning building is a good example of a case where utilitarianism can’t give you an answer, but it can give you a way of investigating the problem by discussing what utility means in this situation.
- longtermism is like the reverse of utilitarianism to me. Utilitarianism asks you to ignore the abstract and justify ethics based on actual consequences. Longtermism asks you to ignore the consequences in the present in favor of some made-up abstract future utility.
Ultimately, utilitarianism isn’t about calculating which situation gives more utility, but about critically investigating whether your actions actually make the world a better place.
We will assign this value based on supply and demand and call it “price”.
Ooh, you know what would be really funny? If we assigned people’s ability to live or die based on this.
I don’t know, it kinda sounds like that other arbitrary system. What was it called, crappy-talism?
*Kappa Talisman.
Exactly! It’s always subjective. You might even say it’s not turtles, but subjectivity, all the way down.
Ah yes, effective altruism.