• SmoothOperator@lemmy.world · +30/−1 · 7 days ago

    Bentham might say that utilitarianism isn’t about comparing more or less arbitrary values of utility in different actions or outcomes, but forcing ourselves to ask if there is any utility to the outcome.

    Is it better to donate money to cancer research, or to give the money to a beggar on the street? Entirely unclear: it's essentially impossible to calculate the relative utility of these actions until you agree on some measure of utility. That's fine; that's not really what utilitarianism is for.

    Is it moral for the state to execute people for their homosexuality, as the UK did in Bentham’s time? Maybe according to religious morals, or traditional morals, or duty ethics. Not according to utilitarianism. Absolutely nobody benefits from this, and the suffering is immense.

    Utilitarianism, when applied correctly, forces us to critically investigate every action that causes suffering and ask: can this actually be justified?

    • lugal@lemmy.dbzer0.com · +6/−3 · 7 days ago

      The thing is that utilitarians have this pseudo-arithmetic concept that looks objective while it isn't. Other schools of thought are more openly subjective and therefore more honest.

      • SmoothOperator@lemmy.world · +7 · edited · 7 days ago

        Do you have an example of this pseudo-arithmetic? You mean like the trolley problem, that saving five people is better than not murdering one?

        • lugal@lemmy.dbzer0.com · +1/−1 · 6 days ago

          Trolley is a good example. Or "You run into a burning house and can either save a dog or a human who is in a coma". Like, don't even pretend you have a metric for situations that specific. There's also longtermism, which I'm sure not all utilitarians subscribe to, that basically says there will be so many people in the future that it's more important to invest in the technology of my stakeholders than to help our contemporaries. And it doesn't matter that I'm rich, because I will have more offspring than the poor, so it's a net positive. As if you could foresee all the consequences. What you can in fact foresee is the consequence of treating people as your equals.

          • SmoothOperator@lemmy.world · +7 · edited · 6 days ago

            Three good examples. I'd say that:

            • the trolley problem is a reasonable application of utilitarianism, not depending on any other metric than “it is good to stop a person from dying”. The main problem with applying it in practice is not the arithmetic (which is sound), but that you are almost never guaranteed that killing the one person will actually save the others.
            • comatose man vs. dog in a burning building is a good example of a case where utilitarianism can't give you an answer, but it can give you a way of investigating the problem by discussing what utility is in this situation.
            • longtermism is like the reverse of utilitarianism to me. Utilitarianism asks you to ignore the abstract and justify ethics based on actual consequences. Longtermism asks you to ignore the consequences of the present in favor of some made-up abstract future utility. It's the opposite of utilitarianism.
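            The point in the first bullet (the arithmetic is sound, but the certainty assumption isn't) can be made concrete with a toy expected-value sketch. Everything here is an illustration, not a claim about real utilitarian method: the numbers and the single "lives saved" scale are assumed for the sake of the example.

            ```python
            # Toy expected-utility sketch for the trolley case (illustrative only).
            # Assumption: utility is measured purely as net lives saved, and the
            # only uncertainty is whether pulling the lever actually works.

            def expected_lives_saved(p_switch_works: float) -> float:
                """Expected net lives saved by pulling the lever,
                relative to doing nothing (the five die)."""
                # If the switch works: 5 saved, 1 killed -> net +4.
                # If it fails: 1 killed for nothing    -> net -1.
                return p_switch_works * 4 + (1 - p_switch_works) * (-1)

            # Under certainty, the textbook arithmetic clearly favors pulling:
            print(expected_lives_saved(1.0))  # → 4.0
            # But the answer flips to break-even once success is unlikely enough:
            print(expected_lives_saved(0.2))  # ≈ 0.0
            ```

            The fragile part is not the addition, it's the inputs: the probability and the utility scale are exactly the things the real-world situation never hands you.
            
            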

            Ultimately, utilitarianism isn’t about calculating which situation gives more utility, but about critically investigating whether your actions actually make the world a better place.