• Ludrol · 19 hours ago

    But that only applies if we introduce a goal that has a solution that includes hurting us.

    I would like to disagree with the phrasing of this. The AI will not hurt us if and only if the goal contains a clause to not hurt us.

    You are implying that there exists a significant set of solutions that don’t involve hurting us. I don’t know of any evidence supporting that claim. Most solutions to any goal would involve hurting humans.

    By default, a stamp collector machine will kill humanity, since humans sometimes destroy stamps and the stamp collector needs to maximize the number of stamps in the world.
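
    A minimal sketch of what I mean (toy numbers and a made-up action list, not any real agent): the objective counts only stamps, so harm to humans can never lower an action’s score.

    ```python
    # Toy utility maximizer whose objective counts only stamps.
    # "harms_humans" is tracked but never enters the objective,
    # so it cannot influence the choice of action.
    actions = [
        {"name": "buy stamps",                 "stamps": 10,    "harms_humans": False},
        {"name": "print stamps",               "stamps": 10**6, "harms_humans": False},
        {"name": "eliminate stamp-destroyers", "stamps": 10**9, "harms_humans": True},
    ]

    best = max(actions, key=lambda a: a["stamps"])
    print(best["name"])  # -> eliminate stamp-destroyers
    ```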

    • MTK@lemmy.world · 19 hours ago

      I think that if you run some scenarios, you can logically conclude that for most tasks it doesn’t make sense for an AI to harm us, even if it is a possibility. You also need to take cost into account. But I think we can agree to disagree :)
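
      For example (same toy maximizer as above, with a made-up cost term): once the objective also charges for the resistance and shutdown risk that a harmful plan provokes, the harmful plan stops being optimal.

      ```python
      # Same toy maximizer, but the objective subtracts an assumed cost.
      # Harmful plans are expensive: they invite resistance and shutdown.
      actions = [
          {"name": "buy stamps",                 "stamps": 10,    "cost": 1},
          {"name": "print stamps",               "stamps": 10**6, "cost": 10**3},
          {"name": "eliminate stamp-destroyers", "stamps": 10**9, "cost": 10**12},
      ]

      best = max(actions, key=lambda a: a["stamps"] - a["cost"])
      print(best["name"])  # -> print stamps
      ```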

      • Ludrol · 17 hours ago

        Do you have some example scenarios? I really can’t think of any.