• HedyL@awful.systems · 1 month ago

    Refusing to use AI tools or output. Sabotage!

    Definitely guilty of this. Refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

    I work in the field of law/accounting/compliance, btw.

    • HedyL@awful.systems · 1 month ago

      Maybe it’s also considered sabotage if people (like me) prompt the AI with 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts), and then refuse to keep trying. I guess we’re expected to try and try again with different questions until a correct answer comes out, and then use that one to “evangelize” about the virtues of AI.

      • Slatlun@lemmy.ml · 1 month ago

        This is how I tested too. It failed. Why would I believe it on anything else?

    • tazeycrazy@feddit.uk · 1 month ago

      You can definitely ask the AI for more jargon and add irrelevant details to make the output practically unreadable. Pass it through the LLM a few more times to add vocabulary, deep-fry it, and send it to management.