While the thought of lawyers lawyering with AI gives me the icks, I also understand that at a certain point it may play out like the self-driving car argument: once the AI is good enough, it will be better than the average human, since I think it's obvious to everyone that human lawyers make plenty of mistakes. So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, it's still a net gain. On the other hand, though, this could lead to complacency that drives even more injustice.

  • V0ldek@awful.systems · 12 points · 2 days ago

    Feels like overlooking the same issue as with every other AI use

    When a human makes a mistake and is called out, they can usually fix the mistake. When genAI outputs nonsense, it's fucking nonsense; you can't fix something that's fundamentally made up, and if you ask it to fix it, it'll just respond with more nonsense. "I hallucinated this case? Certainly! Here are 3 other cases you could cite instead:" followed by 3 new made-up cases.

  • Architeuthis@awful.systems · 24 points · 2 days ago

    So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, it’s still a net gain.

    thats-not-how-any-of-this-works.webm

  • corbin@awful.systems · 11 points · 2 days ago

    It’s hard to get into the article’s mood when I know that Lexis not only still exists but is now part of the Elsevier family; this is far from the worst thing that attorneys choose to do to themselves and others. Lawyers have been caught using OpenAI products in court filings and court appearances, and they have been punished accordingly; the legal profession does not seem prepared to let a “few hallucinated citations go overlooked,” to quote the article’s talking head.

  • lagoon8622@sh.itjust.works · 11 points · 2 days ago

    They will ignore your errors and respond with errors of their own. AI will decide you’re guilty and deny your appeal.

    A later case exactly like yours will result in an innocent verdict because the case used one different word and the butterfly effect will cause the AI to add the word “not” to the verdict.

    AI will conclude that your case was ruled in error, but there's nothing it can do because the appeal was already denied.

  • LousyCornMuffins@lemmy.world · 3 points · 2 days ago

    Yeah, I don't care about the raw number of mistakes; I care whether the mistakes are severe enough to throw the case. Stuff like missing filing deadlines.

  • DeathsEmbrace@lemmy.world · 5 points · 3 days ago

    Except nobody wants to talk about that 1.2 being it thinking green is an object or something completely fucked.

    • artifex@piefed.social (OP) · 2 up / 4 down · 3 days ago

      Yeah, one real concern is that bottom-of-the-barrel lawyers will continue to just use their $20/month ChatGPT subscription, and not something more lawyer-centric that will (eventually) be able to weed out the true stupidity almost 100% of the time.

      • wizardbeard@lemmy.dbzer0.com · 7 points · 2 days ago

        Subscription? You think these people are paying to use the slop machine? I'd expect they're using it partly because the free tier costs them nothing.