This is a nice post, but it has such an annoying sentence right in the intro:

At the time I saw the press coverage, I didn’t bother to click on the actual preprint and read the work. The results seemed unsurprising: when researchers were given access to AI tools, they became more productive. That sounds reasonable and expected.

What? What about it sounds reasonable? What about it sounds expected given all we know about AI??

I see this all the time. Why do otherwise skeptical voices always feel the need to put in a weakening statement like this? “For sure, there are some legitimate uses of AI” or “Of course, I’m not claiming AI is useless”: why are you not claiming that? You probably should be claiming that. All of this garbage is useless until proven otherwise! “AI does not increase productivity” is the null hypothesis! It’s the only correct skeptical position! Why do you seem to need to extend the benefit of the doubt here? Seriously, I cannot explain this in any way.

  • nightsky@awful.systems
    18 hours ago

    “For sure, there are some legitimate uses of AI” or “Of course, I’m not claiming AI is useless” like why are you not claiming that.

    Yes, thank you!! I’m frustrated by that as well. Another one I have seen way too often is “Of course, AI is not like cryptocurrency, because it has some real benefits [blah blah blah]”… uhm… no?

    As for the “study”, due to Brandolini’s law this will continue to be a problem. I wonder whether research about “AI productivity gains” will eventually become like studies of the efficacy of pseudo-medicine, i.e. the proponents will just make baseless claims that an effect is present, and that science is simply not advanced enough yet to detect or explain it.