• ALoafOfBread@lemmy.ml
    4 days ago (edited)

    Most of the article is paywalled, but the main points seem to be that AI work is less creative/lower quality & people spend more time fixing it than they would have making it.

    That has not been my experience. On ‘less creative’ - that’s fair; I don’t think LLMs can be creative. But they can summarize information or rephrase/expand on things I say based on provided context, so I spend much less time on formatting and draft creation for text-based documents. I can have an agent draft things for me and then just tidy up.

    As for low-quality work products: again, not my experience. I use agentic AI regularly to automate simple but repetitive business tasks that would take me much longer to write code for myself. I am not an engineer, I am an analyst/consultant. I can code some things, but it is often not worth the time investment (many tasks are one-offs, etc).

    A friend of mine built an AI agent that can interpret pictures of charts and find the supporting data in our databases (to find out what other teams referenced for their analyses), and/or make a copy of the chart with modifications. It can also create seaborn charts from text descriptions using data from our database. Now a team of non-technical users can make seaborn charts without having to know Python. That is pretty powerful in terms of saving time & expanding productivity.
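    To make the “text description → seaborn chart” step concrete: a minimal sketch (purely illustrative - the spec format and function names like `validate_spec` are my own hypothetical choices, not the actual system) would have the LLM emit a small structured spec, which a thin wrapper validates and then renders with seaborn:

    ```python
    # Hypothetical sketch of a "text -> seaborn chart" agent step.
    # Spec format and names are illustrative, not the real implementation.

    ALLOWED_KINDS = {"bar", "line", "scatter"}

    def validate_spec(spec: dict) -> dict:
        """Check an LLM-produced chart spec before rendering.

        Expected keys: kind, x, y (column names in the query result).
        """
        if spec.get("kind") not in ALLOWED_KINDS:
            raise ValueError(f"unsupported chart kind: {spec.get('kind')!r}")
        for key in ("x", "y"):
            if not spec.get(key):
                raise ValueError(f"spec is missing {key!r}")
        return spec

    def render(spec: dict, rows: list[dict], out_path: str) -> None:
        """Render a validated spec with seaborn (imports kept local so the
        validation logic is testable without plotting libraries)."""
        import pandas as pd
        import seaborn as sns
        import matplotlib
        matplotlib.use("Agg")  # headless backend, no display needed

        df = pd.DataFrame(rows)
        plot_fn = {"bar": sns.barplot, "line": sns.lineplot,
                   "scatter": sns.scatterplot}[spec["kind"]]
        ax = plot_fn(data=df, x=spec["x"], y=spec["y"])
        ax.figure.savefig(out_path)

    # The agent would get the spec from the LLM's answer to the user's
    # text description, e.g.:
    spec = validate_spec({"kind": "bar", "x": "team", "y": "revenue"})
    ```

    The point of the validation layer is exactly what I argued above: the LLM output is never trusted blindly - it is constrained to a small set of known-good operations before anything runs.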

    It’s easy to shit on the tech, but it has legitimately useful applications that help productivity.

    Edit: downvote if you want, but it is ignorant to say that LLMs only produce garbage. It very much depends on the user and on the application.

    • quetzaldilla@lemmy.world
      4 days ago

      AI made a $2M mistake at the public accounting firm I worked at.

      Management responded by blaming and firing an entire team for not double-checking the AI output, even though doing so was literally impossible given the sheer volume of the output and their lack of experience.

      This will be you, sooner or later.

      • ALoafOfBread@lemmy.ml
        4 days ago

        I understand your perspective, but I do review the code. I also do extensive testing. I don’t use packages I’m unfamiliar with. I still read the docs. I don’t run code I don’t understand.

        Again, the quality of the output really comes down to the user and the application. It is still faster for me to do what I’ve outlined above, and it makes automating some tasks worth the ROI when they otherwise wouldn’t be.

    • Eheran@lemmy.world
      4 days ago

      Lemmy is mostly anti-LLM, hence the downvotes, regardless of how you use it.

      • ALoafOfBread@lemmy.ml
        4 days ago

        No, literally nothing like what I said. It could still be garbage if you didn’t understand or review the output. That’s why you understand and review the output.