• JFranek@awful.systems · 2 months ago

    When I read "Excel Copilot", I thought "they finally added a chatbot that lets you generate a spreadsheet/range/table processing a data source". Like "copilot, create a table that aggregates prices by category from table xyz".

    To which I was like "Ok, maybe that could be useful to some of the many non-technical Excel users." I wasn't prepared for whatever this is.

    I mean, with vibe-coding/excelling, you at least eventually get something that can run deterministically.
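
    For the record, the deterministic version of that ask is tiny. A sketch, assuming an Excel table named xyz with Category and Price columns (UNIQUE and the F2# spill reference need Excel 365):

        F2: =UNIQUE(xyz[Category])                  (spills the distinct categories down column F)
        G2: =SUMIF(xyz[Category], F2#, xyz[Price])  (one price total per spilled category)

    Two cells, no chatbot, and the same answer on every recalculation.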

    Are we… are we gonna start seeing terminally AI-pilled bozos implementing gacha mechanics in data pipelines?

    • Tar_Alcaran@sh.itjust.works · 2 months ago

      The most useful thing would be if mid-level users had a system where they could just go "I want these cells to be filled with the second word of the text in the cell next to it", and it would just give you a few commands to use and maybe autofill them (something like the sketch below).

      But no, that would be useful and checkable.
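
      A sketch of what that could look like, assuming the source text sits in A2 and the result goes in B2 next to it:

          B2: =INDEX(TEXTSPLIT(A2, " "), 2)

      TEXTSPLIT needs Excel 365; on older versions, the classic pad-and-slice trick does the same job:

          B2: =TRIM(MID(SUBSTITUTE(A2, " ", REPT(" ", 100)), 101, 100))

      Copy B2 down the column and it fills itself in, deterministically and checkably.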

      • BlueMonday1984@awful.systems · 2 months ago

        It also wouldn’t give Microsoft something to justify AI’s existence with - they aren’t selling “automated Excel commands”, they’re selling “magical chatbots which do everything”.

      • HedyL@awful.systems · 2 months ago

        > The most useful thing would be if mid-level users had a system where they could just go "I want these cells to be filled with the second word of the text in the cell next to it",

        In such a case, it would also be very useful if the AI would ask for clarification first, such as: “By ‘the cell next to it’, you mean the cells in column No. xxx, is that correct?”

        Now I wonder whether AI chatbots typically do that. In my (limited) experience, they often don’t. They tend to hallucinate an answer rather than ask for clarification, and if the answer is wrong, I’m supposedly to blame because I prompted them wrong.