• 5BC2E7@lemmy.world · 1 year ago

    Well, now this is getting interesting beyond gossip. I doubt they made a significant AGI-related breakthrough, but it might be something really cool and useful.

    • guitarsarereal@sh.itjust.works · 1 year ago

      According to the article, they got an experimental LLM to reliably perform basic arithmetic, which would be a pretty substantial improvement if true. I.e., instead of stochastically guessing or offloading it to an interpreter, the model itself was able to reliably perform a reasoning task that LLMs have struggled with so far.

      It’s rather exciting, tbh. It kicks open the door to a whole new universe of applications, if true. It’s only technically a step toward AGI, though, since if AGI is possible, every improvement like this counts as a step towards it. If this development is really what triggered the board coup, though, then it sort of makes the board coup group look even more ridiculous than they did before. This is step 1 to making a model that can be tasked with ingesting spreadsheets and doing useful math on them. And I say that as someone who leans pretty pessimistic in the AI safety debate.
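For anyone curious, the “offloading it to an interpreter” approach mentioned above works roughly like this: the model emits an arithmetic expression as text, and a wrapper computes it with real arithmetic instead of trusting the model’s token-by-token guess. A minimal Python sketch (the expression and wrapper are illustrative, not OpenAI’s actual tooling):

```python
import ast
import operator

# Map AST operator nodes to real arithmetic, so the expression the
# model emitted gets computed exactly rather than "guessed".
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def eval_arithmetic(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression emitted by a model."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

# e.g. the model writes "137 * 48 + 9" and the wrapper computes it:
print(eval_arithmetic("137 * 48 + 9"))  # 6585
```

The point of the rumoured breakthrough, as described, is that the model itself gets the right answer without needing this kind of external crutch.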

      • maegul (he/they)@lemmy.ml · 1 year ago

        Being a layperson in this, I’d imagine part of the promise is that once you’ve got reliable arithmetic, you can get logic and maths in there too, and so get the LLM to actually do more computer-y stuff, with the whole LLM/ChatGPT layer wrapped around it as the interface.

        That would mean more functionality, and perhaps a lot more of it works and scales, but also perhaps more control, predictability, and logical constraints. I can see how the development would get some people excited. It seems like a categorical improvement.

        • FaceDeer@kbin.social · 1 year ago

          If that’s the case, then bad news for OpenAI’s “moat” (and for people arguing for restraint in general): there have been some recent breakthroughs in getting open-source LLMs trained to understand math as well.

          It’d be hilarious if OpenAI’s board went through huge turmoil, tanked tens of billions of dollars’ worth of investments, and disrupted their partnership with Microsoft to protect this huge revolution they’ve got brewing in their most secret and secure of laboratories… and then someone posts “hey, I got my AI Waifu to count good, check out this github to see how I did it” on Reddit.

          • Buddahriffic@lemmy.world · 1 year ago

            It also calls into question (well, it adds to the questions; they were already raised) the whole premise of IP law that “if we don’t protect it properly, no one will want to invent things.” It seems to me that people just like creating things, and humanity has a strange habit of converging on new inventions from multiple directions. Kinda like how calculus was invented by two different people at the same time.

        • perviouslyiner@lemm.ee · 1 year ago

          I’ve always wondered why the text model didn’t just put its output through something like MATLAB or Mathematica once it got as far as producing something that requires domain-specific tools.

          Like when Prof. Moriarty tried it on a quantum physics question: it got as far as writing out the correct formula before failing to actually calculate the result.
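That hand-off (the model writes the formula, an external engine does the calculation) can be sketched in a few lines, with Python’s stdlib standing in for MATLAB/Mathematica. The photon-energy formula and values here are purely illustrative, not from the Moriarty example:

```python
# A model often gets as far as writing the correct formula as text...
formula = "E = h * f"            # photon energy, as a model might emit it
values = {"h": 6.62607015e-34,   # Planck constant (J*s)
          "f": 5.0e14}           # frequency (Hz)

# ...and a domain tool, not the LLM, performs the actual calculation.
def evaluate(formula: str, values: dict) -> float:
    rhs = formula.split("=", 1)[1]
    # eval() restricted to the supplied symbols; fine for a sketch,
    # a real system would hand this to a proper CAS instead.
    return eval(rhs, {"__builtins__": {}}, values)

print(evaluate(formula, values))  # ≈ 3.31e-19 J
```

Tool-use plugins for ChatGPT (e.g. the Wolfram plugin) work on essentially this principle, just with a much more capable engine on the receiving end.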

          • hamptonio@lemmy.world · 1 year ago

            There is definitely a lot of effort in this direction; it seems very likely that a hybrid system could be very powerful.

      • Wanderer@lemm.ee · 1 year ago

        I kinda just realised the two aspects of this: the LLM part and the basic maths part. Doesn’t this look set to destroy thousands of accounting jobs?

        Surely this isn’t far off doing a lot of the accounting work. Maybe even an app that a small business puts their info into; the app keeps track of it for a year, and then it goes to an accountant who needs to look it over for an hour instead of sorting all the shit out for 10 hours.

    • Benj1B@sh.itjust.works · 1 year ago

      The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

      Definitely seems AGI-related. It has to do with acing mathematical problems; I can see why a generative AI model that can learn, solve, and then extrapolate mathematical formulae could be a big breakthrough.