LLMs are acing the MCAT, the bar exam, the SAT, etc. like they’re nothing. At this point their performance is superhuman. Yet they’ll often trip on dead-simple common-sense questions, and they struggle with creative thinking.

Is this proof that standardized tests are not a good measure of intelligence?

  • KevonLooney@lemm.ee · 9 months ago

    Maybe, but this is giving the AI a lot of help. No one rewrites visual questions for humans who take IQ tests. That spatial reasoning is part of the test.

    In reality, no AI would pass any test, because the first step is writing your name on the paper. Even that is beyond most AIs, because they never have to deal with the real world. They don’t actually understand anything.

    • kromem@lemmy.world · 9 months ago

      > They don’t actually understand anything.

      This isn’t correct, and research over the past year has shown it to be incorrect again and again.

      > The investigation reveals that Othello-GPT encapsulates a linear representation of opposing pieces, a factor that causally steers its decision-making process. This paper further elucidates the interplay between the linear world representation and causal decision-making, and their dependence on layer depth and model complexity.

      https://arxiv.org/abs/2310.07582
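
      To make that probing methodology concrete, here’s a minimal sketch of a linear probe in PyTorch. The dimensions and the synthetic data are stand-ins I picked for illustration; the paper itself trains probes on hidden states cached from Othello-GPT, paired with the true board state at each move.

      ```python
      # Minimal linear-probe sketch (illustrative, not the paper's code).
      # Idea: if a model's hidden states linearly encode the board, a single
      # linear layer should be able to read the board state back out.
      import torch
      import torch.nn as nn

      HIDDEN_DIM = 512   # hypothetical hidden size of the game model
      N_SQUARES = 64     # Othello board squares
      N_CLASSES = 3      # empty / mine / theirs ("opposing pieces" framing)

      # Stand-in data: in practice, these would be hidden states cached from
      # Othello-GPT and the true board state at the corresponding move.
      hidden_states = torch.randn(10_000, HIDDEN_DIM)
      board_labels = torch.randint(0, N_CLASSES, (10_000, N_SQUARES))

      probe = nn.Linear(HIDDEN_DIM, N_SQUARES * N_CLASSES)
      opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for step in range(1_000):
          logits = probe(hidden_states).view(-1, N_SQUARES, N_CLASSES)
          loss = loss_fn(logits.permute(0, 2, 1), board_labels)  # (N, C, 64)
          opt.zero_grad()
          loss.backward()
          opt.step()

      # High probe accuracy on held-out states means the board is linearly
      # decodable; the paper goes further and shows causal effects by
      # intervening on that representation.
      ```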

      > Sizeable differences exist among model capabilities that are not captured by their ranking on popular LLM leaderboards (“cramming for the leaderboard”). Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k=5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
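
      The “simple probability calculations” in that quote amount to a counting argument: the number of possible skill combinations explodes far beyond what any training set could plausibly cover. A back-of-the-envelope version, with numbers I’ve chosen for illustration rather than the paper’s:

      ```python
      # Illustrative counting argument (my numbers, not the paper's).
      from math import comb

      n_skills = 100   # assume a catalogue of 100 distinct language skills
      k = 5            # skills combined per task, as in the quoted k=5

      print(f"{comb(n_skills, k):,} possible 5-skill combinations")
      # -> 75,287,520
      ```

      If consistent success on random k-skill mixes can’t be explained by having seen those exact combinations during training, that’s evidence of composing skills rather than parroting.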

      > We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT).
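
      Structurally, that’s a two-stage prompting loop: first discover a reasoning structure, then follow it. A rough sketch of the flow (the module list, the prompt wording, and the llm() call are placeholders, not the paper’s actual prompts):

      ```python
      # Sketch of the select-and-compose flow described above.
      # `llm` stands in for any chat/completion API call.
      ATOMIC_MODULES = [
          "critical thinking",
          "step-by-step thinking",
          "breaking the problem into subproblems",
          "reflecting on possible errors",
      ]

      def llm(prompt: str) -> str:
          """Placeholder for a real model call."""
          raise NotImplementedError

      def self_discover(task: str) -> str:
          # Stage 1: select the reasoning modules relevant to this task...
          selected = llm(
              f"Task: {task}\nWhich of these reasoning modules help? "
              f"{ATOMIC_MODULES}"
          )
          # ...and compose them into an explicit reasoning structure.
          structure = llm(
              f"Task: {task}\nCompose the modules {selected} into a "
              "step-by-step reasoning structure."
          )
          # Stage 2: solve the task by following that structure at decode time.
          return llm(
              f"Follow this reasoning structure to solve the task.\n"
              f"Structure: {structure}\nTask: {task}"
          )
      ```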

      Just a few of the relevant papers you might want to check out before stating things as facts.