• Communist@lemmy.frozeninferno.xyz · 10 hours ago

    I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for the assertion.

    Do we know that they don’t reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.

    If someone can objectively answer “no” to that, the bubble collapses.

    • Knock_Knock_Lemmy_In@lemmy.world · 4 hours ago

      “Do we know that they don’t reason and are incapable of it?”

      “even when we provide the algorithm in the prompt—so that the model only needs to execute the prescribed steps—performance does not improve”
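
      For reference, that experiment amounts to handing the model a complete procedure and only grading whether it executes the steps. Below is a minimal sketch of what that could look like for Tower of Hanoi (one of the puzzles the paper uses); this is an illustration under my own assumptions, not the paper’s harness, and query_model / parse_moves are hypothetical placeholders.

      ```python
      # Rough sketch of the "algorithm in the prompt" setup: give the model the
      # full Tower of Hanoi procedure and only verify that it executed the steps.
      # Hypothetical illustration, not the paper's actual evaluation code.

      ALGORITHM = """To move n disks from peg A to peg C using peg B:
      1. Move the top n-1 disks from A to B (using C).
      2. Move the largest disk from A to C.
      3. Move the n-1 disks from B to C (using A).
      List every move as "disk,from_peg,to_peg", one per line."""

      def build_prompt(n: int) -> str:
          return f"{ALGORITHM}\nNow solve the puzzle for n = {n} disks."

      def verify_moves(n: int, moves: list[tuple[int, str, str]]) -> bool:
          """Replay the model's moves and check they legally solve the puzzle."""
          pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
          for disk, src, dst in moves:
              if not pegs[src] or pegs[src][-1] != disk:
                  return False  # disk is not on top of the source peg
              if pegs[dst] and pegs[dst][-1] < disk:
                  return False  # a larger disk was placed on a smaller one
              pegs[dst].append(pegs[src].pop())
          return pegs["C"] == list(range(n, 0, -1))  # everything ends on peg C

      # answer = query_model(build_prompt(10))        # hypothetical model call
      # print(verify_moves(10, parse_moves(answer)))  # hypothetical output parser
      ```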

      • Communist@lemmy.frozeninferno.xyz · 2 hours ago

        That indicates that this particular model does not follow instructions, not that the architecture is fundamentally incapable of it.

        • Knock_Knock_Lemmy_In@lemmy.world · 1 hour ago

          Not “this particular model”. Frontier LRMs such as OpenAI’s o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking.

          The paper shows that Large Reasoning Models as defined today cannot interpret instructions. Their architecture does not allow it.

          • Communist@lemmy.frozeninferno.xyz · 4 minutes ago

            Those particular models. It does not prove the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique and that none of those models are using the right one. That’s what they would need to prove wrong.