LOOK MAA I AM ON FRONT PAGE

  • Communist@lemmy.frozeninferno.xyz · 5 hours ago

    That indicates that this particular model does not follow instructions, not that the architecture is fundamentally incapable of it.

    • Knock_Knock_Lemmy_In@lemmy.world · 5 hours ago

      Not “this particular model”. Frontier LRMs such as OpenAI’s o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking.

      The paper shows that Large Reasoning Models as defined today cannot interpret instructions. Their architecture does not allow it.

      • Communist@lemmy.frozeninferno.xyz · 4 hours ago

        Those particular models. It does not prove that the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique and that none of those models are using the right one; that’s the possibility they would need to rule out.

        This proves the issue is widespread, not that it is fundamental.

        • 0ops@lemm.ee · 3 hours ago

          Is “model” not defined as architecture + weights? Those models certainly don’t share the same architecture. I might just be confused about your point, though.

          • Communist@lemmy.frozeninferno.xyz · 2 hours ago

            It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.

            Essentially, they did not prove the issue is fundamental. And those models have pretty similar architectures; they’re all transformers trained in a similar way. I would not say they have different architectures.

        • Knock_Knock_Lemmy_In@lemmy.world · 2 hours ago

          The architecture of these LRMs may make monkeys fly out of my butt. It hasn’t been proven that the architecture doesn’t allow it.

          You are asking me to prove a negative. The onus is on showing that the architecture can reason, not on proving that it can’t.