• huginn@feddit.it · 6 months ago

    Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that, and it has been mathematically proven not to show any form of emergent behavior.

    • MxM111@kbin.social · 6 months ago

      I do not think it is a “linear” progression. An ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.
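The nonlinearity point can be sketched in a few lines: a network with a ReLU activation fails the additivity test that any linear map would satisfy. The weights and inputs below are made up purely for illustration.

```python
import numpy as np

# Toy two-layer network with a ReLU activation.
# Weights are arbitrary, chosen only to make the point visible.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
W2 = np.array([1.0, 1.0])

def net(x):
    # ReLU between the two linear layers is what breaks linearity
    return W2 @ np.maximum(0.0, W1 @ x)

x = np.array([1.0, 0.0])
y = np.array([-1.0, 1.0])

# A linear map f would satisfy f(x + y) == f(x) + f(y); this one does not.
print(net(x + y), net(x) + net(y))  # the two values differ
```

Dropping the `np.maximum` (the ReLU) makes the two printed values equal again, which is exactly the distinction being argued about.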

    • Kogasa@programming.dev · 6 months ago

      No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

      • huginn@feddit.it · 6 months ago

        Here’s a white paper explicitly proving:

        1. No emergent properties (illusory due to bad measures)
        2. Predictable linear progress with model size

        https://arxiv.org/abs/2304.15004

        The field changes fast; I understand it’s hard to keep up.
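The core argument of the linked paper (arXiv:2304.15004) can be sketched numerically: if per-token accuracy improves smoothly, an all-or-nothing metric like exact match on a multi-token answer still looks like a sudden jump. The accuracy values below are illustrative, not taken from the paper.

```python
# Sketch of the "emergence as a mirage of the metric" argument:
# exact match on an L-token answer is p**L when each token is
# independently correct with probability p, so smooth growth in p
# produces what looks like a sharp capability jump.
L = 20  # answer length in tokens (illustrative)

for p in [0.80, 0.90, 0.95, 0.99, 0.999]:
    exact_match = p ** L  # probability that all L tokens are right
    print(f"per-token acc {p:.3f} -> exact-match {exact_match:.4f}")
```

Under the smooth metric (per-token accuracy) progress is gradual; under the discontinuous metric (exact match) the same numbers appear to "emerge" abruptly near p ≈ 1.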

        • Kogasa@programming.dev · 6 months ago

          Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.