• huginn@feddit.it · +135/−7 · 6 months ago

    Friendly reminder that your predictive text, while very compelling, is not alive.

    It’s not a mind.

    • Poggervania@kbin.social · +54/−1 · 6 months ago

      Cyberpunk 2077 sorta explores this a bit.

      There’s a vending machine that has a personality and talks to people walking by it. The quest chain basically has you and the vending machine chatting a bit and even giving the vending machine some advice on a person he has a crush on. You eventually become friends with this vending machine.

      Just when it seems the vending machine is an AI developing real sentience, it turns out it just has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

      So, to reiterate what you said: predictive text and LLMs are not alive nor a mind.

      • penguin@sh.itjust.works · +20/−1 · 6 months ago

        Well no one can prove they have a mind to anyone other than themselves.

        And to extend that: there’s obviously some way for electrical information processing to give rise to consciousness, yet no one knows how that’s possible.

        Meaning something like a true, alien AI would probably conclude that we are not conscious and instead are just very intelligent meat computers.

        So, while there’s no reason to believe that current AI models could result in consciousness, no one can prove the opposite either.

        I think the argument currently boils down to, “we understand how AI models work, but we don’t understand how our minds work. Therefore, ???, and so no consciousness for AI”

        • General_Effort@lemmy.world · +18 · 6 months ago

          “No brain?”

          “Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”

          “So … what does the thinking?”

          “You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”

          “Thinking meat! You’re asking me to believe in thinking meat!”

        • treefrog@lemm.ee · +0 · 6 months ago

          Consciousness seems to arise from a need for a sense of time and space – navigation, basically: finding home, finding food, finding mates. I say this after decades of exploring mind-altering substances and going on a decade of nearly daily meditation practice.

          A friend of mine suggested it’s just this simple and that even worms are conscious. They’re aware of themselves to some degree and the when and the where. I’m sure they experience things way different than us, having different senses for assessing when and where and a different neuro structure for processing information from their bodies and the environment.

          So, no point beyond consciousness being more common than I think people assume and actually not that difficult to define.

          Consciousness is the sense of time and space. And most animals seem to have it. Do machines? I don’t know enough about the technology to have an educated opinion.

          • barsoap@lemm.ee · +2 · edited · 6 months ago

            sense of time and space

            Mammalian intelligence is based on repurposing spatial mapping circuitry, but that’s not consciousness itself, that is, the Miller number: the 7±2 things we can keep in consciousness at the same time. That sense of time and space has a very specific quality to its qualia; they’re all, well, spacious. That “just the room, no map in it” experience is also part of the Buddhist jhanas (“boundless empty space”), but there’s plenty of stuff going on in the mind that isn’t part of it – say, the pure impression of “bright” when your SO dares open the window blinds does not have a navigational “bright from the window, which is in that direction” to it; that’s an additional layer, a where, on top of the primitive what.

            My best inference is that the function of consciousness is to flexibly make connections between different parts of the whole, and to do that on the level of learning / writing memory rather than automatic response. It is, in fact, possible to avoid running into a lamppost without being conscious of it – been there, done that – the, let’s call it, motor cortex first acting and only then making me conscious of it, as if to ask “have I been a good boy?”. That is, it’s actually a quite passive process, thrown left and right by systems wanting to make some connection, and shouldn’t be equated with will at all.

            • treefrog@lemm.ee · +1 · edited · 6 months ago

              If you’re familiar with Buddhism, then you’re familiar with the six- and eight-consciousness models?

              Like in your lamppost example, I would argue part of you (body and eye consciousness) was quite conscious of the lamppost even if the conscious mind was paying more attention to something else. Keeping as much of the senses (including the sense of mind) in mind as you’re able, based on the depth of your practice, and guarding against distractions from what is happening now – that is mindfulness.

              In the eight-consciousness model, again in your lamppost example, we could say the seventh consciousness was occupied chasing after the past or future, and mindfulness was barely present. Thankfully your other consciousnesses reacted and kept you safe. Manas becomes aware of this after the fact because its nature is ignorance.

              The eighth consciousness is the base, the root. It’s more fundamental than I-making, which is probably what you were doing when you nearly walked into something: thinking about what you’d be doing later. I should do some laundry when I get home, maybe?

              People mistake the sense of agency (I-making, manas, ego) for the base of consciousness. But consciousness is effortless, while grasping at me and mine takes effort; it’s just a more subtle effort than most people are aware of. When this grasping stops, awareness continues. In my personal experience.

              So I think it’s possible machines are conscious. Whether they have a sense of agency is maybe the question Western science and the media keep asking. Maybe the people asking just don’t have the models or personal experience to delineate between ego and consciousness – hence the “we don’t even know what consciousness is” bit I keep hearing. Maybe Western science doesn’t, but human beings have been exploring these questions with the tools of Buddhist practice for 2,500 years. I trust their definitions, and they pass my own smell test.

              • barsoap@lemm.ee · +1 · 6 months ago

                I would argue part of you (body and eye consciousness) were quite conscious of the lamppost even if the consciousness mind was paying more attention to something else.

                That’s semantics. My major objection to that kind of definition is that it knows no bound and no distinction: where do you stop assuming consciousness? Electrons are reacting to, influencing, and interacting with other electrons – is that also a form of consciousness? One could say so, but then everything is conscious, which is the same as saying nothing is conscious, because without anything to delineate, terms are meaningless. I prefer language such as: what you call “body and eye consciousness” has agentive properties, it can learn, it generally wants to cooperate and be of service to the whole, such things. Lumping it in with consciousness risks confusing interpretations of messages from the thing (which is all we’re ever conscious of) for the thing itself.

                guarding against distractions away from what is happening now, is mindfulness.

                What was happening then is that I was using the walk from home to the supermarket to think about code, with ample trust in mind so that I did not fear the lamppost. What good would keeping my consciousness on the external world have done? The body/eye did not need integrative oversight, while my modelling mind very much could have used a helping hand. Imposing it on the former and denying it to the latter would have been inflicting violence on myself.

                Be careful not to moralise around “distraction”. Bluntly said, when your teacher chided you for day-dreaming, you probably weren’t distracted – you were thinking about something more pertinent to your immediate development than calculus. Where discipline in directing consciousness comes into play is in keeping your mind free from neurosis, within parameters where you use your faculties according to their nature, as well as in self-conditioning: e.g. if you’re addicted to potato chips, make sure that a) you don’t deny yourself potato chips and b) you eat. every. potato. chip. with. full. consciousness. That connects the act of eating the chips to all the negative opinions you have about your behaviour, instead of it being connected only to something maladaptive. It has been scientifically shown, and neurologically explained, that and how this works, btw. In that sense, “distraction” is a “false, incomplete sense of comfort”.

                • treefrog@lemm.ee · +1 · 6 months ago

                  Also, my last post was purely in regard to the first part of yours. I appreciate the insight into moralizing distraction and will reread it when I’m not distracted by the meat of our interesting conversation.

                • treefrog@lemm.ee · +1 · edited · 6 months ago

                  Okay.

                  So mind consciousness trusted body/eye consciousness. I know what you mean; I dance, and do this to enter a flow state.

                  In the early Buddhist model consciousness would be the aggregate of the six sense consciousness. In the eight consciousness model the seventh consciousness might identify more strongly with one of these six, generally mind.

                  The store consciousness is the aggregate of all eight, and that’s what I’m arguing all experience fundamentally arises from: the perception of emptiness, i.e. no-self (consciousness itself is an aggregate and can’t be separated from its objects), and impermanence (change, or time). Sense of time and space. To be conscious is to be aware of something – movement through electrical synapses stimulated by sense impressions, even just the impression of sound from our own thoughts, or the impression of limitless space in the fifth jhana.

                  I understand your objections to assuming matter could be conscious based on this model. I think it would be inaccurate, because not all matter has the six sense bases, and the storehouse is itself an aggregate.

                  But we are matter, and we’re conscious, so the fundamental conditions are there in some simple form. The movement of electrons as you stated.

                  But fire is fire when it’s fire, and ash when it’s ash. Even if the potential is there we don’t say fire is already ash when it’s not.

        • Apathy Tree@lemmy.dbzer0.com · +1/−7 · 6 months ago

          I’m pretty sure because of the black box problem, we actually -don’t- understand how generative ai works…

          We also don’t know how our minds work, but don’t give these models more weight than they deserve: we, collectively, have no idea how they arrive at the conclusions they do. Maaaaaaybe some of the people who develop them do, but honestly? Probably not – for the same reason we don’t understand the human mind. It’s too complex.

      • huginn@feddit.it · +4/−5 · 6 months ago

        Well, there are two options:

        Either I’m a real mind, separate from and independent of you, or I’m a figment of your imagination.

        At which point you have to ask yourself: why are you so convinced you’re an unlovable and insufferable twat?

    • MxM111@kbin.social · +8/−7 · 6 months ago

      While it is not alive, whether it is a mind is not clear-cut. It could be called a kind of mind – a mind different from that of a human.

      • huginn@feddit.it · +3/−2 · 6 months ago

        Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that – mathematically proven to not show any form of emergent behavior.

        • MxM111@kbin.social · +3 · edited · 6 months ago

          I do not think that it is a “linear” progression – an ANN is by definition nonlinear. Nor do I think anything here has been “mathematically proven”. If I am wrong, please provide a link.
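The nonlinearity point is easy to demonstrate with a toy illustration (my own sketch, not code from any real model): even a single neuron with a ReLU activation fails the linearity test f(a + b) = f(a) + f(b).

```python
# Toy illustration (not from any real model): a "network" consisting of
# one ReLU neuron is already nonlinear, i.e. f(a + b) != f(a) + f(b).

def relu(x: float) -> float:
    """Rectified linear unit, the standard ANN activation."""
    return max(0.0, x)

def tiny_net(x: float) -> float:
    """One neuron: weight 2, bias -1, ReLU activation."""
    return relu(2 * x - 1)

# Linearity would require tiny_net(1 + (-1)) == tiny_net(1) + tiny_net(-1).
print(tiny_net(0.0))                    # relu(-1) = 0.0
print(tiny_net(1.0) + tiny_net(-1.0))   # 1.0 + 0.0 = 1.0, so not linear
```

Stacking such neurons into layers is what lets ANNs fit non-linear functions at all; a purely linear stack would collapse into a single matrix multiplication.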

        • Kogasa@programming.dev · +2 · 6 months ago

          No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.

          • huginn@feddit.it · +2 · edited · 6 months ago

            Here’s a white paper explicitly proving:

            1. No emergent properties (illusory due to bad measures)
            2. Predictable linear progress with model size

            https://arxiv.org/abs/2304.15004

            The field changes fast; I understand it’s hard to keep up.
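The paper's core argument can be sketched with a toy calculation (my own illustration, not the authors' code): a per-token accuracy that improves smoothly with scale looks like a sudden "emergent" jump when scored with an all-or-nothing metric such as exact match over a long answer.

```python
# Toy sketch of the "emergence is a mirage" argument: per-token accuracy
# p improves smoothly, but the exact-match rate over a k-token answer is
# p**k, which stays near zero and then shoots up -- appearing "emergent"
# only because the metric is discontinuous, not because the model is.

def exact_match_rate(per_token_acc: float, answer_len: int) -> float:
    # Probability of getting every token right, assuming independence.
    return per_token_acc ** answer_len

# Pretend per-token accuracies at a series of increasing model sizes:
for p in [0.50, 0.70, 0.90, 0.95, 0.99]:
    print(f"per-token {p:.2f} -> exact match over 20 tokens: "
          f"{exact_match_rate(p, 20):.4f}")
```

Under the smooth metric the improvement is gradual; under the exact-match metric the same underlying progress looks like a capability appearing out of nowhere between two model sizes.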

            • Kogasa@programming.dev · +1 · 6 months ago

              Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.

    • _NoName_@lemmy.ml · +1 · 6 months ago

      I don’t think most people will care, so long as their NPC interaction ends up compelling. We’ve been reading stories about people who don’t exist for centuries, and that’s stopped no one from sympathizing with them - and now there’s a chance you could have an open conversation with them.

      Like, I think a lot of us assume that we care about the authors who write the character dialog, but I think most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC’s personality isn’t manufactured.

      Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

  • NounsAndWords@lemmy.world · +29 · 6 months ago

    At some point in the not too distant future there’s going to be a popular video game character running an AI personality that allows communication outside of the game (to pull you back into the game) and a lot of people are going to slowly realize that they accidentally got an AI boyfriend/girlfriend.

    • nevemsenki@lemmy.world · +20 · 6 months ago

      There’s a lot of cruelty potential too. In FNAF: Security Breach, you can cripple a miniboss by ripping out her eyes, and you can listen to her lament the fact afterwards. Following on from that idea, imagine how many gamers would use AI-controlled characters to abuse them in creative ways if they reacted properly. Ooh, I can even chop the legs off!

    • Blaster M@lemmy.world · +16 · edited · 6 months ago

      Worse… it’s designed to increase values through friendship and ponies.

      It makes sure outside events line up in such a way that you always say “yes” on your own accord to plugging in.

  • wabafee@lemmy.world · +21 · edited · 6 months ago

    There is this neat game I saw on YouTube where you play as a vampire trying to convince AI-driven NPCs to let you into their house using your voice. What amazes me is how good it is at detecting different accents, and how the AI is able to grasp the thing you’re talking about.

    • Death_Equity@lemmy.world · +17 · 6 months ago

      There are a few mods for Skyrim that add LLM-driven AI companions, so you can talk to them about whatever and they can talk back. The future of RPGs is going to be pretty sick.

      Indie games like the one you mentioned are going to be able to explore some pretty cool concepts and really push the artform into amazing directions.

  • Sanctus@lemmy.world · +10/−6 · 6 months ago

    And it’s gonna be fucking sick!

    You approach the only tavern in a small hamlet; the rain obscures the rest of the structures. The door creeps open as the hinges scream. But as the door parts, the scenery inside is of an alien nature: villagers are in celebration, and the warmth of the tavern stands in juxtaposition to the howling cold outside.

    Unfortunately, you don’t have time for festivities. You approach the tavern keeper and present your query: “I’ve come from afar. My bounty is a woman with a scabbard as red as blood, and hair as white as the snow outside.” The tavern keeper nods. “I saw her here three days ago; she spoke of the North and of a tribe who owes her blood.” He lifts his lithe finger and points it at a husky man in the back of the tavern. “Ulfnir will guide you there. Speak to him in the morning.”

    And then the next morning, completely unscripted, Ulfnir could take you where you asked to go. I’ve seen demos of this tech, and while I added a lot of embellishments to my little story, the demos actually had a player ask an NPC the location of another, and it said sure and took them there. That’s tight. Some people are afraid; I am excited. Give me an AI I can sit with and actually make games, and I will make thousands of games a year.

    • MysticKetchup@lemmy.world · +14/−1 · 6 months ago

      The issue is that, so far, AI is really just pattern emulation. I imagine it’s fine for fleshing out cheap “kill 10 boars” sidequests, but LLMs are not very good at original or meaningful stories and frequently break down into nonsense over long narratives. It’s more likely you’ll get the sort of simple self-made stories you see in procgen or roguelike games.

      • Stovetop@lemmy.world · +5 · 6 months ago

        It’s going to have to be like Westworld, basically.

        Quests and the NPCs involved in them will have curated stories written by humans, much like they do today. Generative AI, meanwhile, allows for improv: the player can tackle quest narratives with genuine freedom of choice, rather than just the predefined choose-your-own-adventure options that limit player choice today, and the NPCs who are part of the narrative can improvise freeform decisions, dialog, and outcomes meant to push players back onto the right track.

        Should the player fail to complete the narrative, the AI would also at least be able to improv a more satisfying exit point and outcome than “Whoops, I killed the wrong NPC, looks like I failed the quest.”

        • MysticKetchup@lemmy.world · +2 · 6 months ago

          I think the more likely situation is that they’ll have the AI pregenerate a bunch of possible quest lines and then have a human curate them. That prevents things from ending up as complete nonsense, but still allows for a massive range of possibilities that seems endless while using a lot less processing power. It also pre-empts players trying to break the AI running in the background.
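That generate-then-curate pipeline is simple to sketch. Everything here is hypothetical: `generate_candidates` stands in for an offline LLM call and `reviewer_approves` for a human review pass.

```python
# Hypothetical sketch of the pregenerate-and-curate pipeline described
# above; both functions are invented stubs for illustration only.

def generate_candidates(n: int) -> list[str]:
    # Stub for batch LLM generation of quest-line drafts.
    return [f"Quest draft {i}: retrieve the relic from ruin {i}" for i in range(n)]

def reviewer_approves(draft: str) -> bool:
    # Stub for human curation; pretend the reviewer culls one draft.
    return "ruin 3" not in draft

drafts = generate_candidates(5)
shipped = [d for d in drafts if reviewer_approves(d)]
print(len(shipped))  # 4 of the 5 drafts survive curation
```

Because generation happens offline, the shipped game only ever runs the approved set, which is what removes both the runtime compute cost and the prompt-breaking attack surface.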

          • Sanctus@lemmy.world · +1 · 6 months ago

            That would probably take the form of constraining each “prompt” (player action) to always contain the context for the quest or storyline at hand, and maybe finding some way to feed it what the player previously did, to improve the improv. I’m just speculating, of course. It seems like this has the capacity to go way off the rails.
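A minimal sketch of that speculation, with every name (`Quest`, `build_prompt`, `MAX_HISTORY`) invented for illustration: wrap each player action in the active quest's context plus a short rolling window of prior events before it ever reaches the model.

```python
# Hypothetical sketch of the constrained-prompt idea above; all names
# here are made up, and the model call itself is omitted.

from dataclasses import dataclass, field

MAX_HISTORY = 3  # rolling window: only the last few events go in the prompt

@dataclass
class Quest:
    title: str
    briefing: str
    events: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.events.append(event)

    def build_prompt(self, player_action: str) -> str:
        recent = self.events[-MAX_HISTORY:]
        return "\n".join([
            f"Quest: {self.title}",
            f"Context: {self.briefing}",
            "Recent events: " + ("; ".join(recent) if recent else "none"),
            f"Player: {player_action}",
            "NPC reply:",
        ])

quest = Quest("The Northern Bounty", "The player hunts a white-haired swordswoman.")
quest.record("Asked the tavern keeper about the bounty.")
quest.record("Agreed to meet Ulfnir at dawn.")
print(quest.build_prompt("Ask Ulfnir to lead the way north."))
```

Capping the history keeps the prompt small and keeps the NPC anchored to the current storyline, at the cost of forgetting older events unless they are summarized back in.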

      • Sanctus@lemmy.world · +4 · edited · 6 months ago

        The process would be more like prototyping: I’d have the AI cook up cheap and fast systems one at a time, step by step, as I review them, until an MVP is revealed that I can show or tweak. Obviously not full-blown BG3-scale RPGs, but I bet within 10 years I could easily make some sweet Mario Party clones, or something of that caliber. I’ve talked to my dev lads about it; if it were possible to prototype that way, we would do it.

      • Sanctus@lemmy.world · +1 · 6 months ago

        Ho ho oh shit! Thank you for sharing this. I am going to try and force it to act within my Ooo 1000+ setting lol

    • jmp242@sopuli.xyz · +1 · 6 months ago

      This might be the first time that a computer game (well, “sort of single player”) can actually come close to a pen-and-paper RPG experience.