• orca@orcas.enjoying.yachts · 15 hours ago (edited)

There is absolutely zero chance I would allow anyone to use AI to theorize about what they think I would say. Hell, I don’t like AI in its current state, and that’s the least of my issues with this.

    It’s immoral. Regardless of your relation to a person, you shouldn’t be acting like you know what they would say, let alone using that to sway a decision in a courtroom. Unless he specifically wrote something down and it was then recited using the AI, this is absolutely wrong.

It’s selfish. They used his likeness to deliver an apology they had no way of knowing he would make, and they did it to make themselves feel better. They could’ve written a letter in their own voices instead of turning this into some weird dystopian spectacle.

    “It’s just an impact statement.”

Welcome to the slippery slope, folks. We’re allowing AI into courtrooms, and not even for something cool (like quickly producing a 3D animation of a car accident to help explain, with actual human voices, what happened at the scene). Instead, we’re using it to sway a judge’s sentencing while putting an apology in a dead person’s mouth (using whatever tech you want, because that is not the main problem here) without his consent or even any written record of his thoughts (you know, like in a will).

Reducing these arguments to “AI bad” is lazy and reductive; AI itself is not even remotely the main gripe.

    • corsicanguppy@lemmy.ca · 4 minutes ago

      There is absolutely zero chance I would allow anyone to use AI to theorize about what they think I would say.

      If they based it on my Reddit history it’s got potential to be needlessly harsh to certain groups of life-underachievers, that’s for sure.