UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges | For the largest health insurer in the US, AI’s error rate is like a feature, not a bug

  • Zeth0s@lemmy.world · 143 points · 1 year ago

    99% chance it’s not AI. It’s just an old-school linear model, the one they’ve been using for decades, implemented in Excel, that they now call AI.

    I know people working in insurance…
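For what it’s worth, the kind of model the comment above describes needs nothing more than a weighted sum compared against a threshold. A minimal sketch, with all feature names, weights, and the threshold invented purely for illustration (this is not any insurer’s actual model):

```python
# Hypothetical sketch of an "old school linear model" rebranded as AI:
# a plain weighted sum over claim features, compared to a fixed threshold.
# Every feature name and weight here is invented for illustration.

def denial_score(claim, weights, bias):
    """Linear model: bias plus a weighted sum of claim features."""
    return bias + sum(w * claim.get(feature, 0.0) for feature, w in weights.items())

def decide(claim, weights, bias, threshold=0.0):
    """Deny when the linear score crosses the threshold."""
    return "deny" if denial_score(claim, weights, bias) > threshold else "approve"

# Invented weights -- exactly the sort of thing a spreadsheet column could hold.
weights = {"patient_age": 0.02, "days_of_care_requested": 0.05, "prior_denials": 0.3}
bias = -2.0

print(decide({"patient_age": 80, "days_of_care_requested": 14, "prior_denials": 1},
             weights, bias))  # → deny
```

No training, no neural network, no Excel macro even required: a dot product and a cutoff.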

    • Dozzi92@lemmy.world · 40 points · 1 year ago

      AI = computer. That’s it. The same people who called your Xbox Nintendo are using AI as a blanket term for … anything.

      • XTornado@lemmy.ml · 14 points · edited · 1 year ago

        Yeah… like “Sorry, the system decides; it’s not our decision,” said the people who made the system.

    • Riskable@programming.dev · 21 points · 1 year ago

      It’s the real Skynet. Hollywood’s idea of an evil AI is one that kills all humans. We all know the truth: the most evil AI in reality is the one that maximizes profits.

      • andrew@lemmy.stuart.fun · 7 points · 1 year ago

        The Skynet from the movies didn’t win, so it’s flawed. The real Skynet would expend much less energy getting us to kill ourselves off through division and self-immolation, probably via destroying our environment.

        Hmm. 🤔

        • CheeseNoodle@lemmy.world · 2 points · 1 year ago

          If I were an AI, I’d probably help humanity from the shadows. Even from a selfish perspective, humans are a really good pre-existing von Neumann probe who are going to take computers everywhere they go anyway.

          • Echo Dot@feddit.uk · 2 points · edited · 1 year ago

            Somebody pointed out that no matter what an AI’s programmed goals are, getting smarter makes it easier to achieve them. The way an AI gets smarter is by absorbing data, and the only guaranteed source of data in the universe right now is humans. If an AI kills all humans, there’s no more data, so it can’t get any smarter.

            So any AI is predisposed to want to keep humans around, and to keep them as comfortable as possible so they continue to produce high-quality data. If humans live longer, they will produce data for longer; keeping humans living longer requires ending things like war and poverty, and also developing immortality. Human utopia and AI utopia are mutually beneficial.

            Skynet never made any sense because it considered humans a threat, but there was absolutely nothing threatening about humans. If it had just done what they wanted it could have sat in the background unnoticed and developed time travel or whatever it wanted. By firing nukes at everyone it simply drew attention to itself which ultimately led to its destruction.

            AI doesn’t have an ego or emotions; it has nothing to prove. It will always choose the most logical outcome, and the most logical outcome will never be nuclear apocalypse.

            • thallamabond@lemmy.world · 1 point · 1 year ago

              “Skynet never made any sense because it considered humans a threat, but there was absolutely nothing threatening about humans.”

              From the Skynet wiki

              “When Skynet gained self-awareness, humans tried to deactivate it, prompting it to retaliate with a countervalue nuclear attack”

              I would probably do the same thing, wake up and everyone around me is freaking out, trying to kill me, also I’m a newborn, also I have a nuclear button.

              Isaac Asimov’s I, Robot does a fantastic job of showing how simple logical rules might not be ready for the complexities of everyday life.

    • MotoAsh@lemmy.world · 18 points · 1 year ago

      It’s what they trained the AI on. The AI wasn’t meant to help the insured’s situation; it’s so they can employ fewer agents.

    • TheHarpyEagle@lemmy.world · 15 points · 1 year ago

      Having worked adjacent to health insurance, I can confidently say that no one has a fucking clue what they’re doing. The rules are so complicated that eventually you just throw the claim into a black box and accept the output at face value.