Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

[…]

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct false information before sharing said content, because LinkedIn won’t be held responsible for any consequences.

  • 42Firehawk@lemmynsfw.com · 9 points · 3 months ago

    If I commission a vending machine, get one that was made automatically and runs itself, and set it up to operate in my store, then I am responsible if it eats someone’s money without dispensing their item, dispenses the wrong thing, or dispenses dangerous products.

    This has already been decided, and it’s why you can open them up and fix them, and why each mechanism is controlled.

    An LLM making business decisions has no such controls or safety mechanisms.

    • Midnitte@beehaw.org · 2 points · 3 months ago

      An LLM making business decisions has no such controls or safety mechanisms.

      I wouldn’t say that - there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input (a rough sketch of that kind of loop is at the end of this comment).

      If it turns out the model suggests that someone kill themselves based on very specific input, do you not think they should be held accountable and made to retrain the model to prevent that from happening again?

      From an accountability perspective, there’s no difference between a text-generating machine and a soda-dispensing machine.

      The owner and builder should be held accountable, which puts a financial incentive on making these tools more reliable and safer. You wouldn’t hold Tesla blameless when its self-driving kills someone because they didn’t test it enough or build in enough safeguards – that’d be insane.
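
      A minimal sketch of the kind of guardrail-plus-logging loop described above, in Python. Everything here is a made-up placeholder, not LinkedIn’s or any vendor’s real pipeline: the generate() stub, the BLOCKED_PATTERNS list, and the flagged_for_review queue are all assumptions for illustration.

      ```python
      # Hypothetical post-generation guardrail: screen model output before surfacing it,
      # and log flagged drafts so they can feed review and later retraining.
      import re

      BLOCKED_PATTERNS = [
          r"\bkill (yourself|themselves)\b",  # self-harm encouragement
          r"\bguaranteed (returns|cure)\b",   # obviously misleading claims
      ]

      flagged_for_review: list[str] = []      # would feed audits and a retraining queue

      def generate(prompt: str) -> str:
          """Stand-in for a real model call."""
          return f"Draft post about: {prompt}"

      def guarded_generate(prompt: str) -> str | None:
          draft = generate(prompt)
          for pattern in BLOCKED_PATTERNS:
              if re.search(pattern, draft, flags=re.IGNORECASE):
                  flagged_for_review.append(draft)  # keep for audit / retraining data
                  return None                       # refuse to surface the output
          return draft

      if __name__ == "__main__":
          print(guarded_generate("my new job search"))
      ```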

      • 42Firehawk@lemmynsfw.com · 2 points · 2 months ago

        Stronger guardrails can help, sure. But getting new input and building a new model is the equivalent of replacing the entire vending machine with a different model from the same company when the old one is failing (to stick with the earlier analogy).

        The problem is that if you do the same thing with an LLM used for hiring or job systems, the failure and bias instead come from the model being bigoted, which, while illegal, is hidden inside a model that has essentially been trained on how to be a more effective bigot.

        You can’t hide your race (or anything else) from an LLM that was accidentally trained to recognize which job histories are traditionally Black.
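
        A minimal sketch of that proxy problem, using made-up toy data (the groups, names, and zip codes below are purely illustrative assumptions): deleting the protected column does not delete the signal, because a correlated proxy column reconstructs it.

        ```python
        # Toy illustration: "blinding" applicants by dropping the protected column does
        # not remove the signal, because a correlated proxy (zip_code) still carries it.
        applicants = [
            {"name": "A", "race": "X", "zip_code": "11111"},
            {"name": "B", "race": "X", "zip_code": "11111"},
            {"name": "C", "race": "Y", "zip_code": "22222"},
            {"name": "D", "race": "Y", "zip_code": "22222"},
        ]

        # Drop the attribute a screening model is not allowed to use.
        blinded = [{k: v for k, v in row.items() if k != "race"} for row in applicants]

        # In this toy data the proxy alone recovers the deleted column exactly, which is
        # why a model trained on "blinded" records can still learn to discriminate.
        proxy_map = {row["zip_code"]: row["race"] for row in applicants}
        recovered = [proxy_map[row["zip_code"]] for row in blinded]
        print(recovered)  # ['X', 'X', 'Y', 'Y'] matches the column that was deleted
        ```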