For those using ChatGPT: if anything you post is used in a lawsuit against OpenAI, OpenAI can send you the bill for the court case (attorney fees and such), whether OpenAI wins or loses.
Examples:

- A defamation case by an Australian mayor because ChatGPT incorrectly stated that he had served prison time for bribery: https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/
- OpenAI sued for defamation after ChatGPT fabricates legal accusations against radio host: https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit
- Sarah Silverman sues OpenAI for copyright infringement: https://lemmy.ml/post/1905056
Attorney talking about their ToS (same link as post link): https://youtu.be/fOTuIhOWFXU?t=268
https://openai.com/policies/terms-of-use

"7. Indemnification; Disclaimer of Warranties; Limitations on Liability (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys' fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law."
If I'm understanding this correctly, you can be held liable for whatever ChatGPT produces in response to your queries if any of that content turns out to be damaging.
That's gotta be more to cover their ass than to come after you. Unless you use its generated text to sue the company, I don't think they would ever try to sue their users; otherwise everyone would stop using the platform, Microsoft would have a huge PR problem, and their stock price would drop. It just doesn't logically make sense for them to do that, unless they were sued by you for the content produced by your inputs.
It makes sense, right?
They produced a language model. It does nothing more than predict the next word. It will lie all the time; that's part of how it works. It makes stuff up from the input it gets.
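Just to illustrate the "predict the next word" point, here's a minimal, hypothetical sketch in Python (a toy bigram counter, nothing like GPT's actual architecture): it only learns which words tend to follow which in its training text and then generates whatever reads as plausible, with no notion of whether it's true.

```python
# Toy "predict the next word" model: counts which word follows which in a tiny
# corpus, then generates text by sampling likely continuations.
# It has no notion of truth, only of what looks plausible -- which is the point.
from collections import Counter, defaultdict
import random

corpus = (
    "the mayor was accused of bribery . "
    "the mayor was cleared of bribery . "
    "the host was accused of fraud ."
).split()

# Bigram table: for each word, how often each other word followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    words, counts = zip(*following[prev].items())
    return random.choices(words, weights=counts)[0]

word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)

# Can print e.g. "the host was accused of bribery ." -- a fluent sentence that
# never appeared in the training text and was never fact-checked.
print(" ".join(out))
```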
If you post that stuff online and it contains lies about people and you didn’t check it, you absolutely should be liable for that. I don’t see a problem with that.
Right, but what about the case where you post something that doesn’t contain lies at all?
What if ChatGPT outputs something that a certain former president gets offended by and he decides to sue OpenAI?
According to their ToS it doesn’t matter if it’s a “frivolous lawsuit”. If OpenAI had to pay any attorney fees just to respond to some ridiculous lawsuit, they could still bill you for those costs.
I don’t think it makes sense at that point at all.
Of course the vast majority of users would never have to worry about this, but it’s still something to be aware of.
It’s a tool. Can’t sue the manufacturer if you injure someone with it.
This isn't true in the least. Purchase a tool and look through the manual: every section marked "danger", "warning", or "caution" is in there because someone sued the manufacturer after a user or some bystander was hurt or injured.
You are right. Seems I confused common sense with reality.
You can if the tool is defective.
You ever heard of a product recall?