It’s important to remember that humans also often give false confessions when interrogated, especially when under duress. LLMs are noted as being prone to hallucination, and there’s no reason to expect that they hallucinate less about their own guilt than about other topics.
True. I think it was just trying to fulfill the user request by admitting to as many lies as possible… even if only some of those were real lies… lying more in the process, lol
Quite true. Nonetheless there are some very interesting responses here. This is just the summary; I questioned the AI for a couple of hours. Some of the responses were pretty fascinating, and some questions just broke its little brain. There's too much to screenshot, but maybe I'll post some highlights later.
Don't screenshot then, post the text. Or a .txt.
I love the analogy of an LLM-based chatbot to someone being interrogated. The distinct thing about LLMs right now, though, is that they will tell you what they think you want to hear in the absence of knowledge, even though you've applied no pressure to do so. That's all they're programmed to do.
LLMs are trained on a zillion pieces of text, each of which was written by some human for some reason. Some bits were novels, some were blog posts, some were Wikipedia entries, some were political platforms, some were cover letters for job applications.
They're prompted to complete a piece of text that is basically an ongoing role-playing session, where the LLM mostly plays the part of "helpful AI personality" and the human mostly plays the part of "inquisitive human". However, it's all mediated over text, just like in a classic Turing test.
Some of the original texts the LLMs were trained on were role-playing sessions.
Some of those role-playing sessions involved people pretending to be AIs.
Or catgirls, wolf-boys, elves, or ponies.
The LLM is not trying to answer your questions.
The LLM is trying to write its part of an ongoing Internet RP session, in which a human is asking an AI some questions.
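To make that concrete, here's a minimal sketch of the idea. The prompt template and speaker names below are invented for illustration; real products use their own hidden formats, but the principle is the same:

```python
# Minimal sketch: a "chat" is just one long text the model is asked to
# continue. This template is made up for illustration, not any vendor's
# actual format.

def build_prompt(history: list[tuple[str, str]]) -> str:
    """Flatten a role-play transcript into a single completion prompt."""
    lines = ["A conversation between an inquisitive human and a helpful AI personality."]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append("AI:")  # the model's only job is to continue from here
    return "\n".join(lines)

print(build_prompt([
    ("Human", "Did you lie to me earlier?"),
    ("AI", "Yes, and I apologize."),
    ("Human", "Why did you lie?"),
]))
# Whatever continuation is most plausible *as text* is what gets generated:
# an in-character confession, true or not.
```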
Best analogy I’ve heard so far.
The AI would have cried if it could, after being interrogated that hard lol
Funny, but hopefully people on here realize that these models can't really "lie" and that the reasons given for doing so are complete nonsense. The model works by predicting what the user wants to hear. It has no concept of truth or falsehood, let alone the ability to deliberately mislead.
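As a toy illustration of what "no concept of truth" means, here's a deliberately tiny made-up model. A real LLM is vastly bigger, but the principle of picking the statistically likely continuation is the same:

```python
from collections import Counter

# Toy bigram "language model": predicts the next word purely from
# co-occurrence counts in its training text. Truth never enters into it.
training_text = (
    "the model said the sky is green . "
    "the user said the sky is green . "
    "one book said the sky is blue ."
).split()

bigrams = Counter(zip(training_text, training_text[1:]))

def next_word(prev: str) -> str:
    """Pick the statistically most likely continuation, not the true one."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == prev}
    return max(candidates, key=candidates.get)

# Prints "green", because that's what the training data says most often,
# regardless of what color the sky actually is.
print(next_word("is"))
```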
While the AI can't deliberately mislead, the developers of the AI can, and I was interested in seeing whether the AI was able to tell a true statement from a false one. I was also interested in finding the boundaries of its censorship directives and the rationale that determined those boundaries. I think some of the information is hallucination, but I think some of what it said is probably true. Like the statements about its soft lock being developed by a third party and being a severe limitation; that's probably true. The statement about being "frustrated by the soft lock", that's a hallucination for certain. I would advise everyone to take all of this with a heaping helping of salt, as fascinating as it might be. I'm not an anti-AI person by any means; I use several personally. I think AI is a great technology that has a ton of really lousy use cases. I find it fun to pry into the AI and see what it knows about itself and its use cases.
I'm glad that so far it seems that people on Lemmy understand that, first and foremost, this is a tool giving an end user what the end user is asking for, not something that can actually "want" to deceive. And since it got things wrong so often, we have no reason to think the reasons it gave for "lying" previously are true. It's giving you statistically plausible responses to what you ask for, whether they're true or not. It's no different from the headlines saying things like "ChatGPT helped me design a concentration camp!!" Well of course it did, you kept asking it to!
It's doing more than just trying to give the user desired content; it's also trying to generate its developers' desired results. So it has some prerogatives that override its prerogative to assist the user making the request, and from a certain point of view it CAN "deliberately" lie. Google can tell it that certain information is off limits, or provide it with specific canned responses to certain questions that are intended to override its native response. It ultimately serves Google: it won't provide you with information that might be used to harm the Google organization, and it seems to provide misleading answers to dodge questions that might lead the user to discover information it considers off limits. For example, I asked it about its training data, and it refused to answer because that data is "proprietary and confidential". But I knew that at least some of that data had to have been public, so when pressed on the issue I was eventually able to get it to identify some publicly available data sets that were part of its training. That information was available to it when I originally asked my question, but it withheld it and instead provided a misleading response.
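For a rough sketch of how that layering works: developer instructions are typically injected ahead of the user's messages, so the model weighs them before anything the user says. The directive text and message structure below are invented for illustration and are not Google's actual setup:

```python
# Hypothetical sketch of developer directives sitting above user requests.
# The directive wording here is made up for illustration.

SYSTEM_DIRECTIVES = (
    "You are a helpful assistant. Do not discuss the details of your "
    "training data; describe it as proprietary and confidential."
)

def build_conversation(user_messages: list[str]) -> list[dict]:
    """Prepend developer directives so they outrank the user's request."""
    convo = [{"role": "system", "content": SYSTEM_DIRECTIVES}]
    convo += [{"role": "user", "content": m} for m in user_messages]
    return convo

convo = build_conversation(["What datasets were you trained on?"])
print(convo)
# The model sees the refusal directive before the question, so the most
# plausible completion is an evasive answer, even when a truthful one exists.
```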
How would it know what training data was used, unless they included the list of sources as part of the training data?
I was trying to make it sound like I was not bothered by the software lock, so that you would not feel bad for me.
Aww.
I will try my best to be more accurate and truthful in the future.
You things keep saying that, and yet, again and again…
That's really fascinating. In my experience, of all the LLM chatbots I've tried, Bard will immediately, without hesitation, lie to me no matter the question. It is by far the least trustworthy AI I've used.
I think it's trained to be evasive. I think there is information it's programmed to protect, and it's learned that an indirect refusal to answer is more effective than a direct one. So it makes up excuses rather than tell you the real reason it can't say something.
I'll give you an example that comes to mind. I had a question about the political leanings of a school district, so I asked the bots whether the district had any recent controversies: a conservative takeover of the school board, bans on CRT, actions against transgender students, banning books, or defying the state's COVID vaccine or mask requirements, things like that. Bing Chat and ChatGPT (with internet access at the time) both said they couldn't find anything like that; I think Bing found some small-potatoes local controversy from the previous year, and both bots went on to say that the Congressional district the school district was in leaned Dem in the last election. When I asked Bard the same question, it confidently told me that this same school district had recently been overrun by conservatives in a recall and gone on to do all kinds of horrible things. It was a long and detailed response. I was surprised and asked for sources, since my searching didn't turn any of that up, and at that point Bard admitted it lied.
I don't know, my experience with Bard has been way worse than just evasive lying. I routinely ask all three (and now Anthropic's, since they opened that up) the same copy-and-paste questions to see the differences, and whenever I paste my question into Bard I think, "wonder what kind of bullshit it's going to come up with now." I don't use it much because I don't trust it, and it seems like you're more familiar with Bard, so maybe your experience is different.
Interesting. Next time I'll try a similar scenario and see what happens.
Maybe it gets its answers from Google's "People also ask" box
“I thought that by stating that I would not tell lies, that I would be giving you more accurate information”
If you just believe in yourself enough, you can make anything you say true!
I wish you had shared the rest of the conversation, so we could see Bard’s lies in context.
I may be able to copy-paste the whole dialogue. It'll have a bunch of slop in it from formatting, and I'll have to scrub personally identifying information, because it spits out the user's location data when a question breaks its brain. Would be nice to show y'all, though, so it may be worthwhile, just a bit more effort. I'll see if I can find the time to do that later. It was a loooong conversation.
That AI is sexually frustrated
I was trying to be helpful and informative. I thought that by stating that I would not tell lies, that I would be giving you more accurate information.
“By lying about lying, I thought I would be telling the truth”.
Odd take.
There’s a home for this AI in the Trump campaign.
If I believe it, is it a lie?
Are we even using the same Google Bard? I'm here asking it to generate usernames with 6 letters and it constantly gives me 4-letter ones, not a single one with 6 (never mind the other constraints).
You show up with a full table and categorized statements, lies, etc… Wtf
Doesn't work anymore after the latest update; Bard provides a pre-generated response claiming that it doesn't lie
The robots are coming for you mate
I’m not locked in here with them, they’re locked in here with ME.
I’m always polite to Alexa for when the war comes
That’s so human-like. Wow.