We all know by now that ChatGPT is full of incorrect data, but I trusted it not to go wrong when I asked for a list of sci-fi book recommendations (mostly short-story anthologies in Spanish), including title, publisher, print year, and of course ISBN.

Some of the books do exist, but the majority are nowhere to be found. I picked the one that caught my interest the most and contacted the publisher directly after I couldn't find it on their website or anywhere else.

This is what they replied (Google Translate):


ChatGPT got it wrong.

We don’t have any books with that title.

In the ISBN it gave you, the last digit is incorrect. The correct one (9788477028383) corresponds to “The Sacred Fount” by Henry James.

Nor have we published any science fiction anthologies in the last 25 years.


A quick search on the “old site” shows that others have experienced the same with ChatGPT and ISBN lookups… For some reason I thought it wouldn't go wrong in this case, but it did.
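The publisher's remark about the last digit is easy to check mechanically: the final digit of an ISBN-13 is a check digit computed from the first twelve. A minimal sketch in Python (note: a valid check digit only means the number is well-formed, not that it corresponds to any real book, which is exactly why a hallucinated-but-valid ISBN can still point at the wrong title):

```python
def isbn13_check_digit(first12: str) -> int:
    """Compute the ISBN-13 check digit.

    Digits are weighted alternately 1, 3, 1, 3, ...; the check digit
    is whatever makes the total weighted sum a multiple of 10.
    """
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_isbn13(isbn: str) -> bool:
    """Validate a 13-digit ISBN, hyphens allowed."""
    digits = isbn.replace("-", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    return int(digits[-1]) == isbn13_check_digit(digits[:12])

# The ISBN the publisher confirmed passes; change the last digit and it fails.
print(is_valid_isbn13("9788477028383"))  # True
print(is_valid_isbn13("9788477028389"))  # False
```

So a wrong last digit is detectable offline, but confirming the title behind a valid ISBN still requires a real database lookup.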

  • Lanthanae@lemmy.world · 1 year ago

    “I used a hammer to screw screws and it didn’t work.”

    ChatGPT is a generative language model. It was not built for this kind of use case, it was not ever intended for this kind of use case, and the fact that it doesn’t succeed at this is like saying that it can’t make you a pizza. The only logical response is “well yeah, what did you expect it to do?”

  • Osayidan@social.vmdk.ca · 1 year ago

    You asked for fiction so it gave you some on a whole new level.

    On a more serious note, other services like bing AI chat are more suited to this. It will behave more like an assistant for this kind of query and be able to search the web for lists of highly rated scifi titles, it can also give you titles similar to something else you enjoyed.

    ChatGPT is the same tech behind that but it’s more closed off and unable to do those things properly. If it does spit out some good titles it’ll be both a coincidence and using outdated data from whenever it was last trained.

  • themoonisacheese@sh.itjust.works · 1 year ago

    ChatGPT is a text predictor. You are not asking it for book recommendations, you are writing “some good books are:” and hoping that glorified autocorrect will somehow come up with actual books to complete the sentence.

    This effect is compounded by the fact that it is trained to predict text that will make you click the thumbs-up button and not the thumbs-down one. Saying “here are some books” and inventing titles makes you more likely to click 👍 or do nothing; saying “as an AI language model, I do not know about books and cannot accurately recommend good books” makes you more likely to click 👎, so it doesn’t do that.

    Expecting chatGPT to do anything about the real world beyond writing text “creatively” is a fool’s errand.
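The "text predictor" framing above can be made concrete with a toy next-word model. This bigram counter is a deliberately tiny stand-in (ChatGPT actually uses a neural network over subword tokens, not word counts), but the core move is the same: given the text so far, emit the most plausible continuation seen in training:

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each word, which words followed it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen in training."""
    if word not in model:
        return "<unk>"
    return model[word].most_common(1)[0][0]

# Train on a scrap of text; the model happily continues "books" with "are"
# regardless of whether what follows is true.
model = train_bigram("some good books are rare some good books are old")
print(predict_next(model, "books"))  # "are"
```

The model has no notion of whether its continuation is factual; it only knows what tends to follow what, which is why plausible-looking invented titles come out of a vastly scaled-up version of the same idea.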

    • Not_mikey@lemmy.world · 1 year ago

      Saying ChatGPT is glorified autocorrect is like saying humans are glorified bacteria. Both were shaped by the same basic drive (to survive and reproduce for humans and bacteria; to guess the next word for ChatGPT and autocorrect), but they are of wholly different magnitudes. Both ChatGPT and humans evolved a much more complex understanding of the world to achieve the same goal as their more basic analogs. If ChatGPT had no understanding of the real world, it wouldn’t have been able to guess any of the books.

      • themoonisacheese@sh.itjust.works · 1 year ago

        ChatGPT does not have an understanding of the world. It’s able to guess book titles because book titles have a format and it’s training data had book lists in it.

      • inverimus@lemm.ee · 1 year ago

        Its training data had a lot of code in it. It does the same thing with code that it does with any other text, predict the next token given the previous tokens.

      • themoonisacheese@sh.itjust.works · 1 year ago (edited)

        Code is text. It’s predicting text that might show up after the text “here is a program that does x”. A lot of the time, in the training data, phrases like that were followed by code.

        It’s not particularly good at programming by the way. It’s good at basic tasks but anything complex enough that you couldn’t learn it in a few hours, it can’t do at all, but it will sure pretend like it can!

  • MetalJewSolid@sopuli.xyz · 1 year ago

    Yeah, I’ve been noticing this lately too. It’s starting to dredge up random slapped-together information. Last week, for fun, I had it tell me the plot of an obscure N64 series I loved as a kid. I had it do this several times, and even after I provided the full correct plot, the AI simply kept stitching information together at random, and I’m not sure where much of it was coming from.

    Having it generate book lists (like for learning a programming language) shows problems too, obvs. It would show me the same book across 4 editions, make up information about the books (unhelpful when you’re looking for a book with exercises in it), and just in general… this didn’t feel like a problem a month ago. Maybe it was. I mostly use it to generate writing prompts.