I’ve seen a lot of sentiment around Lemmy that AI is “useless”. I think this tends to stem from the fact that AI has not delivered on, well, anything the capitalists who push it have promised it would. That is to say, it has failed to meaningfully replace workers with a less expensive solution - AI that actually attempts to replace people’s jobs is incredibly expensive (and environmentally irresponsible), and the companies simply lie and say it’s not. It’s subsidized by that sweet, sweet VC capital so they can keep the lie up. And I say “attempts” because AI is truly horrible at actually replacing people. It’s going to make mistakes, and while everybody’s been trying real hard to make it less wrong, it’s just never gonna be “smart” enough to not have a human reviewing its behavior. Then you’ve got AI being shoehorned into every little thing that really, REALLY doesn’t need it. By that measure, I’d say that AI is useless.

But AIs have been very useful to me. For one thing, they’re much better at googling than I am. They save me time by summarizing articles down to the broad strokes, and I can decide whether I want to go into the details from there. They’re also good idea generators - I’ve used them in creative writing just to explore things like “how might this story go?” or “what are interesting ways to describe this?”. I never really use what comes out of them verbatim - whether image or text - but it’s a good way to explore, and seeing things expressed in ways you never would’ve thought of (plus the juxtaposition of those against very obvious expressions) tends to push your mind in new directions.

Lastly, I don’t know if it’s just because there’s an abundance of Japanese-language learning content online, but GPT-4o has been incredibly useful for learning Japanese. I can ask it things like “how would a native speaker express X?” and it gives me good answers that even my Japanese teacher agreed with. It can also give some incredibly accurate breakdowns of grammar. I’ve tried it with less popular languages like Filipino and it just isn’t the same, but as far as Japanese goes, it’s like having a tutor on standby 24/7. In fact, that’s exactly how I’ve been using it - I have it grade my own translations and give feedback on what could’ve been said more naturally.

All this to say: AI, when used as a tool rather than a dystopian stand-in for a human, can be a very useful one. So, what are some use cases you guys have where AI actually is pretty useful?

  • XeroxCool@lemmy.world

    AI isn’t useless, but its current forms are just rebranded algorithms, with every company racing to get theirs out there. AI is a buzzword for tools that were never supposed to be labeled AI. Google has been doing summary excerpts for like a decade. People blindly trusted them and always said “Google told me”. I’d consider myself an expert on one particular car, and I can’t tell you how often those “answers” were straight-up wrong or completely irrelevant to that type of car (hint: the Lincoln LS does not have a blend door, so heat problems can’t be caused by a faulty blend door).

    You cite Google searches and summarization as its strong points. The problem is, if you don’t know anything about the topic, or not enough, you’ll never know when it makes mistakes. Mistakes are possible in Wikipedia, journal articles, forum posts, and classes too. However, those get reviewed by knowledgeable people as they inform. Your AI results don’t get that review. The AI presents itself as master of the universe, so its range of results is impossibly large, and that then goes on to be taken as pure fact by a typical user. Sure, AI is a tool that can educate, but there’s enough it demonstrably gets wrong that I’d call it a net-neutral change to our collective knowledge. Just because it gives an answer confidently doesn’t mean it’s correct. It has a knack for missing context from more opinionated sources and reporting the exact opposite of what is true. Yes, it’s evolving, but keep in mind one of the big tech companies put out an AI that recommended using Elmer’s glue to hold cheese to pizza and claimed cockroaches live in penises. ChatGPT had its hallucinatory days too; they just got forgotten due to Bard’s flop and Cortana’s unwelcome presence.

    Use the other two comments currently here as an example. Ask it to make some code for you. See if it runs. Do you know how to code? If not, you’ll have no idea whether the code works correctly. You don’t know where it sourced the code from, and you don’t know what it was trying to do. If you can’t verify it yourself, how can you trust it to be accurate?

    The biggest gripe for me is that it doesn’t understand what it’s looking at. It doesn’t understand anything. It regurgitates some pattern of words it saw a few times. It chops up your input and tries to match it to some other group of words. It bundles it up with some generic, human-friendly language and tricks the average user into believing it’s sentient. It’s not intelligent, just artificial.

    So what’s the use? If it were specifically trained for certain tasks, it’d probably do fine. That’s what we really already had with algorithmic functions and machine learning via statistics, though, right? But parsing the entire internet in a few seconds? Not a chance.

    Edit: can’t believe I there’d a their