New generative AI breakthroughs seem to be happening almost every week. The big question is: are we ready for it? Noted Science Zaddy Kyle Hill explains ...
This is Kyle Hill’s video on the predicted impact of AI-generated content on the internet, especially as it becomes harder to tell machine from human in text and video. He relays that, according to experts, huge portions of online content will be AI-generated within a year. What do you guys think? Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc.?
I didn’t get past the part where he started talking about the dark forest theory as if it “solved” the Fermi paradox. The Fermi paradox is an observation; the dark forest theory is just that, a theory. Worse, actually: it’s properly a hypothesis. I was willing to sit down for the 15-minute video. Why blow your credibility in the first few sentences?
Unfortunately, the Dark Forest thing is super popular right now, so it gets the clicks.
Which is rather annoying, IMO, because as Fermi Paradox solutions go, it’s riddled with holes and implausibilities. But it’s scary, so people latch onto it easily.
I generate AI content (some of which is art) for fun, so I am not against it in principle. I just don’t find much enjoyment, so far, in consuming AI content made by others. The vast majority of it is mediocre, which seems like a natural consequence of lowering the barriers to entry.
The Sora demo, for example, is technologically very compelling, but it didn’t strike me as something that would replace creative work so much as a tool for getting that work done differently.
As AI content becomes more prevalent, I will keep disengaging from it in favor of authentic human experiences, at least as long as AI content continues to feel mostly soulless and vacuous.
Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?
I wouldn’t mind it as much if these chatbots weren’t being used for nefarious purposes like mass data collection, tracking, influence campaigns, and privacy violations. Other than that, if it walks like a human, talks like a human, and we are convinced it’s a human, is there anything wrong with that? It might as well be human.

This will only become a bigger question as we get closer to AGI. An AGI isn’t going to suddenly “wake up” and become self-aware one day; all these systems are slowly inching toward it. There’s not going to be a clean line between “just a program mimicking a human” and “a fully self-aware entity”. It’s up to us to draw that line, and there are no hard rules for doing so, because it runs into the philosophical “problem of other minds”.