The Inventor Behind a Rush of AI Copyright Suits Is Trying to Show His Bot Is Sentient::Stephen Thaler’s series of high-profile copyright cases has made headlines worldwide. He’s done it to demonstrate his AI is capable of independent thought.
Wow, that’s one of the most concrete, down-to-earth explanations of what everyone is calling AI. Thanks.
I’m technical, but haven’t found a good article explaining today’s AI in a way I can grasp well enough to help my non-technical friends and family. Any recommendations? Maybe something you’ve written?
It would be funny if that comment were AI-generated.
I read once we shouldn’t be worried when AI starts passing Turing tests, we should worry when they start failing them again 🤣
I read a physical book about using ChatGPT that I’m pretty sure was written by ChatGPT.
Sidenote: you don’t need to read a book about using ChatGPT.
I’ve had the most success explaining LLM ‘fallibility’ to non-techies using the image-generation examples. Google ‘AI hands’ and ask them if they see anything wrong. Now point out that we’re _extremely_ sensitive to anything wrong with our hands, so these are very easy for us to spot. But the AI has no concept of what a hand is; it’s just seen a _lot_ of images from different angles, sometimes with fingers hidden, sometimes intertwined, etc. So it will happily generate lots more of those kinds of images, with no regard to whether they could or should actually exist.
It’s a pretty similar idea with the LLMs. It’s seen a lot of text, and can put together words in a convincing-looking way. But it has no concept of what it’s writing, and the equivalent of the ‘hands’ will be there in the text. It’s just that we can’t see them at first glance like we can with the hands.
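For anyone who codes, here’s a way to see the “no concept, just statistics” part directly. This is a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (any causal language model would show the same thing): the model only ranks plausible next tokens, it never models a hand or an idea.

```python
# Minimal sketch: a causal language model only scores possible next tokens.
# Assumes the Hugging Face `transformers` library and the `gpt2` checkpoint
# are available locally or downloadable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A human hand has"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for every possible next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)

# The model has no concept of hands; it only knows which tokens tend to
# follow this text in its training data.
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id]):>12}  p={prob:.3f}")
```

All it prints is which continuations tend to follow that text, which is the textual equivalent of a six-fingered hand waiting to happen.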
Nice comparisons. Will add that to my explanations.
Thanks!
This one helped me a bit - https://www.understandingai.org/p/large-language-models-explained-with
Thanks!
Yann LeCun is the main person behind open-source offline AI, as far as putting the pieces in place and the events that led to where we are now. Maybe think of him as the Dennis Ritchie or Stallman of AI research. https://piped.video/watch?v=OgWaowYiBPM
I am not the brightest kid in the room. I’m just learning this stuff in practice and sharing some of what I have picked up thus far. I am at a wall when it comes to things like understanding rank-3 (or higher) tensors, and I still can’t figure out exactly how the categorization network is implemented. I think that last one has to do with Transformers and something to do with rotating vectors in an efficient way, but I haven’t figured it out intuitively yet. Thanks for the compliment though.
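In case it helps anyone else stuck at the same wall: a rank-3 tensor is nothing more exotic than an array that needs three indices to pick out a single number. A minimal sketch, assuming PyTorch; the (batch, sequence, embedding) dimension names are just the usual LLM convention, not anything from the comment above.

```python
# A rank-3 tensor is just an array with three indices.
# Assumes PyTorch; shape names follow the common (batch, sequence, embedding) convention.
import torch

batch, seq_len, embed_dim = 2, 4, 8          # 2 sentences, 4 tokens each, 8-dim embeddings
x = torch.randn(batch, seq_len, embed_dim)   # rank-3: three indices pick out one number

print(x.shape)        # torch.Size([2, 4, 8])
print(x[0].shape)     # one sentence -> rank-2 matrix of token embeddings, torch.Size([4, 8])
print(x[0, 1].shape)  # one token's embedding -> rank-1 vector, torch.Size([8])
print(x[0, 1, 3])     # a single number once all three indices are fixed
```

A rank-4 tensor just adds a fourth index (e.g. one per attention head), and so on.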
Oh crap, you already done lost me in the second half there, but I’ll give the link a watch.
Thanks again!