• 0 Posts
  • 269 Comments
Joined 2 years ago
Cake day: June 12, 2023

  • You act like Apple doesn't collect an absolutely massive amount of data on every one of their devices and users. Sure, it's done in a more clandestine way than Google, but they still do it.

    With Google I can run a security-focused ROM like GrapheneOS and have virtually no data collection on my device, while with Apple's black boxes you will enjoy the illusion of data privacy and be happy about it.

    I know installing custom ROMs is not standard user behaviour, and I will concede that in many instances Apple is better than stock Android for privacy/security, but the way you framed your argument irked me because you can't say or prove anything about their closed-source black boxes. Apple aren't good people; they are anti-consumer and exist to make money. Of course they collect data.





  • given a particular prompt/keyword, which might reproduce the original training data almost in its entirety given a similar set of prompts or keywords.

    What you describe here is called memorization, and it is generally considered a flaw/bug rather than a feature; it happens with low-quality training data or not enough data. As far as I understand, this isn't a problem for frontier LLMs with the large datasets they've been trained on (a rough check is sketched below).

    Either way, just like a photocopier, an LLM can be used to infringe copyright if that's what someone is trying to do with it; the tool itself does not infringe anything.
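
    For anyone who wants to poke at this themselves, here is a rough sketch of a memorization probe using the Hugging Face transformers generate API. The model name "gpt2" and the passage are just placeholders, not a real experiment: feed the model the start of a passage it may have seen in training and check whether greedy decoding continues it verbatim.

```python
# Rough memorization probe: prompt with the start of a well-known passage and
# check whether greedy decoding continues it verbatim. "gpt2" and the passage
# are placeholders for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "It was the best of times, it was the"
expected_rest = "worst of times"  # what the original text says next

inputs = tokenizer(prefix, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,                      # greedy decoding
    pad_token_id=tokenizer.eos_token_id,
)
completion = tokenizer.decode(output[0], skip_special_tokens=True)[len(prefix):]

# A verbatim continuation of a long, specific passage suggests memorization;
# a paraphrase or unrelated continuation does not.
print("continuation:", completion)
print("verbatim?", expected_rest in completion)
```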


  • But it's not the same; you don't understand how LLM training works. The original piece of work is not retained at all: the training data is used to tune pre-existing numbers, and those numbers change slightly as training goes on (rough sketch below).

    At no point in time is anything resembling the training data ever present in the 1s and 0s of the model.

    You are wrong; bring on the downvotes, uninformed haters.

    FYI, I also agree sampling music should be fine for artists.
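
    To make that concrete, here is a minimal toy sketch in PyTorch (made-up sizes, nothing like a real frontier model) of what a single training step actually does: the text only produces a loss, the gradient of that loss nudges pre-existing weight tensors, and the text itself is thrown away afterwards.

```python
# Toy illustration (made-up sizes, not a real LLM) of a single training step:
# the tokenized text is only used to compute a loss; the gradient of that loss
# slightly adjusts pre-existing weights, and the text is never stored anywhere.
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

tokens = torch.tensor([5, 42, 7, 300, 12])            # stand-in for tokenized text
inputs, targets = tokens[:-1], tokens[1:]             # predict each next token

logits = model(inputs)                                # forward pass
loss = nn.functional.cross_entropy(logits, targets)   # how wrong the predictions were
loss.backward()                                       # gradients w.r.t. the weights
optimizer.step()                                      # weights shift slightly

# What remains after this step is just the (slightly changed) weight tensors;
# the token sequence above is discarded, not written into the model.
```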