Amen, only angle I can see someone disagreeing with is trees becoming a potential bank of carbon to be fed back into the atmosphere via fuel for wildfires.
I so wish there were better ways to control forest fires.
These all align with my understanding! Only thing I'd mention is that when I said "we've not had LLMs available" I meant "LLMs this powerful ready for public usage". My b
I haven't been in decision analytics for a while (and people smarter than I am are working on the problem), but I meant more along the lines of the "model collapse" issue. Just because a human gives a thumbs up or down doesn't make the output human-written training data to be fed back in. Eventually the stuff it outputs becomes "the most likely prompt response that this user will thumbs-up and accept". (Note: I'm assuming the thumbs up/down signals are pulled back into model training.)
Per my understanding that’s not going to remove the core issue which is this:
Any sort of AI detection arms race is doomed. There is ALWAYS new 'real' video for training, and even if GANs are a bit outmoded, the core concept of using synthetically generated content to train is a hot thing right now. Technically, whoever creates fake videos to train on would have a bigger training set than the checkers.
Since we see model collapse when we feed too much of this back to the model, we're in a bit of an odd place.
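The model collapse loop mentioned above can be shown with a toy simulation. This is a minimal sketch under illustrative assumptions (a 1-D Gaussian "model" refit each generation only on its own samples), not a claim about any real LLM pipeline; the sample size and generation count are made up to exaggerate the effect:

```python
import numpy as np

# Toy "model collapse": each generation's model is fit ONLY on samples
# drawn from the previous generation's model. Estimation noise compounds,
# and the MLE variance estimate is biased low, so the distribution
# steadily narrows toward a degenerate point.

rng = np.random.default_rng(0)

mean, var = 0.0, 1.0     # generation-0 "model": N(0, 1)
initial_var = var
n_samples = 50           # small training sets make collapse faster

for generation in range(300):
    data = rng.normal(mean, np.sqrt(var), n_samples)  # model's own output
    mean = data.mean()                                # refit on that output
    var = data.var()                                  # MLE variance (divides by n)

print(f"variance after 300 generations: {var:.6f} (started at {initial_var})")
```

The variance shrinks by roughly a factor of (n-1)/n in expectation each generation, so with no fresh real data mixed in, diversity drains away even though each individual refit looks reasonable.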
We haven't even had LLMs publicly available for a full year, but we're already having trouble distinguishing.
Making waffles so I only did a light google, but I don't really think ChatGPT is leveraging GANs for its main algos; simply that the GAN concept could be applied easily to LLM text to make delineation even harder.
We’re probably going to need a lot more tests and interviews on critical reasoning and logic skills. Which is probably how it should have been but it’ll be weird as that happens.
sorry if grammar is fuckt - waffles
Predictable issue if you knew the fundamental technology that goes into these models. Hell, it should have been obvious it was headed this way even to the layperson once they saw the videos and heard the audio.
We're less sensitive to patterns in massive data; the point at which we can't tell fact from AI fiction comes well before these machines can't tell. Good luck with the FB aunts.
A GAN's final goal is to develop content that is indistinguishable… Are we surprised?
Edit since the person below me made a great point: GANs may be limited, but there's nothing that says you can't set up a generator and detector LLM pair with the distinct intent of improving the generator.
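The generator-vs-detector setup described here is exactly the GAN training loop, which a tiny example can make concrete. This is a hedged sketch on 1-D numbers rather than text or video: the linear generator, logistic discriminator, data distribution, and hyperparameters are all illustrative assumptions, not any real system's design:

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic "real" data
# drawn from N(3, 1), while a logistic detector D(x) = sigmoid(w*x + c)
# tries to separate real from fake. Each side's gradient ascent step
# improves it against the other -- the arms race in miniature.

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0    # generator parameters
w, c = 0.1, 0.0    # detector parameters
lr, n = 0.05, 64   # learning rate and batch size

for step in range(2000):
    real = rng.normal(3.0, 1.0, n)
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b

    # Detector step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (non-saturating loss),
    # i.e. shift its output toward whatever fools the detector
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator output mean ~= {b:.2f} (real data mean is 3.0)")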
If only these poor oil producers had had some sort of half-century or more lead to reap the benefits of their export advantage and diversify their interests. Won't someone think of how unfair it is to ask them to change? /s
It'd be an uphill battle, but if someone got into programming via free online courses they could build a resume by contributing to projects on GitHub. It'd be a way to prove skill without the diploma.
The same advice goes for anything where you can build a portfolio to demonstrate competency; most people in industry just care about results. That could be photography, graphic design, a physical trade like woodworking, etc.
Sucks because you'd have to outlay time upfront before maybe getting paid, though. YMMV
Agreed! I was just mentioning the only negative angle I could see, still a net positive!