![](https://mbin.grits.dev/media/8a/4c/8a4cd03b01da0ea7fa37eb0fa5c51e295a9dfb39be2df5d03c38b23a57e3873a.png)
![](https://lemmy.world/pictrs/image/8aead832-799f-4d34-a20d-eae5b621a9b1.jpeg)
At one point I messed around with a lore generator that would chop up sections of “The Dungeon Alphabet” and “Fire on the Velvet Horizon” (along with some other stuff), feed random sections into the LLM for inspiration, and then ask it to lay out a little map. It pretty reliably came up with all kinds of badass stuff.
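The chop-and-sample part is simple enough to sketch. This is a hypothetical reconstruction, not the actual generator — `chop`, `build_prompt`, and the excerpt/instruction format are all made up for illustration:

```python
import random

def chop(text, size=400):
    # Split a source text into fixed-size chunks to sample from.
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_prompt(sources, n=3, seed=None):
    # Pull a few random sections from the source books and wrap
    # them in an instruction asking for lore plus a small map.
    rng = random.Random(seed)
    chunks = [c for text in sources for c in chop(text)]
    picks = rng.sample(chunks, min(n, len(chunks)))
    inspiration = "\n---\n".join(picks)
    return (
        "Use these excerpts as inspiration:\n"
        f"{inspiration}\n"
        "Invent a dungeon area in that style and lay out a small map."
    )
```

The random sampling is doing most of the work here: mashing together unrelated excerpts is what nudges the model away from generic output.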
Yeah. It is fairly weird to me that it’s so common to take the raw output of the LLM, send that straight to the user, and try to use fine-tuning to get that raw output to look the way you want.
To me it seems obvious that something like having the LLM emit a little JSON block, including a field for “how sure are you that this is actually true,” is more flexible, simpler, cheaper, and works better.
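Concretely, something like this on the receiving side (a sketch — the `{"text": ..., "confidence": ...}` schema is invented for the example, and the actual model call is omitted):

```python
import json

def parse_lore_reply(raw):
    # Expect the model to answer with a JSON block like
    #   {"text": "...", "confidence": 0.0-1.0}
    # and fall back to treating unparseable output as
    # zero-confidence plain text rather than crashing.
    try:
        data = json.loads(raw)
        return data["text"], float(data["confidence"])
    except (ValueError, KeyError, TypeError):
        return raw, 0.0
```

Then the app, not the model, decides what to do with low-confidence answers — hide them, flag them, retry — which is the flexibility you don’t get from fine-tuning the raw text.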
But what do I know