Why do LLMs obsess over making numbered lists? They seem to do that constantly.
Oh, I can help! 🎉
My conspiracy theory is that early LLMs have a hard time figuring out the logical relations between sentences, and hence don't generate good transitions between them.
I think bullet points might be manually tuned up by the developers rather than inherently present in the model, because we don't tend to see bullet points that much in normal human communication.
That's not a bad theory, especially since newer models don't do it as often
Well they are computers…
Late, but I'm pretty sure it's a byproduct of the RLHF process used to train these models. Basically, they have a bunch of humans look at multiple outputs from the LLM and rate the best ones, and it turns out people find lists easier to understand than other styles (alternatively, the poor souls slaving away in the AI mines rating responses all day find it faster to parse a list than a paragraph through the blurry lens of mental fatigue)
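To make that concrete, here's a toy sketch of the preference-collection idea (all names and numbers here are made up for illustration, not anyone's actual pipeline): raters compare two candidate responses and pick the one they find clearer, and if raters systematically favor lists, the reward model trained on those comparisons will score list-style answers higher.

```python
import random

random.seed(0)

def rater_prefers(a, b):
    """Simulate a fatigued rater: list-style answers win most comparisons."""
    def score(resp):
        base = random.random()                   # noise from rater mood/fatigue
        bonus = 0.5 if resp["is_list"] else 0.0  # hypothetical: lists are easier to skim
        return base + bonus
    return a if score(a) > score(b) else b

candidates = [
    {"id": "paragraph", "is_list": False},
    {"id": "bullets",   "is_list": True},
]

# Collect 1000 simulated pairwise preference judgments, the kind of data
# a reward model would be fit to in an RLHF setup.
wins = {"paragraph": 0, "bullets": 0}
for _ in range(1000):
    winner = rater_prefers(*candidates)
    wins[winner["id"]] += 1

print(wins)
```

Even a modest per-rater bias toward lists dominates the aggregate preference data, so the model being optimized against those preferences learns that lists get rewarded.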