Modifying large language model post-training for diverse creative writing
John Joon Young Chung
Computer science - computation and language, computer science - machine learning
Abstract
As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality but neglects to facilitate output diversity. Hence, in creative writing generation, we investigate post-training approaches to promote both output diversity and quality. Our core idea is to include deviation – the degree of difference between a training sample and all other samples with the same prompt – in the training objective …
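The core idea above can be illustrated with a minimal sketch: score each response by how much it deviates from the other responses written for the same prompt, then use that score to weight a post-training loss. This is not the paper's implementation; the token-set Jaccard dissimilarity, the function names, and the `lambda_dev` weighting scheme are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): "deviation" of a
# response = mean dissimilarity to the other responses for the same prompt,
# here measured with token-set Jaccard dissimilarity as a placeholder metric.

def jaccard_dissimilarity(a: str, b: str) -> float:
    """1 - Jaccard similarity over whitespace tokens (placeholder metric)."""
    ta, tb = set(a.split()), set(b.split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def deviation_scores(responses: list[str]) -> list[float]:
    """Deviation of each response: mean dissimilarity to its sibling responses."""
    n = len(responses)
    if n < 2:
        return [0.0] * n
    scores = []
    for i, r in enumerate(responses):
        others = [responses[j] for j in range(n) if j != i]
        scores.append(sum(jaccard_dissimilarity(r, o) for o in others) / len(others))
    return scores

def deviation_weighted_loss(base_losses: list[float], responses: list[str],
                            lambda_dev: float = 1.0) -> float:
    """Upweight losses of samples that deviate more from their siblings, so
    rare but distinct responses contribute more to the update (hypothetical
    weighting, lambda_dev is a made-up mixing coefficient)."""
    devs = deviation_scores(responses)
    weighted = [(1.0 + lambda_dev * d) * l for d, l in zip(devs, base_losses)]
    return sum(weighted) / len(weighted)

if __name__ == "__main__":
    prompt_responses = [
        "The lighthouse keeper wrote letters to the sea.",
        "The lighthouse keeper wrote letters to the storm.",
        "A clockmaker traded hours for memories in a drowned city.",
    ]
    print(deviation_scores(prompt_responses))
```

In this toy setup the third response, which shares almost no tokens with the other two, receives the highest deviation score and would therefore be weighted most heavily.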
Relevance Assessment
Research Gap
Notes
Tags
creativity frameworks › creative-textual creativity
evaluation › document-level
evaluation › automatic metrics
model used › Medium (8-24)
related to creativity › related to creativity as a textual genre
textual genre › literature
scope › creative training
scope › technical research
Search Queries
Paper ID: 23d098cf-115a-49ef-bded-fe82097a366e
Added: 10/26/2025