Papers

50 papers found

Review Papers

E.A.R.T.H.: Structuring creative evolution through model error in generative AI

Yusen Peng

How can AI move beyond imitation toward genuine creativity? This paper proposes the E.A.R.T.H. framework, a five-stage generative pipeline that transforms model-generated errors into creative assets through Error generation, Amplification, Refine selection, Transform, and Harness feedback. Drawing on cognitive science and generative modeling, we posit that "creative potential hides in failure" and operationalize this via structured prompts, semantic scoring, and human-in-the-loop evaluation. Imp

Computer science - artificial intelligence · Relevant
creativity frameworks › creative-textual creativity; evaluates a creative feature › slogan; evaluation › automatic metrics; evaluation › creativity evaluation; evaluation › human eval; model used › Medium (8-24); related to creativity › related to creativity as a human ability; scope › prompt engineering
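
The abstract is cut off before any implementation detail, so the following is only a minimal sketch of how a five-stage error-to-asset loop of this kind could be wired together; the prompts, the scoring heuristic, and the `llm` callable are illustrative assumptions, not the paper's code.

```python
# Hypothetical skeleton of an error-driven creative pipeline
# (Error generation -> Amplification -> Refine selection -> Transform -> Harness feedback).
# `llm` is any text-in/text-out callable (e.g., a wrapper around a chat API).
from typing import Callable, List


def earth_pipeline(llm: Callable[[str], str], task: str, n_errors: int = 5) -> str:
    # 1. Error generation: deliberately elicit flawed or off-target answers.
    errors: List[str] = [
        llm(f"Give an intentionally flawed answer to: {task}") for _ in range(n_errors)
    ]

    # 2. Amplification: exaggerate whatever makes each error unusual.
    amplified = [llm(f"Exaggerate the unusual aspects of this answer: {e}") for e in errors]

    # 3. Refine selection: keep the candidates a scorer rates as most promising.
    def score(text: str) -> float:
        reply = llm(f"Rate from 0 to 10 how creatively promising this is: {text}")
        try:
            return float(reply.strip().split()[0])
        except (ValueError, IndexError):
            return 0.0

    selected = sorted(amplified, key=score, reverse=True)[:2]

    # 4. Transform: turn the selected "errors" into a usable creative artifact.
    draft = llm(f"Combine these fragments into a polished response to '{task}': {selected}")

    # 5. Harness feedback: one critique-and-revise pass (human- or judge-provided).
    feedback = llm(f"Critique this response briefly: {draft}")
    return llm(f"Revise the response using the feedback.\nResponse: {draft}\nFeedback: {feedback}")
```

With any such `llm` wrapper, `earth_pipeline(llm, "write a slogan for a reusable water bottle")` would return a single revised draft, matching the slogan-style tasks tagged above.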

Origins of creativity in attention-based diffusion models

Emma Finn

As diffusion models have become the tool of choice for image generation and as the quality of the images continues to improve, the question of how 'creativity' originates in diffusion has become increasingly important. The score matching perspective on diffusion has proven particularly fruitful for understanding how and why diffusion models generate images that remain plausible while differing significantly from their training images. In particular, as explained in (Kamb & Ganguli, 2024) and ot

Computer science - machine learning, computer science - computer vision and pattern recognition · Not Relevant

An analytic theory of creativity in convolutional diffusion models

Mason Kamb

We obtain an analytic, interpretable and predictive theory of creativity in convolutional diffusion models. Indeed, score-matching diffusion models can generate highly original images that lie far from their training data. However, optimal score-matching theory suggests that these models should only be able to produce memorized training examples. To reconcile this theory-experiment gap, we identify two simple inductive biases, locality and equivariance, that: (1) induce a form of combinatorial c

Computer science - machine learning, condensed matter - disordered systems and neural networks, computer science - artificial intelligence, quantitative biology - neurons and cognition, statistics - machine learning, I.2.10 · Not Relevant

Creativity in LLM-based multi-agent systems: a survey

Yi-Cheng Lin

Large language model (LLM)-driven multi-agent systems (MAS) are transforming how humans and AIs collaboratively generate ideas and artifacts. While existing surveys provide comprehensive overviews of MAS infrastructures, they largely overlook the dimension of creativity, including how novel outputs are generated and evaluated, how creativity informs agent personas, and how creative workflows are coordinated. This is the first survey dedicated to creativity in MAS. We focus on text and ima

Computer science - human-computer interaction, computer science - artificial intelligence, computer science - computation and language · Relevant
creativity frameworks › computational creativity; scope › agents; evaluation › creativity evaluation; related to creativity › related to creativity as a textual genre

Toward modeling creative processes for algorithmic painting

Aaron Hertzmann

This paper proposes a framework for computational modeling of artistic painting algorithms, inspired by human creative practices. Based on examples from expert artists and from the author's own experience, the paper argues that creative processes often involve two important components: vague, high-level goals (e.g., "make a good painting"), and exploratory processes for discovering new ideas. This paper then sketches out possible computational mechanisms for imitating those elements of the paint

Computer science - artificial intelligence, computer science - computer vision and pattern recognition, computer science - graphics · Not Relevant

Modeling creativity: Case studies in python

Tom De Smedt

Modeling Creativity (doctoral dissertation, 2013) explores how creativity can be represented using computational approaches. Our aim is to construct computer models that exhibit creativity in an artistic context, that is, that are capable of generating or evaluating an artwork (visual or linguistic), an interesting new idea, or a subjective opinion. The research was conducted in 2008-2012 at the Computational Linguistics Research Group (CLiPS, University of Antwerp) under the supervision of Prof. W

Computer science - artificial intelligence · Relevant
creativity frameworks › computational creativity; creativity frameworks › psychological/cognitive; creativity frameworks › creative-textual creativity; evaluation › creativity evaluation; evaluation › human eval; related to creativity › mentions creativity as a human ability

HLLM-creator: Hierarchical LLM-based personalized creative generation

Junyi Chen

AI-generated content technologies are widely used in content creation. However, current AIGC systems rely heavily on creators' inspiration, rarely generating truly user-personalized content. In real-world applications such as online advertising, a single product may have multiple selling points, with different users focusing on different features. This underscores the significant value of personalized, user-centric creative generation. Effective personalized content generation faces two main cha

Computer science - information retrieval, computer science - computation and language · Not Relevant

Taming flow-based I2V models for creative video editing

Xianghao Kong

Although image editing techniques have advanced significantly, video editing, which aims to manipulate videos according to user intent, remains an emerging challenge. Most existing image-conditioned video editing methods either require inversion with model-specific design or need extensive optimization, limiting their capability of leveraging up-to-date image-to-video (I2V) models to transfer the editing capability of image editing models to the video domain. To this end, we propose IF-V2V, an I

Computer science - computer vision and pattern recognition, computer science - multimedia · Not Relevant

Enhancing creative generation on stable diffusion-based models

Jiyeon Han

Recent text-to-image generative models, particularly Stable Diffusion and its distilled variants, have achieved impressive fidelity and strong text-image alignment. However, their creative capability remains constrained, as including 'creative' in prompts seldom yields the desired results. This paper introduces C3 (Creative Concept Catalyst), a training-free approach designed to enhance creativity in Stable Diffusion-based models. C3 selectively amplifies features during the denoising process to

Computer science - computer vision and pattern recognition · Not Relevant
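
The abstract says C3 selectively amplifies features during denoising; the snippet below only illustrates the general idea of scaling an intermediate U-Net activation with a forward hook in diffusers. The choice of `mid_block`, the constant gain, and the checkpoint are assumptions for illustration, not C3's selection rule.

```python
# Sketch: amplify one U-Net block's features during denoising to nudge
# Stable Diffusion away from its most typical outputs. Not the C3 method itself.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

GAIN = 1.3  # >1 amplifies the block's activations; a hyperparameter to tune or schedule


def amplify(module, inputs, output):
    # mid_block returns a hidden-state tensor; scaling it perturbs the denoising path
    return output * GAIN


handle = pipe.unet.mid_block.register_forward_hook(amplify)
image = pipe("a creative teapot", num_inference_steps=30).images[0]
handle.remove()
image.save("amplified_teapot.png")
```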

Does generation require memorization? Creative diffusion models using ambient diffusion

Kulin Shah

There is strong empirical evidence that the state-of-the-art diffusion modeling paradigm leads to models that memorize the training set, especially when the training set is small. Prior methods to mitigate the memorization problem often lead to a decrease in image quality. Is it possible to obtain strong and creative generative models, i.e., models that achieve high generation quality and low memorization? Despite the current pessimistic landscape of results, we make significant progress in push

Computer science - machine learning, statistics - machine learning · Not Relevant

CommonCanvas: An open diffusion model trained with creative-commons images

Aaron Gokaslan

We assemble a dataset of Creative-Commons-licensed (CC) images, which we use to train a set of open diffusion models that are qualitatively competitive with Stable Diffusion 2 (SD2). This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train text-to-image generative models; (2) CC images are relatively scarce. In turn, to address these challenges, we use an intuitive transfer learning technique to produce a set of high-quality synthetic captions paired

Computer science - computer vision and pattern recognition, computer science - computers and society · Not Relevant

CCEdit: Creative and controllable video editing via diffusion models

Ruoyu Feng

In this paper, we present CCEdit, a versatile generative video editing framework based on diffusion models. Our approach employs a novel trident network structure that separates structure and appearance control, ensuring precise and creative editing capabilities. Utilizing the foundational ControlNet architecture, we maintain the structural integrity of the video during editing. The incorporation of an additional appearance branch enables users to exert fine-grained control over the edited key f

Computer science - computer vision and pattern recognition · Not Relevant

Base models beat aligned models at randomness and creativity

Peter West

Alignment has quickly become a default ingredient in LLM development, with techniques such as reinforcement learning from human feedback making models act safely, follow instructions, and perform ever-better on complex tasks. While these techniques are certainly useful, we propose that they should not be universally applied and demonstrate a range of tasks on which base language models consistently outperform their popular aligned forms. Particularly, we study tasks that require unpredictable ou

Computer science - computation and language · Relevant
creativity frameworks › computational creativity; evaluation › human eval; textual genre › poetry; creativity frameworks › psychological/cognitive; evaluates a creative feature › logic (puzzles, etc.)

L-C4: Language-based video colorization for creative and consistent color

Zheng Chang

Automatic video colorization is inherently an ill-posed problem because each monochrome frame has multiple optional color candidates. Previous exemplar-based video colorization methods restrict the user's imagination due to the elaborate retrieval process. Alternatively, conditional image colorization methods combined with post-processing algorithms still struggle to maintain temporal consistency. To address these issues, we present Language-based video Colorization for Creative and Consistent C

Computer science - computer vision and pattern recognition · Not Relevant

Pattern languages as media for the creative society

Takashi Iba

This paper proposes new languages for basic skills in the Creative Society, where people create their own goods, tools, concepts, knowledge, and mechanisms with their own hands: the skills of learning, presentation, and collaboration. These languages are written as a pattern language, which is a way of describing tacit practical knowledge. In this paper, a new type of pattern language is proposed as "Pattern Languages 3.0", and three examples are introduced: Learning Patterns, Collaboration

Computer science - computers and society · Not Relevant

Ranking creative language characteristics in small data scenarios

Julia Siekiera

The ability to rank creative natural language provides an important general tool for downstream language understanding and generation. However, current deep ranking models require substantial amounts of labeled data that are difficult and expensive to obtain for different domains, languages and creative characteristics. A recent neural approach, the DirectRanker, promises to reduce the amount of training data needed, but its application to text has not been fully explored. We therefore adapt the DirectR

Computer science - computation and language · Relevant
creativity frameworks › computational creativity; creativity frameworks › creative-textual creativity; evaluates a creative feature › figures of speech; evaluates a creative feature › humor; evaluates a creative feature › puns; evaluation › creativity evaluation; evaluation › human eval; model used › Small (<3B); evaluation › automatic metrics; scope › technical research; textual genre › literature; related to creativity › related to creativity as a human ability
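
For readers unfamiliar with the ranking setup, the toy model below shows the antisymmetric pairwise-scoring idea behind DirectRanker-style rankers; the embedding dimension, network, and random training data are placeholders, not the paper's configuration.

```python
# Toy pairwise ranker: a shared scorer applied to two texts, trained on which of
# the pair is "more creative". Swapping the inputs flips the predicted sign.
import torch
import torch.nn as nn


class PairwiseRanker(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Antisymmetric by construction: rank(x1, x2) = -rank(x2, x1).
        return torch.tanh(self.scorer(x1) - self.scorer(x2))


# Placeholder data: 128-dim sentence embeddings; label +1 if the first text wins.
x1, x2 = torch.randn(32, 128), torch.randn(32, 128)
labels = torch.randint(0, 2, (32, 1)).float() * 2 - 1

model = PairwiseRanker(128)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = torch.mean((model(x1, x2) - labels) ** 2)
    loss.backward()
    opt.step()
```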

Evaluating creative language generation: The case of rap lyric ghostwriting

Peter Potash

Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet di

Computer science - computation and language · Not Relevant

Galton's law of mediocrity: Why large language models regress to the mean and fail at creativity in advertising

Matt Keon

Large language models (LLMs) generate fluent text yet often default to safe, generic phrasing, raising doubts about their ability to handle creativity. We formalize this tendency as a Galton-style regression to the mean in language and evaluate it using a creativity stress test in advertising concepts. When ad ideas were simplified step by step, creative features such as metaphors, emotions, and visual cues disappeared early, while factual content remained, showing that models favor high-probabi

Computer science - artificial intelligence · Relevant
related to creativity › related to creativity as a human ability; related to creativity › related to creativity as a textual genre; creativity frameworks › computational creativity; creativity frameworks › creative-textual creativity; model used › Small (<3B); evaluation › human eval; evaluation › automatic metrics; evaluation › sentence-level; evaluation › creativity evaluation; scope › technical research; textual genre › music

VisuCraft: Enhancing large vision-language models for complex visual-guided creative content generation via structured information extraction

Rongxin Jiang

This paper introduces VisuCraft, a novel framework designed to significantly enhance the capabilities of Large Vision-Language Models (LVLMs) in complex visual-guided creative content generation. Existing LVLMs often exhibit limitations in maintaining high visual fidelity, genuine creativity, and precise adherence to nuanced user instructions when generating long-form texts. VisuCraft addresses these challenges by integrating a multimodal structured information extractor (E) and a dynamic prompt

Computer science - computer vision and pattern recognition, computer science - computation and language · Not Relevant

Creativity or brute force? Using brainteasers as a window into the problem-solving abilities of large language models

Simeng Han

Accuracy remains a standard metric for evaluating AI systems, but it offers limited insight into how models arrive at their solutions. In this work, we introduce a benchmark based on brainteasers written in long narrative form to probe more deeply into the types of reasoning strategies that models use. Brainteasers are well-suited for this goal because they can be solved with multiple approaches, such as a few-step solution that uses a creative insight or a longer solution that uses more brute f

Computer science - artificial intelligence, computer science - computation and language · Relevant
creativity frameworks › psychological/cognitive; evaluates a creative feature › logic (puzzles, etc.); evaluates a creative feature › riddles; model used › ChatGPT; model used › Large (>32B); model used › Medium (8-24); model used › Small (<3B); related to creativity › mentions creativity as a human ability; scope › prompt engineering; scope › technical research; evaluation › LLM-as-a-judge; evaluation › human eval; evaluation › document-level; evaluation › creativity evaluation

Let's think outside the box: Exploring leap-of-thought in large language models with creative humor generation

Shanshan Zhong

Chain-of-Thought (CoT) guides large language models (LLMs) to reason step-by-step, and can motivate their logical reasoning ability. While effective for logical tasks, CoT is not conducive to creative problem-solving which often requires out-of-box thoughts and is crucial for innovation advancements. In this paper, we explore the Leap-of-Thought (LoT) abilities within LLMs – a non-sequential, creative paradigm involving strong associations and knowledge leaps. To this end, we study LLMs on the

Computer science - artificial intelligence, computer science - computation and language, computer science - computer vision and pattern recognition · Relevant
creativity frameworks › psychological/cognitive; evaluates a creative feature › humor; evaluation › automatic metrics; evaluation › human eval; evaluation › sentence-level; evaluation › creativity evaluation; model used › ChatGPT; model used › Medium (8-24); model used › Large (>32B); scope › creative training; scope › prompt engineering; scope › technical research; related to creativity › related to creativity as a human ability

Curiosity-driven LLM-as-a-judge for personalized creative judgment

Vanya Bannihatti Kumar

Modern large language models (LLMs) excel at objective tasks such as evaluating mathematical reasoning and factual accuracy, yet they falter when faced with the nuanced, subjective nature of assessing creativity. In this work, we propose a novel curiosity-driven LLM-as-a-judge for evaluating creative writing, personalized to each individual's creative judgments. We use the Torrance Test of Creative Thinking (TTCW) benchmark introduced in Chakrabarty et al. (2024), which has stories annotat

Computer science - computation and language, computer science - machine learning · Relevant
evaluation › LLM-as-a-judge; creativity frameworks › psychological/cognitive; evaluation › human eval; evaluation › automatic metrics; evaluation › document-level; model used › Large (>32B); model used › Small (<3B); related to creativity › related to creativity as a human ability; related to creativity › related to creativity as a textual genre; textual genre › literature; scope › technical research

LLM-driven e-commerce marketing content optimization: Balancing creativity and conversion

Haowei Yang

As e-commerce competition intensifies, balancing creative content with conversion effectiveness becomes critical. Leveraging LLMs' language generation capabilities, we propose a framework that integrates prompt engineering, multi-objective fine-tuning, and post-processing to generate marketing copy that is both engaging and conversion-driven. Our fine-tuning method combines sentiment adjustment, diversity enhancement, and CTA embedding. Through offline evaluations and online A/B tests across cat

Computer science - computation and language, computer science - artificial intelligence, computer science - information retrieval · Not Relevant

We're different, we're the same: Creative homogeneity across LLMs

Emily Wenger

Numerous powerful large language models (LLMs) are now available for use as writing support tools, idea generators, and beyond. Although these LLMs are marketed as helpful creative assistants, several works have shown that using an LLM as a creative partner results in a narrower set of creative outputs. However, these studies only consider the effects of interacting with a single LLM, begging the question of whether such narrowed creativity stems from using a particular LLM – which arguably has

Computer science - computers and society, computer science - artificial intelligence, computer science - computation and language, computer science - machine learning · Relevant
creativity frameworks › psychological/cognitive; related to creativity › mentions creativity as a human ability; model used › ChatGPT; model used › Large (>32B); model used › Medium (8-24); model used › Small (<3B); evaluation › automatic metrics; evaluation › creativity evaluation; evaluation › human eval; scope › prompt engineering; scope › technical research

Designing and evaluating dialogue LLMs for co-creative improvised theatre

Boyd Branch

Social robotics researchers are increasingly interested in multi-party trained conversational agents. With a growing demand for real-world evaluations, our study presents Large Language Models (LLMs) deployed in a month-long live show at the Edinburgh Festival Fringe. This case study investigates human improvisers co-creating with conversational agents in a professional theatre setting. We explore the technical capabilities and constraints of on-the-spot multi-party dialogue, providing comprehen

Computer science - computation and language · Not Relevant

Creative beam search: LLM-as-a-judge for improving response generation

Giorgio Franceschelli

Large language models are revolutionizing several areas, including artificial creativity. However, the process of generation in machines profoundly diverges from that observed in humans. In particular, machine generation is characterized by a lack of both intentionality and an underlying creative process. We propose a method called Creative Beam Search that uses Diverse Beam Search and LLM-as-a-Judge to perform response generation and response validation. The results of a qualitative experiment show

Computer science - artificial intelligence, computer science - computation and language, computer science - human-computer interaction, computer science - machine learning · Relevant
creativity frameworks › psychological/cognitive; evaluation › LLM-as-a-judge; evaluation › creativity evaluation; evaluation › human eval; model used › Medium (8-24); related to creativity › mentions creativity as a human ability; evaluation › document-level; scope › technical research
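
A rough sketch of the two-step recipe the abstract names: diverse beam search proposes candidates, then a judge validates one. The model name, prompt, and the toy `judge` function are placeholders rather than the authors' setup.

```python
# Sketch: propose candidates with diverse beam search, then let a judge pick one.
from transformers import AutoModelForCausalLM, AutoTokenizer

gen_name = "gpt2"  # stand-in model; the paper targets larger LLMs
tok = AutoTokenizer.from_pretrained(gen_name)
model = AutoModelForCausalLM.from_pretrained(gen_name)

prompt = "Write a one-line slogan for a reusable water bottle:"
inputs = tok(prompt, return_tensors="pt")

# Diverse beam search splits beams into groups and penalizes groups for
# repeating each other's tokens, spreading the candidates apart.
out = model.generate(
    **inputs,
    max_new_tokens=30,
    num_beams=8,
    num_beam_groups=4,
    diversity_penalty=1.0,
    num_return_sequences=4,
    do_sample=False,
)
candidates = [tok.decode(seq, skip_special_tokens=True) for seq in out]


# Validation step: in the paper this is an LLM-as-a-judge call; here `judge`
# is only a toy proxy for lexical novelty so the sketch stays self-contained.
def judge(text: str) -> float:
    return float(len(set(text.split())))


print(max(candidates, key=judge))
```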

CreativEval: Evaluating creativity of LLM-based hardware code generation

Matthew DeLorenzo

Large Language Models (LLMs) have proved effective and efficient in generating code, leading to their utilization within the hardware design process. Prior works evaluating LLMs' abilities for register transfer level code generation solely focus on functional correctness. However, the creativity associated with these LLMs, or the ability to generate novel and unique solutions, is a metric not as well understood, in part due to the challenge of quantifying this quality. To address this research

Computer science - computation and language · Not Relevant

On recipe memorization and creativity in large language models: Is your model a creative cook, a bad cook, or merely a plagiator?

Jan Kvapil

This work-in-progress investigates the memorization, creativity, and nonsense found in cooking recipes generated from Large Language Models (LLMs). Precisely, we aim (i) to analyze memorization, creativity, and non-sense in LLMs using a small, high-quality set of human judgments and (ii) to evaluate potential approaches to automate such a human annotation in order to scale our study to hundreds of recipes. To achieve (i), we conduct a detailed human annotation on 20 preselected recipes generated

Computer science - computation and language · Not Relevant

Probing and inducing combinational creativity in vision-language models

Yongqian Peng

The ability to combine existing concepts into novel ideas stands as a fundamental hallmark of human intelligence. Recent advances in Vision-Language Models (VLMs) like GPT-4V and DALLE-3 have sparked debate about whether their outputs reflect combinational creativity–defined by M. A. Boden (1998) as synthesizing novel ideas through combining existing concepts–or sophisticated pattern matching of training data. Drawing inspiration from cognitive science, we investigate the combinational creativ

Computer science - computer vision and pattern recognition, computer science - artificial intelligence, computer science - computation and language · Not Relevant

EscapeBench: Towards advancing creative intelligence of language model agents

Cheng Qian

Language model agents excel in long-session planning and reasoning, but existing benchmarks primarily focus on goal-oriented tasks with explicit objectives, neglecting creative adaptation in unfamiliar environments. To address this, we introduce EscapeBench, a benchmark suite of room escape game environments designed to challenge agents with creative reasoning, unconventional tool use, and iterative problem-solving to uncover implicit goals. Our results show that current LM models, despite emplo

Computer science - computation and language, computer science - artificial intelligence, computer science - machine learning · Not Relevant

Weaver: Foundation models for creative writing

Tiannan Wang

This work introduces Weaver, our first family of large language models (LLMs) dedicated to content creation. Weaver is pre-trained on a carefully selected corpus that focuses on improving the writing capabilities of large language models. We then fine-tune Weaver for creative and professional writing purposes and align it to the preference of professional writers using a suite of novel methods for instruction data synthesis and LLM alignment, making it able to produce more human-like texts and fo

Computer science - computation and language, computer science - artificial intelligence, computer science - machine learning · Relevant
creativity frameworks › creative-textual creativity; evaluation › LLM-as-a-judge; evaluation › human eval; evaluation › creativity evaluation; evaluation › document-level; model used › ChatGPT; model used › Large (>32B); model used › Medium (8-24); model used › Small (<3B); post-editing › post-editing with LLMs; scope › agents; scope › creative training; scope › technical research; textual genre › literature; related to creativity › related to creativity as a human ability; related to creativity › related to creativity as a textual genre

Creative divergent synthesis with generative models

Axel Chemla–Romeu-Santos

Machine learning approaches now achieve impressive generation capabilities in numerous domains such as image, audio or video. However, most training & evaluation frameworks revolve around the idea of strictly modelling the original data distribution rather than trying to extrapolate from it. This precludes the ability of such models to diverge from the original distribution and, hence, exhibit some creative traits. In this paper, we propose various perspectives on how this complicated goal coul

Computer science - machine learning, statistics - machine learning · Not Relevant

Creative painting with latent diffusion models

Xianchao Wu

Artistic painting has achieved significant progress during recent years. Using an autoencoder to connect the original images with compressed latent spaces and a cross-attention-enhanced U-Net as the backbone of diffusion, latent diffusion models (LDMs) have achieved stable and high-fidelity image generation. In this paper, we focus on enhancing the creative painting ability of current LDMs in two directions, textual condition extension and model retraining with the Wikiart dataset. Through textual

Computer science - computer vision and pattern recognition, computer science - artificial intelligence, computer science - computation and language, computer science - graphics, computer science - machine learning · Not Relevant

Ownership and creativity in generative models

Omri Avrahami

Machine-learning-generated content such as image artworks, textual poems, and music has become prominent in recent years. These tools attract much attention from the media, artists, researchers, and investors. Because these tools are data-driven, they are inherently different from traditional creative tools, which raises the question: who may own the content that is generated by these tools? In this paper we aim to address this question. We start by providing background to this problem, raising

Computer science - computers and society · Not Relevant

BILLY: Steering large language models via merging persona vectors for creative generation

Tsung-Min Pai

Multi-LLM systems enhance the creativity of large language models by simulating human collective intelligence but suffer from significant drawbacks, such as high computational costs and inference latency. To address these limitations, we propose BILLY (BlendIng persona vectors for Large Language model creativitY), a training-free framework that captures the benefits of multi-LLM collaboration, i.e. inducing diverse perspectives and specialized expertise, within a single model. BILLY operates by

Computer science - computation and language, computer science - artificial intelligence · Relevant
creativity frameworks › psychological/cognitive; evaluation › LLM-as-a-judge; evaluation › creativity evaluation; evaluation › document-level; model used › Medium (8-24); related to creativity › related to creativity as a human ability; scope › technical research
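
The abstract is truncated before the mechanism, so the snippet below sketches only one plausible reading of "merging persona vectors": derive an activation offset per persona and add a weighted blend back at a chosen layer during generation. The extraction recipe, layer index, weights, and model are assumptions, not the paper's method.

```python
# Sketch: blend per-persona activation offsets and add the mix to one GPT-2 block.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()
layer = model.transformer.h[6]  # arbitrary block to steer


def mean_hidden(text: str) -> torch.Tensor:
    """Mean hidden state of the chosen layer for a prompt (a crude persona direction)."""
    captured = {}

    def grab(module, inputs, output):
        captured["h"] = output[0]  # hidden states; returning None leaves the output unchanged

    hook = layer.register_forward_hook(grab)
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    hook.remove()
    return captured["h"].mean(dim=1).squeeze(0)


neutral = mean_hidden("Describe a chair.")
personas = {
    "poet": mean_hidden("As a whimsical poet, describe a chair."),
    "engineer": mean_hidden("As a precise engineer, describe a chair."),
}
# Blend the persona offsets; the weights are free hyperparameters.
steer = 0.6 * (personas["poet"] - neutral) + 0.4 * (personas["engineer"] - neutral)


def add_steer(module, inputs, output):
    # Add the blended vector to the block's hidden states at every decoding step.
    return (output[0] + steer,) + output[1:]


handle = layer.register_forward_hook(add_steer)
ids = tok("Write a product tagline for a chair:", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30, do_sample=True, top_p=0.9)[0]))
handle.remove()
```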

Creative writers' attitudes on writing as training data for large language models

Katy Ilonka Gero

The use of creative writing as training data for large language models (LLMs) is highly contentious and many writers have expressed outrage at the use of their work without consent or compensation. In this paper, we seek to understand how creative writers reason about the real or hypothetical use of their writing as training data. We interviewed 33 writers with variation across genre, method of publishing, degree of professionalization, and attitudes toward and engagement with LLMs. We report on

Computer science - human-computer interaction · Not Relevant

The creative psychometric item generator: a framework for item generation and validation using large language models

Antonio Laverghetta Jr.

Increasingly, large language models (LLMs) are being used to automate workplace processes requiring a high degree of creativity. While much prior work has examined the creativity of LLMs, there has been little research on whether they can generate valid creativity assessments for humans despite the increasingly central role of creativity in modern economies. We develop a psychometrically inspired framework for creating test items (questions) for a classic free-response creativity test: the creat

Computer science - computation and language · Not Relevant

C2Ideas: Supporting creative interior color design ideation with large language model

Yihan Hou

Interior color design is a creative process that endeavors to allocate colors to furniture and other elements within an interior space. While much research focuses on generating realistic interior designs, these automated approaches often misalign with user intention and disregard design rationales. Informed by a need-finding preliminary study, we develop C2Ideas, an innovative system for designers to creatively ideate color schemes enabled by an intent-aligned and domain-oriented large language

Computer science - human-computer interaction · Not Relevant

Inspire creativity with ORIBA: Transform artists' original characters into chatbots through large language model

Yuqian Sun

This research delves into the intersection of illustration art and artificial intelligence (AI), focusing on how illustrators engage with AI agents that embody their original characters (OCs). We introduce 'ORIBA', a customizable AI chatbot that enables illustrators to converse with their OCs. This approach allows artists to not only receive responses from their OCs but also to observe their inner monologues and behavior. Despite the existing tension between artists and AI, our study explores in

Computer science - multimedia, computer science - artificial intelligence, computer science - human-computer interaction, 14J60 (primary) 14F05, 14J26 (secondary), F.2.2; I.2.7 · Not Relevant

A mathematical abstraction for balancing the trade-off between creativity and reality in large language models

Ritwik Sinha

Large Language Models have become popular for their remarkable capabilities in human-oriented tasks and traditional natural language processing tasks. Their efficient functioning is attributed to the attention mechanism in the Transformer architecture, which enables them to concentrate on particular aspects of the input. LLMs are increasingly being used in domains such as generating prose, poetry, or art, which require the model to be creative (e.g., Adobe Firefly). LLMs possess advanced language generat

Computer science - computation and language, computer science - machine learning · Not Relevant

Sasha: Creative goal-oriented reasoning in smart homes with large language models

Evan King

Smart home assistants function best when user commands are direct and well-specified (e.g., "turn on the kitchen light"), or when a hard-coded routine specifies the response. In more natural communication, however, human speech is unconstrained, often describing goals (e.g., "make it cozy in here" or "help me save energy") rather than indicating specific target devices and actions to take on those devices. Current systems fail to understand these under-specified commands since they cannot reason

Computer science - human-computer interaction, computer science - artificial intelligence · Not Relevant

LLMs can realize combinatorial creativity: Generating creative ideas via LLMs for scientific research

Tianyang Gu

Scientific idea generation has been extensively studied in creativity theory and computational creativity research, providing valuable frameworks for understanding and implementing creative processes. However, recent work using Large Language Models (LLMs) for research idea generation often overlooks these theoretical foundations. We present a framework that explicitly implements combinatorial creativity theory using LLMs, featuring a generalization-level retrieval system for cross-domain knowle

Computer science - artificial intelligence · Not Relevant

Mathemyths: Leveraging large language models to teach mathematical language through child-AI co-creative storytelling

Chao Zhang

Mathematical language is a cornerstone of a child's mathematical development, and children can effectively acquire this language through storytelling with a knowledgeable and engaging partner. In this study, we leverage the recent advances in large language models to conduct free-form, creative conversations with children. Consequently, we developed Mathemyths, a joint storytelling agent that takes turns co-creating stories with children while integrating mathematical terms into the evolving nar

Computer science - human-computer interaction · Not Relevant

FlexMind: Supporting deeper creative thinking with LLMs

Yaqing Yang

Effective ideation requires both broad exploration of diverse ideas and deep evaluation of their potential. Generative AI can support such processes, but current tools typically emphasize either generating many ideas or supporting in-depth consideration of a few, lacking support for both. Research also highlights risks of over-reliance on LLMs, including shallow exploration and negative creative outcomes. We present FlexMind, an AI-augmented system that scaffolds iterative exploration of ideas,

Computer science - human-computer interaction · Not Relevant

Igniting creative writing in small language models: LLM-as-a-judge versus multi-agent refined rewards

Xiaolong Wei

Large Language Models (LLMs) have demonstrated remarkable creative writing capabilities, yet their substantial computational demands hinder widespread use. Enhancing Small Language Models (SLMs) offers a promising alternative, but current methods like Supervised Fine-Tuning (SFT) struggle with novelty, and Reinforcement Learning from Human Feedback (RLHF) is costly. This paper explores two distinct AI-driven reward strategies within a Reinforcement Learning from AI Feedback (RLAIF) framework to

Computer science - computation and language, computer science - artificial intelligence · Not Relevant

Cooking up creativity: Enhancing LLM creativity through structured recombination

Moran Mizrahi

Large Language Models (LLMs) excel at many tasks, yet they struggle to produce truly creative, diverse ideas. In this paper, we introduce a novel approach that enhances LLM creativity. We apply LLMs for translating between natural language and structured representations, and perform the core creative leap via cognitively inspired manipulations on these representations. Our notion of creativity goes beyond superficial token-level variations; rather, we recombine structured representations of exis

Computer science - computation and language, computer science - artificial intelligence, computer science - machine learning · Relevant
creativity frameworks › psychological/cognitive; creativity frameworks › computational creativity; evaluation › automatic metrics; evaluation › creativity evaluation; evaluation › document-level; evaluation › human eval; model used › ChatGPT; model used › Large (>32B); related to creativity › mentions creativity as a human ability; scope › prompt engineering; scope › technical research
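
A compact sketch of the translate-recombine-render pattern the abstract describes: map two texts into a structured form with an LLM, recombine fields across them, and render the hybrid back to text. The JSON schema, prompts, and `llm` callable are invented for illustration and are not the paper's representation.

```python
# Sketch: structured recombination of two ideas via an LLM round-trip.
import json
from typing import Callable


def recombine(llm: Callable[[str], str], text_a: str, text_b: str) -> str:
    schema = '{"goal": str, "ingredients": [str], "technique": str}'
    parse = lambda t: json.loads(llm(f"Summarize as JSON matching {schema}:\n{t}"))
    a, b = parse(text_a), parse(text_b)

    # Core creative leap: keep A's goal, borrow B's technique, merge ingredient lists.
    hybrid = {
        "goal": a["goal"],
        "ingredients": sorted(set(a["ingredients"]) | set(b["ingredients"])),
        "technique": b["technique"],
    }
    return llm(f"Write a short, coherent recipe from this plan: {json.dumps(hybrid)}")
```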

CreativityPrism: a holistic benchmark for large language model creativity

Zhaoyi Joey Hou

Creativity is often seen as a hallmark of human intelligence. While large language models (LLMs) are increasingly perceived as producing creative text, there is still no holistic framework to evaluate their creativity across diverse scenarios. Existing evaluation methods remain fragmented, with dramatic variation across domains and tasks, largely due to differing definitions and measurements of creativity. Inspired by the hypothesis that creativity is not one fixed idea, we propose CreativityPri

Computer science - computation and language, computer science - artificial intelligence · Relevant
related to creativity › mentions creativity as a human ability; creativity frameworks › psychological/cognitive; evaluates a creative feature › logic (puzzles, etc.); evaluation › automatic metrics; evaluation › word-level; model used › Medium (8-24); evaluation › creativity evaluation; textual genre › literature; textual genre › poetry

Modifying large language model post-training for diverse creative writing

John Joon Young Chung

As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality but neglects to facilitate output diversity. Hence, in creative writing generation, we investigate post-training approaches to promote both output diversity and quality. Our core idea is to include deviation – the degree of difference between a trainin

Computer science - computation and language, computer science - machine learning · Relevant
creativity frameworks › creative-textual creativity; evaluation › document-level; evaluation › automatic metrics; model used › Medium (8-24); related to creativity › related to creativity as a textual genre; textual genre › literature; scope › creative training; scope › technical research

Characterising the creative process in humans and large language models

Surabhi S. Nath

Large language models appear quite creative, often performing on par with the average human on creative tasks. However, research on LLM creativity has focused solely on products, with little attention on the creative process. Process analyses of human creativity often require hand-coded categories or exploit response times, which do not apply to LLMs. We provide an automated method to characterise how humans and LLMs explore semantic spaces on the Alternate Uses Task, and contr

Computer science - human-computer interaction, computer science - artificial intelligence, computer science - computation and language, quantitative biology - neurons and cognition · Not Relevant

Creative robot tool use with large language models

Mengdi Xu

Tool use is a hallmark of advanced intelligence, exemplified in both animal behavior and robotic capabilities. This paper investigates the feasibility of imbuing robots with the ability to creatively use tools in tasks that involve implicit physical constraints and long-term planning. Leveraging Large Language Models (LLMs), we develop RoboTool, a system that accepts natural language instructions and outputs executable code for controlling robots in both simulated and real-world environments. Ro

Computer science - robotics, computer science - artificial intelligence, computer science - machine learning · Not Relevant