Assessing the creativity of LLMs in proposing novel solutions to mathematical problems

Junyi Ye et al.

Abstract

The mathematical capabilities of AI systems are complex and multifaceted. Most existing research has focused on the correctness of AI-generated solutions to mathematical problems. In this work, we argue that beyond producing correct answers, AI systems should also be capable of developing, or assisting humans in developing, novel solutions to mathematical challenges. This study explores the creative potential of Large Language Models (LLMs) in mathematical reasoning, an aspect that has received limited attention in prior research. We introduce a novel framework and benchmark, CreativeMath, which encompasses problems ranging from middle school curricula to Olympiad-level competitions, designed to assess LLMs' ability to propose innovative solutions after some known solutions have been provided. Our experiments demonstrate that, while LLMs perform well on standard mathematical tasks, their capacity for creative problem-solving varies considerably. Notably, the Gemini-1.5-Pro model outperformed other LLMs in generating novel solutions. This research opens a new frontier in evaluating AI creativity, shedding light on both the strengths and limitations of LLMs in fostering mathematical innovation, and setting the stage for future developments in AI-assisted mathematical discovery.
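
The evaluation protocol the abstract describes (show a model a problem together with some known solutions, ask it for a solution that uses a different method, then judge the result for novelty) can be pictured with a minimal sketch. The sketch below is an illustrative assumption, not the paper's implementation: the function names, prompt wording, and the YES/NO LLM-as-a-judge novelty check are all hypothetical, and `LLM` stands for any text-in/text-out model interface.

```python
# Hypothetical sketch of a CreativeMath-style protocol, assuming an abstract
# text-in/text-out model interface. Prompts and judging criteria here are
# illustrative placeholders, not the paper's actual implementation.
from typing import Callable, List

LLM = Callable[[str], str]  # any callable that maps a prompt to a completion


def propose_novel_solution(solver: LLM, problem: str, known: List[str]) -> str:
    """Ask the solver for a solution distinct from the known ones."""
    shown = "\n\n".join(
        f"Known solution {i + 1}:\n{s}" for i, s in enumerate(known)
    )
    prompt = (
        f"Problem:\n{problem}\n\n{shown}\n\n"
        "Provide a correct solution that uses a substantively different "
        "method from all of the known solutions above."
    )
    return solver(prompt)


def judge_novelty(judge: LLM, problem: str, known: List[str],
                  candidate: str) -> bool:
    """LLM-as-a-judge: is the candidate both correct and methodologically new?"""
    shown = "\n\n".join(
        f"Known solution {i + 1}:\n{s}" for i, s in enumerate(known)
    )
    verdict = judge(
        f"Problem:\n{problem}\n\n{shown}\n\n"
        f"Candidate solution:\n{candidate}\n\n"
        "Answer YES only if the candidate is correct AND uses a method not "
        "present in any known solution; otherwise answer NO."
    )
    return verdict.strip().upper().startswith("YES")
```

Because `LLM` is just a callable, any wrapper around an API client, a local model, or even a stub for testing can be plugged in; the paper's actual correctness and novelty criteria would replace the toy judge prompt.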


Tags

creativity frameworks › computational creativity; creativity frameworks › psychological/cognitive; evaluation › LLM-as-a-judge; evaluation › automatic metrics; evaluation › creativity evaluation; evaluation › human eval; model used › ChatGPT; model used › Large (>32B); related to creativity › related to creativity as a human ability
