
Do LLMs Generate Creative and Visually Accessible Data Visualisations?

Clarissa Miranda-Pena

ALTA 2024

Abstract

Data visualisation is a valuable task that combines careful data processing with creative design. Large Language Models (LLMs) are now capable of responding to a data visualisation request in natural language with code that generates accurate data visualisations (e.g., using Matplotlib), but what about human-centered factors, such as the creativity and accessibility of the data visualisations? In this work, we study human perceptions of creativity in the data visualisations generated by LLMs, and propose metrics for accessibility. We generate a range of visualisations using GPT-4 and Claude-2 with controlled variations in prompt and inference parameters, to encourage the generation of different types of data visualisations for the same data. Subsets of these data visualisations are presented to people in a survey with questions that probe human perceptions of different aspects of creativity and accessibility. We find that the models produce visualisations that are novel, but not surprising. Our results also show that our accessibility metrics are consistent with human judgements. In all respects, the LLMs underperform visualisations produced by human-written code. To go beyond the simplest requests, these models need to become aware of human-centered factors, while maintaining accuracy.

Relevance Assessment

Research Gap

Notes


Tags

related to creativity › mentions creativity as a human ability
model used › Large (>32B)
evaluation › automatic metrics
evaluation › human eval
scope › prompt engineering
evaluation › creativity evaluation
creativity frameworks › psychological/cognitive
creativity frameworks › computational creativity

Search Queries

Paper ID: f2c3c348-a06f-4072-830e-31a13125cba7
Added: 9/21/2025