Is It Still Art?
The philosophical reckoning with AI creativity that the tech industry doesn’t want to have
“Every act of creation is first an act of destruction.”
— Pablo Picasso
In 1935, Walter Benjamin wrote “The Work of Art in the Age of Mechanical Reproduction,” arguing that mass reproduction strips art of its “aura”—the unique, irreplaceable quality that comes from an artwork’s specific existence in time and space. A photograph of the Mona Lisa is not the Mona Lisa. The reproduction lacks the aura of the original.
Ninety years later, we face a question Benjamin could not have imagined: what happens when there is no original? When the work was never made by human hands, never existed in a specific place, never carried the weight of a creator’s intention? When the “work of art” was generated by a mathematical function optimizing pixel predictions against a loss function?
A Springer Nature paper published in 2025 proposes an answer, and it’s more nuanced than the discourse deserves. The researchers introduce the concept of “semi-aura”—a diminished but still present form of artistic authenticity in AI-generated works. The argument: an AI image is not nothing. It emerged from specific training data, a specific model architecture, a specific prompt crafted by a specific person at a specific moment. It has provenance, even if it lacks traditional authorship. It has a history, even if that history is computational rather than biographical.
This is either a profound expansion of aesthetic theory or a polite way of saying “it’s not quite art but we need a word for it.” The distinction matters because it determines how we treat AI-generated imagery—legally, culturally, and commercially.
The empirical evidence is less equivocal. A Frontiers in Psychology study using eye-tracking technology found that when people are told an artwork is AI-generated, they exhibit measurable behavioral changes: they spend less time looking at it, rate it lower on emotional impact, and show different gaze patterns. The same image, with the same colors and composition, is perceived differently based solely on whether the viewer believes a human made it.
This is remarkable. It means the “art” in a work of art is not entirely in the object. It’s partly in the relationship between the viewer and the creator. We don’t just see art. We see through it to the person who made it. And when there is no person—when the creator is a statistical model—the window we’re looking through goes dark.
The competition controversies illustrate the tension from another angle. In 2022, Jason Allen won the Colorado State Fair’s digital art competition with an image generated by Midjourney. The backlash was immediate and intense. In January 2025, the ARC Salon Competition—a prestigious international art award—faced allegations that an awarded piece was AI-generated. The responses revealed how deeply people care about this boundary: they treat it not as a technical question but as a moral one.
The moral dimension is worth taking seriously rather than dismissing as Luddism. When someone says “AI art isn’t real art,” they are making a claim about value, not capability. The claim is: art derives its value not just from what it looks like, but from the fact that a human being spent time, developed skill, made choices, and expressed something personal through the work. An AI-generated image that looks identical to a painting is not the same as the painting, just as a perfect counterfeit is not the same as the original currency.
But this argument has a vulnerability, and it’s the same one Benjamin identified ninety years ago. If the value of art lies in the process of its creation rather than the quality of its output, then artistic value becomes unfalsifiable: nothing in the object itself can confirm or deny it. Any output, no matter how mediocre, carries the aura of human effort. Any output, no matter how beautiful, lacks it if generated by machine. We end up valuing provenance over quality—which is, ironically, exactly the argument luxury brands use to charge ten thousand dollars for a handbag.
The bias problem complicates things further. Research published in AI & Ethics documents how AI-generated art significantly amplifies and perpetuates harmful biases and stereotypes. Models trained on the collected imagery of the internet reproduce the internet’s prejudices: Eurocentric beauty standards, gendered stereotypes, racial caricatures. When these biases appear in human art, we can engage with the artist about them. When they appear in AI art, there is no artist to engage with. The biases become architectural, embedded in parameters that no one person chose and no one person can change.
UNESCO has weighed in, calling this “a decisive moment” for the relationship between AI and artistic creation. The Journal of Aesthetics and Art Criticism is publishing a special issue on AI and art in spring 2026. The academy, characteristically, is taking the question seriously while the market has already moved on.
And the market has moved on. Enterprise adoption of AI image tools is now mainstream. Global investment in generative AI solutions tripled in 2025 to roughly thirty-seven billion dollars. Companies are not asking whether AI-generated marketing images are “art.” They are asking whether AI-generated marketing images convert. The philosophical question is being answered not by philosophers but by quarterly earnings reports.
This is perhaps the deepest irony. The technology that has forced a re-examination of what art is, what creativity means, and what value human expression carries—this technology is being deployed primarily to generate product photographs, advertising mockups, and social media content. The most philosophically disruptive technology of the century is being used to sell things. Picasso would understand.
For our part—as a company that detects AI-generated images—the philosophical question has a practical dimension. People come to our tool because they want to know: is this real? But “real” means different things in different contexts. A photojournalist asking “is this image authentic” needs a different answer than an art collector asking “did a human make this.” The first is a forensic question. The second is an existential one.
We can answer the forensic question with increasing accuracy. The existential one is above our pay grade. But we think it’s worth asking, and worth sitting with the discomfort of not having a clean answer. Because the alternative—the alternative where we stop asking, where AI-generated imagery is simply accepted as equivalent to human-created imagery, where provenance is irrelevant and process is invisible—that alternative is not just a technological change. It is a cultural amputation.
Whether AI can make art is the wrong question. The right question is: does it matter how an image came to exist? And if your answer is no—if provenance truly doesn’t matter, if process is irrelevant, if the only thing that counts is the final arrangement of pixels—then you need to reckon with the consequences. Because a world where process doesn’t matter is a world where truth doesn’t matter. And that is a world we should be very careful about building.