“We see what we believe.” — Richard L. Gregory, neuropsychologist, from Eye and Brain: The Psychology of Seeing (1970)
I. Opening the Frame: “Seeing What We Don’t See”
At the 17th ARC Salon Competition, a painting titled “The Witchling” won an honorable mention and a purchase award—only later to be stripped of that honor when allegations arose that the submitted image was, in fact, AI-generated or digitally manipulated. (Smartermarx) The controversy ignited broad discussion: how could jurors, skilled in assessing traditional realism, have been fooled? What perceptual or evidentiary failure allowed a work of hidden computational origin to slip through?
This incident, still simmering in artist circles and on some forums, frames our inquiry nicely: the boundary between human and machine image-making is no longer a simple dichotomy. It is now a perceptual battleground. The question is not merely “Is it AI?” but “How do we know?” and “What, and whom, do we trust?”
In this feature, I propose that the real challenge of AI art lies not in the technology, but in how we see and believe—how perceptual cues, authorship, and context intertwine to produce aesthetic trust or doubt. I’ll explore three illustrative cases (Edmond de Belamy, The Electrician, and The Witchling), examine how cue coherence and conflict play out in perception, and finally offer a constructive path forward in perceptual literacy and tool frameworks.
II. Case Studies: Edmond de Belamy, The Electrician, and the Disappearing Line of Authorship
1. Edmond de Belamy: The AI Portrait That Became a Signature
In 2018, the Paris collective Obvious created a portrait called Edmond de Belamy using a GAN algorithm. It was printed on canvas, gilt-framed, and submitted to auction—ultimately selling at Christie’s for $432,500. (Wikipedia)
The twist: the “hand” of the algorithm became the signature. The painting is signed with the mathematical formula used to generate it (the algorithm’s parameters). In this case, the artist’s identity is inseparable from the generative model itself—and the attribution of such a work forces viewers into a perceptual riddle: is the machine the author, or the curator of prompts, or both?
To test the perceptual ambiguity more scientifically, recent AI vision studies have shown that even state-of-the-art Vision–Language Models struggle to reliably attribute real paintings or to detect AI forgeries. (arXiv)
2. The Electrician: When a “Photograph” Becomes a Philosophical Experiment
In 2023, German photomedia artist Boris Eldagsen submitted an image titled Pseudomnesia: The Electrician to the Sony World Photography Awards, in the Creative category. What made this case extraordinary is that The Electrician was not a photograph at all, but a work generated (and refined) via AI tools—namely DALL·E 2 and subsequent inpainting/outpainting techniques. (Scientific American)
When Eldagsen’s image was announced as the winner, he publicly refused the prize, declaring that AI-generated works should not compete in traditional photography contests and that the rules were not prepared for this paradigm shift. (Eldagsen)
He framed the action as a deliberate provocation:
“I applied as a cheeky monkey… to find out if the competitions are prepared for AI images. They are not.” (Scientific American)
The controversy raised several powerful points that map directly onto our theme of perception, authorship, and trust:
- All perceptual cues passed—but the narrative cue collapsed. The Electrician exhibits high visual coherence: lighting, texture, composition, form—all conform to the “language” of vintage portrait photography. Yet once Eldagsen disclosed the generative origin, viewers reported a sudden perceptual shift—a loss of emotional weight and trust—despite nothing in the image having changed. (Scientific American)
- Authorship as a meta-cue that reconfigures interpretation. The moment one learns that the work was AI-generated, the viewer must re-map their perceptual strategy. The internal coherence of the image no longer anchors to human intention; instead, it becomes a puzzle of technique and prompt engineering. In other words, the “author” becomes a hidden variable—one whose revelation reprograms the perceptual loop.
- Institutional trust is deeply fragile under perceptual stress. In response, the Sony organizers and competition bodies came under intense scrutiny. They claimed judges had been aware of the “co-creation” with AI before awarding the prize, but the artist contended the judges failed to engage public discourse or clarify their position. (Art Newspaper) The work was later removed from the exhibition and the awards website, further complicating the public record. (Eldagsen)
- The image becomes a visual thought experiment. Eldagsen himself noted that he was less interested in winning than in accelerating the debate: “How many of you knew or suspected that it was AI generated? Something about this doesn’t feel right, does it?” (Eldagsen)
The gesture transformed a single image into a test of perceptual integrity—forcing everyone to ask: how much of seeing is trusting, and how much is inference?
3. ARC Salon / “The Witchling” as Institutional Failure
The ARC incident is more than a scandal; it is a real-world test case of how perceptual, procedural, and institutional trust can fail in the face of machine mimicry. From a Smartermarx article on the AI issue with the 2024 ARC Salon:
- The initial photo of The Witchling was challenged, prompting community outcry and social media scrutiny. (Smartermarx)
- ARC’s rules explicitly forbid digital art, painted-over prints, and “artworks containing digital elements.” (Smartermarx)
- In response, ARC rescinded the award and announced procedural changes for future competitions, including requiring work-in-progress (WIP) images and high-resolution texture detail. (Smartermarx)
- Several judges flagged multiple suspicious entries during the salon adjudication process, applying cross-detection, artist research, and peer consultation. (Smartermarx)
This episode captures the tension between institutional trust, perceptual judgment, and the pace of technological disruption. Read more about this entire case here: https://www.smartermarx.com/t/ai-and-the-2024-arc-salon/1993
III. Cue Coherence, Cue Conflicts, and Aesthetic Disfluency
To understand why some images slip past scrutiny and others falter, we must dig into how the visual system infers meaning from cues—and how coherence (or its absence) shapes aesthetic judgment.
A. The Mechanism: Bayesian Perception, Cue Integration, Predictive Models
Vision science increasingly views perception as hypothesis testing: the brain posits internal models and tests them against sensory input (see Purves & Lotto, 2010). When visual cues cohere—light direction, cast shadows, optical texture, gesture, perspective—our internal model earns confidence. When they conflict—say, a highlight that doesn’t align with a cast shadow, or textures that resist known physical laws—the model stumbles, eliciting uncertainty or discomfort.
Cue coherence is the condition in which all the image’s signals point to a single consistent origin story (a real space, a real hand). When some cues “defect,” we sense something is amiss—even if we can’t verbalize it.
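This cue-integration idea has a standard formalization in vision science (see, e.g., Ernst & Banks’ 2002 work on visual–haptic integration): independent cue estimates are combined as a reliability-weighted average, with each cue weighted by its inverse variance. A minimal two-cue sketch:

```latex
\hat{s} = w_1 s_1 + w_2 s_2,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\sigma_{\hat{s}}^2 = \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1}
```

When two cues agree, the combined estimate is more certain than either cue alone. When they conflict—when \(s_1\) and \(s_2\) diverge far beyond what their reliabilities predict—no weighting reconciles them cleanly, and that residual mismatch is one way to model the internal “error signal” of disfluency discussed in this section.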
B. Cue Conflict as the Hidden Signal
In AI-generated art, the giveaway now comes less from the obvious early distortions—the missing or extra fingers—than from quiet misalignments of perception. The light might fall convincingly across a form, yet the shadows hesitate to agree. A translucent fabric might glow too evenly, missing the subtle irregularities with which real materials scatter light. The textures of skin may appear consistent yet lack the optical evidence of a physically imaged material. Even the rhythm of edges and contours can feel algorithmically smooth—composed by pattern rather than by hand.
These small mismatches might never draw conscious attention, but they introduce a modicum of aesthetic disfluency: a momentary glitch in perceptual fluency that causes hesitation, lowering confidence. This is a perceptual cue in its own right—one that triggers a kind of internal “error signal.”
Waichulis’ notion of cue coherence/conflict fits naturally here: the viewer’s sense of “rightness” depends on harmony among micro- and macro-cues. Conflicting signals—even if subtle—erode trust.
As vision scientist Patrick Cavanagh noted, “We don’t notice when artists bend the physics of light—unless it breaks the logic of the scene.” In AI imagery, those subtle breaks—an inverted reflection, a drifting shadow—become what digital-forensics researcher Hany Farid calls “the new fingerprints” of artifice.
C. Confidence, Engagement, and the Perceptual Loop
Our enjoyment of art depends on perceptual confidence: the feeling that what we see “makes sense”—that the world is anchored. When that confidence wavers, aesthetic engagement recedes. Viewers may still find technical merit, but emotional resonance tends to collapse.
Thus, AI art that feels “off” is not failing entirely—it has triggered a dissonance in our perceptual system. The task for artists and critics becomes not only to spot the glitches, but to understand which cue conflicts trigger disfluency and under what viewing conditions.
IV. Context, Authorship, and the Meta-Cue of Trust
Even in cases where no perceptual glitch is detectable, contextual knowledge—that is, knowing whether an image is AI-generated or human-made—strongly modulates our experience.
A. Experimental Evidence: Authorship Labels Shift Emotion
In one notable empirical study (Specker et al., 2023), participants were shown identical artworks—but in one condition told the image was AI-generated, and in another told it was human-made. The results: viewers rated the “AI” versions as less emotionally moving and less skillful. (The aesthetic judgments shifted even though the image stimuli were identical.) This suggests that authorship can be viewed as a perceptual cue—a narrative anchor that shapes how we integrate the visual with the cognitive and emotional.
Thus, knowing “who made it” is not peripheral; it is part of the perceptual frame, altering not only our interpretation but the entire experience of the image.
B. Trust, Attribution, and the Visual Contract
When we view a work by Rembrandt, we bring a wealth of expectation—of depth, of struggle, of hand and thought. This attribution is part of the visual contract. When that contract is broken—or made ambiguous—the viewer may retract trust. The result is a kind of meta-perceptual dissonance.
Accordingly, authorship functions as a second-order perceptual cue. It guides how much weight we implicitly give to first-order sensory cues. In essence: if the author is “machine,” we lower our confidence in the perceptual experience; if the author is “human,” we grant interpretive generosity.
This is why provenance, process, and signature matter even more now: they re-anchor the perceptual chain.
V. Tools, Protocols & Best Practices for Reliable Identification
Given the rising complexity of AI mimicry, no single tool or method is infallible. But through triangulation—layering independent methods, contextual checks, and perceptual judgment—one can approach a defensible conclusion.
Below is a ranked/annotated list of the most reliable tools and practices currently available, along with caveats and suggested protocols. These have been vetted by juried competitions, professional art communities, and digital forensics researchers.
| Tool / Method | Strengths / Use Cases | Limitations / Risks | Best Practice (Usage Guidelines) |
|---|---|---|---|
| Reverse Image Search (Google, TinEye, Yandex) | Can reveal prior versions, derivatives, or AI model outputs used as seed | False negatives common; AI works may be novel | Always begin with this. If a claimed artwork matches an existing image or AI-rendered variant, flag for further scrutiny |
| AI-Detection Models (IsItAI, AIorNot, Winston, etc.) | Fast, automated signal; helpful as a first pass | High false positives/negatives; model drift over time; adversarial evasion | Use multiple detectors; interpret probabilistically (e.g. “flagged by 3/5 detectors”) rather than binary |
| Metadata / EXIF Forensics | May reveal source software, editing history, camera traces | Easily stripped or manipulated | Use as supplementary evidence. Discrepancies (e.g. camera metadata on an alleged oil painting) merit deeper investigation |
| Surface / Texture Analysis | High-res macro examination can show unnatural uniformity, repeated patterns, microtiling | Requires print or physical surface; may not apply to digital-only submission | Use multispectral imaging or macro photomicrographs; compare with known authentic works |
| Layer / File Structure Inspection | In native digital files, hidden layers or AI artifacts may expose generator pipelines | Only works when original file is available; intentional obfuscation possible | Reserve for suspicious submissions where file source can be requested |
| Work–in–Progress (WIP) Photography | Offers a visual trail of intention and evolving correction | Can be faked, staged, or manipulated digitally | Request multiple WIPs at various stages; require consistency in composition with final work |
| Verification Shot (Artist & Work) | Anchors the work in human presence; helps judge scale and surface | Can be counterfeited; privacy concerns | Require standardized conditions (lighting, angle); restrict access to jurors; delete after adjudication |
| Process Statement / Technical Narrative | Encourages artist articulation of challenges, materials and methods | May be AI-generated text; boilerplate or obfuscated | Use structured prompts; cross-check for coherence with physical evidence and artistic plausibility |
| Peer / Expert Review & Triangulation | Leverage domain experts to interpret anomalies | Subjective; vulnerable to bias | Combine with evidence; require independent review from multiple trusted judges |
| Statistical / Algorithmic Fingerprint Analysis (research domain) | Emerging tools detect AI-style fingerprints, latent diffusion artifacts | Research-stage; models change over time | Use defensively (i.e. as extra signal) in curated settings; cross-check with other evidence |
Caution: “Any measure to identify a potential AI image must be comprehensive. No tool or action … would yield sufficient evidence for a justified conclusion on their own.” (Smartermarx)
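The table’s guideline of interpreting detectors probabilistically—reporting “flagged by 3/5 detectors” rather than a binary verdict—can be sketched in a few lines of Python. The detector names and scores below are hypothetical placeholders, not real services or real outputs:

```python
def aggregate_detectors(scores, threshold=0.5):
    """Combine independent AI-detector scores into a count-based summary.

    scores: dict mapping detector name -> probability-of-AI in [0, 1].
    Returns a phrase like 'flagged by 3/5 detectors' plus the names of
    the flagging detectors, never a binary yes/no verdict.
    """
    flagged = sorted(name for name, p in scores.items() if p >= threshold)
    return f"flagged by {len(flagged)}/{len(scores)} detectors", flagged

# Hypothetical scores; real detectors frequently disagree, which is the point.
verdict, which = aggregate_detectors({
    "detector_a": 0.91, "detector_b": 0.34, "detector_c": 0.77,
    "detector_d": 0.55, "detector_e": 0.12,
})
```

A jury sheet would then record the fraction and the dissenting detectors as one signal among several, to be weighed against metadata, WIP images, and peer review rather than treated as proof.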
Suggested protocol (cascade approach):
- Start with reverse image search and multiple detectors.
- If flagged, examine metadata and request file-level inspection.
- If anomalies persist, require WIP + verification shots + narrative.
- Solicit peer expert review.
- Escalate only when independent lines converge.
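The cascade above can be expressed as a simple decision function. The field names (`reverse_hit`, `detector_flags`, and so on) are invented for illustration; a real workflow would wire them to the tools in the table:

```python
def cascade_review(entry):
    """Walk a submission through escalating checks (hypothetical fields).

    Mirrors the suggested protocol: cheap automated signals first, then
    evidence-heavy requests, escalating to peer review only when
    independent lines of evidence converge.
    """
    flags = []
    if entry.get("reverse_hit"):             # step 1: reverse image search
        flags.append("reverse_image_search")
    if entry.get("detector_flags", 0) >= 3:  # step 1: e.g. flagged by 3/5 detectors
        flags.append("multi_detector")
    if flags and entry.get("exif_anomaly"):  # step 2: metadata, only if already flagged
        flags.append("metadata")
    if flags and entry.get("wip_mismatch"):  # step 3: WIP + verification evidence
        flags.append("wip")
    # Steps 4-5: escalate only when independent lines converge.
    if len(flags) >= 2:
        return "escalate_to_peer_review"
    return "monitor" if flags else "clear"
```

Note the asymmetry built into the return values: a single flag yields “monitor,” not condemnation, reflecting the principle that false accusations cost more than false passes.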
It is better, as has been publicly argued, to allow some AI works to pass than to wrongly condemn human work. (Smartermarx)
VI. Toward Perceptual Literacy: The Constructive Stance
Rather than framing AI art as a crisis of authenticity, we can frame it as a test of perceptual literacy. Here are some principles and practices to cultivate:
- Train with deliberate blindness. Artists and critics can practice with carefully chosen image pairs (one AI, one human) under low-light, low-res, or time constraints. This hones sensitivity to cue conflict without overt bias.
- Develop a “red-flag taxonomy” of micro-anomalies. Curate a shared lexicon of common AI artifacts (micro-repetition, edge bleeding, unnatural anatomy, inconsistent lighting, fractal noise) and update it continuously.
- Integrate process into exhibition. Museums, galleries, and competitions should embed process displays—sketches, timelapses, material studies—into the viewing experience. These act not as proof, but as interpretive anchors.
- Promote transparency and disclosure norms. Encourage artists to voluntarily report tool usage (e.g., “digital reference used, but painted entirely by hand”) so provenance becomes part of the viewer’s perceptual frame. Transparency fosters trust.
- Educate jurors and viewers in perceptual theory. Bring vision science, cue coherence, and cue conflict theory into juror training. Equip adjudicators to recognize disfluency rather than rely on intuitive “looks weird” reactions.
- Institutionalize redundancy and peer checks. No single juror or detector should make the decision. Use double-blind review, cross-checks across multiple modalities, and appeal mechanisms.
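The “red-flag taxonomy” principle can be made concrete as a small, continuously updatable structure. The categories and entries below simply echo the examples named in the list; a shared community lexicon would grow well beyond them:

```python
from collections import defaultdict

# A minimal, growable red-flag taxonomy: category -> set of known artifacts.
RED_FLAGS = defaultdict(set)

def register(category, artifact):
    """Record an observed artifact under a category so the lexicon can
    be updated continuously as new generator quirks are documented."""
    RED_FLAGS[category].add(artifact)

# Seed entries drawn from the examples above (illustrative, not exhaustive).
for cat, art in [
    ("texture", "micro-repetition"),
    ("edges", "edge bleeding"),
    ("anatomy", "unnatural anatomy"),
    ("lighting", "inconsistent cast shadows"),
    ("noise", "fractal noise"),
]:
    register(cat, art)
```

Keeping the structure trivially simple is deliberate: the value lies in the shared, versioned vocabulary, not in any clever data model.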
In this way, AI art becomes a laboratory for exploring how seeing, knowing, and trusting interact—in which the viewer, far from being fooled, becomes more alert and perceptually refined.
VII. Author, Machine, or Hybrid? Rethinking Authorship as Perceptual Anchor
We might reconsider the notion that authorship is merely an external label. Instead, as mentioned above, authorship can be considered as a perceptual cue—a signal that binds intention to image. When that cue is undermined, the perceptual contract dissolves.
In this light, the old binary (human vs machine) is obsolete. We should conceive of hybrid authorship where AI is a collaborator, a brush, a texture tool—not a usurper. In such contexts, the value lies not in excluding machine agency, but in anchoring human intention through cue coherence and transparency.
AI art will not extinguish authenticity—what it threatens is the complacency of perceptual trust. The real test of our era is whether we can see beyond the illusion and re-anchor art in perceptual literacy.
VIII. Conclusion: The Future of Seeing
The ARC Salon controversy—The Witchling—is not an isolated event. It is a harbinger of deeper perceptual upheaval. As AI visuals proliferate, our old heuristics fail. The brush is increasingly computational, and the eye must become correspondingly more critical.
But that is also an opportunity. This is not the end of genuine creation—it is the beginning of a new perceptual contract, one where authenticity no longer hides in the absence of machinery, but in the harmonies of intention, coherence, and disclosure.
In this new terrain, the viewer must become a detective, the artist a meta-communicator, and institutions the guardians of perceptual rigor. The more literate our vision, the more richly we will navigate this hybrid future.
