
Why AI Doesn't Know the Weight of Paint — The Disconnect Between Generative AI and Physical Matter


I came across a comment from the developer of an impasto LoRA on CivitAI:

"I've pushed the concept as far as I can when training on actual paintings… had to get creative with cropping and captioning"

They also mentioned removing images with "rectangular brushstrokes" from the training set to get more natural-looking strokes.

That comment stopped me cold.

Removing "rectangular brushstrokes" tells us something important: the AI can see the shape of a brushstroke in the training data, but it has no access to the hand that made it. No viscosity. No pressure. No drying time.

That gap is what I call the Disconnect Between Generative AI and Physical Matter (物質性の断絶).

---

Paint Is a Physical Substance

If you've ever worked with oil paint, you know how demanding it is.

Fresh from the tube, it's stiff. Add too much medium and it becomes transparent. Pile it on thick and it risks cracking. Layer fast-drying over slow-drying and it peels. The tooth of the canvas changes how the brush drags. The color shifts as it dries.

These aren't settings you can prompt. They're the accumulated physical memory of a material — the kind of knowledge a painter builds over decades of bodily engagement.

Matisse spent 54 years building that memory. Then, bedridden after surgery at 71, he picked up scissors and colored paper. The result — his découpages — is now considered some of his greatest work. Not because he "simplified." Because 54 years of material knowledge didn't disappear when the brushes did.

---

What AI Learns. What It Doesn't.

When you train an AI on an oil painting, it learns the visual pattern of raised paint: how light hits it, how shadow falls, how color shifts. All of that is readable in an image.

What is NOT in the image:

- The weight of the paint that created the impasto

- The pressure of the hand that pushed it

- The time it took to dry

- The resistance of the canvas

The image is a result. The physical process is invisible.

That's why AI-generated "impasto-style" images tend to look light — technically accurate in texture pattern, but missing the sense that matter passed through the process. It's a statistical approximation of a material trace, not the trace itself.

This is one of the core reasons AI-generated images can feel "uncanny," "somehow off," or "too clean."

---

What Can We Actually Do About It?

What I'm trying to do is pass physical materiality into the model — not by collecting "images that look like impasto" and training on those, but by designing training datasets through a fine arts lens that can actually capture what matters.

Take gold leaf as an example.

If you want to train "gold leaf," the obvious approach is: collect images of things that look gold and shiny. But the essence of gold leaf isn't color. It's:

- The layered surface structure created by washi paper, lacquer, and metal foil

- The way light bounces differently at different angles

- The random contours where leaf has fractured and lifted at the edges

Without training data designed to capture those three things specifically, the model can't learn the weight of gold leaf. It just learns "looks gold."
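To make the dataset-design idea concrete, here is a minimal sketch of material-aware caption construction. Everything in it is hypothetical: the tag names, the trigger word `shifuku_goldleaf`, and the caption schema are my illustrative assumptions, not the actual SHIFUKU dataset conventions.

```python
# Hypothetical sketch of material-aware captioning for a LoRA dataset.
# The idea: captions name physical properties explicitly, so the model
# associates the trigger word with surface structure, not just "gold color."

# Illustrative property-to-phrase mapping (assumed, not the real schema).
MATERIAL_TAGS = {
    "layered_structure": "layered washi, lacquer and foil surface",
    "directional_light": "anisotropic specular highlights",
    "fracture_edges": "irregular cracked and lifted leaf edges",
}

def build_caption(trigger: str, properties: list[str]) -> str:
    """Compose one caption line from a trigger word plus the physical
    properties actually visible in that training image."""
    unknown = [p for p in properties if p not in MATERIAL_TAGS]
    if unknown:
        # Fail loudly rather than silently training on untagged properties.
        raise ValueError(f"untagged properties: {unknown}")
    parts = [trigger] + [MATERIAL_TAGS[p] for p in properties]
    return ", ".join(parts)

# Example: an image showing layering and fractured edges, but shot
# head-on, so the directional-light tag is deliberately omitted.
caption = build_caption("shifuku_goldleaf",
                        ["layered_structure", "fracture_edges"])
print(caption)
```

The design point is per-image honesty: each caption lists only the properties that image actually demonstrates, which is what lets the model separate "layered surface" from "angle-dependent light" instead of averaging them into a generic gold sheen.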

That's the design principle behind my SHIFUKU Gold Leaf LoRA — and all the texture LoRAs I've been building.

---

You Can't Subtract What You Never Had

There's a principle I keep coming back to:

"You can't subtract what you've never held."

Matisse could strip his work down to pure color and form because he'd spent decades accumulating the material knowledge he was stripping away. The simplicity of his late work presupposes everything that came before it.

When people use a "simple prompt" with AI and get a flat, information-poor image, that's not subtraction. That's absence. There was nothing there to begin with.

Training LoRAs with material-aware datasets is how I try to give the model something to actually subtract from — to give it a foundation that makes genuine restraint possible.

---

What This Looks Like in Practice

My current LoRA library (SHIFUKU series):

- SHIFUKU Gold Leaf v1 — layered gold surface, directional light behavior, fracture edges

- SHIFUKU Kintsugi — gold repair seam, ceramic fracture, wabi-sabi imperfection

- SHIFUKU Gold Leaf v2 — Kiribaku / Sunago / Noge — 3 application techniques, each with distinct light behavior

- SHIFUKU Hamon Steel (Beta) — differential hardening visible in crystal structure, 4 distinct surface materials

These aren't "gold-colored style" LoRAs. They're attempts to encode the physical logic of specific materials.

This principle is now being tested in practice. Gold Leaf v2 trained on three distinct application techniques — and the model learned to generate different light behavior for each. Hamon Steel trained on the visible consequences of differential hardening — and the blade surface carries structural information, not just reflective gloss.

---

For the Community

If you're working on painterly or impasto LoRAs and running into that wall — the one where your trained model looks right at distance but feels weightless up close — that's the disconnect I'm describing.

I'm curious what strategies others have tried. The cropping + captioning approach mentioned in the CivitAI comments is one angle. Material-aware dataset curation is mine.

What's your read on where the gap actually lives?

---

TextureLoRALab | Fine Arts (Japanese painting, Art History) x Museum Studies MA (Leicester, Merit) x AI Engineering
