By TextureLoRALab (Shitsukan)
The irony is not lost on me. An entire category of tools has been born just to fix the slippery look that AI generates. UnPlastic. Vellum AI. Freepik's AI Skin Enhancer. These exist because the problem is real, widespread, and commercially significant. People are paying money to fix something diffusion models consistently get wrong.
When a symptom becomes a product category, it is no longer a niche complaint. It is the market confirming that something is fundamentally broken. The "plastic look" of AI-generated images is not subjective nitpicking — it is a systemic defect that requires post-production time and money to correct.
But here is the hard truth: these tools are treating the symptom, not the cause. And at a certain point, they hit a wall.
The Problem Is Real
To be clear, I am not here to trash these tools. UnPlastic's own description nails the core issue: "Generative AI is great at composition, but often struggles with fine details. Skin looks plastic, fabrics lack weave, and shadows become flat gradients." That is accurate. Hacker News threads overflow with "why do AI images look so shiny?" — confirming this is a widely recognized problem.
These tools do exactly what they advertise. They add texture back to already-generated images. Generate smooth → add grain and pores → hope it looks natural. Fundamentally, it is patching. A band-aid approach. For people who do not have the time or skill to train models, it genuinely helps.
I am not arguing these tools are useless. I am explaining why they can never be more than a band-aid.
Why Post-Processing Has a Ceiling
What traditional texture-restoration filters actually do is mechanical. Unsharp mask sharpens edges by boosting local contrast. Clarity filters add micro-contrast. Grain filters overlay noise. Modern AI tools like UnPlastic lay learned texture patterns on top of smooth surfaces.
At the end of the day, they are all doing the same thing: surface decoration.
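To make "surface decoration" concrete, here is a minimal sketch of what these filters reduce to, written with NumPy/SciPy. The function names and parameters are illustrative, not any tool's actual implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen by adding back the difference from a Gaussian blur."""
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

def add_grain(img, strength=0.05, seed=0):
    """Overlay Gaussian noise -- 'grain' with no material logic behind it."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, img.shape)
    return np.clip(img + noise, 0.0, 1.0)

# A flat gray "plastic" patch: both filters only decorate its surface.
patch = np.full((64, 64), 0.5)
decorated = add_grain(unsharp_mask(patch), strength=0.05)
```

Neither function knows what the patch depicts: the same grain lands on skin, gold, or linen alike.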
But surface decoration is not the same as material logic.
In graduate school studying museum conservation, I spent weeks on a single concept: why materials look the way they do. Gold leaf does not just "have texture" — it fractures along specific fault lines determined by how the metal's crystal structure behaves under compression. Lacquer does not just "have depth" — it produces optical depth through refractive index and light scattering across multiple layers. Folded-forged steel does not just "have grain" — the forging process gives the grain directionality.
Texture restoration tools know none of this. They see a too-smooth image and say "add grain." But they have no way of knowing which kind of grain, why that grain should appear only in certain places, or how the material actually behaves under different lighting conditions.
You can add convincing pore texture to skin. But UnPlastic cannot learn on its own that pores should shadow in specific ways depending on light angle, or that pore density varies across facial regions following biological patterns. It can add micro-texture. It cannot add material understanding.
That is the ceiling.
The Real Root Cause: Models Do Not Learn Physics
Diffusion models train on statistical relationships. They learn that oil painting correlates with warm palettes. They learn that gold correlates with specular reflection and warm tones. They learn billions of pixel-level patterns from training data.
What they do not learn is physics. They do not learn that paint has viscosity, weight, and flow. They do not learn that lacquer has specific surface tension. They do not learn that linen weave follows geometric rules that determine how light bounces off it.
What they learn instead is "these pixels usually appear together." Powerful for composition. Fatal for material authenticity.
Post-processing tools operate at the same level of understanding: pixel-oriented, not physics-oriented. They can move colors around and add noise patterns, but they are working within the same statistical framework that produced the plastic look in the first place.
Fix It at Generation, Not After
This is where texture LoRA enters. Instead of adding texture after generation, you teach the model what physical surfaces actually look like during generation.
The principle is simple. A texture LoRA does not learn decoration (adding something). It does not tell the model "put texture somewhere." It tells the model: "When you see gold leaf, here is what physically accurate gold leaf correlates with — micro-fractures, light scattering, oxidation patterns, characteristic refraction."
The model acquires this during training. At generation time, the texture emerges structurally with just a trigger word — because the model understands how that material should look.
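For concreteness, this is roughly what generation-time use looks like with the Hugging Face `diffusers` library. The base model, the LoRA file path, and the `shifuku_goldleaf` trigger word below are assumptions for illustration, not the actual SHIFUKU setup:

```python
TRIGGER = "shifuku_goldleaf"  # hypothetical trigger word

def build_prompt(subject: str) -> str:
    """Prepend the trigger so the LoRA's material weights activate."""
    return f"{TRIGGER}, {subject}, gold leaf surface"

def generate(subject: str):
    """Attach the texture LoRA to a base pipeline and generate.
    (Requires a GPU and downloaded model weights to actually run.)"""
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")
    # Hypothetical path; any trained texture LoRA loads the same way.
    pipe.load_lora_weights("path/to/gold_leaf_lora.safetensors")
    return pipe(prompt=build_prompt(subject)).images[0]
```

The key point is where the texture comes from: nothing in this pipeline edits pixels after the fact. The trigger word simply activates weights learned during training.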
Is it perfect? No. The model is still a statistical machine. But the difference between "adding texture to a smooth surface" and "generating with material understanding built in" produces a meaningful gap.
I have seen this with the SHIFUKU Gold Leaf model. Compare gold generated from a base model with UnPlastic post-processing versus SHIFUKU generation without post-processing. The non-LoRA version looks like texture laid on top of gold. The SHIFUKU version is structured as gold from the pixel level.
Whether that difference matters to you is easiest to judge up close. Zoom in and look.
An Honest Assessment
I am not saying ignore UnPlastic or Vellum AI. If speed matters and you do not have time to train or fine-tune models, these tools solve a real problem. They are clearly better than trying to fix plastic skin with an unsharp mask (which just produces ugly white halos).
But if material authenticity matters to you — if you have studied conservation or art history, or if you have spent real time observing how things are actually made — you need to accept that post-processing is a temporary solution to a permanent problem.
The plastic look will not go away on its own. No matter how convincing the patching becomes, something will always feel off. Post-processing tools are fundamentally reactive, not preventive. They respond to a generation-stage failure by trying to hide it. But you cannot hide what is baked into the model. AI can only draw on what it has learned, and when it blends too many unrelated learned patterns, the uncanniness leaks through anyway.
What AI needs for commercial-grade material is not better texture restoration software. It is training models that know what materials actually are.
That is harder and takes longer. It requires solid methodology. But to stop treating symptoms and actually cure the disease, it may be the only path.
UnPlastic cannot save you. But understanding why you need saving — that is where the real work begins.
Related:
Why AI Doesn't Know the Weight of Paint — The root cause
Style LoRA vs Texture LoRA — They Solve Different Problems — Why style alone hits a ceiling
Subtractive AI: LoRA Training Techniques for Preserving Texture — Training methodology
Models Referenced:
SHIFUKU Gold Leaf v2 — Kiribaku, sunago, noge
SHIFUKU Hamon Steel (Beta) — Japanese sword material surfaces
→ All models on CivitAI

