Neta Lumina

Updated: Jul 10, 2025
Published: Jun 24, 2025
Type: Checkpoint Trained
Base Model: Lumina
Format: SafeTensor
Hash (AutoV2): 2807541B56
Creator: neta_art

License: Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/)

- Follow for more updates at http://discord.com/invite/TTTGccjbEa

- Try Model: Huggingface Playground

- Access to more ongoing training versions

- Chinese model documentation (中文模型说明)


Introduction

Neta Lumina is a high‑quality anime‑style image‑generation model developed by Neta.art Lab.

Building on the open‑source Lumina‑Image‑2.0 released by the Alpha‑VLLM team at Shanghai AI Laboratory, we fine‑tuned the model with a vast corpus of high‑quality anime images and multilingual tag data. The preliminary result is a compelling model with powerful comprehension and interpretation abilities (thanks to the Gemma text encoder), ideal for illustration, posters, storyboards, character design, and more.

Key Features

  • Optimized for diverse creative scenarios such as Furry, Guofeng (traditional‑Chinese aesthetics), pets, etc.

  • Wide coverage of characters and styles, from popular to niche concepts. (Danbooru tags are still supported!)

  • Accurate natural‑language understanding with excellent adherence to complex prompts.

  • Native multilingual support, with Chinese, English, and Japanese recommended first.

Model Versions

Base Model

Request access at https://huggingface.co/neta-art/NetaLumina_Alpha if you are interested.

  • Primary Goal: General knowledge and anime‑style optimization

  • Data Set: >13 million anime‑style images

  • Compute: >46,000 A100 hours

Neta-lumina-beta-0624

  • First beta release candidate

  • Primary Goal: Enhanced aesthetics, pose accuracy, and scene detail

  • Data Set: Hundreds of thousands of handpicked high‑quality anime images (fine‑tuned on the Base)

How to Use

Neta Lumina is built on the Lumina‑Image‑2.0 Diffusion Transformer (DiT) framework; please follow these steps precisely.

ComfyUI

Environment Requirements

Currently Neta Lumina runs only on ComfyUI:

  • Latest ComfyUI installation

  • ≥ 8 GB VRAM

Downloads & Installation

The model provided on Civitai is an all-in-one package (text encoder, DiT, and VAE), so it can be run with ComfyUI's basic checkpoint workflow without downloading the Text Encoder and VAE separately.

Original (component) release

  1. Neta Lumina-Beta

    1. Hugging Face: https://huggingface.co/neta-art/Neta-Lumina/blob/main/neta-lumina-beta-0624.pth

    2. Save path: ComfyUI/models/unet/

  2. Text Encoder (Gemma-2B)

    1. Download link: https://huggingface.co/neta-art/Neta-Lumina/resolve/main/gemma_2_2b_fp16.safetensors

    2. Save path: ComfyUI/models/text_encoders/

  3. VAE Model (16-Channel FLUX VAE)

    1. Download link: https://huggingface.co/neta-art/Neta-Lumina/resolve/main/ae.safetensors

    2. Save path: ComfyUI/models/vae/
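If you prefer to script these downloads, here is a minimal sketch using the huggingface_hub Python client (install with `pip install huggingface_hub`). The repository id and filenames are taken from the links above; the local ComfyUI directory path is an assumption you may need to adjust for your installation.

```python
# Minimal download sketch for the component release (repo id and filenames from
# the links above). COMFYUI_DIR is an assumption; point it at your ComfyUI folder.
from huggingface_hub import hf_hub_download

COMFYUI_DIR = "ComfyUI"  # assumed local ComfyUI installation directory

# (filename in the neta-art/Neta-Lumina repo, target ComfyUI model subfolder)
files = [
    ("neta-lumina-beta-0624.pth", "models/unet"),
    ("gemma_2_2b_fp16.safetensors", "models/text_encoders"),
    ("ae.safetensors", "models/vae"),
]

for filename, subdir in files:
    path = hf_hub_download(
        repo_id="neta-art/Neta-Lumina",
        filename=filename,
        local_dir=f"{COMFYUI_DIR}/{subdir}",
    )
    print("saved:", path)
```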

Workflow: load lumina_workflow.json in ComfyUI.

  • UNETLoader – loads the .pth

  • VAELoader – loads ae.safetensors

  • CLIPLoader – loads gemma_2_2b_fp16.safetensors

  • Text Encoder – connects positive/negative prompts to the sampler

Simple merged release

Download neta-lumina-beta-0624.safetensors (md5sum: dca54fef3c64e942c1a62a741c4f9d8a) and use ComfyUI's simple checkpoint-loader workflow with the recommended settings below.
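Before loading the checkpoint, you can confirm the download matches the published md5. A minimal verification sketch in Python (the save path under ComfyUI/models/checkpoints/ is an assumption; adjust it to wherever you stored the file):

```python
# Verify the merged checkpoint against the published md5 before using it.
import hashlib

EXPECTED_MD5 = "dca54fef3c64e942c1a62a741c4f9d8a"
path = "ComfyUI/models/checkpoints/neta-lumina-beta-0624.safetensors"  # assumed path

md5 = hashlib.md5()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)

print("OK" if md5.hexdigest() == EXPECTED_MD5 else "MISMATCH", md5.hexdigest())
```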

  • Sampler: res_multistep

  • Scheduler: linear_quadratic

  • Steps: 30

  • CFG (guidance): 4 – 5.5

  • EmptySD3LatentImage resolution: 1024 × 1024, 768 × 1532, or 968 × 1322
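For reference, these settings correspond to a simple checkpoint-loader graph roughly like the sketch below, submitted through ComfyUI's HTTP API. This is a hedged, minimal example: it assumes a local ComfyUI server at 127.0.0.1:8188, the merged checkpoint placed in ComfyUI/models/checkpoints/, and illustrative prompt text; the node names are standard ComfyUI nodes, but verify the wiring against the provided workflow file.

```python
# Sketch of a ComfyUI API-format prompt using the recommended settings above.
# Assumes ComfyUI is running locally on port 8188; prompt text is illustrative only.
import json
import urllib.request

graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "neta-lumina-beta-0624.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",   # positive prompt (example text)
          "inputs": {"clip": ["1", 1], "text": "1girl, masterpiece, best quality"}},
    "3": {"class_type": "CLIPTextEncode",   # negative prompt (example text)
          "inputs": {"clip": ["1", 1], "text": "worst quality, lowres"}},
    "4": {"class_type": "EmptySD3LatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 0, "steps": 30, "cfg": 4.5,
                     "sampler_name": "res_multistep",
                     "scheduler": "linear_quadratic", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "neta_lumina"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```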

Prompt Book

Detailed prompt guidelines: https://civitai.com/articles/16274/neta-lumina-drawing-model-prompt-guide

Community

Discord: https://discord.com/invite/TTTGccjbEa

QQ group: 785779037

Roadmap

Model

  • Continuous base‑model training to raise reasoning capability.

  • Aesthetic‑dataset iteration to improve anatomy, background richness, and overall appeal.

  • Smarter, more versatile tagging tools to lower the creative barrier.

Ecosystem

  • LoRA training tutorials and components

    • Experienced users can already fine‑tune using Lumina‑Image‑2.0's open-source training code.

  • Development of advanced control / style‑consistency features (e.g., Omini Control). Call for Collaboration!

License & Disclaimer

Participants & Contributors

Community Contributors

Evaluators & developers: 二小姐, spawner, Rnglg2

Other contributors: 沉迷摸鱼, poi氵, ashan, 十分无奈, GHOSTLXH, wenaka, iiiiii, 年糕特工队, 恩匹希, 奶冻, mumu, yizyin, smile

Appendix & Resources


license: other

license_name: fair-ai-public-license-1.0-sd

license_link: https://freedevproject.org/faipl-1.0-sd/