OneTrainer
Fine-tune different text-to-image models with an easy-to-use and customizable user interface
https://github.com/Nerogar/OneTrainer

OneTrainer can be used to fine-tune different text-to-image models. Its main goal is to provide an easy-to-use user interface while remaining highly customizable.
Currently supported models are:
- Stable Diffusion (v1.5, v2.0, v2.1)
- SDXL
- PixArt Alpha
- Würstchen v2
- Stable Cascade
- Stable Diffusion and SDXL inpainting models
Features:
- Model formats: diffusers and ckpt models
- Training methods: Full fine-tuning, LoRA, embeddings
- Masked Training: Let the training focus on just certain parts of the samples.
- Automatic backups: Regularly back up your full training progress during training, including all information needed to seamlessly resume.
- Image augmentation: Apply random transforms such as rotation, brightness, contrast or saturation to each image sample to quickly create a more diverse dataset.
- Tensorboard: A simple tensorboard integration to track the training progress.
- Multiple prompts per image: Train the model on multiple different prompts per image sample.
- Noise Scheduler Rescaling: Rescale the noise schedule to enforce zero terminal SNR, from the paper Common Diffusion Noise Schedules and Sample Steps are Flawed
- EMA: Train your own EMA model. Optionally keep EMA weights in CPU memory to reduce VRAM usage.
- Aspect Ratio Bucketing: Automatically train on multiple aspect ratios at a time. Just select the target resolutions; buckets are created automatically.
- Multi Resolution Training: Train multiple resolutions at the same time.
- Dataset Tooling: Automatically caption your dataset using BLIP, BLIP2 and WD-1.4, or create masks for masked training using ClipSeg or Rembg.
- Model Tooling: Convert between different model formats from a simple UI.
- Sampling UI: Sample the model during training without switching to a different application.
- AlignProp: A Reinforcement Learning method for diffusion networks from the paper Aligning Text-to-Image Diffusion Models With Reward Backpropagation
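The masked-training feature above amounts to weighting the per-pixel loss by a mask so the optimizer focuses on the masked region. A minimal sketch of that idea, using plain Python lists in place of tensors (this is an illustration, not OneTrainer's actual code):

```python
def masked_mse(pred, target, mask):
    """Mean squared error computed only over pixels where mask > 0."""
    num = sum(m * (p - t) ** 2 for p, t, m in zip(pred, target, mask))
    den = sum(mask)
    return num / den if den else 0.0

# Only the first pixel is masked in, so only its error contributes.
loss = masked_mse([1.0, 5.0], [0.0, 0.0], [1.0, 0.0])  # -> 1.0
```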