NitroFusion: High-Fidelity Single-Step Diffusion
through Dynamic Adversarial Training


Dar-Yen Chen Hmrishav Bandyopadhyay Kai Zou Yi-Zhe Song

SketchX, CVSSP, University of Surrey, United Kingdom

Abstract


We introduce NitroFusion, a fundamentally different approach to single-step diffusion that achieves high-quality generation through a dynamic adversarial framework. While one-step methods offer dramatic speed advantages, they typically suffer from quality degradation compared to their multi-step counterparts. Just as a panel of art critics provides comprehensive feedback by specializing in different aspects like composition, color, and technique, our approach maintains a large pool of specialized discriminator heads that collectively guide the generation process. Each discriminator group develops expertise in specific quality aspects at different noise levels, providing diverse feedback that enables high-fidelity one-step generation. Our framework combines: (i) a dynamic discriminator pool with specialized discriminator groups to improve generation quality, (ii) strategic refresh mechanisms to prevent discriminator overfitting, and (iii) global-local discriminator heads for multi-scale quality assessment, together with unconditional/conditional training for balanced generation. Additionally, our framework uniquely supports flexible deployment through bottom-up refinement, allowing users to dynamically choose between 1 and 4 denoising steps with the same model for direct quality-speed trade-offs. Through comprehensive experiments, we demonstrate that NitroFusion significantly outperforms existing single-step methods across multiple evaluation metrics, particularly excelling in preserving fine details and global consistency.

Approach

Our method distils a multi-step teacher model into an efficient one-step student generator. The Dynamic Adversarial Framework supplies stable, unbiased feedback through a large Discriminator Head Pool: at each training iteration, a random subset of heads is sampled to judge images as real or fake, effectively balancing one-step efficiency with high-quality generation.
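
Below is a minimal PyTorch sketch of one such training iteration under stated assumptions: tiny linear layers stand in for the real one-step generator, the frozen UNet backbone, and the discriminator heads, and the pool size, subset size, and hinge losses are illustrative choices rather than the released implementation.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, POOL_SIZE, HEADS_PER_ITER = 64, 128, 8

# Toy stand-ins: the real framework uses a one-step UNet generator and a
# frozen UNet encoder as the discriminator backbone.
generator = nn.Linear(FEAT_DIM, FEAT_DIM)                       # hypothetical
backbone = nn.Linear(FEAT_DIM, FEAT_DIM).requires_grad_(False)  # frozen features
head_pool = nn.ModuleList(nn.Linear(FEAT_DIM, 1) for _ in range(POOL_SIZE))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(head_pool.parameters(), lr=1e-4)

real = torch.randn(16, FEAT_DIM)    # stands in for real / teacher images
noise = torch.randn(16, FEAT_DIM)

# 1) Sample a random subset of discriminator heads for this iteration.
heads = random.sample(list(head_pool), HEADS_PER_ITER)

# 2) Discriminator step: hinge loss on frozen-backbone features.
fake = generator(noise).detach()
feat_real, feat_fake = backbone(real), backbone(fake)
d_loss = sum(F.relu(1 - h(feat_real)).mean() + F.relu(1 + h(feat_fake)).mean()
             for h in heads) / len(heads)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# 3) Generator step: fool the same sampled heads.
feat_fake = backbone(generator(noise))
g_loss = -sum(h(feat_fake).mean() for h in heads) / len(heads)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()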

The discriminator employs a frozen UNet backbone with a dynamic pool of discriminator heads. At each iteration, a subset of heads is sampled and trained, with 1% of all heads randomly reinitialized to maintain diverse signals and prevent overfitting.

Architecture
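
A minimal sketch of the head-pool refresh described above, assuming a hypothetical DiscriminatorHeadPool class with linear heads; the pool size, feature dimension, and 1% refresh ratio are illustrative defaults, not the released code.

import random
import torch.nn as nn

class DiscriminatorHeadPool(nn.Module):
    """Hypothetical pool of lightweight heads on top of frozen backbone features."""
    def __init__(self, num_heads=128, feat_dim=64, refresh_ratio=0.01):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, 1) for _ in range(num_heads))
        self.refresh_ratio = refresh_ratio

    def sample_heads(self, k):
        # Randomly pick k heads to train and apply in the current iteration.
        return random.sample(list(self.heads), k)

    def refresh(self):
        # Re-initialize a small random fraction of heads (1% by default)
        # so fresh, unbiased critics keep entering the pool.
        n = max(1, int(len(self.heads) * self.refresh_ratio))
        for idx in random.sample(range(len(self.heads)), n):
            self.heads[idx].reset_parameters()

pool = DiscriminatorHeadPool()
active_heads = pool.sample_heads(k=8)  # used for this iteration's adversarial loss
pool.refresh()                         # called once per training iteration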

Results

Visual comparison of our models (NitroSD-Realism and NitroSD-Vibrant) against multi-step SDXL, our teacher models (4-step DMD2 and 8-step Hyper-SDXL), and selected 1-step state-of-the-art baselines, SDXL-Turbo and SDXL-Lightning.

Comparison

1- to 4-step refinement process of our NitroSD-Realism and -Vibrant, illustrating the progressive enhancement of image quality and detail across steps.

Refinement
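
A hedged sketch of how such bottom-up refinement could be driven with a single one-step model: the current estimate is re-noised to a progressively lower noise level and denoised again. The noise schedule, re-noising rule, and function names are assumptions for illustration, not the released inference code.

import torch

def one_step_generate(model, x_t, sigma, prompt_emb):
    # Placeholder for a single forward pass of the distilled one-step generator.
    return model(x_t, sigma, prompt_emb)

@torch.no_grad()
def bottom_up_refine(model, prompt_emb, steps=4, shape=(1, 4, 128, 128),
                     sigmas=(1.0, 0.75, 0.5, 0.25)):
    x = torch.randn(shape)                           # start from pure noise
    sample = one_step_generate(model, x, sigmas[0], prompt_emb)
    for sigma in sigmas[1:steps]:
        # Re-noise the current estimate to a lower noise level, then denoise again.
        noisy = sample + sigma * torch.randn_like(sample)
        sample = one_step_generate(model, noisy, sigma, prompt_emb)
    return sample

# Usage with a dummy stand-in model (not a real UNet):
dummy_model = lambda x, sigma, cond: 0.5 * x
image = bottom_up_refine(dummy_model, prompt_emb=None, steps=2)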

Single-step samples from NitroSD-Realism.

Showcase of NitroSD-Realism

Single-step samples from NitroSD-Vibrant.

Showcase of NitroSD-Vibrant

BibTeX

@article{chen2024nitrofusionhighfidelitysinglestepdiffusion,
  title={NitroFusion: High-Fidelity Single-Step Diffusion through Dynamic Adversarial Training},
  author={Dar-Yen Chen and Hmrishav Bandyopadhyay and Kai Zou and Yi-Zhe Song},
  journal={arXiv preprint arXiv:2412.02030},
  year={2024}
}