Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling
Enhances conventional diffusion models with a learnable, neural-network-parameterized forward process
Neural Flow Diffusion Models (NFDM) is a framework that enhances diffusion modeling by incorporating a learnable forward process. Unlike traditional diffusion models, which fix the forward process to a predefined Gaussian diffusion, NFDM allows the latent variable distributions defined by the forward process to be specified and learned. This flexibility lets the forward process adapt to the task at hand and simplify the target that the reverse process has to fit. NFDM is trained with an end-to-end, simulation-free optimization procedure that minimizes a variational upper bound on the negative log-likelihood (NLL).
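To make the setup concrete, below is a minimal PyTorch-style sketch of the simulation-free training idea. The module names (ForwardMap for the learnable forward map F_phi, VelocityNet for the reverse velocity) and the interpolant-style parameterization are illustrative assumptions, not the paper's exact architecture, and the loss shown is only the deterministic, Flow-Matching-like part of the objective rather than the full variational bound.

```python
import torch
import torch.nn as nn

class ForwardMap(nn.Module):
    """Learnable forward process z_t = F_phi(eps, t, x) (hypothetical module)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, eps, t, x):
        # Learned correction that vanishes at t=0 and t=1, so z_0 = x exactly
        # and z_1 = eps ~ N(0, I) regardless of the network's output.
        h = self.net(torch.cat([eps, x, t], dim=-1))
        return (1 - t) * x + t * eps + t * (1 - t) * h

class VelocityNet(nn.Module):
    """Reverse-process velocity field v_theta(z, t) (hypothetical module)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, z, t):
        return self.net(torch.cat([z, t], dim=-1))

def nfdm_style_loss(F_phi, v_theta, x):
    """One simulation-free training term: regress the reverse velocity onto the
    time derivative of the learned forward map (obtained by autograd)."""
    t = torch.rand(x.shape[0], 1)
    eps = torch.randn_like(x)

    def z_of_t(t_):
        return F_phi(eps, t_, x)

    # z_t and dz_t/dt of the learned forward process at the sampled time.
    z, dz_dt = torch.autograd.functional.jvp(
        z_of_t, (t,), (torch.ones_like(t),), create_graph=True
    )
    return ((v_theta(z, t) - dz_dt) ** 2).mean()
```

Because z_t is produced by a reparameterized map of Gaussian noise, a single Monte Carlo estimate of this loss can be backpropagated through both networks, so the forward and reverse processes are trained jointly without simulating trajectories.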
A key component of NFDM is a neural-network-based parameterization of the forward process, which allows it to adjust to the reverse process during training and eases learning of the data distribution. The framework also supports training under constraints on the reverse process, so that the generative dynamics acquire specific properties. For instance, imposing a curvature penalty on the deterministic generative trajectories encourages straight-line trajectories, which yields faster sampling and improved generation quality with fewer sampling steps.
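As a rough illustration of such a constraint (not the paper's exact penalty), the sketch below reuses the hypothetical ForwardMap from the previous example and penalizes the second time derivative of the conditional trajectories, estimated with finite differences; this derivative is zero exactly when a trajectory is a straight line, and the term would be added to the training loss with some weight.

```python
def curvature_penalty(F_phi, x, delta=1e-2):
    """Finite-difference curvature penalty on conditional forward trajectories
    (illustrative; delta and the sampling of t are assumptions)."""
    t = delta + (1 - 2 * delta) * torch.rand(x.shape[0], 1)  # keep t +/- delta inside [0, 1]
    eps = torch.randn_like(x)
    z_prev = F_phi(eps, t - delta, x)
    z_mid = F_phi(eps, t, x)
    z_next = F_phi(eps, t + delta, x)
    # d^2 z / dt^2 ~ (z_prev - 2*z_mid + z_next) / delta^2; zero for straight lines.
    second_deriv = (z_prev - 2 * z_mid + z_next) / delta ** 2
    return (second_deriv ** 2).mean()
```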
NFDM offers several ways to sample from the trained reverse process, including stochastic sampling with an adjustable level of stochasticity. The parameterization of the forward process guarantees that the latent variables follow the specified distributions, which simplifies optimization. The NFDM objective is a variational upper bound on the model's negative log-likelihood and is closely connected to the Flow Matching and Score Matching objectives. Training is simulation-free, and the objective is optimized end to end with respect to the parameters of both the forward and reverse processes.
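For completeness, a minimal Euler sampler for the deterministic reverse process is sketched below, assuming the hypothetical VelocityNet from the first example; the stochastic samplers with adjustable noise levels described in the paper additionally involve the learned noise and score terms, which are omitted here.

```python
@torch.no_grad()
def sample(v_theta, dim, n_samples=16, n_steps=100):
    """Integrate the learned reverse ODE from the prior at t=1 back to t=0."""
    z = torch.randn(n_samples, dim)             # prior sample at t = 1
    ts = torch.linspace(1.0, 0.0, n_steps + 1)
    for i in range(n_steps):
        t = torch.full((n_samples, 1), ts[i].item())
        dt = ts[i + 1] - ts[i]                  # negative step: integrating backward in time
        z = z + dt * v_theta(z, t)              # Euler step along dz/dt = v_theta(z, t)
    return z
```

In this toy setup, calling sample(v_theta, dim=x.shape[-1]) after training would draw approximate data samples by following the learned velocity field from noise back to data.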