* Support video tiny VAEs
* lighttaew scaling fix
* Also support video taes in previews
Only the first frame for now, as live preview playback is currently only available through VHS custom nodes.
* Support Wan 2.1 lightVAE
* Relocate elif block and set Wan VAE dim directly without using pruning rate for lightvae
* mm: default to 0 for NUM_STREAMS
Don't count the compute stream as an offload stream. This makes async
offload accounting easier.
* mm: remove 128MB minimum
This is from a previous offloading system requirement. Remove it to
make behaviour of the loader and partial unloader consistent.
* mp: order the module list by offload expense
Calculate an approximate temporary VRAM cost of offloading a weight
and use that as the primary ordering of the module load list. In the
simple case this is just the module's weight size, but with LoRAs, a
weight with a LoRA consumes considerably more VRAM to do the LoRA
application on-the-fly.
This will slightly prioritize LoRA weights, but is really there for
proper VRAM offload accounting.
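Roughly, the ordering could look like the sketch below; the helper names (`weight_nbytes`, `offload_cost`, `patches`) and the LoRA cost estimate are illustrative assumptions, not the actual ComfyUI code:

```python
# Illustrative only: order modules by an approximate offload cost.
# `modules` is a list of (name, torch.nn.Module) pairs and `patches` a
# dict of LoRA patches keyed by weight name; both names are assumptions.

def weight_nbytes(module):
    return sum(p.numel() * p.element_size() for p in module.parameters())

def offload_cost(name, module, patches):
    cost = weight_nbytes(module)
    if name in patches:
        # Rough stand-in for the extra temporaries needed to apply a LoRA
        # on-the-fly while offloading the patched weight.
        cost += weight_nbytes(module)
    return cost

def order_by_offload_expense(modules, patches):
    # Most expensive to offload first, so every later load needs no more
    # offload headroom than what has already been reserved.
    return sorted(modules, key=lambda m: offload_cost(m[0], m[1], patches), reverse=True)
```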
* mp: Account for the VRAM cost of weight offloading
When checking the VRAM headroom, assume that the weight needs to be
offloaded, and only load if there is space for both the load and the
offload cost multiplied by the number of streams.
As the weights are ordered from largest to smallest by offload cost
this is guaranteed to fit in VRAM (tm), as all weights that follow
will be smaller.
Make the partial unload aware of this system as well by saving the
budget for offload VRAM to the model state and accounting accordingly.
It's possible that partial unload increases the size of the largest
offloaded weights, and thus needs to unload a little more than
asked to accommodate the bigger temp buffers.
Honor the existing code's floor of 128MB on model weight loading by
having the patcher honor this separately, without regard to offloading.
Otherwise, when MM specifies its 128MB minimum, MP would see the biggest
weights, budget that 128MB entirely to offload buffers, and load nothing,
which isn't the intent of these minimums. The same clamp applies in
the case of a partial offload of the currently loading model.
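A minimal sketch of the headroom check and the 128MB clamp described above, with hypothetical names and precomputed per-weight sizes (not the real loader):

```python
MIN_WEIGHT_LOAD = 128 * 1024 * 1024  # the existing 128MB floor on weight loading

def plan_loads(ordered_weights, free_vram, num_streams):
    # ordered_weights: list of (name, size_bytes, offload_cost_bytes),
    # already sorted with the largest offload cost first.
    loaded, loaded_bytes = [], 0
    for name, size, offload_cost in ordered_weights:
        # Reserve headroom to offload this weight on every offload stream;
        # weights that follow have smaller costs, so the reservation only shrinks.
        offload_budget = offload_cost * num_streams
        fits = loaded_bytes + size + offload_budget <= free_vram
        # The 128MB minimum is applied without regard to the offload budget,
        # so the budget is never spent entirely on offload buffers.
        if fits or loaded_bytes + size <= MIN_WEIGHT_LOAD:
            loaded.append(name)
            loaded_bytes += size
    return loaded
```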
* init
* update
* Update model.py
* Update model.py
* remove print
* Fix text encoding
* Prevent empty negative prompt
Really doesn't work otherwise
* fp16 works
* I2V
* Update model_base.py
* Update nodes_hunyuan.py
* Better latent rgb factors
* Use the correct sigclip output...
* Support HunyuanVideo1.5 SR model
* whitespaces...
* Proper latent channel count
* SR model fixes
This also still needs timestep scheduling based on the noise scale; it can already be used with two samplers as well.
* vae_refiner: roll the convolution through temporal
Work in progress.
Roll the convolution through time using 2-latent-frame chunks and a
FIFO queue for the convolution seams.
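The idea, sketched on a plain causal 3D convolution (the chunk size, replicate padding, and names are assumptions, not the refiner's actual code): keep the last kT-1 frames of each chunk in a FIFO so the next chunk's convolution sees the same neighbourhood a full-length convolution would.

```python
import torch
import torch.nn.functional as F

def rolling_temporal_conv(x, weight, bias, chunk=2):
    # x: (B, C, T, H, W); weight: (Cout, Cin, kT, 1, 1) with kT >= 2.
    # Matches a full-length causal conv with replicate padding, but only
    # `chunk` latent frames are convolved at a time.
    kT = weight.shape[2]
    # FIFO seam: the last kT-1 frames seen so far, primed with the first
    # frame so the start behaves like replicate causal padding.
    fifo = x[:, :, :1].repeat(1, 1, kT - 1, 1, 1)
    outs = []
    for t in range(0, x.shape[2], chunk):
        piece = torch.cat([fifo, x[:, :, t:t + chunk]], dim=2)
        outs.append(F.conv3d(piece, weight, bias))
        fifo = piece[:, :, -(kT - 1):]  # carry the seam into the next chunk
    return torch.cat(outs, dim=2)

# e.g. rolling_temporal_conv(torch.randn(1, 4, 9, 8, 8),
#                            torch.randn(8, 4, 3, 1, 1), torch.zeros(8))
```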
* Support HunyuanVideo15 latent resampler
* fix
* Some cleanup
Co-Authored-By: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
* Proper hyvid15 I2V channels
Co-Authored-By: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
* Fix TokenRefiner for fp16
Otherwise x.sum has infs. Just in case, only casting when the input is fp16; I don't know if this is strictly necessary.
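For illustration, the kind of guard this describes on a masked pooling reduction (the function name and exact reduction are assumptions, not the actual TokenRefiner code):

```python
import torch

def masked_mean(x, mask):
    # x: (B, T, C) hidden states, mask: (B, T) float attention mask.
    # fp16 sums over long sequences can overflow to inf, so do the
    # reduction in fp32 when the input is fp16 and cast back afterwards.
    if x.dtype == torch.float16:
        pooled = (x.float() * mask.unsqueeze(-1)).sum(dim=1)
        return (pooled / mask.sum(dim=1, keepdim=True)).to(x.dtype)
    return (x * mask.unsqueeze(-1)).sum(dim=1) / mask.sum(dim=1, keepdim=True)
```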
* Bugfix for the HunyuanVideo15 SR model
* vae_refiner: roll the convolution through temporal II
Roll the convolution through time using 2-latent-frame chunks and a
FIFO queue for the convolution seams.
Added support for encoder, lowered to 1 latent frame to save more
VRAM, made work for Hunyuan Image 3.0 (as code shared).
Fixed names, cleaned up code.
* Allow any number of input frames in VAE.
* Better VAE encode mem estimation.
* Lowvram fix.
* Fix hunyuan image 2.1 refiner.
* Fix mistake.
* Name changes.
* Rename.
* Whitespace.
* Fix.
* Fix.
---------
Co-authored-by: kijai <40791699+kijai@users.noreply.github.com>
Co-authored-by: Rattus <rattus128@gmail.com>
Clean up a bunch of stacked and no-longer-needed tensors on the QWEN
VRAM peak (currently FFN).
With this I go from OOMing at B=37x1328x1328 to being able to
successfully run B=47 (RTX5090).
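The kind of cleanup this refers to, sketched on a generic gated FFN rather than the actual QWEN block: drop references to intermediates as soon as the next matmul no longer needs them, so the allocator can reuse that memory at the peak.

```python
import torch
import torch.nn.functional as F

def gated_ffn(x, w_gate, w_up, w_down):
    gate = F.silu(x @ w_gate)
    up = x @ w_up
    h = gate * up
    del gate, up      # no longer needed once fused into h
    out = h @ w_down  # the peak sits around this matmul
    del h
    return out
```

Note this only helps on the inference path; under autograd the graph may still hold references to the intermediates.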
The partial unloader path in the model re-use flow skips straight to the
actual unload without any check of the patching UUID. This means that
if you do an upscale flow with a model patch on an existing model, it
will not apply your patches.
Fix by delaying the partial_unload until after the UUID checks. This
is done by making partial_unload a mode of partial_load where extra_mem
is negative.
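A sketch of the delegation described, with hypothetical method and attribute names (not the real ModelPatcher API):

```python
class ModelPatcherSketch:
    # Hypothetical names; illustrates the shape of the fix only.
    def __init__(self):
        self.loaded_size = 0
        self.loaded_uuid = None

    def _check_patches(self, uuid):
        # Re-apply patches if the requested patch set differs from the
        # one currently baked into the loaded weights.
        if uuid != self.loaded_uuid:
            self.loaded_uuid = uuid
            # ... re-patch weights here ...

    def partial_load(self, uuid, extra_mem):
        self._check_patches(uuid)
        # Negative extra_mem means "free this much", i.e. a partial unload,
        # so unloading no longer bypasses the patch UUID check above.
        self.loaded_size = max(0, self.loaded_size + extra_mem)

    def partial_unload(self, uuid, mem_to_free):
        return self.partial_load(uuid, -mem_to_free)
```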