* feat(security): add System User protection with `__` prefix
Add a protected namespace where custom nodes can store sensitive data
(API keys, licenses) that cannot be accessed via HTTP endpoints.
Key changes:
- New API: get_system_user_directory() for internal access
- New API: get_public_user_directory() with structural blocking
- 3-layer defense: header validation, path blocking, creation prevention
- 54 tests covering security, edge cases, and backward compatibility
System Users use `__` prefix (e.g., __system, __cache) following
Python's private member convention. They exist in user_directory/
but are completely blocked from /userdata HTTP endpoints.
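A minimal sketch of how the structural blocking could look; the function name follows the commit, but the body (including the `PermissionError` choice) is an assumption, not the actual implementation:

```python
import os

SYSTEM_PREFIX = "__"  # System Users: __system, __cache, ...

def is_system_user(user_id: str) -> bool:
    # Any user id with the "__" prefix belongs to the protected namespace.
    return user_id.startswith(SYSTEM_PREFIX)

def get_public_user_directory(base_dir: str, user_id: str) -> str:
    # Structural blocking: HTTP handlers resolve paths only through this
    # function, so a System User path can never even be constructed.
    if is_system_user(user_id):
        raise PermissionError(f"'{user_id}' is a System User and is not HTTP-accessible")
    path = os.path.abspath(os.path.join(base_dir, user_id))
    # Path blocking: reject ids that traverse out of the user directory.
    if not path.startswith(os.path.abspath(base_dir) + os.sep):
        raise PermissionError("path escapes the user directory")
    return path
```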
* style: remove unused imports
* Support video tiny VAEs
* lighttaew scaling fix
* Also support video taes in previews
Only the first frame for now, as live preview playback is currently only
available through the VHS custom nodes.
* Support Wan 2.1 lightVAE
* Relocate elif block and set Wan VAE dim directly without using pruning rate for lightvae
* mm: default to 0 for NUM_STREAMS
Don't count the compute stream as an offload stream. This makes async
offload accounting easier.
* mm: remove 128MB minimum
This is a requirement left over from a previous offloading system.
Remove it to make the behaviour of the loader and the partial unloader
consistent.
* mp: order the module list by offload expense
Calculate an approximate temporary-VRAM cost of offloading a weight
and order the module load list primarily by that. In the simple case
this is just the module's weight size, but a weight with a Lora
consumes considerably more VRAM, since the Lora application happens
on-the-fly.
This will slightly prioritize Lora weights, but is really about
proper VRAM offload accounting.
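A sketch of that ordering, with hypothetical field names standing in for the real patcher structures:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModuleEntry:
    # Illustrative record; not the real patcher API.
    weight_bytes: int
    lora_temp_bytes: int = 0  # extra VRAM for on-the-fly Lora application

def offload_cost(m: ModuleEntry) -> int:
    # A plain weight costs roughly its own size to offload; a weight with
    # a Lora also needs temporary buffers to apply the Lora on the fly.
    return m.weight_bytes + m.lora_temp_bytes

modules: List[ModuleEntry] = [
    ModuleEntry(weight_bytes=64 << 20),
    ModuleEntry(weight_bytes=32 << 20, lora_temp_bytes=48 << 20),
]
# Order the load list by offload expense, largest first, so every weight
# that follows is guaranteed to need a smaller offload buffer.
modules.sort(key=offload_cost, reverse=True)
```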
* mp: Account for the VRAM cost of weight offloading
When checking the VRAM headroom, assume that the weight needs to be
offloaded, and only load if there is space for both the load and the
offload times the number of streams.
As the weights are ordered from largest to smallest by offload cost,
this is guaranteed to fit in VRAM (tm), as all weights that follow
will be smaller.
Make the partial unloader aware of this system as well by saving the
offload-VRAM budget to the model state and accounting accordingly.
It's possible that a partial unload increases the size of the largest
offloaded weights, and thus needs to unload a little more than asked
to accommodate the bigger temp buffers.
Honor the existing code's 128MB floor on model weight loading by
having the patcher honor it separately, without regard to offloading.
Otherwise, when MM specifies its 128MB minimum, MP will see the
biggest weights, budget that 128MB entirely to offload buffers, and
load nothing, which isn't the intent of these minimums. The same
clamp applies in the case of a partial offload of the currently
loading model.
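Putting the accounting together in a hedged sketch (the names and the simple free-VRAM model are assumptions, not the actual code):

```python
NUM_STREAMS = 1              # offload copy streams; the compute stream is not counted
MIN_LOAD_BYTES = 128 << 20   # the existing 128MB floor on model weight loading

def can_load(weight_bytes: int, offload_bytes: int, free_vram: int) -> bool:
    # Assume the weight will later need to be offloaded: reserve room for
    # the load itself plus one offload temp buffer per offload stream.
    # Because weights are visited largest-offload-cost-first, anything
    # that fits here guarantees that all later weights fit too.
    return weight_bytes + offload_bytes * NUM_STREAMS <= free_vram

def weight_budget(requested_bytes: int) -> int:
    # Honor the 128MB floor separately from offload accounting, so the
    # minimum buys actual weights instead of being eaten by offload buffers.
    return max(requested_bytes, MIN_LOAD_BYTES)
```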
* Create nodes_dataset.py
* Add encoded dataset caching mechanism
* make the training node work with our dataset system
* allow the trainer node to accept datasets of different resolutions
* move all dataset related implementation to nodes_dataset
* Rewrite dataset system with new io schema
* Rewrite training system with new io schema
* add ui pbar
* Add outputs' id/name
* Fix bad id/naming
* use a single process instead of an input list when not needed
* fix wrong output_list flag
* use torch.load/save and fix bad behaviors
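A minimal sketch of the encoded-dataset caching mechanism using torch.save/torch.load; the cache layout and helper names are hypothetical:

```python
import os
import torch

def cache_path(cache_dir: str, sample_id: str) -> str:
    # Hypothetical layout: one .pt file per encoded sample.
    return os.path.join(cache_dir, f"{sample_id}.pt")

def get_encoded(cache_dir: str, sample_id: str, encode_fn, sample):
    # Encode once, then reuse the cached latent on later runs.
    path = cache_path(cache_dir, sample_id)
    if os.path.exists(path):
        return torch.load(path)
    latent = encode_fn(sample)
    os.makedirs(cache_dir, exist_ok=True)
    torch.save(latent, path)
    return latent
```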