3963 Commits

Author SHA1 Message Date
Alexander Piskun
d20576e6a3
convert nodes_sag.py to V3 schema (#9940) 2025-09-26 14:13:52 -07:00
Alexander Piskun
a061b06321
convert nodes_tcfg.py to V3 schema (#9942) 2025-09-26 14:13:05 -07:00
Alexander Piskun
80718908a9
convert nodes_sdupscale.py to V3 schema (#9943) 2025-09-26 14:12:38 -07:00
Alexander Piskun
7ea173c187
convert nodes_fresca.py to V3 schema (#9951) 2025-09-26 14:12:04 -07:00
Alexander Piskun
76eb1d72c3
convert nodes_rebatch.py to V3 schema (#9945) 2025-09-26 14:10:49 -07:00
Yoland Yan
c4a46e943c
Add @kosinkadink as code owner (#10041)
Updated CODEOWNERS to include @kosinkadink as a code owner.
2025-09-26 17:08:16 -04:00
comfyanonymous
2b7f9a8196
Fix the failing unit test. (#10037) 2025-09-26 14:12:43 -04:00
comfyanonymous
ce4cb2389c
Make LatentCompositeMasked work with basic video latents. (#10023) 2025-09-25 17:20:13 -04:00
Guy Niv
c8d2117f02
Fix memory leak by properly detaching model finalizer (#9979)
When unloading models in load_models_gpu(), the model finalizer was not
being explicitly detached, leading to a memory leak: memory consumption
grew linearly over time as models were repeatedly loaded and unloaded.

This change prevents orphaned finalizer references from accumulating in
memory during model switching operations.
2025-09-24 22:35:12 -04:00
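The fix above amounts to explicitly detaching a `weakref.finalize` handle on the deliberate unload path, so the finalizer registry does not keep references alive. A minimal illustration using only the standard library (class and names are hypothetical, not ComfyUI's actual API):

```python
import weakref

class Model:
    """Stand-in for a loaded model; illustrative only."""

def cleanup(name):
    pass  # e.g. release VRAM, close handles

model = Model()
finalizer = weakref.finalize(model, cleanup, "unet")

# On an explicit unload, detach the finalizer instead of leaving it
# registered: detach() unregisters the callback and its captured args,
# so they cannot accumulate across repeated load/unload cycles.
detached = finalizer.detach()  # returns (obj, func, args, kwargs), or None if already dead
assert not finalizer.alive
```

Without the `detach()` call, each load/unload cycle would leave one live finalizer entry behind, which matches the linear growth described in the commit.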
comfyanonymous
fccab99ec0
Fix issue with .view() in HuMo. (#10014) 2025-09-24 20:09:42 -04:00
Jukka Seppänen
fd79d32f38
Add new audio nodes (#9908)
* Add new audio nodes

- TrimAudioDuration
- SplitAudioChannels
- AudioConcat
- AudioMerge
- AudioAdjustVolume

* Update nodes_audio.py

* Add EmptyAudio node

* Change duration to Float (allows sub-second durations)
2025-09-24 18:59:29 -04:00
Changrz
341b4adefd
Rodin3D - add [Rodin3D Gen-2 generate] api-node (#9994)
* update Rodin api node

* update rodin3d gen2 api node

* fix bug limiting the number of input images
2025-09-24 14:05:37 -04:00
comfyanonymous
b8730510db ComfyUI version 0.3.60 v0.3.60 2025-09-23 11:50:33 -04:00
Alexander Piskun
e808790799
feat(api-nodes): add wan t2i, t2v, i2v nodes (#9996) 2025-09-23 11:36:47 -04:00
ComfyUI Wiki
145b0e4f79
update template to 0.1.86 (#9998)
* update template to 0.1.84

* update template to 0.1.85

* Update template to 0.1.86
2025-09-23 11:22:35 -04:00
comfyanonymous
707b2638ec
Fix bug with WanAnimateToVideo. (#9990) 2025-09-22 17:34:33 -04:00
comfyanonymous
8a5ac527e6
Fix bug with WanAnimateToVideo node. (#9988) 2025-09-22 17:26:58 -04:00
Christian Byrne
e3206351b0
add offset param (#9977) 2025-09-22 17:12:32 -04:00
comfyanonymous
1fee8827cb
Support for qwen edit plus model. Use the new TextEncodeQwenImageEditPlus. (#9986) 2025-09-22 16:49:48 -04:00
comfyanonymous
27bc181c49
Set some wan nodes as no longer experimental. (#9976) 2025-09-21 19:48:31 -04:00
comfyanonymous
d1d9eb94b1
Lower wan memory estimation value a bit. (#9964)
The previous PR reduced the peak memory requirement.
2025-09-20 22:09:35 -04:00
Kohaku-Blueleaf
7be2b49b6b
Fix LoRA Trainer bugs with FP8 models. (#9854)
* Fix adapter weight init

* Fix fp8 model training

* Avoid inference tensor
2025-09-20 21:24:48 -04:00
Jedrzej Kosinski
9ed3c5cc09
[Reviving #5709] Add strength input to Differential Diffusion (#9957)
* Update nodes_differential_diffusion.py

* Update nodes_differential_diffusion.py

* Make strength optional to avoid validation errors when loading old workflows, adjust step

---------

Co-authored-by: ThereforeGames <eric@sparknight.io>
2025-09-20 21:10:39 -04:00
comfyanonymous
66241cef31
Add inputs for character replacement to the WanAnimateToVideo node. (#9960) 2025-09-20 02:24:10 -04:00
comfyanonymous
e8df53b764
Update WanAnimateToVideo to more easily extend videos. (#9959) 2025-09-19 18:48:56 -04:00
Alexander Piskun
852704c81a
fix(seedream4): add flag to ignore error on partial success (#9952) 2025-09-19 16:04:51 -04:00
Alexander Piskun
9fdf8c25ab
api_nodes: reduce default timeout from 7 days to 2 hours (#9918) 2025-09-19 16:02:43 -04:00
comfyanonymous
dc95b6acc0
Basic WIP support for the wan animate model. (#9939) 2025-09-19 03:07:17 -04:00
Christian Byrne
711bcf33ee
Bump frontend to 1.26.13 (#9933) 2025-09-19 03:03:30 -04:00
comfyanonymous
24b0fce099
Do padding of audio embed in model for humo for more flexibility. (#9935) 2025-09-18 19:54:16 -04:00
Jodh Singh
1ea8c54064
make kernel the same dtype as the image to avoid mismatch issues (#9932) 2025-09-18 19:51:16 -04:00
DELUXA
8d6653fca6
Enable fp8 ops by default on gfx1200 (#9926) 2025-09-18 19:50:37 -04:00
comfyanonymous
dd611a7700
Support the HuMo 17B model. (#9912) 2025-09-17 18:39:24 -04:00
comfyanonymous
9288c78fc5
Support the HuMo model. (#9903) 2025-09-17 00:12:48 -04:00
rattus128
e42682b24e
Reduce Peak WAN inference VRAM usage (#9898)
* flux: Do the xq and xk ropes one at a time

This was doing independent, interleaved tensor math on the q and k
tensors, holding more than the minimum number of intermediates
in VRAM. On a bad day, it would OOM on the xk intermediates.

Do everything for q and then everything for k, so torch can garbage
collect all of q's intermediates before k allocates its own.

This reduces peak VRAM usage for some WAN2.2 inferences (at least).

* wan: Optimize qkv intermediates on attention

As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM, spiking
VRAM usage. Fully roping Q and garbage collecting the intermediates
before touching K reduces peak inference VRAM usage.
2025-09-16 19:21:14 -04:00
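The ordering change described in this commit can be sketched with a toy rotary embedding: finishing all of q before starting k lets q's temporaries be garbage collected before k allocates its own, lowering peak memory. This is a minimal NumPy sketch, not the actual flux/wan code; the `rope` function and shapes are illustrative assumptions:

```python
import numpy as np

def rope(x, freqs):
    # Toy rotary position embedding: rotate channel pairs by `freqs`.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(freqs), np.sin(freqs)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 8))  # (batch, seq, dim)
k = rng.standard_normal((2, 4, 8))
freqs = np.linspace(0.0, 1.0, 4)[None, :, None]  # one angle per position

# Old: interleaved partial q/k math kept temporaries for both alive at once.
# New: do everything for q, let its temporaries be freed, then do all of k.
q_out = rope(q, freqs)  # everything q first
k_out = rope(k, freqs)  # then everything k
```

The results are identical either way; only the lifetime of the intermediates changes, which is what reduces the peak VRAM footprint.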
comfyanonymous
a39ac59c3e
Add encoder part of whisper large v3 as an audio encoder model. (#9894)
Not useful yet but some models use it.
2025-09-16 01:19:50 -04:00
blepping
1a85483da1
Fix depending on asserts to raise an exception in BatchedBrownianTree and Flash attn module (#9884)
Correctly handle the case where w0 is passed by kwargs in BatchedBrownianTree
2025-09-15 20:05:03 -04:00
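The assert fix above reflects a general Python pitfall: `assert` statements are stripped when the interpreter runs with `-O`, so code that depends on them to raise never fails where expected. A minimal sketch of the safer pattern, also handling an argument passed either positionally or by keyword as in the `w0` case (function name and signature are hypothetical, not the BatchedBrownianTree API):

```python
def get_w0(*args, **kwargs):
    """Fetch a required argument that may arrive positionally or as w0=..."""
    # Fragile version (avoid): `assert args or "w0" in kwargs` vanishes
    # under `python -O`, silently skipping the check.
    if args:
        return args[0]
    if "w0" in kwargs:
        return kwargs["w0"]
    # Robust: raise explicitly so the error survives optimized mode.
    raise TypeError("w0 is required (positional or keyword)")
```

Handling the keyword form explicitly mirrors the commit note about correctly treating `w0` passed via kwargs.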
comfyanonymous
47a9cde5d3
Support the omnigen2 umo lora. (#9886) 2025-09-15 18:10:55 -04:00
comfyanonymous
4f1f26ac6c
Add that hunyuan image is supported to readme. (#9857) 2025-09-14 04:05:38 -04:00
Jedrzej Kosinski
f228367c5e
Make ModuleNotFoundError ImportError instead (#9850) 2025-09-13 21:34:21 -04:00
comfyanonymous
80b7c9455b
Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
blepping
c1297f4eb3
Add support for Chroma Radiance (#9682)
* Initial Chroma Radiance support

* Minor Chroma Radiance cleanups

* Update Radiance nodes to ensure latents/images are on the intermediate device

* Fix Chroma Radiance memory estimation.

* Increase Chroma Radiance memory usage factor

* Increase Chroma Radiance memory usage factor once again

* Ensure images are multiples of 16 for Chroma Radiance
Add batch dimension and fix channels when necessary in ChromaRadianceImageToLatent node

* Tile Chroma Radiance NeRF to reduce memory consumption, update memory usage factor

* Update Radiance to support conv nerf final head type.

* Allow setting NeRF embedder dtype for Radiance
Bump Radiance nerf tile size to 32
Support EasyCache/LazyCache on Radiance (maybe)

* Add ChromaRadianceStubVAE node

* Crop Radiance image inputs to multiples of 16 instead of erroring to be in line with existing VAE behavior

* Convert Chroma Radiance nodes to V3 schema.

* Add ChromaRadianceOptions node and backend support.
Cleanups/refactoring to reduce code duplication with Chroma.

* Fix overriding the NeRF embedder dtype for Chroma Radiance

* Minor Chroma Radiance cleanups

* Move Chroma Radiance to its own directory in ldm
Minor code cleanups and tooltip improvements

* Fix Chroma Radiance embedder dtype overriding

* Remove Radiance dynamic nerf_embedder dtype override feature

* Unbork Radiance NeRF embedder init

* Remove Chroma Radiance image conversion and stub VAE nodes
Add a chroma_radiance option to the VAELoader builtin node which uses comfy.sd.PixelspaceConversionVAE
Add a PixelspaceConversionVAE to comfy.sd for converting BHWC 0..1 <-> BCHW -1..1
2025-09-13 17:58:43 -04:00
Kimbing Ng
e5e70636e7
Remove single quote pattern to avoid wrong matches (#9842) 2025-09-13 16:59:19 -04:00
comfyanonymous
29bf807b0e
Cleanup. (#9838) 2025-09-12 21:57:04 -04:00
Jukka Seppänen
2559dee492
Support wav2vec base models (#9637)
* Support wav2vec base models

* trim trailing whitespace

* Do interpolation after
2025-09-12 21:52:58 -04:00
comfyanonymous
a3b04de700
Hunyuan refiner vae now works with tiled. (#9836) 2025-09-12 19:46:46 -04:00
Jedrzej Kosinski
d7f40442f9
Enable Runtime Selection of Attention Functions (#9639)
* Looking into a @wrap_attn decorator to look for 'optimized_attention_override' entry in transformer_options

* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added

* Fix memory usage issue with inspect

* Made WAN attention receive transformer_options, test node added to wan to test out attention override later

* Added **kwargs to all attention functions so transformer_options could potentially be passed through

* Make sure wrap_attn doesn't make itself recurse infinitely, attempt to load SageAttention and FlashAttention if not enabled so that they can be marked as available or not, create registry for available attention

* Turn off attention logging for now, make AttentionOverrideTestNode have a dropdown with available attention (this is a test node only)

* Make flux work with optimized_attention_override

* Add logs to verify optimized_attention_override is passed all the way into attention function

* Make Qwen work with optimized_attention_override

* Made hidream work with optimized_attention_override

* Made wan patches_replace work with optimized_attention_override

* Made SD3 work with optimized_attention_override

* Made HunyuanVideo work with optimized_attention_override

* Made Mochi work with optimized_attention_override

* Made LTX work with optimized_attention_override

* Made StableAudio work with optimized_attention_override

* Made optimized_attention_override work with ACE Step

* Made Hunyuan3D work with optimized_attention_override

* Make CosmosPredict2 work with optimized_attention_override

* Made CosmosVideo work with optimized_attention_override

* Made Omnigen 2 work with optimized_attention_override

* Made StableCascade work with optimized_attention_override

* Made AuraFlow work with optimized_attention_override

* Made Lumina work with optimized_attention_override

* Made Chroma work with optimized_attention_override

* Made SVD work with optimized_attention_override

* Fix WanI2VCrossAttention so that it expects to receive transformer_options

* Fixed Wan2.1 Fun Camera transformer_options passthrough

* Fixed WAN 2.1 VACE transformer_options passthrough

* Add optimized to get_attention_function

* Disable attention logs for now

* Remove attention logging code

* Remove _register_core_attention_functions, as we wouldn't want someone to call that, just in case

* Satisfy ruff

* Remove AttentionOverrideTest node, that's something to cook up for later
2025-09-12 18:07:38 -04:00
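The mechanism this PR describes, a registry of available attention implementations plus an `optimized_attention_override` entry looked up in `transformer_options`, can be sketched roughly as follows. All names and signatures here are illustrative guesses from the commit messages, not ComfyUI's actual internals:

```python
# Registry of attention implementations known to be available at runtime.
ATTENTION_REGISTRY = {}

def register_attention(name, fn):
    ATTENTION_REGISTRY[name] = fn

def wrap_attn(default_attn):
    """Wrap an attention function so transformer_options can override it."""
    def attention(q, k, v, transformer_options=None, **kwargs):
        opts = transformer_options or {}
        override = opts.get("optimized_attention_override")
        # Guard against infinite recursion if the wrapper itself is the override.
        if override is not None and override is not attention:
            return override(q, k, v, transformer_options=opts, **kwargs)
        return default_attn(q, k, v, **kwargs)
    return attention

def naive_attention(q, k, v, **kwargs):
    # Placeholder math standing in for a real attention kernel.
    return [qi * ki * vi for qi, ki, vi in zip(q, k, v)]

register_attention("naive", naive_attention)
attn = wrap_attn(naive_attention)

out = attn([1, 2], [3, 4], [5, 6], transformer_options={})
```

Passing `**kwargs` through every attention call site, as the PR's bullets describe per model family, is what lets `transformer_options` reach the wrapper without changing each model's signature.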
comfyanonymous
b149e2e1e3
Better way of doing the generator for the hunyuan image noise aug. (#9834) 2025-09-12 17:53:15 -04:00
Alexander Piskun
581bae2af3
convert Moonvalley API nodes to the V3 schema (#9698) 2025-09-12 17:41:26 -04:00
Alexander Piskun
af99928f22
convert Canny node to V3 schema (#9743) 2025-09-12 17:40:34 -04:00