729a6485ea | kijai | 2024-12-01 18:45:10 +02:00
  expose sageattn 2.0.0 functions
  _cuda versions seem to be required on RTX 30xx-series GPUs for sageattn + CogVideoX 1.5

b74aa75026 | kijai | 2024-11-20 14:22:10 +02:00
  Don't use autocast with fp/bf16

b31a025673 | Dango233 | 2024-11-20 17:40:28 +08:00
  Fix fused sdpa

b0eabeba24 | kijai | 2024-11-19 20:18:13 +02:00
  fix comfy attention output shape

882faa6dea | kijai | 2024-11-19 19:55:51 +02:00
  add comfyui attention mode

128f89c4d2 | kijai | 2024-11-19 15:23:38 +02:00
  Update workflows, fix controlnet

a7646c0d6f | kijai | 2024-11-19 03:04:22 +02:00
  refactor
  - unify all pipelines into one
  - unify transformer model into one
  - separate VAE
  - add single file model loading

6f9e4ff647 | kijai | 2024-11-17 22:23:40 +02:00
  Update custom_cogvideox_transformer_3d.py

e70da23ac2 | kijai | 2024-11-17 22:11:32 +02:00
  exclude sageattn from compile

eebdc412f9 | kijai | 2024-11-17 21:43:53 +02:00
  fix sageattention

bececf0189 | kijai | 2024-11-16 17:32:31 +02:00
  some experimental optimizations

dac6a2a3ac | kijai | 2024-11-12 08:42:01 +02:00
  allow limiting blocks to cache

0a121dba53 | kijai | 2024-11-12 07:39:00 +02:00
  fix FasterCache

6931576916 | kijai | 2024-11-11 18:53:12 +02:00
  update from upstream, ofs embeds

5f1a917b93 | kijai | 2024-11-11 17:29:57 +02:00
  padding fix

ca63f5dade | kijai | 2024-11-11 01:19:11 +02:00
  update

fb246f95ef | kijai | 2024-11-09 22:56:50 +02:00
  attention compile works with higher cache_size_limit

634c22db50 | kijai | 2024-11-09 17:05:55 +02:00
  sageattn

643bbc18c1 | kijai | 2024-11-09 12:13:52 +02:00
  i2v

b563994afc | Jukka Seppänen | 2024-11-09 04:02:36 +02:00
  finally works

9aab678a9e | Jukka Seppänen | 2024-11-09 03:15:21 +02:00
  test

e783951dad | kijai | 2024-11-09 02:24:18 +02:00
  maybe

2074ba578e | kijai | 2024-11-08 21:24:20 +02:00
  doesn't work yet

1cc6e1f070 | kijai | 2024-11-08 14:24:32 +02:00
  use diffusers LoRA loading to support fusing for DimensionX LoRAs
  https://github.com/wenqsun/DimensionX

5b4819ba65 | kijai | 2024-10-29 10:44:09 +02:00
  support Tora for Fun models

66ba4e1ee7 | kijai | 2024-10-28 22:52:30 +02:00
  fix fastercache start step

e9fc26b5e3 | kijai | 2024-10-28 21:02:10 +02:00
  initial experimental FasterCache support for 2b models

dcca095743 | kijai | 2024-10-26 02:33:29 +03:00
  make torch compile work better

2cc521062f | kijai | 2024-10-22 18:40:31 +03:00
  correct Tora fuser dtype (I think...)

a654821515 | kijai | 2024-10-21 22:53:36 +03:00
  testing Tora for I2V

256a638ee4 | kijai | 2024-10-21 03:24:53 +03:00
  cleanup, bugfixes

e8bc2fd052 | kijai | 2024-10-21 00:11:39 +03:00
  Initial Tora implementation
  https://github.com/alibaba/Tora

d76229c49b | kijai | 2024-10-08 16:22:07 +03:00
  controlnet support
  https://huggingface.co/TheDenk/cogvideox-2b-controlnet-hed-v1
  https://huggingface.co/TheDenk/cogvideox-2b-controlnet-canny-v1

49451c4a22 | Jukka Seppänen | 2024-10-06 20:26:03 +03:00
  support sageattn
  https://github.com/thu-ml/SageAttention