kijai | 6931576916 | update from upstream, ofs embeds | 2024-11-11 18:53:12 +02:00
kijai | 5f1a917b93 | padding fix | 2024-11-11 17:29:57 +02:00
kijai | ca63f5dade | update | 2024-11-11 01:19:11 +02:00
kijai | fb246f95ef | attention compile works with higher cache_size_limit | 2024-11-09 22:56:50 +02:00
kijai | 634c22db50 | sageattn | 2024-11-09 17:05:55 +02:00
kijai | 643bbc18c1 | i2v | 2024-11-09 12:13:52 +02:00
Jukka Seppänen | b563994afc | finally works | 2024-11-09 04:02:36 +02:00
Jukka Seppänen | 9aab678a9e | test | 2024-11-09 03:15:21 +02:00
kijai | e783951dad | maybe | 2024-11-09 02:24:18 +02:00
kijai | 2074ba578e | doesn't work yet | 2024-11-08 21:24:20 +02:00
kijai | 1cc6e1f070 | use diffusers LoRA loading to support fusing for DimensionX LoRAs (https://github.com/wenqsun/DimensionX) | 2024-11-08 14:24:32 +02:00
kijai | 5b4819ba65 | support Tora for Fun models | 2024-10-29 10:44:09 +02:00
kijai | 66ba4e1ee7 | fix fastercache start step | 2024-10-28 22:52:30 +02:00
kijai | e9fc26b5e3 | initial experimental FasterCache support for 2b models | 2024-10-28 21:02:10 +02:00
kijai | dcca095743 | make torch compile work better | 2024-10-26 02:33:29 +03:00
kijai | 2cc521062f | correct Tora fuser dtype, I think... | 2024-10-22 18:40:31 +03:00
kijai | a654821515 | testing Tora for I2V | 2024-10-21 22:53:36 +03:00
kijai | 256a638ee4 | cleanup, bugfixes | 2024-10-21 03:24:53 +03:00
kijai | e8bc2fd052 | Initial Tora implementation (https://github.com/alibaba/Tora) | 2024-10-21 00:11:39 +03:00
kijai | d76229c49b | controlnet support (https://huggingface.co/TheDenk/cogvideox-2b-controlnet-hed-v1, https://huggingface.co/TheDenk/cogvideox-2b-controlnet-canny-v1) | 2024-10-08 16:22:07 +03:00
Jukka Seppänen | 49451c4a22 | support sageattn (https://github.com/thu-ml/SageAttention) | 2024-10-06 20:26:03 +03:00