50 Commits

Author SHA1 Message Date
Jukka Seppänen
8d6e53b556 Allow compiling VAE as well 2024-11-23 17:08:57 +02:00
kijai
895d3b83a4 Update model_loading.py 2024-11-21 03:05:51 +02:00
kijai
276a045a57 use selected load device as LoRA load device too 2024-11-21 02:46:07 +02:00
kijai
e52dc36bc5 Update model_loading.py 2024-11-20 21:28:53 +02:00
kijai
e5fc7c1bf3 Allow mixing Fun and not fun loras 2024-11-20 21:24:51 +02:00
kijai
e187cfe22f Allow loading the "Rewards" LoRAs into 1.5 as well (for what it's worth) 2024-11-20 19:18:40 +02:00
kijai
b74aa75026 Don't use autocast with fp/bf16 2024-11-20 14:22:10 +02:00
kijai
ecd067260c Add CogVideoX-Fun-V1.1-5b-Control (https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-5b-Control) 2024-11-20 01:23:54 +02:00
kijai
41a0f33381 Update model_loading.py 2024-11-19 20:27:31 +02:00
kijai
1cfe0835f5 fix GGUF loader 2024-11-19 20:23:47 +02:00
kijai
882faa6dea add comfyui attention mode 2024-11-19 19:55:51 +02:00
kijai
516655b689 Update model_loading.py 2024-11-19 19:17:42 +02:00
kijai
67f2f6abb1 Merge branch 'refactor' 2024-11-19 19:16:39 +02:00
kijai
feeff366b5 update 2024-11-19 19:06:15 +02:00
kijai
6302e4b668 Allow orbit LoRAs with Fun models as well 2024-11-19 15:49:43 +02:00
kijai
a7646c0d6f refactor: unify all pipelines into one, unify the transformer model into one, separate the VAE, add single-file model loading 2024-11-19 03:04:22 +02:00
kijai
15aa68c95d update 2024-11-17 00:48:01 +02:00
kijai
4374273138 possible compile fixes 2024-11-16 22:18:12 +02:00
kijai
bececf0189 some experimental optimizations 2024-11-16 17:32:31 +02:00
kijai
75e98906a3 rotary embed fix (25a9e1c567) 2024-11-15 02:37:44 +02:00
kijai
0bd3da569e code cleanup: codebase getting too bloated; drop PAB support in favor of FasterCache, drop temporal tiling in favor of FreeNoise 2024-11-14 19:54:52 +02:00
kijai
e8a289112f fix VAE scaling (again) 2024-11-13 15:37:45 +02:00
kijai
ba2dbfbeb4 fixes 2024-11-12 11:29:50 +02:00
kijai
db697fea11 Update model_loading.py 2024-11-12 00:22:26 +02:00
kijai
00fde5ebce allow fused loras with fp8 2024-11-11 23:36:48 +02:00
kijai
ea0273c8ec VAE fix, allow using fp32 VAE 2024-11-11 22:34:43 +02:00
kijai
6d4c99e77d Update model_loading.py 2024-11-11 19:58:17 +02:00
kijai
ca63f5dade update 2024-11-11 01:19:11 +02:00
kijai
87ed4a56cf Update model_loading.py 2024-11-10 18:19:35 +02:00
kijai
ea5ee0b017 GGUF Q4 works 2024-11-10 18:13:44 +02:00
kijai
fb246f95ef attention compile works with higher cache_size_limit 2024-11-09 22:56:50 +02:00
kijai
a630bb3314 update 2024-11-09 22:17:10 +02:00
Jukka Seppänen
909d7026f3 Update model_loading.py 2024-11-09 20:20:15 +02:00
kijai
7162d1040d Update model_loading.py 2024-11-09 18:14:48 +02:00
kijai
75aa19b4e1 Update model_loading.py 2024-11-09 18:12:03 +02:00
kijai
634c22db50 sageattn 2024-11-09 17:05:55 +02:00
kijai
1c3aff9000 some cleanup 2024-11-09 15:15:10 +02:00
kijai
643bbc18c1 i2v 2024-11-09 12:13:52 +02:00
Jukka Seppänen
2eb9b81d27 fp8 2024-11-09 04:41:07 +02:00
Jukka Seppänen
9a64e1ae5e Update model_loading.py 2024-11-09 04:16:46 +02:00
Jukka Seppänen
b563994afc finally works 2024-11-09 04:02:36 +02:00
kijai
806a0fa1d6 Update model_loading.py 2024-11-08 21:31:31 +02:00
kijai
2074ba578e doesn't work yet 2024-11-08 21:24:20 +02:00
kijai
f7a88cbd56 Update model_loading.py 2024-11-08 21:23:29 +02:00
kijai
4c2ce52f57 Update model_loading.py 2024-11-08 17:30:51 +02:00
kijai
1cc6e1f070 use diffusers LoRA loading to support fusing for DimensionX LoRAs (https://github.com/wenqsun/DimensionX) 2024-11-08 14:24:32 +02:00
kijai
07defb52b6 Add compile_args node 2024-11-07 23:17:37 +02:00
kijai
9cce8015d3 Update model_loading.py 2024-11-07 23:02:52 +02:00
kijai
57262d9762 Update model_loading.py 2024-10-31 01:04:43 +02:00
kijai
5ba9b1d634 code cleanup 2024-10-30 21:30:03 +02:00