comfyanonymous
bdb10a583f
Fix loras not working on mixed fp8. ( #10899 )
2025-11-26 00:07:58 -05:00
Kohaku-Blueleaf
7be2b49b6b
Fix LoRA Trainer bugs with FP8 models. ( #9854 )
* Fix adapter weight init
* Fix fp8 model training
* Avoid inference tensor
2025-09-20 21:24:48 -04:00
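The standard LoRA merge rule behind fixes like this is W' = W + (alpha / rank) * up @ down, with the delta computed in full precision before any cast back to fp8. Below is a minimal pure-Python sketch of that rule; the function names and the plain-list "tensors" are illustrative, not ComfyUI's actual code.

```python
# Hypothetical sketch of the usual LoRA merge, W' = W + (alpha / rank) * B @ A.
# With fp8 base models the base weight is typically upcast, the delta added in
# full precision, then the result cast back; plain Python floats stand in here.

def matmul(b, a):
    """Multiply an (out x r) matrix by an (r x in) matrix."""
    return [[sum(b[i][k] * a[k][j] for k in range(len(a)))
             for j in range(len(a[0]))]
            for i in range(len(b))]

def merge_lora(weight, down, up, alpha):
    """Return weight + (alpha / rank) * up @ down, elementwise."""
    rank = len(down)                 # down is (rank x in_features)
    scale = alpha / rank
    delta = matmul(up, down)         # up is (out_features x rank)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(weight, delta)]

W = [[0.0, 0.0], [0.0, 0.0]]
A = [[1.0, 2.0]]                     # down projection, rank 1
B = [[1.0], [2.0]]                   # up projection
merged = merge_lora(W, A, B, alpha=2.0)
```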
Kohaku-Blueleaf
b20ba1f27c
Fix #9537 ( #9576 )
2025-08-27 12:45:02 -04:00
flybirdxx
4c3e57b0ae
Fixed an issue where qwenLora could not be loaded properly. ( #9208 )
2025-08-06 13:23:11 -04:00
Kohaku-Blueleaf
eb2f78b4e0
[Training Node] algo support, grad acc, optional grad ckpt ( #9015 )
* Add factorization utils for lokr
* Add lokr train impl
* Add loha train impl
* Add adapter map for algo selection
* Add optional grad ckpt and algo selection
* Update __init__.py
* correct key name for loha
* Use custom fwd/bwd func and better init for loha
* Support gradient accumulation
* Fix bugs of loha
* use more stable init
* Add OFT training
* linting
2025-07-23 20:57:27 -04:00
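Gradient accumulation, one of the features this commit adds, sums gradients over several micro-batches and takes a single averaged optimizer step, emulating a larger batch in the same memory. A minimal sketch for one scalar parameter, with illustrative names and a toy SGD update rather than the trainer's real optimizer:

```python
# Hedged sketch of gradient accumulation: gradients from accum_steps
# micro-batches are summed, averaged, and applied in one SGD step.

def sgd_with_accumulation(param, micro_batches, grad_fn, lr, accum_steps):
    """Accumulate grad_fn over accum_steps micro-batches, then step once."""
    grad_sum = 0.0
    for i, batch in enumerate(micro_batches, start=1):
        grad_sum += grad_fn(param, batch)
        if i % accum_steps == 0:
            param -= lr * (grad_sum / accum_steps)   # average, then step
            grad_sum = 0.0
    return param

# Toy loss L = (param - target)^2 per sample, so dL/dparam = 2 * (param - target).
grad = lambda p, target: 2.0 * (p - target)
p = sgd_with_accumulation(0.0, [1.0, 1.0, 1.0, 1.0], grad, lr=0.25, accum_steps=4)
```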
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node in weight adapter scheme ( #8446 )
2025-06-13 19:25:59 -04:00
Kohaku-Blueleaf
2ab9618732
Fix the bugs in OFT/BOFT module ( #7909 )
* Correct calculate_weight and load for OFT
* Correct calculate_weight and loading for BOFT
2025-05-02 13:12:37 -04:00
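The core idea in OFT, whose calculate_weight this commit corrects, is tuning by rotation: the adapted weight is W' = R @ W with R orthogonal, and R is commonly produced from a skew-symmetric parameter via the Cayley transform R = (I - Q)(I + Q)^(-1). A 2x2 illustration with hand-rolled math (not ComfyUI's implementation, which works blockwise on full tensors):

```python
# Hedged 2x2 sketch of OFT: an orthogonal R is built from one skew parameter q
# via the Cayley transform, then applied as W' = R @ W.

def cayley_2x2(q):
    """Cayley transform of Q = [[0, q], [-q, 0]]; returns an orthogonal 2x2 R."""
    d = 1.0 + q * q                        # det(I + Q)
    return [[(1 - q * q) / d, -2 * q / d],
            [2 * q / d, (1 - q * q) / d]]

def apply_oft(r, w):
    """Rotate the weight: W' = R @ W (2x2 only)."""
    return [[sum(r[i][k] * w[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = cayley_2x2(0.5)
# Orthogonality check: each row of R has unit norm.
row_norm = R[0][0] ** 2 + R[0][1] ** 2
rotated = apply_oft(R, [[1.0, 0.0], [0.0, 1.0]])   # identity weight -> R itself
```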
Kohaku-Blueleaf
a8f63c0d5b
Support dora_scale on both axes ( #7727 )
2025-04-22 05:01:27 -04:00
Kohaku-Blueleaf
966c43ce26
Add OFT/BOFT algorithm in weight adapter ( #7725 )
2025-04-22 04:59:47 -04:00
Kohaku-Blueleaf
1f3fba2af5
Unified Weight Adapter system for better maintainability and future feature of Lora system ( #7540 )
2025-04-21 20:15:32 -04:00
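A unified weight-adapter system like the one this commit introduces typically means a registry of adapter classes (LoRA, LoHa, LoKr, OFT, ...), each able to recognize its own key layout in a loaded state dict. A speculative sketch of that dispatch pattern; the class names, key names, and `detect_adapter` helper are illustrative, not ComfyUI's actual API:

```python
# Speculative sketch of adapter dispatch: each adapter class declares the
# state-dict keys it understands, and the first full match claims the weight.

class LoRAAdapter:
    keys = ("lora_up.weight", "lora_down.weight")

class LoHaAdapter:
    keys = ("hada_w1_a", "hada_w1_b", "hada_w2_a", "hada_w2_b")

ADAPTERS = [LoRAAdapter, LoHaAdapter]

def detect_adapter(prefix, state_dict):
    """Return the first adapter class whose expected keys are all present."""
    for cls in ADAPTERS:
        if all(f"{prefix}.{k}" in state_dict for k in cls.keys):
            return cls
    return None

sd = {"layer.hada_w1_a": 0, "layer.hada_w1_b": 0,
      "layer.hada_w2_a": 0, "layer.hada_w2_b": 0}
found = detect_adapter("layer", sd)
```

Keeping detection, training, and `calculate_weight` on one class per algorithm is what lets later commits (LoKr/LoHa/OFT training, fp8 fixes) slot in without touching the loader.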