mirror of
https://git.datalinker.icu/comfyanonymous/ComfyUI
synced 2026-01-27 10:40:53 +08:00
* Add --use-flash-attention flag. This is useful on AMD systems, where Flash Attention builds are still roughly 10% faster than PyTorch cross-attention.
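A minimal usage sketch of the new flag, assuming ComfyUI is launched via its standard `main.py` entry point and that a compatible Flash Attention build is installed for your GPU:

```shell
# Start ComfyUI with Flash Attention as the cross-attention backend
# instead of the default PyTorch implementation.
python main.py --use-flash-attention
```

If Flash Attention is not installed or not supported on the current hardware, the flag should be omitted so ComfyUI falls back to its default attention backend.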