20 Commits

Author SHA1 Message Date
Abhinav
8b5f93df89 Add Flask-based text-only endpoints, requirements, tests and docs for issue #1014 2025-10-25 21:34:53 +05:30
GeeeekExplorer
9b4e9788e4 Merge pull request #969 from youkaichao/rmsnorm 2025-08-28 11:24:26 +08:00
    act_quant_kernel
youkaichao
adecc0efbe fix rmsnorm and act_quant_kernel 2025-08-27 17:12:13 +08:00
youkaichao
82f6008c8c fix act_quant_kernel (#968) 2025-08-27 16:23:30 +08:00
    Signed-off-by: youkaichao <youkaichao@gmail.com>
youkaichao
b15f0dbbbe support scale_fmt=ue8m0 (#964) 2025-08-27 15:30:21 +08:00
    * support scale_fmt=ue8m0
    * keep improving
    * keep improving
    * add clamp min of 1e-4
    * rename config
    Signed-off-by: youkaichao <youkaichao@gmail.com>
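The ue8m0 commits above touch the FP8 activation-quantization path. As a rough illustration only, the sketch below shows what a UE8M0-style scale (stored as a bare power-of-two exponent) combined with the mentioned clamp min of 1e-4 could look like; the function name, the 128-wide groups, and the e4m3 maximum of 448 are assumptions for this sketch, not the repository's actual act_quant_kernel.

```python
import torch

def act_quant_ue8m0(x: torch.Tensor, group_size: int = 128):
    """Sketch: quantize activations to FP8 (e4m3) with a UE8M0-style scale,
    i.e. the per-group scale is rounded up to a power of two so it can be
    stored as an 8-bit unsigned exponent. Only the 1e-4 clamp comes from the
    commit message; where exactly it is applied here is an assumption."""
    orig_shape = x.shape
    x = x.reshape(-1, group_size)
    # Per-group absolute maximum; the clamp keeps an all-zero group from
    # producing a degenerate scale.
    amax = x.abs().amax(dim=-1, keepdim=True).float()
    scale = (amax / 448.0).clamp(min=1e-4)
    # UE8M0: keep only an exponent by rounding the scale up to a power of two.
    scale = torch.exp2(torch.ceil(torch.log2(scale)))
    q = (x / scale).to(torch.float8_e4m3fn)
    return q.reshape(orig_shape), scale

# Example: quantize a random activation tensor in 128-wide groups.
x = torch.randn(4, 256)
q, s = act_quant_ue8m0(x)
```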
Xingkai Yu
4592be48c0 fp32 gate bias 2025-08-26 17:39:07 +08:00
Xingkai Yu
4cc6253d5c Merge pull request #666 from codinglover222/deepseek-doc-fix 2025-04-09 09:50:40 +08:00
    fix an args description.
huxuedan
d29a967601 modify the explanation of MLA 2025-02-26 17:07:39 +08:00
oyzh
4a65fd9221 fix an args description. 2025-02-15 11:02:28 +08:00
Xingkai Yu
1398800ebf fix scores mask 2025-02-14 20:26:45 +08:00
Xingkai Yu
5ee97a83f0 fix comment 2025-02-07 16:42:55 +08:00
Xingkai Yu
87a01053e4 Merge pull request #556 from XxAlonexX/main 2025-02-05 16:23:02 +08:00
    Fix Linear Layer Bias Initialization
XxAlonexX
6a30b43249 Fix Linear Layer Bias Initialization 2025-02-04 10:38:45 +05:30
Roman Fitzjalen
2756e130c2 clarify assertion error 2025-01-28 13:16:54 +01:00
enoch kan
bc77f22afc Updated model.py docstrings 2025-01-05 18:24:31 +00:00
enoch kan
a1296f099e Enhance documentation and update .gitignore for model conversion scripts 2025-01-05 18:18:18 +00:00
GeeeekExplorer
fd011c11aa torch rmsnorm 2025-01-05 14:33:48 +08:00
Xingkai Yu
8710ec2ecb require model-parallel in convert.py 2024-12-31 18:05:55 +08:00
Yang Wang
8f1c9488b5 handle missing scale_inv_name (#2) 2024-12-27 09:34:38 +08:00
    * handle missing scale_inv_name
      Fixed an issue where `weight` and `weight_scale_inv` (e.g. `model.layers.39.mlp.experts.92.gate_proj.weight` and `model.layers.39.mlp.experts.92.gate_proj.weight_scale_inv`) were not in the same SafeTensor, causing an assertion error because scale_inv_name was not in the state_dict.
    * sort filenames to reduce memory costs
    * Add CUDA cache clearing in memory management
      Added torch.cuda.empty_cache() to free unused memory on the GPU.
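The commit body above describes weights whose `weight_scale_inv` companion lands in a different SafeTensor shard. Below is a minimal sketch of the general idea, using hypothetical names and a simplified per-tensor dequant; the repository's actual conversion logic may differ.

```python
import torch

def maybe_dequant(name: str, weight: torch.Tensor, state_dict: dict) -> torch.Tensor:
    """Sketch of the situation described in the commit: an FP8 `weight` is
    dequantized only when its companion `weight_scale_inv` tensor is present
    in the currently loaded shard; otherwise it is passed through unchanged
    instead of tripping an assertion."""
    scale_inv_name = f"{name}_scale_inv"
    if scale_inv_name not in state_dict:
        # The companion scale lives in another safetensors file; defer.
        return weight
    scale_inv = state_dict[scale_inv_name]
    return weight.float() * scale_inv  # simplified dequant for illustration

# After each shard is processed, unused GPU blocks can be released,
# matching the commit's torch.cuda.empty_cache() addition.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```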
stack-heap-overflow
4c2fdb8f55 Release DeepSeek-V3 2024-12-26 19:01:57 +08:00