how to create local gptq-int4 models for qwen3-moe models #1552
RunningLeon started this conversation in General
Replies: 1 comment · 1 reply
-
For open-source frameworks, please try GPTQModel or llm-compressor.
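A minimal sketch with GPTQModel, assuming a release recent enough to support the Qwen3-MoE architecture; the calibration dataset, sample count, and output path are placeholders, not a prescribed recipe:

```python
# Sketch: GPTQ-Int4 quantization of a Qwen3-MoE model with GPTQModel.
# Assumes a GPTQModel version that supports the qwen3_moe architecture.
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

model_id = "Qwen/Qwen3-30B-A3B"          # base MoE model
quant_path = "Qwen3-30B-A3B-GPTQ-Int4"   # local output directory

# A few hundred to ~1k calibration texts is typical for GPTQ.
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(512))["text"]

quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=1)  # raise batch_size if VRAM allows
model.save(quant_path)
```

llm-compressor uses a similar one-shot flow driven by a GPTQModifier recipe; roughly as below, where the ignore pattern for the MoE router gates is an assumption about Qwen3 module names, not a verified requirement:

```python
# Sketch: one-shot GPTQ (W4A16 = 4-bit weights, 16-bit activations)
# with llm-compressor.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    # Leaving lm_head and the MoE router gates unquantized is a common
    # precaution; "re:.*mlp.gate$" is an assumed module-name pattern.
    ignore=["lm_head", "re:.*mlp.gate$"],
)

oneshot(
    model="Qwen/Qwen3-30B-A3B",
    dataset="open_platypus",      # any text dataset works for calibration
    recipe=recipe,
    output_dir="Qwen3-30B-A3B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

Note that llm-compressor writes a compressed-tensors W4A16 checkpoint rather than a classic GPTQ-format one, but inference engines such as vLLM can load either.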
1 reply
-
Hi, how did you create the quantized model Qwen3-30B-A3B-GPTQ-Int4, since auto-gptq does not support Qwen3 MoE models?