So, I have a GPU with 16GB of VRAM (4070 Ti Super) and 32GB of DDR4 RAM. The RAM is slow af, so I tend to run models fully on the GPU.

I can easily run models up to ~21B at Q4, sometimes high Q3.

I am testing various models out there, but I was wondering if you guys have any recommendations.

I am also really interested in understanding whether quantization really decreases model quality that much. For example, is it better to have a 12B model at Q6 (like Gemma 3 12B), a 32B model at Q2_K_L (such as QwQ 32B), or a 27B model at Q3_XS (such as Gemma 3 27B)?
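For a rough sense of how those options compare in memory, here is a back-of-the-envelope Python sketch. The bits-per-weight numbers are approximate averages for llama.cpp-style quants (my own assumptions, not exact figures from any particular GGUF file), and it only counts the weights, ignoring KV cache and runtime overhead.

```python
# Back-of-the-envelope size estimate: parameters (billions) * bits-per-weight / 8
# gives the weight size in GB. The bits-per-weight values below are rough
# assumed averages for llama.cpp-style quants, not exact per-file figures.

BPW = {"Q2_K": 2.7, "Q3_XS": 3.3, "Q4_K_M": 4.8, "Q6_K": 6.6}

def weight_gb(params_b: float, quant: str) -> float:
    """Approximate size of the quantized weights alone, in GB."""
    return params_b * BPW[quant] / 8

for name, params_b, quant in [
    ("~21B @ Q4_K_M", 21, "Q4_K_M"),
    ("Gemma 3 12B @ Q6_K", 12, "Q6_K"),
    ("Gemma 3 27B @ Q3_XS", 27, "Q3_XS"),
    ("QwQ 32B @ Q2_K", 32, "Q2_K"),
]:
    print(f"{name}: ~{weight_gb(params_b, quant):.1f} GB of weights")
```

By this estimate a ~21B Q4, a 27B Q3 and a 32B Q2 all land in the same ~10-13 GB ballpark on a 16GB card, so the question really is which one gives the best quality per bit.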

  • TheCookingSenpai@lemmy.ml (OP) · 8 months ago

    I noticed that for Q4 and above too, with my sweet spot at Q6 if I manage to fit it. I am really confused about Q2-Q3 quants of models that are 2x+ the size of the Q4 ones. E.g. sometimes it seems Gemma 3 12B Q4 (or Q6) is better than Gemma 3 27B Q3_XS, and sometimes it seems the opposite.

    • hendrik@palaver.p3x.de · 8 months ago

      I think it's kind of hard to quantify "better" other than by measuring perplexity. With Q3_XS or Q2 it seems to be a step down. But you'd have to look closely at the numbers to compare 12B Q4 to 27B Q3_XS. I'm currently on mobile so I can't do that, but there are quite a few tables buried somewhere in the llama.cpp discussions… However, I'm not sure we have enough research on how perplexity measurements translate to "intelligence". They might not be the same thing. Idk. But you'd probably need to test a few hundred times, or do something like the LLM Arena, to get a meaningful result on how the models compare across size and quantization. (And I heard Q2 isn't worth it.)
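
      A minimal sketch of how perplexity is computed, assuming you load the model through Hugging Face transformers (llama.cpp ships its own perplexity tool, but the idea is the same): it's just the exponential of the mean per-token negative log-likelihood over some reference text. Lower is better, and running the same text through different quants of the same model shows roughly what the quantization costs.

      ```python
      import math

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Tiny model purely to keep the example runnable; swap in whatever
      # models/quants you actually want to compare.
      MODEL_ID = "gpt2"

      tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
      model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
      model.eval()

      def perplexity(text: str) -> float:
          # With labels == input_ids the model returns the mean per-token
          # negative log-likelihood as its loss, so exp(loss) is the perplexity.
          enc = tokenizer(text, return_tensors="pt").to(model.device)
          with torch.no_grad():
              out = model(**enc, labels=enc["input_ids"])
          return math.exp(out.loss.item())

      print(perplexity("The quick brown fox jumps over the lazy dog."))
      ```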