GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts (MoE) architecture but with a more compact parameter size. GLM-4.5-Air also supports hybrid inference modes, offering a “thinking mode” for advanced reasoning and tool use, and a “non-thinking mode” for real-time interaction. Users can control the reasoning behaviour with a reasoning-enabled boolean parameter. Learn more in our docs.

Blog post: https://z.ai/blog/glm-4.5

Hugging Face:

https://huggingface.co/zai-org/GLM-4.5

https://huggingface.co/zai-org/GLM-4.5-Air
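
For reference, here is a rough sketch of what toggling the hybrid mode looks like against an OpenAI-compatible chat endpoint. The endpoint URL and the exact name/shape of the reasoning flag below are assumptions, so check the provider docs for the real parameter:

```python
# Hedged sketch: toggling GLM-4.5-Air's "thinking mode" over an
# OpenAI-compatible chat API. The URL and the "reasoning" field below are
# assumptions for illustration; the real parameter name may differ.
import os
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint

payload = {
    "model": "zai-org/GLM-4.5-Air",
    "messages": [{"role": "user", "content": "Plan a three-step refactor of this module."}],
    # Assumed field: set to False for fast, non-thinking replies.
    "reasoning": {"enabled": True},
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},  # your key
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```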

  • RoadTrain@lemdro.id · 3 days ago

    How does GLM 4.5 (Air or regular) compare to more popular models? I picked its answer once on LMArena, but that’s been my only encounter with it.

  • doodlebob@lemmy.world · 9 days ago

    I’m currently using ollama to serve LLMs; what’s everyone using for these models?

    I’m also using Open WebUI, and ollama seemed the easiest (at the time) to use in conjunction with that.

    • brucethemoose@lemmy.world · 9 days ago

      ik_llama.cpp (and its API server) is the go-to for these big MoE models. Level1Techs just did a video on it, and check out ubergarm’s quants on Hugging Face: https://huggingface.co/ubergarm

      TabbyAPI (exllamav3 underneath) is great for dense models, or MoEs that will just barely squeeze onto your GPU at 3bpw. Look for exl3s: https://huggingface.co/models?sort=modified&search=exl3

      Both are massively more efficient than ollama’s defaults, to the point that you can run models with at least twice the parameter count ollama can, and they support more features too. ik_llama.cpp is also how folks are running these 300B+ MoEs on a single 3090/4090 (in conjunction with a Threadripper, Xeon, or EPYC, usually).
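
      To make the “big MoE on one GPU” setup concrete, here is a hedged sketch of launching the ik_llama.cpp server with the MoE expert tensors kept in system RAM. The binary path, quant filename, and override pattern are assumptions; check the model card and ik_llama.cpp docs for the flags recommended for your hardware:

      ```python
      # Hedged sketch: start ik_llama.cpp's llama-server with all layers offloaded
      # to the GPU except the MoE expert tensors, which stay in system RAM.
      # Paths/filenames are hypothetical; the -ot pattern should be checked against
      # the quant's model card.
      import subprocess

      subprocess.run([
          "./build/bin/llama-server",         # ik_llama.cpp server binary (path assumed)
          "-m", "GLM-4.5-Air-IQ4_KSS.gguf",   # hypothetical ubergarm-style quant file
          "-ngl", "99",                       # put every layer it can on the GPU...
          "-ot", "exps=CPU",                  # ...but override expert tensors to CPU/RAM
          "-c", "32768",                      # context length
          "--host", "127.0.0.1", "--port", "8080",
      ], check=True)
      ```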

        • doodlebob@lemmy.world · 8 days ago

          Just read the L1 post, and I’m just now realizing this is mainly for running quants, which I generally avoid.

          I guess I could spin it up just to mess around with it, but it probably wouldn’t replace my main model.

          • brucethemoose@lemmy.world · 8 days ago

            Just read the L1 post, and I’m just now realizing this is mainly for running quants, which I generally avoid.

            ik_llama.cpp supports special quantization formats incompatible with mainline llama.cpp. You can get better performance out of them than regular GGUFs.

            That being said… are you implying you run LLMs in FP16? If you’re on a huge GPU (or running a small model fast), you should be running sglang or vllm instead, not llama.cpp (which is basically designed for quantization and non-enterprise hardware), especially if you are making parallel calls.
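
            For contrast, here is a minimal sketch of the vLLM route for a model that fits fully in VRAM; the model id and tensor_parallel_size are placeholders for whatever you actually run:

            ```python
            # Hedged sketch: batched inference with vLLM (model id and parallelism
            # are placeholders). vLLM batches these prompts internally, which is
            # where it pulls ahead of llama.cpp-style servers under parallel load.
            from vllm import LLM, SamplingParams

            llm = LLM(
                model="google/gemma-3-27b-it",  # placeholder model id
                tensor_parallel_size=2,         # adjust to your GPU count
            )
            params = SamplingParams(temperature=0.7, max_tokens=256)

            outputs = llm.generate(
                ["Summarise MoE routing.", "Explain KV-cache paging."],
                params,
            )
            for out in outputs:
                print(out.outputs[0].text)
            ```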

            • doodlebob@lemmy.world · 7 days ago

              Yeah, I’m currently running the Gemma 27B model locally. I recently took a look at vLLM, but the only reason I didn’t want to switch is that it doesn’t have automatic offloading (seems that it’s a manual thing right now).

              • brucethemoose@lemmy.world · 7 days ago

                Gemma 3 in particular has basically no reason to run unquantized, since Google did a QAT (quantization-aware training) finetune of it. The Q4_0 is almost objectively indistinguishable from the BF16 weights. Llama.cpp also handles its SWA (sliding window attention) well (whereas, last I checked, vllm does not).
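
                If you want to try the QAT build, here is a hedged sketch of pulling it down with huggingface_hub; the repo id and filename are written from memory and may differ, so double-check the actual Gemma 3 QAT GGUF repos on Hugging Face:

                ```python
                # Hedged sketch: download Google's QAT Q4_0 GGUF of Gemma 3.
                # repo_id/filename are assumptions; verify them on Hugging Face first.
                from huggingface_hub import hf_hub_download

                path = hf_hub_download(
                    repo_id="google/gemma-3-27b-it-qat-q4_0-gguf",  # assumed repo id
                    filename="gemma-3-27b-it-q4_0.gguf",            # assumed filename
                )
                print(path)  # point llama.cpp / ollama at this file
                ```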

                vllm does not support CPU offloading well like llama.cpp does.

                …Are you running FP16 models offloaded?

                • doodlebob@lemmy.world · 7 days ago

                  omg, I’m an idiot. Your comment made me start thinking about things and… I’ve been using Q4 without knowing it… I assumed ollama ran the FP16 by default 😬

                  About vLLM: yeah, I see that you have to specify how much to offload manually, which I wasn’t a fan of. I have 4x 3090s in an ML server at the moment, but I’m using those for all AI workloads, so the VRAM is shared between TTS/STT/LLM/image gen.

                  That’s basically why I really want auto offload.

    • Alex@lemmy.ml · 9 days ago

      I’ve moved to using RamaLama, mainly because it promises to do the probing to get the best acceleration possible for whatever model you launch.

      • brucethemoose@lemmy.world · 9 days ago

        It looks like it just chooses a llama.cpp backend to compile, so technically you’re leaving a good bit of performance/model size on the table if you already know your GPU and which backend to choose.

        All this stuff is horribly documented though.