• Womble@lemmy.world
    8 months ago

    Look into quantised models (such as the GGUF format): these significantly reduce the amount of memory needed and speed up computation at the expense of some quality. If you have 16GB of RAM or more you can run decent models locally without any GPU, though your speed will be more like 1 word a second than ChatGPT speeds.
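
    To give a feel for why quantisation saves memory, here's a toy sketch in Python of the basic idea: store weights as small integers plus a scale factor instead of full floats. Real GGUF schemes (Q4_K and friends) are block-wise and considerably more sophisticated, so this is just an illustration of the principle, not how llama.cpp actually does it.

    ```python
    import numpy as np

    # Hypothetical 1M-parameter weight tensor in float32 (for illustration only).
    w = np.random.randn(1_000_000).astype(np.float32)

    # Symmetric 8-bit quantisation: map floats to int8 using a single scale factor.
    scale = np.abs(w).max() / 127
    q = np.round(w / scale).astype(np.int8)

    # Dequantise to recover an approximation of the original weights.
    w_hat = q.astype(np.float32) * scale

    print(f"float32 size: {w.nbytes / 1e6:.1f} MB")   # 4.0 MB
    print(f"int8 size:    {q.nbytes / 1e6:.1f} MB")   # 1.0 MB, a 4x reduction
    print(f"max abs error: {np.abs(w - w_hat).max():.5f}")
    ```

    The int8 copy is a quarter the size, and the worst-case rounding error is bounded by half the scale factor; that rounding error is the "some quality" you trade away. 4-bit formats push the same trade-off further.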