Definitely. It has some alignment, but it won’t outright refuse to do anything. It will sometimes add a note saying that what you’ve asked is maybe-kinda against the law, but will produce a great response regardless. It’s a 70B, so running it locally is a bit of a challenge, but for those who can, there is simply no other LLM you can run at home that even comes close. It follows instructions amazingly well, it’s very consistent, and it barely hallucinates. There is some special Mistral sauce in it for sure, even if it’s “just” a llama2-70b.
It’s not the model that would be sending telemetry - it’s the runtime you load it up in. Ollama is an open-source wrapper around llama.cpp, so (if you have enough patience) you could inspect the source code to be sure.

Regarding running it in a sandbox: you can, and it generally adds no tangible tokens-per-second overhead, but keep in mind that to give the model runtime (Ollama, vLLM and the like) access to your GPU you usually need some sandbox concessions, like PCIe passthrough for VMs or NVIDIA’s proprietary container runtime plugin for containers. From my measurements, there is zero performance difference between running a GPU-loaded model on bare metal, in a Docker container with the NVIDIA container runtime, or in a Proxmox VM with PCIe passthrough. The model executes on the GPU itself and barely touches the CPU (sampling and LoRAs are usually CPU operations).

vLLM does collect anonymized usage stats. Since it’s open source, you can actually see what’s being sent (spoiler: it’s pretty boring). As far as I know, Ollama has nothing like that. None of the open-source engines I know of send your full prompts or responses anywhere. That doesn’t mean they’ll stay that way forever, or that you should be any less vigilant though 👍
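If you want to check the vLLM side yourself: as far as I remember from their docs, the opt-out is just an environment variable, and vLLM also keeps a local copy of the payload it would report so you can read it. Rough Python sketch below - the env var names, the `~/.config/vllm/usage_stats.json` path, and the example model name are from memory, so double check them against the version you actually run:

```python
# Sketch: opt out of vLLM's anonymized usage stats and peek at what it
# would have reported. Env var names and the local stats path are from
# memory of the vLLM docs -- verify against your installed version.
import os
from pathlib import Path

os.environ["VLLM_NO_USAGE_STATS"] = "1"  # vLLM-specific opt-out
os.environ["DO_NOT_TRACK"] = "1"         # generic opt-out vLLM also honors

from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # or a local path
outputs = llm.generate(["Say hi"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)

# When stats collection is enabled, the payload is also written locally,
# so you can see exactly how boring it is:
stats_file = Path.home() / ".config" / "vllm" / "usage_stats.json"
if stats_file.exists():
    print(stats_file.read_text())
```

Point being: the telemetry story is entirely in the runtime’s hands, and with the open-source engines you can both read the code and flip it off yourself.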