• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • It’s not the model that would be sending telemetry - it’s the runtime you load it up in. Ollama is an open-source wrapper around llama.cpp, so (if you have enough patience) you can inspect the source code to be sure.

    Regarding running it in a sandbox: you can, and it generally adds no tangible overhead to tokens-per-second performance, but keep in mind that to give the model runtime (Ollama, vLLM and the like) access to your GPU you usually need some sandbox concessions, like PCIe passthrough for VMs or NVIDIA’s proprietary container runtime plugin. From my measurements there is zero performance difference between running a model loaded on a GPU on bare metal, in a Docker container with the NVIDIA container runtime, or in a Proxmox VM with PCIe passthrough. The model executes on the GPU itself and barely touches the CPU (sampling and LoRAs are usually CPU operations).

    vLLM does collect anonymized usage stats. Since it’s open source, you can actually see what’s being sent (spoiler: it’s pretty boring), and you can opt out - see the sketch below. As far as I know, Ollama has nothing like that. None of the open-source engines I know of send your full prompts or responses anywhere, though. That doesn’t mean they’ll stay that way forever, or that you should be less vigilant 👍
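
    For the vLLM opt-out, here’s a minimal sketch - it assumes the opt-out switches the vLLM docs describe (the VLLM_NO_USAGE_STATS / DO_NOT_TRACK environment variables and the ~/.config/vllm/do_not_track file) still work that way in your version, and the model name is just a placeholder:

```python
# Minimal sketch: opt out of vLLM's anonymized usage stats before the engine starts.
# Assumes the opt-out switches documented by vLLM (VLLM_NO_USAGE_STATS / DO_NOT_TRACK
# env vars and the ~/.config/vllm/do_not_track file) - verify against your version's docs.
import os
from pathlib import Path

# Either env var should disable the usage-stats reporter.
os.environ["VLLM_NO_USAGE_STATS"] = "1"
os.environ["DO_NOT_TRACK"] = "1"

# The file-based opt-out persists across runs.
opt_out = Path.home() / ".config" / "vllm" / "do_not_track"
opt_out.parent.mkdir(parents=True, exist_ok=True)
opt_out.touch()

# Import and start the engine only after the opt-out is in place.
from vllm import LLM

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # placeholder - any local model works
for out in llm.generate(["Say hello."]):
    print(out.outputs[0].text)
```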


  • Definitely. It has some alignment, but it won’t straight up refuse to do anything. It will sometimes add a note saying that what you’ve asked is kinda maybe against the law, but will produce a great response regardless. It’s a 70B, so running it locally is kind of a challenge, but for those who can run it there is simply no other LLM you can run at home that gets even close. It follows instructions amazingly well, it’s very consistent, and it barely hallucinates. There is some special Mistral sauce in it for sure, even if it’s “just” a llama2-70b.


  • The info in this thread is mostly incorrect - the error has nothing to do with the SD card you plug into the server.

    This error happens because HP had a bug in earlier versions of iLO where flash-memory wear levelling was not enabled. It results in a failed flash chip unless iLO was updated early on.

    The SD card in question is embedded - it’s NOT the one you plug in, but a chip soldered onto the motherboard. You can try formatting it (there is an HP support advisory on this; it requires sending a special XML script - see the sketch after this comment), but the chances of bringing it back to life are slim. Of the 10+ machines I’ve seen with these symptoms, only two were fine after a flash format. The proper fix would be to desolder and replace the chip on the motherboard…

    Part number for the chip: SDIN7DP2-4G

    Here’s the link to support advisory: https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04996097
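
    For reference, that XML goes in over iLO’s RIBCL scripting interface. Below is a rough Python sketch of how you could submit it, assuming your iLO 4 still accepts RIBCL scripts POSTed to the /ribcl endpoint over HTTPS (the interface HPE’s locfg-style tools use); the host and credentials are hypothetical, and the actual commands must be taken from the advisory linked above:

```python
# Rough sketch: push a RIBCL XML script to an iLO 4 over HTTPS.
# Assumes the iLO accepts scripts POSTed to /ribcl (the interface HPE's locfg-style
# scripting tools use). The actual format/repair commands must come from the advisory.
import requests
import urllib3

urllib3.disable_warnings()  # iLO typically presents a self-signed certificate

ILO_HOST = "10.0.0.50"      # hypothetical iLO address
ILO_USER = "Administrator"  # hypothetical credentials
ILO_PASS = "changeme"

# Paste the XML body from the HPE support advisory between the LOGIN tags.
RIBCL_SCRIPT = f"""<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="{ILO_USER}" PASSWORD="{ILO_PASS}">
    <!-- commands from the advisory go here -->
  </LOGIN>
</RIBCL>"""

resp = requests.post(
    f"https://{ILO_HOST}/ribcl",
    data=RIBCL_SCRIPT,
    headers={"Content-Type": "text/xml"},
    verify=False,
    timeout=60,
)
print(resp.status_code)
print(resp.text)  # iLO replies with one or more RIBCL RESPONSE blocks
```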



  • I’ve got a k80 and it’s… underwhelming.

    • Its CUDA support is very old - it tops out at CUDA 11.4. Nothing works with it out of the box; you have to compile everything from scratch.
    • The last driver that supports it is nvidia-driver-470, which isn’t even included in Ubuntu 22.04 anymore…
    • Under Debian, you can’t (I couldn’t…) install both cuda-drivers-470 and nvidia-driver version 470.
    • It doesn’t mix well with other modern cards like the 3090.
    • It idles at around 70W and, when in use, makes my R730 sound like an industrial vacuum cleaner.
    • It’s not even really a 24GB card - it’s two 12GB cards wearing a trench coat (see the sketch below).

    It does run 30B models tho. And it is cheap.
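
    If you want to see the “two cards in a trench coat” thing for yourself, here’s a small sketch using the nvidia-ml-py (pynvml) bindings - assuming the 470 driver and NVML are installed, a K80 box should list two separate ~12GB devices rather than one 24GB card:

```python
# Small sketch: list NVIDIA GPUs and their memory via NVML (pip install nvidia-ml-py).
# On a K80 this should report two ~12 GiB devices rather than one 24 GiB card.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name} - {mem.total / 1024**3:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```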