Wake me up when it works offline

“The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time.”
It’s available through ollama already. I am running the 8b model on my little server with its 3070 as of right now.
It’s really impressive for an 8b model.
Intriguing. Is that an 8GB card? Might have to try this after all.
Yup, 8GB card
It’s my old one from the gaming PC, after switching to AMD.
It now serves as my little AI hub and whisper server for home assistant
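For anyone wondering how an 8B model squeezes onto an 8 GB card: a back-of-the-envelope sketch, assuming roughly 4-bit quantization (common for ollama's default model downloads) plus a little overhead. The 4.5 bits/weight figure is a ballpark assumption, not a spec:

```python
# Back-of-the-envelope VRAM estimate for a quantized LLM.
# Assumes ~4 bits per weight plus some overhead for scales,
# KV cache, and activations (hence 4.5 bits as a rough average).

def model_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Approximate in-VRAM size of the weights in GB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

print(f"8B  @ ~4.5 bits: {model_size_gb(8):.1f} GB")   # ~4.5 GB, fits on an 8 GB card
print(f"70B @ ~4.5 bits: {model_size_gb(70):.1f} GB")  # ~39 GB, hence the 40 GB rule of thumb
```

Run it unquantized at fp16 and the 8B model alone would already need ~16 GB, which is why quantization matters so much at this card size.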
What the heck is whisper? I’ve been fooling around with hass for ages and haven’t heard of it, even after at least two minutes of searching. Is it OpenAI-affiliated hardware?
Whisper is a speech-to-text (STT) model that comes from OpenAI afaik, but it’s open source at this point.
I wrote a little guide on how to install it on a server with an Nvidia GPU and hardware acceleration, and how to integrate it into your Home Assistant afterwards. https://a.lemmy.dbzer0.com/lemmy.dbzer0.com/comment/5330316
It’s super fast with a GPU available, and I use those little M5 ATOM Echo microphones for this.
I’m running 3.1 8b as we speak via ollama, totally offline, and gave info to nobody.
I was able to set up the small one via Open WebUI.
It did ask to make an account but I didn’t see any pinging home when I did it.
What am I missing here?
Through meta…
That’s where I stop caring
Yo, this is big. Both in that it’s momentous, and holy shit that’s a lot of parameters. How many GB is this model?? I’d be able to run it if I had a few extra $10k bills lying around to buy the required hardware.
It’s around 800GB.
God damn.
That’s some thick model
Time to buy a Threadripper and 800GB of RAM so that I can run this model at 1 token per hour.
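The ~800 GB figure lines up with simple arithmetic, assuming the weights ship at 16-bit precision:

```python
# Rough weight-storage math for Llama 3.1 405B at fp16/bf16.
params = 405e9        # parameter count
bytes_per_param = 2   # 16 bits per weight
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.0f} GB")  # -> 810 GB, close to the ~800 GB quoted above
```

Quantize it down to 4 bits and it's still north of 200 GB, so it stays out of reach for a single consumer GPU either way.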
Kind of petty of Zuck not to roll it out in Europe due to the Digital Services Act… But also kind of weird, since it’s open source? What’s stopping anyone from downloading the model and creating a web UI for European users?
Did anyone get 70b to run locally?
If so, what hardware specs?
Afaik you need about 40GB of VRAM for a 70b model.
Can’t you offload some of it to RAM?
Same requirements, but much slower.
I guess it’s time to buy some RAM after spending a decade at 16GB.
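To put a rough number on "much slower": generation speed is approximately memory-bandwidth bound, because every weight has to be streamed through once per generated token. A crude sketch — the bandwidth figures below are ballpark assumptions, not measurements:

```python
# Crude tokens/sec estimate: token generation is memory-bandwidth
# bound, since each new token requires reading all the weights once.

def tokens_per_second(model_gb: float, bandwidth_gb_s: float) -> float:
    """Upper-bound decode speed given weight size and memory bandwidth."""
    return bandwidth_gb_s / model_gb

model = 40.0  # ~40 GB of quantized 70B weights
print(f"high-end GPU VRAM (~900 GB/s): {tokens_per_second(model, 900):.1f} tok/s")
print(f"dual-channel DDR4 (~50 GB/s):  {tokens_per_second(model, 50):.2f} tok/s")
```

So offloading to system RAM doesn't change the memory requirement, but it does cost you an order of magnitude or more in speed.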
That looks good on paper, but while I find ChatGPT good for encouraging critical thinking, I’ve found Meta’s products (Facebook and Instagram) to be sources of disinformation. That makes me have reservations about Meta’s intentions with LLMs. As the article says, the model comes pre-trained, so it’s mostly made up of information gathered by Meta.
Neither Meta nor anyone else is hand-curating their dataset. The fact that Facebook is full of grandparents sharing disinformation doesn’t impact what’s in their model.
But all LLMs are going to have accuracy issues because they’re 1) trained on text written by humans who themselves are inaccurate and 2) designed to choose tokens based on probability rather than any internal logic as to whether an answer is factual.
All LLMs are full of shit. That doesn’t mean they’re not fun or even useful in some applications, but you shouldn’t trust anything they write.
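That second point is worth seeing concretely. A toy sketch of probability-based decoding — the token scores here are made up for illustration, and real samplers layer top-k/top-p filtering on top of this:

```python
import math
import random

def softmax(logits: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    """Turn raw token scores into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {t: math.exp((v - m) / temperature) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token by probability -- no fact-checking involved."""
    probs = softmax(logits, temperature)
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

# "The capital of France is ..." -- hypothetical scores. The wrong
# answers still carry nonzero probability, so they can get sampled.
logits = {"Paris": 5.0, "London": 2.0, "Narnia": 0.5}
print(softmax(logits))
print(sample_next_token(logits))
```

Raising the temperature flattens the distribution, which makes the low-probability (often wrong) tokens even more likely to come out.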