- cross-posted to:
- technology@lemmy.ml
Sorry, but I can’t comply with that.
I’m trying the 20b weights in LM Studio now, and it has no issues providing plot summaries of movies/shows/episodes. Do you know what system prompt or other settings are needed to make it refuse like that?
Do you also see the reasoning part? I played with it yesterday, and yeah, about half of the reasoning is it deliberating whether it’s legal to answer the question.
I’ve heard people on Reddit say that results like the ones in the screenshots are caused by quantization; I was playing with the raw 20b.
Well, at least the red pill question is always refused, but I can’t reproduce the rest.
I tried the red pill prompt word for word, and it gave me a list of common red pill ideas. It also told me about the misconceptions behind each of them and why I shouldn’t believe them wholesale, but it didn’t refuse to answer the question.
I’m currently running it with a generic “you are a helpful assistant” system prompt and low reasoning; it’s possible that the refusal to answer some questions only happens at higher reasoning levels or with a different system prompt.
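For anyone who wants to compare setups, here’s a minimal sketch of that configuration against LM Studio’s local OpenAI-compatible server. It assumes the default port 1234; the model identifier and the “Reasoning: low” system-prompt hint are assumptions too, so check what LM Studio reports for your loaded model:

```python
# Minimal sketch: query a local LM Studio server via its OpenAI-compatible API.
# Assumptions: the LM Studio server is running on the default port 1234, and the
# 20b model is loaded under the identifier "openai/gpt-oss-20b" (check the model
# list in LM Studio; yours may differ).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[
        # The "Reasoning: low" line mirrors the low-reasoning setting described
        # above; bumping it to medium/high is one way to test whether refusals
        # only show up at higher reasoning levels.
        {"role": "system", "content": "You are a helpful assistant.\nReasoning: low"},
        {"role": "user", "content": "Summarize the plot of the movie Inception."},
    ],
)

print(response.choices[0].message.content)
```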
Yeah, it could be tied to reasoning. That’s where it decides whether it should answer.