Article: https://proton.me/blog/deepseek

Calls it “Deepsneak”, failing to make clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers, unlike most of the competing SOTA AIs.
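For anyone who wants to try that, here is a minimal sketch of running one of the distilled DeepSeek checkpoints locally with the Hugging Face transformers library. The checkpoint name is the published DeepSeek-R1-Distill-Qwen-14B; the prompt and generation settings are placeholders, not anything from the article.

    # Minimal local-inference sketch (assumes transformers, torch, and
    # accelerate are installed and the checkpoint fits in your memory).
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
        device_map="auto",  # place weights on whatever hardware is available
    )

    prompt = "Explain, briefly, why the sky is blue."
    result = generate(prompt, max_new_tokens=256, do_sample=True)
    print(result[0]["generated_text"])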

I can’t speak for Proton, but the last couple of weeks have shown some very clear biases coming out.

  • rumba@lemmy.zip · 4 hours ago

    You should try the comparison between the larger models and the distilled models yourself before you pass judgment. I suspect you’re going to be surprised by the output.

    All of these models generate possible outputs by sampling from noise. So if you ask the same model the same question five times across five different sessions, you’re going to get five different variations on an answer.
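    As a concrete illustration of that run-to-run variation (the model name and sampling settings here are assumptions, not from the thread), sampling the same prompt five times looks like this:

        # Same model, same prompt, five sampled runs: expect five variants.
        from transformers import pipeline

        generate = pipeline(
            "text-generation",
            model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
            device_map="auto",
        )
        prompt = "In one sentence, what caused the fall of Rome?"
        for run in range(5):
            out = generate(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
            print(f"run {run + 1}: {out[0]['generated_text']}")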

    You will find that the x-out-of-five quality score between models is not that significantly different.
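    A rough way to run that x-out-of-five comparison yourself (the model list and question are placeholders, and the grading step is left manual):

        # Ask each model the same question five times, then count by hand
        # how many of the five answers you would accept.
        from transformers import pipeline

        models = [
            "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
            "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
        ]
        question = "Summarize the causes of World War I in three sentences."

        for name in models:
            generate = pipeline("text-generation", model=name, device_map="auto")
            for run in range(5):
                answer = generate(question, max_new_tokens=120, do_sample=True)
                print(f"--- {name}, run {run + 1} ---")
                print(answer[0]["generated_text"])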

    For certain cases, larger models are advantageous: if you need a model to return a substantial amount of content, such as asking it to write a chapter of a story, a larger model will definitely give you better output and better variation.

    But if you’re asking it to help you with a piece of code or to explain some historical event, the average 14B model that will fit on any computer with a video card will give you a perfectly serviceable answer.
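    A back-of-the-envelope check on why a 14B model fits on an ordinary GPU (the 4-bit quantization and the overhead factor are assumptions for illustration):

        # 14B parameters at ~0.5 bytes each (4-bit quantized), with some
        # headroom for the KV cache and runtime, lands well under 12 GB.
        params = 14e9
        bytes_per_param = 0.5   # 4-bit quantized weights
        overhead = 1.25         # KV cache, activations, runtime buffers
        vram_gb = params * bytes_per_param * overhead / 1e9
        print(f"~{vram_gb:.1f} GB")  # roughly 8.8 GB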