Article: https://proton.me/blog/deepseek

Calls it “Deepsneak”, failing to make it clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers - unlike most of the competing SOTA AIs.

I can’t speak for Proton, but the last couple of weeks have shown some very clear biases coming out.

  • Ulrich@feddit.org · ↑8 ↓3 · 4 hours ago

failing to make it clear that the reason people love DeepSeek is that you can download it and run it securely on any of your own private devices or servers

    That’s not why. Almost no one is going to do that. That’s why they didn’t mention it.

  • lemmus · ↑10 ↓4 · 5 hours ago

They are absolutely right! Most people don’t give a fuck about hosting their own AI, they just download “Deepsneak” and chat…and it is unfortunately even worse than “ClosedAI”, cuz they are based in China. That’s why I hope DuckDuckGo will host DeepSeek on their servers (as it is very lightweight in resources, yes?), then we will all benefit from it.

    • FuzzyDog@lemmy.world · ↑1 · 7 minutes ago

      Serious question, how does them being based in China make them worse? I’d much rather have a foreign intelligence agency collect data on me than one in the country in which I live. It’s not like I’d get extradited to China.

  • febra@lemmy.world · ↑6 ↓1 · 6 hours ago

    Tutamail is a great email provider that takes security very seriously. Switched a few days ago and I’m very happy.

    • Ulrich@feddit.org · ↑1 · 9 minutes ago

      The article goes into great detail about how it’s different from OpenAI so, no.

    • cley_faye@lemmy.world · ↑9 ↓2 · 8 hours ago

The thing is, some people like Proton. Or liked, if this keeps going. When you build a business on trust and you start flailing like a headless chicken, people get wary.

      • Evotech@lemmy.world · ↑6 ↓1 · 8 hours ago

        A blog post telling people to be wary of a Chinese app running an LLM people know very little about is flailing?

        • Kbobabob@lemmy.world · ↑4 · edited · 6 hours ago

          Can’t it be run standalone without network?

They also published the weights, so we know more about it than some of the others.

          • Evotech@lemmy.world · ↑5 · 6 hours ago

            This focuses mostly on the app though, which is #1 on the app stores atm

            We know it’s censored to comply with Chinese authorities, just not how much. It’s probably trained on some fairly heavy propaganda.

  • rottingleaf@lemmy.world · ↑10 ↓1 · 9 hours ago

    Of course it’s biased. One company writing about another company is always biased. Imagine mods of one community collectively writing a post about another community, would the fact alone not be enough? Or admins of one instance about another.

It was common sense when I went online as a kid, writing all manner of awfully stupid things, memories of which still haunt me today.

    You’d be friendly and respectful with all people around you on the same forums and chats. But never ever would you believe them when they tell you what to think about something.

    We live in a strange time when instead of applying this simple rule people are looking for mechanisms like karma or fact-checking or even market share to allow themselves to uncritically believe some stuff.

    • JOMusic@lemmy.ml (OP) · ↑11 ↓3 · 9 hours ago

      This is true. However, Proton’s big sell is that they can be trusted to be truthful about what is safe and what is not safe for your privacy.

      I think given the context of the CEO’s personal bias towards current US Republicans, and given that those Republicans are aggressively anti-China, when Proton releases an article warning of a successful Chinese AI, and seemingly purposefully leaves out the part about how people are already running it securely, it starts raising some important questions about their alignment.

      • rottingleaf@lemmy.world · ↑6 · 8 hours ago

        Proton’s big sell is that they can be trusted to be truthful about what is safe and what is not safe for your privacy.

        Which somebody who can be trusted wouldn’t ever do.

        Businesses sell goods, services, deals, not truth.

        And privacy is not about trust.

        • sugar_in_your_tea@sh.itjust.works · ↑2 · 7 hours ago

Exactly. If a company can be trusted to provide privacy-respecting products, they’ll come with receipts to prove it. And if they claim something else respects or doesn’t respect privacy, I likewise expect receipts.

They did a pretty good job here, but the article only seems to apply to the publicly accessible service. If you download the model and run it through your runner of choice, you’re good. A privacy-minded individual probably wouldn’t trust new hosted services anyway.

    • Rogue@feddit.uk · ↑50 ↓1 · 15 hours ago

      The desperate PR campaign against deepseek is also very entertaining.

      • sugar_in_your_tea@sh.itjust.works · ↑3 · edited · 7 hours ago

        We’re playing with it at work and I honestly don’t understand the hype. It’s super verbose and would take longer for me to read the output than do the research myself. And it’s still often wrong.

        It’s cool I guess, and I’m still looking for a good use case, but it’s still a ways from taking over the world.

        • Rogue@feddit.uk · ↑6 · 6 hours ago

          The same is also true of ChatGPT. On the surface the results are incredibly believable but when you dig into it or try to use some of the generated code it’s nonsense.

          • sugar_in_your_tea@sh.itjust.works · ↑2 ↓1 · 6 hours ago

I certainly think it’s cool, but the further you stray from the beaten path, the more janky it gets. I’m sure there’s a good workflow here, it’ll just take some time to find it.

  • firadin@lemmy.world · ↑51 ↓10 · 18 hours ago

    Unsurprising that a right-wing Trump supporting company is now attacking a tech that poses an existential threat to the fascist-leaning tech companies that are all in on AI.

    • Rogue@feddit.uk · ↑12 ↓30 · edited · 15 hours ago

For clarity, the company did not explicitly support Trump. They simply said negative things about the “corporate Dems” and praised the new Republican party.

      • TheGrandNagus@lemmy.world · ↑10 · edited · 4 hours ago

They explicitly said the Republicans were on the side of the little guy. I probably don’t need to list the awful shit they’re doing that shows that is not the case.

        Saying they’re “fighting for the little guys” while at the same time shitting on their political opponent is a clear show of support.

        Now I don’t particularly care about the Proton CEO’s opinions. My opinion of CEOs is that they’re dickheads until proven otherwise. But when you publicly support this shit, and use your company’s official accounts to back yourself up, it becomes a lot more egregious in my mind. And even worse when they pretend they’re not actually doing that.

      • firadin@lemmy.world · ↑33 ↓2 · 14 hours ago

        Ah my mistake, they didn’t praise the fascist - just the fascist party. Big difference.

        • Rogue@feddit.uk · ↑4 ↓22 · 14 hours ago

          Exactly it’s totally different.

And they never specifically praised the vice president; they simply made some fucked-up association that his attendance at an event meant he was on side, contrary to pretty much every other indication that has ever been given.

              • sem@lemmy.blahaj.zone · ↑13 ↓2 · 12 hours ago

                You might not want to post apologia for a company defending a fascist party once, then doubling down, then trying to take it all back saying “it was a mistake to get political”

                • Rogue@feddit.uk · ↑1 · edited · 7 hours ago

                  You might not want to post apologia for a company defending a fascist party once, then doubling down, then trying to take it all back saying “it was a mistake to get political”

                  At no point did I state “it was a mistake to get political” that is a narrative entirely from your own imagination.

1. I made a sarcastic response to the opening comment. People didn’t notice the sarcasm. No worries, my sense of humour isn’t overly obvious and I refuse to litter \s marks everywhere, so I’m not too bothered if my comments are misinterpreted at times.

2. The opening commenter responds sarcastically.

3. I respond with another comment that’s absolutely dripping with sarcasm and even explicitly call out Proton’s bullshit. Somehow people still don’t note the sarcasm, and yet they understood that firadin’s comment was sarcastic. Odd, but again I’m not too bothered.

                  4. Somebody implies I haven’t understood a joke.

                  5. I try to delicately suggest I’ve been misunderstood. Again, I’m not too bothered.

                  6. Your response. Absolutely absurd.

                  At no point did I even defend the Nazis, at no point did I say or imply what you’re quoting me as saying.

                  The most ridiculous thing is you accuse me of “apologia” on the same day I repeatedly call out the inappropriateness of Proton’s stance because I got tired of reading so much “apologia”:

                  The solace I do take from this is that at least people are aware of the insanity of the hill Proton have decided to die on.

  • Tony Bark@pawb.social · ↑57 ↓7 · 18 hours ago

    DeepSeek is open source, but is it safe?

These guys are in the open source business themselves; they should know the answer to this question.

    • AstralPath@lemmy.ca · ↑26 ↓2 · 18 hours ago

      Has anyone actually analyzed the source code thoroughly yet? I’ve seen a ton of reporting on its open source nature but nothing about the detailed nature of the source.

FOSS only equals safe if the code has been audited in depth.

      • activ8r@sh.itjust.works · ↑1 · 7 hours ago

        A few of my friends who are a lot more knowledgeable about LLMs than myself are having a good look over the next week or so. It’ll take some time, but I’m sure they will post their results when they are done (pretty busy times unfortunately).

        I’ll do my best to remember to come back here with a link or something when I have more info 😊

        That said, hopefully someone else is also taking a look and we can get a few different perspectives.

      • Fubarberry@sopuli.xyz · ↑34 · 17 hours ago

        I haven’t looked into Deepseek specifically so I could be mistaken, but a lot of times when a model is called “open-source” it really is just open weights. You can download it or train other models off of it, but you can’t actually view any kind of source code on how the model works.

        An audit isn’t really possible.

        • L_Acacia@lemmy.ml · ↑7 · 8 hours ago

It is open-weight; we don’t have access to the training code or the dataset.

That being said, it should be safe for your computer to run DeepSeek’s models, since the weights are .safetensors files, which should block any code execution from code injected into the model weights.
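
For context on why the file format matters: PyTorch’s older .bin/.pt checkpoints are Python pickles, and unpickling can run arbitrary code, whereas .safetensors stores only raw tensor data. A minimal stdlib-only sketch of the pickle risk (the eval call here is a harmless stand-in for attacker code):

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct the object on load:
    # here, "reconstruction" means calling eval() on attacker-chosen text.
    def __reduce__(self):
        return (eval, ("2 + 2",))

payload = pickle.dumps(Malicious())

# Merely deserializing the payload executes the embedded call --
# no method on the object ever needs to be invoked by the victim.
result = pickle.loads(payload)
print(result)  # 4
```

This is why loading untrusted pickle-format weights is dangerous, and why a format that cannot encode code, like .safetensors, is the safer default.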

        • AstralPath@lemmy.ca · ↑6 ↓6 · 16 hours ago

          Then by default it should never be considered safe. Honestly, this “open” release… it makes me wonder about ulterior motives.

          • rumba@lemmy.zip · ↑17 · 14 hours ago

            That’s not quite it either.

The model itself is just a giant ball of math. They made a thing that can transform an English prompt through the collected knowledge of much of humanity a few dozen times and have it crap out a reasonable English answer.

            The open source part is kind of a misnomer. They explained how they cooked the meal but not the ingredient list.

            To complete the analogy, their astounding claim is that they managed to cook the meal with less fire than anyone else has by a factor of like 1000.

But the model itself is inherently safe. It’s not like it’s a binary that can carry a virus or do crazy crap. Even convincing it to give planned nefarious answers is frankly beyond our capabilities so far.

The dangerous part that Proton is looking at, and honestly is a given for any hosted AI, is the hosting server side of things. You make your requests to their servers, and then their servers put the requests into the model and return you the output.

If you ask their web servers for information about Tiananmen Square, they will block you.

You can, however, download the model yourself and run it yourself, and there aren’t any security issues there.

It will tell you anything you need to know about Tiananmen Square.

            • sem@lemmy.blahaj.zone · ↑2 · 12 hours ago

What are the minimum system requirements to run something like DeepSeek on your own computer in some kind of firewalled container?

              • utopiah@lemmy.world · ↑4 · 11 hours ago

There are plenty of ways, and they are all safe. Don’t think of DeepSeek as anything more than an (extremely large, bigger than AAA) videogame. It does take resources, e.g. disk space and RAM and GPU VRAM (if you have some), but you can use “just” the weights, and thus the executable might come from another project, an open-source one that will not “phone home” (assuming that’s your worry).

                I detail this kind of things and more in https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence but to be more pragmatic I’d recommend ollama which supports https://ollama.com/library/deepseek-r1

So, assuming you have a relatively entry-level computer, you can install ollama, then run “ollama run deepseek-r1:1.5b” and try it.

                • utopiah@lemmy.world · ↑2 · 10 hours ago

                  FWIW I did just try deepseek-r1:1.5b (the smallest model available via ollama today) and … not bad at all for 1.1Gb!

It’s still AI BS generating slop without “thinking” at all … but from the few tests I ran, it might be one of the “least worst” smaller models I’ve tried.

          • vala@lemmy.world · ↑1 · 14 hours ago

Seems reasonable to think part of the motivation is disrupting American tech like OpenAI.

    • fruitycoder@sh.itjust.works · ↑4 · 16 hours ago

They very much do not believe that open source means safe or private. They have tons of articles talking about the hurdles they have gone through to try and ensure their products are, and where and when they have failed to do so.

    • tabular@lemmy.world · ↑5 ↓2 · edited · 18 hours ago

      If I obfuscate my code such that it’s very difficult to understand then in practice it’s like proprietary software, even with an open source license.

Correct me if I’m wrong, but looking at the code isn’t enough to understand what a neural network will do (if these “AI” systems are using one; maybe they’re not).

      • Tony Bark@pawb.social · ↑13 · 18 hours ago

DeepSeek’s R1 was built entirely on a multi-stage reinforcement learning process, and they pretty much open-sourced that entire pipeline. By contrast, OpenAI has given us nothing but “look what we did” since GPT-3, and we’re supposed to trust them.

  • the_swagmaster@lemmy.zip · ↑30 ↓3 · 17 hours ago

I don’t think they are that biased. They say in the article that AI models from all the leading companies are not private and shouldn’t be trusted with your data. The article focuses on DeepSeek because that’s the new big thing. Of course, since it’s controlled by China, that makes data privacy even less of a thing that can be trusted.

    Should we trust Deepseek? No. Should we trust OpenAI? No. Should we trust anything that is not developed by an open community? No.

I don’t think Proton is biased; they are explaining the risks with DeepSeek specifically and mention that other AIs aren’t much better. The article is not titled “DeepSeek vs OpenAI” or anything like that. I don’t get why people bag on Proton when they are the biggest privacy-focused player that could (almost) replace Google for most people!

    • sugar_in_your_tea@sh.itjust.works · ↑2 · 7 hours ago

      Exactly.

      Also, none of the article applies if you run the model yourself, since the main risk is whatever the host does with your data. The model itself has no logic.

I would never use a hosted AI service, but I would probably use a self-hosted one. We are trying a few models out at work and we’re hosting them ourselves.

    • Bogasse@lemmy.ml · ↑13 · edited · 19 hours ago

Actually it seems quite fair-ish 🤷

      AI has the potential to be a truly revolutionary development, one that could drive advancement for centuries. But it must be done correctly. These companies stand to make billions of dollars in revenue, and yet they violated our privacy and are training their tools using our data without our permission. Recent history shows we must act now if we’re to avoid an even worse version of surveillance capitalism.

      Also from 2023 : https://proton.me/blog/ai-gdpr

    • JOMusic@lemmy.ml (OP) · ↑4 · 9 hours ago

      Given that you can download Deepseek, customize it, and run it offline in your own secure environment, it is actually almost irrelevant how people feel about China. None of that data goes back to them.

      That’s why I find all the “it comes from China, therefore it is a trap” rhetoric to be so annoying, and frankly dangerous for international relations.

      Compare this to OpenAI, where your only option is to use the US-hosted version, where it is under the jurisdiction of a president who has no care for privacy protection.

      • KingRandomGuy@lemmy.world · ↑3 · 7 hours ago

TBF you almost certainly can’t run R1 itself. The model is way too big and compute-intensive for a typical system. You can only run the distilled versions, which are definitely a bit worse in performance.
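
A rough back-of-the-envelope sketch of why (parameter counts are from public reporting on R1 and are approximate; this ignores KV cache and other runtime overhead):

```python
def approx_weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Memory needed just to hold the weights, in decimal GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Full DeepSeek-R1: ~671B parameters, released in FP8 (8 bits per weight)
full_r1 = approx_weight_memory_gb(671, 8)     # ~671 GB -- far beyond consumer hardware
# A 14B distill quantized to 4 bits fits on a single consumer GPU
distill_14b = approx_weight_memory_gb(14, 4)  # ~7 GB
print(full_r1, distill_14b)
```

Even before activations and context cache, the full model needs hundreds of gigabytes of fast memory, which is why only the distills are practical locally.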

Lots of people (if not most) are using the service hosted by DeepSeek themselves, as evidenced by DeepSeek’s ranking on both the iOS App Store and the Google Play Store.

    • rumba@lemmy.zip · ↑7 · 14 hours ago

Yeah, the article mostly makes legitimate points: if you’re contacting the chatbot in China, it is harvesting your data. Just like if you contact OpenAI or Copilot or Claude or Gemini, they’re all collecting all of your data.

I do find it somewhat strange that they only talk about DeepSeek-hosted models.

It’s absolutely trivial to just download the models and run them locally yourself, and then you’re not giving any data back to them. I would think that Proton would be all over that for a privacy scenario.

      • KingRandomGuy@lemmy.world · ↑1 · 7 hours ago

It might be trivial to a tech-savvy audience, but considering how popular ChatGPT itself is, and considering DeepSeek’s ranking on the Play and iOS App Stores, I’d honestly guess most people are using DeepSeek’s servers. Plus, you’d be surprised how many people naturally trust the service more after hearing that the company open-sourced the models. Accordingly, I don’t think it’s unreasonable for Proton to focus on the service rather than the local models here.

I’d also note that people who want the highest-quality responses aren’t using a local model, as anything you can run locally is a distilled version that is significantly smaller (at a small but non-trivial overall performance cost).

        • rumba@lemmy.zip · ↑1 · 2 hours ago

          You should try the comparison between the larger models and the distilled models yourself before you make judgment. I suspect you’re going to be surprised by the output.

All of these models are basically generating possible outcomes based on noise. So if you ask the same model the same question in five different sessions, you’re going to get five different variations on an answer.

You will find that an x-out-of-five score between models is not that significantly different.

For certain cases larger models are advantageous: if you need a model to return a substantial amount of content to you, or if you’re asking it to write you a chapter-length story, larger models will definitely give you better output and better variation.

But if you’re asking it to help you with a piece of code or explain some historical event to you, the average 14B model that will fit on any computer with a video card will give you a perfectly serviceable answer.