After my private Gmail address was leaked somewhere, I started receiving an enormous amount of spam in my inbox, which made me switch to Proton and a self-hosted SimpleLogin setup.

So I decided I might as well ditch Google entirely, for both private and work-related stuff.

While Proton already covers Mail and Calendar, I'm looking for alternatives to the following services:

  • Meet: I like the idea of starting a quick meeting by simply sending a link to a customer, who can join instantly. What would be equivalent software for that? I tried Mattermost, but it seems more like a Slack alternative, with invites, etc., and is overkill for my case. Revolt looks more like a Discord alternative.
  • Drive: In short, if possible, I'd prefer one consolidated place to access and edit files: docs, Excel files, PDFs, pictures, videos, etc. Is Nextcloud really the only option here, with the corresponding plugins for OnlyOffice and Memories (photos)? I tried running that on an Intel NUC, and it's slow as hell.
  • fluckx@lemmy.world · 1 year ago

    I think Nextcloud suffers more from carrying along legacy code than from PHP itself. There's tons of stuff written in PHP that performs well.

    It’s definitely not the right tool for every job, but it’s also not the wrong tool for every job, which goes for most programming languages. I’ve seen it work fine in high-traffic environments. It also carries a legacy reputation from PHP 5 and earlier. I haven’t kept up with it much in the last few years though.

    Which Nextcloud tasks do you think PHP is unsuited for? (Genuine question)

    • adONis@lemmy.worldOP · 1 year ago

      I mentioned it in another topic regarding kbin, which is also written in PHP.

      If you run a Node/Go/Rust server and hit an endpoint like /hello that returns a simple “hello world”, the already-running process just returns it. PHP, however, has to initialize and execute the whole framework on every request before returning that same “hello world”.

      So there’s definitely some overhead, which can be limited to some degree with caching (Redis, etc.).
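
      To make the comparison concrete, here’s roughly what the resident-process side looks like in Go (just a toy sketch; the port and handler are made up). The process starts once, so each request to /hello only pays for the handler itself, whereas a typical PHP setup re-runs the framework bootstrap per request.

      ```go
      // Toy sketch of the "resident process" model: whatever main() sets up
      // (config, connection pools, routes) is paid for once at startup,
      // not on every request.
      package main

      import (
          "fmt"
          "log"
          "net/http"
      )

      func main() {
          http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintln(w, "hello world") // per-request work is just this
          })
          log.Fatal(http.ListenAndServe(":8080", nil))
      }
      ```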

      • fluckx@lemmy.world · 1 year ago

        I can follow that. I think most applications that keep running (like a Go webserver) are more likely to cache certain information in memory, while in PHP you’re more inclined to take a linear approach to development. As in “this is all the stuff I need to bootstrap, fetch and run before I can answer the query”.

        As a result the fetching of certain information wouldn’t be cached by default and over time that might start adding up. The base overhead of the framework should be minimal.
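
        Something like this is what I mean by caching in memory because the process keeps running (a rough sketch; loadSettings and what it returns are made up):

        ```go
        // Rough sketch: expensive bootstrap data is loaded once per process and
        // reused by every later request, instead of being re-fetched on each
        // request the way a linear PHP script would.
        package main

        import (
            "fmt"
            "log"
            "net/http"
            "sync"
        )

        var (
            loadOnce sync.Once
            settings map[string]string
        )

        // loadSettings is hypothetical; imagine it doing the slow work
        // (config rows from the database, building routes, ...).
        func loadSettings() map[string]string {
            return map[string]string{"theme": "dark"}
        }

        func getSettings() map[string]string {
            loadOnce.Do(func() { settings = loadSettings() }) // runs once per process
            return settings
        }

        func main() {
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, getSettings()["theme"])
            })
            log.Fatal(http.ListenAndServe(":8080", nil))
        }
        ```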

        You (Nextcloud) are also subject to whoever is writing plugins. Is Nextcloud slow because it is slow, or because the 20 plugins people install aren’t doing any caching and a single page load is running 50 queries? That would be unrelated to NC, but I have no idea if there’s any plugin validation done.

        Then again, I could be talking completely out of my ass. I haven’t done much with NC except install it on my RPi 4 and be a bit discontented with its performance. At the same time the browser experience on the Pi was also a bit disappointing, so I never looked in depth at why it wasn’t performing very well. I assumed it was too heavy for the Pi. This was 4 years ago, mind you.

        My main experience with frameworks like Laravel and Symfony is that they’re pretty low-overhead. But the overhead is there (even if it is only 40ms).

        The main framework overhead would be negligible, but if you’re dynamically building the menu on every request using DB queries, it quickly stops being negligible.
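
        For the menu case specifically, the usual fix is just to memoize the result for a while instead of querying the database on every request. Very rough sketch (buildMenuFromDB and the TTL are invented):

        ```go
        // Rough sketch: the menu is rebuilt from the database at most once per
        // TTL; every other request reuses the cached result.
        package main

        import (
            "fmt"
            "sync"
            "time"
        )

        type menuCache struct {
            mu      sync.Mutex
            items   []string
            expires time.Time
            ttl     time.Duration
        }

        // buildMenuFromDB is hypothetical; imagine a handful of queries here.
        func buildMenuFromDB() []string {
            return []string{"Files", "Photos", "Calendar"}
        }

        func (c *menuCache) Get() []string {
            c.mu.Lock()
            defer c.mu.Unlock()
            if time.Now().After(c.expires) {
                c.items = buildMenuFromDB()
                c.expires = time.Now().Add(c.ttl)
            }
            return c.items
        }

        func main() {
            cache := &menuCache{ttl: time.Minute}
            fmt.Println(cache.Get()) // first call hits the "database"
            fmt.Println(cache.Get()) // within the TTL this reuses the cached slice
        }
        ```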

        • adONis@lemmy.worldOP · 1 year ago

          And what I forgot to mention: there’s also the fact that it’s not async, so when you’re fetching several things the delays add up even more.
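
          What I mean is the difference between doing independent lookups one after another versus in parallel. A rough Go sketch (the lookups and their timings are invented) of why the sequential version is the one where the delays add up:

          ```go
          // Rough sketch: three independent lookups run concurrently, so the total
          // wait is roughly the slowest one (~50ms) instead of the sum of all
          // three (~120ms) when run one after another.
          package main

          import (
              "fmt"
              "sync"
              "time"
          )

          // fetch stands in for an independent lookup (DB query, HTTP call, ...).
          func fetch(cost time.Duration) {
              time.Sleep(cost)
          }

          func main() {
              start := time.Now()
              costs := []time.Duration{30 * time.Millisecond, 40 * time.Millisecond, 50 * time.Millisecond}

              var wg sync.WaitGroup
              for _, cost := range costs {
                  wg.Add(1)
                  go func(cost time.Duration) {
                      defer wg.Done()
                      fetch(cost)
                  }(cost)
              }
              wg.Wait()

              fmt.Println("elapsed:", time.Since(start)) // ~50ms, not ~120ms
          }
          ```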

          • fluckx@lemmy.world · 1 year ago

            There are libraries that allow you to do stuff async in PHP (https://github.com/amphp/amp). It’s just not all async by default like JavaScript. A lot of common corporate languages right now, like Python, Java, C#, …, are synchronous rather than asynchronous by default, but allow you to run asynchronous code.

            It all has its place. I’m not saying making it async won’t improve some of the issues. But running a program that kicks off 15 async tasks might cause some issues on smaller systems like RPis, which don’t have the compute capacity of a laptop/desktop with 20 cores.

            Having said that, I can’t back that up at all :D.
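
            For what it’s worth, the usual way to keep that from flooring a small box is to cap how many of those tasks run at once, e.g. a buffered channel used as a semaphore. Untested sketch, with the limit and task count picked arbitrarily:

            ```go
            // Untested sketch: at most maxInFlight of the 15 tasks run at the same
            // time; the rest block until a slot frees up.
            package main

            import (
                "fmt"
                "sync"
                "time"
            )

            func main() {
                const maxInFlight = 4 // arbitrary cap; tune to the hardware
                sem := make(chan struct{}, maxInFlight)

                var wg sync.WaitGroup
                for i := 0; i < 15; i++ {
                    wg.Add(1)
                    go func(id int) {
                        defer wg.Done()
                        sem <- struct{}{}                 // take a slot (blocks while all are busy)
                        defer func() { <-sem }()          // give the slot back
                        time.Sleep(20 * time.Millisecond) // stand-in for real work
                        fmt.Println("done", id)
                    }(i)
                }
                wg.Wait()
            }
            ```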

            Thanks for your insights though. I appreciate the civil discussion :)

            • adONis@lemmy.worldOP · 1 year ago

              But also, most of these languages run as a compiled executable, while PHP has to go through a parser. Java is another exception with its VM, but you get my point.

              So, all in all… PHP has overhead in many ways. Sure, it might be negligible (gosh, I always have to look up the spelling of that word) in some situations, but in others it adds up so much that it makes it unsuitable for the task.

              Yeah, I like these kinds of convos where there’s no right or wrong… just "yes, but…"s.

              • fluckx@lemmy.world · 1 year ago

                I mean, there are plenty of languages that have this overhead.

                A base Laravel or Symfony installation shows a landing page in 30-50ms (probably).

                I’ve written (in a lightweight framework that no longer exists) a program to encrypt/decrypt strings using XML messages over HTTP requests.

                The whole call took 40-60ms. About 40-50% of that was the serializer that needed to be loaded. The thing was processing a few hundred requests per minute at peak, which is a lot more than the average Nextcloud installation. The server wasn’t really budging (and wasn’t exactly a beast either).

                I’m definitely not refuting that the JIT compiler adds overhead. But as in my example above, it’s also not like it’s adding 100ms or more per request.

                If you have a very high-performance app, to the point where you’re thinking about a different transport than HTTP because of throughput, you’re likely better off using something else.

                Circling back to the original argument, my feeling remains that the same codebase in Go or Rust wouldn’t necessarily perform a lot better if you only factor in PHP’s execution speed and the overhead of the JIT compiler.

                If you optimized it in Rust/Go, it likely would be faster. But I feel like the codebase could use some love/refactoring either way, and doing that is more difficult when you already have:

                • a large user base on various hardware
                • a large plugin community that would need to refactor all their plugins
                • the need for compatibility with all the stuff that is already there (files, databases, migrations)

                You don’t want to piss off your entire userbase. Now I feel like I’d like to try it myself and look at the source though :'). (I’m not saying I can do better though. It’s been a couple of years.)

                • adONis@lemmy.worldOP · 1 year ago

                  OK… valid point, and I also agree with the refactoring argument.

                  To mitigate the compatibility issue, they could release a new major version and let plugin developers rewrite their codebases (simultaneously or not) to make them compatible. That’s how WordPress plugins work; WP is a whole other mess and not the best example, but they also have a large userbase and plugins.

                  lol, I too was thinking about trying to kickstart a similar project in Go. I’m by no means a professional Go dev (former PHP dev, currently Node), but I think it shouldn’t be that hard.