I know that Lemmy is open source and it can only get better from here on out, but I do wonder if any experts can weigh in on whether the foundation is well written. Or are we building on top of 4 years' worth of tech debt?

  • BURN@lemmy.world · 1 year ago

    Definitely not a silver bullet, but it should stop the app from locking up when one thing gets overloaded. I’m sure they have their reasons for how it’s designed now, and I’m probably missing something that would explain it all.

    I’m still not familiar enough with how federation works to speak to how easy that would be. Unfortunately this has all happened just as I’ve started moving, and I haven’t gotten a chance to dive into the code like I’d want to.

    • Sir_Simon_Spamalot@lemmy.world · 1 year ago

      It’s also not the only solution for a high-availability system. Multiple monolith replicas behind a load balancer can be used as well.
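
      A minimal sketch of that idea in Rust (Lemmy’s own language); the replica addresses are made up, and in practice you’d put something like nginx or HAProxy in front rather than hand-roll this:

      ```rust
      use std::sync::atomic::{AtomicUsize, Ordering};

      /// Round-robin picker over identical monolith replicas.
      struct RoundRobin {
          backends: Vec<String>,
          next: AtomicUsize,
      }

      impl RoundRobin {
          fn new(backends: Vec<String>) -> Self {
              Self { backends, next: AtomicUsize::new(0) }
          }

          /// Each call hands back the next replica in turn.
          fn pick(&self) -> &str {
              let i = self.next.fetch_add(1, Ordering::Relaxed) % self.backends.len();
              &self.backends[i]
          }
      }

      fn main() {
          // Hypothetical addresses of three identical monolith instances.
          let lb = RoundRobin::new(vec![
              "10.0.0.1:8536".into(),
              "10.0.0.2:8536".into(),
              "10.0.0.3:8536".into(),
          ]);
          for _ in 0..6 {
              println!("route request to {}", lb.pick());
          }
      }
      ```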

      Also, a lot of people are self-hosting. In that case, microservices won’t give them any scaling benefit.

      • boeman@lemmy.world · 1 year ago

        The problem with scaling monoliths is that you are scaling everything, including the pieces that have lower usage. The huge benefit of going to microservices is that you only have to scale the pieces that need to be scaled. This allows horizontal scaling to use fewer compute resources, and it allows those compute resources to be spread out as well.
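
        A back-of-the-envelope sketch of that arithmetic; the service names and numbers are invented for illustration, not Lemmy’s actual components:

        ```rust
        use std::collections::HashMap;

        fn main() {
            // Invented per-service load figures (requests/second).
            let load_rps: HashMap<&str, f64> = HashMap::from([
                ("federation-inbox", 900.0), // the hot path
                ("image-processing", 40.0),
                ("email-notifications", 5.0),
            ]);
            let capacity_per_replica = 100.0; // assumed requests/s one replica can serve

            // Microservices: each piece scales to its own load...
            for (svc, rps) in &load_rps {
                let replicas = (rps / capacity_per_replica).ceil() as u32;
                println!("{svc}: {replicas} small replica(s)");
            }

            // ...while a monolith must replicate every code path to absorb the total.
            let total: f64 = load_rps.values().sum();
            let monolith = (total / capacity_per_replica).ceil() as u32;
            println!("monolith: {monolith} full replica(s), each carrying all code paths");
        }
        ```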

        A lot of the headaches can be removed by having an effective CI/CD strategy that is completely reusable with minimal effort.

        The last headache would be observability. There you’re stuck either living with the nightmare of firefighting problems across 100 services in possibly 10 locations, rolling your own platform from FOSS tools, or spending a whole lot of money on something like Honeycomb, Datadog, or New Relic.

        But I’m an SRE; I live my life for scalability and DevOps processes. I know I’m biased.

        • boonhet@lemm.ee · 1 year ago

          I find that the overhead from microservices is often worse than any savings from dropping a megabyte’s worth of unused machine instructions from a binary.

          When your microservices need to talk to each other (and I’m not sure how many services you could split out of Lemmy without them needing to talk to each other), you’re doing a bunch of HTTP requests that are way slower than just calling another function in your monolith.

          I see this at work every day. We run a distributed monolith because someone thought microservices would be a good idea, but we can’t actually separate everything, so it’s usual for an incoming API call to make 2-3 more calls internally. It can get so, so slow.
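
          The gap is easy to demonstrate. Here’s a self-contained Rust sketch comparing an in-process call against a loopback network round trip standing in for an internal hop; real HTTP adds parsing and serialization on top, so this understates the cost:

          ```rust
          use std::hint::black_box;
          use std::io::{Read, Write};
          use std::net::{TcpListener, TcpStream};
          use std::thread;
          use std::time::Instant;

          // In-process "service": just a function call.
          fn greet(name: &str) -> String {
              format!("hello, {name}")
          }

          fn main() {
              // Stand-in for a second microservice: a tiny echo server on loopback.
              let listener = TcpListener::bind("127.0.0.1:0").unwrap();
              let addr = listener.local_addr().unwrap();
              thread::spawn(move || {
                  for stream in listener.incoming() {
                      let mut s = stream.unwrap();
                      let mut buf = [0u8; 64];
                      let n = s.read(&mut buf).unwrap();
                      s.write_all(&buf[..n]).unwrap();
                  }
              });

              const N: u32 = 1_000;

              // Monolith path: plain function calls.
              let t = Instant::now();
              for _ in 0..N {
                  black_box(greet("lemmy")); // keep the call from being optimized away
              }
              let in_process = t.elapsed();

              // "Microservice" path: one network round trip per call.
              let t = Instant::now();
              for _ in 0..N {
                  let mut s = TcpStream::connect(addr).unwrap();
                  s.write_all(b"lemmy").unwrap();
                  let mut buf = [0u8; 64];
                  black_box(s.read(&mut buf).unwrap());
              }
              let over_network = t.elapsed();

              println!("{N} in-process calls:     {in_process:?}");
              println!("{N} loopback round trips: {over_network:?}");
          }
          ```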

          • boeman@lemmy.world · 1 year ago

            Overly chatty microservices are definitely an issue.

            Changing your mindset to a microservice-oriented architecture is not an easy feat; it’s something that took a lot of time for me to fully grasp (back in my architect/developer days). Yes, you gain overhead that will need to be compensated for. But when do the benefits outweigh the disadvantages?

            Here are some questions to ask during design: How much of this chattiness comes from tightly coupling these services? How much should microservices be talking to each other? Can you implement an event bus to handle that chatter between services?

            Designing an application using microservices but just replicating the monolith will give you scalability, but it will not give you resilience. What can you do to overcome that single point of failure? First, no more synchronous calls to these APIs: toss an event over the fence and move on. Degrade your application if the failure is something you can’t overcome, but don’t just stop the application because one API is no longer responsive.
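
            Roughly this fire-and-forget shape, sketched in Rust with an in-process channel standing in for a real event bus like Kafka or NATS (the Event type is invented for illustration):

            ```rust
            use std::sync::mpsc;
            use std::thread;
            use std::time::Duration;

            // An invented event type, for illustration only.
            #[derive(Debug)]
            enum Event {
                PostCreated { id: u64 },
            }

            fn main() {
                // In-process channel standing in for a real bus (Kafka, NATS, RabbitMQ, ...).
                let (bus, inbox) = mpsc::channel::<Event>();

                // A slow downstream consumer; it could even be down entirely
                // and the producer below would carry on unaffected.
                thread::spawn(move || {
                    for event in inbox {
                        thread::sleep(Duration::from_millis(50)); // simulate slow handling
                        println!("consumer handled {event:?}");
                    }
                });

                // The producer tosses events over the fence and moves on:
                // no synchronous call, no blocking on the consumer's health.
                for id in 0..5 {
                    bus.send(Event::PostCreated { id }).expect("bus is gone");
                    println!("published PostCreated {{ id: {id} }}, moving on");
                }

                // Give the consumer a moment before the process exits (demo only).
                thread::sleep(Duration::from_millis(400));
            }
            ```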

            Do you need everything to be a microservice? Probably not. The first thing you look at when moving from a monolith to a microservice architecture is what makes the most sense to move. How much work can be offloaded to background jobs (using something like Sidekiq)?

            How do you handle installs? How many packages do we now have to create for this application to work?

            There are a lot of questions that have to be answered before moving toward a microservice architecture. On top of that, there is a complete mindset change as to how the application works that needs to be accomplished. If you design your microservice application with a monolith application mindset, you’ll never realize any of the gains from making the move.

        • Rakn@discuss.tchncs.de · 1 year ago

          Scaling monoliths still works fine, though. Microservices are first and foremost an answer to an organizational problem, not a technical one. There is a very high chance that if you are doing microservices with fewer than 20 people, or let’s say even 50 people, you are doing it wrong.

          Microservices introduce a ton of overhead in engineering effort required, which needs to be balanced with the benefit they provide.

          Scaling shouldn’t be the first and only reason for doing microservices.

          And yes, I’ve worked in shops with a few thousand engineers where microservices made sense. But it does not make sense for Lemmy. If you look at how most of the large companies that do microservices started, it was by building a monolith and scaling it far beyond what Lemmy currently has to deliver.

          • redcalcium@c.calciumlabs.com · 1 year ago

            Do you have more details about this? Lemmy and lemmy-ui are stateless and can easily be scaled horizontally, but pictrs is not stateless: it uses a filesystem-based database (sled) with a lock, so it can only run as a single replica. Otherwise it either crashes (if the replicas run on the same host and can’t acquire the lock) or suffers severe data inconsistency (if the replicas run on separate hosts with separate sled database files).
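
            For illustration, a minimal sketch of that failure mode against the sled crate (the same embedded database pict-rs uses); the path is made up, and the second open here models a second pict-rs process trying to take the same database:

            ```rust
            // Sketch only; assumes `sled = "0.34"` in Cargo.toml.
            fn main() {
                // First handle succeeds and takes the database's exclusive lock,
                // like the first pict-rs replica on a host.
                let _db = sled::open("/tmp/pictrs-sled-demo").expect("first open failed");

                // A second open of the same path models a second replica starting up.
                // sled permits only one holder of the database, so this should error
                // out instead of handing back a second live handle.
                match sled::open("/tmp/pictrs-sled-demo") {
                    Ok(_) => println!("unexpected: got a second handle"),
                    Err(e) => println!("second replica can't start: {e}"),
                }
            }
            ```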

        • Sir_Simon_Spamalot@lemmy.world · 1 year ago

          I’m a software engineer myself, but not familiar with your field. How would your practice be applied to self-hosting? I’m assuming a bunch of people with their home servers wouldn’t want to just run OpenShift.

          • boeman@lemmy.world · 1 year ago

            Personally, I wouldn’t touch OpenShift. But as someone who has a Kubernetes cluster hosted at my house on a mixture of RPis, a NAS, and VMs, I’m not one to say what anyone else would do :).

            But that can be overcome; it’s all about designing your application for multiple different installs. You don’t have to have all your services running fully separately. You can containerize each service and deploy to an orchestration engine such as Kubernetes or Docker Swarm, or you can run the multiple endpoints on a single machine with an install package that keeps them together. It’s all about architecting toward resiliency, not toward a pattern for its own sake.

            Also, Google offers some very good books on SRE for free.