I hope this post is not too off topic. I thought it would be nice to collect the addresses of all the small self-hosted Lemmy instances (1–5 users).

  • potato@lolimbeer.com · 26 points · 1 year ago (edited)

    I stood up an instance on Linode for myself and a couple of friends.

    We’re lolimbeer.com!

    Which is based on an old meme that I still find hilarious.

    But then I got a registration application from someone who was excited about “lolis & beer”.

    I still like the domain, and I’ve had it sitting around for a minute, but I never really thought about it or read it that way before.

  • Ducks@ducks.dev · 16 points · 1 year ago (edited)

    Also on Kubernetes; hopefully this message works. This is my first time posting from my self-hosted instance.

    • tyfi@wirebase.org · 2 points · 1 year ago

      Is there a way to host with high availability? Or is that a Kubernetes feature?

      • kelvie@lemmy.ca · 3 points · 1 year ago

        K8s is just a huge abstraction over your cluster; the real question is whether the software/containers support HA.
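
        For example, HA at the k8s level mostly just means running several replicas of a stateless container behind a Service. A minimal sketch, assuming the stateless lemmy-ui frontend (the image tag here is illustrative, not from the thread):

          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: lemmy-ui
          spec:
            replicas: 2                  # k8s keeps two copies running and reschedules them if a node dies
            selector:
              matchLabels:
                app: lemmy-ui
            template:
              metadata:
                labels:
                  app: lemmy-ui
              spec:
                containers:
                  - name: lemmy-ui
                    image: dessalines/lemmy-ui:0.18.0   # illustrative tag
                    ports:
                      - containerPort: 1234
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: lemmy-ui
          spec:
            selector:
              app: lemmy-ui              # the Service load-balances across the replicas
            ports:
              - port: 1234

        None of that helps if the container itself can’t safely run in parallel (stateful pieces like pictrs are the hard part), which is the real constraint.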

        • tyfi@wirebase.org · 1 point · 1 year ago

          I’ve been meaning to test it for a while now, but have just been running VMs/Docker. Will check it out.

      • Philip@endlesstalk.org · 1 point · 1 year ago

        You can definitely have high availability without Kubernetes, but it’s easier (for me, at least) with Kubernetes.

          • Philip@endlesstalk.org · 1 point · 1 year ago

            For container orchestration, which is mostly what k8s provides, you could use Docker Swarm or Nomad. You could also use docker-compose with multiple replicas of the wanted container plus a load balancer to divide the load, as sketched below.

            In general I don’t think k8s/k3s is needed for hosting Lemmy yet, but since I already have a k3s setup, it is easier for me to use it.
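
            A rough sketch of that compose approach, in case it helps (the service names, image tag, and proxy config are illustrative assumptions, not from the thread):

              version: "3.8"
              services:
                lemmy-ui:
                  image: dessalines/lemmy-ui:0.18.0   # illustrative tag; any stateless web container works the same way
                  deploy:
                    replicas: 3                       # Compose v2 starts three copies; Docker's internal DNS round-robins between them
                lb:
                  image: nginx:alpine
                  ports:
                    - "80:80"                         # only the load balancer publishes a host port, so replicas don't conflict
                  volumes:
                    - ./nginx.conf:/etc/nginx/nginx.conf:ro   # an nginx.conf that proxy_passes to http://lemmy-ui:1234
                  depends_on:
                    - lemmy-ui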

    • blazarious@mylem.me · 1 point · 1 year ago

      How do you handle the sled state for pictrs with 2 nodes? I’ve been having some trouble with it.

      • Philip@endlesstalk.org · 2 points · 1 year ago

        I have only 1 container of pictrs running (with no scaling) and am using Longhorn for storage, so if the pictrs container moves to another node, Longhorn handles the volume for me.
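
        The storage side of that is just a PersistentVolumeClaim backed by the longhorn StorageClass, roughly like this (the claim name and size are assumptions, not from the thread):

          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: pictrs-data              # hypothetical name
          spec:
            accessModes:
              - ReadWriteOnce              # one pod at a time, matching the single unscaled pictrs replica
            storageClassName: longhorn     # Longhorn replicates the volume across nodes
            resources:
              requests:
                storage: 10Gi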

        • blazarious@mylem.me · 1 point · 1 year ago

          I see, thanks. What volume(s) are you persisting that way exactly? I mean the internal path that pictrs is using.

          • Philip@endlesstalk.org · 2 points · 1 year ago

            The internal path I’m persisting is /mnt, but I’m also running an older version of pictrs (0.3.1). I think the newer version uses a different path.

            I also needed to add the following for the pictrs container to work correctly:

              securityContext:
                runAsUser: 991    # pictrs runs as UID/GID 991 inside the image
                runAsGroup: 991
                fsGroup: 991      # makes the mounted volume group-writable for that GID
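
            For reference, the /mnt persistence piece would sit in the same pod spec, roughly like this (the container and claim names are assumptions):

              containers:
                - name: pictrs
                  image: asonix/pictrs:0.3.1       # the older version mentioned above
                  volumeMounts:
                    - name: pictrs-data
                      mountPath: /mnt              # where pictrs 0.3.x keeps its sled DB and media
              volumes:
                - name: pictrs-data
                  persistentVolumeClaim:
                    claimName: pictrs-data         # hypothetical PVC name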
            
  • terribleplan@lemmy.nrd.li · 6 points · 1 year ago (edited)

    lemmy.nrd.li - The domain is pronounced Nerd-ly. I welcome anyone who considers themselves a nerd, and any community someone feels like being nerdy about.

    I also have like 20+ other domains… Most of which are unused… I may have a problem.

  • gaf@borg.chat · 4 points · 1 year ago (edited)

    Three users at borg.chat. I was hoping to get established sooner, but 0.17 gave me endless trouble that I was never able to resolve.

    • Hangry@lm.helilot.com (OP) · 3 points · 1 year ago

      I tried installing an instance with Docker a few months ago, and it didn’t work. Last month’s version was far easier to install, IMO.

  • dreamfinder@dis.ney.ink · 4 points · 1 year ago

    I created dis.ney.ink to try to be the Lemmy version of the Disney subreddits (such as r/dvcmember and r/waltdisneyworld)… so far there are 3 users and ~9 subscribers.

  • russjr08@outpost.zeuslink.net · 4 points · 1 year ago

    https://outpost.zeuslink.net

    I started a habit a while back of naming any servers I run after figures from Greek mythology - my primary server is Zeus, but most domain forms of just “Zeus” are already taken. Similarly, I call the quasi-internal network that this server runs (since it’s a hypervisor) “ZeusNet”…

    The problem with that name is that “zeusnet.net” is redundant, which would irk me; I wanted something that still ends with the .net TLD (though my personal domain ends with .network).

    Thus, zeuslink.net is what I came up with, given that “link” can mean “network” and the combination isn’t as redundant as “…net.net”!

    Funnily enough, my instance was originally under the “colony” subdomain, which I quite liked. But unfortunately I didn’t set things up properly, due to how I have everything else set up, and I had already dipped far enough into federation that when I reset everything so it actually worked, the keys my server identified with no longer matched, which broke my ability to federate. That forced me to reset everything again under a completely different subdomain (I’m glad it was on a subdomain instead of the root domain, for that reason), since Lemmy doesn’t have a “self destruct” option like Mastodon’s (which, as far as I understand it, tells all connected instances “Hey, I’m going down - forget you knew me”).

    And that is the origin story of my domain, along with the subdomain. Thinking about it now, I should copy all of this as a standalone post on my instance 😅