  • ceiphas@feddit.org · 9 hours ago

    As an admin who installed Gentoo on every computer (>300) at a company producing windows (oh the irony), I can say: the overhead of maintaining one Gentoo system and synchronizing it company-wide is negligible. It was about 2 hours a week, less than I used to spend on Windows or Ubuntu.

    • Axolotl_cpp@lemmy.ml · 9 hours ago

      Is there a reason Gentoo is better? Like, does it have some particular thing other distros don’t have?

      • Fizz@lemmy.nz · 6 hours ago

        Everything is compiled from source. That allows a few advantages: hardware-specific optimisation, choosing which parts of the software you actually need (e.g. disabling Bluetooth support), and being able to patch and modify packages. Plus the Gentoo community is friendly, smart, and very helpful.
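(Editor’s note: for readers unfamiliar with Gentoo, the tailoring described above is driven by USE flags and compiler flags in Portage’s make.conf. A minimal illustrative fragment; the specific flags are examples, not recommendations:)

```shell
# /etc/portage/make.conf (illustrative fragment)

# Optimize for the exact CPU this machine runs on
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"

# Globally disable features you don't need; packages are then
# built without Bluetooth or systemd support
USE="-bluetooth -systemd"
```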

  • AnimalsDream@slrpnk.net · 10 hours ago

    I never did fully go through with installing Gentoo. I should give it a try for fun one of these days.

      • bobo@lemmy.ml · 20 hours ago

        It’s like food: peasants compile their packages; nobles have someone else do it for them.

        • ByteJunk@lemmy.world · 10 hours ago

          I think it’s the reverse.

          Peasants have to actually be productive so they don’t starve; they grab fast-food packages without thinking about it too hard, hoping they won’t wreck them.

          The nobles can afford to handpick at their leisure what goes into their systems. Their understanding of what their system needs, and their cooking skill, vary greatly though.

  • rtxn@lemmy.world (mod) · edited · 1 day ago

    I’ve tried, at least in theory, to migrate an entire university’s classroom computers to Linux. Even in the absence of technical limitations, the one obstacle I can’t overcome is entirely human. The living fossils (sorry, our esteemed tenured professors) refuse to change their habits because they need their Netbeans, they need their Eclipses, they need their Visual Studios. In a lot of ways, it feels like wayland-protocols’ governance: a single NACK from a stubborn fool kneecaps the entire project, and now the university gets to spend hundreds of thousands of euros upgrading the computer labs because the perfectly usable computers are juuuust barely outside Win11’s requirements.

    Sidebar: Back when I was a student at that same university, when Windows was small enough to allow dual-booting with Ubuntu from the same SSD, my Prog-1 teacher insisted on using Joe. He hated Vim, Emacs, and Nano with an equal passion.


    Edit: Just to give some validation to the people who need it, I should point out that Nix would be the ideal OS. We use Clonezilla to deploy a painstakingly prepared golden image of Windows with all applications and configuration changes before every semester. If a teacher forgets to request a piece of software (despite the five separate e-mails and posters around the university), we have to pray that it’s available either as an MSI or through winget; otherwise we have to manually remote into each affected computer (up to several hundred) and install it one by one.

    I would give my left testicle and half my liver for the ability to have a centrally hosted Nix config file that can be edited whenever and then deployed as the computers come online.
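(Editor’s note: a rough sketch of what such a centrally hosted NixOS config could look like. The repo URL is hypothetical, and `system.autoUpgrade` is just one of several ways to have machines pull it as they come online:)

```nix
# configuration.nix -- illustrative sketch; the flake URL is made up
{ pkgs, ... }: {
  # Software for every lab machine, declared in one central place
  environment.systemPackages = with pkgs; [
    netbeans
    git
  ];

  # Machines periodically fetch and apply the central config
  system.autoUpgrade = {
    enable = true;
    flake = "git+https://git.example.edu/lab/nixos-config";
  };
}
```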

    • khar21@lemmy.ca · 13 hours ago

      Where I live, a surprising number of professors use Linux on their school computers. Right now, one of the three professors I have already uses Linux.

    • d00phy@lemmy.world · edited · 1 day ago

      Something I dealt with a long time ago that has become a sort of rule for me, regardless of how true it might be: scientists and university researchers (the tenured ones) hate learning new programming languages and methods. There’s a decently good reason behind it, as far as I can tell.

      I used to support an archive of weather satellite data. Whenever a software stack upgrade came in, the scientists grumbled because it meant they had to revalidate large swaths of their data with the new versions to make sure all results were reproducible. One thing they never did, if they could help it, was change the base code they used to generate those results; that would mean much more work. And if they wanted to come up with a new subset of the data, they wrote it in what they knew. Usually Ada!

      Supporting this is how I learned that we couldn’t get an Ada compiler that would produce 64-bit binaries. The compiler binary itself was 64-bit, but that was it. From what I could learn, SGI had produced a 64-bit compiler for IRIX (I think; which, ironically, we were migrating from to x64 Linux clusters), and PGI gave up on theirs for “lack of consumer interest.”

      • rozodru@piefed.social · 24 hours ago

        Yeah, but from a university IT admin standpoint, NixOS would just be so much easier to maintain and set up. You’d literally just need one config file to slap on all the machines. Install the OS, clone the config, rebuild, walk away, and go to the next computer. Program causing issues and needs to be removed? Cool: edit the config, push it to the repo, clone it to all the machines, rebuild.
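(Editor’s note: that workflow, sketched as commands; the repo URL is made up:)

```shell
# First install: fetch the shared config and build the declared state
git clone https://git.example.edu/lab/nixos-config /etc/nixos
sudo nixos-rebuild switch

# Removing a problematic program later: edit the config centrally,
# then on each machine simply pull and rebuild
git -C /etc/nixos pull
sudo nixos-rebuild switch
```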

        • Possibly linux@lemmy.zip · edited · 12 hours ago

          I don’t know of any organization using it, though.

          Don’t reinvent the wheel: Ansible is well proven and works on many systems.

          You could also use FOG.
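(Editor’s note: for comparison, the Ansible equivalent of “make sure this software is on every machine” is a short playbook. The inventory group and package names here are placeholders:)

```yaml
# site.yml -- minimal sketch; group and packages are examples
- hosts: lab_machines
  become: true
  tasks:
    - name: Ensure classroom software is present
      ansible.builtin.package:
        name:
          - git
          - openjdk-17-jdk
        state: present
```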

        • rtxn@lemmy.world (mod) · edited · 23 hours ago

          Install the OS, clone the config, rebuild, walk away and go to the next computer.

          Honestly, I’d automate it to be even fewer operations. The Windows process is already down to only four keystrokes, and three of them are just to boot into PXE. The fourth is just a pause to make sure every computer has booted into Clonezilla (Debian preloaded with the cloning software and my own scripts, pulled from a TFTP server) before they start pulling the Windows image and the network becomes saturated.

      • rtxn@lemmy.world (mod) · edited · 24 hours ago

        If it ever comes to pass, there will be an extensive evaluation to determine which tool is best suited for the job and the environment. The Prime Directive applies: we must not disrupt classes that are in progress or about to start unless they specifically ask for something.

        Support for atomic updates is one feature that I won’t compromise on, and while Ansible will definitely be part of the toolkit (on that note: fuck WinRM, all my homies hate WinRM), its idempotent model on its own is not enough to guarantee disruption-free deployments. If the process fails for any reason, the system must roll back to its last functional state. I don’t know if Nix can do that, but when it becomes relevant (so probably never in my professional capacity), I will find the right tool.

        (for the record, that is not my downvote)
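(Editor’s note: for what it’s worth, this is the part NixOS does handle natively. Every rebuild produces a new “generation”, the switch is atomic at the profile/bootloader level, and older generations remain selectable at boot. Roughly:)

```shell
# Build and activate a new system generation (atomic switch)
sudo nixos-rebuild switch

# If the new generation misbehaves, revert to the previous one
sudo nixos-rebuild switch --rollback
```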

          • rtxn@lemmy.world (mod) · 17 hours ago

            Yes, and that is one of the tools that would be evaluated. My immediate problem is that it requires a working OS to roll back to the last filesystem snapshot if the configuration change (which is still not atomic) is interrupted.

            The area where filesystem-level snapshots would be amazing is the /home partition, whenever a teacher asks for the computers to be cleaned before an exam.

            • WhyJiffie@sh.itjust.works · 16 hours ago

              Maybe the snapshot could be rolled back by a PXE-bootable system. As for the second part, btrfs can do snapshots per subvolume, so if you could create a subvolume on user creation, that could work.
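(Editor’s note: assuming /home is its own btrfs subvolume, with a /snapshots directory on the same filesystem, the exam-cleanup reset could look like this sketch; paths are illustrative:)

```shell
# Once, at semester start: keep a read-only known-good copy
btrfs subvolume snapshot -r /home /snapshots/home-clean

# Before an exam: discard the current state, restore the clean copy
btrfs subvolume delete /home
btrfs subvolume snapshot /snapshots/home-clean /home
```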

    • Ŝan@piefed.zip · 1 day ago

      It’s no different in any corporate environment where the business is large enough to have a separate ops team. A good ops manager can kill any R&D/dev new-technology project by being creative with the costs to train up the team, put in all of the failsafes to maintain RTO & RPO, acquire new support contracts, and generally just run the proposed change.

      All you devs having trouble with management forcing you to use AI: you’re clearly not leveraging your ops management.

  • Skyline969@lemmy.ca · 1 day ago

    Monday rolls around, they’ve finished like four of them. “Why won’t this kernel work?! NO, for the last time I’m not using genkernel! It’ll be a bloated mess.”

    • Jankatarch@lemmy.world · 9 hours ago

      Tbf, Gentoo lets you pool the computing power of multiple computers on a network to compile, for exactly this reason.

      With a thousand cores, all the kernels can compile just-in-time :3
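(Editor’s note: the feature being referred to is distcc integration in Portage, which farms compile jobs out to helper hosts. An illustrative fragment; the job count is an example, and helper machines are listed in /etc/distcc/hosts:)

```shell
# /etc/portage/make.conf (fragment)
FEATURES="distcc"     # let Portage hand compile jobs to distcc
MAKEOPTS="-j16"       # total parallel jobs across all helper hosts
```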

  • metoosalem@feddit.org · 1 day ago

    Would love to do this, but I don’t think all those niche specialized apps will run on Linux. They barely function under Windows as it is.