I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I’ll try my best to answer any questions here, but I hope others in the community will contribute too!

  • SineIraEtStudio@midwest.social

    Mods, perhaps a weekly post like this would be beneficial? It would lower the barrier to entry with some available support, and help keep converts.

      • Arthur Besse@lemmy.mlM

        Ok, I just stickied this post here, but I am not going to manage making a new one each week :)

        I am an admin at lemmy.ml and was actually only added as a mod to this community so that my deletions would federate (because there was a bug where non-mod admin deletions weren’t federating a while ago). The other mods here are mostly inactive and most of the mod activity is by me and other admins.

        Skimming your history here, you seem alright; would you like to be a mod of /c/linux@lemmy.ml ?

        • Cyclohexane@lemmy.mlOPM

          Please feel free to make me a mod too. I am not crazy active, but I think my modest contributions will help.

          And I can make this kind of post on a biweekly or monthly basis :) I think weekly might be too often since the post frequency here isn’t crazy high

        • d3Xt3r@lemmy.nzM

          Thanks! Yep, I mentioned you directly seeing as all the other mods here are inactive. I’m on c/linux practically every day, so happy to manage the weekly stickies and help out with the moderation. :)

  • vort3@lemmy.ml

    How do symlinks work from the point of view of software?

    Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.

    Whenever I open the symlink, does the software (player) understand «oh, this file seems like a symlink, I should go and open the original file», or is it filesystem-level stuff, where the software (player) basically has no idea whether the file I’m opening is a symlink or the original movie.mp4?

    Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?

    Is there a rule of thumb to predict how software behaves when dealing with symlinks?

    I just don’t grok symbolic links.

    • Cyclohexane@lemmy.mlOPM

      A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It’s possible for software not to follow the symlink (either intentionally or not).

      So your sync software has to actually be able to follow symlinks. I’m not familiar with how gdrive and similar solutions work, but I know this is possible with something like rsync.
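
      For what it’s worth, here’s a minimal sketch of both behaviors with rsync (the paths are made up, the flags are real rsync options):

      # make a symlink in home pointing at the real file
      ln -s ~/Downloads/movie.mp4 ~/movie.mp4

      # copy symlinks as symlinks (links whose targets stay behind will dangle on the other end)
      rsync -av --links ~/sync-folder/ /mnt/backup/

      # follow symlinks and copy the real file contents instead
      rsync -av --copy-links ~/sync-folder/ /mnt/backup/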

      • teawrecks@sopuli.xyz

        An application can know that a file represents a soft link, but it doesn’t need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will just work™ without it needing to do anything special.

        It is possible for the software to not follow a soft symlink intentionally, yes (if they don’t follow it unintentionally, that might be a bug).

        As for hard links, I’m not as certain, but I think these need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can’t tell the difference.
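
        You can see the difference yourself with ls -li, which prints inode numbers (the file names here are made up):

        touch file.txt
        ln file.txt hard.txt     # hard link: another directory entry for the same inode
        ln -s file.txt soft.txt  # soft link: a tiny file that just stores the path "file.txt"
        ls -li file.txt hard.txt soft.txt  # file.txt and hard.txt show the same inode number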

      • vort3@lemmy.ml

        So I guess it’s something like pressing ctrl+c: most software doesn’t specifically handle this hotkey so in general it will interrupt a running process, but software can choose to handle it differently (like in vim ctrl+C does not interrupt it).

        Thanks.

        Fun fact: pressing X (the close button) on a window does not mean your app is closed; it just sends a signal that you wish to close it, and your app can choose what to do with that signal.

    • 0xtero@beehaw.org

      A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) referencing another file or directory in the system. It’s more or less like a Windows shortcut.

      If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink continues to point to the non-existing file/directory. Symlinks can point to files or directories regardless of volume/partition (hardlinks can’t).

      Different programs treat symlinks differently. The majority of software just treats them transparently and acts like it’s operating on a “real” file or directory. Sometimes this has unexpected results when they try to determine what the previous or current directory is.

      There’s also software that needs to be “symlink aware” (like shells) and identify and manipulate them directly.

      You can upload a symlink to Dropbox/Gdrive etc. and it’ll appear as a normal file (probably with a very small filesize), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it’s preserved as a symlink when restored. Most backup software is “symlink aware”.

    • bizdelnick@lemmy.ml

      Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file with that path (or returns an error if it’s unable to do so for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink path.

      To determine how some specific software handles symlinks, read its documentation. It may have settings like “follow symlinks” or “don’t follow symlinks”.
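
      From a shell you can also poke at symlinks directly with standard tools (reusing the movie.mp4 example from the question):

      [ -L ~/movie.mp4 ] && echo "it's a symlink"  # test a path without following it
      readlink ~/movie.mp4      # print the target path stored in the link
      readlink -f ~/movie.mp4   # resolve the whole chain to a final absolute path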

    • the16bitgamer@lemmy.world

      ELI5: when a computer stores something like a file or a folder, it needs to know where it lives and where its contents are stored. Normally, where a file or folder lives is the same place as where its contents are. But there are times when a file may live in one place and its contents are elsewhere. That’s a symlink.

      So for your video example, the original video is located in Downloads, so the video file will say: I am movie.mp4, I live in Downloads, and my contents are in Downloads. While the symlink says: I am movie.mp4, I live in home, and my contents are in Downloads over there.

      For a video player, it doesn’t care whether the file and the content are in the same place; it just needs to know where the content lives.

      Now, how software treats a symlink isn’t absolute. For example, if you have 2 PCs synced with cloud storage, and both Downloads and home are being synced between your 2 PCs, your cloud storage will look at the symlink, access the content from PC 1, and put your movie.mp4 in PC 2’s Downloads and home. It puts the contents in both places on PC 2 since, to it, the results are the same. One could make software sync without breaking the symlink, but it depends on the developer and the scope of the software.

    • Ramin Honary@lemmy.ml

      Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?

      Others have answered well already; I’ll just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically “dereference” the link, meaning it will detect a symlink and jump to the place where the symlink is pointing.

      A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply allows the operating system to dereference the file path for it.

      But some apps that work on directories and files together (like “find”, “tar”, “zip”, or “git”) do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the “find” command to list only symlinks without dereferencing them: find -type l

    • wolf@lemmy.zip

      Symlinks are fully transparent to any software that just opens the file, etc.

      If the software really cares about this (like file managers do), it can simply ask the Linux kernel for additional information, like what type of file it is.

    • bloodfart@lemmy.ml

      It’s a pointer.

      E: Okay, so someone downvoted “it’s a pointer”. Here goes: both hard links and symbolic links are pointers.

      The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem’s list of shit.

      That location in the filesystem’s list of shit is also a pointer.

      So like if you have /var/2girls1cup.mov, and you click it, the os looks in the file system and sees that /var/2girls1cup.mov means 0x123456EF and it looks there to start reading data.

      If you make a symlink to /var/2girls1cup.mov in /bin called “ls” then when you type “ls”, the os looks at the file in /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the file system and sees that it’s at 0x123456EF and starts reading data there.

      If you made a hard link in /bin called ls it would be a pointer to the location on the block device, 0x123456EF. You’d type “ls” and the os would look in the file system for /bin/ls, see that /bin/ls means 0x123456EF and start reading data from there.

      Okay but who fucking cares? This is stupid!

      If you made /bin/ls into /var/2girls1cup.mov with a symlink, then you could use normal tools to work with it: look at where it points, its attributes, etc., and delete just the link, or fully follow (dereference) the link and delete all the links in the chain, including the last one, which is the filesystem’s pointer to 0x123456EF called /var/2girls1cup.mov in our example.

      If you made /bin/ls into a hardlink to 0x123456EF, then when you did stuff to it the os wouldn’t know it’s also called /var/2girls1cup.mov, and when /bin/ls didn’t work as expected you’d have to diff the output of mediainfo on both files to see that they’re the same thing, then look at where on the hard drive /var/2girls1cup.mov and /bin/ls point and compare them to see: oh, someone replaced my ls with a shock video using a hard link.

      When you delete the /bin/ls hardlink, the os deletes the entry in the file system pointing to 0x123456EF and you are able to put normal /bin/ls back again. Deleting the hard link wouldn’t actually remove the data that comprises that file off the drive because “deleting” a “file” is just removing the file systems record that there’s something there to be aware of.

      If instead of deleting the /bin/ls hardlink, you opened it up and replaced the video portion of its data with the music video to never gonna give you up, then when someone tried to open /var/2girls1cup.mov they’d instead see that music video.

      If, that is, the file wasn’t moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the os is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the os does that when you get done modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something, and only you will experience Astley’s dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to, and you will have played yourself.

      Okay with all that said: how does the os know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes though, like with “ls” or “rm” you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.

      Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.
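
      In that spirit, here’s a throwaway experiment you can run in /tmp that shows the difference (everything below is made up and harmless):

      mkdir /tmp/linkdemo && cd /tmp/linkdemo
      echo "original data" > file
      ln file hard          # hard link to the same data
      ln -s file soft       # symlink that just stores the path "file"
      rm file
      cat hard              # still prints "original data": the data outlives the name
      cat soft              # fails: the path it points to is gone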

  • Blizzard@lemmy.zip

    Why do programs install somewhere instead of asking me where to?

    EDIT: Thank you all, well explained.

    • Julian@lemm.ee

      Someone already gave an answer, but the reason it’s done that way is because on Linux, generally, programs don’t install themselves - a package manager installs them. Windows (outside of the Windows Store) just trusts programs to install themselves and include their own uninstaller.
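
      On a Debian/Ubuntu system, for example, you can ask the package manager exactly where a package put its files (htop is just an example package):

      sudo apt install htop   # the package manager resolves and places every file
      dpkg -L htop            # list each installed file and its location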

    • Bitrot@lemmy.sdf.org

      Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden) macOS.

      If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.

      • krash@lemmy.ml

        Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.

    • shadowintheday2@lemmy.world

      You install program A; it needs and installs libpotato. Then later you install program B, which depends on libfries, and libfries depends on libpotato. However, since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.

      In Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.

      Instead of making you install everything you need to run something complex, the package manager does this for you and keeps track of where files are,

      and each package manager/distribution has an idea of where files should be stored.
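
      You can watch the package manager do this reasoning on a Debian-based system, for instance (vlc is just an example package):

      apt-cache depends vlc   # show the libraries vlc needs (its “libpotato”s)
      apt-get install -s vlc  # -s simulates: prints what would be installed, installs nothing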

    • penquin@lemm.ee

      I wish every single app installed in the same directory. Would make life so much easier.

        • Ramin Honary@lemmy.ml

          They do! /bin has the executables, and /usr/share has everything else.

          Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments. It’s a user-friendly front end to one or more executables in /usr/bin, presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution’s package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different “apps” launch a single executable, each one using different CLI arguments to give the appearance of different apps.

          The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed by multiple sources, and you might wonder “why does Gnome shell show me OpenTTD twice?”

          For end users who install apps from multiple other sources besides the default app store, there is no easy solution, no one agreed-upon algorithm to keep things easy. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best solution, which is automated package management.
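
          To give a flavor, here’s what a minimal .desktop file might look like (the name and Exec line are invented for illustration):

          [Desktop Entry]
          Type=Application
          Name=My Player
          Exec=/usr/bin/mplayer %f
          Icon=multimedia-player
          Categories=AudioVideo;Player;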

        • penquin@lemm.ee

          Not all. I’ve had apps install in opt, flatpaks install in var out of all places. Some apps install in /etc/share/applications

          • teawrecks@sopuli.xyz

            In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files, (while .local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.

            • penquin@lemm.ee

              OK, that was wrong. I meant usr/share/applications. Still, more than one place.

              • teawrecks@sopuli.xyz

                The actual executables shouldn’t ever go in that folder though.

                Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.

                That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder, but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.

    • Max-P@lemmy.max-p.me

      Expanding on the other explanations. On Windows, it’s fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.

      On Linux, the package manager manages all of that. So if say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution’s package manager manages everything and mostly all from source code, you get a version of the app specifically compiled for that version of GTK the distribution provides.

      So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it’s not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS, which roughly defines where things should be. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that’s usually /opt.

      There are advantages and inconveniences to both methods. The Linux way has the advantage of being able to update libraries for all apps at once; it also reduces clutter, and things are generally more organized. You can guess where an icon file will be located most of the time, because they all go to the same place, usually with a naming convention as well.

    • Possibly linux@lemmy.zip

      Because dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.

      The way you install software is through your distro’s package manager or Flatpak.

    • bloodfart@lemmy.ml

      different strokes.

      windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason. in this case, there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way, so the installer would make a guess and give the user a chance to change it.

      if that sounds stupid, it is. no one writes in assembly anymore, they target the OS and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program “should” be installed to already.

      but it didn’t used to be like that in the PC world! used to be your computer wasn’t a fixed-purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly, and even different DOS environments which a person could use to run programs under. Hard disks weren’t disks inside the machine, but big beige external disks that you’d plug up, set beside the computer and access after booting. in that setup, where a programmer targeted DOS (if they cared about the execution environment at all and didn’t just write for the processor), it made sense to ask where someone was gonna want to install their software, and to what extent they’d even want to start dirtying up the media they paid good money for with some knucklehead’s weird files from some goofy program on a stack of floppy disks.

      linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it’s where the os will look when you try to run something, to see if it can run any program by that name. if a program isn’t installed in $PATH, then when you type its name in and hit enter, the computer won’t know what the hell you’re talking about and you’ll have to type its whole-ass location out and hit enter.

      Why didn’t unix systems that linux imitates ask you where to install stuff? because usually it wasn’t your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.

      so the assumption was that you’d have a variable in your user environment that would say where things were installed but not that you’d have the ability to change it or even install things.

      so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something’s installed at all?

      even under linux it can be useful to do either. installing outside of $PATH keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious-business programs that demand maximum performance on faster media.

      so why the hell won’t linux systems give you the option of installing in a specific location or outside of $PATH altogether?

      they will, but unlike windows, they don’t ask you. unless you specifically ask to do that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly you gotta dig into your package manager and packaging system. sometimes you unzip a package and change a line in a file then zip it back up and install from your modified version.

  • Godort@lemm.ee

    Maybe not a super beginner question, but what do awk and sed do and how do I use them?

    • mumblerfish@lemmy.world

      This is 80% of my usage of awk and sed:

      “ugh, I need the 4th column of this print out”: command | awk '{print $4}'

      Useful for getting pids out of a ps command you applied a bunch of greps to.

      “hm, if I change all ‘this’ to ‘that’ in the print out, I get what I want”: command | sed "s/this/that/g"

      Useful for a lot of things, like “I need to change the urls in this to that” or whatever.

      Basically the rest I have to look up.

    • harsh3466@lemmy.ml

      If you’re gonna dive into sed and awk, I’d also highly recommend learning at least the basics of regular expressions. The book Mastering Regular Expressions has been tremendously helpful for me.

      Edit: a letter. Stupid autocorrect.

    • Ramin Honary@lemmy.ml

      Awk is a programming language designed for reading files line by line. It finds lines by a pattern and then runs an action on that line if the pattern matches. You can easily write a 1-line program on the command line and ask Awk to run that 1-line program on a file. Here is a program to count the number of “comment” lines in a script:

      awk 'BEGIN{comment_count=0;} /^[[:space:]]*[#]/{comment_count++;} END{print(comment_count);}' file.sh
      

      It is a good way to inspect the content of files, especially log files or CSV files. But Awk can do some fairly complex file-editing operations as well, like collating multiple files. It is a complete programming language.

      Sed works similarly to Awk, but it is much simplified and designed mostly around CLI usage. The pattern language is similar to Awk’s, but the commands are usually just one or two letters representing actions like “print the line”, “copy the line to the in-memory buffer”, or “dump the in-memory buffer to output.”
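
      A few common Sed one-liners to make that concrete (file names are placeholders):

      sed -n '5,10p' file.log    # -n suppresses auto-printing; p prints only lines 5-10
      sed 's/foo/bar/' file.txt  # replace the first “foo” on each line
      sed '/^#/d' file.conf      # d deletes matching lines (comments, here)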

    • neidu2@feddit.nl

      Probably a bit narrow, but my usecases:

      • awk: modify STDIN before it goes to STDOUT. Example: only print the 3rd word for each line
      • sed: run a regex on every line.
  • Kuvwert@lemm.ee

    I installed Debian today. I’m terrified to do anything. Is there a single button backup/restore I can depend on when I ultimately fuck this up?

      • Julian@lemm.ee

        These have both saved my ass on numerous occasions. Btrfs especially is pretty amazing.

    • bloodfart@lemmy.ml

      You want a disk imager like Clonezilla or something. If you’re not ready for that, just show hidden files and copy your /home/your_username directory to a USB drive or something. That’s where all your files live.
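
      If you go the simple copy route, rsync preserves permissions and can be re-run incrementally; a minimal sketch (the mount point is made up):

      rsync -a --progress ~/ /media/usb-backup/home-backup/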

    • makingStuffForFun@lemmy.ml

      I ran Linux in a VM and destroyed it about… 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.

      I have everything backed up and synced, so it’s all fine. Just lots of reinstalling Thunderbird and Firefox, re-logging into Firefox sync, etc.

      Once I stopped destroying everything I did a proper install and haven’t looked back.

      This will be my 7th year on Linux now. And I have to say, it feels good to be free.

    • baseless_discourse@mander.xyz

      Install everything from the store and you should be fine. If a tutorial seems too complicated, it is probably not worth following. Set your search engine to the past year and see if there are better tutorials.

      You might also want to consider atomic distros, they are much harder to mess up, and much easier to restore.

      • Kuvwert@lemm.ee

        No, I’m doing it to learn self-hosting. I’m doing the hard stuff on purpose.

        • baseless_discourse@mander.xyz

          Oh! In that case, may I suggest Yacht with Docker containers? https://yacht.sh/

          Everything on my home server is installed directly on the server; keeping it all up to date is pretty annoying, and permission control is completely non-existent.

          Since you want to do things the hard way, I believe this can also be a good opportunity to do things the “better” way (at least IMO).

          • Kuvwert@lemm.ee

            Ah, now that does look promising. I had settled on Portainer, but this Yacht program looks very noob-friendly! I’ll install it today and check it out! Cheers!

    • wolf@lemmy.zip

      Another perspective: your question implies you want to try things out with Debian. If this assumption is correct, I would highly recommend you just create a virtual machine with qemu/libvirt and learn within this environment/try out things there before doing stuff ‘on the metal’.

      Of course, backups are always a good idea, and once you’ve got your feet wet you might want to learn about ‘Infrastructure as Code’. Have fun!

      • Kuvwert@lemm.ee

        That’s a fantastic suggestion, and I’ve already been doing exactly this :) But I’ve done it just enough to know that I’m really, really good at breaking stuff, and I don’t want to wait to fully transition from Windows. Hence the need for full system backups.

    • Julian@lemm.ee

      /bin, since that will include any basic programs (bash, ls, cd, etc.).

    • Captain Aggravated@sh.itjust.works

      As in, the directory in which much of the operating system’s executable binaries are contained?

      They’ll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.
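
      You can check whether your distro does that merge like so:

      ls -ld /bin /sbin /lib   # on merged-usr systems these show e.g. /bin -> usr/bin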

    • SmashFaster@kbin.social

      There is no direct equivalent, system32 is just a collection of libraries, exes, and confs.

      Some of what others have said is accurate, but to explain a bit further:

      Longer explanation:

      system32 is just some folder name the MS engineers came up with back in the day.

      Linux on the other hand has many distros, many different contributors, and generally just encourages a … better … separation for types of files, imho

      The linux filesystem is well defined, if you are inclined to research more about it.
      Understanding the core principles will make understanding virtually everything else about “linux” easier, imho.

      https://tldp.org/LDP/intro-linux/html/sect_03_01.html

      tl;dr; “On a UNIX system, everything is a file; if something is not a file, it is a process.”

      The basics:

      • /bin - base level executables, ls, mv, things like that
      • /sbin - super-level-only (root) executables, parted, reboot, etc
      • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
      • /etc - Configuration lives here, generally speaking, /etc/<application name> can point you in the right direction, typically requires super-user (root) to edit
      • /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

      Bonus:

      • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
      • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

      For completeness:

      • /home - You’ll find your user directories here, personally, this is my directory I backup, I don’t carry much more with me on most systems.
      • /var - “Variable data”, basically meaning any data that will likely grow over time, eg: /var/log
      • macniel@feddit.de

        Oooh. I always wondered where I should put my docker bind shares. I currently have them pointing to /Media, but /srv makes so much more sense.

    • ogeist@lemmy.world

      For the memes:

      sudo rm -rf /*

      This deletes everything and is the most popular linux meme

      The same “expected” functionality:

      sudo rm -rf /bin/*

      This deletes the main binaries. You kinda can recover from here, but I have never done it.

    • Bitrot@lemmy.sdf.org

      Don’t think there is.

      system32 holds files that are in various places in Linux, because Windows often puts libraries with binaries and Linux shares them.

      The bash in /bin depends on libraries in /lib for example.

    • Gobo@lemmy.world

      /usr/lib or /usr/lib64 or /lib (some distros) or /lib64

      Some things (like hosts file) are in /etc. /etc mostly contains configs.

      • KISSmyOSFeddit@lemmy.world

        A weird catch-all folder for “most important Windows system stuff”. It’s not 32-bit, just named like that in typical Windows fashion for backwards compatibility.

  • wanghis_khan@lemmy.ml

    NixOS. I don’t get what it really is or does? It’s a Linux distribution, but with caveats or something.

    • exu@feditown.com

      It’s a distribution completely centered around the Nix package manager. This basically allows you to program how your system should look using one programming language. If you want an identical system, just copy that file and you’re set.
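
      To give a flavor of it, a fragment of a NixOS configuration.nix might look like this (the package and service choices are just examples):

      { config, pkgs, ... }:
      {
        # packages are declared, not installed one by one
        environment.systemPackages = with pkgs; [ firefox git htop ];

        # services are switched on declaratively too
        services.openssh.enable = true;
      }

      After editing, sudo nixos-rebuild switch applies the whole description in one go.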

      • ReakDuck@lemmy.ml

        I remember that the kernel didn’t have performance flags set, making NixOS not a nice gaming platform.

        Is this true? Can I fix it for myself easily?

    • featured [he/him, comrade/them]@hexbear.net

      Instead of installing packages through a package manager one at a time and configuring your system by digging into individual config files, NixOS has you write a single config file with all your settings and programs declared. This lets you more easily configure your system and have a completely reproducible system by just copying your nix files to another nixos machine and rebuilding.

      It’s also an immutable distribution, so the base system files are only modified when rebuilding the whole system from your config, but during runtime it’s read only for security and stability.

  • Syltti@lemmy.world

    Is there an Android emulator that you can actually game on? I’ve tried a number of them (Android x86, Genymotion, Waydroid), but none of them can install a multitude of games from the Google Play store. The one thing keeping me on Windows is Android emulation (I like having one or two idle games running at any given time).

    • d3Xt3r@lemmy.nzM

      Waydroid works, but there are three main things you need to get going to replicate a typical Android device:

      • OpenGapps: For GApps/Play Store. You’ll also need to register your device to get an Android ID.
      • Magisk: Mainly to pass SafetyNet / Play Integrity basic checks.
      • libndk / libhoudini: For ARM > x86 translation. libndk works better on AMD.
      • Widevine: (optional) L3 DRM for things that need it, eg Netflix

      There are some automated scripts that can set this all up. I used this one in the past with some success.

      Also, stay away from nVidia. From what I recall, it just doesn’t work, or there are other issues like crashes. But if you’re serious about Linux in general, then ditching nVidia is generally a good idea.

      Finally, games that use anti-cheat can be hit-or-miss (like Genshin Impact, which crashed when I last tried it). But that’s something you may face on any emulator; I mean, any decent anti-cheat system would detect the usage of emulators.

      • Syltti@lemmy.world

        I see. I knew most of the emulators lacked ARM support, which seemed to be the biggest issue, but this helps. Sadly, I have a 3080 and no money to buy a new card, so I’m stuck with nVidia for the foreseeable future. I’ll have to test this when I get time, though. Thanks.

        • Bandicoot_Academic@lemmy.one

          An nVidia GPU unfortunately doesn’t work with Waydroid at all. You would have to use CPU rendering, which won’t play any games. You might be able to use your CPU’s iGPU if it has one.

        • lemmyvore@feddit.nl

          You can try using scrcpy. It’s sort of a remote desktop for Android. You can see your phone’s screen on the PC and use mouse and keyboard with it.

  • cosmicrookie@lemmy.world

    In the terminal, why can’t I paste a command that I have copied to the clipboard with the regular Ctrl+V shortcut? I have to actually use the mouse and right-click to then select paste.

    (Using Mint cinnamon)

    • Cyclohexane@lemmy.mlOPM

      The terminal world had Ctrl+C and Ctrl+(many other characters) reserved for other things before they ever became standard for copy/paste. For this reason, Ctrl+Shift+(C for copy, V for paste) are used instead.

    • r0ertel@lemmy.world

      Old timer here! As many others replying to you indicate, Ctrl+C means SIGINT (interrupt running program). Many have offered Ctrl+Shift+C, but back in my day we used Shift+Insert (paste) and Ctrl+Insert (copy). They still work today, but Linux has 2 clipboard buffers, and Shift+Insert works against the primary one.

      As an aside, on Wayland, you can use wl-paste and wl-copy in your commands, so git clone "$(wl-paste)" will clone whatever repo you copied to your clipboard. I use this one all the time

    • Captain Aggravated@sh.itjust.works

      In Terminal land, Ctrl+C has meant Cancel longer than it’s meant copy. Shift + Insert does what you think Ctrl+V will do.

      Also, there’s a separate thing that exists in most window managers called the Primary buffer, which is a separate thing from the clipboard. Try this: Highlight some text in one window, then open a text editor and middle click in it. Ta da! Reminder: This has absolutely nothing to do with the clipboard, if you have Ctrl+X or Ctrl+C’d something, this won’t overwrite that.

    • baseless_discourse@mander.xyz

      In most terminals (GNOME Terminal, Black Box, Tilix, etc.) you can actually override this behavior by changing the keyboard shortcuts. Black Box even has a simple toggle that will enable Ctrl+C/V copy-paste.

      GNOME Console is the only terminal I know of that doesn’t allow you to change this.

    • Allero@lemmy.today

      Due to some old-school terminal things. Add Shift to the shortcut combination, such as Ctrl+Shift+V to paste.

    • Thymos@lemm.ee

      What usually also works on Linux is selecting text with the mouse and pasting it by pressing the middle mouse button (or scroll wheel). You’d still need the mouse, but it’s at least a little quicker ☺️

  • krash@lemmy.ml

    I want to start with Btrfs and snapshots, is there a good, beginner friendly tutorial for those coming from a ext* filesystem?

    • kylian0087@lemmy.dbzer0.com

      If you try a distro that does it by default, then it is no more complicated than ext4 for the user; the distro will set things up for you. I know that openSUSE Tumbleweed and Fedora Workstation set this up by default. Manually configuring it is, however, a bit more complicated.
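
      On one of those distros you can poke at the result right away, e.g. with Tumbleweed’s preconfigured snapper:

      sudo btrfs subvolume list /                        # subvolumes the installer created
      sudo snapper list                                  # existing snapshots
      sudo snapper create --description "before change"  # take a snapshot manually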

    • MonkeMischief@lemmy.today

      Great question!

      EndeavourOS has a great little wiki of tutorials around BTRFS and setting up snapshots, that’s a lot more friendly than just reading wiki manuals.

      Here’s a link to the one about getting snapshots and rollbacks set up.

      https://discovery.endeavouros.com/encrypted-installation/btrfs-with-timeshift-snapshots-on-the-grub-menu/2022/02/

      Alternatively, I run OpenSUSE Tumbleweed on my main production rig and it uses BTRFS and sets up snapshots from the GRUB menu for you by default!

      I’m also using Nvidia, so while it’s gotten better and I haven’t had to roll back in a long time, Snapper has saved my butt once or twice in the past. ;)

    • NeoZet@lemmings.world

      Albeit not completely beginner friendly, the arch wiki explains btrfs features and manual configuration pretty well. If you are looking for a guide to a snapshot tool, then it depends on your distro, but they probably have an article for it as well (also, check the “related articles” section at the top of the page).

  • sag@lemm.ee

    Why does software on Linux use a particular version of a library? Why not just say it’s dependent on that library, regardless of version? It becomes a pain in the ass when you are using ancient software that requires an old version of a library, and you have to create symlinks of every library to match the old version.

    I know that sometimes a newer version of a library is not compatible with the software, but still. And what can we do as software developers to fix this problem? Or as end users?

    • PlexSheep@infosec.pub

      Software changes. Version 0.5 will not have the same features as Version 0.9 most of the time. Features get added over time, features get removed over time and the interface of a library might change over time too.

      As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.

      • sag@lemm.ee

        Hey, thanks! I have one more question: is it possible to ship all required libraries with the software?

        • Nibodhika@lemmy.world

          It is, that’s what Windows does. It’s also possible to compile programs to not need external libraries and instead embed all they need. But both of these are bad ideas.

          Imagine you install Dolphin (the KDE file manager): it will need lots of KDE libraries. Then you install Okular (the KDE PDF reader): it will require lots of the same libraries. Extend that to the hundreds of programs installed on your computer, and you’ll easily double the space used, with no particular benefit, since the package manager already takes care of updating the programs and libraries together. Not just that, but if every program came with its own libraries, then if a bug/security flaw were found in one of the libraries, each program would need to upgrade, and if one didn’t you might be susceptible to bugs/attacks through that program.

        • Bienenvolk@feddit.de

          That is possible indeed! For more context, you can look up “static linking vs dynamic linking”

          Tldr: Static linking: all dependencies get baked into the final binary. Dynamic linking: the binary searches your system’s library search path and loads the libraries dynamically at runtime.
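
          You can see the dynamic case in action with ldd, which prints the shared libraries a binary will load (exact output varies by system):

          ldd /bin/ls   # e.g. libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)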

        • PlexSheep@infosec.pub

          Absolutely! That’s called static linking, as in the library is included in the executable. Most Rust programs are compiled that way.

          • sag@lemm.ee

            Yeah, that’s why I am learning Rust, but I didn’t know it was called static linking; I thought it was just how Rust works LMAO. And thanks again.

          • jack@monero.town

            Doesn’t that mean that you have a lot of duplicate libraries when using Rust programs, even ones with the same version? That seems very inefficient

            • PlexSheep@infosec.pub

              It’s true that binaries get inflated as a result, but with today’s hard drives it’s not really a problem.

        • d3Xt3r@lemmy.nzM

          In addition to static linking, you can also load bundled dynamic libraries via RPATH, which is a section in an ELF binary where you can specify a custom library location. Assuming you’re using gcc, you could set the LD_RUN_PATH environment variable to specify the folder path containing your libraries. There may be a similar option for other compilers too, because in the end they’d be spitting out an ELF, and RPATH is part of the ELF spec.
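
          A small sketch of that (the file and folder names are invented):

          # embed an RPATH relative to the executable’s own location
          gcc main.c -o myapp -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs'

          # or let the linker pick the path up from the environment
          LD_RUN_PATH=/opt/myapp/libs gcc main.c -o myapp -L/opt/myapp/libs -lfoo

          readelf -d myapp | grep -iE 'rpath|runpath'   # verify the embedded entry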

          BUT I agree with what @Nibodhika@lemmy.world wrote - this is generally a bad idea. In addition to what they stated, a big issue could be the licensing - the license of your app may not be compatible with the license of the library. For instance, if the library is licensed under the GPL, then you have to ship your app under GPL as well - which you may or may not want. And if you’re using several different libraries, then you’ll have to verify each of their licenses and ensure that you’re not violating or conflicting with any of them.

          Another issue is that the libraries you ship with your program may not be optimal for the user’s device or use case. For instance, a user may prefer libraries compiled for their particular CPU’s microarchitecture for best performance, and by forcing your own libraries, you’d be denying them that. That’s why it’s best left to the distro/user.

          In saying that, you could ship your app as a Flatpak - that way you don’t have to worry about the versions of libraries on the user’s system or causing conflicts.

      • beeng@discuss.tchncs.de

        To add some nuance, all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.

        If v0.5.0 has features A, B, and C and one of them then changed, under semantic versioning (which most software follows these days) that would be a breaking change, and the version would therefore get promoted to v1.0.0.

        If A, B, and C got a new feature D, but A, B, and C didn’t change, it would have been v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.

        • PlexSheep@infosec.pub

          When having a breaking change pre 1.0.0, I’d expect a minor version bump instead, as 1.0.0 signals that the project is stable or at least finished enough for use.

    • Eugenia@lemmy.ml

      Because it’s not guaranteed that it’ll work. FOSS projects don’t run under strict managerial definitions where they have to maintain compatibility in all their APIs etc. They are developed freely. As such, you can’t really rely on full compatibility.

    • AMDIsOurLord@lemmy.ml

      That’s the same on ANY platform, but Windows is far worse because most apps ship a DLL and -never- update the damn thing. With Linux, it’s a little bit more transparent. (edit: unless you do the stupid shit and link statically, but again, in the brave new world of Rust and Go, having 500 MB binaries for a 5 KB program is acceptable)

      Also, applications use the API/ABI of a particular library. Now, if the developers of said library actually change something in the library’s behavior with an update, your app won’t work anymore unless you go and actually update your own code and find everything that’s broken.

      So as you can understand, this is a maintenance burden. A lot of apps delegate this to a later time, or, as sometimes happens with FOSS, the app goes somewhat unmaintained, or in some cases the app customizes the library so much that you just can’t update that shit anymore. So you pin a particular version of the library.

    • nyan@sh.itjust.works

      You sometimes can build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably “ABI stability”.

    • wolf@lemmy.zip

      IMHO the answer is social, not technical:

      Backwards compatibility/legacy code is not fun, so unless you throw a lot of money at the problem (RHEL), people don’t do it in their free time.

      The best way to distribute a desktop app on Linux is to make it Win32 (and run it with WINE) … :-P (Perhaps Flatpak will change this.)

  • jack@monero.town

    Why are debian-based systems still so popular for desktop usage? The lack of package updates creates a lot of unnecessary issues which were already fixed by the devs.

    Newer (not bleeding-edge) packages have verifiably fewer issues, e.g. when comparing the packages of a Debian and a Fedora distro.

    That’s why I don’t recommend Mint

      • Trainguyrom@reddthat.com

        Distrobox can be used to install other programs (including GUI apps)

        I need to play around with that sometime. Is it a chroot, a privileged container, or a sandboxed container with limited access? How’s hardware acceleration in those?

      • jack@monero.town

        You should definitely check out Bazzite; it’s based on Fedora Atomic and has Steam on the base image. Image and Flatpak updates are applied automatically in the background, no need to wait for the update on next boot. Media codecs and necessary drivers are installed by default.

        The Bazzite image is also built directly on the upstream Fedora Atomic image, just with quality-of-life changes added and optimizations for gaming.

    • wolf@lemmy.zip

      Debian desktop user here, and I would happily switch to RHEL on the desktop.

      I fully agree, outdated packages can be very annoying (I’m running a netbook with Wi-Fi sleep mode disabled right now, and no, a backported kernel/firmware doesn’t solve my problem).

      For some years, I used Fedora (and I still love the community and have high respect for it).

      Fedora simply does not work for me:

      • Updated packages can (and did) break compatibility with stuff I need to get things done. Fine if Linux is your hobby, not acceptable if you need to deliver something
      • In industry, you are often not on the most recent versions of your development environments (if you are lucky, you are only a few months or years behind), so having the most recent packages in Fedora helps me exactly zero
      • With Debian’s 2-year release cycle (and more years of support), I can upgrade to the next version when it is appropriate for me (= 1-2 days when there is a slow week and the worst bugs have already been found)
      • My setup/desktop is heavily customized and fully automated via IaC; no motivation to tweak this stuff constantly (rolling) or every 6-12 months (Fedora)
      • From time to time I have to use software packages from 3rd parties; with Fedora, I might be one update away from breaking those packages because of version incompatibilities (yes, I might pin a version of something to keep a 3rd-party package working, but that in turn might break Fedora updates through direct and transitive dependencies)
      • I once had a cheap netbook for travel with an infamous chipset bug concerning sleep modes, which would be triggered by some kernels. You can imagine how it is to run Fedora on that, where kernel updates come often and the bug would be triggered (or not) after double-digit minutes of work.

      Of course, I could now start playing around with containerizing everything I need for work and run something like Silverblue. Perhaps I will someday, but then I would again need to update my IaC every 6-12 months and would have to take care of overlays AND containers, etc.

      When people go ‘rolling’ or ‘Fedora’, they simply choose a different set of problems. I am happy we have choice and I can pick the trouble I have to live with.

      On a more positive note: this also shows how far Linux has come. I always play around with the latest/beta Fedora GNOME/KDE images in a VM, and I seriously don’t feel I am missing anything on Debian stable.

    • AMDIsOurLord@lemmy.ml
      link
      fedilink
      arrow-up
      6
      ·
      8 months ago

      Debian systems are verified to work properly without subtle config breakages. You can run Debian practically unattended for a decade and it’ll chug along. For people who prefer their device to actually work, and not be a maintenance princess, it’s ideal.

      • jack@monero.town
        link
        fedilink
        arrow-up
        3
        arrow-down
        1
        ·
        8 months ago

        Okay, I get that it’s annoying when updates break custom configs. But I assume most newbs don’t want to make custom dotfiles anyway. For those people, having the newest features would be more beneficial, right?

        Linux Mint is advertised to people who generally aren’t willing to customize their system

        • AMDIsOurLord@lemmy.ml
          link
          fedilink
          arrow-up
          5
          ·
          8 months ago

          Having a stable base helps. Also, config breakage can happen without user intervention; see Gentoo’s or Arch’s NOTICE updates

        • Nibodhika@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          8 months ago

          Breakage can happen without user intervention in other distros too; there are some safeguards around it, but it happens. Also, new users are much more likely to edit their configs because some random guy on the Internet did it than an experienced person who knows what they’re doing, and a lot more likely not to realize that doing so can break the system during an upgrade.

    • jdnewmil@lemmy.ca
      link
      fedilink
      arrow-up
      5
      ·
      8 months ago

      Noob question?

      You do seem confused though… Debian is both a distribution and a packaging system. The Debian Stable distribution takes a very conservative approach to updating packages, while Debian Sid (unstable) is more up to date but more likely to break. And while individual packages may be more stable when fully updated, other packages that depend on them generally lag and “break” until they are updated to adapt to the underlying changes.

      But the whole reason Debian-based distros exist is that some people think they can strike a better balance between newness and stability. It turns out there is no optimal balance that satisfies everyone.

      Mint is a fine distro… but if you don’t like it, that is fine for you too. The only objection I have to your objection is that you seem to be throwing the baby out with the bathwater: the Debian packaging system is very robust, and it is not intrinsically resistant to updates.

      • jack@monero.town
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        8 months ago

        Noob question?

        Should I have made a new post instead?

        You do seem confused though… Debian is both a distribution and a packaging system…

        Yes, Debian is a popular distro built on Debian packages. My concern is about the distro’s update policy

        But the whole reason debian-based distros exist is because some people think they can strike a better balance between newness and stability.

        Debian is pure stability, not a balance between stability and newness. If you mean Debian-BASED distros in particular, which try to introduce more newness through custom repos, I don’t think that is a good strategy for striking a balance. The additional custom repos quickly become too outdated as well, and they can’t compensate for the outdatedness of every single Debian package.

        you seem to be throwing the baby out with the bathwater… the debian packaging system is very robust and is not intrinsically unlikely to be updated.

        Yes, I don’t understand/approve of the philosophy behind Debian’s update policy. It doesn’t make sense to me for desktop usage. The technology of the package system, however, is great, and apt is very fast

        • KISSmyOSFeddit@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          8 months ago

          Debian is a balance between stability and newness.
          If you want to see what pure stability looks like, try Slackware.

    • LoreleiSankTheShip@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      8 months ago

      As someone not working in IT and not very knowledgeable on the subject, I’ve had way fewer issues with Manjaro than with Mint, despite reading everywhere that Mint “just works”. Especially with printers.

      • Nibodhika@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        8 months ago

        Yeah, Manjaro just works, until it doesn’t. Don’t get me wrong, I love Manjaro, used it for years, but if it breaks it’s a pain in the ass to fix, and it’s also hard to get help, because the Arch community will just reply with “Not Arch, not my problem” even if it’s a generic error, and the Manjaro community is not as prominent.

        I could also mention them letting their SSL certificate expire, which doesn’t inspire a lot of trust, but they haven’t done that in a while.

    • Cyclohexane@lemmy.mlOPM
      link
      fedilink
      arrow-up
      4
      arrow-down
      1
      ·
      8 months ago

      Unlike other commenters, I agree with you. Debian-based systems are less suitable for desktop use, and imo that’s one of the reasons newcomers have frequent issues.

      When installing common applications, newcomers tend to follow the Windows way of downloading an installer or a standalone executable from the Internet. They often do not stick with the package manager. This can cause breakage, as Debian might expect you to have versions of programs different from what the installer from the Internet expects. A rolling-release distro is more likely to have the versions that Internet installers expect.

      To answer your question, I believe Debian-based distros are popular for desktop use because they were already popular for servers before the Linux desktop was significant.

      • Nibodhika@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        8 months ago

        That’s a bad example; “Debian is bad because people use it wrong and it breaks” is not a really strong argument, since the same can be said about every other distro.

        I believe Debian based distros are popular because Ubuntu used to be very beginner friendly back in the early 2000s, while other distros not so much. Then a lot of us started with it, and many never switched or switched and came back.

        • Cyclohexane@lemmy.mlOPM
          link
          fedilink
          arrow-up
          1
          ·
          8 months ago

          Debian is not bad. It is just not suitable for newcomers using it as a desktop. I think my arguments support this stance.

    • bloodfart@lemmy.ml
      link
      fedilink
      arrow-up
      3
      arrow-down
      2
      ·
      8 months ago

      Because people have the opposite experience and outlook from what you wrote.

      I’m one of those people.

      I’m surprised no one brought up the xz thing.

      Debian was specifically targeted by a complex and nuanced multi-pronged attack involving social engineering and very good obfuscation. It was defeated because stable (12 stable, mind you, not even 11, which is still in lots of use) moved so slowly that the attack was found while still in unstable.

      • Cyclohexane@lemmy.mlOPM
        link
        fedilink
        arrow-up
        7
        ·
        8 months ago

        This is not a good argument imo. It was a miracle that the xz vulnerability was found so fast, and that should not be assumed to be the norm. The developer had been contributing to the codebase for 2 years, and some of their code had already landed in Debian stable, iirc. There’s still no certainty that that code has no vulnerabilities. Some vulnerabilities in the past were caught decades after their introduction.

        • Possibly linux@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          8 months ago

          It’s not a miracle, it’s just probability. When you have enough eyes on something, you are bound to catch bugs and problems.

          Debian holds back because its primary goal is to be stable, reliable, and consistent. It has been around longer than pretty much everything else, and it can run for decades without issue. I read an article about a university that still had its original Debian install from the ’90s. It was on newer hardware, but they just copied over the files.

          • Cyclohexane@lemmy.mlOPM
            link
            fedilink
            arrow-up
            2
            ·
            8 months ago

            Lots of eyes is not enough. As I mentioned earlier, there are many popular programs found on most machines, some of them actually user-facing (unlike xz), where vulnerabilities were caught months, years, and sometimes decades later. xz is the exception, not the rule.

        • bloodfart@lemmy.ml
          link
          fedilink
          arrow-up
          1
          ·
          8 months ago

          I was running 12 stable on a machine that had been updated and upgraded between the time the backdoor was introduced and the time it was discovered. At no point did either a dpkg query or the self-report show that the system had the affected 5.6.0(?) version.

          Stable had versions of xz that contained commits from the attacker, and it has been rolled back to before those were made, out of an abundance of caution.

          There are a lot of eyes on that software now, and I haven’t seen anyone report that the versions between the attacker gaining commit rights and the attacked version were compromised. As you said, though: that doesn’t mean they aren’t, and vulnerabilities have existed for many years without being discovered.

          As to whether it’s a good argument: vulnerabilities generally have a short lifespan. Just hanging back and waiting a little while for something to crop up is usually enough to avoid them. If you don’t believe me, check the NIST database.

          I’m gonna sound like a goober here, but the easiest way to not trip is to slow down and look where you’re going.

      • jack@monero.town
        link
        fedilink
        arrow-up
        1
        ·
        8 months ago

        If that is a good tradeoff for you, older/possibly broken packages but more trusted, then that’s okay. Btw, the xz backdoor was found so quickly that it didn’t even ship to most distros in use, except for Debian Sid and Arch, I think

        • bloodfart@lemmy.ml
          link
          fedilink
          arrow-up
          1
          ·
          8 months ago

          I see it as a fantastic trade-off. There are some packages I use that need to be more up to date than the stable repos allow, and I either install those from different repos or in a different way.

          And Arch never even had the whole backdoor, because they built from source and didn’t include the poison-pill binary component from the attacker.

    • Possibly linux@lemmy.zip
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      3
      ·
      8 months ago

      I’m not sure what planet you are on, but Debian is more stable and secure than anything else I have ever tested. Maybe Debian gets a bad rap because of Ubuntu.

      • Cyclohexane@lemmy.mlOPM
        link
        fedilink
        arrow-up
        2
        ·
        8 months ago

        I disagree. Stable, yes, but stable as in unchanging (including bug-for-bug compatibility), which imo is not what most desktop users want. It is what server admins want, though. Most newbie desktop users don’t realize this about Debian-based systems, and it is one of the sources of the trouble they experience.

        Debian tries to be secure by backporting security fixes, but they cannot feasibly do this for all software, and last I checked, there were known vulnerabilities in Debian’s versions of some software for which fixes had not yet been backported (and which had been known for a while). I’m happy to look up the source for you if you’re interested.

      • wolf@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 months ago

        Debian is for sure not more secure than most other distributions/operating systems (though that might be true for what you tested).

        Not even mentioning the famous Debian weak-SSH-key fuck-up (oops), Debian is notoriously understaffed for backporting security patches to everything that is not the kernel/web server/Python etc. (and even there I would not be too sure), and don’t get me started on services being started and ports being opened on a mere apt install, etc.

  • eezeebee@lemmy.ca
    link
    fedilink
    English
    arrow-up
    10
    ·
    edit-2
    8 months ago

    Considering switching to Linux, but don’t know what to choose/what will work for my needs. I want to be able to play my Steam games, use the Discord desktop application, and use FL Studio. I need it to work with an audio interface and MIDI controller too. I am not interested in endless tweaking of settings; a simple install would be nice. What should I go for?

    • Julian@lemm.ee
      link
      fedilink
      English
      arrow-up
      13
      ·
      8 months ago

      Mint would probably work for you. Some stuff is outdated, but it has Flatpak, which is a package manager with more up-to-date apps. If you’re willing to put in the time though, I’d recommend trying some of the more common distros (Mint, Debian, Ubuntu, Fedora). You can use a live USB to test them without installing.

      Steam is available anywhere so that’s not a problem.

      Discord officially only has a .deb package, so that’s only for Debian-based distros (Debian, Ubuntu, Mint). There are other options for almost all distros though; I personally use Webcord

      FL Studio might be tricky: supposedly it runs through Wine, but you might have to do a bit of work. I’ve personally used Reaper and it works great.

    • Nibodhika@lemmy.world
      link
      fedilink
      arrow-up
      9
      ·
      8 months ago

      Adding to what others have said, I also think Mint is a great option. But I strongly encourage you to install things via the package manager when available. I find that a lot of the time, when someone complains that something (that should work) doesn’t work on Linux, it’s because they’re trying to install things manually, i.e. the Windows way (open browser, search for program name, open website, download installer, run installer, follow instructions); that’s almost never the correct way on Linux.

    • cosmicrookie@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      edit-2
      8 months ago

      As a fellow user in a similar situation, I can tell you that I had tried dual boot a few times, but I would just switch to Windows whenever I wanted to get something done that didn’t work on Linux

      3 weeks ago I went full Mint install and left Windows altogether. This forced me to find solutions to problems that I would otherwise have solved by just switching to Windows. Don’t expect everything to work, though. You will need to tweak some things, and you may even need to do some things differently than you’re used to. But isn’t that why we change in the first place?