X11 was released in 1987. The original X Window System was released in 1984. That is not just a few years of difference.
If you meant the X.org implementation, then compare it to compositors, not to the protocol.
I take my shitposts very seriously.
You’ll have to look into GTK’s Layer Shell implementation.
Look at the source of Eww. It’s written in Rust, it uses GTK (or GDK?), and it has a config option that opens the windows in the bottom layer.
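For illustration, here's a minimal sketch in C of what that boils down to, using the gtk-layer-shell library directly (as far as I know, that's what Eww uses under the hood on Wayland; function names are from memory, so check the library's headers and examples):

```c
// Minimal sketch: a GTK3 window pinned to the Wayland "bottom" layer
// via the gtk-layer-shell library. Names from memory, not Eww's code.
// Build roughly: gcc demo.c $(pkg-config --cflags --libs gtk+-3.0 gtk-layer-shell-0)
#include <gtk/gtk.h>
#include <gtk-layer-shell.h>

static void activate(GtkApplication *app, gpointer user_data) {
    GtkWidget *window = gtk_application_window_new(app);

    // Must be called before the window is first mapped.
    gtk_layer_init_for_window(GTK_WINDOW(window));
    // Bottom layer: above the wallpaper, below regular windows.
    gtk_layer_set_layer(GTK_WINDOW(window), GTK_LAYER_SHELL_LAYER_BOTTOM);

    gtk_container_add(GTK_CONTAINER(window), gtk_label_new("bottom-layer widget"));
    gtk_widget_show_all(window);
}

int main(int argc, char **argv) {
    GtkApplication *app = gtk_application_new("org.example.layerdemo",
                                              G_APPLICATION_FLAGS_NONE);
    g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
    int status = g_application_run(G_APPLICATION(app), argc, argv);
    g_object_unref(app);
    return status;
}
```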


Elden Ring. It is good for what it is, probably the best in its genre, but after so many Soulsbornes, it just feels like more of the same. Formulaic. I’ve tried it three separate times and it never grabbed me.


DT770 gang!


You can probably play Vampire Survivors. All you really do is move around.


The market share is never a precise number because not everybody is asked to do the hardware survey, and not everybody who is asked does. But the Linux userbase is small enough that “~3%” is in the ballpark.
Versioning dependencies is not as difficult as it seems. Unix systems can bundle dependencies the way Windows does, even without sandboxed or monolithic packaging formats. The important thing is to tell the dynamic linker (ld.so in Linux’s case) where to look for the library files, similar to how PATH is used to locate executables. To a lesser extent this is also how containerization works, and the Steam client actually does it by loading its own .so files from ~/.local/share/Steam/.... I’m sure there are additional challenges; my knowledge is superficial and approximate at best.
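As a toy illustration of the idea (not how Steam actually does it; libexample.so and the lib/ layout are made up), a program can even sidestep the default search path entirely and dlopen() a copy it ships next to its own binary:

```c
// Toy sketch: load a bundled shared object from a "lib/" directory next
// to the executable, instead of letting ld.so search the system paths.
// "libexample.so" is a made-up name. Link with -ldl on older glibc.
#include <dlfcn.h>
#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // Resolve the path of our own binary (Linux-specific).
    char exe[PATH_MAX];
    ssize_t len = readlink("/proc/self/exe", exe, sizeof exe - 1);
    if (len < 0) { perror("readlink"); return 1; }
    exe[len] = '\0';

    // Build <exe dir>/lib/libexample.so and load it explicitly.
    char libpath[PATH_MAX + 32];
    snprintf(libpath, sizeof libpath, "%s/lib/libexample.so", dirname(exe));

    void *handle = dlopen(libpath, RTLD_NOW);
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }
    puts("bundled library loaded");
    dlclose(handle);
    return 0;
}
```

In practice the same effect is usually achieved without any code at all, either by launching through a wrapper script that sets LD_LIBRARY_PATH, or by linking the binary with an $ORIGIN-relative rpath so ld.so checks the bundled directory first.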
But the point still stands: in most cases, Linux-native ports are simply not worth the effort, whether because small teams lack the resources or because the expected sales don’t justify the cost for large studios. BG3 and Factorio are definite outliers.


From the sole developer responsible for Factorio’s Linux-native port: https://www.factorio.com/blog/post/fff-408
“Why don’t most games support macOS and Linux?” is a sentiment I often see echoed across the internet. Supporting a new platform is a lot more than just changing some flags and hitting compile. Windows, macOS, Linux, and the Nintendo Switch all use different compilers, different implementations of the C++ standard library, and have different implementation quirks, bugs, and features. You need to set up CI for the new platform, expand your build system to support the new compiler(s) and architecture(s), and have at least one person on the team that cares enough about the platform to actively maintain it. If you are a video game, you will likely need to add support for another graphics backend (Vulkan or OpenGL) as well, since DirectX is Windows-exclusive.
Many developers will take one look at the Windows market share and decide that it is not worth the trouble to support other platforms. Also, with the meteoric rise of the Steam Deck and Proton, it is easier than ever for game developers to ignore Linux support because Valve does some black magic that lets their game run anyway.
The list of Linux-first games is so short it’s not even a factor. It’s very difficult to justify the additional effort of supporting a platform that exclusively serves a playerbase with a ~3% market share, especially when a different method (letting Proton run the Windows build) serves that same playerbase just as well and covers the 90%+ on Windows with no additional effort.
The article I linked also contains an explanation as to why GNOME’s decision to drop server-side decorations is fucking stupid.
Such as?
That tells me you don’t understand what a “stable” release branch is. The Debian maintainers do a lot of work to ensure that the packages not only work, but work well together. They don’t introduce breaking changes during the lifecycle of a major branch. They add feature updates between point releases, and continuously release security updates.
In the real world, that stability is a great value, especially in the server space. You’d be insane to use Arch as a production server, and I’m saying that as an Arch user.
Something, something, sword of Damocles.


At work, we use PiSignage for a large overhead screen. It’s based on Debian and uses a fullscreen Firefox running in the labwc compositor. The developer advertises a management server (cloud or self-hosted) to manage multiple connected devices, but it’s completely optional (superfluous in my opinion) and the standalone web UI is perfectly usable.


This is something the people of !Selfhosted@lemmy.world are better suited to answer.
In my personal opinion, for most home servers, the dual-parity redundancy of RAID 6 is more valuable than having a fully rebuilt array as soon as possible. If one member of a RAID 6 array fails, you’re still at the effective redundancy of a RAID 5. If one member of a RAID 5 array fails, you have zero redundancy until the hot spare has finished rebuilding.
If energy consumption is also a factor, it’s worth keeping in mind that a hot spare can be powered down by the controller until it is needed.
I’d personally go with RAID 6: 4 data + 2 distributed parity. That was the plan for my server too, but the motherboard only has four SATA ports and one had to be dedicated to the OS SSD.
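To put rough numbers on it (assuming hypothetical 4 TB drives): six drives in RAID 6 give (6 - 2) × 4 TB = 16 TB usable and survive any two simultaneous failures, while five drives in RAID 5 plus a hot spare give the same (5 - 1) × 4 TB = 16 TB usable but only tolerate one failure at a time until the spare finishes rebuilding.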


My predecessor at work had a “backup scheme” where each week a full copy of important VMs’ virtual disks would be pulled by a backup VM. Two issues with that. One, the VMs were not powered off and nothing ensured that the disks were synced. Two, the backups were made onto the same physical host with no replication or high availability beyond RAID 1.


I tried it recently. They changed the rootkit and it’s a coin flip on Linux. Genshin is supposed to work, but I’ve never been able to launch the game.


Benefit of my job: I get access to the scrap pile. I don’t know any reputable used/refurbished sellers.


I know what it is, and I ensure compliance at work (I’m a sysadmin). At home, it’s less about best practices and more about what hardware I can afford. Manufacturers tend not to offer regional discounts. A 2-2-0 scheme is better than nothing at all.


No backups. For important documents and photos, that was the backup and most of them have copies on my PC. The rest is easily replaced. I knew what I was getting into, and that the free, decommissioned hard drives with 20-30 thousand hours on them were a lit fuse.
The best way to avoid a single point of failure is to create multiple parallel single points of failure, right?
It looks like GNOME is the only compositor that doesn’t support the wlr_layer_shell protocol, which is anything but surprising. Smithay works (COSMIC and Niri), wlroots works, KWin and Mir work; Aquamarine (Hyprland) is not listed, but I know that it works.