As a full time desktop Linux user since 1999 (the actual year of the Linux desktop, I swear) I wish all you Windows folks the best of luck on the next clean install 👍
…and Happy 30th Birthday “New Technology” File System!
How do you know when someone uses Linux?
Don’t worry, they’ll tell you
I wouldn’t tell you if I use Linux. I would tell YOU to use Linux. That reminds me… use Linux!
Join the dozens!
Literally dozens?!? Sign me up!
We have extra time to diss Windows since we don’t have to wait for our OS to reboot all the fucking time.
Comment by someone who hasn’t used Windows in an age. When was the last time you rebooted because you had installed new software? When was the last time you ran random code from a forum post to make software work? Because this Windows user doesn’t remember ever doing that.
Literally today. That’s why I brought it up. I installed updates and had to reboot twice to finish the task.
Many Linux package managers themselves tell you you should reboot your system after updates, especially if the update has touched system packages. You can definitely run into problems that will leave you scratching your head if you don’t.
*nix systems are not immune to needing reboots after updates. I work as an escalation engineer for an IT support firm, and our support teams that do *nix updates without reboots have DEFINITELY been the cause of some hard-to-find issues. We’ll often review environment changes first thing during an engagement, only to fix the issue and find that it came from some update 3 months ago where the team never rebooted to validate that the new config was good. Not gonna argue that in general it’s more stable and usually requires fewer reboots, but it’s certainly not the answer to every Windows pitfall.
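For what it’s worth, Debian-family distros do make the “pending reboot” state visible: update tooling drops a marker file when an installed update (kernel, libc, dbus, etc.) needs a reboot to fully take effect. A minimal sketch, assuming the Debian/Ubuntu marker-file convention (other distros use different mechanisms, e.g. `needs-restarting` on RHEL):

```python
import os

# Debian/Ubuntu convention: this file appears when a reboot is pending.
DEFAULT_MARKER = "/var/run/reboot-required"

def reboot_required(marker: str = DEFAULT_MARKER) -> bool:
    """Return True if the distro's update tooling has flagged a pending reboot."""
    return os.path.exists(marker)

if __name__ == "__main__":
    print("reboot required" if reboot_required() else "no reboot pending")
```

Teams that skip this check after unattended upgrades are exactly the ones described above: the new config never gets validated until something breaks months later.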
The only time you truly need to reboot is when you update your kernel.
The solution to this problem is live-patching. Not really a game changer with consumer electronics because they don’t have to use ECC, but with servers that can take upwards of 10 minutes to reboot, it is a game changer.
This isn’t true, I had to reboot debian the other day to take an update to dbus which is not part of the kernel.
We have an Ubuntu machine at work with an NVIDIA GPU we use for CUDA. Every time CUDA has an update, CUDA throws obtuse errors until reboot.
To say only kernel updates require reboot is naive.
Damn, yeah, I didn’t think of that either. Alright, scratch what I said. The point still stands that you very rarely need to reboot outside of scenarios involving very critical processes like these, which depend on what work you do with the machine.
It’s been a long slow night and morning and I was half awake when I said that. Hell I’m still half awake now, just disregard anything I’ve said.
Seems to be sloppy engineering. We ran a huge multi-site operation on Linux and did not need to.
So you never updated the kernel?
Of course, we did. Whenever there were updates. And there were no surprises because of badly initialized services.
A couple days ago, but I have a company issued remote managed windows laptop, and I get zero say in the matter.
At least once a month my system forces me to do a reboot for updates.
I can tell it to wait, but I cannot tell it to stop.
Yesterday, on one of my family members’ computers, the laptop speakers stopped working. After an hour of clicking through legacy UI trying to fix it (Lenovo Yoga 730, if someone could help me), I gave up and plugged my Linux boot USB in to test whether it was a driver issue or something similar. I misclicked in the boot menu and had to wait half an hour for a random Windows update (I did not start it; I had used the physical button to turn the machine off, since with Windows 11 turning off the computer via software requires so much mouse movement).
Haven’t used windows in a while huh?
Edit: Just to clarify, I run A LOT of operating systems in my lab: RHEL, Debian, Ubuntu (several LTS flavors), TrueNAS, Unraid, RancherOS, ESXi, Windows 2003 thru 2022, Windows 10, Windows 11.
My latest headless Steam box with Windows 11, based on an AMD 5600G, basically reboots about as fast as I can retype my password in RDP.
What does headless mean in this context?
Probably a gaming PC (as he mentioned Steam) without a display connected to it that’s used for game streaming using Parsec or other software like Sunshine. By the way, if you want to try that setup yourself make sure you get a dummy plug (HDMI or DisplayPort) for the GPU as Windows doesn’t really allow video capture if no display is detected.
This, thanks. I just use Steam Link though, works well enough for my needs.
I have extra time because I don’t waste my time on making up arguments!
And boy do you guys ever talk about Windows… Like constantly. Go on any Linux subreddit or community and 8 of the top 10 posts will mention Windows.
Omg. This hits home. I think Linux has prompted / asked me to reboot one time since I installed it 2 months ago. Windows wants you to reboot every time you change anything. I didn’t realize how insanely often it asks until I had something to compare it to.
I got a friend trying Linux for the first time and they asked for some help picking software to install, like which office suite or photo app etc… They just instinctively rebooted after everything they did like it was a pavlovian response, lol.
This will vary by distro. Arch for example expects (but doesn’t ask) you to reboot quite often since their packages are “bleeding edge” and update the kernel often.
The last update to NTFS was in 2004.
The fact that ReFS doesn’t even support all the features NTFS does is pathetic.
Genuine question, not being sarcastic.
What’s the benefit to the average end user to modernizing NTFS?
Sure, I love having btrfs on my NAS for all the features it brings, but I’m not a normal person. What significant changes that would affect your average user does NTFS require to modernize it?
I just see it as an “if it’s not broken” type thing. I can’t say I’ve ever given the slightest care about what filesystem my computer was running until I got into NAS/backups, which itself was a good 10 years after I got into building PCs. The way I see it, it doesn’t really matter when I’m reinstalling every few years and have backups elsewhere.
- Near instantaneous snapshots and rollback (would help with system restore etc)
- Compression that uses a modern algorithm
- Checking for silent corruption, so users know if their files are no longer correct
I’d add better built in multi-device support and recovery (think RAID and drive pooling) but that might be beyond the “average” user (which is always a vague term and I feel there are many types of users within that average). E.g. users that mod their games can benefit from snapshots and/or reflink copies allowing to make backups of their game dirs without taking up any additional space beyond the changes that the mods add.
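The silent-corruption point above is easy to demonstrate. Filesystems like btrfs and ZFS do this per block in the kernel; the following is just a userspace sketch of the same idea (store a checksum at write time, re-hash at read time, flag any mismatch):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Record a SHA-256 digest at write time."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, recorded: str) -> bool:
    """True if the data still matches the checksum recorded when written."""
    return checksum(data) == recorded

original = b"important family photo bytes"
recorded = checksum(original)

# A single flipped bit (e.g. from a failing drive) is caught immediately:
corrupted = b"important family photo bytez"
assert verify(original, recorded)
assert not verify(corrupted, recorded)
```

Without this, a bad sector just hands back wrong bytes and nobody notices until the photo won’t open. With it, the filesystem can at least warn you, and with RAID/multi-copy setups it can repair from a good copy.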
Add speed in there
NTFS is slow
I agree all those are nice things to have, and things I’d want to see in an update. Now how can you sell those features to management? How do these improve the experience for the everyday end user?
I’d say the snapshots feature could be a major selling point. Windows needs a good backup/restore solution.
It just seems like potentially a ton of work to satisfy the needs of “people who think about filesystems”, which is an extremely small subset of users. I can see how it might be hard to get the manpower and resources needed to rework the Windows default filesystem.
I really have no clue how much work it takes though, so it’s just speculation on my end. I’m just curious; on one hand, I do see where NTFS is way behind, but on the other… who cares? I’ve somehow made it past 20 years of building Windows PCs without really caring what filesystem I’ve used, from 95 all the way to 11.
I’m not sure you need to sell it to actual users. A lot of the benefits of an advanced filesystem could be delivered by the OS itself, almost transparently. All of the features I mentioned could be managed by Windows with only minimal changes to the UI. Even reflink copies could just be a control panel option, then used by default in Explorer (the equivalent of `cp --reflink=auto` on Linux). And from the OS side, deduplication would help a lot on Windows given all of the DLL bundling and the weird shit they have to do to maintain legacy compatibility, which is no small thing given how space-inefficient modern Windows installs have become. It would be some work to upgrade (maybe a lot, given how full of legacy-compatibility cruft Windows is), but it would eventually make the system more reliable and more space-efficient.
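For anyone unfamiliar with reflinks: on Linux they’re requested with the `FICLONE` ioctl, which is what `cp --reflink` uses under the hood, and which makes the destination share extents with the source instead of copying bytes. A hedged sketch; the list of errno values treated as “reflinks unsupported here, fall back to a plain copy” is my assumption:

```python
import errno
import fcntl
import shutil

# FICLONE ioctl number on Linux (from <linux/fs.h>).
FICLONE = 0x40049409

def reflink_copy(src: str, dst: str) -> bool:
    """Try a reflink (copy-on-write) clone; fall back to a normal byte copy.
    Returns True if the cheap reflink path was taken."""
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        try:
            fcntl.ioctl(fdst.fileno(), FICLONE, fsrc.fileno())
            return True
        except OSError as e:
            # Filesystem doesn't support cloning (ext4, tmpfs, cross-device...)
            if e.errno not in (errno.EOPNOTSUPP, errno.EXDEV,
                               errno.EINVAL, errno.ENOTTY):
                raise
    shutil.copyfile(src, dst)  # plain copy fallback
    return False
```

On btrfs/XFS the clone is near-instant regardless of file size; on ext4 it silently degrades to a normal copy, which is exactly the `--reflink=auto` behavior.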
But yeah, there are challenges. I’m mainly speaking in terms of `btrfs`, which would take some time to port to Windows (there is a 3rd-party driver, although I suspect they’d want to handle it themselves). More likely they’d use their own ReFS, which I haven’t investigated seriously, so I can’t say how ready it is for prime time. But given that it’s being included as an option in some enterprise/server editions of Windows, maybe it will be in consumer editions soon anyway (as much as I’d prefer something more open and widely supported, at least it’s a step forward on Windows).
At the very least, better filesystem-level compression support. A somewhat common use case might be people who use emulators. Both Wii U and PS3 are consoles where major emulators just use a folder on your filesystem. I know a lot of emulator users who are non-technical to the point that they don’t have “show hidden files and folders” enabled.
Also your average person wouldn’t necessarily need checksums, but having them built into the filesystem would lead to overall more reliability.
You’d think it’d be ready… Haven’t they been developing it for like a decade?
Unbelievably, Windows still has a ridiculously short filepath length limit.
Nope, long paths have been supported since 8.1 or 10, but you have to enable it yourself because very old apps can break.
Furthermore, apps using the Unicode versions of functions (which all apps should have been doing for a couple of decades now) get a maximum path length of roughly 32,000 characters.
That’s not an NTFS issue. That’s a Windows issue.
That’s not even a Windows issue, that’s an issue with specific Win32 API.
Are you writing paragraphs for folder/file names? That’s one “issue” I never had a problem with.
Maybe enterprises need a solution for it but that’s a very different use case from most end users.
Improvements are always welcome but saying it’s “ridiculously short” makes the problem sound worse than it is.
I think they mean the full path length. As in you can’t nest folders too deep or the total path length hits a limit. Not individual folder name limits.
File paths. Not just the filename, the entire directory path, including the filename. It’s way too easy to run up against the limit if you’re actually organized.
It might be 255 characters for the entire path?
I’ve run into it at work where I don’t get to choose many elements. Thanks “My Name - OneDrive” and people who insist on embedding file information into filenames.
The limit was 260. The OS and the filesystem support more. You have to enable a registry key, and apps need to have a manifest which says they understand file paths longer than 260 characters. So while it hasn’t been a limitation for a while, as long as apps were coded to support shorter path lengths it will continue to be a problem. There needs to be a conversion mechanism like Windows 95 had, so that apps could continue to use short file names. Internally the app could use short path names while the rest of the OS was no longer held back.
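To make the 260-character budget concrete: it applies to the whole path, not per component, so a OneDrive-style prefix plus a few nested project folders eats it fast. The folder names below are made up for illustration:

```python
# Classic Win32 MAX_PATH is 260 characters for the *entire* path.
MAX_PATH = 260

# Hypothetical but realistic corporate OneDrive prefix:
prefix = r"C:\Users\firstname.lastname\OneDrive - Example Corp GmbH\Documents"
parts = [
    "2023 Projects",
    "Q3 - Regional Sales Analysis (Final)",
    "Shared with Regional Leadership Team - Do Not Distribute",
    "Backup of Backup (do not delete)",
    "Archive (pre-migration)",
    "Quarterly Report v7 FINAL (Johns edits) (2).xlsx",
]
path = "\\".join([prefix] + parts)

print(len(path), "characters")
print("over the legacy limit!" if len(path) > MAX_PATH else "still fits")
```

Six perfectly reasonable folder names and the path is already past 260, which is exactly the “organized people hit it first” complaint above.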
32k Unicode characters. No, mate, it’s not easy to run up.
You like diving 12 folders deep to find the file you’re after? I feel like there’s better, more efficient ways to be organized using metadata, but maybe I’m wrong.
Not OP, but I occasionally come across this issue at work, where some user complains that they are unable to access a file/folder because of the limit. You often find this in medium-large organisations with many regions and divisions and departments etc. Usually they would create a shortcut to their team/project’s folder space so they don’t have to manually navigate to it each time. The folder structure might be quite nested, but it’s organized logically; it makes sense. Better than dumping millions of files into a single folder.
Anyways, this isn’t actually an NTFS limit, but a Windows API limit. There’s even a registry value (`LongPathsEnabled`) you can change to lift the limit, but the problem is that it can crash legacy programs or lead to unexpected behavior, so large organisations (like ours) shy away from the change.
C:\Users\axexandriaanastasiachristianson\Downloads\some_git_repo\src\...
You run into the file path limit all the fucking time if you’re a developer at an organization that enforces full-name usernames.
I think I’ve spotted the real problem.
People have been talking about the real problem from the beginning of the thread: small character limit on file paths.
The limit is 32,000 characters.
I would be pissed if they made me use such a ridiculously long login name at work. Mine is twelve characters and that’s already a pain in the ass (but it’s a huge company and I have a really common name, so I guess all the shorter variations were already taken).
Edit: Also, I checked: it’s really very simple to enable 32k-character paths in recent versions of Windows.
Metadata is slow, messy, and volatile. Also, shortcuts are a thing.
You want your filesystems to be old and stable. It’s new filesystems you want to view with suspicion: they’re not battle-tested.
I wouldn’t really say so. Of course it’s not a good idea to take the absolute latest filesystem as your daily driver, since it’s probably not bug-free yet, but you also don’t want to use something extremely old just because it’s been tested more, because then you’re trading away performance and features for nothing. For example, ext4 is extremely reliable, and its stable version is 15 years newer than NTFS.
I’m a client-side technician working in a predominantly Windows environment for the last 8 going on 9 years.
Out of all the issues I have seen on Windows, filesystem issues are rather low on that list in terms of prevalence; I don’t recall one that wasn’t explainable by hardware failure or an interrupted write. Not saying it doesn’t happen, or that ext4 is bad or anything, but I don’t work in Linux all that much, so me saying that I never had an issue with ext4 isn’t the same, because I don’t have nearly the same amount of experience.
Also ext came about in 1992, so 31 years so far to hash out the bugs is no small amount of time. Especially in terms of computing.
I read “NFTs turns 30 yo”. Definitely need an exorcism.
I did as well. Time to find some mind bleach.
I read it as NFTS and was very confused for a minute.
NFTS: you invest all of your data into it, and it grows and grows until it suddenly disappears as you discover it was a scam all along.
It is weird to me that Microsoft hasn’t updated the file system in so long. They were going to with Longhorn/Vista, but that failed and it seems like they’ve been gunshy ever since.
You sound like you weren’t around during the Windows Vista/Longhorn development days, when they promised a successor to NTFS and then, over the course of the next couple of years, bailed on that (and nearly every other promise made).
WinFS: https://www.zdnet.com/article/bill-gates-biggest-microsoft-product-regret-winfs/
And FWIW, they are developing ReFS, which looks like it will finally supplant NTFS, but given MS’ business model, don’t expect NTFS to ever really disappear.
Yeah, I definitely was. I think that gave them PTSD or something, because they haven’t even tried to make moderate changes to NTFS since. And besides ReFS, which I hadn’t heard about until this thread, they haven’t even done something as minor as giving you the option to use a different file system like ext4.
NTFS has evolved over the years, but the base structure is mostly unchanged. Things have changed, but not the name. I think they’ve been using NTFS v3 for a while now…
Yeah, that’s what I mean. There have been small changes, but nothing major and if the other poster was right, even minor changes haven’t been made since 2004.
Meanwhile Apple has come out with APFS and *nix variants have multiple file systems, each more modern than NTFS.
It is weird to me. Here’s hoping ReFS or some other file system comes out.
ReFS is out. But only specific revisions of Windows, notably Windows server, can use it for specific use cases.
I tried setting up ReFS on a disk for a cluster of Hyper-V systems. I couldn’t, because they were using a cluster-shared DAS, and in that version of Windows Server (or of ReFS) there was no support for clustered access to the FS. It should otherwise have worked; it just seems a bit incomplete at the moment. If I had been using it for CIFS access from a single server, then yeah, it probably would have been fine. It was just the clustered direct access that wasn’t yet supported.
Windows desktop is unlikely to get ReFS support until the fs is more mature, and it’s likely that will be limited to non-os disks for a while.
It’s pretty far along right now; it’s just that MS isn’t going to pop open any champagne until the FS can hold its own as a direct replacement and upgrade for NTFS, with all the capabilities and features required (and more).
I’ll note that the vast majority of systems running some kind of *nix are generally using either ext2 or ext3. Where ext3 is essentially just ext2 with journaling (which is something NTFS has, AFAIK), and ext2 is just as old as NTFS.
We can argue and complain all we want, but these are tried and true, battle tested file systems that do the job adequately for the demands of systems, both in the past, and now. They do one fairly simple thing… Organizing data on disk into files and directories, and enabling that data to be written, updated, read from, and otherwise retrieved when needed.
I know in IT we don’t go by the saying “if it’s not broken don’t fix it”, since all of us have horror stories of when you don’t fix something that’s not broken and something very bad happens… But I would say that systems like ext2/3 and NTFS have achieved the coveted goal of RFC 1925, rule 12: In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.
There’s no fat in these file systems. Everything in them generally exists for good reason, the fs is stable and does the required job.
Does that mean we should pack it up, we’ll never need another fs again? Absolutely not. We will hit the hard upper limits of what these file systems can do, eventually; probably fairly soon, but that doesn’t mean that either is bad simply because they are old.
It is weird to me that Microsoft hasn’t updated the file system in so long.
Honest question: why? NTFS isn’t great, it isn’t terrible, it’s functional. I don’t really spend any time thinking about my filesystem. I like having symbolic links on my Linux boxes, but aside from that I just want it to work, and NTFS does.
NTFS has symbolic links as well, I use them all the time
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/mklink
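On Windows you’d create these with `mklink` from an elevated prompt (per the linked docs), but the same operation is exposed cross-platform through Python’s `os.symlink`, which makes for an easy way to try it. A small sketch; note that on Windows, creating symlinks requires admin rights or Developer Mode:

```python
import os

def link_and_read(target: str, link: str, text: str) -> str:
    """Write text to target, symlink it, and read it back through the link.
    Demonstrates that opens on the link resolve to the target's contents."""
    with open(target, "w") as f:
        f.write(text)
    os.symlink(target, link)   # NTFS equivalent: mklink <link> <target>
    with open(link) as f:
        return f.read()
```

Hard links (`mklink /H`) and directory junctions (`mklink /J`) are the other two flavors NTFS supports, each with slightly different resolution rules.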
I knew it supported hard links, where the fuck has this been?!
Honest answer: it’s fragile. There are many cases of media durability being an issue, and there will be going into the future. Adding a layer of ECC in the FS goes a long way.
WinFS wasn’t a replacement for NTFS as much as it was a supplement. Documents could be broken apart into atomic pieces, like an embedded image, and each piece would be indexed on its own. Those pieces were kept in something more like a SQL database, similar to using binary blobs in SharePoint Portal, but that database was still written to disk on an NTFS partition as I recall. WinFS was responsible for bringing those pieces back together to represent a complete document if you were transferring it to a non-WinFS filesystem or to a different system altogether. It wasn’t a new filesystem as much as it was a database with a filesystem driver.
Can it die now? ZFS all the things!
What the hell ever happened with ReiserFS (or whatever it was called?) It was supposed to be used in Vista, and then just never was.
Its primary author and maintainer killed his wife and went to prison. The FS stagnated after that.
oh, you mean MurderFS?
What’s the difference from XFS?
XFS is more like ext3 or ext4 than ZFS. It has no COW or snapshots, although it is very performant and can handle very large volumes. It’s a pretty good all-around filesystem. I trust it more than ext4, but you also can’t shrink it like you can ext4.
I use both. I like Linux better, even more since W10. It’s spyware, crap, all those nasty things. But hey, I’m a PC gamer and, sadly enough, my games (80% of them) all get funky in Linux (Wine, PlayOnLinux… I tried it all), so I guess I’m stuck with the crap. But again, Linux is far better and superior.
When’s the last time you tried gaming on Linux? Valve has made a ton of progress with Proton in the last few years.
It’s been a few months now, so I guess I could try it again
Anticheat is still unavailable on the games I play the most, unfortunately. No warzone, no Fortnite, no Halo MCC, there’s Apex at least.
MCC works now
MCC multiplayer works on Linux now?
Yup, they updated it a few months ago.
Hot damn, here I go!
I miss defragging…
😂😂 why?
The pretty visualization in win95 was kinda great. Really good colors. Graphic design may have peaked in 1994.
For me - It was comforting to know that I have a magic tool that I can run that makes my PC faster.
How old is ext4?
15 years.
Modern Linux systems are slowly moving toward Btrfs at least… which is pretty young compared to ext4 and NTFS.
XFS, the default filesystem in Red Hat, is older than NTFS. Released 1994.
I’ll say this, the previous admin of one of the Linux servers I support set up RAID-0 striping for the main data slice (must have been dropped on their head as a child or something). Two drives, and one of the drives developed bad sectors, but I was still able to recover 95% of the data before it shit the bed completely. So, XFS is apparently quite resilient, or I got lucky.
This might sound ignorant, but that’s because I am. Why doesn’t Windows just use ext4, btrfs, XFS, or something open source? They wouldn’t have to worry about developing it, so it’d be a load off their chest, and they could get really good features that even NTFS doesn’t have. Well, maybe not with ext4, but with btrfs.
Microsoft really, really hated open source some time ago. Now they seem to have embraced it, though some still think that might be an attempt to EEE (embrace, extend, extinguish).
Still, I suppose Microsoft doesn’t think replacing the Windows default filesystem is a sound investment at this point even if the political resistance to such a change is, supposedly, gone.
Also NTFS is constantly evolving and it’s not the same as 30 years ago.
Microsoft has a replacement called ReFS, but I don’t know what happened to it.
It’s sorta kinda usable but not really? Its main purpose seems to be causing permanent data corruption in iSCSI storage for Veeam backup appliances.
You got that right!!
It’s sort of around, but it seems to be more aimed at servers than consumer machines.
Why should they use anything else if NTFS has been great for 30 years?
idk who was dumb enough to upvote this, but NTFS hasn’t been great. That’s why they’re making a replacement called ReFS.
It stood the test of time. Is it up to par with modern alternatives? Mmm, no. But for 30-year-old tech, it’s pretty freaking awesome!
Windows recovery is unable to boot.
There is an open-source btrfs kernel driver for it, and a userspace one for ext4.
Not Terribly Fast System
I heard rumours of Windows 12 moving away from NTFS (I assume keeping it as a legacy option).
What are they moving to?
ReFS allegedly
Does NTFS allow for merging of disks into a single partition? Apple was able to do this by combining a larger HDD with a smaller SSD into a single virtual HFS+ volume.
Yep. You need to convert the disk into a “dynamic disk” (no data loss btw) and then you can create a “spanned volume” across the disks. You can also create a striped volume for performance, which is basically RAID 0.
But apparently dynamic disks are now deprecated and Microsoft wants you to use “storage spaces” instead, which is basically RAID and not just simple spanned volumes. The problem with this, IIRC, is that you’ll need at least two extra drives (in addition to the drive where Windows is installed).
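For anyone unclear on what a striped volume actually does: chunks of data are dealt round-robin across the member disks, so sequential reads and writes hit all disks in parallel (and losing any one disk loses everything, which is the RAID 0 trade-off). A toy in-memory model, not how NTFS or Storage Spaces lay things out on disk:

```python
from itertools import zip_longest

def stripe(data: bytes, ndisks: int = 2, chunk: int = 4):
    """Split data into chunk-sized pieces, dealt round-robin across ndisks."""
    disks = [[] for _ in range(ndisks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % ndisks].append(data[i:i + chunk])
    return disks

def unstripe(disks) -> bytes:
    """Reassemble by reading one chunk from each disk in turn."""
    out = []
    for chunks in zip_longest(*disks):
        out.extend(c for c in chunks if c is not None)
    return b"".join(out)
```

A spanned volume, by contrast, just fills disk 1 before touching disk 2: no parallelism, but also no amplified failure risk beyond what each disk already carries.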
I don’t think a spanned volume is quite what they were after. I’m pretty sure macOS uses the SSD part as a cache and it’s used mainly for increasing the performance of the relatively slow but large capacity HDD. Nowadays though you might as well just go with all SSD in most cases if performance matters.
Can you boot from that?