- cross-posted to:
- autism@lemmy.world
I’ve gotten back into tinkering on a little Rust game project; it has about a dozen dependencies on various math and gamedev libraries. When I go to build (just like with npm in my JavaScript projects), cargo needs to download and build just over 200 packages. Three of them build and run “install scripts”, which are themselves Rust programs. I know this because my anti-virus flagged each of them and I had to allow them through so my little roguelike would build.
Like, what are we even suppose to tell “normal people” about security? “Yeah, don’t download files from people you don’t trust and never run executables from the web. How do I install this programming utility? Blindly run code from over 300 people and hope none of them wanted to sneak something malicious in there.”
I don’t want to go back to the days of hand-chiseling every routine into bare silicon, but I feel like there must be a better system we just haven’t devised yet.
Debian actually started to collect and maintain packages of the most important Rust crates. You can use that as a source for cargo.
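For anyone curious, pointing cargo at Debian’s packaged crate sources works through cargo’s source-replacement mechanism. This is a sketch, and the directory path is an assumption about where the Debian Rust packaging installs crate sources; check your own system:

```toml
# Hypothetical ~/.cargo/config.toml: redirect crates.io to the
# locally installed, distro-reviewed crate sources.
[source.crates-io]
replace-with = "debian"

[source.debian]
# Path is an assumption; verify where your distro puts crate sources.
directory = "/usr/share/cargo/registry"
```

With this in place, cargo resolves dependencies from the local directory instead of downloading from crates.io, so you only get crates someone at Debian has packaged.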
Researchers have found a malicious backdoor in a compression tool that made its way into widely used Linux distributions, including those from Red Hat and Debian.
Yeah they messed up once. It’s still miles better than just not having someone looking at the included stuff
You’d think this would be common sense…
those from Red Hat
Not the enterprise stuff; just the beta mayflies.
Which is why you shouldn’t do that. Dependency nightmare is a real problem many developers face. More to the point, they impose it on you as well if you are for any reason forced to use their software. Well-established libraries are a gateway to this. People go out of their way to complicate life for themselves and a massive number of others just so they can avoid writing a function or two. The biggest absurdity I like to point out to people is the existence of the
is-number
NPM package, which does exactly that. It has 2300 projects depending on it!!! The manifest file for said package is bigger than the source. And the author had the hubris to “release it under MIT”. How can you claim copyright on num - num === 0?
On all the projects I manage, I don’t allow new dependencies unless they are absolutely needed and can’t be easily re-implemented. And even then they have to already be in the Debian repository, since that’s a good and easy way to ensure quick fixes and patching should they be needed. Sometimes an alternative to what we wanted to use is already in the repo; then we implement using a different approach. We only have a few Python modules that are not available in the repo.
Managing project complexity is a hard thing, and dependencies especially have a nasty habit of creeping up. I might be too rigid or old-school or whatever you want to call it, but hey, at least we didn’t get our SSH keys stolen by an NPM package.
THIS.
I do not get why people don’t learn from Node/NPM: if your language has no exhaustive standard library, the community ends up reinventing the wheel, and each real-world program has hundreds (or thousands) of dependencies.
Instead of throwing new features at Rust, the maintainers should focus on growing a trusted standard library and improving tooling, but that is less fun, I assume.
Can you give some examples of things missing from Rust standard library?
Easily, just look at the standard libraries of Java/Python and Golang! :-P
To get one thing out of the way: every standard library has dark corners with bad APIs and outdated modules. IMHO it is a tradeoff, and from my experience even a bad standard library works better than everyone reinventing their own small modules. If you want to compare it to human languages: having no standard library is like agreeing on English grammar while everyone mostly makes up their own words, which makes communication challenging.
My examples of items missing from the Rust standard library (correct me if I am wrong; I am not a Rust user, for many reasons):
- Cross-platform GUI library (see Swing/Tk)
- Enough bits to create a server
- Full set of data structures and algorithms
- Full set of serialization format processing for XML/JSON/YAML/CSV/INI files
- HTTP(S) server for production with support for letsencrypt etc.
Things I don’t know whether the Rust standard library provides:
- Go-like communication channels
- High-level parallelism constructs (like Tokio etc.)
My point is to provide good-enough defaults in a standard library that everybody knows and that are well documented and taught. If someone has special needs, they can always reach for a library. Further, if something in the standard library becomes obsolete, it can easily be deprecated.
Python doesn’t have a production web server in its standard library. Neither does Java. Those are external programs or libraries. C# is the only language I know that comes with an official production grade server, and that’s still a separate package (IIS).
Rust has a set of recommended data structures in their standard libraries too: https://doc.rust-lang.org/std/collections/index.html
I don’t know what algorithms you are looking for so can’t answer here.
The rest I don’t think are included in Rust. Then again, they aren’t included in most languages’ standard libraries either.
Golang’s web server is production-grade and used in production. (Of course everyone uses a high-performance proxy like NGINX for serving static pages, but that’s another story.)
Technically you are right that Java has no production web server, which I don’t like; OTOH Java has standard APIs for web servers, and Spring is the de facto standard for web applications. (I totally would not mind moving Spring into the OpenJDK.)
My point is simple: instead of having Rust edition 2020, 2021, etc. and tweaking the syntax ad infinitum, I’d rather have a community that invests in a good/broad standard library and good tooling.
The only platform widely used in production without a big standard library is Node.js/JavaScript, mostly for historical reasons; look at the problems Node.js has had for a decade now because of that missing standard library.
I thought they already had decent tooling and standard libraries?
It does, but the person you reply to apparently expects a standard library to contain an ECS and a rendering engine.
It’s a really wicked problem to be sure. There is work underway in a bunch of places around different approaches to this; take a look at SBoM (software bill-of-materials) and reproducible builds. Doesn’t totally address the trust issue (the malicious xz releases had good gpg signatures from a trusted contributor), but makes it easier to spot binary tampering.
+1
Shameless plug for the OSS Review Toolkit project (https://oss-review-toolkit.org/ort/), which analyzes your package manager metadata, builds a dependency tree, and generates an SBOM for you. It can also check for vulnerabilities with the help of VulnerableCode.
It is mainly aimed at OSS Compliance though.
(I am a contributor)
Do you really need to download new versions at every build? I thought it was common practice to use the oldest safe version of a dependency that offers the functionality you want. That way your project can run on less up to date systems.
Most software does not include detailed security fixes in the changelog for people to check, and many of these security fixes are in dependencies, so they are unlikely to be documented by the software visible to the end user.
So most of the time, the safest “oldest safe” version is just the latest version.
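For what it’s worth, cargo doesn’t re-download new versions on every build anyway: the committed Cargo.lock freezes exact versions, and version requirements only float within semver-compatible ranges unless you pin them. A sketch, with purely illustrative crate names and versions:

```toml
# Cargo.toml (illustrative)
[dependencies]
serde = "1.0"    # default caret semantics: any compatible 1.x >= 1.0
rand = "=0.8.5"  # "=" pins one exact version, for the "oldest safe" approach
```

Versions only actually move when you run `cargo update` (or delete the lockfile), which is the moment to review what changed.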
So only projects like Debian do security backports?
Edit: why the downvote? Is this not something upstream developers do? Security fixes on older releases?
Backports for supported versions, sure.
That’s why there is an incentive to limit support to latest and maybe one previous release, it saves on the backporting burden.
Okay, but are you still going to audit 200 individual dependencies even once?
That’s what the “oldest safe version” is supposed to address.
Because everything is labeled safe and unsafe, right?
Your snark is tremendously conducive to conversation. Go touch some grass.
I’m not familiar with Rust, but at least for Java there’s an OWASP plugin that tells you if you’re using an unsafe library.
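Rust has a rough equivalent in cargo-audit, which checks your Cargo.lock against the RustSec advisory database. A sketch of the usual invocation (output shape will vary):

```
# Install once, then run from the project root.
cargo install cargo-audit
cargo audit
```

Like the OWASP tooling, this only catches *known* advisories; it says nothing about an undisclosed backdoor.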
Like, what are we even suppose
supposed
to tell “normal people” about security? “Yeah, don’t download files from people you don’t trust and never run executables from the web. How do I install this programming utility? Blindly run code from over 300 people and hope none of them wanted to sneak something malicious in there.”
You’re starting to come to an interesting realization about the state of ‘modern’ programming and the risks we saw coming 20 years ago.
I don’t want to go back to the days […]
You don’t need to trade convenience for safety, but having worked in OS Security I would recommend it.
Pulling in random stuff you haven’t validated should feel really uncomfortable as a professional.
deleted by creator
deleted by creator
What a load.
deleted by creator
Every time this happens I become inexplicably happy.
There’s just something about a community doing its fucking job that makes me feel so normal.
Getting noticed because of a 300ms delay at startup, by a person who is not a security researcher or even a programmer, after doing all that, would honestly be depressing.
I love the free software community. This is one of the reasons free software was created: the community defends its users.
I second this. I love to feel part of a community even tho I could have never found the backdoor, let alone fix it.
opensource autists win!
The problem I have with this meme post is that it gives a false sense of security, when it should not.
Open or closed source, human beings have to be very diligent and truly spend the time reviewing others code, even when their project leads are pressuring them to work faster and cut corners.
This situation was a textbook example that this does not always happen. Granted, duplicity was involved, but still.
100%.
In many ways, distributed open source software gives more social attack surfaces, because the system itself is designed to be distributed where a lot of people each handle a different responsibility. Almost every open source license includes an explicit disclaimer of a warranty, with some language that says something like this:
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
Well, bring together enough dependencies, and you’ll see that certain widely distributed software packages depend on the trust of dozens, if not hundreds, of independent maintainers.
This particular xz vulnerability seems to have affected systemd and sshd, using what was a socially engineered attack on a weak point in the entire dependency chain. And this particular type of social engineering (maintainer burnout, looking for a volunteer to take over) seems to fit more directly into open source culture than closed source/corporate development culture.
In the closed source world, there might be fewer places to probe for a weak link (socially or technically), which makes certain types of attacks more difficult. In other words, it might truly be the case that closed source software is less vulnerable to certain types of attacks, even if detection/audit/mitigation of those types of attacks is harder for closed source.
It’s a tradeoff, not a free lunch. I still generally trust open source stuff more, but let’s not pretend it’s literally better in every way.
It’s a tradeoff, not a free lunch. I still generally trust open source stuff more, but let’s not pretend it’s literally better in every way.
Totally agree.
All the pushback I’m getting is from people who seem worried about open source losing a positive talking point when compared to closed source systems, which is not my intention. (I personally use Fedora/KDE.)
But sticking our heads in the sand doesn’t help; when issues arise, we should acknowledge them and correct them.
using what was a socially engineered attack on a weak point in the entire dependency chain.
An example of what you may be speaking about, indirectly. We can only hope that maintainers do due diligence, but it is volunteer work.
Forgot to ask, but I would love to hear your thoughts on what @5C5C5C@programming.dev has commented about this subject: https://lemmy.world/comment/9003210
In the broader context of that thread, I’m inclined to agree with you: The circumstances by which this particular vulnerability was discovered shows that it took a decent amount of luck to catch it, and one can easily imagine a set of circumstances where this vulnerability would’ve slipped by the formal review processes that are applied to updates in these types of packages. And while it would be nice if the billion-dollar-companies that rely on certain packages would provide financial support for the open source projects they use, the question remains on how we should handle it when those corporations don’t. Do we front it ourselves, or just live with the knowledge that our security posture isn’t optimized for safety, because nobody will pay for that improvement?
I feel like the mental gymnastics should end with a rake step
It’s about the complex rationalizations used to create excuses (pretexts).
The original is this:
Alright I won’t argue about that specific version’s point, but this is basically a template for constructing a strawman argument.
Lmao this is the first time I’m seeing this format, I like the content so far.
Wow this is great
I feel like that’s really crappy non-vegan mental gymnastics. I think veganism is morally superior, but I really want to show mine off, just because I’m offended by how stupid all of these are; the fact that I know they’re real makes me more ashamed of eating that yogurt earlier than any amount of chattel slavery or butchery ever will.
Queensbury Rules init.
Init as in 'contraction of "isnt it”’? Or as in ‘initialize’?
Immediately noticed even though the packages have been out for over a month?
They easily could have stolen a ton of information in that month.
Yeah but tbf it was deployed on mostly rolling release and beta releases.
No enterprise on prod is worried because they’re still on RHEL 6 /s
Why the /s? We have been migrating our hosts to RHEL7 for months.
We’ve skipped 7 and are jumping straight to 8. The process has been going on for two years now; 9 was released two years ago.
Ours goes to 11.
My innocent home lab bum thought 4 years would be enough to assume people got off of an EOLd distro lol
Yeah, they got lucky. But it shows how susceptible systems are. Really makes you wonder how many systems are infected with something similar; this wouldn’t be the first backdoor live in Linux systems.
On what? Servers using Arch Linux? Debian Unstable? Fedora 40?
Phew, thankfully everyone follows appropriate procedures and doesn’t just roll out beta updates to production in their systems.
Right?
I hope so lol. At that point that is natural selection though.
It is pretty funny, I bet he’s kicking himself right now for it.
I just updated xz in my system. Thanks Lemmy!
On any server, you want unattended upgrades.
Depends. For example, Debian’s unattended-upgrades caused system restarts after many updates, which was extremely inconvenient for me because I have a fairly manual bringup process. I had restarts turned off in its settings and it still restarted.
I uninstalled it and haven’t had a single unwanted restart since, so manual upgrades it is.
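For reference, the knob that is supposed to control this lives in the apt configuration for unattended-upgrades; a sketch (the exact file path and defaults can vary by release):

```
// /etc/apt/apt.conf.d/50unattended-upgrades
// The setting that is supposed to suppress automatic reboots:
Unattended-Upgrade::Automatic-Reboot "false";
// Or, if reboots are acceptable, confine them to a window:
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```

Note that this only governs reboots triggered by unattended-upgrades itself; other mechanisms (e.g. needrestart prompts or manual kernel updates) are configured separately.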
I’ve been using it for 10+ years on servers and it’s not been an issue for me.
deleted by creator
For the uninitiated, this is a representation of the Survivorship Bias.
Essentially, the red dots represent bullet holes from aircraft which returned from battle.
If you were to ask someone which places should be reinforced with armour, someone who has the Survivorship Bias would say “where the red dots are”, whereas people who know anything about engineering would say “everywhere else!”
It’s like saying: “why are you wearing a helmet? I’ve met hundreds of soldiers and none of them have ever been shot in the head, helmets are a waste of good armour.”
A true fact: did you know wearing a helmet increases your chances of dying of cancer?
A true fact: did you know wearing a helmet increases your chances of dying of cancer?
Rofl I love this. Great comment
What are you saying? That there are people doing the top version (“I want a backdoor / I ask the corpo to grant me access”) for FOSS but they’re less likely to get caught if they don’t do all the gymnastics?
OP is referring to a backdoor that was found. It apparently modified behaviour in a way that was noticeable to humans, suggesting that it was built by an unskilled adversary.
It’s a safe bet that there are others (in FOSS) that remain undiscovered. We know that skilled adversaries can produce pretty amazing attacks (e.g. stuxnet), so it seems likely that similar vulnerabilities remain in other FOSS packages.
Stuxnet was done by a literal army assembled by state actors with massive funding, hoarding zero-days. If an attack like that came at you, there is very little you could do.
It’s a safe bet that there are others (in FOSS) that remain undiscovered.
I agree, but I don’t think that image (about survivors’ bias) applies to the op meme then, as that would imply that it only seems like open source backdoors are convoluted because we’ve not found the simple/obvious ones
Survivorship bias or survival bias is the logical error of concentrating on entities that passed a selection process while overlooking those that did not. This can lead to incorrect conclusions because of incomplete data.
In this case, the selection process is discovering human-evident back doors. It fits by my reading.
What did i miss?
OpenSSH backdoor
Openssh backdoor via a trojan’ed release of liblzma
Ever wondered why ${insert_proprietary_software_here} takes so long to boot?
related blog - https://robmensching.com/blog/posts/2024/03/30/a-microcosm-of-the-interactions-in-open-source-projects/
Make no mistake. This is the way it works.
It needs to change.
Agreed.