Crates.io: Malicious Crates faster_log and async_println
Posted 3 months ago · Active 3 months ago
Source: blog.rust-lang.org · Tech story
Key topics
Rust
Supply Chain Security
Malicious Packages
The Rust community is dealing with two malicious crates, faster_log and async_println, discovered on crates.io, highlighting concerns about supply chain security and package management.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · First comment: 29m after posting · Peak period: 17 comments in 0-6h · Avg per period: 6
Based on 24 loaded comments
Key moments
- Story posted: Sep 26, 2025 at 2:28 PM EDT (3 months ago)
- First comment: Sep 26, 2025 at 2:58 PM EDT (29m after posting)
- Peak activity: 17 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Sep 29, 2025 at 9:00 PM EDT (3 months ago)
ID: 45389550 · Type: story · Last synced: 11/20/2025, 2:35:11 PM
What are actual things that crates.io or npm could do, but aren't doing, to improve the security of the ecosystem?
- Substantially the same README as another package (see the sketch after this list)
- README links to a GitHub that links back to a different package
And additionally:
- Training a local LLM on supply-chain malware as examples are captured, and scanning new releases with it. This wouldn't stop an xz-style attack, but it would probably catch crypto stealers some of the time.
- Making a "messages portal" for maintainers and telling them never to click a link in an email to see a message from the repository (and never including a link in legitimate emails). You get an email that you have a message, and you log in to read it.
Just spitballing.
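As one illustration of the README heuristic above, here is a minimal sketch of how a registry-side similarity check could work. The word-shingle size and the 0.8 threshold are arbitrary illustrative choices, not anything crates.io actually runs:

```rust
use std::collections::HashSet;

// Normalize to lowercase words with surrounding punctuation stripped,
// then collect overlapping k-word shingles.
fn shingles(text: &str, k: usize) -> HashSet<Vec<String>> {
    let words: Vec<String> = text
        .split_whitespace()
        .map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()).to_lowercase())
        .filter(|w| !w.is_empty())
        .collect();
    words.windows(k).map(|w| w.to_vec()).collect()
}

// Jaccard similarity: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &HashSet<Vec<String>>, b: &HashSet<Vec<String>>) -> f64 {
    let inter = a.intersection(b).count() as f64;
    let union = a.union(b).count() as f64;
    if union == 0.0 { 0.0 } else { inter / union }
}

// Flag a new upload whose README is near-identical to an existing one.
fn looks_like_readme_clone(new_readme: &str, existing_readme: &str) -> bool {
    jaccard(&shingles(new_readme, 3), &shingles(existing_readme, 3)) > 0.8
}

fn main() {
    let original = "A fast logging library for Rust with zero allocations.";
    let suspect = "A fast logging library, for Rust, with zero allocations!";
    println!("clone? {}", looks_like_readme_clone(suspect, original));
}
```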
Almost anything else would be just a guess, surfaced as information for the devs, and would get in the way of legitimate edge cases. For example, what if you genuinely want to publish a malware example or a vulnerability reproducer? What if you want your own fork of another package because you carry extra patches?
We will struggle to read every release of every package, and we won't catch every attack; I agree. But if we were able to force adversaries to engage in sophisticated, multi-pronged attacks instead of trivially malicious packages, that would be a win. It would make their operations more complex, time-consuming, and prone to failure.
Essentially: building the world from GitHub repos on SLSA L2 hardened infra and delivering directly to our customers to bypass the registry threat vector (which is where the vast, vast majority of attacks occur; we'll be blogging about this soon with more data).
[1] https://www.chainguard.dev/unchained/announcing-chainguard-l...
In this particular case, the bogus libraries had been out there for months. But if, in addition to a delay, you mirror just the most common subset of packages with some opinionated selection criteria and build directly from source, you eliminate most of these attacks. (The same is true across other language ecosystems, including the npm/JS world you mention.)
Is this 100% infallible? No, but security is a risk reduction game.
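The "delay" part of that policy is easy to express in code. Here is a minimal sketch of a quarantine-window admission check for a vetted mirror; the function name and the two-week window are illustrative assumptions, not Chainguard's actual policy:

```rust
use std::time::{Duration, SystemTime};

// Only admit a release into the vetted mirror once it has been public
// for the quarantine window, giving scanners and human reporters time
// to flag it first. (The malicious crates here were live for months,
// so the delay works alongside the opinionated package selection.)
fn admit(published: SystemTime, quarantine: Duration) -> bool {
    match SystemTime::now().duration_since(published) {
        Ok(age) => age >= quarantine,
        Err(_) => false, // publish timestamp in the future: reject outright
    }
}

fn main() {
    let two_weeks = Duration::from_secs(14 * 24 * 60 * 60);
    let three_days_ago = SystemTime::now() - Duration::from_secs(3 * 24 * 60 * 60);
    println!("admit 3-day-old release: {}", admit(three_days_ago, two_weeks));
}
```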
Go back to the distribution/maintainer model. It worked. But it requires that developers slow down the rate of (non-alpha/beta/rc) releases until it matches the maintainer capacity of major software distributions. This is bitter medicine, but it's the solution.
Software distributions exist for a reason. They have maintainers, who are responsible for watching for stuff like this. Unmoderated language-specific registries have encouraged a massive degree of churn. This churn is incompatible with maintainer review, which is why a lot of distributions have basically given up on language-specific registries.
And they still completely missed the xz-utils compromise.
And I'm 90% sure those distribution maintainers don't watch for stuff like this, because they simply wouldn't have the bandwidth to. I think they mostly just determine whether a software package is worth adding in the first place and whether it has problems building. For example, the base software available in Arch is quite limited, while the AUR is a choose-your-own-adventure.
There's no comparison.
That was the culmination of a three-year effort, almost certainly state-backed. Stuff like that happens maybe three times a decade, and it makes headlines. Meanwhile, supply chain attacks against language-specific package managers are a monthly or even weekly event.
There's no comparison.
Or run a separate registry with reviewed packages; I'm sure we're going to see that offered as a service soon.
Developers used to ask themselves: "if my next version changes from requiring (libfoo>=0.3) to requiring (libfoo>=0.4), will that risk it being left out of the next Debian release?" In the C/C++ ecosystem people still ask themselves this question (or something similar). Oftentimes that leads to thoughtful solutions, like being able to build against either libfoo version and simply disabling certain features if (libfoo<0.4).
The churn rate and the "upgrade-all-muh-pkgs-and-hash-em-good" workflow make it painfully impractical to ask this question and give it due consideration.
So we get supply chain attacks.
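In Cargo terms, the libfoo pattern above maps naturally onto feature-gated fallbacks. A minimal sketch, assuming a hypothetical libfoo dependency and a `libfoo-04` feature declared in Cargo.toml; without the feature, the crate still builds against the older library:

```rust
// Without the (hypothetical) "libfoo-04" Cargo feature enabled, the
// fallback below is compiled instead, so the crate still builds on a
// distribution that only ships the older libfoo.

#[cfg(feature = "libfoo-04")]
pub fn checksum(data: &[u8]) -> u64 {
    // would call into libfoo >= 0.4's newer API here
    data.iter().map(|&b| u64::from(b)).sum()
}

#[cfg(not(feature = "libfoo-04"))]
pub fn checksum(data: &[u8]) -> u64 {
    // fallback using only what libfoo 0.3 offers
    data.iter().fold(0, |acc, &b| acc + u64::from(b))
}

fn main() {
    println!("checksum: {}", checksum(b"example"));
}
```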
Some solutions for that include Bootstrappable Builds (and StageX), Reproducible Builds, and crev:
https://bootstrappable.org/
https://stagex.tools/
https://reproducible-builds.org/
https://github.com/crev-dev/
1 more comment available on Hacker News