XZ: Avoiding FUD and Learning Lessons

There has already been lots of ambulance-chasing and FUD about the XZ compromise discovered late last week, on the Friday leading into Easter Sunday. Here’s my take on the hard truth of what happened, and what we should learn from it.

People have asked me, both honestly and mockingly, whether SBOMs could have “solved” the XZ compromise. The answer, as with nearly every tool or dataset out there, is no.

99% of the tools whose marketing suggests they could have prevented or detected this attack are misleading. Sure, any tool that maintains a software inventory - whether it's based on SBOMs, SCA scans, or other automated scanning - can help identify impacted assets, which expedites incident response. Whether the compromise is XZ or Log4j, that visibility is immensely helpful for incident responders.
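To make the inventory point concrete, here is a minimal sketch (not any particular vendor's tooling) that walks a directory of CycloneDX JSON SBOMs and flags components named xz or liblzma at the backdoored 5.6.0/5.6.1 versions. The ./sboms directory layout and file naming are assumptions for illustration.

```python
import json
from pathlib import Path

# xz-utils/liblzma releases known to carry the backdoor (CVE-2024-3094).
AFFECTED_VERSIONS = {"5.6.0", "5.6.1"}
COMPONENT_NAMES = {"xz", "xz-utils", "liblzma"}

def find_affected_components(sbom_dir: str):
    """Scan CycloneDX JSON SBOMs under sbom_dir for affected xz components."""
    hits = []
    for path in Path(sbom_dir).glob("**/*.json"):
        try:
            sbom = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # skip files that aren't readable SBOMs
        for component in sbom.get("components", []):
            name = component.get("name", "").lower()
            version = component.get("version", "")
            if name in COMPONENT_NAMES and version in AFFECTED_VERSIONS:
                hits.append((str(path), name, version))
    return hits

if __name__ == "__main__":
    for sbom_path, name, version in find_affected_components("./sboms"):
        print(f"{sbom_path}: {name} {version} is affected")
```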

But regarding the detection of this attack, there is no smoking gun here. The attack was sophisticated, drawn out, and left no easy signature to scan for. The GitHub account likely behind it, JiaT75, had been contributing code to the repository for years and had clearly earned the trust of the other maintainers. There was no obvious red flag, like the author being located in an adversarial country or having other nefarious connections (even though many people in the USG think this is a high-signal detection technique). Clearly, no security teams had discovered evidence of exploitation in their environments. This was a subtle, well-executed software supply chain attack.

So what can we learn from the incident?

1. We can’t catch everything with tools.

This attack wasn’t detected by a fancy security tool or even a world-class security team. A curious developer, Andres Freund, just happened to investigate why his SSH logins were unusually slow and traced the overhead to the liblzma package.

2. Know the open-source software you rely on.

The orgs having the worst time right now are those with no visibility into their OSS footprint, rendering them completely unable to answer the question “Do we have any affected versions of XZ, or the relevant Linux distros, in our environment?”
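For a single host, even a crude check beats having no answer at all. Here is a minimal sketch, assuming the xz binary is on PATH, that parses `xz --version` output and compares it against the known-bad releases; fleet-wide, you would run something like this through your existing endpoint or configuration-management tooling.

```python
import re
import subprocess

AFFECTED = {"5.6.0", "5.6.1"}  # backdoored xz-utils releases (CVE-2024-3094)

def local_xz_version():
    """Return the installed xz version string, or None if xz isn't available."""
    try:
        out = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    # First line typically looks like: "xz (XZ Utils) 5.6.1"
    first_line = (out.splitlines() or [""])[0]
    match = re.search(r"(\d+\.\d+\.\d+)", first_line)
    return match.group(1) if match else None

if __name__ == "__main__":
    version = local_xz_version()
    if version in AFFECTED:
        print(f"WARNING: affected xz version {version} is installed")
    else:
        print(f"xz: {version or 'not installed'}")
```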

3. Zero-trust in the supply chain?

Sure, zero-trust has become a buzzword, but perhaps there’s something we can draw on. How can OSS maintainers raise the bar for how they trust new or unfamiliar contributors? And how can OSS consumers raise the same bar for how much they trust any OSS maintainer?

For example, I’ve talked with security teams that require a manual review the first time any OSS library is introduced into their internal code base, or that review any OSS library younger than a certain age, and so on. None of these checks is perfect or fully comprehensive, but they are steps in the right direction.
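As one hedged illustration of the age check (not how any particular team implements it), the sketch below queries PyPI’s public JSON API for a package’s earliest release date and flags anything younger than an assumed one-year threshold for human review.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(days=365)  # example threshold; tune to your own risk appetite

def first_release_date(package: str):
    """Earliest upload time of any release of `package` on PyPI, or None."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.URLError:
        return None  # unknown package or network issue: treat as reviewable
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    return min(uploads) if uploads else None

def needs_manual_review(package: str) -> bool:
    """Flag packages with no history, or younger than MIN_AGE, for human review."""
    first = first_release_date(package)
    return first is None or datetime.now(timezone.utc) - first < MIN_AGE

if __name__ == "__main__":
    for pkg in ["requests", "some-brand-new-package"]:
        verdict = "manual review" if needs_manual_review(pkg) else "ok"
        print(f"{pkg}: {verdict}")
```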

4. OSS Author as “IOC.”


While enterprise security teams are scrambling to find impacted assets, and a CVE has been created that describes the impacted packages… where’s the authoritative publication of the other OSS repositories that the (alleged) malicious author contributed to? Sure, there’s lots of discussion in the threads of online security forums, but how does that help facilitate response and mitigation activities globally? Perhaps GitHub, GitLab, and other source code repositories could more deeply explore reputation/trust factors for authors based on the quality of their code, not just on how many internet stars other pseudonymous usernames give them.
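As a starting point for that kind of response, here is a rough sketch using GitHub’s public commit-search API to list repositories where a given account appears as a commit author. Search results are capped and rate-limited, so this is a lead generator for manual review, not an authoritative list of the author’s contributions.

```python
import json
import urllib.request

def repos_with_commits_by(author, token=None):
    """Public repositories where `author` appears as a commit author.

    Uses GitHub's commit-search API; results are capped and rate-limited,
    so treat this as a lead generator, not an authoritative list.
    """
    url = f"https://api.github.com/search/commits?q=author:{author}&per_page=100"
    headers = {"Accept": "application/vnd.github+json"}
    if token:  # an API token raises the rate limit; optional for small queries
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return {item["repository"]["full_name"] for item in data.get("items", [])}

if __name__ == "__main__":
    for repo in sorted(repos_with_commits_by("JiaT75")):
        print(repo)
```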

Here’s a nice write-up from Evan Boehs, which I suggest reading: https://boehs.org/node/everything-i-know-about-the-xz-backdoor.

“[In 2021] JiaT75 (Jia Tan) creates their GitHub account. The first commits they make are not to xz, but they are deeply suspicious. Specifically, they open a PR in libarchive: Added error text to warning when untaring with bsdtar. This commit does a little more than it says. It replaces safe_fprintf with an unsafe variant, potentially introducing another vulnerability. The code was merged without any discussion.”

Overall, we all need to take a harder look at how much trust we put in our third-party (and open-source) software, get better at detecting attacks, and invest in making response faster and easier.
