Are you an AppSec engineer? Are you trying to minimize the mass exploitability of your attack surface? What does success look like to you? My guess is it looks a little something like this:
No supply chain highs/criticals/exploit chains in your environment.
Depending on your technology stack(s), this is hard! For example, Node tech stacks can mean hundreds of dependencies with dozens of advisories. If you're a small shop and you can just patch everything as soon as an advisory drops, great! Do that! But...what if you can't? Let me ask you some questions (be honest):
- How long did you spend getting your engineering team to do your last security-related package upgrade?
- How much political capital did it cost you?
- How many emails did you get from Product hand-wringing about timelines/velocity/resource allocation?
When I wore an AppSec hat at my previous job, I spent a lot of time wrangling dependency vulnerabilities, as I'm sure you all do: this problem has gotten worse, not better. First, some things I tried that did not work:
- Supply chain inbox zero - lots of time investment and manual effort for products that only scan manifests and lockfiles.
- "Just upgrade all of your dependencies." This doesn't scale at an enterprise level. Library changes often require extensive testing and sometimes refactoring.
Like many of you, I determined that my best bet was to focus on the highest-impact vulnerabilities that were actually reachable in my codebase. This meant a LOT of my supply chain time was spent:
1. Reading an advisory from a huge stack of findings to get context on the vulnerability.
2. Understanding the entry point.
3. Grepping that codebase for the vulnerable function.
4. Repeating steps 1-3 N times, where N is the number of open findings.
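The triage loop above can be sketched as a script. This is a toy illustration of the manual "grep for the vulnerable function" step, not a real scanner; the function name and file layout are made up, and the vulnerable symbol still has to come from reading the advisory by hand:

```python
import os
import re

def find_vulnerable_calls(repo_root, func_name, extensions=(".js", ".ts")):
    """Walk a repo checkout and report lines that call a given function.

    func_name is whatever symbol the advisory names as the vulnerable
    entry point -- identifying it is still manual work.
    """
    pattern = re.compile(r"\b" + re.escape(func_name) + r"\s*\(")
    hits = []
    for dirpath, dirnames, filenames in os.walk(repo_root):
        # Skip vendored dependencies; we only care about first-party code.
        dirnames[:] = [d for d in dirnames if d != "node_modules"]
        for fname in filenames:
            if fname.endswith(extensions):
                path = os.path.join(dirpath, fname)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if pattern.search(line):
                            hits.append((path, lineno, line.strip()))
    return hits
```

Multiply that by N findings and by every repository you own, and the hours disappear fast.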
Sometimes I got lucky: dependencies were re-used across codebases, so I could skip steps 1 and 2. But when you have tens, hundreds, or even thousands of repositories to repeat this on, even with scripting tools you're doing steps 1, 2, and parts of 3 by hand; automation only scales that time investment out. Furthermore, the number of hats you wear as an AppSec engineer grows as your team shrinks - if it's just you, you're juggling a whole program! Dependency management can't take up more than a couple of hours a week; you've got too much other stuff to do.
Does this sound like you?
In 2019, Whitesource (now Mend) discussed their reachability product and claimed, based on user surveys, that between 70% and 85% of open source vulnerabilities are not called by user code. ShiftLeft published a blog post demonstrating their reachability analysis reducing finding noise by over 90%.
So, how is Semgrep Supply Chain’s approach different? Most established tools either lack solid reachability support or require building projects, instrumenting binaries, or deploying agents to achieve it:
| Technology | Language Support | Requires Build | Requires Agent | Reference |
|---|---|---|---|---|
| govulncheck | Go | Yes (Go 1.18+) | No | govulncheck documentation |
| Dependabot | Python | No | No | Dependabot reachability announcement blog |
| Mend (formerly WhiteSource) | Java/Scala/Kotlin, Node, C#, Python | Yes | Yes | Effective Usage Analysis PDF |
| Snyk | Java | No | No | Snyk Open Source reachability support |
At launch, Semgrep Supply Chain supports the Node (npm), Python (PyPI), Go (gomod), and Ruby (gem) package registries, with experimental support for Java (Maven). It works just like the Semgrep you know and love - no build step or agent required! Turn it on in your environment and it will "just work," with the same speed you've come to expect from Semgrep and no additional configuration. If Semgrep is already deployed, you're good to go.
Our research team automatically detects new entries in publicly-accessible vulnerability databases and uses Semgrep's powerful polyglot engine to write high-confidence rules to detect impacted functionality. Emerging vulnerabilities in ecosystems supported by Semgrep will be automatically covered without any configuration by you, the end user. While we are currently prioritizing new and emerging vulnerabilities, r2c's internal tooling allows our team to quickly backfill reachability rules for customer priority CVEs. Need coverage for a specific set of older CVEs endemic to your environment? Just reach out to us!
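For a sense of the kind of data those public vulnerability databases publish, here's a toy sketch that pulls the affected package and first fixed version out of an advisory in the OSV schema (the format used by osv.dev and GitHub Advisories). The advisory ID and values below are made up, and this is an illustration of the data shape, not r2c's actual pipeline:

```python
import json

# A trimmed advisory in the OSV schema; real entries carry much more
# detail (references, severity, ecosystem-specific fields). The ID and
# values here are placeholders, not a real advisory.
ADVISORY = json.loads("""
{
  "id": "GHSA-xxxx-xxxx-xxxx",
  "affected": [
    {
      "package": {"ecosystem": "RubyGems", "name": "nokogiri"},
      "ranges": [
        {"type": "ECOSYSTEM",
         "events": [{"introduced": "0"}, {"fixed": "1.13.6"}]}
      ]
    }
  ]
}
""")

def affected_packages(advisory):
    """Yield (ecosystem, package, fixed_version) tuples from an OSV advisory."""
    for entry in advisory.get("affected", []):
        pkg = entry["package"]
        fixed = None
        for rng in entry.get("ranges", []):
            for event in rng.get("events", []):
                fixed = event.get("fixed", fixed)
        yield pkg["ecosystem"], pkg["name"], fixed

print(list(affected_packages(ADVISORY)))  # -> [('RubyGems', 'nokogiri', '1.13.6')]
```

Monitoring feeds like this tells you *that* a package version is vulnerable; the hard part Semgrep's research team adds on top is the rule describing *how* vulnerable code actually calls it.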
Does this sound great, but your organization needs assurances of coverage parity with tools like Dependabot for regulatory reasons? No worries! Semgrep also maintains a set of "supply chain v1" rules that simply detect the presence of vulnerable versions. The research team has been in your shoes: we fully understand that comprehensive results are nice (and sometimes required), but what you really need is a short list of high-impact, potentially exploitable issues to spend your limited political capital fixing. With Semgrep Supply Chain, "inbox zero" becomes an achievable goal: your inbox is up to 99% smaller! It's still an automated tool, and it still needs a human touch, but you're trading an avalanche of "a problem might exist" findings for a handful of "a problem likely exists" findings.
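Those presence-only "v1" checks boil down to comparing a lockfile version against an advisory's fixed version. A minimal sketch, with a toy version parser and illustrative lockfile data (not real scan output):

```python
def parse_version(v):
    """Split '1.13.5' into (1, 13, 5) for tuple comparison.

    Toy parser: real version schemes (pre-releases, epochs, build
    metadata) need a proper library per ecosystem.
    """
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed_in):
    """Presence-only check: the installed version predates the fix."""
    return parse_version(installed) < parse_version(fixed_in)

# Illustrative lockfile contents and advisory data, not real findings.
lockfile = {"nokogiri": "1.13.4", "rails": "7.0.4"}
advisories = {"nokogiri": "1.13.6"}   # package -> first fixed version

findings = [pkg for pkg, ver in lockfile.items()
            if pkg in advisories and is_vulnerable(ver, advisories[pkg])]
print(findings)  # -> ['nokogiri']
```

This is all a manifest-scanning tool can tell you: the vulnerable version is present. Reachability rules answer the follow-up question - is the vulnerable code actually called?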
Nobody gets anywhere playing defense in security without a healthy dose of skepticism...or paranoia, depending on who's asking. You might be saying to yourself: "That's great if it works, but how do I know if it works?" Fair question. Allow me to elaborate.
The engine built by our team of program analysis PhDs and the security detections ("rules") written by our experienced security researchers are trusted by companies like Slack, Snowflake, Figma, and many more to continuously scan every PR. Semgrep Supply Chain uses this same engine to power its reachability capabilities. If you examine a Supply Chain advisory in the App, it’ll even show you the pattern we use to identify reachable findings.
The team in charge of rules has been in your shoes - we have personally felt the pain of managing third-party risk. We know a good tool that doesn't deliver is worse than a bad tool that does.
Most of our rules use open-source software as a test corpus for the authoring process. We leverage contextual clues like library imports and typed metavariables to minimize false positives. Take, for example, CVE-2022-29181: memory corruption (out-of-bound read) via XML parsing in the XML and HTML4 parsers in nokogiri. The Supply Chain rule for this leverages taint mode as well as import statements to ensure that:
- You are directly importing the affected library.
- You are using the impacted functionality of the library - the HTML5 parser is unaffected!
- The rule can handle inline or early-binding parsing constructions.
```yaml
rules:
  - id: ...
    message: >
      nokogiri < 1.13.6 is vulnerable to memory corruption via crafted
      inputs to the XML and HTML4 SAX parsers. Only CRuby implementations
      are affected. Upgrade to nokogiri 1.13.6.
    metadata: ...
    languages:
      - ruby
    severity: ERROR
    mode: taint
    pattern-sources:
      - patterns:
          - pattern: |
              Nokogiri::$P::SAX::Parser.new
          - metavariable-pattern:
              patterns:
                - pattern-not: HTML5
              metavariable: $P
    pattern-sinks:
      - pattern: |
          $PARSER.parse ...
    r2c-internal-project-depends-on:
      depends-on-either:
        - namespace: gem
          package: nokogiri
          version: < 1.13.6
```
Complex rules at risk of high false positive/false negative rates are field tested on open-source projects sourced from GitHub at known-vulnerable versions in their commit history. Rule writing is tool-assisted where possible to ensure consistency, quality, and correctness. Finally, all rules committed to the registry must be peer-reviewed.
The research team also periodically tests Supply Chain rulesets with internal tools that let us run rules against hundreds (or even thousands) of open-source repositories on demand in a matter of minutes. Using modern DevOps tools like Kubernetes, we can run scalable workloads across thousands of targets at Semgrep speed, giving us rule performance feedback on large quantities of public code - even as a PR gate!
While I can't tell you exactly how the sausage is made, here's a sneak peek at some of the technology backing the Supply Chain rule writing infrastructure:
- Automated creation of new Supply Chain rules from publicly disclosed vulnerabilities
- Heuristics to pull proof-of-concept code from advisories and reference links
- Automated notifications of new CVEs from multiple sources for all supported languages
A short refresher on some of the issues with supply chain dependency management today:
- Dependency management is hard!
- "Inbox zero" is great if you can get there...
- ...but if you can't, legacy tools will bury you in work.
- There's a good chance that as few as 1% of your traditional supply chain findings are reachable and require action.
- You have other stuff on your plate - wrangling dependencies is a small part of your job but seems to take up most of your time.
Wouldn't it be better if...
- Dependency management were easy!
- "Inbox zero" were achievable...
- Supply chain findings were almost all true positives.
- Dependency management were a small, manageable part of your workflow.
Y'all ain't never had a friend like me! Claw back those triage hours to work on cool stuff and ask your developers to fix things they're actually using. Keep tabs on which findings you need to act on, track emerging vulnerabilities, and get to "inbox zero" for the stuff that matters with Semgrep Supply Chain.