Is source code inspection a security risk? Maybe not, experts say

Moscow’s recent demand to inspect the source code of American software vendors supplying the Russian government does not pose the severe security threat some are making it out to be, experts say. While sharing source code with a nation-state adversary does make it easier for an attacker to find security flaws, they emphasize, source code is far from the “keys to the kingdom” for bug hunters.
At a time of heightened cyberespionage between the US and Russia, Moscow’s worries about possible backdoors in American software seem like legitimate concerns that justify a request for source code review, experts suggested.
The controversy began in October, when news broke that Hewlett Packard Enterprise had let a Russian defense agency review the source code for the company’s ArcSight SIEM offering (since sold to UK firm Micro Focus International Plc), which is widely used in industry and also by the Pentagon, according to a report by Reuters. The revelation sparked an outcry against sharing source code with foreign governments, and prompted Symantec CEO Greg Clark to tell Reuters, “These are secrets, or things necessary to defend (software). It’s best kept that way.”
Well-known cybersecurity experts, however, dismissed the controversy as a tempest in a teapot. “As someone who has hunted bugs for 15 years, having source code is barely advantageous,” tweeted former NSA hacker Charlie Miller, best known for stunt hacking a Jeep a few years ago. “Counterintuitive but true,” agreed Peiter “Mudge” Zatko, former head of cybersecurity research at DARPA. “You find fewer bugs analyzing source code. You find more bugs evaluating binaries and augmenting with fuzzing.”
Having the source code can make it easier to identify weaker areas to target when researching vulnerabilities in software, for example, and developers sometimes leave behind useful comments in the code, like “come back and finish this later,” that can help attackers. However, a lot can happen when compiling source code, application security experts say. Security flaws that appear to exist in the source code might not exist in the compiled binary, and sometimes the compilation process itself can introduce new, unexpected vulnerabilities.
As a result, application security (appsec) researchers include source code review as only a small part of looking for security flaws. That’s where fuzzing comes in.
What is fuzzing?
Fuzzing attacks a running binary executable by giving the program semi-random data in the hope of causing an unexpected error condition, or even a crash. Analyzing unexpected output can help researchers identify security flaws. Because fuzzing can be automated, attackers can mount highly effective attacks against complex software without ever seeing the source code.
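The core loop can be sketched in a few lines of Python. The `parse_record` function below is a hypothetical toy stand-in for the real target binary, and `fuzz` is a deliberately minimal stand-in for a real fuzzing harness, but together they show the essential move: generate semi-random inputs, feed them to the target, and record anything that triggers an unexpected error.

```python
import random

def parse_record(data: bytes) -> str:
    """Toy parser standing in for the target program (hypothetical)."""
    # Expects input shaped like b"NAME:123"; malformed input raises.
    name, value = data.split(b":", 1)          # ValueError if no colon
    return f"{name.decode()} = {int(value)}"   # raises on bad text/number

def fuzz(target, trials=1000, seed=0):
    """Feed semi-random byte strings to `target`, recording every input
    that raises an exception -- the essence of black-box fuzzing."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        length = rng.randint(0, 12)
        data = bytes(rng.randrange(256) for _ in range(length))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs out of 1000 trials")
```

Real fuzzers such as AFL are far smarter, mutating known-good inputs and using coverage feedback to steer generation, but the principle is the same, and note that nothing here required the target’s source code.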
“Fuzzing gets better results,” says Brian Knopf, senior director of security research at Neustar. “That’s the way you find a zero day. You’re not finding it with code analysis.” Fuzzing, Knopf explains, is “throwing everything and the kitchen sink at [a compiled binary], throwing junk and trying to get something to come out that shouldn’t come out.”
An adversary does not need the source code to engage in this kind of appsec research. A foreign government that purchases American software without the source code will almost certainly fuzz critical software before deploying it in production.
“When you look at the source code, you see what could be,” says Daniel Miessler, director of advisory services at IOActive. “When you’re fuzzing, especially an application in production, you’re seeing the reality of how that application presents to the world.”
Fuzzing has been called a “dumb science,” and many powerful fuzzing tools are freely available online for anyone to download and use. Popular fuzzers include Burp Suite and Wapiti, both web vulnerability scanners; extensible fuzzing frameworks like Peach, SPIKE and Sulley; network-level protocol fuzzers like Scapy; and the ever-popular American fuzzy lop (AFL). Nation-state attackers with the necessary human resources and budget, however, will do more than just fuzz critical software. They’ll reverse engineer it.
Reverse engineering mission-critical software
Reverse engineering is an effective security research tool well within reach of even small nation-state adversaries, says Columbia University professor Steven Bellovin. If you don’t have the source code, he says, “You can always reverse engineer it. There are very good reverse engineering tools, well-known techniques for understanding what compiled code does.”
Reverse engineering takes a compiled binary and, as the name suggests, works backward from the compilation process, disassembling or decompiling it to recover an approximation of the source code: often mangled and difficult to understand, but source code nonetheless. Malware researchers and antivirus companies, for example, do a lot of reverse engineering as part of their work, since viruses don’t typically come with source code to review.
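As a small analogy, the sketch below uses Python bytecode rather than native machine code (a real disassembly session would not fit in a few lines). It disassembles a compiled function and pulls an embedded secret straight out of its constants, much as strings and logic survive in a native binary for a disassembler to recover. The `check_license` function is hypothetical.

```python
import dis

# Hypothetical compiled function whose source we pretend not to have.
def check_license(key: str) -> bool:
    return key == "SECRET-1234"

# Disassembly recovers the instructions and their operands from compiled
# code -- the same basic move a native disassembler makes on a binary.
dis.dis(check_license)

# Embedded constants survive compilation and are trivially extracted;
# here the "secret" license string sits in the code object's constants.
print(check_license.__code__.co_consts)
```

Tools like IDA Pro and Ghidra perform the native-code equivalent at far greater scale, reconstructing control flow and data structures from compiled machine code.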
Michael Sikorski, director of the FireEye Labs Advanced Reverse Engineering (FLARE) team and author of Practical Malware Analysis, agrees. “When you reverse malware, you’re asking, ‘what does it do?’” he says. “It’s the same with commercial off-the-shelf software. You’re asking ‘what does it do?’ so you can find vulnerabilities.”
“I don’t love the idea of giving source code to a foreign government,” Knopf says, “but if they [HPE or Symantec or another American tech vendor] have taken care of their criticals and highs, even their mediums…yeah, I’d agree with Mudge, [foreign governments] are going to fuzz it. They’re not going to find a home run [zero day] with static analysis.”
Security flaws are typically rated on a severity scale from critical (the most dangerous) down through high, medium, and low. Static analysis typically refers to automated source code review.
Smells like due diligence
There are good reasons to ask for source code review that have nothing to do with hunting zero-days, numerous sources suggest. For his part, Sikorski agrees that a foreign government’s demands for American source code look a lot like due diligence. “We get a lot of requests from [American] companies saying, ‘We’re about to make this big purchase from country XYZ. Can you tell us if this thing is backdoored?’” he says. “If it was me personally buying a product, I would kind of look for that as a request, ‘Hey, can I have your source code?’ An ask from a foreign government buying code made in another country, it doesn’t seem like a wild ask.”
Bellovin suspects the recent froth around sharing source code with foreign governments is really about preventing theft of intellectual property (IP), and not security. “If I was in charge of Symantec [or another American tech vendor], I’d be far more worried about the IP issue,” he says. “If the FSB [Russia’s Federal Security Service] wants to find security holes, they’re going to do it anyway.”
This story, “Is source code inspection a security risk? Maybe not, experts say” was originally published by CSO.