UNC2452: Hacking without Consequences — Part Two: A Hypothetical Attack Life Cycle

Dan Hranj
7 min read · Jan 4, 2021

The details of the SolarWinds Orion supply chain attack and UNC2452 will continue to trickle into the security community for years to come. If we’re lucky, enough nuggets of information will be shared in drunk conference ramblings, blog posts by incident responders, updated indicators of compromise, analysis of software patches, OPSEC slips, and OSINT research to create a compelling narrative, but all of that will likely pale in comparison to the truth.

Even the general security community will probably never know details about the breach of United States government systems beyond the fact that they were hacked, and I would bet money that most executives at the helm of the other victim companies will never say a word. The breach reporting landscape is changing as more companies acquire certifications like FedRAMP and abide by regulations like GDPR, which typically require candidates to level up their security posture and impose mandatory incident reporting requirements. In my experience, however, companies rarely go public with any details of a breach unless they are responsible for the loss of PCI data or PII. One could make a case that breaches have a financially material impact and should be reported to the SEC, but quantifying the damage done is impossible and thus these incidents go unreported. Cursory searches of SEC 8-K filings revealed that only FireEye and SolarWinds had notified the agency of a breach impacting their business. Notably missing are the other public companies identified as targets by the security community. Whether these networks were primary targets of UNC2452 or mere civilian casualties compared to the government victims is yet to be determined.

I made a rookie mistake. I decided to write a series of blog posts without fully outlining any of the content. I said this would be the exploration of a potential attack lifecycle but, survey says, that was a lie. The attacker’s actions from the “Initial Compromise” through to “Maintain Presence” are likely boring compared to the grand finale that is SUNBURST and SUPERNOVA. That isn’t to say the earlier stages weren’t sophisticated. If the attack group behind all of this is part of the Russian intelligence apparatus (COZYBEAR et al.) then they probably had some neat tricks up their sleeves.

As discussed in “Part One: Why SolarWinds Orion?”, it may be weeks or months before we receive more concrete details about the breach from SolarWinds. Barring any legal proceedings, it is unclear how much detail the company is willing or able to share. There is plenty of publicly available information about these attackers [1] that you can leverage to protect your environment, and even more if you subscribe to private intelligence feeds, but speculating on new techniques without any hard forensic evidence is a fool’s errand.
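If you want to put that public information to work, the lowest-effort starting point is hunting your own telemetry for the published indicators. Below is a minimal sketch of that idea in Python; the file names, log format, and column names are placeholders for whatever your resolver or SIEM actually exports, not a reference to any specific tool.

```python
# Minimal sketch: flag DNS queries whose name matches (or is a subdomain of)
# an entry on a published indicator list. File paths and the CSV log format
# are hypothetical; adapt them to whatever your environment actually produces.
import csv

def load_indicators(path):
    """Load one indicator domain per line, lowercased; skip blanks and comments."""
    with open(path) as fh:
        return {
            line.strip().lower()
            for line in fh
            if line.strip() and not line.startswith("#")
        }

def matches(query, indicators):
    """True if the queried name equals or is a subdomain of any indicator."""
    query = query.rstrip(".").lower()
    return any(query == ioc or query.endswith("." + ioc) for ioc in indicators)

def hunt(dns_log_path, indicators):
    """Assume a CSV log with 'timestamp', 'client_ip', and 'query' columns."""
    with open(dns_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if matches(row["query"], indicators):
                print(f"{row['timestamp']} {row['client_ip']} -> {row['query']}")

if __name__ == "__main__":
    hunt("dns_queries.csv", load_indicators("published_domain_iocs.txt"))
```

Nothing about this is specific to UNC2452; the same loop works against any published list of domains, and it is exactly the kind of pivot on known facts I keep advocating for below.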

While we wait for the facts, all we have left to do is lean on experience and think creatively. As an incident responder, I’ve always said it is best to pivot on what we know rather than go looking for needles in haystacks.

What do we know? We know the attacker completed their mission. They began successfully tampering with SolarWinds Orion updates as early as October 2019 and, in March 2020, shipped the first trojanized update. That’s really it…

How would an attacker distribute malicious signed software? Honestly, I spent too much time trying to organize my thoughts on this matter only to begrudgingly accept the fact that I’m not a software developer. Luckily, everyone is ripping this software package to shreds and Reversing Labs published a great write-up of their analysis [2] that fills in the gaps left by FireEye’s technical analysis [3] [4]. As is typical with FireEye and Mandiant, if it cannot be proven, they aren’t going to say it. Only somewhat surprisingly, the Reversing Labs analysis arrived at the same hypotheses I did, but with solid data to back them up.

Analysis of SolarWinds Orion packages suggests the earliest evidence of malicious modifications appeared in October 2019 with the addition of a benign .NET class. The class did nothing but exist as a placeholder for future code. Many reports are referring to this move as a test run, as the attacker made modifications (presumably directly) to source code and waited to see if these changes appeared in final and publicly available releases. If their changes made it to release, the attacker could be reasonably certain that further changes would go equally unnoticed.
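It is worth asking what kind of control would have caught a change like that. One low-cost idea, sketched below purely as an illustration (the repository path and tag names are made up, and I am not claiming this is how SolarWinds builds Orion), is to enumerate every file added between two release tags and force a human to map each one to a tracked work item:

```python
# Rough sketch of a release audit: list every file newly added between two
# release tags so an unexplained addition (like a do-nothing placeholder
# class) at least gets human eyes on it. Repo path and tags are hypothetical.
import subprocess

def files_added_between(repo_path, old_tag, new_tag):
    """Return paths of files added between two tags (git's --diff-filter=A)."""
    out = subprocess.run(
        ["git", "-C", repo_path, "diff", "--diff-filter=A", "--name-only",
         old_tag, new_tag],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

if __name__ == "__main__":
    for path in files_added_between("./orion-src", "v2019.2", "v2019.4"):
        print(f"NEW FILE: {path}  <- confirm this maps to a tracked work item")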

The attacker’s efforts to blend in went a lot further than waiting to see if they were caught before moving ahead. They were careful to mimic naming conventions, adapted legitimate code and styling, and carefully selected the placement of each modification. Even the command and control (C2) traffic was designed to resemble regular .NET XML to hide from network monitoring systems. The level of sophistication and maturity shown here lends a lot of credibility to this being the work of a nation-state with a significant budget, a lot of time, and multiple teams of operators for different tasks.

The test run was an important milestone in the attack, but the attacker’s patience throughout the campaign leads me to believe there was significant development work being done on SUNBURST before the test run even took place in October 2019. An attacker would not want to be caught debugging when the time came to ship the malware to the masses. Add to that the development of a secondary backdoor (a webshell) dubbed SUPERNOVA by Palo Alto Networks [5] and GuidePoint Security [6], and the development time increases even more. It is unclear if the same attacker is responsible for the SUPERNOVA webshell.

I am not an intelligence analyst, but even an amateur eye can see that whatever actor is responsible for SUPERNOVA did not demonstrate the same inclination towards stealth seen in SUNBURST. The compilation timestamp found in the SUPERNOVA PE header (March 24, 2020 09:16:10) lines up very well with the compilation timestamp of the SUNBURST binary (March 24, 2020 08:52:34). Two independent attackers modifying source code at the same time is unlikely but certainly possible. If this is the work of a nation-state, it is likely that SUNBURST and SUPERNOVA were coordinated but separate development efforts.
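If you want to check those timestamps yourself, the PE header value is easy to pull. The sketch below assumes you have samples on disk and the third-party pefile library installed; the file names are placeholders, and keep in mind that a compile timestamp is attacker-controlled metadata that can be forged.

```python
# Quick sketch: read the PE header compilation timestamp from a sample.
# Requires the third-party pefile library (pip install pefile); the file
# names below are placeholders for whichever samples you are examining.
from datetime import datetime, timezone

import pefile

def compile_time(path):
    """Return the PE header TimeDateStamp as a UTC datetime."""
    pe = pefile.PE(path, fast_load=True)
    return datetime.fromtimestamp(pe.FILE_HEADER.TimeDateStamp, tz=timezone.utc)

if __name__ == "__main__":
    for sample in ("sunburst_sample.dll", "supernova_sample.dll"):
        print(sample, compile_time(sample).isoformat())
```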

The double backdoor is reminiscent of the Juniper ScreenOS supply chain compromise in which a hard-coded password would provide privileged access to the vulnerable device and a weak pseudo-random number generator (PRNG) could allow the decryption of VPN traffic. There is speculation as to whether the latter is the work of the NSA but, naturally, the jury is still out on that one. Disclosed in 2015, these modifications to ScreenOS were believed to have been introduced in 2012. Quite a life compared to its SolarWinds counterpart.

On December 17, 2020, SolarWinds made an interesting statement in a Form 8-K filed with the SEC. “The vulnerability was not evident in the Orion Platform products’ source code but appears to have been inserted during the Orion software build process.” Their choice of the word “vulnerability” is a bit concerning as it diminishes the severity of what transpired. This is much more than a vulnerability caused by lousy developers and lax code reviews. It is a massive unauthorized modification of code creating a backdoor in a flagship product and it occurred on a build system, which should be considered a crown jewel in any environment.

The lack of malicious code on the version control server suggests that whatever code the attacker introduced on the build system was never merged back to master. Similarly, I would assume a new build of the Orion package would necessitate a complete sync of the latest release branch so everything functions as expected. Given that multiple releases contained the backdoor, I would guess that either the new code was being moved to the build server piecemeal or the attacker was active on the build system no fewer than five times in a nine-month period.

If the code was moved piecemeal, the attacker would have to monitor the build system to ensure their code remained in consecutive releases. If I were the attacker, I’d also be monitoring the version control server to prepare any necessary changes based on legitimate updates. If the entire release branch was synced for each build, the attacker would need intimate knowledge of when SolarWinds personnel would be building and signing new binaries and software packages so they could slip in, modify the code, and wait for the public release. No matter the method, these actions point to a lack of sufficient integrity checks during the release process and inadequate security monitoring. It is also possible the attacker compromised the build process itself (e.g. a build script) and, during the build, swapped the legitimate SolarWinds.Orion.Core.BusinessLayer.dll for the malicious version. Again, I say all of this as someone without a software development background.
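What would a sufficient integrity check even look like? One option, sketched below purely as an illustration, is to hash every artifact coming out of the primary build and compare it against a second build produced on an isolated machine (or against a signed manifest). The directory names are invented, and the whole idea only works if the build is reproducible enough for the hashes to match in the first place.

```python
# Minimal sketch of a release-time integrity check: hash every artifact from
# the primary build and compare against a second, independently produced
# build (or a signed manifest). Directory names are hypothetical.
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each relative file path under root to its SHA-256 digest."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

def compare_builds(primary_dir, independent_dir):
    """Print every artifact that differs (or is missing) between the builds."""
    primary, independent = hash_tree(primary_dir), hash_tree(independent_dir)
    for rel_path in sorted(set(primary) | set(independent)):
        if primary.get(rel_path) != independent.get(rel_path):
            print(f"MISMATCH: {rel_path}")  # e.g. a swapped BusinessLayer.dll

if __name__ == "__main__":
    compare_builds("build_output/", "independent_build_output/")
```

A check like this would not stop an attacker who controls both build environments, but it raises the bar well above quietly editing a single machine.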

All of this contradicts Reversing Labs’ analysis of file metadata, compilation timestamps, and cross-signing timestamps. They hypothesize that, given the evidence, the easiest way to make all of this happen would be to inject the malicious code straight into source (version control). Something doesn’t add up. It could be a poorly worded statement by public relations or a purposefully misleading statement by legal, but we’ll definitely have to wait for more information on that front. Either way, this statement came only three days after the initial 8-K filing in which SolarWinds disclosed the breach. Three days is not a lot of time to formulate such a statement. I’d bet money that SolarWinds knew about the compromise before FireEye’s formal announcement, but it is hard to estimate how much lead time they were given.

All of this raises a question on which I have not seen significant speculation: when did this campaign begin? The first known unauthorized modification occurred in October 2019, and the development of two stealthy backdoors takes time. In “Part One: Why SolarWinds Orion?”, we explored rationales for picking Orion over other tools. Even if we assume Orion was the attacker’s primary target from the onset, I would estimate they had access to the source code at least six months prior to the test run, putting a potential initial compromise date around April 2019. And if they were able to access (and likely steal) source code for one product, there are few reasons why an attacker wouldn’t take the opportunity to steal source code for the rest of the SolarWinds portfolio. This and a lot more in “Part Three: What do we do now?”.
