Blockchain Use in Software Code Signing & Malware C2
I’ve done some small research about blockchain recently, and just want to put my thoughts down in a blog post so I can stop thinking about them. Most of this is rehashing information I’ve read, but the “signed code verification” piece towards the end is an idea of mine that I’ve not read about elsewhere.
Blockchain is a hot term these days. It’s a popular management buzzword, and as such it can get thrown about as a cure for just about all that ails you. All businesses need to store data, and blockchain is known as a data-store, so everyone wants to make sure you’ve considered their (probably expensive) blockchain solution for data storage…
Blockchain is good at solving a couple of problems, though.
- It can provide a publicly-verifiable record of data’s existence at a point in time. At any point in the future, anybody with access to the blockchain can prove that a certain piece of data existed at the point it was stored on the blockchain. If you’ve got a document that has been digitally signed, you can store the hash of that document on the blockchain. Later blockchain links will all chain back to your document hash, and their presence will prove that your document predated their transactions. Because blockchain additions (in most implementations) occur on fixed schedules, it will be possible to reconstruct roughly when your document must have been added to the chain. (A short sketch of this idea follows this list.)
- Publicly-verifiable records of data’s existence can prove transactions have occurred, or contracts have been signed. This is how a blockchain can act as a ledger. Transactions on the blockchain can represent physical-world transactions, if the participants assign them that meaning, enabling tracking of the transfer of real-world entities between blockchain participants. On a public blockchain these transfers are transparent to everyone involved at all points in the future. No individual can revert them; the closest thing to a reversal is a later transaction that undoes the transfer.
- Transparency means that no trusted third-party needs to exist. Transactions can occur more easily between non-trusting parties without an escrow.
- Public blockchains get stored by many parties, each having an economic incentive to participate. That means any data stored on them is stored in many places, providing data replication and the potential for access from disparate parts of the world. To store a small amount of data, little economic incentive is required. To store larger amounts of data, much greater incentive is required.
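To make the proof-of-existence idea in the first bullet concrete, here is a minimal Python sketch. The block structure is a toy invented for illustration, not any real blockchain’s format: the point is simply that a document’s hash gets embedded in a block, and every later block’s hash depends on it.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hash the document to be timestamped. Only the hash goes on the chain,
# never the document itself.
document = b"contract text, design doc, signed binary, ..."
document_hash = sha256_hex(document)

# A toy chain: each block commits to the hash of the previous block, so
# every block added after ours also (indirectly) commits to our document hash.
chain = [{"prev": "0" * 64, "data": "genesis", "time": 0}]

def add_block(data: str) -> dict:
    prev_hash = sha256_hex(json.dumps(chain[-1], sort_keys=True).encode())
    block = {"prev": prev_hash, "data": data, "time": int(time.time())}
    chain.append(block)
    return block

add_block(document_hash)            # our proof-of-existence entry
add_block("unrelated later data")   # chains back to the block above

# Later, anyone holding the original document can recompute its hash and
# find the block (and rough time) at which it was recorded.
assert any(block["data"] == sha256_hex(document) for block in chain)
```

Anyone holding the original document can later recompute its hash and point to the block that contains it, and to all the blocks that chain back to it.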
These solutions are enabled by some prerequisites.
- Blockchains require lots of computational power. Specifically, they require more than your adversaries have. To prove data’s existence at a point in time, links must be added to the chain periodically. When adding links to the chain, the other participants in the blockchain must agree on their addition. If there are malicious participants in the blockchain, they may decide to disagree about a set of new links and instead agree on a different set of links. If the malicious participants control a majority of the computational power, other participants in the chain will be influenced to believe the new malicious links are legitimate and ignore the others. Participants make decisions based upon cryptographic data that’s passed around, and computing that cryptographic data requires computational power. Therefore, to prevent a malicious takeover of a blockchain, you must have more computational power than any malicious adversaries can muster. In a public blockchain, you get the benefit of all the disparate participants’ computational power; your adversaries must then overpower the public, instead of just you. (A sketch of the proof-of-work mechanism behind this follows this list.)
- Blockchains require a computer network connecting participants. If you want to use a blockchain in areas with little Internet access, your ability to use existing public blockchains is severely restricted. To add data to a blockchain you must submit your transaction to the other participants, and enough of them must receive it for your data to have any chance of being added. If you stop using public blockchains, or use small ones, perhaps because you’re in a remote area, you open yourself up to attacks based on computational power.
- Public blockchains require economic incentive. Computational power costs money - for hardware, network access, and electricity. Participants in a blockchain require computational power. Thus, participants need a monetary incentive to participate. You pay for data additions to Bitcoin’s blockchain by supplying a small amount of bitcoin that is automatically paid to the individual who adds your data to the blockchain. The amount of data added by each transaction is small, so larger chunks of data require multiple transactions and more bitcoin.
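As a rough illustration of why computational power matters (the first prerequisite above), here is a tiny proof-of-work sketch in Python. Real networks compare hashes against a numeric difficulty target over a far larger search space; the toy prefix here just shows that producing a valid block requires brute-force hashing, which is exactly the resource a majority attacker must out-spend.

```python
import hashlib

def mine(block_header: bytes, difficulty_prefix: str = "0000") -> tuple[int, str]:
    """Search for a nonce whose block hash starts with the required prefix.

    Real chains use a numeric target rather than a hex prefix, but the
    principle is the same: more hashing power finds valid blocks faster.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce, digest
        nonce += 1

# A low difficulty so the example finishes quickly; real networks tune the
# target so the whole network only finds a block every several minutes.
nonce, digest = mine(b"prev_block_hash|merkle_root|timestamp")
print(f"nonce {nonce} produces hash {digest}")
```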
Lots of proposed uses for blockchain have limited applicability due to these requirements.
Potential Use Case: Malware Command and Control (C2)
Malware C2 is an interesting use for blockchain, though. Malware running on endpoints often needs to reach out to its creators for further instruction. “Steal files”, “learn about the local network”, “propagate to a nearby computer”, “record keystrokes”, or “delete yourself” are all things malware might want to do, but only when commanded by a remote attacker. Often malware reaches out to one destination for these commands. This is the simplest C2 implementation, where one or a few hard-coded server names or IP addresses provide the C2 channel to the malware.
Network defenders can try to detect and block this behavior by redirecting traffic destined for those C2 servers to alternate locations, by pretending to be those C2 servers, or by taking over the C2 servers with the permission of law enforcement or the server’s rightful owner.
Blockchain’s distributed nature makes this much more difficult. Given a good-enough implementation, it could be difficult or impossible for defenders to block access to a copy of the blockchain. Because there’s no central server, it’s not possible for a defender to take over the blockchain either. And once a C2 command is added to the blockchain, it is impractical to remove.
We’ve seen a few uses of this method already. Omer Zohar built and demonstrated this use-case in early 2018. He used Ethereum and its “smart contracts” system to implement encrypted C2 of a nearly unlimited number of malware endpoints. The result was a system that is extremely difficult to block or subvert, and one that becomes harder to block as Ethereum grows in popularity. His major limitation was operational cost: each message to and from an endpoint cost a small amount of Ether, translating to (at the time) about $39 per year per malware instance.
Anonymity is a major benefit of such a system. Many consider blockchain participation to be completely anonymous; however, participation often requires money, and that money must enter the system at some point. That money typically comes from a traceable source, but malware authors also steal or “mine” cryptocurrency. Such activity would provide a less-traceable source of funds and make the system nearly entirely anonymous.
Potential Use Case: Code Signing Transparency
Another potential use for blockchain is in software signature verification. Microsoft’s Windows and Apple’s OS X use software signatures to verify that software was produced by an entity (a company or individual). Software producers compile their code into binaries, then digitally sign them before sending those binaries out into the wild.
This provides end users the assurance that a specific company made the binary. For example, suppose someone emails you a copy of Microsoft’s Notepad. Before executing it you would want to make sure it actually came from Microsoft; otherwise a malicious actor could have modified Notepad to include malicious software. If Microsoft has signed this copy of Notepad, you can verify that signature and prove that Microsoft created it, and that it has not been modified since. Windows and OS X now make it more difficult to execute software that’s not digitally signed by some vendor.
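At its core, that verification is an ordinary digital-signature check. The sketch below uses Python’s cryptography library to verify a detached RSA signature over a binary’s bytes against a publisher’s public key; real code-signing formats such as Authenticode are more involved and embed the signature and certificate chain inside the binary, but the central cryptographic step is essentially this.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_binary(binary_bytes: bytes, signature: bytes, publisher_pem: bytes) -> bool:
    """Return True if `signature` covers `binary_bytes` and was made by the
    holder of the private key matching the publisher's public key."""
    public_key = serialization.load_pem_public_key(publisher_pem)
    try:
        public_key.verify(
            signature,
            binary_bytes,
            padding.PKCS1v15(),   # a common padding scheme for code signatures
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        # Either the binary was modified after signing, or it was signed
        # with a different key.
        return False
```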
Stuxnet is a piece of malware that abused software signatures. It included components that were signed by legitimate, trusted software vendors. This made those components more likely to be trusted and less likely to be detected. Other malware has abused software signatures, but none has been as high-profile as Stuxnet.
Blockchain’s distributed, transparent, nearly-immutable nature can help solve this problem.
Software signatures are currently verified by a client that trusts a root certificate and can then follow that trust through a chain of intermediate certificates down to the software signature in question. One certificate signs the next, which signs the next… Good verification also requires checking a revocation list, so that signatures and certificates found to be compromised in the wild can be revoked.
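A simplified sketch of that chain walk plus revocation check, again with the cryptography library (this assumes RSA-signed certificates and ignores validity dates, key-usage constraints, and the live CRL/OCSP fetches that real verifiers must perform):

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

def chain_is_trusted(chain: list[x509.Certificate],
                     trusted_roots_der: set[bytes],
                     revoked_serials: set[int]) -> bool:
    """Walk a certificate chain (leaf first, root last) and check revocation."""
    for cert, issuer in zip(chain, chain[1:]):
        if cert.serial_number in revoked_serials:
            return False  # this certificate has been revoked
        try:
            # Each certificate must be signed by the next one in the chain.
            issuer.public_key().verify(
                cert.signature,
                cert.tbs_certificate_bytes,
                padding.PKCS1v15(),
                cert.signature_hash_algorithm,
            )
        except InvalidSignature:
            return False
    # Finally, the chain must terminate at a certificate we already trust.
    root_der = chain[-1].public_bytes(serialization.Encoding.DER)
    return root_der in trusted_roots_der
```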
If verification included an additional step requiring every signature to be present in a public database of software signatures, then anyone signing code maliciously would have to publish their signatures to that database too. Companies could check the database to determine whether someone is signing code on their behalf.
In the case of Stuxnet, JMicron Technology and Realtek Semiconductor would have been able to check the public database regularly. They would have seen a software signature they did not issue and could have placed it on the revocation list immediately. They could then have taken action to prevent further signatures in their name.
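The monitoring side could be as simple as a scheduled job that queries the public database for entries naming the company and flags any signature the company doesn’t recognize. The endpoint and response format below are invented purely for illustration:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical audit endpoint, invented for illustration: returns recent log
# entries whose signing certificate names the given company.
AUDIT_URL = "https://sig-log.example.com/v1/entries?subject="

def unauthorized_signatures(company: str, known_hashes: set[str]) -> list[dict]:
    """Return log entries signed in the company's name that it did not create."""
    url = AUDIT_URL + urllib.parse.quote(company)
    with urllib.request.urlopen(url, timeout=10) as resp:
        entries = json.load(resp)
    return [e for e in entries if e["signature_hash"] not in known_hashes]

# Run on a schedule; anything returned is a candidate for immediate revocation.
# suspicious = unauthorized_signatures("Realtek Semiconductor", our_known_hashes)
```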
A blockchain can act as this public database. The result would be widely distributed, and it would be practically impossible to modify or remove signature entries after they were added. As an added benefit, it would become more obvious to observers when a company holds compromised certificates that should no longer be trusted, and that company’s security practices could become (rightly) suspect. Because blockchain provides an irrefutable timestamp for when data is added, signature attack timelines would also become more transparent to security researchers.
Every valid software signature would incur a small cost to be added to the blockchain. Additionally, the signature verification process would become more complex and require Internet access. However, software and hardware vendors could implement API endpoints that handle the blockchain portion of verification, simplifying lookup code for the endpoints they sell. The result would still provide transparency for all signature creators and verifiers.
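The client-side change is small in concept: after the normal cryptographic verification, the verifier also confirms the signature’s hash is present in the public log, and refuses to trust the binary otherwise. The lookup URL and API below are hypothetical, standing in for the vendor-provided endpoints described above:

```python
import hashlib
import urllib.error
import urllib.request

# Hypothetical vendor-run lookup endpoint, invented for illustration: answers
# whether the hash of a given signature has been recorded on the chain.
LOOKUP_URL = "https://sig-log.example.com/v1/signatures/"

def signature_is_published(signature_blob: bytes) -> bool:
    """Return True only if this signature appears in the public log."""
    sig_hash = hashlib.sha256(signature_blob).hexdigest()
    try:
        with urllib.request.urlopen(LOOKUP_URL + sig_hash, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        # Not in the log, or the log is unreachable: fail closed and refuse
        # to trust the signature.
        return False
```

A verifier would run this lookup in addition to the normal cryptographic checks and refuse to run the binary if either fails; failing closed when the log is unreachable is one design choice, and caching recent log contents locally is another way to keep verification working offline.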
I haven’t seen this solution proposed elsewhere; however, Kim, Kwon, and Dumitraş recommend that code signing tools log all transactions they complete [Kim, 2017]. Tools like “signtool.exe” in Windows would log “the hash value of program code and the certificates” to Microsoft, and third parties could then “periodically audit the log and identify code signing abuse”. This is great, but it doesn’t require software to verify that signatures are present in that log during signature verification. Without that, any attacker who subverts the signature reporting process, for example by preventing reporting to Microsoft, gets their software signed without it ever appearing in the log.
References:
Some discussion of blockchain benefits and requirements: https://blog.todotnet.com/2019/03/solving-real-world-problems-with-distributed-ledger-technology/
Paper about blockchain potential in the military: https://www.jcs.mil/Portals/36/Documents/Doctrine/Education/jpme_papers/barnas_n.pdf?ver=2017-12-29-142140-393
Doowon Kim, Bum Jun Kwon, and Tudor Dumitraş. 2017. Certified Malware: Measuring Breaches of Trust in the Windows Code-Signing PKI. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS ‘17). ACM, New York, NY, USA, 1435-1448. DOI: https://doi.org/10.1145/3133956.3133958. URL: http://users.umiacs.umd.edu/~tdumitra/papers/CCS-2017.pdf