Certificate Authority Trustworthiness
The certificate authority (CA) system does an incredible job of solving an impossible challenge. Think about it: the CAs verify control of a domain name and then issue TLS certificates that pair cryptographic keys to those names. They do this at global scale, often automatically. It's impossible to do this perfectly, and unfortunately they occasionally fail. In this post I describe the challenges the CAs face, recount a history of failures, and explain how we maintain confidence in the system in spite of it all.

The certificate authorities solve a foundational key-exchange problem for the Internet. They allow us to authenticate the TLS keys used by web servers, which they do by verifying control of domain names and signing certificates that associate public keys with those names. Authentication is a critical part of encrypting communications: without it, you may be encrypting with an attacker's key, allowing them to eavesdrop on or tamper with your data in transit.

Methods like certificate pinning work for things like IoT or mobile applications that communicate with a single back-end server. The developer can hardcode the certificate fingerprint and push an update any time it changes. But pinning doesn't scale to websites or email. We need something Internet-scale, and we've got the CAs.

Mozilla maintains a long list of CA compliance bugs that tracks over a thousand concerns. Most of these aren't worth discussing individually, so let's start with noteworthy CA-related problems from the past year:

e-Tugra November 2022

An Internet-facing administration tool used by the e-Tugra CA had a wide-open sign-up page that allowed Ian Carroll, a security researcher, to register an account and view sensitive content. Of top concern: the confirmation codes used by the email domain control validation (DCV) method were visible.
With this access the security researcher could have started a certificate signing request for a domain they didn't own, chosen the email DCV method, and then intercepted the confirmation code via the administration tool [1]. e-Tugra fixed the issue when notified, but the community acknowledged "this isn't a little mistake". Chrome and Mozilla distrusted e-Tugra after around six months of discussion.

TrustCor November 2022

TrustCor appeared to have "shared corporate officers, operational control and technical integrations" with Measurement Systems, a company which "engaged in the distribution of an SDK containing malware". Chrome distrusted the CA, citing a loss of confidence: "Behavior that attempts to degrade or subvert security and privacy on the web is incompatible with organizations whose CA certificates are included in the Chrome Root Store." TrustCor contested the claims as "opinion, circumstantial evidence, conjecture, and fear-mongering".

HiCA July 2023

HiCA used a remote code execution exploit on users' systems as part of its certificate issuance process. While HiCA's use of this technique wasn't malicious, the approach was immediately treated as a security vulnerability and fixed once proper notification was given. The concern was briefly discussed on the Mozilla forum, which acknowledged that HiCA is not a CA: it was assisting in the certificate issuance process using an actual CA, via a process that was otherwise by the book. "Literally anyone can do this and do monumentally stupid/insecure things; it's not productive to have a discussion every time this happens." No action was recommended, as the browsers decide on CA trust, not the tools or third parties that users choose to assist with issuance. HiCA shut down soon after these events, citing security incidents.

Unknown March 2022

An unnamed CA was hacked as part of a suspected state-sponsored hacking campaign that also targeted government agencies and defense contractors.
There is "no evidence to suggest [the hackers] were successful in compromising digital certificates". Still, the lack of clarity on which CA was hacked and how, combined with the lack of adequate public disclosure, is troubling.

There are many more of these that are older, but I think the recent events tell the story well [2]. The above shows successful attacks, poor security practices, and questionable organizations. While these are concerning, none of these events appear to have caused actual certificate mis-issuance [3]. The web browsers carefully consider the risk of maintaining trust relationships when these sorts of events happen, sometimes revoking trust after a thorough review.

Mis-issuances occur somewhat rarely, so let's look further back in time.

MCS Holdings March 2015

MCS Holdings, an intermediate CA, mis-issued certificates for various domains, including Google's. These certificates appear to have been used for an internal man-in-the-middle proxy, not externally to the company. Google immediately distrusted the intermediate CA and quickly distrusted CNNIC, the root CA used by MCS Holdings.

ANSSI December 2013

A similar incident occurred with ANSSI.

DigiNotar July 2011

Hackers fraudulently obtained certificates from the DigiNotar CA for numerous domains, 531 certificates in all. An active man-in-the-middle attack using these certificates was performed against users connecting to Google's services. Trust in DigiNotar was revoked and the company soon filed for bankruptcy.

The DigiNotar hack is a textbook example of what we don't want. The hackers not only compromised the CA, they also fraudulently issued certificates, established a man-in-the-middle network position, and intercepted the emails of 300,000 people. Thankfully, the DigiNotar hack is an outlier.

Each web browser maintains a list of the CAs they trust out-of-the-box. As we've already seen, this trust can be revoked when problems arise. But what's the inclusion process?
Review is process heavy, focusing on security assurance and the trustworthiness of the organization operating the CA. There are independent audits and security standards. In the end, it's a subjective decision with lots of supporting documentation. For transparency, Mozilla and the CA/Browser Forum use public discussion when deciding whether a new CA should be added as a trust root. There are many boring examples that prompt little debate, like the inclusion request for LAWtrust. The denials can be terrifying:

December 2019: Accused of spying; the inclusion request was denied and their existing intermediate CA certificate was distrusted.
December 2015: "it appears that the owner of this CA has used their certificates to MITM"
November 2018: "did not disclose the incident, nor - given that the other two were never revoked - did they apparently perform a scan of their certificates to identify any others."
March 2018: "A CA can't simply fix one problem after another as we find them during the inclusion process."

A rough way to measure the security of a system is to observe how often it is attacked versus adjacent areas. Attackers have limited resources, so they are biased toward the weakest links (or perceived weakest). Here are some examples of how attackers bypass the protections the CAs provide, without attacking the CAs directly:
Hackers often have success simply using invalid certificates. Users may suffer from security fatigue and click through warnings even when faced with an active man-in-the-middle attack. These attacks are relatively easy to perform, and off-the-shelf tools exist.

Look-alike domains are quite effective. An attacker registers a domain that resembles the target domain name. Since they own the look-alike domain, they can get valid TLS certificates for it. Social engineering is typically used to trick the victim into connecting to the look-alike.

Blocking traffic on port 443 (HTTPS) still works to perform downgrade attacks. Savvy users may notice the missing lock icon in the address bar, but others won't realize there is an issue. Tools like HTTPS-only mode and HSTS add protection but aren't widely used.

Weak encryption and TLS bugs can be exploited: Logjam, weak primes, Sweet32, Heartbleed, RC4, Lucky Thirteen, POODLE, FREAK, and BEAST.

Between 2015 and 2020 the Government of Kazakhstan repeatedly attempted to mandate the installation of a government-operated root certificate on its citizens' devices. This certificate would have allowed the government to perform man-in-the-middle attacks on HTTPS traffic. Browser vendors responded by deny-listing these trust roots, such that they would not be trusted even if the user manually installed them.

Hackers can easily obtain certificates for sites they've already hacked by manipulating DNS records (ACME DNS-01), modifying files on web servers (ACME HTTP-01), or stealing verification codes from email inboxes. This works because the CAs validate domain control, not ownership. They could even steal existing certificates from servers they've compromised. Note that hackers don't always need to perform a man-in-the-middle attack if they've compromised an endpoint; they've already carried out a higher-impact attack.

Malware may install its own root CA certificates to allow snooping on HTTPS traffic.

Opportunity to attack the CAs exists, but these adjacent attacks are significantly more common.

None of the CAs offer bug bounties; they instead rely on private auditors. There's clearly low-hanging fruit the auditors are missing. Bug bounties are a great way to encourage altruistic hackers to examine your security while providing safe harbor. Without these incentives, less scrupulous hackers will look anyway and may sell their findings to malicious actors.

Some devices can be challenging to update, especially if they are abandoned by the manufacturer. Let's Encrypt had this issue when they were getting started as a CA, particularly for Android devices. The only way to mass-deploy updates to the trust store on Android is via over-the-air (OTA) updates. Thankfully Android is fixing this issue by adding updatable trust stores. All devices should support updating trust stores.

If Chrome doesn't trust a CA, why should my Firefox browser trust it? And vice versa. Any competently operated website wouldn't use a CA that isn't widely trusted, so the loss of functionality should be marginal. When a distrust action is taken, it's common for all the trust stores to agree on revocation, but decisions don't always happen at the same time. Custom browser builds with a security focus should use a trust store that only includes trust roots present in all the major browser trust stores. Better UX would also help end users prune trust roots themselves.
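One recommendation above, trusting only roots that appear in every major browser trust store, amounts to a set intersection. A minimal sketch, with made-up fingerprints standing in for real root certificates:

```python
def common_roots(*stores: set) -> set:
    """Return only the root fingerprints present in every trust store."""
    return set.intersection(*stores)

# Hypothetical fingerprints standing in for real root certificates.
mozilla = {"aa11", "bb22", "cc33"}
chrome = {"aa11", "cc33", "dd44"}
apple = {"aa11", "cc33", "ee55"}

# A security-focused build would ship only the roots all three agree on.
print(sorted(common_roots(mozilla, chrome, apple)))  # ['aa11', 'cc33']
```

In practice the stores would be keyed by certificate fingerprint (such as the SHA-256 hash of the DER encoding), which is the identifier the published trust lists use.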
Irrevocable Trust

CA root certificates are long-lived, up to 25 years. The DigiNotar root certificates that were maliciously used by hackers in 2011 are still not expired. So active revocations are required when issues arise. Unfortunately, even when root certificates expire, they don't really expire. Many consumer devices contain hard-coded trust stores that stop getting updated soon after the initial sale.

This was a concern for Let's Encrypt when the IdenTrust root certificate expired. Around a third of Android devices trusted the expiring root certificate but hadn't been updated to trust Let's Encrypt's new root. IdenTrust agreed to cross-sign Let's Encrypt's certificate past the expiration of their own trust root. This worked because Android doesn't enforce the expiration of its trust roots. While this approach allowed many old Android devices to remain usable, it underscores a problem: the CAs often cannot be distrusted, neither via software update nor expiration dates. As such, some devices will forever trust the problematic CAs referenced earlier in the post.

Government control of CAs

A recurring topic of discussion is government coercion of the CAs. Every CA operates within the jurisdiction of a government, which can exert legal pressure on the CA. Many countries have laws that compel technology companies to assist the government in certain circumstances. The CAs promise not to mis-issue certificates, but a request from their government could supersede that promise. Understanding the legal exposure of a CA is a complicated question of foreign law. Since revoking a trust root isn't always possible, it's also an exercise in predicting how laws may change.

Given those concerns, you may be surprised to know that some of the CAs are directly operated by government agencies.
Here are several from the Microsoft trust store:

Department of Defence Australia (cert)
Government of Brazil, Instituto Nacional de Tecnologia da Informação (ITI) (cert)
Government of Finland, Population Register Centre (Väestörekisterikeskus, VRK) (cert)
Government of Hong Kong (SAR), Hongkong Post, Certizen (cert)
Government of India, Ministry of Communications & Information Technology, Controller of Certifying Authorities (CCA) (cert)
Government of Korea, KLID (cert)
Government of Lithuania, Registru Centras (cert)
Government of Portugal, Sistema de Certificação Electrónica do Estado (SCEE) / Electronic Certification System of the State (cert)
Government of Saudi Arabia, NCDC (cert)
Government of South Africa, Post Office Trust Centre (cert)
Government of Spain, Autoritat de Certificació de la Comunitat Valenciana (ACCV) (cert)
Government of Spain, Dirección General de la Policía – Ministerio del Interior – España (cert)
Government of Spain, Fábrica Nacional de Moneda y Timbre (FNMT) (cert)
Government of Sweden (Försäkringskassan) (cert)
Government of Taiwan, Government Root Certification Authority (GRCA) (cert)
Government of The Netherlands, PKIoverheid (Logius) (cert)
Government of Turkey, Kamu Sertifikasyon Merkezi (Kamu SM) (cert)
Government of Uruguay, Agency for E-Government and Information Society (AGESIC) (cert)
Korea Information Security Agency (KISA) (cert)
Macao Post and Telecommunications Bureau (cert)
Swiss BIT, Swiss Federal Office of Information Technology, Systems and Telecommunication (FOITT) (cert)
Thailand National Root Certificate Authority (Electronic Transactions Development Agency) (cert)

Historic trust also existed for:

China Internet Network Information Center (CNNIC) (discussed earlier)
Government of France (ANSSI, DCSSI) (discussed earlier)
Government of Japan, Ministry of Internal Affairs and Communications
Government of Latvia, Latvian State Radio & Television Centre (LVRTC)
Government of Mexico, Autoridad Certificadora Raiz de la Secretaria de Economia
Government of Venezuela, Superintendencia de Servicios de Certificación Electrónica (SUSCERTE)
Post of Slovenia
The Uruguayan Post, "El Correo Uruguayo"
U.S. Federal Public Key Infrastructure (US FPKI) (removal)

The Mozilla list and Google list are much shorter than Microsoft's list. Unfortunately, Microsoft doesn't operate a public discussion forum, so the purpose and justification of these inclusions is not apparent.

One of the most important functions of a government is to provide services to its people. It's common for governments to provide security services, issue identity documents, and handle delivery of postal mail, for example. I don't think it's unusual for governments to seek to provide Internet-based identity services. But there is a conflict of interest, as governments also perform law enforcement, intelligence, and military operations. The global reach a government-operated CA has may not be appropriate.

Trusting the system

With all these problems, why do we still trust the CA system? One reason is the lack of a better alternative. I'm planning to post about DNSSEC+DANE soon, but in short: it's a mess. This is an incredibly hard problem space, and nothing else is viable.

The web browsers have done an excellent job defining security standards, reviewing inclusion requests, considering revocations, and being transparent to users about their decisions. It's a dynamic process needing constant attention and vigilance. Requiring certificate transparency logs (a public log of every issued certificate) provides great tooling to audit issuance practices and swiftly detect problems. Recent improvements like Certificate Authority Authorization (CAA) reduce the impact of certain classes of CA security incidents. Attacks like the DigiNotar hack are much easier to detect these days, and we have tools to reduce their impact.
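CAA lets a domain owner publish DNS records naming the CAs allowed to issue for that domain, and conforming CAs must check those records before issuance. Below is a deliberately simplified model of the check. It's a sketch only: a real implementation follows RFC 8659, walks up the DNS tree, and handles wildcard and parameter cases, and the record values here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class CAARecord:
    flags: int  # bit 0x80 marks a critical property
    tag: str    # "issue", "issuewild", or "iodef"
    value: str  # CA domain permitted to issue, e.g. "letsencrypt.org"

def issuance_allowed(records: list, ca_domain: str) -> bool:
    """Simplified CAA check for a non-wildcard certificate request."""
    issue_records = [r for r in records if r.tag == "issue"]
    if not issue_records:
        # No "issue" property present: CAA does not restrict issuance.
        return True
    return any(r.value.strip().lower() == ca_domain.lower() for r in issue_records)

records = [CAARecord(0, "issue", "letsencrypt.org")]
print(issuance_allowed(records, "letsencrypt.org"))  # True
print(issuance_allowed(records, "evil-ca.example"))  # False
```

Note the limits of the mechanism: CAA constrains an honest CA whose validation pipeline is being tricked; a fully subverted CA could simply skip the check, which is why it reduces certain classes of incidents rather than all of them.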
Call to action

The large number of attacks impacting adjacent systems is a strong signal that effort is best spent securing those adjacent systems. Security isn't about perfection; it's about strengthening weak areas. With that said, I think there's still high-value work to be done here.

Transparent inclusion decisions

This post heavily references actions of the Mozilla trust store, largely due to the public discussion forum they use. Without such public discussion, users have no idea what the inclusion process looked like. Sometimes objections are raised, and the justification used for inclusion helps concerned users understand the nuance of the role the CAs fill. Microsoft, Apple, and Google own major web browsers but do not provide an equivalent open discussion forum. Each browser trusts a distinct set of CAs, so the justification for inclusion can't always be inferred from Mozilla's forum. You can review these lists here: Chrome
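Beyond the published browser lists, you can inspect the trust store your own system currently ships. A quick sketch using Python's standard library; what it prints depends entirely on your operating system and its certificate bundle:

```python
import ssl

# The default context loads the platform's trusted CA bundle.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()  # one dict per loaded CA certificate

print(f"{len(roots)} trusted roots loaded")
for cert in roots[:3]:
    # Each entry carries the subject, issuer, serial number, and validity dates.
    print(cert.get("subject"), cert.get("notAfter"))
```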