
October 8, 2015

I Don't Accept the Risk of SHA-1

Website operators have to configure a dizzying number of security properties for their websites: protocol versions, TLS ciphers, certificate hash algorithms, and so on. Most of these properties provide an individual benefit: when you configure your server to require secure protocol versions and strong ciphers, connections to your website are immediately made more secure. It doesn't affect your website's security if some other schmuck is still using SSLv3 with RC4 and 1024-bit Diffie-Hellman on their website.

However, other security properties, particularly those related to certificates, provide more of a collective security benefit, where everyone's security is determined by the security of the lowest common denominator. A timely example is the hash algorithm used in certificate signatures. Until recently, SHA-1 was the most common algorithm. Unfortunately, SHA-1 is dangerously weak, so the Internet is transitioning to the more secure SHA-2. Under the current deprecation schedule, certificate authorities must stop issuing SHA-1 certificates on January 1, 2016, and SHA-1 certificates that are issued before then must not be valid past January 1, 2017, which means that on January 1, 2017, browsers can stop trusting SHA-1 certificates.

Unfortunately, since this is a collective security property, there's nothing an individual website operator can do in the meantime to improve the security of their site. This site, www.agwa.name, uses a SHA-2 certificate, but the truth is that it's no more secure than a site using a SHA-1 certificate. That's because an attacker who can generate a SHA-1 collision can forge a SHA-1 certificate for www.agwa.name. Since so many websites still use SHA-1 certificates, and it's not 2017 yet, web browsers will accept the forged certificate and be none the wiser. None of us will be more secure until certificate authorities stop signing, and web browsers stop accepting, certificates with SHA-1 signatures.
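(Incidentally, if you're curious which hash algorithm a site's certificate uses, openssl can tell you. This is just a quick diagnostic sketch, assuming the site speaks TLS on port 443; a SHA-1 certificate will show sha1WithRSAEncryption here, while a SHA-2 certificate will show something like sha256WithRSAEncryption.)

$ openssl s_client -connect www.agwa.name:443 -servername www.agwa.name </dev/null 2>/dev/null | openssl x509 -noout -text | grep 'Signature Algorithm'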

For this reason, I was dismayed by the recent proposal from Symantec to allow certificate authorities to issue SHA-1 certificates through the end of 2016, because some of their "very large enterprise customers" can't complete the migration in time. Although their proposal would not change the date on which browsers would stop trusting SHA-1, it would extend the period during which new collisions could be created. This was troubling enough when the proposal was made last week, and is even more troubling in light of the research released today that estimates the cost of finding a SHA-1 collision on EC2 to be between just $75,000 and $120,000.

What made me really angry about the proposal was the following statement:

These customers accept the risk of continuing to use new SHA-1 certificates

"These customers" accept the risk? As I explained above, the use of SHA-1 is a collective risk shared by the entire Internet, not just the "very large enterprise customers" who want to keep using SHA-1. What about the rest of the Internet, who want their TLS connections to be secure and who have dutifully migrated to SHA-2 in time for the deadline? Did anyone ask them? I sure as hell don't accept the risk.

The statement is therefore vacuous and thoroughly unpersuasive to anyone who understands how certificates work. But to someone who doesn't understand or isn't reading too closely, it makes the proposal seem less bad for the Internet at large than it really is. I hope that the other members of the CA/Browser Forum see through this and reject the proposal.


August 7, 2015

Hardening OpenVPN for DEF CON

As people head off to DEF CON this week, many are probably relying on OpenVPN to safely tunnel their Internet traffic through "the world's most hostile network" back to an ordinarily hostile network. While I believe OpenVPN itself to be quite secure, the way in which it interacts with the operating system to route your traffic is quite fragile, and it can be subverted in numerous ways on a hostile local area network. This article will describe some of the problems and suggest countermeasures. The article is Linux-centric, since I'm most familiar with Linux, but many of these concerns apply to other operating systems as well.

IPv6

Unless you explicitly configure your OpenVPN tunnel to support IPv6 (which is only possible if your server has IPv6 connectivity), all IPv6 traffic from your client will bypass the VPN and egress over the local network. This should concern you, as more and more websites are available over IPv6 (including this blog), and clients generally prefer IPv6 when it's available.

The easiest countermeasure is to just disable IPv6 while you're at DEF CON. As a bonus, you'll reduce your network stack's attack surface and will be safe in the unlikely event someone drops an IPv6-specific 0day at DEF CON.
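On Linux, a minimal sketch of doing this with sysctl (these settings take effect immediately and last until reboot; add them to /etc/sysctl.conf if you want them to persist):

# Disable IPv6 on all current and future interfaces.
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1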

DNS

If you're using a VPN, you want to make sure you're using a trusted DNS server. If you use an attacker-controlled DNS server, they can return rogue IP addresses and redirect all your traffic back to a network they control after it passes through your VPN, rendering your VPN moot. Unfortunately, Unix-based operating systems (including OS X, though it may have improved since I last looked at this a few years ago) handle DNS server configuration incredibly poorly. Even if your VPN server specifies the address of a trusted DNS server, the DHCP server on the local network might return the address of a rogue DNS server. Which DNS server you end up using depends on too many factors to discuss here, but needless to say it does not inspire confidence.

I recommend doing whatever it takes to disable the retrieval of DNS information over DHCP, and hard-coding the IP address of a trusted DNS server in /etc/resolv.conf. On Debian, if isc-dhcp-client is your DHCP client, the most airtight way to stop DHCP messing with your DNS settings is to place the following in /etc/dhcp/dhclient-enter-hooks.d/zzz-preserve-resolvconf:

make_resolv_conf() {
	true
}
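With the hook in place, hard-code your trusted resolver yourself. A minimal sketch (203.0.113.53 is a placeholder; substitute the address of a DNS server you actually trust, ideally one reachable over the VPN):

# Overwrite resolv.conf with a single trusted nameserver.
echo 'nameserver 203.0.113.53' > /etc/resolv.conf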

I don't know enough about other operating systems/DHCP clients to provide specific instructions.

Denial of service

An attacker can always block your VPN, preventing you from using it. If they did this continuously, you'd probably notice that your VPN failed to start, and then proceed cautiously (or not at all) knowing you didn't have the protection of a VPN. A more clever attacker would let you establish the VPN connection, and only start blocking it later. If the OpenVPN client times out and quits, you'll start sending traffic over the untrusted network, and you might not notice.

I will present a countermeasure for this along with the countermeasure for the next attack.

Attacks on redirect-gateway

The usual way of telling OpenVPN to route all Internet traffic over the VPN is to use the redirect-gateway def1 option. When this option is used, the OpenVPN client adds three routes to your system's main routing table:

  1. A specific route for the VPN server, via the local network's default gateway.
  2. A route for 0.0.0.0/1 via the VPN.
  3. A route for 128.0.0.0/1 via the VPN.

The first route prevents the encrypted VPN traffic from being routed via the VPN itself, which would cause a feedback loop. The last two routes are a clever hack: together, 0.0.0.0/1 and 128.0.0.0/1 cover the entire IPv4 address space, and since they are more specific than the default route for 0.0.0.0/0 that came from the local DHCP server, they take precedence.
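On a typical client, the main routing table ends up looking something like this. This is an illustration only: 10.8.0.0/24 stands in for a common OpenVPN tunnel subnet, 192.168.1.1 for the local network's gateway, and 203.0.113.41 for the VPN server.

$ ip route
0.0.0.0/1 via 10.8.0.1 dev tun0
default via 192.168.1.1 dev wlan0
10.8.0.0/24 dev tun0  proto kernel  scope link  src 10.8.0.2
128.0.0.0/1 via 10.8.0.1 dev tun0
192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.100
203.0.113.41 via 192.168.1.1 dev wlan0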

However, a DHCP server can also push its own routes (called "classless static routes") to the DHCP client. So a rogue DHCP server can push routes even more specific than the OpenVPN routes, such as for 0.0.0.0/2, 64.0.0.0/2, 128.0.0.0/2, and 192.0.0.0/2. These routes cover the entire IPv4 address space, and take precedence over the less-specific OpenVPN routes.

You could tell your DHCP client to ignore classless static routes, but there's another attack: a rogue DHCP server could push a subnet mask for an extremely large subnet, such as /2. Then the interface route for the local network would be more specific than your OpenVPN routes. The attacker can only grab 25% of the IPv4 address space this way, but that's a sizable percentage of the Internet.

A better countermeasure is to take advantage of Linux's advanced routing and use multiple routing tables. Although rarely used, a Linux system can have multiple routing tables, and you can use routing policy rules to specify which routing table a packet should use. The idea is to put all your OpenVPN routes in a dedicated routing table, and then add routing policy rules that say:

  1. If a packet is destined for the VPN server, use the main routing table.
  2. Otherwise, use the OpenVPN routing table.

This keeps your OpenVPN routes safely segregated from routes pushed by the DHCP server. The only packets that will ever use the routing table controlled by the DHCP server will be encrypted packets to the VPN server itself. Everything else will use a routing table controlled only by OpenVPN.

The first step is to configure the routing rules. Unfortunately, distros don't provide a good way of managing these, leaving you to run a series of ip rule commands by hand. The changes made by these commands are lost when the system reboots, so I suggest placing them in a system startup script such as rc.local.

ip rule add to 203.0.113.41 table main pref 1000
ip rule add to 203.0.113.41 unreachable pref 1001
ip rule add table 94 pref 1002
ip rule add unreachable pref 1003

Replace 203.0.113.41 with the IP address of your VPN server. The preferences (1000-1003) ensure the rules are sorted correctly. 94 is the number of the OpenVPN routing table, which we'll reference below. The second rule prevents VPN server packets from being routed over the VPN itself in case the main routing table is empty, and the final rule prevents packets from using the main routing table in case the OpenVPN routing table is empty (which would happen if OpenVPN quit unexpectedly).
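If everything worked, ip rule show should print something like the following (the 0, 32766, and 32767 entries are the defaults present on every Linux system; the address again stands in for your VPN server):

$ ip rule show
0:	from all lookup local
1000:	from all to 203.0.113.41 lookup main
1001:	from all to 203.0.113.41 unreachable
1002:	from all lookup 94
1003:	from all unreachable
32766:	from all lookup main
32767:	from all lookup default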

The next step is to configure the OpenVPN client to add its routes to table 94 instead of the main routing table. OpenVPN itself lacks support for this, but I wrote a routing hook that provides support. Download the hook and install it to /usr/local/lib/openvpn/route. Make it executable with chmod +x. Then, add the following options to your OpenVPN client config:

setenv OPENVPN_ROUTE_TABLE 94
route-noexec
route-up /usr/local/lib/openvpn/route
route 0.0.0.0 0.0.0.0

Remove the existing redirect-gateway option (also check the server config in case it's being pushed to the client).

The first option sets the routing table number. This has to match the number used in the ip rule command above. The second and third options tell OpenVPN to use my routing hook instead of its builtin routing code. The final option tells OpenVPN to route all traffic over the VPN.

Even worse attacks

If you want to be really careful, you should redirect your network device to an isolated VM and run all of your networking config (e.g. DHCP client, wireless supplicant) inside it. The Linux userspace networking stack is pretty hairy, and it all runs as root. A vulnerability would allow an attacker to take over your system before you even start your VPN.

Using a dedicated network VM is pretty complicated and beyond the scope of this blog post. Fortunately, if you're using an up-to-date operating system you're probably safe, since it seems unlikely anyone would burn a 0day at DEF CON just to take over random conference-goers' laptops. I'd be much more worried about the other attacks, which are straightforward enough to be in script kiddie territory.


March 21, 2015

How to Responsibly Publish a Misissued SSL Certificate

SSL certificate misissuance is in the news again. This time, it's not the certificate authorities who messed up, but rather email service providers who allowed their users to register one of the five administrative email addresses that can be used to approve certificates (admin@, administrator@, hostmaster@, postmaster@, and webmaster@). Last week, Windows Live's Finnish domain (live.fi) fell victim, and today, Remy van Elst got a misissued cert for xs4all.nl, a Dutch ISP.

Unfortunately, van Elst, at the end of an otherwise good blog post, published the private key for the misissued cert. This was irresponsible, because although the cert has been revoked, certificate revocation doesn't work (see also my own WTF moment with revocation), so the certificate will still be usable for man-in-the-middle attacks until it expires on March 19, 2016. Fortunately, browsers can push out updates to explicitly blacklist it (Chrome is especially good since it has the CRLSet system, and in fact, the cert is already blacklisted), but this doesn't help non-browser clients or users running out-of-date browsers.

If I ever came across an email service provider allowing administrative addresses to be registered, I would contact them first and try to spare everyone the headache of a misissued cert. Unfortunately, sometimes a proof-of-concept is needed to get security problems fixed in a reasonable (or even finite) amount of time. If you need to go this route, you should do it responsibly, and remember that as a responsible security researcher, your goal is not to actually MitM the target, but to demonstrate that you were able to obtain a certificate for it. That means destroying the private key immediately, and using a method besides publishing the key to prove you had possession of it.

Conveniently, the CSR, which you need to generate anyway, is signed with the private key, providing proof of possession. When you generate the CSR, put something unique in the CSR's organization field, such as "YOURNAME's Fake Certs, Inc." (This field is ignored for DV certificates, so it shouldn't affect your ability to get a certificate.) The CSR's signature proves that whoever generated the CSR possessed the private key, and putting your name in the CSR proves that you generated the CSR, thus proving that you had possession of the private key. Submit the CSR to the CA, and when you get the misissued certificate back, publish it along with the CSR as your proof.

Here's the OpenSSL command to generate a private key and CSR for www.example.com. Note that I tell OpenSSL to write the private key to /dev/null, ensuring that it's immediately discarded, without even hitting the filesystem.

$ openssl req -new -nodes -newkey rsa:2048 -keyout /dev/null -out www.example.com.csr
Generating a 2048 bit RSA private key
....................................................+++
..................................+++
writing new private key to '/dev/null'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Andrew's Fake Certs, Inc.
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:www.example.com
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

For reference, here's the resulting CSR, and here's the corresponding certificate (signed by an untrusted certificate authority that I use for testing).

Here's how to compare the public key in the CSR against the public key in the certificate:

$ openssl req -noout -pubkey -in www.example.com.csr | openssl pkey -pubin -pubout -outform DER | sha256sum
a501928ab50fb9c0e8c8f006816acb462eb90cfea00c5139ba7333046260bff0  -
$ openssl x509 -noout -pubkey -in www.example.com.crt | openssl pkey -pubin -pubout -outform DER | sha256sum
a501928ab50fb9c0e8c8f006816acb462eb90cfea00c5139ba7333046260bff0  -

And here's how to verify the CSR's signature and print its subject:

$ openssl req -noout -verify -subject -in www.example.com.csr
verify OK
subject=/C=US/ST=California/O=Andrew's Fake Certs, Inc./CN=www.example.com

There you go - proof that I got a certificate for a private key under my control, without having to publish, or even keep around, the private key.


October 17, 2014

Renewing an SSL Certificate Without Even Logging in to My Server

Yesterday I renewed the about-to-expire SSL certificate for one of my websites. I did so without running a single command, filling out a single form, taking out my credit card, or even logging into any server. All I did was open a link in an email and click a single button. Soon after, my website was serving a renewed certificate to visitors.

It's all thanks to the auto-renewal feature offered by my SSL certificate management startup, SSLMate, which I finally had a chance to dogfood yesterday with a real, live expiring certificate.

There are two halves to auto-renewals. The first, relatively straightforward half, is that the SSLMate service is constantly monitoring the expiration dates of my certificates. Shortly before a certificate expires, SSLMate requests a new certificate from a certificate authority. As with any SSL certificate purchase, the certificate authority emails me a link which I have to click to confirm that I still control the domain. Once I've done that, the certificate authority delivers the new certificate to SSLMate, and SSLMate stores it in my SSLMate account. SSLMate charges the credit card already on file in my account, just like any subscription service.

But a certificate isn't very useful sitting in my SSLMate account. It needs to be installed on my web servers so that visitors are served with the new certificate. This is the second half of auto-renewals. If I were using a typical SSL certificate vendor, this step would probably involve downloading an email attachment, unzipping it, correctly assembling the certificate bundle, installing it on each of my servers, and restarting my services. This is tedious, and worse, it's error-prone. And as someone who believes strongly in devops, I find it offensive to my sensibilities.

Fortunately, SSLMate isn't a typical SSL certificate vendor. SSLMate comes with a command line tool to help manage your SSL certificates. Each of my servers runs the following script daily (technically, it runs from a configuration management system, but it could just as easily run from a cron job in /etc/cron.daily):

#!/bin/sh

if sslmate download --all
then
	service apache2 restart
	service titus restart
fi

exit 0

The sslmate download command downloads certificates from my SSLMate account to the /etc/sslmate directory on the server. --all tells sslmate to look at the private keys in /etc/sslmate and download the corresponding certificate for each one (alternatively, I could explicitly specify certificate common names on the command line). sslmate download exits with a zero status code if new certificates were downloaded, or a non-zero status if the certificates were already up-to-date. The if condition tests for a zero exit code, and restarts my SSL-using services if new certificates were downloaded.

On most days of the year, this script does nothing, since my certificates are already up-to-date. But on days when a certificate has been renewed, it comes to life, downloading new certificate files (not just the certificate, but also the chain certificate) and restarting services so that they use the new files. This is devops at its finest, applied to SSL certificate renewals for the first time.

You may be wondering - is it a good idea to do this unattended? I think so. First, the risk of installing a broken certificate is very low, and certainly lower than when an installation is done by hand. Since SSLMate takes care of assembling the certificate bundle for you, there's no risk of forgetting to include the chain certificate or including the wrong one. Chain certificate problems are notoriously difficult to debug, since they don't materialize in all browsers. While tools such as SSL Labs are invaluable for verifying certificate installation, they can only tell you about a problem after you've installed a bad certificate, which is too late to avoid downtime. Instead, it's better to automate certificate installation to eliminate the possibility of human error.

I'm also unconcerned about restarting Apache unattended. sslmate download contains a failsafe that refuses to install a new certificate if it doesn't match the private key, ensuring that it won't install a certificate that would prevent Apache from starting. And I haven't done anything reckless with my Apache configuration that might make restarts unreliable. Besides, it's already essential to be comfortable with restarting system services, since you may be required to restart a service at any time in response to a security update.
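If you want one more belt-and-suspenders check (my own addition, not something sslmate requires), you could verify that Apache's configuration still parses cleanly before restarting it. A sketch of the relevant change to the script above:

if sslmate download --all
then
	# Only restart Apache if its configuration parses cleanly.
	apache2ctl configtest && service apache2 restart
	service titus restart
fi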

One more thing: this certificate wasn't originally purchased through SSLMate, yet SSLMate was able to renew it, thanks to SSLMate's upcoming import feature. A new command, sslmate import, will let you import your existing certificates to your SSLMate account. Once imported, you can set up your auto-renewal cron job, and you'll be all set when your certificates begin to expire. And you'll be charged only when a certificate renews; importing certificates will be free.

sslmate import is in beta testing and will be released soon. If you're interested in taking part in the beta, shoot an email to sslmate@sslmate.com. Also consider subscribing to the SSLMate blog or following @SSLMate on Twitter so you get future announcements - we have a lot of exciting development in the pipeline.


September 30, 2014

CloudFlare: SSL Added and Removed Here :-)

One of the more infamous leaked NSA documents was a slide showing a hand-drawn diagram of Google's network architecture, with the comment "SSL added and removed here!" along with a smiley face, written underneath the box for Google's front-end servers.

[NSA slide image]

"SSL added and removed here! :-)"

The point of the diagram was that although Google tried to protect their users' privacy by using HTTPS to encrypt traffic between web browsers and their front-end servers, they let traffic travel unencrypted between their datacenters, so their use of HTTPS was ultimately no hindrance to the NSA's mass surveillance of the Internet.

Today the NSA can draw a new diagram, and this time, SSL will be added and removed not by a Google front-end server, but by a CloudFlare edge server. That's because, although CloudFlare has taken the incredibly generous and laudable step of providing free HTTPS by default to all of their customers, they are not requiring the connection between CloudFlare and the origin server to be encrypted (they call this "Flexible SSL"). So although many sites will now support HTTPS thanks to CloudFlare, by default traffic to these sites will be encrypted only between the user and the CloudFlare edge server, leaving plenty of opportunity for the connection to be eavesdropped beyond the edge server.

Arguably, encrypting even part of the connection path is better than the status quo, which provides no encryption at all. I disagree, because CloudFlare's Flexible SSL will lull website visitors into a false sense of security, since these partially-encrypted connections will appear to the browser as normal HTTPS connections, padlock and all. There will be no distinction made whatsoever between a connection that's protected all the way to the origin, and a connection that's protected only part of the way. Providing a false sense of security is often worse than providing no security at all, and I find the security of Flexible SSL to be quite lacking.

That's because CloudFlare aims to put edge nodes as close to the visitor as possible, which minimizes latency, but also minimizes the percentage of an HTTPS connection which is encrypted. So although Flexible SSL will protect visitors against malicious local ISPs and attackers snooping on coffee shop Wi-Fi, it provides little protection against nation-state adversaries. This point is underscored by a map of CloudFlare's current and planned edge locations, which shows a presence in 37 different countries, including China. China has an abysmal human rights record and pervasive Internet surveillance, which is troubling because CloudFlare explicitly mentions human rights organizations as a motivation for deploying HTTPS everywhere:

Every byte, however seemingly mundane, that flows encrypted across the Internet makes it more difficult for those who wish to intercept, throttle, or censor the web. In other words, ensuring your personal blog is available over HTTPS makes it more likely that a human rights organization or social media service or independent journalist will be accessible around the world.

It's impossible for Flexible SSL to protect a website of a human rights organization from interception, throttling, or censoring when the connection to that website travels unencrypted through the Great Firewall of China. What's worse is that CloudFlare includes the visitor's original IP address in the request headers to the origin server, which of course is unencrypted when using Flexible SSL. A nation-state adversary eavesdropping on Internet traffic will therefore see not only the URL and content of a page, but also the IP address of the visitor who requested it. This is exactly the same situation as unencrypted HTTP, yet as far as the visitor can tell, the connection is using HTTPS, with a padlock icon and an https:// URL.

It is true that HTTPS has never guaranteed the security of a connection behind the host that terminates the SSL, and it's already quite common to terminate SSL in a front-end host and forward unencrypted traffic to a back-end server. However, in almost all instances of this architecture, the SSL terminator and the back-end are in the same datacenter on the same network, not in different countries on opposite sides of the world, with unencrypted connections traveling over the public Internet. Furthermore, an architecture where unencrypted traffic travels a significant distance behind an SSL terminator should be considered something to fix, not something to excuse or encourage. For example, after the Google NSA slide was released, Google accelerated their plans to encrypt all inter-datacenter traffic. In doing so, they strengthened the value of HTTPS. CloudFlare, on the other hand, is diluting the value of HTTPS, and in astonishing numbers: according to their blog post, they are doubling the number of HTTPS sites on the Internet from 2 million to 4 million. That means that after today, one in two HTTPS websites will be using encryption where most of the connection path is actually unencrypted.

Fortunately, CloudFlare has an alternative to Flexible SSL which is free and provides encryption between CloudFlare and the origin, which they "strongly recommend" site owners enable. Unfortunately, it requires manual action on the part of website operators, and getting users to follow security recommendations, even when strongly recommended, is like herding cats. The vast majority of those 2 million website operators won't do anything, especially when their sites already appear to be using HTTPS and thus benefit from the main non-security motivation for HTTPS, which is preference in Google search rankings.

This is a difficult problem. CloudFlare should be commended for tackling it and for their generosity in making their solution free. However, they've only solved part of the problem, and this is an instance where half measures are worse than no measures at all. CloudFlare should abolish Flexible SSL and make setting up non-Flexible SSL easier. In particular, they should hurry up the rollout of the "CloudFlare Origin CA," and instead of requiring users to submit a CSR to be signed by CloudFlare, they should let users download, in a single click, both a private key and a certificate to be installed on their origin servers. (Normally I'm averse to certificate authorities generating private keys for their users, but in this case, it would be a private CA used for nothing but the connection between CloudFlare and the origin, so it would be perfectly secure for CloudFlare to generate the private key.)

If CloudFlare continues to offer Flexible SSL, they should at least include an HTTP header in the response indicating that the connection was not encrypted all the way to the origin. Ideally, this would be standardized and web browsers would not display the same visual indication as proper HTTPS connections if this header is present. Even without browser standardization, the header could be interpreted by a browser extension that could be included in privacy-conscious browser packages such as the Tor Browser Bundle. This would provide the benefits of Flexible SSL without creating a false sense of security, and help fulfill CloudFlare's stated goal to build a better Internet.
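To make the idea concrete, a response from the edge might look something like this. The header name and values are entirely hypothetical; nothing like it has been standardized:

HTTP/1.1 200 OK
Content-Type: text/html
TLS-Terminated-At: edge    <-- hypothetical header: "edge" would mean the
                               connection was decrypted at the edge server and
                               traveled unencrypted to the origin; "origin"
                               would mean it was encrypted end-to-end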

