It was a huge mistake to have browsers treat an expired cert with so much force. A site with an expired cert is, at the very least, as good as an HTTP site, and yet entirely unencrypted sites get only a few warnings on login forms while expired-cert sites are inaccessible.
Actually, there are certain cases where visiting a site under an expired certificate is strictly more vulnerable than visiting a plain HTTP site. Certain web features only work in a secure context[1], which an HTTP site never is. Therefore, an attacker who can convince the user to accept their invalid certificate can extract more information from the user.
Whether it was a good idea to limit those functionalities to a secure context or not, I don't know. I'm also a bit opposed to this forced HTTPS everywhere mentality.
Seems to me, though, that the right solution would be to turn those features off as opposed to denying the whole site. (If the site relies on those features to work and breaks as a result, so be it.)
Disabling these features for expired certificates and limiting secure cookies to a single session sounds reasonable as a "limp home" degraded functionality mode. Obviously one wouldn't want the padlock icon to be displayed in the address bar in this case.
I wish browsers showed a large warning like "This site's TLS certificate EXPIRES IN FIVE DAYS", or something.
Showing that would be less shameful than showing an expired certificate. But most of all, anyone running the site would get advance warning instead of a catastrophic failure. This would significantly lower the number of actual expirations.
I think that a bunch of things like the JavaScript console, DOM editor, etc. should only be enabled when a browser developer mode has been activated, and such a mode would also enable a variety of warnings like this. It shouldn't be difficult to find or enable the developer mode, but it shouldn't be on by default.
That way, people who know (or should know) what these sorts of warnings mean will see them without ordinary users getting unnecessarily scared or confused. Hopefully a site's own developers regularly view their own site in the same browser they've used in the past for debugging the site.
It would be terrible for visitors who have no clue what a certificate is, or even what HTTP is. It could be scary for them; they might even think they had been "hacked".
A certificate is either valid or it's not. This is more useful for whoever administers the cert, or for users that rely on it and want to know, e.g. within build systems; at least that's where it hit me.
The failure mode, then, is the same.
I did wonder, though, whether there is some use in knowing about an upcoming expiration.
# curl + GNU date + GNU grep -P
$ echo "cert days left: ~$(( ($(date +%s -d "$(curl -IvsS https://www.w3.org 2>&1 >/dev/null | grep -Po '^\*\s+expire date: \K.*')") - $(date +%s)) / 86400 ))"
cert days left: ~396
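If a simple pass/fail is enough (e.g. for a monitoring check), openssl can do the arithmetic itself. A sketch, with the domain and the five-day threshold as illustrative values:

# exit status is non-zero if the cert expires within the next 5 days
$ openssl s_client -connect www.w3.org:443 -servername www.w3.org </dev/null 2>/dev/null | openssl x509 -noout -checkend $((5*86400))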
But then what would that information give me? That my build breaks, say, next Tuesday? And what if the certificate is renewed before then?
Maybe there should be a "warned from" extension for certificates, which contains a date within the validity period from which on user agents are supposed to display warnings?
Why? The date says “Not valid after”, and the site operator can monitor it, or just set a calendar invite when it’s issued. Making the “real” expiration some arbitrary number of days after the marked expiration date seems like it’s unlikely to modify cert owner behaviors, but it does make validation way more complicated for browsers.
I agree, but it is also overkill to display a warning page saying that "you are in danger" just because the certificate expired today. There should be a middle ground.
Yes, it's a better option for site admins. It doesn't help the consumer, though.
Imagine you can't use your banking site because the certificate expired a few minutes ago and the browser displays an (unnecessarily dramatic) error message. Not everyone is tech-savvy enough to get past those messages.
The browsers already let you visit the site if you want to, so I don't think it's a big deal.
I think that would cause issues during various security reviews, because it would imply that the browser is accepting expired certificates, even if there is a warning.
Also, CAs might no longer revoke expired certificates, since they are already expired; that hurts security as well if there is a reason to revoke a certificate but no means to do so.
With an extension, this feature can be introduced gently, without risking any security issues.
It all depends on context and browsers have to have defaults that work for every context.
Sure, a recently expired cert is probably the least severe issue a TLS cert can have, but still: expired certs that are compromised usually aren't revoked. If I'm visiting my bank, I definitely want things to err on the side of not working.
It's the combination of defaults that's problematic. If the site requires HTTPS, because it's e.g. a bank, then sure, require a non-expired cert. But my static sites, which have no auth, payments, or even subpages (path obscuration being another of the touted benefits of HTTPS everywhere), do not require HTTPS. Except, because of the defaults Google's overzealous security team decided to inflict on the world, I now have to have a process that reaches out to LE every 3 months, for a static website that otherwise never needs updating.
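To be fair, the reaching-out part can be reduced to one scheduled command. A sketch, assuming certbot (certbot renew is a no-op unless a cert is close to expiry, so running it daily is safe):

# crontab entry: attempt renewal daily at 03:00
0 3 * * * certbot renew --quiet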
Well, according to others in this thread, w3.org enabled HSTS, so they specifically opted into strict mode. They were not using the defaults, so that criticism does not apply here.
HTTPS also ensures that the connection has not been tampered with by an ISP. That's quite stupid, especially considering you're already paying them, but it used to be common when most of the web was HTTP. Also, router malware has been seen injecting JS into HTTP pages to mine crypto.
Well from their perspective it was great. Why have an extra code path to worry about when you can just make the thing less useful?
I tried to remove a specific cookie from my browser session recently. The only way is to go into Developer Tools, find which tab has "Storage", find the cookies, select the cookie you want, right-click, and delete it. You can't do it from the four other user-friendly screens that already show you individual cookies, because why would a user ever want to delete an individual cookie?
I think this is the same reason for all of the poor browser UX over the years, like personal certs. The web would be a lot more secure if personal certs had supplanted passwords 15 years ago. But then we'd have to build something other than a single pop-up box to manage them.
Sure, we got a built-in password manager, and we suggest random passwords for users, and save them locally, and the user needs to back them up (or reset them via e-mail). But doing the same thing for certificates might be confusing, meaning, somebody would have to actually talk to a user (until it became common knowledge). Better to wait for something way more complicated and expensive to show up, and ignore UX there too. (https://security.stackexchange.com/questions/1430/is-anybody...)
I totally agree w.r.t. password managers. Luckily, with browsers now having built-in password managers, it's less UI friction if we come up with some standard for user certificate workflow that's close to password workflow.
In a world where users rarely see their actual passwords, it's much less of a UI change to (nearly) silently replace password changes with certificate signings and (nearly) silently replace password logins with certificate presentations. A small extra attribute in the HTML input tag could signal to the password manager that it should perform the certificate workflow instead of the password workflow.
Ideally, instead of specifying a specific mechanism, the extra input tag attribute would signal the password manager to actually perform a SPNEGO mechanism negotiation, so the password manager and the server could negotiate if they were using certificates, Kerberos, or some future mechanism. Though, this would also require adding certificate support to GSSAPI. The upside would be that future changes could be done without any changes to HTML.
Right, the website can opt-in to requiring TLS for connections. Browsers can choose to honor this and disallow all plain HTTP connections... and all TLS connections with "invalid" certificates. https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security It's a great way for site admins to turn a minor certificate issue into a complete disaster :(
That's the whole point of HSTS. It won't work in Safari either. I don't use Chrome and won't recommend it, but if you have to view the site, you can clear HSTS for that domain in chrome://settings.
The main event is preventing silent security downgrades. Hiding "proceed" behind a secret code like it's a 90s NES game is a sideshow. They should have just made it the Double Dragon cheat code.
This option is likely there for testing only and not meant for use by regular users; hence it's not present in the regular settings UI, AFAIK. Would you prefer a DRM-style lock? That's clearly going overboard.
No, it was a mistake not to implement everything in DNS, since that is what cryptographically determines ownership of a domain anyway. Any other certificate mechanism is just middlemen selling snake oil and causing additional administrative overhead.
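That mechanism does exist, for what it's worth: DANE puts the pin in DNS as a TLSA record, protected by DNSSEC; browsers just never adopted it. An illustrative zone entry (domain and hash are placeholders):

; DANE-EE (3), SPKI (1), SHA-256 (1): pin the server's public key for TLS on port 443
_443._tcp.example.com. IN TLSA 3 1 1 <sha256-of-public-key>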
The greater mistake isn't being perpetrated by the big browser companies scaremongering about bad certs. The fault lies with every sysadmin or web dev that chooses to 301-redirect from HTTP to HTTPS rather than serve HTTP+HTTPS. You can do your part to make this a non-problem by always serving both HTTP and HTTPS.
For many sites, and probably most non-commercial sites, yes. Anyone thinking about this issue can decide whether that threat model applies to them: do they even have a login, or is it just HTML files and JPEGs in directories? Etc.
Well that depends on what the jpegs and html files contain.
For example, the fact that you are reading the Wikipedia article on the Tiananmen Square incident might be very sensitive if you live in China (ignoring the part where they block Wikipedia). Other places might object to various other speech, etc.
Although, to be fair, the mass-surveillance threat model is probably less likely to involve an active attack like TLS stripping. But anonymity is all about hiding in the crowd; only securing the connection when you have something to hide from eavesdroppers means that you have no crowd to hide in when you need it.
How do you determine a user's privacy requirements by site alone? The content does not imply relevance to privacy abuse; abusers come in all manners and may take abusive objection to content like w3.org that would seem innocuous to most people.
It's pretty easy when you are the person that made the website (like most personal websites).
Again, I am not advocating against having HTTPS, so I don't know why you think HTTP+HTTPS is less private. It is exactly as private, only it is also human-readable and requires no centralized authority's lease to be visitable.
Do you know how I can allow HTTP connections to my website (on GitHub Pages with a custom domain) while ensuring that anyone who types "example.com" without a protocol gets the HTTPS version by default? I wanted to set up my website that way, but I couldn't figure out how, since many browsers still default to HTTP.
In many cases this isn't the behavior you as the consumer would want. For example, if your bank's certificate expires, it's not ideal for the end user to be unwittingly redirected to HTTP and inadvertently access their account over an unencrypted connection.
Serve the same content over HTTP and HTTPS. Many web admins today don't serve content over HTTP; instead they 301-redirect all HTTP requests to HTTPS.
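An easy way to check which camp a site is in (example.com is a placeholder):

# 200 on both means content really is served over both; a 301 on the first means redirect-only
$ curl -s -o /dev/null -w '%{http_code}\n' http://example.com/
$ curl -s -o /dev/null -w '%{http_code}\n' https://example.com/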
It's worse than that. Google Chrome dislikes some sites more than others. On most you can simply click "proceed to danger" and ignore the bad cert. On some others, like cryptome, they don't even give you the option to continue.
This is generally enforced by HSTS - one of its features is that, if the site has it enabled and there is a TLS-related error, browsers prevent users from bypassing the SSL error screen (at least, without typing `thisisunsafe` in Chrome).
I'm glad I saw this comment. What can make a site behave like that?
At work we use Chrome as our general browser, and we've had several issues with expired certs before. Some websites allowed you to expand the box and opt to "Continue", but some simply didn't have the option. What's the difference?
HTTP Strict Transport Security (HSTS) is enabled via a response header (not DNS), which tells modern browsers: "I'm a modern website and want to be served only with valid certificates; otherwise refuse to allow access to my website, because something must be very wrong for this to happen."
The assumption is that "very wrong" means an attack you don't want people to "continue" past. Occasionally it bites back like this if you don't maintain your certificates.
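Concretely, it's just a response header sent over TLS; you can inspect a site's policy like this (the header value shown is illustrative, not necessarily w3.org's actual policy):

# max-age is in seconds (~6 months here)
$ curl -sI https://www.w3.org/ | grep -i strict-transport-security
strict-transport-security: max-age=15552000; includeSubDomains; preload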
Offering HTTP transport invites attackers to inject advertisements, malware, or viruses into your packet stream. ISPs like Comcast and AT&T are notorious for doing this.
Allowing falsified or expired certificates invites attackers as well.
Yeah, there is no DNS mechanism. Although, IIRC, it is possible to place an entire TLD on the preload list, which still doesn't use DNS, but it's a mechanism to enable HSTS where the website itself doesn't do anything.
Firefox 88 here: cryptome.org loads with a red exclamation warning icon in the address bar; w3.org refuses to load, and when I click on "Advanced" there is not even an "accept risk and continue" option. I don't even know how I would load w3.org right now if I had to.
There's nothing wrong with a user deciding: "I realize something is wrong, I shouldn't trust that anything on this website is legitimate, and I shouldn't give it any information, but I'd still like to read the story/article/blog."
There should be a low-opacity button inside the "Advanced" menu.
But if the button isn't there, that's probably because w3.org has HSTS enabled (for past visitors), which tells the browser never to load the site insecurely. The `thisisunsafe` workaround is a middle ground between making it truly impossible to load and letting users casually load an insecure page when they shouldn't.
People don't read buttons. However, I share your disdain for Chrome's chosen method; it's very gate-keepery. It requires you to be "in the know"; those who understand the danger but haven't heard of the trick are deliberately left out.
I think a reasonable compromise would be a button that pulls up a dialog prompting you to type out a consent message. The dialog tells you what to write instead of keeping this as esoteric knowledge, but the user is still compelled to at least read the warning instead of blindly pressing buttons.
Kinda agree, but they probably don't have a button in order to keep people from habitually pressing it to continue. Training your users to bypass security measures is not good security.
I don't get that option, and instead get a message referencing HSTS, presumably because at some point in the past my browser picked up the HSTS setting for the site.
Thing is, it's not just the certificate that has lapsed; the website behind the expired certificate is also down. The two are almost certainly connected, though cause and effect is uncertain.
I think the main issue now is increasing deployment of ACME. Let's Encrypt has been issuing hundreds of millions of certificates for small websites, but how large is its market share in the serious-websites market (Alexa ranks w3.org in the top 10k)? It seems to me that that segment is covered by different CAs. I wonder which angle Let's Encrypt or other ACME/automation-based CAs need to improve upon to make themselves attractive to that market.
And because I violently agree with this: even with Let's Encrypt this is still a pain. (Though it is at least a better-engineered pain, and theoretically tractable.) DNS providers that have bugs¹, services that want to host on port 80 and won't let me proxy them through nginx, auditors that want to block port 80 (guess what ACME uses), services that generate malformed X.509 certs (golang's standard library…), programs that want certs in weird formats (PKCS#12 & Java, I'm looking at you), bad OpenSSL defaults, the rate limits (sorry LE, they're still too low), and the fact that there are probably about two dozen people who really understand X.509 and web PKI while to the rest of my company it is wizardry (it really isn't).
¹a certain large provider for the longest time failed to handle having two TXT records, which ACME requires in some circumstances.
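To the PKCS#12 point: the conversion is at least a single openssl invocation (file names and alias are illustrative):

# bundle a PEM chain and key into PKCS#12 for Java and friends; prompts for an export password
$ openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out keystore.p12 -name myservice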
It's not every cert, mind you (I didn't intend to imply that); it's just the ones that happen to not specify some element. Unfortunately, that's the default of a utility we use that's written in Go. It generates a very long-lived cert on start, and by default it'll be malformed.
We have another utility that crawls certs looking for ones near expiry. That cert trips it up. (And it now has a configurable ignore-list specifically so that we can ignore that cert.)
But more and more I appreciate libraries that (produce well-formed output) xor (error).
Go also has a "simpler" interface to X.509 in its standard library that will easily produce the wrong (but well-formed; i.e., semantically wrong, not syntactically, and not what any user is going to want) output under very easy-to-accomplish circumstances.
A grace period would have been a lovely feature to bake into this from the beginning. It's too late now, of course... oh well. I feel for the w3.org devs; I've been on the other side of this many times.
There's not really any reasonable action for a visitor to take if a certificate is almost expired, so I don't know what a grace period would get you.
Some CAs are nice enough to issue certs with the Not Before date a day or so before the time at issuance, which is super handy for all the devices with wrong time and software that does odd things with timezone conversions.
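Both ends of the validity window are easy to eyeball (file name illustrative):

# print the notBefore/notAfter timestamps of a local cert
$ openssl x509 -in cert.pem -noout -dates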
The easiest way to manage this that I've found so far: put it behind Cloudflare. Caching, cert management, www redirect handling (with SSL support), and basic protection all come for free. You do need to install their own cert on your infra, but it has 10 or 15 years until it expires.
Then you're shunning anyone who uses a VPN, anyone who uses privacy plugins, etc etc.
I despise Cloudflare these days because I'm pretty much required to use a VPN to access the web and get around geolocked content everywhere, and then Cloudflare blocks me from browsing even basic static sites.
CloudFlare, being a reverse proxy, has many limitations (such as forced cookies and the upload limit) but I agree that it is ok for many use cases. Let's Encrypt is easily automated and doesn't have any of those downsides.
They're talking about the cert between edge and origin (i.e., between Cloudflare and your server). The client never sees that cert because they don't talk directly to your origin.
I'm one of the people who really think that HTTPS should be enabled by default and the only HTTP traffic that should be allowed are permanent redirects with a pinning header.
What really is the difference between a $5 certificate from ssls.com, a self-signed cert, a $60 GoDaddy cert, and a Let's Encrypt cert? Just about nothing, right? So why not trust self-signed certs?
Everything other than a self-signed certificate involves technical checks requiring that the person requesting issuance actually controls the domain, via DNS or HTTP challenges. Without those, you (or your ISP or government) could self-issue a cert for accounts.google.com (or api.stripe.com or bankofamerica.com) and MITM people to steal information that would otherwise be kept private from middlemen.
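The http-01 variant is the easiest to picture: the CA hands you a token and will only sign once it has fetched that token from your domain, which a stranger can't arrange (domain and token are placeholders):

# the CA requests this well-known path on port 80 before issuing
$ curl http://example.com/.well-known/acme-challenge/<TOKEN>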