Plaintext HTTP in a Modern World (jcs.org)
40 points by memorable on July 10, 2022 | 77 comments


This page doesn't seem to grasp the reasons behind using encrypted connections. It's not some 'security' blanket statement or 'have something to hide', it's tamper-prevention (including MITM malware injection), privacy, and a little bit of identity verification (not really something people do in the real world).

https://www.troyhunt.com/heres-why-your-static-website-needs...

If your problem is some ancient device that cannot speak reasonably recent protocols, the solution is to not allow that device on the main internet directly. That might mean the device itself has to live behind a proxy, or it has to use a remote browser, or it might (as you might guess) simply be a matter of decommissioning the device or only using it on a local network.

Any time the general security of a system is downgraded for some edge case, that same downgrade can be used to attack everyone else.


It's generally not a model that has much supportive mindshare for the web currently, but it is possible to achieve tamper-prevention without requiring the content of communications to be encrypted.

For example, most official Debian[1] and Ubuntu[2] package repositories currently use HTTP (not HTTPS) by default for content retrieval.

That's reliable thanks to public-key cryptography: the repository index is signed, the receiver verifies that signature, and each downloaded package is then checked against the hashes the index lists.

Someone able to inspect your network traffic could, for example, tell that you've downloaded a genuine copy of "cowsay". Or they could detect that the server replied with a tampered copy (something that your client should reject as invalid).

[1] - https://wiki.debian.org/SourcesList#Example_sources.list

[2] - https://ubuntu.com/server/docs/package-management
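To illustrate the signed-index model described above, here's a minimal, hypothetical sketch (not apt's actual code): once the client has verified the signature on the index, any payload fetched over plain HTTP can be checked against the hashes that index lists.

  import hashlib

  def verify_download(blob: bytes, expected_sha256: str) -> bool:
      # The hash comes from an index whose signature was already verified,
      # so the transport (plain HTTP, a mirror, a USB stick) doesn't matter.
      return hashlib.sha256(blob).hexdigest() == expected_sha256

  blob = b"pretend this is a .deb fetched over plain HTTP"
  expected = hashlib.sha256(blob).hexdigest()  # normally read from the signed index
  assert verify_download(blob, expected)                         # genuine copy
  assert not verify_download(blob + b"injected junk", expected)  # tampering detected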


Systems that are older than general SSL and TLS usage do indeed have those features, but they are mostly unsuitable for the majority of internet users.

Sadly, it could have been better, with a range of options for connection, stream, and content encryption, but that simply isn't feasible at the user base and scale we're currently working with.

For niches (and operating systems and software packages are niches, even if an end user is somewhere under the hood using them) it can still be pretty good, especially with the mirror system: you distribute files to mirrors which might themselves use TLS, but you'd still want the distribution authority to be the only one signing those files.


Adding to that: just slapping HTTPS on those connections still wouldn't prevent an observer from detecting you downloading cowsay. IIRC every package has a fairly unique size, and unless you add padding that is enough metadata to figure out with reasonable certainty which package you requested. So it's not like HTTPS would add any immediate benefit anyway.
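A hedged sketch of the idea (made-up sizes and a hypothetical index, not a real attack tool): an on-path observer who only sees the encrypted transfer size can still narrow down which package was fetched.

  def guess_package(observed_bytes, package_sizes, overhead=2048, slack=512):
      # Subtract a rough estimate of TLS/HTTP framing, then match the remainder
      # against a pre-built index of known package sizes.
      payload = observed_bytes - overhead
      return [name for name, size in package_sizes.items()
              if abs(size - payload) <= slack]

  sizes = {"cowsay": 20_020, "sl": 26_376, "fortune-mod": 52_060}  # made-up sizes
  print(guess_package(22_300, sizes))  # -> ['cowsay']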


> For example, most official Debian[1] and Ubuntu[2] package repositories currently use HTTP (not HTTPS) by default for content retrieval.

But then you've had to bootstrap that trust somehow. If you were to download an ISO from a non-HTTPS website, you'd be at risk.


Visiting a single unencrypted website allows anyone who controls the connection (your government/ISP/WiFi provider) to cause your browser to silently execute malicious code and download malicious content. Compromising the security of 99.9% of users for the benefit of the 0.1% isn't even remotely worth it. Those 0.1% also have a simple workaround of using a proxy if needed.


> to cause your browser to silently execute malicious code and download malicious content

You are making assumptions about my user agent.

If you use insecure JS-executing browsers and problematic ISPs (I know not everyone has a choice), consider a VPN or an extension that forces HTTPS.


Seems like a browser issue, not a website issue.


> Seems like a browser issue, not a website issue.

That's why Mozilla and Google have been working on the transition to HTTPS for years.

Mozilla announced deprecating HTTP back in Apr/2015 [1], to allow plenty of time for everyone to upgrade.

1. https://blog.mozilla.org/security/2015/04/30/deprecating-non...


Right, but if the security threat is the browser silently executing malicious code, shouldn't that be the fix, rather than the protocol? HTTP data itself isn't harmful.


JavaScript code can be malicious, even while sandboxed (e.g. downloading a file). There are still dozens of browser security issues found and fixed every month.

A simple HTTP redirect to download a PDF file can also lead to compromising the user's computer.


How does HTTPS protect from any of that? PDFs can still compromise your computer, and JS is still sketchy. Anyone can encrypt their server, even the bad guys.


That only happens if you directly visit a malicious site, e.g. https://secure.mybaaankingsite.com

You can protect yourself by going directly to the URL and checking you're on the right domain.

If you don't enforce HTTPS everywhere, any website is potentially malicious, e.g. visiting http://cookierecipes.com can have the same consequences.


Right, but how do I know whether the URL pattern is legitimate? I don't work for the IT department at my bank. If the answer is to just eyeball it, that does seem like a far worse security problem than HTTP has ever been.


HTTPS isn't a panacea for all security issues. It ensures that when you connect to website.com, you'll get whatever website.com sends to you, without anyone else eavesdropping and tampering with the connection.

HTTPS doesn't prevent you from going to weebsite.com. There are other security measures for that, but it's also your responsibility to check.


It doesn't even do the first thing. There are multiple vectors where someone could tamper with or eavesdrop on an HTTPS connection, perhaps the biggest one being CDNs. As a visitor, you have no real idea how secure the connection is, even if it has a "padlock". HTTPS offers some protection against local attack vectors, from your ISP or on a public WiFi, but that's about it. The server could be compromised, or malicious, and you have no idea.

Putting the responsibility for checking the rest on the user is honestly a mistake. They could be dyslexic, and may not be able to detect a typo. They could also be 85 years old and not understand half of what you are saying. These are the problems browsers should be focusing on. Security is not as easy as encrypting the protocol and saying everything else is user error.


You keep sidestepping the benefits. You want website.com, you get website.com. It's impossible to know the infrastructure of that website, and that simply isn't something HTTPS will fix. That's more of a social/legal problem of how companies can handle user data.


The benefits are pretty small compared to the cost of requiring HTTPS everywhere, which is allowing Silicon Valley to bully the entire internet into jumping through its hoops to get traffic.

The websites that aren't willing or able to do so are, in my experience, some of the more precious ones we have on the Internet. The websites that aren't trying to monetize their visitors are the ones that get Thanos:ed out of apparent existence. What gets lost isn't the spam or the malicious websites, they of course adapt. What gets lost is the unique views, the personal websites, like from some 80 year old who has meticulously published a catalogue of his astrolabe collection online over the last 30 years.


Browsers can't verify the integrity of website contents.


HTTPS can't verify that either, to be quite honest. It can ensure modest protection against a specific class of MITM attacks. If the traffic goes through a CDN like Cloudflare's, it's decrypted, inspected, possibly manipulated, and re-encrypted mid-flight. A well-funded actor can also lean on the website owner, or just hack them.


TLS protects completely against network-based MitM attacks.

CDNs are purposefully installed in MitM positions. That's a risk that the site's owner has to manage, and it's an optional one at that.

A well-funded actor would probably use a different kind of vector and not even bother with website MitM.


So why does catmemes.lol need to be encrypted with such urgency?


Because random ISPs will inject ads in websites for example.


That seems like a poor choice of ISP to me. If I dial my phone and have to listen to ad jingles before it connects, I'm changing my phone company.


That type of response comes from a place of privilege. Many have no choice over ISP.


Dysfunctional markets are a legal problem, not a technical one. Concealing the consequences of market dysfunction with technological band-aids only serves to preserve the status quo.


This is just a deeply unhelpful way to think.

Firstly it punishes those in the worst situations. Those in countries with abusive political systems, those who have no legal representation, etc.

I don't even believe the idea is right in practice; "serves to preserve the status quo" is just wrong in this case. HTTPS completely breaks most terrible things ISPs can do. It completely dismantles the system.


> Firstly it punishes those in the worst situations. Those in countries with abusive political systems, those who have no legal representation, etc.

HTTPS offers virtually no defense against a state actor.

> I don't even believe the idea is right in practice; "serves to preserve the status quo" is just wrong in this case. HTTPS completely breaks most terrible things ISPs can do. It completely dismantles the system.

HTTPS doesn't dismantle the system at all. You're still stuck with no other option for an ISP, which means you are not going to get favorable terms. And even with HTTPS, you need to look up the IP for the servers you're going to visit, and ISPs can snoop on your DNS traffic and sell information about how you, the IP (or the person), regularly looks up the IP for abortionpills.example.com (or connects to the IP associated with the server).


They get an IP but an IP does not always equal a hostname.


It's just one data point though. The real juice comes when you have a hundred thousand traffic logs to compare, then you can start inferring similarities even from vague and incomplete data points.


I thought the whole reason for encrypted connections is to deny information as much as possible from intermediaries (ISPs), so those who control servers can hoard all the information to their own benefit?


Initially, the main reasons were:

- Identification (bi-directional)

- Privacy (or secrecy)

A side-effect of those is tamper resistance. As the internet got older, more tamper-based abuse was happening, and the benefit of tamper resistance became practically more important than the identification part. To be realistic: most people don't actually know how to identify what website they are on, and they also don't know what the identity should be. This is also why EV (extended validation) is completely worthless and mostly purged from web browsers.


Some ISPs still, to this day, try to inject their ads when HTTP is used, especially on mobile. I was very angry when I noticed it the first time with my mobile operator. No such luck with HTTPS.


>While this push for security is good for protecting modern communication, there is a whole web full of information and services that don’t need to be secured

It's not only about security. I wonder if it happens in other countries too - here in Russia ISPs used to inject advertisements directly into HTTP traffic which was very annoying and now they inject propaganda justifying the war. Fortunately very few sites use HTTP nowadays compared to 10 years ago, so I haven't seen such ads in a while.


> It's not only about security.

That's still security, and such security is part of why HTTPS exists. TLS promises three things:

* Integrity :: that the two participants (for HTTPS this will be a web server and user agent) move data without anybody else getting to alter it successfully.

* Confidentiality :: that the two participants move data without anyone else learning what the data is [but they can learn when data is moved and an upper limit on how much data was moved]

* Authenticity :: that the participants can, at their option, reveal some proof of identity to the other tied to this TLS session. For HTTPS typically only web servers provide such identity, as a Certificate.

Injecting crap into your data is forbidden by the Integrity requirement.
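A small illustration of the authenticity leg from a client's point of view (a sketch using Python's standard library; the host is just an example): the default context verifies the server's certificate chain and hostname, while integrity and confidentiality come from the record layer underneath.

  import socket, ssl

  ctx = ssl.create_default_context()  # loads the platform's CA trust store
  with socket.create_connection(("example.com", 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
          print(tls.version())                 # e.g. 'TLSv1.3'
          print(tls.getpeercert()["subject"])  # the identity the server proved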


The author is aware of this. At the bottom of the article:

> Please don’t contact me to “well ackchyually” me and explain MITM attacks and how your terrible ISP inserts ads into your unencrypted web pages


... which in no way addresses the issue.

Again, a proxy solves the problem for old devices without sacrificing security.


Comcast used to do this, Verizon too I think (as well as DNS hijacking).


Ugh DNS hijacking is the worst. One of the big residential ISPs for a time would trap NXDOMAIN and redirect to a damn search page with ads. So gross.


If you run older machines (I do) and want them to access the modern web, instead of asking everyone else to degrade their security, just run an upstream proxy that can fix the SSL issues for you.

It's like an hour's work to set up.
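For anyone curious what that looks like, here's a minimal sketch of the idea (hypothetical upstream, Python standard library only; a real setup would more likely use squid or nginx): the old machine speaks plain HTTP to a box on the LAN, which re-fetches everything over HTTPS on its behalf.

  import http.server
  import urllib.request

  UPSTREAM = "https://example.com"  # hypothetical site the old client wants

  class Gateway(http.server.BaseHTTPRequestHandler):
      def do_GET(self):
          # Fetch over HTTPS on behalf of the legacy client...
          with urllib.request.urlopen(UPSTREAM + self.path) as resp:
              body = resp.read()
              content_type = resp.headers.get("Content-Type", "text/html")
          # ...and hand it back over plain HTTP on the local network.
          self.send_response(200)
          self.send_header("Content-Type", content_type)
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      # Keep this on a trusted LAN only; it deliberately strips TLS.
      http.server.ThreadingHTTPServer(("0.0.0.0", 8080), Gateway).serve_forever()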


I agree. Having to support now-obsolete systems is a bane of software. HTTP itself sucks because there are so many standards-breaking systems out there. If you want to write a new custom HTTP server you need to understand all these potential quirks, reading the RFCs isn’t enough.


> [Gemini’s] document markup language is based on Markdown so it’s very lightweight and simple to parse without complex HTML/CSS parsers.

I wouldn’t describe it in this way.

Markdown is in no way simple to parse; HTML is actually easier to parse than Markdown, because it’s defined in terms of an actual parser, whereas the best source for Markdown parsing is CommonMark, which uses a more traditional descriptive spec, meaning you have to think a lot more about it to implement it, and run a test suite over it to be fairly confident you’ve got it right. Sure, HTML is heavier, but it’s much easier to implement and more dependable.

The HTML spec is sufficiently large that there are some dark corners I expect very few humans to get right based on their own mental models (e.g. script double escaped state), and if you want to omit open or close tags or commit parse errors it requires more knowledge, but by and large, if you’re sticking to what I could imagine being the subject of a hypothetical book “HTML: the good parts”, it’s very easy to predict with only a little special knowledge required (mostly around which elements are void).

But Markdown is generally much harder to predict, and becomes awfully unpredictable as soon as you start interleaving much HTML at all.

Gemtext is a completely different beast that works in a completely different way, vastly simpler. Any resemblance to Markdown is superficial. Yes, it uses ``` as a preformatted code delimiter, * for list item lines, # for heading lines and > for blockquotes, but two of these are ancient conventions, and the other two (and slight variants) are commonly found in other lightweight markup languages as well. As far as the semantics are concerned, Gemtext is radically different from Markdown, as it’s strictly line-based, the first three characters are sufficient to determine a line’s type, and there’s no inline formatting. (There’s one more piece of syntax, => links, which is novel in syntax and semantics. And ``` has different semantics from CommonMark too, with alt text instead of an info string, though neither has particularly defined semantics for the use of that part.)

Gemtext is not based on Markdown. Some elements of its (meagre) syntax were most likely inspired by Markdown, but it’s not based on Markdown.
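To make the "strictly line-based" point concrete, a toy sketch (nowhere near spec-complete): a Gemtext line's type can be decided from its leading characters alone, with no inline formatting to worry about.

  def classify_line(line: str) -> str:
      # Leading characters fully determine the line type in Gemtext.
      if line.startswith("```"): return "preformat-toggle"
      if line.startswith("=>"):  return "link"
      if line.startswith("#"):   return "heading"
      if line.startswith("* "):  return "list-item"
      if line.startswith(">"):   return "quote"
      return "text"

  for line in ["# Astrolabes", "=> /catalogue.gmi The catalogue", "* brass, 1602"]:
      print(classify_line(line).ljust(16), line)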


HTTPS is great, of course, but there's still a place for plain HTTP. It is essential that HTTP continues to be universally supported (even behind the ridiculous warnings that browsers put these days). There are several, independent reasons for that:

1. You can build a complete http server and client from scratch with a manageable number of lines of assembly code, as has been demonstrated many times. The innocent "s" in the protocol would be the largest part of that implementation if it was required. Let's allow simple harmless things to stay around.

2. For static http pages with permanent content, encryption is not really necessary. These pages are just like posters hanging on the wall. Of course third parties can "mitm" them and write over them, but it doesn't really matter. It's nice that we can still hang random posters around.

3. Setting up an HTTPS server requires involving another party (the CA) that now holds some power over your content. Since the CA can now effectively shut down your site, they may succumb to pressure to do so. This has already happened. The widely used "let's encrypt" has been made to hold responsibility for the certificates that they issue to sites with problematic content, and they have revoked certificates for those sites, which is tantamount to shutting them down. So far, this has just happened for neonazi content, which is alright to censor in my view; but I'm worried that other people will be alright with censoring content which I find reasonable.


> Let's allow simple harmless things to stay around.

On private networks only, please. Otherwise, they will not be harmless for long.

> For static http pages with permanent content, encryption is not really necessary.

Unless you actually care about receiving the true content. Which, if you intend to reach that site, you probably do.

> now holds some power over your content.

That's simply untrue. Complete FUD.

> which is tantamount to shutting them down.

No it is not. Stop whining and go home


> Unless you actually care about receiving the true content. Which, if you intend to reach that site, you probably do.

Not always. My website is a sand castle on the beach. I take care of it and love it, but I don't mind if the wind takes it down, or if some kids destroy part of it, or if some idiot takes a photo of it and photoshops it in order to misrepresent my work. It is just a sand castle. However, I would feel really angry, frothing at the mouth angry, if I was forced to "register" my sand castle in some stupid place so that the passerby could be assured that it is mine, or otherwise not be allowed to build a sand castle at all. This is what I feel by being forced to use https just to store some irrelevant static files.

Regarding CA and censorship, what I said is true and can be verified with a small amount of googling. I don't have the patience right now to do that for you, surely another HN reader can provide the relevant references if you are honestly interested.


If you're tending such a site as pure art project, fine. However, I suspect the site is there because you envision a community of potential users in this world.

You are not "forced to register". You are asked by your community to make mutual assurances. The easiest system out there relies on a third party, true. There are other third parties, and you could make your own (the most popular one, you might be surprised to hear, didn't exist a few years ago). And there are other ideas of ways to make this mutual assurance. None have taken off in a big way yet, but I'd encourage you to look to them or think of other ways we could do it!

Pretending that we don't need to make this mutual assurance to members of our community is not an option. If you don't like the easy way to do it, find another way.


> There are other third parties, and you could make your own (the most popular one, you might be surprised to hear, didn't exist a few years ago).

Am I reading this right? Are you suggesting that, because LetsEncrypt exists now, the user should roll their own CA and have it trusted by all major browsers?

That's obviously absurd and unachievable. Especially compared to the _existing and functional_ solution of HTTP.


Everything in this article rings true. But there's more. HTTPS-only, combined with almost everyone using only LetsEncrypt (a great service), leads to a massive concentration of value for any internal corruption at LE or external political (or other) pressure on LE. The more browsers refuse to show HTTP, the more people rely on LE, and the greater a prize it becomes for those who want to control what is seen.

If you want the web to be free and open, and you run a personal website, please consider providing HTTP+HTTPS.


> HTTPS only, combined with almost everyone only using LetsEncrypt (a great service), leads to massive concentration of value for any internal corruption at LE or external political (or other) pressures on LE.

Well where are all the other free SSL/TLS certificate providers, then?

ZeroSSL sometimes gets mentioned, though they are also pretty keen to charge you, which is the exact reason why many go for Let's Encrypt: https://zerossl.com/pricing/ (admittedly, they're much cheaper than most alternatives, but you can't beat free)

Most of the other paid providers out there do seem like they're just running a racket in comparison:

  https://www.ssl.com/certificates/basicssl/
  https://www.digicert.com/tls-ssl/basic-tls-ssl-certificates
  https://www.godaddy.com/en-uk/web-security/ssl-certificate
It's an order of magnitude more expensive than just getting a domain; my expenses would be in the hundreds of dollars per year if I wanted to get the certificates from them, which I can't really afford while working in Latvia and having 30+ sites to manage across different domains.

Let's Encrypt works and once it stops working (should that ever happen, which you kind of need to include in your risk analysis), I'm kind of screwed.


well, to be fair, https://zerossl.com/pricing/ has a "free" column :)

On the other page, https://zerossl.com/features/acme/, it looks like free ACME certificates can have wildcard records (*.example.com).

https://www.sslforfree.com/ claims they use zerossl and support wildcard records.

Also, one of the others you mentioned has a free ACME option, but without mentioning wildcards: https://www.ssl.com/how-to/order-free-90-day-ssl-tls-certifi...


Upon a closer look, it indeed seems that the certificate count limitations apply to the manually requisitioned certificates, not the ACME ones:

> By using ZeroSSL's ACME feature, you will be able to generate an unlimited amount of 90-day SSL certificates at no charge, also supporting multi-domain certificates and wildcards. Each certificate you create will be stored in your ZeroSSL account.

So I guess that's a viable option.


I may be wrong, but as I understand it, HTTPS connections typically use an ephemeral key, with the certificate used to authenticate the server's identity.

So simply having the certificate does not mean one can read the traffic without conducting a man-in-the-middle attack.

This means that anyone who could read traffic from an HTTPS connection could read it from an HTTP connection just as easily.

The argument seems to boil down to: everyone uses a master lock, some people can open master locks, please consider leaving your locker unlocked... why?


Not just typically. All the uses, even with older protocols, use ephemeral keys to secure the HTTP transaction.

Even the most awful RSA kex (using the RSA algorithm to just encrypt a random key and send it to the other party, rather than doing a Diffie-Hellman key exchange of any kind) is still ephemeral keys and still cannot be decrypted by a CA. It's terrible because it has no Forward Secrecy, but it would not fall to this imaginary attack.

With Forward Secrecy (always in TLS 1.3, and in TLS 1.2 and earlier if you didn't deliberately choose insecure options), even if your adversary kept a transcript of the encrypted transaction and they later obtain your actual private key, that's still not enough to decrypt the transaction, because the transaction key was ephemeral.

In fact the CA deliberately by policy must not have the private key needed to do more. If anybody has evidence of a present day Certificate Authority either asking for their private key [other than for a "Key compromise" type event where the certificate gets invalidated] or of them providing a mechanism by which they "randomly" choose your key and would have an opportunity to just remember it - tell m.d.s.policy https://groups.google.com/a/mozilla.org/g/dev-security-polic...

Your processes shouldn't open you to such an attack (private keys should ideally never leave the machines using them) but it's also policy that the CA shouldn't want to ever know these keys. Not least because this would make them a target.


Just to be complete: it’s not that the key is ephemeral, it’s the way it’s agreed upon. The symmetric key is never in the communication stream.
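A hedged sketch of that agreement (a toy X25519 exchange using the third-party `cryptography` package, not a real TLS handshake): each side generates a fresh key pair, only the public halves cross the wire, and the shared secret both sides derive is never transmitted, which is why neither a CA nor a passive observer ever holds it.

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  client_priv = X25519PrivateKey.generate()  # thrown away after the session
  server_priv = X25519PrivateKey.generate()

  # Each side combines its own private key with the peer's *public* key.
  client_shared = client_priv.exchange(server_priv.public_key())
  server_shared = server_priv.exchange(client_priv.public_key())
  assert client_shared == server_shared      # same secret, never sent on the wire

  session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                     info=b"toy handshake").derive(client_shared)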


> The argument seems to boil down to: everyone uses a master lock, some people can open master locks, please consider leaving your locker unlocked... why?

In fairness it should be more of "everyone uses a master lock, some people can open master locks, please consider not letting a single lock maker supply 90% of the locks".


I believe there are three or four well-accepted ACME providers now, which is why a lot of tools now support or even default to alternative vendors, partly to mitigate this concern. E.g. acme.sh will try ZeroSSL first, and Caddy will rotate between ZeroSSL and Let's Encrypt.


acme.sh moving to ZeroSSL has nothing to do with increasing variety.

acme.sh moved to ZeroSSL as default due to being sponsored by them.

my primary reason for avoiding ZeroSSL is the requirement of providing an email address to them that can be abused.


Caddy doesn't require an email address for ZeroSSL (but we still strongly recommend that for troubleshooting sake).


Yes, there is a certain measure of trust going into parties like Let’s Encrypt, but I don’t think it’s as bad as you are saying.

• A CA can’t intercept anything based on your using them.

• A CA can refuse to issue a certificate for your site, but if this got out (which it would), it would significantly damage their reputation, and I don’t think it will ever make sense for them to do so unless legally compelled. In that case, there are other providers, unless legal compulsion has removed them too, in which case the world wide web is dead anyway, so this is not worth worrying about.

• A CA can issue a false certificate for your site. There is, however, a non-trivial risk of them being detected in doing this (much smaller for small sites, but the system is set up in such a way that almost anyone can check and detect most instances, and I think you can be reasonably sure Let’s Encrypt specifically is being monitored for suspicious patterns by multiple parties that have vested interests in the system running smoothly), and if it did get detected, they would certainly be subjected to extreme scrutiny, so that any more instances would be very likely to be detected, and if a good explanation was not provided in short order (and even then it wouldn’t be too easy to recover), they would be dead, though it would certainly be a period of great pain for the web. In other words, not only does the potential upside of cheating go up, but also the downside.

I don’t think your conclusion that you should support HTTP is reasonable.


Even better, maybe add TLSA records too! TLSA when combined with DNSSEC can replace the need for Certificate authorities. If you want to show independence from CAs, that's one way to do it.
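For the curious, fetching such a record is a few lines with the third-party dnspython package (a sketch; the name below is hypothetical, and most zones don't publish TLSA records at all):

  import dns.resolver  # pip install dnspython

  # DANE convention: the TLSA record for HTTPS lives at _443._tcp.<host>
  for rr in dns.resolver.resolve("_443._tcp.example.com", "TLSA"):
      print(rr.to_text())  # usage, selector, matching type, cert association data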


TLSA replaces certificate authorities like LetsEncrypt with DNS registrars, and, in the process, gives up certificate transparency and the rest of WebPKI surveillance; WebPKI surveillance has resulted in the termination of some of the largest certificate authorities, for misissuance; meanwhile, your browser can't terminate the TLD operators. It's not a good tradeoff.


Not sure I understand the concern about access from “modern embedded devices”. Something like a Raspberry Pi, or really anything with a decent ARM processor, can easily handle TLS.


TLS can be handled by much, much smaller devices, too! Most of the cheap Xtensa line of things, like the ESP32, can handle TLS [0].

[0] https://docs.espressif.com/projects/esp-idf/en/latest/esp32/...


The problem is in supplying the CA certificate(s) used to verify the chain of trust. Web browsers are pre-loaded with a huge number of trusted CA certificates. Pre-loading and maintaining that list on an embedded microcontroller is non-trivial. Not to mention what happens when a CA root is compromised or goes rogue: you have to deal with the revocation process.

Your link mentions global_ca_store but provides no guidance on how to effectively populate it. That's the problem.

Interestingly, providing a non-Tivo-ized system, e.g. one that allows connection to an arbitrary cloud server, requires even more work than just hardcoding in "your" CA certificates.

None of this is insurmountable, but it leaves devs pining for a pre-HTTPS world where you can just do a DNS lookup and send "GET / HTTP/1.0" and not have to worry about all the attack vectors that HTTPS protects against, as well as the ones that HTTPS opens you up to.
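To make the trade-off concrete, a hedged sketch of the "hardcode your own CA" option mentioned above (hypothetical file and host names, desktop Python rather than real firmware): the client trusts exactly one root it shipped with instead of a full browser-style bundle.

  import socket, ssl

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)            # verification + hostname check on
  ctx.load_verify_locations(cafile="device_root_ca.pem")   # the single pinned root

  with socket.create_connection(("device-cloud.example.com", 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname="device-cloud.example.com") as tls:
          print("connected with", tls.version())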


It is a potential point of failure though, you normally need persistent storage or ramdisk tricks to keep the certs up to date.


The word "modern" in the phrase is dumb -- as usual with the word modern; it's on of those word that convey no real meaning most of the times. It's not about modern, it's about power.

Current 32bits SoCs prolly sale for the same price as 16bits systems of a decade ago, so yes, they probably can handle HTTP. However, 8 and 16 bits microcontrollers with only a few kb of RAM/Flash are still made ("modernly"), sold and used - and those cannot handle HTTPS.


You would definitely have trouble on an 8-bit micro, but I wouldn’t expect to browse a “personal website” with one of those, which is what the article seems to focus on. Maybe a “personal HTTP API” at best.


I love everything about this post, and the (short) discussion here. I worked out some complex PHP user-agent detection on my websites to serve HTTPS to newer machines, and HTTP to older ones. I instinctively knew Apache2/nginx should be able to do it, but all the documentation online is about redirecting everyone to HTTPS. I passionately believe older machines still have a place on the Internet, but everyone's in a hurry to donate all their data to big companies on the latest iPhone, so they don't care about that.

I also run an upstream SSL-bump proxy for my own older devices, and a small community of others, allowing us to browse the modern web (or at least, those sites that will still render on older browsers). The LE service is so important for certain applications and users, but I'm saddened that the push for HTTPS seemed to require the death of HTTP.

I appreciate the OP sharing his nginx config, and the few of you who replied with additional thoughts.


It's not just death of HTTP. It's the death of the "trust by default" model of Internet communication.

When the Internet didn't matter much, everything was in the open. Now that real money and real control critically depend on the Internet, everything is going to be encrypted, digitally signed, firewalled, etc.

I don't think there is a way back.

For old machines one can run a TLS-terminating proxy inside the secured internal network.


In general, browsers' default security policies force a valid "https" connection before allowing access to even the most rudimentary functionality. Admittedly, SSL doesn't stop every issue, especially considering the number of times issuers were caught signing bogus certificates (Symantec, Microsoft, and so on…). However, it does narrow the number of variables when tracking down 3rd-party issues like ISPs fiddling with content.

1. Disable crossorigin on your server if you can manage without it.

2. Add subresource-integrity hashes for statically included JavaScript and CSS files (for CDN-served assets this is good practice):

  cat file.js | openssl dgst -sha384 -binary | openssl base64 -A

In your HTML, reference the hash (prefixed with the algorithm) in the include tag:

  <script src="file.js" integrity="sha384-theHash"></script>

3. Force re-encode of media in a temporary VM instance: strips all user media files of non-media data/metadata to protect people from themselves, mitigates adversarial nuisances, and ensures format conforms to a known standard.

4. DNS over HTTPS (DoH) is an option now, and recommended in many cases

5. DNS Security Extensions (DNSSEC) can be a huge resource drain for high-traffic sites.. YMMV.. sometimes a necessary evil to turn it off..

6. Should one ever allow users to upload compressed archives, unidentified binaries, webp, SVG, fonts, or TIFF? https://www.youtube.com/watch?v=pmePLg3hdCw

7. Sanitize your input data with standard filters, and never trust the input content.. even your own (both client and server side checking of data).

8. Traffic shaping firewall to force dropping clients outside a "normal" traffic service range/session-time.

9. Interface transaction quota limit tripwire, and firewall service abuse breaker logic

This is hardly an exhaustive list, but will often reduce common nuisances. ;-)


I was going to comment that they would probably enjoy learning about Gopher, the competitor to HTTP, which is a simple enough protocol to speak by hand without a dedicated client, but they beat me to it when they brought up Gemini at the end of the article. Gemini they describe as similar to Gopher but with Markdown, then berate it for requiring TLS. Perhaps they can fork it.


I like Lagrange: it supports both Gemini and Gopher. That's the way to do it IMHO.


Maintaining support for unmaintained client software is a recipe for disaster. Insecure connections are by definition, insecure.


Ironically, the article redirects http to https


The article recommends redirecting if the browser sends an Upgrade-Insecure-Requests header.
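The logic looks roughly like this (a hedged Flask sketch of the same idea, not the article's actual nginx config): only clients that advertise Upgrade-Insecure-Requests get bounced to HTTPS, so old browsers keep getting plain HTTP.

  from flask import Flask, redirect, request

  app = Flask(__name__)

  @app.before_request
  def maybe_upgrade():
      # Modern browsers send "Upgrade-Insecure-Requests: 1"; old ones don't.
      if (request.scheme == "http"
              and request.headers.get("Upgrade-Insecure-Requests") == "1"):
          return redirect(request.url.replace("http://", "https://", 1), code=301)

  @app.route("/")
  def index():
      return "served over whichever scheme you asked for\n"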


As an aside, if OP is the website owner: The font-size here is 10.5, it's really small and consequently hard to read.


Note that that’s 10.5pt, which is 14px, so it’s not so much smaller than 16px. Still smaller than advisable (I recommend 16–20px), but nowhere near as bad as 10.5px which is really tiny.



