Why did Google declare war on HTTP?

Google is waging a war to force websites to only serve content over secure
https connections by demoting the search ranking of websites that continue to use normal http connections. “…we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure.”

At first take, this seems like a magnanimous move by the internet’s benevolent dictator. Security is a good thing, so by forcing lazy websites to finally go secure we are all better off… right?

Unfortunately, things are not so simple and Google’s motivations are likely not so benevolent…

The benefits of https

https encrypts the conversation between you and a website so that no one can electronically eavesdrop on or change any content. This is important for connections where secrets are exchanged. You do not want the person sitting next to you at Starbucks to be able to grab your account number and balance as you log into your bank account over the public Wifi. As long as your connection to the bank is using https, the most an eavesdropper can divine is what website you are looking at; they cannot see the actual data (or at least not easily).

The other benefit of https is that it guarantees that the pages you see are really coming from the website you see in your address bar. With just http, it is possible for someone who happens to control the network between you and the website (i.e. the person running the Wifi access point you are using, or your ISP) to spoof the website and send you their own content while making it look like it came from the web server you thought you were connected to.

These both sound like worthy benefits that everyone would always want, right?

https is not free

Unfortunately, https is not all benefit – there are significant costs to serve a site over https compared to http.

https costs to users

From the point of view of you the user, pulling up the exact same site over an https connection will be slower than it would be over an http link because…

  • establishing an https connection takes twice as many round trips across the network between you and the webserver. This hurts more for people who are physically far away from the webserver, especially people with satellite links since each round trip involves two journeys to space and back!
  • it takes more bytes to send the same content over an https link than over an http one. This hurts more for people with low bandwidth links.
  • content served over https connections cannot be cached on a nearby proxy server, so everything must be fetched from the webserver separately for each connection. This hurts more for people with shared internet connections.
  • your computer must do the extra work of decrypting the received content before it can be displayed. This hurts more for people with old and slow computers.
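The round-trip cost in the first bullet is easy to estimate with back-of-envelope arithmetic. This is a sketch, not a measurement: it assumes 1 RTT for the TCP handshake, roughly 2 extra RTTs for a classic TLS 1.2 handshake, and illustrative RTT figures for fiber and satellite links.

```python
# Back-of-envelope sketch: connection setup time = round trips x RTT.
# Assumed round-trip counts: TCP handshake = 1 RTT before data can flow;
# a classic TLS 1.2 handshake adds roughly 2 more RTTs on top of that.
# (The RTT figures below are illustrative assumptions, not measurements.)

def setup_ms(rtt_ms, tls_round_trips=2):
    tcp_ms = 1 * rtt_ms                       # TCP three-way handshake
    return tcp_ms + tls_round_trips * rtt_ms  # plus the TLS negotiation

fiber_rtt = 25       # ms: nearby server, good wired link
satellite_rtt = 600  # ms: geostationary satellite hop

print(setup_ms(fiber_rtt, 0), setup_ms(fiber_rtt))          # 25 vs 75 ms
print(setup_ms(satellite_rtt, 0), setup_ms(satellite_rtt))  # 600 vs 1800 ms
```

The absolute difference is noise on a fast link and painful on a high-latency one, which is the whole point of the bullet above.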

A google engineer sitting in his apartment in San Francisco browsing on his Mac Pro connected to a 150 megabyte per second dedicated fiber link might not notice the extra 25 milliseconds https costs him to connect to a website. A 12-year-old kid in Tanzania will definitely notice the extra 10 seconds it takes his OLPC laptop to pull up the same site over his village’s shared satellite down-link.

https costs to websites

From the point of view of a website, serving the exact same site over https will be more expensive than it would have been over http for all the same reasons it hurts users.

A given website will need more or faster servers and more bandwidth to serve the exact same traffic over https than it would need to serve it over http.

Google has more than 1 million cutting edge, custom designed and built servers connected to their 1 petabit/sec of bandwidth distributed across data centers around the globe. They can well afford the extra overhead of making all connections https.

For smaller websites with modest hardware and network connections located in far away places, the costs of being forced to needlessly switch to https can have a huge impact.

Does the Wikipedia logo really need to be secured with https?

For some content, the extra costs of https are well worth it. I absolutely want to log into my bank account over an https connection even if it takes a bit longer. But what about the Wikipedia logo?

What benefit does anyone get by forcing this image (or the Wikipedia homepage) to go over https? None.

There is a huge amount of content on the internet that does not need the protections of https and does not justify the costs. But google does not make any distinctions – it punishes any website that refuses to switch over to https, regardless of the nature of the content.

Why would google do this?

Maybe they really are just trying to do the good and right thing for the world, but are blind to the negative impacts of their actions on people not like themselves.

Or maybe, just maybe…. (Warning: wacky conspiracy theory ahead)

It’s all about the ads

Google made $74 billion last year and 90% of that came from serving ads. Making sure that you see ads is very important to google.

Blocking ads used to be something that was possible, but it required some heavy tech skills to actually pull off and the results were spotty. Blocking (or at least trying to block) ads was more of a geeky act of protest than a useful practice.

Recently ad-blocking technology has gotten better and easier. One way of blocking ads is to use ad-blocking software. In the past, Google has shown a willingness to use some surprisingly heavy handed tactics to try and stop ad-blocking. But even with their wild popularity, the impact of these blockers is limited because they must be individually installed on each device. Users must take an active step to block the ads on their own device.

More troubling, I think, to google is the other way of blocking ads – content-modifying proxy servers. These proxies are able to filter out ads wholesale for every downstream user, and they work without any user action required. They also work for all types of devices, from text-based Linux PCs to Apple watches – all automatically, without any software or changes on the device at all. The user might not even know that the ad filtering is happening; they just don’t see the ads in the webpages they pull up.

It is almost impossible to stop a filtering proxy from blocking ads over http connections; conversely, it turns out to be almost impossible for a proxy to filter ads over https connections. Might this be a motivation for Google to attempt to effectively ban the http protocol?

A funny thing happened on the way to my server

I needed to debug a problem with a web app running on the computer sitting next to me, so I pulled up the webpage on my phone and then looked through the webserver’s logs to find my connection. It should have been very easy to find since my phone was on the same physical network as the web server I was debugging – but the request was nowhere to be found. How could this be possible? I know the phone somehow got the page because it was on my screen, but how could it do so without making a log entry?

After much head scratching and trial and error, I finally found the log entry that corresponded to my phone’s request, but it was coming from Mountain View, CA! I was in NYC, my phone was in NYC, my server was in NYC – so how was this request getting routed through a server on the other side of the continent?

After even more head scratching and network sniffing and reverse DNSing, I discovered that…

By default, every Chrome http request from every Android on earth is redirected to google servers over an encrypted connection

Did you hear that? Let me say it again. If you buy a brand new Android phone, turn it on, connect it to your Wifi network, and then open a webpage from a server that still accepts http requests, that request is automatically and silently captured by the phone, encrypted, and rerouted to a google proxy server. This is a real thing, people.


UPDATE 1/25/2018

A Web App I wrote more than a decade ago, which had been chugging along since then without a single hiccup… broke today. After an hour’s worth of work, the issue turned out to be that Flickr now literally hangs up the phone (sends a 0 byte response) on any incoming connection that does not support one of the newest versions of HTTPS. Because the server this app is running on is so old, it literally cannot work any more thanks to this policy that is supposed to be protecting me. Good thing no one can eavesdrop on me downloading lists of publicly visible photos from Flickr!

UPDATE 12/21/2020

Predictably, moving all traffic to https prevented caching, which hurt performance and was especially hard on websites that serve the same content over and over again to many different people (think about an NY Times article – you explicitly do not want a personalized version), and hardest of all on mobile users.

So google came up with a plan to fix the problem they created, while further deepening their strategic goals. It was called AMP and effectively what it did was to make google a cache for your web content. With AMP, your content is literally stored on and served from google servers – it does not even have your URL any more. Seriously.

If you want to see the AMP version of this very webpage, it is *not* at josh.com – it is here…


Click on it and you will see this web page, but the origin is explicitly ampproject.org, served over HTTPS from google’s servers. Do you understand how messed up this is? Here is the google narrative…

  1. We need to kill http to protect people from having their webpages intercepted
  2. …which predictably is devastating to the performance of highly cacheable content
  3. …so we introduce a new (complicated google) system that intercepts webpages.

So some people started grumbling that google was redirecting and intercepting large parts of the internet into their proprietary system, so google “opened” AMP up and made a committee with some non-google people on it to give the appearance that it was not just a google thing. Well, one of those non-google people just resigned from the AMP committee and said…

The stated goal of the AMP AC is to “make AMP a great web citizen.”

I am concerned that – despite the hard work of the AC – Google has limited interest in that goal.

I have resigned from the Google AMP Advisory Committee

…to the surprise of no one following this issue.


Q: Come on, I’ve tested it and the extra latency for establishing an https connection is nominal.
A: You almost certainly do not live on a high latency link like huge parts of the world do. Even on an amazing first-world awesome high speed HughesNet Gen4 link, each round trip costs 500+ milliseconds. Now imagine how it feels to be on the other end of a multi-hop link in Asia or Africa, where google’s righteous we-know-what’s-best-for-you stance means you have to wait an extra 5 seconds every time you connect to a new https website.

Q: I don’t want eavesdroppers to be able to see what websites I am visiting, so I want all connections to be https.
A: First off, while https does hide the url and content of the pages you are pulling up, it does not hide what websites you are visiting. An eavesdropper can still see that you visited facebook.com and then cupcakes.com even if those connections were completely over https because (1) the address and port you connect to are still in the clear, and (2) your DNS requests preceding the https opens are in the clear. Sorry.
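In fact, the hostname you are connecting to also travels unencrypted inside the TLS handshake itself, in the SNI extension. Here is a minimal sketch you can run offline (no network needed); the hostname "cupcakes.example" is made up for the demo:

```python
import ssl

# Build a TLS ClientHello entirely in memory and show that the target
# hostname travels in cleartext via the SNI extension of the handshake.
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()
outgoing = ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="cupcakes.example")

try:
    tls.do_handshake()        # no peer, so the handshake cannot finish...
except ssl.SSLWantReadError:
    pass                      # ...but the ClientHello was already written

client_hello = outgoing.read()
# The hostname sits right there in the unencrypted handshake bytes.
print(b"cupcakes.example" in client_hello)  # True
```

So even a perfect https connection broadcasts which site you are talking to; it only hides which pages you fetch from it.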

Next off, if you want to hide what you are doing and you are willing to incur any extra overhead that might cause, that’s fine and it is your choice. You can either set your browser to use https by default or manually enter https when you go to a new (secret) website. But google is forcing all websites to serve https to all visitors for all content. If I try to go to “http://wikipedia.com”, I am unilaterally redirected to the https site and there is nothing I can do to stop it. I am forced to take on all the costs of https even though I would much rather go over http. I am all for giving people more choices and trusting them to make the right decisions for themselves. Google’s no-http policy takes options away from people.

Q: I am not worried about someone else seeing the same Wikipedia article that I am seeing, but I need https to ensure that the Wikipedia page I see has not been messed with.
A: This is a very valid thing to do, but https is a terrible way to do it. If this was really what google was concerned about, they could have advocated having content providers sign their content. This would make it possible for browsers to verify that the content really came from the expected source and had not been altered. Signing is much more efficient than encrypting because (1) signed content can be locally cached and still verified as authentic without any round trips to the remote server at all, and (2) while signing something does take some computation, you only need to sign any object once in an off-line process, rather than doing a computationally expensive re-encryption each and every time it is sent down an https connection.

Further, signed content is potentially even more secure than relying on https. An https server must both (1) be connected to the internet and (2) have access to private authentication keys. Since these machines are on the internet, they are vulnerable to attack, and once compromised can be used to serve malicious content, and that content will be encrypted with the valid private key on the server. With content signing, all signing can be done before the content is loaded onto the web servers, so the private keys need never touch any internet-facing machine. An attacker who compromises a web server with signed content cannot change that content without invalidating the signature.
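The sign-once/verify-anywhere flow described above can be sketched in a few lines. This is a toy: it uses Python’s stdlib `hmac` as a stand-in, whereas a real deployment would use an asymmetric signature (e.g. Ed25519) so that verifiers never hold the signing key; the key and content here are made up.

```python
import hashlib
import hmac

# Toy sketch of content signing: sign once offline, verify anywhere.
# HMAC is only a stand-in; a real scheme would use an asymmetric
# signature (e.g. Ed25519) so verifiers never hold the private key.
SIGNING_KEY = b"kept-off-the-internet-facing-servers"  # hypothetical key

def sign(content: bytes) -> bytes:
    # Done once, offline, before content is pushed to the web servers.
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, signature: bytes) -> bool:
    # Runs on any cache or client, with no round trip back to the origin.
    return hmac.compare_digest(sign(content), signature)

logo = b"<svg>wikipedia logo</svg>"
sig = sign(logo)          # produced offline, shipped alongside the content
print(verify(logo, sig))              # True: content is authentic
print(verify(logo + b"inject", sig))  # False: any change breaks it
```

Note that the signed bytes plus signature can sit on any untrusted cache or proxy and still be verified by the end user, which is exactly the property https throws away.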

Q: This is about privacy. Forcing all traffic over https blocks my government/isp/employer/parents from filtering my web traffic!
A: Anyone who is in a position to filter your http traffic is also in a position to filter your https traffic. With http, they have the option of only filtering specific pages or content while letting the rest of a site through. With https, their only option is to block the entire site, which in practice is typically what they do.


  1. gerben123

    I totally disagree.

    Let me explain.

    You are forgetting that there is the new HTTP/2. Which does header compression, prioritization, parallel requests, and a lot more to reduce latency and bandwidth usage. It also handles bad connections better. A lot better and faster than opening multiple TCP connections like browsers currently do.

    This is what Android is doing. It has a server that proxies all HTTP requests and delivers them over HTTP/2, making the sites load faster (on average). Those servers also transcode images to WebP, and enable content compression, even if the actual server doesn’t.
    Opera is doing something similar with their Opera Turbo.
    I agree it’s slightly disconcerting this is enabled by default, but at least it does provide the end user with a faster web (and cheaper!).

    As for https taking a lot of resources on the server. To quote google; “On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.”

    I also don’t think setting up a proxy connection is easier than installing a plugin. You don’t even need a proxy, as you can block ad-related domains using a custom DNS server instead of a proxy server.
    Also, proxies can bust https by replacing all https:// links with http://. So unless the user types in the url with https://, the proxy can intercept any attempt by the server to redirect to https. Though it would have to have a list of sites that really need https (like banks), or rely on the “Content-Security-Policy” header.

    I’d also suggest reading this article: https://scotthelme.co.uk/still-think-you-dont-need-https/

    Kind regards

    • bigjosh2

      RE: You are forgetting that there is the new HTTP/2.
      Everything you point out about HTTP/2 being better than HTTP is true, but irrelevant in an argument for or against https. The HTTP/2 spec describes what the client and server say to each other over the communication channel be it secure or not secure(*). Creating an HTTP/2 connection over TLS (that is, an “https://…” URL to a server that supports HTTP/2) will always have an extra round trip compared to creating an HTTP/2 connection directly (that is, an “http://…” URL to a Server that supports HTTP/2). This round trip is necessary for the TLS handshake.

      It is unfortunate that, as a practical matter, current browsers will not set up a non-encrypted HTTP/2 connection, but this is a political decision, not a technical one. The HTTP/2 spec explicitly does not require encryption.

      *The only reason the HTTP/2 spec even mentions the TLS transport protocol is to add a restriction on TLS-carried connections. “A deployment of HTTP/2 over TLS 1.2 MUST disable compression”, so not only will a secure HTTP/2 connection have double the setup latency, it will also lose some of the bandwidth reduction benefits of compression compared to non-encrypted HTTP/2. (If you are interested, this strange compression prohibition was added to thwart the CRIME attack.)

      RE: This is what Android is doing. It has a server that proxies all HTTP requests and delivers those over HTTP/2, making the sites load faster (on average).
      Yes, google proxies *all* http requests. It even proxies (and encrypts) requests that could have been carried over non-encrypted HTTP/2. If the true goal was to make sites load faster, then there is no reason to encrypt those requests, since encrypting can only make them slower.

      RE: Those servers also transcodes images to WebP
      This is tangential to the http/https debate since they could do this trans-coding just as easily (and faster) over an un-encrypted connection. That said, I have to add that it takes a lot of hubris for google to unilaterally decide that they are better at deciding how to encode an image than the entity who is serving that image. It is just obnoxious, goes against every RFC in the book, violates the spirit of the internet, breaks things for people who never agreed to be subject to it, and makes people waste hours and hours trying to figure out what the hell is happening (like me!).

      RE: As for https taking a lot of resources on the server. To quote google; “On our production front-end machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead.
      It is not surprising that google says that google’s mandate doesn’t hurt google (that much). Just because google is willing to bear these extra costs doesn’t mean that other people should feel as good about paying a needless tax that is imposed on them to further google’s strategic goals. While it might only add a 1% hit to encrypt a gmail session stream on one of google’s blazing custom servers, I can say that for my modest hosting having to switch over to https meant having to buy new hardware that otherwise would have been completely unnecessary.

      Again, it comes down to politics – if you have something that you think will help others, then you should convince them to use that thing because it is in their best interests. If you have a thing that you think will help *you*, then you use your power to force others to use that thing.

      RE: I also don’t think setting up a proxy connection is easier than installing a plugin
      One person can set up (or just buy and plug in) a filtering proxy that blocks ads for dozens (or hundreds or thousands) of downstream users – and the users do not have to do anything at all to enable the filtering, it just happens. Compare this to requiring each and every downstream user to install software on their local device. Not to mention that the current crop of client-side ad blockers (at least the ones I’ve looked at) cause enough slowdowns and rendering problems that they are really more about making a statement than actually improving your browsing experience.

      RE: Also, proxies can bust https, by replacing all https:// links with http://. So unless the user types in the url with https:// the proxy can intercept any attempt by the server to redirect to https.
      I’ve recently worked on this very problem, and it is much harder than it sounds. Do you know of any available working system (besides mine! :) ) that successfully does this reliably?

      RE: I’d also suggest reading this article: https://scotthelme.co.uk/still-think-you-dont-need-https/
      I’d not seen that exact article before, but I do not find it compelling. Let’s go point by point…

      “HTTPS makes things faster! HTTP/2 is here!”
      Again, he falls into the same trap of comparing unencrypted http/1.1 with encrypted http/2. The real comparisons that matter are (1) http/1.1 over encrypted versus unencrypted connections, and (2) http/2 over encrypted versus unencrypted connections. If you compare apples to apples, then a page always loads faster over an unencrypted connection (an http: URL) than an encrypted one (an https: URL).

      Besides conflating (http/1.1 versus http/2) with (http versus https), the two examples he cites are shockingly contrived.

      The first demo separately downloads 360 identical copies of the same tiny 20×20 image from the same server. Of course no actual website would ever do this. You’d download one copy of the tiny image and display it 360 times. Or, if you really wanted the page to load quickly, you’d put the one tiny image inside a “data:” URL in the containing page. If you download this same page the way any normal website would do it (i.e. not contrived to prove a point), then plain un-encrypted http wins massively.
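      The “data:” trick mentioned above can be sketched like this (Python; the payload here is a standard 1×1 transparent GIF, standing in for the demo’s tiny image):

```python
import base64

# Sketch of the "data:" approach: embed a tiny image directly inside
# the HTML so displaying it costs zero extra requests. The payload is
# a standard 1x1 transparent GIF used as a stand-in for the demo image.
gif_bytes = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)
data_uri = "data:image/gif;base64," + base64.b64encode(gif_bytes).decode()
img_tag = f'<img src="{data_uri}" width="20" height="20">'

print(img_tag[:40])  # the image ships inline, inside the page itself
```

      A page built this way needs no separate image fetches at all, encrypted or not.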

      The second demo breaks up a normal sized image into 361 individual tiny 32×32 images and then downloads them sequentially from a single server. Boy, if you have to twist and contort a problem that hard to prove your point, well…. Of course any real website would just serve one normal-size image, and the webpage with that one image would load faster over plain http than over https, even with http/2.

      This is all pretty complicated, so let me restate the basic point – ANY web page (including the above examples) will always load faster and use less bandwidth over an unencrypted connection than over an encrypted one, regardless of whether you use http/1.1 or http/2. The above demos are just interesting to nit-pick because if you un-contrive them and load them the normal, obvious way, then they even lose the point they are trying to make comparing the apples-to-oranges case of unencrypted http versus encrypted http/2.

      “SEO Ranking”
      He makes the point that google will punish you unless you follow their edict and use https. Absolutely agreed.

      “HTTP will be getting nasty warnings”
      Yep, it is likely that google will punish you even more in the future for disobeying them.

      “Stop 3rd party content injection”
      If this is the problem you want to solve, then content signing is the right solution not https for reasons outlined above.

      “Stop malicious content injection”
      Yes, governments/schools/companies (not just China) want to filter what pages their users can access on the networks they control. No, switching a page to https: will not make them throw up their arms and give up on wanting to filter that page. Instead, they can simply block the whole site rather than the specific pages (or do other more disruptive things). Is that a win for the site owner? For the user who wanted to see the page? Anyone who thinks that switching sites over to https will increase people’s ability to access them does not live in China (or go to a US elementary school).

      “Deprecating Powerful Features on Insecure Origins”
      Yep, it is likely that google will punish you even more in the future for disobeying them.

      “Better Referrer Data”
      Yep, if you want good referrer data, you now have no choice but to use Google Analytics since google switched to https.

      “iOS and Android upping the ante”
      Yep, apple has some skin in this game too.

      “Brotli Compression”
      Yep, it is likely that google will punish you even more in the future for disobeying them.

      “Myth-Encryption introduces server overheads”
      Yep, according to google, google decided it was worth it to google to pay the significant extra costs for https for a google app running on google boxes.

      “Myth-Encryption is expensive”
      Yes, someday you might not have to waste money on expensive and difficult to obtain certificates, but even when that sunny day someday arrives, there are still many other costs to https besides the certificate.

      Thanks for the thoughtful feedback- not many people who think about these issues are so well informed technically. I look forward to more!

      • Karl Tiedt

        A few points that really stand out in the previous reply… And I will say up front, I work for Google, but my sentiments on these points really have no bearing on who my employer is, and similarly, my opinions are precisely that, my own opinions.

        Significant extra costs:

        1-2% is not significant. Yes, the ONE extra round trip to negotiate under HTTP2 will be *more* than a single HTTP connection, but you must also realize that in HTTP 1 a single page may use as many as 8 connections depending on your browser, each with its own overhead. If you are truly concerned about the initial extra overhead, maybe you should stick with using a landline to call your local library; it has even less overhead and has the added advantage of being yet another step back in technological progress.

        Expensive and difficult to obtain certificates?

        Judging by your “Google will punish you” remarks, you must be living under an anti-Google rock. If you would come out from under it, you might discover things like https://letsencrypt.org/ which offers free short-lived (90 day) certificates and is striving to promote better security through more reasonable practices, all while reducing the barrier to entry.

        Deprecating Powerful Features on Insecure Origins:

        These are things the web standards committees agree on, not something driven by Google (yes Google is involved, but it is not driven by any *one* company).

        Contrary to [not so] popular beliefs, no one company dictates the Web, not even Google.

        • bigjosh2

          Thanks for the reply, but I’m not sure I follow your arguments.

          Significant extra costs: 1-2% is not significant.

          What costs are you referring to here? The costs of the additional bandwidth used to serve the same exact page over a TLS connection compared to a non-TLS connection? Can you share any tests that justify these figures?

          Even if (for the sake of argument) we take your 1%-2% extra cost numbers as correct (and as someone who hosts lots of TLS and non-TLS sites, I think these numbers are way, way too low), to whom are they “not significant”? To google, or to the people who actually have to pay them?

          Yes the ONE extra round trip to negotiate under HTTP2 will be *more* than a single HTTP connection but you must also realize that in HTTP 1 a single page may use as many as 8 connections depending on your browser, each with their own overhead. If you are truly concerned about the initial extra overhead, maybe you should stick with using a landline to call your local library, it has even less overhead and has the added advantage of being yet another step back in technological progress.

          Hyperbolic landline comments notwithstanding, it seems like you are arguing that HTTP2 is usually better than HTTP1 for pages that have lots of requests over low latency connections – and I agree. But this does not seem relevant to the argument.

          Do you disagree that a secure connection over either HTTP1 or HTTP2 will always be slower and use more bandwidth and have higher latency than a non-secure one over the same protocol?

          Judging by your “Google will punish you remarks” you must be living under an anti-Google rock.

          Do you disagree that google actively penalizes sites that don’t switch to TLS? Google publicly acknowledges this in their own documentation.

          If you would come out from under it you might discover things like https://letsencrypt.org/ which offers free short lived (90 day) certificates which is striving to promote better security through more reasonable practices all while reducing the barrier for entry.

          Even under my rock, I know all about letsencrypt.org and have used their certificates on some of my sites – including the one you are reading this post on! :)

          I love what they are doing, but there are still reasons why letsencrypt.org certs can not replace traditional certs for all websites – but that’s another article…

          But let’s (again, for the sake of argument) pretend that we live in a world where all certs are available for free and are trivially easy to acquire and install and maintain and revoke (again, as someone who hosts letsencrypt & normal cert sites, I’d argue this dreamland is far, far away). Does that materially change the argument? Why would google want to dictate that sites *must* switch to TLS even though the sites and their users do not want it? I’d argue that google wants to do it for political and strategic reasons. And hey, there is nothing wrong with google doing things for political and strategic reasons – all’s fair in love and search result ranking – I just want to call them out for pretending that they have technological and benevolent motivations. The only reason they can get away with it is because this stuff is very complicated. I’m trying to explain the technicalities so that people can see what is really going on.

          Deprecating Powerful Features on Insecure Origins: These are things the web standards committees agree on, not something driven by Google (yes Google is involved, but it is not driven by any *one* company).

          Again, I am not sure what you are arguing here. The RFC standards are clear – TLS and non-TLS are supported over HTTP1 and HTTP2 in any combination.

          Do you disagree?

          Looking forward to your reply. Thanks!

          • Karl Tiedt

            Sorry for the harder-to-read replies; I'm not familiar with what markdown is supported, so double returns between unrelated paragraphs (responses).

            The numbers are not mine, they came from the quote where Google released the overhead caused by their use of HTTPS.

            In an HTTP1-only environment, HTTPS will incur more overhead for every connection and HTTP will be much more performant. However, in an HTTP2 environment this extra cost is minimal and ultimately in the best interest of the end user. My statement implies that anytime a site opens more than 1 connection, you have already lost to HTTP2, not only from the single connection required due to multiplexing but also from header compression.

            As far as agreeing, my hyperbolic statement basically said as much. Your *one* connection will be faster, with less overhead, but in the grand scheme, when you realize that with HTTP1 you are paying this price up to 8 times at once as the page is loading, you drive that cost down significantly… For example: it’s a fair statement that a car burns less gas than an 18 wheeler. Excluding complications like fuel costs and distances – let’s assume there are no fuel costs, distance is irrelevant, there are no bottlenecks beyond physical limitations, we only care about time, and both vehicles travel at the same speed. The car can haul 200 pieces of cargo (small boxes), meanwhile the 18 wheeler can carry 20,000… Would you rather make that trip 100 times in the car, or 1 time in the 18 wheeler? Let’s say the 18 wheeler even had to drive there and back to confirm the order before picking it up and delivering it… It still got all the packages to the destination in 98 trips before the car did… Oh, medication brain, I almost forgot: let’s assume the car can “multi task” and deliver 8 sets of packages at once (because we can have up to 8 connections on some browsers with HTTP1); this allows the car to deliver a total of 1600 packages a trip, meaning it delivers 3200 packages in the same time the truck is able to get 20,000 delivered. Small up-front cost, large back-end gain. Yes, this is not a perfect example, but it is illustrative of the gains of this approach over the previous one.

            While it may come across as punishment to you, as a consumer I would much rather land on a properly secured website when I search for something I am interested in than on a half-assed setup that may or may not expose data through my browser. Take your Wikipedia logo example: sure, it doesn't *need* to be encrypted unless you want to ensure it is not tampered with. However, a single insecure resource on a "secure" website is all it takes to provide a vector for several forms of web-based attacks. Why would *any* company want to encourage that practice? Your site follows it, by the way, which is why your website does not show a 'secure' icon in the address bar.

            And yes, I realize Let's Encrypt doesn't offer the higher-end certificates, but for the large majority of the web, what they offer is sufficient (and still free).

            Regarding supporting TLS and non-TLS: yes, the spec does not make TLS mandatory. However… (from https://http2.github.io/faq/ )

            “However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection, and currently no browser supports HTTP/2 unencrypted.”

            This means that every major browser vendor (Microsoft, Mozilla, Opera, and Apple, as well as Google) has currently agreed on this approach, not just Google.

            I think I touched on each of the responses. I started a new medication today, so I'm a bit foggy-headed and finding it hard to focus.

  2. bigjosh2

    Let’s start simple. I assert that it will always take more round trips and use more bandwidth to load a page over an HTTP/2 connection that is TLS-encrypted than to load the same page over an HTTP/2 connection that is not encrypted. Do you agree or disagree?
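    For what it's worth, the round-trip half of that assertion can be counted out on paper. This is a simplified sketch with assumed handshake costs (no TLS False Start, no TLS 1.3 0-RTT, no session resumption, all of which reduce the gap):

```javascript
// Simplified round trips before the first response byte arrives on a
// brand-new connection (assumed costs for illustration).
const tcpHandshake = 1;     // SYN / SYN-ACK / ACK
const tlsHandshake = 2;     // full TLS 1.2 negotiation
const httpRequest  = 1;     // GET out, response back

const cleartextRtts = tcpHandshake + httpRequest;                 // 2
const encryptedRtts = tcpHandshake + tlsHandshake + httpRequest;  // 4

console.log(cleartextRtts, encryptedRtts);
```

    On a 100 ms link that is roughly 200 ms versus 400 ms to first byte for a cold connection, before any page content moves.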

  3. j7n

    This post has a rare voice of reason. Almost the whole internet unanimously agrees that SSL is a welcome and essential change – on every site, including local intranet. Most computer users would unfortunately also apply every update without question and think that all change is for the better.

    The latency from SSL is indeed noticeable while browsing; I can easily distinguish the delay that happens once I’ve been inactive on a site for a while and existing connections have timed out. HTTP/1.1 with pipelining doesn’t have serious performance issues that need to be addressed with Google’s HTTP/2, especially since the count of image resources on modern “flat” websites is lower. It is unfortunate that encryption was forced in this new standard.

    The modern Web and its apps are among the most resource demanding tasks for an “old and slow” computer for a number of reasons. With large keys and modern ciphers, the CPU cost incurred for every new handshake is more significant than it used to be back when SSL was used selectively. Certificate revocation lists and OCSP validation add more overhead above what is absolutely required to establish a connection. Enforced SSL has now rendered obsolete old browsers and operating systems (for Chrome-based browsers and other software that uses Windows’ certificate management).

    Content filtering is almost a must for the modern web. Not just for ads, but also for dozens of scripts, beacons and web-bugs for statistics gathering, which are delivered from multiple web servers and commonly over SSL now, each with associated CPU cost. I do use DNS on the router to remove the most obnoxious ads, mainly Google’s. I am worried that DNSSec (yet another unwelcome security update) will make this kind of filtering impossible. And DNS cannot filter ads (or ad selection scripts) hosted on the same domain as legitimate content.

    About a year ago I asked on Wikipedia’s IRC if HTTPS could be opted out of. A few members expressed surprise at my inquiry, but eventually directed me to a checkbox, which did that for all pages except login. That option was soon removed, though.

    In my opinion, the case where a user has legitimate reasons for hiding which public articles or other news media he reads is an exception, and could be addressed via optional use of HTTPS selected in the browser.

    I observe that this site also has forced Let’s Encrypt SSL.

  4. bergamotewilks

    Interesting read, it’s good to see someone at least arguing the point of https. Maybe it all boils down to some more ethics/philosophical perception of the internet as a whole. Since I did my first crappy website hosted on a half broken laptop in the corner of the bedroom, I always saw the internet as a public network, more like an amateur radio network: you can broadcast to everybody and everybody’s equal, just tune in my ip address. Almost two decades later, the web is where you can do your banking, have intimate conversations with your loved ones, be subject to personally tailored advertisement, do your shopping and save all your photos for eternity.

    Now if, like me, you’re hosting 5 websites on a £100 micro computer in a corner of the living room, switching to https is either expensive (buy a certificate) or a pain in the butt (letsencrypt renewal…), but either way it will make your websites slower. More importantly, the security concerns https addresses are really the concerns of banks and big websites, not my food blog that gets 100 visitors max per day.
    Amazon, Facebook, Google, Apple, Twitter, etc… all of them have fast servers and aren’t too scared of losing customers/users in sub-Saharan Africa. But they don’t intentionally set out against anyone in particular. They’re just genuinely trying to build an internet that’s a solid platform for business and money transfers. Whilst personally, I think the internet should be a solid platform for people to express themselves equally and publicly. If some companies make money with it by creating privacy within the public network, fine. But we can’t let their concerns set our goals for the future development of the web.

  5. bigjosh2

    A web app that I wrote 10 years ago, and that has been humming along happily ever since, just broke. Know why? Flickr just started answering non-HTTPS requests with a 0-byte payload. What is the rationale for protecting me from making HTTP requests to my own public metadata? None, just a lot of extra work (for me, and for every transaction ever again). Thanks, Google.


  6. bigjosh2

    Recently got a request from a factory in China to email them a PDF printout of a public GitHub repo. Good thing GitHub is protecting them from oppression… by blocking their access to public information. Thanks, GitHub.

  7. bigjosh2

    Goddammit, Google. In Chrome you cannot generate a SHA-256 hash using the built-in Javascript API unless the page was loaded over https. So fricken obnoxious…


    Note that this adds absolutely no security at all: you can still generate a SHA-256 in Javascript, it will just force the user’s computer to work harder for no reason. If anything it is less secure, because it forces you to use downloaded code rather than code built into the browser. :/
