HTTP/1.1 has been a pretty good protocol, but it does have some issues when it comes to performance. It leads to a lot of HTTP requests, each repeating much of the same header information, and it requires that one request finish before the next one begins. HTTP/2 aims to correct these issues.

For several weeks I’ve been talking about the first two steps in the critical path and their impact on website performance. I introduced the time to first byte (TTFB) metric and then talked about DNS lookups and DNS caching. The last couple of weeks I discussed the HTTP/1.1 protocol, its performance issues and solutions, including HTTP caching.
Today I want to finish the talk about HTTP and introduce you to HTTP/2, which you may or may not already use. I’ll talk about the ways in which it attempts to solve the issues of HTTP/1.1 and the differences in strategy when optimizing for each. I’ll close with a few words about HTTPS and performance.
The Performance Benefits of HTTP/2
HTTP/2 was developed by the IETF’s (Internet Engineering Task Force) HTTP Working Group to address some of the performance issues of HTTP/1.1. According to the Working Group, the key differences with HTTP/1.x are that HTTP/2:
- is binary, instead of textual.
- is fully multiplexed, instead of ordered and blocking.
- can serve multiple files in parallel over a single connection.
- uses header compression to reduce overhead.
- allows servers to “push” responses proactively into client caches.
Binary protocols are more efficient to parse and less error prone than textual protocols. They make it faster to transfer data and they’re machine friendly. This is good for performance and not something you need to configure. It’s a free optimization you get for switching to HTTP/2.
Multiplexing means multiple files and requests can be transferred at the same time, as opposed to HTTP/1.1, which handles only one request at a time per connection. In other words, requests are transferred in parallel instead of in series, which avoids the overhead of establishing multiple connections. Multiplexing eliminates the issues associated with head-of-line blocking.
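To make that concrete, here’s a small sketch in Go (my choice of language for these examples; the host and asset paths are placeholders) that issues several requests concurrently through one shared client. Against an HTTP/2 server, the client can multiplex all of them over a single connection instead of queuing them one after another or opening extra connections.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// One shared client. Against an HTTP/2 server, Go's standard client
	// can multiplex these concurrent requests over a single connection
	// rather than opening one connection per request.
	client := &http.Client{}
	paths := []string{"/fonts.css", "/header.css", "/grid.css"} // placeholder assets

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			resp, err := client.Get("https://example.com" + path) // placeholder host
			if err != nil {
				fmt.Println(path, "error:", err)
				return
			}
			defer resp.Body.Close()
			// resp.Proto reports the negotiated protocol, e.g. "HTTP/2.0".
			fmt.Println(path, resp.Status, resp.Proto)
		}(p)
	}
	wg.Wait()
}
```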
Header compression reduces the overhead of each request, making requests smaller and allowing more of them to fit into a single IP packet. You don’t have to resend the same cookies, referrers, and other headers in full with every request, so less data is transferred overall.
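HTTP/2’s header compression scheme is called HPACK. To get a feel for why it helps, here’s a sketch in Go (it uses the golang.org/x/net/http2/hpack package, and the header values are made up) that encodes the same set of request headers twice. The second pass comes out far smaller because repeated fields are replaced with short references to a table built up during the first pass.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	// Headers a browser might send with every request (values are made up).
	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":authority", Value: "example.com"},
		{Name: "user-agent", Value: "Mozilla/5.0 (example)"},
		{Name: "cookie", Value: "session=abc123; theme=dark"},
	}

	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	// First request: most fields are written out literally and added to
	// the encoder's dynamic table.
	for _, hf := range headers {
		enc.WriteField(hf)
	}
	first := buf.Len()

	// A later request with the same headers: repeated fields are encoded
	// as short references to the table instead of being sent again.
	buf.Reset()
	for _, hf := range headers {
		enc.WriteField(hf)
	}
	fmt.Printf("first request: %d bytes of headers, repeat request: %d bytes\n",
		first, buf.Len())
}
```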
With server push, a server can anticipate future requests and send information before it’s requested. For example, instead of sending an HTML file and then waiting for requests for CSS, Javascript, images, etc., the server can send these resources knowing in advance that the client is going to request them.
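What push looks like in practice depends on your server; many servers expose it as a one-line configuration directive. As a rough sketch, here’s how a handler could push assets using Go’s standard http.Pusher interface (the file paths and certificate names are placeholders).

```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Over HTTP/2, the ResponseWriter also implements http.Pusher.
	if pusher, ok := w.(http.Pusher); ok {
		// Push assets we know the page will ask for, before the client
		// has even parsed the HTML. Push is only an optimization, so
		// errors can safely be ignored here.
		pusher.Push("/static/styles.css", nil) // placeholder paths
		pusher.Push("/static/app.js", nil)
	}
	http.ServeFile(w, r, "index.html")
}

func main() {
	http.HandleFunc("/", handler)
	// HTTP/2 (and therefore push) only works over TLS with Go's server;
	// the certificate files are placeholders.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```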
The downside to server push is that the server may push resources that already live in the client’s cache. A proposed solution is cache digests, which let the client tell the server what it already has cached, so the server only pushes the resources that are actually needed.
The articles below will give you more information about cache digests.
- Cache Digests for HTTP/2
- Caching and HTTP/2 Push
- Cache Digests: Solving the Cache Invalidation Problem of HTTP/2 Server Push to Reduce Latency and Bandwidth
Another advantage of HTTP/2 is stream prioritization, which allows the client to specify the order in which it wants to receive resources. HTTP/2 is also backwards compatible. Your browser likely uses it already for sites that are delivered over HTTP/2 and falls back to HTTP/1.1 for those that aren’t.
Reading all this, you might be thinking that HTTP/2 is the way to go and it is the near future, after all. However about half of all sites are still delivered over HTTP/1.1 at the moment. You can check to see which protocol your site currently uses here. You may or may not be able to upgrade depending on your hosting company, the type of hosting you’re using, and the options your company allows.
Assuming you can upgrade, which you probably can, be aware that browsers only support HTTP/2 over HTTPS, so you’ll need an SSL certificate, though HTTPS is something we should all be moving toward anyway. Eventually we’re all going to be running HTTPS over HTTP/2.
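If you run your own server, the certificate doesn’t have to be a hassle. As a sketch of one option (in Go, using the golang.org/x/crypto/acme/autocert package; the hostname and cache directory are placeholders), a server can fetch and renew a free Let’s Encrypt certificate automatically and then speak HTTP/2 over the resulting TLS connection.

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// autocert obtains and renews a free Let's Encrypt certificate for the
	// listed hostname; the hostname and cache directory are placeholders.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("example.com"),
		Cache:      autocert.DirCache("certs"),
	}

	srv := &http.Server{
		Addr:      ":443",
		Handler:   http.FileServer(http.Dir("./public")),
		TLSConfig: m.TLSConfig(),
	}
	// With TLS in place, Go's net/http negotiates HTTP/2 automatically;
	// there's no separate HTTP/2 switch to flip.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```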
Some performance optimizations under HTTP/1.1, like GZip compression aren’t available under HTTP/2, though the reason they aren’t available is because they shouldn’t be needed.
If you’d like to see some performance comparisons between HTTP/1.1 and HTTP/2, here you go.
- HTTP/2 – A Real-World Performance Test and Analysis
- A Simple Performance Comparison of HTTPS, SPDY and HTTP/2
Performance Strategies for HTTP/1.1 and HTTP/2
The differences between HTTP/1.1 and HTTP/2 mean there are different strategies for optimizing performance under each. Many optimization techniques for HTTP/1.1 revolve around minimizing the number of HTTP requests.
- Concatenating Javascript and CSS files
- Creating image sprites
- Inlining assets (placing CSS and JavaScript directly in the document head and base64-encoding images)
- Sharding domains or spreading requests over several domains to increase the number of open connections
None of these are necessary or advised under HTTP/2. Concatenation and sprites can lead to unnecessary data being transferred. Inlined assets can’t be cached separately from the pages that contain them. Domain sharding isn’t needed once requests are multiplexed over a single connection.
Optimizing performance for HTTP/2 requires a different strategy. You won’t have to worry as much about reducing HTTP requests; instead you should focus on delivering smaller, more granular resources, transferring them in parallel, and caching them independently of each other.
This change is mainly the result of multiplexing and header compression. The former eliminates the head-of-line blocking issues of HTTP/1.1 and allows multiple resources to be downloaded at the same time over a single TCP connection. The latter reduces the overhead of each request (the redundant headers), making requests smaller than those delivered uncompressed over HTTP/1.1.
If you look back up at the list of four things you wouldn’t want to do under HTTP/2, the first three are all essentially concatenation. Each combines multiple smaller files into a single larger file to reduce the number of requests.
Since we’re less concerned with the number of requests, we’d rather send fonts.css, header.css, footer.css, and grid.css separately instead of combining them all into a single main.css or styles.css.
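As a sketch of what that might look like server-side (again in Go, with placeholder paths; the fingerprinted filenames are an assumption on my part, not something required by HTTP/2), you can serve each small file with a long cache lifetime so every piece is cached, and invalidated, on its own.

```go
package main

import (
	"log"
	"net/http"
)

// longCache marks responses as cacheable for a year. This assumes asset
// filenames are fingerprinted (e.g. grid.3f2a1c.css), so a changed file
// gets a new URL and old copies can safely be cached "forever".
func longCache(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Serve the small, separate files from ./assets. Over HTTP/2 they're
	// fetched in parallel, and each one is cached and invalidated on its own.
	files := http.StripPrefix("/assets/", http.FileServer(http.Dir("./assets")))
	http.Handle("/assets/", longCache(files))
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```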
Having said that, there are probably times when it still makes sense to combine multiple assets into a single asset, even when serving your site over HTTP/2.
HTTP/2 Performance Downsides
There are some potential downsides to HTTP/2. Since the optimization strategies are different, if we optimize only for HTTP/2 we potentially penalize anyone using a browser that doesn’t yet support it. That problem gets smaller every day as browser support grows.
By unbundling all our assets into smaller, more granular files, we risk increasing the total bytes transferred over the network, because compressing a single large file is often more efficient than compressing many small files. Parallel transfers make this less of an issue, but depending on your site the increase could be significant.
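If you want to see that effect for yourself, here’s a small Go sketch (the CSS snippets are made-up stand-ins for real stylesheets) that gzips three small files separately and then as one concatenated file and compares the totals.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"strings"
)

// gzipSize returns the compressed size of s in bytes.
func gzipSize(s string) int {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte(s))
	zw.Close()
	return buf.Len()
}

func main() {
	// Stand-ins for three small stylesheets with plenty of shared patterns.
	files := []string{
		strings.Repeat(".header { color: #333; margin: 0 auto; }\n", 50),
		strings.Repeat(".footer { color: #333; margin: 0 auto; }\n", 50),
		strings.Repeat(".grid   { display: grid; gap: 1rem; }\n", 50),
	}

	separate := 0
	for _, f := range files {
		separate += gzipSize(f)
	}
	concatenated := gzipSize(strings.Join(files, ""))

	// The single bundle usually compresses better because the compressor
	// can reuse redundancy across file boundaries.
	fmt.Printf("compressed separately: %d bytes, concatenated: %d bytes\n",
		separate, concatenated)
}
```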
In practice HTTP/2 requires TLS, so an SSL certificate is a necessity, but that’s something the web is moving toward anyway and it won’t be long before everything is delivered encrypted.
It’s also fair to say that developers are still figuring out best practices for HTTP/2 and that the benefits of switching will depend on the details of a given website. Still, HTTP/2 is the future, while HTTP/1.1 is the past, or soon will be.
HTTPS and Performance
HTTPS isn’t really about performance. The S, as you likely know, stands for secure. Information sent over HTTPS is encrypted before sending. If anything, HTTPS over HTTP/1.1 could be slower because of the added encryption, though not to the extent where it should become an issue.
However, HTTPS over HTTP/2 is very fast. Since HTTP/2 requires HTTPS, we can’t measure HTTP/2 with and without encryption separately, but the combination of the two is faster than HTTPS over HTTP/1.1, likely as a result of the performance improvements in HTTP/2.
That said, even if you are currently using HTTP/1.1, adding HTTPS makes sense. It provides security for your customers and visitors, search engines appear to give preference to secure sites, and any performance hit will be minimal.
I won’t spend too much time on HTTPS as using it isn’t about performance. Instead I’ll point you to a couple of articles that walk you through how to make the switch from HTTP to HTTPS. Your host will gladly set it up for you for a fee, but there will still be some things you’ll need to do yourself, like updating absolute URLs across your site and setting up canonical information or redirects.
- A Complete Guide To Switching From HTTP To HTTPS
- Moving your website to HTTPS/SSL
- Let’s Encrypt (Free SSL)
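One of the tasks mentioned above, sending visitors from HTTP to HTTPS, is straightforward wherever you handle it. As a small sketch (in Go, with a placeholder hostname), a listener on port 80 can permanently redirect everything to the secure version of the same URL while the real site is served separately over TLS.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// A tiny listener on port 80 whose only job is to send visitors to the
	// HTTPS version of the same URL. The hostname is a placeholder; the
	// actual site is served separately over TLS.
	redirect := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		target := "https://example.com" + r.URL.RequestURI()
		http.Redirect(w, r, target, http.StatusMovedPermanently)
	})
	log.Fatal(http.ListenAndServe(":80", redirect))
}
```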
Closing Thoughts
Odds are your site currently runs over HTTP/1.1, with or without security. HTTP/1.1 isn’t a bad protocol, but it’s aging and it does have some performance issues: it’s chatty, with many requests repeating the same headers, and it suffers from head-of-line blocking. HTTP/2 addresses these issues with multiplexing, header compression, and server push.
While there are some performance optimizations you’ll want to make for both protocols, understand they have different strengths and weaknesses and so they have different strategies for improving performance.
With either protocol it’s time for all of us to use HTTPS. This is a requirement of HTTP/2, but should still be used with HTTP/1.1, even though it’s not really something that will improve performance.
Next week I want to share some of the changes I’ve made as a result of writing this series. I’ll fill you in on the changes I’ve made regarding DNS and HTTP and why I made them and I’ll share any differences I’ve measured in the performance of the site after making the changes.
Comments
“Some performance optimizations under HTTP/1.1, like GZip compression aren’t available under HTTP/2, though the reason they aren’t available is because they shouldn’t be needed”
What?
Gzip doesn’t depend upon the protocol version and is still relevant with HTTP/2.
My apologies if the information is incorrect, but everything I read suggested that HTTP/2 uses a different type of header compression. For example, the HTTP/2 FAQ.
“As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity”
Again, I apologize if I’m in error, but I’m still struggling to find anything about using gzip header compression under HTTP/2. Do you have any sources you can point me to? This isn’t my area of expertise and my goal in writing this series was as much to understand the topic myself as anything else. I definitely don’t want to lead anyone astray.
Gzip compression of HTTP BODIES absolutely is still relevant for HTTP/2.
For compression of HTTP HEADERS they went with a different option due to security attacks that can be used against gzip-type compression (see https://en.wikipedia.org/wiki/CRIME). These attacks are potentially dangerous for headers (which contain cookies) but less so for bodies. Under HTTP/1, HEADERS could not be compressed, so this was not really an issue, but that causes its own problems as HTTP headers are getting quite large now.
For bodies this is typically less of an issue and the performance benefits of using gzip (or deflate or brotli) are well worth it – even under HTTP/2.
I’d also question this statement “However about half of all sites are still delivered over HTTP/1.1 at the moment.”
Only about 17.5% of websites use HTTP/2:
https://w3techs.com/technologies/details/ce-http2/all/all