One reason I’ve been writing about performance this year is so I can better learn how to improve the performance of this site. There were changes I knew I should make before starting, like enabling a caching plugin. I used to have one set up, but somehow left it behind the last time I moved to a new host.
Since starting the series, I’ve purposely held off making the improvements I knew I should make, so I could share the changes as I make them, and explain why, before showing you the results. I’ll begin that sharing today. First, here are the earlier posts in the series in case you missed any.
- Time To First Byte (TTFB)
- How DNS Lookups Affect Website Performance
- How To Leverage DNS Caching
- The Performance Of HTTP Requests Over HTTP/1.1
- HTTP Caching And Cache Validation Over HTTP/1.1
- How HTTP/2 Solves The Performance Issues Of HTTP/1.1
I want to gather some initial results, set some goals and a strategy for achieving them, and then let you know what changes I’ve made. I’ll recheck the results and we’ll see how well I’ve done and what improvements, if any, I can make to the DNS and HTTP times for this site.
I mentioned at the start of this series that DNS and HTTP tweaks weren’t likely to have a huge performance impact on the site; perhaps some, but not a lot. As you’ll also see, it’s difficult to get consistent measurements for DNS and HTTP times, and as a result it can be hard to definitively show the effects of some changes.
Before Results
The first thing to do is collect some before results. I’ll be making changes that will affect DNS and HTTP. I can get results specifically for DNS times, and I’ll use the performance tester at KeyCDN to gather them. I don’t know of an easy way to test HTTP request times specifically, so I’ll compare the before and after time to first byte (TTFB) results, as well as the before and after Speed Index measurement, both from the tester at WebPageTest.
KeyCDN provides DNS results from a number of physical locations around the world. WebPageTest also lets you choose a physical location, so I ran several tests from Dallas, Paris, and Sydney, locations available in both testers.
Finally, both sites report a time to first byte. I’ll use the WebPageTest results, mostly because I’ll rely on their tester more in future series, but also because, of the two testers, WebPageTest consistently showed the higher, and therefore worse, TTFB for the site.
WebPageTest runs three tests for each URL you supply, and tests after the first should include some cached information. I ran the KeyCDN test twice to compare uncached and cached results. The numbers in parentheses below are the results of the second (and, for WebPageTest, third) tests.
Location | DNS in ms (2nd test) | TTFB in s (2nd, 3rd runs) | Speed Index (2nd, 3rd runs)
---|---|---|---
Dallas | 255.266 (7.275) | 2.500 (2.600, 2.300) | 2746 (2882, 2735) |
Paris | 127.265 (6.541) | 1.267 (1.225, 1.114) | 2414 (2514, 2410) |
Sydney | 510.564 (5.523) | 1.562 (2.231, 1.430) | 3546 (6400, 3882) |
In the case of the DNS results (first column), you can see the second test, in parentheses, benefits from cached DNS records and shows greatly reduced times. It’s similar with the WebPageTest results for TTFB and Speed Index, though not across the board. I assume there was an anomaly of sorts with the Sydney tests, as the second-run results were by far the worst.
One last bit of before testing. WebPageTest shows several high-level grades, and I haven’t been doing well this semester, as you can see in the image below. Let’s see if I can improve those.

DNS Changes
If you remember the first DNS post in this series, I mentioned having a difficult time finding recommended times to use as goals. I found numbers suggesting the initial DNS lookup should take no more than 125ms and any cached lookup no more than 15ms. I do OK with the cached results, but my uncached times are mixed.
I checked my DNS records, and the domain is set up as an A record, which resolves faster than a CNAME. I don’t see any reason to mess with the setup, as I wouldn’t expect the times to change much.
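To illustrate the difference, here are two hypothetical zone-file records (the names and IP address are placeholders). An A record resolves in a single lookup, while a CNAME points at another name that then has to be resolved in turn.
[code type=html]
; A record: one lookup gets you the IP
www.example.com.    43200    IN    A        203.0.113.10
; CNAME: the target name still has to be resolved
blog.example.com.   43200    IN    CNAME    www.example.com.
[/code]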
My DNS records also showed the time to live (TTL) set to 43200 seconds, which is 12 hours. I didn’t change this either. Had the TTL been set lower, I might have raised it, but I think 12 hours is a fine length of time for resolvers to keep the records. I’ll remind myself to lower the value a day before moving the site to a new IP, should I choose to move to a new host in the future.
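If you want to check the TTL on your own records, a quick dig query from the command line will show it. The second column of the answer is the remaining TTL in seconds (the domain and IP here are placeholders).
[code type=html]
$ dig +noall +answer example.com A
example.com.    43200    IN    A    203.0.113.10
[/code]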
The main change I wanted to make was to reduce the number of DNS lookups. The site makes a little more than a half dozen requests to external resources. Looking at the results from WebPageTest, I noticed a DNS lookup for StumbleUpon, which I think was leftover code from when I had social sharing buttons below each post. Since those buttons are no longer on the site, I removed the block of JavaScript that included the link to StumbleUpon.
The remaining lookups include calls to Stripe, Google Analytics, and several of my social profiles. I don’t want to remove Stripe or Google Analytics. I could remove all the social buttons, as I really don’t pay much attention to social media, but instead I’ll keep those too and prefetch all of their domains.
Here’s the code I added to the head section of the page, one dns-prefetch hint per external domain. (The social hosts below are stand-ins for whichever profiles you link to.)
[code type=html]
<link rel="dns-prefetch" href="//js.stripe.com">
<link rel="dns-prefetch" href="//www.google-analytics.com">
<!-- one hint per social profile domain, for example: -->
<link rel="dns-prefetch" href="//twitter.com">
<link rel="dns-prefetch" href="//www.facebook.com">
[/code]
I reran the tests, and as you might guess, there’s not a lot of time saved. The KeyCDN DNS times are pretty much the same. There’s some variation, but nothing significant; nothing to suggest these changes made much of a difference. I didn’t expect they would, since I didn’t change anything that should affect the initial lookup.
The first row below shows the before results for Dallas and the second row shows the after results.
Location | DNS in ms (2nd test) | TTFB in s (2nd, 3rd runs) | Speed Index (2nd, 3rd runs)
---|---|---|---
Dallas | 255.266 (7.275) | 2.500 (2.600, 2.300) | 2746 (2882, 2735) |
Dallas | 256.159 (7.117) | 1.007 (1.038, 0.959) | 2457 (2723, 2557) |
As you can see, there’s not much difference at all in the DNS times. In fact, the first run is actually slower after the changes. Since I didn’t change anything that should affect that time, it shows the difficulty in getting consistent results; the differences here come down to nothing more than running the tests 15 minutes apart.
On the other hand, the TTFB results show a significant improvement of about 1.5 seconds, or roughly 60%. Unfortunately, I have a feeling this is more of an anomaly than anything else, as the results from Paris and Sydney show no such improvement, and I wouldn’t expect prefetching to affect TTFB either.
Suffice it to say, there’s not a lot of change from the small DNS improvements I made. The changes in the KeyCDN DNS times are insignificant. The TTFB times did drop by a lot in the case of Dallas, though probably not because of anything I did. The Speed Index results did improve, likely as a result of prefetching the third-party domains.
HTTP Changes
As I mentioned in the intro, I couldn’t find an easy way to measure HTTP request times specifically. KeyCDN shows a connection time, but I think that’s for the TCP connection and not the HTTP request and response. I found advice on coding up something on the command line, and on using the application Wireshark to test; both were more than I wanted to take on.
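For the curious, here’s a rough sketch of the command-line approach using curl’s built-in timing variables (the URL is a placeholder). The difference between the connect time and the start-transfer time roughly covers the HTTP request, server processing, and the first byte of the response.
[code type=html]
$ curl -o /dev/null -s -w "DNS: %{time_namelookup}s\nTCP connect: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" http://example.com/
[/code]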
I did check with my web host and they can upgrade to HTTP/2 instead of the HTTP/1.1 the site currently uses. However, for the time being I’m going to hold off on that change as well. I’ve been considering switching hosts and think I’ll wait until I’ve decided where the site will be for the next few years.
Also, switching protocols would call for additional changes, such as splitting image sprites back into separate images. I don’t have the time to make those kinds of changes at the moment, so again I’ll stick with HTTP/1.1 for now.
That said, there were three things I wanted to check and optimize: Keep-Alive, Gzip, and caching.
Keep-Alive Settings
Keep-Alive was already enabled, but I didn’t have anything specifically set for timeout and max. I added the following to my .htaccess file to correct that.
[code type=html]
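# Advertise keep-alive to clients: connections stay open for 5
# seconds and serve up to 100 requests each. Note this sets the
# response header; the actual server behavior is governed by the
# KeepAliveTimeout and MaxKeepAliveRequests directives, which live
# in the main Apache config rather than .htaccess.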
Header set Keep-Alive timeout=5,max=100
[/code]
Gzip Compression
When I first started checking performance results, I was surprised to find I didn’t have Gzip turned on. I think I had it on when the site was on the previous host, but I’d somehow lost it in the move. Because the server has mod_deflate installed, I used it to turn Gzip back on.
Here’s the code I added to my .htaccess file. I found it on the GiftofSpeed website.
[code type=html]
# BEGIN GZIP
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE text/javascript
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE image/x-icon
AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/x-font
AddOutputFilterByType DEFLATE application/x-font-truetype
AddOutputFilterByType DEFLATE application/x-font-ttf
AddOutputFilterByType DEFLATE application/x-font-otf
AddOutputFilterByType DEFLATE application/x-font-opentype
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
AddOutputFilterByType DEFLATE font/ttf
AddOutputFilterByType DEFLATE font/otf
AddOutputFilterByType DEFLATE font/opentype
# For Older Browsers Which Can't Handle Compression
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# END GZIP
[/code]
After setting up Gzip, the page still failed WebPageTest’s compress transfer check and received a grade of C for compress images. The image issue was an easy fix: I had never compressed the two book cover images on the home page, and the rss.png image in the footer was also uncompressed. I ran all three images through ImageOptim and uploaded them back to the server. The home page now receives an A for compress images.
WebPageTest was still giving me a failing grade for compress transfer, though, pointing to several JavaScript files being delivered uncompressed. A little searching turned up the solution: the server was sending the .js files with a MIME type that didn’t match any of the types in the mod_deflate list above. The following line added to my .htaccess file declares them as text/javascript so they match the filter.
[code type=html]
AddType text/javascript .js
[/code]
With that small addition, I now receive an A for both compress transfer and compress images.
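If you’d rather not wait on a full WebPageTest run, you can check compression directly by requesting a file with curl and looking for the Content-Encoding header (the URL is a placeholder).
[code type=html]
$ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://example.com/script.js | grep -i content-encoding
Content-Encoding: gzip
[/code]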
HTTP Caching
With so many different possibilities for setting HTTP caching headers, I leaned on the strategies in Jake Archibald’s article, though I didn’t follow them exactly.
I decided to use Cache-Control over Expires as the former is now the preferred way to set HTTP caching. I also wanted to target different types of files with different types of caching.
It makes sense to cache static assets like images, CSS, and JavaScript files. A month seemed an appropriate length of time, so I added the following code to my .htaccess file. The code matches any resource ending with one of the listed extensions.
[code type=html]
# Cache static assets for a month
# (adjust the extension list to match your own static files)
<FilesMatch "\.(jpg|jpeg|png|gif|ico|svg|css|js)$">
Header set Cache-Control "max-age=2592000, public"
</FilesMatch>
[/code]
If there’s a match, the Cache-Control header is set with a max-age of one month, or rather 30 days (2592000 seconds), and the public directive means any cache can store the files, not just the visitor’s browser.
The images aren’t likely to change often, or at all, and changes to .css and .js files usually come with a version number in the filename, so they’d be new files to download anyway.
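For example, a versioned reference like the following (the filename is hypothetical) means an updated stylesheet gets a new URL and skips the month-long cache entirely.
[code type=html]
<link rel="stylesheet" href="/css/styles.v2.css">
[/code]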
I had planned on using cache validation for .html and .php files with the following:
[code type=html]
# Don't cache, but do validate dynamic files
<FilesMatch "\.(html|php)$">
Header set Cache-Control "no-cache"
FileETag INode MTime Size
</FilesMatch>
[/code]
In the end, I removed the validating code. As I mentioned earlier, one of the changes I know I need to make is to set up a caching plugin for WordPress, which I plan to do as part of my next performance series. I’m pretty sure the plugin I plan to use will handle setting ETags for the appropriate files, so I decided to let the plugin do the work instead of writing the code myself.
I’ll double-check when I get to that part of the series, but for now I’ve forgone the no-cache and ETag headers for .html and .php files.

The result of caching static assets is that the home page now receives a grade of B for cache static assets when WebPageTest runs from Dallas, and a high C from Paris and Sydney. The reason for a B/C rather than an A is out of my control: several files from Stripe and the one from Google Analytics aren’t cached, and as they aren’t served from my server, there’s not much I can do about them. I’ll live with a B/C for now and look into whether I can cache the external resources in a satisfactory way.
A Few Words About HTTPS
As I said last week, HTTPS is for security, not performance. Because of that, I won’t be running any before and after performance tests here. However, I would like to address something regarding HTTPS and this site.
A few years ago I purchased an SSL certificate for the shopping cart plugin I have installed. Not long after, I moved to a new host (my current one), and they weren’t able to install the certificate. After a lot of back and forth, I became frustrated, moved on to other things, left everything on HTTP, and promptly forgot about the missing S. That’s my bad, and it’s something I shouldn’t have let slide.
Writing this series led me to look into it again. I just got off the phone with my current host to ask about purchasing a certificate, and I learned there’s an issue with the server’s operating system: I need to upgrade to a new OS before I’ll be able to run HTTPS.
I say I just got off the phone, but that’s from my perspective while writing. By the time you’re reading this, all the work should be done and the cart should be working over HTTPS. Better still, the upgrade will allow me to make some additional changes I wasn’t sure I could make with my current host, like switching over to HTTP/2.
In time, I do want to switch the whole site over to HTTPS, possibly running over HTTP/2. I don’t know if I’ll have done the work by the time you’re reading this, though it’s possible I will have. It’s something I do want to set up and when I do, I’ll likely write up a post or two about my experience and what was involved in making the change.
Again, my bad for not dealing with this sooner.
After Results
My hope for this section was to show you lots of improvements in DNS, TTFB, and Speed Index. That didn’t quite happen. Here are the results after all my changes.
Location | DNS in ms (2nd test) | TTFB in s (2nd, 3rd runs) | Speed Index (2nd, 3rd runs)
---|---|---|---
Dallas | 257.953 (7.066) | 1.154 (1.100, 1.706) | 2605 (2659, 3193) |
Paris | 127.031 (6.568) | 1.364 (1.170, 1.394) | 2216 (2200, 2246) |
Sydney | 254.265 (5.638) | 1.556 (1.682, 1.582) | 3092 (3169, 3092) |
If you compare the before and after DNS times, you’ll notice they’re pretty much the same. Dallas is a touch slower, Paris is essentially unchanged, and Sydney’s apparent improvement on the first run says more about the variability of these tests than about anything I changed.
The TTFB and Speed Index results are up and down. In some cases they’ve improved, while in others they’ve gotten worse. They’re also sometimes worse on a second or third run, when you’d expect them to improve as some assets are cached in the runs beyond the first.
I think this is part of why DNS and HTTP performance can be difficult to track. You can run the same test moments apart, without making any changes to your site, and get back varying results. It’s also because the changes I’ve made here aren’t the kind that have a great impact on performance. Some impact, yes, but not as much as other things we’ll get to in future series.
I also want to remind you that the time to first byte improvements aren’t yet complete. So far I’ve covered DNS and HTTP, but there are still some things to do involving the server and the time it takes to process requests and send back the first byte of data. I expect to see more impactful results with some of the changes I’ll make in the next series.
Closing Thoughts
Hopefully this series on DNS and HTTP has been useful and helped you understand what’s going on with both and some of the things you can do to improve their performance.
I realize the performance gains were minimal at best, but it was worth it to me to learn more about both DNS and HTTP, and maybe it was the same for you. Hopefully I made you aware of something you didn’t know before.
Also as I mentioned at the start of the series, these were all relatively easy tweaks to make. The majority of time I spent on this post was waiting while the testing sites did their thing and returned results. The hardest part of making the tweaks was deciding on an appropriate strategy before making changes.
I’ll continue with another performance series soon, probably before the end of the year, or the start of next year at the latest. I’ll cover the third step in the critical path: server resources. I plan on talking about servers, both hardware and software, to help you understand differences in hosts, hosting packages, and server software. I’ll also talk about specific performance issues with dynamic sites, and once again I’ll talk about caching content from databases.
Download a free sample from my book, Design Fundamentals.