One of the recurring themes in this series and many articles about performance is caching. Serving cached content often leads to significant performance improvements as it saves round trips and thus time over a network.
Today I want to continue talking about DNS, specifically DNS caching and how you can encourage browsers to access cached records instead of having to make requests of domain name servers over the network.
Last week I mentioned a tool from KeyCDN that shows DNS lookup times from 14 different physical locations around the world. The physical location of the requesting device affects DNS lookup time because of network latency: the time it takes a packet of data to get from one point on the network to another.
It takes time to send packets around the web and much like in the physical world, it takes longer to get to things that are farther away.
Someone in New York could make a request for a web page on a server located halfway around the world in Singapore. The packets might need to make several hops, any of which could wind up at a dead end requiring a new request to start the process again. Another person in Singapore could get the same packets with many fewer hops and many fewer potential points of failure.
People can configure their browsers to use DNS servers closer to them physically, but there’s only so much you and I can do about this. You could opt for premium DNS so your records can be served from the DNS server nearest the person, but my guess is you’d rather not spend extra money for what’s likely a very small performance gain.
A better option is to make use of DNS cache.
Caching DNS Records
The best way to improve DNS lookup performance is to cache your DNS records. In my previous performance series I briefly talked about DNS and mentioned how DNS records might be cached at several points to avoid having to contact a domain name server.
Since local cache can be checked in about 10–15% of the time it takes to perform a full DNS lookup, you’d prefer requests use local cache.
When your browser makes a request for a web page, there are several potential caches to check before doing the lookup and they’re checked in this order.
- Browser cache
- Operating system cache
- Router cache
- ISP DNS cache
You might look at that list and wonder what you can do, since you don’t have access to the browsers, operating systems, routers, or ISPs of the people who visit your site. You don’t have direct access, but you can suggest to them how long they should cache your DNS records.
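The check-the-cache-first flow can be sketched as a tiny simulation. This is a hypothetical illustration, not a real resolver: `DnsCache`, `resolve`, and `fake_lookup` are made-up names, and the IP and TTL are placeholders.

```python
import time

class DnsCache:
    """Toy DNS cache: stores (ip, expiry) pairs keyed by hostname."""
    def __init__(self):
        self._records = {}

    def get(self, host):
        entry = self._records.get(host)
        if entry is None:
            return None
        ip, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL expired: treat as a miss
            del self._records[host]
            return None
        return ip

    def put(self, host, ip, ttl_seconds):
        self._records[host] = (ip, time.monotonic() + ttl_seconds)

def resolve(host, cache, lookup):
    """Check the cache first; fall back to the (slow) network lookup."""
    ip = cache.get(host)
    if ip is not None:
        return ip, "cache"
    ip, ttl = lookup(host)  # stand-in for a real query to a name server
    cache.put(host, ip, ttl)
    return ip, "network"

def fake_lookup(host):
    # Hypothetical answer: a placeholder IP with a 12-hour TTL.
    return "203.0.113.10", 43200

cache = DnsCache()
print(resolve("domain.com", cache, fake_lookup))  # first request hits the network
print(resolve("domain.com", cache, fake_lookup))  # repeat is served from cache
```

The same record is only fetched over the network once; until the TTL expires, every later request is answered from the cache.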
The downside to having the records cached is that if you move your site from one server to another, you have to wait for all the caches to update their information before the internet understands that domain.com no longer points to IP1 and now points to IP2. This is why your registrar tells you nameserver changes could take up to 48 hours to process, even though they almost always happen much more quickly.
Time to Live (TTL)
Time to Live (TTL) provides instructions for how long your DNS records should remain cached. All four of the caches mentioned above will generally save your DNS records for however long you tell them to.
TTL is set in seconds so:
- 300s = 5 minutes
- 3600s = 1 hour
- 86400s = 24 hours or 1 day
- 604800s = 7 days
I’ll let you do the math for longer timeframes.
The TTL set for this site is 43200s or 12 hours, which is a standard setting. You can set it for longer if you want, but keep in mind that if you move to a new web host, your DNS records will remain cached for however long you set the TTL.
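If you edit raw DNS records, the TTL is the number that precedes the record class and type. A hypothetical sketch of BIND-style zone file entries (the domain and IP address are placeholders):

```
; hypothetical zone entries; 43200 seconds = 12 hours
domain.com.      43200  IN  A      203.0.113.10
www.domain.com.  43200  IN  CNAME  domain.com.
```

Many hosting control panels expose the same TTL field in a form instead of a zone file, but the value means the same thing.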
One way around this is to lower your TTL before moving to a new host. If you set the time at 604800s, then a week before moving to a new host you’d want to change the TTL to a smaller value.
Changing TTL settings could differ from host to host, but depending on your type of account you likely have some way to edit DNS records through your hosting control panel, including setting the TTL.
You may not have access if you’re using shared hosting, in which case you may need to contact your host to have your TTL changed.
DNS Prefetching

Your site probably loads external resources such as web fonts, analytics scripts, and social media widgets, each served from a different domain that requires its own DNS lookup. These external resources are all called in the background after the initial page begins to render. Because of this you can prefetch the DNS lookup so it will ideally have completed before the resource needs to load.
```html
<head>
  <link rel="dns-prefetch" href="//www.domain.com">
</head>
```
As you can see it’s pretty simple. You place a link element inside the head of your HTML, point to the domain in question, and include the rel attribute with a value of dns-prefetch. You’d likely do this for all your external calls at the same time.
```html
<head>
  <link rel="dns-prefetch" href="//fonts.googleapis.com">
  <link rel="dns-prefetch" href="//www.google-analytics.com">
  <link rel="dns-prefetch" href="https://twitter.com">
  <link rel="dns-prefetch" href="https://facebook.com">
</head>
```
Support for dns-prefetch is close to 70% as I write this. If you look closer at which browsers do and don’t support it, you’ll notice that modern desktop browsers all support dns-prefetch, while mobile browser support is mostly unknown.
There are other resource hints such as preconnect, which resolves the DNS and also opens a TCP connection (and negotiates TLS for secure origins). It has less browser support, so for the moment you might want to limit yourself to dns-prefetch.
```html
<link rel="preconnect" href="http://domain.com">
```
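One caveat worth knowing: when the origin serves resources fetched with CORS, such as web-font files, the preconnect hint needs a crossorigin attribute or the warmed-up connection can’t be reused for those requests. A hypothetical example for a font host:

```html
<head>
  <!-- font files are fetched with CORS, so crossorigin is needed
       for the pre-opened connection to be reusable -->
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
</head>
```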
These rel values are called resource hints, and the W3C Resource Hints specification covers them in detail. There are a few more of them beyond dns-prefetch and preconnect.
As I said at the beginning of last week’s post, DNS is probably not a performance bottleneck for your site. It’s definitely worth testing just to make sure, but chances are it’s not causing any issue.
That said, changing your TTL setting if it’s set too low or adding DNS prefetching for any external resources are simple tweaks with limited downside. Every millisecond helps.
I also think it’s good to know more about DNS and to understand what’s going on when someone requests a page on your site, the same way it makes sense for a graphic designer working in print to learn about paper quality and ink.
Next up I want to talk about the second step in the critical path. Over the next few weeks I want to look at HTTP. I’ll talk about performance issues involved with HTTP/1.1 and the strategies we’ve used to optimize for them. Then I’ll talk about HTTP/2 and how it addresses some of these performance issues. I’ll briefly talk about HTTPS and how it affects performance.
Download a free sample from my book, Design Fundamentals.