The site I’m working on now deploys as static files. I haven’t put up a site without server-side code since high school, so I’m exploring my options. I thought I could just throw the whole thing up on Amazon S3, but was surprised to find it slower than my current setup (nginx on Linode). I’ve been reading about the impact of load speed on conversion rates, Google ranking, etc. (for example), so speed is a big priority for me. Here’s how I cut my site’s page load time from around a second to around 500ms.
To start, I threw the site up on my Linode, using nginx. The server runs a bunch of other sites in a variety of languages, all with pretty low traffic. Keep-alive is enabled, but not much else has been tweaked.
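For reference, keep-alive in nginx is just a couple of directives. A minimal sketch of the relevant bit of config (the timeout value, domain, and paths here are illustrative, not my actual setup):

```nginx
http {
    # Let browsers reuse connections instead of re-handshaking per asset
    keepalive_timeout  65;
    keepalive_requests 100;

    server {
        listen 80;
        server_name example.com;
        root /var/www/site;
        index index.html;  # serve index.html when a directory is requested
    }
}
```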
I looked at hosting on S3, CloudFront (Amazon’s CDN), and Rackspace’s CDN. Using apachebench (run from Slicehost), I compared all three options, with my nginx setup as the baseline. S3 was the slowest (slower even than nginx on my little VPS). CloudFront was comparable to my nginx, and Rackspace would spike to about double the requests per second of my Linode, but would also dip below my nginx. Rackspace also has two big drawbacks. It doesn’t serve index.html when a directory is requested, so I would still have to run nginx for the HTML files (Amazon, by contrast, supports this). Rackspace also doesn’t support directories; there are workarounds, but it’s a pain that I can’t just mirror my directory structure on their cloud. Overall, the cloud hosting options were a letdown, and I decided to keep hosting the site myself, on Linode/nginx.
I used Chrome’s developer tools (with browser caching disabled) to get a sense of page load times. I would refresh the page several times and note the spread. Not entirely scientific, but good enough for a rough idea of the relative improvements I was making.
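The refresh-and-eyeball approach can also be scripted. A rough sketch of the kind of timing loop I mean, in Python (the fetch callable is a stand-in for an HTTP GET of the page under test; in practice I was just watching Chrome’s network panel):

```python
import time
import statistics

def time_requests(fetch, runs=10):
    """Call fetch() several times and report the spread of wall-clock times."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()  # e.g. an HTTP GET of the page under test
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }

# Example with a dummy fetch that just sleeps ~10ms:
spread = time_requests(lambda: time.sleep(0.01), runs=5)
```

Looking at the spread rather than a single number matters here, since individual loads vary a lot.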
I had several external resources being referenced: images from my Etsy store, a public CDN for jQuery, Google Fonts, etc. I pulled all of those into my own repo rather than rely on whatever bandwidth those providers want to give out for free. I didn’t see any speed change from this, probably because the files in question were pretty small.
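Pulling third-party references in-house is mostly a matter of downloading the files and rewriting the URLs in the HTML. A sketch of the rewriting half in Python (the CDN prefixes and local paths here are hypothetical examples, not my actual mappings):

```python
# Map external URL prefixes to a local assets directory (hypothetical values)
LOCAL_PREFIXES = {
    "https://ajax.googleapis.com/ajax/libs/jquery/": "/assets/js/",
    "https://fonts.googleapis.com/": "/assets/fonts/",
}

def localize_urls(html):
    """Rewrite known external asset URLs to point at locally hosted copies."""
    for external, local in LOCAL_PREFIXES.items():
        html = html.replace(external, local)
    return html

page = '<script src="https://ajax.googleapis.com/ajax/libs/jquery/jquery.min.js"></script>'
localized = localize_urls(page)
```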
Resize Images
The same images appear in a number of places on my site, and I was using CSS to scale them down. I switched to having the script generate a separate version of each image, at every size I needed. This drastically reduced the file sizes, and dropped the page load time down another 100ms, to around 600ms.
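The script-generated variants amount to running an image resizer once per needed size. A sketch of that idea in Python, emitting ImageMagick `convert` commands rather than shelling out (the widths and filename here are hypothetical):

```python
from pathlib import Path

# Widths actually used in the layout (hypothetical values)
SIZES = [80, 160, 320]

def resize_commands(src):
    """Build one ImageMagick convert command per target width for src."""
    src = Path(src)
    cmds = []
    for width in SIZES:
        dest = src.with_name(f"{src.stem}-{width}{src.suffix}")
        cmds.append(["convert", str(src), "-resize", str(width), str(dest)])
    return cmds

cmds = resize_commands("img/necklace.jpg")
```

Each command could then be run with `subprocess.run`, or the list dumped into a Makefile so only changed images get regenerated.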
Asset Server Pool
The last thing I added was an asset server pool. Browsers limit the number of simultaneous connections they’ll make to a given hostname. I created a CNAME alias for the site’s domain (the same nginx serves both hostnames identically) and pointed half the assets (images, JS, CSS, etc.) at the alias, so the browser opens connections to both. This bought me another 100ms, leaving me at around 500ms.
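One detail worth getting right: the split needs to be deterministic. If the same image maps to a different hostname on different pages, the browser re-downloads it and the cache splits. A sketch of a stable assignment in Python (the domain names are placeholders):

```python
import zlib

# The main site and its CNAME alias (placeholder domains)
ASSET_HOSTS = ["www.example.com", "assets.example.com"]

def asset_host(path):
    """Pick a host for an asset path; always the same host for the same path."""
    return ASSET_HOSTS[zlib.crc32(path.encode()) % len(ASSET_HOSTS)]

host_a = asset_host("/img/logo.png")
host_b = asset_host("/img/logo.png")
```

Hashing the path (rather than alternating as assets are emitted) keeps the assignment stable across builds and across pages.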
The page load times I measured were definitely anecdotal, heavily influenced by my internet connection, laptop resources, etc. They’re meant to show the relative improvement, not to be a reliable indication of the speeds users will see.
All the code and project setup described here is on GitHub.