While PHP accelerators deliver better server-side performance and scalability, we can also speed up the end-user experience in other ways. HTTP compression is a common technique for improving performance across the network between the server and the client. Sugar Labs measured the client-side performance of the SugarCRM application both with and without HTTP compression, using Yahoo's slick YSlow plug-in for Firebug on the Firefox browser. We sampled the SugarCRM application under a couple of different scenarios.
What are the benefits of HTTP Compression?
Depending on the latency between the server and browser, HTTP compression can have a dramatic effect on the performance of round trips. The greater the latency, the greater the impact. That means implementing HTTP compression across a LAN connection normally doesn't help and can often hurt performance, because the time spent compressing and decompressing on a LAN outweighs the savings from sending fewer network packets. HTTP compression works best when there are many network hops between the server and the browser, such as when your users connect to their Sugar application over a broadband connection.
A common compression utility used by Web servers and browsers is Gzip. All browsers that Sugar supports can accept data that has been compressed with Gzip. The Sugar Wiki details how to configure Gzip compression in Apache for the best results with SugarCRM.
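As a rough illustration of what such a configuration can look like, here is a sketch for Apache 2.x with mod_deflate enabled. The MIME types and directives shown are illustrative assumptions, not the Sugar Wiki's exact recommendations:

```apache
# Illustrative Apache 2.x sketch: compress common text-based responses.
# Consult the Sugar Wiki for the settings recommended for SugarCRM.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css
    AddOutputFilterByType DEFLATE application/javascript application/json
    # Skip very old browsers with broken gzip support
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
</IfModule>
```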
Here is a brief sampling of the data collected:
After the initial page hit for each action, the average page size was 101 KB per page without compression. With compression enabled, that number drops to 14 KB per page, but the browser still makes 58 HTTP requests to the server. Even though the files are cached by the browser, the browser still asks the server whether each file has changed, which is why the number of requests per page stays the same. The server only sends back data that has changed, but the overhead of making all those HTTP requests remains.
If you enable cache headers in your web server, the number of HTTP requests drops significantly: on average, the browser makes only 1 request per page view with cache headers enabled. You can set the cache lifetime to whatever you like; I would recommend about a week. Note that if your users are connecting over HTTPS, the browser will only cache an object for a given session, and by default the browser cache is cleared as soon as the user logs out. Firefox does allow you to change this behavior for HTTPS connections by setting browser.cache.disk_cache_ssl to true (enter "about:config" in the Firefox address bar and navigate to that setting).
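One common way to emit such cache headers in Apache is mod_expires. This is a sketch under the one-week recommendation above; the MIME types listed are illustrative assumptions:

```apache
# Illustrative Apache 2.x sketch: cache static assets for one week.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png              "access plus 1 week"
    ExpiresByType image/jpeg             "access plus 1 week"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```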
If you want to calculate the overall amount of data being transferred from the server to the client, the formula would be as follows:
For a single user who views z pages, of which k are distinct actions:

Without Compression: Total Data Transferred = 925 * k + (z − k) * 101

With Compression: Total Data Transferred = 270 * k + (z − k) * 14
So for one user viewing 40 pages, of which 5 are distinct actions:

No Compression: 925 * 5 + (40 − 5) * 101 = 8,160 KB per user

With Compression: 270 * 5 + (40 − 5) * 14 = 1,840 KB per user
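The per-user arithmetic can be sketched as a quick check (the function name and parameter names are mine, not from Sugar):

```python
def total_data_kb(pages, distinct, first_hit_kb, repeat_kb):
    """Total KB transferred by one user: each distinct action pays the
    full first-hit cost; every remaining page view pays the average
    repeat-view size."""
    return first_hit_kb * distinct + (pages - distinct) * repeat_kb

# 40 page views, 5 of them distinct actions
print(total_data_kb(40, 5, 925, 101))  # without compression -> 8160
print(total_data_kb(40, 5, 270, 14))   # with compression    -> 1840
```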
Now, assuming that each user views a new page every 30 seconds (i.e., a 30-second "think" time), we can calculate the bandwidth needed:
40 page views * 30 seconds/page view = 1,200 seconds

bandwidth = Total Data Transferred / seconds

8,160 KB / 1,200 s = 6.80 KB/s per user without compression

1,840 KB / 1,200 s = 1.53 KB/s per user with compression
To get the bandwidth for n users, we simply multiply by n. So for 100 users:

6.80 KB/s * 100 = 680 KB/s without compression

1.53 KB/s * 100 = 153 KB/s with compression
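The bandwidth figures follow the same pattern and can be sketched as (again, the function name is mine):

```python
def bandwidth_kb_per_s(total_kb, page_views, think_time_s=30):
    """Average per-user bandwidth given a fixed think time per page."""
    return total_kb / (page_views * think_time_s)

per_user_plain = bandwidth_kb_per_s(8160, 40)  # 6.8 KB/s without compression
per_user_gzip = bandwidth_kb_per_s(1840, 40)   # ~1.53 KB/s with compression

# Bandwidth scales linearly with the number of concurrent users.
print(round(per_user_plain * 100, 2), round(per_user_gzip * 100, 2))
```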
Doesn’t the browser automatically cache files without cache headers?
Not entirely. Without cache headers the browser does keep a local copy of each file, but as described above it still sends a conditional request per file to ask the server whether the file has changed. The server replies with the full file only if it has changed and otherwise sends a small "not modified" response, so you still pay the round-trip overhead for every request. Cache headers tell the browser it can skip those requests entirely until the cache period expires.
Does compression work with everything?
For the most part compression works well, but I would recommend not using it for SOAP, since there are several SOAP clients out there that do not handle Gzip-compressed data very well. You may also want to disable it for downloads, since you will often be downloading data that is already compressed and sees no benefit from being Gzipped again. Also note the discussion above on the value of HTTP compression when the server and the browser are on the same LAN.
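One way to carve out such exceptions in Apache is to set the no-gzip environment variable, which mod_deflate honors. The paths and extensions below are illustrative assumptions, not Sugar defaults:

```apache
# Illustrative sketch: skip compression for a SOAP endpoint and for
# file types that are typically already compressed.
SetEnvIfNoCase Request_URI ^/soap\.php$ no-gzip
SetEnvIfNoCase Request_URI \.(?:zip|gz|png|jpe?g)$ no-gzip
```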