Deploying Squid, Part 2 of 2
Creating a Peer Cache
For small operations in a single location, one cache server may be sufficient. For more complicated scenarios, Squid and many other cache servers allow communication between caches. With this capability you can deploy a mesh of cache servers, where parent and sibling caches share their content with one another using the Internet Cache Protocol (ICP). This can be useful for load balancing and redundancy.
It can also be used to set up a distributed cache infrastructure, where remote offices with slow network connections need their own local cache. Traffic on the slow network connections can be reduced by creating a parent cache at the Internet connection and child caches at each remote office. You can also join an existing mesh of caches on the Internet if appropriate (see section four of the Squid FAQ).
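As an illustration of that distributed setup, a remote-office Squid could point at the head-office cache as a parent with a squid.conf entry like the following (proxy-hq.my.domain is a hypothetical name for the head-office server; 3128 and 3130 are Squid's default HTTP and ICP ports):

```
# On each remote-office cache: forward misses to the head office
cache_peer proxy-hq.my.domain parent 3128 3130
```

With this in place, remote-office misses are fetched through the parent, so popular objects cross the slow link only once.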
For our example configuration, we'll set up a single sibling server, proxy2, in addition to the already-configured proxy1. We'll assume that the Domain Name System (DNS) configuration will handle resolution of a single cache server name to the two physical servers. The hardware for proxy2 should be at least as capable as that of proxy1, but there's no requirement that the two machines be identical.

Once Squid is installed and running on proxy2, the two servers can be made siblings by making the following Squid configuration changes (with the appropriate domain names):
On proxy1, add to squid.conf:

    icp_access allow all
    cache_peer proxy2.my.domain sibling 3128 3130

On proxy2, add to squid.conf:

    icp_access allow all
    cache_peer proxy1.my.domain sibling 3128 3130
After a restart of both Squid processes, the servers should begin checking each other's caches before going to origin servers on the Internet. You should see new output in the access log relating to the sibling server.
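One way to apply and verify the change (paths assume the /usr/local/squid layout used earlier; adjust for your installation) is to have each Squid reread its configuration and then look for sibling activity in the access log. SIBLING_HIT is the hierarchy code Squid records when an object is served from a sibling:

    # /usr/local/squid/bin/squid -k reconfigure
    # grep SIBLING_HIT /usr/local/squid/logs/access.log

You should also see UDP_HIT and UDP_MISS entries on each server as its sibling sends ICP queries.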
Rotating the Logs

On an active proxy server, access logs can grow extremely large. If allowed to grow unchecked, they become difficult to work with; worse, they can quickly fill the partition holding the log directory. Implementing a scheme to rotate logs frequently will prevent this scenario.
Squid is capable of doing its own log rotation. Though you could use other facilities to handle it, a single signal to the running Squid process will do the rotation neatly and cleanly. To enable it, first choose the number of old logs you wish to keep and enter it in the logfile_rotate directive in squid.conf.
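For example, to keep ten generations of each log, the squid.conf setting would be:

```
# keep ten rotated generations of each log
logfile_rotate 10
```
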
After restarting Squid, you can initiate the rotation with this command:
# /usr/local/squid/bin/squid -k rotate
By putting this command into the daily cron configuration (or in root's crontab), you can fully automate the rotation process.
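One way to schedule this (the time of day is just an example) is a crontab entry for root:

```
# rotate Squid's logs every day at 04:00
0 4 * * * /usr/local/squid/bin/squid -k rotate
```

Rotating during a quiet period keeps the brief log switchover away from peak traffic.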
As the logs are rotated, they are given numeric extensions. The log currently in service is always access.log. Yesterday's file is access.log.0, the file from three days ago would be access.log.2, and so on, up to the maximum specified in squid.conf. Squid's own server-information logs are rotated in the same way. Once the logs reach the maximum count set in squid.conf, the oldest files are deleted by the rotation. This should help keep the log partition from getting too full.
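The numbering scheme can be illustrated with a short shell sketch. To be clear, this script is not part of Squid; it merely emulates the renaming that Squid performs internally when it receives the rotate signal, using a throwaway temporary directory:

```shell
#!/bin/sh
# Illustration only: emulate Squid's numeric log-rotation naming.
LOGDIR=$(mktemp -d)
MAX=2   # plays the role of "logfile_rotate 2" in squid.conf

rotate() {
    rm -f "$LOGDIR/access.log.$MAX"      # the oldest generation is dropped
    i=$((MAX - 1))
    while [ "$i" -ge 0 ]; do             # shift each numbered log up by one
        if [ -f "$LOGDIR/access.log.$i" ]; then
            mv "$LOGDIR/access.log.$i" "$LOGDIR/access.log.$((i + 1))"
        fi
        i=$((i - 1))
    done
    mv "$LOGDIR/access.log" "$LOGDIR/access.log.0"   # current log becomes .0
    : > "$LOGDIR/access.log"             # a fresh, empty log is started
}

echo "day1 traffic" > "$LOGDIR/access.log"
rotate
echo "day2 traffic" > "$LOGDIR/access.log"
rotate
cat "$LOGDIR/access.log.0"   # the most recently rotated log (day2)
cat "$LOGDIR/access.log.1"   # the one before it (day1)
```

After the second rotation, yesterday's entries sit in access.log.0 and the day before's in access.log.1, exactly the ordering described above.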
The procedures presented in these two articles should be enough to get Squid running on your network. Next, you may want to implement some monitoring and tune Squid to your particular needs. A good place to start is the Squid User's Guide. This document is a little outdated, but it provides a solid foundation for understanding Squid and caching in general. The Squid FAQ is also a must-read document. The Squid Mail Archive may also be of interest.
If you're interested in seeing side-by-side comparisons of Squid with other cache products, the folks who maintain the Web Polygraph proxy performance benchmark have just completed their latest "bake-off" of cache servers and posted these results.
You may also enjoy reading this detailed review of Squid and its deployment.
I hope that this introductory tutorial has been interesting and useful to you. Of course, Squid has far more capability than has been explored here, and you are encouraged to review the resources linked above for further information. If you choose to implement Squid for your enterprise you should find it to be robust and easy to manage. Good luck.
Jeff Dean is an engineering and IT professional currently writing a Linux certification handbook for O'Reilly and Associates.
System Administrator Michael Alan Dorman responded to Jeff Dean's Squid articles in our Linux forum. Dorman told a cautionary tale about how he got burned when he set up an open Squid cache on an unsecured university system. Dean replied in the forum and has updated his first article with security information.
We are interested in hearing your stories and questions about Squid caches. Share them with us in the O'Reilly Network Linux forum.