A Squid cache can be set to check other Squid servers (its peers) for cached web pages before fetching them directly from the origin server. Peering your Squid caches can provide faster responses and lower costs. Cache peers exchange cached objects, returning them to the users faster and reducing upstream bandwidth costs.
Squid first looks for requested objects in its own cache. If the object is not in the cache, then Squid must retrieve it from another source. To oversimplify the algorithm: Squid creates an ordered list of possible sources.
This list consists of parent(s), sibling(s), and the origin server(s). Squid attempts to fetch the object from each server in order until it is successful, one of the servers responds with a 404 (object not found), or there are no more sources to try.
Access controls and ICP queries modify the list using these rules:
If never_direct settings forbid Squid from contacting the origin server(s) directly, Squid removes the origin server(s) from the list.
If always_direct settings require Squid to use the origin server, Squid removes all peers from the list.
If cache_peer_access prevents a particular peer from being queried for this object, that peer is removed from the list.
Remaining peers are queried. Siblings (but not parents) that answer in the negative are removed.
Any remaining peers in the list are ordered according to speed of response.
Squid maintains a small database of response-times for origin servers. If going direct is an option, the origin server is placed in the list based on average weighted response time.
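These selection rules map onto squid.conf directives. A minimal sketch, with made-up ACL names and hostnames:

```
# Behind a strict firewall: never contact origin servers directly.
acl all src 0.0.0.0/0.0.0.0
never_direct allow all

# Always fetch intranet objects from the origin, bypassing peers.
acl intranet dstdomain .intranet.example.org
always_direct allow intranet

# Never query this particular peer for intranet objects.
cache_peer_access proxy.cache.example.org deny intranet
```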
Peers need to tell each other which objects are in their caches. ICP and HTCP allow instant queries and responses. Digests are snapshots of what is currently in the cache.
Squid usually uses the ICP protocol to communicate between caches, and can use HTCP if both caches are configured for it. ICP and HTCP are similar enough that, unless noted otherwise, the article uses ICP to describe both.
There are lots of ways to utilize multiple cache servers. Tell us what is working for you.
When Squid receives a request for an object that is not in its cache, it sends an ICP query packet to each of its configured peers. If at least one peer says it has the object, Squid requests the object from the fastest of these peers. If all the peers answer "no" or fail to respond before a timeout setting, Squid requests the object from the origin server or fails, depending on whether it is allowed to go direct.
ICP uses connectionless UDP packets. Because UDP packets may be lost, dropped, or damaged without notice, Squid uses a timeout value. Any queries which are not responded to in this time are assumed to be lost, and Squid drops those peers from its list for this request.
Squid uses a dynamic ICP timeout by default, but this can be overridden with icp_query_timeout or capped with maximum_icp_query_timeout if you find that the values Squid calculates are suboptimal for your environment.
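For example (values are in milliseconds; in squid.conf the cap is spelled maximum_icp_query_timeout):

```
# Fix the ICP timeout at 2 seconds instead of the dynamic value.
icp_query_timeout 2000

# Or keep the dynamic timeout, but never wait more than 3 seconds.
maximum_icp_query_timeout 3000
```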
A cache digest is a block of URI keys which represents the set of objects the cache holds. The URIs are run through a deterministic algorithm which compacts the data and makes the keys. Digests are exchanged with digest-capable peers, and are used to determine whether a peer is likely to have a requested object.
Digests are subject to false hits and false misses, depending on the frequency of the exchange, but they reduce the immediate network traffic and are useful if the bandwidth between peers is narrow or unreliable.
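If your Squid was built with --enable-cache-digests, digest behavior can be tuned in squid.conf. A sketch:

```
# Generate a digest of this cache for peers to fetch.
digest_generation on

# Rebuild the digest hourly; longer periods mean staler digests
# and therefore more false hits and false misses.
digest_rebuild_period 1 hour
```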
Use a parent cache if you want to reduce your upstream costs and make page collection faster for your users, especially if the users are in groups which can have child caches.
Set up ACLs (Access Control List) for your downstream users:
acl cache_users src 192.168.0.0/255.255.0.0
Give those users HTTP access to your cache:
http_access allow cache_users
They will also require MISS access; otherwise their requests will be denied whenever the object isn't already in the cache:
miss_access allow cache_users
Give them ICP access as well:
icp_access allow cache_users
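Put together, the downstream-access portion of the parent's squid.conf might look like this (the ACL name and network are examples; recent Squid versions predefine the "all" ACL):

```
acl cache_users src 192.168.0.0/255.255.0.0

http_access allow cache_users
miss_access allow cache_users
icp_access allow cache_users

# Deny everyone else.
http_access deny all
icp_access deny all
```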
Tell your cache-clients the cache's hostname, HTTP and ICP ports, and why they should be using it. A knowledgeable user is more likely to use the cache.
CAVEAT: The more caches you have downstream of you, the lower your hit-rate as a parent. Caches downstream of you will cache what they can, and pass up requests only for content that they do not have, or that is hard to cache. Every hit is still a benefit, even if the rate is low.
If your bandwidth is paid by the byte, you'll find that even a low hit-rate will cover the hardware and operating expenses.
Some ISPs give a cost reduction if you use their parent. Parent caches may contain the page you're about to want, providing faster service.
You will need the no_query configuration option if your parent doesn't support ICP.
For each parent you have, add a line like this to your squid.conf:
cache_peer hostname parent HTTP_port ICP_port [OPTIONS]
For example:
cache_peer proxy.cache.example.org parent 3128 3130 no_query no-digest
If you have only one parent, use the no_query and no-digest configuration options: since you're always going to request the object from that parent, there's no need to check whether it has the object.
If you have multiple parents, ICP queries can improve performance. An example with two parents:
One parent says yes, the other says no. We use the one that said yes.
Both parents said yes. They both have it, so use the one that answered first. It's likely to be fastest.
Both parents said no. Again, we use whichever answered first.
One parent failed to respond. We go with the other one. The parent that didn't answer may have failed or be temporarily unreachable.
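A two-parent configuration that relies on ICP to choose between them might look like this (hostnames are examples):

```
# Query both parents over ICP (port 3130) before picking a source.
cache_peer parent1.example.org parent 3128 3130
cache_peer parent2.example.org parent 3128 3130

# The weight= option marks a parent as preferred when more than
# one reports a hit, e.g.:
#   cache_peer parent1.example.org parent 3128 3130 weight=2
```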
CAUTION: When a peer fails to respond to ICP queries within dead_peer_timeout seconds, Squid assumes it is unavailable or unreachable until it sees another ICP response from that peer. While a peer is in this "presumed dead" state, Squid will still send it ICP queries, but won't wait for it to answer; Squid will base its decisions on the responses of the "live" peers.
If all your parents are "dead" according to this test and Squid is not configured to go direct, Squid will not be able to return objects that are not in its cache.
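The timeout that governs this is configurable:

```
# Presume a peer dead if it hasn't answered an ICP query within
# 10 seconds (10 seconds is also the Squid default).
dead_peer_timeout 10 seconds
```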
Sibling caches work well where you have groups of users. Each group's local proxy shares the cache with other proxies, only "going forward" to a parent or origin server if none of the caches have a fresh copy of the requested object.
Use sibling caches if several caches are behind some sort of bottleneck but have good connections to each other.
Siblings can group together to allow several smaller computers to simulate one expensive computer and serve as a larger proxy.
To allow another cache to use you as a sibling, configure your cache as if it were a parent cache, but instead of giving MISS access, deny it to your siblings.
acl sibling1 src 192.168.44.55/255.255.255.255
miss_access deny sibling1
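Put together, the sibling-facing part of your configuration might look like this (the ACL name and address are examples):

```
acl sibling1 src 192.168.44.55/255.255.255.255

# Siblings may fetch hits from us...
http_access allow sibling1
icp_access allow sibling1

# ...but must not force us to fetch misses on their behalf.
miss_access deny sibling1
```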
To use another cache as a sibling, both caches must support either ICP/HTCP or cache digests. These allow Squid to check for objects in other caches.
The cache_peer entry looks like this:
cache_peer hostname sibling HTTP_port ICP_port [OPTIONS]
cache_peer sibling1.myinternalnet.org sibling 3128 3130 proxy-only
Note the proxy-only option: normally, caching objects fetched from a sibling is a waste of disk space. If the bandwidth to a sibling is narrow, lossy, or expensive, consider leaving the option out and caching objects from that sibling.
FALSE HITS: (ICP only) Because ICP does not communicate request headers (only the URI is presented in an ICP query), it is possible for a peer to return an affirmative for a given URI but not be able to satisfy the request from cache.
cache1 sends an ICP query to cache2 for http://www.example.org/index.html.
cache2 has a cached copy of the object (87,376 seconds old) and answers in the affirmative.
cache1 then issues the HTTP request to cache2, but the request headers contain "Cache-Control: max-age=86400".
cache2's copy is too old to satisfy this request.
Depending on its configuration, cache2 will either go forward to the origin server (or a parent) and fetch a new copy, or return a 504 (Gateway Timeout) response, in which case cache1 will have to select a new source for the object.
HTCP incorporates the request headers into the query packet, and is thus almost immune to false hits -- although they are still theoretically possible under rare circumstances.
Jennifer Vesperman is the author of Essential CVS. She writes for the O'Reilly Network, the Linux Documentation Project, and occasionally Linux.Com.
Copyright © 2009 O'Reilly Media, Inc.