HTTP Proxy Caching¶
HTTP proxy caching enables you to store copies of frequently-accessed web objects (such as documents, images, and articles) and then serve this information to users on demand. It improves performance and frees up Internet bandwidth for other tasks.
Understanding HTTP Web Proxy Caching¶
Internet users direct their requests to web servers all over the Internet. A caching server must act as a web proxy server so it can serve those requests. After a web proxy server receives requests for web objects, it either serves the requests or forwards them to the origin server (the web server that contains the original copy of the requested information). The Traffic Server proxy supports explicit proxy caching, in which the user's client software must be configured to send requests directly to the Traffic Server proxy. The following overview illustrates how Traffic Server serves a request.
- Traffic Server receives a client request for a web object.
- Using the object address, Traffic Server tries to locate the requested object in its object database (cache).
- If the object is in the cache, then Traffic Server checks to see if the object is fresh enough to serve. If it is fresh, then Traffic Server serves it to the client as a cache hit (see the figure below).
A cache hit
- If the data in the cache is stale, then Traffic Server connects to the origin server and checks if the object is still fresh (a revalidation). If it is, then Traffic Server immediately sends the cached copy to the client.
- If the object is not in the cache (a cache miss), or if the server indicates the cached copy is no longer valid, then Traffic Server obtains the object from the origin server. The object is then simultaneously streamed to the client and to the Traffic Server local cache (see the figure below). Subsequent requests for the object can be served faster because the object is retrieved directly from cache.
A cache miss
Caching is typically more complex than the preceding overview suggests. In particular, the overview does not discuss how Traffic Server ensures freshness, serves correct HTTP alternates, and treats requests for objects that cannot or should not be cached. The following sections discuss these issues in greater detail.
Ensuring Cached Object Freshness¶
When Traffic Server receives a request for a web object, it first tries to locate the requested object in its cache. If the object is in cache, then Traffic Server checks to see if the object is fresh enough to serve. For HTTP objects, Traffic Server supports optional author-specified expiration dates. Traffic Server adheres to these expiration dates; otherwise, it picks an expiration date based on how frequently the object is changing and on administrator-chosen freshness guidelines. Objects can also be revalidated by checking with the origin server to see if an object is still fresh.
HTTP Object Freshness¶
Traffic Server determines whether an HTTP object in the cache is fresh by checking the following conditions in order:
- Checking the Expires or max-age header
Some HTTP objects contain Expires headers or max-age headers that explicitly define how long the object can be cached. Traffic Server compares the current time with the expiration time to determine if the object is still fresh.
- Checking the Last-Modified / Date header
If an HTTP object has no Expires header or max-age header, then Traffic Server can calculate a freshness limit using the following formula:
freshness_limit = ( date - last_modified ) * 0.10
where date is the date in the object's server response header and last_modified is the date in the Last-Modified header. If there is no Last-Modified header, then Traffic Server uses the date the object was written to cache. The value 0.10 (10 percent) can be increased or reduced to better suit your needs; refer to Modifying Aging Factor for Freshness Computations, and see the worked example after this list. The computed freshness limit is bound by a minimum and a maximum value; refer to Setting Absolute Freshness Limits for more information.
- Checking the absolute freshness limit
For HTTP objects that do not have Expires headers or do not have both Last-Modified and Date headers, Traffic Server uses a maximum and minimum freshness limit. Refer to Setting Absolute Freshness Limits.
- Checking revalidate rules in cache.config
Revalidate rules apply freshness limits to specific HTTP objects. You can set freshness limits for objects originating from particular domains or IP addresses, objects with URLs that contain specified regular expressions, objects requested by particular clients, and so on. Refer to cache.config.
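As a quick worked example of the formula above: if the Date header in the server response is 12:00 and the Last-Modified header is 02:00, the object last changed ten hours before it was served, so with the default aging factor of 0.10 the cached copy is considered fresh for one hour (subject to the absolute minimum and maximum limits described below):
freshness_limit = (12:00 - 02:00) * 0.10 = 10 hours * 0.10 = 1 hour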
Modifying Aging Factor for Freshness Computations¶
If an object does not contain any expiration information, then Traffic Server can estimate its freshness from the Last-Modified and Date headers. By default, Traffic Server stores an object for 10% of the time that elapsed since it last changed. You can increase or reduce the percentage according to your needs.
To modify the aging factor for freshness computations:
- Change the value for proxy.config.http.cache.heuristic_lm_factor.
- Run the traffic_line -x command to apply the configuration changes.
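For example, to let Traffic Server consider an object fresh for 30 percent of the time since it last changed, rather than the default 10 percent, the factor might be set along these lines in records.config (an illustrative value, not a recommendation):
CONFIG proxy.config.http.cache.heuristic_lm_factor FLOAT 0.30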
Setting Absolute Freshness Limits¶
Some objects do not have Expires headers or do not have both Last-Modified and Date headers. To control how long these objects are considered fresh in the cache, specify an absolute freshness limit.
To specify an absolute freshness limit:
- Edit the variables proxy.config.http.cache.heuristic_min_lifetime and proxy.config.http.cache.heuristic_max_lifetime in records.config.
- Run the traffic_line -x command to apply the configuration changes.
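For example, to keep the heuristic freshness limit between one hour and one day, the limits might be set as follows in records.config (values are in seconds and are illustrative only):
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 3600
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 86400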
Specifying Header Requirements¶
To further ensure freshness of the objects in the cache, configure Traffic Server to cache only objects with specific headers. By default, Traffic Server caches all objects (including objects with no headers); you should change the default setting only for specialized proxy situations. If you configure Traffic Server to cache only HTTP objects with Expires or max-age headers, then the cache hit rate will be noticeably reduced (since very few objects will have explicit expiration information).
To configure Traffic Server to cache objects with specific headers:
- Change the value for proxy.config.http.cache.required_headers in records.config.
- Run the traffic_line -x command to apply the configuration changes.
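For example, to cache only objects that carry an explicit lifetime header (Expires or Cache-Control: max-age), a setting along the following lines might be used; the exact mapping of numeric values to required headers should be confirmed against the records.config documentation:
CONFIG proxy.config.http.cache.required_headers INT 2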
Cache-Control Headers¶
Even though an object might be fresh in the cache, clients or servers often impose their own constraints that preclude retrieval of the object from the cache. For example, a client might request that an object not be retrieved from a cache at all, or, if it does allow cache retrieval, that the object cannot have been cached for more than 10 minutes.
Traffic Server bases the servability of a cached object on Cache-Control headers that appear in both client requests and server responses. The following Cache-Control headers affect whether objects are served from cache:
- The no-cache header, sent by clients, tells Traffic Server that it should not serve any objects directly from the cache. When present in a client request, Traffic Server will always obtain the object from the origin server. You can configure Traffic Server to ignore client no-cache headers. Refer to Configuring Traffic Server to Ignore Client no-cache Headers for more information.
- The max-age header, sent by servers, is compared to the object age. If the age is less than max-age, then the object is fresh and can be served from the Traffic Server cache.
- The min-fresh header, sent by clients, is an acceptable freshness tolerance. This means that the client wants the object to be at least this fresh. Unless a cached object remains fresh at least this long in the future, it is revalidated.
- The max-stale header, sent by clients, permits Traffic Server to serve stale objects provided they are not too old. Some browsers might be willing to take slightly stale objects in exchange for improved performance, especially during periods of poor Internet availability.
Traffic Server applies Cache-Control servability criteria after HTTP freshness criteria. For example, an object might be considered fresh but will not be served if its age is greater than its max-age.
Revalidating HTTP Objects¶
When a client requests an HTTP object that is stale in the cache, Traffic Server revalidates the object. A revalidation is a query to the origin server to check if the object is unchanged. The result of a revalidation is one of the following:
- If the object is still fresh, then Traffic Server resets its freshness limit and serves the object.
- If a new copy of the object is available, then Traffic Server caches the new object (thereby replacing the stale copy) and simultaneously serves the object to the client.
- If the object no longer exists on the origin server, then Traffic Server does not serve the cached copy.
- If the origin server does not respond to the revalidation query, then Traffic Server serves the stale object along with a 111 Revalidation Failed warning.
By default, Traffic Server revalidates a requested HTTP object in the cache if it considers the object to be stale. Traffic Server evaluates object freshness as described in HTTP Object Freshness. You can reconfigure how Traffic Server evaluates freshness by selecting one of the following options:
- Traffic Server considers all HTTP objects in the cache to be stale: always revalidate HTTP objects in the cache with the origin server.
- Traffic Server considers all HTTP objects in the cache to be fresh: never revalidate HTTP objects in the cache with the origin server.
- Traffic Server considers all HTTP objects without Expires or Cache-Control headers to be stale: revalidate all HTTP objects without Expires or Cache-Control headers.
To configure how Traffic Server revalidates objects in the cache, you can set specific revalidation rules in cache.config.
To configure revalidation options:
- Edit the variable proxy.config.http.cache.when_to_revalidate in records.config.
- Run the traffic_line -x command to apply the configuration changes.
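For example, to make Traffic Server treat every cached HTTP object as stale and always revalidate with the origin server, the variable might be set as shown below; the mapping of numeric values to the behaviors listed above is an assumption that should be verified against the records.config documentation:
CONFIG proxy.config.http.cache.when_to_revalidate INT 2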
Scheduling Updates to Local Cache Content¶
To further increase performance and to ensure that HTTP objects are fresh in the cache, you can use the Scheduled Update option. This configures Traffic Server to load specific objects into the cache at scheduled times, regardless of whether there is an active client request for those objects at the scheduled time. You might find this especially beneficial in a reverse proxy setup, where you can preload content you anticipate will be in demand.
To use the scheduled update option, you must:
- Specify the list of URLs that contain the objects you want to schedule for update.
- Specify the time the update should take place.
- Specify the recursion depth for the URL.
- Enable the scheduled update option and configure optional retry settings.
Traffic Server uses the information you provide to determine URLs for which it is responsible. For each URL, Traffic Server derives all recursive URLs (if applicable) and then generates a unique URL list. Using this list, Traffic Server initiates an HTTP GET for each unaccessed URL. It ensures that it remains within the user-defined limits for HTTP concurrency at any given time. The system logs the completion of all HTTP GET operations so you can monitor the performance of this feature.
Traffic Server also provides a Force Immediate Update option that enables you to update URLs immediately without waiting for the specified update time to occur. You can use this option to test your scheduled update configuration. Refer to Forcing Immediate Updates.
Configuring the Scheduled Update Option¶
To configure the scheduled update option:
- Edit update.config to enter a line in the file for each URL you want to update.
- Adjust the scheduled update variables in records.config (see the sketch after these steps).
- Run the traffic_line -x command to apply the configuration changes.
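A minimal sketch of the pieces involved, assuming the stock scheduled update variables and an update.config entry made up of URL, request headers, offset hour, interval (in seconds), and recursion depth fields; verify the exact line syntax and variable defaults against the update.config and records.config documentation:
http://www.company.com/products/index.html\\13\3600\2\
CONFIG proxy.config.update.enabled INT 1
CONFIG proxy.config.update.retry_count INT 10
CONFIG proxy.config.update.retry_interval INT 2
CONFIG proxy.config.update.concurrent_updates INT 100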
Forcing Immediate Updates¶
Traffic Server provides a Force Immediate Update option that enables you to immediately verify the URLs listed in update.config. This option disregards the offset hour and interval set in update.config and immediately updates the URLs listed.
To configure the Force Immediate Update option:
- Enable proxy.config.update.enabled in records.config:
CONFIG proxy.config.update.enabled INT 1
- Enable proxy.config.update.force in records.config:
CONFIG proxy.config.update.force INT 1
While enabled, this overrides all normal scheduling intervals.
- Run the command traffic_line -x to apply the configuration changes.
Important
When you enable the Force Immediate Update option, Traffic Server continually updates the URLs specified in update.config until you disable the option. To disable the Force Immediate Update option, set proxy.config.update.force to 0 (zero).
Pushing Content into the Cache¶
Traffic Server supports the HTTP PUSH method of content delivery. Using HTTP PUSH, you can deliver content directly into the cache without client requests.
Configuring Traffic Server for PUSH Requests¶
Before you can deliver content into your cache using HTTP PUSH, you must configure Traffic Server to accept PUSH requests.
- Edit ip_allow.config to allow PUSH from the appropriate addresses (see the sketch after these steps).
- Update proxy.config.http.push_method_enabled in records.config:
CONFIG proxy.config.http.push_method_enabled INT 1
- Run the command traffic_line -x to apply the configuration changes.
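One hypothetical ip_allow.config arrangement, assuming a release whose ip_allow.config rules support a method field, permits PUSH only from the local host while denying it from everywhere else:
src_ip=127.0.0.1 action=ip_allow method=ALL
src_ip=0.0.0.0-255.255.255.255 action=ip_deny method=PUSH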
Understanding HTTP PUSH¶
PUSH uses the HTTP 1.1 message format. The body of a PUSH request contains the response header and response body that you want to place in the cache. The following is an example of a PUSH request:
PUSH http://www.company.com HTTP/1.0
Content-length: 84
HTTP/1.0 200 OK
Content-type: text/html
Content-length: 17
<HTML>
a
</HTML>
Important
Your PUSH headers must include Content-length, the value for which must include both the header and body byte counts. The value is not optional, and an improper (too large or too small) value will result in undesirable behavior.
Tools that will help manage pushing¶
Traffic Server comes with a Perl script for pushing, tspush, which can assist with understanding how to write scripts for pushing content yourself.
Pinning Content in the Cache¶
The Cache Pinning Option configures Traffic Server to keep certain HTTP objects in the cache for a specified time. You can use this option to ensure that the most popular objects are in cache when needed and to prevent Traffic Server from deleting important objects. Traffic Server observes Cache-Control headers and pins an object in the cache only if it is indeed cacheable.
To set cache pinning rules:
- Enable proxy.config.cache.permit.pinning in records.config:
CONFIG proxy.config.cache.permit.pinning INT 1
- Add a rule in cache.config for each URL you want Traffic Server to pin in the cache. For example:
url_regex=^https?://(www.)?apache.org/dev/ pin-in-cache=12h
- Run the command traffic_line -x to apply the configuration changes.
Caching HTTP Objects¶
When Traffic Server receives a request for a web object that is not in the cache, it retrieves the object from the origin server and serves it to the client. At the same time, Traffic Server checks if the object is cacheable before storing it in its cache to serve future requests.
Traffic Server responds to caching directives from clients and origin servers, as well as directives you specify through configuration options and files.
Client Directives¶
By default, Traffic Server does not cache objects with the following request headers:
- Authorization
- Cache-Control: no-store
- Cache-Control: no-cache
To configure Traffic Server to ignore this request header, refer to Configuring Traffic Server to Ignore Client no-cache Headers.
- Cookie (for text objects)
By default, Traffic Server caches objects served in response to requests that contain cookies (unless the object is text). You can configure Traffic Server to not cache cookied content of any type, cache all cookied content, or cache cookied content that is of image type only. For more information, refer to Caching Cookied Objects.
Configuring Traffic Server to Ignore Client no-cache Headers¶
By default, Traffic Server strictly observes client Cache-Control: no-cache directives. If a requested object contains a no-cache header, then Traffic Server forwards the request to the origin server even if it has a fresh copy in cache. You can configure Traffic Server to ignore client no-cache directives, in which case it ignores no-cache headers from client requests and serves the object from its cache.
- Edit proxy.config.http.cache.ignore_client_no_cache in records.config:
CONFIG proxy.config.http.cache.ignore_client_no_cache INT 1
- Run the command traffic_line -x to apply the configuration changes.
Origin Server Directives¶
By default, Traffic Server does not cache objects with the following response headers:
- Cache-Control: no-store
- Cache-Control: private
- WWW-Authenticate
To configure Traffic Server to ignore WWW-Authenticate headers, refer to Configuring Traffic Server to Ignore WWW-Authenticate Headers.
- Set-Cookie
- Cache-Control: no-cache
To configure Traffic Server to ignore no-cache headers, refer to Configuring Traffic Server to Ignore Server no-cache Headers.
- Expires header with a value of 0 (zero) or a past date.
Configuring Traffic Server to Ignore Server no-cache Headers¶
By default, Traffic Server strictly observes Cache-Control: no-cache directives. A response from an origin server with a no-cache header is not stored in the cache, and any previous copy of the object in the cache is removed. If you configure Traffic Server to ignore no-cache headers, then Traffic Server also ignores no-store headers. The default behavior of observing no-cache directives is appropriate in most cases.
To configure Traffic Server to ignore server no-cache headers:
- Edit proxy.config.http.cache.ignore_server_no_cache in records.config:
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1
- Run the command traffic_line -x to apply the configuration changes.
Configuring Traffic Server to Ignore WWW-Authenticate Headers¶
By default, Traffic Server does not cache objects that contain WWW-Authenticate response headers. The WWW-Authenticate header contains authentication parameters the client uses when preparing the authentication challenge response to an origin server.
When you configure Traffic Server to ignore origin server WWW-Authenticate headers, all objects with WWW-Authenticate headers are stored in the cache for future requests. However, the default behavior of not caching objects with WWW-Authenticate headers is appropriate in most cases. Only configure Traffic Server to ignore server WWW-Authenticate headers if you are knowledgeable about HTTP 1.1.
To configure Traffic Server to ignore server WWW-Authenticate headers:
- Edit proxy.config.http.cache.ignore_authentication in records.config:
CONFIG proxy.config.http.cache.ignore_authentication INT 1
- Run the command traffic_line -x to apply the configuration changes.
Configuration Directives¶
In addition to client and origin server directives, Traffic Server responds to directives you specify through configuration options and files.
You can configure Traffic Server to do the following:
- Not cache any HTTP objects. Refer to Disabling HTTP Object Caching.
- Cache dynamic content, that is, objects with URLs that end in .asp or contain a question mark (?), a semicolon (;), or cgi. For more information, refer to Caching Dynamic Content.
- Cache objects served in response to the Cookie: header. Refer to Caching Cookied Objects.
- Observe never-cache rules in cache.config.
Disabling HTTP Object Caching¶
By default, Traffic Server caches all HTTP objects except those for which you have set never-cache action rules in cache.config. You can disable HTTP object caching so that all HTTP objects are served directly from the origin server and never cached, as detailed below.
To disable HTTP object caching manually:
- Set proxy.config.http.cache.http to 0 in records.config:
CONFIG proxy.config.http.cache.http INT 0
- Run the command traffic_line -x to apply the configuration changes.
Caching Dynamic Content¶
A URL is considered dynamic if it ends in .asp or contains a question mark (?), a semicolon (;), or cgi. By default, Traffic Server caches dynamic content. You can configure the system to ignore dynamic-looking content, although this is recommended only if the content is truly dynamic but fails to advertise so with appropriate Cache-Control headers.
To configure Traffic Server’s cache behaviour in regard to dynamic content:
- Edit proxy.config.http.cache.cache_urls_that_look_dynamic in records.config. To disable caching, set the variable to 0; to explicitly permit caching, use 1.
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
- Run the command traffic_line -x to apply the configuration changes.
Caching Cookied Objects¶
By default, Traffic Server caches objects served in response to requests that contain cookies. This is true for all types of objects except text. Traffic Server does not cache cookied text content because object headers are stored along with the object, and personalized cookie header values could be saved with the object. With non-text objects, it is unlikely that personalized headers are delivered or used.
You can reconfigure Traffic Server to:
- Not cache cookied content of any type.
- Cache cookied content that is of image type only.
- Cache all cookied content regardless of type.
To configure how Traffic Server caches cookied content:
- Edit proxy.config.http.cache.cache_responses_to_cookies in records.config.
- Run the command traffic_line -x to apply the configuration changes.
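For example, to cache cookied content only when it is an image, a setting along the following lines might be used; the mapping of numeric values to the behaviors listed above is an assumption to confirm against the records.config documentation:
CONFIG proxy.config.http.cache.cache_responses_to_cookies INT 2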
Forcing Object Caching¶
You can force Traffic Server to cache specific URLs (including dynamic URLs) for a specified duration, regardless of Cache-Control response headers.
To force document caching:
- Add a rule in cache.config for each URL you want Traffic Server to force-cache. For example:
url_regex=^https?://(www.)?apache.org/dev/ ttl-in-cache=6h
- Run the command traffic_line -x to apply the configuration changes.
Caching HTTP Alternates¶
Some origin servers answer requests to the same URL with a variety of objects. The content of these objects can vary widely, according to whether a server delivers content for different languages, targets different browsers with different presentation styles, or provides different document formats (HTML, XML). Different versions of the same object are termed alternates and are cached by Traffic Server based on Vary response headers. You can specify additional request and response headers for specific Content-Type values that Traffic Server will identify as alternates for caching. You can also limit the number of alternate versions of an object allowed in the cache.
Configuring How Traffic Server Caches Alternates¶
To configure how Traffic Server caches alternates:
- Edit the alternate-caching variables in records.config (see the sketch after these steps).
- Run the command traffic_line -x to apply the configuration changes.
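The full list of variables is in the records.config documentation; a minimal sketch, assuming the stock vary-related variables and an illustrative choice of User-Agent as the header to vary on for text objects, might look like this:
CONFIG proxy.config.http.cache.enable_default_vary_headers INT 1
CONFIG proxy.config.http.cache.vary_default_text STRING User-Agent
CONFIG proxy.config.http.cache.vary_default_images STRING NULL
CONFIG proxy.config.http.cache.vary_default_other STRING NULL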
Note
If you specify Cookie as the header field on which to vary in the above variables, make sure that the variable proxy.config.http.cache.cache_responses_to_cookies is set appropriately.
Limiting the Number of Alternates for an Object¶
You can limit the number of alternates Traffic Server can cache perobject (the default is 3).
Important
Large numbers of alternates can affect Traffic Server cache performance because all alternates have the same URL. Although Traffic Server can look up the URL in the index very quickly, it must scan sequentially through available alternates in the object store.
To alter the limit on the number of alternates:
- Edit proxy.config.cache.limits.http.max_alts in records.config:
CONFIG proxy.config.cache.limits.http.max_alts INT 5
- Run the command traffic_line -x to apply the configuration changes.
Using Congestion Control¶
The Congestion Control option enables you to configure Traffic Server to stop forwarding HTTP requests to origin servers when they become congested. Traffic Server then sends the client a message to retry the congested origin server later.
To enable this option:
- Set proxy.config.http.congestion_control.enabled to 1 in records.config:
CONFIG proxy.config.http.congestion_control.enabled INT 1
- Create rules in congestion.config to specify:
  - Which origin servers Traffic Server tracks for congestion.
  - The timeouts Traffic Server uses, depending on whether a server is congested.
  - The page Traffic Server sends to the client when a server becomes congested.
  - Whether Traffic Server tracks the origin servers by IP address or by hostname.
- Run the command traffic_line -x to apply the configuration changes.
Using Transaction Buffering Control¶
By default, I/O operations are run at full speed, as fast as either Traffic Server, the network, or the cache can support. This can be problematic for large objects if the client-side connection is significantly slower. In such cases the content will be buffered in RAM while waiting to be sent to the client. This could potentially also happen for POST requests if the client connection is fast and the origin server connection slow. If very large objects are being used, this can cause the memory usage of Traffic Server to become very large.
This problem can be ameliorated by controlling the amount of buffer space used by a transaction. A high water and a low water mark are set in terms of bytes used by the transaction. If the buffer space in use exceeds the high water mark, the connection is throttled to prevent additional external data from arriving. Internal operations continue to proceed at full speed until the buffer space in use drops below the low water mark and external data I/O is re-enabled.
Although this is intended primarily to limit the memory usage of Traffic Server, it can also serve as a crude rate limiter by setting a buffer limit and then throttling the client-side connection either externally or via a transform. This will cause the connection to the origin server to be limited to roughly the client-side connection speed.
Traffic Server does network I/O in large chunks (32K or so) and therefore the granularity of transaction buffering control is limited to a similar precision.
The buffer size calculations include all elements in the transaction, including any buffers associated with transform plugins.
Transaction buffering control can be enabled globally by using configuration variables or by TSHttpTxnConfigIntSet() in a plugin.
| Value | Variable | TSHttpTxnConfigIntSet() key |
|---|---|---|
| Enable buffering | proxy.config.http.flow_control.enabled | TS_CONFIG_HTTP_FLOW_CONTROL_ENABLED |
| Set high water | proxy.config.http.flow_control.high_water | TS_CONFIG_HTTP_FLOW_CONTROL_HIGH_WATER |
| Set low water | proxy.config.http.flow_control.low_water | TS_CONFIG_HTTP_FLOW_CONTROL_LOW_WATER |
Be careful to always have the low water mark equal to or less than the high water mark. If you set only one, the other will be set to the same value.
If using TSHttpTxnConfigIntSet(), it must be called no later than TS_HTTP_READ_RESPONSE_HDR_HOOK.
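For example, to enable transaction buffering control globally with a 64 KB high water mark and a 32 KB low water mark (illustrative values only), the records.config settings might look like this:
CONFIG proxy.config.http.flow_control.enabled INT 1
CONFIG proxy.config.http.flow_control.high_water INT 65536
CONFIG proxy.config.http.flow_control.low_water INT 32768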
Reducing Origin Server Requests (Avoiding the Thundering Herd)¶
When an object cannot be served from cache, the request will be proxied to the origin server. For a popular object, this can result in many near-simultaneous requests to the origin server, potentially overwhelming it or associated resources. There are several features in Traffic Server that can be used to avoid this scenario.
Read While Writer¶
When Traffic Server fetches an object from the origin server, any number of clients can be allowed to begin reading from the partially filled cache object once background_fill_completed_threshold percent of the object has been received.
While some other HTTP proxies permit clients to begin reading the response immediately upon the proxy receiving data from the origin server, ATS does not begin allowing clients to read until after the complete HTTP response headers have been read and processed. This is a side-effect of ATS making no distinction between a cache refresh and a cold cache, which prevents knowing whether a response is going to be cacheable.
As non-cacheable responses from an origin server are generally due to that content being unique to different client requests, ATS will not enable read-while-writer functionality until it has determined that it will be able to cache the object.
The following settings must be made in records.config to enable read-while-writer functionality in ATS:
CONFIG proxy.config.cache.enable_read_while_writer INT 1
CONFIG proxy.config.http.background_fill_active_timeout INT 0
CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 0.000000
CONFIG proxy.config.cache.max_doc_size INT 0
All four configurations are required, for the following reasons:
- proxy.config.cache.enable_read_while_writer being set to 1 turns the feature on, as it is off (0) by default.
- The background fill feature (both proxy.config.http.background_fill_active_timeout and proxy.config.http.background_fill_completed_threshold) should be allowed to kick in for every possible request. This is necessary in the event the writer (the first client session to request the object, which triggered ATS to contact the origin server) goes away; another client session then needs to take over as the writer. As such, you should set the background fill timeout and threshold to zero; this assures they never time out and are always allowed to kick in.
- proxy.config.cache.max_doc_size should be unlimited (set to 0), since the object size may be unknown, and going over this limit would cause a disconnect on the objects being served.
Once these are enabled, you have something that is very close to, but not quite the same as, Squid’s Collapsed Forwarding.
Fuzzy Revalidation¶
Traffic Server can be set to attempt to revalidate an object before it becomes stale in cache. records.config contains the settings:
CONFIG proxy.config.http.cache.fuzz.time INT 240
CONFIG proxy.config.http.cache.fuzz.min_time INT 0
CONFIG proxy.config.http.cache.fuzz.probability FLOAT 0.005
For every request for an object that occurs proxy.config.http.cache.fuzz.time seconds before the object is set to become stale (240 seconds in the example above), there is a small chance (proxy.config.http.cache.fuzz.probability == 0.5%) that the request will trigger a revalidation request to the origin.
Note
When revalidation occurs, the requested object is no longer available to be served from cache. Subsequent requests for that object will be proxied to the origin.
For objects getting a few requests per second, these settings would offer a fairly low probability of revalidating the cached object before it becomes stale. This feature is not typically necessary at those rates, though, since odds are that only one or a small number of connections would hit the origin when the object goes stale.
Once request rates rise, the same fuzz.probability leads to a greater chance that the object will be revalidated before becoming stale. This can prevent multiple clients from simultaneously triggering contact with the origin server under higher loads, as they would if no fuzziness were employed for revalidations.
These settings are also overridable by remap rules and via plugins, so they can be adjusted per request if necessary.
Finally, proxy.config.http.cache.fuzz.min_time allows for different time periods to evaluate the probability of revalidation for small TTLs and large TTLs. Objects with small TTLs will start “rolling the revalidation dice” near the fuzz.min_time, while objects with large TTLs would start at fuzz.time.
A logarithmic-like function determines the revalidation evaluation start time (which will be between fuzz.min_time and fuzz.time). As the object gets closer to expiring, revalidation becomes more likely. By default this setting is not enabled, but it should be enabled any time you have objects with small TTLs. Note that this option predates overridable configurations, so you can achieve something similar with a plugin or remap.config settings.
These configuration options are similar to Squid’s refresh_stale_hit configuration option.
Open Read Retry Timeout¶
The open read retry configurations attempt to reduce the number of concurrent requests to the origin for a given object. While an object is being fetched from the origin server, subsequent requests will wait proxy.config.http.cache.open_read_retry_time milliseconds before checking if the object can be served from cache. If the object is still being fetched, the subsequent requests will retry proxy.config.http.cache.max_open_read_retries times. Thus, subsequent requests may wait a total of (max_open_read_retries x open_read_retry_time) milliseconds before establishing an origin connection of their own. For instance, if these are set to 5 and 10 respectively, a connection will wait up to 50 ms for a response to come back from the origin for a previous request before being allowed through.
Important
These settings are inappropriate when objects are uncacheable. In those cases, requests for an object effectively become serialized; subsequent requests wait at least open_read_retry_time milliseconds before being proxied to the origin.
It is advisable that this setting be used in conjunction with Read While Writer for large cacheable objects (those that take longer than max_open_read_retries x open_read_retry_time milliseconds to transfer). Without the read-while-writer settings enabled, while the initial fetch is ongoing, not only would subsequent requests be delayed by the maximum time, but those requests would also result in unnecessary requests to the origin server.
Since ATS now supports overriding these settings per request or per remap rule, you can configure this to suit your setup much more easily.
The configurations are (with defaults):
CONFIG proxy.config.http.cache.max_open_read_retries INT -1
CONFIG proxy.config.http.cache.open_read_retry_time INT 10
The defaults are such that the feature is disabled and every connection is allowed to go to the origin without artificial delay. When enabled, Traffic Server will retry max_open_read_retries times, each with an open_read_retry_time timeout.