About HTTP caching
HTTP caching stores copies of web responses so that clients are served by the caching server instead of the origin site, which makes everything faster. We use HTTP caching firstly because clients perceive their internet connection as faster, and secondly because it lets you both filter and transform your traffic. A side effect of HTTP caching is that clients might still be able to get web pages even if the actual web site is down, provided that those pages are already stored in the cache.
A proxy server interacts with the clients and the destination web server in the following fashion. When the proxy server receives a request that can be served either by its memory cache or by its hard disk cache, the request is not forwarded to its actual destination; instead, the proxy returns the requested files stored in its cache. Similarly, when the proxy server receives a request that matches one of its access control lists and should be denied, it blocks the request instead of serving it.
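The decision flow described above can be sketched as a few lines of Python. This is a minimal illustration, not a real proxy: the cache is a plain dictionary, the access control lists are reduced to a list of hypothetical URL prefixes, and fetch_from_backend() is a placeholder for the network fetch to the origin server.

```python
cache = {}                      # url -> response body (stands in for the memory/disk cache)
denied_prefixes = ["/admin"]    # stands in for the proxy's access control lists (assumed)

def fetch_from_backend(url):
    # Placeholder for the real fetch from the destination web server.
    return f"<html>content of {url}</html>"

def handle_request(url):
    # A request matching an access control list is blocked outright.
    if any(url.startswith(prefix) for prefix in denied_prefixes):
        return "403 Forbidden"
    # A cached copy is served directly; the request is not forwarded.
    if url in cache:
        return cache[url]
    # Otherwise the proxy forwards the request and caches the response.
    body = fetch_from_backend(url)
    cache[url] = body
    return body
```

A real proxy would additionally honour Cache-Control headers, expiry times, and cache eviction, all of which are omitted here.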
If you are 'lucky' and the desired data is found in the cache, this is called a hit; otherwise, it is called a miss. When you have a miss, retrieving the desired data might take longer than fetching it directly, because the unsuccessful search of the cache introduces an additional delay. There are also passes, which are requests that bypass the cache and are never cached, and pipes, which establish a direct connection to the backend and are mainly used for streaming media. Please note that passes do not affect the hit rate.
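The point about passes can be made concrete with a small calculation. The sketch below assumes the usual definition of hit rate as hits divided by cache lookups; since passes never consult the cache, they are simply left out of the denominator.

```python
def hit_rate(hits, misses, passes):
    """Hit rate over cache lookups only; passes bypass the cache
    entirely, so they are excluded from the calculation."""
    lookups = hits + misses
    return hits / lookups if lookups else 0.0

# 90 hits and 10 misses give a 90% hit rate, whether or not
# any requests were passed straight to the backend.
print(hit_rate(90, 10, 0))    # 0.9
print(hit_rate(90, 10, 25))   # still 0.9
```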