To better understand what caching is, consider a quick example of a user arriving at the home page of a website. The page contains many images and is populated with dynamic content drawn from a database. When a user requests the page, the server must do a number of things. At a high level, it must load each image from disk, connect to the database, retrieve the data from the database tables, and present the assembled page to the user.
Using the same flow, let's view the process at a more granular level: the CPU and memory level. At each step, the data must be converted into bits, ones and zeros, before it is sent to the browser, which then renders it as a web page for the user. This whole process is costly compared with reading data that is already in memory. Reading from memory instead of repeating that work is, in essence, what caching is.
Taking the example a step further, suppose you update the code behind the home page to use caching. When a user first arrives at the home page, the data must be read from disk and then stored in memory. On subsequent requests, the application looks up the in-memory data by an identifier and determines whether the lookup was a hit or a miss.
In cache lingo, a hit occurs whenever a request for a piece of cached data, identified by its ID, finds that data in the cache. On a hit, the data is retrieved from the cache and used directly. If the request does not find the identified data in the cache, it is a miss. On a miss, you load the content from disk and place it into the cache for subsequent calls. Zend Framework takes a similar approach.
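The hit-or-miss flow described above can be sketched in a few lines. The example below is a minimal, language-agnostic illustration in Python, not Zend Framework's actual API; the function and key names (`load_from_disk`, `get`) are hypothetical.

```python
# Minimal read-through cache sketch. The cache is a plain dict;
# load_from_disk() stands in for any slow source (disk, database).
cache = {}

def load_from_disk(key):
    # Hypothetical slow lookup; a real app would read a file or query a DB.
    return f"content for {key}"

def get(key):
    if key in cache:             # hit: the identifier was found in the cache
        return cache[key]
    value = load_from_disk(key)  # miss: fall back to the slow source
    cache[key] = value           # store it for subsequent requests
    return value
```

The first call to `get("home")` is a miss and reads from the slow source; every later call with the same identifier is a hit served from memory.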