The concept of cache memory has been around for a long time. A cache is generally used to bridge the performance gap between the compute architecture and persistent storage, since bulk persistent storage simply cannot keep up with the performance demands of the processor. RAM itself illustrates the CPU-to-disk gap: why do we need RAM at all? Because the CPU cannot afford to wait on disk for every access.
A cache can be a hardware or software component that stores data so that future requests for that data can be served from the cache instead of being recomputed or fetched from persistent storage on every operation.
During a read operation, if the requested data is found in the cache, it is called a cache hit, and the data is loaded from the cache alone, with no need to go all the way to the persistent layer. If the requested data is not found in the cache and has to be recomputed or fetched from persistent storage, it is called a cache miss.
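As a minimal sketch of this read path, assuming plain Python dicts standing in for the cache and the persistent layer (the names `SimpleCache` and `backing_store` are illustrative, not any real API):

```python
# Persistent storage stand-in: a plain dict pretending to be the slow layer.
backing_store = {"key1": "value1"}

class SimpleCache:
    def __init__(self):
        self.store = {}

    def read(self, key):
        if key in self.store:          # cache hit: serve from cache
            print(f"cache hit: {key}")
            return self.store[key]
        print(f"cache miss: {key}")    # cache miss: go to the persistent layer
        value = backing_store[key]     # slow fetch
        self.store[key] = value        # populate cache for future reads
        return value

cache = SimpleCache()
cache.read("key1")   # first read: cache miss, fetched from backing_store
cache.read("key1")   # second read: cache hit, served from cache
```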
As mentioned earlier, a cache can be a hardware or software component, and caches are used at various levels of the computing architecture. Below are some examples of hardware and software caches at those different levels.
Hardware Cache:
- CPU Cache
- GPU Cache
- TLB (Translation Lookaside Buffer)
Software Cache:
- Disk Cache
- Web Cache
Caching Benefits:
- Reduced latency of operations
- Higher performance
- Reduced IOPS to back-end storage, which results in lower SAN traffic and contention
- More cost-effective use of expensive (high $/GB) cache storage
Caching Techniques:
There are three main caching techniques that can be deployed. Each method comes with its pros and cons.
- Write-through
- Write-around
- Write-back
Write Through:
- Write-through cache directs write I/O through the cache and on to the underlying persistent storage before confirming I/O completion to the host. This ensures data updates are safely persisted.
- The disadvantage of this technique is that write I/O incurs the latency of persistent storage, because completion is not acknowledged until the data has been written all the way through, not just to the cache.
- Write-through cache is a good fit for applications that re-read written data immediately or frequently: the data is stored in the cache as it traverses it during the write, so those reads are served at cache latency (a minimal sketch follows this list).
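Here is a minimal write-through sketch in Python, assuming dicts for the cache and the persistent store; `WriteThroughCache` and its methods are hypothetical names for illustration only:

```python
class WriteThroughCache:
    def __init__(self, backing):
        self.cache = {}
        self.backing = backing      # stands in for persistent storage

    def write(self, key, value):
        self.cache[key] = value     # update the cache...
        self.backing[key] = value   # ...and write through to storage
        return "ack"                # ack only after the slow write lands

    def read(self, key):
        if key in self.cache:       # recently written data is here,
            return self.cache[key]  # so re-reads are served fast
        value = self.backing[key]
        self.cache[key] = value
        return value

storage = {}
wt = WriteThroughCache(storage)
wt.write("k", "v")                  # slow: waits on persistent storage
assert wt.read("k") == "v"          # fast: hit on the just-written data
```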

Write Around:
- Write-around cache is a similar technique to write-through cache, but write I/O is written directly to permanent storage, bypassing the cache.
- This keeps the cache from being flooded with write I/O that will never be re-read.
- The disadvantage is that a read request for recently written data causes a cache miss: the data has to be read from the slower bulk storage, so that first re-read sees higher latency (a minimal sketch follows this list).
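A minimal write-around sketch along the same lines (again with hypothetical names; a real cache would also handle sizing and eviction):

```python
class WriteAroundCache:
    def __init__(self, backing):
        self.cache = {}
        self.backing = backing

    def write(self, key, value):
        self.backing[key] = value   # write directly to persistent storage
        self.cache.pop(key, None)   # drop any stale cached copy
        return "ack"

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.backing[key]   # freshly written keys miss here
        self.cache[key] = value     # populate the cache only on read
        return value

storage = {}
wa = WriteAroundCache(storage)
wa.write("k", "v")                  # the cache is bypassed entirely
assert wa.read("k") == "v"          # first re-read is a cache miss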

Write Back:
- Write-back cache is where write I/O is written to the cache and completion is immediately confirmed to the host; the data is committed (destaged) to persistent storage later.
- This yields low latency and high throughput for write-intensive applications, but there is a data-availability risk: until the destage happens, the only copy of the written data is in the cache, which is why implementations often protect it with battery backing or cache mirroring.
- Write-back cache is the best performer for mixed workloads, as read and write I/O see similar response times (a minimal sketch follows this list).
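And a minimal write-back sketch (hypothetical names; a real implementation would destage on a timer or on eviction, and protect the dirty data):

```python
class WriteBackCache:
    def __init__(self, backing):
        self.cache = {}
        self.dirty = set()              # keys not yet persisted
        self.backing = backing

    def write(self, key, value):
        self.cache[key] = value         # fast path: cache only
        self.dirty.add(key)             # mark for later destaging
        return "ack"                    # acked before storage sees it

    def flush(self):
        # Destage dirty entries to persistent storage.
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()

storage = {}
wb = WriteBackCache(storage)
wb.write("k", "v")                      # low latency: no storage round trip
assert "k" not in storage               # data is at risk until destaged
wb.flush()
assert storage["k"] == "v"
```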

Relating the above to VMware vSphere virtualization: vSphere has features such as vFRC (vSphere Flash Read Cache) and vSAN that use these caching mechanisms. vFRC provides write-through (read) caching, whereas VMware vSAN uses a write-back caching mechanism.