Web App Development and Caching

Any web developer who works with external services or databases (which is to say, almost every web developer) has run into performance problems. The code itself usually runs quickly; databases and external services or APIs are orders of magnitude slower. Waiting on an external API to load is basically the computer equivalent of waiting for a brontosaurus to walk a kilometer.

As web developers, we have a very powerful tool called caching. The computer you're reading this sentence on uses it constantly, with several levels of caching between the CPU, memory, and hard drive (or SSD). Caching means saving the result of a slow operation in a faster, more easily accessible place. In this article, we'll focus on caching database results and API results.

There are only two hard things in computer science: cache invalidation and naming things.

-- Phil Karlton (Adapted)

Cache invalidation is a hard problem. Let me illustrate:

  1. Server A fetches Record A and associated records from the database, and caches it.
  2. Server B updates Record A.
  3. Server A continues serving its cached copy from step 1 (until it expires).
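The three steps above can be sketched in plain Ruby. The `DB` and `CACHE_A` Hashes here are stand-ins for a real database and a real cache client, purely for illustration:

```ruby
DB = { "record_a" => "version 1" }   # canonical data store (shared)
CACHE_A = {}                         # Server A's local cache

# Server A: return the cached copy if present, otherwise hit the "database".
def fetch_record(id)
  CACHE_A[id] ||= DB[id]
end

fetch_record("record_a")             # step 1: Server A caches "version 1"
DB["record_a"] = "version 2"         # step 2: Server B updates the record
stale = fetch_record("record_a")     # step 3: Server A still serves "version 1"
```

Until the cached entry expires, Server A keeps handing out `"version 1"` even though the database already holds `"version 2"`.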

There are a few ways to solve this problem.

You can manually invalidate caches:

  1. Server A fetches Record A and associated records from the database, and caches it.
  2. Server B updates Record A, notifying all servers to remove cached copies of Record A.
  3. Server A's cached copy was removed in step 2, so on the next request it fetches the updated Record A from the database.
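Manual invalidation amounts to deleting the key on every write. A minimal sketch, again using Hashes as stand-ins for the database and one cache server:

```ruby
DB = { "record_a" => "version 1" }   # canonical data store (shared)
CACHE_A = {}                         # Server A's local cache

def fetch_record(id)
  CACHE_A[id] ||= DB[id]
end

# Server B's update path: write the record, then purge it from every
# cache server (here there is only one).
def update_record(id, value)
  DB[id] = value
  CACHE_A.delete(id)
end

fetch_record("record_a")                 # Server A caches "version 1"
update_record("record_a", "version 2")   # write + purge
fresh = fetch_record("record_a")         # cache miss, so Server A gets "version 2"
```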

This is tenable with one or two cache servers, but it clearly doesn't scale: every update has to send a cache purge request to every one of your cache servers.

Then there’s my favorite — key-based cache invalidation:

  1. Server A fetches Record A, and looks up the cache key “Record A [timestamp when Record A was updated]”. It doesn’t exist, so it fetches associated records and stores everything in the cache.
  2. Server B updates Record A.
  3. Server A fetches Record A, and looks up the cache key “Record A [timestamp when Record A was updated]”. The timestamp has changed, so the key doesn’t exist; Server A fetches the associated records and caches everything under the new key.
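The trick is that the record's `updated_at` timestamp is embedded in the cache key itself, so an update silently produces a brand-new key. A sketch of the idea, with a Hash standing in for the database:

```ruby
DB = { "record_a" => { value: "version 1", updated_at: Time.utc(2020, 1, 1) } }
CACHE = {}

def fetch_record(id)
  row = DB[id]                              # one query to the canonical store
  key = "#{id}/#{row[:updated_at].to_i}"    # key embeds the update timestamp
  CACHE[key] ||= row[:value]                # miss -> cache under this key
end

fetch_record("record_a")                    # cached under the old key
DB["record_a"] = { value: "version 2", updated_at: Time.utc(2020, 1, 2) }
fresh = fetch_record("record_a")            # new key, cache miss, fresh value
```

Note that no purge messages are needed, and note that the old key is simply left behind in the cache, which is the drawback discussed below.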

This method has some drawbacks: it still requires one query to the canonical data store, and you need to remember to update the updated_at attribute of your record whenever any associated records change. If you’re using Rails, this is trivial:

class MyRecord < ActiveRecord::Base
  has_many :associated_records
end

class AssociatedRecord < ActiveRecord::Base
  # touch: true bumps my_record.updated_at whenever this record is saved
  # or destroyed, which changes my_record's cache key
  belongs_to :my_record, touch: true
end

Another drawback is that every update leaves a stale entry behind, so your cache gradually fills up with old keys. Luckily, there are caches that already deal with this! LRU, or Least Recently Used, is a cache eviction policy that removes the least recently used entries first, making room for new ones. Redis can be configured as an LRU cache, Memcached is approximately LRU, and the Rails memory cache store also uses an LRU algorithm.
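To make the eviction policy concrete, here is a toy LRU cache in plain Ruby (not how Redis or Memcached implement it internally, just the policy itself). It relies on the fact that Ruby Hashes preserve insertion order, so re-inserting a key on access moves it to the "most recently used" end:

```ruby
class LRUCache
  def initialize(max_size)
    @max_size = max_size
    @store = {}
  end

  def read(key)
    return nil unless @store.key?(key)
    @store[key] = @store.delete(key)   # re-insert: mark as most recently used
  end

  def write(key, value)
    @store.delete(key)                 # re-inserting moves the key to the end
    @store[key] = value
    # Evict the least recently used key (the first one) if over capacity.
    @store.delete(@store.keys.first) if @store.size > @max_size
    value
  end

  def keys
    @store.keys
  end
end

cache = LRUCache.new(2)
cache.write("a", 1)
cache.write("b", 2)
cache.read("a")       # "a" is now the most recently used
cache.write("c", 3)   # over capacity: evicts "b", the least recently used
```

After this sequence the cache holds only `"a"` and `"c"`; the old key `"b"` was evicted without any explicit purge, which is exactly what cleans up stale key-based cache entries over time.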

“Caching sounds great! How do I use it?”

Caching is not something that you should “tack on” to an app. There are awesome tools, such as Varnish, for bolting a cache onto an existing application, but bolting it on is not ideal. The ideal web application is designed from the ground up with caching in mind — even in the development environment. If you’re writing tests, make sure your test environment is connected to a cache, then test cache invalidation and lookup. Ideally, you should use the same cache in your development and test environments that you use in production.
