Caching in Ocean

From Ocean Framework Documentation Wiki

Ocean makes full use of HTTP-level caching. You have control of all HTTP caching and can use any cache policies you want. We of course urge you to use the Cache-Control header liberally and to support Conditional GET wherever possible. Rails has built-in means to do so, and the ocean-rails gem enhances this functionality further.
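As an illustration of what Conditional GET buys you, here is a minimal sketch in plain Ruby of the ETag comparison that a Rails controller performs via its built-in fresh_when/stale? helpers (those helper names are Rails'; everything else below is illustrative):

```ruby
require 'digest'

# Compute a strong ETag validator for a response body.
def etag_for(body)
  %("#{Digest::MD5.hexdigest(body)}")
end

# Decide whether a 304 Not Modified can be sent instead of the full body.
# `if_none_match` is the client's If-None-Match request header (or nil).
def conditional_get(body, if_none_match)
  etag = etag_for(body)
  if if_none_match == etag
    { status: 304, headers: { 'ETag' => etag }, body: nil }
  else
    { status: 200, headers: { 'ETag' => etag }, body: body }
  end
end

first  = conditional_get('{"id": 1}', nil)                      # 200 with full body
second = conditional_get('{"id": 1}', first[:headers]['ETag'])  # 304, no body
```

When the validator matches, the 304 response carries no body, which is what lets a cache answer polling clients cheaply.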


The key to caching in Ocean is Varnish, a high-capacity reverse HTTP proxy which is an integral part of the architecture. All requests always pass through Varnish, whether from the outside or from the inside. External HTTPS requests are SSL terminated before they reach Varnish.


Varnish is thus the face of Ocean vis-à-vis the world, as well as an accelerator for all traffic, including Ocean's internal operations. This has several interesting consequences.

  • We can always be sure that all requests will be cached in a fully predictable manner in a shared cache very near to the origin servers.
  • HTTPS requests will be cached just like HTTP requests.
  • Authorisations can be shared via Varnish, making Ocean authorisation lightweight and extremely scalable.
  • Varnish will handle all Conditional GETs without bothering the Ocean services. This permits, indeed encourages, massive polling from clients.
  • Varnish will also handle all must-revalidate requests, again without bothering the Ocean origin servers.
  • Smaller instances can be used for Ocean services, reducing costs dramatically.
  • Varnish can be programmed to give us full control also of the expiration of cached data.
  • As we can make sure Varnish holds authoritative information at all times, aggressive caching becomes possible.

Cached HTTPS

The above means that even when a client is communicating with Ocean over HTTPS, we still have the use and full control of the shared Varnish cache layer closest to the servers.


As mentioned earlier, HTTPS normally disallows shared caches from caching entities at all; Ocean, however, gives you full control of server-side Varnish caching. This is very important since most of the Ocean traffic is HTTPS. The presence of a server-side cache over which you have full control allows very powerful caching strategies to be implemented, both by Ocean itself and by applications running on top of Ocean.

Ocean Caching is Per User

Ocean programs Varnish in such a way as to include the authentication token representing a user as part of the cache identity for a URI. In this way, all cached items are per user. This prevents leakage of cached information. The authentication is passed using the HTTP header X-API-Token. Ocean shares tokens for each user in such a way as to maximise cache efficiency.
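The effect can be sketched as follows; this is a toy model of the cache identity, not Varnish's actual VCL hashing code:

```ruby
require 'digest'

# Sketch: the hash identifying a cached object is derived from the request
# URL plus the X-API-Token header, so every user gets a separate cache entry.
# (The hashing scheme here is illustrative; the real one lives in Varnish's VCL.)
def cache_key(url, headers)
  token = headers['X-API-Token'].to_s
  Digest::SHA1.hexdigest("#{url}|#{token}")
end

alice = cache_key('/v1/media/1', 'X-API-Token' => 'token-alice')
bob   = cache_key('/v1/media/1', 'X-API-Token' => 'token-bob')
alice == bob  # => false: same URI, different users, different cache entries
```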

The Magic of s-maxage

Ocean places no restriction on the use of Cache-Control attributes – except for one, s-maxage, to which Varnish has been programmed to respond in a special way. The s-maxage attribute is like max-age in that it specifies how long an entity should be regarded as fresh: requests for the entity during that period of time will never reach the origin server. However, max-age applies to all caches, including private caches intended for only one user such as a browser cache, while s-maxage applies only to shared caches serving many users, where it overrides max-age.

In order to give Ocean full control over Varnish, the s-maxage attribute in Cache-Control headers is reserved in Ocean and interpreted by Varnish in a special way. The behaviour doesn't break any standards, but it does limit the effect of s-maxage to the Varnish layer only.

When Varnish receives a Cache-Control header with an s-maxage value, the following happens:

  1. Varnish obeys the directive for its own use. That is, the effect of s-maxage=3600 as seen by Varnish will be exactly like max-age=3600 as seen by a private cache. In this case, Varnish will hold on to the entity for an hour, during which period it will be regarded as fresh.
  2. The s-maxage attribute and its value are removed from the outgoing Cache-Control header.
  3. An Age: 0 HTTP header will be added to the entity whenever it is delivered.

Thus in Ocean, s-maxage is like max-age, but for the local Varnish layer only.
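The three steps can be sketched as follows, assuming a hypothetical process_cache_control helper that stands in for Varnish's header processing:

```ruby
# Sketch of the three steps above: given a backend Cache-Control value,
# compute Varnish's own TTL, the rewritten outgoing header, and Age: 0.
def process_cache_control(value)
  directives = value.split(',').map(&:strip)
  s_maxage   = directives.find { |d| d.start_with?('s-maxage=') }
  ttl        = s_maxage ? s_maxage.split('=').last.to_i : nil
  outgoing   = directives.reject { |d| d.start_with?('s-maxage=') }.join(', ')
  { ttl: ttl, cache_control: outgoing, age: 0 }
end

process_cache_control('public, max-age=0, s-maxage=31557600')
# => { ttl: 31557600, cache_control: "public, max-age=0", age: 0 }
```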

NOTE: In Ocean, s-maxage never reaches beyond Varnish. Only Varnish will ever see s-maxage.

A response sent from the Ocean servers containing the following header:

Cache-Control: public, max-age=0, s-maxage=31557600

tells Varnish to cache the entity for a year. Clients beyond Varnish will only see

Cache-Control: public, max-age=0

which effectively makes them revalidate with Varnish each time a request is made; if the entity hasn't changed, Varnish answers 304 Not Modified and the client is free to reuse any locally stored representation of the entity.

The net effect is that we have full control of how the entity is cached, as we now have a means of telling Varnish to hold on to the entity longer than other caches. Values other than 0 are of course also possible, as in

Cache-Control: public, max-age=10, s-maxage=3600

This will allow local browser caches to cache an entity for 10 seconds, after which they must revalidate it with Varnish. The origin server will receive a request at most once an hour.
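A small simulation, under the simplifying assumption of a single browser cache in front of a single Varnish, illustrates the request flow for this header:

```ruby
# Simulate the two cache layers for Cache-Control: public, max-age=10, s-maxage=3600.
# The browser revalidates with Varnish every 10 s; Varnish hits the origin
# at most once per 3600 s. Times are in seconds and purely illustrative.
def origin_hits(request_times, max_age: 10, s_maxage: 3600)
  browser_fresh_until = -1
  varnish_fresh_until = -1
  hits = 0
  request_times.each do |t|
    next if t <= browser_fresh_until    # served from the browser cache
    if t > varnish_fresh_until          # Varnish must revalidate with the origin
      hits += 1
      varnish_fresh_until = t + s_maxage
    end
    browser_fresh_until = t + max_age
  end
  hits
end

# One request per second for two hours: the origin is hit only twice.
origin_hits((0...7200).to_a)  # => 2
```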



Invalidating Cached Resources

All the above special programming of Varnish allows us to instruct our server-side enterprise-level HTTP accelerator that it and it alone is to hang on to an entity for a longer period than other caches. This in itself would be interesting but fairly useless unless we also had the ability to expire entities in the Varnish cache at will.

And we do: Ocean programs Varnish to respond to two non-standard HTTP methods, called PURGE and BAN. They work just like GET or any other standard HTTP method in that they take a URL as the target of the operation.

NOTE: Varnish will only respond to PURGE and BAN requests originating from your local network.


When Varnish receives a PURGE request for a URL, it will immediately purge the entity it has stored for that URL from its cache. A subsequent request for the same URL will hit the origin server.


The URL of a BAN request is not treated literally; instead it is interpreted as a regular expression. The effect of a BAN is to immediately invalidate all URLs matching the regex in the Varnish cache.

However, traversing the whole Varnish cache (typically 5GB in size) whenever a BAN is received would be a very bad idea from a performance perspective. Instead, whenever an entity is to be served from the cache, Varnish examines a list of active bans. If any ban matches the URL of the cached entity, the entity is purged and the origin server will be hit instead.

Thus, purging from RAM is not instant with BAN, but logically the effect is exactly the same. Varnish uses a sophisticated algorithm to decide which bans to match against a URL; the net effect is that BAN can be freely used to invalidate multiple entities with only negligible overhead.
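The lazy BAN check described above can be modelled with a toy in-memory cache. Real Varnish implements this in C and VCL, and additionally only applies a ban to objects cached before the ban was issued; the sketch below ignores that refinement:

```ruby
# Toy model of PURGE and BAN semantics. PURGE drops one URL immediately;
# BAN records a regex that is checked lazily on each lookup.
class ToyCache
  def initialize
    @store = {}
    @bans  = []
  end

  def put(url, entity)
    @store[url] = entity
  end

  # PURGE: drop a single URL from the cache immediately.
  def purge(url)
    @store.delete(url)
  end

  # BAN: record a regex; matching entities are evicted lazily on lookup.
  def ban(regex)
    @bans << Regexp.new(regex)
  end

  # Lookup: an entity matching any active ban is evicted and treated as a miss.
  def get(url)
    entity = @store[url]
    return nil unless entity
    if @bans.any? { |b| b.match?(url) }
      @store.delete(url)
      nil
    else
      entity
    end
  end
end

cache = ToyCache.new
cache.put('/v1/media/1', 'one')
cache.put('/v1/media/2', 'two')
cache.ban('^/v1/media/')   # both entities are now stale, evicted on next lookup
cache.get('/v1/media/1')   # => nil: the ban matched, so the origin would be hit
```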

The BAN algorithm is complemented with something the Varnish developers call "The Ban Lurker". Its sinister name notwithstanding, the Ban Lurker is a benign creature: it's simply a daemon which traverses the Varnish cache in the background, evicting entities which previously have been BANned. In this way, entities with extremely long expiration times won't take up memory unnecessarily.

Aggressive Caching

Ocean uses all the above to implement Aggressive Caching. It is used extensively in the Core Services.

  1. Resource representations, both single resources and collections of resources, are cached in Varnish using s-maxage expiration times. If so desired, these can be extremely long.
  2. When a resource changes, it is invalidated in the Varnish cache using a BAN request.
  3. All collections of which the resource is a part (or of which it might be a part) are also invalidated using BAN.
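Steps 2 and 3 can be sketched as follows. The ban lambda stands in for an HTTP BAN request to Varnish, and the paths and regexes are illustrative, not the actual ocean-rails conventions:

```ruby
# When a resource changes, issue BAN requests for the resource itself
# and for every collection it may appear in (including query variants).
def invalidate(resource_path, collection_paths, ban:)
  ban.call("^#{Regexp.escape(resource_path)}($|\\?)")
  collection_paths.each do |c|
    ban.call("^#{Regexp.escape(c)}($|\\?)")
  end
end

banned = []
invalidate('/v1/media/1', ['/v1/media'], ban: ->(regex) { banned << regex })
banned.length  # => 2: one ban for the resource, one for the collection
```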

The ocean-rails gem provides automated support for this functionality through its included classes, modules, and generators. The developer has complete control of all aspects of aggressive caching, and may choose to use it in full or in part, or not at all.