As you start scaling up your serverless applications, caching is one of the elements that can greatly improve your site’s performance and responsiveness.
For Django deployments hosted on Cloud Run, you can add caching with Memorystore for Redis in just a few steps, providing low latency access and high throughput for your heavily accessed data.
A note on costs: While Cloud Run has a generous free tier, Memorystore and Serverless VPC have a base monthly cost. At the time of writing, the estimated cost of setting up this configuration is around USD $50/month, but you should confirm all costs when provisioning any cloud infrastructure.
The networking complexities of Memorystore
If you’ve deployed complex services on Cloud Run before, you’ll be aware that managed products like Cloud SQL have a public IP address, meaning they can be accessed from the public internet. Cloud SQL databases can optionally be configured to have a private IP address (often looking like 10.x.x.x), meaning they are only accessible by entities on the same internal network. That internal network is also known as a Virtual Private Cloud Network, or VPC network.
Memorystore allows only internal IP addresses, so additional configuration is required to allow Cloud Run access.
For Cloud Run to connect to Memorystore, you will need to establish a Serverless VPC Access connector, which provides connectivity between Cloud Run and the VPC network where your Memorystore instance lives.
Connecting to Redis with Serverless VPC
To start, you will need to decide if you want to use the existing default network, or create a new one. The default network is automatically available in your Google Cloud project when you enable the Compute Engine API, and offers a /20 subnet (4,096 addresses) of network space for different applications to communicate with each other.
If you want to create a new network, or have other networking constraints, you can read more about creating and using VPC networks. This article will opt to use the default network.
To create a Serverless VPC Connector, go to the Create Connector page in the Google Cloud console, and enter your settings, connecting to your selected network.
If you have previously created anything on this network, ensure that whatever subnet range you enter does not overlap with any other subnets you have in that network!
You can also create a connector using gcloud, using the sample configuration below, which uses the default network, and suggested configurations from the console:
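A minimal sketch of that command might look like the following (the connector name, region, and IP range are placeholder values, not prescriptions):

```shell
# Creates a Serverless VPC Access connector on the default network.
# "myconnector" and the region are placeholders; the /28 range must
# not overlap any existing subnet in the network.
gcloud compute networks vpc-access connectors create myconnector \
  --network default \
  --region us-central1 \
  --range "10.8.0.0/28"
```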
Once your network and VPC connector are set up, you can then provision your Memorystore instance on that network.
Redis vs Memcached
While Memcached has long been supported in core Django, a native Redis cache backend only arrived in Django 4.0 (December 2021). Even so, the 2021 Django Developers Survey showed that developers who chose to use caching on their sites were nearly four times more likely to use Redis than Memcached.
This post will focus on deploying Redis caching, but you can read the Memcached implementation notes in the Django documentation to see how you could adapt this post for Memcached.
Provisioning Memorystore for Redis
Following the Creating a Redis instance on a VPC Network instructions, you can provision an instance in moments after selecting your tier, capacity, and region.
You can also create a Redis instance from gcloud, using the sample minimum configuration below:
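As a sketch, a minimum configuration might look like this (the tier defaults to Basic; the region is a placeholder and should match your connector's region):

```shell
# Provisions a 1 GB Basic-tier Redis instance named "myredis" on the
# default network. The region is a placeholder; use the same region
# as your Cloud Run service and VPC connector.
gcloud redis instances create myredis \
  --size 1 \
  --region us-central1 \
  --network default
```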
For reference, the configuration examples in the rest of this article will refer to this instance as “myredis”.
Configuring Django for caching
One of the simplest ways to connect your Memorystore instance with Django is by implementing per-site caching. This will cache every page on your site for a default of 600 seconds. Django also has options for configuring per-view and template fragment caching.
To implement per-site caching, you will need to make some changes to your settings.py file.
Adding CACHES to settings.py
After your DATABASES configuration, you’ll need to add a CACHES setting, with the backend and location settings. The backend will be “django.core.cache.backends.redis.RedisCache”, and the location will be the “redis://” scheme, followed by the IP and Port of your Redis instance. You can get the IP and port of your instance using gcloud:
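For example, assuming the “myredis” instance created earlier (the region is a placeholder):

```shell
# Prints the host IP and port of the Redis instance, tab-separated.
gcloud redis instances describe myredis \
  --region us-central1 \
  --format "value(host,port)"
```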
Your final CACHES entry will look something like this:
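For example (the IP address and port here are placeholders; substitute the values returned by gcloud for your instance):

```python
# settings.py: per-site cache backed by Memorystore for Redis.
# The IP and port below are placeholders for your instance's values.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://10.127.16.3:6379",
    }
}
```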
If you’re opting to use django-environ with Secret Manager (as we do in our tutorials), you would add the REDIS_URL to your configuration settings:
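As a sketch, assuming your settings live in a Secret Manager secret named django_settings whose contents are kept in a local .env file (the secret name, file name, and IP/port are all placeholders):

```shell
# Append the Redis connection string to the settings file, then push
# it as a new secret version. All values shown are placeholders.
echo "REDIS_URL=redis://10.127.16.3:6379" >> .env
gcloud secrets versions add django_settings --data-file .env
```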
Then use that value in your settings.py:
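A minimal sketch, assuming django-environ is already initialized in your settings.py:

```python
import environ

env = environ.Env()

# Read the Redis connection string from the secret-backed settings.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": env("REDIS_URL"),
    }
}
```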
(As of django-environ 0.8.1 there is a pending feature request to support this in the existing cache_url helper.)
Adding MIDDLEWARE to settings.py
In your existing MIDDLEWARE configuration, add the UpdateCacheMiddleware before and FetchFromCacheMiddleware after your (probably) existing CommonMiddleware entry:
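A minimal sketch (your MIDDLEWARE list will contain other entries; what matters is the relative order of these three):

```python
MIDDLEWARE = [
    "django.middleware.cache.UpdateCacheMiddleware",  # before CommonMiddleware
    # ... your other middleware ...
    "django.middleware.common.CommonMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware",  # after CommonMiddleware
]
```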
This order is important, as middleware is applied differently during different phases of the request-response cycle. You can read more about this in the Django documentation.
Even though Django handles a lot of the caching implementation for you, you will still need to add the Python bindings as dependencies to your application. Django suggests installing both redis-py (for native binding support) and hiredis-py (to speed up multi-bulk replies). To do this, add redis and hiredis to your requirements.txt (or pyproject.toml if you’re using Poetry).
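For example, in requirements.txt:

```
redis
hiredis
```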
While you’re in your dependencies, make sure you bump your Django dependency to the most recent version!
Updating your deployment
With the configurations now set, you can update your service, making sure to attach the VPC connector you set up earlier! How you do this will depend on how you deploy.
If you’re using continuous deployment, it will be easier to manually update your service before committing your code.
If you haven’t discovered source-based deployments, they allow you to build and deploy a Cloud Run service in one step:
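For example, assuming a service named myservice and the connector created earlier (both names are placeholders):

```shell
# Builds from source and deploys to Cloud Run, attaching the
# Serverless VPC Access connector so the service can reach Redis.
gcloud run deploy myservice \
  --source . \
  --vpc-connector myconnector
```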
If you originally set up your service to reference the latest version of the secret, you won’t need to change that setting now. But if you pinned a specific version, you’ll need to update it.
After deploying, you may feel like your site is much speedier, but how do you confirm your cache is actually being used?
If you already have performance monitoring in place, you can check for improvements over time. Or, if your site has any pages or searches, you can see whether they run faster the second time.
You can confirm that entries are being written to your cache in a number of ways. In Memorystore, you can go to the Monitoring tab of your instance and check the “Calls” graph for statistics about the get and set operations. You can also export the data to Cloud Storage, then read it using tools like rdbtools.
For optimal performance, caching should be in the same network as the primary data source. Since you’ve just set up a Serverless VPC connector, you should also ensure that your Cloud SQL database is on the same VPC network. You’ll want to edit your database to set a private IP in the network you selected, then update your secret settings to reference this new IP (rather than the /cloudsql/instance value).
Because Cloud Run previously handled the socket-based Cloud SQL connection for you, you can also remove that Cloud SQL connection setting from your service.
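For example (the service name is a placeholder):

```shell
# Removes Cloud SQL connections from the service, since the database
# is now reached over the VPC network via its private IP.
gcloud run services update myservice \
  --clear-cloudsql-instances
```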
Website go brrr
As your serverless applications scale, you may find you need to expand your infrastructure to suit. Using hosted services like Memorystore in Google Cloud, you can adapt and upgrade your applications to grow as you do without adding to your operations burden.
You can learn more about Memorystore for Redis in this Serverless Expeditions episode.