
Using Memorystore for Redis to cache your Django applications

As you start scaling up your serverless applications, caching is one of the elements that can greatly improve your site’s performance and responsiveness.

With the release of Django 4.0, Redis is now a core supported caching backend. Redis is available as part of Memorystore, Google Cloud’s managed in-memory data store.

For Django deployments hosted on Cloud Run, you can add caching with Memorystore for Redis in just a few steps, providing low latency access and high throughput for your heavily accessed data. 

A note on costs: While Cloud Run has a generous free tier, Memorystore and Serverless VPC have a base monthly cost. At the time of writing, the estimated cost of setting up this configuration is around USD $50/month, but you should confirm all costs when provisioning any cloud infrastructure.

The networking complexities of Memorystore

If you’ve deployed complex services on Cloud Run before, you’ll be aware that managed products like Cloud SQL have a public IP address, meaning they can be accessed from the public internet. Cloud SQL databases can optionally be configured to have a private IP address (often looking like 10.x.x.x), meaning they are only accessible by entities on the same internal network. That internal network is also known as a Virtual Private Cloud Network, or VPC network. 

Memorystore allows only internal IP addresses, so additional configuration is required to allow Cloud Run to access it.

For Cloud Run to be able to connect to Memorystore, you will need to establish a Serverless VPC Access connector, which provides connectivity between Cloud Run and the VPC where your Memorystore instance lives.

Connecting to Redis with Serverless VPC

To start, you will need to decide if you want to use the existing default network, or create a new one. The default network is automatically available in your Google Cloud project when you enable the Compute Engine API, and offers a /20 subnet (4,096 addresses) of network space for different applications to communicate with each other.

If you want to create a new network, or have other networking constraints, you can read more about creating and using VPC networks. This article will opt to use the default network.

To create a Serverless VPC Connector, go to the Create Connector page in the Google Cloud console, and enter your settings, connecting to your selected network.

If you have previously created anything on this network, ensure that whatever subnet range you enter does not overlap with any other subnets you have in that network! 

You can also create a connector using gcloud, with the sample configuration below, which uses the default network and the suggested settings from the console:

    gcloud compute networks vpc-access connectors create myserverlessvpc \
        --region=us-central1 \
        --network=default \
        --range=10.8.0.0/28 \
        --min-instances=2 \
        --max-instances=10 \
        --machine-type=e2-micro

Once your network and VPC connector are set up, you can then provision your Memorystore instance on that network.

Redis vs Memcached

While Memcached has long been supported in core Django (with the pymemcache backend added in version 3.2, released April 2021), the 2021 Django Developers Survey showed that developers who chose to use caching on their sites were nearly four times more likely to use Redis than Memcached.

This post will focus on deploying Redis caching, but you can read the Memcached implementation notes in the Django documentation to see how you could adapt this post for Memcached.

Provisioning Memorystore for Redis

Following the Creating a Redis instance on a VPC Network instructions, you can provision an instance in moments after selecting your tier, capacity, and region. 

You can also create a Redis instance from gcloud, using the sample minimum configuration below:

    gcloud redis instances create myredis \
        --size=1GB \
        --region=us-central1

You can optionally configure Redis AUTH for this instance. You can read more about the configuration options in the reference documentation.

Configuration examples in the rest of this article will reference this instance as “myredis”.

Configuring Django for caching

One of the simplest ways to connect your Memorystore instance with Django is by implementing per-site caching. This will cache every page on your site for a default of 600 seconds. Django also has options for configuring per-view and template fragment caching. 
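If you want finer-grained control, per-view caching is a lightweight alternative. Here is a minimal sketch using Django’s cache_page decorator (my_cached_view is a placeholder view name, not something from this article’s project):

    # Cache a single view for 15 minutes instead of enabling the per-site cache.
    from django.views.decorators.cache import cache_page

    @cache_page(60 * 15)
    def my_cached_view(request):
        ...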

To implement per-site caching, you will need to make some changes to your settings.py file.

Adding CACHES to settings.py

After your DATABASES configuration, you’ll need to add a CACHES setting, with the backend and location settings. The backend will be “django.core.cache.backends.redis.RedisCache”, and the location will be the “redis://” scheme, followed by the IP and port of your Redis instance. You can get the IP and port of your instance using gcloud:

    gcloud redis instances describe myredis \
        --region us-central1 \
        --format "value[separator=':'](host,port)"

Your final CACHES entry will look something like this: 

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.redis.RedisCache',
            'LOCATION': 'redis://MEMORYSTOREIP:6379',
        }
    }
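If you enabled Redis AUTH on your instance, the AUTH string can be embedded in the connection URL. A hypothetical sketch (AUTH_STRING is a placeholder; in practice you would keep it in Secret Manager rather than hard-coding it in settings.py):

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.redis.RedisCache',
            # AUTH string supplied as the password component of the URL
            'LOCATION': 'redis://:AUTH_STRING@MEMORYSTOREIP:6379',
        }
    }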

If you’re opting to use django-environ with Secret Manager (as we do in our tutorials), you would add REDIS_HOST to your configuration settings:

    DATABASE_URL=...
    SECRET_KEY=...
    REDIS_HOST="MEMORYSTOREIP:6379"

Then use that value in your settings.py:

    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.redis.RedisCache",
            "LOCATION": f"redis://{env('REDIS_HOST')}"
        }
    }

(As of django-environ 0.8.1 there is a pending feature request to support this in the existing cache_url helper.) 

Adding MIDDLEWARE to settings.py

In your existing MIDDLEWARE configuration, add UpdateCacheMiddleware before and FetchFromCacheMiddleware after your (probably) existing CommonMiddleware entry:

    MIDDLEWARE = [
        ...
        "django.middleware.cache.UpdateCacheMiddleware",    # add me!
        "django.middleware.common.CommonMiddleware",
        "django.middleware.cache.FetchFromCacheMiddleware",  # add me!
        ...
    ]

This order is important, as middleware is applied differently during different phases of the request-response cycle. You can read more about this in the Django documentation. 

Adding dependencies

Even though Django handles a lot of the caching implementation for you, you will still need to add the Python bindings as dependencies to your application. Django suggests installing both redis-py (for native binding support) and hiredis-py (to speed up multi-bulk replies). To do this, add redis and hiredis to your requirements.txt (or pyproject.toml if you’re using Poetry). 

While you’re in your dependencies, make sure you bump your Django dependency to the most recent version!
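As a rough sketch, the relevant requirements.txt entries might look like the following (the Django version pin is illustrative, not a recommendation):

    Django>=4.0
    redis
    hiredis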

Updating your deployment

With the configurations set, you can now update your service, making sure you attach the Serverless VPC connector you set up earlier. How you update your deployment will depend on how you deploy.

If you’re using continuous deployment, it will be easier to manually update your service before committing your code. 

    gcloud run services update myservice \
        --vpc-connector myserverlessvpc

If you haven’t discovered source-based deployments, they allow you to build and deploy a Cloud Run service in one step: 

    gcloud run deploy myservice \
        --source . \
        --vpc-connector myserverlessvpc

If you originally set up your service to reference the latest version of the secret, you won’t need to make any changes to that setting now. But if you pinned a specific version, you’ll need to update that reference now.

Checking success

After deploying, your site may feel much speedier, but how do you confirm your cache is actually being used?

If you already have performance monitoring in place, you can check for improvements over time. Or, if you have pages or searches on your site, you can see if they run faster the second time.

To confirm that entries are being written to your cache, you have a number of options. In Memorystore, you can go to the Monitoring tab of your instance and check the “Calls” graph for statistics about the get and set operations. You can also export data to Cloud Storage, then read the data using tools like rdbtools.
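You can also do a quick manual check with Django’s low-level cache API. This is a minimal sketch, assuming you can open a Django shell (python manage.py shell) from an environment that can reach the Memorystore instance over the VPC:

    from django.core.cache import cache

    cache.set("cache_smoke_test", "ok", timeout=60)  # write a test entry to Redis
    print(cache.get("cache_smoke_test"))             # prints "ok" if the cache is reachable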

Improving performance

For optimal performance, caching should be in the same network as the primary data source. Since you’ve just set up a Serverless VPC connector, you should also ensure that your Cloud SQL database is in the same VPC. You’ll want to edit your database to set a private IP in the VPC you created, then update your secret settings to reference this new IP (rather than the /cloudsql/instance value).
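As an illustration only (the user, password, private IP, and database name below are placeholders), the updated secret swaps the /cloudsql/ socket path for the instance’s private IP:

    DATABASE_URL="postgres://django_user:PASSWORD@10.0.0.5:5432/django_db"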

Because Cloud Run handled the socket-based Cloud SQL connections through the Cloud SQL connection setting, you can also remove that setting.

Website go brrr

As your serverless applications scale, you may find you need to expand your infrastructure to suit. Using hosted services like Memorystore in Google Cloud, you can adapt and upgrade your applications to grow as you do without adding to your operations burden. 

You can learn more about Memorystore for Redis in this Serverless Expeditions episode.

