Back when Cloud Run was still in its public preview stage in 2019, I wrote about a dozen reasons why Cloud Run complies with the Twelve-Factor App methodology. That article described how Cloud Run covered most of the twelve factors of application development. However, the twelfth factor — Admin Processes — was noted as “outside the scope of Cloud Run”. This gave Cloud Run a “near perfect score” of 11 out of 12.
With Cloud Run jobs, we can now check off that last factor, as Cloud Run now has a way to run admin processes! You can use this pattern for one-off commands, or for more complex tasks like running database schema migrations for deployments that rely on web frameworks (such as Django, Rails, or Laravel). This article will focus on the database schema migration use case.
While there are existing patterns for applying database migrations, they come with trade-offs and may be limited by your organization’s security policies. Using the Cloud SQL Auth Proxy means you can apply migrations against your production database, but you’d be running those commands from your local machine. You can run migration commands as part of your Cloud Build process, but there will be situations where you don’t want schema migrations applied automatically every time you deploy.
One job at a time
Cloud Run jobs let you run commands within the scope of Cloud Run itself. For Cloud Run services, your container has to start a process that keeps running (like a web server). With Cloud Run jobs, you can invoke a command that exits when it’s complete.
Cloud Run jobs have the same configuration options as Cloud Run services, so you can configure secrets, databases, and service accounts for your job. This also means you can reuse the same container and your service’s existing IAM configuration, for example, without having to set everything up again from scratch.
How you approach this pattern will depend on whether you have Dockerfile-based containers or Cloud Buildpack-based containers.
Dockerfile-based database migrations
As a simple example, take an existing Cloud Run deployment command for a container built with a Dockerfile:
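Something like the following, a sketch with hypothetical names (the service, image, Cloud SQL instance, and secret are all placeholders for your own values):

gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --region us-central1 \
  --set-cloudsql-instances my-project:us-central1:my-instance \
  --set-secrets DATABASE_URL=database_url:latest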
Usually, this service will run as is, with the default entrypoint of the container running your web server of choice. However, you can run something else by overriding the default entrypoint with --command. The only limitation with a Cloud Run service is that whatever you deploy has to have a process that listens for requests.
With Cloud Run jobs, you don’t have this limitation, so you can set whatever custom command you want, using the same command-line arguments to ensure connectivity with your database and secrets.
To adapt the previous example to a job, change deploy to jobs create, and specify your --command:
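For example, keeping the same hypothetical names as before (the python3 manage.py migrate command is a Django-style assumption; use whatever your framework expects):

gcloud run jobs create migrate-job \
  --image gcr.io/my-project/my-image \
  --region us-central1 \
  --set-cloudsql-instances my-project:us-central1:my-instance \
  --set-secrets DATABASE_URL=database_url:latest \
  --command python3 \
  --args manage.py,migrate \
  --execute-now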
In this example, we’re using the --execute-now flag to create the job and immediately execute it. Any time we need to execute that command again, we can just re-run the job:
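Using the hypothetical job name from above:

gcloud run jobs execute migrate-job --region us-central1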
The power of Procfile
When using Dockerfiles, you have to specify the full command you want to run. However, if you’re using Google Cloud’s buildpacks, you can add entries in your Procfile, and then call these entrypoints in your --command.
For example, take the following Procfile:
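Here’s a minimal, Python-flavored sketch (the gunicorn and pytest commands are assumptions about your application; substitute your own):

web: gunicorn --bind :$PORT main:app
migrate: python3 manage.py migrate
test: python3 -m pytest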
By default, Google Cloud’s buildpacks run the web entrypoint. When using Cloud Run jobs with this configuration, you can use --command migrate to run migrations, or --command test to run tests.
In the previous examples we’ve used general commands, but you can create a Procfile with entries based on your application. For example, depending on your web framework of choice:
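For instance, your migrate entry might be one of the following, depending on your framework (these are the conventional commands for each, shown as illustrations rather than verified against any particular project):

Django:  migrate: python3 manage.py migrate
Rails:   migrate: bundle exec rails db:migrate
Laravel: migrate: php artisan migrate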
You can still run any command without adding definitions to the Procfile, as buildpacks include a “launcher” entrypoint for running user-provided shell processes. Use this entrypoint to run any command you like:
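For instance, a sketch (the job name and the Django showmigrations command are illustrative; this relies on the launcher running a single space-separated argument through a shell):

gcloud run jobs create showmigrations-job \
  --image gcr.io/my-project/my-image \
  --region us-central1 \
  --command launcher \
  --args "python3 manage.py showmigrations" \
  --execute-now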
Migrations in practice
So how would we go about adapting this pattern to an existing deployment?
Say I have a Cloud Run service that’s running a Django application. In Django, I use python3 manage.py migrate to apply database migrations.
So, I can add that command to my Procfile in my application code:
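My Procfile then looks something like this (the web entry is whatever already starts the service; the gunicorn invocation here is an assumption):

web: gunicorn --bind :$PORT --workers 1 --threads 8 myproject.wsgi:application
migrate: python3 manage.py migrate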
Then, I redeploy my Cloud Run service with this new code change using source deployments, which will build a new container and update my service:
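For example, assuming the same hypothetical service name and region:

gcloud run deploy my-service --source . --region us-central1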
From there, I can create my job. In this case, I want to use gcloud to get the configurations from my service, specifically the image name and Cloud SQL instance name. I could also copy these from the Cloud Console, but I like scripts.
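A sketch of what that could look like (the names, region, and describe format expression are assumptions; for simplicity the Cloud SQL connection name is pasted in here rather than queried):

# Pull the deployed image from the existing service
IMAGE=$(gcloud run services describe my-service --region us-central1 \
  --format "value(spec.template.spec.containers[0].image)")

# The Cloud SQL connection name, copied from the service's configuration
CLOUDSQL=my-project:us-central1:my-instance

# Create a job that runs the "migrate" Procfile entry with the same image and database
gcloud run jobs create migrate \
  --image $IMAGE \
  --set-cloudsql-instances $CLOUDSQL \
  --command migrate \
  --region us-central1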
Then, I can execute the job:
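Again with the hypothetical names from above:

gcloud run jobs execute migrate --region us-central1 --wait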
Using --wait, I can confirm the job runs successfully, then check the logs at the URL provided.
Since I configured the job to run the latest version of my container, re-running migrations is simple: after I deploy my service, the new container is already available, so I just execute the job again:
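That’s the same execute command as before:

gcloud run jobs execute migrate --region us-central1 --wait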
I don’t just have to run this from the command line, either. I could add this as a step in my Cloud Build deployment script after the deployment step!
The only time I would need to update this job is if something like my Cloud SQL instance changed:
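In that case, a job update along these lines would do it (the new connection name is a placeholder):

gcloud run jobs update migrate \
  --set-cloudsql-instances my-project:us-central1:my-new-instance \
  --region us-central1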
This does presume, however, that my application is built in a way that lets me deploy the service first and then apply database migrations. At the moment, I’m building my container and deploying my service in one action (using source deployments), then running my job. If I need my database migrations to run first, I’d have to build my container, run my job, then deploy my service. Read more about database schema migrations.
A note on Procfiles
If you’re already using Google Cloud’s buildpacks with a language other than Python, Procfiles may be unfamiliar to you. In JavaScript, for example, there’s “one way” to start a service (npm run start, as defined in package.json). When Google Cloud’s buildpacks detect a JavaScript application, they run this by default if you don’t have a Procfile.
In Python, though, there’s no “one command” to start things going, so Python users have to work with Procfiles.
If you start using Procfiles, you’ll have to work out what your web command is, and add that to your Procfile. (At the time of writing, if you include a Procfile, you can’t rely on the default entrypoint functionality.) You can still run ad-hoc commands, Procfile or no; just make sure you use --command launcher and pass your command with --args.
What else can you automate?
With this pattern in mind, think about what else you could automate.
Are you working with Firestore? Do you have backups? You can automate them by setting up a job to run gcloud firestore export $BUCKET.
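A sketch of that idea, assuming you have an image with the gcloud CLI available in your registry and a job service account with permission to export Firestore data (the image path and bucket are hypothetical):

gcloud run jobs create firestore-backup \
  --image us-docker.pkg.dev/my-project/my-repo/cloud-sdk \
  --command gcloud \
  --args firestore,export,gs://my-backup-bucket \
  --region us-central1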
Do you have reports you want to run? You could set those up as well. You can also set up a schedule so they run daily or weekly.
You can also create jobs to do maintenance tasks, like cleaning up records, or for admin tasks, like clearing caches.
Now a perfect score, as a matter of fact(or)
With the introduction of Cloud Run jobs, Cloud Run now scores a perfect 12/12 on all twelve factors of application development, which means it is now the perfect choice for your next serverless deployment.
To learn more about Cloud Run jobs:
Quickstart: Create and execute a job in Cloud Run
Codelab: Getting started with Cloud Run jobs
To learn more about buildpacks:
Docs: Google Cloud’s buildpacks