Monday, April 15, 2024

Practicing the principle of least privilege with Cloud Build and Artifact Registry

People often use Cloud Build and Artifact Registry in tandem to build and store software artifacts – these include container images, to be sure, but also OS packages and language-specific packages.

Many of these same users also share a Google Cloud project as a multi-tenant environment. Because a project is a logical encapsulation for services like Cloud Build and Artifact Registry, administrators of these services want to apply the principle of least privilege in most cases.

Of the numerous benefits of this practice, reducing the blast radius of misconfigurations or malicious users is perhaps the most important.

Users and teams should be able to use Cloud Build and Artifact Registry safely – without the ability to disrupt or damage one another.

With per-trigger service accounts in Cloud Build and per-repository permissions in Artifact Registry, let’s walk through how we can make this possible.

The before times 

Let’s consider the default scenario – before we apply the principle of least privilege. In this scenario, we have a Cloud Build trigger connected to a repository. 

When an event happens in your source code repository (like merging changes into the main branch), this trigger is, well, triggered, and it kicks off a build in Cloud Build to build an artifact and subsequently push that artifact to Artifact Registry.

Fig. 1 – A common workflow involving Cloud Build and Artifact Registry

But what are the implications of permissions in this workflow? Well, let’s take a look at the permissions scheme. Left unspecified, a trigger will execute a build with the Cloud Build default service account. Among the permissions granted by default to this service account are Artifact Registry permissions at the project level.

Fig. 2 – The permissions scheme of the workflow in Fig. 1

Builds, unless specified otherwise, will run using this service account as their identity. This means those builds can interact with any artifact repository in Artifact Registry within that Google Cloud project. So let’s see how we can tighten this up!
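To see this in your own project, one option is to read back the project-level role bindings of the default service account. Here’s a rough sketch with gcloud – it assumes PROJECT_ID and PROJECT_NUMBER environment variables are set, and that your project uses the legacy default service account at PROJECT_NUMBER@cloudbuild.gserviceaccount.com:

```shell
# List every project-level role bound to the default Cloud Build
# service account. Expect entries like roles/cloudbuild.builds.builder,
# which includes project-wide Artifact Registry write access.
gcloud projects get-iam-policy ${PROJECT_ID} \
  --flatten="bindings[].members" \
  --filter="bindings.members:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --format="table(bindings.role)"
```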

Putting it into practice

In this scenario, we’re going to walk through setting up the workflow below, in which we have a Cloud Build build trigger connected to a GitHub repository. To follow along, you’ll need a repository set up and connected to Cloud Build – instructions can be found here – and you’ll need to replace variable names with your own values.

This build trigger will kick off a build in response to any changes to the main branch in that repository. The build itself will build a container image and push it to Artifact Registry.

The key implementation detail here is that every build from this trigger will use a bespoke service account that only has permissions to a specific repository in Artifact Registry.

Fig. 3 – The permissions scheme of the workflow with principle of least privilege

Let’s start by creating an Artifact Registry repository for container images for a fictional team, Team A.

```shell
gcloud artifacts repositories create ${TEAM_A_REPOSITORY} \
  --repository-format=docker \
  --location=${REGION}
```

Then we’ll create a service account for Team A.

```shell
gcloud iam service-accounts create ${TEAM_A_SA} \
  --display-name=${TEAM_A_SA_NAME}
```

And now the fun part. We can create an IAM role binding between this service account and the aforementioned Artifact Registry repository; below is an example of how you would do this with gcloud:

```shell
gcloud artifacts repositories add-iam-policy-binding ${TEAM_A_REPOSITORY} \
  --location=${REGION} \
  --member="serviceAccount:${TEAM_A_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role=roles/artifactregistry.writer
```

This effectively gives the service account the permissions that come with the artifactregistry.writer role, but only for that specific Artifact Registry repository.
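If you’d like to double-check that the binding landed where you expect, the repository’s own IAM policy can be read back – a quick sketch, assuming the same variables as the commands above:

```shell
# Show the IAM policy attached to the repository itself; the Team A
# service account should appear as a member of roles/artifactregistry.writer.
gcloud artifacts repositories get-iam-policy ${TEAM_A_REPOSITORY} \
  --location=${REGION}
```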

Now, for many moons, Cloud Build has allowed users to provide a specific service account in their build specification – for manually executed builds. You can see an example of this in the following build spec:

```yaml
steps:
- name: 'bash'
  args: ['echo', 'Hello world!']
logsBucket: 'LOGS_BUCKET_LOCATION'
# provide your specific service account below
serviceAccount: 'projects/${PROJECT_ID}/serviceAccounts/${TEAM_A_SA}@${PROJECT_ID}.iam.gserviceaccount.com'
options:
  logging: GCS_ONLY
```
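A spec like this can be exercised without a trigger at all. Roughly – assuming the spec is saved as cloudbuild.yaml in the current directory and REGION is set:

```shell
# Submit a one-off build from the current directory; the build runs
# as the service account named in the build spec, not the default one.
gcloud builds submit --config=cloudbuild.yaml --region=${REGION} .
```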

But, for many teams, automating the execution of builds and incorporating it into how code and configuration flow through their teams and systems is a must. Triggers in Cloud Build are how folks achieve this!

When creating a trigger in Cloud Build, you can either connect it to a source code repository or set up your own webhook. Whatever the source may be, triggers depend on systems beyond the reach of permissions we can control in our Google Cloud project using Identity and Access Management. 

Let’s now consider what could happen when we do not apply the principle of least privilege when using build triggers with a Git repository.

What risk are we trying to mitigate?

The Supply Chain Levels for Software Artifacts (SLSA) security framework details potential threats in the software supply chain – essentially the process of how your code is written, tested, built, deployed, and run.  

Fig. 4 – Threats in the software supply chain identified in the SLSA framework

When a trigger starts a build based on a compromised source repo, as seen in threat B, the effects can compound downstream. If builds run based on actions in a compromised repo, multiple subsequent threats come into play.

By minimizing the permissions that these builds have, we reduce the scope of impact that a compromised source repo can have. This walkthrough specifically looks at minimizing the effects of having a compromised package repo in threat G. 

In the example we are building out, if the source repo is compromised, only packages in the specific Artifact Registry repository we created will be affected, because the service account associated with the trigger only has permissions to that one repository.

Creating a trigger to run builds with a bespoke service account requires only one additional parameter; when using gcloud, for example, you would specify the --service-account parameter as follows:

```shell
gcloud beta builds triggers create github \
  --name=team-a-build \
  --region=${REGION} \
  --repo-name=${TEAM_A_REPO} \
  --repo-owner=${TEAM_A_REPO_OWNER} \
  --branch-pattern=^main$ \
  --build-config=cloudbuild.yaml \
  --service-account=projects/${PROJECT_ID}/serviceAccounts/${TEAM_A_SA}@${PROJECT_ID}.iam.gserviceaccount.com
```

TEAM_A_REPO will be the GitHub repository you created and connected to Cloud Build earlier, TEAM_A_REPO_OWNER will be the GitHub username of the repository owner, and TEAM_A_SA will be the service account we created earlier. Aside from that, all you’ll need is a cloudbuild.yaml manifest in that repository, and your trigger will be set! 
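As a sketch of what that manifest might look like – the image name app and a Dockerfile at the repository root are assumptions for illustration, and REGION, PROJECT_ID, and TEAM_A_REPOSITORY are placeholders to replace with your own values – it could build and push a single container image like so:

```shell
# Write a minimal cloudbuild.yaml that builds an image from the repo's
# Dockerfile and pushes it to Team A's Artifact Registry repository.
# Builds that run as a user-specified service account must set a
# logging option explicitly, hence the options block.
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'REGION-docker.pkg.dev/PROJECT_ID/TEAM_A_REPOSITORY/app', '.']
images:
- 'REGION-docker.pkg.dev/PROJECT_ID/TEAM_A_REPOSITORY/app'
options:
  logging: CLOUD_LOGGING_ONLY
EOF
```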

With this trigger set up, you can now test the scope of permissions that builds run by this trigger have, verifying that they can only work with TEAM_A_REPOSITORY in Artifact Registry.
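One way to spot-check this – assuming the same environment variables as earlier – is to list the images a triggered build pushed, and to confirm the Team A service account holds no Artifact Registry role at the project level:

```shell
# After a triggered build completes, the pushed image should appear here...
gcloud artifacts docker images list \
  ${REGION}-docker.pkg.dev/${PROJECT_ID}/${TEAM_A_REPOSITORY}

# ...while the project-level IAM policy should show no Artifact Registry
# bindings for the Team A service account (its writer role lives only
# on the repository itself).
gcloud projects get-iam-policy ${PROJECT_ID} \
  --flatten="bindings[].members" \
  --filter="bindings.members:${TEAM_A_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```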

In conclusion

Configuring minimal permissions for build triggers is only one part of the bigger picture, but a great step to take no matter where you are in your journey of securing your software supply chain. 

To learn more, we recommend taking a deeper dive into the SLSA security framework and Software Delivery Shield – Google Cloud’s fully managed, end-to-end solution that enhances software supply chain security across the entire software development life cycle from development, supply, and CI/CD to runtimes. Or if you’re just getting started, check out this tutorial on Cloud Build and this tutorial on Artifact Registry!




