As covered in our recent blog posts, the security foundations blueprint is here to curate best practices for creating a secure Google Cloud deployment and provide a Terraform automation repo for adapting, adopting, and deploying those best practices in your environment. In today’s blog post, we’re diving a little deeper into the security foundations guide to highlight several best practices that security practitioners and platform teams can use when setting up, configuring, deploying, and operating a security-centric infrastructure for their organization.
The best practices described in the blueprint are a combination of preventative controls and detective controls, and are organized as such in the step-by-step guide. The first topical sections cover preventative controls, which are implemented through architecture and policy decisions. The next set of topical sections cover detective controls, which use monitoring capabilities to look for drift and for anomalous or malicious behavior as it happens.
If you want to follow along in the full security foundations guide as you read this post, we are covering sections 4-11 of the Step-by-step guide (chapter II).
The first several topics cover how to protect your organization and prevent potential breaches using both programmatic constraints (policies) and architecture design.
Organization structure
One of the benefits of moving to Google Cloud is the ability to manage your resources, their organization, and their hierarchy in one place! The best practices in this section give you a resource hierarchy strategy that does just that. As implemented, it provides isolation and allows for segregation of policies, privileges, and access, which helps reduce the risk of malicious activity or error. And while this might sound like more work, the capabilities in Google Cloud make it possible while easing administrative overhead.
The step-by-step guide’s recommended organization structure
The best practices include:
using a single organization for top-level ownership of resources,
implementing a folder hierarchy to group projects into related groups (prod, non-prod, dev, common, bootstrap) where you can create segmentation and isolation, and subsequently apply security policies and grant access permissions, and
establishing organizational policies that define resource configuration constraints across folders and projects.
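As a minimal Terraform sketch of these practices, you could define top-level folders and an organization-wide policy constraint like this (the organization ID and folder names below are placeholders, not values from the blueprint):

```hcl
# Sketch only: the organization ID and folder names are placeholders.
resource "google_folder" "prod" {
  display_name = "prod"
  parent       = "organizations/123456789012"
}

resource "google_folder" "nonprod" {
  display_name = "non-prod"
  parent       = "organizations/123456789012"
}

# Example organization policy: prevent creation of the insecure default
# VPC network in every new project across the organization.
resource "google_organization_policy" "skip_default_network" {
  org_id     = "123456789012"
  constraint = "compute.skipDefaultNetworkCreation"

  boolean_policy {
    enforced = true
  }
}
```

Because the policy is set at the organization level, every folder and project underneath inherits the constraint unless it is explicitly overridden.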
Deployment pipelines
Whether you are rolling out foundational or infrastructure resources, or deploying an application, the way you manage your deployment pipeline can provide extra security or create extra risk. The best practices in this section show you how to set up review, approval, and rollback processes that are automated and standardized. They limit the amount of manual configuration and therefore reduce the possibility of human error, drive consistency, allow revision control, and enable scale. This allows for governance and policy controls that help you avoid exposing your organization to security or compliance risks.
The best practices described include:
codifying the Google Cloud infrastructure into Terraform modules, which provides an automated way of deploying resources,
using private Git repositories for the Terraform modules,
initiating deployment pipeline actions with policy validation and approval stages built into the pipeline, and
deploying foundations, infrastructure, and workloads through separate pipelines and access patterns.
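As a hedged illustration of the first two practices, a deployment could consume a versioned Terraform module from a private Git repository rather than defining resources ad hoc (the repository URL, module path, and IDs below are hypothetical):

```hcl
# Hypothetical example: consuming a Terraform module from a private Git repo.
# The repo URL, module path, and resource IDs are placeholders.
module "example_project" {
  source = "git::ssh://git@example.com/your-org/terraform-modules.git//project-factory?ref=v1.0.0"

  name      = "prj-example-prod"
  folder_id = "folders/000000000000"
}
```

Pinning the module to a tagged release (`ref=v1.0.0`) keeps deployments reproducible and makes changes reviewable through the pipeline rather than applied by hand.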
Authentication and authorization
Many data breaches come from incorrectly scoped or over-granted privileges. Controlling access precisely allows you to keep your deployments secure by permitting only certain users access to your protected resources. This section delivers best practices for authentication (validating a user’s identity) and authorization (determining what that user can do) in your cloud deployment. Recommendations include managing user credentials in one place (for example, either Google Cloud Identity or Active Directory) and enabling syncs so that the removal of access and privileges for suspended or deleted user accounts is propagated appropriately.
This section also reinforces the importance of using multi-factor authentication (MFA) and phishing-resistant security keys (covered in more depth in the Organization structure chapter). Privileged identities in particular should use multi-factor authentication, and you should consider adding multi-party authorization for them as well: because of their access, they are frequent targets and thus at higher risk.
Throughout all the best practices in this section, the overarching theme is the principle of least privilege: only necessary permissions are granted. No more, no less.
A few more of the best practices include:
maintaining user identities automatically with Cloud Identity federated to your on-prem Active Directory (if applicable) as the single source of truth,
using single sign-on (SSO) for authentication,
establishing privileged identities to provide elevated access in emergency situations, and
using groups with a defined naming convention, rather than individual identities, to assign permissions with IAM.
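The last practice can be sketched in Terraform as a role binding to a group rather than to an individual user (the project ID and group address are placeholders; the `grp-gcp-` prefix follows a naming-convention pattern like the one the guide suggests):

```hcl
# Sketch: grant a role to a group, never to an individual identity.
# The project ID and group address below are placeholders.
resource "google_project_iam_member" "developer_viewers" {
  project = "prj-example-dev"
  role    = "roles/viewer"
  member  = "group:grp-gcp-developers@example.com"
}
```

With this pattern, granting or revoking access becomes a group-membership change in your identity system rather than an IAM policy change in every project.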
Networking
As your network is the communication layer between your resources and with the internet, making sure it is secure is critical to preventing external (also known as north-south) and internal (east-west) attacks. This section of the step-by-step guide goes into how to secure and segment your network so that services that store highly sensitive data are protected. It also includes architecture alternatives based on your deployment patterns.
The guide goes deeper to show how best to configure the networking of your cloud deployment so that resources can communicate with each other, with your on-prem environment, and with the public internet, all while maintaining security and reliability. Keeping network policy and control centralized also makes these best practices easier to implement and manage.
This section provides detailed, opinionated guidance, so if you would like to dive further into this topic, head to section 7 of the full step-by-step guide to learn more. A few of the high-level best practices in this section are:
centralizing network policies and control through use of Shared VPC, or a hub-and-spoke architecture if this fits your use case,
separating services that contain sensitive data into separate Shared VPC networks (base and restricted) and using separate projects, IAM, and a VPC Service Controls perimeter to limit data transfers in or out of the restricted network,
using Dedicated Interconnect (or alternatives) to connect on-prem with Google Cloud and using Cloud DNS to communicate with on-prem DNS servers,
accessing Google Cloud APIs from the cloud and from on-premises through private IP addresses, and
establishing tag-based firewall rules to control network traffic flows.
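Two of these practices, Shared VPC centralization and tag-based firewall rules, can be sketched in Terraform as follows (project, network, and tag names are placeholders of my own, not from the blueprint):

```hcl
# Sketch: designate a host project for Shared VPC.
# Project ID, network name, and tags below are placeholders.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "prj-net-shared-base"
}

# Tag-based firewall rule: allow traffic only from instances tagged
# "web" to instances tagged "app" on a single port.
resource "google_compute_firewall" "allow_web_to_app" {
  name    = "fw-allow-web-to-app"
  project = "prj-net-shared-base"
  network = "vpc-shared-base"

  allow {
    protocol = "tcp"
    ports    = ["8443"]
  }

  source_tags = ["web"]
  target_tags = ["app"]
}
```

Defining firewall rules in terms of tags rather than IP ranges keeps east-west traffic policy readable and lets it follow instances as they are created and destroyed.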
Key and secret management
When you are trying to figure out where to store keys and credentials, it is often a trade-off between level of security and convenience. This section outlines a secure and convenient method for storing keys, passwords, certificates, and other sensitive data required by your cloud applications, using Cloud Key Management Service and Secret Manager. Following these best practices ensures that secrets are not stored in code, that the lifecycles of your keys and secrets are managed properly, and that the principles of least privilege and separation of duties are adhered to.
The best practices described include:
creating, managing, and using cryptographic keys with Cloud Key Management Service,
storing and retrieving all other general-purpose secrets using Secret Manager, and
using prescribed hierarchies to separate keys and secrets between the organization and folder levels.
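As a minimal sketch of the first two practices, Terraform can manage both a rotating Cloud KMS key and a Secret Manager secret (names, location, and rotation period are placeholder choices; the `auto {}` replication syntax assumes google provider 5.0 or later):

```hcl
# Sketch: a key ring and auto-rotating key in Cloud KMS.
# Names and location are placeholders.
resource "google_kms_key_ring" "example" {
  name     = "kr-example"
  location = "us-central1"
}

resource "google_kms_crypto_key" "example" {
  name            = "key-example"
  key_ring        = google_kms_key_ring.example.id
  rotation_period = "7776000s" # rotate every 90 days
}

# Sketch: a general-purpose secret held in Secret Manager
# (secret values themselves are added as versions, not stored in code).
resource "google_secret_manager_secret" "db_password" {
  secret_id = "db-password"

  replication {
    auto {}
  }
}
```

Note that only the secret's container is declared here; the sensitive value is added out of band as a secret version, so it never lands in your repository.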
Logging
Logs are used by diverse teams across an organization. Developers use them to understand what is happening as they write code, security teams use them for investigations and root cause analysis, administrators use them to debug problems in production, and compliance teams use them to support regulatory requirements. The best practices in this section keep all those use cases in mind to ensure the diverse set of users are supported with the logs they need.
The guide recommends a few best practices around logs including:
centralizing your collection of logs in an organization-level log sink project,
unifying monitoring data at the folder-level,
ingesting, aggregating, and processing logs with the Cloud Logging API and the Cloud Log Router, and
exporting logs from sinks to Cloud Storage for audit purposes, to BigQuery for analysis, and/or to a SIEM through Cloud Pub/Sub.
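As a hedged sketch of centralized log collection, an organization-level aggregated sink can export audit logs to a BigQuery dataset in a central logging project (the organization ID, project, dataset, and filter below are placeholders of my own):

```hcl
# Sketch: organization-level aggregated sink exporting Cloud Audit Logs
# to a BigQuery dataset in a central logging project. IDs are placeholders.
resource "google_logging_organization_sink" "audit_to_bq" {
  name             = "sk-audit-logs-to-bq"
  org_id           = "123456789012"
  include_children = true

  # Export only Cloud Audit Logs entries.
  filter = "logName:\"logs/cloudaudit.googleapis.com\""

  destination = "bigquery.googleapis.com/projects/prj-logging/datasets/audit_logs"
}

# The sink's generated service account must be granted write access
# on the destination dataset.
resource "google_bigquery_dataset_iam_member" "sink_writer" {
  project    = "prj-logging"
  dataset_id = "audit_logs"
  role       = "roles/bigquery.dataEditor"
  member     = google_logging_organization_sink.audit_to_bq.writer_identity
}
```

Parallel sinks with Cloud Storage or Pub/Sub destinations follow the same shape for the audit-archive and SIEM export cases.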
Logging structure described in the step-by-step guide
Detective controls
The term “detective controls” might evoke the sense of catching drift and malicious actions as they take place, or just after. But in fact, these latter sections of the step-by-step guide also cover how to prevent attacks, using monitoring capabilities to detect vulnerabilities and misconfigurations before they have an opportunity to be exploited.
Much like a detective trying to solve a crime may whiteboard a map of clues, suspects, and their connections, this section covers how to detect and bring together possible infrastructure misconfigurations, vulnerabilities, and active threat behavior into a single pane of glass. This can be achieved through a few different options: using Google Cloud’s Security Command Center Premium; using native security analytics capabilities with BigQuery and Chronicle; or integrating with third-party SIEM tools, if applicable for your deployment.
The guide lists several best practices including:
aggregating and managing security findings with Security Command Center Premium to detect and alert on infrastructure misconfigurations, vulnerabilities, and active threat behavior,
using logs in BigQuery to augment Security Command Center Premium’s detection of anomalous behavior, and
integrating your enterprise SIEM product with Google Cloud Logging.
Security Command Center in the Cloud Console
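For the SIEM-integration case, one possible approach is to stream active Security Command Center findings to a Pub/Sub topic, sketched here in Terraform (the organization ID and topic path are placeholders):

```hcl
# Sketch: stream active Security Command Center findings to a Pub/Sub
# topic, e.g. for forwarding to a SIEM. IDs below are placeholders.
resource "google_scc_notification_config" "active_findings" {
  config_id    = "scc-active-findings"
  organization = "123456789012"
  description  = "Notify on all active findings"
  pubsub_topic = "projects/prj-scc/topics/scc-findings"

  streaming_config {
    filter = "state = \"ACTIVE\""
  }
}
```

The `filter` expression can be narrowed further, for example by severity or finding category, so only the findings your SIEM should triage are forwarded.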
Billing
Since your organization’s cloud usage flows through billing, setting up billing alerts and monitoring your billing records can work as an additional mechanism for enhancing governance and security by detecting unexpected consumption.
The supporting best practices described include:
setting up billing alerts on a per-project basis to warn at key thresholds (50%, 75%, 90%, and 95%), and
exporting billing records to a BigQuery dataset in a billing-specific project.
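A per-project budget with alerts at those thresholds could be sketched in Terraform like this (the billing account, project number, and budget amount are placeholders of my own):

```hcl
# Sketch: a per-project budget with alerts at the guide's key thresholds.
# Billing account, project number, and amount below are placeholders.
resource "google_billing_budget" "project_budget" {
  billing_account = "000000-000000-000000"
  display_name    = "budget-prj-example-prod"

  budget_filter {
    projects = ["projects/000000000000"]
  }

  amount {
    specified_amount {
      currency_code = "USD"
      units         = "1000"
    }
  }

  threshold_rules { threshold_percent = 0.5 }
  threshold_rules { threshold_percent = 0.75 }
  threshold_rules { threshold_percent = 0.9 }
  threshold_rules { threshold_percent = 0.95 }
}
```

Each threshold rule triggers a notification as spend crosses that fraction of the budgeted amount, giving an early signal of unexpected consumption.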
If you want to learn more about how to set up billing alerts, export your billing records to BigQuery, and more, you can also check out the Beyond Your Bill video series.
Bringing it all together and next steps
This post focused on the best practices provided in the blueprint for building the foundational infrastructure for your cloud deployment, including preventative and detective controls.
While the best practices are many, they can be adopted, adapted, and deployed efficiently using templates provided in the Terraform automation repository. And of course, the non-abbreviated details of implementing these best practices are available in the security foundations guide itself. Go forth, deploy, and stay safe out there.