We’re excited to announce the General Availability of cross-project service referencing with Internal HTTP(S) Load Balancing and Regional External HTTP(S) Load Balancing. This new capability allows organizations to configure one central load balancer and route traffic to hundreds of services distributed across multiple projects. You can centrally manage all traffic routing rules and policies in one URL map. You can also associate the load balancer with a single set of hostnames and SSL certificates, reducing the number of load balancers needed to deploy your application and lowering management overhead, operational costs, and quota requirements. By having different projects for each of your functional teams, you can also achieve separation of roles within your organization.
Google Cloud Load Balancing
Google Cloud Load Balancing, a fully managed and distributed service, helps your applications reach planet scale, no matter where you deploy your workloads — cloud or on-prem — while supporting millions of queries per second and meeting your high availability and security requirements. Our HTTP(S) load balancers support advanced traffic management capabilities out-of-the-box, such as traffic mirroring, weight-based traffic splitting, and request/response-based header transformations, giving you fine-grained control over how traffic is handled. Our load balancers are built on the open-source Envoy Proxy, which allows you to extend your traffic management across Google Cloud, other clouds, or on-premises.
Why use cross-project service referencing?
The introduction of cross-project service referencing brings numerous benefits to Cloud Load Balancing environments.
1. Reduce operational complexity and costs by exposing multi-project services using a single load balancer
As shown in the diagram above, you can now configure a load balancer's frontend resources (forwarding rule, target proxy, and URL map) in one project, while its backend services and backends reside in different service projects within the same Shared VPC setup. The frontend project must be part of a Shared VPC deployment with host and service projects (see image below).
With this capability, you can create one central load balancer and configure just one URL map with all your routing rules. This central URL map can then refer to hundreds of cross-project backend services distributed across multiple projects, all using the same centrally provisioned Shared VPC network. With a Shared VPC network, you don’t have to worry about linking multiple VPCs or managing firewall rules across many VPCs.
Further, you can expose all of your services with just one forwarding rule, thus reducing the number of hostnames and SSL certificates that you have to manage. With fewer forwarding rules and other load balancing resources, you not only incur lower costs, but also reduce your operational overhead and quota requirements.
2. Achieve separation of roles for your functional teams with the flexibility of secure cross-project access of services
Service owners can focus on building services in service projects, while network teams provision and maintain load balancers in another project, with the two connected through cross-project service referencing. Each team can view, configure, and modify only the resources within its purview. This enables clean separation of responsibilities and minimizes confusion and accidental errors, while still providing the flexibility for cross-team collaboration.
3. Provide service owners exclusive control over service-centric traffic management policies
Service owners can have exclusive control over policies that are configured at the backend service and determine how the load balancer distributes traffic to their services. For example, service owners can define policies for session affinity, health checks, identity-based access, outlier detection, and several other advanced traffic-management capabilities.
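As a sketch of what this looks like in practice, a service owner could adjust these policies on their own backend service with gcloud. The project, region, and resource names below are hypothetical, and the backend service is assumed to already exist:

```shell
# Hypothetical names: service-project-a, my-backend-service, us-central1.
# The service owner tunes traffic policies on their backend service
# without touching the central load balancer's frontend configuration.
gcloud compute backend-services update my-backend-service \
    --project=service-project-a \
    --region=us-central1 \
    --session-affinity=CLIENT_IP \
    --connection-draining-timeout=60
```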
4. Expose services securely with fine-grained access control
Service owners can maintain autonomy over the exposure of their services and control which users can access them via the load balancer. This is achieved with a dedicated IAM role, the Load Balancer Services User role. Only users who are granted this role can reference cross-project services. You can further define Organization Policy constraints that limit cross-project referencing to specific projects or folders, or disallow the feature entirely within your organization. Using both IAM and Organization Policies, you can achieve granular access control tailored to your needs, prevent accidental misconfigurations, and follow your organization’s security norms.
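For illustration, granting this role could look like the following, where the project name and user are hypothetical placeholders:

```shell
# Hypothetical principal and project. Grants the Load Balancer Services
# User role so a load balancer admin in another project can reference
# backend services owned by service-project-a.
gcloud projects add-iam-policy-binding service-project-a \
    --member="user:lb-admin@example.com" \
    --role="roles/compute.loadBalancerServiceUser"
```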
How do I get started?
(For step-by-step instructions, refer to the setup guides: Internal and External Load Balancing)
At a high level, you perform the following steps to configure your cross-project services and the central load balancer.
Step 1: As a Shared VPC and network administrator, enable Shared VPC on the host project and attach service projects to it. Then, create the network, subnetworks, and firewall rules in the host project, and grant subnetwork permissions to the service administrator and load balancer administrator.
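A minimal sketch of this step with gcloud, using hypothetical project, network, and subnet names (the proxy-only subnet is required for these Envoy-based load balancers):

```shell
# Hypothetical names: host-project, service-project-a, lb-network.
# Enable Shared VPC on the host project and attach a service project.
gcloud compute shared-vpc enable host-project
gcloud compute shared-vpc associated-projects add service-project-a \
    --host-project=host-project

# Create the shared network and a backend subnet in the host project.
gcloud compute networks create lb-network \
    --project=host-project --subnet-mode=custom
gcloud compute networks subnets create lb-subnet \
    --project=host-project --network=lb-network \
    --region=us-central1 --range=10.1.2.0/24

# Proxy-only subnet used by the managed Envoy proxies.
gcloud compute networks subnets create proxy-only-subnet \
    --project=host-project --network=lb-network \
    --region=us-central1 --range=10.129.0.0/23 \
    --purpose=REGIONAL_MANAGED_PROXY --role=ACTIVE
```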
Step 2: As a service owner or administrator, create a backend service in a service project and attach backends to it. Then grant IAM permissions to load balancer administrators to access your backend service.
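This step could be sketched as follows. The health check, backend service, and instance group names are hypothetical, and the instance group is assumed to already exist in the shared network:

```shell
# Hypothetical names in service-project-a, region us-central1.
gcloud compute health-checks create http my-health-check \
    --project=service-project-a \
    --region=us-central1 \
    --port=80 --request-path=/healthz

gcloud compute backend-services create my-backend-service \
    --project=service-project-a \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=my-health-check \
    --health-checks-region=us-central1

# Attach an existing instance group as a backend.
gcloud compute backend-services add-backend my-backend-service \
    --project=service-project-a \
    --region=us-central1 \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a
```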
Step 3: As a load balancer administrator, create a load balancer in a different service project or the host project that directs traffic to the cross-project backend service.
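As a sketch of the cross-project reference itself (all names hypothetical), the URL map in the load balancer's project points at the backend service in the service project by its full resource path:

```shell
# The URL map in host-project references a backend service that lives
# in service-project-a, using the backend service's full resource path.
gcloud compute url-maps create my-url-map \
    --project=host-project \
    --region=us-central1 \
    --default-service=projects/service-project-a/regions/us-central1/backendServices/my-backend-service

gcloud compute target-http-proxies create my-proxy \
    --project=host-project \
    --region=us-central1 \
    --url-map=my-url-map \
    --url-map-region=us-central1

gcloud compute forwarding-rules create my-fwd-rule \
    --project=host-project \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network --subnet=lb-subnet \
    --target-http-proxy=my-proxy \
    --target-http-proxy-region=us-central1 \
    --ports=80
```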
This capability will soon be introduced in Global External HTTP(S) Load Balancing, covering all HTTP(S) Load Balancing products. You can learn more about this capability in the guides for Internal HTTP(S) Load Balancing and Regional External HTTP(S) Load Balancing.